Are wine competitions reliable? Data from the USA
An interesting paper from The Journal of Wine Economics has analysed the reliability of judging at wine competitions in the USA. You can download it for free here.
From the abstract:
An analysis of over 4000 wines entered in 13 U.S. wine competitions shows little concordance among the venues in awarding Gold medals. Of the 2,440 wines entered in more than three competitions, 47 percent received Gold medals, but 84 percent of these same wines also received no award in another competition.
The paper contains some solid data analysis. Its conclusions are extremely discouraging:
(1) There is almost no consensus among the 13 wine competitions regarding wine quality, (2) for wines receiving a Gold medal in one or more competitions, it is very likely that the same wine received no award at another, (3) the likelihood of receiving a Gold medal can be statistically explained by chance alone.
We can conclude that there's something fundamentally wrong with these US competitions. Is this a US-specific problem, or does it also apply to competitions in other countries? Or is it that the judges simply aren't good enough tasters in this sort of environment?
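The "chance alone" conclusion is easy to illustrate with a toy simulation. The sketch below assumes each wine enters four competitions and that each competition independently hands out awards at random with made-up rates (the real rates are in the paper; these numbers are purely illustrative). Even under pure chance, a large share of wines pick up at least one Gold, and most Gold winners also go unrewarded somewhere else:

```python
import random

random.seed(0)

# Assumed per-competition award probabilities -- illustrative only,
# not the rates reported in the paper.
OUTCOMES = ["gold", "silver", "bronze", "none"]
PROBS    = [0.12,   0.18,     0.25,     0.45]

N_WINES = 2440   # wines entered in more than three competitions (from the abstract)
N_COMPS = 4      # assume four entries per wine

def simulate():
    gold_any = 0        # wines with at least one Gold
    gold_and_none = 0   # of those, wines that also got no award elsewhere
    for _ in range(N_WINES):
        results = random.choices(OUTCOMES, weights=PROBS, k=N_COMPS)
        if "gold" in results:
            gold_any += 1
            if "none" in results:
                gold_and_none += 1
    return gold_any / N_WINES, gold_and_none / gold_any

pct_gold, pct_gold_no_award = simulate()
print(f"{pct_gold:.0%} won at least one Gold; "
      f"{pct_gold_no_award:.0%} of those also got no award in some competition")
```

With these made-up rates the simulation lands in the same ballpark as the paper's 47 percent and 84 percent figures, which is exactly the point: random awarding alone reproduces the observed pattern.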
[The same author has also published a paper looking at judge reliability at one of these US competitions.]
Labels: blind tasting