It’s simple: the results.
I don’t know why (though I can make guesses), but panels of tasters consistently produce embarrassingly bad results.
This has been in the news over the last few days, as the blogosphere and Twittersphere have been alight with criticism of a Decanter magazine panel tasting of South African Chenin Blanc. In short, the panel didn’t like the wines much. And celebrated wines, such as the FMC (which got just 2 stars, a diss!) and the Raats Family (a solitary star), did badly (see this report by Christian Eades).
This isn’t an isolated example. I’ve seen many panel reports. And often, when I look at the verdicts on wines I know well, the results are nonsensical. Why?
One explanation is that tasting lots of wine blind at once is difficult, and few are very good at it. Even experienced tasters can get things wrong. The Decanter panels are made up of good tasters, but the results are often puzzling. It’s not only because the best producers frequently don’t bother submitting to these sorts of tastings. It’s also that there’s something about the performance of a panel that produces odd verdicts.
You would think that averaging the scores of a number of tasters would produce more robust results. But I don’t think it does. Look at the World of Fine Wine tastings. The tasting panels there, made up of three experienced tasters, are instructive. This is because, as well as an average score, the score of each of the panellists is recorded.
Often, one taster will score 11, another 13, and another 17. The reasons for this spread of scoring are complex and worthy of further discussion. Yet they reflect the complex nature of fine wine. Panels, no matter how good, simply can’t deal with fine wines. These wines, with their individual personalities, divide even ‘expert’ tasters.
I don’t think I’m the best taster in the world. Far from it. But I usually have a clear opinion on a wine. And many times I’ll have different opinions from my peers. Now when it comes to commercial wines, and broad-brush distinctions of quality, panels can work quite well. But for more serious wines, I don’t think there is a single ‘truth’. There are different opinions. And readers need to get to know the palates of the critics they are reading.
For this reason, panel verdicts are much less useful than the verdicts of individual tasters. Ultimately, the proof is in the results. I honestly think that a verdict of 2 stars for the FMC Chenin Blanc is ludicrous. It’s a really complex, distinctive expression of Chenin Blanc. I think Decanter and other magazines that use tasting panels should scrutinize their results to see whether they really make sense or not.