Yesterday Tim Atkin released his South African wine report. It’s created quite a bit of discussion, and it’s also raised once again for me the question of single critics versus panel tastings. Which is the best?
Tim has worked very hard tasting a lot of wines, as do the competing critics from the big publications such as The Wine Advocate and Wine Spectator. And then of course, there’s also Jancis Robinson (and team), James Suckling (and team) and the new Galloni/Tanzer grouping. There are critics everywhere, making a living selling access to scores and tasting notes. The competition is fierce, and they are all chasing the same consumers.
The critic model champions one person’s opinion. But the unspoken assumption behind many of these publications is that their critics are so skilled that they can effectively taste objectively, and reveal (or get very close to) the truth about a wine. The corollary is that if several critics are equally highly skilled, they will reach the same judgement about any particular wine. And each critic would probably like you to think that they are especially skilled – that they have a gift – and so their judgments are worth paying $$$ for.
This is wrong. Tasting is personal. Judging wine is personal. However objective we try to be, we can’t be, fully. I have style preferences. I like certain wines. Whether or not you find my tasting notes and recommendations useful depends on whether or not you like my palate preferences. That’s my branding, if you will. The major critics would like you to think that their pronouncements are relevant to all drinkers. Well, they aren’t.
Look at the way that experts – all skilled and experienced – disagree when faced with interesting wines. The World of Fine Wine tastings (where the scores of each taster are published) illustrate this beautifully. It’s not that some of them are doing a bad job; it’s a genuine disagreement about what constitutes a great wine.
So what about panel tastings? Are they of use? Yes, they’re really valuable. I take part in them regularly. The International Wine Challenge, the National Wine Awards of Canada, the South African Top 100 and the Standard Bank Top 10 Chenin Blanc competitions are all excellently run, with great judges, and produce results that are useful for producer and consumer alike. The averaging of several opinions makes the results robust. But it has a side effect: edgy, distinctive and unusual wines aren’t served very well, because divergent opinions get averaged out and those wines get lost in the middle.
So we really need both. I like to read reviews from critics whose palates I agree with. But when it comes to awards, trophies and classifications, I think the panel method is more robust. This is where I am slightly uncomfortable with Tim’s Cape Classification, because it’s attempting to produce something that is taken seriously by the industry, yet is based on one person’s opinion (albeit a very valuable opinion). We need to be a bit humble in the face of wine, and any attempt to produce an authoritative ranking is best done on the basis of several pooled expert opinions (as with the annual Platter’s Guide’s five-star wines).