Why tasting panels suck

It’s simple: the results.

I don’t know why (though I can make guesses), but panels of tasters consistently produce embarrassingly bad results.

This has been in the news over the last few days, as the blogosphere and Twittersphere have been alight with criticism of a Decanter magazine panel tasting of South African Chenin Blanc. In short, the panel didn’t like the wines much. And celebrated wines, such as the FMC (which got just 2 stars, a diss!) and the Raats Family (a solitary star), did badly (see this report by Christian Eades).

This isn’t an isolated example. I’ve seen many panel reports. And often, when I look at the verdicts on wines I know well, the results are nonsensical. Why?

One explanation is that tasting lots of wine blind at once is difficult, and few are very good at it. Even experienced tasters can get things wrong. The Decanter panels are made up of good tasters, but the results are often puzzling. It’s not only because the best producers frequently don’t bother submitting to these sorts of tastings. It’s also that there’s something about the way a panel performs that produces odd results.

You would think that averaging the scores of a number of tasters would make the results more robust. But I don’t think it does. Look at the World of Fine Wine tastings. The tasting panels there, made up of three experienced tasters, are instructive, because as well as an average score, the score of each of the panellists is recorded.

Often, one taster will score 11, another 13, and another 17. The reasons for this spread of scoring are complex and worthy of further discussion. Yet they reflect the complex nature of fine wine. Panels, no matter how good, simply can’t deal with fine wines. These wines, with their individual personalities, divide even ‘expert’ tasters.

I don’t think I’m the best taster in the world. Far from it. But I usually have a clear opinion on a wine. And many times I’ll have different opinions to my peers. Now when it comes to commercial wines, and broad-brush distinctions of quality, panels can work quite well. But for more serious wines, I don’t think there is a single ‘truth’. There are different opinions. And readers need to get to know the palates of the critics who they are reading.

For this reason, panel verdicts are much less useful than the verdicts of individual tasters. Ultimately, the proof is in the results. I honestly think that a verdict of 2 stars for the FMC Chenin Blanc is ludicrous. It’s a really complex, distinctive expression of Chenin Blanc. I think Decanter and other magazines that use tasting panels should scrutinize their results to see whether they really make sense or not.

38 thoughts on “Why tasting panels suck”

  1. Jamie, I think you should declare an interest here, because you are involved with the DWWA’s closest rival, the IWC.

    A cynic might think that your post is just cheap point-scoring; it’s not the first time you’ve had a pop at Decanter. Is there a subtle agenda here, I wonder?

  2. Is it possible for you to expand your thoughts on your reference to “commercial wines” and “serious wines”? I suspect I have a fair idea of the meaning of your words – I just feel that someone like you going on the record with some thoughts would have a cleansing effect on the industry.

    I think there is a lot of masquerading going on and consumers would benefit too.

  3. No subtle agenda “John”. I didn’t mention DWWA. Regular readers know I am a panel chair at IWC. I’d love to taste at DWWA too – I’ve been invited – but unfortunately the two competitions are scheduled to clash with each other.

    I’m not singling out DWWA, more the tasting panels for Decanter magazine. The IWC has its own issues as well – clearly, it is a panel tasting. But it does have a safety net that DWWA lacks, notably that every wine is tasted twice, and then the results go through to the co-chairs for checking. I’ve been co-opted as a co-chair for a morning, so I’ve seen what goes on behind the scenes.

    Both DWWA and IWC are very useful, important competitions with a good deal of integrity, and are brilliant for helping consumers choose wine. But they don’t really suit fine wines. And panel tastings such as Decanter’s are looking to assess fine wines.

  4. Jamie

    What a confused argument. You state that “for more serious wines, I don’t think there is a single ‘truth’. There are different opinions” which when taken in isolation seems reasonable.

    But then you go on to say “I honestly think that a verdict of 2 stars for the FMC Chenin Blanc is ludicrous. It’s a really complex, distinctive expression of Chenin Blanc”

    Can I refer you to your first point. The panel has a different opinion to you. What’s ludicrous about that?

    I don’t think FMC – although I confess the last vintage I tasted was the 2006 – is complex, although it is certainly distinctive. But I can think of many other Chenins (not from South Africa) which appeal more. Is this opinion ludicrous too?

  5. It would be interesting to see whether the “mass market” wines really did have a lower standard deviation of scores around the mean compared with “fine wines”. One way of putting this might be: it makes sense to say that “the panel has an opinion” (a consensus) when the scores range from 12–14, with an average of 13, but not so much if the scores are 11, 13 and 17 for an average of 13.7 (a quick illustration follows below).

    Thanks for this!
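
    A rough back-of-the-envelope illustration of that point, as a hypothetical sketch in Python using the made-up score sets above (not data from any real tasting):

        import statistics

        consensus = [12, 13, 14]  # tight spread: the panel broadly agrees
        divided = [11, 13, 17]    # wide spread: the average hides real disagreement

        for scores in (consensus, divided):
            mean = sum(scores) / len(scores)
            spread = statistics.stdev(scores)
            print(scores, "mean:", round(mean, 1), "std dev:", round(spread, 2))

        # [12, 13, 14] mean: 13.0 std dev: 1.0
        # [11, 13, 17] mean: 13.7 std dev: 3.06

    Two nearly identical averages, but only the first reflects anything like a panel consensus.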

  6. Jamie,

    About 10 years ago, we at Wine Enthusiast did a fair amount of tasting in panels of three tasters, and the results were much as you describe those at World of Fine Wine. The average rating – and conflated opinions of the tasters – in some ways obscured the idiosyncrasies and uniqueness of certain wines. I remember an Amarone tasting in which we tasted the ’97 Dal Forno, and ratings ranged from nearly undrinkable to prodigious. The result was a middling rating and a review that reflected this love-it-or-hate-it dichotomy.

    Taken all together, the effect of panel tasting was to reduce the number of wines at the tails of the distribution and increase the number of wines in the middle relative to wines reviewed by single tasters. Ideally, this would mean that the few wines unanimously liked or disliked are truly standouts/repulsive. In practice, as you’ve pointed out, this was not always the case; many of these tastings (IMO) rewarded wines that conformed to the expected style with minimal deviation rather than truly individualistic wines.

    In part because of this, we opted to focus on having single tasters do the majority of our wine reviewing. Although this has plenty of issues of its own, at least the reviews are written from a single perspective, so we hope that readers can learn our tasters’ preferences and use those to help guide their purchasing.

    Joe Czerwinski
    Senior Editor/Tasting Director
    Wine Enthusiast

  7. Although Jamie may have slightly contradicted himself, as Chris points out, I do see what he is driving at. In simplistic terms, the FMC is a ‘good’ wine, so why did it get 2*? For me the question is: does it matter? Firstly, is it going to affect the sales of this particular wine? And personally, I’m finding far more interesting wines to drink through blogs such as Jamie’s than I ever did from print.

  8. Blind has to mean blind in every respect. IWC works because there are several layers, or rounds, of blind tasting in order to reach a final verdict. It also has the advantage of limiting the power of a perceived ‘lead taster’.

  9. I agree with you, Jamie. I think all of these huge tastings are a waste of time, not just the Decanter one.
    Frankly, I will be very interested to learn what the panel, including you, consider are the best wines in South Africa, when the money-making exercise for the organisers takes place!!
    Suspect though that many of the best wineries will not bother entering their wines.

  10. Winerackd: FMC got a 2* because some tasters don’t have your fine and discerning palate.

    I was interested by Jamie’s comment that ‘wines go through co-chairs for checking’ at the IWC. This rather suggests that the results are adjusted. Is that the case?

  11. “I don’t think I’m the best taster in the world. Far from it.”

    You’re especially insightful today, Jamie. 😉

    I agree on panels, but I’d go farther than you and say that I don’t think it helps if the tasters are talented or experienced. WoFW’s system of individuated small panels is a good one, I think, and best of all when it’s a product of taster interaction (“this is good because,” “see, that’s exactly why I think it’s a failure,” “I believe that quality’s typical for the appellation,” “I don’t”), because it clearly spells out the preconceptions of the tasters.

    But this:

    “I honestly think that a verdict of 2 stars for the FMC Chenin Blanc is ludicrous.”

    …is, itself, ludicrous. It’s obviously a wine made with serious intent. Insisting that it must then be deserving of a better score is to attempt to assign some sort of objective inherency to its quality, which you know is impossible. (Yes, people attempt it. They’re wrong.) I know I would, each and every time, think the regular chenin a better wine than the FMC, because both the cellar treatment and the texture of the latter are not what I consider high-quality expressions of chenin. James Laube would, I suspect, have the opposite opinion. There’s no obviousness to the FMC’s quality, just subjectivity.

  12. And I love the irony that on the Decanter 2011 wall calendar, one of the labels that surround it (billed as “labels shown are a selection of Decanter’s top wines of the year”) is none other than… The FMC Chenin 2008.

  13. One of the deeply frustrating things about tasting wines as a panel is that, for whatever reason, sometimes wines just don’t show well on the day. They’re not terrible wines. They’re not great wines. It’s just a bit of a marginal wine right then and there, and not the best bottle of that particular wine. I’ve personally sat on a panel where all four of us (one of whom was the panel chair, another of whom is a super-well-respected [and awarded] judge) thought a wine wasn’t great. All of us gave the wine 14. Neither here nor there. This wine has won a trophy and golds. But on the day, independently, it wasn’t up to much. And I’d argue that we did our job quite correctly on the day.

    Another component to this debate: wine producers are so used to kindly written reviews that when they get something that isn’t positive, they tend to be a bit hyper-sensitive, in my view. Most restaurateurs / directors / writers are alert to the prospect of a negative review. Wine producers just aren’t exposed to this and get a bit huffy in my experience.

  14. Perhaps Jamie should be reminded that the IWC in 2009 gave the FMC Chenin 2007 a “Commended” – i.e. a fourth-level award – not much different from what Decanter now gives a later vintage, to his surprise. It was one of a number of peculiar IWC results for South African wines (the only category I investigated). The IWC is certainly not immune from the absurdities produced by ALL these big tastings – and for the same reason: that no-one can usefully judge a few hundred wines in one day. The mystery is why people, like Jamie, who can see the absurdity of the procedure continue to take part as judges.

  15. Chris, I don’t see much contradiction here. The fact that Jamie thinks that the FMC score is so far off (and that you don’t) is precisely the point – every experienced taster has strong opinions which they are entitled, and indeed obliged, to report. Just because a fine wine, or perhaps any wine, can be highly interpretable doesn’t mean that any one taster has to acknowledge, let alone support, every possible opinion on it. I certainly prefer having a number of different honest opinions to pick from than a smudgy reduction by panel – can you imagine film/restaurant/exhibition reviews by panel? Useless!

  16. Interesting comments across the board… I am a bit late into the game here, but seeing as I was on the tasting panel, I will lend my two cents’ worth. All I wish to add is that A) in hindsight I was very surprised how poorly the wines showed in general, B) how the “big names” failed to impress, and C) how low some of the scores were. It may have been a bad biodynamic tasting day (I have not checked), but many benchmark wines struggled to reach 15/20, when they would perhaps be expected to easily rate 16 or 17/20 or more. As for the FMC, it is normally an impressive wine despite its high RS (crowd-pleasing), but I am afraid it just did not come up to scratch on the day. I don’t think the answer lies in criticising the tasting panel or the methodology. Panel tastings do serve a purpose and are a good guide to value and quality without the influence of labels. As Cam Haskell points out in the comments, sometimes the scores just don’t go a wine’s way on the day, even if everyone is tasting on top form.

  17. Precisely, Tim!! Hope you can come to dinner on 24th March. Chris M will invite you 🙂

  18. I think Jamie has a couple of reasonable points. At least the last two Barolo tastings have thrown up some strange results. Many of the best wines, as rated or known by those in the know (as well as locals) in Piemonte, scored badly, and some not-so-good wines and very poor representations of Barolo got high marks. Perhaps it’s because a panel doesn’t always understand a wine region, or judges it on an “Anglicised” taste suitable for the UK, but they often get Nebbiolo wrong!!

  19. Thanks, Jamie, for raising these issues. Hope you don’t mind me adding my views here, as a co-chair at the International Wine Challenge, a consumer journalist and an experienced show judge.

    No panel tasting (or any tasting for that matter) is perfect. But I honestly believe, having judged wines all over the world, that our system at the IWC (I also use a modified version of this for the Top 100 IGP and New Wave Spain competitions, which I chair) is the fairest way of tasting wines. Every wine is tasted twice, and the medal-worthy wines at least three and sometimes as many as seven times.

    The more often experienced tasters look at a wine (and that includes fine wines, Jamie), the more likely they are to come up with the “right” result. I know the word “right” is subjective (I’m generally in the undrinkable camp on Dal Forno), but I still think that most wines get the medals they deserve at the IWC. There is a lot of consistency from year to year. Having a group of co-chairmen who work well together and respect each other’s palates helps, too, as does having excellent panel chairs, including Jamie.

    I think the IWC and the DWWA (which is a different but also very good competition) have an important role to play as blind tastings. They identify new stars and level the playing field, which is why famous wineries don’t generally like them. There is nothing like a blind tasting, as we all know, for confounding expectations.

    I agree that it’s important for individual tasters to have their say, too. That’s why I have published 2009 reports on Bordeaux and Burgundy on my site (http://www.timatkin.com/) over the last 12 months. But I love tasting wines with other, experienced tasters too, discussing them in depth and learning from the experience. That’s why we restrict the number of wines that panels taste at the IWC and encourage debate.

    Tim Atkin MW

  20. With reference to the Decanter tasting of SA Chenin Blanc, surely the crucial debate is whether or not the entire category deserves to be accused of “monotony, neutrality and non-existent regionality” as opposed to whether or not The FMC is a top wine?

  21. Tom, thanks for that reply to my thoughts. However, I find your statement that “Just because a fine wine, or perhaps any wine, can be highly interpretable doesn’t mean that any one taster has to acknowledge, let alone support every possible opinion on it” to be in itself a contradiction. And Jamie isn’t failing to acknowledge or support another opinion, he is openly attacking it for being different to his own.

    He does this despite his declared opinion that wines “with their individual personalities, divide even ‘expert’ tasters” and “for more serious wines, I don’t think there is a single ‘truth’”.

    That’s definitely a contradiction. And some of it smells of palate absolutism (i.e. my opinion is right, yours is different and therefore wrong), although deep down I can’t really believe Jamie meant to come across like that.

    If the post were purely about the smudging of results by panels then fair enough – that’s a valid point, and why the WOFW presentation of results works well, and it is a feature in *both* the IWC and DWWA (I judge the Loire in the latter, so I understand the problems and how Decanter have tried to deal with them – I am certain the IWC team do the same). But in this case, how does Jamie know that the judges didn’t all agree that the FMC should be rated at that level? He doesn’t. Because the opinion differs from his, he has assumed there is controversy and confusion within the panel’s decision on this wine.

  22. Fascinating debate! Unsurprisingly, I agree with much that Tim Atkin has said in his comment.

    At the IWC, I believe there are two crucial aspects which help us reach better conclusions than many other panel tastings. Firstly, every panel discusses and debates its thoughts and marks on the wines. We do not simply add up marks and divide by the number of tasters. Whether you eliminate outlying marks or not, that is a recipe for ‘smudgy’ results. So, if one taster thinks a wine is great, it will be discussed (and probably retasted) by the whole panel. Ditto if one taster thinks a wine faulty. The process takes longer, but we believe it gives a fairer result.

    Secondly, as Tim says, we check and recheck at every stage. So by the time a wine wins a Trophy, it may have been tasted up to 7 times.

    Nobody is a perfect taster, and everyone has ‘bad’ tasting days. But discussion and checking marks give fairer results.

  23. Chris, I agree that it was quite starkly put (!), but I think that part of being critical is to (constructively) examine other people’s opinions – as a consumer, the information is useful to me (such as your recent criticism of other opinions on Bordeaux 2002, for example). I’m a regular visitor to both your sites as well as many other sources – perhaps the ultimate interpretation is up to the consumer, armed with a good handful of honest opinions from wine writers whose opinions are ‘right’ for them, based on the glass of wine they have in front of them on that particular day, with that particular weather, from the end of a bottle (from a particular lot) which was at the wrong temperature, in a stuffy crowded room next to somebody wearing perfume… on a ‘root’ day…

  24. I agree with Christian but I think the tone of the report doesn’t wholly reflect the results, where some seriously good chenins have performed well and deserve due appreciation for those who love chenin. The problem with reputations, based as much on marketing as quality, is that they can easily be debunked in a blind tasting. If you like The FMC, don’t be put off by these results, as it isn’t any better or worse than previous vintages; it was only for those who do like it that the result appears ‘ludicrous’.

  25. Thanks Tim and Charles for your clarifications regarding the International Wine Challenge. Clearly, no tasting is going to have absolutely perfect results. But, if you try looking for anomalous results with wines you are familiar with, my experience is that the IWC process is highly robust.

  26. One quick question for Tim and Charles regarding the IWC procedure – I assume that at no point during the discussions and re-tasting (up to seven times…) the brand or identity of the wine is known to the tasters?

  27. Hi Chris B.

    The identity of the wines is at no point revealed to any of the judges (including the Co-chairs) right up to the announcement of the results at the LIWF. The only information provided is grape, region and vintage, alongside a wine code that is only deciphered in the organisers’ office.

    IWC

  28. Isn’t part of the issue that some of these tastings perhaps ought not be blind in the first place?

    I mean, fair enough if you do a panel tasting of, say, ’82 or ’89 claret and taste them blind. But tasting 2009 Californian Cabernets (or clarets) blind will always lead to strange results.
    Hence 2005 Pedesclaux ranked higher than a first growth (Lafite?) in a Decanter panel tasting. Actually that doesn’t surprise me: the 1st growth is aiming to be drinking well in 20 years, the 5th growth in 8 years, so it makes sense that stylistically it would be more together at a younger phase.

    Without knowing whether you are tasting something for the long haul or something for the medium term, you cannot make an accurate argument.

    The other issue with panels is that you end up with a lot of ‘average’ wines, which actually are probably Marmite jobs.

  29. Chris, the identity of the wines is never revealed to the tasters, including the co-chairmen. We don’t know what has won until the results are announced in May and September. There is a very sturdy Chinese wall between the organisers and the tasters.

  30. Ok, here goes. I guess that my own experience at the IWC until 2005 and at numerous other competitions across the globe, along with my more recent experience entering my own wines into wine competitions, leads me to admit that of course it’s an imperfect system. Wines can do well in one good competition and badly in another. Barometric pressure can have a role to play, as may biodynamic calendars for those who believe in them. The position in a lineup of wines will also be relevant, as may the temperature and the mood of the room. And yes, as a former consumer wine critic, I predictably reckon there’s also a lot to be said for having an individual like Tim (either of them) or Charles or Jamie whose tastes and prejudices you can get to know and understand. But those individuals have their (biodynamic?) good and bad days too. Panel tastings when the panels are good are probably rather like democracy. Highly fallible but generally better than any alternative. A wine that has done well in a few serious competitions or panel tastings is almost certainly worth looking at.

  31. I routinely tell friends to pick their guidance according to their needs. If I am not familiar with a particular type of wine and am curious to try one, I would choose something from the IWC winners list to buy and feel confident that as an IWC pick, it will certainly be a sound, well made example of that type of wine. Whether I actually like it or not is my palate’s problem, not the IWC’s! But when I am looking for more finely-tuned guidance within a group of wines, e.g. Burgundy, which I know fairly well but want to try more, or I am looking at more expensive wines where uninformed random punt purchases are too risky/expensive, then I rely on a critic or wine-seller whose palate I know and trust. For example, I have done tutored tastings with Jasper Morris, followed his reviews closely and mentally compared his comments with my own experiences of the same wines. I now know the standard deviation between his palate and mine – and I know that sometimes what is NOT his favourite or top-scoring choice might be perfect for my preferences. So, to me, there is a place for both blind panels and individual reviews, but I agree with Jamie that the in-between small panels which try to average do muddy the waters badly. In the WOFW panels, at least the reader can follow the discussion and decide for themselves what to make of it.

  32. I hate to appear the miserable old scrote which I undoubtedly am, but any normal punters reading this debate will by now be not only nonplussed at the spiralling arguments but also rapidly led to infer that wine nerddom is best steered clear of.

    Let’s get one thing straight: tasting panels are an artificial device designed to lend gravitas to any conclusions arrived at. As we all know, put a bunch of wine people in a room and you will inevitably be dealt a wide spread of opinions and likes/dislikes. This is because (don’t tell the WSET) tasting is an inexact science, subject to an impossibly large range of factors, e.g. place, temperature, state of mind, experience, root or flower day, paying the bill yourself or not, etc, etc… if it wasn’t an inexact science then there would be no sommeliers, no ****-mags for winos, and the lad or lady on the street could easily cut a swathe through the vast and proliferating choice now available.

    Do you know what, I’ve forgotten what my point was now. Oh yes, let’s not take it so seriously!

    Thanks for an intelligent and absorbing blog anyway Mr Goode.

  33. Nice article! I’m all about trying out Love/Hate wines to form my own opinion, but I guess they just get buried somewhere in the “average” pile by panel tasting.

    But really, we should just get rid of medals altogether. Only anonymous supermarket shelves require them, and these wines aren’t really “serious” to begin with. Fine wine is not a consumer good to be rated, but an *expression* to be discussed among wine lovers!

  34. Some facts and a suggestion:

    (1) Some information is better than no information. So tasting opinions are of some use in giving me information, but if there is to be a numeric score, the averaged score should ALWAYS be marked with an “x” or some other indication where the average was made up of individual scores that showed a wide variation (say, 4 or more points from best to worst). That flag tells me the wine is still worth investigating, as there is something there which comes down to individual palate (a small sketch of this idea follows at the end of this comment).

    (2) Individual palates vary. Why do they vary? There is some genetic basis: some individuals, for example, can taste the bitterness in certain chemicals while others cannot, and for a few chemicals this has been shown to be inherited in a Mendelian dominant pattern. Additionally, the taste of a substance can change slightly depending on what you tasted immediately preceding it. This is a simple fact of human physiology. You can therefore imagine what effects it has on an intense, rapid, serial tasting session.

    Know what you like, and don’t pay too much just for “points”. Kind regards all.
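
    A minimal sketch of the flag suggested in point (1), in Python; the four-point threshold and the “x” suffix are just the hypothetical convention described above, not any competition’s actual scoring system:

        def panel_average(scores, wide_spread=4):
            """Average a panel's scores, appending an 'x' when the individual
            marks span a wide range (a divided panel worth a closer look)."""
            avg = sum(scores) / len(scores)
            divided = (max(scores) - min(scores)) >= wide_spread
            return f"{avg:.1f}x" if divided else f"{avg:.1f}"

        print(panel_average([12, 13, 14]))  # 13.0   (tight spread, no flag)
        print(panel_average([11, 13, 17]))  # 13.7x  (wide spread, flagged)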
