Some thoughts about the ‘Judgment of Princeton’ tasting


There’s been a fair bit of chat in the wine world about ‘The Judgment of Princeton’ (here). This was a wine tasting organized by the American Association of Wine Economists (AAWE) along the lines of the famous Judgment of Paris from 1976, in which Californian wines gave some classic French wines a bit of a kicking in a blind tasting judged by French professionals. And this was at a time when new world wines were seen as distinctly inferior to their old world cousins.

The Princeton tasting pitted French classics against wines from New Jersey in a blind setting, with nine judges from the USA, France and Belgium. The full set of scores plus analysis is here.

I was fascinated to see that Tyler Colman (the only one of the judges I know personally) gave his highest red wine score to a New Jersey wine, and gave 2004 Mouton and 2004 Haut Brion 11/20 each, his lowest scores. I’m not saying he’s wrong, because I wasn’t there tasting alongside him, and don’t know how the wines showed. It’s just interesting.

As well as publishing the preferred wines in rank order, the AAWE also did a statistical analysis which showed that, with the exception of a couple of wines (the winning white and the losing red):

‘the rank order of the wines was mostly insignificant. That is, if the wine judges repeated the tasting, the results would most likely be different. From a statistical viewpoint, most wines were undistinguishable.’

This, rather than the fact that the New Jersey wines did quite well, was the take home message for many of the commentators on the tasting.
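
To make that statistical point concrete, here is a minimal sketch of the kind of resampling check that lies behind a claim like the one quoted above. The scores below are invented for illustration and this is not the AAWE’s actual analysis; the idea is simply to resample the panel of judges with replacement and count how often the same rank order of mean scores comes back. If that fraction is low, the observed ranking would quite likely change were the tasting repeated.

```python
# A minimal sketch (hypothetical scores, not the AAWE's actual method) of why a
# rank order can be statistically insignificant: resample the judges and see how
# often the same ordering of mean scores reappears.
import numpy as np

rng = np.random.default_rng(0)
wines = ["Wine A", "Wine B", "Wine C", "Wine D"]
# Rows = judges, columns = wines; scores out of 20 (all values invented).
scores = np.array([
    [14, 13, 15, 12],
    [11, 16, 13, 14],
    [15, 12, 14, 13],
    [13, 14, 12, 15],
    [12, 15, 13, 14],
    [16, 11, 14, 12],
    [13, 13, 15, 14],
    [14, 12, 13, 15],
    [12, 14, 14, 13],
])

n_judges = scores.shape[0]
observed_rank = np.argsort(-scores.mean(axis=0))  # best wine first

# Bootstrap over judges: draw a new panel of the same size, with replacement,
# and check whether the ranking of mean scores matches the observed one.
n_boot = 10_000
same_order = 0
for _ in range(n_boot):
    sample = scores[rng.integers(0, n_judges, n_judges)]
    if np.array_equal(np.argsort(-sample.mean(axis=0)), observed_rank):
        same_order += 1

print("Observed ranking:", [wines[i] for i in observed_rank])
print(f"Fraction of resampled panels reproducing it: {same_order / n_boot:.2f}")
```

With only nine judges and closely bunched scores, that fraction tends to be small, which is roughly what ‘mostly insignificant’ means here.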

Jonah Lehrer, writing in The New Yorker, delves into the questions this tasting raises here.

While I agree with some of his conclusions, others I have a problem with.

“What can we learn from these tests? First, that tasting wine is really hard, even for experts. Because the sensory differences between different bottles of rotten grape juice are so slight—and the differences get even more muddled after a few sips—there is often wide disagreement about which wines are best.”

“The results are even more distressing for non-experts. In recent decades, the wine world has become an increasingly quantitative place, as dependent on scores and statistics as Billy Beane. But these ratings suggest a false sense of precision, as if it were possible to reliably identify the difference between an eighty-nine-point Merlot from Jersey and a ninety-one-point blend from Bordeaux—or even a greater spread. And so we linger amid the wine racks, paralyzed by the alcoholic arithmetic. How much are we willing to pay for a few extra points? These calculations are almost certainly a waste of time.”

I would make the following points.

  • Sorry: wines don’t all taste the same. For some reason, newspapers love to run these stories suggesting that all wines taste pretty much the same, and that you are therefore just wasting your money buying expensive wine. That is a false conclusion. Of course, there is not a strong correlation between wine price and quality, for all sorts of complex reasons. But this doesn’t mean that it isn’t possible to get much more interesting wines by paying more money.
  • Blind tasting is difficult, and few can do it well. Is it possible that the line-up of judges in the Princeton tasting wasn’t a strong one? I don’t recognize many of the names. Are they experienced wine judges with a broad experience of international wines and good palates? I would bet a pricey bottle of wine that I could put together a list of experienced, able tasters who would produce much more robust results in this sort of tasting. Such tasters are rare, even among wine professionals, but they do exist, and the fact that they can perform well in blind settings like this shows that there are indeed real differences among wines.
  • At about the same time as this tasting was taking place, a sizeable number of students were sitting their Master of Wine examinations. These exams include a challenging blind tasting paper. The fact that many perform well in this examination shows that well trained tasters are able to differentiate quality in blind tasting with a degree of reliability. And in the Advanced Wine Assessment Course run by the Australian Wine Research Institute, participants’ palates are tested in a statistically robust manner to see how good they are (a rough sketch of this kind of repeatability check follows this list). Some are better than others, but those who reach show-judge standard score consistently within 0.5 points on a 20-point scale.
  • Yes, there is a degree of preference and subjectivity involved in wine assessment. But good professionals can set aside their preferences, and work more objectively in judging quality. There will certainly never be perfect agreement among a group of professionals, and in some cases there will be strong disagreement. But to conclude that wine tasting is entirely subjective and that all opinions are valid, and that expensive wine is a waste of money, is just silly.
  • One further point about blind versus sighted tastings. Seeing the label influences our perception of the wine; it brings our knowledge about the wine into play. But it also helps us understand the liquid in the glass better. We can put into context the flavours we are experiencing. This is important: it is more than just bias; it is making sense of the sensation, which allows us to reach a more robust conclusion.
  • Did this tasting place New Jersey wines on the map? No, because I think it was flawed. I don’t think the tasters did a good enough job, performing in an almost random fashion. I haven’t tasted the New Jersey wines, but I would be interested to—and I’m always open minded when it comes to non-classic wine regions. Yet I would be really surprised if they genuinely could compete in terms of quality and complexity with benchmark wines from Bordeaux in this sort of blind tasting setting with a highly competent, experienced tasting panel.
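
By way of illustration, here is a rough sketch of the repeatability check referred to a couple of bullets above. The numbers are invented and the AWRI’s actual protocol is more elaborate, but the principle is straightforward: the same wine is poured blind more than once, and the spread of a taster’s scores across those repeats is compared with a threshold such as 0.5 points on a 20-point scale.

```python
# A rough sketch (invented numbers, not the AWRI's actual protocol) of a
# repeatability check: the same wine is served blind several times and the
# spread of each taster's scores for it is measured.
import numpy as np

# Each list holds one taster's scores (out of 20) for the SAME wine,
# poured blind on separate occasions. All values are hypothetical.
repeat_scores = {
    "Taster 1": [16.5, 16.5, 17.0],  # consistent: spread within 0.5 points
    "Taster 2": [14.0, 16.5, 12.5],  # inconsistent: spread of 4 points
}

for taster, scores in repeat_scores.items():
    spread = max(scores) - min(scores)
    sd = np.std(scores, ddof=1)
    verdict = "consistent" if spread <= 0.5 else "inconsistent"
    print(f"{taster}: range {spread:.1f}, sd {sd:.2f} on a 20-point scale -> {verdict}")
```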

7 thoughts on “Some thoughts about the ‘Judgment of Princeton’ tasting”

  1. With my admittedly limited exposure to New Jersey wines, I have to express my extreme skepticism about their quality vis-à-vis well-known Bordeaux. This is not to say that the latter are always all they’re cracked up to be, but the olfactory and gustatory characteristics of wines from the two regions are very different.

    And, really, aside from students’ taking one of the big-time wine exams, what IS the value of tasting blind? The very difficulty you cite — the tendency to non-replicability of results with regard to individual wines — makes the practice seem nothing more than a parlor trick.

  2. It sounds like what you are saying is that experienced, skilled tasters would recognize the classified Bordeaux more ably. That is likely true, but I don’t think that is all that important. People are taught that classified growths are the archetype for Bordeaux, then use that as their basis. The skilled tasters will recognize the structure and flavor profile, but they also have learned the preference for this type of wine. It’s sort of the tail wagging the dog. I buy into this to some degree because I’ve acquired the preference for big structure in young wines and tertiary character in older wines. But its superiority is somewhat of a social construct in the first place. We adapt our palates to appreciate what the experts tell us are superior. I’m not in any way claiming there are not objective differences in industrial versus high quality wines. There are. But when it comes to conscientiously produced wines, the margins are often smaller than anticipated.

  3. greg writes
    ‘…when it comes to conscientiously produced wines, ‘
    I agree!
    Wine science is so advanced today that there is no excuse for producing uninteresting wines.
    It’s the terroir, stupid!

  4. I don’t read the results as ‘most wines taste the same’, but rather as ‘most wines will get statistically similar scores’, which is rather different.

    Suggesting the tasters aren’t good enough if you don’t like the results is unhelpful, especially if you don’t know anything about them. I’m not sure what ‘more robust results’ means here. Is there a suggestion that the tasting was technically flawed in some way? Or does robust just mean ‘more acceptable to me’?

    As to subjectivity, if I don’t like a wine, no professional taster in the world can tell me I’m in error (unless there is a technical fault I’m missing). Disliked wine is a waste of money, regardless of price, although doubtless one is less tolerant the higher the price. It occurs to me that your desire to substitute a different set of tasters (whose tastes you know) is subjectivity of a sort.

    Given you weren’t at the tasting, I think your conclusion that ‘the tasters didn’t do a good enough job’ and performed ‘in an almost random fashion’ (whatever that means) is arrogant, and your claim that the tasting was flawed is not supported by evidence.

  5. After searching the names…

    Tyler Colman = drvino.com
    John Foy = thewineoddessy.com
    Jean-M Cardebat = wine economics researcher in Bordeaux
    Olivier Gergaud = Wine Economics Researcher in Bordeaux
    Robert Hodgson = wine economics at Fieldbrook Winery; wrote a research paper about how wine judging has only a 10% reliability factor
    Linda Murphy = Wine Judge for SF Chronicle
    Daniele Meulders = wine prof in economics in Belgium
    Jamal Rayyis = palatesavvy.com
    Francis Schott = restaurateur and restaurant radio host

    Are any of these people qualified at blind tasting wines? Blind tasting is specialized and does require training.

    Having not tried any NJ wines, I could not pick them out as such, but I believe a trained somm could definitely pick out the Bordeaux.

    I used to think blind tasting was a pointless exercise (although fun!). If more tests like these fly around… maybe it is a necessary skill. Just sayin’

  6. You know I’m really not fond of oysters and can take or leave caviar – could I tell you which was the most expensive caviar or the best oyster?
    While different types of wine can be compared and scored, surely the merits of a really good wine need, to some extent, to be learned (and enjoyed)! I love a crisp sauvignon as much as the next man, and a fresh, fruity beaujolais as well, but does this make them the equal of a balanced, well-integrated glass of wine, evidently showing layers of taste and design and needing time to deliver them?
    We all use tasters as arbiters, guides for our ill-disciplined mouths, yardsticks to underpin our purchases. Their scores represent their view, always to be seen as a subjective one, of the drink being assessed. And assessed in terms of its surroundings, not world dominance.
    Perhaps the democratic cry of ‘I know what I like’ should decide all, but my vote is firmly cast with blind tasting and skilled tasters. I KNOW they do a better job than me!
