Score inflation is everywhere and it’s killing wine criticism



These towers used to be the tallest…

Back in 1984, I sat my ‘O’ level exams at the Royal Grammar School in High Wycombe. I was a quiet, geeky pupil – a classic in-betweener, friends with some of the cool set but not part of it, and an ally of the square, loser kids. And I was lazy with my homework (it bored me). It’s perhaps for this reason that my teachers were surprised when I came away with 10 ‘A’ grades and a solitary ‘B’ (in French…). Those results were exceptional because they were rare, even in a good school like this. I surprised myself.

Step forward 30 years, and ‘O’ levels have been replaced by GCSEs, and suddenly every smart 16-year-old is getting straight ‘A’s, to the point that these exams have lost their power to differentiate at the top end, and the elite universities can no longer use them to separate out the very smartest pupils. Government pressure to improve standards, the use of exam results as a metric, and a desire to make education more inclusive combined into a subtle push towards gradual grade creep: each year, exam results improved slightly, and the improvement compounded year on year.

There’s a parallel with wine criticism.

Whatever you think of its merits, scoring is fundamental to the practice of rating wines. But it is being undermined by a gradual upward creep in the scores wine critics give.

When Robert Parker began dishing out scores on the 100-point scale, he used a much wider range than the Wine Advocate currently does. Back in the early 90s, the 89/90 boundary was a big deal. (See this analysis by Blake Gray.) And as a novice wine lover, the 86As (inexpensive wines, designated by A, that scored 86/100) were a happy hunting ground for me.

Now, 90 is a very normal score, and 86 is a fail. No one wants to see an 89.

Why have scores gone up? Has wine quality got better? To a degree, average wine quality has improved. But I don’t think this explains the creep at the top end.

This score inflation is caused by competition among critics, big egos, and the fact that these critics like being liked.

Competitive scoring began when Parker first faced competition from the Wine Spectator. Then a new generation of critics emerged, all doing the same sort of thing. There is now a whole range of critical voices whose business model is based on attracting subscribers, selling reports, selling stickers, selling certificates, and putting on events paid for by both consumers and wineries.

What the point-scorers realised was that if they gave higher scores, the wineries were delighted. The wineries started quoting those scores in their marketing materials, buying the stickers, paying to be included in events, and displaying the certificates they’d purchased in their tasting rooms. And retailers used the scores at point of sale.

Basically, if you scored more generously than the competition, you would be the one quoted, and the wineries and retail stores would give you free publicity. And so the critics became addicted to this frenzy of dishing out high scores and being celebrated and loved.

But there’s also the effect of affirmation from producers. If, as a critic, you have some personal insecurities, then it can feel great to be loved. If you give a high score, then you make producers very happy, and they like you and affirm you. You feel like you’ve made a bit of a difference. And when you see your scores being quoted, it makes you feel a bit more significant. There’s a psychological pressure right there to err on the side of generosity.

Even very good people have been sucked into this silly game. 95 is the new 90. There’s very little room left at the top end. People who should know better have found themselves unable to show restraint. Their competitive spirit has drawn them in. They can’t kick the habit.

Look at the Australian situation: it’s probably the worst. Honest but ordinary table wines are getting 95 points. An Aussie told me last week that you need 98 points these days, or a score just isn’t any use in marketing.

Ultimately, dishing out elevated scores to get attention, be quoted more, and make more money from the wine producers themselves is essentially a selfish act. It muddies the water for the rest of us, who are trying not to let our scores creep upwards. It’s greedy and destructive.

95 and above should be reserved for truly exceptional, world class wines.

I’m not sure whether the 100-point scale can be saved. These critics show no signs of slowing down, and the score creep continues. The only hope? The absurd prospect of extending scores beyond 100! Be it symbols or extra points, just wait and see…


24 thoughts on “Score inflation is everywhere and it’s killing wine criticism”

  1. Good comments. I think it would be interesting to do some correlation analysis of the factors that might contribute to “score creep”. For example, pick a country or region, e.g. Australia, and then look at the wine score trends over the past 20 years, as compared with the trend in “sugar creep”. Is there a reasonably close fit between those trend lines?

    Maybe price trends for a region are another variable to look at over the past 20 years, as compared to average scores. A higher score may justify a bigger price.

    In any event, scores are still a valuable tool for overwhelmed consumers trying to pick a product in wine shops. Perhaps it becomes a case of relative scores rather than absolute scores, within a variety or region.

  2. It almost sounds like a cartel:

    “Basically, if you scored more generously than the competition, you would be the one quoted, and the wineries and retail stores would give you free publicity. And so the critics became addicted to this frenzy of dishing out high scores and being celebrated and loved.”

    Perhaps if wine critics spent more time with their words, rather than numbers, things might improve?

    Imagine a very short episode of “Top Gear” where Clarkson, May, and the little bloke just say, “I’ll give it a 98. It’s a 93 from me. And a 95 at best. Good night…”

  3. I know I’m stating the bleedin’ obvious and you are well aware, but scoring is just so contrary to the joy of wine drinking. Ban it now.

    I’d grudgingly accept some sort of simple 5 star rating. I’d find that useful when trying to choose the odd bottle from a region I know little about (i.e. everywhere apart from Burgundy in my case).

  4. A good article, thank you.
    I remember being told years ago that full marks should be reserved for something so exceptional that awarding them was virtually impossible.
    Now 100/100 is handed out willy-nilly, like 5-star feedback on TripAdvisor, Amazon, etc.
    On the very few occasions I judge wine at small events in France, I have to recalibrate, as my fellow tasters use the full range of the 100-point scale. Which, of course, makes sense: no faffing about in the top 20 points as in the UK.
    If we’re given all those numbers, we should use them all.

  5. The new Guida Essenziale ai Vini d’Italia 2018 by Daniele Cernilli has 297 wines (not a typo!) at 95 points or over…

    It is a new guide on the market. Enough said.

  6. To get a meaningful ‘score’ I subtract 80 from the score and use that as a score out of 20… what’s the point of a 100-point scale that only uses 15 of its points?

    Alternatively, we’ll need wine scores that do the equivalent of going to 11.

  7. I rely on CellarTracker ratings (sometimes as confirmation of blog/pro ratings) and I think they are more reasonable. There is seldom a rating of 100, and 95+ scores are rare. What really works is the combined score of all the raters (mostly non-professional): some folks rate high and some lower, but an overall 90/91 is still a very good score.

    Maybe what we need is a group or site that produces the average rating for each of these pro reviewers. It would need a lot of computing power to make the comparisons, since reviewers don’t all rate the same wines, but I think it could be done.

    But I think wine is much better than it was when Parker started rating in the late 1970s, so the ratings should go up. There is also far more of it being produced, and reviewers will focus on the better wines, which means the scores will rise. When I started reading Parker in 1980 he would review many mediocre wines – there were a lot of wines he rated in the 70s and lower. I am guessing those wines don’t even get looked at now, assuming they are even being produced.

  8. Most consumers don’t have time (or don’t allocate time) to read the words, or only read the words if the score is very high. They also don’t have the occasion (or time) to do a lot of comparative tasting, nor the confidence to make their own evaluation of wine. There is also the ego/snob factor (“this wine got 99 points from so-and-so”). In recent years, I’ve often tasted 97/98-point wines against other vintages of the same wine rated at, say, 93 points by the same reviewer, with little meaningful quality difference. Selling the wines, we could not get enough of the 97/98s, and we couldn’t give away the 93s. It is a bit sad.

  9. Is this a reprint from 2000? From 1995?

    Luckily, in my view, overall wine quality has improved from a technical viewpoint: spoilage, chemical faults, contamination and so on. So 20 or 30 years ago an 80-point wine probably had a real flaw, but now so few flawed wines ever reach the consumer that almost all wines up for sale are pleasant. Now, style is a different issue. Big oak or no oak, too much residual sugar or none, super purple or natural colour – these are all over the map. C’est la vie.

  10. With all respect, I’m surprised to note the date on this piece isn’t October 5, 2007: this has been going on for quite a while now.

  11. Well said. Some wineries have come to live or die by the numbers and so too have the critics. Getting caught up in the numbers game is a tough road with little room between success and failure. For some consumers the numbers prevail over wine chemistry and their own likes and dislikes.

  12. Well, the counterintuitive response is that if you know what you like, you don’t have to worry about scores. In the Parker age, my favourite clarets were the 75-to-85-point wines, as they were balanced and interesting, while anything over that meant high alcohol and over-extraction.

  13. I had a glass of NV Pommery Brut today. I enjoyed it. I never score, but if I did, I’d guess an 87-pointer. It was a very nice glass of bubbly, but you 95-pointers would look aghast, eh!!

  14. I agree with Bruce Beaudin on the value of CellarTracker, not least for older wines, as evolution is paramount, especially in determining the optimum drinking window.

    However, Andreas Sundren Greniti – your suggestion wins today.

  15. Just look at the former Spectator critic whose scores are consistently 2-4 points higher than most others’, and see how stickers with his high scores adorn so many wine bottles.

  16. “I’d grudgingly accept some sort of simple 5 star rating.”
    I agree. I even pay attention to the gold/silver/bronze medals, although I also take note of which judging is involved, since I believe some are more legit than others. On the whole, I pay almost no attention to scores, except sometimes in negative ways. Such as when I see a $12 southern French red that has a Parker 93; I assume it’s high in alcohol and tastes it, so I avoid it.

  17. A genuinely open question, Jamie, but do you think your scores have been inflated over the years? And if so, why? Anecdotally, you used to give out a lot of high-80s scores, but now I read a lot more low-to-mid 90s. This may be because you are tasting better wines, because the types of wines you taste are much better now, or because you don’t taste many poor or mediocre wines these days.

    Thoughts on your own history?

  18. The answer is obvious but I was mocked for introducing it a decade ago. Score wine out of 109 points 🙂

  19. When a $12 Malbec gets 92 pts, the wine sells like buttered biscuits, but this has made it quite challenging to sell a 92-pt Langoa-Barton or Talbot priced, quite appropriately, at $50-60. In my experience, the $12 Malbec is hardly comparable to a good 3rd-5th growth. Obviously (to me), a different scale is being used, but how many consumers recognize that? At a given point level, perhaps some do; however, they appear to be few in number. It seems to me that, as much as overall grade inflation, the distorted perspective occasioned by this mis-calibration between the two scales has misled many consumers and harmed excellent producers in the very top areas (e.g., Bordeaux 3rd, 4th and 5th growths; village-level Burgundies).

  20. Jamie:
    Just a point of reference. I have been writing a syndicated wine column for more than 38 years, and a newsletter (Vintage Experiences) since 1996, and have never used number scoring as part of my wine analysis.
    Dan

