Scoring wines: peer group or absolute?



There’s an interesting discussion on Twitter, concerning the scoring of wines. It was prompted by a Decanter blog by the usually brilliant Andrew Jefford here. Jefford starts off on fire:

“Scores for wines are philosophically untenable, aesthetically noxious – but have great practical value.”

That’s entirely my view. I hate them. But I use them. Those who don’t use them frustrate me deeply because I never know exactly how much they liked the wine. If I score a wine 93, you know that I thought it was pretty serious. If I score it 95, you know I think it was VERY serious. If I score it 98, it is utterly mind-blowingly fabulous – one of the best wines I have ever tasted.

This is where the controversy comes. You will see from my comments that I use an absolute scale. I like a 94-point Beaujolais as much as a 94-point first growth Claret, or Grand Cru Burgundy. If I score a Portuguese red 95 and a Bordeaux 91, I’d prefer to drink the Portuguese red.

Jefford advocates peer-group scoring, as does Robert Parker (although in practice, I think Parker scores more absolutely, otherwise his 100 point tastings would be odd). Jefford makes out that peer-group scoring is the only sensible way to rate wines.

I disagree. Peer-group scoring protects the established, famous appellations. It patronizes ‘lesser’ wine regions. It’s silly and retrograde. Absolute scoring means that if someone on Tenerife has an amazing terroir, interprets this sensibly, and makes a stunning wine, it can compete with stunning wines from anywhere. I love this. If I encounter a profound wine, I’m not afraid to score highly, wherever it is from. Absolute scoring allows my readers to catch my enthusiasm. They know how much I like the wine. If I give a Tenerife wine 95 points and I am working on a peer-group scoring system, then you, as a reader, have no useful information at all.

Scoring wines is silly, but it is useful. It is our duty as writers to make it as un-silly as we can. That means a score is a score wherever the wine comes from.


13 thoughts on “Scoring wines: peer group or absolute?”

  1. I watched the twitter discussion via you, Mr. Asimov and Mr. Bonne. I agree most with Mr. Jefford.

    My problem with scoring wines is that mostly the scores don’t reflect the way I drink wine. I open a bottle, two at the most, with dinner and we drink it throughout the meal because we like the way the wine changes through the course of that meal. Some wines that initially were not my favorites have bloomed with some time in the glass as the meal progressed. The opposite happens with some wines also.

    I just have little faith in most wine scores because they tend to be quick impressions.

    ..and don’t get me started on wine competitions that hand out medals.

  2. Give me an absolute rating and I know exactly where you stand. Tell me what it tastes like and I know nothing unless I’ve followed you and become accustomed to your criteria, i.e. learned how your palate fits relative to mine – an impossible task when you consider the number of critics. One instance when peer scoring may be meaningful is if you’re keeping a private record of labels in your cellar. Once it comes out of the closet it needs to be prefixed with ‘of those in its peer group’, but then it deteriorates into ‘of the same vintage’ or ‘of the same style’, etc. Peer ratings would be great for creating discussions amongst colleagues.

  3. Fascinating that you regard peer group scoring as protecting the big name appellations, as I would say exactly the same of a ‘universal’ scale (sorry, I find the idea of universal scales/palates vaguely absurd). The problem is that universal scoring holds back ‘lesser’ appellations, as tasters decide in advance that a ‘lesser’ wine, e.g. Muscadet, can never be as good as a great white Burgundy. So there is a glass ceiling – but a false ceiling – since no matter how great the wine, the maximum score it can achieve might be 92, or 94, or whatever the best Muscadet is capable of in the eyes of the taster in question.

    In short, both concepts of scoring have their flaws.

  4. Other than what you describe Jamie, a key problem with “peer groups” is defining them, which in practice is impossible. Is syrah a different peer group from grenache? Probably yes, but what about blends? Is Bordeaux a different peer group from Aussie cabernets? Probably not, but what about Aussie cabernet-shiraz blends? And then it’s a slippery slope all the way to 100% shiraz. Or with such wines, if you don’t assign an absolute score, do you give it separate scores depending on the particular peer groups you classify it under?

    The whole thing gets absurd.

  5. Well, at least you’re honest enough to admit that your scores reflect your subjective take on a wine, i.e. the extent to which you like it, rather than holding it out to be some absolute measure of the quality of a wine (as Parker, for example, does).

    Scoring wines is indeed ridiculous. Hugh Johnson’s views on the subject, as expressed in “A Life Uncorked”, are spot on. The only context in which a score can have any real meaning is when used by the likes of Clive Coates, M.W., i.e. in relation to a “peer group within a peer group”: you take a range of, say, burgundies (one peer group), from a single vintage (peer group within a peer group) and it is then legitimate to compare and score one against another. Otherwise, a score simply reflects one taster’s impression of a single sip of wine at a point in time (or occasionally an impression gained from drinking or sharing one bottle of wine in the course of a meal).

  6. I wonder if you would consider price grouping to be a peer group? I judge at both IWC and Decanter awards. At IWC we are judging wines for outright quality with no consideration given to price. At Decanter, wines are grouped within price bands and judged for both quality and value. As such Decanter ought to be seen as a value award, and IWC as a quality award. I don’t think this difference is appreciated by the public, or by those entering their wines.

  7. I personally find that scores without a qualitative description are worthless. I use CellarTracker extensively and enjoy other tasters’ notes and descriptions of a wine; however, if they just post a score, that means nothing to me. (In fact I filter the notes so scores without notes are not shown.)

    I don’t really agree with strictlytasting’s quote “Tell me what it tastes like etc. and I know nothing” – I think if a note gives even basics such as type of fruit, acid, oak etc., that is of far more value than a single number.

    I don’t think there is a perfect way and tend to use wine notes as a guide to explore new wines rather than a definitive statement.

  8. There is clearly a case to be made for either peer group tasting or universal tasting. There are merits and flaws in each. Maybe transparency is the key. Knowing how Jamie tastes is helpful in using his scores, as are his notes. He may score a wine as 88 or 89 and say it is a really good example of its type or value for money. Transparency is important with the IWC/DWWA difference. I think the DWWA approach is friendlier to less wealthy consumers (like me) who can identify good value (as opposed to just cheap) wines and can take the risk of the occasional trade-up with a bit more confidence.

  9. I think Andrew Jefford’s point is well made. In a sense scoring is shorthand for the taster’s assessment of quality (whether “objective” or subjective). Where I find scores useful is where tasting notes give little indication of whether that person actually liked the wine, and if so, how much. There are so many TNs out there that use descriptors of the nose and palate of a wine but have me screaming at the end, “yes, but did you actually like the wine?”. Points can make up for that, though I think only really subjectively.

    I think peer scoring doesn’t work for the reason Chris Kissack says. But it is really useful when professional TNs do include peer group context.

  10. Jamie,

    Because of its length, let me make this a two-part “reply.”

    As I understand from my reading of Clive Coates, M.W, he scores within the vintage as a “peer group” — not across vintages.

    So a “19 point” score in an “off” vintage is not qualitatively equal to a “19 point” score in a “great” vintage.

    That close-reading nuance is lost on most consumers, particularly if they are unfamiliar with the reputations of vintages.

    As for Robert Parker here in the States, see “part two” of my reply.

    ~~ Bob

  11. Jamie.

    Excerpts from a 1989 interview elaborating on his well-known 100 point system. And on his unknown 90 point system.

    (No, that last sentence is not a typo . . .)

    ~~ Bob

    Excerpts from Wine Times (September/October 1989 issue) interview
    with Robert Parker, publisher of The Wine Advocate

    WINE TIMES: How is your scoring system different from The Wine Spectator’s?

    PARKER: Theirs is really a different animal than mine, though if someone just looks at both of them, they are, quote, two 100-point systems. Theirs, in fact, is advertised as a 100-point system; MINE FROM THE VERY BEGINNING IS A 50-POINT SYSTEM. If you start at 50 and go to 100, it is clear it’s a 50-point system, and it has always been clear. MINE IS BASICALLY TWO 20-POINT SYSTEMS WITH A 10-POINT CUSHION ON TOP FOR WINES THAT HAVE THE ABILITY TO AGE. . . . [CAPITALIZATION added for emphasis. ~~ Bob Henry]

    . . . The newsletter was always meant to be a guide, one person’s opinion. The scoring system was always meant to be an accessory to the written reviews, tasting notes. That’s why I use sentences and try and make it interesting. Reading is a lost skill in America. There’s a certain segment of my readers who only look at numbers, but I think it is a much smaller segment than most wine writers would like to believe. The tasting notes are one thing, but in order to communicate effectively and quickly where a wine placed vis-à-vis its peer group, a numerical scale was necessary. If I didn’t do that, it would have been a sort of cop-out.

    I thought one of the jokes of the 20-point systems is that everyone uses half points, so it’s really a 40-point system — which no one will acknowledge — and MINE IS A 50-POINT SYSTEM, AND IN MOST CASES A 40-POINT SYSTEM.

    WINE TIMES: But how do you split the hairs between an 81 and an 83?

    PARKER: It’s a fairly methodical system. THE WINE GETS UP TO 5 POINTS ON COLOR, UP TO 15 ON BOUQUET AND AROMA, AND UP TO 20 POINTS ON FLAVOR, HARMONY AND LENGTH. And that gets you 40 points right there. AND THEN THE [BALANCE OF] 10 POINTS ARE . . . SIMPLY AWARDED TO WINES THAT HAVE THE ABILITY TO IMPROVE IN THE BOTTLE. THIS IS SORT OF ARBITRARY AND GETS ME INTO TROUBLE.

    WINE TIMES: You mean when you are in the cellars of Burgundy, you look at a wine and say this is a 4 for color, a 14 for bouquet, and so on [ ? ]

    PARKER: Yes, most of the times. What happens is that I’ve done so many wines by now that I know virtually right away that it’s, say, upper 80s, and you sort of start working backwards. And color now is sort of an academic issue. The technology of color is refined and most color is fine. MY SYSTEM APPLIES BEST TO YOUNG WINES BECAUSE OLDER WINES, ONCE THEY’VE PASSED THEIR PRIME, END UP GETTING LOWER SCORES.

    WINE TIMES: Your scores get 50 points added on and look like the grades boys and girls get in school, and I know that’s why you ended up with a system with 100 points, but don’t you give out too many high grades? The highest percentage of your grades are in the 80s and then some are in the 90s. Are there lots of wines you taste that you don’t evaluate?

    PARKER: Yes. I try to focus on the best wines in The Wine Advocate, or especially when I do the Buyer’s Guide, my publisher doesn’t want to take up space with 50s, 60s, or even 70s. When I’m looking for a best buy, I might go through hundreds of wines, or when I go through the wines of Hungary or Yugoslavia, I’ll never put most of them in The Wine Advocate. I could never justify taking two or three pages to publish those results. . . .

    WINE TIMES: The answer is partly to give you credibility. Right now the argument is that your average score in The Wine Advocate is in the 80s, and it doesn’t matter if it’s 81 or 84. If it’s in the newsletter, buy it.

    PARKER: No. I buy wines, and I buy wines that are 85 or 86, not below that. But to me 90 is a special score and should be considered “outstanding” for its type.

    WINE TIMES: How do you determine merit versus value in a wine? Are there wines that will never get an 85? How do you compare the Chenin Blancs of the world with the . . . [ question interrupted ]

    PARKER: I had the two best Chenin Blancs I ever tasted out of California last year, and one [1987 vintage Preston] got 87, I think, and the other [1987 vintage Pine Ridge] 86, and they were both $6 bottles of wine. Most people are looking for good values, and I have a responsibility to these readers. The scores are given based upon quality not price. To me, the best values are under $10. Double digit prices are the point where consumers pause. Wine prices are rather high right now across the board. That’s where tasting notes come in. A wine that gets an 85 and costs $4 is obviously a very good value.

    WINE TIMES: You are arguing price versus quality. Take a $30 bottle [of] wine. To get an 87 does it have to show much better than a $7 bottle?

    PARKER: No. It’s one man’s opinion, but I think that 87-point [1987 vintage Preston] Chenin Blanc can go right on the table next to a Leflaive white Burgundy rated 87. They will give you different sets of flavors, but are every bit as good as each other. That’s the way the system was meant to work.

    WINE TIMES: Do you have a bias toward red wines? WHY AREN’T WHITE WINES GETTING AS MANY SCORES IN THE UPPER 90s? IS IT YOU OR IS IT THE WINE?

    PARKER: BECAUSE OF THAT 10-POINT CUSHION. Points are assigned to the overall quality but also to the potential period of time that wine can provide pleasure. And white Burgundies today have a lifespan of, at most, a decade with rare exceptions. Most top red wines can last 15 years and most top Bordeaux can last 20, 25 years.

    IT’S A SIGN OF THE SYSTEM THAT A GREAT 1985 MORGON [CRU BEAUJOLAIS] IS NOT GOING TO GET 100 POINTS BECAUSE IT’S NOT FAIR TO THE READER TO EQUATE A BEAUJOLAIS WITH A 1982 MOUTON-ROTHSCHILD.

    WINE TIMES: IN YOUR SYSTEM, WHAT WOULD BE THE HIGHEST RATED BEAUJOLAIS?

    PARKER: 90. THAT WOULD BE A “PERFECT” BEAUJOLAIS, AND I’VE NEVER GIVEN ONE. I have given a lot of 87s and 88s.

    [Bob Henry’s aside : In 1990, Parker awarded a score of 92 points to the 1989 vintage Georges Duboeuf “Jean Descombes” Morgon Cru Beaujolais, contradicting his then year-old statement above.

    Fast forward to 2011: the fabulous 2009 vintage Cru Beaujolais garnered scores in the 91 to 94 point range from Wine Advocate.]

    WINE TIMES: SO IT’S THE AGING POTENTIAL THAT IS THE KEY FACTOR THAT GETS A WINE INTO THE 90s.

    PARKER: YES. And it goes back to HOW I EVALUATE VINTAGES IN GENERAL. TO ME THE GREATNESS OF A VINTAGE IS ASSESSED TWO WAYS: 1) THE ABILITY TO PROVIDE PLEASURE — wine provides, above all, pleasure; 2) THE TIME PERIOD OVER WHICH IT CAN PROVIDE THAT PLEASURE.

    If a vintage can provide pleasure after 4 or 5 years and continue for 25 to 30 years, all the time being drinkable and providing immense satisfaction, that’s an extraordinary vintage. If you have to wait 20 years before you can drink the wines and you have basically a 5 or 10 year period to drink them before [the fruit flavors] “dry out,” it’s debatable then whether that’s a great vintage.

    . . .
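    Parker’s stated breakdown in the interview above (a base of 50, up to 5 points for colour, up to 15 for bouquet and aroma, up to 20 for flavour, harmony and length, plus a 10-point cushion for ageing potential) can be sketched as a small calculation. This is only an illustration of the arithmetic he describes; the function name and the range checks are my own assumptions, not Parker’s actual method:

```python
# Sketch of the 100-point breakdown Robert Parker describes in the 1989
# Wine Times interview quoted above. The function name and the validation
# of each component's range are illustrative assumptions.

def parker_score(color, bouquet, flavor, aging_cushion):
    """Combine component scores into a 50-100 point total.

    color:          0-5 points
    bouquet:        0-15 points (bouquet and aroma)
    flavor:         0-20 points (flavor, harmony and length)
    aging_cushion:  0-10 points (ability to improve in the bottle)
    """
    components = [(color, 5), (bouquet, 15), (flavor, 20), (aging_cushion, 10)]
    for value, cap in components:
        if not 0 <= value <= cap:
            raise ValueError(f"component score {value} outside 0-{cap}")
    return 50 + color + bouquet + flavor + aging_cushion

# A "perfect" Beaujolais under Parker's stated ceiling: full marks on
# everything except the ageing cushion gives exactly 90, which matches
# his answer that 90 would be a perfect Beaujolais.
print(parker_score(5, 15, 20, 0))   # 90
print(parker_score(5, 15, 20, 10))  # 100
```

    This also shows why, on his own account, the ageing cushion alone separates the 90s from the 80s: without it, the maximum attainable score is 90.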

  12. I find absolute scores useful for my work. I am generally working with Spanish wines, where the idea that one DO is automatically more prestigious than another is, I think, not relevant.

    I use a scoring system picked up when working as a winemaker; it goes from A to E. A is extremely good, C is decent but nothing special, E is faulty. You can subdivide if you like, but the key point for me is that with so few levels, it is hard for grade inflation to creep in.

    I like Jamie’s blog, but I find the scores almost totally irrelevant to me: the difference between 93 and 95 is apparently a lot, but it sounds like so little, and I perceive grade inflation. Maybe wines are consistently getting better, or maybe Jamie just reviews better and better wines. I am fine with that, but these numbers don’t matter to me, as everything seems to score highly in such a tight range. They would matter to me if Jamie just did something like 85-89 / 90-94 / 95+, but that’s the same as my earlier point.

    As for absolute scores, I can see both sides of the argument, and a good point made by Mark about how you define a peer group. But I prefer absolute, as I might want to compare Syrahs from Toledo, from the Barossa and from Hermitage in terms of how much I like them, and absolute scores are a crude way to get this info across, though much more useful if backed up with tasting notes.

  13. Even the Romans ranked wines. There is nothing wrong with scoring them. What I find absolutely unethical, annoying and disrespectful (to both consumers and producers) is the lack of transparency in the criteria. Where does a 92+ or an 18.5 score come from? Even when I buy a bike I get reviews that report the breakdown score for each single criterion. Why should wine be different?

    I wrote a piece a long time ago (and recently updated it) on my blog:

    http://salvybignose.blogspot.co.uk/p/on-power-of-critics.html

    Thanks
