More thoughts on panel tastings
More thoughts on panels, prompted by yesterday’s judging experiences.
There are levels of quality discrimination at which, irrespective of biological differences, personal preferences and cultural likes and dislikes, a panel of experienced tasters can reasonably hope to agree on broad-brush ratings of wines. [I’m thinking here of whether a wine is worthy of a bronze, silver or gold medal in a competition.]
These sorts of panels are good at filtering out poor or badly made wines, but can lack discrimination at the higher end. We were averaging points (on the 20-point scale) yesterday, and that makes it quite hard to get silver medals and very hard to get golds, especially if you are using 18.5/20 as your benchmark for gold, as in the Australian show system.
For this reason, the benchmark for gold was set lower, at 17/20, with silver at 15.5 and bronze at 14. These may sound low, but they are realistic when you consider how averaging marks tends to bring the overall score down considerably.
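To see why averaging drags scores down, here is a minimal sketch. The thresholds are the ones quoted above (gold 17, silver 15.5, bronze 14 on the 20-point scale); the individual scores are hypothetical, invented purely to illustrate the effect of one dissenting taster.

```python
# Illustrative sketch: how averaging a panel's marks on the 20-point
# scale interacts with the medal thresholds quoted in the text.
# The panel scores below are made up for illustration.
GOLD, SILVER, BRONZE = 17.0, 15.5, 14.0

def medal(scores):
    """Average the panel's scores and map the result to a medal."""
    avg = sum(scores) / len(scores)
    if avg >= GOLD:
        return "gold"
    if avg >= SILVER:
        return "silver"
    if avg >= BRONZE:
        return "bronze"
    return "no medal"

# Five tasters all rate a wine at gold level (average 17.8)...
print(medal([18.0, 17.5, 18.0, 17.5, 18.0]))        # gold
# ...but add one outlier at 12.5 and the average falls to ~16.9.
print(medal([18.0, 17.5, 18.0, 17.5, 18.0, 12.5]))  # silver
```

A single low mark is enough to pull a clear gold down to a silver, which is why lowering the benchmarks (and conferring on split wines) makes sense when scores are averaged.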
To come up with sensible results, some level of conferring is necessary after each flight, to make sure that every wine gets a fair chance at the medal it deserves. We found that we agreed on the majority of the wines (perhaps with one outlier out of the six), but some wines split opinion. We went back to these and reassessed them.
The quality of the tasters is really important: one or two ‘random’ tasters in a panel can really skew the results.
Panels like these can lose their effectiveness when dealing with the highest quality wines. While they serve a useful purpose in rating commercial wines, panels (and averaging scores) don’t work well for fine wine. They end up creating too many anomalies.