
Back to square one
The WSA musty taint survey

With a fierce debate raging in the wine trade over the validity of the Wine and Spirit Association’s research into cork taint, Jamie Goode raises doubts over the methodology employed and asks: is it time to rip up the report and start again?
(Reproduced with permission from Harpers Wine and Spirit Weekly, 11 October 2002, pp 36–38)

Back in May 1999, John Corbet-Milward of The Wine and Spirit Association (WSA) was in the audience at a panel debate on cork taint, held as part of that year’s London Wine Trade Fair. ‘The mood in the audience was one of confusion and crossness’, he recalls. ‘People were coming up with all sorts of figures, and there was no scientific basis to what was being said’. Instead of this ‘internecine strife’, as Corbet-Milward puts it, he thought it would be much better for the trade as a whole to work together to help produce taint-free wine for the consumer. So, after discussions with several key companies from various parts of the supply chain, and a quick whip round, the ‘WSA Musty Flavour Defects in Wine in the UK’ survey was born.

This survey involved a consortium of 18 companies, including retailers, producers, wholesalers and stopper manufacturers. Over the course of 12 months, from January 2001 to January 2002, data were collected on over 13,000 wines tasted by assessors in the contributing companies during the course of their work. The goal was to establish a ‘benchmarking baseline’ to estimate the true level of musty defects in wines on the UK market. Quentin Rappoport, director of the WSA, emphasizes that ‘these are not WSA results; we merely facilitated this study—we felt it was about time that everyone stopped fighting each other.’

Just 0.7%?
The need for solid data on the rate of cork taint is an acute one, so you’d think a study like this, which promised to demarcate the extent of the problem, would have been welcomed by the trade. But the publication of the final report, in June 2002, provoked a storm of controversy, principally because the final quoted figure of verified musty taint prevalence was almost bizarrely low, at 0.7% (Table 1). ‘We were astounded to see such a low figure’, says Warren Adamson, UK head of New Zealand’s Villa Maria. ‘Everything we’ve seen, from show results to specific tastings, suggests the real figure is 5–6%’. Helen McGinn, product development manager for wine at consortium member Tesco, concurs that the WSA results ‘don’t reflect our experience of TCA taint.’ She adds that ‘on the basis of our tasting experience, the level is nearer to 5%. We taste 100–200 wines on site every week as part of our regular quality control checks.’

What’s going on here? Is the real rate of cork taint very much lower than most of us had previously suspected, or is the WSA survey deeply flawed? Time to investigate.

TABLE 1  Results of the WSA Musty Flavour Defects in Wine in the UK survey

                                            Number of    % of total
                                            samples      samples
Total number of samples                       13,780       100.0
Reported as musty (before verification)          277         2.0
Verified as musty                                  94         0.7
Other reported defects (e.g. oxidation)          202         1.5
Total samples with reported defects              470a         3.4

Notes:
a This number is reduced by 9 because some samples exhibited more than one defect.

Survey methodology
The raw material for the study consisted of wines that ‘approved assessors’ from nine of the participating companies tasted as part of their normal duties. These assessors had to attend two training days held by the Campden and Chorleywood Food Research Association (CCFRA), which later collated all the data from the survey and produced the report. For each wine tasted, a form was filled in indicating the wine type, country of origin, price, closure type and the condition of the wine. So far so good.

The next stage involved ‘verification’ of suspected TCA taint. If wines were judged to have a ‘musty’ taint by the assessors, the ullaged bottles were resealed with the original closure and sent to one of two independent companies for verification: Geoff Taylor’s Corkwise and David Bird’s DBQA. The idea behind this stage of the process was to check that the assessors hadn’t misattributed the source of the wine fault. The final report doesn’t mention the method of verification used by these labs, but when questioned they both confirmed that it was another round of sensory analysis: in each case the wines were re-tasted by company staff within a week of receipt.

Remarkably, of the 277 samples identified by company assessors as musty, only 34% were ‘verified’ as musty. The report concludes, ‘It can be assumed from this that there was a significant degree of misclassification in terms of false positives, either in the form of other defects being wrongly reported as musty or satisfactory samples being classified as musty.’ This is a staggering discrepancy. Indeed, it seems to have been the cause of some internal strife within the consortium itself. Intriguingly, the report tells us that ‘One participant withdrew from the trial in October 2001 due to their concerns over the disparity between tasters and verifiers.’ Initially, the WSA chose not to disclose the identity of this participant, but some digging around revealed that it was Oddbins who had opted out.
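
To put these figures in perspective, here is a quick back-of-envelope check that simply re-derives the headline percentages from the Table 1 counts in a few lines of Python; it is purely illustrative and is not part of the survey’s own analysis.

total_samples = 13780      # all wines tasted over the 12-month survey (Table 1)
reported_musty = 277       # samples flagged as musty by the company assessors
verified_musty = 94        # samples confirmed as musty by the verifying labs

# Headline rates quoted in the report
reported_rate = reported_musty / total_samples        # roughly 2.0% of all samples
verified_rate = verified_musty / total_samples        # roughly 0.7% of all samples

# Fraction of assessor-flagged samples that survived verification
verification_rate = verified_musty / reported_musty   # roughly 34%

print(f'Reported as musty: {reported_rate:.1%}')
print(f'Verified as musty: {verified_rate:.1%}')
print(f'Flagged samples surviving verification: {verification_rate:.0%}')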

Oddbins’ concerns
Steve Daniel of Oddbins confirms: ‘Our major objection was methodology, specifically the lack of scientific controls and how the wine was verified as musty.’ The major problem was indeed the ‘huge discrepancy’ between what was submitted as musty and what was found in the verification step. This led to a further disagreement, centred on the WSA’s decision to focus solely on ‘commercially significant’ mustiness. Samples with low-level taint might not come across as overtly musty, but could still be out of condition. Things came to a head at a public meeting of consortium members in June 2001, where the WSA were openly questioned by many members about the verification procedure, and in particular about what had happened to the budget that was supposed to be in place for chemical analysis of submitted samples. The consortium was reassured that this was still in place. Subsequent to this meeting, the steps taken to rectify the verification problem ‘weren’t aggressive enough’, and Oddbins eventually withdrew in October. When questioned about the overall budget for the survey, the WSA’s Rappoport revealed that it was in the order of £30,000—so it’s hardly surprising that there wasn’t any chemical analysis.

Fudge factors
Aside from the rather opaque verification process, Oddbins’ criticism of the study highlights one of a couple of fudge factors that could be partly responsible for the low final rate of taint claimed by the survey. The emphasis of the project was on ‘defects considered to be at a level that is likely to be detected by a discerning consumer’. Rappoport confirms that ‘we were not measuring incidence [of cork taint] in terms of zero tolerance’. This leads us to an unanswerable question: what constitutes commercially significant musty taint, and who gets to decide? People are known to differ in their sensitivity to TCA, and it is far from clear that a discerning consumer is any less sensitive to it than the professional assessors in this survey. And what about low-level cork taint that introduces only a very faint mustiness yet strips the wine of its fruit?

A second fudge factor is that the final rate of musty taint quoted by the report covers all closure types, not just cork-based ones. Ironically, while the report studiously avoids using the term ‘cork taint’, none of the 1934 wines sealed with non-cork closures showed any mustiness—an important observation.

Faulty methodology
But the most damning criticism of the WSA’s methodology comes from the findings of the Australian Wine Research Institute (AWRI). A single 10-minute phone conversation with the AWRI’s Peter Godden was enough to expose the gaping holes in the scientific design of this survey.

There are two fundamental assumptions underlying the verification step in the WSA survey. The first is that TCA is stable enough that musty taint detected by assessors will still be detectable by the verifying laboratories up to a week later, when the wines are re-tasted. The second is that TCA is readily detectable against a background of oxidation, which will have occurred between the first tasting and the retasting of the ullaged bottles. Godden thinks that both of these assumptions are false.

When the AWRI began their first closure survey three years ago, they tested the stability of TCA in opened bottles of corked wine. ‘We tested ullaged bottles with a reasonably high level of TCA—15 ng/l. Ullaged bottles were recorked and left on a desk for 2 weeks. Just a trace of TCA was found: all the rest had been absorbed back into the cork.’ Godden thinks it is ‘quite probable’ that most of the TCA in the musty wines submitted for verification could have been absorbed back into the cork. For this reason, in their studies the AWRI insist that samples to be tested later for TCA should be transferred after opening to all-glass containers (with ground glass stoppers) or glass bottles with an aluminium foil barrier between the wine and the stopper. Plastic is no good because the TCA will all be absorbed by the plastic within a few days. 

Godden also strongly disagrees that musty off-flavours will be readily detectable over the background of oxidation. ‘In the last 6 months’, he says, ‘we have done a major investigation in an insurance case where there has been random bottle oxidation, in which we have investigated how oxidation affects the perception of TCA.’ The AWRI carried out sensory analysis of all the bottles, together with chemical analysis of the oxidised bottles. The conclusion? ‘Oxidation has a massive effect on the ability of experienced tasters to assess TCA’.

‘We expected the cork people to be touting these WSA results more’, adds Godden. He guesses that the reason the pro-cork lobby haven’t done this is that they may suspect there is a problem with the study.

Like many others, Godden is convinced that the real level of cork taint is substantially higher than the 0.7% claimed by the WSA study. Once a year the AWRI run a wine assessment course for potential wine show judges. In this event there are at least two bottles of the same wine open at the same time, and for a wine to be recorded as tainted with TCA, there has to be an overwhelming consensus. ‘We’ve got good stats that 62 out of 1062 bottles we have opened have been corked’, says Godden. ‘That’s 5.5%, and statistically, we can be 99% confident that the real level of taint is 4%–7.7%.’
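
As a rough cross-check on the statistics Godden quotes, the short sketch below applies the standard normal approximation for a binomial proportion to the 62-of-1062 figure; it is an illustration of the calculation involved, not the AWRI’s own method, and at the 99% level it yields an interval of roughly 4% to 7.7%, in line with the range he cites.

import math

corked = 62        # bottles recorded as TCA-tainted at the AWRI assessment course
opened = 1062      # total bottles opened

p = corked / opened                      # observed proportion of corked bottles (about 5.8%)
z = 2.576                                # z-value for a 99% confidence level
se = math.sqrt(p * (1 - p) / opened)     # standard error of the proportion

lower, upper = p - z * se, p + z * se
print(f'Approximate 99% confidence interval: {lower:.1%} to {upper:.1%}')
# Prints an interval of roughly 4.0% to 7.7%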

Conclusions
Where does this leave the WSA survey? First, the AWRI’s findings provide an explanation for the controversial discrepancy between the number of submitted musty samples and those that were verified as musty by the independent labs, making a mockery of the survey methodology in the process. Second, they mean that the bizarrely low quoted rate of musty taint of 0.7% is anything but a ‘benchmark baseline’. Instead, it can be dismissed as an artefact of a flawed methodology. While credit is due to the WSA for initiating this survey in the first place, it is a shame that the poor study design meant that this turned out to be a largely wasted opportunity. In addition, there are good grounds for suggesting that the final report—available free to journalists but otherwise £500 a pop—should be rewritten in light of the methodological problems exposed here.
