Is There Consensus Among Wine Critics?: Who Can You Trust?

*Edited on 8/1/2014 to correct the tasting policy of Wine Enthusiast, and to include comments from Lauren Buzzeo, Tasting Director and Senior Editor for Wine Enthusiast Media.*

Wine Enthusiast, Wine Spectator, The Wine Advocate, and International Wine Cellar: most (if not all) people in the wine world have heard of these four publications, and many get information about the quality of a particular wine from the expert opinions penned therein.  However, how do we know whether a high-scoring wine in one magazine is truly a high-quality wine, and not just a reflection of the personal taste of whoever tasted it?  Is there a consensus among the wine critics at these major wine publications?  Or do the ratings depend upon who happened to taste the wine for that particular entity?

Photo By kerinin [CC-BY-SA-2.0 (http://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons

One recent study examined this very question using wines from California and Washington, and I’ll briefly present their results here.

  • Using correlation analysis (a minimal sketch of this kind of computation appears just below this list), the researchers determined that there was a high level of consensus between International Wine Cellar (IWC), The Wine Advocate, and Wine Spectator.
  • The consensus between Wine Advocate and Wine Spectator, while still relatively high, was lower than the consensus between IWC and either of those two publications.
  • Wine Enthusiast had a low correlation with both IWC and Wine Spectator, but was moderately correlated with Wine Advocate.
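
To make “correlation analysis” concrete, here is a minimal sketch (not the authors’ code) of how pairwise consensus between publications could be measured.  The publication names are real, but the scores are invented purely for illustration.

```python
import pandas as pd

# Hypothetical ratings of the same five wines by each publication
# (invented numbers, for illustration only).
scores = pd.DataFrame({
    "IWC":             [92, 88, 90, 85, 94],
    "Wine Advocate":   [93, 89, 91, 86, 95],
    "Wine Spectator":  [91, 87, 90, 84, 93],
    "Wine Enthusiast": [88, 92, 85, 90, 87],
})

# Pairwise Pearson correlations: values near 1 indicate strong
# consensus, values near 0 indicate little or none.
print(scores.corr(method="pearson").round(2))
```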

Could these slight differences in consensus be explained by tasting policy?

Each publication has its own policy for tasting wine samples.  For example, Wine Spectator and Wine Enthusiast require completely blind tastings, in which both the price and the producer are concealed from the taster.  At the seemingly most lax end, Wine Advocate and International Wine Cellar had no clear tasting policy (according to the researchers at the time of this study).

Do these tasting policies explain the slight differences in consensus between experts at the different publications?

  • According to the results of this study, there is no clear link between tasting policy and the consensus between the experts.

What about varietal?

Interestingly, this analysis found that the consensus between publications varied depending upon the particular varietal that was tasted.

  • There was less consensus between Wine Spectator and Wine Enthusiast, as well as between Wine Spectator and The Wine Advocate, for Cabernet Sauvignon than for any other varietal.

o   In other words, the wine critics at the different magazines were more divided on scores for Cabernet Sauvignon and more in agreement on the scores for all other varietals tasted.

  • The Wine Advocate and Wine Enthusiast shared a high consensus when reporting on red varietals; however, when it came to Chardonnay, there was almost no consensus between the two.
  • Wine Advocate tended to score Merlot wines 2.3 points lower than all other varietals.

The Ratings

What about the actual scores?  Does one magazine tend, in general, to score wines higher than all the others?  Or is the average score of one publication representative of the average scores of the rest?

  • Wine Advocate had the highest ratings compared with the other three publications.

o   The authors note this is probably due to their wine selection process.  The Wine Advocate focuses on a smaller subset of excellent-quality wines, while Wine Spectator, Wine Enthusiast, and International Wine Cellar review a larger range of wines with both high and low scores.  The inclusion of lower-scoring wines effectively drops the average score for the wines reviewed in the latter three publications.

  • Wine Spectator and Wine Advocate tended to rate Merlot wines lower than Cabernet Sauvignon and Chardonnay wines.

The Critics

Could the experts themselves have something to do with the slight differences in consensus seen in this study?  Does one wine critic consistently rate a certain wine one way, while a critic from a different publication rates the same wine in a completely different way?

Photo By winestem [CC-BY-2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

The tasters for each publication were as follows:

o   Wine Advocate: Robert Parker, Jr. and his associate Antonio Galloni tasted the California wines, while associate Pierre Rovani tasted the Washington wines.

o   Wine Spectator: James Laube and MaryAnn Worobiec tasted the California Cabernet Sauvignon and Chardonnay, while Tim Fish reviewed the California Merlot.  Harvey Steiman reviewed all Washington wine.

o   Wine Enthusiast: Steve Heimoff tasted the California wines, while Paul Gregutt tasted the Washington wines.

o   International Wine Cellar: Stephen Tanzer is the primary critic, though it was unclear whether he had any assistants.

  • Every extra point assigned by Stephen Tanzer of International Wine Cellar was matched, on average, by roughly one extra point from Robert Parker of The Wine Advocate, indicating a very tight relationship (a regression slope near one) between these two publications; a toy illustration follows.
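
As a hedged toy illustration (the scores below are invented, not from the study), here is what a “slope near one” looks like numerically when one critic’s scores are regressed on another’s.

```python
from scipy.stats import linregress

tanzer = [88, 90, 92, 85, 94, 89]  # hypothetical IWC scores
parker = [89, 91, 93, 86, 95, 90]  # hypothetical Wine Advocate scores

# Regress Parker's scores on Tanzer's: a slope near 1 means each
# extra Tanzer point corresponds to about one extra Parker point.
fit = linregress(tanzer, parker)
print(f"slope = {fit.slope:.2f}, r = {fit.rvalue:.2f}")
```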

Pairwise Correlations

Taking a closer look at the consensus between pairs of publications, the results showed that while some shared a high consensus, others were not so well correlated.

  • Wine Enthusiast and International Wine Cellar were not significantly correlated with one another.  In other words, there was no consensus between Wine Enthusiast ratings and International Wine Cellar ratings for the same wines.
  • There was a high correlation between International Wine Cellar and Wine Spectator.
Photo By Agne27 (Own work) [CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons

  • There was a high correlation between International Wine Cellar and Wine Advocate.
  • There was a high correlation between Wine Spectator and Wine Advocate for Merlot wines only, while the overall consensus between the two was slightly lower.
  • Wine Enthusiast had the lowest consensus with the other publications:

o   Statistically no consensus with International Wine Cellar.

o   Low consensus with Wine Spectator.

o   Moderate consensus with Wine Advocate.

  • This moderate consensus between Wine Enthusiast and Wine Advocate is the result of very high consensus on Merlot and Cabernet Sauvignon combined with no consensus on Chardonnay.

Regional Differences?

When examining the region of origin (i.e. California versus Washington), some differences were noted.

  • Wine Spectator’s average rating for Washington wines was 3.8 points higher than its average for California wines.

o   The same trend was seen in Wine Enthusiast, though the point difference was not as great.

Conclusions

While there was some consensus among the four publications, it was far from complete.  Specifically, Wine Enthusiast tended to “stick out” as sharing the least consensus with the other wine publications, while the other three were more tightly aligned in their ratings.

Why are the wine ratings in Wine Enthusiast so much less correlated with the ratings in Wine Advocate, Wine Spectator, and International Wine Cellar?  Could it be that Steve Heimoff and Paul Gregutt have markedly different taste preferences and/or evaluation methods than the experts at the other publications?  The data showed no relationship between tasting policy and consensus, so something deeper than individual publication methodologies must be going on.

In terms of this research, of course, a lot more needs to be done.  This is just one study, and as we all know, “one study does not a scientific conclusion make”.

One thing that might be causing some of the discrepancy in consensus among the four major wine publications is the numerical scale used by each publication.

Specifically, Lauren Buzzeo, Tasting Director and Senior Editor for Wine Enthusiast Media, stated that “the sample set doesn’t seem to factor the concept that numeric ratings actually mean different things to each publication. As an example, [Wine Enthusiast] 80-82 point wines are considered “Acceptable”, while that would translate to somewhere in the 70-79 point range for [International Wine Cellar] and [Wine Advocate], and 75-79 for [Wine Spectator].  So an 84-to-84 comparison of reviews across publications does not actually mean a consistent industry result. It would likely be more accurate of a comparison to keep the sample set for all reviewing publications to wines scoring 85 points and above, as there is greater consistency among the numbers and their corresponding quality terminology than there is at the lower end of the spectrum.”

Could part of the observed variation be related to this discrepancy in what an “acceptable” wine means across publications?  It’s certainly a possibility that should be addressed.
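
As a rough sketch of Buzzeo’s suggested remedy (with invented scores, since the study’s data are not reproduced here), one could restrict the comparison to wines rated 85 points and above by every publication before measuring consensus.

```python
import pandas as pd

# Invented scores for five wines, for illustration only.
scores = pd.DataFrame({
    "Wine Enthusiast": [81, 90, 86, 93, 88],
    "Wine Advocate":   [78, 91, 87, 94, 89],
})

# Keep only wines rated 85 or above by both publications, where the
# scales' quality terminology lines up more consistently.
comparable = scores[(scores >= 85).all(axis=1)]
print(comparable.corr(method="pearson").round(2))
```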

It’ll be interesting to see where this research goes.  There was less focus on potential regional bias in this study than I expected based on the title of the original paper.  The door is wide open for interpretation at this point, and follow-up studies should be performed to further our understanding of wine expert consensus.

Photo by Ion Theodorescu-Sion [Public domain], via Wikimedia Commons

So, who can you trust?  Well, in my opinion, if you like the wines that a particular wine critic rates highly, then stick with that critic! If you find yourself at odds with a particular wine critic, then maybe it’s time you sought another source of knowledge.

I’m genuinely curious to know what you think about the discrepancies found in this study.  Taking into account everything these researchers found, what else do you think could be causing this variation in ratings between Wine Enthusiast and the other three publications that seem to share stronger consensus? Please share your thoughts with the group and let’s start a discussion!

Source: Stuen, E.T., Miller, J.R., and Stone, R.W. 2014. An analysis of wine critic consensus: A study of Washington and California wines. American Association of Wine Economists Working Paper No. 160. ISSN 2166-9112.
