Some reviewers seem to be pretty consistent with their ratings, until they review one of the "TOP" rated wines. Then their numbers change significantly. Are they tasting blind? Why do they hand out a 100 rating to a wine they've also rated 85? Numbers go up & down, so it's not like the wine is improving with time. And if individual tastings can vary greatly, why wouldn't 2nd & 3rd growth Bordeaux fluctuate as wildly as first growths? (I never see a smaller player "accidentally" receive a 100, whereas a significant one can score in the 80's until someone realizes what they've done and "corrects" it.)
My impression (correct me if I'm wrong) is that at least the "JS" guy tastes blind, but if he realizes that he's tasting a wine that is *supposed to* get a high score, he will then "revise" that number. If so, why? Does he not want to look like a fool by giving an '82 Mouton a low score? Or is there some marketing pressure for a high score?
Just something I sometimes wonder about while lying awake late at night....
WS Scores for 1982 Mouton-Rothschild
Jun 1986 — 85
May 1991 — 93 (PM)
Aug 1992 — 95 (JS)
Jul 1997 — 100 (PM)
Nov 1998 — 98 (JS)
Jun 2001 — 98 (JS)