The 97% consensus fraud on global warming
Richard Tol:
While it might be argued that those coming to this conclusion were simply sloppy in their analysis, the fact that so many have embraced such sloppy work suggests something else is at work. If these conclusions were really scientific, their projections and models would not be so far out of whack. Projections go off base when some of the underlying assumptions are invalid — yet they cannot even identify which assumptions fall into that category.
...There is more.
Cook and co selected some 12,000 papers from the scientific literature to test whether these papers support the hypothesis that humans played a substantial role in the observed warming of the Earth. 12,000 is a strange number. The climate literature is much larger. The number of papers on the detection and attribution of climate change is much, much smaller.
Cook’s sample is not representative. Any conclusion they draw is not about “the literature” but rather about the papers they happened to find.
Most of the papers they studied are not about climate change and its causes, but many were taken as evidence nonetheless. Papers on carbon taxes naturally assume that carbon dioxide emissions cause global warming – but assumptions are not conclusions. Cook’s claim of an increasing consensus over time is entirely due to an increase of the number of irrelevant papers that Cook and co mistook for evidence.
The abstracts of the 12,000 papers were rated, twice, by 24 volunteers. Twelve rapidly dropped out, leaving an enormous task for the rest — and it shows. There are patterns in the data suggesting that raters may have fallen asleep with their noses on the keyboard. In July 2013, Mr Cook claimed to have data showing this was not the case. In May 2014, he claimed that data never existed.
The data is also ridden with error. By Cook’s own calculations, 7% of the ratings are wrong. Spot checks suggest a much larger number of errors, up to one-third.
Cook tried to validate the results by having authors rate their own papers. In almost two out of three cases, the author disagreed with Cook’s team about the message of the paper in question.
Attempts to obtain Cook’s data for independent verification have been in vain. Cook sometimes claims that the raters are interviewees who are entitled to privacy – but the raters were never asked any personal detail. At other times, Cook claims that the raters are not interviewees but interviewers.