
Crichton & Stats






An e-mail:

Hi Folks, I like the new blog even though it will further dampen my productivity. If you haven’t seen the following presentation by Dr. Crichton, it is very much worth your time: (here)
My training is in statistics, so I have studied global warming (and much else) from its perspective. My wife is a research scientist in medicine, and while I understand little of the mechanisms studied in the journals littering our house, I read the methodology and statistics sections to see the approach taken for analysis. More often than not, suboptimal, and sometimes flat-out wrong, statistical tools are applied. These are leading peer-reviewed journals.

There seem to be two major problems with scientists and statistics. The first is a general lack of knowledge of statistical techniques. This is entirely understandable, as it is a complex field. When I was in grad school at Ohio State, all master’s and Ph.D. candidates had access to free consulting by statisticians for this very reason. With millions of dollars spent on research, a couple of grand on a statistical consult to make sure the results mean something certainly seems appropriate. When my brother-in-law was finishing his dissertation (again at Ohio State), he used a statistical technique recommended and set up by a Ph.D. in statistics. He was then forced by his adviser to redo the statistics with an inappropriate technique the adviser had used in his own dissertation 30 years earlier.

The second problem is the substitution of statistics for science. Finding a statistical link between two things without understanding the underlying mechanism is not science. Clusters and anomalous correlations are a part of random data. A few years back, after the completion of the Human Genome Project, about once a week there was some new gene link: the “gay gene,” for instance. Virtually all of these associations have been quietly dropped. They arose simply from data mining with high-speed computers. The infinite monkeys typing all of Shakespeare’s plays have been replaced by high-speed computers. In short, there are always tons of meaningless correlations out there that you can find by applying enough computing power.
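The data-mining point is easy to demonstrate with a small simulation: screen enough purely random “genes” against a purely random trait, and roughly 5 percent of them will look “significant” at the conventional p < 0.05 cutoff. Here is a hypothetical Python sketch (everything in it is made up; the 0.279 cutoff is the approximate two-tailed p < 0.05 critical value of r for a sample of 50):

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

n_subjects = 50
trait = [random.gauss(0, 1) for _ in range(n_subjects)]  # random "phenotype"

# Screen 1,000 purely random "genes" against the trait.
hits = 0
for _ in range(1000):
    gene = [random.gauss(0, 1) for _ in range(n_subjects)]
    # |r| > 0.279 is roughly the two-tailed p < 0.05 cutoff for n = 50
    if abs(pearson_r(gene, trait)) > 0.279:
        hits += 1

print(hits)  # expect somewhere around 50 "significant" links by chance alone
```

None of these 50-odd “discoveries” means anything; they are exactly the anomalous correlations random data is guaranteed to contain.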
When applying statistics to extremely complex systems such as the environment, a third problem arises. We don’t know the interactions among variables well enough to make any kind of projection of the global environment 50 years hence; we can’t even come close to identifying all the variables. We make guesses about variable-to-variable relationships, at most to secondary or tertiary levels, with error rates that overwhelm the results in short order. This explains why global-warming models that point to a 0.5-degree warming in half a century, when backcast 50 years, are off by 4 to 6 degrees. Climatologists readily acknowledge their lack of understanding of key subsets of variables, such as solar activity and cloud formation, and yet tell us these horrendous overall models amount to settled science.

One last point. Using the last 100 years of data for greenhouse gases and global temperatures yields a statistically insignificant correlation coefficient. So we have a theory that doesn’t have empirical support. What it does have is political and grant-money support.
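Whatever one makes of the writer’s closing empirical claim, the significance test he is invoking is straightforward to sketch. The hypothetical Python example below uses synthetic data (not real climate or greenhouse-gas figures): it computes the Pearson r of two 100-point series and the t-statistic for the null hypothesis that the true correlation is zero.

```python
import math
import random

random.seed(1)  # fixed seed so the run is repeatable

# Two synthetic 100-point "annual" series with only a weak built-in
# relationship; stand-ins for the kind of series being discussed.
n = 100
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.1 * xi + random.gauss(0, 1) for xi in x]

# Sample Pearson correlation coefficient
mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
r = cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))

# t-statistic for H0: true correlation is zero (df = n - 2)
t = r * math.sqrt((n - 2) / (1 - r * r))
print(round(r, 3), round(t, 2))  # significant at 5% only if |t| > ~1.98
```

The point of the exercise: with 100 data points, a weak r can easily fail to clear the significance threshold, which is exactly the situation the writer alleges for the temperature record.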



