Time to Ditch 'Climate Models'?

Steven F. Hayward | 17 May, 2022 | 5 Min Read

Just about every projected environmental catastrophe going back to the population bomb of the late 1960s, the “Club of Rome” and “Global 2000” resource-exhaustion panics of the 1970s, the ozone depletion crisis of the 1980s, and beyond has depended on computer models, all of which turned out to be wrong, sometimes by an order of magnitude. No putative environmental crisis has depended more on computer models than "climate change." But in the age of high confidence in supercomputing and rapidly advancing “big data” analytics, computer climate models have arguably gone in reverse, generating a crisis in the climate-change community.

The defects of the computer climate models on which the whole climate crusade depends (more than 60 are in use at present) have become openly acknowledged over the past few years, and a fresh study in the mainstream scientific literature highlights the problem once again: too many of the climate models are “running hot,” which calls into question the accuracy of future temperature projections.

Nature magazine, one of the premier “mainstream” science journals, last week published “Climate simulations: recognize the ‘hot model’ problem,” by four scientists all firmly established within the “consensus” climate science community. It is a carefully worded article, aiming to avoid giving ammunition to climate-change skeptics, while honestly acknowledging that the computer models have major problems that can lead to predictions of doom that lack sufficient evidence.

“Users beware: a subset of the newest generation of models are ‘too hot’ and project climate warming in response to carbon dioxide emissions that might be larger than that supported by other evidence,” the authors write. While the authors affirm the general message that human-caused climate change is a serious problem, the clear subtext is that climate scientists need to do better lest the climate science community surrender its credibility.

One major anomaly of the climate modeling scene is that, as the authors write, “As models become more realistic, they are expected to converge.” But the opposite has happened—there is more divergence among the models. Almost a quarter of recent computer climate models show much higher potential future temperatures than past model suites, and don’t match up with known climate history: “Numerous studies have found that these high-sensitivity models do a poor job of reproducing historical temperatures over time and in simulating the climates of the distant past.”

What this means is that our uncertainty about the future climate is increasing. To paraphrase James Q. Wilson’s famous admonition to social scientists, never mind predicting the future; many climate models can’t even predict the past.

A quick primer: in general, computer climate models predict that a doubling of the level of greenhouse gases (GHGs), principally carbon dioxide (CO2), by the end of this century would increase global average temperature by somewhere in the range of 1.5 degrees C to 4.5 degrees C. At present rates of GHG emissions, we’re on course to double the GHG level in the atmosphere about 80-100 years from now.

Why is the range so wide, and why does it matter? First, the direct thermal effect of doubling GHGs is only about 1.1 degrees C. So how do so many models predict 4.5 degrees or more? Two words: feedback effects. That is, changes in atmospheric water vapor (clouds, which both trap and reflect heat), wind patterns, ocean temperatures, shrinkage of ice caps at the poles, and other dynamic changes in ecosystems on a large scale.
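To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch (in Python) of how feedbacks stretch a roughly 1.1-degree direct response into the much wider published range. The amplification formula used (warming equals the direct response divided by one minus the net feedback fraction) is a standard textbook idealization, and the specific feedback fractions below are illustrative assumptions, not values taken from the article or from any particular model:

```python
# Back-of-the-envelope illustration of feedback amplification.
# Assumptions (illustrative only): the no-feedback response to a CO2 doubling
# is about 1.1 degrees C, and net feedbacks amplify it by 1 / (1 - f),
# where f is the net feedback fraction.

DIRECT_RESPONSE_C = 1.1  # approximate warming from doubled CO2 with no feedbacks

def warming_with_feedback(f: float) -> float:
    """Amplified warming for a net feedback fraction f (0 <= f < 1)."""
    return DIRECT_RESPONSE_C / (1.0 - f)

# Hypothetical net feedback fractions, from weak to strong.
for f in (0.25, 0.50, 0.75):
    print(f"net feedback fraction {f:.2f} -> about {warming_with_feedback(f):.1f} degrees C per doubling")

# Prints roughly 1.5, 2.2, and 4.4 degrees C: modest changes in the assumed
# feedback strength swing the projection across the whole published range.
```

The only point of the sketch is that the bottom-line number is dominated by the assumed feedback strength, which, as noted below, is exactly where the models perform most poorly.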

Yet it is precisely these feedback effects where the computer models are the weakest and perform most poorly. The huge uncertainties in the models (especially for the most important factor—clouds) are always candidly acknowledged in the voluminous technical reports the U.N.’s Intergovernmental Panel on Climate Change (IPCC) issues every few years, but few people—and no one in the media—bother to read the technical sections carefully.

Why are climate models so bad? And can we expect them to improve any time soon? Steven Koonin, a former senior appointee in the Department of Energy in the Obama administration, explains the problem concisely in his recent book Unsettled: What Climate Science Tells Us, What It Doesn’t, and Why It Matters. The most fundamental problem with all climate models is their limited “resolution.” Climate models are surprisingly crude, as they divide up the atmosphere into 100 km x 100 km grid squares, which are then stacked like pancakes from the ground to the upper atmosphere. Most climate models have about one million atmospheric grid squares, and as many as 100 million smaller (10 km x 10 km) grid squares for the ocean. The models then attempt to simulate what happens within each grid square and sum the results. It can take up to two months for the fastest supercomputers to complete a model “run” based on the data assumptions input into the model.

The problem is that “many important [climate] phenomena occur on scales smaller than the 100 km (60 mile) grid size (such as mountains, clouds, and thunderstorms).” In other words, the accuracy of the models is highly limited. Why can’t we scale down the model resolution? Koonin, who taught computational physics at Caltech, explains: “A simulation that takes two months to run with 100 km grid squares would take more than a century if it instead used 10 km grid squares. The run time would remain at two months if we had a supercomputer one thousand times faster than today’s—a capability probably two or three decades in the future.”
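For readers who want to see where that “more than a century” figure comes from, here is a rough scaling sketch under a common simplifying assumption: refining the horizontal grid spacing by a factor of ten multiplies the number of grid squares by one hundred and forces roughly ten times as many time steps, so total cost grows with the cube of the refinement factor. The two-month baseline is the figure quoted above; everything else is illustrative:

```python
# Rough scaling of climate-model run time with horizontal resolution.
# Assumption (illustrative): cost scales as the cube of the refinement factor
# (factor**2 for the extra horizontal grid squares, times factor for the
# shorter time steps a finer grid requires).

BASE_GRID_KM = 100     # horizontal grid spacing cited in the article
BASE_RUN_MONTHS = 2    # "up to two months" per run on a fast supercomputer

def run_time_months(target_grid_km: float) -> float:
    """Estimated run time if the grid spacing shrinks to target_grid_km."""
    factor = BASE_GRID_KM / target_grid_km
    return BASE_RUN_MONTHS * factor ** 3

for grid_km in (100, 50, 25, 10):
    months = run_time_months(grid_km)
    print(f"{grid_km:>3} km grid: ~{months:,.0f} months (~{months / 12:,.1f} years)")

# The 10 km case comes out to about 2,000 months, i.e. well over a century,
# which is roughly the scaling behind Koonin's estimate.
```

On this assumption, even a tenfold jump in computing power buys only a bit more than a factor-of-two improvement in grid spacing, which is why Koonin puts the needed thousandfold speedup two or three decades away.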

But even if the models get better at capturing the dynamics of the atmosphere on a more granular scale, they still depend on forecasts of future GHG emissions, and there is a wide range of emissions scenarios the modelers use. The high-end temperature forecasts depend on extreme projections of future emissions that are no longer credible, such as one scenario used in previous U.N. reports that assumed a six-fold increase in the use of coal over the next 80 years, an outcome no one thinks is going to happen (or only with massive carbon-capture technology if it does).

Emissions forecasts made just 20 years ago have turned out to be much too high for today. Nearly all of the most alarming claims about the effects of future warming depend on these discredited forecasts, but the media has failed to keep up with the changing estimates. It’s a classic garbage-in, garbage-out problem.

The Nature article is candid about this problem:

The largest source of uncertainty in global temperatures 50 or 100 years from now is the volume of future greenhouse-gas emissions, which are largely under human control. However, even if we knew precisely what that volume would be, we would still not know exactly how warm the planet would get.

The authors of the Nature article are taking a risk in dissenting from the politicized party line on climate science, however cautiously worded, and deserve credit for their candor and self-criticism of climate modeling.

Steven F. Hayward is a resident scholar at the Institute of Governmental Studies at UC Berkeley, and lecturer at Berkeley Law. His most recent book is "M. Stanton Evans: Conservative Wit, Apostle of Freedom." He writes daily at Powerlineblog.com.

9 comments on “Time to Ditch 'Climate Models'?”

  1. Models are merely tools. You use them when you can't do real science. Furthermore, they can be, and often are, tweaked to render the results that generate the most funding.

  2. Great point. In addition, Koonin is sidestepping the issue. Computer technology can drive detail on a granular scale and it won’t take “a century”. Many other types of high-volume data analysis at this level are conducted on scales of days and weeks - a pitiful excuse by Koonin. Koonin also still casts the general mistaken assumption that CO2 is significant at these concentrations.
    What more “granular analyses” will do is vastly compound the same errors they’re not incorporating in the “less granular models”.
    The real issue boils down to assumptions about the radiant effects of CO2. They haven’t realized the Earth has previously seen CO2 at 10x today’s levels, and still went through similar periods with glacial periodicity. CO2 has minimal effect at what really are trace levels. Plus, in addition to the inability to model clouds, CO2 presents a similar issue in its highly mobile state. The models assume CO2 at the point of measure is static. It’s not. At these concentrations, and given its mobility through the atmosphere, biosphere, substrate, and more importantly, the oceanic and general water column, it drastically loses its capacity to deliver any radiant heat factor.
    As with the catastrophic manipulation of today’s Big Pharma data, what we really need to do now is to start analyzing the real root cause motivations of these analyses.
    That in a significant manner points to more monetary goals than true scientific goals, or better put, certainly nothing to do with protection of the public.

  3. All climate models by Mann, Gore, DiCaprio, Greenpeace, etc. should be rejected as non-proof.

  4. Computers predict whatever they are programmed to predict.
    For the past 40 years they have not been programmed to predict the tepid global warming that has actually happened.
    The Russian INM model, which least over-predicts the rate of warming, gets no individual attention, because accuracy of predictions is not the goal of these computer games.
    The latest group of 38 CMIP6 models has an ECS range from +1.83°C to +5.67°C per CO2 doubling. Observations since the 1950s are about +1.0 to +1.5 degrees C per CO2 doubling, ONLY if you assume CO2 was the only cause of global warming, which is very unlikely to be true.

  5. The very fact that the model set predicts temperature increases of 3 degrees +/- 50% says it simply lacks the PRECISION needed to form correct public policy, even if the ACCURACY were good, which it clearly is not. The average of errors is still an error.

    It also ignores the fact that these same models, from the EPA and IPCC, given specific reductions in manmade CO2, predict global temperature reductions of 0.02 to 0.37 degrees, 100 years from now, depending on how drastic the cuts are, who participates, and for how long. That's negligible!

    One other thing: if you look simply at the trend in the official temperature record, either 1860-2020 or 1960-2020, you will see that neither exceeds the 2.0-degree Paris "targets"! Where the heck is this "crisis"???

  6. He left out the "acid rain" hysteria from the '80s, but that's quibbling, given the laundry list of panics the environmental left has inflicted on the public.

  7. “While affirming the general message that human-caused climate change is a serious problem, the clear subtext is that climate scientists need to do better lest the climate science community surrenders its credibility.” The climate science community has no credibility, having been wrong as often as the predictions of the Congressional Budget Office.

  8. A grid measuring 100 km x 100 km has an area of 10,000, not 100, sq. km. And an area of 100 sq. km is about 38.6, not 60, sq. miles. I hope these errors happened during the editing of the article.
