Arguing With Climate Skeptics: Part n of n

Originally shared by Jon Lawhead

I made the mistake of getting involved in a Disqus comment thread on this article last week, and have actually been trying to keep up with it.  I’ve written so much about this topic already that a lot of participating in this just consists in copying and pasting things I’ve already said, but the discussion is still a good resource for other people to refer to.  Here’s the meat of it, with my words in plaintext and skeptical objections in italics.  Enjoy.


The scientific slang which they understand “forcing” is not an actual controlling force, it’s used to make the handwaving argument that if it forces, it must be a driver, and must necessarily have the predicted effect. In fact, forcing has effect with predictability only on the model and not on the earth. A physicist of course might say that it MUST have an impact, and the fact that it doesn’t “is a travesty.”

This is just as silly as the widespread objection that evolution is “just a theory,” and so doesn’t deserve any special status. In both cases, the problem is a mismatch between how terminology is used in a professional context and how it’s used colloquially. ‘Forcing’ is (most of the time) shorthand for ‘radiative forcing,’ which is a value representing a quantity of energy per unit area. In most climate science contexts, the attached unit is “watts per square meter” (W/m^2). The fact that the addition of CO2–along with other greenhouse gases (GHG)–increases the radiative forcing on the planet is entirely uncontroversial, and is a consequence of elementary physics. 

Molecules of different gases have different molecular structures, which (among other things) affects their size and chemical properties. As incoming radiation passes through the atmosphere, it strikes a (quite large) number of different molecules. In some cases, the molecule will absorb a few of the photons (quanta of energy for electromagnetic radiation) as the radiation passes through, which can push some of the electrons in the molecule into an “excited” state. This can be thought of as the electron moving into an orbit at a greater distance from the nucleus, though it is more accurate to simply say that the electron is more energetic. This new excited state is unstable, though, which means that the electron will (eventually) “calm down,” returning to its previous ground state. Because energy is conserved throughout this process, the molecule must re-emit the energy it absorbed during the excitation, which it does in the form of more E/M radiation, which may be at a different wavelength than the radiation originally absorbed. Effectively, the gas molecule has “stored” some of the radiation’s incoming energy for a time, only to re-radiate it later.

More technically, the relationship between E/M radiation wavelength and molecular absorption depends on quantum mechanical facts about the structure of the gas molecules populating the atmosphere. The “excited” and “ground” states correspond to transitions between discrete energy levels (for the infrared wavelengths most relevant to the greenhouse effect, these are largely vibrational and rotational energy levels of the molecule, rather than purely electronic ones), so the wavelengths that molecules are able to absorb and emit depend on which energy levels are available to transition between in a particular molecule. The relationship between the energy change of a given molecule and an electromagnetic wave with wavelength λ is:

ΔE = hc/λ

where h is the Planck constant and c is the speed of light, so larger energy transitions correspond to shorter wavelengths. When ΔE is positive, a photon is absorbed by the molecule; when ΔE is negative, a photon is emitted by the molecule. Possible transitions are limited by the open energy levels of a given molecule, so in general triatomic molecules (e.g. water, with its two hydrogen and single oxygen atoms) are capable of interesting interactions with a larger spectrum of wavelengths than are diatomic molecules (e.g. carbon monoxide, with its single carbon and single oxygen atoms), since the presence of three atomic nuclei generally means more available energy states (in particular, more vibrational modes).
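To make the scale concrete, here is a minimal sketch in Python using the formula above. The constants are standard physical constants; the example wavelengths (15 μm for CO2’s main infrared absorption band, 0.5 μm for roughly peak sunlight) are round, illustrative values, not measurements from the thread:

```python
# Photon energy from wavelength: E = h * c / wavelength.
h = 6.626e-34  # Planck constant, J*s
c = 2.998e8    # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy (in joules) of a single photon of the given wavelength."""
    return h * c / wavelength_m

e_ir = photon_energy(15e-6)    # ~15 um: CO2's main IR absorption band
e_vis = photon_energy(0.5e-6)  # ~0.5 um: near the peak of incoming sunlight
print(f"15 um IR photon:       {e_ir:.2e} J")
print(f"0.5 um visible photon: {e_vis:.2e} J")  # ~30x the energy of the IR photon
```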

Because the incoming solar radiation and the outgoing radiation leaving the Earth are of very different wavelengths, they interact with the gases in the atmosphere very differently. Most saliently, the atmosphere is nearly transparent with respect to the peak wavelengths of incoming radiation, and nearly opaque (with some exceptions) with respect to the peak wavelengths of outgoing radiation. Specifically, incoming solar radiation is not absorbed efficiently by any molecule, whereas outgoing radiation is efficiently absorbed by a number of molecules, particularly carbon dioxide, nitrous oxide, water vapor, and ozone. This is the source of the greenhouse effect.
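A quick way to see why the two streams of radiation occupy such different parts of the spectrum is Wien’s displacement law, λ_peak = b/T. A small sketch (my own illustration; the temperatures are round values for the Sun’s surface and Earth’s mean surface):

```python
# Wien's displacement law: the peak emission wavelength of a blackbody at
# temperature T is b / T. Temperatures below are round illustrative values.
b = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temp_k):
    """Peak blackbody emission wavelength, in micrometers."""
    return b / temp_k * 1e6

print(f"Sun   (~5800 K): peak near {peak_wavelength_um(5800):.2f} um")  # visible light
print(f"Earth (~288 K):  peak near {peak_wavelength_um(288):.1f} um")   # thermal infrared
```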

This image depicts the absorption spectrum for the constituents of the atmosphere: http://tinypic.com/r/vyqu5h/9

The E/M spectrum is represented on the x-axis, and the absorption efficiency (i.e. the probability that a molecule of the gas will absorb a photon when it encounters an E/M wave of the given wavelength) of various molecules in Earth’s atmosphere is represented on the y-axis. The peak emission range of incoming solar radiation is colored yellow, and the peak emission range of outgoing radiation is colored blue (though of course some emission occurs from both sources outside those ranges). Note the pattern described above: the yellow band passes through the atmosphere largely unabsorbed, while the blue band overlaps the absorption peaks of carbon dioxide, nitrous oxide, water vapor, and ozone. The absorbed and reradiated photons increase the amount of energy hitting a particular area of the ground–they increase radiative forcing. This is not controversial in any way.

What you’re disputing here is the climate sensitivity, not the radiative forcing value. Sensitivity is expressed in °C/(W/m^2), and gives the expected mean temperature increase resulting from a given increase in radiative forcing. The radiative forcing value most people work with is 3.7 W/m^2, which corresponds (uncontroversially) to a doubling of CO2-equivalent GHG in the atmosphere. It’s true that calculating the temperature response to this change in forcing is non-trivial because of the interplay of various feedbacks and other non-linear processes, but the difficulty lies in calculating the sensitivity, not the forcing itself (and we’ve gotten pretty confident in our estimates of the lower bound on the sensitivity). This might seem pedantic, but understanding the technical language is really important if you’re going to meaningfully engage with climate science; just like any mature science, it has a robust set of terminology that doesn’t necessarily map well onto colloquial definitions of words, and failing to understand that vocabulary hampers your ability to understand the scientific literature. You end up saying things like:

We know what the effects of GHGs are, and we know that incremental GHG changes have no PREDICTABLE effect.

This is simply false. Incremental changes in CO2-e GHG concentrations have a very straightforward effect on radiative forcing. Their effect on average temperature is less straightforward, but hardly “unpredictable.”
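For the forcing side of that claim, there is a standard simplified expression, ΔF = 5.35·ln(C/C0) W/m^2, due to Myhre et al. (1998). A minimal sketch (the coefficient and logarithmic form are well established; the concentration values are illustrative, not from the thread):

```python
import math

def co2_forcing(c_ppm, c_ref_ppm=280.0):
    """Simplified CO2 radiative forcing (W/m^2) relative to a reference
    concentration: dF = 5.35 * ln(C / C0), per Myhre et al. (1998)."""
    return 5.35 * math.log(c_ppm / c_ref_ppm)

print(f"280 -> 560 ppm (a doubling): {co2_forcing(560.0):.2f} W/m^2")  # ~3.71
print(f"280 -> 400 ppm:              {co2_forcing(400.0):.2f} W/m^2")  # ~1.91
```

The logarithm here is also the source of the “logarithmic decrease in efficacy” the objection below gestures at: each doubling adds roughly the same 3.7 W/m^2, rather than the same amount per ppm.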

Physics says GHGs modulate temperature changes, and it acknowledges that 98% of the GHGs, and 95% of the greenhouse effect, is water vapor. Of the remaining 5%, CO2 is most of it at 3.6%, of which 3.2% is produced by mankind, the plurality of that by cement production and the plurality of that by China. Thus, we produce 0.12% of the CO2 that may have an influence on global climate.

These figures are well outside the generally accepted range of contributions, and I’d love to see a citation. The claim that “98% of the GHGs, and 95% of the greenhouse effect, is water vapor” is simply not well-supported by the literature, no matter how the claim is interpreted. Estimates for the contribution of water vapor to the overall greenhouse effect range between 36% and 72%, depending on the measure used. Water vapor is somewhat different from most other GHGs in that it has an incredibly short residence time in the atmosphere (on the order of days instead of years) because of the water cycle: unlike CO2 and most other GHG, water vapor can precipitate out, and often does. In addition, the water vapor carrying capacity of a quantity of atmosphere depends strongly on the atmosphere’s temperature, which is not the case for most other GHG. This makes it somewhat more intuitive to consider water vapor as a part of a feedback loop, rather than as part of the greenhouse effect. Even considered as part of the greenhouse effect, though, the 98% figure is wildly inaccurate. The upper bound of its contribution given current conditions is about 72%. CO2, on the other hand, contributes somewhere between 9% and 26%, and has a residence time measured in decades (on the order of 60 years). Other greenhouse gases currently contribute less, but many are much stronger than CO2, and so small increases can make a large difference. Methane, for instance, is not only a much more potent GHG than CO2 (in the sense of producing more radiative forcing per molecule), but also promotes the formation of ozone, another GHG. Small increases in methane can lead to significant increases in radiative forcing. This is all drawn from http://journals.ametsoc.org/doi/pdf/10.1175/1520-0477%281997%29078%3C0197%3AEAGMEB%3E2.0.CO%3B2.

Reducing that by any measurable amount is a formidable task, and because of the logarithmic decrease in the efficacy of CO2 warming (doubling it may cause a 0.5°C increase) it is as unlikely to have any effect as it is impractical to cripple industry to attempt it.

Again, you’re talking about climate sensitivity here. 0.5°C/(3.7 W/m^2) is far below the generally accepted lower bound for the value. Bayesian estimates of the climate sensitivity to a doubling of CO2-e (that is, an increase in radiative forcing of 3.7 W/m^2) put the value between 1.5°C and 4.5°C, with 90% confidence. It is extremely unlikely that the sensitivity is less than 1°C (less than a 10% probability), and very unlikely that it is greater than 6°C (less than a 25% probability). It’s possible to point to model runs in which extreme outlier values like 0.5°C/(3.7 W/m^2) or 10°C/(3.7 W/m^2) have been found, but we tend to pay more attention to the statistical ensemble of model predictions than we do to the outliers in either direction. Even still, there is significantly more of a chance that the sensitivity value is much higher than the ensemble estimate than that it is much lower. 0.5°C/(3.7 W/m^2) is far, far below any believable estimate, and represents the most extreme low outlier to be found in the literature. It is not representative.
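To see just how far outside the range that 0.5°C figure sits, here’s a quick arithmetic check (my addition), converting per-doubling sensitivities into °C per W/m^2 using the 3.7 W/m^2 doubling forcing from above:

```python
F_2X = 3.7  # W/m^2 of forcing per doubling of CO2-equivalent GHG (from above)

# Per-doubling sensitivities: the claimed 0.5 C, then the 1.5-4.5 C range.
for s in (0.5, 1.5, 3.0, 4.5):  # degrees C per doubling
    lam = s / F_2X              # degrees C per (W/m^2)
    print(f"{s:.1f} C per doubling  ->  {lam:.2f} C per W/m^2")
# 0.5 C per doubling works out to ~0.14 C/(W/m^2), roughly a third of the
# value implied by even the low end of the accepted range.
```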

In fact, CO2’s effect (at present) is to NOT keep water in the vapor phase, since at the tropopause it cools the stratosphere and condenses any vapor into ice crystals. And more cools it more. Dyson thinks the major risk of CO2 increase is to have ice crystals diminish the ozone layer. But, bless his heart, he does think the benefits outweigh the risks.

This is a half-truth. Like much of what you’ve written here, it suggests to me that you’re repeating arguments you’ve seen without really understanding what you’re saying. “The tropopause” just refers to the boundary between the troposphere and the stratosphere, and is defined as the point at which the atmosphere stops getting colder as you go higher and starts (for a time) getting warmer (that is, the point where there is a sign change in the lapse rate of the atmosphere). The tropopause is cold. Very cold. The average temperature at the tropopause is -51°C, in fact. Since there’s a sign change in the lapse rate at the tropopause, the temperature gradually increases above it, reaching an average maximum of about -15°C at the top of the stratosphere. At the boundary of the stratosphere and the mesosphere, the lapse rate undergoes another sign change and things start to cool off again; the mesopause–the top of the mesosphere–is generally the coldest location that’s still considered part of Earth, with a mean temperature of around −145°C. It should be clear that in all these cases, the atmosphere near and above the tropopause is well below the freezing temperature of water, and needs no additional negative radiative forcing to condense ice crystals. The presence of aerosols and other particulate matter can indeed provide crystallization nuclei and promote the formation of solid ice, but this effect is mostly independent of CO2 concentration.
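For reference, here’s a toy vertical temperature profile (my own sketch, loosely following the International Standard Atmosphere, with rounded layer boundaries and lapse rates; real profiles vary with latitude and season) showing the sign changes just described:

```python
def rough_temperature_c(alt_km):
    """Very rough mean air temperature (deg C) versus altitude (km)."""
    if alt_km <= 11:        # troposphere: cools ~6.5 C per km of altitude
        return 15.0 - 6.5 * alt_km
    elif alt_km <= 20:      # around the tropopause: roughly isothermal
        return -56.5
    elif alt_km <= 50:      # stratosphere: warms with height (ozone heating)
        return -56.5 + 1.4 * (alt_km - 20)
    else:                   # mesosphere: cools with height again
        return -14.5 - 3.0 * (alt_km - 50)

for z in (0, 11, 20, 50, 85):
    print(f"{z:>2} km: {rough_temperature_c(z):7.1f} C")
```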

Freeman Dyson is/was a great physicist, but he’s not a climate scientist. He has no formal background in climate science, and evinces very little understanding of the methods or concepts that are part of the science. Expertise in one area of science does not translate into expertise in another. I find it incredibly strange that climate change skeptics are fond of citing Dyson’s work as if it represents evidence against anthropogenic climate change, and yet are comfortable dismissing the accumulated scientific work of literally hundreds of people who are just as accomplished as Dyson, but within the relevant field. It seems a lot like cherry picking.

“…models matching the past is a necessary but not sufficient condition for them to match the future.” which is what I’ve said before. My point is that they haven’t been able to match the past except for the last 150 years

I don’t know where you’re getting this information, but (again) this is just wildly false. Contemporary global circulation ensemble models retrodict the paleoclimate incredibly well. The results from the Paleoclimate Modelling Intercomparison Project and future climate models are depicted here: http://tinypic.com/r/jqgs5x/9, with the model predictions shaded and the empirical temperature record represented as the black line. Each data set represents a different epoch, from the future under RCP8.5 at the top (2081-2100), back to the early Eocene (48-54 million years ago) at the bottom. In all cases, the ensemble averages of model projections match empirical values almost perfectly.

The warming post 1850 preceded any rise in CO2.

This is irrelevant. There are many factors that can contribute to an increase in global average temperature. The fact that temperatures have sometimes risen in the absence of anthropogenic CO2 emissions shows absolutely nothing relevant here. Arguing that, because temperature has increased without anthropogenic GHG contributions in the past, those emissions can’t be causing the observed trends now is like arguing that, because some houses burn down without arson, this house couldn’t have been subject to arson, despite evidence to the contrary.

Human influences on climate and environment are significant. CO2 is only one, and certainly not the most important, so far. Natural factors have been operating for millions of years (billions?) and our pitiful influence is overwhelmed – no evidence to the contrary. It’s a big universe out there.

Again, this is completely wrong. We have a broad spectrum of observations and models, virtually all of which agree with one another to an almost perfect degree. All the evidence points to the fact that current trends can’t be explained without reference to anthropogenic contributions, and that those contributions are currently among the most significant factors influencing the evolution of the global climate. The global climate is a monstrously complex system, and attributions of single causes are notoriously tricky in cases where there are a lot of interlinking feedbacks operating on multiple different scales. However, we’ve come up with a lot of clever ways to check our results. The degree of agreement between observations and different models and modeling methods strongly suggests we have this right.

Jim Hansen pioneered this back in the mid-90s, but our models have improved a lot since then, so let’s stick to more recent stuff. One of the more prominent recent pieces looking at this is “Combinations of natural and anthropogenic forcings in twentieth-century climate” by Gerald Meehl et al. in the Journal of Climate in 2004 (http://journals.ametsoc.org/doi/full/10.1175/1520-0442%282004%29017%3C3721%3ACONAAF%3E2.0.CO%3B2). They used an ensemble of coupled general circulation models (CGCMs) to try to hindcast the observed temperature trends of the 20th century using four different scenarios. The first three included only the various natural forcings (volcanic aerosols, varying insolation, &c.), while the fourth included both the natural variations and anthropogenic GHG emissions. They were unable to reproduce the observed trends without accounting for anthropogenic emissions, and including those emissions resulted in ensemble predictions that matched the observed data almost perfectly. Here’s the graph: https://lh3.googleusercontent.com/-3kEVIjnPryw/VNQ2Kp7eBpI/AAAAAAAAY44/PKAtsfu_wxE/w852-h1762/Screenshot%2B2015-02-05%2Bat%2B6.51.37%2BPM.png

We’re able to discern anthropogenic GHG emissions from natural emissions of the same compounds by tracking the relative prevalence of different isotopes of carbon in CO2 molecules in the atmosphere. Because fossil fuels are composed of decomposed organic matter, they have a distinctive ratio of carbon-13 to carbon-12 that’s not found in other sources. By tracking changes in the observed ratio in the atmosphere, we can get a fairly good sense of what proportion of CO2 is coming from us and what proportion is coming from other sources, giving the picture that Meehl found in his study.
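The bookkeeping for that isotope fingerprint uses the standard δ13C (“delta-C-13”) notation. A hedged sketch (the VPDB reference ratio is standard; the sample ratio below is made up purely for illustration):

```python
R_VPDB = 0.0112372  # 13C/12C ratio of the VPDB reference standard

def delta_13c_permil(r_sample):
    """delta-13C in per mil: how depleted or enriched a sample's 13C/12C
    ratio is relative to the VPDB standard."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

# Fossil-fuel carbon runs around -28 per mil, so burning it pulls the
# atmospheric ratio downward over time, which is the trend observed.
print(f"{delta_13c_permil(0.011090):+.1f} per mil")  # an illustrative sample
```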

Climate models also predict that if a large portion of the shifts in temperature ranges at the regional level is due to anthropogenic GHG emissions, we should see a reduction in the difference between maximum and minimum daily temperatures, a value called the “diurnal temperature range” (DTR). We wouldn’t expect to see a significant shift in the DTR if warming trends were the result of natural forcings alone, as natural forcings would alter temperatures more-or-less uniformly or (depending on the forcing) would have a much larger positive effect on the maximum temperature. Multiple data sets have shown that the observed DTR has narrowed significantly over the last 50 years, and that there’s been a relatively larger increase in minimum temperatures than in maximum temperatures, a fact which again can’t be explained without reference to the impact of anthropogenic GHG. This is a good roundup of those studies: http://onlinelibrary.wiley.com/doi/10.1029/2004GL019998/epdf

One of the most important things about the DTR as a fingerprint is that variations in DTR are more-or-less independent of variations in global mean temperature, as increases in the global mean could be accounted for by a lot of different combinations of changes in the DTR in different regions (a large increase in daily maximums in an isolated region will artificially inflate the global mean, for instance). However, we see a fairly constant increase in daily minimum temperatures in geographically disparate regions, and a consequent shrinking of the average DTR. This can’t be accounted for without including anthropogenic GHGs, and is an important independent confirmation of humanity’s impact.
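For clarity, the DTR itself is just the daily maximum minus the daily minimum, averaged over some period. A trivial sketch (all numbers made up for illustration):

```python
# Diurnal temperature range (DTR) from daily max/min series.
daily_max = [14.2, 15.1, 13.8, 16.0, 15.5]  # deg C, illustrative values
daily_min = [ 4.1,  5.3,  4.8,  6.2,  6.0]  # deg C, illustrative values

dtr = [hi - lo for hi, lo in zip(daily_max, daily_min)]
print(f"mean DTR: {sum(dtr) / len(dtr):.2f} C")
# The fingerprint: if minima rise faster than maxima over decades, the mean
# DTR shrinks even while the global mean temperature climbs.
```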

Yet another independent measure is the variation in the observed wavelengths of incoming radiation. Because the physical basis of the greenhouse effect is (as I’m sure you know) quantum mechanical, different molecules absorb and reradiate energy at distinctive frequencies, owing to differences in their structure. By using a spectrometer, we can figure out how much of the incoming radiative forcing is due to insolation itself and how much is due to the reradiative effect of different GHGs. If anthropogenic emissions were a major factor in planetary warming, we’d expect to see a significant contribution to overall radiative forcing coming from the molecular constituents of anthropogenic GHGs. This is exactly the observed result. In “Measurements of the radiative surface forcing of climate” (2006), Evans and Puckrin observed that anthropogenic GHGs were responsible for a radiative flux increase of 3.52 W/m^2, which is hugely significant (and in line with model predictions). I can’t find a copy of the paper that isn’t behind a paywall, so I put it up on my Google drive here: https://drive.google.com/open?id=0B4wR7F7Si7hsaUE3elBlRF94OHM

Those are three independent measures for estimating the human emission impact on observed temperature trends, and all three agree in their attribution. There are other independent measures as well that I can talk about if you want (atmospheric temperature gradient, wavelength flux in upward IR radiation) that also agree, but I think I’ve said enough here already. In each of these cases, the models not only agree with one another, but agree with observation. The notion that climate models are unreliable is completely mistaken.

So…yes, the world has gotten warmer and yes, the ocean has too. With all the formidable thermal inertia in the oceans, it would be amazing if ocean warming (they estimated at 0.4C per century) had paused at the same time as atmospheric temps. Note also the ZERO INCREASE AT DEPTH, and remember that warming at the surface produces increased evaporation, which removes heat. So it’s thermodynamically impossible for heat to hide in the deep ocean, just as it is impossible for a photon of energy to be trapped at the surface (Back-radiated IR is absorbed and either radiated out again, or causes evaporation and loss of heat… so the photon’s trip out to space is delayed and bounced around, but not trapped).

All of the discussion about the “warming pause” is complicated. I actually have a post at PlanetExperts reviewing the latest data and discussing how to interpret it, here: http://www.planetexperts.com/warming-hiatus-and-climate-sensitivity-new-data-explained/

Hopefully that will address your concerns.

In closing, let me say a little bit about the temperature and the climate more generally. The global average temperature is only one component of the climate system, and while it’s expected to increase rather steadily, this steady increase is likely to significantly alter the behavior of other parts of the climate. Many of the most important processes driving the climate are cyclical in nature, and it’s the stability of these cycles that gives rise to the relatively placid and (at least short-term) predictable behavior of the climate we’re familiar with. A large magnitude global temperature increase, however, has the potential to degrade the stability of many of these cycles, resulting in more and more eccentric behavior.

You can imagine the climate as being something like a collection of spinning tops, all of different sizes and spinning at different speeds, and all of which are linked together. When everything is operating normally, some of the tops occasionally wobble a little bit, but their spins are usually stabilized by the motion of all the other tops on the table; everything keeps spinning at more-or-less the same rate (at least in the short term), and long-term changes in spin rates happens gradually as a result of all the mutually-supporting spins. However, if a significant amount of wobble is introduced into a few of those tops very quickly and from the outside, the usual stabilization mechanisms aren’t strong enough to compensate–at least not immediately. Introduce enough wobble in just a few of the tops, and the instability can cascade throughout the whole system, causing lots of other tops to pick up some wobble too, and possibly even causing some to fall over entirely.

What we’ve got right now is a little bit of wobble, caused primarily by the infusion of a lot of excess energy due to the greenhouse effect. This wobble is getting bad enough that it’s starting to get picked up by some other systems too, and the “freak” weather we’re seeing–look at the unprecedented heat wave that happened in the Arctic a few weeks ago–as well as the quick and wild shifts between different extremes–look at Texas in the last few months–is the result of that wobble starting to cascade, destabilizing what are otherwise relatively stable cycles. The longer this goes on, the more wobble we’ll get, the stronger it will be, and the more systems will be affected. The consequence of this is that we expect some extremely strange behavior in various climate systems as the magnitude of warming continues and its impacts continue to cascade throughout the climate system.

Take just a single example of what this kind of “wobble cascade” looks like. As the global temperature goes up, some of the sea ice in the Arctic and Antarctic melts. In addition to the positive feedback effect because of albedo I mentioned in my other post, this has other effects. As all that ice melts, it dumps a lot of very cold freshwater into the surrounding oceans, changing their temperature and density. This, in turn, changes the behavior of some ocean currents, pushing cold water to places where it wasn’t before and pushing warm water to places where it wasn’t before. The change in distribution of cold and fresh water has an impact on where (and how much) water evaporates from the oceans, which changes where (and how many) storm systems form. This changes patterns in air circulation, pushing warm air, cold air, wet air, and dry air to places that don’t normally get so much of them. Air circulation patterns are also big drivers of ocean currents, so this changes ocean currents even more, causing the whole chain of changes to operate even more strongly, and driving the whole system further and further away from the relatively stable state it was in before this whole thing started. All of this can and will happen as the result of just a few degrees of warming in the Arctic; to a certain extent it’s already happening.

The main takeaway here is that the global climate isn’t just a non-linear system: it’s a non-linear complex system. This means that each of the sub-processes that make up the climate doesn’t just respond to changes in odd, non-linear ways–each is also sensitive, in that same way, to changes in almost all the other sub-processes. Most complex systems display non-linear behavior, but not all non-linear systems are complex in this way. Those that are–like the global climate–can be extremely difficult to predict under the right (or, rather, wrong) circumstances, because non-linearity combined with a very high density of interdependence between processes can easily give rise to abrupt, severe, and often surprising changes in the behavior of the overall system as a result of just a little bit of tinkering with one or two components.
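As a toy illustration of that point (my addition, and emphatically not a climate model): two weakly coupled logistic maps, a textbook example of coupled non-linear dynamics, in which a one-in-a-million perturbation to one component quickly reshapes the trajectory of the whole system:

```python
def step(x, y, r=3.9, eps=0.1):
    """One step of two logistic maps, each weakly coupled to the other."""
    return ((1 - eps) * r * x * (1 - x) + eps * y,
            (1 - eps) * r * y * (1 - y) + eps * x)

a = (0.400000, 0.3)  # baseline system
b = (0.400001, 0.3)  # identical, except one component nudged by 1e-6

for _ in range(30):
    a = step(*a)
    b = step(*b)

# After a few dozen steps, the 1e-6 difference has typically grown by many
# orders of magnitude: a tiny bit of tinkering with one component alters
# the trajectory of both.
print(f"baseline:  x = {a[0]:.4f}")
print(f"perturbed: x = {b[0]:.4f}")
```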

This is emphatically the best reason to try to minimize global temperature increase. The complexity of the climate is such that we just don’t know exactly what to expect as a result of significant changes in temperature. That should be deeply worrying to all of us.

http://www.theatlantic.com/technology/archive/2016/02/the-supreme-courts-devastating-decision-on-climate/462108/#comment-2518191580
