Saturday, February 28, 2009

Carbon Dioxide - A Satellite Crash raises a Question

Last week the rocket carrying a satellite into orbit failed to shed its fairing (the clamshell nose cone that protects the payload and lets the rocket move up through the atmosphere more easily) and, with that extra weight, could not make orbit. The rocket was carrying the Orbiting Carbon Observatory, a mission with five major tasks, including:
> - Scientists don't know why the amount of carbon dioxide absorbed by Earth's natural ocean and land "sinks" varies dramatically from year to year. These sinks help limit global warming. The Orbiting Carbon Observatory will help scientists better understand what causes this variability and whether natural absorption will continue, stop or even reverse.
>
> - Data collected by OCO will help solve the mystery of "missing" carbon--the 30 percent of human-produced carbon dioxide that disappears into unknown places.

Now it seems to me that if you don't know where the carbon is going and why, nor why the uptake varies so dramatically from year to year, then you can't predict how much is going to remain in the atmosphere, nor state with much confidence that CO2 resides in the atmosphere for a hundred years, since, short of putting a tracer on a molecule and following it to see what happens, there are too many unknowns.
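
To put rough numbers on the scale of the problem, here is a back-of-the-envelope carbon budget in Python. The figures assumed (roughly 8 GtC per year of fossil-fuel emissions, an atmospheric rise of about 2 ppm per year, and about 2.13 GtC per ppm of CO2) are approximate values used for illustration only, not numbers from the OCO mission.

```python
# Back-of-the-envelope annual carbon budget (illustrative, assumed numbers).
FOSSIL_EMISSIONS_GTC = 8.0   # GtC/yr, rough fossil-fuel emissions in the late 2000s
ATMOS_RISE_PPM = 2.0         # ppm/yr, rough observed rise in atmospheric CO2
GTC_PER_PPM = 2.13           # GtC of carbon per ppm of atmospheric CO2

stays_in_air = ATMOS_RISE_PPM * GTC_PER_PPM                 # what shows up in the atmosphere
absorbed_somewhere = FOSSIL_EMISSIONS_GTC - stays_in_air    # what the sinks must be taking up

print(f"Remains in the atmosphere: {stays_in_air:.1f} GtC/yr "
      f"({stays_in_air / FOSSIL_EMISSIONS_GTC:.0%} of emissions)")
print(f"Absorbed by ocean and land sinks: {absorbed_somewhere:.1f} GtC/yr "
      f"({absorbed_somewhere / FOSSIL_EMISSIONS_GTC:.0%} of emissions)")
# The part of this absorbed flux that cannot be firmly located is what the
# NASA quote above calls the "missing" carbon.
```

Roughly half of what is emitted does not stay in the air, and it is the attribution of that absorbed portion between ocean and land that remains uncertain.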

The idea that carbon dioxide stays in the atmosphere for up to a century apparently derives from a study by the National Academy of Sciences, chaired by Roger Revelle, which reported in 1977. Former Vice President Gore took Professor Revelle's class while at Harvard. The estimate of the residence time came initially from the ability of seawater to absorb carbon dioxide from the air (Craig in 1975 and Plass in 1956). Revelle headed the Scripps Institution of Oceanography in California and studied the process, concluding in 1957 that CO2 resided in the atmosphere for about a decade, with the control being the ability of the sea to take it up. That ability proved more limited than had been anticipated, since the underwater layers did not intermingle as much as earlier studies had assumed.

However, that short atmospheric life of the CO2 molecule was predicated on the assumption that once it was absorbed into the sea it stayed there. Revelle added the consideration that most of the CO2 taken up by the surface water would promptly evaporate back out, so that the effective life of the excess in the atmosphere might be ten times longer (i.e. a hundred years).
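
To see how a short per-molecule residence time and a much longer effective lifetime can coexist, here is a toy three-reservoir sketch in Python (atmosphere, surface ocean, deep ocean). It is not Revelle's calculation; the rate constants are invented round numbers chosen only to reproduce the qualitative behaviour described above.

```python
import math

# Toy three-reservoir sketch: atmosphere <-> surface ocean, plus a slow leak
# from the surface ocean into the deep ocean.  All rate constants are invented
# round numbers, not measured values.
K_ATM_TO_SURF = 1.0 / 8.0    # per year: an average molecule leaves the air after ~8 years
K_SURF_TO_ATM = 1.0 / 9.0    # per year: most of what dissolves soon evaporates back out
K_SURF_TO_DEEP = 1.0 / 50.0  # per year: slow transfer to the deep ocean (treated as permanent)

def simulate(pulse=100.0, years=250, dt=0.1):
    """Return (year, atmospheric excess) samples for a pulse of CO2 added to the air."""
    atm, surf = pulse, 0.0
    samples = []
    for step in range(int(years / dt)):
        down = K_ATM_TO_SURF * atm * dt      # air -> surface ocean
        up = K_SURF_TO_ATM * surf * dt       # surface ocean -> air
        deep = K_SURF_TO_DEEP * surf * dt    # surface ocean -> deep ocean
        atm += up - down
        surf += down - up - deep
        samples.append((step * dt, atm))
    return samples

def excess_at(samples, year):
    """Atmospheric excess at (approximately) the requested year."""
    return min(samples, key=lambda s: abs(s[0] - year))[1]

samples = simulate()
a100, a200 = excess_at(samples, 100), excess_at(samples, 200)
slow_tau = 100.0 / math.log(a100 / a200)   # e-folding time of the slow decay
print(f"Average time a single molecule spends in the air: ~{1 / K_ATM_TO_SURF:.0f} years")
print(f"Excess still in the air after 100 years: {a100:.0f}% of the pulse")
print(f"Slow decay timescale of the excess: ~{slow_tau:.0f} years")
```

With these made-up numbers the individual molecule leaves the air within a decade, yet the added pulse drains away on a timescale of roughly a century, because most of what dissolves comes straight back out and only the slow leak to depth removes it for good.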

The studies are complicated in the sense that they are indirect. For example, when a group of British scientists stated in 2007 that the oceans were absorbing less CO2 than expected, this was based on measurements of the seawater itself. Similarly, a study in Hawaii that reported that increasing salinity had reduced the ability of the ocean to absorb CO2 looked at the concentration in the water. Incidentally, that study also noted:
> From 1989 to 2001, the ocean-surface salinity at station ALOHA went up about 1 percent. That's because in recent years there's been less rainfall and more evaporation in the area, both of which concentrate salt in the surface water. When the concentration of dissolved substances goes up, it becomes more difficult for a liquid to absorb a gas, says Karl.
>
> Increased salinity accounts for about 40 percent of the decrease in carbon dioxide absorption over the 13-year period, says Karl. He and his colleagues haven't identified the cause of the rest of the absorption slowdown, but some candidates are changes in biological productivity and fluctuations in ocean-surface mixing.
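
The "harder for a liquid to absorb a gas" point can be sketched with a simple salting-out relation. The exponential form and the coefficient below are assumptions made for illustration; they are not taken from the ALOHA study, and solubility is only one part of what controls the uptake Karl measured.

```python
import math

# 'Salting-out' sketch: higher salinity lowers the equilibrium solubility of a gas.
# The exponential form and coefficient are assumptions for illustration only.
K_SALTING_OUT = 0.0045   # assumed salting-out coefficient, per practical salinity unit
BASE_SALINITY = 35.0     # typical open-ocean surface salinity

def relative_solubility(salinity, reference=BASE_SALINITY):
    """CO2 solubility relative to its value at the reference salinity."""
    return math.exp(-K_SALTING_OUT * (salinity - reference))

saltier = BASE_SALINITY * 1.01                  # the ~1 percent rise reported at ALOHA
drop = 1.0 - relative_solubility(saltier)
print(f"A 1% salinity rise cuts equilibrium CO2 solubility by ~{drop:.2%} "
      f"(with the assumed coefficient)")
# Uptake also depends on mixing, biology and wind, which is why solubility
# alone does not account for the 40 percent figure in the quote.
```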

The problem of determining exactly how much carbon dioxide will be absorbed is not easy:

> What scientists generally use are not "rates" but "concentrations" at equilibrium. This turns a horrifically uncontrolled irreproducible problem into just an extremely complex problem. But that is a step in the right direction. In practice measurements are usually equilibrium quantities.

And:

> To reemphasize: "Rates of solution" are virtually useless as a scientific tool in the study of the distribution of carbon dioxide in the biosphere because it only applies to the specific place, time and conditions that the concentration is measured. It would only be useful for some very localized study or problem. Because of the number of rogue variables "rates" will likely vary "all over the map" unless there is some "driving force".
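
As an illustration of what an equilibrium "concentration" looks like, here is a minimal Henry's-law sketch in Python. The Henry's constant and the CO2 partial pressure are approximate values I am assuming, and the calculation ignores the carbonate chemistry that dominates total dissolved carbon in real seawater.

```python
# Henry's-law sketch of an equilibrium concentration (assumed, approximate values).
KH_CO2 = 0.034      # mol/(L*atm), rough Henry's constant for CO2 in fresh water at 25 C
PCO2 = 385e-6       # atm, roughly the atmospheric CO2 partial pressure in 2009

dissolved_co2 = KH_CO2 * PCO2   # equilibrium concentration of dissolved CO2 gas
print(f"Dissolved CO2 at equilibrium: {dissolved_co2 * 1e6:.1f} micromol per litre")
# A rate of solution, by contrast, depends on wind, waves, temperature and the
# local departure from this equilibrium - the "rogue variables" in the quote.
```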


Now I am not looking at this with a detailed scientific eye; for now I will leave that to others. My current point is just that if we don't know where significant amounts of carbon dioxide are going, and significantly more is disappearing from the atmosphere than the current models suggest, then the models are not correct. And if the models are not correct, and more is disappearing than they allow for, then the underlying assumptions about future concentrations of carbon dioxide in the atmosphere are also wrong. And if the concentrations are wrong, then the forcings they are purported to induce will similarly be wrong.
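
To show how an error in the projected concentration carries through to the forcing, here is a short sketch using the standard simplified expression dF = 5.35 ln(C/C0) W/m2. The two concentrations compared are hypothetical numbers chosen only to illustrate the sensitivity; they are not model output.

```python
import math

C0_PPM = 280.0   # pre-industrial reference CO2 concentration, ppm

def co2_forcing(c_ppm, c0_ppm=C0_PPM):
    """Radiative forcing from CO2 relative to the reference, dF = 5.35*ln(C/C0) W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

modelled = 560.0   # ppm, a hypothetical projection (a doubling of C0)
actual = 500.0     # ppm, hypothetically lower if the sinks take up more than modelled

f_model, f_actual = co2_forcing(modelled), co2_forcing(actual)
print(f"Forcing at {modelled:.0f} ppm: {f_model:.2f} W/m^2")
print(f"Forcing at {actual:.0f} ppm: {f_actual:.2f} W/m^2")
print(f"Overstatement: {f_model - f_actual:.2f} W/m^2 "
      f"({(f_model - f_actual) / f_actual:.0%} of the lower figure)")
```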

What I actually wanted to end up with in this particular post, before I wandered down this little alley, was a list of all the different sources of CO2 and the volumes each produces. I never quite got that far, but for the sake of reference (which is part of what this post was meant to be about) the EPA has produced a fairly comprehensive document on the Greenhouse Gas Inventory, from which I took this figure for the fossil fuel industry. And I will close with this for today.

Source: EPA Greenhouse Gas Inventory

3 comments:

  1. Deep water indeed, HO, too deep for me to paddle in. Nevertheless, I’ll post a link to a recent summary in Nature of what we know and have yet to learn about climate change, http://tiny.cc/pf863, where the suggestion is that the uncertainties go beyond the “missing carbon”. They include nitrogen trifluoride, which is about 17,000 times as potent a greenhouse gas as carbon dioxide, and good old methane (potency 20 times CO2), whose atmospheric concentration appears to be rising considerably faster than was previously modelled.

    The emission data for Canada have a pattern different to your picture above for the USA. I’d post graphics but I don’t seem to know how to put html on this site, for which I apologise. Data are for 2004, in kilotonnes CO2e per year:
    Electricity generation: 130,000 (17.2% of total)
    Industry: 277,800 (36.7% of total)
    Of which the fossil fuel production sector is: 123,020 (16.2% of total)
    Residential: 43,000 (5.7% of total)
    Commercial: 37,900 (5.1% of total)

    The Canadian data include gases other than CO2 and so are not exactly comparable to the US data, which are for CO2 from fossil fuel combustion only. However, CO2 emissions dominate. Note that these four major categories sum to only 64.7% of total emissions for Canada, and that electricity generation represents a much smaller proportion of emissions in Canada than in the USA. The data source is Canada’s greenhouse gas inventory at http://tiny.cc/E3aXx.

    As for proportions by source, the Canadian data are (2004):
    Petroleum: 55%
    Coal: 2.8%
    Gas: 41.6%
    Biomass: 0.6%
    Data come from Environment Canada’s 2006 annual report to the UN Framework Convention on Climate Change.

  2. Sorry, should have said "proportions by fuel type" in the last paragraph. Less coal, eh!

  3. The costs of coal are offset by its presence in many areas of the world where there are few other sources of fuel with the capacity, in the relatively short term, to provide enough energy for the needs of the current population. South Asia will come up repeatedly as one of those regions. Even in the United States coal is relatively cheap and common, and likely, for those reasons, to be difficult to replace.
