Sunday, February 28, 2010

Test results from nuclear stimulation of oil and gas reservoirs

Hmmm! Well, the tone of some of the comments on my last post - dealing with nuclear development of oil shale - both recently and when I initially posted it on TOD, helps illustrate one of the points that I want to make in this continuation of the posts on oil shale. The tone was quite negative in general, with a number of folk disturbed that I even brought it up. It points to the fact that, as a political reality (bearing in mind that I try to stick to technical matters in this series), the use of nuclear devices to adjust the local geology is not likely to be popular. As tstreet noted after the original post, there is an article in the Colorado Constitution (article XXVI) that he helped put there.
Section 1. Nuclear detonations prohibited - exceptions. No nuclear explosive device may be detonated or placed in the ground for the purpose of detonation in this state except in accordance with this article. (Adopted by the People, November 5, 1974. Effective upon proclamation of the Governor, December 20, 1974.)

Section 2. Election required. Before the emplacement of any nuclear explosive device in the ground in this state, the detonation of that device shall first have been approved by the voters through enactment of an initiated or referred measure authorizing that detonation, such measure having been ordered, proposed, submitted to the voters, and approved as provided in section 1 of article V of this constitution. (Adopted by the People, November 5, 1974. Effective upon proclamation of the Governor, December 20, 1974.)
While I did not know about that as I initially planned this series, I had intended just to point out that the unhappiness of just one Senator with a nuclear program (and I was thinking of Senator Reid and Yucca Mountain) can delay and ultimately kill its implementation. In this case it is likely that there would be at least eight senators opposing, and I think the point is made. However, since I do think it is useful for folk to know these things, I thought I would continue with the rest of the story from a technical point of view. Particularly since the use of nuclear energy for excavation has recently been revisited by WIRED magazine.


Following the debates about the potential benefits that might come from the use of nuclear explosives, it was decided to see whether the idea would work, in three test detonations that were given the names Gasbuggy, Rulison and Rio Blanco.

The Gasbuggy shot, in 1967, used a 29 KT device at the bottom of a 4,240 ft deep shaft, and created a cavity that was 80 ft wide and 335 ft tall, when one included the chimney. It also fractured the tight shale around the opening. Anticipated dimensions were a 165 ft cavity with a 350 ft chimney.

The Rulison shot, in 1969, used a 43 KT device at a depth of 8,426 ft. It produced a cavity that was 152 ft wide, with a fracture zone that extended some 200 ft into the surrounding sandstone. (Predicted size was 160 ft with a 300 ft chimney.) It is interesting to note that contractors have sought to drill near that shot, in order to extract gas from the formation. They were initially restricted to drilling no closer than half a mile. That was back in 2004, but interest in drilling at the site has continued. In the latest development, Noble Energy Production is planning to drill some 78 wells near the site, with DOE apparently having plans to allow drilling closer than the half-mile limit, though the wells planned in this case are all more than 1.5 miles from the site. The County Commissioners are not amused. And, lest there be some concern over the gas released at the time, let me quote from the article:
All the gas freed by the nuclear blast was produced and burned off at the surface, Bennetts said. The radioactivity at the site wasn't high to begin with, and since has decreased to below background levels, he said.

The blast formed a sealed cavity underground, according to state and federal authorities. "Even if you drilled a well into that cavity again, there's very little radioactivity remaining to be produced," Bennetts said.

There was some measure of the gas produced:
Following the blast, in 1970 and 1971, the companies burned off, or "flared," 430 million cubic feet of gas into the open sky. The commission said that the level of radioactivity in the air surrounding the site did not exceed normal background levels.
Rio Blanco, shot in 1973, was made up of a series of three 30-KT devices stacked in a shaft some 7,000 ft deep, with the devices actually at 5,840, 6,230 and 6,670 ft. Each device created a cavity that was some 120 ft in diameter, and about 250 ft high. (Against predictions of a 140 ft diameter with a 300 ft chimney.) Fractures from the explosions extended about 200 ft into the rock around the shaft.

The production of gas from the shots was reported to be less than had been anticipated and the levels of radiation higher, so that while the volume of gas that could have been collected "would have been commercially viable," that would only have held true had the gas been uncontaminated. It was not.

Interestingly, there have also been tests of this technology in the former Soviet Union. When I wrote about gas fires in Turkmenistan, there was a comment by Syndroma, who posted pictures of the devices, which I am reposting here, also noting:
As to extinguishing of gas fountains: 1 in Turkmenistan, 2 in Uzbekistan, 1 in Ukraine (objective not achieved). Also in Ukraine, there was 0.3 kt explosion to alter the geology of coal mine, to make it safer for the miners. Objective achieved. Later, coal was extracted up to 70 meters from the chamber. No excess radioactivity detected.

Of ~150 peaceful explosions only 4 turned out "nasty" (contamination of the surface).


Soviet weapons that could be used in gas and oil well stimulation (from Wonderful Russia via Syndroma)

Syndroma also posted pictures of the result of three shots used to generate a trench, which I am also moving here. This was the model of the crater:

And this was the resulting crater that was achieved.

Results of the excavation when 3 nuclear devices were used to excavate a trench in the Soviet Union (Syndroma) (You can see the site on Google Earth at 61 18 16.93, 56 35 55.77)

There is more information on the Soviet Program here.

However, our purpose is to look at the development of reserves and their contribution to the marketplace within the foreseeable future, particularly the next fifteen years, when shortages of supply can be expected to become evident. Within that window it can, I think, be realistically assumed that there will be no use of nuclear devices to enhance oil shale recovery out West.

At the same time, the toughness of the rock - its strength and behavior under mechanical attack - makes machine mining of the shale unlikely to be practical at a scale sufficient to produce much more than 100,000 barrels a day within that time frame. That judgment on my part is also based on the need to regenerate the capital for the program, reconstruct the facilities and get through all the necessary paperwork.

There are alternate methods for mining the material, including those that are used in conventional metal mining of large-scale surface and underground deposits. However, mining something that can generate high levels of potentially explosive gases, if very large scale fracturing and blasting is undertaken, creates levels of risk that will make development of such plans a lengthy process if carried out underground. The mining of Gilsonite, for example, was only realistically achieved when the hydrocarbon was mined using high-pressure waterjets. But the strength of the oil shale makes the conventional use of that technique impractical - even if it were allowable, which is conjectural.

With these prospects diminished, the only way that oil shale is likely to have a significant impact in the next fifteen years is either through some smaller-scale in-situ retorting or possibly through a surface mining approach. I will discuss these in the next two posts on the subject.


Saturday, February 27, 2010

Temperature data for Kansas - does it help?

Well this is the third in what was not planned as a continuing saga. So far some significant climate assumptions haven’t held up so well on examination, so today I am moving from the data for Missouri into Kansas. And the first hypothesis we will look at is that the data and trends for Kansas are the same as for Missouri. As a subsidiary I will put the two sets of data together, and with a total of around 60 stations, see if this improves any of the statistics that we looked at earlier.

Going through the same process that I followed in putting together the data for Missouri, there are 31 Kansas stations in the USHCN database, and with just a little judicious editing I can use the same format that I have already developed to start putting the different plots together. There are, in addition, 3 stations in the GISS network (Wichita, Topeka and Concordia). The difference between Missouri and Kansas is that while the former has all its GISS stations in the larger metropolises, the Kansas GISS data includes one rural station (Concordia – listed as rural in GISS).

Now there is a second modification to the procedures, since I don’t have a map of Kansas from which to get the census data. Instead I went to the Internet, and, for consistency, used the information from the series of city data that can be found on the Web. For example, if I seek “population Anthony Kansas” the top site is www.city-data.com/city/Anthony-Kansas.html. There are similar sites for all the places in both the USHCN and GISS stations, and, for consistency therefore I used the population numbers from this series of sites. (The data is given for 2008).

I got the information on which were the GISS sites in Kansas by using the list provided by Chiefio and as I noted these are in Wichita, Topeka and Concordia. I also added the station height information (from a Google search under “elevation Wichita Kansas” ), since we are moving closer to the Rockies.

So, if you remember, there was no significant warming in Missouri over the last 114 years (the data sets are from 1895 on). My initial hypothesis is that this is also true for Kansas. For our purposes an r-squared value of less than 0.05 is considered not to be significant. (I explained this a little last week.) And the data says:


And so Kansas is not showing the same trend as Missouri. Here there has been a significant warming over the past 114 years. So how about the other hypotheses that we looked at in that earlier post?
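(As an aside, for anyone who would rather script this check than build it in a spreadsheet, the trend-and-r-squared test used here looks something like the sketch below; the file name is hypothetical and stands in for the column of annual state averages assembled as in the earlier posts.)

```python
# Minimal sketch of the trend test: fit a straight line to the annual state
# mean temperatures and report the r-squared of the fit.
import numpy as np
from scipy.stats import linregress

years = np.arange(1895, 2009)                            # 114 years of record
annual_mean_f = np.loadtxt("kansas_annual_mean_F.csv")   # hypothetical one-column export, deg F

fit = linregress(years, annual_mean_f)
print(f"Trend: {fit.slope * 100:.2f} deg F per century")
print(f"r-squared: {fit.rvalue ** 2:.3f}")               # below ~0.05 is treated as no significant trend
```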

First of all, is there a difference between the GISS stations and the overall average for the state? In Missouri there was a 1.19 degree F difference between the GISS stations and the rest. In Kansas this is, on average, only a 0.27 deg F difference. Looking at the trend in the data over the years, we find:


And this is also not entirely expected, if one follows conventional UHI theory, since the two largest cities are in the GISS trio, and one might have thought that this would have led to an increase in the GISS temperatures relative to the rest of the state, which is largely rural.

However, if you remember from the Missouri data, there was a logarithmic relationship between temperature and population, which gave a greater temperature change as small towns grew than as larger cities did. Thus if Kansas, a largely rural state, was seeing a greater proportion of growth in its smaller communities, then perhaps this would explain the change.
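(For those following along in a script rather than a spreadsheet, the logarithmic fit looks something like this; the file name and two-column layout are assumptions, standing in for the population and long-term average temperature columns of the spreadsheet.)

```python
# Sketch of the logarithmic population fit: long-term mean temperature at each
# station against the log of the population of the town it sits in.
import numpy as np
from scipy.stats import linregress

# Hypothetical two-column file: population, long-term mean temperature (deg F), one row per station.
population, mean_temp_f = np.loadtxt("station_pop_temp.csv", delimiter=",", unpack=True)

fit = linregress(np.log10(population), mean_temp_f)
print(f"Slope: {fit.slope:.2f} deg F per tenfold increase in population")
print(f"r-squared: {fit.rvalue ** 2:.3f}")
```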


The significance is still rather low (since as Luis has pointed out, working with data from the rural states reduces the sample size for larger communities). But if we combine the two data sets, what does this do to the correlation?


Now, with more data, there is a significant correlation between population and temperature.

So let's have a look at a couple of other parameters. In Missouri there was a very strong (r-squared of 0.8) correlation of temperature with latitude, but not much of a correlation with longitude. How does that stand up for the Kansas data?


Hmm! The correlation isn’t nearly as good. So maybe there is another factor coming into play – how about longitude, which was not significant in Missouri?


I suppose this makes a bit of sense. The further west we go, the higher the ground gets, since we are approaching the Rockies. So after inputting the elevation of the stations we can see if that gives us the same correlation:


And it does not!

Which means, I suppose, that we had better continue this investigation, and move the data acquisition another state West – which was not what I expected when I started writing this, but we’ll leave that investigation until next week.

And one last bit of curiosity, how do the standard deviations hold, over time, with the new state data?


Well we are still getting that improvement in quality with time, which, as I explained initially, may be due to the change from manually read thermometers to the automated systems being introduced. We will have to see how this holds up as our search for meaning in the data continues.

P.S. As with all the information in this series, if you want a set of the data please let me know, through comments where you want it sent.


Tuesday, February 23, 2010

Of Graduate Starting Salaries and the underlying message

While there are many different criteria that cause students to choose different careers, it is a reality that whenever the salaries for those working in the extraction of fossil fuels go up, so does our enrollment.

So, this being the Olympic weeks, I won’t write a whole lot of comment on this but here are the current top 10 average starting salaries.


The mining and petroleum industries were hit badly after the mid-80's by a drop in demand for fuel, given the global availability of cheap oil, and so for two decades salaries and the need for graduates were both very limited.

Thus when the last upturn in demand came along, there were not a lot of qualified engineers in the system as it regrew, particularly at the middle levels of management. And those of us who were around before the '80s are now moving into retirement, so the need is greater than the existing supply of graduates from the most critical disciplines at the universities can meet.

The numbers in the above table are averages; I have heard of more than one petroleum graduate starting at above $100k, and of mining engineers going out at around $85k. It all depends on the quality of the student, and which part of the industry they aim at getting into. But even in these generally tough times, because of the lack of supply there is still a strong demand for graduates. Of course we are now starting to see some of the larger incoming classes work their way through the system, graduate, and meet demand – but I suspect that the top three will continue to hold their positions for a while.


Sunday, February 21, 2010

Nuclear weapons and oil shale

There is a distinct possibility that we will see the global supply of oil begin to decline within the next decade. In fact the drop may come significantly quicker than some have previously predicted. As Dr. James Schlesinger, the first Secretary of Energy, once noted, the American public operates in either Complacent or Panic mode. Given that we may soon reach the latter condition, it could be that we will need access to all that oil locked up in the oil shale somewhat sooner than Shell might get it out (and I'll cover that in a later post). The Administration should, therefore, have a crash plan available in case that need becomes critical. This post is written in that vein. Now before I get into the piece that follows I should explain that I don't hold any particular animus towards the states of Colorado, Utah, Wyoming or Idaho, and so when I start talking about disposing of nuclear weapons in those states by making use of them, it should be taken as merely a technical discussion (grin).

Continuing to supply the world's need for oil, even as that need increases into the future, will require some fairly rapid and agile production of resources, and, as I noted in the first post of this series, with some 2 trillion extractable barrels of oil locked up in the oil shales of the above four states, there lies a potential answer to the problem. But conventional means of extraction, particularly given the levels of capital required, and other issues that I will discuss later, are unlikely to make any significant impact on the gap in economic supply that will develop in the near future. The use of nuclear explosives has the potential to solve that problem, and, as with the other techie talks, I will explain rather simply how, conceptually, this might be achieved.

The papers that I am going to take the concepts from were given at the second and third oil shale symposia and are listed at the end of the post. They describe the application of results from over 150 underground nuclear detonations which were carried out as the United States sought to find peaceful uses for nuclear explosives as part of the Plowshare Program. I will also be using 1960's costs since these were used in the papers.

To set the stage, as I have described earlier, the Western oil shales occur in rock with almost no permeability, and the kerogen that is in the rock will, under normal conditions, stay there, rather than flowing even when it has the chance. So if the oil (kerogen) is to be recovered two things will be needed. The first is a way of massively fracturing the rock, and the second is the maintenance of some level of heating to liquefy the oil, and then to keep it flowing. Large scale fracture of the rock will, in turn, require the application of massive levels of energy, and here nuclear explosives are in a class of their own. Explosive yields are usually given in kilotons, where a kiloton has the effective energy of a thousand tons of TNT. (A ton of TNT has an energy content of 4,184 megajoules.) At the same time the devices themselves are relatively small. A 250 KT device would be around 20 inches in diameter and about two to four times that long. The cost to place it, and the device itself, was estimated to be around $500,000 in 1965.

The oil shale layers are about 2,000 ft thick, and under an additional cover of 1,000 ft of overlying rock (overburden to mining engineers). If a 250 KT device was placed at the bottom of the shale layer, therefore, and detonated, it could be expected to create a cavity that would be around 400 ft in diameter. Much of the radioactive material generated (anticipated to be tritium) would be fused into the wall of the cavity, or caught in the gas that could be drawn off and collected through the boreholes subsequently used to take advantage of the blast.

The shockwave from the event is anticipated to create damaging surface motion to a distance of 2 miles or so, and to be substantially disturbing out to 6 miles. However, for our purpose, what matters is that in the immediate vicinity of the blast it will induce significant fractures in the surrounding and overlying rock. This will cause the rock immediately over the blasted cavity to collapse, and to fall in until a chimney of broken rock has been formed. This chimney will grow upwards until the bulking of the rock as it breaks (that gain of 60% I mentioned last post) fills the space available. For the 250 KT shot this chimney is estimated to be around 1,000 ft high. Experience suggests that the blocks will break into pieces up to 3 ft in size, though the collapse and internal fracturing may increase their ignition potential. The rock surrounding the cavity will, for a distance of around 3 cavity diameters, be fractured with a permeability of up to 1 darcy. (The Ghawar field in Saudi Arabia has an average permeability of 617 millidarcies.) Beyond that range, and out to about 6 to 8 radii, the rock will continue to be fractured, but with fractures more widely spaced and less useful.

Thus, if the entire area is to be treated, then shots would need to be fired around 3 - 4 cavity radii apart in order to maximize the break-up of the rock. (Say for our hypothetical model this would be around 750 ft). By drilling sets of 5 shot holes to create individual retorts, and grouping these in sets of four, to create a "plant," we could create a production operation for the recovery of the oil. Depending on whether the intent is to optimize the fragmentation of the rock, or the fracturing of the surrounding rock with the patterns, some 240,000,000 to 1,000,000,000 cubic feet of rock will be broken per shot, at a cost of $0.015 to $0.05 per ton.
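As a rough cross-check on those numbers, here is a small sketch that turns the quoted broken-rock volumes and the 1965 device cost into dollars per ton. The 16 cubic feet per ton figure is taken from the conventional-mining post (Ref 4 there), and treating the roughly $500,000 device-and-emplacement estimate as the only cost is a deliberate simplification, so the result is only meant to show that the quoted $0.015 - $0.05 per ton range is the right order of magnitude.

```python
# Rough cross-check of the nuclear-retort cost figures quoted above.
# Assumptions: 16 cubic feet of oil shale per ton, and $500,000 (1965 dollars)
# per shot for the device and its emplacement; the original papers include
# further cost items, so this is only an order-of-magnitude check.
CUBIC_FT_PER_TON = 16.0
COST_PER_SHOT = 500_000.0

for broken_volume_cf in (240e6, 1e9):       # range of broken rock per shot quoted in the text
    tons_broken = broken_volume_cf / CUBIC_FT_PER_TON
    cost_per_ton = COST_PER_SHOT / tons_broken
    print(f"{broken_volume_cf:,.0f} cu ft -> {tons_broken / 1e6:,.0f} million tons, ${cost_per_ton:.3f}/ton")
# Output lands at a few cents per ton or less, the same order of magnitude
# as the $0.015 - $0.05/ton figure from the 1960s papers.
```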

Which brings up the second advantage of nuclear explosives. About 2.5 months after the shot the temperature at the wall of the cavity will still be around 1,000 degrees F, and some 11 months after the shot it will be around 180 degrees. Since the only place for this heat to go is into the surrounding rock, it will cook the kerogen in the vicinity into oil, with, at the sustaining temperature, a low enough viscosity that it will flow into any adjacent collection point.

And it is here that the advances of the past 40-years come into play, since oil drilling is now capable of drilling a "bottle brush" collection pattern under the cavity in order to access and collect the oil (and some water) as it drains down through the fractures. However drilling will also be required to feed air into the chimney and to turn it into a large-scale retort to complete the transition of the kerogen in the vicinity to oil, and to mobilize it. Based on USBM experiments, some 75-90% of the oil in the shale can be recovered from such an in-situ retort. Where necessary some of the gas produced may also be used, in the later stages of the upward progression of the fire front, to enhance the strength of the fire front and to ensure that it continues to move up through the shale, not only in the chimney, but then also into the overlying and surrounding rock. (The fire can be controlled to either burn up or down what now becomes an extremely large retort).

Using this technique and applying it to each of the plants that I have just described, it is anticipated that each plant, which would cover an area about a mile in diameter, would produce some 450 million barrels of oil over twelve years, at a production rate of 100,000 barrels per day, assuming a 75% recovery of the oil over the 2,000 ft interval. It is anticipated that with a feed of around 3,000 cfm/ton of air at 50 psi, the flame front could progress at a speed of between 1 and 2 ft per day. In 1965 dollars, it was anticipated that the operation could make a profit if the oil were then sold to a refinery at a price of $1.50 a barrel. Oil recovery would, however, be controlled by the quantity of oil in each "retort" layer, and, by the nature of the operation, all the oil would be anticipated to be recovered, but at the rate controlled by the layers as they produced. However the process is considered economic for oil shale at grades above 15 gallons/ton with thicknesses of greater than 400 ft.
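A quick arithmetic check, using nothing beyond the figures quoted above, shows that the per-plant numbers hang together:

```python
# Consistency check on the per-plant production figures quoted in the text.
barrels_recovered = 450e6          # barrels per plant over the project life
project_years = 12
recovery_fraction = 0.75

barrels_per_day = barrels_recovered / (project_years * 365)
oil_in_place = barrels_recovered / recovery_fraction

print(f"Average rate: {barrels_per_day:,.0f} barrels/day")                 # ~103,000, i.e. the ~100,000 bpd quoted
print(f"Implied oil in place per plant: {oil_in_place / 1e6:,.0f} million barrels")
```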

So just think, when we talk about "the nuclear option" in future, we may have an entirely different concept in mind (/grin).

(Note that, for consistency I changed some of the numbers to reflect use in the 2,000 ft shale column, rather than the 1,000 ft used in some of the example calculations in the papers).
Reference papers for this post are:
M.A. Lekas and H.C. Carpenter "Fracturing Oil Shale with Nuclear Explosives for In-Situ Retorting", 2nd Symposium on Oil Shale, CSM, 1965.
H.F. Coffer and E.R. Spiess "Commercial Applications of Nuclear Explosives, the Answer to Oil Shale?", 3rd Symposium on Oil Shale, CSM, 1966.
M.E. Lekas "Economics of Producing Shale Oil, the Nuclear In-Situ Retorting Method," 3rd Symposium on Oil Shale, CSM, 1966.


Saturday, February 20, 2010

The Precipitation Hypothesis - is it true?

Today we test the "Diane Sawyer" hypothesis: that global warming has led to an increase in precipitation. I'd call it rain for short, but Ms Sawyer made the remark at the end of an ABC newscast the other day about snow. Namely, she stated that we should not think, just because we have more snow, that Global Warming is not happening. In fact, she went on to say words to the effect that because the Earth is getting warmer there is more precipitation, which, at the time, was what Washington was seeing, and that this had been predicted.

A quick check and there are sites that state, for example:
Numerous empirical observations and models of the global climate confirm the hypothesis that global warming enhances the global hydrologic cycle. For instance, a global warming by 4°C (7.2°F) is expected to increase global precipitation by about 10 percent. Models suggest that the increase is more likely to come as heavier rainfall, rather than as more frequent rainfalls or falls of longer duration.
Well that is pretty definitive, and so let us, using the same procedure as last week, see if in fact this hypothesis is true.


And, to make my life (and yours if you are following along) easier I am going to use the structure of the spreadsheet that I created last week, and merely, where possible, replace the temperature data with annual precipitation data from the same set of weather stations. To get the precipitation information I step through the same sequence from the United States Historical Climatology Network to get to the station sites for Missouri, which if you will remember, gets me to this map.


As before I will step down through the list of weather stations that are given for the state to get the data that I need. After clicking on the name of the station (and again I will use Appleton City as the example) I already have the station data entered from last time, so all I need to do is click on the “Get Monthly Data” phrase.

Now, when I get to the Monthly Data page, I scroll down to the bottom of the page, where I can see the following selection:


As before this will take you to a screen that gives you the file name that has been created,


And when you click the blue line, then a file is downloaded to your computer. As before if you open the file, then you get the list of annual precipitation for the site since 1895.


Returning to the spreadsheet with all the stations on it, I now paste that column of data into the table, starting in square C13. As before I continue doing this, until I have all the precipitation data from the USHCN weather stations (all 26).

In contrast with the temperature data I don't know where to go to get the precipitation data for Missouri from GISS – a visit to their site notes that they get their information from the Climatic Research Unit at the University of East Anglia – and given the current kerfuffle over there, I am going to give trying to get info from them a pass. Still we have the information from the last 114 years for Missouri, based on 26 stations.

I already have the average for those stations set up from when I created the original spreadsheet (though I have to change the data ranges for the plots). And now I can check the hypothesis that we started with. Given the reported increase in global temperature, has there been an increase in precipitation in Missouri?


Well if one looks there has been a very small, and statistically insignificant, increase in precipitation over the last 114 years, so it appears that the Diane Sawyer hypothesis is incorrect.

To explain the "statistically insignificant" remark, I am going to give a relatively simple explanation that I found here of the meaning of the r-squared values that I put on every graph, to show how significant the trend is:
The main result of a correlation is called the correlation coefficient (or "r"). It ranges from -1.0 to +1.0. The closer r is to +1 or -1, the more closely the two variables are related.

If r is close to 0, it means there is no relationship between the variables. If r is positive, it means that as one variable gets larger the other gets larger. If r is negative it means that as one gets larger, the other gets smaller (often called an "inverse" correlation).

While correlation coefficients are normally reported as r = (a value between -1 and +1), squaring them makes them easier to understand. The square of the coefficient (or r square) is equal to the percent of the variation in one variable that is related to the variation in the other. After squaring r, ignore the decimal point. An r of .5 means 25% of the variation is related (.5 squared = .25). An r value of .7 means 49% of the variance is related (.7 squared = .49).
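In script form, for anyone checking their own state's data outside a spreadsheet, the same quantities can be computed directly; the file name here is a hypothetical export of the state-average precipitation column built above.

```python
# Correlation coefficient r, and r-squared, for an annual series against time -
# the same r-squared value that is reported on the graphs in these posts.
import numpy as np

years = np.arange(1895, 2009)
annual_precip = np.loadtxt("missouri_annual_precip.csv")   # hypothetical one-column export

r = np.corrcoef(years, annual_precip)[0, 1]
print(f"r = {r:.3f}, r-squared = {r ** 2:.3f}")
# An r of 0.5 gives an r-squared of 0.25, i.e. 25% of the variation in
# precipitation is associated with the trend in time.
```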
To show where the significant variable is for the state data, let's look at the change in precipitation with latitude (r-squared is 0.82):


Still nothing significant with longitude, but it is interesting to revisit (since the plots were already set up) the questions I posed last time on data scatter and population. I had hypothesized (based on Anthony Watts' evaluation of weather stations) that the scatter in the data (as identified by the standard deviation) would get worse over time. It did not for temperatures in Missouri – possibly because the scatter is tied to rates of temperature change, and Missouri hasn't seen one - but that could also have been because of the change in thermometers. Here is the plot for precipitation:


Well there is a trend, if not a very significant one, which gives a little scientific credence to Anthony Watts for the state of Missouri. And does population size have an impact?


I was going to leave that without comment, but I suspect that the apparent correlation has more to do with where folk live in regard to latitude, than the actual size of the population, but to validate the relationship would require a lot more data than I have input to date.

Now you may say that what I have posted today really has little relevance, since Missouri has had an insignificant amount of warming over the past 114 years. But the moisture that Missouri sees (and feels in the rain and snow) is largely generated elsewhere, and so if there is a correlation, given the relatively large amount of precipitation the state gets, then it should show up - or should it?

The reason I ask the last question is that if you go to GISS you will find this plot:


Now (I checked) there is no real correlation between Missouri precipitation and global precipitation, but the conclusions regarding changing rates are the same. I quote NOAA:
Globally-averaged land-based precipitation shows a statistically insignificant upward trend with most of the increase occurring in the first half of the 20th century.

So I am afraid it is not just Missouri data but also global data that falsifies the “Sawyer” hypothesis.


Tuesday, February 16, 2010

An updated look at Lithium production

Just over a year ago, spurred by an article in Time, I wrote a post on the possible global supply of lithium, which is used in rechargeable batteries and is a major choice for the batteries of electric vehicles such as the Chevy Volt. Since the story has acquired more interest this week, and with new information, it is worth revisiting the topic.

I began the original post by noting that our first introduction to these batteries was in our role as an Explosives Lab, when we found out - in a series of experiments a long time ago - that they can blow up if handled wrongly. And it turns out that such a risk is still around, though not that common. But to put the risk in context:
Fifteen incidents in the last two decades were serious enough to warrant a decision to re-route a plane or perform an emergency landing, according to FAA data.

For instance, in 2008, there were nine battery accidents resulting in two minor injuries. To put that figure in perspective, that year 3.3 billion lithium batteries were transported on 77 million flights, including 56 million passenger and combination passenger/cargo flights.

Based on that data, one's chances of being on the same flight with someone who suffers a minor injury because of a malfunctioning battery were about 1 in 28 million in 2008. In comparison, the one-year odds of dying from a car accident in the U.S. are 1 in 6,584, according to the National Safety Council.
Since we also look at processing, I became curious about where and how the lithium is mined. Since then, however (h/t to JoulesBurn), there has been a more critical article by Jack Lifton, so what I thought I'd do is integrate some of this additional information into a more up-to-date post.

It turns out that most lithium comes from salt lake deposits such as those in Chile and Bolivia.

The biggest deposit in the world lies in the Salar de Uyuni, which is also the world's largest salt flat. A quick look through Google Earth gives the location, with the white in the picture being the salt flat, not snow. La Paz, the capital of Bolivia, is at the top.

The world's largest lithium deposit is at Salar de Uyuni (Google Earth)

The lithium is found in the crystallized salt, and in the brine that underlies the crust. As the world gears up to demand more, Bolivia is determined to keep as much of the "value added" part of the processing for itself. Thus the intent has been that the state would initially act alone in industrializing its deposits, and not look for foreign partners until 2013. Unfortunately this attitude has not drawn a lot of excitement from the world press, since there appears to be more than enough lithium for current demand available from elsewhere.
Chile provides 61% of lithium exports to the US, with Argentina providing 36%, says the US Geological Survey (USGS), with Chile having estimated reserves of 3m tonnes, and Argentina about 400,000 tonnes. . . . . . Lithium production via the brine method is much less expensive than mining, says John McNulty, analyst at global bank Credit Suisse. Lithium from minerals or ores costs about $4,200-4,500/tonne (€2,800-3,000/tonne) to produce, while brine-based lithium costs around $1,500-2,300/tonne to produce.

Melting snow from the Andes Mountains runs about 130 feet (39.6 meters) underground, into lithium deposits, and then gathers into pools of salt water, or brine. The brine is pumped out from under salt flats such as Chile's Salar de Atacama, and spread among networks of ponds where the desert sun and high altitude provide a beneficial environment for evaporation.

It takes about a year for the brine to reach a lithium concentration of 6%, when it is shipped to a plant to be purified, dried and crystallized into lithium carbonate, which then is granulated into a fine powder for battery makers. Lithium stores a very large amount of energy for its volume, which makes it perfect for electronics.

Unfortunately for those who are expecting electric cars to spring out of the woodwork in the next few years (remembering that the President's plan calls for 1 million plug-in hybrids by 2015), Mitsubishi estimates that the world will need 500,000 tons of lithium per year. The Uyuni deposit itself holds at least 9 million tons, although the country has, in total, perhaps as much as 73 million tons. To put the current progress in perspective, the pilot plant was intended to produce some 40 tons by the end of last year as it geared up to full production, with the product coming from brine processing. The world supply of lithium itself is considered to be 28.4 million tons, equivalent to 150 million tons of lithium carbonate. The USGS has estimated that the deposit can produce about 5.4 million tons of lithium, relative to a total US reserve base of 410,000 tons. With the slump in the world economy last year demand dropped, and one producer, SQM SA, has recently dropped its price 20%, since there is more than enough to go around.

Source USGS

Of course that all depends on how Chinese demand changes in the next short while.

Source Research in China

In terms of how much lithium goes into a battery, it is about 20 lb for an EV, and about 0.1 oz for your cell phone. However, there are other industrial uses for lithium, so that at present only about 25% of world production ends up in batteries.
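As a rough sense of scale, here is the simple arithmetic on those figures. Whether the 20 lb figure refers to lithium metal or to lithium-bearing battery material is not spelled out in the sources, so treat the result as an order-of-magnitude number only.

```python
# Rough scale check: battery lithium material implied by a fleet of EVs,
# using the ~20 lb per vehicle figure quoted above.
LB_PER_EV = 20
LB_PER_TONNE = 2204.6

vehicles = 1_000_000            # the 2015 plug-in hybrid target mentioned earlier
tonnes = vehicles * LB_PER_EV / LB_PER_TONNE
print(f"{vehicles:,} vehicles -> roughly {tonnes:,.0f} tonnes")
# ~9,000 tonnes, which is small against the 500,000 tons/year demand figure
# quoted from Mitsubishi above.
```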

Part of the problem with the Bolivian deposit, as Jack Lifton noted, is that it is contaminated with magnesium. This is also true of the Atacama deposit in Chile, except that while the Mg/Li ratio there is 6.4 to 1, that deposit runs 0.15% lithium. At Hombre Muerto, the Argentinean deposit, the Mg/Li ratio is down to 1.37 to 1, making it easier to produce, even though the grade is lower, at only 0.062% Li. Unfortunately the Bolivian deposit has only 0.028% lithium, with an Mg/Li ratio of 19.9:1, so that it has both a poorer grade and a higher Mg content. To add to these disadvantages, being high in the Andes means that evaporation is not as fast, and so processing costs go up even further. This is especially true since the lake apparently floods every year, slowing evaporation further still.

So, put it all together and, for the moment, the production of much lithium from Bolivia might be a bit further in the future than they currently expect. Which is perhaps why the plant keeps being pushed further and further out; by last November it had been put back to 2014. (And the claim that the technology will all be homegrown is a little more suspect.)
companies like Japan's Sumitomo and Mitsubishi, and South Korea's state-run Kores- Korea Resources Corporation, are helping the government find the best way to extract lithium from Uyuni "free of charge," but will be the preferential buyers of Bolivia's lithium carbonate.
Lithium is also produced from coarse-grained igneous rocks called pegmatites, with spodumene being the most common source mineral. American mines were in the Carolinas, but have closed, since brine processing is cheaper than the mining and processing of the hard rock.

Geothermal power plants draw hot brine from underground as a power source, and these brines can contain dissolved minerals. Thus, for example, the seven geothermal plants at the Salton Sea are reported to be able to produce up to 16,000 tons of lithium per year. The facilities are better known as a source of zinc (pdf). However, the potential as a source of lithium is becoming increasingly recognized.


Sunday, February 14, 2010

Conventional Mining of Oil Shale

So, there we have all this oil, sitting in these nice thick oil shale beds out West, just waiting to turn some local in Colorado into the next "world's richest person". All they have to do is figure out how to get the oil out of the ground cheaply enough to make money from it. (And if you remember from the last post on the subject, there are over 2,000 patents on ways to do this - if it were that simple there would not be nearly that many.) Congress thinks so too, since the Energy Policy Act of 2005 called oil shale a strategically important domestic resource (pdf file). More recently, a House bill governing oil shale development is currently in Committee.

What's the big deal? Drill a hole down there and the oil flows out - isn't that how you get oil out of the ground? Well, not in this case. As I said last time, the oil is really a waxy kerogen that does not want to flow at all. And there is also a problem with the rock. About 40 years ago a guy called Brace (Ref 1 - sorry, I can't find these on the internet) found that the cracks in a rock are related to the size of the grains of the material that makes up the rock. A rock with large grains has large cracks, and this gives it permeability, the joining of these cracks into a path through which oil (or water or gas) can flow through the rock. It also gives the rock its porosity, the holes in the rock in which the oil can collect. Unfortunately the average grain size in oil shale is around 5.8 microns. This is about a tenth of the thickness of a human hair, a medium human hair being about 60 - 90 microns wide. As a result the typical oil shale has very poor porosity, and it is only when it has a high oil content (above 50 gallons/ton) that permeability can be easily measured (Ref 2); below 20 gal/ton measurement becomes very difficult, because the permeability is so small. The average grade is around 25 gal/ton.

The simple message from those numbers is that oil will not normally flow into any holes that are drilled into oil shale. So where do we go from here? Well, if you won't go to the mountain, then the mountain must come to you. In other words, let's mine the oil shale, bring it to the surface, and then get the oil out of it.

That's what they do in Canada with the oil sands, and these oil shale beds are thicker. In fact the layers are thick enough that they can be mined in a number of different ways, including surface mining, what we call room and pillar mining, and a third method that I will, for now, call sub-level stoping. Remember that we need to break the rock down into pieces no bigger than 3 inches in size for the retorts.

Union Oil (now Unocal) used the room and pillar method for their mine at Parachute Creek, where mining interest had, for a while, been growing again. Room and pillar mining was also used for the Colony Mine, which was the largest project in hand back in the 1980's. Since there have been a number of reasons suggested for the closing of that project, it might be appropriate to ask you to remember the words I quoted from Harold Carver last week:
What is needed is assurance that shale oil production will face a stable economic environment in which it can share in the spectrum of raw materials for our future energy needs.
And then read on:
Tosco's interest in the Colony project was sold in 1979, and again in 1980, to Exxon Company for the Colony II development. Exxon planned to invest up to $5 billion in a planned 47,000 bpd plant using a Tosco retort design. After spending more than $1 billion, Exxon announced on May 2, 1982, that it was closing the project and laying off 2,200 workers. . . . . . The economic incentive for producing oil shale has long been tied to the price of crude oil. The highest price that crude oil ever reached -- $87/bbl (2005 dollars) -- occurred in January 1981. Exxon's decision to cancel its Colony oil shale project came a year and half later, after prices began to decline and newly discovered, less-costly-to-produce reserves came online. . . . . oil had become plentiful, with about 8 to 10 million barrels per day in excess worldwide capacity, and the trend in rising oil prices had reversed after early 1981.
The failure at that time was, in short, one of economics rather than technology.

Using a machine to mine the oil shale poses some problems, since it is much stronger than, for example, the tar sands of Alberta, which can be scooped up with a shovel. The strength of the rock falls as the oil content goes up, dropping to a value of about 13,000 psi at a grade of 30 gal/ton, at which point it stabilizes even as the grade continues to increase. This strength means that the openings for mining can be quite large, as they need to be to achieve the tonnages planned. And there are ways to make them larger.

Rooms mined in the rock were some 55 ft wide, with 58-ft pillars. The rock strength also means that the machines to grind the rock from the solid will need either to be jet-assisted, or of relatively large size. One of the first proposed (for you EROI fans) was designed to produce 17,500 tons per 2-shift day, from shale with an oil content of 40 gal/ton, and with 6.5 operating hours in a shift. Machine power requirements would be 37,500 kWh per working day (Ref. 3).
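Since that machine was flagged for the EROI fans, here is a back-of-the-envelope sketch of the energy ratio at the cutting face. The 5.8 million Btu per barrel figure is the usual crude-oil equivalence and is my assumption for shale oil; note that the retorting energy, which is far larger than the cutting energy, is not included, so this is not the EROI of the whole process.

```python
# Back-of-the-envelope energy ratio at the cutting face for the machine
# described above (Ref. 3 figures). Assumes shale oil at roughly the crude
# equivalence of 5.8 million Btu per barrel; retorting energy is NOT included.
TONS_PER_DAY = 17_500
GALLONS_PER_TON = 40
GALLONS_PER_BARREL = 42
BTU_PER_BARREL = 5.8e6          # assumed crude-oil equivalence
BTU_PER_KWH = 3_412
MACHINE_KWH_PER_DAY = 37_500

barrels_per_day = TONS_PER_DAY * GALLONS_PER_TON / GALLONS_PER_BARREL
energy_out_btu = barrels_per_day * BTU_PER_BARREL
energy_in_btu = MACHINE_KWH_PER_DAY * BTU_PER_KWH

print(f"Oil contained in the mined shale: {barrels_per_day:,.0f} barrels/day")
print(f"Energy in that oil / machine energy in: {energy_out_btu / energy_in_btu:,.0f} : 1")
# Roughly 16,700 barrels/day and a ratio in the hundreds - which is why the
# retort, not the mining machine, ends up setting the overall EROI.
```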

The advantage of the large mining machines, over drill and blast methods, which remain the most common practice, is that the operation is continuous, with rock being carried away by conveyors, and production need not stop to ventilate away the products from the use of explosives. On the other hand the use of explosives to fragment the rock does provide a relatively effective way to fracture it (though with less size control). One of the questions that I have always had, though, in doing EROI on explosive use is whether to count the energy input as that required to make the explosive, or that liberated when it is set off.

In the days when the industry was last planned, the throughput for a single plant was considered to be on the order of 100,000 tons/day. A ton of shale occupies 16 cubic feet (Ref 4), and so if the mined seam is 30 ft high, each square foot of floor space carries roughly 2 tons of rock. This would translate into having to mine about 2.5 acres of rock per day. The point has been made, however, that underground mining of layers of rock one slice at a time down through the deposit would be inefficient and energy intensive, and further that it would be restricted to mining only the high grade layers.
The matter of mining, by underground methods, the rich, deep oil shale beds in the center of the basin probably needs little consideration because better methods of producing the resource appear to be at hand. If our civilization has any conscience and if it has any regard for posterity it cannot give serious consideration to any method of production of shale oil from the center of the basin that does not result in substantially complete recovery. Our civilization has passed the stage in which it can kill the whole buffalo merely to consume the tongue and liver as was done in this area less than a century ago.
What he is arguing against is the intent to set up the mine to take out only the rich layers, so that when our grandchildren have to mine the rest they must work in the dangerous conditions of a partially mined volume, with only the poorest grades of shale as a reward.

In contrast he argues that the area should be strip-mined, since even with a 1,000 ft cover the thickness of the oil shale would justify the process as a means of recovering the entire volume of oil from the deposit. Part of the problem comes, of course, not only from the fact that a hole a mile in diameter and 3,000 ft deep has been created, but also that all the material that has been mined has to be stored before being returned. And this is one of the significant problems that mining the deposit, either by strip mining or by underground methods, generates: that of the waste volumes and their condition.

For a long time mining has pumped some of the waste rock back into the mine to fill the holes left. By mixing a small amount of cement with the rock it can be made strong enough that the rest of the valuable ore can be mined, with the roof held up by the newly placed fill. However, when you mine and mill the rock it is broken into small pieces. These bulk in volume by about 60% on average over the original volume of the rock, and so even with the use of the mine to hold some of the rock there will be about 40% of it left for disposal somewhere else. (Note that this does not include the thermal swelling that occurs when the rock is heated - I will get to that in a couple of posts.)
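That "about 40%" figure follows directly from the bulking factor; a one-line check, assuming nothing beyond the 60% swell quoted above:

```python
# Bulking arithmetic: broken rock swells ~60%, the mined void can only hold
# the original solid volume, and the remainder goes to surface disposal.
solid_volume = 1.0
broken_volume = solid_volume * 1.6          # 60% bulking on breakage
left_over = broken_volume - solid_volume

print(f"Fraction of broken rock needing surface disposal: {left_over / broken_volume:.0%}")
# ~38%, i.e. the 'about 40%' quoted in the text.
```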

But it should be pointed out that there are already some 50-odd years of experience in dealing with this waste in the area, and while I am not familiar with the problems and their solutions, the general mining practice with waste fills is ultimately to cover and seed them so that there is a binding vegetation. Unfortunately this, as with some of the other parts of the extraction process, requires considerable water, and that is an issue that we haven't reached yet. But, on the other hand, we don't seem to hear much about the piles that already exist.

Again I am going to pause here, since the post may otherwise get too long, but next time I will talk a little bit about the nuclear option, which might otherwise be forgotten.
Ref. 1 Brace W.F. "Dependence of Fracture Strength of Rocks on Grain Size," 4th Symposium on Rock Mechanics, Penn State, 1961.

Ref. 2 Thorne H.M. "Bureau of Mines Oil-Shale Research", First Symposium on Oil Shale, CSM, 1964.

Ref. 3 Hamilton W.H. "Preliminary Design and Evaluation of the Alkirk Oil Shale Miner," Proc. 2nd Annual Symposium on Oil Shale, CSM, 1965.

Ref. 4 Ertl T. "Mining Colorado Oil Shale", Proc. 2nd Annual Symposium on Oil Shale, CSM, 1965.


Saturday, February 13, 2010

Being a Climate Scientist for a day

So who is right, Phil Jones or Anthony Watts? Basically they disagree over the influence of town size on measured temperature. Being an experimentalist, I wondered what I could do to check and see who was right. And, since this is something that you can do at home, I'm going to explain exactly what I did. I only used the data from Missouri, but there is data available for all the states, so those who want to can repeat for their own state what I did. (And for that reason I will explain it in excruciating detail.)

Now, if you are going to analyze data it is a good idea to define the questions that you are seeking to answer before you start. So let me state 3 initial hypotheses. The first is from Jones et al in 2007, which refers back to a paper by Karl and James in 1990, which says, in part:
If the Canadian stations behave similarly to stations in the United States, the decrease of the DTR (diurnal temperature range) may be exaggerated by about 0.1 dg C due to urbanization.
This correlates with the 2007 paper which says
Urban-related warming over China is shown to be about 0.1°C/ decade over the period 1951–2004, with true climatic warming accounting for 0.81°C over this period.
So the hypothesis is that the rate of warming does not significantly change, as a function of the size of the community around the weather station.

The second comes from the way in which the Goddard Institute for Space Sciences classifies site sizes, calling communities below 10,000 rural. So the hypothesis is that there is no change in temperature with population below a community size of 10,000.

And the third hypothesis comes from reading Anthony Watts, and it seems to me that if his finding about the deteriorating condition of weather stations holds true, then the scatter of the data should get worse with time. So my hypothesis is that the standard deviation of the results should increase with time.

OK, so I have my hypotheses – where do I get my data? (You also need a spreadsheet program – I am going to use Microsoft Excel running on a Mac – and a state map, or an equivalent source for town populations.) As I said, I am going to look at the data for Missouri, but before I get the data I need a table to put it into. So I open Excel and create the table I want. To do this I first type titles starting in square B3, and going sequentially down, inserting the titles Station; GISS; USHCN Code; Latitude; Longitude; Elevation; and Population. I then move to square A11, and type in Calendar Year. So we start with a table that looks like this:


I am going to be putting data in from 1895 to the year 2008, and so I put 1895 into square A12, and then (=A12+1) into square A13. Then I highlight from A13 to A125 and select “Fill down” from the EDIT menu at the top of the Excel page. This now puts the years to 2008 into the A column. So now I need to get the data to put into the table. To get this I go to the US Historical Climatology Network and select Missouri from the scrolling list on the top left of the page.


I then clicked on the Map Sites button to get to the data that I wanted.


The map that now comes up shows the 26 sites that are listed for which there are records going back to 1895 in Missouri. These are listed to the right, and if you click on any one of them then the identification for it shows on the main map. I have done that for the first site on the list, Appleton City, so that you can see what I mean.


From the map information I can get the USHCN Code (230204), the latitude, longitude and elevation of the site. So I can enter those below that station name in my table, which now looks like this:


Now I need the historic temperature data for that site, and to get this I click on the “Get Monthly Data” phrase in the map balloon. This takes me to a new window.


We need to have the data in a form that will fit into the EXCEL spreadsheet, so click on the middle line in the second set of options, to create a download file. This drops you down to the bottom half of the page, and what we want (for today) is the Annual Average Mean Temperature, which is on the upper half of the screen, so I click on that box (a tick mark appears). Then I press on the submit button.



This brings up a response, which tells you the name of the file that will be downloaded to your computer, when you click the blue line, which I did.


This is the data that you want from the file (and why I included the site number when I made the table, so that I could check that I was getting each range copied into the right column, and that when I was finished I had data from all the sites). The file is downloaded into the downloads folder on your computer, and when you open it in EXCEL, you get:


The data that you want is in the third column (C), and you want to make sure that you have the Annual Average Temp, so C2 should read as shown. Copy the numbers in the column (select C3 to C116 and copy, or command-C). Then re-open the initial EXCEL page where we are storing the data (I call mine Missouri Annual Temp, so I will refer to it as the MAT page from now on). Place the cursor on box C13 and tell the computer to Paste (either from the Edit menu or by using command-V). You should get the data pasted into the spreadsheet, as I have shown.


This is the information for the first site, and should fill down to C126. It is now a good idea to save the file.

Now we go back to the USHCN page, close the window that had the data file on it, and you should be looking at the map and list of sites again. Now select the second name on the list (in this case Bowling Green) and repeat the steps to input the site information to the spreadsheet, then select Monthly Data, and the Annual Average Mean Temperature, download the file and copy the information, and then paste it into the spreadsheet.

Keep doing this as you work down the list of stations and by the end you should have 26 station records (if you are doing Missouri – the numbers vary – Illinois has 36 for example). The right end of the EXCEL file now looks like this (except that I have put in some information in boxes W4 and Y4 that I will explain in a minute).

(and the files extend down to row 126).
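If you would rather let a script do the repetitive downloading and pasting, the assembly can be sketched in a few lines of Python. The folder name, and the assumption that the annual mean is the third column of each saved station file, simply mirror the manual steps above; adjust them to match your own downloads.

```python
# Assemble the per-station USHCN downloads into one table, one column per
# station, matching the spreadsheet layout described in the text.
import glob
import os
import pandas as pd

frames = {}
for path in sorted(glob.glob("ushcn_downloads/*.csv")):      # hypothetical folder of saved station files
    station = os.path.splitext(os.path.basename(path))[0]
    data = pd.read_csv(path)
    frames[station] = data.iloc[:, 2].values                 # third column = annual average mean temperature

table = pd.DataFrame(frames, index=range(1895, 2009))        # assumes each file covers 1895-2008
table.to_csv("missouri_annual_temp.csv")
print(table.shape)                                           # should be (114, 26) for Missouri
```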
Now I need the information on population, and for that I went to the State Map (since it has it all in one place, and by each town there is the population from the 2000 Census). So I then inserted this set of numbers as I moved along row 9. I had a problem when I got to Steffenville, since no population is shown. So I went to Google Earth and had a look at the place. It looks as though there are perhaps a dozen or so houses there, so I gave it a population of 30, and for Truman Dam, looking it up on Google, the Corps of Engineers Office is in Web City, so I used the population of that town for the station.


Now I'm not quite done getting data, since the Goddard Institute for Space Studies, GISS, actually doesn't use any of these, but uses information from three of the largest cities in the state: Columbia, Springfield and Saint Louis. So I need to get the data that they use, but since they record the data in degrees C, and the rest are in degrees F, I don't want to put their data right next to the information in the table. So while I create the station names, and put them in boxes AC3, AD4 and AE4, I am going to create the initial data columns further over in the table (actually in columns AN, AO and AP) so that after I insert the numbers I can convert them back to Fahrenheit.

So where do I get the GISS temperatures? Unfortunately I have forgotten the exact route I used, but you can start with the GISTEMP Station Selector page and if you click on the shape of Missouri on the map


You will get a list of the stations that they have in Missouri, though, as I said above, apparently they only use 3 of them - in part because some of the other series are not complete. (I am only going to show the top of the list down past Columbia.) You click on the name of the station you want (we are going to use the three listed above, so the procedure is the same for each from this point).


Clicking on the station, at GISS, gives you a plot of the average temperature over the period of record, but you need to go one step further and click on the bottom phrase of those shown to get to the data page. Oh, and it is this page, with populations of less than 10,000 being considered as rural, that gave rise to the second hypothesis that we are testing. (Which I guess you might call the "and is James Hansen right?" corollary to the initial question I asked.)


And, being their usual helpful selves, they do not make this easily downloadable into our table. So, having no other way to proceed, I hand-copied the information that I wanted into the table. I have only shown the top of the table, and the numbers that I want started out in the column at the far end, but, as I explain below, I changed the arrangement. (If you are doing this, the table can be stretched so that all the data for one year are on a single line, and then you need the data in the far right column.) So the data that I am inserting in a column, starting in box AN13, is the metANN (annual mean temperature) data. (Small hint: to check that I had the years and data correct, I squeezed the frame down so that the row data for a year appeared on two rows, with the metANN in the second row, one column over, as shown, and I started with 1895 (12.05) to be consistent with the other tabulated values. And I only copied the left half of the screen, which contains what I need.)


This took a little while, since the data had to be hand entered for all three sites, but after maybe an hour (remember I titled this a day as a climate scientist – this is why), we have the data from the three sites that GISS uses in Missouri entered into the table, in Centigrade. The latitude and longitude aren’t given quite as precisely, we don’t have heights, and the population values don’t match the census, but this is the GISS data, and we gratefully take what we are given.
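For anyone who would rather not re-type the GISS numbers, here is a rough sketch of pulling the metANN column out of a saved copy of the station data page. It assumes – and this is my assumption, not something stated on the GISS page – that the page has been saved as plain text with one year per line, the year in the first column and metANN in the last, in degrees C, with 999.9 used as a missing-value flag.

def read_metann(path):
    """Return {year: annual mean temperature in deg C} from a saved station table."""
    annual = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            # keep only data rows, which start with a four-digit year
            if parts and parts[0].isdigit() and len(parts[0]) == 4:
                year, metann = int(parts[0]), parts[-1]
                if metann != "999.9":              # assumed missing-value flag
                    annual[year] = float(metann)
    return annual

columbia = read_metann("giss_columbia.txt")        # hypothetical saved file
print(columbia.get(1895))                          # should print 12.05 if the save worked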


Now we need to go over to the initial columns for these stations (AC to AE) and insert the same data, converted to deg F (so box AC13 has the equation =32+(AN13*9/5)). Then I filled down from AC13 to AC126, and then filled right from AC to AE, and the conversion was done. In the new columns I also entered the 2000 census data for the three sites, rather than the GISS values.
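For completeness, here is the conversion in box AC13 written out as code; it is the same =32+(AN13*9/5) formula, nothing more.

def c_to_f(temp_c):
    """Convert degrees Centigrade to degrees Fahrenheit, as in =32+(AN13*9/5)."""
    return 32.0 + temp_c * 9.0 / 5.0

print(c_to_f(12.05))     # the 1895 Columbia value -> roughly 53.7 deg F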

Now I have the raw data that I need to do the analysis. As they say, most of research is in the preparation; the fun part then comes a lot more rapidly. To get to that we need to add just a few calculation steps to the raw data table. The first is simply to take three different sets of averages: the first is for each year for the USHCN data without the GISS stations, the second is for each year for the GISS stations, and the third is for each station over the full period of the data set. (I am not going to show the screens for this, since they are rather straightforward.)

To get the first average I go to column AG (call it Historic average, without GISS) and in AG13 type =SUM(C13:AB13)/26, which calculates the average or mean value. Then I select from AG13 to AG126 and fill down from the EDIT menu. That gives me the average annual temperature for the USHCN data. Then I add column AH, which I call Standard Deviation, and in box AH13 a statistical formula (that I get from the Insert menu > Function > STDEV). This puts =STDEV() into the box, and I give it the range I want examined, either by selecting all the boxes from C13 to AE13 or by typing that range in, so that the box reads =STDEV(C13:AE13), and click ENTER. This gives me a measure of the scatter in the data for the year 1895. I then select the boxes AH13 to AH126 and fill down. This tabulates the change in the scatter of the data over the 114 years of record. (And I’ll come back to this in a minute.)
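Here is the same pair of calculations sketched in code, for anyone following along in Python instead of EXCEL. It assumes the full table – the 26 USHCN columns followed by the three GISS columns already converted to deg F – has been saved as missouri_annual_temp.csv, indexed by year, extending the earlier sketch; the file name and layout are placeholders of my own. Note that EXCEL's STDEV is the sample standard deviation, which is what ddof=1 gives.

import pandas as pd

table = pd.read_csv("missouri_annual_temp.csv", index_col="year")   # assumed layout
ushcn = table.iloc[:, :26]                         # the 26 USHCN stations (spreadsheet columns C..AB)

yearly_mean = ushcn.mean(axis=1)                   # column AG: =SUM(C13:AB13)/26
scatter = table.iloc[:, :29].std(axis=1, ddof=1)   # column AH: =STDEV(C13:AE13)
print(scatter.loc[1895])                           # the 1895 scatter, as in box AH13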

Now I want the average of the GISS stations, so I created this in column AJ, typing the formula =(AC13+AD13+AE13)/3 into that box. Then, as before, I selected and filled down the column. And then there is a final column, which I call Difference: the difference between the GISS set and the Historic data set. That is created in column AL, and is simply typed into AL13 as =AJ13-AG13, then filled down to AL129. (It is extended to include the average values that are calculated next.) And to provide the overall average for the state I combined the data from all 29 stations into a combined average in column AF.

And the individual station averages are created by typing =SUM(C13:C126)/114 into box C129, and then filling that right to column AG129. This formula can then be copied and pasted into box AJ129 to give the average value, over the years, for the stations that GISS relies on. And immediately it is clear, by looking at box AL129, that there is an average difference, over the years, of 1.19 deg F between the stations that GISS is using, which are in the larger cities, and the more rural stations in Missouri. (Which would seem to validate the criticism from E.M. Smith, but that, as they used to say in debate, “is not the question before the house.”)
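The same comparison in code: each station's long-term mean over 1895 to 2008 (row 129), and then the gap between the average of the three GISS city stations and the average of the 26 USHCN stations. This uses the same placeholder CSV as in the sketch above.

import pandas as pd

table = pd.read_csv("missouri_annual_temp.csv", index_col="year")   # assumed layout, as above
station_means = table.iloc[:, :29].mean(axis=0)   # row 129: =SUM(C13:C126)/114 for each column

ushcn_avg = station_means.iloc[:26].mean()        # the 26 more rural USHCN stations
giss_avg = station_means.iloc[26:29].mean()       # the 3 GISS (larger city) stations
print(round(giss_avg - ushcn_avg, 2))             # the spreadsheet gave about 1.19 deg F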

Now you are going to have to take my word for this, but when I started building this data set I had no idea how it would turn out, though I had some expectations. Let us now see what happens when we plot the data. (I am going to use the charting function of EXCEL and add trendlines to the data, with their equations and the r-squared values, so that we can see what is happening, since the data are scattered about a bit on each individual plot.)
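If you are scripting rather than charting, the equivalent of EXCEL's trendline with equation and R-squared is an ordinary least-squares fit. Here is a sketch using scipy against the same placeholder CSV as above: the slope plays the role of the trendline equation, and rvalue squared is the r-squared that EXCEL reports.

import pandas as pd
from scipy.stats import linregress

table = pd.read_csv("missouri_annual_temp.csv", index_col="year")   # assumed layout, as above
ushcn_yearly = table.iloc[:, :26].mean(axis=1)     # column AG
giss_yearly = table.iloc[:, 26:29].mean(axis=1)    # column AJ
difference = giss_yearly - ushcn_yearly            # column AL: GISS minus historic

fit = linregress(difference.index, difference.values)
print(f"slope = {fit.slope:.4f} deg F per year, r-squared = {fit.rvalue ** 2:.3f}")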

So the first hypothesis we wanted to verify was that the rate of warming does not significantly change as a function of the size of the community around the weather station. Given that the historic average is for the smaller communities and the GISS average is for the larger ones, this would initially suggest that, if the hypothesis is true, a plot of the difference against time should show no trend.
(Plot of column AL, the difference, against column A, the year.)


Well this shows that the difference has been getting less, rather than increasing – which, if anything I suppose initially supports the hypothesis. But out of curiosity I wondered how much temperature change we have had, since the actual relationship hypothesized was about rates of change. So let’s plot average temperature against time.


Now if you look at that plot Missouri has had quite a wimpy warming, less than half a degree F over a hundred and fifteen years, so given that small range, detecting changes in the rates is perhaps not feasible. But let’s plot the difference as a function of temperature just to see if there is anything.


And still it goes down? Wonder if that is trying to tell us something?

Moving on to the second hypothesis, which comes from GISS: that temperature is insensitive to adjacent population below a community size of 10,000 people. This is a plot of row 129 (the station averages) against row 9 (the populations). I am going to show the plot twice. The first time I am using a log scale for the horizontal axis, to cover the range from a population of 30 to one of over a million.


And now I am going to change the scale so that the horizontal scale is linear, and truncate it so that it only shows the data up to a population of 50,000.


Notice how the temperature is much more sensitive to population BELOW a population of 10,000 than it is above that size. Thus the assumption that GISS makes in classifying every town below 10,000 as rural, with no sensitivity to population, is clearly not correct.
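One way to put a rough number on that observation is to regress the station long-term means against population separately for the towns under and over 10,000; this is a quick check, not something in the spreadsheet. It assumes a small stations.csv built from row 9 (population) and row 129 (long-term mean), with columns name, population and mean_temp_F; those names are placeholders of mine.

import pandas as pd
from scipy.stats import linregress

stations = pd.read_csv("stations.csv")             # assumed columns: name, population, mean_temp_F
small = stations[stations.population < 10_000]
large = stations[stations.population >= 10_000]

for label, group in (("under 10,000", small), ("10,000 and over", large)):
    fit = linregress(group.population, group.mean_temp_F)
    print(f"{label}: {fit.slope * 1000:.3f} deg F per 1,000 people, r-squared = {fit.rvalue ** 2:.2f}")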

And, interestingly, this also possibly explains the decline in the temperature difference with time (although it would require inputting census data from earlier years to explore the topic fully). The assumption behind the first two hypotheses was that the larger towns had a greater sensitivity to urban heat, and that this is getting worse; but in reality, if the smaller towns were growing faster (and require less population change to have an impact on the measured temperature), then they would be gaining temperature, because of that growth, faster than the urban sites – hence the negative slope to the graph.

Which brings me to my own hypothesis: that the scatter in the data would get larger with time, given the deterioration and urbanization around the weather stations. Using standard deviation to represent scatter, the plot, if I am right, should have an upward slope over time.


Hmm! Well it looks as though I got that wrong – it was heading the way I thought until the 1940’s and then it started to bend the other way. Apparently the change from glass thermometers to the automated Maximum/Minimum Temperature System (MMTS) started about then and the changing shape of the curve is perhaps indicative of the spread of the new system.

In all these graphs it should be borne in mind that Missouri has had a relatively stable climate over the past hundred and fifteen years or so. There are also likely influences across the state due to differences in latitude and longitude between stations. And since, with the data table assembled, generating additional plots is easy and relatively fast, we can take a look. It turns out that longitude doesn’t have that much effect, but the temperature values are much more sensitive to latitude than to anything else that we have discussed.


And it may well be that dependence that hides some of the nuances of the other relationships.

Well, there we are: a little exercise in climate science. Of the three hypotheses we looked into, it turned out that the second and third were wrong, and because of that it may be that the data on which the first was based did not focus sufficiently on the changes in the smaller communities (if the sensitivity falls off above a town size of perhaps 15,000).

The procedures that I have spelled out in such detail should allow anyone else to run this same series of steps to determine whether what I found for Missouri holds true for other states in the Union, and if anyone wants a copy of the spreadsheet, let me know through the comments where to send it. (While, yes, I work at a university, and yes, I acquire data, I have no clue how to store it on the master servers – all of ours, for lots of good reasons, are stored otherwise – and so I don’t know how to make the file available other than by attaching it to an e-mail.)

And so, to summarize the exercise, which as I noted in the title took me about a day to do and write up: by analyzing the data from 29 weather stations in Missouri, which have a continuous record of temperature from 1895 to 2008 (and are still running, I assume), we have shown that
a) It is not possible to decide whether Anthony Watts or Phil Jones is correct, since there may have been an incorrect assumption made in the data collection, which (see conclusion b) means that the wrong initial assumptions were made in parsing the data.
b) The assumption that the “urban heat island” effect gets greater with larger conurbations is not correct in Missouri, where the data suggests that the sensitivity is most critical as the community grows to a size of 15,000 people.
c) The hypothesis that the data scatter gets worse with time, because of deterioration in station conditions, does not hold in Missouri, when the assumption is predicated on an increase in the standard deviation of the readings across the stations. However, this assumption may have been valid while reliance was placed on glass thermometers, since it is possible that the change to automated instrumentation has, at least for the present, overridden that deterioration.

So much for my venture into climate science, at least for now. I just wanted to show that it is not that difficult to check things out for yourself, and, provided you have the time, it can yield some unexpected results. Though I should point out that Anthony Watts and Joseph D’Aleo quoted Oke in their report on surface temperature records (page 34):
Oke (1973) * found that the urban heat-island (in °C) increases according to the formula –

➢ Urban heat-island warming = 0.317 ln P, where P = population.

Thus a village with a population of 10 has a warm bias of 0.73°C. A village with 100 has a warm bias of 1.46°C and a town with a population of 1000 people has a warm bias of 2.2°C. A large city with a million people has a warm bias of 4.4°C.
It is interesting to note that his coefficient is 0.317 and the one I found is 0.396.
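As a quick check of the quoted numbers, and of how much difference the two coefficients make, here is the formula evaluated for a few population sizes (this assumes the two coefficients are in comparable units, which I have not checked).

import math

for population in (10, 100, 1_000, 1_000_000):
    oke = 0.317 * math.log(population)     # Oke (1973): warming = 0.317 ln P, in deg C
    here = 0.396 * math.log(population)    # the coefficient found above for Missouri
    print(f"P = {population:>9,}: Oke {oke:.2f}, coefficient 0.396 gives {here:.2f}")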

Which is the other thing that you learn when doing research: most of the time someone else has been there before you, and there is little that is new under the sun.

* Oke, T.R. 1973. City size and the urban heat island. Atmospheric Environment 7: 769-779.

