Saturday, April 30, 2011

Arkansas - the homogenized data results

Unfortunately I still do not have access to the CDIAC server that shows the information on the USHCN sites in Arkansas, so, as I discussed last time, I obtained the station list from the Surface Stations site and then downloaded the individual station temperatures from the GISS site. Unfortunately this provides only the homogenized data, not the TOBS data. There are some problems with that homogenization, which really require both sets of data to illustrate, but I’ll leave those for now and just do an initial review of the Arkansas data set, along with a look at the population data.

There are only 15 USHCN stations, and one GISS station (Fort Smith) on the list, so that the initial work was not that demanding.

Fort Smith GISS temperatures

The difference between the GISS station temperature and the average of the USHCN station values was calculated using the same approach as in the rest of the series:

Difference between the GISS temperature and the average of the homogenized station data for the USHCN stations in Arkansas

The shape of the curve seems to show an increasing difference through about 1948, and then a decline since that time. Given the switch in some series to only using data after 1948 this is slightly curious.

Turning to the overall change in temperature of the state over the century, and now using the homogenized data (hence the purple) rather than the raw TOBS data, the line still does not show much of an increase over the past century.

Average temperature change in Arkansas over the past 115 years.

The temperature has been increasing at the rate of some 0.05 deg F per century, which is not a lot.
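For anyone wanting to reproduce trend numbers like this, the fit is just an ordinary least-squares line through the annual means, with the slope scaled to degrees per century. This is a minimal sketch; the temperatures here are synthetic stand-ins, since the real series would come from the USHCN/GISS downloads described above:

```python
import numpy as np

# Synthetic stand-in for the downloaded annual state-average temperatures (deg F);
# the real series would come from the USHCN/GISS data described above.
rng = np.random.default_rng(0)
years = np.arange(1895, 2010)
temps = 60.0 + 0.0005 * (years - 1895) + rng.normal(0, 0.8, years.size)

# Ordinary least-squares linear fit; the slope is in deg F per year
slope, intercept = np.polyfit(years, temps, 1)
print(f"Trend: {slope * 100:.2f} deg F per century")
```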

Arkansas is some 260 miles by 240 miles in size running from roughly 89.6 deg to 94.7 deg W in Longitude, and 33 deg to 36.5 deg N in Latitude. The highest point is at 839 m, and the lowest is at 16.7 m, with the average elevation of the state being at 198 m. The average USHCN station is at 176 m, and the GISS station is at 134 m.

Checking the populations, Rohwer is too small to appear in city-data, so I used its ZIP code area to find a population of 77. Fort Smith (the site of the GISS station) has the largest population of the stations.

Looking at the effects of the geography on the temperatures, beginning with the effect of Latitude.

Effect of latitude on temperature in Arkansas

Latitude shows the usual strong correlation, whereas with longitude:

Effect of longitude on temperature in Arkansas

As has been discussed before, any correlation is likely an artifact reflecting changes in elevation.

Effect of elevation on station temperature in Arkansas

When one looks at the effect of population using the homogenized data, rather than the TOBS data, there is essentially no effect of population; in fact, such correlation as exists is negative.

Effect of population on temperature in Arkansas after the data has been homogenized.


Tuesday, April 26, 2011

OGPSS - The Appalachian Basin, simple lessons in the beginning

There are a number of different ways of getting oil from a reservoir, and, to radically oversimplify, the harder you try to maximize the rate of production from a well, the shorter its overall life will be, and the more likely it is that the total amount of oil the well recovers will also decline with that increased extraction rate. This lesson, that of controlling the extraction rate from a reservoir to maximize total recovery, is one that Saudi Aramco practices, and that approach leaves me quite doubtful that they will ever produce more than 12 mbd (at least as long as the present faction of the House of Saud remains in power). To do so would hurt their absolute recoveries, and with slower, more controlled extraction they have demonstrated that they can recover higher percentages of the oil originally present in the reservoir. It is not, however, a philosophy that is widely adopted. Instead of the well owners having that prescience, it has on occasion been external forces that have driven the extraction rates from the wells, and thus controlled the length of their lives and their total ultimate recoveries. The Appalachian oilfields in the United States, where much of this story began, have a history that helps illustrate some of these points.

The Oil Age is famously credited as having started with the drilling of the Drake Well in Titusville, Pennsylvania in 1859, although Ohio claims that the “First Oil Well” was the Thorla-McKee well, drilled and cased in oak in 1814, and there is similarly a claim for the first great American oil well in Burkesville, Kentucky in 1829. I mention these to indicate that it was not initially that difficult to encounter oil in different parts of the country, and relatively close to the surface. The Drake well had only to extend down to 69.5 ft before it struck oil, and it made around 20 barrels of oil a day – but it was this well that opened the way to the oil industry of today.

Geological column at the Drake well site. The oil was found in the Riceville Shale (The Paleontological Research Institution)

Following on from that development, the production of oil from the nearby oilfields, first of Pennsylvania and then also of West Virginia, in what became known as the Appalachian Basin, dominated early American crude oil supply:
Consider this - Pennsylvania was responsible for 1/2 of the WORLD'S production of oil until the East Texas oil boom of 1901.
Yet when we look at the map of the top oilfields ranked by proven reserves, as issued by the EIA, none of the fields in that region even appears today.

Top 100 oilfields ranked by proven reserves in 2009 (EIA)

There are two thoughts that come from this. The first is that just because an area produced oil at one time does not mean that it can continue to do so. And as productive fields are exhausted and production moves elsewhere, the remaining oil in a field becomes less attractive as a reserve to be developed. The original Drake well at Titusville, for example, stopped producing after two years. Yet many of the wells that continued in production were not worked with the tools that drive higher production today, and so tended to last much longer, though at low levels of output.

Which brings me to a second thought, exemplified by wells in the Gulf of Mexico that were damaged during the hurricanes in 2005. Until then those platforms had been producing small quantities of oil on a regular basis. But after the hurricane destruction, the small quantities of oil that could still be recovered often did not justify the financial investment in the extensive new drilling and repair that would have been required to restore productive wells. And so the remaining oil in parts of the area was abandoned. Many of the wells in Appalachia ended up in a similar state.

The oil and gas fields of PA lie in the upper left (North-West) corner of the state, and are, at the moment, under much greater scrutiny again because of the natural gas that is found in the Marcellus shale that runs through the state.

Conventional Oil and gas fields in PA (PA Department of Conservation and Natural Resources )

The Marcellus is not shown on the above map, but I showed its extent in the recent post on the EIA shale gas report. Twenty-four fields are currently listed. Conditions during the oil boom at the end of the 19th Century were even more intense than those of the gas boom of today.
Less than twelve months ago McDonald, Pennsylvania, eighteen miles west of Pittsburgh, was a sleepy and commonplace little coal-mining town. In six months it doubled in population and became the busiest and most typical oil town in the country.

In this oil field on June 1st 1891 there were three completed oil wells. By November there were over three hundred wells in various stages, of which nearly one half were in and about McDonald.
Interestingly, as Caplinger notes:
Drilling and production in the United States have been governed by the price of oil, classic supply-and-demand responses to economic reality. After the initial discovery of oil at Drake’s well in 1859, prices were up to $16.00 a barrel, but high production quickly lowered the price to a fraction of this. Appalachian oil averaged $1.80 a barrel after the initial period of extremely high prices, reaching a low of 56 cents per barrel in 1892, and a high of $5.35 a barrel in 1920. Between 1859 and 1930 the price of oil in the United States averaged $1.34 per barrel. Well production in Appalachia averaged 0.6 barrels a day (the nation’s lowest), while the national average was 8.4 barrels per day.

However, Appalachia’s oil wells were the longest-lived in the country, and proved their worth in long-term production. On average, they returned 2.9 percent of their initial drilling cost per year.
Of the 600,000 wells drilled in the United States between 1859 and 1930, about half were still producing in 1930. Some 242,000 of the total had been drilled in Appalachia, of which 20% were dry holes and 149,000 were still producing in 1930. The average cost to drill a well in Appalachia at that time was $11,474.
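As a rough cross-check on those numbers (my back-of-the-envelope sketch, not Caplinger's own calculation): at the averages quoted, an Appalachian well grossed a little under $300 a year, or roughly 2.6% of its drilling cost, reasonably close to the 2.9% return cited, with the gap plausibly due to the years of higher prices.

```python
# Averages quoted above for Appalachian wells, 1859-1930
barrels_per_day = 0.6
price_per_barrel = 1.34      # dollars, period average
drilling_cost = 11_474.0     # dollars

annual_revenue = barrels_per_day * 365 * price_per_barrel
annual_return = annual_revenue / drilling_cost
print(f"Gross revenue: ${annual_revenue:.0f}/year, "
      f"return on drilling cost: {annual_return:.1%}")
# -> Gross revenue: $293/year, return on drilling cost: 2.6%
```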

Not all wells were that unproductive, although the higher producers also tended to be shorter-lived (ibid):
The Funk (or Fountain) well produced 300 barrels a day for over a year before suddenly going dry. The Empire well, drilled in September of 1861, produced 3,000 barrels a day for eight months, slowing to 1,200 barrels by May 1862 before production dropped to nothing. In October of 1861, a well drilled by William Phillips on the Tarr farm on lower Oil Creek began flowing 4,000 barrels a day, probably the largest flowing well in the region’s history.
The Tarr Farm property, which once controlled the price of crude, is now long gone.

Yet not all wells ran dry that quickly. The well at the McClintock property in PA was drilled in 1861 and, with long-term care and low production, is still producing 150 years later. It has, however, never produced more than 50 barrels of oil a day, and today produces only 1 - 2 bd, which is largely sold to tourists.

Unfortunately the typical well in PA was not that long-lived, and the 350,000 wells that have been drilled since 1859 have collectively produced around 1.4 billion barrels of oil. The Appalachian oil fields of PA had reached a production of 2 million barrels a year (mbpa) by 1862, and peaked at 32 million barrels a year (roughly 88 kbd) in 1892. By 1990 production had fallen to 2 mbpa again.
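Since this series moves between annual and daily production rates, the conversion is worth making explicit; it is just division by 365, so the 32 million barrels a year peak works out to a little under 90 thousand barrels a day:

```python
def bbl_per_year_to_bd(bbl_per_year):
    """Convert an annual production figure to barrels per day."""
    return bbl_per_year / 365

print(f"{bbl_per_year_to_bd(32_000_000):,.0f} bd at the 1892 peak")
# -> 87,671 bd at the 1892 peak
```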

PA oil production (from Caplinger, after Harper and Cozard)

Oil production from the basin had fallen to not much more than a million barrels a year by 2001, and even though PA has no severance tax, as the following table shows, once a resource is gone . . .

State regulations on wells in the Appalachian Basin (Center for Rural Pennsylvania)

There was some move toward looking deeper (and more expensively) at regions of the state for more resources, but that has been overtaken by the developments in the Marcellus Shale. Crude oil production from PA did, however, rise from 1.3 mbpa in 1997 to 3.6 mbpa in 2008.


Saturday, April 23, 2011

Arkansas unavailable - hopefully just a short break on the temperature study

How long should a government-run website be down? I ask because I was getting ready to download the data for the continuing study of state temperatures on Friday, intending to pull the statistics from the USHCN network for Arkansas and then compile the data, plot the curves, and write the post today. Unfortunately, when I went to the site through which I normally reach the data:

my laptop got a notice that:

Well, without the data it is hard to do any analysis. I presumed that it was a transient glitch, and went off to do other things. I came back this evening (after driving through the driving snow to Dartmouth College at noon) and got the same message, about a day later. So I went back to the NASA main website for climate change at GISS, and clicked on the:

pointer and got the same message. (Which is also where the NOAA folk tell you to go and look).

The long way around would be to download the names of the Arkansas state stations from the Surface Stations website and then go to the GISS site and download the data from there. Unfortunately, that would only provide the homogenized data for the stations (as I found with an earlier state), not the TOBS data, which correlates better with the geographical information and population.

So I will presume that this is just a temporary problem at the site, and that, after the Easter Weekend is over, perhaps the server will be put back on line, and I can get back in business.

For now, Happy Easter, and my apologies.

Oh, a small update: out of curiosity I entered one of the file data references that I have used before, to see if I could get around the problem that way, and got the information that:

Safari can't find the server "

Which was where it came from - wonder what's going on??


Friday, April 22, 2011

Drought, renewable energy, Texas, London and Florida

The Governor of Texas has asked the denizens of the state to pray for rain. The state has not had serious rain for months, and the drought conditions have turned severe.

The current drought conditions in Texas.

Conditions are not anticipated to get any better in the next few months. The recent predictions are that the conditions will persist in the area through July.

Predictions of the weather changes through July (NOAA )

Droughts around the world are one of the unfortunate consequences of changing weather, in much the same way as the high snowfalls we have seen in other parts of the country this past winter. There will likely be arguments about the underlying causes (with folks forgetting that droughts have occurred throughout history), but this post was prompted by a potentially game-changing development.

This week Thames Water, which supplies 687 million gallons of water a day to the inhabitants of London and the immediate vicinity, was running its first water desalination plant. The region has faced droughts in the recent past, as have other regions of the United Kingdom. Earlier solutions included building large water reservoirs; the one at Kielder is the largest man-made reservoir in Europe. There is, however, only a limited amount of land that can be made available for this use, and so desalination provides an alternative way of meeting the increasing demand for water.

In the United States droughts have threatened the viability of nuclear power stations, since an adequate supply of cooling water (as we recently learned again in Japan) is essential to safe plant operation. And therein lies one of the rubs to the situation. As the spokesman for Thames Water noted, the intent is not just to run the plant when there is a drought, since that would be too late. Rather, at times of lower rainfall the plant (which can produce up to 150 million gallons a day) will pump clean water into the reservoirs, maintaining their integrity and building up a reserve that will reduce the impact of a drought if one occurs. The plant was apparently completed last June, at which time it was expected that it would be used only in times of drought. Within the last year that thinking has changed, and the plant is now running intermittently, partly to train operational staff and partly, potentially, to help meet demand.
According to the Environment Agency, average water use is 148L per person per day in the UK and in the south-east of England it’s as high as 170L (far higher than the government target of 130L). Despite the popular perception of London as an overcast, rain-soaked city, its rainfall rate is, in fact, on a par with Rome, Dallas and Istanbul.

Schematic of the flow path through the desalination plant.

One question that I had relates to the amount of power that will be needed at Beckton. A calculation assuming that it takes 4 kWh to produce a cubic meter of fresh water suggests that it will need about a 21 MW plant. In 2009 the company CEO noted:
The (20 MW) plant will be the first in the world to generate all its energy on-site, from renewable sources, including recycled cooking oil.
However, the plant has been controversial, not least because, although the plan is to use rapeseed oil, it could also burn palm oil, which is apparently cheaper and much more controversial. Planning permission for a second plant nearby at Southall was refused in June 2010.
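The power estimate is sensitive to the units of the capacity figure. Beckton's output is usually quoted as 150 million litres (not gallons) a day; a sketch of the arithmetic, using the 4 kWh per cubic meter figure above, gives about 25 MW on the litres reading, in the same range as the 21 MW estimate and the 20 MW plant, and nearly four times that if the figure were US gallons:

```python
US_GALLON_M3 = 0.003785  # cubic meters per US gallon

def desal_power_mw(m3_per_day, kwh_per_m3=4.0):
    """Average electrical power (MW) to produce a given daily freshwater volume."""
    return m3_per_day * kwh_per_m3 / 24.0 / 1000.0  # kWh/day -> MW

# 150 million litres/day = 150,000 m3/day
print(f"Litres reading:  {desal_power_mw(150_000):.0f} MW")   # -> 25 MW
# If the capacity were 150 million US gallons/day instead:
print(f"Gallons reading: {desal_power_mw(150e6 * US_GALLON_M3):.0f} MW")   # -> 95 MW
```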

Biofuel-powered stations in the UK, including the plant in the Tees Valley that burns 300,000 tons of recycled wood and specially grown plantation wood, have had a somewhat mixed reception.

But to get back to the original idea of desalination: the largest plant in the United States is in Tampa, with a maximum projected capacity of 35 million gallons/day, though it currently produces only some 25 million, sufficient for 10% of the region’s water needs. It went on line in 2008.
The plant uses about 44 million gallons per day (mgd) of seawater from a nearby power plant’s cooling system, which is pretreated with sand filters and a diatomaceous earth filtration system to remove particles. Reverse osmosis filters then separate 25 mgd of freshwater from the seawater. The unused concentrated seawater is diluted with up to 1.4 billion gallons of cooling water before it is discharged to the bay and that dilution is why environmental studies show no measurable salinity change in Tampa Bay related to plant production.
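A quick mass balance on those figures (my arithmetic, not the plant's documentation): the plant recovers roughly 57% of the intake as freshwater, and the remaining 19 mgd of concentrate is diluted about 74-fold before discharge, which is why the salinity change in the bay is not measurable:

```python
seawater_in_mgd = 44.0        # intake from the power plant's cooling system
freshwater_out_mgd = 25.0     # reverse-osmosis product
dilution_gal_per_day = 1.4e9  # cooling water available for dilution

recovery = freshwater_out_mgd / seawater_in_mgd
concentrate_mgd = seawater_in_mgd - freshwater_out_mgd
dilution_ratio = dilution_gal_per_day / (concentrate_mgd * 1e6)
print(f"Recovery: {recovery:.0%}, concentrate: {concentrate_mgd:.0f} mgd, "
      f"dilution: ~{dilution_ratio:.0f}:1")
```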
Which brings us back to Texas. In the latest report on desalination in the state, two possible developments are cited. There is a plan to install a 2.5 million gallon a day plant at Brownsville and a 1 million gallon a day facility on South Padre Island. Unfortunately the South Padre Island initiative failed in a bond election last year, and its future is considered doubtful. Meanwhile the Brownsville Public Utilities Board is considering combining a renewable energy source (shades of the UK) into the plant, in order to leverage funding, and possibly qualify for DOE funds.

But in the meantime, as the discussion continues, the drought gets worse. The discussions started with an initiative in 2002. As those in Texas may find out the hard way, waiting until the crisis is upon them makes it too late to construct the solution that might have helped. It appears that those in the UK and Florida were just a little more prescient. It might be noted that the initial planning for the Florida plant started in 1996.


Wednesday, April 20, 2011

The rising number of earthquakes in America, not Iceland

This is just a short post drawing attention to a couple of things that are starting to look a little odd. The first was the subject of a post over at Chiefio’s website pointing out that there has been a recent doubling in the number of earthquakes per day along the Western coast. This led him to a piece suggesting that the recent activity might be a precursor to a volcanic eruption near Hawthorne, Nevada – since there have been over 500 of these quakes in that area. And if one goes to the USGS site that maps their location and strength, the latest image shows that activity is still ongoing.

Recent earthquakes in the Hawthorne area (USGS )

The concentration of activity around a fixed point argues more for volcanic activity than for a growing risk of a major earthquake. Looking at the region in general, there is also a lot of activity along the major fault lines through California, but that tends not to be as focused, suggesting that the normal movement along the faults is continuing, though intensified just south of the border.

Recent Earthquakes on the west coast (USGS)

The two phenomena may well be separate. Generally I look for a lack of earthquake activity along a fault line as an indicator that the fault is not moving in that region, and thus stress is building up, and a larger quake will be required at some time in the future to relieve that greater stress. And in that regard it is the zone without the current quakes along the fault path that is more worrying to me than the zones where there is a lot of quaking, and thus movement.

In contrast, it is the focusing of many earthquakes in a small area that suggests the cause might be volcanic activity. One really also needs the vertical locations of the epicenters, however, to determine whether magma is moving toward the surface, which is generally a warning of something in the offing.

The other thing that has me a little puzzled is the opposite situation. I have been monitoring the quakes in Iceland since the eruption last year, as reported by the Icelandic Met Office, and recording all those that exceed magnitude 3 (at the bottom of this post). I note that after seeing about a hundred quakes in the past year, there hasn’t been one (greater than 3) since March 12th. Which is kinda odd.

The two events may or may not be related; we will have to see how things progress, but they are worth noting and keeping a closer watch on to see what happens next.


OGPSS - an explanation as to where these talks go next

One of the problems in trying to project future demand and supply of oil and the other fossil fuels is that the decisions on their availability and use are often controlled by factors other than just their geological availability. Yet, at the same time, fuels cannot be created out of thin air (or empty rock), nor can viable technologies be created similarly, purely by having the prevailing governing bodies pass the appropriate legislation (see, for example, cellulosic ethanol production). If fossil fuels are to be brought to market in a timely manner, there are certain basic steps that have to have occurred.

The first step is that the deposit, of whatever type, has to have been found and identified. Thus, when folk discuss the shortfalls that are inevitably coming in the supply of crude oil, the first indication usually cited is the decline in the discovery of sufficient new oil in reservoirs to replace that which is being removed. And so, when the question is raised as to whether we are going to run out of oil, and if so when, the first place to look is at what resources remain that could be counted as a future reserve for production.

And in that definition lies the first of the stumbling blocks that many commentators fail to recognize in writing about the future availability of fuels. It is a point that I made in my discussion of shale gas, namely that while there may be a lot of it out there, at present the volumes that are commercially producible are, in total, significantly less than the total volume (perhaps as little as 7%). The total volume that exists in the various fields is considered the resource (in this case 862 Tcf of natural gas), while the amount that can be commercially extracted is considered the reserve (in this case 60 Tcf). Confusion over the relevant values in each category, and the conditions under which volumes switch from one category to the other, has been part of the debate on the global energy future that has, on occasion, bubbled up in places like The Oil Drum.
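The resource/reserve distinction is easy to quantify from the shale gas numbers above; the commercially extractable reserve is only about 7% of the in-place resource:

```python
resource_tcf = 862.0  # total shale gas volume in place (EIA figure cited above)
reserve_tcf = 60.0    # the commercially extractable portion

print(f"Reserve as a fraction of resource: {reserve_tcf / resource_tcf:.1%}")
# -> Reserve as a fraction of resource: 7.0%
```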

So if I am to develop a valid picture of where the world future is going in terms of the fossil fuels that will continue, in large measure, to power it over the next 20 years, it is important to look at the underlying resources that are available, and whether or not they can be considered reserves. And in many cases that is a relatively easy initial assessment since the volumes in question are already being produced, or are in process of being so.

But in that process there is another, somewhat controversial, number, and that is the rate at which production from a field will decline over the time that the fuel is being extracted. Back in 2009 when I wrote the initial tech talk explaining some of the reasons for this decline in an oilfield, the assumed value for this decline rate was on average 4% per annum, yet there are fields which have declined much faster than this. For example production from the Cantarell field in Mexico fell more than 75% between 2004 and 2010, with decline rates reaching more than 12% per year. Mexico has not been alone in seeing production collapse rates of this level, and yet the impact of declining production from existing fields, and the resulting need to replace it, is a factor that is not fully recognized, yet must be in any rational discussion of future conditions. If, for the sake of example, we accept that global oil production today is 88 mbd, a global decline rate average of 4% in production from existing wells per year, will require that, just to maintain production, an additional 3.52 mbd of new production must be brought on line each year. If, however, the true rate of decline is, on average, 5% then this number jumps to 4.4 mbd, and if the true decline is 6% then it rises to 5.28 mbd. (And bear in mind that this does not include the anticipated increases in demand which still run at around 1.5 mbd).
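The replacement arithmetic in that paragraph is just current production multiplied by the assumed decline rate; a minimal sketch:

```python
def replacement_needed_mbd(production_mbd, decline_rate):
    """New production (mbd) needed each year just to offset field declines."""
    return production_mbd * decline_rate

# Global production of 88 mbd at the decline rates discussed above
for rate in (0.04, 0.05, 0.06):
    print(f"{rate:.0%} decline -> {replacement_needed_mbd(88, rate):.2f} mbd of new capacity")
# -> 4% decline -> 3.52 mbd of new capacity
# -> 5% decline -> 4.40 mbd of new capacity
# -> 6% decline -> 5.28 mbd of new capacity
```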

Now there are countries, such as the Kingdom of Saudi Arabia (KSA), that can manage production and the opening of new developments so that there is sufficient new production in existing fields that decline rates within the field can be kept to perhaps 2%, and new fields brought on line that reverse the total decline. KSA has been relatively forthright in the past in recognizing that without such activity they would have faced declining levels of production of perhaps 800,000 bd of their total of around 9 mbd of production. The problem, of course, is that all fields are finite. Particularly when production has been running for decades, there comes a point where the volumes available have been consumed, and there is nothing left.

Conventionally that has been a steadily changing process. As I noted in that earlier tech talk, vertical wells progressively fall in production. However, in recent years the production of oil and natural gas is being increasingly supplied from horizontal wells which do not have the same changes in production geometry over time. Rather the well may continue at relatively stable production levels until the underlying water used to maintain pressure in the formation, rises to the level of the well. And at that point the decline rate becomes very steep indeed.

In passing I should note that this “watering out” of the wells, which happens in oilwell production is not the likely cause of the dramatic fall in natural gas production from shale gas wells that has been documented.

I am reviewing these points about the changes in production with time, to explain why it is very difficult to make more than very broad generalizations when one talks about oil or natural gas (or coal for that matter) production into the future, looking only at the overall numbers. The volumes of fuel that will be available are more accurately assessed from considering the individual countries from which the production is and will be coming, what the potential futures of the fields in those countries are, and the other considerations (such as imminent or ongoing civil war) that might affect field production.

What I intend therefore to do next is to start with the major oil producers, as listed in the earlier review, although not always in that order. By looking at the different fields, past, present, and future, in light of existing and possible relatively novel technologies for extraction (such as, for example, burning some of the oil in place so as to help produce the rest), I will try to bring a more accurate assessment of what future production is likely to be, and thus build up an assessment of global production as the series continues. The top three historic producers have been the United States, Russia and Saudi Arabia, and so the series will start with North America, and more specifically the United States.

But particularly in these times when the stability of some of the producing countries is becoming more questionable, external factors do have to be addressed. Unfortunately many of these changes, being political, are harder to predict. As a result, while I will include the reasons for some of the decisions I use in building this series, I will not go into those in much depth. Rather I will focus more on the likely levels of future production that might be achieved.


Sunday, April 17, 2011

Iowa combined temperatures

Iowa has 23 USHCN stations ranging from Albia to Washington, but it has no GISS stations on the list, which makes life a little simpler this week. I did continue the study of how far back to run the temperature averages based on the current population estimates around the stations. The best value for Iowa came from using only the average of the last five years of data, which agrees with the results for Nebraska and Texas, while Oklahoma was debatable. So I’ll continue that check for a couple more states, but it is clear that there is some consistency in correlation between the states.

What is interesting about the USHCN stations in Iowa is that most of them are in communities smaller than 10,000, and yet there is the same trend in temperature variation with population that holds true in other states. This clearly indicates that the Goddard Institute for Space Studies (GISS) is in error to segregate its temperature data with a cut-off of 10,000 as the minimum population it considers for a station. Vide their list for Iowa. (Although they use none of these stations in the truncated number of US stations that they have shrunk down to.) Note that the following is just a sample from the list.

And here is the correlation, using the past 5-year average temperatures for the USHCN stations, using the TOBS data.

Average of the 2004-2009 temperatures for the Iowa USHCN stations plotted vs. local population.

Given that the same form of plot is evident in the other states, perhaps it is time for a gentle cough in their direction?

So let’s now look at how we got there and what other interesting things might exist in the data. Firstly the stations appear to be spread relatively evenly over the state:

Location of the USHCN stations in Iowa (USHCN)

The trend for the Time of Observation corrected data shows that temperatures have been rising in the state since 1885, at a rate of 0.5 deg F per century.

Iowa is 200 miles wide, and 310 miles long. It runs from roughly 89 deg W to 96.5 deg W, and from 40.5 deg N to 43.5 deg N. The center of the state is at Longitude 93 deg 23.1’ W, Latitude 41 deg 57.7’ N. The highest point is at 509 m, and the lowest at 146 m, with a mean elevation of 335 m. The average USHCN station is at 93.5 W, 42.2 N with an elevation of 319 m.

In that regard looking at the correlation of temperature with the geographical considerations in the state, first there is a strong correlation with latitude:

There is the correlation with longitude, which is again likely a by-product of the increasing elevation of the state as one moves west.

There is a much clearer correlation with elevation:

And so we come to the correlation of station population and temperature. As in the past, I ran a secondary calculation looking at the change in regression coefficients as I increased the number of years averaged to derive the temperature at each station. One is 2009, two is the average of 2008 and 2009, etc. The population data in this case all came from the city-data set, and there were no missing cities to look for.

I therefore used the 5-year average data to derive the relationship of temperature with population.
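The averaging-window check described above can be sketched as follows. The station data here is synthetic (a weak log-population signal plus yearly noise, with hypothetical numbers), since the point is just the mechanics of correlating n-year average temperatures with population:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stations: a weak log10(population) signal plus year-to-year noise
n_stations, n_years = 23, 10
log_pop = rng.uniform(2.0, 4.5, n_stations)
annual = (48.0 + 0.8 * log_pop)[:, None] + rng.normal(0.0, 1.5, (n_stations, n_years))

# Correlation of station temperature with log(population) as the
# averaging window widens; averaging suppresses the yearly noise
for window in (1, 3, 5):
    mean_temp = annual[:, -window:].mean(axis=1)  # average of the last `window` years
    r = np.corrcoef(log_pop, mean_temp)[0, 1]
    print(f"{window}-year average: r = {r:.2f}")
```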

It is interesting that the coefficient for the slope of the line is running relatively consistently with values that I have found for other states. (If I use a 5-year average the value for Nebraska is 0.808, and for Texas 0.875.)

There is still, however, a difference between the homogenized data shown in the USHCN record, and that of the original TOBS temperatures.


Wednesday, April 13, 2011

Gas prices and oil supply in light of EIA and OPEC monthly reports

I paid $50 to fill my tank at a gas station in Maine this morning, at a cost of almost $4 a gallon. When the Actress muttered some comment of protest, I told her that she had better get used to the price, because it is hard to see any normal reason for a decline in that price in the near future.

The EIA TWIP today was discussing the transportation fuel market this summer, and begins by noting:
Regular-grade gasoline retail prices, which averaged $2.76 per gallon last summer, are projected to average $3.86 per gallon during the 2011 driving season. The monthly average gasoline price is expected to peak at about $3.91 per gallon by mid-summer. Diesel fuel prices, which averaged $2.98 per gallon last summer, are projected to average $4.09 per gallon this summer. Weekly and daily national average prices can differ significantly from monthly and seasonal averages, and there are also significant differences across regions, with monthly average prices in some areas exceeding the national average price by 25 cents per gallon or more.
Well right now, before driving season starts, the price is $3.97 for regular – but the EIA have the “out” that this is after all Maine, which is at the end of the delivery line. Ah, well!! But I suspect that the EIA is still being a tad optimistic, and may regret that $0.25 error bar by the end of the season.

Their estimate, and the rationale for it, are given in the new Short-term Energy and Summer Fuels Outlook, with the price of West Texas Intermediate (WTI) at $112 (it has since fallen $5). The EIA is expecting the market to tighten, based on the turmoil in the Middle East and North Africa, and “robust” growth of demand. But they only increase the anticipated average price of WTI to $106 this year, and $114 next. In this I think that they are being rather too optimistic given the times. And that includes their estimate that the price of gasoline will still be below $4 (at $3.80 average) through the end of next year. (Though they do add the caveat that there is a 33% probability that prices could average over $4 this July.)

And in an aside (since the topic today is mainly crude oil) it is worth noting relative to my post on the EIA World Gas Shale report that the EIA are projecting that the Henry Hub price for natural gas will remain around $4.10 per kcf in 2011 i.e. below the 2010 average, and it will only rise to $4.55 per kcf in 2012 – which doesn’t make those gas shale drilling balance sheets look any prettier.

The oil supply problem itself is sufficiently worrying. As with others, the EIA are still predicting a global increase in demand of 1.5 mbd this year, and expect that it will rise an additional 1.6 mbd in 2012. OPEC (whose daily barrel is currently at $117) expects that, with the tragedy of the earthquake and tsunami in Japan, there won’t be quite as much growth as previously expected, and thus is only anticipating a growth in demand of 1.4 mbd this year. As their April Monthly Oil Market Report notes, they do not expect countries outside of Japan to be affected, and thus they continue to anticipate world economic growth of around 3.9%.

As I mentioned in an earlier post there were a number of Japanese refineries which were damaged, and the country, which had been refining about 4.5 mbd, saw an immediate drop to 3.1 mbd. That has now been partially restored as some refineries have increased production, and others have been repaired. However the country as a whole is still reported to be about 617 kbd short of the pre-earthquake figure. (And there are three coal-fired power plants, Haramachi (Tohoku), Kashima (Ibaraki), and Hitachinaka (Ibaraki), that are still off-line, and 9 of 210 hydro-electric plants were damaged.)

Nevertheless, Middle Eastern suppliers stopped some of their shipments to Japan, and this may well be what is being seen as a short-term decline in demand. (OPEC saw a drop of around 0.5 mbd in tanker shipments in March.) However, OPEC anticipates that there will have to be substitution for the loss of Japanese nuclear power, since that cannot be restored or replaced with equivalent new nuclear power stations in less than several years. As a result they expect that the demand for oil as a replacement fuel (the loss could be made up by about 200 kbd of oil-equivalent fuel) will increase later in the year.

OPEC expects that 0.6 mbd of the overall global increase in demand will be supplied by non-OPEC countries, with Brazil, the United States, Canada, Colombia and China increasing production, while the UK and Norway will show the greatest declines. That leaves the rest for OPEC, bearing in mind the loss from Libya and other potential losses around MENA, though OPEC itself only expects to see demand for its oil increase by about 0.4 mbd. (That arithmetic doesn’t quite compute, but never mind; I am assuming that the rise of 0.4 mbd includes the offset to cover the losses in production within OPEC, and that with the 0.4 mbd OPEC increase and the 0.6 mbd non-OPEC increase, the world will only be 0.4 mbd short, which might come from NGL increases.) Incidentally OPEC anticipates that Chinese demand will grow 0.5 mbd to 9.5 mbd.
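The arithmetic, under my stated assumption that the 0.4 mbd OPEC figure is net of losses within OPEC, works out like this:

```python
# A quick check of the OPEC supply/demand arithmetic discussed above.
# All figures in mbd, taken from the OPEC April Monthly Oil Market Report.
demand_growth = 1.4      # OPEC's projected 2011 growth in world demand
non_opec_growth = 0.6    # supplied by non-OPEC countries
opec_growth = 0.4        # expected increase in demand for OPEC crude

shortfall = demand_growth - (non_opec_growth + opec_growth)
print(round(shortfall, 1))  # the remainder, perhaps covered by NGL increases
```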

There is an interesting comment in the OPEC report relating to the poor performance of natural gas prices (as I have been discussing).
Nevertheless, a sustained upward trend in HH natural gas prices may appear if there is a radical change in the US energy policy regarding nuclear production, which seems unlikely at present. According to Barclays, in order to rebalance the US natural gas market via higher demand, it would be necessary to shutter a large amount (13-26%) of total North American nuclear capacity.
Well I have to confess that is one answer that I hadn’t thought of applying in order to get the shale drilling companies off the hook.


Tuesday, April 12, 2011

OGPSS - The new EIA Shale gas report

I had intended starting my country-by-country more detailed analysis this week, but with the publishing of the EIA compendium on world gas shale deposits, and a little controversy I got into in comments elsewhere, I thought it worthwhile spending a post looking at the global prospects for gas shale. I am indebted to Art Berman for first bringing some of the problems to my attention, though I have since looked into the issue sufficiently to draw my own opinions, which this article reflects.

The EIA document (actually prepared on their behalf by ARI) begins by noting the importance of this new resource to the US Energy picture:
The development of shale gas plays has become a “game changer” for the U.S. natural gas market. The proliferation of activity into new shale plays has increased shale gas production in the United States from 0.39 trillion cubic feet in 2000 to 4.87 trillion cubic feet (tcf) in 2010, or 23 percent of U.S. dry gas production. Shale gas reserves have increased to about 60.6 trillion cubic feet by year-end 2009, when they comprised about 21 percent of overall U.S. natural gas reserves, now at the highest level since 1971.
Aggressive development of US shale deposits began around 2005, and “technically recoverable” gas resources in the USA are now considered to be 862 Tcf. With the discovery of this potential future domestic supply, it became wise to see what the equivalent potential was elsewhere, and thus the report, which looked at some 32 countries, 48 shale basins, and 70 formations. The assessed basins (in red with an estimate, in yellow without) are shown in the map below.

Gas Shale resources evaluated by the EIA

Combined, the total resource base is then assessed to be 6,622 Tcf. Now it is important to draw a distinction at this point, because this is a resource. There is a considerable difference between a resource, which is something out there of value that may or may not be economic to develop, and a reserve, which is that fraction of the resource that is considered viable to develop. In the case of the USA for example, out of the 862 Tcf that makes up the resource, only some 60 Tcf (about 7%) is considered to be viable as an addition to the US reserves.
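The US numbers make the resource/reserve distinction concrete:

```python
# Resource vs reserve, using the US figures above: only a small slice of the
# technically recoverable resource has so far made it into reserves.
us_resource_tcf = 862.0   # technically recoverable shale gas resource
us_reserve_tcf = 60.6     # year-end 2009 shale gas reserves (EIA)

fraction = us_reserve_tcf / us_resource_tcf
print(f"{fraction:.1%}")  # roughly 7% of the resource counted as reserves
```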

Further (and a point I will get to later) gas from shale is somewhat more expensive to produce, relative to more conventional natural gas deposits. So, when the new volumes developed are put in context of the total natural gas resource available, while gas shales now provide a gain of 40% in the total volume of gas technically available, it may well be that the larger more conventionally available natural gas volumes may be less expensive to produce, and so will be developed preferentially in the next few years.

On the other hand, a domestically available reserve that does not require a nation to expend money on foreign fuels can have considerable benefit, even though it may be comparatively expensive. (And that is a judgment that brings in a lot of additional caveats, but it may very well drive development in countries without much other natural resource, and one thinks of countries such as Ukraine and Poland – where the alternate coal is becoming increasingly politically disfavored). Countries such as Russia and parts of the Middle East, where there are already large reserves of comparatively inexpensive natural gas were not considered in the study – so it is possible that the total volumes available long-term may be understated.

To derive the “technically recoverable” volumes the consultants looked at the volumes of free and adsorbed gas available, after having decided the size of the shale deposit, and then used their best judgment as to how much of this could likely be recovered. In general they thought it technically possible to recover between 20 and 30 percent, with some estimates up to 5% higher or lower. They did not consider offshore deposits, nor, critically, did they consider production costs or accessibility.
‘Free gas’ is gas that is trapped in the pore spaces of the shale. Free gas can be the dominant source of natural gas for the deeper shales.

‘Adsorbed gas’ is gas that adheres to the surface of the shale, primarily the organic matter of the shale, due to the forces of the chemical bonds in both the substrate and the gas that cause them to attract. Adsorbed gas can be the dominant source of natural gas for the shallower and higher organically rich shales.
I am not going to go into the details of the report as it pertains to individual countries, though that makes up the bulk of the information within it. Rather I am going to include those resource values and the related discussion as I come to discuss the resources available to different countries. But as an illustration that not all the shale is likely to be productive, one could consider, for example, the drilling pattern for the Barnett Shale, where most activity is concentrated in the area north of Fort Worth, and there are significant areas with very little activity at all.

Drilling activity in the Barnett shale (EIA)

However I will comment on the immediate significance of the resource, its size, and what I see as the short-term impact that these volumes will have on the global energy market. My sense is that they won’t make that much difference, once the initial hype and gasps over the size of the numbers have passed.

The reason for this is that this new resource is not cheap to develop. The initial cost comes in sinking the well, where, once the initial vertical segment has been drilled, the driller must bend or deviate the drill until, when it reaches the formation with the gas in it, it is drilling essentially horizontally. The driller must then hold the drill in the formation (which may be less than 50 ft thick) while the bit opens a hole that runs out perhaps two miles or more. The ability to sense where the bit is and to guide it along that path is a complex, expensive process, and you can’t take someone off the street and have them run a good well in a week. Over time the number of qualified drillers who can do this has grown, so that in the week of April 8, Baker Hughes reported that of the 1,782 rigs drilling in the USA some 50% were drilling for gas, and 57% of the total were drilling horizontal wells. It has taken some time for that level of expertise to develop.

Yet even with the right rig and crew, neither success nor even permission to drill the wells is assured. One has also to get the right permits, and the right prospect. And here one can look to the map of the current producing wells in the Marcellus Shale in the Eastern U.S. to illustrate the point.

Wells in the Marcellus Shale (EIA )

You can see that the activity has concentrated much more in West Virginia than, say, New York State. This is partially due to the different compositions and inclinations of the state legislatures.

But even with the rigs, the crews, the political support, and a well in a decent spot in the formation, success is not assured, or even the most likely outcome, because of the nature of the host rock. The major concern that I have had with the expectations regarding shale gas relates to the rock in which the gas is found. Most natural gas deposits are found in relatively permeable rock, so that it is relatively easy for the gas not intersected by the well or the fractures to travel through the rock to those free spaces, and thus out of the well. The permeability of shale is, to be blunt, pathetic. That is, after all, why the expensive fracturing of the well, and the slick-water injection of proppants into those fractures, is so critical. But the poor host rock permeability causes a fairly dramatic fall in production from gas shale wells after they are first brought on line. The easily accessible gas close to the fractures and the well makes its way out, and then the rest must struggle through increasingly thick layers of rock to reach the well. As a result there is a dramatic drop in production.

One of the wells that was cited in the comments debate I mentioned at the start of the post was the Day Kimball Hill #A1, which was the highest initial producer that Chesapeake ever drilled, at 12.97 mcf/day when it was brought on line. But:
The Day Kimball Hill #A1 is located in Southeast Tarrant County, Texas, and produced an average of 12.97 million cubic feet of natural gas per day in October 2009. Since shale gas wells decline sharply during the first few years, this Barnett Shale well has seen its production fall to 8.66 million cubic feet in November and 6.79 million in December.
That is a decline of roughly 48% in production in three months.
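The decline can be checked directly from the figures quoted:

```python
# Day Kimball Hill #A1 decline, from the production figures quoted above
# (million cubic feet per day, monthly averages).
october, november, december = 12.97, 8.66, 6.79

total_decline = (october - december) / october
monthly_1 = (october - november) / october    # first month's drop
monthly_2 = (november - december) / november  # second month's drop

print(f"{total_decline:.1%}")  # overall fall over the quarter
print(f"{monthly_1:.1%}, {monthly_2:.1%}")
```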

It was followed by White South #1H, which came in at 17.8 mcf/day last September. However, because of the low price of NG at the moment, the well has since been partially shut-in to produce only 4 mcf/day. Most shale gas wells don’t produce in this league: there are roughly 15,000 wells in the Barnett, and the wells in Arlington are an exception.
To join (the “monster” wells), a well’s output must average more than 8 million cubic feet of gas per day during its peak month.

Fewer than 1 in 1,000 Barnett wells has attained such a lofty yield for a month, which is enough gas to meet the heating and cooking needs for about 3,300 homes for a year, based on American Gas Association usage data.

While the Barnett Shale underlies more than 20 North Texas counties, the top “sweet spots” are in Tarrant and Johnson counties. All of the 35 biggest wells are in those counties, according to a new report by the Fort Worth-based Powell Barnett Shale Newsletter.
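The “3,300 homes for a year” figure from a peak month can be sanity-checked. The per-home annual usage below is my assumed figure, chosen in the range the American Gas Association reports, not a number from the article:

```python
# Rough check of the homes-served claim for a well peaking at
# 8 million cubic feet per day for a month.
peak_month_cf = 8_000_000 * 30   # cubic feet produced in the peak month
home_use_cf_per_year = 73_000    # assumed annual usage per home, cubic feet

homes = peak_month_cf / home_use_cf_per_year
print(round(homes))  # on the order of 3,300 homes
```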

Day Kimball Hill site from Google Earth (Barnett Shale Drilling Activity)

Obviously, if you were a partner in these wells, you made back your investment. There is a discussion of well economics which suggests that $5 per kcf is the minimum price (in 2005 costs) at which the wells can make money, but that assumes a 60% decline rate for well production over the first year. And in the other shales, as with these wells, decline rates from the initial numbers can be dramatically higher; this is not just true for the Barnett shale. For example there is a report of a decline curve for the Haynesville shale that shows declines of up to 85% in the first year.

Haynesville shale decline curve (Haynesville Shale)

These high decline rates make it more difficult for the well to generate the return on investment that folk expect when they consider only the initial production volume.
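To see why the decline rate matters so much to payback, here is a sketch using a simple exponential decline model. This is my simplification; real shale wells are usually fit with hyperbolic (Arps) curves, so it understates late-time production, but it captures the first-year effect:

```python
import math

def first_year_fraction(decline):
    """Year-1 output of an exponentially declining well, relative to a
    well producing flat at the initial rate for the whole year.
    `decline` is the fraction of the initial rate lost by year end."""
    d = -math.log(1.0 - decline)          # continuous decline constant, per year
    return (1.0 - math.exp(-d)) / d       # cumulative / (q_initial * 1 year)

for decline in (0.60, 0.85):
    frac = first_year_fraction(decline)
    print(f"{decline:.0%} first-year decline -> year-1 output is "
          f"{frac:.0%} of a flat well's")
```

With a 60% decline the well delivers about 65% of what its headline initial rate suggests for the first year; at an 85% decline that falls to about 45%, which is why the initial production number alone flatters the economics.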

The main problem that I see, in the short run, for the average shale gas well is that (as noted above) the producer really needs a price of better than $6 per kcf (and Art would argue higher) to make any money on the well. But there is sufficient natural gas available globally from more conventional sources, increasingly delivered as LNG into the United States at perhaps $4 a kcf, that it is difficult to see the short-term viability of the industry.

On the other hand, with alternative sources of fuel failing to live up to the various promises made for them, it may now be that natural gas is seen as the next hope for a broadly used vehicular fuel. Legislation has been introduced to encourage this use, and if it becomes more widely adopted then the market may grow, and prices will hopefully rise with increased demand, making shale gas viable sooner than I think.


Saturday, April 9, 2011

Utah combined temperatures - revisited

When I first began this review of state temperatures I started in Missouri and moved west in subsequent posts. As a result Utah was the fourth state that I reviewed, and then, when the USHCN site also provided the Time of Observation corrected data, I looked at that data for Utah as well. On April 4th Anthony Watts, at WUWT, wrote a post on the temperatures in Utah, based on the publication of a report by Weather Source on temperatures in that state. Like me, they looked at populations in the state, but by dividing the stations into three groups they showed that the rates of temperature rise differed between the groups.

Temperature trends for different station types in Utah (Weather Source)

The stations were categorized by location: urban; surrounded by agricultural activity (which will have some impact on temperatures); and in natural wilderness, where the environs will have a low impact on the temperatures measured by the station. The study grouped the data in a different way than I do, and also only looked at the trends since 1948, while I look back over the data since 1895.

But primarily what I wanted to do today was to restructure the Utah posts so that they are in the same format as those for the later states, where I included a couple of different plots and analyses not in the original posts. So this is a revision, and in the listing on the right side of the page it will replace the original two posts for Utah (though they will still be available through the references at the beginning of this article).

Utah has 40 stations that are not evenly distributed around the state, but are clearly concentrated along a couple of highways.

USHCN stations in Utah

There is only one GISS station in Utah, and it is in Salt Lake City. (It has a full set of data for the period). So how does the GISS data compare with the USHCN homogenized data?

On average the GISS site reads 4 degrees higher than the average USHCN site, after the data for those sites has been homogenized.

Looking at the TOBS temperatures for the state over the 115 years:

This gives a temperature rise of 1.25 degrees per century, in contrast with the homogenized value of 2.55 degrees. (In comparison, the Weather Source rates, calculated however only from 1948, give values of 4.2 degrees per century for urban locations, 2.7 degrees for agricultural sites, and 2 degrees per century for natural wilderness.)

Looking at Utah's geography, the state is 350 miles long and 270 miles wide. It runs from 109 deg W to 114 deg W, and from 37 to 42 deg N. The highest point is at 4,123 m and the lowest at 609 m; the average elevation is 1,859 m. The average USHCN station is at an elevation of 1,604 m, and the GISS station is at 1,288 m. Given that there is a strong correlation with elevation (see below) it is not surprising that the GISS station reads high. The center of the state is at 111 deg 41.1' W, 39 deg 23.2' N. The average of the USHCN stations is at 111.6 deg W, 39.4 deg N.

In regard to the variation with latitude:

Note that the regression coefficient goes from 0.16 with homogenized data to 0.23 with the original TOBS data.

And with Longitude:

This tends to confirm that the true correlation is not with longitude but rather with elevation.

Turning to the relationship with population, it is not yet clear whether averaging the last 12 or 24 years (or some value in between) gives the best temperature to correlate against. A quick check suggests that, of the three, a 12-year average is best:

And finally there is the way in which the difference between the homogenized and the TOBS data changes over the years. It is interesting to see that sudden upturn at the end.


Tuesday, April 5, 2011

Coal - the TV series

Well, thanks to the Engineer, I watched my first episode of “Coal” this afternoon. The show is on the Spike network, with a significant part of the first episode setting up the story. The episode is called “The Master Mines”, and presumably relates to the skill of one of the continuous miner operators, who is able to produce 7 cuts – each 20 ft deep – during the course of his shift. The mine owners apparently need that production each shift (and there are two a day) if they are going to keep the mine financially profitable. Well, needless to say, they don’t manage to get there, but they had to condense the action of about a month into one episode in order to generate enough suspense to keep the plot moving. (In contrast, the miners of Gold Rush: Alaska were, by the end of their season, doing that every couple of days, it seemed.) On the other hand, these folks clearly know a lot more about what they are doing.

For those who have never been in a coal mine, or had any concept of what it is like, the series does give some impression of the mining operation, but it scurried through the introduction at such a pace that I am still not that sure of the layout of the mine underground. I am going to comment on some of the operations in the mine during the rest of the post, so that if you don’t want me to spoil the suspense, then you might want to go watch the show before reading on.

The Cobalt Coal mine is on Westchester Mountain in McDowell County, WV, and two of the cast (?) took over the rights to mine the property at the beginning of the season. The way that it was financed, with a $4 million investment, apparently requires that they get enough cash flow from the coal (a high-grade metallurgical coal lying in the Sewell seam that is sold for steel making) to cover the cost of operations even in the first month. There are 35 local miners who were hired to work the mine, and they have varying levels of expertise. The 7 cuts a day (is that one or two shifts?) translate into 40 truckloads of coal leaving the property for the wash plant, where the coal is cleaned of dirt (more of which anon).

The current working operation is an adit, i.e. a tunnel that runs straight into the mountain, mined in the coal from the outside. It is driven only at seam height, and is 600 ft long from the entrance (portal) to the working face. Those who are curious about such things as the layout at the face, and what the curtain is that they drive through, might get some help from the series of posts that I wrote on coal mining some time ago. The most relevant is probably the one about continuous miners, since that is the machine that they use.

The show really does not give the audience much idea of how hard operating the continuous miner is. It used to be that the operator sat on the machine and steered it with joysticks from a cab on the side. But the machine is long, and the law says that a miner cannot go under unsupported roof. So taking the operator off the machine and controlling it remotely means that it can mine more before it has to pull back out of the tunnel it has dug, and move over to the next passage, so that the roof can be supported. The problem for the operator is that he can no longer see at all what is going on at the cutting drum. As the crew note, he has to tell by sound, and by the way the machine is responding, whether the drum is cutting coal, roof rock or floor. Every ton of rock that is mined is that much less coal produced, and that much more expense in cleaning and disposal. It is quite difficult to steer and control the machine accurately, and thus I suspect that the “apprentice” driver was much more skilled than the show let on. (Which raises a question about the night-shift operator, but that may be a series plot so I will not go there.)

However the coal varies a little in height, and so the mine has to take some of the rock out of the floor (which is generally a softer shale) in order to give enough working height for the roofbolter – a machine that inserts what I believe are resin-anchored roofbolts into the rock over the opening to hold it in place. This can mean that the machine has to cut as much as a foot deep into the underlying rock. The problem when it does (and generally it is done at the same time as the coal is being mined from the solid) is that the two get mixed together. Since the rock doesn’t burn, it has to be separated from the coal, and that is the job of the coal preparation or wash plant. There, simplistically, the mix is put into a fluid bath so that the coal floats, while the rock sinks to the bottom of the tank. The two are thus separated, and the cleaned coal can then be sold on the market. Incidentally, when the bits are cutting rock they wear out much faster, needing costly replacement.
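The float/sink separation at the wash plant turns on density. Here is a toy illustration of the principle; the specific gravities below are typical textbook values I have assumed, not measurements from this mine:

```python
# Dense-medium separation: material lighter than the bath floats (product),
# heavier material sinks (refuse). Values are illustrative assumptions.
BATH_SG = 1.6    # dense-medium bath, e.g. a magnetite suspension
samples = {"clean coal": 1.3, "bony coal": 1.8, "shale": 2.4}

for name, sg in samples.items():
    result = "floats (product)" if sg < BATH_SG else "sinks (refuse)"
    print(f"{name}: SG {sg} -> {result}")
```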

One of the “dramatic” bits of film was when part of the roof collapsed. It happens more often than you might think, and the roof has to be tested, even before it is bolted, and loose top broken down. Generally, however, miners use something a little more specialized than a geologic hammer – there are special tools for this, and the modern ones are made of fiberglass and are quite long, so that the crew are well back from any rock that might fall on them. (It wasn’t always that way, however.)

I was glad to see the inspectors featured in the first episode; a safe mining industry is not only healthier, it is also more productive, and we need a well-qualified and powerful inspectorate to achieve that goal.

Well, I may add more comments as the series progresses. It does give some picture of mining, though I have to admit reality was not usually as exciting as the series makes it. But, on the other hand, you don’t feel the muscle ache of working at that height when you’re watching it on TV – I still have knee problems from those days. (And yes, I was underground when men died, and yes, I was involved in roof falls.) And in regard to back injuries: I led a research crew that worked on mining machine concepts (among other things) for four decades, and all of us have bad backs.
