Saturday, October 31, 2009

Dr Hansen was wrong - let's change the subject

I have pointed out in recent remarks that if the debate on climate change were being conducted under normal scientific conventions, then the predictions that Dr. Hansen made back in 1988 would, by now, be considered to have been proved false, since the global temperature rise has not followed the path his models predicted. His predictions were based upon:
Scenario A assumes continued exponential trace gas growth, scenario B assumes a reduced linear growth of trace gases, and scenario C assumes a rapid curtailment of trace gas emissions such that the net climate forcing ceases to increase after the year 2000.
Carbon dioxide has continued to follow the trajectory predicted for scenario A, yet the temperature rise through today has not only failed to accelerate with it, as Dr Hansen predicted, but instead lies much closer to the curve predicted for scenario C.

Comparison of Dr Hansen’s scenario predictions against actual temperature anomalies (deg C) (predictions here, actual temperatures from GISS). Baseline: 1951-1980 mean.

I took the data from the plot of temperature predictions that Dr Hansen provided, since it allows me to easily read the predicted temperatures, and I then re-plotted them with the actual temperatures from the GISS website as the black lines, since the original plot only went as far as 2005. I did note that the tabulated data differed slightly from the actual temperature data plotted in the 2005 curve, and used the more up-to-date values for the entire series.

If one looks at the above comparison, which extends the one Dr Hansen made in 2005, the difference between the actual temperatures and his predictions is clear: the actual (black) temperatures are falling away from all but the scenario C (blue) curve, a predicted future which was only supposed to occur if carbon dioxide emissions were curtailed. There is no accelerated warming, and there was no amelioration of the rising carbon levels, and so his predictions fail.

However, rather than discuss this rather interesting fact, the global “warming” community has managed, yet again, to divert discussion by dragging the debate onto a different topic of their choosing, so that this primary point can be hidden from public view. Their topic centers on whether the globe has been warming at all, an issue that depends very much on where your temperature record starts. They might have been somewhat embarrassed to begin with 1998 (as some of the more realistic proponents recognize, following their earlier use of that year's temperature to show how fast the temperature was going up). However the commentators have chosen, very carefully, to attack the suggestion that the temperature might be cooling, rather than the more relevant issue that the global temperature is not following the models. When you can’t argue, change the subject.

Now, for what it is worth I have made clear that I am more convinced by the hundreds of peer-reviewed publications (such as those referenced by Jean Grove in The Little Ice Age, or by Brian Fagan in his books) that document the Medieval Warming Period and its predecessors and the intervening cold periods, such as those of the Dark Ages and the Little Ice Age. Thus the fact that the present warming period has not reached the extent of the last warming period, let alone those before it, does not unduly disturb me. (That is evidenced by the ground conditions – such as permafrost - in the Arctic inter alia, and the positions of the ecotones in Europe). Yet it is sadly a denigration of true scientific debate to see how vociferously those arguing for AGW change the topic, or readjust the data whenever they seem at the stage of losing a point in the debate.

If one accepts that we are in this cyclic series then arguments about whether by some fraction of a degree or other the temperature was warmer in 1934 than 2005 in the United States become significantly less critical. The question then becomes whether this Warming Period is significantly less warm than the last, or the one before that. The Roman period was apparently warmer than the Medieval one (the evidence coming among other things from the Roman ruins appearing out of the ice in the Alps), and both were warmer than we are now. But we also need to go back to those periods and see what the conditions were that developed in different parts of the globe (such as the extended droughts in what is now California) and learn from that information, and take precautions accordingly, recognizing what we might be facing as the climate changes. For if we do not learn from history, then we are bound to repeat it - as once was said.

The question as to why these periods existed – or whether the intervening cooling periods are the anomalies – is probably a much more worthwhile study, but given the urge that folk have to change the way in which energy is produced, and the vast fortunes that hang on the changes in policy that are being debated, somehow I don’t see much attention being given to those questions soon.


Friday, October 30, 2009

Turkmenistan, Nabucco, Azerbaijan and Russian natural gas

Robert Cutler has an interesting article in Gundogar this week in which he asks, concerning the recent articles questioning the size of Turkmenistan’s gas reserves, “Who stands to gain?” from the imbroglio. His conclusion is that it is likely the Russians, and certainly not the Turkmen.

The story, in brief, is that after a steadily rising projection of the size of the gas reserves in the country, the Turkmen President called in a Western auditing firm to look over the books and validate that the projections were real. The British firm, Gaffney Cline & Associates, came, looked at two fields, South Yolotan and Yashlar, and certified, a year ago, that they probably held 6 Tcm and 0.7 Tcm respectively. To put this in context, it would make South Yolotan the fourth or fifth largest gas field in the world, and would mean that Turkmenistan might have reserves as large as 80% of those in the entire Russian nation. Turkmenistan currently gets its gas from the Dovletabad field, and it is this that was supplying natural gas to Russia and points west prior to April this year.

Two stories, one Russian and one German, recently suggested that the information on which the audit was based was bogus – a claim that the auditing firm disputes. They pointed out:
”There is a very considerable volume of data to be assessed on a project of this nature. This data [comes] in a wide range of types and from a range of sources," Gillet (Jim Gillet of Gaffney Cline - ed) explained. "Therefore, in practical terms, it would be impossible to falsify it all [in a way] such that it could still appear to be coherent and could mislead an expert team. This is why companies like [Gaffney Cline] are used by organizations, such as stock exchanges and banks, to provide independent and expert opinion on such issues."
The current context is that Turkmenistan is moving away from a relatively expensive dependence on Russia to handle all its exports of natural gas. It is therefore seeking help in building a gas pipeline that would tie into Nabucco, the pipeline that would bypass Russia and bring natural gas into Western Europe.

So far the Nabucco pipeline has not been able to generate enough natural gas supply to justify its existence.
Nabucco aims to diversify gas supplies by bringing Caspian and Middle East gas to Austria via Turkey. The pipeline venture, led by Vienna-based OMV, is vying with Asian and Russian projects for access to Azeri, Turkmen, Iranian and Iraqi gas.

Nabucco, set to start operating in 2014, will get its first gas from Iraq and Azerbaijan, Dolezal said. Reinhard Mitschek, the project’s managing director, said earlier this month that 8 billion cubic meters of gas would come from Iraq in 2015, more than a quarter of the pipe’s total volume, and that Shah Deniz would provide the same amount.

The link, which will send as much as 31 billion cubic meters of Caspian-region gas a year to Europe, has been delayed by a lack of commitments from customers, suppliers and transit nations. First deliveries were originally planned for 2013.
Russia badly wants to ensure that this regional natural gas continues to flow west through its gateway. Thus it has been bringing pressure to bear on both Turkmenistan and Azerbaijan to continue to direct all their deliveries its way. And so, while the Turkmen have continued to speak favorably about Nabucco for over a year, through July and to date, Turkmenistan has yet made no firm commitment, even while claiming that it has the resource to supply Nabucco:
The Turkmen president pointed to the newly discovered gas fields, Yolatan and Othman, in the southern parts of his country, and said that huge gas reserves of the two fields have made it possible for Turkmenistan to join major international gas pipeline projects. 

Reminding that the Nabucco pipeline is at the center of the international community's attention, he added that development of gas and oil fields, construction of new facilities for refining oil and gas, construction of gas terminals, employment of modern technologies in his country's oil and gas sectors are among Ashgabat's priorities.
Ashgabat (the Turkmen capital) needs more Western support, and must convince investors that it has the long-term supplies to justify that investment.

Having made a commitment to China for up to 40 bcm per year, having committed 20 bcm to Iran, and with commitments of up to 50 bcm to Russia, could Turkmenistan also provide the gas for Nabucco? (As I noted, production for the Chinese is coming from a different set of gas fields.)
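Those commitment figures can be summed directly against the audited reserve. A back-of-the-envelope sketch, using only the volumes quoted in this post (the grouping and labels are mine):

```python
# Can the audited South Yolotan reserve cover all of Turkmenistan's
# export commitments, Nabucco included?  (1 Tcm = 1,000 bcm)

commitments_bcm_per_year = {
    "China": 40,
    "Iran": 20,
    "Russia": 50,
    "Nabucco (full capacity)": 31,
}

south_yolotan_tcm = 6                     # Gaffney Cline central estimate
reserve_bcm = south_yolotan_tcm * 1000

total_demand = sum(commitments_bcm_per_year.values())   # 141 bcm/year
years_of_supply = reserve_bcm / total_demand

print(f"Total commitments: {total_demand} bcm/year")
print(f"Years of supply from South Yolotan alone: {years_of_supply:.0f}")
```

On the audit's central figure, the one field would cover every commitment simultaneously for roughly four decades, which is the sense in which the reserve doubts matter to Nabucco's investors.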

And here, as Robert Cutler points out, there is a benefit to Russia (or Gazprom) sowing some disinformation. If investors can be caused to doubt the credibility of the long-term supply to Nabucco, then it won’t get built, and Turkmen gas will continue to flow West through Russia.

Gazprom also has a secondary claim, in that it cites a prior agreement that Russia be able to buy all of the Turkmen supply.
In fact, it complements what Russian media and officials have now insisted for many months, to wit, that Moscow has already contracted all future gas from Turkmenistan. There is a contract in principle signed under Niyazov (the former Turkmen President - ed) to provide Russia with 50 billion cubic meters per year (bcm/y), but that is subject to continual negotiations and re-negotiations over price.
Russia is also trying the same approach with the natural gas that will come to Nabucco from Azerbaijan's production at Shah Deniz (a field with about 1 Tcf of reserves). Robert Cutler notes:
For example, ever since the signature of the contract for Azerbaijan to send 0.5 bcm of gas from Shah-Deniz Phase 2 to Russia in 2010, Russian media and officials have stated at every opportunity that they will have what amounts to «first refusal» on subsequent Shah-Deniz Phase 2 production. However, no legal documents binding the Azeri side to such a bargain exist.
The Russian struggle to deny Nabucco sufficient supplies to stall its construction, while successful until now, appears to be fraying a bit at the edges. The very size of the natural gas deposits would indicate that Turkmenistan can meet all its current and anticipated commitments, and the doubts raised about the reserves are meeting more questions than immediate acceptance. (Though firing the guys in charge didn’t help bolster the credibility of the Turkmen position.)

In the meanwhile Turkey has been negotiating with Iran, and it appears that some of the natural gas in the South Pars field (which, at 14 Tcm, is one of the three fields larger than South Yolotan) may come to Turkey (around 35 bcm), with Turkey sending forward into the Nabucco line what it does not use.

In short in this continuing saga the current week looks to have been better for the Nabucco pipeline and the West, and not so good for Gazprom – but don’t even think of counting them out yet!!


Wednesday, October 28, 2009

Gasoline demand, August driving and winter weather

Wednesdays are when the TWIP (This Week in Petroleum) is issued by the EIA. Today’s cover note deals with the changing prices of distillates (which include diesel) relative to gasoline. In the past year the United States has become a net exporter of distillate and, even though the market has fallen, continues to be one, averaging about 350 kbd.

Looking at gasoline demand, as we move further into the winter heating season, with less driving, there has been a fall-off in demand, though it is still higher than a year ago.

U.S. gasoline demand (TWIP )

When one contrasts this with the changes in driving patterns, now available from the FHWA for August, overall travel is up some 0.7% over August of 2008. Looking at the curve above, the greatest impact of the recession on gasoline demand did not occur until September last year, so this represents a further sign of recovery. That is best shown by the running 12-month total of vehicle miles driven.

12 month running total of vehicle miles driven in the USA. (FHWA)

The upturn in the curve is now clearly defined. The gains are not, however, evenly distributed around the USA this month. Both the North East and the North Central parts of the country are showing a decline in driving; it is in the South and West that there have been gains, with Texas and the South Gulf showing the largest increase, of some 1.7% over last year. Both urban and rural driving are still showing gains over last year, though now falling away from parity with 2007 (which rural driving almost matched for a short time in July).
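For readers curious how a series like this is constructed, here is a minimal sketch of a trailing 12-month total of the kind the FHWA publishes. The monthly figures below are made-up illustrative numbers, not actual FHWA data:

```python
# Build a 12-month running total from a monthly series.  Each value is
# the sum of the most recent 12 months, so a new month replaces the same
# month a year earlier - which is what smooths out seasonality.

monthly_vmt = [250, 240, 260, 255, 265, 270,
               275, 272, 250, 248, 242, 245,   # first 12 months
               252, 244, 263]                  # three more months

def running_12_month_total(series):
    """Trailing 12-month sum, starting once a full year is available."""
    return [sum(series[i - 11:i + 1]) for i in range(11, len(series))]

totals = running_12_month_total(monthly_vmt)
print(totals)
```

The running total only turns upward when the incoming month exceeds the month it replaces, which is why the upturn in the FHWA curve is a meaningful signal rather than a seasonal blip.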

We are left to await the coming winter, with the debate as to how warm it will be, relative to normal, given the El Nino condition. Going to the NOAA Winter Outlook prediction site, the winter is largely predicted to be warmer than usual across the country, and as a general rule perhaps a little drier in some parts, though wetter along the coasts. That may help (?) encourage driving over burning winter fuel – we’ll just have to wait and see how it plays out.

Winter outlook for temperatures.

Winter outlook for rain and snow.

These predictions are relatively short-term, and it will be interesting to come back to this page in the Spring and see how accurate they were. (Likely better than mine from 30 years ago, to which I hope to return, perhaps tomorrow.)


How much natural gas does Turkmenistan have?

The pipeline from Turkmenistan to China is now almost complete. The section running through Turkmenistan from the Malai field will take the gas to Uzbekistan (albeit Gazprom helped build that pipeline) and thence to Kazakhstan and on to China, with deliveries of up to 40 billion cu m (bcm), or some 1.5 trillion cu ft, per year.
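As a quick sanity check on the units, the metric and imperial figures can be reconciled directly (a minimal sketch; 35.315 cu ft per cu m is the standard conversion factor):

```python
# Express the 40 bcm/year pipeline figure in trillion cubic feet.

CUBIC_FEET_PER_CUBIC_METRE = 35.315

bcm_per_year = 40
tcf_per_year = bcm_per_year * 1e9 * CUBIC_FEET_PER_CUBIC_METRE / 1e12

print(f"{bcm_per_year} bcm/year = {tcf_per_year:.2f} Tcf/year")
```

That works out to about 1.41 Tcf/year, in line with the roughly 1.5 trillion cu ft commonly quoted for the line.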

This volume, and where it comes from, has suddenly become a little more important in the overall global supply of natural gas, given that, two weeks ago, the Turkmenistan President fired the folks leading the energy companies, and it is suggested that this was over a report of false data having been provided to a firm auditing the volumes of gas reserves remaining in the Turkmen cupboard. (Hat tip to Gail)

Back in 2008 these reserves were reported to be around 10 – 14 trillion cubic meters (Tcm). However, there has been considerable question as to the accuracy of these numbers. The IEA has noted the increase in volumes with recent discoveries:
Estimates of Turkmenistan’s natural gas reserves vary considerably, introducing a large element of uncertainty into projections of its future contribution to global gas supply. Representatives of Turkmenistan have put total gas reserves in the country at more than 20 Tcm, an amount approaching the range of proven reserves in Iran or Qatar – and far more than the 2.7 Tcm included by BP in its statistical review (2008).

Since 2006, Turkmenistan has announced new gas discoveries, the South Yolotan and Osman fields in the southeast of the country, and in August 2008 also a new gas condensate field, South Gutlyayak. The Turkmenistan government announced in April 2008 its intent to undertake an international audit of the country‘s gas reserves, and first findings of this audit were announced in October 2008. While more appraisal work is needed to confirm the reserves, the vast South Yolotan-Osman field (this is now considered to be a single structure) could hold an optimum 6 Tcm of gas, with estimates for the field ranging from a low of 4 Tcm to a high of 14 Tcm.
As an alternative point of reference the CIA thinks that the reserves are only 2.83 Tcm.

It is the results of that audit that are now being questioned.
On October 12, two separate sources -- Russian journalist Arkady Dubnov, writing for the Russian newspaper Vremya Novostei, and a Germany-based non-governmental organization called the Eurasian Transition Group (ETG) -- alleged that the Turkmen government misled the independent auditors by providing them with inaccurate, hyped data. The end result was that Turkmenistan’s actual reserves are probably far lower than the estimate contained in the 2008 Gaffney Cline report.
However, while denying the accuracy of the story, the President purged the energy sector leadership, for reasons other than the above.
President Gurbanguly Berdymukhamedov has sacked the heads of Turkmengaz, Turkmenneft and Turkmenneftegazstroy for showing an "irresponsible attitude" toward their jobs.
Russian media outlets are linking the top-level purge to allegations that Turkmenistan’s gas reserves have been grossly overestimated.

As a result of the purge, Nura Mukhammedov is replacing Dovlet Mommayev as the head of Turkmengaz. Mommayev had held the post only since July. But, according to a Turkmen state news agency report, Mommayev was around long enough to be guilty of "a number of shortcomings and omissions."

Orazdurdy Hojamuradov was relieved from his post of chairman of Turkmenneft "due to transfer to another job."

Berdymukhamedov charged Hojamuradov with "not taking the necessary measures to increase oil production" and allowing the state-owned enterprise’s production to be eclipsed by foreign companies that are "twice as efficient." Annaguly Deryaev, the former minister for the oil and gas development, is taking over the top position at Turkmenneft. Oraznur Nurmuradov was appointed oil & gas minister in his place.

?tamurad Durdiyev was sacked from Turkmenneftegazstroy. According to Berdymukhamedov, the state oil and gas construction company is failing to "cope with its tasks." Durdiyev is being replaced by Akmurad Egeleev.
The Russian newspaper Vremya Novosti reported that a number of senior figures are now under investigation following the delivery of confidential information to Western companies hoping to invest in the Turkmen oil and gas industry.
The gas fields that were audited are not the ones that are being used to supply China, and China itself is involved in the assessment and production from those.

China, having been given permission to develop the gas fields that will supply the pipeline, will be developing fields along the right bank of the Amu Darya river, itself part of the major gas reserves found in the Amu Darya Basin.
“Turkmenistan guarantees that the Turkmen side will supply Turkmen natural gas in the volumes stipulated by the agreement,” the Turkmen president’s press service said.

“The Bagtyyarlyk territory is a major part of the promising gas area Sagkenar (on the right bank of the River Amu Darya)”, a Turkmen government official told ITAR-TASS on Wednesday.

Its gas reserves are 1.7 Tcm of natural gas, according to official data. The territory includes Samandepe and Altyn Asyr (Golden Age) gas fields. The former’s proven reserves are 100 bn cu.m. of sulphur-containing gas. The 500-square-kilometre Altyn Asyr has Bereketli, Pirguyy, Yanguyy and Sandykly gas fields, where seismic surveys have been recently carried out.
Having a quick look through Google Earth, the Amu Darya river is currently being extensively dredged in the region.

Amu Darya River showing the dredges at work (Google Earth)

There are several gas fields in Turkmenistan – the largest being in the Amu Darya Basin – together with the Dauletabad-Donmez field, which straddles the Iranian border and is considered to hold around 45 Tcm of recoverable gas.
Other major gas fields in the Amu Darya Basin include Gagarin and Yelkui, which are to be developed, and more recent discoveries at Beshir, Chayyrly, Sarykum, Severnyy Yankui and Yuzhnyy Shorkel which have combined reserves of 10.2 BCM. Work on developing the gas field of Byashgyzyl, in the south-eastern Kara Kum region, began in early 1998 and the field's gas reserves have been estimated at 100 BCM.

Yangyguyy, a major gas field on the right bank of Amu Darya River, is to be developed as well
There has also been some question as to the role of Gazprom in this whole affair. Until last year Turkmenistan was only able to sell the majority of its gas through Russia (it also sells some 8 bcm to northern Iran). This amounted to around 50 billion cu m per year, but as the IEA notes the Turkmen have only just started to get a decent price for their product (relative to what the Russians were selling it for – as I had noted in my earlier post on the subject).
Turkmenistan was receiving $60 per thousand cubic metres (tcm) for gas exports delivered at the Turkmenistan border. From the last quarter of 2006 the price paid by Russia increased to $100/tcm, then to $130/tcm in the first half of 2008, and to $150/tcm in the second half of 2008. Prices paid for gas from Kazakhstan and Uzbekistan have followed a similar trajectory.

In March 2008, Gazprom and the heads of the national oil and gas companies from Turkmenistan, Kazakhstan and Uzbekistan announced that trade in Central Asian gas would, from 2009, take place at European-level prices.
The problem is that Russia fixed the price, and it is significantly higher than the current price at which Gazprom can sell the product.

In fact Gazprom is in dispute with Western Europe, because its customers are failing to purchase the volumes that they had promised in earlier deals, to the current tune of around 10 bcm.

As a result there was no demand for the gas from Turkmenistan, and so there was an accidental explosion in the pipeline leading into Russia, and no gas has been delivered since. There have been several meetings, but apparently things are now so bad that even those have stopped.
For years, Gazprom has tried to monopolize gas supplies from Central Asia, but when European gas demand and prices dropped, Turkmenistan’s gas pipeline to Russia blew up just like two gas pipelines to Georgia a few years ago. Now, after the pipeline has been repaired, Gazprom refuses to accept the contracted volumes or pay the agreed-upon price. Therefore Turkmen President Gurbanguly Berdymukhamedov did not go to the CIS summit. Instead, the Chinese are filling the gap by building a large gas pipeline to Turkmenistan. China will buy most of Turkmenistan’s gas exports, as it already does from Kazakhstan.
The problem that will arise for the West, if China does end up taking most of the natural gas from Turkmenistan, is where it will be able to replace that 50 bcm delivery. This might be of greater interest to the UK, which is increasingly having to go to the world market for natural gas, but given that Russia supplies a significant portion of most of Europe's needs, it is worthy of wider consideration. Gas from the Yamal Peninsula is not yet available, or even developed, while the Barents Sea deposits are even further in the future. However, that has not stopped Gazprom from getting half of the pipeline that will deliver the gas to Poland.

Readers may note there has been a change in plan. I had planned to write tonight about the changes since I wrote my OMNI letter, but I will postpone that, since this is more of a developing story, and thus of more immediate interest.


Monday, October 26, 2009

Sometimes my predictions turn out to be wrong

Last Thursday I posted the first half of a letter that I wrote to OMNI magazine back in 1979, and in it I made some cost estimates for the potential for a satellite in space beaming energy back to the Earth. I then made some additional cost estimates for solar energy.

On a day when there is growing concern that energy prices are again heading upwards, and with OPEC maintaining control by discussing an increase in production, I am going to perhaps weaken your faith in me as a prophet. (I consider that OPEC can only maintain this control for another year or so before they too run out of enough oil to satisfy demand – provided the price remains reasonable.) I am going to do this by adding the second half of the letter that I sent to OMNI. This is where I became less of a prophet than I had expected, and I will discuss some of those inaccuracies in a follow-on post. But first, back to that 30-year-old letter:
The next alternative is the biomass option. Firstly, I have seen figures that it takes 80 gal of gasoline equivalent to raise an acre of corn (including fertilizer), and that 4 gal of gas input to biomass gives 2.5 gal of alcohol out – not an equitable exchange. Mr. Pohl's argument that "burning biomass does not (add to atmospheric CO2)" is specious. Not only does accelerating the decay time increase the rate of change and the volume involved, but it also disregards that portion of the material which turns into humus. I'm sorry, but biomass is not a significant option either. Hydroelectric and wind power are very site selective and, as a practical but mundane point, can a TV-oriented society realistically be expected to tolerate that much TV interference (from windmills)? No, while every little bit counts, I'm afraid that this is all it will amount to.

And thus we come to the big four; oil, gas, nuclear, and coal (I'd like to leave geothermal until later).

There is no question that our petroleum-based oil and gas is very rapidly diminishing. Price decontrol will have very little long-term effect on this situation, and recent studies have shown that reserves are often even lower than predicted. It hardly bears repeating that the reason this is not too evident at the pumps is that we now import almost half of what is used. The fuel component is significant in almost all manufacturing, and if our suppliers abroad put up their prices (as they will continue to do), then the result, of course, is that inflation is virtually guaranteed for as long as this continues.

The Iranian oil stoppage, and the declining levels of the international oil pool all urge that something must be done, in the short term as well as the long.

In this regard we need all the domestic fuel supplies we can develop, for at least the remainder of this century until the time in the next that the SPS or fusion becomes practical. I stress domestic, because the need for supplies is critical and must be guaranteed, and from abroad this is not a sure thing.

This, in turn, casts grave doubts on the economic installation of more nuclear reactors. Figures for economically recoverable domestic uranium seem to center around 380,000 tons, while demand projections for the reactors in service and currently planned appear to run up to 1,500,000 tons, depending on whose figures you believe.

In 1974, when the price of uranium was $7.90, spokesmen for the power companies were quoted as saying that nuclear power stations would become uneconomic if the price doubled. The price is currently $15, and this supply will run out in the near future, putting the price up further. To those who start waving the breeder flag, I would rejoin that it takes 20 years to double the fuel supply, and that it will take 10 to 15 years to get one built, which already moves us into the next century.

To be honest I don't know who to believe on gas. In 1977 we had a shortage of gas and thousands of businesses closed. I have heard that this caused a lot of companies to switch out of gas and this resulted in the current surplus, not the fact that we found that much more. In either case it is currently a popular fuel again, yet available data would indicate our supply is even worse than that of oil, a point I will return to later.

So far I have been very negative, not through malice but because energy costs are going up and we need to understand the realistic options that face us in the remainder of this century.

It is common these days to hear cries from Washington that something must be done, and conservation is the cry. But after adding insulation to my house, turning the thermostat down to 60, and cutting out pleasure driving, I don't see that I can do much more in that regard. The population continues to grow, and energy demand per capita will grow with it. To give just one reason, the ore required to produce metals gets thinner every year, and more must be mined to give us the same volume of material.

There is also a direct correlation between energy levels and jobs; to prove this, I attach a graph from a paper by Congressman McCormick.

To make this point another way, after that well known actor made his walk along the canyon for the TV cameras in the campaign which stopped Kaiparowits we did not see him in Watts explaining to the young unemployed that his actions helped ensure there would still be no jobs in the 1980's.

I say this not meaning to be facetious, but to point out that those whose major concern is with the environment must accept the social burden which is implied. Those who delay the construction of a power station must accept some of the blame for the resultant rate increase when the plant is built, or the unemployment which will result from the ultimate lack of power if it is not.

What I am also seeking to establish is that we are in a mess, and while we need the long-term solutions which will perhaps be brought about by fusion or the SPS system, we are also in desperate need of some short-term solutions.

In this regard I would like to take exception to the remarks by Mr. Pohl, who writes off coal mining in a short paragraph. I regret this because it is a very common occurrence when one reads reports on the current energy situation by a wide variety of people, and unfortunately the attitude it conveys is pervasive. One might first point out that the fact that soil is dirty does not stop farmers from growing crops, and the thousands of fatalities a year do not stop Americans from driving cars. The dirty characteristic which is attached to the industry is regrettable, and based on history more than current fact. It is not true, for example, that strip mining ruins everything it touches; there are areas in Texas and Wyoming, among others, which would show that the 1,000-plus dollars put into each acre of reclaimed land have left the land in much better condition than it was before strip mining occurred. This does not make strip-mined coal ruinously expensive, and once a recognized set of regulations can be established and operated under, I would expect that coal mining prices will stabilize. I would point out in this regard that the experience of the National Coal Board in Britain would indicate that land can be restored to at least as good a condition after strip mining as it was in beforehand.

One must accept that people are killed in coal mines and that much of the coal contains sulfur, but surely these should pose challenges to science (as does developing "cheap" solar cells) rather than be shrugged off as absolute disqualifiers. Surely if we can develop robots to operate on Mars, we can develop robots to operate within a thousand feet of the ground surface in coal seams, and thereby make mining operations safe so that miners aren't killed.

This is perhaps a challenge for the future. In the short term, coal mining will produce as much energy as is required of it; surveys indicate that, since its major use will be in power generation, the supply through the turn of the century will be demand-limited rather than supply-limited. Coal mines also take somewhere between 5 and 10 years to develop, as do the power plants they must supply.

Where then can we turn for more answers to the energy problem in the short term, and what other techniques can possibly be used in the medium term? I would like to put forward two suggestions. Firstly, there is within the United States somewhere in the region of 4 trillion tons of coal, of which proven reserves down to 3,000 ft run at approximately 1,700 billion tons. This coal contains anything from 140 to 700 cu ft of methane per ton, and therefore offers a readily available additional volume of perhaps 500 trillion cu ft of gas. We currently use approximately 20 trillion cu ft of gas a year, with conventional resources estimated at 220 trillion cu ft. (Hence my earlier comment about gas supplies.) One would therefore triple the amount of natural gas available if this resource were adequately developed. Since this supply occurs in deposits less than 3,000 ft from the surface it can very easily be accessed for utilization. It is, however, a reserve which is currently not being exploited, mainly because of legal entanglements as to who exactly owns it.
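As a sanity check on those coalbed methane numbers, here is a minimal sketch using only the figures quoted above; everything else is arithmetic:

```python
# Rough check of the coalbed methane estimate quoted in the letter.
reserves_tons = 1.7e12                          # proven coal reserves to 3,000 ft, tons
gas_per_ton_low, gas_per_ton_high = 140, 700    # cu ft methane per ton of coal

low = reserves_tons * gas_per_ton_low           # ~2.4e14 cu ft
high = reserves_tons * gas_per_ton_high         # ~1.2e15 cu ft

annual_use = 20e12       # current US gas use, cu ft/year (figure as quoted)
conventional = 220e12    # conventional resource estimate, cu ft (as quoted)

print(f"range: {low/1e12:.0f} to {high/1e12:.0f} trillion cu ft")
print(f"years of supply at current use: {low/annual_use:.0f} to {high/annual_use:.0f}")
```

The letter's "perhaps 500 trillion cu ft" sits comfortably inside the 238 to 1,190 trillion cu ft range this gives, and is indeed a bit over twice the quoted conventional resource.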

It is frequently said that Congress cannot legislate technology; however, in the situation which faces us as we move toward a solution to the energy crisis, this is one instance where a move by Congress to decide on the exact ownership of this gas would rapidly free up a major resource as a means of supply. It would also, serendipitously, make later mining of the coal a much safer operation. In regard to the CO2 build-up, one must accept that there is a problem which must be addressed. But, while we are responsible to future generations, we are equally responsible to the current and past generations. Those people now retired have as much right, if not more, to power in their lifetimes, and must also be factored into any solution.

The second option I would propose relates to a more proper use of underground space. This last winter a house built relatively close to mine, with approximately the same square footage, was left unoccupied and unheated. The temperature inside never fell below 56 degrees.

As I have mentioned earlier, it cost me up to 5,500 kilowatt-hours/month to maintain the temperature inside my house at only 6 degrees higher. There would thus be great savings if a move were made to put at least part of future construction underground. To those who say it is expensive, traumatic, and unsafe, I would add three further facts: firstly, that the underground house cost $32/sq ft to build, and this $32 is the same price as current surface house construction in the Rolla area. Secondly, studies in Texas have shown that school children are, if anything, less anxious when taught in an underground school than they were on the surface. Thirdly, caves in Missouri survived, with no evident damage, the worst earthquake in U.S. history. These structures are of course safer and much better able to withstand tornadoes, wind storms, ice storms, and the other hazards of the weather which are prevalent in these times. The development of this technology is really already with us. It can of course only be applied to new construction, but nevertheless the savings which it would lead to in the long term would be not only in energy but quite frequently also in aesthetics, and in other potential areas, since one can for example grow vegetables on one's roof. (Something one could not do underneath the solar collector which would cover my backyard.)

Well, that is my response to the energy articles in your last issue. You must forgive me for being a little long-winded, but the matter is a little complex.

Thank you for your kind attention.

Respectfully I remain,

Yours sincerely,

Well that was the letter - we'll talk about how it really turned out next time.


Sunday, October 25, 2009

Turning an Oil Well and down-hole motors

This is part of the series of technical posts on how to get fossil fuels out of the ground that appear here most Sundays. The last post in this series dealt with directional drilling, where I had mentioned the need to go back in time to the period when the then Soviet Government was developing the Volga-Ural basin, back in the 1950's. I quote from John Grace's "Russian Oil Supply":
In the Volga-Ural basin, however, particularly after recognition of the enormous potential of the deeper Devonian strata, drilling targets were further below the Earth's surface. Moreover, the older, more lithified rock of the Volga-Ural basin was harder. This required higher drilling torque, which in turn demanded superior strength drill-string steel. The Soviet steel industry was basically unable to provide high-strength drill string in volumes necessary to develop the basin.

Engineers responded with turbo-drilling, which does not depend on rotating the drill string. Instead, immediately above the bit, they placed a turbo drilling motor, which itself did the work of turning the bit. This obviated the necessity of twisting the pipe and thereby reduced the required quality of steel.

Turbo drilling radically increased productivity. Combined with the growing number of rigs available, the total number of feet of development drilling conducted per year nationwide jumped from 1.9 million feet in 1949 to 7.1 million feet by 1950 and 12.1 million feet by 1960.
They were also developing waterflood techniques in the basin at the same time, and we have discussed that in previous posts.

The idea of putting a motor directly behind the drilling bit was not new, the first Russian turbine drill having been designed in the mid-19th Century; however, it required a number of stages before the design could turn out enough power. The first patent for an American downhole turbine was granted in 1873.

For those who are not familiar with a turbine drilling motor: essentially it consists of a set of fixed turning vanes at the top of the motor (the stator vanes), which direct the flow of mud going down the hole onto a second set of vanes (the rotor vanes), which are pushed around by the flow, causing the drive shaft to which they are connected to rotate.

Single stage of a turbine motor, showing the stator and rotor vanes (Baker Hughes)

By combining a series of these stages together into a multi-stage turbine considerable torque, and speed, can be passed to the drilling bit which is attached to the rotating drive shaft.

Connection of a turbine drive to a drilling bit (Boraisegypt)
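The torque-and-speed arithmetic of such a multi-stage turbine can be sketched roughly as follows. Every number below is an illustrative assumption of mine, not a figure for any particular tool: each stage extracts a pressure drop from the mud flow, the stage torques add, and mechanical power is torque times rotation speed.

```python
import math

# Illustrative power balance for a multi-stage turbodrill (all numbers assumed).
stages = 100
dp_per_stage = 10e3          # pressure drop per stage, Pa (assumption)
flow = 0.03                  # mud flow rate, m^3/s (assumption)
rpm = 600                    # bit rotation speed (assumption)
efficiency = 0.7             # hydraulic-to-mechanical conversion (assumption)

hydraulic_power = stages * dp_per_stage * flow   # W extracted from the mud
omega = rpm * 2 * math.pi / 60                   # shaft speed, rad/s
torque = efficiency * hydraulic_power / omega    # N*m delivered at the bit

print(f"hydraulic power: {hydraulic_power/1e3:.0f} kW, torque: {torque:.0f} N*m")
```

The point the sketch makes is the one in the text: adding stages multiplies the pressure drop, and hence the power and torque, available at the bit without putting any twist into the drill pipe.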

Putting the motor at the bottom end of the drill string had a couple of other advantages. One is that it allows the hole to make angle faster, i.e. to turn in a tighter radius than if the whole pipe were rotating. While conventional rotary rigs can build angle at only 10 degrees per 100 ft, with a downhole motor the angle can build at 13-15 degrees per 100 ft. The Russian idea took a while to catch on in the West and, to his credit, a guy in Houston called Bill Maurer had a fair bit to do with that. Time and technology have, however, moved on a bit since then, and Bill's company was acquired by Noble Drilling Corp, so I can't pass on links to the firm.
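Those build rates translate directly into turn radii, on the simple assumption that the hole follows a circular arc (100 ft of hole per "build rate" degrees of turn):

```python
import math

def turn_radius_ft(build_rate_deg_per_100ft):
    """Radius of curvature for a given build rate, circular-arc assumption."""
    # 100 ft of measured hole corresponds to build_rate degrees of arc
    return 100 / math.radians(build_rate_deg_per_100ft)

print(round(turn_radius_ft(10)))   # conventional rotary: ~573 ft radius
print(round(turn_radius_ft(15)))   # downhole motor: ~382 ft radius
```

So the apparently modest improvement from 10 to 15 degrees per 100 ft shrinks the turn radius by about a third, which matters a great deal when the target is not directly below the rig.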

With the advent of down-hole motors there is no need to have the complexity of joining 30-ft lengths of drill pipe together to deliver power to the end of the bit. This had always been constrained by the steel strength and joint limitations. Now that could be designed out, and the power could be delivered to the bit hydraulically through the mud, since this could be used to drive the motor.

Later motors have included positive displacement designs, such as the progressing cavity motors which Dyna-drill illustrates with an animated figure at their web site.

For those interested in relative performance, and the gains that technology can bring there is a case study available of a well drilled with a down-hole motor and PDC bits with a rate of penetration (ROP) of 93.5 ft/hr.

Turbine motors work best at higher speeds, but to create the chips and achieve effective drilling with conventional tri-cones, rotation speeds had historically been slow. And the problem remained of creating the high thrusts across the bit that this type of drilling required, when the motor turned faster.

One answer came in response to a second problem. As the rocks that have to be drilled become harder, so the forces needed to cut through them also go up, causing a materials problem. The materials used to make the drill bits were either wearing out, or teeth were being broken out as the bits pushed through the rock.

Worn out bit (Stavanger Oil Museum)

Drill with inserts knocked out (Stavanger Oil Museum)

Until now we had tried to break the rock in compression, by pushing the tooth into the rock. But if, instead, we dragged the bit across the rock without trying to chip it, in the same way that a metal-cutting bit on a lathe peels off a layer of metal, maybe we could lower the forces on the bit.

And if we used a diamond tool to do this, then while each diamond insert would only remove a very small amount of rock, we could impregnate a whole bit face with small diamonds (much cheaper than the single stone you buy for your intended, since they are much smaller and more common). These diamonds can be dragged over the rock face to slice off very thin layers, and can do so when moved at a very fast speed. Putting the two together meant that a new drilling concept, and a new drilling bit, could be developed.

Diamond drilling bit (Stavanger Oil Museum)

The next development came with larger polycrystalline diamond compacts (PDC's or PCD's, depending on your level of technical correctness). By making these larger diamond-coated discs and setting them on the drill bit, it was easier to circulate the mud so that it kept the diamonds cool.

Used PDC bit (Stavanger Oil Museum)

This is important since, if you get the temperature of the inserts above about 300-400 degrees, the diamond starts to soften a bit and wears faster. In this regard the design of these bits is still not perfect, but it has become better.

The lower force required to drive these bits into the rock, and the ultimately faster ROP that they allowed, meant that it became easier to consider turning the well: not only through some angle downwards, to intersect an oil reservoir not directly below the rig, but to the point where the well was drilling horizontally.

And this had a lot of advantages – once the technique was developed. The first horizontal well that I know of was drilled at Rospo Mare in 1982, and it achieved a much higher initial and sustained production than the vertical and slant wells in that reservoir. The technique was also being applied to coalbed methane recovery, and is still being developed there.

But I’ll leave that development, and more until next time.

The list of talks is getting longer, and as the subject matter is also getting a little more complicated, it might be more useful for those just finding this series to start at the bottom of the list on the right, and work upwards. Because this is a relatively informal series it is also prone to the occasional short cut, which may not leave things as clear as I would like. So please ask if I need to give more detail, or, if you know more, feel free to comment.


Saturday, October 24, 2009

Is global warming regional - as in the MWP?

I was looking over Anthony Watts’ web site, and noted a picture that he had posted that shows the large number of stations around the United States that were showing record temperatures.

Places in the United States showing record temperatures (From WUWT)

What struck me was that, just as one could argue that this showed how cold we were getting, those who argue that carbon dioxide is raising global temperatures could as easily point to the stations in the Western part of the country as evidence for their argument.

The problem of trying to determine which arguments are more accurate is made more difficult where there is this isolation of data, so that those seeking information are only presented with that which supports one side – so cheers to Anthony for showing both.

But individual data points, while interesting, don't really reflect the sort of long-term changes that are predicted (or not) by various climate models. There is a considerable question over the accuracy of the methodology used to derive an average measure of the global temperature, and satellite data, which was supposed to be more accurate, has its own limitations – in measuring ocean temperatures, for example.

If one looks at the average temperature for the contiguous United States, for example,

Average US temperatures since 1880 (GISS )

One can see that the average for the country has been tending downwards over the last decade, and has not differed that much from the temperatures of the 1930's (which still hold the record for the hottest year). It is hard to argue for AGW based on this graph alone.

Yet if one compares the data from the two hemispheres, it is the Northern Hemisphere that is warming much faster than the Southern.

Hemispheric temp changes (GISS)

So if the US landmass isn't warming, and the Southern Hemisphere isn't currently doing much either, where is all the global warming coming from? A look at the current temperature anomalies shows that it is the Arctic and Europe that are warming the most:

Global temperature anomalies (GISS)

But it is interesting that the actual anomaly is only 0.54 degrees since, as I have noted before, the predictions that Dr Hansen gave in the paper most often cited as authoritative on this were that by now the temperature would be between 1.0 and 1.2 degrees above the datum, and that only with a dramatic reduction in greenhouse gases would we be at 0.65 degrees by now.

Now that is still a bit higher than the 0.42 degrees which Anthony is reporting and which is, I presume, the daily difference; but as he notes, the carbon dioxide levels are at 388 ppm, and if the impact were as severe as has been modeled the anomaly should be much higher (though the GISS graph above does show that it is heading back up).

But the other thing to comment on is that, back when those of us who bring up the Medieval Warm Period (MWP) first did so, the response of the AGW proponents was that we were discussing something that was only true of Europe and the Arctic. Now that we are seeing a pattern that shows a regionalization of the global warming pattern, I wonder if that argument can't be reversed?

It is only by openly discussing these changes and their implications that true scientific understanding can be achieved – not by hiding, or adjusting, data in less than transparent ways. Unfortunately scientific debate takes time, and evidence accumulates only slowly. How long, for example, must the current apparent cooling persist (or the accelerated increase in temperature not occur) before the model predictions are discredited? Yet without that debate we risk frittering away resources on valueless measures that will have no impact on the future.


Thursday, October 22, 2009

A 30-year old opinion on Space Power Systems

Just over 30 years ago there was a young science magazine called Omni. It ran from October 1978 through the fall of 1995. In its first April edition (1979) it ran three stories – one, by Mr. Stine, on the Satellite Power System (SPS); one on solar energy in general, by Mr. Pohl; and one on an invention by a Mr. Ovshinsky.

I was sufficiently moved by these articles to write a response to the magazine, and I just uncovered that letter. For an amusing perspective on that time, and since the first two authors are still around, I thought I would post the thoughts of a rather younger Heading Out (who wasn't Heading Out at the time). I will break it into two parts: the first, below, deals with my thoughts on solar power; the second, which is more of a general comment on energy at the time, will be posted on Monday.

I am deeply concerned by the tenor of these articles since, by implication, the same sort of promise is now being made for solar power as was made, less than 20 years ago, by the nuclear industry: "Just give us our heads and you'll have free energy for life."

With your indulgence I would like to review the energy scene as I see it, starting with solar energy and working in the opposite direction to Mr. Pohl.

Let us begin with the SPS system. If I may quote the article “ . . .the SPS system is generally feasible from the point of view of both technology and economics . . .it is technically feasible to convert that energy to microwaves and to transmit it to the Earth with no negative effects on the environment . . .this would lead to a small (5-Gigawatt or 5-million kW) SPS pilot plant operating in low earth orbit by 1987, giving us the technical and economic answers that we would need to begin construction of a full SPS system with 50 10-GW SPS units in geosynchronous orbit by the year 2000, supplying most of the projected electrical needs of the entire North American continent.”

Let’s do a little arithmetic on this prediction to understand what this means. (My facts, unless stated otherwise, are from the DOE/NASA SPS documentation.)

Firstly a 5 GW satellite would weigh up to 50,000 tons and have an area of 100 km2. Assuming that the shuttle can carry 32 tons means 1,500 plus trips by weight. Assuming an average thickness of 0.1 cm for the structure would give a volume of 100,000 cu. m. The shuttle hold is 91 cu. m. in size, so that, on a volume basis – with no voids –we’d still need about 1,100 trips. Accepting bulking and some larger components, I hope that you will agree that 1,600 trips would not be an out-of-line assumption.

To put the first SPS system in orbit by 1987 would therefore require 200 trips/year, starting now (in 1979). To then create an additional 50 10-GW stations would require, in the following 13 years, 160,000 trips, or some 11,500 trips a year.
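The launch arithmetic above can be reproduced directly from the figures given (the factor of 2 for a 10-GW station relative to the 5-GW pilot is the letter's implicit scaling assumption):

```python
# Re-running the SPS launch arithmetic from the letter (figures as quoted).
sat_mass_tons = 50_000
shuttle_payload_tons = 32
trips_by_mass = sat_mass_tons / shuttle_payload_tons        # ~1,563 trips

sat_area_m2 = 100e6            # 100 km^2
thickness_m = 0.001            # 0.1 cm average structure thickness
volume_m3 = sat_area_m2 * thickness_m                       # 100,000 m^3
hold_m3 = 91                   # shuttle hold volume, as quoted
trips_by_volume = volume_m3 / hold_m3                       # ~1,099 trips

trips_per_satellite = 1_600    # the letter's rounded-up figure
# 50 follow-on stations at 10 GW, i.e. roughly twice the 5-GW mass each:
follow_on_trips = 50 * 2 * trips_per_satellite              # 160,000 trips
print(trips_by_mass, round(trips_by_volume), follow_on_trips)
```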

In regard to the cost of the solar cells, Heliotronics have predicted (Electronics, Oct 26, 1978) a cost of $0.25/watt in 8 years, at a 10% level of efficiency – about that considered by the DOE/NASA paper. That level of cost is included in the DOE/NASA reference design, which predicts a capital cost of $2,500/kW installed; it also would require an initial R&D phase of $45 billion. (My quibble with these figures is that they are in 1977-78 dollars, admit no inflation, and assume an interest rate of 6%.)

The study also considers two other pertinent factors. The first is land for the rectennas. This has been evaluated at an average of 80 sq miles/site; land which could become permanently barred to people and hazardous to the health of all wildlife therein. To quote the report in regard to a full SPS system: "Only a small number of sites, relative to population, could be located in either the Northeast or Mid-Atlantic states. Those sites that could be identified were in fairly mountainous areas." This means that our scenic wilderness would become hazardous to our health.

The second point, which in its way is more worrisome, is the critical materials report. Two systems were evaluated – the silicon system and the gallium arsenide system. The report lists 25 commodities required to provide 2 5-GW satellites a year, and highlights those for which problems might arise. For the silicon option two items – mercury and tungsten – are considered critical. The mercury requirement of 168 tons is a problem since it would require a 10% increase in domestic production. Tungsten, at 1,220 tons, would require a 25% increase in domestic production. Although the gallium requirement is 7 tons, against a current annual production of 8 tons, this is not considered a problem in the silicon option.

The gallium arsenide option, however, has 6 critical items. The most severe of these is the gallium, of which 2,186 tons are required each year. This is set against 8 tons of domestic and 7 tons of foreign production per year. One should also consider, as the report does, that the total domestic reserve is only 2,000 tons, with a world reserve of 112,000 tons. A similar requirement for 2,356 tons of arsenic per year meets a similar problem, with an annual production of around 23 tons being predicted by the year 2000.
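The scale of the gallium problem is easy to put in perspective with the figures just quoted:

```python
# Scale of the gallium problem for the GaAs option (figures as quoted).
annual_need_tons = 2_186
domestic_prod = 8
foreign_prod = 7
domestic_reserve = 2_000
world_reserve = 112_000

ratio = annual_need_tons / (domestic_prod + foreign_prod)   # ~146x world output
years_of_world_reserve = world_reserve / annual_need_tons   # ~51 years
print(f"need is {ratio:.0f}x current production; "
      f"world reserve would last {years_of_world_reserve:.0f} years")
```

That is, the annual requirement alone exceeds the entire domestic reserve, and even the world reserve would be consumed in about half a century of construction.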

The net result of the data to date, I believe, is to show that any realistic use of the SPS system is at least 30-50 years away, as the DOE/NASA study predicts, and will probably only become economic when the material is supplied either from a lunar or an asteroidal source. (Incidentally, won't a 100 sq. km. surface act as a solar sail?)

Coming rather rapidly therefore down to Earth, where we unfortunately lose a lot of the sun's power, let us examine the current status of the solar industry. Unfortunately the arguments about gallium and arsenic still hold so far as the gallium arsenide cell is concerned (with 25% efficiency): the material is unavailable, and so we must, pro forma, accept a 10-15% cell efficiency.

Living in the mid-West, I will use Missouri as my initial location for analysis. St. Louis receives about 1,000 Btu/sq ft on a horizontal surface on an average day in March. With a peak demand of about 15 GW this would require, at a 10% efficiency level, an area of 441 sq. miles of collectors, or, adding 30% for walkways, roads etc., about 600 sq. miles.
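That collector-area figure checks out from the numbers quoted; the only step I have added is the Btu-to-watt conversion (treating 1,000 Btu/sq ft/day as an average power):

```python
# Checking the St. Louis collector-area estimate (figures as quoted).
btu_per_ft2_day = 1_000
watts_per_ft2 = btu_per_ft2_day * 1055.06 / 86_400   # ~12.2 W/sq ft average
efficiency = 0.10
peak_demand_w = 15e9                                  # 15 GW

area_ft2 = peak_demand_w / (watts_per_ft2 * efficiency)
sq_mile_ft2 = 5280 ** 2
area_sq_mi = area_ft2 / sq_mile_ft2
print(f"{area_sq_mi:.0f} sq miles of collectors; "
      f"{area_sq_mi * 1.3:.0f} sq miles with 30% added for access")
```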

I could take the analogy further and show that the 76 Quad energy demand of the United States could be satisfied by a collector area of 69,700 sq miles – the area of the state of Missouri – but I’d rather reduce the scale and talk about my back yard.

In January my energy consumption was 5,530 kWh, or an average demand of about 7 kW. Oman and Gelzer, in the Energy Technology Handbook, tell me that I need a peak supply of 5 times average demand to get me through the bad spells. My solar cell requirement should therefore be about a 35 kW system. At $0.25/watt (which I would remind you is about 1/30th of the current cost) the cells would cost some $8,750. Accepting Oman and Gelzer's figures for the costs of installation (but multiplying by the necessary factors) I get:

Solar cells . . . . . . . . . . . . . . . . $8,750 (35 kW peak at $0.25/watt)
Structure . . . . . . . . . . . . . . . . . $2,000
Fuel Cell . . . . . . . . . . . . . . . . . $7,500 (35 kW peak at $400/kW)
Electrolyzer . . . . . . . . . . . . . . . $2,450 (35 kW peak at $70/kW)
Metal hydride . . . . . . . . . . . . . . . $4,000 (4,000 lb FeTi at $0.50/lb, plus structure)
Hydrogen and Oxygen storage tanks . . . . $2,500 ($300/1,000 scf, plus controls)
Electric power conditioning/controls . . . $500
Installation . . . . . . . . . . . . . . . $1,000
Total . . . . . . . . . . . . . . . . . . $28,700

I have accepted the argument that I should use metal hydride rather than batteries, on the basis of projected cost effectiveness. The area of collector that I would require is about 5,000 sq. ft. – which is where I came in, because that is bigger than my house and backyard. And since 8% interest on $28,000 is $2,240, and my total electricity bill last year was $903, I don't think I, or probably anyone living North of me, can really afford solar electricity just yet, even at $0.25 a watt.
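The household arithmetic above can be summarized in a few lines (figures as quoted in the letter; the 744-hour January is my only addition):

```python
# The household solar economics from the letter, re-computed.
avg_demand_kw = 5_530 / (31 * 24)      # January consumption -> ~7.4 kW average
peak_factor = 5                         # Oman and Gelzer's sizing rule
system_kw = 35                          # the letter's rounded figure
cell_cost = system_kw * 1_000 * 0.25    # $0.25/W -> $8,750

total_cost = 28_000                     # letter's rounded installed cost
annual_interest = 0.08 * total_cost     # $2,240/year
last_years_bill = 903                   # total electricity bill, $
print(f"avg demand {avg_demand_kw:.1f} kW; cells ${cell_cost:,.0f}; "
      f"interest ${annual_interest:,.0f}/yr vs bill ${last_years_bill}/yr")
```

The comparison at the end is the whole argument: carrying the capital costs more than twice the annual bill it replaces, before any maintenance.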

Incidentally, in regard to solar heating, Forbes quotes the house built for Mark Hyman in Waltham, Mass., where the heating system alone cost $24,000. I don't think I can afford that option, even with the current tax credits.

If then solar power cannot be used North of me, what is the situation in the South? The general consensus would appear to be that solar systems are best developed in the desert. To which I would like to add the following points:
Firstly, for a 5-GW system we are still talking of somewhere in the region of a 100 sq. mile unit. The equipment for that installation would be installed by heavy construction equipment that would destroy the surface integrity of the desert lands. We know from experience, for example from the Badlands of the Dakotas, that when this occurs the underlying surface can be readily eroded by rain and winds.

The solar collectors will be set up at an angle to the surface and this will act as a wind trap, bringing the wind down across the exposed sand. The result will be to lift the sand into the air, and one can readily anticipate that it will fall across the surface of the collectors.

In a recent paper Hawthorne has shown that such grit, falling from a height of 1 m, will cut the reflectance of a surface by 25% within 10 seconds. One can therefore predict two things: firstly, that the collector surfaces will gather considerable amounts of dust, which will have to be cleaned off; secondly, that the surfaces will rapidly become scratched, losing their ability to transmit energy because of the impact of the sand particles. Since we are talking about somewhere around 8 million collectors, keeping them clean, keeping their energy efficiency up, and keeping them protected will in itself be a major undertaking.

One other consideration is that we have already discounted the use of gallium arsenide as a collector, which has the advantage that it can be operated at high temperatures. Since these collectors will be located in the desert, where they get hot, this will mean, as Mr. Pohl has pointed out, that they need to be cooled, and thus we come back to the ubiquitous water cooling towers. These symbols of current power station design will again be required, this time in the desert.

So that was my opinion, and some data (the real reason for putting up the post) from some 30 years ago. I'll have a comment on reality, and how some of this turned out, after Monday's post. There is a lot that I got wrong, but I'll come back to that.


Wednesday, October 21, 2009

Gasoline consumption and today's TWIP

So how far has the recovery progressed? Not being one of the many financial gurus who have been peering over the bull's entrails for the past six months, my measure, as you may have noted, has been a bit simpler. I have been looking, each month, at the number of miles driven and the gasoline consumed, as reported in the weekly TWIP by the EIA. It is not a particularly accurate measure, but it does indicate a level of activity that first started reversing its decline last April, and it has been picking up ever since. The current curve, published today, shows a continuing increase in consumption.

U.S. gasoline demand (EIA TWIP)

It does require a little clarification, since the data that the TWIP is plotting (if one goes one layer deeper into the site) appears to be finished gasoline supplied. However, since this tends to transfer fairly rapidly to the consumer, in most cases it remains a simple guide to what may be happening.

So the curve is going up, and so I wondered where we stood relative to the conditions of the economy pre-recession. Going to the tabulated data – conveniently in a spreadsheet – I simply sorted the data by the size of the demand (thousands of barrels a day).
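The sorting step is trivial, but for concreteness here is a sketch of it on made-up weekly figures (the real numbers come from the EIA spreadsheet; only the two named in the text below are actual values):

```python
# Ranking weekly gasoline demand, as described above, on illustrative data.
weeks = [
    ("2007-07-20", 9_762),   # thousand barrels/day; this and 9,538 are
    ("2009-05-22", 9_538),   # the figures quoted in the text, the rest
    ("2009-08-28", 9_480),   # are made up for illustration
    ("2008-08-01", 9_600),
]
ranked = sorted(weeks, key=lambda row: row[1], reverse=True)
for rank, (week, demand) in enumerate(ranked, start=1):
    print(f"{rank:2d}  {week}  {demand:,} kb/d")
```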
The data for the top weeks of production shows that we peaked (in the short term) in 2007.

Top 20 days of gasoline production/consumption in the United States (EIA)

However, if we look at the second block of 20:

Second top 20 days of gasoline consumption/production in the United States (EIA)

You will notice that 2009 has made it back into the list, in the weeks of May 22 (number 27) and August 28 (number 40). This is not the season for high demand, and so, even though the trend is up, the next 2009 entry does not come until number 85. But the fact that the numbers have headed back up, and that the difference between the all-time peak (9,762,000 bd) and the highest this year (9,538,000 bd) is only 224,000 bd, might suggest that public awareness of the OPEC restriction on production – or the relaxation of that quota system – may come a little earlier than we might otherwise have thought. The FHA plot of actual vehicle miles driven for August is not out yet, but the picture for May was showing that, barring the South East, the rest of the nation was starting to drive more than in the same period of 2008.

Change in regional driving habits over 2008 for May of this year (FHA)

It will be interesting to see how the winter turns out, though the use of different fuel sources, the economy and the type of winter we’re going to have are all still in question.

Interestingly, the commentary at the start of today's TWIP dealt with the need for more transparency in the global oil market, noting that information on inventories is the most opaque. Since inventories are the buffer between production and consumption, the growth of "floating storage" – an even more opaque value – means that the real trends in production can be masked and become even more difficult to predict.


Dr Chu, Dr Aleklett and the price of oil

There are a number of us who write about the situation in regard to the world supply of liquid fuels, and the future availability of those supplies. In general we began by gleaning our information from the internet, or from each other, and from those relatively amateurish beginnings a community has developed to study the condition of "Peak Oil." That community was helped immeasurably to coalesce and grow by the conferences that began under the ASPO banner, with ASPO standing for the Association for the Study of Peak Oil. Kjell Aleklett began these conferences some years ago, and has watched the growth of the community (shepherding it, as International President, where necessary) since then.

He has recently reviewed the papers given at the ASPO – USA conference in Denver, providing the type of coverage I would have liked to provide had circumstances been different.

Kjell is located outside Stockholm, at a University that I almost made it to earlier this year, and has shown, through his graduate students' dissertations, that it is possible to acquire and publish a wealth of information about the various aspects of future energy supply that casts a relatively realistic view of what we might expect in the future. (And if I don't always agree with some of the conclusions – that is, after all, the underlying basis of scientific discussion.)

I look at what he has been able to accomplish, and then I contrast this with the current U.S. Secretary of Energy, an individual who has the vast resources of one of the larger Departments in the United States Administration at his disposal. The Department's request for funding this year totaled some $25 billion. Its imperatives are listed as:
• Support deployment and expand research of cost-effective carbon capture and storage,
• Accelerate technological breakthroughs with the Advanced Energy Initiative,
• Provide additional energy security through expansion of the Strategic Petroleum Reserve,
• Foster scientific leadership with the American Competitiveness Initiative,
• Advance environmental cleanup and nuclear waste management,
• Maintain the safety and reliability of the nuclear weapons stockpile and continue transforming the weapons complex, and
• Work with other countries to prevent the spread of weapons of mass destruction
Today the Secretary noted that the rise in the price of oil to $80 a barrel was “making him nervous.”

Folk such as Dr Aleklett have studied the real situation in regard to the future of oil. Based on detailed studies of the actual rate of oilfield discoveries and oilfield production, individuals such as Rembrandt Koppelaar (ASPO Netherlands) have been able to produce high-quality analyses of the reality of the global oil situation, which have caught the attention of groups such as Global Witness, who have in turn produced a report, "Heads in the Sand," that documents some of the issues the global economy faces as future supplies of crude oil become unable to meet demand.

The evidence that forewarns of a problem has been out there for a long time. Sites such as Energy Bulletin and The Oil Drum have documented the evidence showing that non-OPEC production of crude oil has likely already peaked, and that the ability of OPEC itself to increase its production by much beyond another couple of million barrels a day is in serious question.

There is, in short, a problem, and in the United States the responsibility for resolving that problem sits at the desk of the Secretary of the Department of Energy, who, with all due respect, should not be surprised at all by the current rise in the price of crude, nor by the path that the price will take in the future. Yet he is!

At least it would not surprise him if he were paying attention. Unfortunately, however, the listing of the priorities of the Department – given above – shows that peak oil, and the related issues over the supply volumes and prices of natural gas, are not that great a concern. In regard to coal, the Secretary is more energized by waving Dr Mann's hockey stick curve relating global warming to carbon dioxide levels (regardless of the uncertainties revealed by the Wegman review, inter alia), and willingly ignores the lack of significant global warming since 1998, in order to push climate change activities (see the list above) at the price of ignoring the coming crisis in fuel supplies.

Even the British Government have recognized that, while making the politically correct genuflection toward the motif of global warming, they are responsible for the ultimate fuel supply security of the British Isles, and they have gone ahead and permitted more coal mines. Reality has a nasty way of intruding into discussions of ideology and mandating actions that provide realistic, rather than merely ideological, answers.

Unfortunately at the moment the United States does not seem as well served by its Administration in this area, since the Secretary seems woefully unaware of the underlying fragility of the energy supply situation. Sweden seems better served in this regard, since Dr. Aleklett does provide information to that government, and they seem more aware of the problem, and the steps needed to meet the situation.

Our Secretary sees the problem in a different light:
"We've repeatedly said what the world wants and needs is stable prices," Chu said. "They have been inching up recently and it's a little bit concerning."

Oil price volatility can also harm the alternative energy sector, Chu said. He said the fall in energy costs after the oil price shocks of the 70s and early 80s wiped out many clean energy companies.

To help stabilize crude prices, Chu said the administration is working to improve market transparency. In particular, he said the Energy Department is focused on teaching developing countries how to compile energy data.
Um, yes, I know, this is one of Matt Simmons' pet peeves – but you know what – I don't think it is really going to help assure our future energy supply, which might, just possibly, be something in his job description.

There is a meaning to the current rise in oil prices, control of which has now passed to the OPEC nations, at least in the short term. If the Secretary is not aware of this, it would be extremely unfortunate, not only for him but for the nation, particularly if his on-the-job training meant that he was unprepared when the next phase of this rolls around in the next year or so. Maybe in the meantime he might sit in on the odd seminar at Uppsala.


Monday, October 19, 2009

The other meaning of "Clean Coal", and related mineral preparation

There is a lot of talk about Clean Coal these days. The Federal Government is issuing, through DoE, a significant number of requests for proposals (RFPs) seeking ideas to improve combustion and to reduce the carbon dioxide emissions when coal is burned. Largely these relate to the combustion process itself and the consequent separation of the carbon dioxide, which is where there is a relatively large body of expertise within the universities, and where progress is likely to be incremental, given that most of the technologies in place are relatively mature and well-known.

Of lesser importance, it would appear, are the precursor parts of the coal cycle that prepare coal for combustion, or those post-combustion parts where the gas, after concentration, is to be sequestered. Part of this lack of interest, I suspect, comes from the lack of experience within the Government and the funding agencies in these areas. Thus there is no-one to speak for them when the funding pie is divided and money is allocated to the different processes seen to be contributing to solving the problems.

Yet in the end these are the parts of the process that will prove to have as great a contribution to the solution of the problems as any. Coal is inherently a dirty fuel – by which I mean that it is virtually impossible to go into a coal mine, or an operation where coal is being processed, without the dust leaving a residue on your skin, requiring that you either wash or take a shower to become clean again. That dust is part of the coal structure – we used to call it fusain – and it is defined as
the only constituent in coal which blackens objects with which it comes in contact.
In that classification the other constituents are clarain, durain and vitrain.

But most coal isn't found as just a mix of these four parts. Coal was formed as vegetation (including trees) collapsed into the mud, and other plants grew on top of it. Mud got into the layers between the plant remains, and occasionally local floods would carry sand and other material in and over the plants. As the swamps sank and the vegetation grew thicker, the band of material thus had thin layers of other material interspersed with the coal itself. As the layer was buried beneath later sediments (deposition was largely during the Carboniferous period, some 300 million years ago, or roughly a little longer than the time it takes the Solar System to go once around the galaxy), it was compressed, and while the vegetation turned, in the end, to coal, the included beds became the shale, sandstone or limestone layers within it.

When coal was mined a century ago the miners filled tubs of coal that held roughly a ton, with each tub marked with the miner’s “token”. When the tub came to the surface it was judged by management and if it was felt that it contained too much rock, then the tub was not counted as part of the miner’s production for the day. Thus there was a reliance on the miner himself to make sure that the coal being “loaded out” was just coal and did not contain rock, or dirt as it became colloquially known.

As mining became more automated, the pick of the individual miner was replaced by the multitude of picks mounted on the rotating drums of most mining machines. These drums are alternately raised and lowered to mine out the full section of the coal, and they indiscriminately mine, break up and load onto shuttle cars, or conveyors, the resulting mixture of small coal and waste rock. Now there is little control of coal quality at the face (apart from making sure that little of the rock above and below the coal is also mined). The coal, as a result, comes to the surface with the contained dirt still in it, and in many parts of the world that is what is then sold to the customer.

However, the rock content doesn't burn well, and the residue can fuse to form clinkers that clog furnaces and reduce firing efficiency. So as the market for the mine grows from local consumption to something larger, pressure comes to bear to "clean" the coal by "washing" it. In its simplest form this involves running the coal through a bath containing a fluid of carefully selected density. Coal floats in that medium, while the rock settles to the bottom. Thus the two are separated, and the washed coal can then be sold to the customer as something that will burn in a cleaner way.
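The float-sink logic above is simple enough to sketch in a few lines. This is only an illustration of the principle: the specific gravities below are typical textbook values for coal and its common contaminants, not measurements from any real plant, and the function name is mine.

```python
# Illustrative dense-medium ("washing") separation: a particle floats if
# its specific gravity is below that of the bath, and sinks otherwise.
def separate(particles, medium_sg=1.5):
    """Split a feed into a float (clean coal) and a sink (reject) fraction."""
    floats = [p for p in particles if p["sg"] < medium_sg]
    sinks = [p for p in particles if p["sg"] >= medium_sg]
    return floats, sinks

# Typical textbook specific gravities (assumed, for illustration only)
feed = [
    {"name": "clean coal", "sg": 1.3},
    {"name": "bony coal", "sg": 1.6},
    {"name": "shale", "sg": 2.4},
    {"name": "sandstone", "sg": 2.65},
]

clean, reject = separate(feed, medium_sg=1.5)
print([p["name"] for p in clean])   # ['clean coal']
print([p["name"] for p in reject])  # ['bony coal', 'shale', 'sandstone']
```

In a real preparation plant the "carefully selected density" is exactly the `medium_sg` knob here: raise it and more bony, high-ash coal reports to the clean product; lower it and more good coal is lost to the reject.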

So why write about this tonight, rather than holding it over for a technical talk on a Sunday? Well, the problem comes down to this – in the past the job of cleaning up the minerals that came out of the ground – whether separating the coal from the waste, or getting the valuable mineral components out of the ores and waste rock in which they are found – was allocated, at most universities, to the Mineral Preparation division, which often ended up as a sub-group of Metallurgy.

Skip forward a decade or two, and now Metallurgy departments have been merged, often with Ceramics, into Materials Engineering or Science, and it is in these departments that some of the more exciting research is done on nano-materials and the components that go into the electronic circuits on which we all rely. These folk also work on materials such as those sought to improve the operational lifetime of existing and planned power plants. Hiring new faculty into these departments will usually bring on board people who work at those cutting edges, and the world makes considerable progress from their efforts. But in the process, since in many departments total faculty numbers remain fixed, this has come at the cost of not replacing those who are experts in the fields of Extractive Metallurgy. Nor has the field of Coal Preparation seen strong support, as other issues have claimed greater visibility and funding.

So now here we are – we need cleaner coal to be supplied to the power stations, so that it can reduce the burden of dealing with the combustion products. As the ores from which valuable minerals come become leaner (the richer veins having been mined) it becomes more important, as part of the economic operation of the mine, to get the most mineral for the least cost from the ore. (Mines have closed when they could not do this well).

Who will provide that knowledge? Who is funding the research to advance it? Sadly the answers in both cases are likely to be almost no-one. It is a critical part of the continuation of our industrial society, but, being neglected, it has lost its voice and champions.

Sadly it takes more than the stroke of a pen by the Administration to create or recreate that collection of experts, and as the current generation now retires, we will all, in time, mourn their passing in one way or another.


Sunday, October 18, 2009

Drilling Deviated oil wells and a couple of legal terms

This is part of the ongoing series of posts I make on Sundays about the technology behind some aspects of getting oil, natural gas and coal out of the ground. At the moment, as I noted recently, there are places where it costs more to get the fuel out of the ground than folk are being paid for it, yet they are still pumping it out. There are a number of reasons for this, but I wanted to tie in a comment that goes back to the early days of oil production, when one could find pictures of oil derricks built one right next to another. That density has been reproduced at Kilgore, TX, where at one time there were 1,100 wells producing oil from the great East Texas Oil Field.

Reproduction of well density (from TexasEscapes )

When times got tough, each well owner would still produce all the oil possible. Why? (Apart, that is, from paying the interest on all the money borrowed to drill the well in the first place.)

Wells drilled in Ventura County, CA. (US Geological Survey)

Early photos of the oilwells, taken when the boom first began, show that they were drilled almost on top of one another, and in some difficult country.

There was an interesting ruling that came about at that time, and which has persisted since. It is called "the rule of capture" (which also pertains to water rights), and essentially it says that whatever flows into your oilwell, regardless of where it came from, is yours ('cos you "captured" it).

So let's say that you live in a nice neighborhood in downtown San Diego, for instance. And under your neighborhood someone discovers there is a rich pool of oil. Well if your next door neighbor is a fast mover, he might drill his well and suck all the oil out of your particular bit of that pool, before you can blink, and yup! It's his (or hers).

So what do you do to counter this and get what's yours before it is half-inched? The obvious answer is to drill your own well, and (rather like two siblings drinking a soda through straws from the same glass) whoever sucks hardest gets the most. Which does all sorts of nasty things to the idea of holding a resource until its value goes up – but then, that's the oil and gas business.

Well, over the last century that obsessive attitude has ameliorated a tad, and now when wells are drilled (particularly since they are a bit more expensive) we try to space them at intervals so that each gets to drain out to the point where it naturally runs out of the ability to drain further. There are intricate mathematical equations for this (which I long ago deliberately forgot), but as a first rule of thumb the industry uses a baseline of 40 acres per well. Which means that if you were looking to drill a second well you might step out and drill it some 440 yd away from the first. (A quarter of a mile, for those of us who think that way.)

Which way would you go? Well, that depends very much on the information that you managed to get from the well that you just put in, which in turn relies on the logs that you ran after drilling and on the map of the underground geology that you were given before you drilled the first well. Some of that information will relate to the permeability of the rock and the quality of the oil. The lighter the oil and the higher the permeability, the further apart you may want to space the wells (given that they are becoming just a tad expensive these days). Generally, in more favorable fields, you may space at greater intervals (gas wells tend to be about a mile apart, for example).

On the other hand, in some cases, having drilled a widely spaced set of wells you may discover that the permeability isn't quite as good as you had thought, and then you might go in and put wells at closer intervals than originally drilled, infilling the well pattern.

But there are places where this is not an easy option. One such, of course, is the current spate of discoveries and development in Deep Water. But one of the earlier places where this came about was in Western Siberia. After all, here you are, finally established on a man-made island in the marsh, and producing oil, and to drill the next well you have to drive another road through the swamp, build an island, move the equipment, recreate the drill site, before you can start drilling. As Ivan Ivanovich might have said, "why don't you be a good Comrade and let me drill over to my patch, from your site?"

And thus we get into directional drilling (and the entirely "accidental" occasional happening of "subsurface trespass"). Directional drilling as a term is now quite often used to refer to techniques for putting pipes under rivers.

Historically, wells were drilled, to as great a degree as possible, vertically downwards. There was the occasional meander, and this had to be corrected, after running a borehole survey, to make sure that the drill eventually arrived at the intended target. But in making those corrections a technology evolved that allowed wells to be steered at deliberate angles. Then the light dawned: deviated wells could be steered away from the initial well, at designated angles, and out to additional nearby possible sources.

When directional drilling was first developed, the process occurred in a series of steps. Remember that the wells are usually cased with a steel liner. So the first thing that had to be done was to cut a window in that casing. There are several ways to do this, one of the more efficient of which is to use a high-pressure stream of mud, to which a certain amount of sand has been added, fired from a jet nozzle. This abrasive jet can eat through the steel and cut out a segment, or window, so that the drill bit can reach and attack the rock.

5,000 psi pressurized abrasive waterjet cut through steel, concrete and rock, as could be used to cut a window in a casing

Now we have to make the drill deflect, and the easy way to do this was to slide a wedge into the hole right at the point where the window had been cut. This wedge, known as a whipstock, was very carefully aligned and set into the borehole with the inclined side placed so that, as a new drilling bit slid down the hole, it would be kicked off into the window and begin to drill and penetrate the rock in the required direction.

The hole could be started with a smaller bit until it was well established (maybe several feet into the new well); then this pilot bit was removed and a full-scale bit put on the drill string to allow the drill to continue creating the new hole, moving out from, and down from, the new start to reach the new target – either by a straight shot, or by drilling over to the vicinity of the new end point and then kicking back to the vertical to drill down into the new oil reservoir. The process is not that complicated, and with practice one could then drill a number of different wells out in different directions from that original hole. And from one location, or platform, one can then collectively extract the oil from an increasingly large surrounding area of the reservoir.
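For a rough feel for how far out such a deviated well can reach, simple trigonometry suffices. This is a deliberately crude sketch under my own assumptions: I assume the well kicks off at some vertical depth and then holds a single constant inclination (real surveys integrate many short, curved sections, and the function name and the example numbers are purely illustrative).

```python
import math

def displacement(kickoff_tvd_ft: float, hold_angle_deg: float,
                 md_after_kickoff_ft: float):
    """Horizontal step-out and true vertical depth for a well that drills
    straight at a constant inclination (from vertical) after the kick-off
    point. A tangent-line approximation, not a real well survey."""
    a = math.radians(hold_angle_deg)
    horizontal = md_after_kickoff_ft * math.sin(a)  # reach away from the rig
    tvd = kickoff_tvd_ft + md_after_kickoff_ft * math.cos(a)
    return horizontal, tvd

# e.g. kick off at 2,000 ft and hold 30 degrees for 4,000 ft of drilled pipe
h, tvd = displacement(2_000, 30, 4_000)
print(round(h), round(tvd))  # 2000 ft of step-out at about 5464 ft TVD
```

Even a modest 30-degree hold angle buys a lot of lateral reach, which is why one platform can drain a wide area of the reservoir from a fan of such wells.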

More recently, services have become available, such as the Autotrak system, where, just behind the drill bit (which is rotating – as are the main pipe segments – yellow in the picture below), a non-rotating piece of equipment (blue) is located that has three small rams which can push out against the well bore and direct the drilling head in the direction required.

Autotrak assembly (Oil Museum in Stavanger) – you can see one of the pads pushed out as though it was pushing the head over.

And thus it is now possible, through computer sensing of the position of the head, and control of the rams, to steer the drilling head to where you want it to go.

I said "historically" just before talking about Autotrak because, when they tried the older system in Western Siberia, they ran into a problem. The quality of Soviet steel at that time was not good enough to keep the bit turning down and through the bend it had to follow to deviate the well. And so they had to come up with a different approach. That I'll leave, perhaps, for next week's chapter.

This is a series of highly informal posts that are aimed at giving some background to what goes into drilling and production from oilwells. To keep them simple and short I have sometimes simplified a little more than I should to make the description clear, so if it isn’t, please ask.
