Sunday, November 29, 2009

Drilling Rigs and Drilling Ships

The debate about how much oil is left and recoverable in the world has brought increasing attention to the recovery of oil and natural gas from offshore. And while I suspect that most of those who comment on this site are very familiar with all the terms, some of the more general readership may not be. Let me, therefore, explain just a bit about some of the different words that are being used here - with references, videos (where I can find them), and pictures of the different types of structures that are being used. And if I miss some, please chip in either to ask or answer.

Just as on land, we need some form of drilling rig if we are going to drive a bit down through the rock and find us some oil. But, unless we are working somewhere like the North Slope where we can wait until the sea freezes over and drive ice roads out to build islands to drill from, we are going to have to find a different way of providing the infrastructure support for that bit. And here is the first distinction - because a drilling rig, in the offshore sense, is more of an exploratory tool, going out to find oil, rather than developing the known fields and bringing in production. That latter task is left more to production platforms, which can be sited where best to drain the field, and may not even use the initial holes drilled by the exploration rig.

As you may have read, the Minerals Management Service keeps track of production offshore (as well as on Indian lands), and lest you think that it is not that significant a bunch of folks, it handed out $10.68 billion in 2009.

Of the $10.68 billion, $1.99 billion was disbursed directly to states and eligible political subdivisions such as counties and parishes. Another $5.74 billion was disbursed to the U.S. Treasury; $449 million was disbursed to 34 American Indian Tribes and 30,000 individual American Indian mineral owners; $1.45 billion was contributed to the Reclamation Fund for water projects; and $899 million went to the Land & Water Conservation Fund, along with $150 million to the Historic Preservation Fund.
As with many departments of the Administration, the MMS is focusing on
the importance of renewable energy and job creation, climate impact and adaptation, and efforts to support and maintain the treasured landscapes of America in the emerging clean energy economy.
Since it also controls (for the Secretary of the Interior) lease sales – with a large upcoming one in the Gulf – it is a good site to check on periodically. Now to get back to what we’re going to do to get the oil/gas out, if we are fortunate enough to get one of those leases.

In really shallow water, or the bayous, you might launch your bit from a drilling barge, where the derrick can be assembled once the barge has been towed to the right place. Barges can be flooded to rest on the seabed in relatively shallow operations.

Drilling barge

However, they are very susceptible to bad weather, and last April one had to be raised after being sunk during Hurricane Gustav.

Raising a sunken drilling barge

So, as one moves further offshore, then one might use a self-erecting tender from a barge, but would more likely move to something which could get the drilling floor stabilized and up above the waves. These are the jack-up rigs.

Model of a jack-up rig (Stavanger Oil Museum)

Typically they have three legs that are raised as the rig moves around. Then, when it has reached the desired site, the legs are lowered to rest on the sea floor and the entire structure jacks itself up out of the sea and, hopefully, above the waves. (You can get a paper model of this rig to cut out and assemble). There is also an animated video showing the installation of a rig at a site.

They can do this to work in water depths up to around 500 ft. The discussion about damaged rigs gave links to the different rigs that have been damaged, and some of these have photos of the rigs in better days.
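The leg-length budget for a jack-up is, at its simplest, a matter of adding up heights. The sketch below is purely illustrative – the site and rig figures are made up, and a real assessment involves soil penetration analysis and storm criteria – but it shows the basic arithmetic:

```python
def leg_length_needed(water_depth, leg_penetration, tide, wave_crest, air_gap):
    """Minimum leg length: seabed penetration, plus the water column,
    plus the clearance that keeps the hull above the largest expected
    wave. All figures in feet."""
    return leg_penetration + water_depth + tide + wave_crest + air_gap

# Hypothetical site and rig figures, chosen only for illustration
required = leg_length_needed(water_depth=350, leg_penetration=40,
                             tide=4, wave_crest=45, air_gap=50)
print(required)         # 489 ft of leg needed
print(required <= 560)  # True: a rig with 560 ft legs could work this site
```

This is also why the ~500 ft practical depth limit arises: past that point the legs become too long and heavy to carry around while floating.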

As one goes out to greater depths, one will look for a more substantial vessel, and so one comes to the semi-submersible.

Semi-submersible drilling rig (Stavanger Oil Museum)

These are built either to sail themselves, or to be towed out to the site with the assembly floating; fluid is then pumped into the bottom tanks to partially submerge the vessel and thus stabilize it. One can get some idea of the size of these from some of the photos shown where (thanks to Ed Ames) Wikipedia covers the subject.

Semi-submersible rig off Brazil (Wikipedia)

Since these are floating there has to be a way of holding them in place. One way is to have them dynamically positioned, using thrusters to hold them in position, such as these.

Thrusters that go under the pontoons to stabilize a semi-submersible (Wartsila)

Note that it takes about 3 years to build such a unit. The alternative is to have the rig attached to anchors on the sea bed using cables, or tethers. (And for those interested in natural gas production, note that the same rigs are used for both).

Cables with controlled winches to stabilize a semi-submersible rig (Stavanger Oil Museum)

The connection between the well and the platform now becomes more flexible, and special connecting pipes called risers are designed to reach from the blow-out preventer (BOP) – which sits at the top of the well, but down on the seabed – up to the platform. These must allow the rig to rise and fall with the tides, and so models of the riser behavior have to be written to design ways of allowing this.
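As a very rough first-order sketch of that design problem (real riser work uses dynamic models, not this simple budget), the stroke that a telescopic joint or tensioner must absorb can be added up from the vessel motions. Every number here is made up for illustration:

```python
def required_stroke(tidal_range, heave_amplitude, storm_offset, margin):
    """Stroke the telescopic joint must absorb: the slow tidal change,
    plus wave-induced heave (peak-to-peak is twice the amplitude),
    plus set-down from vessel offset, plus a safety margin.
    All lengths in feet."""
    return tidal_range + 2 * heave_amplitude + storm_offset + margin

# Illustrative figures only, not from any actual riser specification
print(required_stroke(tidal_range=3.0, heave_amplitude=6.0,
                      storm_offset=4.0, margin=5.0))  # 24.0 ft
```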

An alternative is to use a drillship to do the exploration. The drillship has the rig mounted in the middle of the ship, and can thus move around somewhat more easily than the others. It is generally held in position by dynamic positioning while drilling. (Video here).

Drill ship (Stavanger Oil Museum)

Once the field has been established, then a larger production platform can be brought out and placed where it can, using directional drilling, reach the best places to extract oil from the field. It is these large structures – such as the Orlan platform, from which Sakhalin Island oil finally began to flow recently, or the Thunder Horse or Mars platforms – that bring in the production. Although Thunder Horse was due to produce by 2005, it was delayed by damage from Hurricane Dennis and did not get up into major production until the end of last year, while the Mars platform was extensively damaged by Katrina. It is now back in production.

One of the problems with using these large platforms for deepwater recovery is that they concentrate collection, and so when those two platforms were disabled, for example, they took about 400,000 bd out of production. And once they are damaged they are not so easily replaced. Back when I first wrote on this subject, in 2005, demand for rigs was so strong that, as the International Herald Tribune reported:
"If a customer comes today with an order, he'll have to wait until 2009 for delivery," Choo said in an interview last week. "That's how busy we are. If he's willing to pay more, he can get a rig by 2008 from our American shipyard."
By 2008 demand was such that prices had risen to half a billion dollars for a drill ship.

The original article noted that while it may only take 2 years to build a rig, the yard could only work on 8 at a time, and thus current deliveries were for 2009. The more recent story notes:
As a result, drilling costs for some of the newest deepwater rigs in the Gulf of Mexico — the nation’s top source of domestic oil and natural gas supplies — have reached about $600,000 a day, compared with $150,000 a day in 2002.

These record prices have spurred a new wave of drill-ship construction. This boom could lead to renewed offshore oil exploration that would eventually bring more supplies to the oil market, and push down prices.

Already, 16 new drill-ships are scheduled to be delivered to oil companies this year — more than double the number delivered over the last six years combined. In fact, 75 ultra-deepwater rigs should be delivered from 2008 to 2011, according to ODS-Petrodata, a firm that tracks drilling rigs.
A new design, which is circular, has also been built in China.
The Sevan Driller is the world’s first of its kind, with the most advanced deep-water drilling capabilities that allow it to drill wells of up to almost 13,500 metres (40,000 feet) in water depths of up to nearly 4,200 metres (12,500 feet) and an internal storage capacity of up to 150,000 barrels of oil.

The owner is Sevan Marine. The construction of this rig started at COSCO Nantong Shipyard in May 2007 and was relocated to COSCO’s Qidong Shipyard in April for derrick erection and final commissioning activities. The rig is due for delivery in the third quarter of this year and will be deployed by Petrobras in the Santos Basin, off the Brazilian coastline.
As of last week it was on its way to Brazil (which takes about 75 – 80 days) where it will drill in the Campos Basin in just over a mile of water.

Again this is a very, very brief and simplified look at some of the ways oil and gas can be produced from under the sea. It barely touches on some of the difficulties that are encountered, however. The sinking of the one barge shown at the top of the post could also have been repeated with pictures of other rigs in storms, and in battered condition.



Saturday, November 28, 2009

ClimateGate - and Dr Mann's changing story

The main stream media are still largely hiding the scope of the revelations from Climategate – it is much more important to them, for example, to discuss the couple that gatecrashed a dinner (the Washington Post); that Mayor Michael Bloomberg spent $102 million being re-elected (New York Times); that Tiger Woods crashed his car – but wasn’t hurt (LA Times); or that Pandas got a warm welcome when they moved to Australia for ten years (BBC News).

Commentators have largely tiptoed around the issue. It is interesting to note that Andrew Freedman at WaPo claims to be willing to view both sides, yet puts up interviews with two apologists for the “disinformation” side and promises to put up a skeptical viewpoint “in coming days.” (Perhaps by then the furor will have died down and it won’t have to be too strong a skeptic). And Eugene Robinson is willing to continue the insults:
Stop hyperventilating, all you climate-change deniers. The purloined e-mail correspondence published by skeptics last week -- portraying some leading climate researchers as petty, vindictive and tremendously eager to make their data fit accepted theories -- does not prove that global warming is a fraud.
But unfortunately he does not recognize some of the implications of the revelations.
It would be great if this were all a big misunderstanding. But we know carbon dioxide is a greenhouse gas, and we know the planet is hotter than it was a century ago. The skeptics might have convinced one another, but so far they haven't gotten through to the vanishing polar ice.
Well, apart from the fact that the total sum of Arctic and Antarctic ice hasn’t been changing much – in fact the Antarctic ice sheet has been growing – this fails to grasp an underlying change that is happening among the warmers. (It is perhaps a little early to start to find the same sort of pejorative title for folk who manipulate and hide data to promote their point of view as has been given to those skeptics, such as myself, who question some of the data).

In the e-mails that were “released” from the University of East Anglia it is clear that the reliance on tree-ring data, one of the fundamental legs to the data that went into the development of the Mann, Bradley, and Hughes “hockey stick” paper, is unreliable. Those putting out the papers (and judging whether other papers should be published) clearly knew this, but went on using it anyway. As proof thereof Dr Michael Mann and his group have just issued another paper. - but now there is a change. Whereas for the past few years those of us who brought up the existence of the Medieval Warming Period were subject to abuse and ad hominem attacks, it now appears that perhaps the "Power That Is" has changed his mind. In the new paper, the faithful are called to a new recognition:
Mann and his colleagues reproduced the relatively cool interval from the 1400s to the 1800s known as the "Little Ice Age" and the relatively mild conditions of the 900s to 1300s sometimes termed the "Medieval Warm Period."
Of course the paper notes that it wasn’t really that hot back then (though actually there is an e-mail or two that seems to contradict this). So does this mean that those of us who objected to the linear nature of the “hockey stick” line through both these periods will get some admission that we were right? Don’t rush to hold your breath. Nope, they don’t have to get that honest! And for those who will head out to Copenhagen waving the original figure – do I anticipate that folk such as Energy Secretary Steven Chu will change his talks? Well, not really; at that level admitting error almost never happens.

Interestingly, one of the results of the “new” review of data that has Dr Mann “discovering” the MWP is that his models now project:
The researchers note that, if the thermostat response holds for the future human-caused climate change, it could have profound impacts on particular regions. It would, for example, make the projected tendency for increased drought in the Southwestern U.S. worse.
Now, and see my cynical nature come through here, I don’t suppose that this has anything to do with the historic record for what happened in those regions a thousand years ago when there were severe droughts that lasted up to a couple of hundred years. (There are tree remnants in the bottoms of lakes as proof I am not making this up).

All those folk who have been saying that the models showed there was no major impact due to solar changes are going to have to do a little backpedaling now that the High Priest has spoken:
The warmer conditions of the medieval era were tied to higher solar output and few volcanic eruptions, while the cooler conditions of the Little Ice Age resulted from lower solar output and frequent explosive volcanic eruptions.
It used to be so much easier to make these changes in doctrine when the vast majority of the populace couldn’t read. But of course there is reading, and there is also understanding, and so I anticipate that we will continue to see obfuscation and data manipulation for some time to come. And those within the press will take a while to realize that some of the cardinal points that have been used to secure their belief are not, in fact, likely to prove true.

Unfortunately, at that time, I suspect that folk such as Mr Robinson will continue to think of those of us who demurred earlier as still being evil and wrong, but will just conclude that the scientists they believed in were also wrong and that scientists as a whole behave this way. It will be difficult to convince them that they are wrong – even as it has proved in the last week, where denial of the meaning of these revelations has remained quite widespread. So what to call those who remain in that state?

Thursday, November 26, 2009

Happy Thanksgiving

I shall be back after the break, I hope, for those to whom it applies, that you have a happy and peaceful Thanksgiving.

Tuesday, November 24, 2009

George Will is afloat in a sea of fuel

Last Sunday George Will wrote about the abundance of fossil fuels in his column for the Washington Post. He began by surmising that Titusville, PA might have a claim to being most responsible for the modern world through the contribution of oil to world wealth. He then went on to list some of the many folk that have predicted (falsely) the arrival of a peak in oil production and the amount of reserves that are available.
In 1977, scold in chief Jimmy Carter predicted that mankind "could use up all the proven reserves of oil in the entire world by the end of the next decade." Since then the world has consumed three times more oil than was then in the world's proven reserves.
He does not see that America, or the world for that matter, can wean themselves from a dependence on hydrocarbons, and quotes Keith Rattie, the chief executive of Questar, from a commencement speech, as well as Edward Morse, from a story in Foreign Affairs in which Morse states that there is plenty of natural gas available – perhaps a hundred years at the present rate of consumption – and that the deepwater fields just beginning to be exploited are significantly larger than thought. And he includes Daniel Yergin as stating that “the resource base of the planet is sufficient to keep up with demand for decades to come.”

From all this he concludes that it is only the environmentalists that are going to ensure scarcity for the planet. So where does one begin to disassemble what would appear, at first sight, to be a rational argument based on the utterances of three prominent individuals?

To pick the last first, it has unfortunately not been that long since I last wrote about Dr Yergin and CERA. Earlier this month I noted, after giving a number of occasions in the past that CERA had been wrong, that some of the assumptions that they were currently using to project future oil production were likely to be wrong also.

The most obvious immediate reason is the collapse of the oil industry in Mexico. Having fallen from the projected 4 mbd to potentially only 2.7 mbd next year, global supply is going to have to find another source for that 1.3 mbd. Bear in mind that Mexican production has been falling as fast as 100,000 bd per month at Cantarell and the magnitude of this problem becomes apparent. The current hope of seeing 10 mbd from Iraq by 2020 is, I would suggest, also perhaps more than a little optimistic.

So let’s tick Dr Yergin from the list, and move on to Edward Morse and the prediction of natural gas production from fields such as the Marcellus and from the deep waters of the world. This case is a little more tricky to argue, but only in the sense that there are considerable resources in both these regions of the world and that they will continue to produce significant quantities of fuel for some considerable time. The questions that are being raised are, however, more along the lines of how much and for how long will they produce and at what cost?

Evidence to date (that I am trying to explain in the tech talks, but haven’t got to the downside talks yet) is that by using new technologies these fields can be produced with very high initial yields from slickwater, multi-frac’ed horizontal wells, but that these wells have a much shorter life and a faster decline rate than was predicted when the production from these fields was estimated. The deepwater fields require dramatically higher investment costs, and while the technology and equipment are available, to some limited degree, there aren’t that many rigs that can be mobilized to drill in these conditions. Further, continued production from some of the weaker rocks that the oil lies in, at these depths, is going to remain a technical challenge, driving costs higher and higher and further limiting the levels of production available. Yes, there will be oil, but it will be more expensive, and there won’t be nearly as much available at any one given time (the production rate) as one might think – not even enough to maintain current levels of production very much longer.
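The contrast between a fast-decline shale-style well and a slower conventional well can be illustrated with the standard Arps exponential decline model. The well parameters below are hypothetical round numbers, chosen only to show the shape of the problem, not taken from any actual field data:

```python
import math

def rate(qi, D, t):
    """Arps exponential decline: q(t) = qi * exp(-D * t).
    qi in barrels/year, D as a fraction per year, t in years."""
    return qi * math.exp(-D * t)

def ultimate(qi, D):
    """Ultimate recovery under exponential decline (integral of q): qi / D."""
    return qi / D

# Hypothetical wells; rates converted from barrels/day to barrels/year
conventional = (500 * 365, 0.10)    # modest rate, 10%/yr decline
shale_style  = (2000 * 365, 0.70)   # 4x the initial rate, 70%/yr decline
for name, (qi, D) in [("conventional", conventional),
                      ("shale-style", shale_style)]:
    print(name, round(rate(qi, D, 5) / 365), "bd after 5 years;",
          round(ultimate(qi, D) / 1e6, 1), "million bbl ultimate")
```

With these made-up numbers the shale-style well starts at four times the rate but has dropped far below the conventional well within five years, and recovers less oil in total – which is the core of the concern about how these fields were estimated.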

Which brings me to look at Mr Rattie’s speech, which is one that I have considerable agreement with – except for the assumption that we can continue to grow our way into the future in terms of increasing supplies to meet a global demand for energy that will grow by 30 – 50% over the next two decades. He lists some of the anticipated growth:
The Salt Lake Tribune recently celebrated the startup of a 14 MW geothermal plant near Beaver, Utah. That’s wonderful! But the Tribune failed to put 14 MW into perspective. Utah has over 7,000 MW of installed generating capacity, primarily coal. America has about 1,000,000 MW of installed capacity. Because U.S. demand for electricity has been growing at 1-2 % per year, on average we’ve been adding 10-20,000 MW of new capacity every year to keep pace with growth. Around the world coal demand is booming – 200,000 MW of new coal capacity is under construction, over 30,000 MW in China alone. In fact, there are 30 coal plants under construction in the U.S. today that when complete will burn about 70 million tons of coal per year.
Mr Rattie also points out something that is sometimes missed in the debate about coal and natural gas fired power plants:
America has about one million MW of installed electric generation capacity. Forty percent of that capacity runs on natural gas – about 400,000 MW, compared to just 312,000 MW of coal capacity.

But unlike those coal plants, which run at an average load factor of about 75%, America’s existing natural gas-fired power plants operate with an average load factor of less than 25%. Turns out that the market has found a way to cut CO2 emissions without driving the price of electricity through the roof – natural gas’s share of the electricity market is growing, and it will continue to grow – with or without cap and trade.
That is surely true in the short term. But consider the questions about natural gas production: if natural gas were, for example, to be used at double the current rate (i.e. to produce 50% of the nation’s electricity), then even if the current prediction of 100 years of gas at current levels of consumption were true, the lifetime of the reserve would be cut to 50 years. Given the questions on production referred to above, it may be quite likely that estimates are at about twice reality, so there might be 50 years of supply; but this is likely to be consumed at somewhat higher rates than at present, so that the effective lifetime is going to be closer to 25 – 30 years, I would suspect. (And don't forget that Questar is one of the largest natural gas producers, who may be vulnerable in the fight to retain market share as the increased availability of natural gas from conventional sources struggles against supplies from the gas shales and from LNG supplies - in the short term).
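The reserve-lifetime arithmetic above can be made explicit. If consumption jumps to a fixed multiple of today's rate, the lifetime simply divides; if instead it grows steadily, the lifetime comes from solving the cumulative-consumption integral. A small sketch – the 100-year and 50-year figures are the ones discussed above, and the 2%/yr growth rate is my own illustrative assumption:

```python
import math

def lifetime_constant(years_at_current_rate, rate_multiple):
    """A reserve lasting R years at today's rate lasts R / m years
    at m times today's rate, with consumption then held constant."""
    return years_at_current_rate / rate_multiple

def lifetime_with_growth(years_at_current_rate, growth):
    """With consumption growing at fraction g per year, cumulative use
    after T years is (e^(g*T) - 1) / g times today's rate; setting that
    equal to R and solving gives T = ln(1 + g*R) / g."""
    return math.log(1 + growth * years_at_current_rate) / growth

print(lifetime_constant(100, 2))                 # 50.0 years at double the rate
print(round(lifetime_with_growth(50, 0.02), 1))  # 34.7 years at 2%/yr growth
```

Even modest steady growth eats noticeably into a "50-year" supply, which is why the effective lifetime lands well below the headline number.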

Oh I anticipate that there will be adjustments in the fuel mix in the future, the decline in oil’s share of the market will have to be replaced with some alternate fuel, and natural gas will have to carry some of this burden. I tend to see that there will be problems in getting an adequate amount of energy supplied from renewable sources, as the scale of their use grows, and the realistic availability of good sites for development of new farms reduces. (It is all too easy to just say we need the equivalent of say half of Utah, without recognizing just what percentage of Utah is actually available – just to pick some numbers for the sake of discussion).

Put this all together, and in an ideal world, when gold, oil and natural gas production could continue to increase in availability every year, then George Will’s suggestions might well come to pass. Unfortunately when there is a realistic limit to the actual volumes available, as we are now seeing with gold, so shall we soon see with oil, and before then I would suggest that Mr Will seek additional advice from elsewhere – perhaps here?

Monday, November 23, 2009

Workover Equipment

This is the slightly delayed Sunday Technical Post, and follows on the series that is listed on the right side of the page.

When we began this series of technical posts, a drilling rig was anything that punched a hole through the ground to get at the oil or natural gas underneath. Once a hole is drilled, however, there is often other work that needs to be done on the well, but now the infrastructure that helped when we drilled the well starts to get in the way when we need to do other things. And so the tool will change to what is known as a workover rig. (Though these could be the old rigs left in place on a platform after the production wells are drilled - just to keep life clear).

See there you are, having sunk your kid's inheritance into this oilwell, and it just isn't producing the way you were promised. Sure it's making oil, but the supply seems to be dropping faster than it should, or perhaps there is too much sand coming out with the oil, or one of a variety of reasons.

And suddenly the partnership is talking about hiring an oilwell service company to bring out a workover unit to come out and fix the problem. You might have heard of one or two of the small companies that carry out this sort of work; the two most prominent are Halliburton and Schlumberger, although the latter first came into the business as a company that helped log, or survey, the hole to determine the types of rock that the drill had gone through. (And in true MSM tradition I should admit that I have consulted for both these companies, and that “small” was meant as a joke).

Work-overs can deal with a wide variety of problems, but they come at the situation from a different perspective than the original well drilling. To begin with there is a cased hole that often goes all the way down to the original pay. Further the tools that will be used are not going, in large measure, to be used to drill new segments of holes, but rather to treat the original well, replace parts that have failed, or change the layer of rock that the well is getting the oil from.

Now there is a word of caution here. To work on the well the first thing that you are likely to do is stop it pumping oil. That is known as shutting the well in or killing the well. Then you bring in the work-over rig, do what needed to be done, and it leaves, and you start the well producing again. Here is the caution. Because you stopped the well producing for a while, when it restarts, in most cases, and regardless of whether the action or treatment that the company applied really worked, the well will begin by producing more oil than it did just before it was shut in.

Because they may have paid quite a bit of money for a treatment, it is sometimes amazing to me how well-educated folk will see that immediate gain and believe that a treatment that in other circumstances they would find incredible has created an improvement in production. Techniques, for example, that promise the ability to drill lateral wells out from the main bore at high rates of speed – without any discussion of where the cuttings that were so miraculously removed went to – may, at some time, be the subject of another post. The well behavior has to be monitored over a period of time to validate the improvement (`nuff said).
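One way to avoid being fooled by that post-shut-in flush is to compare averaged rates over windows, throwing away the transient period right after the well restarts. A minimal sketch, with made-up daily rates:

```python
def treatment_effect(pre_rates, post_rates, skip_days):
    """Average daily rate before a workover versus the average after
    it, ignoring the first skip_days of post-treatment production,
    which is inflated simply because the well was shut in."""
    pre_avg = sum(pre_rates) / len(pre_rates)
    settled = post_rates[skip_days:]
    post_avg = sum(settled) / len(settled)
    return post_avg - pre_avg

# Hypothetical daily rates (bd): a flush burst, then settling back down
pre = [100] * 30
post = [160, 150, 140, 130, 120] + [105] * 25
print(round(treatment_effect(pre, post, skip_days=5), 1))  # 5.0 bd real gain
```

Here the first few days look like a 60% improvement, but once the flush is excluded the genuine gain is a modest 5 bd. How long a transient to skip is itself a judgment call, which is why the monitoring has to run for a while.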

Some of the treatments that need to be carried out are not very complicated. Perhaps when the well was first drilled it was not effectively acidized, or perhaps the oil might have precipitated out some of its contents into the drill pipe as it moved from the completion zone up to the surface. Remember that the oil usually starts out more than a mile or two deep in the ground, where the temperatures can be quite hot. Then, as it flows up through the pipe, the oil can, for example, suddenly enter a section of the pipe that is being cooled by the North Sea that lies all around it and is a great heat sink. Suddenly any dissolved minerals in the oil might drop below their saturation temperature and start to crystallize out. (I have seen pipe sections where the hole diameter has been cut by more than half by crystals that grew in from the wall and were more than an inch long.) Paraffin or similar waxes that were in the oil might similarly have started to clog the pipe, or some of the carbonates and sulphates in the oil might form a precipitate, or scale, on the pipe wall as the oil flows upwards.

A casing scraper (National Oilwell Varco)

These deposits can be removed by putting a scraper onto either the drilling rod of the workover unit, or onto a wireline (a wire line) that is run from the surface, generally from a winch. A wireline can be either a slickline, a single-strand cable, or a braided line, which has a number of strands and is capable of carrying a higher payload. These lines can be run into the well very quickly (and for smaller jobs do not need to have the well killed to be used).

In a slightly more complicated case an electrical control cable or power cable can be added to the wire to power down-hole operations, particularly when packers or plugs are being used. Depending on purpose these might also be fielded using a more conventional drill string, or through coiled tubing, which I am going to talk about in a different post.

Packers are devices that are lowered into the well to isolate the well zone in which the work is to be carried out. For example, if one were going to seal off the old production zone and move to another one, one might pack off the old zone first, before pumping cement into the sealed-off segment to fill it.
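The amount of cement such a job needs is simple geometry, once you know the casing's inner diameter and the length of the interval to fill. A small sketch, using a hypothetical 7-inch casing with a 6.184-inch inner diameter over a 200 ft interval (both figures are illustrative, not from any actual job):

```python
import math

def cement_volume_bbl(casing_id_in, interval_ft):
    """Barrels of cement to fill a cylindrical length of casing of the
    given inner diameter; 1 barrel = 5.6146 cubic feet."""
    radius_ft = (casing_id_in / 12) / 2
    volume_ft3 = math.pi * radius_ft ** 2 * interval_ft
    return volume_ft3 / 5.6146

# Hypothetical squeeze job: 200 ft of 6.184-in ID casing
print(round(cement_volume_bbl(6.184, 200), 1))
```

In practice the job would be pumped with some excess over this geometric minimum, but the calculation shows why even a long sealed-off interval needs only a modest batch of cement.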

You can imagine that almost every type of repair must be carried out in this fashion. If something goes wrong down hole, then because of the limits of access it is going to take some imagination to deal with the problem, and the oilwell service companies have now provided that for a number of years.

The more simple jobs, such as sealing off a zone that has stopped being productive, or lowering a new perforating system down the well to stimulate production from the layer of rock, to cleaning the screens at the bottom of the well that keep the rock in place, while allowing the oil to flow into the well, are all somewhat obvious once named, though perhaps not so obviously needed until you see the effect of the problem on well production.

The more common other workover uses, for stimulating production, will be in another post. And once again this has been but a short summary of what can go on, just to describe some of the basic ideas. Comments and questions are welcome.

Sunday, November 22, 2009

A slight delay

I must apologize to readers coming to the site to read the Tech Talk that I usually put up on Sundays. However we were busy with other matters - No! it was not the revelations from the CRU - rather we were attending the Baptism of our first grand-child.

Hopefully I will be back on schedule tomorrow.

Saturday, November 21, 2009

The CRU data dump and the journalistic integrity question

I would be obtuse, perhaps, this week to comment on anything other than the release of information from the Climate Research Unit (CRU) at the University of East Anglia. The folk that had used the site, and thus whose material has now been passed around, are largely those that contribute to the site at Real Climate, which has responded. For simplicity in what follows I will call them “The Team.” For those who are unaware, a significant amount of information at the CRU, including personal e-mails and data files, has been sent to a variety of sources, through which it is now publicly available.

The files have now been sorted (to the point that they can be surveyed by the curious). My own initial curiosity – because I am just finishing reading Mia Tiljander’s dissertation – was to find whether there was any information on the varve data that she had produced, which, when she wrote her dissertation, showed clear evidence for a Medieval Warming Period but, once it passed through The Team, was used to show that the MWP didn’t exist. So I went to the search website, typed in Tiljander, and found 3 references, the first of which is a memo from Darrell Kaufman recognizing that they had flipped the data. Not that this is particularly novel, since that recognition was, I believe, recently publicly admitted, at least by some of the team, but it is indicative of the questions about honesty that will now, unfortunately, have to be addressed by The Team if the journalistic brother/sisterhood is to retain any credibility in this overall mess as unbiased reporters of the state-of-reality.
Nonetheless, it's unfortunate that I flipped the Korttajarvi data.
The memo seems to admit to regret at being caught, but little at improperly using data to backstop an opinion.

When the files were first revealed the most obvious candidate for a “smoking gun” memo was that from Dr. Phil Jones, the head of CRU to others of The Team in which he notes:
I’ve just completed Mike’s Nature trick of adding in the real temps 
to each series for the last 20 years (ie from 1981 onwards) amd from
1961 for Keith’s to hide the decline.
There is an explanation of what this entailed at Watts Up With That (WUWT).

If there were only this one memo as evidence of improper activity, then perhaps it might be viewed less severely, although one has to bear in mind that it has been the CRU and GISS curves for average temperature over the decades, together with the now infamous Mann “hockey stick” of temperature over the last millennium, that have provided most of the underpinning to arguments about global warming.

However, as the search has gone on through the reported 1,079 e-mails and 72 documents, some disquieting suggestions of more serious malfeasance have emerged.

Bear in mind that the authors of these memos are public servants, whose supposed mission is to help answer queries about the topic, and then read the deliberate attempt at denying that role, with statements such as:
Send them a subset removing station data from some of the countries who made us pay in the normals papers of Hulme et al. (1990s) and also any number that David can remember. This should also omit some other countries like (Australia, NZ, Canada, Antarctica). Also could extract some of the sources that Anders added in (31-38 source codes in J&M 2003). Also should remove many of the early stations that we coded up in the 1980s.
Bear in mind that there was a request under the British Freedom of Information Act that had been submitted for this data and, as the Australian Herald Sun notes:
Destroying government data subject to an FOI request is a criminal offence. Is this data being deleted the stuff CA asked from Jones in repeated FOI requests? If true, Jones had better get himself a lawyer very fast, but I doubt very much he would have done anything remotely illegal.
To which issue there is also the following:
And don't leave stuff lying around on ftp sites - you never know who is trawling them. The two MMs [McKitrick, McIntyre] have been after the CRU station data for years. If they ever hear there is a Freedom of Information Act now in the UK, I think I'll delete the file rather than send to anyone. Does your similar act in the US force you to respond to enquiries within 20 days? - our does ! The UK works on precedents, so the first request will test it. We also have a data protection act, which I will hide behind. Tom Wigley has sent me a worried email when he heard about it - thought people could ask him for his model code. He has retired officially from UEA so he can hide behind that. IPR should be relevant here, but I can see me getting into an argument with someone at UEA who'll say we must adhere to it !
This from the same Dr Jones who later noted in a response to Roger Pielke Jr
Data storage availability in the 1980s meant that we were not able to keep the multiple sources for some sites, only the station series after adjustment for homogeneity issues. We, therefore, do not hold the original raw data but only the value-added (i.e. quality controlled and homogenized) data.
I believe that any reasonable person reading the memo and that response would be more than a little suspicious that there is something rotten resident in the CRU.

Those who have been engaged in the discussion of the truth of global warming are already finding tidbits that show that CRU had a significant ($22 million) investment in its work, which might argue for a motive in holding the position that it has.

It is disappointing to read of the effort to “diminish” the MWP:
Phil and I have recently submitted a paper using about a dozen NH records that fit this category, and many of which are available nearly 2K back--I think that trying to adopt a timeframe of 2K, rather than the usual 1K, addresses a good earlier point that Peck made w/ regard to the memo, that it would be nice to try to "contain" the putative "MWP", even if we don't yet have a hemispheric mean reconstruction available that far back [Phil and I have one in review--not sure it is kosher to show that yet though--I've put in an inquiry to Judy Jacobs at AGU about this].
Well, it is a mess; much more manipulative, it appears, than the first reports led readers to believe. But will it be more than a couple-of-days storm? Sadly, that is going to depend on a group of journalists who have become quite sycophantic toward The Team, but whose integrity now requires them to bite that hand. It will be interesting to see how many do, and then the response of the political wing that has so tightly embraced The Team’s position.

Wednesday, November 18, 2009

Coal, carbon capture and cheap natural gas

There are a couple of interesting articles that came out today on the future of coal as a power source that indicate some of the shorter- and longer-term concerns over energy supply. One is a tale of the United States, the other of the United Kingdom.

Looking at the United States first: those concerned about the future climate of the planet are particularly worried about the amount of carbon dioxide that is generated when coal is used to produce electricity. (I am not going to go through the pros and cons of their arguments; I do that on Saturdays, and there are lots of others who provide stronger arguments than I.) The world had been anticipating that the upcoming meeting in Copenhagen would lead to a new treaty, and a set of international regulations to control carbon dioxide emissions, building on the Kyoto accords, which are expiring. However, it now appears that an agreement in Denmark is not going to happen, and with the chances of a climate bill passing the US Congress in the near future fading, an alternative path to emission control is perceived, by those concerned, to be needed.

Thus the EPA is taking steps to generate regulations that will enforce carbon capture and sequestration by those using coal-fired power stations to generate electricity. One of the problems with a large scale program to undertake this effort is that it relies on a set of technologies that are not necessarily all worked out yet.

Certainly carbon dioxide can be captured from flue gases, and then liquefied for transport to a site for injection underground. There have been demonstrations of this in the past, though it should be noted that the two sites most often quoted as examples of the ability to inject carbon dioxide tell only part of the story. Sleipner, for example, is reinjecting carbon dioxide that is a byproduct of the gas being produced at the site back into a saline aquifer some 1,000 m below the sea bed. The other success quoted is where carbon dioxide is injected into an oil reservoir to increase tertiary oil recovery, and the example most often cited here is the Weyburn field in Canada.
The EOR technique that is attracting the most new market interest is carbon dioxide (CO2)-EOR. First tried in 1972 in Scurry County, Texas, CO2 injection has been used successfully throughout the Permian Basin of West Texas and eastern New Mexico, and is now being pursued to a limited extent in Kansas, Mississippi, Wyoming, Oklahoma, Colorado, Utah, Montana, Alaska, and Pennsylvania.

Until recently, most of the CO2 used for EOR has come from naturally-occurring reservoirs. But new technologies are being developed to produce CO2 from industrial applications such as natural gas processing, fertilizer, ethanol, and hydrogen plants in locations where naturally occurring reservoirs are not available. One demonstration at the Dakota Gasification Company's plant in Beulah, North Dakota is producing CO2 and delivering it by a new 204-mile pipeline to the Weyburn oil field in Saskatchewan, Canada. Encana, the field's operator, is injecting the CO2 to extend the field's productive life, hoping to add another 25 years and as much as 130 million barrels of oil that might otherwise have been abandoned.
The Weyburn site is also not using flue gas carbon dioxide. There have been considerable questions about the cost of actually doing the carbon capture and liquefaction, but for some it provides the only alternative to shutting down the coal-fired sector of electricity generation.
"CCS is the only climate change solution we have for the existing fleet of coal-powered power plants," said Sarah Forbes of the World Resources Institute.
Thus there is the new initiative being carried out by AEP at their Mountaineer power plant to capture the gas and reinject it underground. The trial is planned as a 12- to 18-month operation:
This project will test Alstom's chilled ammonia technology for CO2 capture from flue gases particular to natural gas combined cycle (NGCC) power plants.
The test began in September and is a scale-up from an earlier test. However, it is going to take some time, even after the project is completed, to collect, analyze, and evaluate the resulting data. Unfortunately, I suspect that the EPA is not going to wait, and that the high energy and fuel costs required for the sequestration and re-injection of the carbon dioxide underground will not be much of a factor in the agency’s move to force the industry into that particular box.

This will produce a significant increase in the price of power at the plant, and since such plants need to make at least some money to continue operation, one might logically anticipate that this will drive up the price of electricity. (I have heard predictions that in Missouri – which gets some 85% of its electricity from coal – the basic cost of electric power at home may double.) The worldwide impact is also projected to be very expensive.
Earlier this month the International Energy Agency said the world will need to spend $56 billion by 2020 to build 100 such projects, with an additional $646 billion needed from 2021-30.
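A little arithmetic on those IEA figures gives a sense of the per-project scale. The averages below are my own back-of-envelope numbers, not anything the IEA has published.

```python
# Rough per-project arithmetic on the IEA's CCS figures quoted above.
cost_to_2020 = 56e9        # dollars, for the first 100 projects by 2020
projects_to_2020 = 100
cost_2021_30 = 646e9       # additional spend needed over 2021-30

avg_cost_first_wave = cost_to_2020 / projects_to_2020
print(f"Average cost per early project: ${avg_cost_first_wave / 1e6:,.0f} million")

# If later projects cost about the same, the 2021-30 spend implies roughly:
implied_later_projects = cost_2021_30 / avg_cost_first_wave
print(f"Projects implied by the 2021-30 spend: ~{implied_later_projects:,.0f}")
```

Which is to say that the second tranche of money buys on the order of a thousand more projects at the early price, assuming no cost reduction with experience.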
And so, as a result, one might anticipate that there would, where possible, be a switch to using natural gas, which produces less carbon dioxide, and which is currently in abundant supply over a much greater swath of the nation. That coverage was enhanced with the transition to full operation of the Rockies Express Pipeline, which has just come on line and will deliver natural gas from Wyoming and Colorado, where there is a plentiful sufficiency, to Ohio and points east, which have been a bit short. Given the availability of shale gas and new LNG terminals to import gas from abroad, gas will likely remain cheap in the short term.

And it is that dilemma that is facing the power station operators in the UK. The abundance of natural gas supplies to the UK (a quarter of which comes as LNG) means that it may be cheaper to generate power with gas-fired stations than by using coal. But the EU is seeking to shut down the older, more polluting stations and has mandated that they only run for 20,000 hours before being shut down permanently in 2015. This is why there has been a move to build new replacement stations such as that at Kingsnorth (recently postponed).
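To see what that 20,000-hour cap means in practice, here is a back-of-envelope sketch; the utilisation figures are illustrative assumptions, not data for any particular station.

```python
# Lifetime of an opted-out station under the EU's 20,000-hour cap before the
# 2015 shutdown. The utilisation figures are illustrative assumptions only.
cap_hours = 20_000
hours_per_year = 8_760     # hours in a (non-leap) year

for utilisation in (0.2, 0.5, 0.8):
    years = cap_hours / (hours_per_year * utilisation)
    print(f"At {utilisation:.0%} utilisation: ~{years:.1f} years of running time")
```

A station run at half load thus burns through its allowance in under five years, which is why the cap and the 2015 deadline bite at roughly the same time.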

The problem that might then arise, as the gas supply on which the alternative is currently focused becomes the lynchpin of the energy plans of much of the world, is that increased demand may shorten the effective economic life of that resource. Given that nations need to have some alternative source in their back pocket, the dismantling of coal-fired stations across Europe may mean that those resources won’t be available if needed, and the moves to restrict new nuclear plants may keep that from being an answer in the same way. We shall see!!

Tuesday, November 17, 2009

Time and the latest CERA report

One of the features of many models that are used to predict future events is that they focus on target years. Decadal years are the most common, so that whether talking of climate or the amount of oil or natural gas available, models focus on, for example, the amount that will be available in 2030. The problem with this approach is that it leaves the public thinking that a problem is not yet serious. For example, if the prediction is that the production of oil will be only 75 mbd in 2030, then there is an implication that until 2030 the situation will remain fine.

However, the world does not reach those levels by continuing in business-as-usual mode for the next 21 years and then suddenly having production drop off a cliff one Friday night. Rather, it is a problem that will grow inexorably, year on year, between now and then. I was struck by this thought as I looked through the latest comments from CERA/IHS on their view of the future of oil supply. Their view, as we have come to expect, is an optimistic one, and though we are no longer living in the days of the $30 oil that they had, at one time, predicted, it is worth looking into, so as to provide some explanation of the difference between their view and mine.
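The point can be made with a toy calculation: if supply were to slide smoothly from roughly today's level (an assumed round figure of 85 mbd, for illustration only) to 75 mbd in 2030, the decline is a steady erosion every year, not a cliff in the target year.

```python
# Toy schedule: a smooth slide from an assumed ~85 mbd today to 75 mbd in
# 2030 erodes supply every year; nothing special happens in 2030 itself.
start, end = 85.0, 75.0    # mbd, illustrative figures only
years = 21                 # 2009 -> 2030

annual_rate = (end / start) ** (1 / years) - 1   # constant compound rate

for n in (0, 7, 14, 21):
    level = start * (1 + annual_rate) ** n
    print(f"{2009 + n}: {level:.1f} mbd")
```

The shortfall is thus already present in every intervening year, which is the part the decadal-target framing glosses over.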

Let me begin with a reason why I tend not to be immediately and totally swayed by the thinking behind the CERA report, and their conclusion that:
Global oil productive capacity will grow through 2030 with no evidence of a peak of supply before that time.
It has not been that long since we were assured that production of oil from Mexico would be maintained at levels of 4 mbd through 2015. In 2005 we had:
CERA said that oil from non-conventional sources would widen to 35% of capacity in 2015 compared with 10% in 1990. The research points to growth in output from ultra deepwater drilling in the U.S. Gulf of Mexico, Brazil, Angola and Nigeria; 250% more heavy oil production capacity from Canada and Venezuela; and the expansion of condensate and natural gas liquids to 23 million barrels per day from 14 million barrels per day currently.
The EIA is anticipating that Mexico will produce an average of 2.9 mbd in 2009, falling to 2.7 mbd in 2010. And the latest chart from CERA (downloadable at their site) shows a much reduced increase in production of the heavy oils by 2015, for a start.
CERA has, unfortunately, not only continued to shine an overoptimistic light on future production, but has also tended (as sadly it has also done in the past) to gloss over some of the problems – vide:
Though a peak in global production is not imminent, there are major hurdles above ground to negotiate.
These surface hurdles no doubt include the minor details as to how to get significantly more production out of Iraq. It is all well and good to read reports such as:
Iraq is planning to increase its production capacity to approximately six million barrels per day within 80 months, following the signing of service contracts with a number of major international oil companies. This is in addition to the other agreements which are expected to be reached by next December, whereby Iraq’s production capacity may be increased to reach around 10 million barrels per day at the end of the next decade, compared to 2.5 million barrels per day at present. The overall cost that will be borne by the international companies investing in developing the Iraqi oil fields will amount to about one hundred billion dollars. Needless to say, these agreements are considered to be a historic event (both economically and politically), not only for Iraq, but also for the oil industry itself in the Middle East, and for the global oil industry.
Adding 7.5 mbd to existing world supplies would certainly go a substantial way toward offsetting the existing and well-documented production declines of so many of the major fields of the world. But is that target a realistic one? Let me sound perhaps a little more cynical than some and raise a slight modicum of doubt. While it is nice to be optimistic, the reality still fills the headlines of too many papers and news reports.
Of course, it is expected that these companies will face some obstacles and delays as a result of terrorist attacks against their employees and sabotage against its installations. Also, the need arises to increase export capacity that can accommodate the ensuing increase in production, in addition to attracting a sufficient number of professionals and technicians to work in Iraq under the current circumstances, and procuring the necessary machinery and equipment on time. Despite all these potential obstacles, the delays in these projects are not expected to be significant, since similar experiences in other oil producing countries have shown that such delays only cost a relatively limited and not long amount of time.
Thus, even though there are some big players moving into that game, it is a little premature to be optimistic.

In other aspects of the report, the average field decline rate, which CERA puts at 4.5% – but which includes fields with rising production in the calculation – masks the reality of an increasing level of decline in fields that are past peak. As we saw with Cantarell, post-peak collapse can come more rapidly and severely than earlier forecast.
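A small sketch of how that averaging can mislead. The field names, production figures, and rates of change below are all invented for illustration; the point is only that mixing growing fields into a production-weighted average pulls the headline number well below the decline actually experienced by post-peak fields.

```python
# Why a mild "average decline" can hide steeper post-peak declines.
# All field data here are invented for illustration.
fields = {                 # name: (production in mbd, annual change fraction)
    "young-A":  (1.0, +0.10),
    "young-B":  (0.5, +0.15),
    "mature-C": (2.0, -0.08),
    "mature-D": (1.5, -0.12),
}

total = sum(p for p, _ in fields.values())
weighted_all = sum(p * r for p, r in fields.values()) / total

post_peak = [(p, r) for p, r in fields.values() if r < 0]
pp_total = sum(p for p, _ in post_peak)
weighted_pp = sum(p * r for p, r in post_peak) / pp_total

print(f"All fields, weighted average change: {weighted_all:+.1%}")
print(f"Post-peak fields only:               {weighted_pp:+.1%}")
```

In this invented case the all-fields average shows only a gentle decline while the mature fields are losing nearly a tenth of their output per year, which is the distinction the single headline figure obscures.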

At the same time, the move to produce alternate fuels, such as cellulosic ethanol for vehicles, seems to have hit more technical and economic snags that may well considerably delay the production targets anticipated for this alternate fuel, feeding into an overall reduction in “other” fuels below the level that CERA still optimistically holds to (raising unconventional liquids, in their view, from 14% of global capacity today to 23% by 2030).

It is notable that in the version of the report I got, while CERA lists three scenarios, Asian Phoenix, Global Fissure and Break Point, it only briefly mentions the assumptions and impacts that the different scenarios will have on both demand, and thereafter supply. Given that I noted just recently that China is signing up for another 1 mbd delivery from Saudi Arabia, and that sales of cars in both countries are rising at significant rates, one can anticipate that that market is likely to develop into the Asian Phoenix that one might imagine is presaged by the title of the CERA scenario.

The growth of that new market is recognized with the opening of the new port of Kozmino by Russia with the potential for shipping up to 1 mbd of oil, with China as a major customer. (Which raises a question for another post on which customers will lose out as China gains.)

But now to get to the nub of my point: there is already a changing market and demand for oil and its products developing in the short term. The longer-term view of potentially available resources that are not yet found does not address the problem of how big a tap can be made available to meet demand over the next six years. There are serious questions, within that time frame, about the ability of some of the largest fields in the world to sustain production at their current levels.

Longer-term forecasts will be forgotten long before they are called to face reality; unfortunately, the optimism they project can lead people astray in the shorter term, where the conditions have been glossed over.

Sunday, November 15, 2009

Making holes and cracks around oil and gas wells

This is a continuation of the technical topics that I write about on Sundays. For the past few weeks I have been writing about some of the techniques used in producing the gas from shales, and that will likely continue for another week or two. Because of the need to condense the topic into a relatively short post, I would ask those familiar with the topics to understand that I have had to shorten the description and gloss over some details in order to keep the main theme clear. But further comments to help readers understand the techniques better (or questions where the explanation isn't clear) are appreciated.

There is a simple test that I use in one of my introductory classes, where I give the students a rectangle of paper and ask them to pull it apart, then I give them a rectangle with a cut half way through it perpendicular to the length, and half-way down, and ask them to pull that apart. It tears apart much more easily, and it is how I start a lecture on the role of cracks in causing materials to fail. You apply that principle about every time you pull open a package with a serrated top. The deeper cuts focus the force you are pulling with over a small area, making it easier to part the package and extract the candy, nuts or whatever without having to pull so hard that, when the package tears, you throw the contents around the neighborhood.
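For those who like numbers with their paper tearing, the standard linear-elastic fracture mechanics result behind the demonstration is that a through crack of half-length a in a wide sheet under remote stress sigma carries a stress intensity K = sigma·sqrt(pi·a) at its tip. The stress and crack sizes below are purely illustrative.

```python
import math

# The paper-tearing demonstration in numbers: for a through crack of
# half-length a in a wide sheet under remote stress sigma, the Mode-I
# stress intensity at the tip is K = sigma * sqrt(pi * a). Deeper cuts
# raise K, so the sheet parts at a lower pull. Values are illustrative.

def stress_intensity(sigma_mpa, half_length_m):
    """Mode-I stress intensity factor, in MPa*sqrt(m)."""
    return sigma_mpa * math.sqrt(math.pi * half_length_m)

# Quadrupling the cut length doubles K (square-root dependence):
for a in (0.005, 0.010, 0.020):
    k = stress_intensity(10.0, a)
    print(f"crack half-length {a * 1000:.0f} mm: K = {k:.2f} MPa*sqrt(m)")
```

The square-root dependence is why even a modest cut in the package top makes such a difference: the cut concentrates the pull at its tip rather than spreading it over the whole width.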

Today I want to talk a little more about perforating the wall of a well, and a bit more about hydrofracing. They are not necessarily used together, but both are ways of getting cracks out from the immediate wall of the wellbore so that the valuable fluid on the other side can have an easier path into the well.

To begin with the topic of perforating a well: I have written about this earlier, but more from the point of view of the tools used to do the job. What I’d like to do here is talk a little more about it from the rock’s point of view. As I mentioned last time, the rock right around the well can be subject to a pressure high enough that it will partially fail, or crush, and this can lower the amount of fluid that can get through; alternately, the rock might have been damaged in some other way. By bringing in a tool with a set of shaped charges in it, and then firing these at the appropriate place, this problem can be overcome.

Arrangement of shaped charges (the yellow cylinders) – when the explosive goes off the cones collapse and small liquid metal jets shoot out of the open end, through the casing, concrete and into the rock, creating a channel. (Core Labs)

The charges aren’t all necessarily fired at one time or place, even though, for the illustration below, they appear to be.

Representation of shaped charges firing and penetrating the casing, cement and wall (OSHA)

The jet of metal that shoots out of the cone will, as a rough rule of thumb, travel into the rock about 10 cone diameters, and this carves a channel, or tunnel, out through the damaged rock into the surrounding reservoir. The collapse and creation of the channel happen very fast:
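The rule of thumb is easy to apply; the cone sizes below are hypothetical, not data for any particular perforating gun.

```python
# The "roughly ten cone diameters" penetration rule of thumb from the text,
# applied to a few hypothetical charge sizes.
PENETRATION_FACTOR = 10    # cone diameters of travel, per the rule of thumb

def channel_length_cm(cone_diameter_cm):
    return PENETRATION_FACTOR * cone_diameter_cm

for d in (2.5, 4.0, 6.0):
    print(f"{d:.1f} cm cone -> roughly {channel_length_cm(d):.0f} cm of channel")
```

So a charge of a few centimetres diameter carries its channel a few tens of centimetres out past the crushed zone, which is the distance that matters for reconnecting the well to undamaged reservoir rock.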

Penetration of a perforating charge into Plexiglas after 3, 12, 21 and 30 microseconds. Marks are in cone diameters. (after Konya*)

The channel is initially hollow, and drives a set of small and large cracks out into the rock around the line of the charge.

Jet penetration through Plexiglas (note the lateral cracks away from the line of penetration; the dark section is due to a change in background). (after Konya*)

However, while it is easier to show the damage that the jet does by showing how it penetrates Plexiglas, Plexiglas is not rock, though it does show some of the events that occur. When, for example (vide the discussion on jointed shale last week), the jet shoots into rock where clear joint planes are defined, these act to stop the crack growth – much in the way that cracks in old cars used to be stopped by drilling a hole at the end of the crack: the hole distributes the stress that was driving the crack, previously focused on the tip, over a larger area, so that it drops below the critical level. Or the stress is high enough to cause cracks to form and be reflected back at the jet.
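The drilled-hole trick can be put into numbers with the classic Inglis stress-concentration result for an elliptical flaw: the local stress is magnified by roughly 1 + 2·sqrt(a/rho), where a is the flaw half-length and rho the radius at its tip. The flaw sizes and tip radii below are purely illustrative.

```python
import math

# Why drilling a hole at a crack tip works: Inglis's result for an
# elliptical flaw gives a stress concentration of 1 + 2*sqrt(a/rho),
# with a the flaw half-length and rho the tip radius. A sharp crack has
# a near-zero rho; a drilled hole replaces it with a large one.
# The numbers below are purely illustrative.

def concentration(half_length_mm, tip_radius_mm):
    return 1 + 2 * math.sqrt(half_length_mm / tip_radius_mm)

sharp = concentration(50.0, 0.001)   # crack-like flaw, very sharp tip
blunted = concentration(50.0, 3.0)   # same flaw ending in a 3 mm drilled hole
print(f"sharp tip:   local stress ~{sharp:.0f}x the applied stress")
print(f"drilled tip: local stress ~{blunted:.1f}x the applied stress")
```

Blunting the tip by a few millimetres drops the magnification by orders of magnitude, which is exactly why the old-car trick (and a joint plane in the rock) arrests the crack.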

Jet damage confined between two adjacent planes when the charge is fired into plates that run parallel to the direction of the jet. (after Konya*)

If the charge is not carefully designed and used, therefore, it might not be as effective as initially hoped, and this becomes even more true if the pieces of metal formed when the cone collapses are carried into the channel and partially block it. There are different strategies, depending on the well and the surrounding rock, and it is one of those things in life where, if you get it right, the results are almost immediately obvious – as is the converse.

Creating cracks that go out into the surrounding rock has become a vital part of the economic production of gas from the shales around the country, as we have discussed, and having a starting crack in the right direction, whether it is a natural joint in the rock, or a crack that has been deliberately created helps control where the crack starts and how it grows.

Crack growing out from a drilled hole in Plexiglas, the small notch at the top of the hole controlled the direction of the growth of the crack (We put ink in the hole to show how the fluid goes into the crack).

In the above picture you can see that when the hole was pressurized, a crack grew, and ink flowed into the crack, as it formed, but, when the pressure came off, the crack closed and most of the ink was forced back out of the crack. (We created the pressure by firing an air rifle pellet into the hole).

So if we are going to have a useful crack we need to have it open after we take the pressure back off – after all we need to get the fluid back out of the well, so that the gas can pass up the well for collection.

Now it is not quite as easy to grow the crack, or prop it open, as I may have suggested earlier, and to explain some of the issues in a little more detail I am going to use an example and some details from the Modern Shale Gas Primer.

When you decide to frac a well, bear in mind that each well is different, as is just about every location, so a significant amount of preparation and knowledge is required to work out the procedure needed at that particular point. Bear in mind too that the crack you are going to grow needs to stay in the shale layer, and not go out beyond it into the surrounding rock. One of the reasons for this is that, apart from needing to give the gas a path to the well, if the crack goes outside the reservoir rock then the gas can escape, or, alternately, other fluids can gain access to the well. This is particularly true of the Barnett, where the rock immediately below it, the Ellenberger limestones, can hold a lot of water that can muck up the gas recovery if it gets into the fractures. (Given this degree of control, and the large distance from the surface down to the reservoir rock, a lot of the fears that the frac job will damage the ground water tend to be dramatically overstated.)

In the example cited, which is from the Marcellus shale, the frac treatment takes a total of 18 steps, and because some of these are fairly similar I am going to go through them in groups. First the hole is treated with an acid, to clean away any remaining debris and mud from the drilling operation and to clean any fractures around the hole, so that they can be used to help the frac grow. After the acid, the hole is filled with an initial polymeric fluid, largely water, but containing the “Banana Water” that I referred to in an earlier post. This is a friction-reducing agent and will help carry the particles used to hold the crack open into the crack in the first place. The problem with that polymer is that some of the available choices, while good at reducing the friction to help move the particles, aren’t that good at holding the particles in suspension, and the last thing we need is for them to settle out in the bottom of the well. So in the subsequent steps of the process, as the particles (or proppants) are added, there is usually a second polymer in the mix to hold them in suspension.

Once the hole is full of the slickwater (the official term for the first polymer solution), the initial frac is made with a fine sand suspended in the fluid. To keep the crack open all along its length we need sand along the whole path, and the crack gets narrower as it grows deeper. So for the first several stages of the crack growth the fluid carries successively greater concentrations of the fine sand, so that it can penetrate to the deepest part of the fracture.

In the example cited there are some seven of these sub-stages with the fluid being pumped into the well at some 3,000 gpm but varying the fluid:proppant density to carry more and more of the particles into the fracture. Once these stages have been completed, then the job is finished by pumping an additional eight sub-stages of fluid, with this second set containing a larger size of proppant particles. In this way the area closest to the mouth of the fracture will be held wider apart to make it easier for the gas to escape. As with the first set of sub-stages, the concentration of proppant in the fluid increases as the stages progress. In total, in the example given, some 450,000 lb of proppant was used to make the fracture, together with some 578,000 gallons of water.
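As a sketch of how such a staged schedule tallies up, here is a toy version: seven fine-sand sub-stages, then eight coarser ones, with the proppant concentration rising through each set. The per-stage gallonages and concentrations are invented (chosen only so the water total lands on the cited 578,000 gallons; the proppant total will not match the cited 450,000 lb), and only the overall shape of the schedule follows the Primer's example.

```python
# Toy tally of a staged slickwater schedule of the kind described above:
# seven fine-sand sub-stages, then eight coarser ones, with proppant
# concentration rising through each set. Per-stage volumes and
# concentrations are invented for illustration.
fine_stages   = [(38_000, c) for c in (0.1, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5)]
coarse_stages = [(39_000, c) for c in (0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0)]
# each pair: (gallons of slickwater pumped, pounds of proppant per gallon)

water = sum(gal for gal, _ in fine_stages + coarse_stages)
proppant = sum(gal * ppg for gal, ppg in fine_stages + coarse_stages)

print(f"Water pumped:  {water:,} gallons")
print(f"Proppant used: {proppant:,.0f} lb")
```

The rising concentration within each set is the key design feature: early, lean stages carry sand to the far tip of the narrowing crack, while the later, richer stages pack the wider mouth.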

Once the fracture is created, the well is flushed to clean out the different fluids and make it easier for the gas to get out of the rock and into the well. (It also removes any loose and ineffective proppant so that it doesn’t later become a nuisance.) If you think that this would need a lot of equipment, you are right!

Equipment used for hydraulic fracturing a well (Primer)

Since there is some discussion of the effects of the different constituents of the fracing fluid on local waters, I thought I would end with the listing of common chemicals used in that liquid, which is provided in the Primer.

As usual this has had to be a very brief review of the technology and may have oversimplified to the point of not being clear, so all technical comments and questions are appreciated.





* The initial photos in this post were taken as part of the dissertation of Dr Konya, "The use of Shaped Explosive Charges to Investigate Permeability, Penetration and Fracture Formation in Coal, Dolomite and Plexiglas" Missouri S&T, 1972.

Saturday, November 14, 2009

Himalayan Glaciers - Science vs the IPCC

There has been a little controversy recently over the state of glaciers in the Himalayas. There has been much talk about how the glaciers in the region are melting, and thus threatening the water supplies of many nations fed by rivers that start in those mountains. The stories have dramatized the retreat of the glaciers and used the threat of glacier disappearance, which is tied in the stories to AGW, as a means of highlighting the need for remedial action.

However, the places where these glaciers are found are difficult to access, in many cases, and more particularly have been hard to reach on a consistent basis, so as to develop a record of exactly how fast the glaciers are, and have been, retreating. For if we accept the existence of the Little Ice Age, and that glaciers around the world have been retreating as the globe has warmed since then, the question becomes one of whether that retreat is, in fact, accelerating.

Rates of retreat of the Altay glaciers of the Pamir in the nineteenth and twentieth centuries (From Grove – The Little Ice Age)

Grove noted, however, (in 1988) of the Altay that
It is possible that retreat will not continue unbroken, for latterly positive mass balances have been measured on the five Aktra glaciers.

And just recently there have been other reports of glaciers – in this case some 230 of them around the Karakoram mountains of the western Himalayas – also growing.
"These are the biggest mid-latitude glaciers in the world," John Shroder of the University of Nebraska-Omaha said. "And all of them are either holding still, or advancing." . . . . In the Karakorams, the uptick in glacier mass has come with a welcomed perk. The mighty Indus River, which flows out of China and nourishes northern India and much of Pakistan has experienced an increase in discharge.
Given, therefore, that there appear to be some discrepancies between the dramatic conclusions being drawn and the actual facts, it is interesting to read a report from the Indian Ministry of Environment and Forests in which individuals have, over the years, actually gone out to a number of different glaciers feeding down into India and measured the location of the glaciers’ snouts, and thus been able to determine, over time, how the glaciers are behaving. Their conclusion has been:
Himalayan glaciers, although shrinking in volume and constantly showing a retreating front, have not in any way exhibited, especially in recent years, an abnormal annual retreat, of the order that some glaciers in Alaska and Greenland are reported.
To explain that a little more clearly, the studies have shown that there has been no increase in the average speed at which the glaciers are retreating, and that the acceleration trumpeted by the IPCC and its followers does not exist.

Now this news has not been received with enthusiasm by the IPCC, even though the report discusses actual measurements at the glaciers, beginning as far back as the mid-nineteenth century (see graph above) and going beyond just measuring the position of the snout to geophysical measurements of the ice thickness, which began in 1965. The report details glaciers, such as the Chong Kumdan, which by 1958 had advanced sufficiently far as to threaten to block the Shyok river. And it explains how and when different techniques were used to establish the mass balance of different glaciers.

The report illustrates some of the results that were found in the first decade of study as follows:



As the techniques for analysis have become more sophisticated, so it has been possible to look at the mechanisms by which the glaciers gain and lose mass over the period of a year. Generally a glacier gains mass from the snows of winter, and loses it to melting in the warmer weather of summer. Given the conditions in the Himalayas it was difficult to make that determination (about winter growth) until satellites came along to provide winter access for observation.
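To put that mechanism in concrete terms, the annual net mass balance of a glacier is simply winter accumulation minus summer ablation, usually expressed in metres of water equivalent averaged over the glacier surface. A minimal sketch, using made-up numbers rather than any figures from the report:

```python
def net_mass_balance(winter_accumulation, summer_ablation):
    """Annual net mass balance, in metres of water equivalent (m w.e.).

    Positive means the glacier grew that year; negative means it shrank.
    """
    return winter_accumulation - summer_ablation

# Hypothetical balanced year: snowfall and melt cancel out.
balanced = net_mass_balance(1.2, 1.2)

# Same summer melt, but a poor-snowfall winter: the balance goes
# negative even though melting has not increased at all - the
# situation the report describes for the Himalayan glaciers.
poor_snow_year = net_mass_balance(0.8, 1.2)

print(balanced)                  # 0.0
print(round(poor_snow_year, 2))  # -0.4
```

The point of the toy numbers is only that a negative balance does not, by itself, tell you whether melting went up or snowfall went down; you need both terms measured separately, which is exactly what the satellite-era winter observations made possible.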

From these studies the report notes
Studies have revealed that the major factor for the negative regimen of the glaciers in the Himalayas is the relatively less snow precipitation during the winter (rather) than enhanced glacier melting in summer.
In other words it is not global warming (enhanced summer melting) that is causing the glaciers to retreat, but rather that there is not as much snow falling on them in winter (which is similar to the cause of the glacier retreat on Mount Kilimanjaro). The amount of relative melt is of concern in India not just because of the water that it provides for consumption, but also because the water provides motive energy for hydro-electric power generation.

Source: Index Mundi.

The report has been criticized because of the small sample size, but it details the technology used, and explains why the models developed in the Alps are not suitable. It also recommends continued monitoring of the melt water:
it becomes imperative that round the clock monitoring of water discharge, during the entire melt season- June to September be carried out at as many glaciers as is possible.

Various teams, which are carrying out glaciological studies in the Himalayas, have carried out the monitoring of the glacier melt streams either by erecting a weir, across the melt stream, downstream of the snout; or installing an Auto stage recorder.

These are generally established where the stream course is braided or spread out so that data recording is manageable even at the height of the melt season, when the discharge rate goes up many folds. In some cases usage of current meter has also been undertaken. Considerable data has so far been generated and various prediction models devised, yet due to rather poor response from the user agencies this data has remained, by and large, un-tested.
It should be noted that if a glacier stops melting, then the amount of water that it supplies to the rivers, and to the folks downstream depending on that water (particularly when the monsoon fails) will decline.
For the assessment of the glacier ice thickness and the volume thereof, TTS, in their guide lines, had suggested the usage of a rating curve based upon their work in Alps. This rating curve was, however, found to give erroneous results. A rating curve that was suitable for the conditions in the Himalayas was evolved by assessing the thickness of the glacier ice in some selected glaciers by geophysical methods: electric resistivity, magnetic and seismic (refraction) techniques followed by thermal/rotary drilling for confirmation of the same.
The report, in short, provides a detailed and rational explanation of what was done, and of the consequent results. That it does not meet with the approval of the head of the IPCC is not surprising, but it is perhaps an example of the way in which this debate is being carried out that on one side you get a detailed set of scientific measurements, and on the other (bearing in mind that this is a government agency report, backed by a minister) you get, from the IPCC:
Pachauri dismissed the report saying it was not "peer reviewed" and had few "scientific citations".

"With the greatest of respect this guy retired years ago and I find it totally baffling that he comes out and throws out everything that has been established years ago." . . . . .schoolboy science.
Scientific measurement versus ad hominem attacks: sadly, one can anticipate which side the mainstream media will accept.

Thursday, November 12, 2009

Looking back at Peak Global Production of Gold

Yesterday the President of the largest gold mining and production company, Barrick Gold, noted that after ten years of declining production it is time to recognize that the world has seen the peak in gold production. To maintain production ore is being mined with increasingly less gold in it. (The grade of the ore, or metal content, defines whether it is profitable to mine).
Ore grades have fallen from around 12 grams per tonne in 1950 to nearer 3 grams in the US, Canada, and Australia. South Africa's output has halved since peaking in 1970.

The supply crunch has helped push gold to an all-time high, reaching $1,118 an ounce at one stage yesterday.
Gold serves two purposes. Firstly it has provided, down through history, a form of currency, though it is not clear whether it was Croesus or the Egyptians who first used it in trade; both date back to around 500 - 600 B.C., and gold coins have flourished since that time. (But before then gold was mined around Mestia, in what is now Georgia, back at the time of Jason and the Golden Fleece (before 1300 B.C.), and used for ornamental wear and art objects.) But gold also has a useful function as a metal.
Gold conducts electricity, does not tarnish, is very easy to work, can be drawn into wire, can be hammered into thin sheets, alloys with many other metals, can be melted and cast into highly detailed shapes, has a wonderful color and a brilliant luster. Gold is a memorable metal that occupies a special place in the human mind.
It is even, on occasion, used as a roofing material.

Gold roof to the Small Gold Tile Hall of the Ta’er Temple in Qinghai Province, China.

As with peak oil, the fact that global production has peaked does not mean that there is no gold left to mine. Rather it means that less gold will be mined each year into the future. It will likely, in time, bring the environmental costs of mining back into debate.

For there are deposits of gold still in the ground that are not mined, in part because of the environmental cost. If you go, for example, to the Malakoff Diggins in California (a state park north of American Hill) you will find tall sandstone cliffs that used to be mined using streams of water from large monitors. However, in excavating the rock the water also disintegrated it, and the clay particles were carried down into the Sacramento River, gradually filling the river bed, to the point that in heavy rains the river flooded the surrounding communities. Thus, back in 1886, Judge Sawyer restricted the practice, which largely fell into abeyance. But the gold is still “in them thar hills.” Similarly, if one goes up to the valleys outside Fairbanks in Alaska, there is gold in the gravel beds - but it has largely been too expensive, both commercially and environmentally, to recover to this point. One of the more recent discoveries in Alaska has become known as the Pebble Mine Project, but while there may be up to 3 million ounces of gold in the region, there has been strong opposition to development, even though that development continues.

Gold has been a valuable mineral for a long time, and for nations all around the world. All the good and easy places to find it, therefore, have been sought after and largely found. The gold deposits now worked have become smaller, of lower grade, and found in places that are harder to get to. With lower availability, greater demand, and a higher price, it became more practical to mine and process lower grade deposits and to go deeper into the Earth for the higher grades. Mines in South Africa, and in South Dakota, have worked down to more than 2 miles below the surface to recover the ore. And techniques have been developed that recover gold even in very small quantities - 3 gm per tonne is an ore that contains very little gold (3 grams is 0.003 kg, or 0.000003 tonnes, so the grade is 3 parts per million). So, while miners can still find the odd nugget when they pan for gold in streams around the country (and there are lots of maps available to tell you where to look), for the large scale levels of production that make a significant impact on the market you need large deposits of gold with the potential for greater yields, and those places are getting harder and harder to find. And even as one goes deeper, the grade of the gold doesn’t necessarily hold up.
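The grade arithmetic above is just a unit conversion, and it is easy to check; the sketch below (my own illustration, not from any industry source) also shows how much rock must be processed to win a single kilogram of gold at that grade:

```python
GRAMS_PER_TONNE = 1_000_000  # 1 tonne = 1,000 kg = 1,000,000 g

def grade_in_ppm(grams_gold_per_tonne):
    """Ore grade expressed as parts per million by mass."""
    return grams_gold_per_tonne * 1_000_000 / GRAMS_PER_TONNE

def tonnes_of_ore_per_kg_gold(grams_gold_per_tonne):
    """Tonnes of rock that must be processed to recover 1 kg of gold."""
    return 1000 / grams_gold_per_tonne

print(grade_in_ppm(3))               # 3.0 ppm
print(tonnes_of_ore_per_kg_gold(3))  # about 333 tonnes of rock per kilogram
```

Since a tonne is a million grams, a grade quoted in grams per tonne is already a parts-per-million figure; the second function makes plain why low-grade mining means moving (and leaching) enormous tonnages of rock.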
Harmony Gold said yesterday that it may close two more mines over coming months due to poor ore grades.
Gold production in South Africa had fallen 9.3% year-on-year last September, this in the country that once led the world in gold production.

We haven’t run out of gold yet
Barrick produced 1.9 m ounces of gold last quarter, down from 1.95 m a year earlier. Costs have been "trending down" to $456 an ounce, though rising energy prices pose a fresh threat. Total reserves are 139 m ounces, far ahead of rival Newmont Mining at 86 m.
But production will continue to fall as the reserves become even harder to extract. Beyond a certain point there is not a lot that technology can do, except perhaps to find ways of getting gold out of veins that are too small and costly to mine at present. But that won’t yield the millions of ounces that are needed to maintain supply. And the industry has not been one, in recent years, to invest in that future. When gold can be recovered by soaking crushed rock in a solvent at relatively low cost, there is not a lot of incentive for new ideas. The days of the industrial innovations that used to come from the research labs in South Africa are likely now over.

So, as the oil industry starts its travels down a similar path past peak and into decline, there are a couple of thoughts I would offer.

Firstly it could be pointed out that the gold industry has been able to see the declining production and the lack of available prospects for some time. But it is only now, some 9 - 10 years after the decline started, that the industry is publicly recognizing the problem.

Secondly one might ask whether there should not be an agency of the government that can independently warn the government and the nation of such a shortage before it happens, so that better mining methods, access to restricted reserves, or the development of alternate materials could be hastened. Well, actually there was; it was called the U.S. Bureau of Mines, and all those tasks were in its charter. But the mining community is a small one, and has not nearly the clout or popularity in Washington that it is thought to have, and thus, in 1996, the agency was closed.

Thirdly, even though the story is out there it is unlikely, for a while, to get much media attention, and the vast majority of the world’s population will neither know of the predicament that is now approaching, nor understand why it is going to be something that will impact many aspects of their lives. Until, of course, it does.

And of course gold is only a precursor of other minerals that will soon run short. Few folk realize the role that metals and minerals play in providing their lifestyle, or recognize that the value of many metals comes, in part, from the fact that nothing can substitute as well for that particular metal in doing a particular job. Unfortunately doing something about it requires vision for an industry that is not favorably viewed by much of the population. It will be interesting to see if that perception changes, or if the industry becomes the target of blame as shortages lead to even further cost increases.

And then, of course, will come oil . . . . .