Sunday, November 28, 2010

The Oil and Gas Production Sunday Series

We are now in a period where the subject of peak oil no longer attracts the ridicule that it did when Kyle and I formed The Oil Drum back in 2005 and began discussing the topic. It is still not a widely accepted fact, and my object in helping Kyle to found TOD was not merely to publicize the coming of this particular point in history. Rather it was to help folk understand the background to the evolving crisis, and to document it as it developed. That particular mission is far from over, though Nate Hagens would argue differently about the initial mission of TOD:
For the past 5 years, The Oil Drum has been a home base for many high level discussions about the details and implications surrounding an early peak in global crude oil production as well as topics on society and energy in general. The entire site was started, and continued, by volunteers, in what might be described as a loose anarchy glued by social capital and a desire to puzzle solve the complexities surrounding energy depletion. Over time, on these pages, our contributing staff and especially the many readers who joined the discussions, have pushed the envelope in publicly analyzing what was/is one of the central issues of our time - the opportunities and constraints facing society during the upcoming energy transition.

In many ways our initial mission is over.
Given that, and the changes at TOD, it may be that they will move away somewhat from what I initially set out to do. This site, however, will continue on its merry way toward that original goal, which will, perforce, remain my mission here at Bit Tooth, since it is one that I feel is far from accomplished.

Part of the problem for a lay person trying to understand the reality that is coming to pass is that there are many conflicting stories, and many different ways in which the data are presented. Without a basic appreciation of what oil is where, what will and won't be available when, and who wants it, it becomes difficult to follow what is actually going on.

Over the past couple of years I have tried to provide, in Sunday posts, some basic explanations on how oil and natural gas are extracted from the ground, and I followed that with a series on coal that has just concluded. I will shortly re-arrange these and provide an archival list for them at the bottom right of the page. At the top of the page I am going to start providing an archive to the series that I now anticipate will form the next set of Sunday posts.


It is going to combine a little bit of history (along the lines of what Econbrowser posted on Saturday about the initial discovery by “Colonel” Drake, in Pennsylvania in 1859) with the evolution and current status of some of those fields. (And again Econbrowser has beaten me to that particular punch by posting the plot, taken from Caplinger, which shows the historic production from the more than 350,000 oil and gas wells that have been drilled in that state since that time.)

PA production of oil from 1890 to 1990.

The production of PA oil did not, however, come from just one single oil reservoir, but rather from a large number of different pools, of which 149 had been identified and listed by 1928. The last reported was the Atlantic in Crawford County. More recent evaluations have consolidated these down to a list of 24 fields. But it illustrates the point that one cannot focus just on the oil in a state (or country); one has to consider the reservoirs, and their sizes, in which the oil and gas are found.

Original map from Pennsylvania’s Mineral Heritage, as quoted by Caplinger.

Since the time that the above map was made, oilwell drilling technology has undergone considerable change, so that, with the coming of long horizontal, multi-fractured slick-water wells, it has become possible to produce gas from the Marcellus Shale in PA, and the energy production capacity of the state has been resurrected.

Extent of the Marcellus Shale (Marcellus Shale Coalition)

However, based on the performance of somewhat similar gas wells drilled into the Barnett Shale in Texas, and the current glut of gas that has come to market as the development of gas shales has expanded, the exact immediate future of that production is not yet clear.

There is therefore both a historic and a current importance to visiting PA (not to mention the coal reserves – but we're going to leave that for a later series). So it is with a number of other states. Colorado, for example, was the second state to discover an oilfield within its bounds, 13 months after the PA discovery. It still ranks 11th in the country in proven reserves of oil, and is the 7th largest producer of natural gas, with 7 of the top 50 gas fields in the country. It also has the largely undeveloped oil shales, which hold a kerogen that, if it could be viably produced at scale, would provide a significant change to the world's energy supplies. And on that roll call I had better not forget Texas, or . . . well, the list goes on.

It would be very easy to get into great technical detail in this series, but that would defeat the intent with which I am starting out. Rather the goal is to try and show how oil (and gas) production developed, what it is now, and what can be expected in the future. There has to be some level of detail to the discussion, since some of the super-giant oil fields, such as Ghawar in Saudi Arabia, have been producing for many decades and may thus require a post of their own. And then there are claims of planned increases in production from fields such as the Rumaila field in Iraq, which the Chinese are helping to develop, and to push from a production of 1 mbd to a target of 2.85 mbd. Since that production, and similar volumes from similarly underdeveloped fields, might impact the overall volumes of oil that are available, I will stop and discuss those individually as the series moves along.

So while the overall intent is to provide a broad picture of the oil and gas supplies of the world, and where they are used, the series will focus, as it evolves, on individual producers and the history of some of the fields, since that history serves both as a way of predicting what might come in the future and as an indication of the results of the different extraction strategies that have been tried in the past.

Well that is the plan! See you here next Sunday?

Saturday, November 27, 2010

Rhode Island combined temperatures

I was not sure that I would look at the temperatures of Rhode Island at this point, but with a hat-tip to Janet for the motivation, here they are. There are only 3 USHCN stations in Rhode Island - Block Island, Kingston and Providence, and so the loading of the data from the USHCN site goes very quickly. Chiefio’s list shows no GISS stations currently being used in the state. There is not that much one can do with just 3 points, but to be comprehensive, I will add the usual plots.

With so little work involved, I decided it might be an interesting exercise to implement a slight change in the way that I plot the data for the states. Because a clear relationship has become evident between temperature and station latitude, elevation and population (and a dubious occasional one with longitude), I am in future going to try to use the state edges to define the scales on the plots, so that the location of the stations relative to the state average can be discerned. Because the relationships change when the data are homogenized to give the USHCN mean temperatures, I will continue to show the TOBS plots, since they are likely to show the relationships to station geography more accurately. I am using Netstate to provide this information. Rhode Island is the smallest state in the union, being 40 miles long and 30 miles wide. Longitude runs from 71.13 W to 71.88 W, and latitude from 41.3 N to 42.002 N. The land runs from sea level to 247 m, with an average elevation of 61 m (and all the stations are below this average).

When one looks at the temperature record for the state, bearing in mind that the stations are close to the sea, there has been a steady rise in temperature over the century. With the TOBS data this averages 1.9 degrees per century.
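For those who want to reproduce the numbers, the trend is just a least-squares line through the annual series. Here is a minimal sketch in Python, with synthetic data standing in for the USHCN series (the real values come from the station files described above):

```python
import numpy as np

# Synthetic stand-in for the annual state-average TOBS series;
# the real data come from the USHCN station files.
rng = np.random.default_rng(0)
years = np.arange(1895, 2011)
temps = 49.0 + 0.019 * (years - 1895) + rng.normal(0, 0.8, years.size)

# Least-squares linear fit: the slope is in degrees per year,
# so multiplying by 100 gives the per-century figure quoted above.
slope, intercept = np.polyfit(years, temps, 1)
print(f"Warming trend: {slope * 100:.1f} deg F per century")
```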


Interested to see what the change in sea surface temperature has been over the last century, I went on Google and found a paper by Smith and Reynolds, which gave a plot of reconstructed temperatures for the period:

Smith and Reynolds

Note that the time period I am looking at starts in 1895, which was near the time that the average SST hit its low point; the temperature rise since then has been around 1 degree C (1.8 deg F), which is about the same change as the raw data suggest we are seeing for Rhode Island.

With that rough agreement it is interesting to note that the homogenized data for the three stations in Rhode Island show a temperature increase of 3.3 degrees.

Note that this is not the TOBS, but rather the homogenized mean temps as adjusted for the different stations in the state.

Moving through the geographic plots, beginning with latitude, where the correlation is, in the general case, negative:


And for Longitude


The overall elevation of the state, as I mentioned, is not that great, averaging 61 m.


And then looking at the correlation with population. Block Island is apparently only intermittently populated, so I gave it a population of 1.


The small number of data points does not allow any confidence in the above correlations. They are of interest nonetheless, because of the proximity of the stations to the sea.
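The mechanics of these fits are the same as in the earlier state posts. A sketch of the latitude case, with made-up temperatures for the three stations, shows how little three points constrain a line:

```python
import numpy as np

# Approximate latitudes for Block Island, Kingston and Providence;
# the temperatures here are made up purely for illustration.
lat  = np.array([41.17, 41.48, 41.82])
temp = np.array([50.9, 50.1, 51.3])   # deg F

# The fit is trivially computed, but with n = 3 it carries no
# statistical weight, which is the caveat made in the text.
slope, intercept = np.polyfit(lat, temp, 1)
r = np.corrcoef(lat, temp)[0, 1]
print(f"slope = {slope:+.2f} deg F per degree latitude, r^2 = {r*r:.2f}")
```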

And as a final thought there is this, the difference between the adjusted USHCN mean values and the original Time of Observation corrected raw data.


Monday, November 22, 2010

Happy Thanksgiving

I will be with family for the next few days, and I know that they would join with me in wishing all of you who celebrate the occasion, a Happy Thanksgiving.

I will return before long.

Sunday, November 21, 2010

Future Coal Supplies - more, not less!

There has been a growing trend toward predicting an imminent peak to the production of coal. Just this last week Nature carried an article that précised Richard Heinberg's recent book on coal production, which I reviewed when it first came out. And while that was based, in turn, on David Rutledge's application of Hubbert linearization to coal production (discussed here), that underlying theme had also been picked up in an article in Energy by Tad Patzek. This latter paper had just come out when, at the beginning of last month, I gave a paper at the ASPO-USA meeting explaining why the approach was wrong. I did that with Kjell Aleklett sitting in the front row of the audience; his graduate student Mikael Höök had just presented a dissertation along the same lines.
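For readers meeting the method for the first time: Hubbert linearization plots the ratio of annual to cumulative production (P/Q) against cumulative production (Q). For a logistic production history the points fall on a straight line, and the intercept of that line with the Q axis is read as the ultimately recoverable resource. A minimal sketch with synthetic data (my own illustration, not Rutledge's code):

```python
import numpy as np

# Synthetic production history following a logistic curve with an
# assumed ultimate recovery (URR) of 1000 units; purely illustrative.
URR, k, t0 = 1000.0, 0.08, 1950
t = np.arange(1900, 2011)
Q = URR / (1 + np.exp(-k * (t - t0)))   # cumulative production
P = np.diff(Q)                          # annual production
Q = Q[1:]

# P/Q vs Q should be a straight line whose Q-axis intercept
# estimates the URR.
slope, intercept = np.polyfit(Q, P / Q, 1)
print(f"Estimated URR: {-intercept / slope:.0f} (true value 1000)")
```

The catch, and the crux of what follows, is that the method treats the recoverable total as fixed; for coal, as I will show, the reserve figure moves with price and technology.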

Why am I so obdurate, in the face of such heavyweight opinion, in holding that this is wrong? Well, let me run through the bones of my argument to explain.

The most extreme of the positions on the imminent coal peak is that of Tad Patzek and Greg Croft. (Energy, Volume 35, Issue 8, August 2010, Pages 3109-3122) In that paper the authors had inserted a predictive graph on coal energy production rate, as follows:

From Patzek and Croft

As you may note, this suggests that we are right at the cusp of peak production and that it is all downhill from here. With the major sources of energy for the planet currently coming from coal and oil, and with the recent comments from both the IEA and the Joint Services Command about the peaking of oil, that would transfer a lot of the load to natural gas, which is the third major source, according to the IEA. (It should be noted that P&C did include the following IEA projection in their presentation.)

IEA predictions of future energy supply sources (after P&C)

Gregor Macdonald at Seeking Alpha has recently highlighted the increasing world consumption of coal, which is rising much more rapidly than that of oil, which has almost stabilized. Much of that demand is coming from China, India and the growing economies of Asia.

Gregor Macdonald:

Much of the argument for a peak in production, however, begins with the decline in British coal production. Famously, before the First World War, Winston Churchill converted the British Navy from the use of coal to that of oil, despite the UK having, at that time, no known indigenous oil supply yet having plenty of coal. It was a decision followed by navies around the world. Britain still had the coal; there were other reasons for the change.

Following the Second World War Britain had to rely on coal as a domestic and industrial fuel, and, in 1947 it nationalized the mining industry. As a part of that process it carried out a detailed physical inventory of the coal reserves of the country. That inventory has been published, and can be summarized in this table:

(From Trueman)
In the post-war years British coal production peaked at about 220 million tons a year in 1955.

British coal levels (Source Open University )

Current demand, as you may note, is just over 50 million tons a year – which might suggest that the UK has about 900 years' worth of coal available.

But that is apparently not the case. If you look at a couple of data points, the World Coal Outlook, and then the BP estimate of reserves for 2005, there is nowhere near that amount of coal in the reserve:


The story is told in the difference between the resource and the reserve. At the present time folk would have you believe that the UK has very little coal left: about 4.5 years' worth, according to the 2005 BP report. Since the country burns roughly a quarter of that reserve figure each year, on those numbers it staves off running out only by relying on imports.

So where did the coal go? Did some evil Voldemort-type character sneak into the country overnight and magic it away? No! As the World Coal study shows, the coal is still there. It is just that, at the present time, the coal is not economically mineable, and so it is not considered a reserve. It is, however, physically still there, and thus is still shown in the first column as a resource. And as the price of oil rises, and with it the price of alternate fuels, more of that resource will become a reserve. As a single example, there has been talk of putting a new underground mine in at Canonbie in Scotland as part of a long-term plan to supply the Longannet power station.
It is thought that deep seams run for miles underneath the Canonbie area and could yield 400 million tonnes of coal, enough to keep the Longannet power station going for the next 80 years.
You may note that the 400 million ton figure is twice what BP considered to be the totality of the British reserve in 2005.
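Both headline numbers are simple reserves-to-production arithmetic; the contrast comes entirely from which tonnage you divide by demand. A sketch, with the tonnages back-calculated from the 900-year and 4.5-year figures quoted above:

```python
def years_remaining(tonnage_mt, demand_mt_per_year=50):
    """Simple R/P ratio: years a given tonnage lasts at flat UK demand."""
    return tonnage_mt / demand_mt_per_year

# Tonnages back-calculated from the figures quoted in the text: the
# 1947-inventory resource implies roughly 45,000 million tons, while
# the 2005 BP reserve figure is only about 225 million tons.
print(years_remaining(45_000))   # ~900 years, from the resource
print(years_remaining(225))      # ~4.5 years, from the BP reserve
```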

The predictions of the imminent death of the Coal Industry are likely thus to be somewhat premature.

Now let me close by addressing two other points. One of the developments that moved so much of the British reserve into the resource column was that, with the advent of larger mining equipment, it became much cheaper and simpler to mine coal from the surface deposits of places such as Wyoming or Queensland than to mine it from underground. When the basic technology is not much more complex than using larger and larger shovels to dig the coal out of the ground, there is, as yet, no need for the more complex technologies that might, at greater expense, turn more of the coal into a reserve rather than a resource. The world has more than enough coal, and enough coal producers, that its price is largely kept down by competition. (Technology was the second part of my ASPO paper, though I will forego going down that part of the argument in this post.)

Consider, for example, that until recently Botswana found it cheaper to buy the power that it needed from South Africa rather than expand its own coal-fired power system. Then South Africa decided it needed that power itself, and Botswana was thrown back on its own resources. It is expanding its sole coal mine and installing additional power stations to raise generation from 120 MW to 820 MW. In the process it has been established that the coal deposits in the country may well total up to 200 billion tons. However, since there is not that much demand as yet, and Botswana is a land-locked country, only 17 billion tons are currently counted as a reserve. There was a strong Chinese presence during my visit there, however, and the situation may therefore change.

I won't comment much on the growth of coal in China, which is moving toward consuming about half the world's production; Euan Mearns has just covered that in a much better way than I might. But I would add some thoughts to his post.

Firstly the Chinese have been building a lot of coal-fired power stations, and are unlikely to have built any of them without an assured supply of coal for each. Secondly this is not the only market for coal in China. I was in Qinghai province, over by Tibet. The province gets most of its power from hydro, but uses coal to power the brickworks that are ubiquitous in the region, as dwellings are being converted from mud-brick to fired brick.

China has a domestic problem in keeping the many regions happy with the central government, and this is causing it to put in massive amounts of infrastructure to allow access to all regions of the country. It is at a scale much greater than anything I have seen anywhere else, even in remote regions such as some of those I travelled through. Rail and truck transport of coal are not the only ways it can be shipped. Coal pipelines are an alternate way of doing it, but oddly, whenever that technology reaches the point of serious discussion, freight costs seem to come down, at least temporarily. I do not, therefore, see coal supply to the various power plants being as great an issue over the longer term as others foresee.

My overall conclusion is, therefore, that there is plenty of coal. The price is kept down by its international availability, and because it is so ubiquitous I anticipate that, as oil becomes more expensive, the nations of the world will increasingly move to using this available reserve.

Saturday, November 20, 2010

Vermont combined temperatures

In two of my earlier posts on Maine, I covered both the homogenized temperature records for the state and those from the Time of Observation corrected (TOBS) raw data. Thus, by now looking at the temperature variation over the past 115 years for Vermont, I will have largely captured the variations in New England, having previously also looked at Massachusetts and New York.

There are eight USHCN stations in Vermont, and according to Chiefio’s list GISS no longer uses any of the information from stations in this state as part of their network, so I will eliminate that part of the analysis this week. As previously, because of the better correlations that are achieved before the values are “homogenized”, I will report using the TOBS data, rather than the “mean temperature” values that USHCN reports.

The only problem in acquiring the data came with South Lincoln, which is small enough that it doesn’t apparently appear in city-data. Looking at other sites from a Google search, and after a quick look through Google Earth:


It turns out that Lincoln is just over the hill, and so I am going to use the population of that town (1,214) to include South Lincoln.

With only 8 stations, and only one town above 10,000 (Burlington), this is, by GISS definition, definitely a rural state. (They count towns of less than that number as rural, without further breakdown by population.)

So, turning to the graphs: given this relatively small, and somewhat narrower, sample range than in some of the other states, there are several differences in the correlations relative to the other states.

Firstly when the overall temperature profile of the state is examined:


The temperature increase is 1.7 deg/century, which is slightly higher with the TOBS data than with the homogenized values (1.4 deg/century); the hottest year was 2008.

Examining the effects of geography on the numbers, for the first time the temperature shows sensibly no correlation with latitude:


While, in contrast, there is a reasonably good correlation with longitude.


The correlation with elevation remains strong:


In this case considerably stronger than either of the above two.

Vermont is some 160 miles long and 80 miles wide (71.47 to 73.43 longitude; 42.73 to 45 latitude); the mean elevation is 1,000 ft (300 m), with the highest point at 4,395 ft. There is only one station, out of 8, above the average elevation for the state.

Given the narrow band of population scatter in the data, it is interesting that there is still a reasonable correlation of temperature with population.


And, in contrast with recent trends in other states, the homogenization has a decreasing effect on temperature over time, though the change is quite small (0.2 deg per century).



Given that it is Thanksgiving break next weekend, there will not be a post on temperatures next Saturday (I will be taking a week off), but I should be back the first Saturday in December – the next question to answer being, having now gone right across the country, where do I go next?

Wednesday, November 17, 2010

Me, This Week in Energy, and why Halliburton might refuse EPA

This afternoon I was an invited guest on “This Week in Energy” with Nikki Gordon-Bloomfield and Bob Tregelus. Among other things we talked about the fracking process that is being used to help produce natural gas from shales such as the Marcellus and Haynesville. In the course of the discussion I was asked why, of the nine fracking companies that the EPA asked for their formulae, only Halliburton had refused the request. Bob pointed out that they were going to be subpoenaed and thus would have to give up the information anyway. I discussed some of the problems of stimulating a well with hydraulic fracturing, both real and less so, on this site last March, when the public perception of the technology began to change.

For the possible explanation I am going to give, you need to know that this is purely supposition, and the chemicals that I am going to mention are put forward out of my own head, as it were. I have no real clue as to why Halliburton are acting the way that they are, and am only building a hypothesis that may bear, at best, some slight and approximate relation to the truth.


In the evolution of the technology that has made production of the gas shales possible, several different technologies had to be developed. The rock (which is actually a mudstone) has a very poor natural permeability. That is, it is very difficult for fluid to flow through the rock, because the passageways are very narrow and not very well connected. Thus normal vertical wells would not produce very much oil or gas when drilled through the shale reservoir, certainly not enough to be profitable. The first beneficial development was, therefore, the ability to drill horizontally after the well had reached the reservoir depth. Once this was possible, the length of the well exposed to the reservoir (which might be only 30 ft thick) would increase from that 30 ft to perhaps 10,000 ft. Since the amount of fluid flowing into the well is a function of the length of the exposed well in the rock, where the reservoir rock has a normal permeability this alone is enough to increase production significantly (as, for example, in the new wells in Saudi Arabia).
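As a rough rule of thumb, steady inflow to a well scales with the length of wellbore in contact with the reservoir, other things being equal, so the gain from going horizontal can be sketched very simply (a toy calculation, not a reservoir model):

```python
# Toy illustration: inflow is taken as roughly proportional to the
# length of wellbore exposed to the reservoir, all else held equal.
vertical_exposure_ft = 30        # seam thickness crossed by a vertical well
horizontal_exposure_ft = 10_000  # lateral length of a horizontal well

gain = horizontal_exposure_ft / vertical_exposure_ft
print(f"Exposure gain: ~{gain:.0f}x")   # prints ~333x more contact length
```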

However when the rock has a very poor permeability even the long wells will only very slowly accumulate fluid from the surrounding rock, since there are no easy passages to the well through that rock. Thus the next benefit that was needed was the ability to crack the rock around the well. This is known as hydraulic fracturing or hydrofracking for short. In modern wells, by isolating and then pressurizing different segments of the well in turn, these cracks can be created (when the pressure inside the well exceeds the rock strength) at regular intervals (say 30 to 120 ft apart) along the length of the borehole.

The cracks are controlled in length (since if they grow outside the reservoir, fluid can drain away through the far end of the cracks rather than to the well). But the problem is that once the crack is made, the pressure inside the well is lowered and the equipment moved to the next segment. Without any other changes, as the pressure comes off, the crack will close back up, and there will not be much gain from the effort. So, to keep the crack open, what the industry calls a proppant (you or I might just call it a carefully sized sand) is mixed with the fracking fluid before it is injected into the well.

As a result, when the cracks open in the rock, and the fracking fluid flows into the crack, the sand is carried with it, and is then trapped in the crack, holding it open after the pressure is lowered. A passage then exists for the gas or oil to travel to the well and production of most of the rock volume becomes possible.

That was the state of play when the development of the gas shale deposits began. It had not been going on very long, however, when it was noticed that the sand was not flowing easily into the fractures, and without enough sand being carried far enough back into the cracks, production wasn't nearly as good as it should have been.

At this point another development was needed. This came about when an additional chemical – what is known as a long-chain polymer (typically a polyacrylamide) – was added to the fracking fluid. These additives are known as Friction Reducing Agents (FRAs) because they tend to make the water cohere a little, and they create extremely slippery surfaces on whatever they coat. By adding these FRAs to the fracking fluid, the crack walls became slipperier and the sand particles could thus travel deeper into the cracks, holding them open more effectively and increasing gas production. The fluids were given the generic name “slick water”, so that the current state of the art is a horizontal well that has had a multi-fracture, slick-water hydrofracking operation run on it.

But the problems of the wells are not over. As I noted in my post yesterday, the mudstones contain a significant amount of clay. And the problem when clays get wet is that they soften, and clay particles can break away from the wall of the fracture (slaking). The problems are not consistent across the different gas shale deposits, since each shale is made of a different set of constituent rock types and clays. But the overall problem that is now in evidence, as Art Berman has commented a number of times, is that the wells are losing production faster and earlier than predicted, so that they cannot meet the overall targets that make a well profitable. Instead of lasting perhaps a decade, the wells are losing perhaps 60% of their flow in the first year, and are no longer worth operating after maybe three years.
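To put that decline in simple terms: if a well loses 60% of its flow in the first year, and the decline is treated as exponential (real shale wells are usually fitted with hyperbolic curves, so this is only a sketch), very little is left by year three:

```python
import math

# Assume simple exponential decline q(t) = q0 * exp(-D * t).
# A 60% loss in year one fixes the decline constant D.
D = -math.log(1 - 0.60)   # ~0.92 per year

for year in range(1, 4):
    print(f"Year {year}: {math.exp(-D * year):5.1%} of initial flow")
# By year 3 the well is down to ~6% of its initial rate, which is
# why such wells may no longer be worth operating.
```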

With all that as background, here is a hypothesis to explain Halliburton's actions. It is quite possible that the well failures are due to clay failure in the shale reducing the crack effectiveness. Once the fracking fluid has cracked and wetted the rock, the clay can do this through long-term softening (which allows the walls of the crack to fold around the proppant particles and close the crack as the walls move in), or by simply swelling into some of the crack space, with the same effect. Alternately, the clay particles may slake and break away from the walls of the crack and, over time, build up small dams along the crack path, again blocking the fluid flow through the crack. Any one of these mechanisms would explain the production falls that are being seen in the industry.

So let's say that Halliburton has realized the problem and, merely for the sake of a discussable solution, changes the polymer that it uses from a pure polyacrylamide (PA) to include polyethylene oxide (PO). One thing that PO does, at much lower concentrations than PA, is stop the fracking fluid from wetting the shale and interacting with the clay. Because it is (or at least was when we did this) much more expensive than PA, there is not normally any reason to use PO in the fracking fluid.

But let us say that Halliburton have tried this, and it works. Because it is a step change in the process (in the same way that horizontal drilling, fracking, and slick water each were, in turn), the company selling the new idea has a tremendous commercial advantage. It can promise you that your well will stay in production long enough for you to make a profit, while the competition cannot.

The world of hydrofracking contractors is small and engineers move around, so that commercial advantage does not last very long, and word gets out as to how it was done. But that takes time: first to find out what is causing the problem, then what the answer is in general, and then what the answer is in detail. Each of those steps might take a competitor a year. That gives you three years of advantage, during which you can charge higher rates, and possibly put some of that competition out of business.

The problem is that if the competition sees that you have put PO in your fluid instead of PA, then they can immediately go and look up what difference that makes to the fluid. Knowing that it stops wetting immediately gets them past stages one and two, and cuts the term of your commercial advantage from three years to one.

Would you want to give that up if, by lawyering and all those fancy tricks they get up to in Washington, you could get the date by which you have to release the contents postponed by at least a year? Likely not. And since dragging out the process extends the period of your commercial advantage, the longer you can keep kicking the ball down the street, the greater the benefit.

And I reiterate: this is purely a hypothesis that I came up with, and I have nothing that would suggest it has any connection to reality.

Tuesday, November 16, 2010

An update on fuels from shale, and The Giant Toaster

This afternoon I was able to drop in on a talk by Jeremy Boak, who runs the Center for Oil Shale Technology and Research (COSTAR) in Golden. He was speaking to the topic of “Finding Billions in Ancient Mud? Shale gas, shale-hosted oil and oil shale.” It is likely to become an increasingly contentious subject over the next few years as oil prices go higher, and as the oils, kerogens and gases within the shales of the world become more economically viable. Jeremy drew attention to the annual Oil Shale Symposia that he has recently been chairing at the Colorado School of Mines. He provided the site where the papers from the last four of these Symposia (the 29th, 28th, 27th and 26th) are freely available; papers from the 30th, held in October, have not yet been posted.

His talk began with a review of the geology and the reality of the definition of the deposits – he quoted Walter Youngquist
"Bankers won’t invest a dime in organic mudstone, but find oil shale an entirely different matter.”
He then pointed out why the proper name is mudstone, but noted that the MSM, the bankers, and those seeing to their interests have been quite willing to allow the name change. Numbers of around 1.5 trillion barrels of kerogen in the shales of Colorado, about 1.3 trillion in Utah, and 1.3 trillion in Wyoming give the United States, in those states alone, about 4 trillion barrels of oil. None of this, at present, is being produced in any significant volume. However, he noted that a production rate of 3 mbd is equivalent to about a billion barrels a year (i.e. the current world production of around 87 mbd would be around 29 billion barrels a year). And a billion barrels is about the quantity of kerogen that can be found in one square mile of the basin. Normally reserves are only considered where the organic carbon content is about 30% (or roughly 70 gallons/ton).
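The rate-to-volume conversion behind those numbers is worth making explicit (a quick check; note that the quoted 29 billion comes from rounding 3 mbd to exactly a billion barrels a year):

```python
def mbd_to_gb_per_year(mbd):
    """Convert million barrels/day to billion barrels/year."""
    return mbd * 365 / 1000

print(mbd_to_gb_per_year(3))    # ~1.1 Gb/yr, "a billion barrels" in the talk
print(mbd_to_gb_per_year(87))   # ~31.8 Gb/yr exactly; the quoted ~29 Gb/yr
                                # treats 3 mbd as exactly 1 Gb/yr (87/3 = 29)
```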

The only places currently producing oil from the shale are Estonia (about 8,000 barrels a day (bd)), Brazil (about 4,000 bd), and China (about 10,000 bd). However, he also noted that China is building about 100 retorts a year to convert the kerogen to oil on the surface, and these retorts are located around China near significant deposits. (Typically the shale must be heated to over 300 degC before the kerogen will turn to oil and be released from the rock – though the product of this treatment is generally thicker and quite heavy.)

The potential for the continued production of liquid fossil fuels is encouraging other countries to take a serious look at developing their own resources. He cited Jordan, which is estimated to have 100 billion barrels, and testing is now going on in Morocco. However, he cautioned that production levels would not rise rapidly, and so this cannot be accepted, in the short term, as the answer to the coming shortages. He provided the following predictive plot (which I copied from an earlier talk), showing, over a 60-year period, the early development of conventional oil in the US against that of the tar sands of Canada and the oil shales of the US.

(From Boak)

The hottest developments, however, are not in shale oil but rather in shale gas, for which the most active search at the moment appears to be in Poland. ConocoPhillips has apparently drilled two wells and is looking at a third, with over 900,000 acres now leased. Not that the road to success in that country is predicted to be easy.

The talk moved on to describe the technological breakthroughs that have made oil and gas recovery from these shales possible. While part of this has been the ability to drill long horizontal holes, which is of relatively recent origin, the evolution of the multi-stage fracking process to put a multiplicity of cracks out from the well into the reservoir has been the real key. And hydrofracking has been developing and evolving for over 50 years.

It is that hydrofracking that has led to one of the more interesting novel approaches being developed by Exxon Mobil now in Colorado. While officially this is known as the Electro-frac process, almost everyone now is calling it the Giant Toaster.


Moving one step beyond the conventional horizontal drilling and then fracking of the rock, in the oil shale the fractures will be filled with a conductive material, so that, after the pressure is removed and the crack partially closes, the pressure squeezes this proppant (calcined coke and Portland cement) together to form a conductive sheet. By passing current through this sheet (as in your toaster) the surrounding rock can slowly be brought up to temperature, and then cooked slowly to transform the kerogen to oil.
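To get a feel for the energy the “toaster” must deliver, here is a back-of-envelope estimate of the sensible heat needed just to bring the rock up to retorting temperature. The property values are generic assumptions for illustration, not Exxon Mobil figures:

```python
# Sensible heat to raise one tonne of oil shale to retorting temperature.
# Property values are generic assumptions, not Exxon Mobil's numbers.
mass_kg = 1000.0      # one tonne of rock
c_p = 1.0             # kJ/(kg K), a typical assumed value for shale
delta_T = 350 - 20    # heat from ~20 C up to ~350 C

energy_kwh = mass_kg * c_p * delta_T / 3600   # 3600 kJ per kWh
print(f"~{energy_kwh:.0f} kWh per tonne, before any losses")   # ~92 kWh
```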

Exxon Mobil are encouraged in this by the results that Shell have reported for their in-situ retorting (which I described in my series on oil shale, listed at the top right-hand side of the BTE site). As I noted in the post on the future of oil shale, when Shell carried out their slow in-situ heating of the oil shale they were able to draw off a clear, golden oil which could easily be separated into gasoline, jet fuel and diesel.

The updates on the Exxon Mobil process show that they have been able to load the conductive material into the fractures as planned, and have been able to then run current through the material and heat the rock. They have not yet done the sustained higher temperature heating that will be required for full transition of the kerogen.

The Exxon Mobil process has the benefit of being able to space the fractures further apart, at some 125 ft, compared with the 25 ft spacing planned for the field tests of the Shell process.

Monday, November 15, 2010

Chesapeake on 60 Minutes

I had forgotten that there was a story coming up on 60 Minutes last night about those folk who have become millionaires from the gas in some of the shales around the country. Fortunately the folks at Go Haynesville Shale have put it up on their front page.

It's worth a visit, and you should also watch the minute of extras in the video above the main story, which shows how those with wealth can make a little more.

Sunday, November 14, 2010

Highwall Mining of coal

There is an aspect of coal mining that gets relatively little attention, even though it is growing in popularity. It is with this method that I will conclude this series on mining, and its precursor on oilwell production. Most mining can be divided clearly into either surface mining, where the rock over the coal is removed and the coal taken away before the land is restored, or underground mining, where some of the coal has to be left in place to hold the roof up. But what happens in the middle?

There is an intermediate between the two main methods of mining. When a surface mine has produced coal from a seam that is steadily getting deeper, it reaches a point where stripping the rock from above the coal is no longer economical. So what to do? The answer is what is known as highwall mining. When the last economic cut has been made in a surface mining operation, the coal seam is still exposed, lying under the rock and overburden in what is known as the highwall of the mine. So, before the land is reclaimed and that final cut along the face is filled back in, it is sometimes economic to use different mining machines to mine into that exposed coal face in the highwall – hence the name.

Coal seams mined this way are not going to be worked to anything like the distance into the coal that a full underground operation would develop. Rather, a machine is mounted at the exposed face of the coal and advances into the seam, mining the coal and feeding it back out, so that it can be collected and hauled away.

The early machines used for this were often augers, and in this picture the highwall had three seams of coal in it (at the current bottom of the wall, and along the two sets of higher augered holes).

Augered seams in a highwall (Bundy Auger)

The coal auger is not dissimilar in shape to the auger used in making large holes in wood, or to those used to drill large vertical surface holes, except that in this case the holes are drilled horizontally forward into the seam, and the auger might be more than 2 ft in diameter. Once the picks on the cutting head have mined the coal from the solid, the broken coal is fed back along the scroll of the auger, as it rotates, to the surface, where it is loaded, via the conveyor you can see in the above picture, into trucks. The scrolls are clearer in the picture below:

Auger working in Australia

The friction generated as the scroll rubs against the walls of the drilled hole requires that the auger be pushed with increasing force as the head mines deeper into the seam. This ultimately limits production from a single hole, since there is only a certain amount of force that can be applied through the auger string before it starts to deviate from drilling straight and, as a result, the head may begin to penetrate either the floor or the roof rock around the coal, which is not to be desired. Back when I did some analysis of these, some decades ago, that limit was less than a couple of hundred yards.

However, as the top picture shows, the holes can be placed fairly close together, though again care has to be taken that the auger doesn't punch through into a previous hole, since it is the hole walls that help get the coal back out.

Augered holes showing the narrow pillars between them (Bundy Auger )

The holes, once augered, can be backfilled with mine waste if that is thought viable.

The difficulty in steering the auger head, and thus the limitation on mining depth that this gives, has led to the development of different pieces of mining equipment that can do the same job, though hopefully more effectively. One of these is the highwall miner (2:44-minute video here – the vertical auger is used for bracing posts, before mining begins).

This is a much more sophisticated machine, and replaces the simple auger scroll with a miniature version of a continuous miner, which I have previously described being used underground. The resulting support machinery is significantly larger, although the basic functions – mining the coal, and feeding it back out to a cross-conveyor that then loads into trucks – remain the same.

Overview of Mining Machine (Bucyrus )

The mining head can oscillate up and down as it moves forward into the coal, thus mining seams that are larger in size than the machine itself (something that the auger does not do as well).

Highwall mining cutter head (Bucyrus )

The box conveyor sections provide a more robust platform for pushing the head forward into the coal, and contain two screw conveyor segments that carry the coal itself back to the cross conveyor.

Conveyor/push elements for the miner (the scroll feed conveyor segments can be seen inside the central box) (Bucyrus )

A machine of this type can produce between 40,000 and 120,000 tons of coal a month, depending on the conditions in which it is operating. It needs a crew of 3 or 4 folk, and averages 30 – 40 tons per man-hour. Remember that it must reposition at intervals.
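Those figures are roughly self-consistent, as a quick check shows (the monthly cutting hours are my assumption, standing in for a two-shift schedule net of repositioning time):

```python
# Rough consistency check on the quoted highwall-miner figures.
tons_per_man_hour = 35          # mid-range of the 30 to 40 quoted
crew = 4
cutting_hours_per_month = 400   # assumed utilisation, net of repositioning

monthly_tons = tons_per_man_hour * crew * cutting_hours_per_month
print(f"~{monthly_tons:,} tons/month")   # ~56,000, inside the 40,000-120,000 range
```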

Highwall miners of this type can advance much further into the highwall than augers, and machines are available that can mine in as far as 1,600 ft, achieving production levels (in Appalachia) of 300,000 tons a month. There is a video of the Addcar system, which is slightly different from the two described above, here.

The Addcar system, which uses an open conveyor concept to bring the coal out.

As I mentioned in an earlier post, this is the last segment of the coal mining series that I have been posting on Sundays. After a short break for Thanksgiving, I hope to begin a new series of Tech Talks, but more focused on nations from which we get our fuel, rather than on just the wells and mines that it comes from.

Saturday, November 13, 2010

New Hampshire combined temperatures

Well there are still a couple of states to do in the North-East, if I am to make this comprehensive, before I turn around and look elsewhere. So I am going to do New Hampshire today. It has only 5 USHCN stations and 1 GISS station at Concord, which has a record since 1880. (And I updated the bottom of this post on Sunday).

As is usual practice, I'm going to tabulate the data from these stations first, and then comment as I generate the graphs. It turns out, however, that the third site in the USHCN set, at First Connecticut Lake, is “nestled in the quiet town of” Pittsburg, NH, so I will use that population for the table. (It is still only 808 folk.)

With only a few sites the conclusions are not, were they to stand alone, necessarily that reliable, but they are worth looking into to see how they fit with the other states. Firstly, therefore, how does the USHCN data compare with the GISS station? New Hampshire has a mean elevation of 1,000 ft and goes all the way up to Mount Washington at 6,288 ft. The Concord GISS station is at about 400 ft, and two of the USHCN stations are above the mean. I will show the plots for the TOBS data and comment on the differences with the homogenized USHCN values.

The first question is how well does the GISS value reflect the average for the state? Well on average, over the century or more, it is 2.6 degrees higher.



There is a very slight increase in the difference over time. For reasons that will become obvious with the last slide of this post, if the homogenized USHCN data is used, rather than the raw data, then the difference significantly reduces over the time period.

Looking at how temperatures have changed in the state since 1895, the TOBS data shows a slight increase (0.1 degrees in a hundred years). The homogenized data shows an increase of 1.6 degrees over that same time interval.


Turning to the geographical factors controlling temperature in the state, the regression coefficients are helped by the low number of readings.

First there is latitude:

With homogenization, the regression coefficient for that data drops to 0.85, and the slope changes to -4.2.

Even with the small number of stations it is not possible to correlate well with longitude:


And in this case there is no change in the correlation whether TOBS or homogenized data is used. Looking at elevation, again the small number of stations helps the correlation.


The homogenized result has a slightly lower regression of 0.92.

In terms of population, bearing in mind that this is a relatively sparsely populated state, the regression is better than usual.


The state is thus, despite the low number of stations, fairly consistent in regard to the shape of the curves that are coming from the data. Including this one:


The adjustment over the century is on the order of 1.8 degrees.

There are apparently some other problems with some of the USHCN stations, but those deal with location and tie into estimating population from local light levels. Since we aren't going that route I won't pursue them here, but I mention the point because there may be other problems I have not caught.

And for those wondering what all this is about: down on the right-hand side I list the states I have looked at so far. The initial place I started with was Missouri, so you might want to look at that state first for more of an explanation.

Addendum - Kinuachdrach has asked a question in the comment below and I thought it interesting to plot (for the 13 states I have looked at so far) the value of the regression coefficient as a function of gradient for the temperature elevation plots.

Here it is:

Not quite sure what drives the outliers (Illinois and Indiana).

Thursday, November 11, 2010

The Chinese diesel situation

Well, it appears that we are about back up to $90-a-barrel oil, which is the top end of the range that the Saudi Oil Minister said, the other day, he was comfortable with. However, I suspect that were it to go higher, it would not change current Saudi plans on oil production.

It might, however, draw more attention to the problems of dealing with an increasing demand for oil in the face of a limited ability to meet that demand. OPEC have just announced that they see next year's average demand at 87 mbd, up about 120,000 bd from their estimate last month. Whether that projection proves realistic will depend on what happens in Asia.

Refineries in Asia have been faced with increased demand from China, where there have been recent shortages of diesel across the country, likely leading to the price increases that have just been imposed. Part of the cause is that refineries in China were reported to be losing up to $18 a barrel refining oil they were buying at $80 a barrel. (Diesel was at $2.43 a gallon, and prices have been raised 10%, which, given the current oil price, still leaves the refineries making a loss.) Nevertheless, Chinese government data shows that refining reached a record volume (8.8 mbd) in October, at the same time that overall Chinese crude imports fell for the month. In light of the increased demand it is expected that this month's production will be even higher. Sinopec will also import feedstock for ethylene production, so that refineries that were being used to supply that feed can instead concentrate on making diesel.

The unexpected size of the problem has been caused by the Chinese government trying to lower electricity consumption to meet a national target for energy savings by the end of the year. As a result of those decisions, coal-fired power fell back in October to the levels of a year ago, after an earlier increase. Because of these cuts in power, those who still need it (including metal production plants) have switched to diesel generators, with the increase in demand overwhelming the available supply. (Some, though, have had to close, including 100,000 tons of aluminum smelting capacity.) Thus the situation may be transient, and improve with electricity supply after the end of the year, though that is not necessarily a given at this point, since it is dependent on government policies. (And hidden in that discussion has been Chinese refinery output of gasoline, which also hit a new record this month.)

For countries in Asia outside China the situation is reversed. With the profit on refining Dubai crude at over $14 a barrel in Singapore, refineries around the region are seeking to increase imports of crude. (China has until recently been a net exporter of diesel, but demand has grown at 13% this year, potentially switching China to being a net importer, though that is debatable.) There are thus some strains evident in the current ability to match existing demand.

In addition to getting additional supplies of crude from Russia, China has also increased crude imports from Iran, helping that country at a time when gasoline rationing and sanctions are being blamed for an 18% drop in internal gasoline demand.

In the United States there has been an increase in distillate demand according to the latest TWIP that is somewhat greater than usual:

(EIA)

The heating season is, however, anticipated to be in general warmer than usual (sorry, Northeast), reducing heating fuel needs.
Fuel expenditures for individual households are highly dependent on local weather conditions, market size, the size and energy efficiency of individual homes and their heating equipment, and thermostat settings. The National Oceanic and Atmospheric Administration (NOAA) projects population-weighted U.S. heating degree-days will be about 4 percent lower than last winter. However, heating degree-day projections vary widely between regions. For example, NOAA projects that the South, a large market for propane, will be about 17 percent warmer than last winter, while the Northeast will be about 4 percent colder. The largest residential propane consuming region is the Midwest, where 8 percent of the homes heat with this fuel. Projected temperatures in this area are 2.2 percent warmer than last year. EIA projects Midwest propane prices to increase by 18 percent this winter while consumption in that region falls by 2.3 percent, resulting in an expenditure increase of 15 percent.
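The Midwest expenditure figure in that quote follows directly from compounding the projected price and consumption changes:

```python
# EIA's Midwest propane expenditure projection, obtained by compounding
# the projected price and consumption changes.
price_change = 0.18          # prices up 18%
consumption_change = -0.023  # consumption down 2.3%

expenditure_change = (1 + price_change) * (1 + consumption_change) - 1
print(f"{expenditure_change:+.0%}")   # ~ +15%, as EIA projects
```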

Oh, and in case you missed it, following my piece on explosives on Sunday, just to prove that not everything in life goes perfectly, here is a chimney demolition falling the wrong way.

Tuesday, November 9, 2010

The BP Deepwater Horizon Oil Spill and Offshore Drilling Commission

Courtesy of MoonofA, I am able to tell you that the final act in the closing of the Deepwater Horizon well took place this past weekend: the placing of a steel cap on top of the well, after the top plugs had been put in place and the well was ready to be abandoned.



And on Monday the Oil Spill Commission began two days of public hearings.

As part of this, the Oil Spill Commission has issued its preliminary conclusions which are:
• Flow path was exclusively through shoe track and up through casing.
• Cement (potentially contaminated or displaced by other materials) in shoe track and in some portion of annular space failed to isolate hydrocarbons.
• Pre-job laboratory data should have prompted redesign of cement slurry.
• Cement evaluation tools might have identified cementing failure, but most operators would not have run tools at that time. They would have relied on the negative pressure test.
• Negative pressure test repeatedly showed that primary cement job had not isolated hydrocarbons.
• Despite those results, BP and TO personnel treated negative pressure test as a complete success.
• BP’s temporary abandonment procedures introduced additional risk.
• Number of simultaneous activities and nature of flow monitoring equipment made kick detection more difficult during riser displacement.
• Nevertheless, kick indications were clear enough that if observed would have allowed the rig crew to have responded earlier.
• Once the rig crew recognized the influx, there were several options that might have prevented or delayed the explosion and/or shut in the well.
• Diverting overboard might have prevented or delayed the explosion. Triggering the EDS prior to the explosion might have shut in the well and limited the impact of any explosion and/or the blowout.
• Technical conclusions regarding BOP should await results of forensic BOP examination and testing.
• No evidence at this time to suggest that there was a conscious decision to sacrifice safety concerns to save money.

Needless to say, these conclusions have not all been met with complete agreement; even the first has been challenged by Halliburton. The report on the testing of the cement is revealing:
We asked Halliburton to supply us samples of materials like those actually used at the Macondo well so that we could investigate issues surrounding the cement failure. Halliburton provided us off-the-shelf cement and additive materials used at the Macondo well from their stock. Although these materials did not come from the specific batches used at the Macondo well, they are in all other ways identical in composition to the slurry used there. Chevron agreed as a public service to test the cement slurry on behalf of the Commission. Chevron employs some of the industry’s most respected cement experts, and it maintains a state-of-the art cement testing facility in Houston, Texas. Halliburton agreed that the Chevron lab was highly qualified for this work.

We attach Chevron’s report of its laboratory tests, and we have invited one of its experts to discuss that report with you at the public hearing on November 9.

Chevron’s report states, among other things, that its lab personnel were unable to generate stable foam cement in the laboratory using the materials provided by Halliburton and available design information regarding the slurry used at the Macondo well. Although laboratory foam stability tests cannot replicate field conditions perfectly, these data strongly suggest that the foam cement used at Macondo was unstable. This may have contributed to the blowout.

Halliburton has stated publicly that it tested the Macondo cement before pumping it on April 19th and 20th, and that its tests indicated the cement would be stable. When Chevron informed us of the preliminary results of its tests, we asked Halliburton to give us all of the data from all tests it had run on the Macondo cement slurry.

The documents provided to us by Halliburton show, among other things, that its personnel conducted at least four foam stability tests relevant to the Macondo cement slurry. The first two tests were conducted in February 2010 using different well design parameters and a slightly different slurry recipe than was finally used. Both tests indicated that this foam slurry design was unstable.

Halliburton provided data from one of the two February tests to BP in an email dated March 8, 2010. The data appeared in a technical report along with other information. There is no indication that Halliburton highlighted to BP the significance of the foam stability data or that BP personnel raised any questions about it. There is no indication that Halliburton provided the data from the other February test to BP.

Halliburton conducted two additional foam stability tests in April, this time using the actual recipe and design poured at the Macondo well. We believe that its personnel conducted the first of these two tests on or about April 13, seven days before the blowout. Lab personnel used slightly different lab protocols than they had used in February. Although there are some indications that lab personnel may have conducted this test improperly, it once again indicated that the foam slurry design was unstable. The results of this test were reported internally within Halliburton by at least April 17, though it appears that Halliburton never provided the data to BP.


As was brought out in the hearing, the resulting protocol that was implemented for the abandonment of the well at that time also put additional pressure on the cement.
BP’s temporary abandonment procedures at Macondo could have introduced additional risks, such as putting more pressure on Halliburton Co.’s cement job by removing mud and replacing it with seawater, setting the surface cement plug 3,000 ft deep, or deciding not to run a cement bond log test immediately, he continued. 

“What is of additional concern for us is that the procedures for temporary abandonment were changing up until the very last minute,” said Grimsley. “It is not clear to us why decisions on these procedures were changing in the days before the blowout. You have to make choices on the fly when conditions are changing offshore, but this apparently was not the case here.” There also was no indication that anyone at the rig called to shore in the three hours after the negative pressure test ended and the well blew out and said that test readings were odd, he indicated. 



Bartlit said BP’s decision to halt drilling nearly 2,000 ft short of the well’s original intended depth may have been based on concern that it had to keep mud and cement from leaking into adjacent formations, which could have fractured from unusually high pressure in the well. “They stopped because they were interested in well integrity and safety,” he said. “Surprises in the reservoir can cause you to make changes which can affect what happens later. As near as we can tell, talking to experts, BP did the right thing here.”

Halliburton have issued a comment on the testing of the cement.
Halliburton has only recently received and is continuing to review the results, which it believes raises a number of questions. Halliburton is issuing this press release to provide information about the content and its preliminary views regarding Chevron’s cement testing report and the letter.

Halliburton believes that significant differences between its internal cement tests and the Commission’s test results may be due to differences in the cement materials tested. The Commission tested off-the-shelf cement and additives, whereas Halliburton tested the unique blend of cement and additives that existed on the rig at the time Halliburton’s tests were conducted. Halliburton also noted that it has been unable to provide the Commission with cement, additives and water from the rig because it is subject to a Federal Court preservation order but that these materials will soon be released to the Marine Board of Investigation. Halliburton believes further comment on Chevron’s tests is premature and should await careful study and understanding of the tests by Halliburton and other industry experts.

With respect to Halliburton’s internal tests, the letter concludes that “only one of the four tests” showed a stable slurry. Halliburton noted that two of those tests were conducted in February and were preliminary, pilot tests. As noted in the letter, those tests did not include the same slurry mixture and design as that actually used on the Macondo well because final well conditions were not known at that time. Contrary to the letter, however, the slurry tested in February was not “a very similar foam slurry design to the one actually pumped at the Macondo well….” Additionally, there are a number of significant differences in testing parameters, including depth, pressure, temperature and additive changes, between Halliburton’s February tests and two subsequent tests Halliburton conducted in April. Halliburton believes the first test conducted in April is irrelevant because the laboratory did not use the correct amount of cement blend. Furthermore, contrary to the assertion in the letter, BP was made aware of the issues with that test. The second test conducted in April was run on the originally agreed upon slurry formulation, which included eight gallons of retarder per 100 sacks of cement, and showed a stable foam.

BP subsequently instructed Halliburton to increase the amount of retarder in the slurry formulation from eight gallons per 100 sacks of cement to nine gallons per 100 sacks of cement. Tests, including thickening time and compressive strength, were performed on the nine gallon formulation (the cement formulation actually pumped) and were shared with BP before the cementing job had begun. A foam stability test was not conducted on the nine gallon formulation.
Their release concludes:
Well logs and rig personnel confirm that the well was not flowing after the cement job. BP and/or others, following the misinterpreted negative tests conducted after the cement job, proceeded to displace mud in the production casing and riser with lighter seawater, allowing the well to flow. Given these numerous intervening causes, Halliburton does not believe that the foam cement design used on the Macondo well was the cause of the incident.

The Commission web site has some beautifully rendered animations, among others, of the drilling process. However, apart from taking over an hour for me to download, the drilling animation shows, among other things, the drill bits creating holes larger than they are, with the hole beyond the cased section drawn at the same size as the well before the casing was inserted. This does not happen: the well continues at the bit diameter, which is itself smaller than the internal diameter of the casing inserted into the hole above that interval.

And one other note: it appears that, even though the moratorium on drilling has been lifted, no permits are available.
Ensco Offshore claims that since the ban was lifted Oct. 12, the government has not issued a single permit that would allow the resumption of any previously suspended drilling activities.

The government doesn't seem to dispute that allegation, saying in a late Monday filing that it must ensure applications meet regulations toughened after the Gulf of Mexico oil spill.