Friday, December 28, 2012

Waterjetting 4c - hoses and tubing

One of the first decisions to make in connecting a pump to a nozzle is the size of the high-pressure line that will take the water from the pump to the cutting nozzle. The choice has become a little more involved as ultra-high-pressure hose has come on the market, since this can now be used at pressures that once could only be served with high-pressure tubing. However, at higher pressures the flexibility of hose is reduced, both because of the pressure itself, and because of the layers of protection that are built into the hose structure.

Much of the original plumbing, in the earlier days of the technology, used steel tubing with a 3/16-inch inner diameter and a 9/16-inch outer diameter. One reason for this was that, at this diameter, the tubing could quite easily be bent and curved into spiral shapes. And that, in turn, made it possible to build some flexibility into an assembly that would otherwise have been quite rigid.


Figure 1. Early cutting nozzle with spiral coils in the high-pressure waterjet feed line to the nozzle.

When cutting nozzles were first introduced into industry, they were fixed in place, because of the rigid connection to the pump. The target material had, therefore, to be fed underneath the nozzle, since it was easier to move that than to add flexibility to the water supply line.


Figure 2. Early slitting operation (courtesy of KMT)

However, because feed stock can vary in geometry, some flexibility in the positioning of the cutting nozzle above the cutting table would allow the jet to do more than cut straight lines. A way had to be found to allow the nozzle to move, and this led to the practice of bending the high-pressure tubing through a series of spiral turns as it brings the water to the nozzle (see Figure 1). That, in turn, allowed a slight nozzle movement. By adding this flexibility to the nozzle, a very significant marriage could then take place between robotics and waterjet cutting.

The force required to hold a nozzle in a fixed location becomes quite small as the flow rate reduces and the pressure increases (at 40,000 psi and a flow rate of 1 gpm the thrust is about 10 lb). The first assembly robots that came into use were quite weak, and as their arms extended, the amount of thrust they could resist without wobbling was small, but, critically, more than 10 lb. And this gave an initial impetus to adding jet cutting heads to industrial robots of both the pedestal and gantry type, to allow rapid cutting of shapes on a target material, such as a car carpet, where the ports for the various pedals and sticks need to be removed.
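The thrust figure quoted above can be checked with a common rule of thumb from the waterjet literature (a sketch, not a substitute for a manufacturer's data): back-thrust in pounds is roughly 0.052 times the flow in gpm times the square root of the pressure in psi.

```python
import math

def nozzle_thrust_lbf(flow_gpm: float, pressure_psi: float) -> float:
    """Approximate nozzle back-thrust using the common empirical
    rule F ~ 0.052 * Q * sqrt(P) (Q in gpm, P in psi, F in lbf)."""
    return 0.052 * flow_gpm * math.sqrt(pressure_psi)

# The case in the text: 1 gpm at 40,000 psi.
print(round(nozzle_thrust_lbf(1.0, 40_000), 1))  # ~10.4 lbf
```

Note how quickly the thrust grows with flow rate: a 20 gpm hand-held gun at the same pressure would push back with over 200 lb.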

But this marriage between the robot and the jet required that the jet support pipeline be flexible, so that it could allow the nozzle to be moved over the target, and positioned to cut, for example, the holes for retaining bolts, without damaging the intervening material.

The pipe had to be able to turn and to extend and retract, within a reasonable range, so that it could carry out the needed tasks. Bending the pipe into a series of loops produced that flexibility.

A single full circular bend in the pipe provides sufficient flexibility that the end of the pipe (and thus the nozzle) can be moved over an arc of about 9 degrees.


Figure 3. Coils on a pedestal-mounted robot allowing 3-dimensional positioning of the cutting nozzle.

A large number of coils was required, since the tubing gains only a very limited amount of flexibility from each turn. For example, if one wanted to stretch the connection by lowering the nozzle, the several coils would act in the same way that the steel in a spring does as it extends. The movement can perhaps be illustrated with the following representation of a set of spirals, with metric dimensions.


Figure 4. Schematic of a series of coils, arranged to allow the nozzle to feed laterally.

Each spiral also allows a slight angular adjustment, and these adjustments add up as more spirals are added to the passage.


Figure 5. Angular movement allowed per spiral. This should not exceed 9 degrees per turn.
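The way these per-coil allowances accumulate is simple arithmetic; the sketch below assumes the roughly 9-degree safe deflection per full turn mentioned above.

```python
def max_deflection_deg(n_coils: int, per_coil_deg: float = 9.0) -> float:
    """Total angular freedom of a coiled feed line, assuming each
    full coil safely contributes up to per_coil_deg of deflection."""
    return n_coils * per_coil_deg

# Ten full turns give the nozzle up to about 90 degrees of swing.
print(max_deflection_deg(10))  # 90.0
```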

While, in many modern assemblies, this may seem a quaint way of solving the problem, back when these systems were first put together it was very hard to find high-pressure swivels that would operate at pressure for any length of time. In those days we had one source that provided a swivel that would run for many hours, provided that all the external forces could be removed from the swivel itself. But the moment an out-of-alignment force hit the swivel it was ruined. In another application we had tested every swivel we could find that would fit down a six-inch diameter hole, and had found one that would run for ten minutes. To finish our field demonstration, where we had to drill out 50 ft horizontally from a vertical access well, we had to continuously pour water onto the joint to keep it cool, and the manufacturer stood by with a pocket full of bearing washers that we replaced every time one started to gall.

But that was over thirty years ago. Now the connections from the pump to the nozzle can flow through ultra-high-pressure hose with a flexibility that we could barely imagine. And ultra-high pressure swivels will run for well over a hundred hours each without showing any loss in performance. It was, however, a gradual transition from one to the other.


Figure 6. Ultra-high-pressure feed to a nozzle, using coils and swivels

There are a couple of additional cautions that should be borne in mind when laying these lines out. While hose is more flexible, it is liable to pulse, and to move slightly on a bearing surface, as the pump cycles. In most places this is not a problem, but if the hose is confined and bent, it may rub against a nearby surface. Over time this can generate heat, and can even wear through the various hose layers.


Figure 7. Worn hose and the scuff mark where it was rubbing on a plate.

There are other issues with hose. Smaller high-pressure lines can kink when used in cleaning operations, and this is a seriously BAD thing to happen; I will discuss that in a future article. Similarly, one must consider the weight of hose, particularly in hand-held operations, where it is important to address hose handling as part of the procedure, but again this will be discussed later.


Tuesday, December 25, 2012

Seasons Greetings to All

As a gentle snow falls in Southern Maine, I wish you all the Compliments of this Season, and hope that all is well for you and your loved ones. Best Wishes


Friday, December 21, 2012

OGPSS - Iranian potential and the Caspian disputes

As we come to the end of the year, Leanan continues to point to the many stories that now fill the media reporting on the perception that the time to worry about peak oil is over. However, as Darwinian perceptively points out the global supply of oil (crude and condensate) is not going up with the celerity that most commentators are envisaging.


Figure 1. EIA reported global crude and condensate production through September 2012, (Darwinian at The Oil Drum)

Global supply and demand are in a tight balance, in which the amount of oil remaining available to meet a surge in market demand is quite small. OPEC (and largely Saudi Arabia), by adjusting their production, ensures that the balance is maintained and prices remain at a level with which they are comfortable.

In the December 19th TWIP, the EIA has explained, without endorsing the statement that the US might surpass Saudi Arabia in global fuel production, why that is a less important statistic than folk are generally making it out to be.

The EIA notes, in their review, that this balance is predicated on sustained production from the Middle East. Yet, for several years now, reliance on maintaining current supplies, and on guaranteeing adequate supplies in the future, has assumed a steadily growing production from Iraq. The EIA notes that it is the combined production from Iran, Iraq and Saudi Arabia that contains the additional oil needed for significantly greater production. (Venezuela is also mentioned, though there are more obstacles to overcome before their oil production can increase).

There is, unfortunately, a down-side to the glee with which many commentators have greeted the news of potential US production gains. With the assumption that North America (and for this Mexico and Canada are included) can reach the higher targets projected, there is less concern with ensuring that alternate supplies remain available as world demand continues to rise. This failure to take out “insurance” may well cause some significant changes in the global market in the not-too-distant future, should the existing projections prove overly optimistic. (And Leanan referenced Kurt Cobb’s more realistic assessment, on Friday).

China, for example, is aware of the Iraqi potential to increase production and is moving to take over Exxon’s stake in the West Qurna oilfield, as part of their ongoing program to increase the reserves available to China in coming years. This comes as relations between the US and Iraq seem to be cooling, as Iraq becomes more friendly with Iran. Iranian exports are continuing to increase, despite the sanctions, although there are some problems arising with repatriating the payments to Tehran. The EIA note that Iran has some 137 billion barrels of proved reserves, some 9.3% of global reserves, and over 12% of total OPEC reserves. However, as with many countries, its internal consumption is keeping pace with or growing faster than overall production.


Figure 2. Iranian Production and Consumption (EIA )

In a recent post on Iraqi oil, Euan included a map showing the main distribution of fields in Iran.

Figure 3. Map showing the oil and gas fields of the Zagros fold belt. (Greg Croft via Euan Mearns)

Nevertheless Iran has significant potential reserves outside of this strip, as well as further discoveries within it. As the EIA notes:
There were a number of new discoveries in Iran over the past couple of years. In May 2011, NIOC announced a discovery of a deposit of light oil (35° API gravity) in the Khayyam field, offshore in the Hormuzgan province. The field had been discovered in 2010 but was originally classified as a gas field. According to the NIOC, the volume of in-place oil at this field is 758 million barrels, of which around 170 million barrels are recoverable. Also in May 2011, Iran announced the discovery of new onshore oil fields in its south and west with an estimated half a billion barrels of reserves. In late 2010, Iran claimed the discovery of new crude finds near gas reservoirs in the Persian Gulf, holding total in-place reserves of more than 40 billion barrels of oil, however recoverable reserves could be less than 10 billion barrels.
And just this year a significant discovery was announced in Iranian waters in the Caspian. The field promises a 10 billion barrel resource, which could add around 7% to the Iranian reserves. But there is a dispute over the location of the field, with Azerbaijan claiming that the region belongs to them.


Figure 4. Location of the discovery off Iran (Eurasia net)

Exploration in the southern end of the Caspian has been somewhat sporadic, given the territorial disputes as to who owns which part of the seabed, and this discovery is not likely to ease those tensions. Azerbaijan and Turkmenistan are both concerned over their individual territories, and are now in dispute over the Kyapaz (Serdar) field. And as the size of the field becomes more evident the dispute is continuing to raise tensions in the area.


Figure 5. Sunset over an Iranian drilling rig in the Caspian (rashidi4u on Google Earth).

And so, as we come to the end of 2012, disputes seem to be the order of the day in this part of the Middle East. Not forgetting the conflict in Syria, the disputes in Iraq over who gets to control what part of their oil future continue to evolve. Increasingly, in the North, the Turkish government is dealing with the regional government in Kurdish Iraq instead of the Government in Baghdad. These actions do not fill one with confidence that any of the predictions for future production are likely to come through in the time frame projected. Which will, unfortunately, tighten supplies in the not-too-distant future.

Oh, and just in case you thought that Gazprom had changed its spots, it turns out that with the current cold spell in Kyrgyzstan, Gazprom has found the opportunity to take over the state gas company Kyrgyzgaz. The company has so many problems that the price will be nominal, given the amount of debt that comes with it. But it also gives Gazprom access to some of the Kyrgyz gas fields.


Monday, December 17, 2012

Waterjetting 4b - Line losses

High-pressure pumps are, as a general rule, quite efficient at bringing water up to the pressure required for a given task. And yet, time after time, the jet that reaches the target is no longer capable of achieving the work that was promised when the system was designed. More often than not this drop in performance can be traced to the way that the water travels through the delivery system, and out of the nozzle that forms the jet.

The water flows that are used in a broad range of operations are quite low. Flows of ten gallons a minute (gpm) and below are mainly used in cutting operations and higher-pressure cleaning. Further, there are few occasions where hand-held operations will use flows much above 20 gpm, because of the thrust levels involved. And low flow rates mean that there is little pressure loss between the pump and the nozzle, right? Um! Well, not exactly.

The pressure losses due to overcoming friction in the feed lines (whether hose or tubing) from the pump to the nozzle can make a significant difference in the operation of the system, as I mentioned in one of the early posts of this series. In that post I pointed out that a well-known research team (not us) spent two weeks running a system with 45,000 psi water pressure going into a feed line, but with only around 10,000 psi being usefully available when the flow reached the far end. (And I will freely confess later in this piece to having made a similar mistake myself). So the question naturally arises as to how these losses can be avoided.

In a word – diameter! The smaller the diameter of the feed line through which the water must flow, then the higher the pressure that is required to drive the water through that line, regardless of the nozzle size at the delivery end. The diameter of concern is, further, the inner diameter of the hose or tubing, not the outer diameter (though the combination is important in ensuring that the line can contain the pressure that the water is carrying through the line).

There are concerns over the condition of the line, the fittings that join the different parts together and other factors that I will cover in the posts following this one, but this will deal just with the simple pressure drop that occurs along a tube at different flow volumes. There are formulae that can be used, but a reasonable estimate of the loss can be obtained, either with the design tables that most manufacturers supply with their product, or through a simple nomogram that I will place at the end of this piece.

To begin with consider the basic equations that govern the pressure drop:


Figure 1. The equation relating pressure drop to flow volume and pipe diameter.

Note that in the above equation the pressure drop is related to the fifth power of the diameter of the tube – so strong is this dependence that even a small change in flow-channel diameter has a large effect on the pressure drop in the line.
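That fifth-power dependence can be illustrated with a short calculation based on the Darcy-Weisbach relation. The friction factor used here is an assumed value for turbulent flow in smooth drawn tubing, not a measured one, so treat the absolute numbers as a sketch; the diameter ratio, however, is exact.

```python
import math

# Darcy-Weisbach pressure drop for water in a round tube,
# DP = 8 * f * L * rho * Q^2 / (pi^2 * d^5)  -- note the d**5 term.
# Shop units (gpm, inches, feet) are converted to ft-lb-s internally.

RHO_SLUG_FT3 = 1.94      # density of water
GPM_TO_CFS = 0.002228    # gallons/min -> ft^3/s

def pressure_drop_psi(flow_gpm, dia_in, length_ft, friction=0.015):
    """Friction loss along a tube, in psi. `friction` is an assumed
    Darcy friction factor for turbulent flow in smooth tubing."""
    q = flow_gpm * GPM_TO_CFS   # ft^3/s
    d = dia_in / 12.0           # ft
    dp_psf = 8 * friction * length_ft * RHO_SLUG_FT3 * q**2 / (math.pi**2 * d**5)
    return dp_psf / 144.0       # psf -> psi

# Fifth-power sensitivity: halving the bore raises the loss 32-fold.
base = pressure_drop_psi(2.0, 0.25, 10)
half = pressure_drop_psi(2.0, 0.125, 10)
print(round(half / base))  # 32
```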

When flow begins through a channel it will initially be laminar; in other words, the water moves in layers. (There is an interesting video of this here, and a video of one of the designs used, for example, to give the “solid” jet slugs that you might see jumping around the hedges at one of the amusement parks.)


Figure 2. The difference between laminar and turbulent flow. (Equipment explained)

As water speed increases, however, the flow will transition from laminar flow into turbulent flow, where the roughness of the flow channel wall becomes more important. The roughness, resulting friction factor and the flow volume all then combine to allow the calculation of the pressure required to overcome the friction in the pipe. This holds true whether the flow is at the one or two gpm used in cutting at high pressure, or the relatively low pressure, high volume flows used in fighting fires.
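A quick Reynolds-number check shows why waterjet feed lines are essentially always in the turbulent regime; the kinematic viscosity used below is an assumed value for cool water.

```python
import math

def reynolds_number(flow_gpm, dia_in, kin_visc=1.2e-5):
    """Reynolds number for water in a round tube.
    kin_visc is kinematic viscosity in ft^2/s (~1.2e-5 for cool water)."""
    q = flow_gpm * 0.002228        # ft^3/s
    d = dia_in / 12.0              # ft
    area = math.pi / 4 * d * d     # ft^2
    v = q / area                   # mean velocity, ft/s
    return v * d / kin_visc

# Even 1 gpm through a 3/16-inch line is far past the roughly 2,300
# value at which pipe flow begins the transition to turbulence.
print(int(reynolds_number(1.0, 0.1875)))
```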

But (outside of us academics) few actually calculate the numbers. There really is no need, since most of the manufacturers provide the information in their catalogs. There are two ways of presenting the information. The older convention was just to provide a graph, from which one could read off the pressure drop, as a function of the pipe internal diameter, and for a given pipe length.


Figure 3. Pressure drop along a tube, as a function of flow rate and tube internal diameter. Note that the scales are logarithmic.

Charts such as this are a little difficult to read and, being on a log plot, small mistakes in reading a value can give significantly wrong estimates, so a more spread-out presentation is often more helpful. The one that I prefer to use is a nomogram, where it is possible to compare different options on a single figure with a slightly expanded scale.

Consider, for example, this nomogram from the Parker Catalog which shows the relationship between the volume flowing down through a line, the inner diameter through which it is flowing, and the resulting velocity of the flow.


Figure 4. A nomogram to determine the best pipe diameter, based on the allowable velocity of the flow. (Parker)

While velocity is not generally a concern in the feed lines to nozzles (because of the high levels of filtration of the water), it can be a concern in lines that carry away spent water and debris, and also in abrasive slurry systems, where flow rates above 40 ft/sec can lead to erosion of the line.

The more useful nomogram, however, is one that I have adapted from the U.S. Bureau of Mines (a Government agency that is now, sadly, defunct).


Figure 5. Nomogram to calculate pressure loss along a 10-ft length of tubing.

Knowing the flow rate through the line, set a straight-edge (usually a ruler) to mark that level, and then position the ruler so that it also crosses the inner diameter of the tubing. In the example above, that aligns the ruler along the line shown, which runs from 20 gpm to a 0.1875-inch pipe diameter (3/16ths of an inch). The point at which the line crosses the pressure-drop scale gives the friction loss in the line. In this case that reads at 3,600 psi per 10 ft of pipe.

The example was taken from a field trial where we were drilling holes into the side of a rock pillar. We had no problem drilling the first ten feet, but when we added a second length of 10-ft tubing, to allow us to drill holes 20-ft deep, the drill did not work. It was not until late in the afternoon that we realized that, while the gage pressure was 10,000 psi, the jet pressure at the nozzle had been only 6,400 psi with the first length of pipe, and that adding the second length dropped it to 2,800 psi. This was below the pressure at which it was possible to effectively cut the rock. And so we learned!
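The arithmetic of that field trial is worth setting out explicitly, taking the nomogram's 3,600 psi loss per 10-ft length as given:

```python
GAUGE_PSI = 10_000
LOSS_PER_SECTION_PSI = 3_600  # nomogram: 20 gpm, 3/16-in bore, per 10 ft

def nozzle_pressure(n_sections: int) -> int:
    """Pressure remaining at the nozzle after n 10-ft tube lengths."""
    return GAUGE_PSI - n_sections * LOSS_PER_SECTION_PSI

print(nozzle_pressure(1))  # 6400 -- the first 10 ft of drilling worked
print(nozzle_pressure(2))  # 2800 -- why the 20-ft holes stalled
```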


Tuesday, December 11, 2012

OGPSS - Iran and the new EIA and OPEC Reports

With the possibility that demand for Iranian oil may fall below 1 million barrels a day (mbd) as sanctions continue to bite, Iran has announced that it wants OPEC to cut back production to the agreed quotas, rather than the overall additional 1 mbd that is actually being produced, and sold. Such a move would, of course, make it more difficult for those customers who have found a way of replacing Iranian oil, and perhaps incline them more toward disregarding the embargo.

OPEC has just released their December Monthly Oil Market Report (MOMR) in which they anticipate that earlier projections for 2013 oil demand growth will still be valid, at 0.8 mbd. (Though they note that December 2012 growth y-o-y was at 1.0 mbd as the US economy continued to improve). They expect that all of this increase will be met by non-OPEC increases in supply, and that demand for OPEC oil may even drop 0.4 mbd. Part of that projection continues to rely on increased US crude production, and the EIA TWIP of December 5th had the latest chart showing that projected growth, based on the newly released Annual Energy Outlook 2013.


Figure 1. Projections of future growth in US crude oil production. (EIA TWIP) from Annual Energy Outlook 2013)

As a footnote to that graph, the Alyeska pipeline pumped an average of 582,755 bd in November, which brings the annual average up to 544,625 bd. It is clear from looking at that plot that the gains in production are all assumed to come from increased production from the "tight" oil deposits that have produced the overall gains achieved to date. The optimism of this projection goes a little beyond the levels that I anticipate being achieved.

Coming back to the MOMR their projections do not include the recent news that Venezuelan President Chavez has had to have a fourth operation for cancer, and has named a successor, although the operation was apparently successful. This may complicate the decisions on how much to allocate among the OPEC partners, especially since all continue to need higher priced oil.

OPEC also gives the prices of various commodities in their report, and before going on to discuss country production, those prices are informative. At present, with the decline in overall global demand, metal prices in particular seem to be continuing to slide.

Figure 2. OPEC report of commodity prices for November (OPEC December MOMR)

Equally informative is the demand that OPEC anticipates from the various regions of the world for oil in 2013.


Figure 3. OPEC estimates for regional oil demand in 2013. (OPEC December MOMR)

In total OPEC anticipates that global demand will reach 90.83 mbd by the fourth quarter of 2013, with the greatest growth continuing to be from China and the other Asian nations.

Looking at where this oil might come from, the main increase is still anticipated to come from North America.

Figure 4. Non-OPEC supply projections for 2013 (OPEC December MOMR)

The conflict in Syria is now reported to have led government forces to withdraw from the Omar and Al-Ward fields in the Deir Ezzor region, where much of Syria’s exports were produced. However, the rebels do not, as yet, control any of the refineries or export terminals, and the result is that oil production is estimated to have fallen from 380 kbd to 160 kbd over the past few months. The regime is making up the shortfall in its needs by importing from Iraq.

Which brings us back to OPEC production levels. (Note that this is for crude oil and does not include the roughly 6 mbd in NGL that are currently being produced).

Firstly, this is what the various governments are reporting that they are producing:


Figure 5. OPEC production from official sources (OPEC December MOMR)

The total shows, among other things, how Libya has recovered from its “Arab Spring.” In contrast with the official figures, OPEC also posts the values from “secondary sources.”


Figure 6. OPEC production from secondary sources. (OPEC December MOMR)

The difference between the two figures for Iran is around 1 mbd. Overall OPEC production is declining as non-OPEC production increases, so perhaps Iran won’t have quite as difficult a time persuading its colleagues to drop production a little more, to help it out. That did not happen at the latest meeting of the OPEC Ministers, held in Vienna on December 12th, where it was decided to maintain the current ceiling of 30 mbd.

The meeting was largely distracted by debate over who should be the new Secretary General, with this being “kicked down the road” for a decision at the end of May.

On the other hand, while Malaysia had promised to halt imports of oil from Iran last March, the IEA is reporting that it increased crude purchases from Iran in November. Whether this is oil ultimately destined for that country, or whether this is a convenient transshipment point, where crude from Iranian tankers is transferred to other carriers and a second purchaser, is not clear, although a Chinese oil trader appears to be involved.

A move to make US natural gas available to NATO allies has begun in the Senate, with the intent that perhaps this could wean countries like Turkey from their use of Iranian and Russian natural gas. Whether this will ever amount to much is not clear, since Senator Lugar, the initial author, was defeated in the primary before the last election and thus leaves the Senate at the end of the term.


Monday, December 10, 2012

Waterjetting 4a - Pump pulsations

High-pressure pumps generally draw water into a cylindrical cavity, and then expel it with a reciprocating piston. There are a number of different ways in which the piston can be driven. It can be connected eccentrically to a rotating shaft, so that, as the shaft rotates, the piston is pushed in and out. The pistons can be moved by the rotation of an inclined plate, so that as the plate rotates, so the pistons are displaced.


Figure 1. Basic Components of a Swash Plate Pump (after Sugino et al, 9th International Waterjet Symposium, Sendai, Japan, 1988)

And, more commonly at higher pressures, the piston can be of a dual size, so that as a lower pressure fluid on one side of the piston pushes forward, so a higher pressure fluid on the smaller end of the piston is driven into the outlet manifold, and out of the pump. This latter pump design has become commonly known as an Intensifier Pump. The simplified basis for its operation might be shown using the line drawing that was used earlier.


Figure 2. Simplified Sketch showing the operation of an intensifier.

When the intensifier is built, the simplified beauty of its construction is more evident.


Figure 3. Partially sectioned 90,000 psi intensifier showing the components and the small end of the reciprocating piston (Courtesy of KMT)

However, what I would like to discuss today is what happens when the pistons in these cylinders reach the ends of their strokes, and it is a little easier to use an Intensifier as a starting point for this discussion, although (as I will show) it also relates to the other designs of high-pressure pumps that use pistons.

Consider if there was only one side to the piston, rather than it producing high pressure in both directions. This design is known as a single acting Intensifier, and it might, schematically, look like this:


Figure 4. Simplified schematic of a single-acting Intensifier

As the piston starts to move from the right-hand side of the cylinder toward the left, driven by the pressure on the large side of the piston, it displaces water from the smaller diameter cylinder on the left. Assume that the area ratio is 20:1 and that the low-pressure fluid enters at 5,000 psi; then, simplistically, the fluid in the high-pressure pump chamber will be discharged at 100,000 psi. But not immediately!

This is because the outlet valve is set so that it will not open until the fluid has reached the required discharge pressure, and this requires a small initial movement of the piston (perhaps around 12% of the stroke) to compress the water and raise it to that pressure before the valve opens. And, with a single intensifier piston, when the piston has moved all the way to the left, and the high-pressure end is emptied of water, there will be no more flow from that cylinder until the piston has been pushed back to the far end of the cylinder, and the process is ready to start again.
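The two numbers in this description, the 20:1 pressure multiplication and the precompression dead stroke, can be sketched as follows (the 12% figure is the rough estimate from the text at these pressures, not a universal constant, and the 10-inch stroke is only an illustrative assumption):

```python
def intensifier_outlet_psi(drive_psi: float, area_ratio: float) -> float:
    """Ideal (friction-free) outlet pressure of an intensifier:
    force balance across the dual-diameter piston."""
    return drive_psi * area_ratio

def effective_stroke(stroke_in: float, precompress_frac: float = 0.12) -> float:
    """Stroke left for delivery after the initial piston movement
    that compresses the water up to valve-opening pressure."""
    return stroke_in * (1.0 - precompress_frac)

print(intensifier_outlet_psi(5_000, 20))  # 100000.0 -- the 20:1 example
print(effective_stroke(10.0))             # 8.8 in of delivery stroke
```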

Some of that problem of continuous flow is overcome when the single-acting intensifier is made dual-acting, because at the end of the stroke to the left, fluid has entered the chamber on the right, and when the piston starts its return journey the cylinder on the right will discharge high pressure fluid. But again not immediately!

One way of overcoming this is to use two single-acting pistons, but with a drive that is timed (phased) so that the second piston starts to move just before the first piston reaches the end of its stroke. This takes out the dead time during the directional change. The two can be compared:


Figure 5. Difference in the pulsation between a phased set of single acting intensifiers, and a double-acting unit. (Singh et al 11th International Waterjet Conference, 1992)

In cutting operations, reducing the pulsation of the jet is often important in minimizing variations in cut quality. Thus, to dampen the pulsations of a dual-acting system, a different approach is taken: a small accumulator is put into the delivery line, so that the fluid in that volume helps maintain the pressure during the time of transition.


Figure 6. Effect of Accumulator volume on pressure variations (Chalmers 7th American Waterjet Conference, Seattle 1993)

A simplified schematic can again be used to show where an accumulator might be placed.


Figure 7. Location of the Accumulator in the flow line

On the other hand, in cleaning applications, particularly with water and no abrasive, there are occasions (which I will get to later) where a pulsation might improve the operation of the system. A three-piston pump, without an accumulator, may see the pressure output vary over a cycle, with an instantaneous drop to 12% below average and a rise to 6% above average. One way of reducing this is to increase the number of pistons that are being driven in the pump.

When one changes, for example, from three pistons (triplex) to five pistons (quintuplex), the variation in outlet pressure is significantly less.


Figure 8. The effect of changing number of pump pistons on the variation in delivery pressure. (De Santis 3rd American Waterjet Conference, Pittsburgh, 1985)
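The benefit of adding pistons can be seen with a small simulation of an idealized crank pump, in which each piston delivers a half-sine of flow and the cranks are evenly phased. Real pumps, with finite connecting-rod lengths and valve dynamics, show somewhat larger swings than this ideal model, such as the triplex figures quoted above, but the trend with piston count is the same.

```python
import math

def ripple(n_pistons: int, samples: int = 3600):
    """Flow ripple of an ideal crank pump with n single-acting pistons,
    sinusoidal kinematics, evenly phased cranks. Returns (min, max)
    total flow as fractions of the mean flow."""
    flows = []
    for i in range(samples):
        theta = 2 * math.pi * i / samples
        # Each piston delivers only on its forward (positive) half-cycle.
        total = sum(max(0.0, math.sin(theta + 2 * math.pi * k / n_pistons))
                    for k in range(n_pistons))
        flows.append(total)
    mean = sum(flows) / samples
    return min(flows) / mean, max(flows) / mean

tri_lo, tri_hi = ripple(3)
quin_lo, quin_hi = ripple(5)
# The five-piston pump stays much closer to its mean flow:
print(f"triplex:    {tri_lo:.3f} .. {tri_hi:.3f}")
print(f"quintuplex: {quin_lo:.3f} .. {quin_hi:.3f}")
```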

Part of the reason that the longer, steadier pulses of water that come from the slower stroke of the intensifier can be an advantage is that the water in a jet comes out of the nozzle at a speed that is controlled by the driving pressure. A strong change in pressure therefore means a change in velocity along the length of the water stream. This means that slower sections of the jet are, at greater standoff distances, caught up by the following faster slugs of the jet. This makes the jet more unstable. That can, however, be an advantage in some cases, and it will be discussed at some later time, when a better foundation has been established to explain what the effects are.
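The link between pressure swings and velocity swings follows from Bernoulli's relation, v = sqrt(2 ΔP / ρ), ignoring nozzle losses. The sketch below uses an illustrative ±10% pressure swing around 40,000 psi, which is an assumption for the example, not a figure from the text.

```python
import math

RHO = 1.94  # slug/ft^3, water

def jet_velocity_fps(pressure_psi: float) -> float:
    """Ideal (Bernoulli) jet velocity for a given driving pressure,
    ignoring nozzle losses: v = sqrt(2 * dP / rho)."""
    dp_psf = pressure_psi * 144.0  # psi -> lb/ft^2
    return math.sqrt(2.0 * dp_psf / RHO)

# A pressure swing changes jet speed, so trailing water can overtake
# slower water ahead of it at larger standoff distances:
print(round(jet_velocity_fps(36_000)))  # slower slug
print(round(jet_velocity_fps(44_000)))  # faster slug catching up
```

At 40,000 psi the ideal jet velocity works out to about 2,440 ft/s, so even a 10% pressure ripple produces a velocity spread of well over 100 ft/s along the stream.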


Thursday, December 6, 2012

OGPSS - Iranian oil and the global future

There is a lot going on in the Middle East at the moment. There is the revolution in Syria which seems now to be entering some form of end game, and there are the riots in Egypt. There are some signs that these events might move on to countries such as Jordan. Increasing levels of turmoil in the Middle East do not help stabilize the future flow of oil and natural gas around the world, and there are underlying tensions, brought about in part by the need to sustain sanctions against Iran.

Turkey, for example, which is caught up in dealing with Syrian refugees and the adjacent civil war, is also largely dependent on Iranian fuel to get it through the winter. In October Turkey is reported to have imported 75 kbd of Iranian oil, with larger portions of the total 417 kbd of imports coming from Iraq (105 kbd) and Russia (103 kbd). The volumes that continue to flow are now becoming a source of friction, since US law demands that countries continue to lower their imports every six months. While Turkey continues to work to lower its need for Iranian oil (and may increase imports from Russia), in the interim the U.S. Government is not increasing pressure but is apparently moving to extend the waiver of sanctions not only to Turkey, but also to a total of 21 countries, a list that includes China, India and South Korea.
Two officials said an announcement of the six-month extensions was expected from the State Department on Friday. The officials spoke on condition of anonymity because they were not authorized to publicly preview the step.

In addition to China, India and South Korea, the waivers will apply to Malaysia, Singapore, South Africa, Sri Lanka, Turkey and Taiwan. All nine were originally granted six-month renewable exemptions from the sanctions in June.

The exemption means that banks and other financial institutions based in those places will not be hit with penalties under U.S. law enacted as a way of pressuring Iran to come clean about its nuclear program.

A total of 20 countries and Taiwan have been granted the waivers. The others—Belgium, Britain, the Czech Republic, France, Germany, Greece, Italy, the Netherlands, Poland, Spain and Japan—will come up for review in March.

Yet Turkey, which gets some 20% of its natural gas from Iran (taking roughly 90% of Iran’s natural gas exports), is resisting pressure to lower its gas purchases, since the fuel is the primary source for most Turkish electricity. Further, with Turkish needs estimated to rise to 655 kbd by 2016, sustaining an adequate power supply may become more difficult without reliance on Iran.

There is a somewhat similar argument made in South Korea, which, while it has cut demand by some 30%, continued to import around 186 kbd of Iranian oil as of October, though the volume varies depending on who is doing the counting. Similarly, both China and India are reported to be lowering their purchases, so that there is a projection that Iran might not ship more than 834 kbd in December. Some of the problem in sustaining even this level of supply apparently comes from the lack of available tankers, and with Iran now apparently willing to use false shipping transponders in co-ordination with Syria, rather than just changing ship names, events seem to be moving toward some form of a Bond movie.

Oil is a recognized critical component in building energy supply, and the current ongoing effort to contain Iranian exports takes much of the headline space, relative to overall supply questions. But the game is being played at the margins of the balance between overall oil supply and demand. The arrival of significant supplies of natural gas, whether real – as in the United States – or potential – as in most of Europe – has moved the focus away from concerns over oil supply as an issue.

Yet China does not seem to be cutting back on overall oil use: demand rose 6.6% in October 2012 over that in October 2011, averaging 9.76 mbd. If that continues, then China must find an additional 644 kbd next year, over and above current suppliers and volumes. And, with the country still growing, that demand will also continue to grow. But there are not a lot of places that can provide for that increased need. The slow economies of the United States and Europe have dropped demand from where it could have been. And while the European economy is likely to struggle on through next year, that of the United States (lunatics no longer being allowed in Washington) is on the path to recovery, which may well swell energy demand more than anticipated, and absorb any increased domestic supply without much further change in import needs.

And thus one comes back to the aggressive nature of the Chinese in regard to the hydrocarbon resources of the China Seas. The ASEAN nations seem powerless, whether by inclination or real power, to do much to protest the Chinese position. The Chinese are also working to minimize the American presence, and treaty obligations, that involve them in these discussions. China has just authorized seizure of foreign vessels in their waters (which they, disputedly, claim include most of both China Seas). At the same time India has taken notice, and is more than just expressing concern.
Although India doesn’t have any direct territorial claim in the area, the waters are strategically important to New Delhi for three reasons. First, like for any trade-dependent country, the South China Sea represents an important global shipping route and freedom of navigation must be maintained. Second, India’s state-run Oil and Natural Gas Corporation (ONGC) owns a stake in waters claimed by Vietnam. And third, and perhaps most importantly, the South China Sea represents an opportunity for an Indian riposte against China’s ‘string of pearls’ naval encirclement of the Indian subcontinent.
Overall the world does not seem to be heading in the direction of a peace-filled future. The underlying imperative of energy supply to meet national needs has brought the world to war before now; remaining unconcerned about the situation means that we remain unwilling to learn the lessons of history.

Read more!

Tuesday, December 4, 2012

Gentle Cough - The Post Dispatch and Cherry-picking data

It has been a little while since I wrote anything about the Global Warming situation. Not that there isn’t an ongoing series of messages about how we are going to be drowned by increased glacial melting, or that extreme events might become more prevalent and we need to take precautions in case they do. Of course there is not a lot of evidence that the rate of extreme-event occurrence has been increasing, but the alarmists feel a need to drive home the message that the world has to be concerned about Global Warming, even when the globe isn’t warming. And so this post, which first notes why I wrote that last sentence, and then comments on how the media message is changing so that, by cherry-picking data, alarm can still be spread.

So first let us look at the Global Warming situation. It has received very little coverage in the United States, and barely rated a mention in the UK, but the recent release of a new plot of global temperatures by the Climate Research Unit (CRU) at the University of East Anglia (UEA) is worth putting up, purely as a matter of record.


Figure 1. Global average temperatures over the past 15 years (British Daily Mail ).

This Met Office release (on a Friday) has largely been ignored by a scientific community that only exists in its current form as long as the reality that this graph presents remains ignored.

There was an immediate controversy in the UK (but not here, where it remains largely unknown) and there was a follow-up report the following Sunday. But, even while ignored, the lack of increase in global temperature over the past fifteen years is surely some indication that the models widely used to predict an exponentially increasing global temperature are falsified.

So what can a good alarmist do? Well consider the headline in the St. Louis Post Dispatch on November 26th. “2012 so far the warmest year on record in parts of Missouri.” So let me talk about this for a minute.

Notice that this does not say that the entire state is at its warmest. Rather it reports that Jayson Gosselin of the National Weather Service has noted that this was the warmest year on record for St. Louis and Columbia.
The average temperature in St. Louis so far this year is 63.4 degrees, a full degree higher than the 62.4-degree average seen in the previous warmest year, 1921. In Columbia, the previous warmest year as of Nov. 24 was in 1938, when the average was 61 degrees. This year, the average is 61.7 degrees. In Kansas City, Mo., it has been the fourth warmest year on record so far, with an average temperature of 61.3 degrees, Gosselin said.
He goes on to be more specific about when the heat wave occurred (in case we missed it!)
Gosselin, who works in the Weather Service's office near St. Louis, said the "meteorological spring" (March through May) was far and away the warmest ever in St. Louis with an average temperature of 61.1 degrees. Second warmest was 1910, when the average was 57.5 for the spring months. Summer also was unusually warm. Average temperatures in March, May and July all set records in St. Louis, he said.
For those who forget, I first took a look at the Missouri state temperatures back in February 2010, when I became curious as to whether our state was showing the global warming that everyone was talking about.

I found all the US Historical Climate Network (USHCN) sites for Missouri and determined their location (latitude and longitude), elevation, and surrounding population. It turns out that there are 26 stations in Missouri, and so I took the average temperature for each station each year (this was the “homogenized” temperature in that initial post) and was able to plot the average state temperature over time.


Figure 2. Average “USHCN homogenized” temperatures for the state of Missouri (USHCN)

And if you look at that plot, the state temperature has barely risen (less than half a degree Fahrenheit in 115 years) since official temperatures have been recorded, and the hottest years were in the 1930s, the Dust Bowl years.

But there was something missing from the data table and it turns out that three of the largest cities in the state, Columbia, Springfield, and St. Louis were not tabulated in this network, but are, instead, part of the Goddard Institute for Space Studies (GISS) network that Dr. James Hansen used for his work.

And, being further curious, I then combined the two sets of data and obtained a plot for temperature as a function of population.


Figure 3. Temperature as a function of population size around the station. This conclusion, that there is a logarithmic relationship, is not new. To quote from that post:
Oke (1973) * found that the urban heat-island (in °C) increases according to the formula –

➢ Urban heat-island warming = 0.317 ln P, where P = population.

Thus a village with a population of 10 has a warm bias of 0.73°C. A village with 100 has a warm bias of 1.46°C and a town with a population of 1000 people has a warm bias of 2.2°C. A large city with a million people has a warm bias of 4.4°C.
It is interesting to note that his coefficient is 0.317 and the one I found is 0.396.

( * Oke, T.R. 1973. City size and the urban heat island. Atmospheric Environment 7: 769-779.)
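Oke’s formula is easy to check numerically. The sketch below reproduces the quoted warm-bias figures; the function name and the choice of 0.317 as the default coefficient are my own framing of the relationship quoted above:

```python
import math

def urban_heat_island_bias(population, coefficient=0.317):
    """Warm bias (deg C) of a temperature station from the surrounding
    population, using Oke's (1973) relationship: bias = k * ln(P).

    The default coefficient 0.317 is Oke's; the post's own Missouri fits
    gave roughly 0.396 (homogenized) and 0.327 (TOBS).
    """
    return coefficient * math.log(population)

# Matches the 0.73 / 1.46 / 2.2 / 4.4 deg C figures quoted above (to rounding):
for pop in (10, 100, 1_000, 1_000_000):
    print(f"P = {pop:>9,}: bias = {urban_heat_island_bias(pop):.2f} C")
```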

But then I revisited the state later in time, after the USHCN started also providing the raw and Time of Observation Corrected data (TOBS). And I found a few more interesting facts.

Firstly I compared the difference between the GISS data for the three large cities with the state average temperatures for both the raw data, and the “homogenized” data.


Figure 4. Difference between the average temperature in the large cities, and that of the average temperature in the State. The blue line is for the homogenized data, the red is for the raw.

I then went on to compare the TOBS average to that of the largest cities and this is what I got:


Figure 5. Difference between the average temperature in the large cities of the state, and that of the average temperature in the state using the TOBS data.

A slight upward trend, but not that significant. As for the temperatures in Missouri over the past 100 years, with the correction applied there is really no trend; the record has been relatively stable:


Figure 6. Average TOBS temperature for the state of Missouri over the recorded interval.

I did note that the highest temperatures were some decades ago.

Oh, and the correlation with population held up with the TOBS data: the coefficient was 0.327, and the r^2 value was 0.14.

Now, I finished the entire contiguous United States some time ago, and that temperature relationship to population held up quite well, as the individual state reports listed on the right-hand side of the blog show.

So what do we learn from this? That alarmist rhetoric is continuing with an embarrassing lack (for those of us who are scientists) of balance in the reporting. Data now has to be carefully cherry-picked to still convey the message that the world is warming. One wonders how long they will be able to get away with this before they are called out by more prominent folk.

Read more!

Sunday, December 2, 2012

Waterjetting 3e - Water Quality part one.

One of the problems with taking a research team into the field is that you have to be able to provide answers, and a path forward when things go wrong. So it was on a project we once had in Indiana, and it took about a year for me to live down the tale. We had set up a 350-hp high-pressure triplex for a project that involved washing explosives out of shells. Everything had been set up, and was ready to go, and so we switched on the water to the pump, started the diesel engine and, almost immediately noticed that we weren’t getting enough water downstream of the pump. What was the problem? We checked all the valves, and couplings, and hoses, and they all seemed to be OK. It was, however, a bitterly cold day, with a howling wind around where we had the pump unit. And so I came up with the idea that it was the wind, chilling the pistons, which operated with their length exposed during part of the stroke. If the wind chill was cooling the pistons, then perhaps they weren’t displacing enough volume because they had shrunk. It became known as “The Wind Chill Factor” explanation and, as those of you who have done this sort of thing realize, it was bunkum! After a while one of the team wandered back to the filter unit, pulled out the partially plugged filters, changed them to new ones and we were in business.

There are a couple of reasons that I tell this bit of history, and they relate both to the quality, and the quantity of water that is being supplied at a site. I remember talking to Wally Walstad, who ran McCartney Manufacturing, before it became KMT, about their second commercial installation, and how the different water chemistry just a few hundred miles away had caused maintenance issues on the pumps that they had not expected.


Figure 1. Generic parts for a multi-piston high-pressure pump

It may seem obvious that a pump should be supplied with enough water so that it can work effectively. But the requirement, as one moves to higher-pressure pumps, becomes a little more rigorous than that. Consider that the water supplied must enter the cylinder and fill it completely during the time that the piston is pulling back. Because the piston is pulling back, if the water is not flowing into the cylinder fast enough, then the piston will pull on the water. Water has no tensile strength, and so small vacuum bubbles will form. When the piston then starts back to push the water out of the cylinder, these bubbles will collapse, a phenomenon known as cavitation. In a later post I will tell you how to use cavitation to improve material removal rates. But the last place that you want it is in the high-pressure cylinder, since the bubble collapse causes tiny, very high-pressure (around 1 million psi) micro-jets to form that will very rapidly eat out the cylinder walls, or chew up the end of the piston. (That happened to us once.)

There is a YouTube video which shows the cavitation clouds forming in a pump (the white blotches) as the flow to the pump falls below that needed.

To avoid this happening, pump designers use a quantity called Net Positive Suction Head, NPSH. I am not going to go into the details of the calculations, though they are given in the citation, and in most cases it is not necessary to make them (unless you are designing the pump). Where the unit being operated is a pressure washer, the pressure that drives the water out of the tap and into the hose is usually sufficient to overcome any problems with the inlet pressure.

When flow rates run above 5 gpm, however, or when there is a relatively narrow fluid passage into the pump cylinders, or where the water reservoir is below the pump, then the normal system pressure may not be enough. There are two values for the NPSH which are critical – the NPSH-Required (NPSHR) and the NPSH-Available (NPSHA). Let me give a simple example of where one could get into trouble.

For example consider the change which occurs when a pump, normally rated at 400 rpm is driven at 500 rpm, for a 25% increase in output. At 400 rpm the NPSHR for a triplex pump supplied through a 1.25-inch diameter pipe from an open tank will be 8 psi. At 500 rpm, as the flow increases from 26.4 gpm to 33 gpm, the NPSHR rises to 9 psi, which is only a 12.5% change.

However, under the same conditions the NPSHA, which begins at 11.5 psi with a 26.4 gpm demand, falls to 7.8 psi at 33 gpm. When the required suction head is set against that available there was an initial surplus of 45% over that needed. But this changes to a shortfall of 12% when the pump is run at the higher speed. The pump will cavitate, inadequate flow will reach the nozzle to provide full pump performance, and the equipment lifetime will be markedly reduced.
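The arithmetic in this example is worth making explicit. The sketch below simply compares the NPSHR/NPSHA figures quoted above (they are not derived here from pump geometry), and the margin function is my own framing:

```python
def npsh_margin(npsha_psi, npshr_psi):
    """Fractional surplus (positive) or shortfall (negative) of the
    suction head available over that required."""
    return (npsha_psi - npshr_psi) / npshr_psi

# 400 rpm, 26.4 gpm: NPSHR = 8 psi, NPSHA = 11.5 psi
margin_400 = npsh_margin(11.5, 8.0)   # ~ +0.44, a roughly 45% surplus

# 500 rpm, 33 gpm: NPSHR = 9 psi, NPSHA = 7.8 psi
margin_500 = npsh_margin(7.8, 9.0)    # ~ -0.13, a shortfall: the pump cavitates
```

The point is that a 25% speed increase only raised the head required by 12.5%, but it cut the head available by about a third, flipping a comfortable surplus into a shortfall.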

The required supply pressure should thus be checked with the manufacturer of the pump. In most cases where we have run pumps at 10,000 psi and higher, we have fed the water into the pump at the designated flow rate, but used a supply pump that ensures that the pressure on the inlet side of the pump valves is at least 60 psi.

One of the problems, as mentioned at the top of the piece, is that when going to a new site the immediate quality of the water is not known. There are two things that need to be done. The first of these, of particular importance at higher pressures, is to check the water chemistry. It is important to do this before going to the site, since it usually takes some time to get the results, and if there are some chemicals in the water that may react with pump or system parts, it is good to know this ahead of time so that the threatened parts can be changed to something that won’t be damaged.

There is a specific problem that comes with cutting systems in this regard, since at 50,000 psi or higher water quality becomes more important, even just in the nozzle passages. And I will deal with this in a few weeks when I talk about different nozzle designs.

And equally important is the cleanliness of the water. Particularly when tapping into a water line that hasn’t been used for a while (as we did), a certain amount of debris can be carried down the line when it is first used. The first smart thing to do is to run water through the line for a while, before the system is connected up, to make sure that any debris is flushed out. The second is to ensure that there is more than one filter in the line between that supply and the pump.

Many years ago, when prices were much lower than they are today, Paddy Swan looked at the effect of increasingly dirty water on part costs. The costs are in dollars per hour for standard parts in a 10,000 psi system, and the graph is from the 2nd US Water Jet Conference held in Rolla in 1983.


Figure 2. 1982 costs for parts when increasingly dirty water is run through a pump (S.P.D. Swan “Economic considerations in Water Jet Cleaning,” 2nd US Water Jet Conference, Rolla, MO 1983, pp 433 - 439.)

Oh, and the moral of the opening story became one of our sayings in the Center. Not that we were original: William of Ockham first came up with it about seven hundred years ago, and it's known as Ockham's Razor. Simply put, it means that the simplest answer is most likely the right one, or don't make things more complicated than they need be!

Read more!

Thursday, November 29, 2012

OGPSS - The ARPA-E 2012 Awards

The Department of Energy has just announced the projects that have been selected for funding in the next round of the ARPA-E program. (This is the Advanced Research Projects Agency-Energy, first funded in 2009, to, inter alia, "focus on creative “out-of-the-box” transformational energy research that industry by itself cannot or will not support due to its high risk but where success would provide dramatic benefits for the nation".) There are some 66 projects on the list, which is broken down into eleven different focus areas. These are the technologies that the ARPA-E program is betting some $130 million on, as sources of future energy supply or savings. It is worth taking a quick glance through the topics to see what is considered important and likely to succeed.

The two largest areas of funding are Advanced Fuels and Grid Modernization, both of which get around $24 million or 18% of the pie. This is split among 13 fuel projects, and 9 grid-related projects. With the growing supply of natural gas that is coming from the developing shale gas reserves in the country, it is perhaps no surprise to see that methane conversion to liquid fuel captures the largest part of the fuel funding this year, being the theme of nine of the awards.

The largest of the fuel awards goes to Allylix a company that specializes in terpenes, and who is tasked with turning these into a viable aviation fuel. Specific genes needed for terpene production are extracted from a biosource, and then optimized for use in a yeast host. The optimization is an engineered change that can increase product yield several hundred fold (according to their website). From that point there is a fermentation process, and then a recovery and purification of the liquid fuel, which is stated to be already commercially viable.

There is only one algae award this year, to Cornell for $910 k, and they will look at using light fibers in a small reactor as a means of improving economics. After having looked into this process I am prone to disagree that smaller is better (if you are going to generate hundreds of thousands of barrels a day you need large systems, and anything on a smaller scale is hardly worthwhile). Further there are issues with engineered light paths, but they will no doubt find those out as they carry on with their work.

The “different” program in this effort is for $1.8 million which is being given to Plant Sensory Systems to develop a high-output, low-input beet plant for sugar production.

There are just two awards for Advanced Vehicles, one to Electron Energy Corp to produce better permanent magnets that don’t rely on rare-earths, and one to United Technologies to improve efficiency by using laser deposition of alternate layers of copper and insulation in a new electric motor design. This will also reduce rare-earth dependence. They roughly split $5.6 million.

The $5.3 million for improving building efficiency goes to California, and is split with two awards to Lawrence Berkeley and one to Stanford. Each has a project on using coatings to alter the thermal transfer to the buildings and cars, while Lawrence Berkeley also gets almost $2 million for modeling studies of building heat losses.

The $10 million for carbon capture is split four ways, with two awards (to Arizona State and Dioxide Materials) for electrochemical systems that will generate new fuels from the carbon dioxide output of power plants, while the University of Massachusetts at Lowell is developing (for $3 million) a catalyst that will also combine sunlight, CO2 and water into a fuel precursor.

The fourth award is to the University of Pittsburgh (at $2.4 million) for a way to thicken liquid CO2, either as a way of improving EOR or as a substitute for water in hydrofracking. I can’t quite see the advantage of a thicker fluid for use in EOR, since the hope, surely, is to have a very low viscosity fluid that can more easily penetrate into the formation and mix with the oil, but the application in fracking is intriguing.

The emphasis with the investments in Grid Modernization (the co-largest topic) is on improving switchgear (five awards). In addition there are two awards for modeling, one on improved instrumentation and one to Grid Logic ($3.8 million) for developing a new super-conducting wire for power transmission.

There are two awards, both for $2 million, in the “Other” category. One is to MIT for a water purification system, while the other is to Harvard. This latter is for a “self-repairing” coating that can be applied to water and oil pipes to reduce friction and thus lower pumping costs. The old fall-back on this was Teflon, which could be very effective, but any particulate matter in the fluid will erode this over time, so the “self-healing” aspect could be worthwhile, since it might allow a much thinner liner.

The $18.76 million for Renewable Energy projects is distributed to wind, sun and water energies, with two projects in waves: Brown University will be building a new underwater wing to capture flowing-water energy, and Sea Engineering will be developing a better buoy for acquiring data for tidal energy potential assessment. Wind is down to two projects: one, which seems a bit regressive, goes to GE, who will develop fabric blades for wind turbines for $3.7 million. A similar amount is going to Georgia Tech to develop a vertical-axis turbine. The remaining six projects deal with solar power, of which the most interesting, perhaps, is that at Cal Tech, which will look into splitting light into its different color bands (think prism) before using them to improve device efficiency. We have seen that converting white light electronically to the narrow optimal color band can have dramatic effects on improving algae growth rates, for example, but it requires a bit more refinement to achieve the narrow division than, I suspect, will be possible optically.

The section that will invest $12 million in Stationary Energy Storage is funding 8 projects looking at different battery technologies. The largest investment ($4 million) is going to Alveo Energy, which has an intriguing entry in Find the Company. It was apparently only founded this year. The technology that it is chasing involves using Prussian Blue dye as the active ingredient in the battery.

The other “out of the ordinary” award is to Tai Yang, which is affiliated with the Florida State University Superconductivity Center. The $2.15 million award is to develop a method for storing energy in a high-power superconducting cable.

Pratt and Whitney get two of the three Stationary Generation awards: the first, for $650k, is to develop a continuous-detonation gas turbine, while the second, for $600k, is for work on an ultra-high-temperature gas turbine. The University of North Dakota gets the third award, to look at developing air cooling for power plants.

The $9.5 million for Thermal Energy Storage is split five ways, with three awards for the development of power from the waste heat in existing systems, one to the NREL for a solar thermal electric generator, and one to Georgia Tech for a solar fuels reactor using liquid metals.

When it comes to finding answers to Transportation Energy Storage, the Agency is committing $15.3 million to seven projects. Six of these deal with battery development. (A123 Systems, which previously received a $249 million federal grant to develop electric car batteries, recently went bankrupt.) Two of the awards, to Georgia Tech and to UC Santa Barbara, will seek to combine super-capacitor design with battery capabilities, while the Palo Alto Research Center will use a printing process to construct batteries.

Ceramatec is being funded, at $2.1 million, to develop a solid-state fuel cell using low-cost materials.

There is a clear change in emphasis from earlier years, reflecting, no doubt, the results of ongoing research, as well as the obvious change that current natural gas availability is allowing in developing technical advances for the future. It should, however, be borne in mind that while some of these will likely prove quite successful, it will still take perhaps a decade before any of them can be expected to have a significant impact on the market.

Read more!

Sunday, November 25, 2012

Waterjetting 3d - High-pressure pump flow and pressure

When I first began experimenting with a waterjet system back in 1965, I used a pump that could barely produce 10,000 psi. This limited the range of materials that we could cut (this was before the days when abrasive particles were added to the jet stream), and so it was with some anticipation that we received a new pump after my move to Missouri in 1968. The new, 60-hp pump came with a high-pressure end that delivered 3.3 gpm at 30,000 psi, which meant that a 0.027 inch diameter orifice in the nozzle was needed to achieve full operating pressure.

However I could also obtain (and this is now a feature of a number of pumps from different suppliers) a second high-pressure end for the pump. By unbolting the first, and attaching the second, I could alter the plunger and cylinder diameters so that, for the same drive and motor rpm, the pump would now deliver some 10 gpm at a pressure of 10,000 psi. This flow, at the lower pressure, could be used to feed four nozzles, each with a 0.029 inch diameter orifice.
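Both high-pressure ends absorb essentially the same power, which is why one drive train can serve either. A quick check uses the common hydraulic-horsepower relation hp = (psi × gpm) / 1714; the function name here is my own:

```python
def hydraulic_hp(pressure_psi, flow_gpm):
    """Hydraulic horsepower delivered by a pump: hp = (psi * gpm) / 1714."""
    return pressure_psi * flow_gpm / 1714.0

end_a = hydraulic_hp(30_000, 3.3)   # ~57.8 hp: the 3.3 gpm / 30,000 psi end
end_b = hydraulic_hp(10_000, 10.0)  # ~58.3 hp: the 10 gpm / 10,000 psi end

# Both sit just under the 60-hp rating of the drive motor.
```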


Figure 1. Delivery options from the same drive train with two different high-pressure ends.

The pressure range that this provided covers much of the range that was then available for high-pressure pumping units using the conventional multi-piston connection through a crankshaft to a single drive motor. Above that pressure it was necessary to use an intensifier system, which I will cover in later posts.

However there were a couple of snags in using this system to explore the cutting capabilities of waterjet streams in a variety of targets. The first of these was when the larger flow system was attached to the unit. In order to compare “apples with apples” at different pressures some of the tests were carried out with the same nozzle orifice. But the pump drive motor was a fixed speed unit which produced the same 10 gpm volume flow out of the delivery manifold regardless of delivery pressure (within the design limits). Because the single small nozzle would only handle a quarter of this flow, at that pressure (see table from Waterjetting 1c) the rest of the water leaving the manifold needed an alternate path.


Figure 2. Positive displacement pump with a bypass circuit.

This was provided through a bypass circuit (Figure 2) so that, as the water left the high-pressure manifold it passed through a “T” connection, with the perpendicular channel to the main flow carrying the water back to the original water tank. A flow control valve on this secondary circuit would control the orifice size the water had to pass through to get back to the water tank, thereby adjusting the flow down the main line to the nozzle, and concurrently controlling the pressure at which the water was driven.

Thus, when a small nozzle was attached to the cutting lance most of the flow would pass through the bypass channel. While this “works” when the pump is being used as a research tool, it is a very inefficient way of operating the pump. Bear in mind that the pump is being run at full pressure and flow delivery, but only 25% of the flow is being sent to the cutting system. This means that you are wasting 75% of the power of the system. There are a couple of other disadvantages that I will discuss later in more detail, but the first is that the passage through the valve will heat the water a little. Keep recirculating the water over time and the overall temperature will rise to levels that can be of concern (it melted a couple of fittings on one occasion). The other is that if you are using a chemical treatment in the water then the recirculation can quite rapidly affect the results, usually negatively.
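Both the power waste and the heating of the bypassed water can be estimated quickly. The sketch below uses my own numbers, not the post's, and assumes the full pressure drop across the bypass valve ends up as heat in the water passing through it:

```python
PSI_TO_PA = 6894.76   # pascals per psi
RHO = 1000.0          # kg/m^3, density of water
C_WATER = 4186.0      # J/(kg K), specific heat of water

def bypass_temp_rise(pressure_drop_psi):
    """Temperature rise (K) of water throttled across the bypass valve,
    assuming all of its pressure energy is dissipated as heat."""
    return pressure_drop_psi * PSI_TO_PA / (RHO * C_WATER)

# 10 gpm pump, one nozzle taking 2.5 gpm: three-quarters of the flow
# (and so roughly three-quarters of the pump power) goes down the bypass.
wasted_power_fraction = 1 - 2.5 / 10.0   # 0.75

# Each pass through the valve at a 10,000 psi drop heats that water by ~16 K,
# which is why a recirculating tank warms up alarmingly fast.
dT_per_pass = bypass_temp_rise(10_000)
```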

It would be better if the power of the pump were fully used in delivering the water flow rate required for the cutting conditions under which the pump was being used. With a fixed size of pistons and cylinders this can be achieved, to an extent, by changing the rotation speed of the drive shaft. This can, in turn, be controlled through use of a suitable gearbox between the drive motor and the main shaft of the pump. As the speed of the motor increases, so the flow rate also rises. For a fixed nozzle size this means that the pressure will also rise. And the circuit must therefore contain a safety valve (or two) that will open at a designated pressure to stop the forces on the pump components from rising too high.
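For a fixed nozzle, flow scales linearly with shaft rpm, while pressure scales with the square of flow, so a modest gearbox change moves the operating point quickly. A sketch of that scaling, assuming an ideal positive-displacement pump and incompressible flow (the base figures are illustrative, not from the post):

```python
def scaled_operating_point(base_rpm, base_gpm, base_psi, new_rpm):
    """Operating point of a positive-displacement pump feeding a fixed
    nozzle: flow ~ rpm, and pressure ~ flow^2 (orifice law), ignoring
    compressibility and losses."""
    flow = base_gpm * new_rpm / base_rpm
    pressure = base_psi * (flow / base_gpm) ** 2
    return flow, pressure

# Speeding the shaft up 25% raises flow by 25% but pressure by ~56%,
# which is why the circuit needs a safety valve:
flow, pressure = scaled_operating_point(400, 26.4, 10_000, 500)
```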


Figure 3. Output flows from a triplex (3-piston) pump in gpm, for varying piston size and pump rotation speed. Note that the maximum operating pressure declines as flow increases, to maintain a safe operating force on the crankshaft.

The most efficient way of removing a target material varies with the nature of that material. But it should not be a surprise that neither a flow rate of 10 gpm at 10,000 psi nor a flow rate of 3.3 gpm at 30,000 psi gave the most efficient cutting for most of the rock that we cut in those early experiments.
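It is worth noticing that these two operating points are essentially the same pump power, traded between pressure and flow (using hp = psi × gpm / 1714):

```python
# Hydraulic horsepower for the two operating points quoted in the text.
hp_low_pressure  = 10_000 * 10.0 / 1714   # 10 gpm at 10,000 psi
hp_high_pressure = 30_000 * 3.3 / 1714    # 3.3 gpm at 30,000 psi
print(f"{hp_low_pressure:.1f} hp vs {hp_high_pressure:.1f} hp")
```

Both come out close to 58 hp, so the question being asked in these experiments was how best to spend a fixed power budget, not how much power to apply.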

To illustrate this with a simple example, consider the case where the pump was configured to produce 3.3 gpm at pressures up to 30,000 psi. At a nozzle diameter of 0.025 inches the pump registered a pressure of 30,000 psi at full flow through the nozzle. At a nozzle diameter of 0.03 inches the pump registered 20,000 psi at full flow, and at a nozzle diameter of 0.04 inches the pressure was 8,000 psi. (The numbers don’t quite match the table because of water compression above 15,000 psi.) Each of these jets was then used to cut a slot across a block of rock, at the same traverse speed (the relative speed of the nozzle over the surface) and at the same standoff distance between the nozzle and the rock. The depth of cut was then averaged over the cut length.
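As a rough cross-check, the incompressible orifice relation Q ≈ 29.84·Cd·d²·√P can be run against the three diameter/pressure pairs above. The discharge coefficient of 0.95 is an assumption, and, as the author notes, water compressibility above about 15,000 psi (along with real nozzle behavior) means the measured pairs will not line up exactly with this idealized relation; the smallest nozzle nevertheless comes out near the pump's 3.3 gpm rating.

```python
import math

def theoretical_flow_gpm(d_in, p_psi, cd=0.95):
    """Incompressible orifice relation: Q = 29.84 * Cd * d^2 * sqrt(P)."""
    return 29.84 * cd * d_in**2 * math.sqrt(p_psi)

# (nozzle diameter in inches, measured pressure in psi) pairs from the text
for d, p in ((0.025, 30_000), (0.030, 20_000), (0.040, 8_000)):
    q = theoretical_flow_gpm(d, p)
    print(f"d = {d:.3f} in at {p:,} psi -> ~{q:.1f} gpm (ideal)")
```

The growing gap between these ideal flows and the fixed pump delivery at the larger diameters is a reminder that pump-side effects, not just nozzle geometry, set the pressure actually registered.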


Figure 4. Depth of cut into sandstone, as a function of nozzle diameter and jet pressure.

If the success of the jet cut is measured by the depth of the cut achieved, then the plot shows that the optimal cutting condition would likely be achieved with a nozzle diameter of around 0.032 inches, with a jet pressure of around 15,000 psi.

This cut is not made at the highest jet pressure achievable, nor at the largest nozzle diameter tested; rather, it lies somewhere in between. It is this understanding, together with the ability to manipulate the pressures and flow rates of the waterjets the pump produces, that makes it far more practical to optimize pump performance through the proper selection of gearing than it was when I got that early pump.

This holds true not just for cutting into rock with a plain waterjet; it has ramifications for other uses of both plain and abrasive-laden waterjets, and so we will return to the topic as this series continues.
