Tuesday, March 5, 2013

Supersonic and High-Velocity Subsonic Saltwater and Freshwater Cloud-Making Cannons

By Aaron Franklin

As a complement to cloud brightening systems, these cannons are for use in calm or windy blue-sky conditions over ocean, sea and glacial ice, and land permafrost.

They may also be very important this year: high-tech cloud brightening/making gear does not look like it will be easy to get out in large unit numbers, while existing fire-pump systems are already available in the numbers we need now.

They are also essentially no different from the snowmaking gear used on ski fields, except that for making snow a lower velocity is fine and no CCNs are required, just air below 0°C and freshwater.

- High-pressure / high-volume fire-fighting/water-cannon pump gear can be used as is, or modified for higher pressure and kW capacity to increase output volume at similar nozzle velocities.

An aerated system looks best at this point because:
  • By using de Laval nozzles (convergent-divergent, giving a supersonic, tight-stream output) the aerated water can be accelerated by expansion to high velocity or supersonic speed as it leaves the divergent exit section of the nozzle.
  • Nozzle friction is reduced because air sticks to the surface and creates a gaseous boundary layer.
  • For aeration: copper or soft stainless tubes, CNC laser-perforated, swaged to flare into hexagonal ends and stacked to form a honeycomb aeration section (just like a WW2 Spitfire radiator, except that had the water on the outside of the tubes and no holes), fed with compressed air and placed in the water feed before the pumps, can entrain microbubbles in the water.
  • Alternatively, supersonic streams can be achieved with unaerated water and convergent nozzles, but more pressure is required.
  • The high kinetic energy of the water stream will cause excellent dispersion and evaporation via transonic shockwaves as the stream slows, shedding its outer layer as it goes. It eventually disintegrates completely: below the altitude where enough kinetic energy has converted to gravitational potential energy for the stream to drop to transonic speed if the stream is below a critical diameter, or not far above that altitude if it is above that diameter.
  • If it is a high-velocity subsonic jet it will still shatter the droplets, and evaporate many if not all of them, through air turbulence and the conduction/friction heating that comes with the high differential speed.
  • We need to look at freshwater versions as well. This is because saltwater rain will be fine over open oceans, but landing on ice and land permafrost it will make them melt faster, and saltwater rain is not at all good for living land ecologies either. There is also going to be a big use for these cannons in protecting the land permafrosts with cloud cover. Freshwater versions will benefit from using water with diatoms growing in it, as these act as cloud droplet condensation nuclei, just like salt crystals.

    Seeding tundra lakes with diatoms will also consume CO2 and oxygenate the water, enhancing aerobic digestion of dissolved methane and other organic carbon. Removing the diatoms with the water for the cloud cannons will also remove excess nutrients from the waters, provide aeration for skyborne digestion of DOC to CO2, and clean up the lakes, making them better water sources for winter snow-making.
  • We're going to need to strafe the sky with these things for best cloud-making effect, so we need to get ready to mount them on naval gun turrets with computer-controlled tracking systems, and look into parking tanks and APCs with suitable turrets on container ship decks.

    Using these tanks and APCs with cloud cannons on the Arctic tundras, and perhaps fixed installations when the wind is blowing, can help protect the permafrosts.

Calculations and conclusions, for peer review:

These are based on a sonic-speed case. Faster will give more range but less volume, and slower more volume but less range, for a given pump system.

speed of sound = 330 m/s

Ep = mgh

Ek = 0.5mv^2

Ek sonic (1 kg water) = 0.5 x 1 x 330^2 = 54,450 J

54,450 J = mgh = 1 kg x 9.8 m/s^2 x h

vertical ballistic altitude h = 54450/9.8 = 5556 m = 5.556 km
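
A minimal Python sketch of this drag-free energy balance (real jets lose much of their energy to air drag, so the altitude is an upper bound):

    g = 9.8          # gravitational acceleration, m/s^2
    v_sonic = 330.0  # approximate speed of sound, m/s

    ek = 0.5 * 1.0 * v_sonic**2   # kinetic energy per kg of water: 54450 J
    h = ek / (1.0 * g)            # Ep = mgh, solved for h
    print(f"Ek = {ek:.0f} J/kg, ballistic altitude = {h/1000:.3f} km")  # ~5.556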


cloud water content = 0.3 g/m^3

10 m thickness = 3 g/m^2

100 m thickness = 30 g/m^2

4 sqkm = 4,000,000 m^2


Fixed-position still-air strafing:

A = pi x r^2

r = sqrt(A/pi)


4 sqkm horizontal cannon range r = sqrt(4/pi) = 1.13 km


Moving ship, land tanker, or wind-blown fixed-position strafing:

14 m/s ≈ 50 km/hr (vehicle or wind velocity)

- Covering 4 sqkm per hr requires only 4/50 = 0.08 km = 80 m of water-cannon range.
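
A short Python check of both coverage geometries, using the 4 sqkm/hr footprint and 50 km/hr speed from above:

    import math

    A = 4.0  # target cloud footprint per hour, km^2

    # Fixed position: cover a disc of area A, so range r = sqrt(A/pi).
    r_fixed = math.sqrt(A / math.pi)        # ~1.13 km

    # Moving platform (or wind) at 50 km/h: the hourly footprint is a
    # 50 km long strip, so the required swath is only A / 50 km wide.
    swath_m = A / 50.0 * 1000.0             # 0.08 km = 80 m

    print(f"fixed range ~{r_fixed:.2f} km, moving swath ~{swath_m:.0f} m")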


Water volume and flow rates:

4 sqkm at 10 m thick = 12,000 liters = 12 tons (less than 10 minutes at the flow rates of existing fire pumps)

at 100 m thick = 120 tons (could be less than an hour per fire pump)

1 small supersonic cloud cannon could produce 24 hr x 4 sqkm/hr = 96 sqkm of 100 m thick cloud per day.
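
These tonnages follow directly from the water-content figures above; a quick Python check:

    # Water mass to build a cloud layer over 4 km^2 at the assumed
    # liquid water content of 0.3 g per m^3 of cloud.
    lwc = 0.3        # g/m^3
    area = 4e6       # m^2

    for thickness_m in (10, 100):
        tonnes = lwc * thickness_m * area / 1e6   # 1 ton = 1e6 g = 1000 L
        print(f"{thickness_m} m thick: {tonnes:.0f} tons ({tonnes*1000:.0f} L)")
    # -> 12 tons and 120 tons, as above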


Kinetic energy:

120,000 liters per hr / 3600 = 33.3 L/s

12,000 liters per hr / 3600 = 3.33 L/s

Ek Sonic 1kg = 54.45 kJ


kW for a 100 m thick, 4 sqkm cloud layer in an hr = 33.3 L/s x 54.45 kJ/L = 1813 kW

- existing pump designs would need to be upgraded to higher power/pressure to produce this much cloud if supersonic velocities are required, but this is a very small engineering challenge. Ships of trawler size and up, and tanks, have more than enough kW for the job. Rapid small-amplitude vertical oscillation of the jet release angle should lay down the average 100 m thick cloud bank aimed for.

kW for a 10 m thick, 4 sqkm cloud layer in an hr = 3.33 L/s x 54.45 kJ/L = 181.3 kW

- this looks good for mobile strafing with existing fire pumps, provided aerated water and de Laval nozzles are used to produce supersonic velocities. The 80 m range required for 4 sqkm/hr coverage is no problem for the small-volume aerated supersonic water flows possible from existing fire pumps.
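
The power figures are simply mass flow times kinetic energy per kilogram. A minimal Python sketch of that ideal hydraulic power (pump and nozzle losses ignored):

    # Hydraulic power = mass flow x kinetic energy per kg at 330 m/s
    # (1 L of water ~ 1 kg); pump and nozzle losses would add to this.
    ek_per_kg = 54.45e3    # J/kg, from the sonic case above

    for flow_l_s in (33.3, 3.33):
        print(f"{flow_l_s} L/s -> {flow_l_s * ek_per_kg / 1e3:.0f} kW")
    # -> 1813 kW (100 m thick case) and 181 kW (10 m thick case)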


Latent heat of evaporation and Ek sonic considerations:

latent heat of evaporation of water = 2260 kJ per L

Ek sonic water = 54.45 kJ per L
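
The gap between those two numbers is the point: evaporation moves roughly forty times more heat than the jet carries as kinetic energy. A rough Python sketch; the 1000 m^3 of air mixed per litre is an illustrative assumption, not a figure from the text:

    # Evaporation moves far more energy than the jet delivers:
    latent = 2260.0    # kJ absorbed from the air per litre evaporated
    ek     = 54.45     # kJ of kinetic energy per litre at sonic speed
    print(f"latent/kinetic ~ {latent / ek:.0f}x")    # ~42x

    # Air holds roughly 1.3 kJ per m^3 per kelvin near sea level, so
    # evaporating 1 L into an assumed 1000 m^3 of air cools it by ~1.7 K.
    air_cap_kj_m3_k, air_volume_m3 = 1.3, 1000.0     # assumed mixing volume
    print(f"cooling ~ {latent / (air_cap_kj_m3_k * air_volume_m3):.1f} K")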

  • If the very small water droplets produced by transonic shockwaves shattering water that breaks from the decaying jet partially or fully evaporate (this will depend on stream velocity), they will do so by absorbing a lot of heat from the air they land in. This will cool and supersaturate the air with water vapour, and result in rapid droplet condensation in both the saltwater and freshwater versions proposed.
  • I am advised that we can expect around 60% humidity levels in arctic conditions. As the evaporative cooling effect will cool the air that the stream droplets land in, and vast quantities of very small cloud nucleation salt crystals will be formed, we can expect a lot more cloud to be formed than the above examples suggest.
  • Aeration should result in more and smaller salt crystals and droplets: partly because microbubbles enhance droplet fragmentation, and partly because the water is supersaturated with air, an effect enhanced by evaporation. This causes many disturbances per drop as new bubbles precipitate, each initiating the precipitation of many salt crystals per droplet. Turbulence will also initiate precipitation of air and salt crystals in the supersaturated droplet.
  • How much extra cloud will depend on how much atmospheric turbulence and mixing is generated by the strafing pattern, and on local temperature and humidity conditions.
  • Less mixing will also result in larger cloud droplets.
  • Too much mixing runs the risk of forming little cloud at all, as humidity levels may be too low for any droplets to form around the salt crystals.
  • It's quite likely that 500-600 km/h will be sufficient velocity. This would produce about 15 litres per second from standard fire-pump gear. A good estimate seems to be that this would initially produce around 100 sqkm of 100 m thick cloud bank per day. However, from what I am hearing, there is likely to be a repeating cycle of droplet evaporation, re-nucleation of new droplets, and back to droplet evaporation, due to the added water vapour and downwind cooling effects. So the total cloud produced may be more than this.
  • We should start testing these ASAP. Others doing testing too would be a good thing.

An integrated systems plan for 10-year carbon pumpdown to 280 ppm

By Aaron Franklin

There's little point getting too distracted with talk on how to reduce human CO2 emissions until we have succeeded in reversing the Arctic sea-ice crash.

However, as geoengineering for this will be an ongoing annual commitment until CO2 is back in the region of 280 ppm, we do need a plan to pump carbon out of the atmosphere and the sea (where 60% of the 500 Gton total human contribution resides).

Current estimates are 56.4 billion tonnes C/yr (53.8%) for terrestrial primary production and 48.5 billion tonnes C/yr for oceanic primary production.

Primary ocean production is reported to have fallen by nearly half in the last 100 years. The most likely cause is the reduction in windblown dust from the irrigation and cultivation of arid areas, and from CO2 increases prolonging the growing season of grasses in those areas; the amount of natural wind-borne iron-carrying dust has fallen dramatically, 30% over the past 30 years alone.
  • Tropical rainforests globally cover 8 million square km with biomass productivity of 2,000 g of carbon per square metre, for a total of 16 Gtons of carbon per year. Doubling this area would only approach an extra 16 Gtons of annual carbon pulldown after 1 to 2 decades, and with studies showing drought stress already turning Amazon and southeast Asian rainforests into net CO2 producers rather than removers, no gains might occur at all.

  • Temperate forests globally cover 19 million square km with biomass productivity of 1,250 g of carbon per square metre, for a total of 24 Gtons of carbon per year. Doubling this area would only approach an extra 24 Gtons of annual carbon pulldown after 1 to 2 decades, and would then need a further 20 years to remove the 500 Gton existing carbon debt, and that's assuming that 100 percent of the carbon taken in by these trees can be kept away from consumers and decomposers.

  • The oceans globally cover 350 million square km with average biomass productivity of 140 gC/m²/yr, for a total of 48.5 Gtons of carbon per year, heavily weighted towards coastal areas at present. The open oceans are 311 million sqkm with average biomass productivity of 125 gC/m²/yr and a total of 39 Gtons of carbon per year; however, as can be clearly seen on the map below, some 80% of the ocean is so isolated from land-sourced nutrient inputs that its productivity is about 1/100 of that of the most productive oceanic zones.

Map of the Earth showing primary (photosynthetic) productivity, from: http://upload.wikimedia.org/wikipedia/commons/4/44/Seawifs_global_biosphere.jpg

The oceanic desolate zone, at 80% of 311 million sqkm, is 249 million sqkm. 50 GtonsC / 249 million sqkm = 201 tonsC per sqkm per year, i.e. about 200 gC per square metre per year on average. With prime coastal aquatic environments like estuaries and coral reefs producing ten times that, at 2,000+ gC/sqm, it would seem very achievable to increase deep-ocean productivity this much.
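
A quick Python check of that desolate-zone arithmetic:

    # Productivity increase needed over the desolate zone to add 50 GtC/yr.
    desolate_km2 = 0.8 * 311e6          # ~249 million km^2
    extra_c_t = 50e9                    # tonnes of carbon per year

    t_per_km2 = extra_c_t / desolate_km2
    # 1 tonne per km^2 equals 1 gram per m^2, so this prints both units:
    print(f"{t_per_km2:.0f} tC/km^2/yr = {t_per_km2:.0f} gC/m^2/yr")   # ~201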

Doubling the productivity of the oceans could pump down the global 500 Gton carbon burden in as little as 10 years, and it is possible, affordable, and already very well studied.

In the currently near-sterile central oceans, the absence of an existing food chain would ensure most of this phytoplankton carbon dies and sinks a couple of hundred meters into the tidal mixed layer.

This can be a problem....

The amount of organic carbon needed to completely remove all oxygen from the WHOLE ocean as it is decomposed by bacteria is thought to be 1000 Gton C. Just letting the phytoplankton sink into the tidal mixed zone, which is low in oxygen already, would be a very bad idea. Back to this later.

As can be seen on the front-page graphic of http://www.aslo.org/meetings/Phytoplankton_Production_Symposium_Report.pdf, the benefits of iron fertilization alone are only achievable in the nutrient-rich, iron-depleted zones of the Southern Ocean up to 35° south, the equatorial oceans to 20° south and 10° north, and the North Pacific from 40° north. These areas can easily be stimulated urgently.

At the low figure of 1 million tons of carbon per ton of Fe, we would annually need 50 GtC / 1 MtC = 50,000 tons of iron dust, which is bugger all.
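
The same arithmetic in Python, using the text's conservative 1 ton Fe per 1 Mt C ratio:

    # Iron needed to fix 50 GtC/yr at 1 ton Fe per 1 Mt C.
    print(f"{50e9 / 1e6:,.0f} tons of iron dust per year")   # 50,000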

Antarctic krill have a total fresh biomass of up to 500 million tons. This will increase several times over when we iron-fertilise the Southern Ocean.

The rest of the desolate zones need nitrogen and phosphorus. Rather than using mined phosphates and CO2-producing urea for nitrogen, there are these alternatives:

  • Natural volcanic ash. There are concerns about heavy metal contamination from this but as long as we stick to siliceous ash from recycled seafloor volcanism we should be pretty OK.
  • Wave pumped chimneys. Tested already, these pump nutrient rich deep benthic water via wave power. We would however need millions of these due to scale limitations imposed by ocean wavelengths.
  • Chimneys driven by submarine volcanism. An idea I was looking at 10 yrs ago (dibs on the carbon credits, giggles, could make me a trillionaire) this could quickly fill the oceanic gyres of the desolate zones with all the deep benthic and volcano enriched nutrients needed.
  • Good old fashioned blood and bone. Puree krill from the southern ocean and fert the low nutrient desolate zones.

Simultaneously with fertilising the desolate zones, we'll need to seed them with the best diatoms and suitable higher-temperature krill species, such as the North Pacific krill common in the Sea of Japan. It would be possible to multiply the world krill population 100x, to the region of 50 Gtons, making krill the biggest living carbon store on the planet.

Krill are looking very good for Ocean Fertilization for a number of reasons:

a) Getting phytoplankton-produced carbon to the seafloor or depth.

- Approximately every 13 to 20 days, krill shed their chitinous exoskeleton which is rich in stable CaCO3.
- Krill are very untidy feeders, and often spit out aggregates of phytoplankton (spit balls) containing thousands of cells sticking together.
- They produce fecal strings that still contain significant amounts of carbon and the carbonate/silica glass shells of the diatoms.

These are all heavy and sink very fast into the deep benthic zone and the ocean floor. Oxygen levels are higher down there, and the deep benthic zone is much larger in volume than the rest of the world's oceans. Besides which, unlike phytoplankton alone, the spit balls and fecal strings stand a much better chance of not being decomposed and using up oxygen. The exoskeletons won't be decomposed at all.

To quote Wikipedia: "If the phytoplankton is consumed by other [than krill] components of the pelagic ecosystem, most of the carbon remains in the upper strata. There is speculation that this process is one of the largest biofeedback mechanisms of the planet, maybe the most sizable of all, driven by a gigantic biomass"

b) They can be dried and pressed for krill oil. Krill oil can be used directly for biodiesel or as a food supplement.

c) Dried krill (pressed or not), and any other biomass, can be pyrolysed for gas and pyrolysis oil for existing power plants, which can have their flue CO2 fed into algae ponds for negative-carbon energy.

- The pyrolysis oil can be used directly for large diesels like ships and heavy machinery.

- The water-soluble portion of pyrolysis oil can be used as a timber construction adhesive for plywoods, chipboards, laminated beams, etc.

- Existing refineries can produce bio-petrols, bio-diesels, and bio-plastics from pyrolysis oil with little modification.

- Pyrolysis also produces biochar, which is a terrific fertiliser, producing soils called Terra Preta that sequester fresh carbon from humus, water and nutrients better than any other soils on the planet, and hold their fertility for thousands of years. This is the best and safest way to bury carbon.

d) Krill are a delicious, nutritious food for humans, able to replace massively methane-emitting beef/sheep/goats so that this pastoral land can be reforested with food forests and indigenous ecologies.

e) Krill are the best food for a large number of fish and whale species. Putting carbon into living marine biomass is a safe store, and replaces the carbon that we have lost by depleting those stocks.

f) Krill are very efficient phytoplankton harvesters, sometimes reaching densities of 10,000–30,000 individual animals per cubic metre. They quickly swarm to any plankton bloom in the area.

Using them to harvest phytoplankton, and then using simple krill nets on the world's fishing fleets, is much easier than getting phytoplankton out of the ocean ourselves, as that requires energy-intensive centrifuge separation of large quantities of water.

g) Female krill lay 6,000–10,000 eggs at one time, and reach maturity after 2-3 years.

- Obviously they can quickly build biomass to any level we can provide food for, particularly if we are putting them in fresh habitat where the small fish that normally consume lots of tiny immature krill are absent.

If we increased the total biomass of krill to 50 Gtons fresh weight as suggested above, that would be about 10 Gtons C, and we could remove this amount of carbon from the ocean every 2 years; this alone has the potential to remove 100 Gtons C from the ocean/atmosphere in twenty years.

As krill are such messy feeders and inefficient digesters, and shed carbonate-rich exoskeletons every 2-3 weeks, they would probably send to the ocean floor around 100 times as much carbon as that, to relatively safely aggregate into sediments, stable carbonate and undecomposed organic carbon. So burying 500 Gtons C in one year would be possible.

Obviously we would only need to increase krill populations 10x to get the result we need in about 10 years total, including the breed-up time.
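
A Python sketch of that krill arithmetic; the biomass target, the ~20% carbon fraction of fresh weight, and the 100x export multiplier are all the proposed figures above, not measurements:

    # Krill carbon arithmetic, using the proposed figures above.
    krill_fresh_gt = 50.0                     # Gt fresh biomass target
    standing_gtc = krill_fresh_gt * 0.2       # ~10 GtC standing stock

    print(f"{standing_gtc * 10:.0f} GtC harvested over 20 yr")  # 10 GtC / 2 yr
    export_multiplier = 100    # claimed moult/spitball/faecal-string export
    print(f"{standing_gtc * export_multiplier / 2:.0f} GtC/yr to the sea floor")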

We'd be best to harvest as much as possible to refertilise and replace the carbon in our soils. Remember that about 600 Gton C of carbon from our soils has gone into the oceans already in the last 2000 years.

Friday, March 1, 2013

Using the Oceans to Remove CO2 from the Atmosphere

By William H. Calvin, PhD, Professor at the University of Washington and author of Global Fever: How to Treat Climate Change

1. Prospects for an Emergency Drawdown of CO2

Suppose we had to quickly put the CO2 genie back in the bottle. After a half-century of “thinking small” about climate action, we would be forced to think big—big enough to quickly pull back from the danger zone for tipping points and other abrupt climate shifts.

By addressing the prospects for an emergency drawdown of excess CO2 now, we can also judge how close we have already come to painting ourselves into a corner where all escape routes are closed off.7

Getting serious about emissions reduction will be the first course of action to come to mind in a climate crisis, as little else has been discussed. But it has become a largely ineffective course of action11 with poor prospects, as the following argument shows.

In half of the climate models14, global average overheating is more than 2°C by 2048. But in the US, we get there by 2028. It is a similar story for other large countries.

Because most of the growth in emissions now comes from the developing countries burning their own fossil fuels to modernize with electricity and personal vehicles, emissions growth is likely out of control, though capable of being countered by removals elsewhere.

But suppose the world somehow succeeds. In the slow-growth IPCC scenario, similar to what global emissions reduction might buy us, 2°C arrives by 2079 globally, but in the US it arrives by 2037.

So drastic emissions reduction worldwide would only buy the US nine extra years.

However useful it would have been in the 20th century, emissions reduction has now become a failed strategy, though still useful as a booster for a more effective intervention.

We must now resort to a form of geoengineering that will not cause more trouble than it cures, one that addresses ocean acidification as well as overheating and its knock-on effects.

Putting current and past CO2 emissions back into secure storage5 would reduce the global overheating, relieve deluge and drought, reverse ocean acidification, reverse the thermal expansion portion of sea level rise, and reduce the chance of more4 abrupt climate shifts.

Existing ideas for removing the excess CO2 from the air appear inadequate: too little, too late. They do not meet the test of being sufficiently big, quick, and secure. There is, however, an idealized approach to ocean fertilization5 that appears to pass this triple test.

It mimics natural up- and down-welling processes using push-pull ocean pumps powered by the wind. One pump pulls sunken nutrients back up to fertilize the ocean surface—but then another pump immediately pushes the new plankton production down to the slow-moving depths before it can revert to CO2.

How Big? How Fast?

The atmospheric CO2 is currently above 390 parts per million and the excess CO2 growth has been exponential. Excess CO2 is that above 280 ppm in the air, the pre-industrial (1750) value and also the old maximum concentration for the last several million years of ice age fluctuations between 200 and 280 ppm.

Is a 350 ppm reduction target12, allowing a 70 ppm anthropogenic excess, low enough? We hit 350 ppm in 1988, well after the sudden circulation shift18 in 1976, the decade-long failure of Greenland Sea flushing24 that began in 1978, and the sustained doubling (compared to the 1950-1981 average) of world drought acreage6 that suddenly began in 1982.

Clearly, 350 ppm is not low enough to avoid sudden climate jumps4, so for simplicity I have used 280 ppm as my target: essentially, cleaning up all excess CO2.

But how quickly must we do it? That depends not on 2°C overheating estimates but on an evaluation of the danger zone2 we are already in.

The Danger Zone

Global average temperature has not been observed to suddenly jump, even in the European heat waves of 2003 and 2010. However, other global aspects of climate have shifted suddenly and maintained the change for many years.

The traditional concern, failure of the northern-most loop of the Atlantic meridional overturning circulation (AMOC), has been sidelined by model results20-22 that show no sudden shutdowns (though they do show a 30% weakening by 2100).

While the standard cautions about negative results apply, there is a more important reason to discount this negative result: there have already been decade-long partial shutdowns not seen in the models.

Not only did the largest sinking site shut down in 1978 for a decade24, but so did the second-largest site23,28 in 1997. Were both the Greenland Sea and the Labrador Sea flushing to fail together2, we could be in for a major rearrangement of winds and moisture delivery as the surface of the Atlantic Ocean cooled above 55°N. From these sudden failures and the aforementioned leaps in drought, one must conclude that big trouble could arrive in the course of only 1-2 years, with no warning.

So the climate is already unstable. (“Stabilizing” emissions4 is not to be confused with climate stability; it still leaves us overheated and in the danger zone for climate jumps. Nor does “stabilized” imply safe.)

While quicker would be better, I will take twenty years as the target for completing the excess CO2 cleanup in order to estimate the drawdown rate needed.
The Size of the Cleanup

It is not enough to target the excess CO2 currently in the air, even though that is indeed the cause of ocean acidification, overheating, and knock-on effects. We must also deal with the CO2 that will be released from the ocean surface as air concentration falls and the bicarbonate buffers reverse, slowing the drawdown.

Thus, I take as the goal to counter the anthropogenic emissions4,5 since 1750, currently totaling 350 gigatonnes of carbon. (GtC = 10^15 g of carbon = PgC.)

During a twenty-year project period, another 250 GtC are likely to be emitted, judging from the 3% annual growth in the use of fossil fuels5 despite some efforts at emissions reduction. Thus we need to take back 600 GtC within 20 yr, at an average rate of 30 GtC/yr, in order to clean up (for the lesser goal of countering continuing emissions, it would take 10 to 15 GtC/yr).
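
A quick Python check of the 250 GtC figure, assuming for illustration that emissions start near 10 GtC/yr and compound at 3%:

    # Cumulative emissions over 20 years, starting near 10 GtC/yr (assumed)
    # and growing 3% annually.
    e0, years = 10.0, 20
    total = sum(e0 * 1.03**t for t in range(years))
    print(f"~{total:.0f} GtC")    # ~269 GtC, near the 250 GtC used above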

Chemically scrubbing the CO2 from the air is expensive and requires new electrical power from clean sources, not likely to arrive quickly enough. On this time scale, we cannot merely scale up what suffices on submarines.

Thus we must find ways of capturing 30 GtC/yr with traditional carbon-cycle8 biology, where CO2 is captured by photosynthesis and the carbon incorporated into an organic carbon molecule such as sugar. Then, to take this captured carbon out of circulation, it must be buried to keep decomposition methane and CO2 from reaching the atmosphere.

Sequestering CO2

One proposal26 is to bundle up crop residue (half of the annual harvest is inedible leaves, skins, cornstalks, etc.) and sink the weighted bales to the ocean floor. They will decompose there but it will take a thousand years before this CO2 can be carried back up to the ocean surface and vent into the air.

Such a project, even when done on a global scale, will yield only a few percent of 30 GtC/yr. Burying raw sewage3 is no better.

If crop residue represents half of the yearly agricultural biomass, this also tells you that additional land-based photo­synthesis, competing for space and water with human uses, cannot do the job in time.5 It would need to be far more efficient than traditional plant growth. At best, augmented crops on land would be an order of magnitude short of what we need for either countering or cleanup.

Big, Quick, and Secure

Because of the threat from abrupt climate leaps, the cleanup must be big, quick, and secure.

Doubling all forests might satisfy the first two requirements but it would be quite insecure—currently even rain forests4 are burning and rotting, releasing additional CO2.

 Strike One. We are already past the point where enhanced land-based photosynthesis can implement an emergency drawdown. It cannot even counter current emissions.

Basically, we must look to the oceans for the new photosynthesis and for the long-term storage of the CO2 thus captured.

Fertilization per se

Algal blooms are increases in biological productivity when the ocean surface is provided with fertilizer containing missing nutrients15 such as nitrogen, iron, and phosphorus.

A sustained bloom of algae can be fertilized by pumping up seawater5,16,19 from the depths, a more continuous version of what winter winds9 bring up.

Currently about 11 GtC/yr settles out of the wind-mixed surface layer into the slowly-moving depths13 as plankton die. To settle out another 30 GtC/yr, we would need about four times the current ocean primary productivity. Clearly, boosting ocean productivity worldwide is not, by itself, the quick way to put the CO2 genie back in the bottle.

 Strike Two. Our 41% CO2 excess is already too large to draw down in 20 yr via primary productivity increases in the ocean per se.

However, our escape route is not yet closed off. There is at least one plausible prospect for an emergency draw down for 600 GtC in 20 yr. It seeks to mimic the natural ocean processes of upwelling and downwelling.

2. Push-pull ocean pipes

Upwelling and Downwelling

Upwelling from the depths is typically caused by winds which push aside surface waters, especially those strong westerly winds in the high southern latitudes that continuously circle Antarctica without bumping into land.

In addition to the heavier biomass (the larger fecal pellets and shells) that can settle into the depths before becoming CO2, there is downwelling, an express route to the depths using bulk flow. Surface waters are flushed via whirlpools into the depths of the Greenland Sea and the Labrador Sea23. This downwelling carries along the surface’s living biomass (from bacteria to fish) as well as the dissolved organic carbon (from feces and smaller cell debris).

Note that, in the surface ocean, there is a hundred times more dissolved organic carbon (DOC) than the organic carbon inside living organisms1. Bacterial respiration produces CO2 from this DOC that reaches the air within 40 days.

To augment normal downwelling, one could pump surface DOC and plankton into the ocean depths before they become CO2. Half of the decomposition CO2 produced in the depths rejoins the atmosphere when the deep water is first upwelled a millennium later. Thanks to ocean mixing in the depths and multiple upwelling sites at different path lengths, it will come back up spread out in time after that initial delay.

There is an even larger spread because the other half (called refractory DOC17) is somehow protected from becoming CO2 for a while, even when cycled through the surface layers multiple times.17 Average radiocarbon dates for DOC in the depths are about 4,000 years, not 40 days.

Thus, if we somehow sink 600 GtC into the ocean depths over 20 years,  the return of 600 GtC of decomposition CO2 to the air is spread out over, say, 6,000 years. That is an average of 0.1 GtC each year, about 1% of current emissions. Such a slow return of excess CO2 can be countered by slow reforestation or similar measures.

 From this analysis, we still have a plausible way out of the climate crisis, even on an emergency basis.

What follows is an idealized example of how we might implement it, using less than one percent of the ocean surface for the next twenty years to do the equivalent of plowing under a cover crop.5

Fig. 1. A plankton plantation design using windmill pumps (ref 5), including a fishing lane free of anchor cables. Shading shows the plume of nutrients from a single pump and the plume of organic matter dispersed in the depths. One advantage of windmills is that compressed air can be generated to be pumped into the depths, addressing anoxia problems. Spacing of windmills, however, is subject to the usual limitations of vortices downwind.

Plowing Under a Cover Crop

In addition to the up-pump of the fertilization-only example, add another wind-driven pump nearby that flushes the surface water back down into even deeper depths before its new biomass becomes CO2 again.

If we fertilize via pumping up and sink nearby via bulk flow (a push-pull pump), we are essentially burying a carbon-fixing crop, much as farmers plow under a nitrogen-fixing cover crop of legumes to fertilize the soil.

Algaculture yields25 allow a preliminary estimate to be made of the size of our undertaking. Suppose that a midrange 50 g (as dry weight) of algae can be grown each day under a square meter of sunlit surface, and that half is carbon. Thus it takes about 1 x 10^-4 m^2 to grow 1 gC each year. To produce our 30 x 10^15 gC/yr drawdown rate would require 30 x 10^11 m^2 (0.8% of the ocean surface, about the size of the Caribbean).

But because we pump the surface waters down, not dried algae, we would also be sinking the entire organic carbon soup of the wind-mixed surface layer: the carbon in living cells plus the hundred-fold larger amounts in the surface DOC. Thus the plankton plantations might require only 30 x 10^9 m^2 (closer to the size of Lake Michigan).
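
A Python sketch of this sizing; the midrange yield and the hundred-fold DOC factor are the assumptions stated above:

    # Plantation area for a 30 x 10^15 gC/yr drawdown.
    gc_per_m2_yr = 50.0 / 2 * 365           # 25 gC/day/m^2 -> ~9125 gC/m^2/yr
    area_m2 = 30e15 / gc_per_m2_yr          # ~3.3e12 m^2

    print(f"{area_m2 / 1e6:.1e} km^2")      # ~3.3e+06 km^2, Caribbean-sized
    print(f"{area_m2 / 100 / 1e6:,.0f} km^2 with the 100x DOC factor")  # ~33,000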

Apropos location5, pumping down to 150 m near the edge of the continental shelf would deposit the organic carbon where it could be carried over the cliff and into the slower-moving deep ocean.

The ocean pipe spacing, and the volume pumped down, will depend on the outflow needed to optimize the organic carbon production (the chemostat calculation). Only field trials are likely to provide a better estimate for the needed size of sink-on-the-spot plankton plantations, pump numbers, and project costs. The obvious test beds are the North Sea and Gulf of Mexico where thousands of existing drilling platforms could be used to support appended pipes and pumps for field trials5. Without waiting for floating pumps, we could quickly test for impacts as well as efficient plantation layouts.

I have used windmills here for several reasons: they are familiar mechanisms and they enable a push-pull plantation layout to be readily illustrated. But there are a number of ways to achieve wind-wave-powered pumps, both up and down, such as atmocean.com’s buoyed pipes and Salter’s elevated ring23a to capture wave tops and create a hydrostatic pressure head for sinking less dense warm water into the more dense cool waters of the depths. Each implementation will have considerations peculiar to it; what follows are some of the more general advantages and disadvantages in the context.

Fig 2 A,B: A less expensive pump can be constructed that uses wave power and allows closer packing (ref 3). They would be more effective in the Antarctic Circumpolar Current because of the wave heights. Calvin (2012b), after P. Kithil's design (atmocean.com).

Fig 3. Salter Sink23a uses a meter-high lip on a large floating ring to capture wavetops. This builds up enough hydrostatic pressure to push down warm surface water, kept enclosed by a skirt. It can also (not shown) achieve some upwelling.  Warm water exiting in the depths will rise outside the tube, entraining higher-density nutrient-rich cold water. The mix can rise above the thermocline into the surface layer, fertilizing plankton growth. Detail from figure in Intellectual Ventures white paper13a.

Pro and Con

Here we have an idealized candidate for removing 600 Gt of excess carbon from the air: the sink-on-the-spot plankton plantation that moves decomposition into the thousand-year depths. Push-pull pumping for fertilization and sequestration is relatively low-tech and merely augments natural up- and downwelling processes.

This idealized candidate has some unique advantages compared to current climate strategies: It is big, quick, and secure. It is impervious to drought and holdout governments. It does not compete for land, fresh water, fuel, or electricity. By bringing up cold water from the depths and sinking warm surface water into the thousand-year depths, it cools the ocean surface regionally. And there is a “cognitive carrot,” an immediate payoff every year (fish catch5, cooling hurricane paths9a) while growing the climate fix (the 600 GtC emergency draw down).

The idealized example intentionally uses technologies that are too old or simple to be patentable. The industries most likely to benefit would be fishing and the offshore services presently associated with oil and gas platforms.

It is against such advantages that we must judge the potential downsides5. Concerns voiced thus far include:
  1. Could we get international agreement fast enough? Continental shelves in the most productive latitudes belong to relatively wealthy countries. Their independent initiatives could quickly establish many plankton plantations just inside the shelf without new treaties.
  2. Won’t it pollute? Perhaps not as proposed here, using local algae and nutrients in a vertical loop, but the usual considerations would apply should we want to introduce exotic or modified algal species to achieve even higher rates of sinking potential CO2. Toxic blooms are possible during productivity transitions. With floating enclosures rather than plumes, this would change.
  3. Won’t anoxic “dead zones” form? Shallow continental shelf sites should be avoided because hypoxia will occur from the decomposition of the downwelled carbon soup in a restricted volume. Fish kills occur when anoxia develops more quickly than fish can find their way out of the increasingly hypoxic zone. However, a maintained hypoxic zone will mostly repel fish from entering.
  4. We don’t know what will happen. The novelty here is minimal, even less than for iron fertilization. Fertilizing and sinking surface waters merely mimics, albeit in new locations or new seasons, those frequently studied natural processes seen on a large scale in winter mixing and in ocean up- and downwelling. There is also prehistorical precedent. The 80 ppm drawdown of atmospheric CO2 in the last four ice ages is thought to have occurred via enhanced surface productivity, triggered by a major reduction in the Antarctic offshore downwelling27 that re-sinks nutrient-rich waters brought to the surface in high latitudes by the circumpolar winds.
  5. Won’t this just move the ocean acidification problem into the depths? Since the depths are 98% of ocean volume, there is a fifty-fold dilution of the acidity. Were countering out-of-control emissions to continue for a century, depth acidification might be more of a problem.
  6. Pumping up will just bring up water with higher CO2 than in the surface waters. A depth difference10 of 40 μmol/kg means that upwelling a cubic meter of seawater brings up an unwanted 0.48 g of inorganic carbon. The resulting fertilization will take that CO2 (and more) out of the surface ocean. Also, pumping down the same volume sinks 1 g of potential CO2 as DOC, even without fertilization.
  7. Aren't you going to run out of phosphate, which currently limits the global ocean productivity to a fraction of its capacity? Up-pump pipes could be sited to bring up bottom waters from the southern oceans that are currently rich in phosphate.
A Second Manhattan Project

Though these objections do not seem insurmountable, good reasons usually arise for not implementing most such projects as initially proposed.

This idealized push-pull ocean pumps proposal is meant to give a concrete example, easy to remember, that defines the response ballpark by being big, quick, secure, powered by clean sources, and inexpensive enough so that a country can implement it on its own continental shelf without endless international conferences. Other drawdown schemes—say, floating enclosures or wave-driven circulating cells — need to pass those same tests.

To do the planning job right is going to take a Second Manhattan Project of various experts to design cleanup candidates and evaluate their side effects. Lend them Los Alamos and let the Pentagon buy them what they need with wartime priorities. To field test their plantation designs, let them instrument the many abandoned oil platforms in the North Sea and the Gulf of Mexico. Then quickly deploy the best designs, using the abilities of the offshore services industry.

Aim to accomplish all this in the four year time frame of the original Manhattan Project. Ten years after that, the cleanup job should be half done, and without all of the economic pain of a quick (and ineffective) shutdown of fossil fuel use. At the beginning of World War II, Franklin D. Roosevelt used the metaphor of a “four alarm fire up the street” that had to be extinguished immediately, whatever the cost. Our need for fast action on climate deterioration requires devoting the resources necessary to radically shorten the developmental cycle for all carbon burial projects. We dare not wait until we are weakened before undertaking emergency climate repairs. Our ability to avoid a human population crash will be compromised if economies become fragile or if international cooperation is lost via conflicts. A serious jolt—say, a major rearrangement of the winds—could cause catastrophic crop failures and food riots within several years, creating global waves of climate refugees with the attendant famine, pestilence, war, and genocide.

Acquiescing in a slower approach to climate is, in effect, playing Russian roulette with the climate gun. The climate crisis needs wartime priorities now.

References

1. Amon RM, Budéus G, Meon B (2003) Dissolved organic carbon distribution and origin in the Nordic Seas: Exchanges with the Arctic Ocean and the North Atlantic. J Geophys Res 14: 1-17. www.agu.org/journals/jc/jc0307/2002JC001594/2002JC001594.pdf

2. Calvin WH (2008) Global Fever: How to Treat Climate Change. London and Chicago: University of Chicago Press. faculty.washington.edu/wcalvin/bk14

3. Calvin WH (2008) Estimates for sequestering organic carbon via sinking sewage in oceans. arXiv 0810.2275v1

4. Calvin WH (2012a) The Great Climate Leap. ClimateBooks.

5. Calvin WH (2012b) The Great CO2 Cleanup. ClimateBooks.

6. Dai A, Trenberth KE, Qian T (2004) A global data set of Palmer Drought Severity Index for 1870–2002: Relationship with soil moisture and effects of surface warming. J Hydrometeorology 5:1117-1130. www.cgd.ucar.edu/cas/adai/papers/Dai_pdsi_paper.pdf

7. Diamond J (2003) Collapse: How Societies Choose to Succeed or Fail. New York: Viking.

8. Falkowski PG, Laws EA, Barber RT, Murray JW (2003). Phytoplankton and their role in primary, new, and export production. In M. J. Fasham (Ed), Ocean Biogeochemistry (ch.4). New York: Springer. www.ocean.washington.edu/people/faculty/jmurray/chap-04.pdf

9. Feely RA, Sabine CL, Takahashi T, Wanninkhof R (2001) Uptake and storage of carbon dioxide in the ocean: The global CO2 survey. Oceanography 14:18-32. www.pmel.noaa.gov/pubs/outstand/feel2331/feel2331.shtml

10. Goyet C, Healy R, Ryan J, Kozyr A (2000) Global Distribution of Total Inorganic Carbon and Total Alkalinity below the Deepest Winter Mixed Layer Depths. ORNL Technical report NDP-076 at www.osti.gov/bridge/product.biblio.jsp?osti_id=760546

11. Greene CH, Baker DJ, Miller DH (2010) A very inconvenient truth. Oceanography 23:214-218. www.tos.org/oceanography/archive/23-1_greene.pdf

12. Hansen J, et al (2008) Target atmospheric CO2: Where should humanity aim? Open Atmos Sci J 2:217–231. doi:10.2174/1874282300802010217

13. Houghton RA (2007) Balancing the global carbon budget. Ann Rev Earth Planet Sci 35:313–347. doi:10.1146/annurev.earth.35.031306.140057

13a. Intellectual Ventures white paper (2009) Drains for hurricanes. http://intellectualventureslab.com/wp-content/uploads/2009/10/Salter-Sink-white-paper-300dpi1.pdf

14. Joshi M, Hawkins E, Sutton E, Lowe J, Frame D (2011) Projections of when temperature change will exceed 2°C above pre-industrial levels. Nature Climate Change 1:407-412, doi:10.1038/nclimate1261

15. Lampitt RS, et al (2008) Ocean fertilization: a potential means of geoengineering? Phil Trans Roy Soc A 366:3919–3945. doi: 10.1098/rsta.2008.0139

16. Lovelock JW, Rapley CG (2007) Ocean pipes could help the Earth to cure itself. Nature 449:403. doi:10.1038/449403a

17. McNichol AP, Aluwihare LI (2007) The power of radiocarbon in biogeochemical studies of the marine carbon cycle: insights from studies of dissolved and particulate organic carbon (DOC and POC). Chemical Reviews 107:443-466, doi:10.1002/chin.200724246.

18. Miller AJ, Cayan DR, Barnett TP, Oberhuber JM (1994) The 1976-77 climate shift of the Pacific Ocean. Oceanography 7: 21–26. meteora.ucsd.edu/~miller/papers/shift.html

19. Oschlies A, Pahlow M, Yool A, Matear RJ (2010), Climate engineering by artificial ocean upwelling: Channelling the sorcerer’s apprentice. Geophys Res Lett 37, L04701, doi:10.1029/ 2009GL041961

20. Pitman AJ, Stouffer RJ (2006) Abrupt change in climate and climate models. Hydrol. Earth Syst. Sci. Discuss., 3, 1745–1771. www.hydrol-earth-syst-sci-discuss.net/3/1745/2006/

21. Rahmstorf S (2006) Thermohaline Ocean Circulation. In: Encyclopedia of Quaternary Sciences, edited by S. A. Elias. Elsevier, Amsterdam. www.pik-potsdam.de/~stefan/Publications/Book_chapters/rahmstorf_eqs_2006.pdf

22. Rahmstorf S, Ganopolski A (1999) Long-term global warming scenarios computed with an efficient coupled climate model. Climatic Change 43: 353–367, doi:10.1023/A:1005474526406

23. Rhines PB (2006) Sub-Arctic oceans and global climate. Weather 61:109-118. doi: 10.1256/wea.223.05

23a. Salter S (2009) Wave-powered destratification for hurricane suppression, acidity reduction, carbon storage, and enhanced phytoplankton, dimethyl sulfide and fish production. Earth and Environmental Science. DOI:10.1088/1755-1307/6/5/452012

24. Schlosser P, Bönisch G, Rhein M, Bayer R (1991) Reduction of deepwater formation in the Greenland Sea during the 1980s: Evidence from tracer data. Science 251:1054–1056. www.sciencemag.org/cgi/reprint/251/4997/1054.pdf

25. Sheehan J, Dunahay T, Benemann J, Roessler P (1996) A Look Back at the U.S. Department of Energy’s Aquatic Species Program—Biodiesel from Algae, NREL/TP–580–24190. US DOE Technical Report. www1.eere.energy.gov/biomass/pdfs/biodiesel_from_algae.pdf

26. Strand S, Benford G (2009) Ocean sequestration of crop residue carbon: recycling fossil fuel carbon back to deep sediments. Environ Sci & Tech 43:1000-1007. doi:10.1021/es801555

27. Toggweiler JR, Murnane R, Carson S, Gnanadesikan A, Sarmiento JL (2003) Representation of the carbon cycle in box models and GCMs, 2, Organic pump. Global Biogeochem Cycles 17:1027, doi:10.1029/2001GB001841

28. Våge K, et al (2009) Surprising return of deep convection to the subpolar North Atlantic Ocean in winter 2007–2008. Nature Geoscience 2:67–72.

Friday, January 25, 2013

Coded modulation of computer climate models for the prediction of precipitation and other side-effects of marine cloud brightening

by Stephen Salter, University of Edinburgh, and Alan Gadian, University of Leeds.


Background

In 1990 John Latham [1] suggested that the Twomey effect [2] [3] could be used to slow or reverse global warming by increasing the reflectivity of clouds. Reflectivity depends on the size distribution of drops. For the same amount of liquid water a large number of small drops is whiter than a smaller number of big ones. Latham suggested the release of submicron drops of filtered sea water into the marine boundary layer below or near mid-oceanic stratocumulus clouds in regions where the concentration of cloud condensation nuclei is low and cloud drops are large. Evaporation would produce salt residues which are excellent cloud condensation nuclei. Turbulence would disperse them through the marine boundary layer. They would increase the number but reduce the size of drops in the cloud. Twomey suggested that, for many cloud conditions, a doubling of the number of nuclei would increase cloud top reflectivity by about 0.058.
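
A small Python sketch of that relationship, using the logarithmic condensation of Twomey's results quoted later in this article (the change in reflectivity is roughly 1/12 of the natural log of the nuclei concentration ratio):

    import math

    # Twomey's result condensed (see the spray asymmetry section below):
    # change in cloud-top reflectivity ~ ln(N2/N1) / 12.
    def reflectivity_change(n_ratio):
        return math.log(n_ratio) / 12.0

    print(f"doubling nuclei:    {reflectivity_change(2.0):.3f}")      # ~0.058
    print(f"50 -> 375 per cm^3: {reflectivity_change(375 / 50):.3f}") # ~0.168
    print(f"factor 1.5:         {reflectivity_change(1.5):.3f}")      # ~0.034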

Several independent climate models [4], [5], [6] show that the amount of spray that would be needed to reverse the thermal effects of changes since pre-industrial times is quite small, of the order of 10 cubic metres a second for the whole world. The thermal effects of double preindustrial CO2 concentration would still be manageable. Furthermore, the technique intercepts heat flowing from the tropics to the poles and so cools them no matter where the spraying is done. It should therefore be possible to preserve Arctic ice. Local control and rapid response may allow thermal protection of coral reefs. Design of wind-driven vessels and spray equipment is well advanced [7].

The aim of this proposal is to identify and quantify potential side-effects of marine cloud brightening. We want to produce an everywhere-to-everywhere transfer-function of spray quantity with regard to temperature, precipitation, polar ice, snow cover and vegetation using several leading climate models in parallel. This should especially show the times and places at which spraying should NOT be done. The technique involves changing the concentration of condensation nuclei at many spray regions round the world according to coded sequences unique to each region and correlating this sequence with model results at observing stations round the world. A first test on a set of 16 artificial changes with different magnitudes to a real 20-year temperature record showed that the magnitude of each change could be detected to 1% or 2% of the standard deviation. This is better than many thermometers. Confidence has been boosted by the PhD project carried out by Ben Parkes at Leeds who has shown that the effects on precipitation are bi-directional.

The technique may let us steer towards beneficial climate patterns if only the world community can agree what these are.

The differences between climate models may point to general model improvements for which there is plenty of room.

As well as humanitarian benefits the project may lead to better understanding of atmospheric physics and teleconnections.

Previous work

One of the early attempts at the identification of side effects was in 2009 by Jones, Haywood and Boucher of the Hadley Centre [8]. They picked three regions representing only 3.3% of the world ocean area and raised the concentration of cloud condensation nuclei to 375 per cubic centimetre everywhere in the regions, from initial values of 50 to 300. The regions were off California, off Peru and off Angola / Namibia. These are labelled NP for North Pacific, SP for South Pacific and SA for South Atlantic in figure 1, top left. These areas usually have good conditions for cloud cover and solar input. Parts close to the coast have rather high nuclei concentrations. They are good, but by no means the only, suitable sites for cloud spraying. The increased nuclei concentration was held steady regardless of summer/winter, monsoons or the phase of the El Niño Southern Oscillation. The resulting global cooling for the separate regions was 0.45, 0.52, and 0.34 watts per square metre, giving a mean annual total of 1.31 watts per square metre. However, if all of the regions sprayed together all of the time, the 3.3% of ocean area would cool a little less: 0.97 watts per square metre. Even the lower amount of cooling would be a substantial fraction of the widely accepted increase of 1.6 watts per square metre since preindustrial times.

Present global climate models are good at predicting temperature but are less accurate for precipitation, ice and snow. They cannot predict cloud cover, hurricanes or flood events. Climate change with no geoengineering is already producing extreme floods in Pakistan and Queensland, with droughts in South Australia, the Horn of Africa and the United States.

The Jones, Haywood and Boucher results show that albedo control can both increase and reduce precipitation far from the spray source, even in the opposite hemisphere. Spray from California (NP), shown in the top right of the figure, can nearly double rainfall in South Australia. Angola/Namibia (SA) gives a useful increase, lower left, in Ethiopia, Sudan and the Horn of Africa. But most attention was given to the 15% reduction over the Amazon. Perhaps Brazilians watching recent television footage of dying children in Ethiopia and Sudan would be glad to have their own rainfall reduced to 2000 mm a year when necessary.

Figure 1. The separate effects of the spray regions in Jones Haywood and Boucher 2009.

Figure 2. The combined effect of all three spray sources of figure 1. This slide appeared on its own with no indication of the spray regions used and could imply that Amazon drying is the result of spray anywhere.
If all three regions in figure 1 spray simultaneously and continuously we get the result in figure 2. The combination is not the sum of the parts. The reduction in the Amazon is there but less marked. There are useful increases in Australia and in the Horn of Africa. The reduction in precipitation in South West Africa caused by the South Atlantic spray region has vanished. Jones et al. did not test other source positions, spray rates or seasonal variations relative to the monsoons.

More recent work by Gadian and Parkes at Leeds [9] used the coded modulation of the nuclei concentration of 89 spray sources of roughly equal area round all the oceans. They then correlated the individual sequences with the resulting weather records round the world. The modulation was done by multiplying or dividing initial nuclei concentration values by a factor chosen initially as 1.5. Because of the logarithmic behaviour of the Twomey equation this alternation should have had a low overall effect. The factor of 1.5 is a much weaker stimulus than an increase of 50 to 375 nuclei per cubic centimetre which would increase reflectivity by 0.168.

Figure 3. An example of the effect of all spray regions on two places in the Amazon. Drying from spray in the South Atlantic as predicted by the Hadley Centre is evident but could easily be countered by spray from many other regions, especially from south of the Aleutians.
The results in figure 3 show that, as well as the spray sources used by Jones et al., there are many other spray sources which will either increase or reduce precipitation in the Amazon. The two regions in the Amazon basin are shown in black. Red shows sites which would increase precipitation at the black site and blue shows a reduction. The Amazon increases from the red spray sources off California and Peru are in agreement with the 2009 Hadley Centre result. The strongest blue in (b) off Namibia and the weaker blue off Angola in (a) are also in agreement. But the great majority of spray sites, particularly the one in (b) off Recife, show increases in the Amazon precipitation. The analysis will show maps like these for every observing station of interest. This could amount to many hundreds of maps, depending on the resolving power of the climate models.

It is also possible to show the transfer function of each spray site on target regions on land all round the world. Figure 4 shows a sweep of spray sources along the east sides of the North and South Atlantic. There are alternating effects in South America and Australia. Spraying between the English Channel and Labrador has little effect in the Amazon or Australia, but the next region south increases rain in both. The Atlantic coast off Mauritania further increases Amazon precipitation, and gives a weaker precipitation increase in eastern Australia but dries the west. A block from Liberia to Nigeria has little effect on either the Amazon or Australia but is close to where hurricanes begin. Angola confirms the Hadley Centre drying of the south Amazon but not the north. Namibia reverses this. Spray off the Cape of Good Hope increases rainfall in both regions of the Amazon, but the effect fades as we spray from further south. Spray still further south increases rain in the Indian subcontinent and Japan.

The maximum swings are 0.0006 mm per day for each percentage variation of the initial nuclei concentration. This means that for the 100% nuclei increase needed to give a reflectivity increase of 0.058, the annual precipitation change would be 0.0006 x 365 x 100 mm = 21.9 mm per year. This is much smaller than the precipitation changes indicated by the Hadley Centre, but the size of the individual spray regions is also somewhat smaller. The Hadley Centre increase from 50 per cubic centimetre in a clean spray region to 375 is a much stronger stimulus, giving a reflectivity increase of 0.168. The offshore edge of the Hadley test regions would have presented an impossibly high slope of nuclei concentration. Perhaps climate systems react just as badly to sharp changes as engineering components under stress.

The full Parkes thesis can be downloaded from [9].

Figure 4. A sweep of spray regions along the east side of the North and South Atlantic shows cyclical effects on precipitation in South America and Australia. Ben Parkes' work provides 89 such maps.
The symbols in figure 5 show the scatter of precipitation results from 8 runs with different sequences from 89 spray sites on the Arabian region. Blue bars show standard deviations. A low scatter implies reliable operation of the technique but is not universal. While the general trend is towards slightly more precipitation, there are changes in both directions, with less scatter in the wetter direction.

Figure 5. If the results of perturbations from separate runs with different code sequences show a large scatter we can deduce that the technique is not working well for that combination of source and observing station.

Work programme

Because of the poor representation of precipitation in global climate models and in the absence of any better prediction method, we want to use a multiple approach with at least five different climate models driven by different research groups attempting the same jointly agreed objectives but with some freedom to follow interesting results. Suggestions for the central questions which should be tackled by all groups are as follows. They must be debated and approved but then adhered to.

Correlation lag. The changes to weather are not immediate, so we should include a time lag between the release and the autocorrelation period. There may also be several time lags with different durations. We can make good estimates by choosing a plausible guess for the response period, driving all or a subset of spray sources in unison to add a sinusoidal component at that period to the nuclei concentration, running the climate model, subtracting the mean offset at each observing station round the world and multiplying the mean response by the sine and cosine signals. This will produce two offset means. The tangent of the phase lag of the response at each observing station will be the cosine offset divided by the sine offset. Repeating the process for various periods will allow the choice of correlation lag for each observing station. The amplitude and phase of the response as a function of period will give an interesting insight into the important climate system processes, but does not allow the separation of effects from individual spray sources as is possible with coded modulation.
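
A toy Python sketch of that sine/cosine correlation on synthetic data; the period, lag, amplitude and noise level are invented for illustration:

    import numpy as np

    period, lag, amp = 180.0, 25.0, 0.4     # days, days, mm/day; all assumed
    t = np.arange(0.0, 20 * 365.0)          # 20 years of daily samples
    rng = np.random.default_rng(0)

    # Synthetic station record: mean offset + lagged response + noise.
    y = 10.0 + amp * np.sin(2 * np.pi * (t - lag) / period) \
        + rng.normal(0.0, 2.0, t.size)

    theta = 2 * np.pi * t / period
    resid = y - y.mean()                    # subtract the mean offset
    s = 2 * np.mean(resid * np.sin(theta))  # in-phase (sine) offset
    c = 2 * np.mean(resid * np.cos(theta))  # quadrature (cosine) offset

    phase = np.arctan2(-c, s)               # lag angle from the two offsets
    print(f"amplitude ~{np.hypot(s, c):.2f}, "
          f"lag ~{phase * period / (2 * np.pi):.0f} days")

Multiplying instead by the sine and cosine of 2, 3, 4 times the fundamental would likewise pick out the harmonics discussed under spray asymmetry below.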

Coded modulation sequences. Random number generators can, by chance, produce short groups with abnormal auto-correlation. Andrew Jarvis at Lancaster can give sequences without these. When God made random sequences He made a great many so we can all use different ones but it will be interesting to compare results of the same sets of sequences in different models.
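A crude screen for abnormal short-lag autocorrelation can be sketched as below (illustrative only; the Jarvis sequences will be constructed more carefully):

    import numpy as np

    def low_autocorrelation(seq, max_lag=10, tol=0.1):
        # Reject +/-1 sequences whose correlation at any short lag is too large.
        x = seq - seq.mean()
        denom = float(np.dot(x, x))
        return all(abs(np.dot(x[:-lag], x[lag:])) / denom <= tol
                   for lag in range(1, max_lag + 1))

    rng = np.random.default_rng(2013)
    candidates = rng.integers(0, 2, size=(1000, 365)) * 2 - 1
    sequences = [s for s in candidates if low_autocorrelation(s)]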

Change-over period. The encoding sequence can be seen as a series of coin tosses. Each toss decides whether the spray/not-spray mode should be reversed or left alone. If the coin is tossed too frequently then the weather system will not have time to respond. But if the intervals are too long, the length of the computer run needed to get a reasonably low scatter will be expensive. Perhaps initially the very shortest change-over period should be about the time for which reliable forecasts can be made, perhaps ten days. At each possible change-over there will be a 50% chance of no change, a 25% chance of two ‘no-change’ events in a row, and so on. Carbon emissions vary over a weekly cycle, and the release of decay gases and di-methyl sulphide from seaweed can be related to the 28-day tidal cycle, so we must avoid being phase-locked to these periods. Parkes used a change-over period of 10 days for computational efficiency and a case can be made for 20 days. We should later extend the change-over period, but not to the point where too many changes spread across a monsoon period.
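A sketch of the coin-toss encoding with a 10-day change-over interval, including a check that the schedule is not phase-locked to the 7-day and 28-day cycles (all values illustrative):

    import numpy as np

    rng = np.random.default_rng(7)
    interval = 10                         # days between possible change-overs
    state, schedule = 1, []
    for heads in rng.integers(0, 2, 73):  # about two years of coin tosses
        if heads:                         # heads: reverse spray / no-spray mode
            state = 1 - state
        schedule.extend([state] * interval)

    days = np.arange(len(schedule))
    x = np.asarray(schedule, float) - np.mean(schedule)
    for period in (7.0, 28.0):            # weekly emissions, tidal seaweed gases
        w = 2.0 * np.pi / period
        amp = np.hypot(np.mean(x * np.sin(w * days)),
                       np.mean(x * np.cos(w * days)))
        print(period, amp)                # should stay small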

Monsoon season. All the work so far has used continuous spray through the year. It might occur to even a naïve engineering person that the monsoon seasons could possibly have an effect on patterns of precipitation and evaporation. This means that we should do separate correlations and calculate separate transfer functions according to the monsoon phase. If the technique shows promise and computing time is available we may be able to resolve transfer functions down to monthly levels provided that we can get resources to allow the use of high resolution models.

Spray amplitude. The susceptibility of an ocean spray region is a function of low cloud, clean air, incoming solar energy and perhaps the wind available to drive spray vessels and disperse spray. Large spray volumes in what initially appear to be regions of high susceptibility will reduce that susceptibility. A lower dose over a wider region will be more effective. Multiplying and dividing the initial nuclei concentration value by 1.5 was quite a small perturbation, but we do not know that it is the best choice. A sweep over multiplying and dividing amplitudes of 1.25, 1.5, 2, 3 and 4.5 will help us choose the best spray amplitude(s) for later work.
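Because one state multiplies the concentration by m and the other divides it by m, the swing between the two states has a concentration ratio of m squared, so by the Twomey relation in the next item each candidate amplitude m implies a reflectivity swing of ln(m*m)/12. A quick sketch:

    import math

    for m in (1.25, 1.5, 2, 3, 4.5):
        print(m, round(math.log(m * m) / 12, 3))
    # 1.25 0.037, 1.5 0.068, 2 0.116, 3 0.183, 4.5 0.251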

Spray asymmetry. The Twomey results can be condensed into an equation which says that the change in reflectivity is 1/12 of the natural log of the ratio of nuclei concentrations. The log term led to the decision to multiply and divide initial nuclei concentration values by a constant, rather than the usual addition of some chosen amount. The intention was to cancel the mean thermal effects in other regions. However, it may be that the multiplier should not be exactly equal to the divider. We need to establish the numbers which minimise interference between regions. A possible method is to use the results of the sinusoidal modulation: any departure from a sinusoidal waveform produces harmonics, which can be detected by multiplying the signal, minus its mean, by the sine and cosine of 2, 3, 4 etc. times the fundamental.
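A sketch of that harmonic test (names illustrative): multiply the mean-subtracted response by sines and cosines at integer multiples of the drive frequency; energy above the fundamental measures departure from a pure sinusoid.

    import numpy as np

    def harmonic_amplitudes(response, fundamental_days, n_harmonics=4, dt=1.0):
        t = np.arange(len(response)) * dt
        x = response - response.mean()
        amps = []
        for n in range(1, n_harmonics + 1):       # 1 = fundamental, then 2, 3, 4
            w = 2.0 * np.pi * n / fundamental_days
            amps.append(np.hypot(2 * np.mean(x * np.sin(w * t)),
                                 2 * np.mean(x * np.cos(w * t))))
        return amps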

Spray concentration profile. While time constraints forced Parkes to use blocks of spray with sharp edges, it would be more realistic to have smoother variations of nuclei concentration, perhaps with a bell-shaped Gaussian profile.
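A one-dimensional sketch of the substitution, with grid spacing and widths chosen only for illustration:

    import numpy as np

    x = np.linspace(-3000, 3000, 121)              # km across the spray region
    block = np.where(np.abs(x) <= 1000, 1.0, 0.0)  # sharp-edged Parkes block
    sigma = 600.0                                  # km, width of the smooth profile
    gaussian = np.exp(-0.5 * (x / sigma) ** 2)
    gaussian *= block.sum() / gaussian.sum()       # keep the total dose the same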

Number and position of spray sources. The Parkes choice of 89 spray regions was not made with any great confidence. We wanted at least two across the narrow section of the Atlantic. Parkes started with equal areas but then divided them round the Caribbean and either side of Iceland because of the current patterns. Some climate models show strange alternations either side of the equator in the Pacific. There is no need for spray regions to have equal areas provided that we can give each an appropriate weighting. Nor need everyone use the same regions, provided that the research result maps (discussed later) can give a common presentation. Individual selections should be encouraged and results merged to avoid blocky results.

Regions might be merged if there is little difference between their susceptibilities or divided if differences between adjacent neighbours are large provided that the spray regions are large enough to produce a consistent forcing over several grid points.

Region grouping. It is well known that climate patterns all over the world are affected by temperature differences across the South Pacific, not always to advantage. It will be interesting to drive the cloud nuclei concentration differentially either side of the Pacific, with the code sequences of each side in unison, in a number of coherent ways. Three obvious ones are: first, an equal 50/50 east/west split with a sharp divide; secondly, a linear ramp with concentration depending on distance either side of the midline; and thirdly, a blend of positive and negative Gaussian distributions. Other Boolean combinations of spray regions can be chosen, but with the risk that this could lead to a combinatorial explosion of possibilities, so we need careful planning.
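The three weightings can be sketched along a line across the midline (distances illustrative); the weight then scales each side's common code sequence:

    import numpy as np

    x = np.linspace(-5000, 5000, 201)        # km east of the Pacific midline

    split = np.sign(x)                       # 1: sharp 50/50 east/west divide
    ramp = np.clip(x / 5000.0, -1.0, 1.0)    # 2: linear ramp either side
    sigma = 2000.0                           # 3: positive and negative Gaussians
    blend = (np.exp(-0.5 * ((x - 2500) / sigma) ** 2)
             - np.exp(-0.5 * ((x + 2500) / sigma) ** 2))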

Tactical spraying. There is no need for spray rates to be preordained and fixed for a whole experiment. For example, if we see that surface temperatures in the Pacific are forming an el Niño or la Niña pattern, and we know the cooling power of world spray sites, even ones far from the Pacific, we can drive them so as to increase or reduce the Southern Oscillation. The spray can be in phase with the temperature anomaly or its rate of change, or even at some other phase angle. The orientation of the jet-stream waves might be a powerful indicator. A force opposing change of position of a system, i.e. a spring, will increase its oscillation frequency. Control engineers know that very small amounts of damping (a force opposing velocity), or its opposite, can have very large effects on the growth or decay of oscillations. We like error sensors and actuators with a high frequency response and low phase shift. Tropospheric cloud albedo control has an attractively rapid response, a few days, compared with about two years for stratospheric sulphur at low latitudes. With sufficiently high resolution we may also detect early signs of hurricane formation.
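A toy sketch of such a drive, with gains and the index series purely illustrative: the spray modulation is built from a term in phase with the anomaly (a spring) and a term in phase with its rate of change (damping).

    import numpy as np

    def tactical_drive(anomaly, k_spring=0.5, k_damp=2.0, dt_days=1.0):
        # anomaly: observed daily series, e.g. a Pacific temperature index
        rate = np.gradient(anomaly, dt_days)    # the velocity-like term
        return -(k_spring * anomaly + k_damp * rate)

    # drive = tactical_drive(nino34_index)      # hypothetical index series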

Ganging up. We may learn something about the climate system by using independent spray patterns to identify all the spray regions which have the same effect on one observing station, such as drying Queensland, and then driving them in unison. We can then reverse the selection to all the spray regions which increase Queensland precipitation.
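A sketch of the selection, assuming the coded-modulation runs have already given a transfer function from each spray region to the chosen station (placeholder numbers below):

    import numpy as np

    # transfer[i]: precipitation change at the station per unit spray in region i
    transfer = np.random.default_rng(0).normal(size=89)  # placeholder values

    drying = np.where(transfer < 0)[0]   # drive these in unison to dry the station
    wetting = np.where(transfer > 0)[0]  # the reversed selection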

Map projections. The site http://egsc.usgs.gov/isb/pubs/MapProjections/projections.html gives a useful selection and explanation of map projections. All projections of a solid globe to a flat plane involve some distortion, but we can choose between distorting area, direction, shape or distance at various places in the map. The Mercator projection is very common but produces gross distortion of east/west distances and areas at the high latitudes which are now seen to be of very great importance to climate change. For polar areas the Lambert azimuthal equal-area projection looks best. We can tilt this projection in other directions so that several images can show the whole world with acceptable distortion. The obvious starting set would be six Lambert azimuthal views, two from the poles and four from the equator at longitudes of 0, 90, 180 and 270 degrees, with an option to set any other latitude and longitude for any other view. It can be very useful to have a transparent layer of one parameter laid over another, but this will need coordination of page layout. Six 90 mm diameter circles on a 100 mm pitch can fit neatly on an A4 or letter page with room for arrows to adjacent balloons. If necessary we can fit 12 on an A3. A single 180 mm circle can be used to show finer detail. The modelling teams must consider the question carefully, come to a joint view and then stick to the common decision and page scale.
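One possible starting point for the six views, sketched here with the cartopy package (any mapping toolkit offering the Lambert azimuthal equal-area projection would serve):

    import matplotlib.pyplot as plt
    import cartopy.crs as ccrs

    views = [(90, 0), (-90, 0), (0, 0), (0, 90), (0, 180), (0, 270)]
    fig = plt.figure(figsize=(9, 6))
    for i, (lat, lon) in enumerate(views, start=1):
        proj = ccrs.LambertAzimuthalEqualArea(central_latitude=lat,
                                              central_longitude=lon)
        ax = fig.add_subplot(2, 3, i, projection=proj)  # two polar, four equatorial
        ax.set_global()
        ax.coastlines()
    plt.savefig('six_lambert_views.png')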

Solid modelling packages (e.g. SolidWorks) are increasingly common for engineering design, and several offer free viewing software to let customers spin images of engineering components about any axis. A spinning image can give a good presentation of complex three-dimensional shapes. It should be possible to modify such software to give surface colours with 10 saturation levels and to add text labels to regions of a spinning sphere.

Mapping contours. Result maps need to show the magnitude and slope of at least temperature, precipitation, evaporation, ice and snow cover. While a continuous rainbow spectrum looks beautiful and gives a superficial impression of work done, it is almost useless at providing any numerical information beyond the position of a peak. Some meteorological result maps have colour allocations that are particularly unhelpful, for example the one below.

Figure 6. How not to display the results of a climate model. The lead author of the paper from which this figure was taken agrees with me but was unable to challenge official policy. No names no pack-drill. Result format for this project may be dictatorial but will be more intelligent.
The area of ‘no change’ is between the lighter buff colour and the darker green. It covers a large fraction of the map. The polarity of the contour gradient is not obvious where light green moves to cyan or the darker buff moves to orange. Light green is a stronger effect than dark green. Numbers on the colour code bar refer to the borders not the middle of contours. Readers may confuse this map for precipitation with another map for temperature which uses the same colour set.

The right presentation of results can reveal the reasons for the most peculiar phenomena. We must make it as quick and as easy as possible for lazy, tired, non-technical readers to see effects with the minimum of mental decoding effort even when they are looking at a great many different maps.

The first requirement is that areas with effects that are below the level of statistical significance or the middle of the range should be white. Either side of this region there should be just two colours with increasing saturation. Red and blue would be intuitive for temperature with green and brown for river runoff. The male human eye can reliably distinguish 10 saturation levels provided regions are in contact with sharp edges. (Females have higher discrimination.) This gives a range of 20 steps (more than most result maps) plus a white central zero for the mean or the anomaly reference. The steps give an obvious direction of gradients. Adults, babies, birds and many animals can count up to five in an instant ‘analogue’ way. If we have thin black contour lines between the lowest five saturation steps and thin white lines between each of the top five colours we can avoid getting lost. We can also include black or white text numbers to show the contour value and total area of the contour region. An example is shown below.
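A minimal matplotlib sketch of such a two-colour scale: ten saturation steps of blue, a white central zero, ten of red, with colour-bar numbers falling on the borders between steps (the data are random placeholders):

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.colors import ListedColormap, BoundaryNorm

    blues = [(1 - s, 1 - s, 1.0) for s in np.linspace(1.0, 0.1, 10)]
    reds = [(1.0, 1 - s, 1 - s) for s in np.linspace(0.1, 1.0, 10)]
    cmap = ListedColormap(blues + [(1.0, 1.0, 1.0)] + reds)
    norm = BoundaryNorm(np.linspace(-10.5, 10.5, 22), cmap.N)

    data = np.random.default_rng(1).normal(scale=4.0, size=(40, 60))
    plt.pcolormesh(data, cmap=cmap, norm=norm)
    plt.colorbar()                        # tick labels sit on contour borders
    plt.savefig('two_colour_map.png')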


Astronomers developed a very powerful technique, the blink comparator, to detect changes in the position or brightness of astronomical objects. Two images of star fields containing thousands of objects, taken at different times but with exactly the same magnification, would be shown alternately at intervals of about one second. A change in any one of them would be immediately apparent. We can adapt this for use with PowerPoint images to detect small differences in maps of model results, provided that we can standardise the presentation format across all groups. We can also flicker through a sweep of many small changes in time or nuclei concentration.
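A sketch of the flicker idea using matplotlib animation, assuming two result arrays on identical grids and colour scales:

    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation

    def flicker(map_a, map_b, interval_ms=1000):
        fig, ax = plt.subplots()
        vmin = min(map_a.min(), map_b.min())
        vmax = max(map_a.max(), map_b.max())    # one colour scale for both maps
        image = ax.imshow(map_a, vmin=vmin, vmax=vmax)
        frames = [map_a, map_b]
        def update(i):
            image.set_data(frames[i % 2])       # alternate at ~one second intervals
            return (image,)
        return FuncAnimation(fig, update, frames=120, interval=interval_ms)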

Reports should have verbose comments and complete meta-data close to each map, with minimum risk of confusion between them. This means frequent repetition and no jumps to other pages, or even other journals, as is sometimes done. The clarity of caption wording should be tested on naive readers, rather than written in the intimidatory style of many journals.

Results can be presented as absolute values or as anomalies from some agreed reference baseline. We must agree on a small selection of baselines such as preindustrial, 1960-90, present day, twice preindustrial CO2 or one of the now discredited IPCC scenarios. We must also be able to show instantly the differences between sets of results from different codes or institutions, such as Pacific North-Western minus Hadley Centre. We must be able to combine results with various weightings from different teams. This will require an agreement between the teams on what the format should be, followed by their obedience to the agreement. It will be important to get advice from good information technology experts.

Blockiness. Because of pressure of time, the Parkes thesis results were presented as blocks with sharp edges, which are unusual in meteorology except on either side of mountain ranges or at places like the Cape of Good Hope. We should agree on a method to produce smoothly blended curves for both spray concentration regions and results.

Naming of sea areas. We must make it easy for people to know which of many possible spray regions is being discussed, even if they are not the same as the original Parkes 89. One way is to pick a spot at the centroid of an ocean and then describe a spray region by its bearing and distance from that centroid. Two digits are enough to identify runways at airports, and most regions will be larger than 100 kilometres or a few model grid points, so, for example, we could use a description such as ‘South Pacific 18, 30’ for a spray region 3000 kilometres due south of the ocean centroid.
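A sketch of the naming rule as a function (runway-style two-digit bearing in tens of degrees, distance in hundreds of kilometres):

    def region_label(ocean, bearing_deg, distance_km):
        # region_label('South Pacific', 180, 3000) -> 'South Pacific 18, 30'
        bearing = round(bearing_deg / 10) % 36
        return '%s %02d, %d' % (ocean, bearing, round(distance_km / 100))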

Numerical data. If we want daily results of 4 parameters affected by 100 spray regions for 500 different observing stations over 20 years with two-byte precision, we will have to access nearly three Gigabytes of information, and far more once ensembles, extra parameters and gridded fields are included. Requirements are unlikely to shrink. We want this to be made freely accessible to anyone. We need verbose and intuitive labels and selection filters with the same look and feel from all modelling groups. Results should normally be supplied in a common, widely-used format agreed by all teams, such as netCDF or GrADS. If other formats have to be used then the teams should provide conversion software or do the conversions themselves on request.
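A sketch of a subset written with the Python netCDF4 package, showing the kind of verbose labelling intended (file, variable and attribute names are illustrative):

    from netCDF4 import Dataset

    ds = Dataset('subset.nc', 'w')
    ds.createDimension('day', None)          # unlimited time axis
    ds.createDimension('station', 500)
    precip = ds.createVariable('precipitation_change', 'i2', ('day', 'station'))
    precip.units = 'micrometres per day'     # two-byte integers need small units
    precip.long_name = ('Daily precipitation change at each observing station '
                        'from the coded-modulation run for one spray region')
    ds.close()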

Carbon dioxide variation. We should first test the technique with an agreed level of atmospheric greenhouse gases. If we can establish confidence in the coded modulation technique we can later experiment with changes to gas concentrations. Obvious ones are pre-industrial, double pre-industrial, ramped rises at various rates, methane burps and even the effects of plausible rates of CO2 removal. However access to present real observations will be useful and so there is a strong case for using present day gas concentrations unless there is a sudden need for work on methane burps.

Political information. Models will be able to produce results for several climate parameters showing the effect of all the spray regions at places all round the world. There are many ways in which we could select target regions. One obvious one is the present political boundaries of the 193 UN Member States. If a climate model has a resolution of one degree, each cell has a side of about 111 kilometres. It takes about eight points to draw a convincing sine wave, so the smallest credible result contour will be larger than the smaller countries. This means that some of the smallest ones will have to be grouped. This could be a matter requiring some delicacy. For large or elongated countries we can subdivide the area, for example either side of a mountain range, or north and south for countries in the Sahel. This would allow each country to choose which spray regions and times would give it the maximum benefit with the least dis-benefit to others. It might then be possible to understand and maximise the winner-to-loser ratio and even to decide on compensation. It is defeatist to assume that the outcomes will inevitably be unfavourable. The results of any pair of parameters predicted by each climate model for each country can be shown by plotting the model name on a map with, say, the vertical coordinate being temperature and the horizontal coordinate being precipitation, as sketched below. A close clustering of results from different models will add confidence. We must resist the temptation to place models in rank order. The objective should be model improvement, and good science can come from sometimes testing opposites.
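A sketch of the clustering plot for one country, with model names and values purely hypothetical:

    import matplotlib.pyplot as plt

    # model name -> (precipitation change mm/yr, temperature change K)
    results = {'Model A': (12.0, -0.31), 'Model B': (9.5, -0.27),
               'Model C': (14.2, -0.35), 'Model D': (-2.0, -0.10)}

    fig, ax = plt.subplots()
    for name, (dp, dt) in results.items():
        ax.annotate(name, (dp, dt))          # plot the name itself, not a dot
    ax.set_xlabel('precipitation change, mm per year')
    ax.set_ylabel('temperature change, K')
    ax.set_xlim(-5, 17)
    ax.set_ylim(-0.4, 0.0)
    plt.savefig('country_cluster.png')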

Specific Questions

There is a grave risk of a combinatorial explosion suppressing the detection of differences between climate models and so initially we must agree on as many test conditions as possible. The following are suggestions for debate.

  • What spray rates, change-over periods, concentration profiles and correlation delays should be used for commonly-agreed experiments?
  • What level of scatter will still allow us to draw useful conclusions?
  • What level of model resolution should be used?
  • How does scatter vary with the length of run?
  • How does scatter vary with the size of target area?
  • Do we need to merge target areas?
  • How does scatter vary with spray source and observing station?
  • What spray regions, spray rates and spray seasons will produce unacceptable changes?
  • Can scatter be low enough to allow seasonal or monthly transfer functions?
  • How far does the Twomey log equation relating nuclei concentration to change of reflectivity hold?
  • What ratio of multiplier to divider will minimise interference between spray sources?
  • Negative modulations are easy in a computer model but less so in the real world. How does susceptibility vary if the modulation is asymmetric or is only positive?
  • How does the susceptibility for temperature, precipitation and ice cover vary with the amplitude of the perturbation?
  • Will tactical variations based on day-to-day observations be useful for hurricanes and precipitation adjustment in both directions?
  • Are there large winner-to-loser ratios?
  • Can overall winner-to-loser ratios be maximised?
  • What other experiments do you suggest?
  • What is the probability that this project would improve the reliability of climate models?


Milestones and Deliverables


  • Agreement on result presentation, data format and low-level common analysis software packages.
  • Circulation and analysis of the existing Ben Parkes results.
  • Agreement on target parameters such as temperature, precipitation, evaporation, ice, snow-line, vegetation and CO2 level.
  • Agreement between teams on timescales for deliverables.
  • Measurement of the phase and amplitude response to allow choices of correlation lags.
  • Maps of spray site susceptibility, defined as the annual change of each result parameter per unit of spray volume.
  • As above with spraying adjusted with a selection of correlations linked to the monsoon seasons. Even if the computer models cannot detect the onset of a monsoon we can use historic records to pick dates.
  • Investigation of tactical spray rate variation.
  • Results for groups of spray regions working in unison or subtle harmony especially with Trans-Pacific amplification and attenuation of el Niño / la Niña oscillations.
  • Design of a world-wide spray plan to cool the planet with the maximum winner/loser ratio.
  • Identification of the strengths and weaknesses of the various climate models leading to suggestions for improvement.


Chaos

Objectors to cloud albedo control have argued that the climate system is chaotic and so nothing can be done to direct it. A great many phenomena, such as planetary motions, chemical reactions, the incidence of disease and the motions of sea waves, were thought to be chaotic by the leading thinkers of their day. But Kepler showed that elliptical planetary orbits followed rules more precise than any man-made machinery. Mendeleev produced his periodic table and was able to predict the properties of hitherto unknown elements. Pasteur developed germ theory. Test tanks can now produce complex sea states with repeatability of a few parts per thousand. An oscilloscope signal from any backplane connector of a computer appears to be an entirely random string of zeros and ones but is in fact one of the most highly defined sequences that we can produce.

A favourite demonstration of chaotic behaviour uses the fall of sheets of paper from above the demonstrator’s head. The smallest increase of the angle of incidence between the paper and the apparent airflow produces a pitching moment which increases that angle further, up to the moment of stall. Sheets will be scattered over a wide area. But if a sheet of paper is folded in the form of a paper dart to increase stiffness, and a weight is added to the nose, then the area enclosing its falling positions is greatly reduced. Clearly the magnitude of chaos is variable and can be affected by small changes to engineering design. The scatter could be further reduced if the falling item were fitted with optical systems driving control surfaces. We could call it a GBU-12 Paveway bomb, which has an accuracy of about one metre despite chaotically random cross winds. Similarly we could fit video cameras and hinge actuators to the nails of Galton’s bagatelle board.

There may really be systems, such as turbulence and subatomic physics, which are genuinely chaotic. But if we believe that every system which we cannot at present understand is chaotic, then we remove completely our chances of scientific discovery. The common factor is that very small changes, like the angle of incidence of a sheet of paper, are amplified. This means that a small amount of input energy, applied intelligently, can produce large changes in output energy. That is just what we need to control the very large amounts of energy in the planetary climate. Apparent chaos implies the possibility of success. Coded modulations could give valuable insights into the climate system as well as saving world food supplies.

Conclusions

The world can be compared to a vehicle with free-castor wheels which is rolling down a hill with increasing gradient. A few passengers are warning that there may be a cliff edge somewhere ahead. Some are suggesting that there might just be time to design and fit brakes, steering and even a reverse gear. Others advise that the slope ahead might level off and so brakes and steering would be a waste of money. Some objectors complain that the passengers could never agree on the best direction to steer. Some are close to claiming that God wants humanity to drive over the cliff edge and that it is wrong to interfere with divine intentions.

We could also consider the climate system as a piano in which the spray regions are the keys, some black, some white, on which a good number of pleasant (or less unpleasant) tunes could be played if a pianist knew when and how hard to strike each key.

References

1. Latham J. 1990 Control of global warming? Nature vol 347, no 6291, pp 339-340.

2. Twomey, S. 1977 Influence of pollution on the short-wave albedo of clouds.
J. Atmos. Sci. 34, 1149–1152. doi:10.1175/1520-0469

3. Schwartz, S. E. & Slingo, A. 1996 Enhanced shortwave radiative forcing due to anthropogenic aerosols. In Clouds chemistry and climate (eds P. Crutzen & V. Ramanathan), pp. 191–236. Heidelberg, Germany: Springer.

4. Latham J, Rasch P, Chen C-C, Kettles L, Gadian A, Gettelman A, Morrison H, Bower K, Choularton T. 2008.
Global temperature stabilization via controlled albedo enhancement of low-level maritime clouds. Philosophical Transactions of the Royal Society A 366(1882): 3969–3987.

5. Rasch PJ, Latham J, Chen CC. 2009 Geoengineering by cloud seeding: influence on sea ice and climate system. Environ. Res. Lett. 4 045112 (8pp). doi:10.1088/1748-9326/4/4/045112

6. Latham J, Bower K, Choularton T, Coe H, Connelly P, Cooper G, Craft T, Foster J, Gadian A, Galbraith L, Iacovides H, Johnston D, Launder B, Leslie B, Meyer J, Neukermans A, Ormond B, Parkes B, Rasch P, Rush J, Salter S, Stevenson T, Wang H, Wang Q, Wood R. 2012. Marine cloud brightening. Philosophical Transactions of the Royal Society A 370: 4217–4262, doi:10.1098/rsta.2012.0086.

7. Salter S, Latham J, Sortino G. 2008 Sea-going hardware for the cloud albedo method of reversing global warming. Phil. Trans. Roy. Soc. A. doi:10.1098/rsta.2008.0136

8. Jones A, Haywood J, Boucher O. 2009 Climate impacts of geoengineering marine stratocumulus clouds. Journal of Geophysical Research vol 114 D10106 . doi:10.1029/2008JD011450

9. Parkes B. 2012 Climate Impacts of Marine Cloud Brightening. PhD Thesis, University of Leeds. From http://homepages.see.leeds.ac.uk/~eebjp/thesis/