How Oil Became King
The great historical shifts in energy use, from wood to coal, to oil, nuclear power and beyond, have transformed civilisation and will do so again, as Richard Rhodes explains.
The history of energy transitions – from wood to coal, from coal to oil, and onward to natural gas and nuclear power – is a long one. Energy transitions take time, as Arnulf Grübler of Yale’s Environment School points out:
Hardly any innovation diffuses into a vacuum. Along its growth trajectory, an innovation interacts with existing techniques … and changes its technological, economic, and social characteristics … Decades are required for the diffusion of significant innovations and even longer time spans are needed to develop infrastructures.
The diffusion process is one of learning and humans learn slowly. The substitution of coal for wood was fundamental to the Industrial Revolution. Coal had been used for 3,000 years, but only marginally. Its characteristics were wrong for a society organised around burning wood: compared with wood it was dirty, it required different skills and technologies to collect and distribute and its smoke was more toxic. In Tudor England, where wood smoke was thought to harden house timbers and disinfect the air, chimneys were uncommon; the smoke from fires was simply allowed to drift out of the windows. But 16th-century London suffered from a problem familiar to cities in developing countries today: as the city grew, an ever greater area around it became deforested and, as transportation distances increased, wood became more expensive. The poor switched to coal, which the rich resisted, as the atmospheric chemist Peter Brimblecombe points out:
Even in late Elizabethan times the nobility still objected strongly to the use of the fuel. Well-bred ladies would not even enter rooms where coal had been burnt, let alone eat meat that had been roasted over a coal fire, and the Renaissance Englishman was not keen to accept beer tainted with the odour of coal smoke.
Brewing, however, was one London industry that turned to coal as wood and charcoal became scarce; so did dyers, lime-burners and salt- and soap-boilers. The nobility began to accept the transition after Elizabeth I’s death in 1603, when the throne passed to James I, already James VI of Scotland. Scottish nobles faced wood shortages earlier than the English and had access to less sulphurous coal, ‘so the new king used the fuel in his household when he moved to London’, according to the historical demographer Tony Wrigley.
Coal thus became fashionable. By 1700 production in England and Wales had reached three million tons per year – half a ton per capita. By 1800 production had tripled. There were two fundamental technological challenges to increasing coal production. One was that deepening coal mines penetrated the water table and flooded the mines: the water needed to be pumped away, for which purpose steam engines were developed. ‘Three quarters of the patents issued in England between 1561 and 1668,’ writes Wrigley, ‘were connected with the coal industry … and … a seventh were concerned with the drainage problem.’ Since the steam engines burned coal, the new energy source was bootstrapping itself.
The other fundamental challenge of using coal was its transportation. Wood, which grew dispersed across the landscape, could be transported efficiently in small batches in carts and on river boats. Coal was not areal, like wood, but punctiform: it came out of the ground at discrete points and efficiency required its transportation in bulk. At first it was delivered by sea from mines near ports. There were 400 smaller colliers – boats carrying coal – working between Newcastle and London in 1600. By 1700 that number had increased to 1,400 and the boats were larger; by then ‘about half of the total British merchant fleet by tonnage was engaged in the coal trade’. As use grew and mines were opened inland, coal drove the development of canals.
The technologies developing to meet the challenges of coal production combined. The first railways, horse-drawn, had connected pitheads with coal wharves to move coal onto colliers. The steam engine, mounted on wheels that ran on rails, offered greater speed. ‘Railways were peculiarly a mining development (even down to the track gauge),’ writes Wrigley:
And were created to overcome the problems posed by large-scale punctiform mineral production, initially as feeders to waterways, but later as an independent network. Like canals, they also proved of great benefit to other forms of production and eased the movement of vegetable and animal raw materials. Moreover, they developed a great passenger traffic.
Energy transitions transform societies, yet they are complex phenomena, as these two opposing views of the coal transformation suggest. The first view is that of the economic historian David Landes, author of The Wealth and Poverty of Nations (1998):
The abundance and variety of [the Industrial Revolution’s] innovations almost defy compilation, but they may be subsumed under three principles: the substitution of machines … for human skill and effort; the substitution of inanimate for animate sources of power, in particular the introduction of engines for converting heat into work, thereby opening to man a new and almost unlimited supply of energy; [and] the use of new and far more abundant raw materials, in particular the substitution of mineral for vegetable or animal substances. These improvements constitute the Industrial Revolution. They yielded an unprecedented increase in man’s productivity and, with it, a substantial rise in income per head. Moreover, this rapid growth was self-sustaining. Where previously, an amelioration of the conditions of existence … had always been followed by a rise in population that eventually consumed the gains achieved, now, for the first time in history, both the economy and knowledge were growing fast enough to generate a continuing flow of investment and technological innovation, a flow that lifted beyond visible limits the ceiling of Malthus’ positive checks. The Industrial Revolution thereby opened a new age of promise. It also transformed the balance of political power, within nations, between nations, and between civilizations; revolutionized the social order; and as much changed man’s way of thinking as his way of doing.
The second view is that of Raphael Samuel, the radical British historian, commenting on Landes:
This account has the merit of symmetry, but the notion of substitution is problematic, since in many cases there are no real equivalents to compare. The fireman raising steam in an engine cab, or the boilermaker flanging plates in a furnace, were engaged in wholly new occupations which had no real analogy in previous times … If one looks at technology from the point of view of labour rather than that of capital, it is a cruel caricature to represent machinery as dispensing with toil. High-pressure engines had their counterpart in high-pressure work, endless-chain mechanisms in non-stop jobs. And quite apart from the demands which machinery itself imposed there was a huge army of labour engaged in supplying it with raw materials, from the slave labourers on the cotton plantations of the United States to the tinners and copper miners of Cornwall. The Industrial Revolution, far from abridging human labour, created a whole new world of labour-intensive jobs: railway navvying is a prime example, but one could consider too the puddlers and shinglers in the rolling mills, turning pig-iron into bars, the alkali workers stirring vats of caustic soda, and a whole spectrum of occupations in what the Factory legislation of the 1890s was belatedly to recognize as ‘dangerous’ trades. Working pace was transformed in old industries as well as new, with slow and cumbersome methods of production giving way, under the pressure of competition, to overwork and sweating.
The second great energy transition originated in the United States and, like the transition to coal, it began with a preadaptation. Coal’s preadaptation was its substitution for domestic wood-burning, which then led to its application to steam power in mining, transportation and manufacturing. Oil was first used as a substitute for whale oil, for illumination in the form of kerosene, another example of substituting mineral for animal or vegetable raw materials. In 1860, a year after Uncle Billy Smith struck oil at Oil Creek in Titusville, Pennsylvania, a pamphleteer wrote: ‘Rock oil emits a dainty light, the brightest and yet the cheapest in the world; a light fit for kings and royalists and not unsuitable for Republicans and Democrats.’ Kerosene remained the most important oil product for decades, with smaller markets developing for naphtha; petrol, which was used as a solvent or gasified for illumination; fuel oil; lubricants; petroleum jelly; and paraffin wax.
At the beginning of the 20th century coal still accounted for more than 93 per cent of all mineral fuels consumed in the US and electric light was displacing the kerosene lantern in urban America, with 18 million lightbulbs in use by 1902. Oil might have declined, as it was much more expensive per unit of energy than coal. But because it is a liquid it is also much cheaper to transport. Even as late as 1955 the cost per mile of transporting a ton of liquid fuel energy by tanker or pipeline was less than 15 per cent of the cost of transporting an equal amount of coal energy by train. Large oil fields were discovered in Texas and California early in the century. Railroads in the American West and South-west almost immediately converted to oil because local oil was cheaper than distant coal when transport was figured in. Total energy consumption in the US more than doubled between 1900 and 1920, making room for oil to expand its market share without directly challenging the coal industry.
Steamships offered another major market. The US navy converted to fuel oil before the First World War, a conversion which served as an endorsement for private shippers. As with coal, a bootstrapping market was the oil industry itself, which used oil both to fuel its oil tankers and to provide the energy needed for petroleum refining. As much as 10 per cent of all oil produced in this period was burned in refineries.
The automobile secured oil’s market share. According to Nebojsa Nakicenovic, Professor of Energy Economics at the Vienna University of Technology:
Animal feed reached its highest market share in the 1880s, indicating that draft animals provided the major form of local transportation and locomotive power in agriculture … Horse [-drawn] carriages and wagons were the only form of local transportation in rural areas and basically the only freight transportation mode in cities. In addition, they moved goods and people to and from railroads and harbours.
Henry Ford’s original intention was to develop a farm tractor. ‘It was not difficult for me to build a steam wagon or tractor,’ he wrote in his autobiography, first published in 1922:
In the building of it came the idea that perhaps it might be made for road use … I found eventually that people were more interested in something that would travel on the road than in something that would do the work on the farms.
By manufacturing motor cars, Ford and his competitors relieved farm labour by reducing the demand for animal feed. In Great Britain, for example, the annual feed bill for town horses in the 1890s approached 100 per cent of the annual value of all crops sold by British farms. In Nakicenovic’s analysis the automobile first substituted for and then displaced the horse-drawn carriage largely because it increased the radius of local transportation, allowing ‘entrepreneurs to expand their circles of customers and [offering] a more flexible mode of leisure and business transport’. Only after that process was completed, in the 1920s, ‘did it emerge as an important transportation mode in competition with the railroad for long-distance movement of people and goods’. Just at that time natural gas began penetrating major industrial markets, such as iron and steel, cement, textiles, food, paper and pulp, which burned coal or had recently switched to fuel oil, allowing petroleum to meet rising demand.
Preadaptations that prepared the way for the automobile included the availability of petrol as a refinery by-product and the surfacing of roads for horse-drawn carriages. Eight per cent of all US roads were already surfaced by 1905, when there were fewer than 80,000 automobiles in use but more than three million non-farm horses and mules. The diesel engine was originally conceived as a combustion engine for powdered coal, but the resulting ash ground and fouled its cylinders and pistons; diesel fuel, another refinery by-product, made it practical. By 1950 fuel wood comprised only 3.3 per cent of aggregate US energy consumption and natural gas 17 per cent, but coal and oil closely matched each other with more than 36 per cent each. Oil’s market share peaked in 1968 at just 43 per cent, much lower than coal’s earlier peak of 70 per cent. Natural gas had emerged to compete with oil only 20 years after the latter’s emergence.
The gap had been much wider between coal and oil: about 150 years. Today both are declining as fractions of total world energy, although oil demand is at a maximum. ‘The oil industry still has most of its future in front of it,’ the physicist Cesare Marchetti predicts, with a mean loss of production of only 1.6 per cent per year. But the longer future belongs to natural gas, which Marchetti expects to reach a maximum market share of 70 per cent – the same dominance coal once had – around the year 2040. Natural gas had time to gain a large market share because its next competitor, nuclear power, emerged seven decades later. Seventy per cent market share for gas will be a huge share of a huge market and, if we ask where all the gas will come from, the answer seems to be that the search for hydrocarbons is controlled much more by geopolitics than by the probability of discovery.
The preadaptation that prepared the emergence of nuclear power has continued to haunt it. In the US, the Soviet Union, Great Britain, France and China nuclear reactors were developed first of all to breed plutonium for nuclear weapons. Power reactors were delayed in the US in the years immediately after the Second World War because everyone involved in the new atomic energy enterprise believed that high-quality uranium ore was rare in the world, too rare to be diverted from weapons production. Early in the 1950s the US Atomic Energy Commission even considered extracting uranium from coal ash, where burning concentrates coal’s natural complement of uranium (the Chinese are again considering the idea today). Well into the 1950s almost the entire US production of uranium and plutonium was dedicated to nuclear weapons. Finally the US government offered bonuses to uranium prospectors for high-quality finds and the prospectors, reprising the California Gold Rush, unearthed the extensive uranium resources of the Colorado Plateau.
Another delay arose from concerns for secrecy. In the US the Atomic Energy Act of 1946 made atomic energy an absolute monopoly of the federal government. All discoveries were to be considered ‘born’ secret – treated as secret until formally declassified – and the penalty for divulging atomic secrets was life imprisonment, even death. All uranium and plutonium became the property of the government, just as beached whales once automatically became the property of kings. No one could build or operate a nuclear reactor except under government contract, nor could one be privately owned. All these restrictions and mindsets had to be revised before utilities could own or build nuclear power stations.
It is clear in hindsight that the careful evolutionary development of nuclear power in the United States, including the types of reactors developed and the nurturing of a solid political constituency, was a casualty of the Cold War. Early in the 1950s the Soviet Union announced a power reactor programme and by then the British were developing a power reactor fuelled with natural uranium that countries without enrichment facilities might want to buy. In both cases Congress feared the US might be left behind. It amended the Atomic Energy Act in 1954 to allow private industry to own and operate reactors and, in the same year, government-subsidised construction began on a 60,000-kilowatt demonstration plant at Shippingport, Pennsylvania. The reactor design derived from a Westinghouse Large Ship Reactor, a pressurised water reactor (PWR) developed for aircraft carriers. But to limit proliferation, Admiral Hyman Rickover, ‘father of the nuclear navy’, made the bold decision to switch from uranium metal fuel to uranium oxide. The PWR configuration met the needs of the US navy, but it was less than ideal for commercial power. Water was a less efficient but familiar coolant.
Uranium oxide, which became the standard light-water reactor fuel, is less dense than uranium metal and conducts heat less efficiently. To make their compromised reactor designs competitive in a field dominated by relatively cheap fossil fuels, manufacturers pushed design limits, maximising temperatures, pressures and power densities. Tighter design limits led to more frequent shutdowns and increased the risk of breakdowns, which in turn required more complex safety systems.
More crucially, manufacturers began pursuing economies of scale by selling larger and larger reactors, without fully addressing the changing cost and safety issues such reactors raised. ‘The largest commercial facility operating in 1963’, two policy analysts write, ‘had a capacity of 200 megawatts; only four years later, utilities were ordering reactors of 1,200 megawatts.’ But the safety arrangements that government regulators had judged sufficient at 200 megawatts were no longer considered sufficient at 1,000 megawatts. So they began requiring further add-on safety systems, escalating engineering and construction costs. Construction time increased from seven years in 1971 to 12 years in 1980, roughly doubling the cost of the plants and raising the cost of the resulting electricity. US Nuclear Regulatory Commissioner Peter Bradford would write later that ‘an entire generation of large plants was designed and built with no relevant operating experience, almost as if the airline industry had gone from Piper Cubs to jumbo jets in about 15 years.’
Because of the increase in size and the correspondingly larger inventory of fuel, ‘engineered safety’ replaced ‘defence in depth’ as a design philosophy and it became impossible to demonstrate that large US power reactors were acceptably safe. Nor was a safety culture developed and maintained among the operating teams at private utilities lacking experience in nuclear power operations. It was these problems and not anti-nuclear activism that led to the cancellation of orders and the halt in construction that followed the Arab oil embargo, which began in late 1973. Orders for some 100 nuclear power plants were cancelled, as well as those for 82 coal power plants – nearly 200,000 megawatts in all – because the oil embargo stimulated dramatic improvements in energy conservation in the US, as well as in Europe, that stalled a longstanding trend of increasing demand. ‘Who … would have predicted,’ physicist Al Weinberg wrote, ‘that the total amount of energy used in 1986 would be only 74 quads, the same as in 1973?’ Today, with demand once again increasing, US nuclear power is thriving: existing plants are being relicensed to extend their operating life by another 20 years; plants left unfinished will probably be completed and licensed; and new reactor construction utilising newer, safer and more efficient designs is pending.
Fusion, if it can be made practical, fits in well with these historic trends in energy development. Like nuclear power it also continues another trend that Grübler and Nakicenovic have identified historically: a trend toward increasing decarbonisation, meaning a decrease in the amount of carbon or CO2 emitted per unit of primary energy consumed. The carbon intensity of primary energy use today is some 30 to 40 per cent lower than in the mid-19th century. The long-term trend toward decarbonisation will not be sufficient by itself to limit or reverse a build-up of greenhouse gas, but at least it is moving in the right direction. Solar, wind and biomass also fit this trend toward decarbonisation, but unlike those energy systems, fusion is punctiform rather than areal and the trend has been away from areal energy sources for more than 200 years. Renewables are also lower-grade energy sources than fusion, another trend in its favour.
In truth the world will need every energy source that can be found or devised. Coal as it is presently used will no doubt continue to decline in world market share, but it may find renewal in a new form, as a liquid fuel supplementing petroleum. That would extend coal’s contribution for another century. The fundamental human project is the alleviation of suffering through the progressive materialisation of the world. In the longest run, into the 22nd century, nuclear, solar and fusion electricity and hydrogen fuel promise a healthier, cleaner environment, an adequate standard of living, a life expectancy of at least 70 years and consequently a minimum of war and civil conflict for a sustainable world population of even ten billion souls.
Richard Rhodes is an affiliate of the Center for International Security and Cooperation at Stanford University and the author of The Twilight of the Bombs: Recent Challenges, New Dangers, and the Prospects for a World Without Nuclear Weapons (Knopf, 2010).