Electricity and fear: The trouble with nuclear energy
Although everyone knows that coal-based energy is a thing of the past, declarations about new nuclear power plants somehow never materialize.
No other power-generating device raises as much concern as the nuclear reactor. Because of this, until recently the future of the entire energy sector was determined by its past.
On the eve of the pandemic, the European energy sector found itself at a crossroads, somewhere between Great Britain, Germany and Poland. Five years ago, across the English Channel, the then Prime Minister David Cameron announced an ambitious program to build 12 new nuclear power plants with a total capacity of 16 GW. While developing renewable energy resources, they would allow the United Kingdom to reduce carbon dioxide emissions from the energy sector to almost zero. Soon after, Cameron came up with the idea of a referendum on leaving the EU – and Brexit reset all long-term British plans. However, the British are already producing electricity in a very sustainable way. Almost 38% comes from renewable sources, about 20% from nuclear power plants, while the remainder is provided by gas-powered plants, the only ones that emit CO2.
Meanwhile in Germany, the aversion towards nuclear energy has been growing for years. Finally, following the Fukushima disaster in March 2011, Chancellor Angela Merkel announced that all nuclear power plants would be shut down by 2022. For the first few years, the great Energiewende (energy transformation) plan seemed to be going well. Thanks to subsidies and increased electricity prices for individual customers, the intensive development of wind farms and solar power plants continued. However, no technological solution has been found to overcome the main weakness of renewable energy sources: plants running on them generate power for only 20-30% of the day on average and remain completely dependent on whether the wind blows or the sun shines. Because of this, they are not able to handle peaks in demand. In turn, when a gale comes, the network is suddenly overloaded with excess power. In both these extreme cases, the entire country is at risk of blackout, and the risk of collapsing energy supplies increases significantly when more than 30% of power is obtained from renewable sources. Safety requires the maintenance of traditional power plants, which, due to their flexibility, stabilize the entire system.
In Germany, as subsequent nuclear reactors began to shut down, lignite-fired power plants started to play a key role. Unlike nuclear plants, they devastate the natural environment not only due to CO2 emissions, but also the need to expand opencast mines. A huge wave of criticism from environmentalists and Berlin's goal to lead by example in the fight against global warming have brought an adjustment in strategy. Today, coal-fired power plants are being replaced by gas-fired ones that emit one-third less carbon dioxide. Russia will provide fuel for them via the Nord Stream and Nord Stream 2 gas pipelines. However, withdrawal from the decommissioning of nuclear power plants is now out of the question.
Poland, in turn, is lagging behind on energy transformation, even though the construction of one or more nuclear power plants was announced two decades ago. Before the pandemic, the government envoy for strategic energy infrastructure, Piotr Naimski, claimed that by the end of 2045 as many as six nuclear reactors with a total capacity of 6 GW would be built. Although everyone knows that coal-based energy is a thing of the past, declarations about new nuclear power plants somehow never materialize. And this is a very complicated undertaking, during which any disregard of safety standards can awaken demons from the past.
A pile of trouble
"In fifteen years, nuclear power will provide electricity too cheap to meter," the head of the American Atomic Energy Commission, Lewis Strauss, prophesied in 1954. By the end of that decade, energy corporations had overcome the technological barriers. "Westinghouse has perfected the PWR, the pressurized water reactor, and GE [General Electric] the BWR, the boiling water reactor," explains Daniel Yergin in The Quest: In Search of Energy. These two types of first-generation reactors spread throughout the world. By 1970, 62 nuclear power plants had been launched in 15 countries and the construction of a further 89 had begun. Most of them were located in the US, USSR, UK, France, Japan and West Germany. Three years later, the first oil crisis erupted and it seemed certain that highly developed countries would base their future on nuclear power plants. However, the first problems began to emerge.
A first-generation, 1000 MW pressurized water reactor generated as much as 20 tons of radioactive waste annually. Initially, the Americans placed it in metal containers and sank them in the ocean. The Soviets did the same. Protests by environmental organizations led to the waste instead being buried in the Nevada desert, in containers guaranteed to last a thousand years – ignoring the fact that the half-life of plutonium-239 is about 24,100 years. In other countries, old mines were used as waste dumps. The French coped with this problem exemplarily by building a plant at La Hague specializing in the recovery of uranium and plutonium from radioactive waste. These elements are then enriched and sold to energy companies. During the 1980s, many countries – including Japan, West Germany, Belgium and Switzerland – began to use the services of the French.
In addition to waste, investment costs became an equally large problem. "Emerging ecological movements, especially anti-nuclear ones, forced additional reviews and changes. It was necessary to thicken the concrete walls and to remove and rework pipeline installations. Power plants had to be redesigned, even several times during construction," emphasizes Yergin. He writes: "Power plants also became more expensive because of inflation and, later, the high interest rates on loans. Instead of six years, construction took ten, and that also cost money. The power plants, which were to cost $200 million, ultimately cost $2 billion." Later, they produced the cheapest electricity on the market, but the gigantic outlays had to be included in its price. While the French model handles waste well, investment costs remain the Achilles' heel of nuclear energy to this day, even if they matter less than media coverage and public fear.
Awaiting the apocalypse
"There is nothing in the laws of nature that stops us from building better nuclear power plants. We are stopped by a deep and justified public distrust. The public distrusts the experts because they claimed to be infallible," writes Freeman Dyson, a physicist who participated in the construction of the first reactors, in his book Imagined Worlds. The distrust of nuclear energy emerged gradually. In the 1960s, everyone remembered the fate of Hiroshima and Nagasaki, but the fear of radiation had not yet paralysed ordinary people. Experts managed to convince Western societies that a nuclear power plant hardly differed from a coal-fired one. All it needs is access to much more coolant for the reactor, preferably a huge body of water.
The sense of security began to fade not because of a failure, but because of the catastrophic scenarios loved by the press, especially in West Germany. In October 1975, Der Spiegel very vividly presented to readers what would happen if the reactor at a power plant built near Ludwigshafen overheated. "The molten reactor core will penetrate the surrounding protective structures. It will sink into the ground at a speed of two to four meters per hour. The amount of radiation emitted would correspond to the radiation of a thousand bombs such as the one dropped on Hiroshima," the newspaper forecast, estimating the number of victims at 100,000 killed immediately and about 1.6 million "dying slowly" from radiation sickness. Such apocalyptic visions interested Hollywood, resulting in the thriller The China Syndrome. In specialist jargon, the term refers to a severe meltdown of the reactor core.
Lo and behold, two weeks after the film's release, on 28th March 1979, there was a failure at the Three Mile Island nuclear power plant, located on an artificial island. Pipes supplying coolant to the reactor burst while the back-up cooling system was disconnected for inspection. The reactor overheated, but the safety measures worked. Each reactor is managed using control rods, made of alloys that absorb neutrons. Sliding the control rods in between the fuel rods slows down the chain reaction; pulling them out has the opposite effect. When the reactor overheats, all control rods drop into the core, quenching the reaction.
This happened at Three Mile Island. However, due to the burst pipes, water poured onto the reactor jacket and immediately evaporated, forming a mixture of oxygen and hydrogen under the dome of the power block. One spark could have blown up the power plant. The following day, technicians vented the hazardous, radioactive gases outside. The residents of nearby Harrisburg panicked; about 80,000 people attempted to flee the city by car. Assurances from the US Secretary of Energy, James Schlesinger, that radiation had only increased by around 0.03 rem and would not hurt anyone fell on deaf ears. Those who had seen The China Syndrome knew better. It wasn't until five days later, when President Jimmy Carter personally visited Three Mile Island and toured the area in front of TV cameras, that the panic subsided. However, the misfortunes of nuclear power plants were only just beginning.
The weakest link
The owners of the plant, the Westinghouse group, largely caused the Three Mile Island disaster. The power plant was built in a rush to make it operational before 30th December 1978, so that the company could gain a $40 million tax break. After the reactor was launched, it turned out that the coolant supply pipes were leaking. At that point, the management ordered a temporary sealing of the leaks, after which the test of the emergency cooling system was performed, starting with its shutdown – on the assumption that the main pipes would still last a little longer. "The accident was caused by a series of relatively small equipment failures followed by operator error," the head of the commission investigating the causes of the disaster, Admiral Hyman G. Rickover, wrote in his report. Fortunately, none of the executives were so thoughtless as to deactivate the other safeguards. Seven years later, it turned out that even such recklessness was possible.
On the night of 26th April 1986, the management of the Chernobyl power plant began to experiment with manual control of the reactor in block 4. To give themselves complete freedom, the staff turned off all automatic safety systems. During the experiments, the core heated up rapidly, and the control rods, blocked by the staff, did not automatically quench the chain reaction. Then the pipes supplying water to the cooling system burst. As at Three Mile Island, the water evaporated by the hot reactor turned into hydrogen and oxygen. The explosion of this mixture tore off the dome and threw a 500-ton slab of concrete into the air, which a moment later fell into the reactor, destroying it completely. Fifty tons of fuel escaped outside and the core melted. Vast areas of northern Ukraine and Belarus became contaminated by the radioactive cloud. Fifty thousand residents of the nearby town of Pripyat and surrounding villages were evacuated.
As a result of the disaster, 31 people lost their lives (mainly irradiated firefighters). UNSCEAR (the UN Scientific Committee on the Effects of Atomic Radiation) later examined the casualties more thoroughly: a 2000 report found that of about 600 power plant employees and firefighters, 237 were diagnosed with symptoms of radiation sickness. Of these, 28 people died. According to the report, epidemiologists have not observed an increase in the incidence of cancer in the most contaminated areas, except for higher-than-average rates of thyroid cancer. No genetic defects were found in the offspring of irradiated persons.
A quarter of a century later, the 'China syndrome' became Japanese. Two oil crises in the 1970s encouraged the government of Japan to finance the construction of 50 nuclear reactors, which guaranteed energy security for the state. However, haste made the planners overlook the risks in a country where earthquakes happen regularly. The Fukushima plant was built right on the seafront. When massive tremors (magnitude 9) came on 11th March 2011, the safety systems functioned properly: the reactors were automatically quenched and the cooling system switched to the emergency power supply. Nothing bad would have happened if it weren't for the sea. The tremors caused a 15-metre-high tsunami wave, while the breakwater was only six metres high. Huge amounts of water flooded the power plant, the generators went down, and the reactor cores suddenly stopped being cooled. Then the water evaporated and the hydrogen-oxygen mixture exploded.
About one-tenth as much radioactive material escaped as at Chernobyl, and no-one was killed during the event itself. The first person irradiated in the disaster's aftermath did not die until September 2018. Yet again, however, a wave of fear swept through the entire world.
The sum of fears
The disaster in Fukushima was a strong blow to the nuclear energy sector, which even without it suffered bad press and public trepidation. By the mid-1980s, the number of reactors operating worldwide had reached 430 and stopped growing. New ones were still being built in France, Japan, the USSR (later, Russia), South Korea and China, but elsewhere they were gradually dismantled. The only country that has based its entire energy system on nuclear power plants is France, where they produce over 80% of electricity. Finland is also focusing on the development of nuclear energy. Two nuclear power plants currently generate around 30% of the country's energy, and once the third one is built, this will reach 60% (the rest is to come from renewable sources).
Most countries, however, still recognize the nuclear industry as a dead end. The emergence of much better third generation reactors that use less uranium, while reducing the amount of waste, did not change that. Developed by two companies – the French Framatome and the German Siemens – the EPR (European Pressurized Reactor) has a quadruple safety system and reinforcement that can withstand even the impact of an aircraft crash. In turn, the ESBWR (Economic Simplified Boiling Water Reactor) by GE Hitachi, apart from showing similar resistance, requires minimal amounts of coolant and discharges excess heat directly into the atmosphere.
There are more innovative designs, but they have only started to generate interest recently, thanks to the rapid development of Asian countries and the resulting increase in demand for cheap electricity. A nuclear power plant uses roughly 30-50 tons of uranium per year. At a market price of around $55 per kilogram, a fuel cost of around $2.5 million a year is very cheap – about 100 times cheaper than fuelling a coal-fired power plant. It is estimated that known uranium deposits will last for about 300 years. At the same time, as with crude oil, this horizon may prove to be much more distant, since no one has prospected for new deposits in years. Therefore, it should not come as a surprise that in April 2019 China presented a plan for the vast expansion of its nuclear energy sector. While today the total capacity of Chinese nuclear power plants is about 42 GW, it is to exceed 100 GW within a decade or so. Then, the People's Republic of China will overtake the US in this field. South Korea has presented slightly less ambitious goals, announcing an increase in nuclear power by one-third.
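The fuel-cost claim above can be checked with back-of-the-envelope arithmetic. A minimal sketch in Python, using only the figures quoted in the article (30-50 tons of uranium per year at about $55 per kilogram):

```python
def annual_fuel_cost(tons_per_year, price_per_kg=55.0):
    """Annual uranium fuel bill in US dollars (1 ton = 1,000 kg)."""
    return tons_per_year * 1000 * price_per_kg

# The article's range of 30-50 tons a year:
low = annual_fuel_cost(30)   # $1.65 million
high = annual_fuel_cost(50)  # $2.75 million
print(f"${low:,.0f} to ${high:,.0f} per year")
```

The midpoint of that range lands close to the "around $2.5 million a year" stated in the text.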
And what path will the European Union take? The fight against CO2 emissions determines the direction of its energy policy, and renewable energy sources are a priority. However, to base an economy fully on them, efficient energy storage is necessary – methods capable of accumulating electricity at times of overproduction and releasing it in the absence of sun and wind. Even lithium-ion cells cannot fully cope with this task. One attempt to work around this missing piece is the design of self-sufficient buildings that draw energy from solar panels and heat pumps. On the scale of cities and entire countries, however, large power plants cannot be replaced, and the only ones that do not emit carbon dioxide are nuclear power plants. This means that even in Europe, their slow renaissance continues. For now, countries on the outskirts of the EU (Finland, Hungary, Lithuania, the Czech Republic and Slovakia) are modernizing old plants or building new ones. Worldwide, the construction of over 60 new reactors is already underway.
Despite public resentment, more investments will begin soon. Right now, fear of the 'China syndrome' is weaker than fear of the effects of global warming and sudden energy shortages and blackouts.
Translated from the Polish by Joanna Figiel. Reprinted with permission of Przekrój. Read the original article.
A Harvard professor's study identifies the worst year to be alive.
- Harvard professor Michael McCormick argues the worst year to be alive was 536 AD.
- The year was terrible due to cataclysmic eruptions that blocked out the sun and the spread of the plague.
- 536 ushered in the coldest decade in thousands of years and started a century of economic devastation.
The past year has been nothing short of the worst in the lives of many people around the globe: a rampaging pandemic, dangerous political instability, weather catastrophes, and a profound change in lifestyle that most have never experienced or imagined.
But was it the worst year ever?
Nope. Not even close. In the eyes of the historian and archaeologist Michael McCormick, the absolute "worst year to be alive" was 536.
Why was 536 so bad? You could certainly argue that 1918, the last year of World War I when the Spanish Flu killed up to 100 million people around the world, was a terrible year by all accounts. 1349 could also be considered on this morbid list as the year when the Black Death wiped out half of Europe, with up to 20 million dead from the plague. Most of the years of World War II could probably lay claim to the "worst year" title as well. But 536 was in a category of its own, argues the historian.
It all began with an eruption...
According to McCormick, Professor of Medieval History at Harvard University, 536 was the precursor year to one of the worst periods of human history. It featured a volcanic eruption early in the year that took place in Iceland, as established by a study of a Swiss glacier carried out by McCormick and the glaciologist Paul Mayewski from the Climate Change Institute of The University of Maine (UM) in Orono.
The ash spewed out by the volcano likely led to a fog that brought an 18-month-long stretch of daytime darkness across Europe, the Middle East, and portions of Asia. As the Byzantine historian Procopius wrote, "For the sun gave forth its light without brightness, like the moon, during the whole year." He also recounted that it looked like the sun was always in eclipse.
Cassiodorus, a Roman politician of that time, wrote that the sun had a "bluish" color, the moon had no luster, and "seasons seem to be all jumbled up together." What's even creepier, he described, "We marvel to see no shadows of our bodies at noon."
...that led to famine...
The dark days also brought a period of cold, with summer temperatures falling by 1.5°C to 2.5°C. This started the coldest decade in the past 2,300 years, reports Science, leading to the devastation of crops and worldwide hunger.
...and the fall of an empire
In 541, the bubonic plague added considerably to the world's misery. Spreading from the Roman port of Pelusium in Egypt, the so-called Plague of Justinian caused the deaths of up to one half of the population of the eastern Roman Empire. This, in turn, sped up its eventual collapse, writes McCormick.
Between the environmental cataclysms, with massive volcanic eruptions also in 540 and 547, and the devastation brought on by the plague, Europe was in for an economic downturn for nearly all of the next century, until 640 when silver mining gave it a boost.
Was that the worst time in history?
Of course, the absolute worst time in history depends on who you were and where you lived.
Native Americans can easily point to 1520, when smallpox, brought over by the Spanish, killed millions of indigenous people. By 1600, up to 90 percent of the population of the Americas (about 55 million people) was wiped out by various European pathogens.
Like all things, the grisly title of "worst year ever" comes down to historical perspective.
A machine learning system lets visitors at a Kandinsky exhibition hear the artwork.
Have you ever heard colors?
As part of a new exhibition, the worlds of culture and technology collide, bringing sound to the colors of abstract art pioneer Wassily Kandinsky.
Kandinsky had synesthesia, where looking at colors and shapes causes some with the condition to hear associated sounds. With the help of machine learning, virtual visitors to the Sounds Like Kandinsky exhibition, a partnership project by Centre Pompidou in Paris and Google Arts & Culture, can have an aural experience of his art.
An eye for music
Kandinsky's synesthesia is thought to have heavily influenced his painting. Seeing yellow summoned up trumpets, evoking emotions like cheekiness; reds produced violins, portraying restlessness; while blues he associated with organs, representing heavenliness, according to the exhibition notes.
Virtual visitors are invited to take part in an experiment called Play a Kandinsky, which allows them to see and hear the world through the artist's eyes.
Kandinsky's synesthesia is thought to have heavily influenced his 1925 painting Yellow, Red, Blue. Image: Guillaume Piolle / Wikimedia Commons
In 1925, the artist's masterpiece, "Yellow, Red, Blue", broke new ground in the world of abstract art, guiding the viewer from left to right with shifting shapes and shades. Almost a century after it was painted, Google's interactive tool lets visitors click different parts of the artwork to journey through the artist's description of the colors, associated sounds and moods that inspired the work.
But Google's new toy is not the only tool developed to enhance the artistic experience.
Artist Neil Harbisson has developed an artificial way to emulate Kandinsky by turning colors into sounds. He has a rare form of color blindness and sees the world in greyscale. But a smart antenna attached to his head translates dominant colors into musical notes, creating a real-world soundtrack of what's in front of him. The invention could open up a new world for people who are color blind.
A new study suggests that private prisons hold prisoners for longer, eroding the cost savings that private prisons are supposed to provide over public ones.
- Private prisons in Mississippi tend to hold prisoners 90 days longer than public ones.
- The extra days eat up half of the expected cost savings of a private prison.
- The study leaves several open questions, such as what effect these extra days have on recidivism rates.
The United States of America, land of the free, is home to 5 percent of the world's population but 25 percent of its prisoners. The cost of keeping so many people in the penal system adds up to $80 billion per year, more than three times the budget of NASA. This massive system exploded in size relatively recently, with the prison population increasing six-fold over the last four decades.
Ten percent of these prisoners are kept in private prisons, which are owned and operated for the sake of profit by contractors. In theory, these operations cost less than public prisons and jails, and states can save money by contracting them to incarcerate people. They have a long history in the United States and are used in many other countries as well.
However, despite the pervasiveness of private contractors in the American prison system, there is not much research into how well they live up to their promise to provide similar services at a lower cost to the state. The little research that is available often encounters difficulties in trying to compare the costs and benefits of facilities with vastly different operations and occasionally produces results suggesting there are few benefits to privatization.
A new study by Dr. Anita Mukherjee, published in the American Economic Journal: Economic Policy, joins the debate with a robust consideration of the costs and benefits of private prisons. Its findings suggest that some private prisons keep people incarcerated longer and save less money than advertised.
The study focuses on prisons in Mississippi. Despite its comparatively high rate of incarceration, Mississippi's prison system is very similar to that of other states that also use private prisons. Demographically, its system is representative of the rest of the U.S. prison system, and its inmates are sentenced for similar amounts of time.
The state attempts to get the most out of its privatization efforts: a 1994 law requires all contracts for private prisons in Mississippi to provide at least a 10 percent cost saving over public prisons while providing similar services. As a result, the state seeks to maximize its savings by sending prisoners to private institutions first if space is available.
While public and private prisons in Mississippi are quite similar, there are a few differences that allow for the possibility of cost savings by private operators — not the least of which is that the guards are paid 30 percent less and have fewer benefits than their publicly employed counterparts.
The results of privatization
The graph depicts the likelihood of release for public (dotted line) vs. private (solid line) prison inmates. At every level of time served, public prisoners were more likely to be released than private prisoners. Source: Dr. Anita Mukherjee
The study relied on administrative records of the Mississippi prison system between 1996 and 2013. The data included information on prisoner demographics, the crimes committed, sentence lengths, time served, infractions while incarcerated, and prisoner relocation while in the system, including between public and private jails. For this study, the sample examined was limited to those serving between one and six years and those who served at least a quarter of their sentence. This created a primary sample of 26,563 bookings.
Analysis revealed that prisoners in private prisons were behind bars for four to seven percent longer than those in public prisons, which translates to roughly 85 to 90 extra days per prisoner. This is, in part, because those in private prison serve a greater portion of their sentences (73 percent) than those in public institutions (70 percent).
This in turn might be due to the much higher infraction rate in private prisons compared to public ones. While only 18 percent of prisoners in a public prison commit an infraction, such as disobeying a guard or possessing contraband, the number jumps to 46 percent in a private prison. Infractions can reduce the probability of early release or cause time to be added to a sentence.
It's unclear why there are so many more infractions in private prisons. Dr. Mukherjee suggests it could be the result of "harsher prison conditions in private prisons," better monitoring techniques, incentives to report more of them to the state before contract renewals, or even a lackadaisical attitude on the part of public prison employees.
What does all this cost Mississippi?
The extra time served eats 48 percent of the cost savings of keeping prisoners in a private facility. For example, it costs about $135,000 to house a prisoner in a private prison for three years and $150,000 in the public system. But longer stays in private prisons reduce the savings from $15,000 to only $7,800.
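The paragraph's arithmetic can be reconstructed in a few lines, using the three-year example figures given in the article:

```python
# The article's three-year example: nominal vs. effective savings.
private_cost = 135_000           # three years in a private prison
public_cost = 150_000            # three years in the public system
nominal_savings = public_cost - private_cost     # $15,000 on paper

effective_savings = 7_800        # what remains after the extra ~90 days served
share_consumed = 1 - effective_savings / nominal_savings
print(f"{share_consumed:.0%} of the nominal savings is consumed")  # 48%
```

The result matches the 48 percent figure quoted above: roughly half of the promised saving disappears into longer stays.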
As Dr. Mukherjee remarks, this figure captures only the financial cost. Some things are a little harder to measure:
"There are, of course, other costs that are difficult to quantify — e.g., the cost of injustice to society (if private prison inmates systematically serve more time), the inmate's individual value of freedom, and impacts of the additional incarceration on future employment. Abrams and Rohlfs (2011) estimates a prisoner's value of freedom for 90 days at about $1,100 using experimental variation in bail setting. Mueller-Smith (2017) estimates that 90 days of marginal incarceration costs about $15,000 in reduced wages and increased reliance on welfare. If these social costs were to exceed $7,800 in the example stated, private prisons would no longer offer a bargain in terms of welfare-adjusted cost savings."
It is possible that the extra time in jail provides benefits that counter these costs, such as a reduced recidivism rate, but this proved difficult to determine. Though it was not statistically significant, there was some evidence that the added time actually increased the rate of recidivism. If that's true, then private prisons could be counterproductive.