An artificial island in the North Sea is the biggest building project ever in Danish history - and could pave the way for many more.
- In 1991, Denmark constructed the world's first offshore wind farm.
- Now they're building an entire 'Energy Island' in the North Sea.
- As the U.S. catches up, Danish know-how could soon come to America.
Giant wind farms
Wind turbines of the Block Island Wind Farm, so far the only offshore wind project in operation in the U.S.
Credit: Don Emmert/AFP via Getty Images
On Monday, President Biden designated a 'Wind Energy Area' in the waters between Long Island and New Jersey. It's part of an ambitious plan to build giant wind farms along the East Coast. There's currently only one offshore wind farm in the Eastern U.S., off Rhode Island (1).
When those wind farms get built, you can bet there'll be Danish companies involved. In 1991, Denmark built Vindeby, the world's first offshore wind farm. In the years since, Danish companies have maintained their global lead.
In February, the Danish government announced it would build the world's first 'Energy Island'. Everybody else in the world, take note: if the Danes pull this off, similar islands could soon pop up off your shores – perhaps also in the New York Bight.
So, what's an Energy Island, and why does Denmark want one? For the answer, we spool back to June 2020, when a broad coalition of Danish parties, left and right, in government and opposition, concluded a Climate Agreement. This is Denmark's plan not only to make a radical break with fossil fuels but also to show the rest of the world how it's done.
On the rise again
Close-up of Energy Island, with two of the seawalls at the back and the port at the front.
Credit: Danish Energy Agency
Due in large part to its pioneering work with wind energy, Denmark has a green image. But that hasn't always reflected reality. Yes, in 2019 the country generated 30 percent of its energy from renewable sources – earning it 9th place worldwide (2). But in 2018, Denmark was also the EU's leading oil producer (3).
Under the Climate Agreement, that will stop. Denmark will no longer explore and develop new oil and gas fields in its section of the North Sea. Extraction will be gradually reduced to zero. In exchange, Denmark will dramatically scale up the production of sustainable energy via offshore wind farms. The ultimate goal: nationwide carbon neutrality by 2050.
Offshore wind farms produce the bulk of Europe's sustainable energy. And after a dip in the first decade of the century, offshore wind farms are on the rise again (4). One reason for the increased popularity: taller turbines, which means larger blades, which means greater capacity.
- In 2016, the tallest turbines were 540 ft (164 m) and had a capacity of 8 megawatts (MW).
- In 2021, turbines can be up to 720 ft (220 m) tall, generating up to 12 MW.
- Soon, the turbines will reach 820 ft (250 m) – not that much shorter than the Eiffel Tower (1,063 ft or 324 m, street to tip). These will have a capacity of up to 20 MW.
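To get a rough sense of what those capacity figures mean in practice, the sketch below converts a turbine's rated capacity into households supplied. The capacity factor (50 percent, typical for modern offshore wind) and the annual household consumption (4,000 kWh) are assumptions for illustration, not figures from the article.

```python
# Back-of-the-envelope: annual output of one offshore wind turbine.
# Assumed values (not from the article): a 50% capacity factor and
# 4,000 kWh per household per year.

HOURS_PER_YEAR = 8_760
CAPACITY_FACTOR = 0.50          # assumption: average output vs. rated capacity
KWH_PER_HOUSEHOLD_YEAR = 4_000  # assumption: annual household consumption

def households_served(capacity_mw: float) -> int:
    """Households one turbine could supply in an average year."""
    annual_kwh = capacity_mw * 1_000 * HOURS_PER_YEAR * CAPACITY_FACTOR
    return round(annual_kwh / KWH_PER_HOUSEHOLD_YEAR)

for mw in (8, 12, 20):
    print(f"{mw} MW turbine -> ~{households_served(mw):,} households")
```

Under these assumptions, one 20 MW turbine supplies more than twice the households of a 2016-era 8 MW machine, which is why taller turbines change the economics so sharply.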
Potential position of Energy Island (red) off the western coast of Jutland, surrounded by a wind farm (green) filled with turbines (blue dots).
Credit: Danish Energy Agency
As the shallow parts of the North Sea (<66 ft; <20 m) fill up with wind farms, the issue of managing the energy flow produced by these farms becomes acute. The obvious solution would be to build a central point where the energy is collected, converted from AC to DC and transmitted to one or more points onshore. Centralised management of the wind farms would mitigate the fluctuations in energy production and make it easier for supply to meet demand.
If supply is greater than demand, these collection points can also serve as storage units. Excess energy could be stored in batteries or transformed into hydrogen via electrolysis. If and when necessary, the hydrogen can then be transported onto land and reconverted into electricity.
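Storing surplus power as hydrogen is lossy in both directions. The sketch below shows the round trip; the efficiency figures are rough, commonly cited assumptions, not numbers from the Danish plan.

```python
# Illustrative round trip for storing surplus wind power as hydrogen.
# Both efficiency figures are rough assumptions, not article data.

ELECTROLYSIS_EFF = 0.70   # electricity -> hydrogen (assumed)
RECONVERSION_EFF = 0.50   # hydrogen -> electricity, e.g. via fuel cell (assumed)

def recovered_mwh(surplus_mwh: float) -> float:
    """Electricity recovered after a full hydrogen storage round trip."""
    return surplus_mwh * ELECTROLYSIS_EFF * RECONVERSION_EFF

print(recovered_mwh(100.0))  # 100 MWh of surplus comes back as 35 MWh
```

Even with those losses, hydrogen storage can make sense for energy that would otherwise be curtailed entirely.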
The Dutch are thinking about it, and some have suggested the Dogger Bank as an ideal location: shallow and central within the North Sea, ideally placed to distribute energy to the various countries bordering the sea. But the Danes are doing it. The Climate Agreement envisaged not one, but two energy islands.
One would be Bornholm, Denmark's Baltic island, halfway between Sweden and Poland, which would serve as the hub for local offshore wind farms. But the other would be an entirely new, entirely artificial island in the North Sea, to be built about 50 miles (80 km) off Thorsminde, on the western coast of Jutland.
10 million households
Schematic overview of how an Energy Island could serve as a hub for collecting and redistributing sustainable energy.
Credit: Danish Energy Agency
In February, the Danish government revealed how much this Energi-Ø would cost, how long it would take to build – and what it might look like.
- Energy Island will be built via the caisson method – essentially, sinking a watertight box to the bottom of the sea. The island will be protected from storms by high seawalls on three sides. The fourth side will feature a dock for ships.
- Construction could start in 2026 and is expected to take three years. Building the wind farms and transmission network will take a few years more. By 2033, it could be churning out its sustainable GWs.
- In its initial phase, the island will have an area of about 12 hectares (30 acres, or about 18 soccer fields). It will collect the output of about 200 offshore wind turbines, with a combined capacity of 3 GW. That's enough to supply about 3 million households – slightly more than the total for Denmark.
- When fully completed, the island will have an area of around 46 hectares (114 acres, just under 70 soccer fields) and will collect the energy of 600 turbines, for a total capacity of 10 GW (5). That covers 10 million households.
- 10 GW is equivalent to about 150 percent of Denmark's entire electricity needs (households, industry, infrastructure, etc.). That leaves plenty of scope for supplying neighbouring countries. Agreements have already been reached with Germany, the Netherlands and Belgium.
The plan also foresees a hydrogen production plant on the island; the hydrogen could either be piped onshore or stored for later transport.
Yet untested aspects
Location of Energy Island (yellow) in the North Sea, showing potential connections towards neighboring countries.
Credit: Danish Climate Ministry / Vimeo
In all, the island would cost DKK 210 billion (US$33 billion) to build – by far Denmark's largest construction project (6).
The project will be undertaken in a public-private partnership between the Danish state and commercial interests. Because it is 'critical infrastructure', the state will retain a stake of at least 50.1 percent in the project. There are two scenarios for co-ownership:
- The island is owned in its entirety by a single company, in which the Danish state retains at least that slimmest of majorities; or
- Private companies directly own up to 49.9 percent of the island itself.
The Danish government needs private-sector input to overcome the unknown and as yet untested aspects of the project – not just in designing and building an entire island from scratch, but also in operating and maintaining it, and even in financing and managing the risks.
But where there's risk, there is potential. If the project is successful, it will become the blueprint for similar energy islands the world over – and the companies that helped build the first one will be in high demand to build the others, perhaps soon in Biden's 'Wind Energy Area'.
Green, as the Danes have discovered, is not just the color of nature. It's also the color of money.
Strange Maps #1077
Got a strange map? Let me know at firstname.lastname@example.org.
(1) Coastal Virginia Offshore Wind, a two-turbine pilot project 23 miles (43 km) off Virginia Beach, was completed last year.
(2) The Top 10 (2019) are Iceland (79%), Norway (66%), Brazil (45%), Sweden (42%), Austria (38%), New Zealand (35%), Switzerland (31%), Ecuador (30%), Denmark (30%) and Canada (28%).
(3) With 5.8 megatons of oil equivalent (Mtoe), Denmark beat Italy (4.7 Mtoe) and Romania (3.4 Mtoe). Oil production in the EU is on the way down: it peaked in 2004 (42.5 Mtoe) and has since halved (to 21.4 Mtoe in 2018). A similar trend has occurred in the two key non-EU oil producers in Europe. Norway's oil production peaked in 2001 (159.2 Mtoe) and has since more than halved (to 74.5 Mtoe in 2018), while the UK's peaked in 1999 (133.3 Mtoe) and has since been reduced by almost two thirds (to 49.3 Mtoe in 2018).
(5) The Bornholm energy hub is projected to top out at 2 GW.
(6) Inaugurated in 2000, the famous Øresund Bridge (Øresundsbroen), connecting Sweden to Denmark, cost about DKK 25 billion (US$4 billion) in today's money. When it's finished (by 2029, if work continues apace), the Fehmarn Belt Fixed Link (18 km) between the Danish island of Lolland and the German island of Fehmarn, will be the world's longest road/rail tunnel. It will have cost about DKK 55 billion (US$ 8.7 billion).
Can we ever make energy efficient AI?
A recent paper pointed out the risks of language-processing artificial intelligence, the type used in Google Search and other text analysis products.
Among those risks is the large carbon footprint of developing this kind of AI technology. By some estimates, training an AI model generates as much carbon as it takes to build and drive five cars over their lifetimes.
I am a researcher who studies and develops AI models, and I am all too familiar with the skyrocketing energy and financial costs of AI research. Why have AI models become so power hungry, and how are they different from traditional data center computation?
Today's training is inefficient
Traditional data processing jobs done in data centers include video streaming, email and social media. AI is more computationally intensive because it needs to read through lots of data until it learns to understand it – that is, until it is trained.
This training is very inefficient compared to how people learn. Modern AI uses artificial neural networks, which are mathematical computations that mimic neurons in the human brain. The strength of connection of each neuron to its neighbor is a parameter of the network called weight. To learn how to understand language, the network starts with random weights and adjusts them until the output agrees with the correct answer.
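The adjust-the-weights-until-correct loop described above can be caricatured with a single weight and gradient descent. This toy is purely an illustration of the principle, not how BERT-scale models are actually trained.

```python
# A minimal caricature of training: one "neuron" with one weight,
# nudged by gradient descent until its output matches the target.

def train(x: float, target: float, lr: float = 0.1, steps: int = 100) -> float:
    w = 0.5                          # start from an arbitrary weight
    for _ in range(steps):
        y = w * x                    # the network's output
        grad = 2 * (y - target) * x  # gradient of the squared error
        w -= lr * grad               # adjust the weight toward the answer
    return w

w = train(x=2.0, target=6.0)         # learns w close to 3, since 3 * 2 = 6
print(round(w, 3))
```

A real language model repeats this kind of adjustment over billions of weights and billions of words, which is where the energy cost comes from.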
How artificial neural networks work.
A common way of training a language network is by feeding it lots of text from websites like Wikipedia and news outlets with some of the words masked out, and asking it to guess the masked-out words. An example is "my dog is cute," with the word "cute" masked out. Initially, the model gets all its guesses wrong, but after many rounds of adjustment, the connection weights start to change and pick up patterns in the data. The network eventually becomes accurate.
One recent model called Bidirectional Encoder Representations from Transformers (BERT) used 3.3 billion words from English books and Wikipedia articles. Moreover, during training BERT read this data set not once, but 40 times. To compare, an average child learning to talk might hear 45 million words by age five, 3,000 times fewer than BERT.
Looking for the right structure
What makes language models even more costly to build is that this training process happens many times during the course of development. This is because researchers want to find the best structure for the network – how many neurons, how many connections between neurons, how fast the parameters should be changing during learning and so on. The more combinations they try, the better the chance that the network achieves a high accuracy. Human brains, in contrast, do not need to find an optimal structure – they come with a prebuilt structure that has been honed by evolution.
As companies and academics compete in the AI space, the pressure is on to improve on the state of the art. Even achieving a 1% improvement in accuracy on difficult tasks like machine translation is considered significant and leads to good publicity and better products. But to get that 1% improvement, one researcher might train the model thousands of times, each time with a different structure, until the best one is found.
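The multiplying effect of structure search is easy to see with a hypothetical hyperparameter grid: each combination of settings means one full training run. The specific settings below are invented for illustration.

```python
# Why searching for the best structure multiplies training cost: every
# combination of settings below is a complete, separate training run.
# These hyperparameter values are hypothetical.

from itertools import product

layers = [6, 12, 24]
hidden_sizes = [256, 512, 768, 1024]
learning_rates = [1e-5, 3e-5, 1e-4]
dropouts = [0.0, 0.1, 0.3]

configs = list(product(layers, hidden_sizes, learning_rates, dropouts))
print(len(configs))  # 3 * 4 * 3 * 3 = 108 separate training runs
```

Even this modest four-knob grid demands over a hundred trainings; real searches can run into the thousands.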
Researchers at the University of Massachusetts Amherst estimated the energy cost of developing AI language models by measuring the power consumption of common hardware used during training. They found that training BERT once has the carbon footprint of a passenger flying a round trip between New York and San Francisco. However, by searching using different structures – that is, by training the algorithm multiple times on the data with slightly different numbers of neurons, connections and other parameters – the cost became the equivalent of 315 passengers, or an entire 747 jet.
Bigger and hotter
AI models are also much bigger than they need to be, and growing larger every year. A more recent language model similar to BERT, called GPT-2, has 1.5 billion weights in its network. GPT-3, which created a stir this year because of its high accuracy, has 175 billion weights.
Researchers discovered that having larger networks leads to better accuracy, even if only a tiny fraction of the network ends up being useful. Something similar happens in children's brains when neuronal connections are first added and then reduced, but the biological brain is much more energy efficient than computers.
AI models are trained on specialized hardware like graphics processing units (GPUs), which draw more power than traditional CPUs. If you own a gaming laptop, it probably has one of these GPUs to create advanced graphics for, say, playing Minecraft RTX. You might also notice that such laptops generate a lot more heat than regular ones.
All of this means that developing advanced AI models is adding up to a large carbon footprint. Unless we switch to 100% renewable energy sources, AI progress may stand at odds with the goals of cutting greenhouse emissions and slowing down climate change. The financial cost of development is also becoming so high that only a few select labs can afford to do it, and they will be the ones to set the agenda for what kinds of AI models get developed.
Doing more with less
What does this mean for the future of AI research? Things may not be as bleak as they look. The cost of training might come down as more efficient training methods are invented. Similarly, while data center energy use was predicted to explode in recent years, that has not happened, thanks to improvements in data center efficiency and more efficient hardware and cooling.
There is also a trade-off between the cost of training the models and the cost of using them, so spending more energy at training time to come up with a smaller model might actually make using them cheaper. Because a model will be used many times in its lifetime, that can add up to large energy savings.
In my lab's research, we have been looking at ways to make AI models smaller by sharing weights, or using the same weights in multiple parts of the network. We call these shapeshifter networks because a small set of weights can be reconfigured into a larger network of any shape or structure. Other researchers have shown that weight-sharing has better performance in the same amount of training time.
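The weight-sharing idea can be illustrated crudely: one small parameter bank is reused as every layer's weight matrix, so a three-layer network stores only one layer's worth of parameters. This is a bare-bones sketch of the general idea, not the authors' actual shapeshifter networks.

```python
# A crude illustration of weight sharing (not the authors' actual
# "shapeshifter" method): one small parameter bank is reshaped and
# reused by every layer, so 3 layers store only 16 parameters.

import numpy as np

rng = np.random.default_rng(0)
bank = rng.standard_normal(16)          # the only stored parameters

def layer(x: np.ndarray) -> np.ndarray:
    W = bank.reshape(4, 4)              # the same weights, reused each layer
    return np.tanh(W @ x)

x = rng.standard_normal(4)
for _ in range(3):                      # three layers, one shared weight bank
    x = layer(x)

print(x.shape, bank.size)               # (4,) 16 -- vs 48 without sharing
```

Without sharing, the same three-layer network would need 48 parameters; the saving scales up dramatically in networks with billions of weights.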
Looking forward, the AI community should invest more in developing energy-efficient training schemes. Otherwise, it risks having AI become dominated by a select few who can afford to set the agenda, including what kinds of models are developed, what kinds of data are used to train them and what the models are used for.
Fragments of energy – not waves or particles – may be the fundamental building blocks of the universe
New mathematics has shown that lines of energy can be used to describe the universe.
Matter is what makes up the universe, but what makes up matter?
This question has long been tricky for those who think about it – especially for the physicists. Reflecting recent trends in physics, my colleague Jeffrey Eischen and I have described an updated way to think about matter. We propose that matter is not made of particles or waves, as was long thought, but – more fundamentally – that matter is made of fragments of energy.
From five to one
The ancient Greeks conceived of five building blocks of matter – from bottom to top: earth, water, air, fire and aether. Aether was the matter that filled the heavens and explained the rotation of the stars, as observed from Earth's vantage point. These were the first, most basic elements from which one could build up a world. Their conception of the physical elements did not change dramatically for nearly 2,000 years.
Then, about 300 years ago, Sir Isaac Newton introduced the idea that all matter exists at points called particles. One hundred fifty years after that, James Clerk Maxwell introduced the electromagnetic wave – the underlying and often invisible form of magnetism, electricity and light. The particle served as the building block for mechanics and the wave for electromagnetism, and the public settled on the particle and the wave as the two building blocks of all matter.
This was a vast improvement over the ancient Greeks' five elements, but was still flawed. In a famous series of experiments, known as the double-slit experiments, light sometimes acts like a particle and at other times acts like a wave. And while the theories and math of waves and particles allow scientists to make incredibly accurate predictions about the universe, the rules break down at the largest and tiniest scales.
Einstein proposed a remedy in his theory of general relativity. Using the mathematical tools available to him at the time, Einstein was able to better explain certain physical phenomena and also resolve a longstanding paradox relating to inertia and gravity. But instead of improving on particles or waves, he eliminated them as he proposed the warping of space and time.
Using newer mathematical tools, my colleague and I have demonstrated a new theory that may accurately describe the universe. Instead of basing the theory on the warping of space and time, we considered that there could be a building block that is more fundamental than the particle and the wave. Scientists understand that particles and waves are existential opposites: A particle is a source of matter that exists at a single point, and waves exist everywhere except at the points that create them. My colleague and I thought it made logical sense for there to be an underlying connection between them.
A new building block of matter can model both the largest and smallest of things – from stars to light.
Christopher Terrell, CC BY-ND
Flow and fragments of energy
Our theory begins with a new fundamental idea – that energy always "flows" through regions of space and time.
Think of energy as made up of lines that fill up a region of space and time, flowing into and out of that region, never beginning, never ending and never crossing one another.
Working from the idea of a universe of flowing energy lines, we looked for a single building block for the flowing energy. If we could find and define such a thing, we hoped we could use it to accurately make predictions about the universe at the largest and tiniest scales.
There were many building blocks to choose from mathematically, but we sought one that had the features of both the particle and wave – concentrated like the particle but also spread out over space and time like the wave. The answer was a building block that looks like a concentration of energy – kind of like a star – having energy that is highest at the center and that gets smaller farther away from the center.
Much to our surprise, we discovered that there were only a limited number of ways to describe a concentration of energy that flows. Of those, we found just one that works in accordance with our mathematical definition of flow. We named it a fragment of energy. For the math and physics aficionados, it is defined as A = -⍺/r where ⍺ is intensity and r is the distance function.
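The fragment's defining function is simple enough to evaluate directly: the energy concentration is strongest at the center and falls off with distance. The intensity value below is chosen arbitrarily for illustration.

```python
# The fragment's defining function, A = -alpha / r: the concentration
# is deepest (most negative) at the center and flattens with distance.
# alpha = 1.0 is an arbitrary illustrative intensity.

def fragment(r: float, alpha: float = 1.0) -> float:
    """A = -alpha / r, for distance r > 0 from the fragment's center."""
    return -alpha / r

for r in (0.5, 1.0, 2.0, 10.0):
    print(f"r = {r:>4}: A = {fragment(r):+.2f}")
```

This 1/r fall-off is what lets the fragment behave like a concentrated particle up close and like a spread-out wave far away.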
Using the fragment of energy as a building block of matter, we then constructed the math necessary to solve physics problems. The final step was to test it out.
Back to Einstein, adding universality
General relativity was the first theory to accurately predict the slight rotation of Mercury's orbit. (Rainer Zenz via Wikimedia Commons)
More than 100 years ago, Einstein had turned to two legendary problems in physics to validate general relativity: the ever-so-slight yearly shift – or precession – in Mercury's orbit, and the tiny bending of light as it passes the Sun.
These problems were at the two extremes of the size spectrum. Neither wave nor particle theories of matter could solve them, but general relativity did. The theory of general relativity warped space and time in such a way as to cause the trajectory of Mercury to shift and light to bend in precisely the amounts seen in astronomical observations.
If our new theory was to have a chance at replacing the particle and the wave with the presumably more fundamental fragment, we would have to be able to solve these problems with our theory, too.
For the precession-of-Mercury problem, we modeled the Sun as an enormous stationary fragment of energy and Mercury as a smaller but still enormous slow-moving fragment of energy. For the bending-of-light problem, the Sun was modeled the same way, but the photon was modeled as a minuscule fragment of energy moving at the speed of light. In both problems, we calculated the trajectories of the moving fragments and got the same answers as those predicted by the theory of general relativity. We were stunned.
Our initial work demonstrated how a new building block is capable of accurately modeling bodies from the enormous to the minuscule. Where particles and waves break down, the fragment of energy building block held strong. The fragment could be a single potentially universal building block from which to model reality mathematically – and update the way people think about the building blocks of the universe.
Scientists at Washington University are patenting a new electrolyzer designed for frigid Martian water.
- Mars explorers will need more oxygen and hydrogen than they can carry to the Red Planet.
- Martian water may be able to provide these elements, but it is extremely salty.
- The new method can pull oxygen and hydrogen for breathing and fuel from Martian brine.
When people finally get to Mars, there are two things they're going to need lots of: oxygen and fuel. (Drinking water, too, but that's another story.) They'll need more than they could reasonably bring with them. Fortunately—we now know—there's plenty of water on Mars that could potentially serve as a source of oxygen and of fuel in the form of hydrogen that could get our explorers back home at mission's end.
On Earth, we can extract the elements from pure water using a process called electrolysis. On Mars, though, the water contains a fair amount of magnesium perchlorate — salt. It is too "dirty" for electrolysis, and that process also requires heat, an issue in Mars' frigid climate. Engineers at Washington University (WashU) in St. Louis may have the solution. They've developed and are in the process of patenting a method for extracting oxygen and hydrogen from water like Martian brine, and it works perfectly well in sub-zero temperatures.
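The yields involved follow from basic chemistry, not from the WashU device itself: electrolysis splits water as 2 H2O → 2 H2 + O2, so molar masses fix how much oxygen and hydrogen a given mass of water can provide.

```python
# The stoichiometry behind electrolysis: 2 H2O -> 2 H2 + O2.
# Basic chemistry only; this is not a model of the WashU electrolyzer.

M_H2O = 18.015   # molar masses in g/mol
M_H2 = 2.016
M_O2 = 31.998

def split_water(kg_water: float) -> tuple[float, float]:
    """Return (kg hydrogen, kg oxygen) from fully electrolyzing kg_water."""
    moles_h2o = kg_water * 1_000 / M_H2O
    kg_h2 = moles_h2o * M_H2 / 1_000          # one H2 per H2O
    kg_o2 = (moles_h2o / 2) * M_O2 / 1_000    # one O2 per two H2O
    return kg_h2, kg_o2

h2, o2 = split_water(10.0)  # 10 kg of Martian water
print(f"{h2:.2f} kg H2, {o2:.2f} kg O2")
```

Roughly 89 percent of the water's mass comes back as breathable oxygen, with the remaining 11 percent as hydrogen fuel.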
"Our Martian brine electrolyzer radically changes the logistical calculus of missions to Mars and beyond," says WashU's Vijay Ramani.
The system is described in a paper published in PNAS.
The WashU electrolyzer
The WashU electrolyzer—it has no snappy acronym yet—will not be the first device capable of extracting oxygen from Martian water. That honor goes to the Mars Oxygen In-Situ Resource Utilization Experiment, or MOXIE, which is en route to Mars onboard NASA's Perseverance rover. The rover was launched on July 30, 2020. It will arrive on February 18, 2021, and will perform high-temperature electrolysis to extract pure oxygen, but no hydrogen.
In addition to being able to capture hydrogen, the WashU system can even do a better job with oxygen than MOXIE can, extracting 25 times as much from the same amount of water.
The new system has no problem with Mars' magnesium perchlorate-laced water. On the contrary, the researchers say the salt ultimately makes their system work better: such high concentrations keep water from freezing on so cold a planet by lowering the liquid's freezing temperature to -60 °C. Ramani adds that it may "also improve the performance of the electrolyzer system by lowering the electrical resistance."
Cold itself is no issue for the WashU system. It's been tested in a sub-zero (-33 °F, or -36 °C) environment that simulates Mars'.
"Our novel brine electrolyzer incorporates a lead ruthenate pyrochlore anode developed by our team in conjunction with a platinum on carbon cathode," explains Ramani. He adds, "These carefully designed components coupled with the optimal use of traditional electrochemical engineering principles has yielded this high performance."
"This technology is equally useful on Earth, where it opens up the oceans as a viable oxygen and fuel source," Ramani notes. His colleagues foresee potential applications such as producing oxygen in deep-sea habitats with ample water available, such as underwater research facilities and submarines.
The study's joint first author Pralay Gayen says that "having demonstrated these electrolyzers under demanding Martian conditions, we intend to also deploy them under much milder conditions on Earth to utilize brackish or salt water feeds to produce hydrogen and oxygen, for example, through seawater electrolysis."
A mineral made in a Kamchatka volcano may hold the answer to cheaper batteries, find scientists.
- Russian scientists discover a new mineral in the volcanic area of Kamchatka in the country's far east.
- The mineral, dubbed "petrovite," could serve as cathode material for sodium-ion batteries.
- Batteries based on salt would be cheaper to produce than lithium-ion batteries.
Researchers from St. Petersburg University in Russia have found a beautiful new mineral species called "petrovite," created in the volcanoes of the remote Kamchatka region in the country's far east.
The research team that found petrovite was headed by crystallography professor Stanislav Filatov, who has studied the minerals of Kamchatka for over 40 years. The area offers amazing mineralogical diversity, with dozens of new minerals found there in recent years, according to the university's press release.
Specifically, Filatov focused his attention on scoria (or cinder) cone volcanoes and lava flows formed after the eruptions of the Tolbachik Volcano in 1975-1976 and 2012-2013.
Excited Russian scientists at the edge of the volcanic area in Kamchatka where the mineral was found.
Credit: St. Petersburg University / Filatov
Petrovite, the blue and green mineral Filatov's team discovered, has the chemical formula Na10CaCu2(SO4)8 and contains oxygen, sodium, sulphur, and copper atoms in a porous framework. "The copper atom in the crystal structure of petrovite has an unusual and very rare coordination of seven oxygen atoms," explained Filatov.
The scientists think its structure of voids connected by channels, through which small sodium atoms can pass, holds potential for ionic conductivity. The mineral may be adaptable as cathode material in sodium-ion batteries. Due to the abundance of salt, sodium-ion batteries could be a very inexpensive alternative to the lithium-ion batteries commonly found in many devices today.
"At present, the biggest problem for this use is the small amount of a transition metal – copper – in the crystal structure of the mineral," added Filatov. "It might be solved by synthesizing a compound with the same structure as petrovite in the laboratory."
Crystal structure displaying sodium migration pathways.
Credit: Filatov et al., Mineralogical Magazine, 2020
The mineral was named "petrovite" not in honor of (as you might first guess) Peter the Great, the founder of St. Petersburg, but in recognition of Professor Tomas Petrov, a crystallographer at the university. He was part of the team that was first in the world to synthetically grow malachite.
Besides researchers from St. Petersburg University, other Russian scientists involved came from the Institute of Volcanology and Seismology of the Far Eastern Branch of the Russian Academy of Sciences, and the Grebenshchikov Institute of Silicate Chemistry.
Check out the new study published in Mineralogical Magazine.