We are likely to see the first humans walk on Mars this decade.
- Space agencies have successfully sent three spacecraft to Mars this year.
- The independent missions occurred at around the same time because Earth and Mars were particularly close to each other last summer, providing an opportune time to launch.
- SpaceX says it hopes to send a crewed mission to Mars by 2026, while the U.S. and China aim to land humans on the planet in the 2030s.
Spacecraft from three of the world's space agencies reached Mars this year.
In February, the United Arab Emirates' Hope space probe entered Martian orbit, where it is studying the planet's weather cycles. That same month, NASA's Perseverance rover touched down on Mars, where it will soon begin collecting rock samples that could contain signs of ancient life. And in May, China successfully landed its Zhurong rover on the Martian surface, becoming only the second nation ever to do so.
All three missions launched in the summer of 2020. The timing was no coincidence: roughly once every 26 months, Earth and Mars come especially close together as Mars approaches opposition, the point in the planets' 780-day synodic cycle when the Earth-Mars distance is smallest. It is an opportune window to send spacecraft to Mars.
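That 780-day cycle falls straight out of the two planets' orbital periods. A quick back-of-the-envelope check (approximate period values assumed):

```python
# Synodic period: time between successive close approaches of Earth and Mars.
# Approximate sidereal orbital periods, in days.
EARTH_PERIOD = 365.25
MARS_PERIOD = 687.0

# Earth "laps" Mars at the difference of their angular rates:
# 1/S = 1/P_earth - 1/P_mars
synodic = 1 / (1 / EARTH_PERIOD - 1 / MARS_PERIOD)
print(f"Synodic period: {synodic:.0f} days")  # about 780 days, i.e. ~26 months
```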
The handful of spacecraft currently exploring the Martian surface and atmosphere are scheduled to conduct their experiments for periods ranging from months to years. Some even plan to collect materials to return to Earth. For example, NASA's Perseverance will store its rock samples in protective tubes and leave them behind for a smaller "fetch rover" to pick up on a future mission.
Photo of the Martian surface taken by the Perseverance rover. Credit: NASA/JPL-Caltech
If all goes well, an Airbus spacecraft dubbed the Earth Return Orbiter (ERO) will carry the samples back to Earth in 2031. It would be the first time a space mission has returned Martian matter to Earth. But before the decade's end, space agencies have some other missions that aim to study the Red Planet.
Europe & Russia
NASA is not the only space agency aiming to find evidence of life on the Red Planet. In 2023, Roscosmos and the European Space Agency plan to land their Rosalind Franklin rover on the Martian surface, where it will drill into rock and analyze soil composition for signs of past — or possibly present — alien life.
The joint mission is part of a long-term Mars project that began in 2016. This second phase was initially planned for 2020, but due in part to the COVID-19 pandemic, the space agencies decided to postpone the launch to 2022.
"We want to make ourselves 100% sure of a successful mission. We cannot allow ourselves any margin of error. More verification activities will ensure a safe trip and the best scientific results on Mars," said ESA Director General Jan Wörner.
Japan
In 2022, the Japanese Aerospace Exploration Agency (JAXA) plans to send to Mars its TEREX lander, which will "precisely measure the amount of water molecules and oxygen molecules, and search for water resources and the possibility of life on Mars," JAXA wrote.
In 2024, JAXA also plans to launch a uniquely bold interplanetary mission that will involve sending a probe to orbit Mars, landing on the Martian moon Phobos, collecting surface samples, and then returning those samples to Earth in 2029. JAXA says the mission has two main objectives: (1) to investigate whether the Martian moons are captured asteroids or fragments that coalesced after a giant impact with Mars; and (2) to clarify the mechanisms controlling the surface evolution of the Martian moons and Mars.
China
Following the successful landing of its Zhurong rover this year, China released a roadmap of its plans for additional Mars voyages. The first is an uncrewed mission scheduled for 2030, with crewed missions planned for 2033, 2035, 2037, and 2041. As the International Space Station project is coming to a close, China is in the process of building its own space station; earlier this year it launched into orbit the first part of its station, which will take 10 more missions to assemble.
SpaceX
Elon Musk's California-based aerospace company has its sights set on two Mars voyages: a cargo-only mission in 2022 and a crewed mission by 2026. The crewed mission would involve building a propellant depot and preparing a site for future crewed flights. Getting to Mars will first require an orbital test of SpaceX's Starship rocket, which the company hopes to conduct this year.
Regarding the long-term future of humans on the Red Planet, Musk once told Ars Technica:
"I'll probably be long dead before Mars becomes self-sustaining. But I'd like to at least be around to see a bunch of ships land on Mars."
India
In 2014, the Indian Space Research Organisation executed its first interplanetary trip with its Mars Orbiter Mission. It marked the first time an Asian nation reached Martian orbit and the first time any nation successfully reached the Red Planet on its maiden voyage. India has plans for a follow-up Mars Orbiter Mission 2, but it remains unclear when that will occur and what the mission will entail.
In February, the chief of the Indian Space Research Organisation said the nation would only launch a Mars mission after Chandrayaan-3, India's upcoming mission to the Moon, which is expected to launch in 2022.
The ethical debate over zoos is going to grow louder. There might be a solution that involves robots.
- Zoos present a dilemma. On the one hand, they benefit conservation and research; on the other hand, placing animals (particularly intelligent ones) in captivity is ethically questionable.
- The more we learn about animals — especially how advanced or intelligent they are — the louder the debate will grow surrounding their captivity.
- Could zoos of the future feature realistic robots in place of animals?
Video: How robots could end animal captivity in zoos and marine parks | Just Might Work
In 1842, the Zoological Society of London opened the doors of London Zoo to a very special guest: Queen Victoria. London Zoo is the oldest scientific zoo in the world, and the Zoological Society was anxious to see what the most powerful person in the world would make of its rhinos, elephants, and quaggas (a now-extinct species of zebra). It did not go well.
While most of the tour went swimmingly, it all turned sour when Queen Victoria saw Jenny the orangutan. This hulking beast, one of the most intelligent of the primates, was the first of its kind to be seen in Europe. Watching Jenny's deliberate movements and her remarkable range of expressions, the Queen found her "frightful, painfully, disagreeably human." Victoria was quite content to see herd animals and tiny critters, but the prospect of large, intelligent life caged up for her amusement? She was not amused.
The argument against zoos
Queen Victoria's qualms are not uncommon. Zoos make many of us uneasy. No matter how shiny a ribbon we put on it, ultimately we go to the zoo to derive pleasure from the captivity of animals that were never meant to be behind bars. No matter how large a lion's enclosure, how regularly the penguins are fed, or how attentively a sick giraffe is tended, the fact is that we go to zoos to enjoy the show animals provide us. We reduce them to objects for our enjoyment.
It seems that most people only go to the zoo because it's a fun day out — sort of like an amusement park, except the animals are real rather than teenagers in giant costumes. Few seem to go for a truly educational experience. Instead, they gawp and point.
Philosophers such as Aristotle and David Hume long argued what modern science needed very little effort to prove: animals think and feel. Monkeys experience pain, wallabies nurture their young, and stoats can lay traps for their prey. There is intelligence, sentience, and emotion in the animal world. Is it ethical to lock up creatures like this?
The argument for zoos
However, most zoos today, especially in the developed world, function as massive, well-funded centers for research. Furthermore, they do educate and inspire generations of new conservationists and zoologists, even if that's a small minority of the customers who attend.
Zoos also do their best to minimize the pain and death of their animals, a rather tangible benefit that these animals would not receive in the wild. In a zoo, a zebra gets hay on the menu; in the wild, the zebra itself is on the menu. Maybe captivity isn't so bad, after all.
The biggest and most reputable zoos and aquariums in the world collectively fund over 2,500 conservation projects across more than 100 countries to the tune of $160 million, providing experts with the cash they need to do their work. It would be naïve to suggest that funding on this scale could be matched by public service announcements or well-produced viral videos alone. In this light, could zoos be seen as the lesser of two evils — a form of collateral damage for the greater good?
This same logic applies to the repulsive idea of trophy hunting. Most of us are horrified by it, but if some rich guy pays hundreds of thousands of dollars to a lion conservation project, would that not save far more lions in the long run than the one lion he goes on to shoot?
The issue we face today is the same one that Queen Victoria called out in 1842. The more we learn about animal intelligence, the worse we feel about keeping animals in zoos. This is particularly true for mammals like primates, dolphins, and whales.
There may be a solution. A company in California created a robot dolphin so realistic that visitors did not know they were watching a robot. (See video above.) Could something like this keep the upside of zoos while eliminating the downside?
The Rijksmuseum employed an AI to repaint lost parts of Rembrandt's "The Night Watch." Here's how they did it.
- In 1715, Amsterdam's Town Hall sliced off all four outer edges of Rembrandt's priceless masterpiece so that it would fit on a wall.
- Neural networks were used to fill in the missing pieces.
- An unprecedented collaboration between man and machine is now on display at the Rijksmuseum.
Robert Erdmann, a senior scientist working for the Rijksmuseum, cannot help but smile when I ask him to explain — in as much detail as possible — how exactly he used artificial intelligence to recreate long-lost portions of Rembrandt van Rijn's most famous painting, The Night Watch (1642). "Most people just want the elevator pitch," he tells me over Zoom.
The Night Watch is a mammoth of a painting, and it used to be even bigger. In 1715, it came into the possession of the bureaucrats in charge of Amsterdam's Town Hall. In order to fit it on their wall, they sliced off all four outer edges of Rembrandt's priceless masterpiece, inadvertently creating the compromised version we know today.
Rembrandt's "The Night Watch," with the missing edges shown in black. Credit: Courtesy of Robert Erdmann / Rijksmuseum
The missing pieces of "The Night Watch" were never recovered, but we know what they looked like thanks to Gerrit Lundens, a contemporary of Rembrandt who copied the painting when it was still complete. These missing sections depict the top of the arch, a balustrade at the bottom, and two soldiers of Frans Banninck Cocq's militia company who stood at the far left.
Though the absence of these elements does not make "The Night Watch" any less impressive, their presence greatly alters the painting's look and feel. The balustrade emphasizes the company's movement forward. Together, the four missing pieces shift the principal figures — Cocq and Willem van Ruytenburch — to the right, creating a more compelling composition.
Copy of "The Night Watch" by Gerrit Lundens. Credit: Courtesy of Robert Erdmann / Rijksmuseum
As part of Operation Night Watch, a multimillion-dollar restoration mission, the Rijksmuseum set out to recreate these missing pieces of the painting to show visitors The Night Watch as Rembrandt had originally constructed it. One easy way to do this would be to upload the smaller Lundens copy into Photoshop, blow it up by a factor of five, print it out, and call it a day.
Easy, but far from adequate. As Erdmann puts it: "There's nothing wrong with using an artist like that. However, the final product would still contain traces of that artist's own style." For Erdmann, the only viable solution was to create a series of neural networks — software that mimics the human brain through the use of artificial neurons — to transform the Lundens copy into an "original" Rembrandt.
Humans, unlike computers, aren't able to make perfect copies. Faithful though Lundens' painting is — especially in its visual detail, for example, the number of buttons on a coat, plumes on a feather, or engravings on a halberd — it still contains myriad minuscule differences that prevented Erdmann from simply copy-pasting it onto the original.
Perspective was the first and arguably most important item on Erdmann's list. "The geometric correspondence is pretty good at the bottom of the copy," he says. "At the top, that correspondence starts to fall apart; the composition looks stretched out, supposedly because Lundens was unable to reach the top of the painting to get its precise measurements."
Lundens' copy, adjusted for perspective by the AI. Credit: Courtesy of Robert Erdmann / Rijksmuseum
After creating a neural network that could identify corresponding elements in both versions of The Night Watch — from faces and hands to clothing and weapons — Erdmann made a second neural network that could stretch, rotate, foreshorten, compress, and decompress the Lundens copy so that its measurements matched the Rembrandt original as closely as possible.
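The warps described here (stretching, rotating, compressing) are essentially affine transformations. As a generic illustration of how such a transform acts on a set of 2D control points (a toy sketch, not Erdmann's actual registration code):

```python
import math

def affine_transform(points, scale_x=1.0, scale_y=1.0, angle=0.0, tx=0.0, ty=0.0):
    """Apply scale, then rotation, then translation to 2D points.

    A toy stand-in for the warps (stretch, rotate, compress) used to
    align control points in one image with those in another.
    """
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    out = []
    for x, y in points:
        x, y = x * scale_x, y * scale_y                      # stretch / compress
        x, y = x * cos_a - y * sin_a, x * sin_a + y * cos_a  # rotate
        out.append((x + tx, y + ty))                         # translate
    return out

# Example: compress the vertically "stretched out" top of a copy by 20%.
top_points = [(0.0, 100.0), (50.0, 100.0)]
print(affine_transform(top_points, scale_y=0.8))  # [(0.0, 80.0), (50.0, 80.0)]
```

In practice, registration software fits these parameters, often region by region, so that corresponding points in the two images line up.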
According to Erdmann, this step was "a guide to where we should place the figures on the left, because they need to be consistent with the extrapolation from the original Night Watch." Aside from aligning the two paintings, Erdmann's adjustments also transformed the facial structure of figures like Cocq, bringing them closer to Rembrandt's expert rendering.
Detail of the Lundens copy before perspectival adjustments. Credit: Courtesy of Robert Erdmann / Rijksmuseum
Detail of the Lundens copy after perspectival adjustments. Credit: Courtesy of Robert Erdmann / Rijksmuseum
Just as a painter must tone their canvas before working on composition and color, so too did Erdmann have to get the dimensions right before moving on to the third and final stage of his coding process. The next part of the pipeline involved — to paraphrase his elevator pitch — sending the artificial intelligence algorithm to art school.
"Not unlike how you might translate a text from Dutch to English, we wanted to see if we could transform Lundens' painterly style and palette into Rembrandt's," he explains, comparing the learning curve to a quiz. To educate it, the AI was given random tiles from the Lundens copy and asked to render the tiles in the style of Rembrandt.
As with any pedagogical situation, Erdmann evaluated the AI's efforts with a corresponding grade. The closer its output matched the contents of the original Night Watch, the higher the grade it received. When grading, Erdmann considered things like color, texture, and representation (i.e., how well does this frowning face resemble a frowning face, or this sword an actual sword?).
"Once you've defined what makes a good copy, you can train the network on thousands and thousands of these tiles," Erdmann goes on. The 265 gigabytes of stored training attempts document how quickly the quality improved. Within less than a day, the error margin between the AI and the real Rembrandt grew so small that it became insignificant; the training was complete.
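The tile-by-tile grading loop Erdmann describes can be sketched abstractly. Everything below is a toy stand-in: the fake_model_output function and the pixel-difference score are invented for illustration, since the article does not spell out the actual network or loss.

```python
import random

def grade(output_tile, target_tile):
    """Score a generated tile against the corresponding Rembrandt tile.

    Toy stand-in for the real loss: mean absolute pixel difference,
    mapped so a perfect match scores 1.0 and larger errors score lower.
    """
    diffs = [abs(o - t) for o, t in zip(output_tile, target_tile)]
    return 1.0 - sum(diffs) / len(diffs)

# Toy "tiles": flat lists of pixel intensities in [0, 1].
target = [0.2, 0.4, 0.6, 0.8]

def fake_model_output(noise):
    """Hypothetical model whose output drifts toward the target tile as
    training reduces its error (modeled here simply as shrinking noise)."""
    return [min(1.0, max(0.0, t + random.uniform(-noise, noise))) for t in target]

random.seed(0)
# As training proceeds and the error shrinks, the grade climbs toward 1.0.
for noise in (0.5, 0.1, 0.01):
    print(f"noise={noise}: grade={grade(fake_model_output(noise), target):.3f}")
```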
Lundens copy when adjusted for perspective and Rembrandt's style by AI. Credit: Courtesy of Robert Erdmann / Rijksmuseum
Along the way, the AI had developed a thorough understanding of what made Rembrandt Rembrandt. When translating Lundens' copy, it used a less saturated color palette and thicker, sketchier brushstrokes. It even adopted the painter's signature use of chiaroscuro — a technique involving sharp contrasts between light and shadow.
Then it was time for the final exam. Using the knowledge gained from copying Rembrandt, Erdmann ordered the AI to transform the four outer edges of the Lundens copy — removed from the original Night Watch — into Rembrandt's signature style. The result, an unprecedented collaboration between man and machine, is now on display in the Eregalerij of the Rijksmuseum.
Detail of the completed "Night Watch." The two figures on the left were added from the adjusted Lundens copy. Credit: Courtesy of Robert Erdmann / Rijksmuseum
The missing pieces, resuscitated by AI, were printed onto canvas and varnished so that they had a similar gloss to the rest of the painting. The pieces were then attached to metal plates, which were placed in front of the original Night Watch at a distance of less than one centimeter, thus creating an optical illusion for visitors without actually touching Rembrandt's work.
Conservation science is evolving rapidly, yet the achievements of people like Erdmann are still eclipsed by the artistic genius of the painters whose work they try to preserve. That is a shame, because Erdmann's software can be just as inventive as Rembrandt's brushwork. At the very least, Erdmann's problem-solving skills would have made the master proud.
It marks a breakthrough in using gene editing to treat diseases.
This article was originally published by our sister site, Freethink.
For the first time, researchers appear to have effectively treated a genetic disorder by directly injecting a CRISPR therapy into patients' bloodstreams — overcoming one of the biggest hurdles to curing diseases with the gene editing technology.
The therapy appears to be astonishingly effective, editing nearly every cell in the liver to stop a disease-causing mutation.
The challenge: CRISPR gives us the ability to correct genetic mutations, and given that such mutations are responsible for more than 6,000 human diseases, the tech has the potential to dramatically improve human health.
One way to use CRISPR to treat diseases is to remove affected cells from a patient, edit out the mutation in the lab, and place the cells back in the body to replicate — that's how one team functionally cured people with the blood disorder sickle cell anemia, editing and then infusing bone marrow cells.
Bone marrow is a special case, though, and many mutations cause disease in organs that are harder to fix.
Another option is to insert the CRISPR system itself into the body so that it can make edits directly in the affected organs (that's only been attempted once, in an ongoing study in which people had a CRISPR therapy injected into their eyes to treat a rare vision disorder).
Injecting a CRISPR therapy right into the bloodstream has been a problem, though, because the therapy has to find the right cells to edit. An inherited mutation will be in the DNA of every cell of your body, but if it only causes disease in the liver, you don't want your therapy being used up in the pancreas or kidneys.
A new CRISPR therapy: Now, researchers from Intellia Therapeutics and Regeneron Pharmaceuticals have demonstrated for the first time that a CRISPR therapy delivered into the bloodstream can travel to desired tissues to make edits.
"While these are early data, they show us that we can overcome one of the biggest challenges with applying CRISPR clinically so far, which is being able to deliver it systemically and get it to the right place," said one of the researchers behind the trial.
What they did: During a phase 1 clinical trial, Intellia researchers injected a CRISPR therapy dubbed NTLA-2001 into the bloodstreams of six people with a rare, potentially fatal genetic disorder called transthyretin amyloidosis.
The livers of people with transthyretin amyloidosis produce a destructive protein, and the CRISPR therapy was designed to target the gene that makes the protein and halt its production. After just one injection of NTLA-2001, the three patients given a higher dose saw their levels of the protein drop by 80% to 96%.
A better option: The CRISPR therapy produced only mild adverse effects and did lower the protein levels, but we don't know yet if the effect will be permanent. It'll also be a few months before we know if the therapy can alleviate the symptoms of transthyretin amyloidosis.
If everything goes as hoped, though, NTLA-2001 could one day offer a better treatment option for transthyretin amyloidosis than a currently approved medication, patisiran, which only reduces toxic protein levels by 81% and must be injected regularly.
Looking ahead: Even more exciting than NTLA-2001's potential impact on transthyretin amyloidosis, though, is the knowledge that we may be able to use CRISPR injections to treat other genetic disorders that are difficult to target directly, such as heart or brain diseases.
"This is a wonderful day for the future of gene-editing as a medicine," Fyodor Urnov, a UC Berkeley professor of genetics, who wasn't involved in the trial, told NPR. "We as a species are watching this remarkable new show called: our gene-edited future."
Gain-of-function mutation research may help predict the next pandemic — or, critics argue, cause one.
This article was originally published on our sister site, Freethink.
"I was intrigued," says Ron Fouchier, in his rich, Dutch-accented English, "in how little things could kill large animals and humans."
It's late evening in Rotterdam as darkness slowly drapes our Skype conversation.
This fascination led the silver-haired virologist to venture into controversial gain-of-function mutation research — work by scientists that adds abilities to pathogens, including experiments that focus on SARS and MERS, the coronavirus cousins of the COVID-19 agent.
If we are to avoid another influenza pandemic, we will need to understand the kinds of flu viruses that could cause it. Gain-of-function mutation research can help us with that, says Fouchier, by telling us what kind of mutations might allow a virus to jump across species or evolve into more virulent strains. It could help us prepare and, in doing so, save lives.
Many of his scientific peers, however, disagree; they say his experiments are not worth the risks they pose to society.
A virus and a firestorm
The Dutch virologist, based at Erasmus Medical Center in Rotterdam, caused a firestorm of controversy about a decade ago, when he and Yoshihiro Kawaoka at the University of Wisconsin-Madison announced, in two separate experiments, that they had successfully mutated H5N1, a strain of bird flu, to pass through the air between ferrets. Ferrets are considered the best flu models because their respiratory systems react to the flu much like ours do.
The mutations that gave the virus its ability to be transmitted through the air are gain-of-function (GOF) mutations. GOF research is when scientists purposefully cause mutations that give viruses new abilities in an attempt to better understand the pathogen. In Fouchier's experiments, the goal was to see whether H5N1 could be made airborne transmissible so that potentially dangerous strains could be caught early and new treatments and vaccines developed ahead of time.
The problem is: their mutated H5N1 could also cause a pandemic if it ever left the lab. In Science magazine, Fouchier himself called it "probably one of the most dangerous viruses you can make."
Just three special traits
Recreated 1918 influenza virions. Credit: Cynthia Goldsmith / CDC / Dr. Terrence Tumpey / Public domain via Wikipedia
For H5N1, Fouchier identified five mutations that could cause three special traits needed to trigger an avian flu to become airborne in mammals. Those traits are (1) the ability to attach to cells of the throat and nose, (2) the ability to survive the colder temperatures found in those places, and (3) the ability to survive in adverse environments.
A minimum of three mutations may be all that's needed for a virus in the wild to make the leap through the air in mammals. If it does, it could spread. Fast.
Fouchier calculates the odds of this happening to be fairly low, for any given virus. Each mutation has the potential to cripple the virus on its own. They need to be perfectly aligned for the flu to jump. But these mutations can — and do — happen.
"In 2013, a new virus popped up in China," says Fouchier. "H7N9."
H7N9 is another kind of avian flu, like H5N1. The CDC considers it the flu strain most likely to cause a pandemic. In the human outbreaks that occurred between 2013 and 2015, it killed a staggering 39% of people with confirmed infections; if H7N9 were to acquire all five of the gain-of-function mutations Fouchier had identified in his work with H5N1, it could make COVID-19 look like a kitten in comparison.
H7N9 had three of those mutations in 2013.
Gain-of-function mutation: creating our fears to (possibly) prevent them
Flu viruses are basically eight pieces of RNA wrapped up in a ball. To create the gain-of-function mutations, the researchers used a DNA template for each piece, called a plasmid. Making a single mutation in a plasmid is easy, Fouchier says, and it's commonly done in genetics labs.
If you insert all eight plasmids into a mammalian cell, they hijack the cell's machinery to create flu virus RNA.
"Now you can start to assemble a new virus particle in that cell," Fouchier says.
One infected cell is enough to grow many new virus particles — from one to a thousand to a million; viruses are replication machines. And because they mutate so readily during replication, the new viruses have to be checked to make sure they carry only the mutations the lab introduced.
The virus then goes into the ferrets, passing through them to generate new viruses until, by the 10th generation, it infects ferrets through the air. By analyzing the virus's genes in each generation, the researchers can figure out exactly which five mutations lead to H5N1 bird flu becoming airborne between ferrets.
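Pinpointing the mutations in each generation boils down to comparing that generation's sequence with the parent strain's. A minimal sketch of that comparison, using made-up toy sequences rather than real H5N1 data:

```python
def find_mutations(reference, variant):
    """List point mutations as (position, ref_base, new_base).

    Toy comparison of equal-length sequences; real pipelines align the
    sequences first and also handle insertions and deletions.
    """
    return [
        (i, r, v)
        for i, (r, v) in enumerate(zip(reference, variant))
        if r != v
    ]

reference = "ATGCGTACGT"      # parent strain (toy data)
generation10 = "ATGAGTACCT"   # airborne-transmissible generation (toy data)
print(find_mutations(reference, generation10))  # [(3, 'C', 'A'), (8, 'G', 'C')]
```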
And, potentially, people.
"This work should never have been done"
The potential for the modified H5N1 strain to cause a human pandemic if it ever slipped out of containment has sparked sharp criticism and no shortage of controversy. Rutgers molecular biologist Richard Ebright summed up the far end of the opposition when he told Science that the research "should never have been done."
"When I first heard about the experiments that make highly pathogenic avian influenza transmissible," says Philip Dormitzer, vice president and chief scientific officer of viral vaccines at Pfizer, "I was interested in the science but concerned about the risks of both the viruses themselves and of the consequences of the reaction to the experiments."
In 2014, in response to researchers' fears and some lab incidents, the federal government imposed a moratorium on all GOF research, freezing the work.
Some scientists believe gain-of-function mutation experiments could be extremely valuable in understanding the potential risks we face from wild influenza strains, but only if they are done right. Dormitzer says that a careful and thoughtful examination of the issue could lead to processes that make gain-of-function mutation research with viruses safer.
But in the meantime, the moratorium stifled some research into influenzas — and coronaviruses.
The National Academy of Sciences whipped up some new guidelines, and in December of 2017, the call went out: GOF studies could apply to be funded again. A panel formed by the Department of Health and Human Services (HHS) would review applications and decide which studies to fund.
As of right now, only Kawaoka and Fouchier's studies have been approved, getting the green light last winter. They are resuming where they left off.
Pandora's locks: how to contain gain-of-function flu
Here's the thing: the work is indeed potentially dangerous. But there are layers upon layers of safety measures at both Fouchier's and Kawaoka's labs.
"You really need to think about it like an onion," says Rebecca Moritz of the University of Wisconsin-Madison. Moritz is the select agent responsible for Kawaoka's lab. Her job is to ensure that all safety standards are met and that protocols are created and drilled; basically, she's there to prevent viruses from escaping. And this virus has some extra-special considerations.
The specific H5N1 strain Kawaoka's lab uses is on a list called the Federal Select Agent Program. Pathogens on this list need to meet special safety considerations. The GOF experiments have even more stringent guidelines because the research is deemed "dual-use research of concern."
"Dual-use research of concern is legitimate research that could potentially be used for nefarious purposes," Moritz says. At one time, there was debate over whether Fouchier and Kawaoka's work should even be published.
While the insights they found would help scientists, they could also be used to create bioweapons. The papers had to pass through a review by the U.S. National Science Board for Biosecurity, but they were eventually published.
Intentional biowarfare and terrorism aside, the gain-of-function flu must also be contained against accidents. At Wisconsin, that begins with the building itself. The labs are specially designed to contain pathogens (BSL-3 agricultural, for the inside-baseball types).
They are essentially an airtight cement bunker, negatively pressurized so that air will only flow into the lab in case of a breach — keeping the viruses pushed in. And all air entering and leaving the lab passes through multiple HEPA filters.
Inside the lab, researchers wear special protective equipment, including respirators. Anyone entering or leaving the lab must go through an intricate dance of stripping and donning various articles of clothing and passing through showers and decontamination.
And the most dangerous parts of the experiment are performed inside primary containment: for example, a biocontainment cabinet, which acts like an extra high-security box inside the already highly secure lab (kind of like the radiation glove box Homer Simpson works in during the opening credits).
The Federal Select Agent program can come and inspect you at any time with no warning, Moritz says. At the bare minimum, the whole thing gets shaken down every three years.
There are numerous potential dangers — a vial of virus gets dropped; a needle prick; a ferret bite — but Moritz is confident that the safety measures and guidelines will prevent any catastrophe.
"The institution and many people behind the institution are working to make sure this research can be done safely and securely," Moritz says.
No human harm has come of the work yet, but the potential for it is real.
"Nature will continue to do this"
They were dead on the beaches.
In the spring of 2014, another type of bird flu, H10N7, swept through the harbor seal population of northern Europe. Starting in Sweden, the virus moved south and west, across Denmark, Germany, and the Netherlands. It is estimated that 10% of the entire seal population was killed.
The virus's evolution could be tracked through time and space, Fouchier says, as it progressed down the coast. Natural selection pushed through gain-of-function mutations in the seals, similarly to how H5N1 evolved to better jump between ferrets in his lab — his lab which, at the time, was shuttered.
"We did our work in the lab," Fouchier says, with a high level of safety and security. "But the same thing was happening on the beach here in the Netherlands. And so you can tell me to stop doing this research, but nature will continue to do this day in, day out."
Critics argue that the knowledge gained from the experiments is either nonexistent or not worth the risk; Fouchier counters that GOF experiments are the only way to learn crucial information about what makes a flu virus a pandemic candidate.
"If these three traits could be caused by hundreds of combinations of five mutations, then that increases the risk of these things happening in nature immensely," Fouchier says.
"With something as crucial as flu, we need to investigate everything that we can," Fouchier says, hoping to find "a new Achilles' heel of the flu that we can use to stop the impact of it."
Virtual reality continues to blur the line between the physical and the digital, and it will change our lives forever.
- Extended reality technologies — which include virtual reality, augmented reality, and mixed reality — have long captivated the public imagination, but have yet to become mainstream.
- Extended reality technologies are quickly becoming better and cheaper, suggesting they may soon become part of daily life.
- Over the long term, these technologies may usher in the "mirror world" — a digital layer "map" that lies atop the physical world and enables us to interact with internet-based technologies more seamlessly than ever.
What will the Disneyland of the future look like? | Hard Reset by Freethink www.youtube.com
Immersive technology aims to overlay a digital layer of experience atop everyday reality, changing how we interact with everything from medicine to entertainment. What that future will look like is anyone's guess. But immersive technology is certainly on the rise.
The extended reality (XR) industry — which includes virtual reality (VR), augmented reality (AR), and mixed reality (MR), a blend of virtual and physical spaces — is projected to grow from $43 billion in 2020 to $333 billion by 2025, according to a recent market forecast. Much of that growth will be driven by consumer technologies, such as VR video games, which are projected to be worth more than $90 billion by 2027, and AR glasses, which Apple and Facebook are currently developing.
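As a rough sanity check on that forecast, the implied annualized growth rate can be computed directly from the two figures cited above (the dollar amounts are the forecast's; the calculation here is just a sketch):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by a start and end value."""
    return (end_value / start_value) ** (1 / years) - 1

# XR market forecast cited above: $43B (2020) -> $333B (2025)
rate = cagr(43, 333, 5)
print(f"Implied CAGR: {rate:.1%}")  # roughly 50% per year
```

An implied compound growth rate of about 50% per year is aggressive even by tech-forecast standards, which is worth keeping in mind when reading the projections that follow.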
But other sectors are adopting immersive technologies, too. A 2020 survey found that 91 percent of businesses are currently using some form of XR or plan to use it in the future. The range of XR applications seems endless: Boeing technicians use AR when installing wiring in airplanes. H&R Block service representatives use VR to boost their on-the-phone soft skills. And KFC developed an escape-room VR game to train employees how to make fried chicken.
XR applications not only train and entertain; they also have the unique ability to transform how people perceive familiar spaces. Take theme parks, which are using immersive technology to add a new experiential layer to their existing rides, such as roller coasters where riders wear VR headsets. Some parks, like China's $1.5 billion VR Star Theme Park, don't have physical rides at all.
One of the most novel innovations in theme parks is Disney's Star Wars: Galaxy's Edge attraction, which has multiple versions: physical locations in California and Florida and a near-identical virtual replica within the "Tales from the Galaxy's Edge" VR game.
"That's really the first instance of anything like this that's ever been done, where you can get a deeper dive, and a somewhat different view, of the same location by exploring its digital counterpart," game designer Michael Libby told Freethink.
Libby now runs Worldbuildr, a company that uses game-engine software to prototype theme park attractions before construction begins. The prototypes provide a real-time VR preview of everything riders will experience during the ride. It raises the question: given that VR technology is constantly improving, will there come a point when there's no need for the physical ride at all?
Maybe. But probably not anytime soon.
"I think we're more than a few minutes from the future of VR," Sony Interactive Entertainment CEO Jim Ryan told the Washington Post in 2020. "Will it be this year? No. Will it be next year? No. But will it come at some stage? We believe that."
It could take years for XR to become mainstream. But that growth period is likely to be a brief chapter in the long history of XR technologies.
The evolution of immersive technology
The first crude example of XR technology came in 1838 when the English scientist Charles Wheatstone invented the stereoscope, a device through which people could view two images of the same scene but portrayed at slightly different angles, creating the illusion of depth and solidity. Yet it took another century before anything resembling our modern conception of immersive technology struck the popular imagination.
In 1935, the science fiction writer Stanley G. Weinbaum wrote a short story called "Pygmalion's Spectacles," which describes a pair of goggles that enables one to perceive "a movie that gives one sight and sound [...] taste, smell, and touch. [...] You are in the story, you speak to the shadows (characters) and they reply, and instead of being on a screen, the story is all about you, and you are in it."
The 1950s and 1960s saw some bold but crude forays into XR, such as the Sensorama, an "experience theater" that featured a movie screen complemented by fan-generated wind, a moving chair, and a machine that produced scents. There was also the Telesphere Mask, which packed most of the same features into a headset whose design was presciently similar to modern models.
The first functional AR device came in 1968 with Ivan Sutherland's The Sword of Damocles, a heavy headset through which viewers could see basic shapes and structures overlaid on the room around them. The 1980s brought interactive VR systems featuring goggles and gloves, like NASA's Virtual Interface Environment Workstation (VIEW), which let astronauts control robots from a distance using hand and finger movements.
1980's Virtual Reality - NASA Video youtu.be
That same technology led to new XR devices in the gaming industry, like Nintendo's Power Glove and Virtual Boy. But despite a ton of hype over XR in the 1980s and 1990s, these flashy products failed to sell. The technology was too clunky and costly.
In 2012, the gaming industry saw a more successful run at immersive technology when Oculus VR raised $2.4 million on Kickstarter to develop a VR headset. Unlike previous headsets, the Oculus model offered a 90-degree field of view, was priced reasonably, and relied on a personal computer for processing power.
In 2014, Facebook acquired Oculus for $2 billion, and the following years brought a wave of new VR products from companies like Sony, Valve, and HTC. The most recent market evolution has been toward standalone wireless VR headsets that don't require a computer, like the Oculus Quest 2, which last year received five times as many preorders as its predecessor did in 2019.
Also notable about the Oculus Quest 2 is its price: $299 — $100 cheaper than the first version. For years, market experts have said cost is the primary barrier to adoption of VR; the Valve Index headset, for example, starts at $999, and that price doesn't include the cost of games, which can cost $60 apiece. But as hardware gets better and prices get cheaper, immersive technology might become a staple in homes and industry.
Advancing XR technologies
Over the short term, it's unclear whether the recent wave of interest in XR technologies is just hype. But there's reason to think it's not. In addition to surging sales of VR devices and games, particularly amid the COVID-19 pandemic, Facebook's heavy investments in XR suggest there's plenty of room into which these technologies could grow.
A report from The Information published in March found that roughly 20 percent of Facebook personnel work in the company's AR/VR division called Facebook Reality Labs, which is "developing all the technologies needed to enable breakthrough AR glasses and VR headsets, including optics and displays, computer vision, audio, graphics, brain-computer interface, haptic interaction."
What would "breakthroughs" in XR technologies look like? It's unclear exactly what Facebook has in mind, but there are some well-known points of friction that the industry is working to overcome. For example, locomotion is a longstanding problem in VR games. Sure, some advanced systems — that is, ones that cost far more than $300 — include treadmill-like devices on which you move through the virtual world by walking, running, or tilting your center of gravity.
But for consumer-grade devices, the options are currently limited to using a joystick, walking in place, leaning forward, or pointing and teleporting. (There are also electronic boots that keep you in place as you walk, for what it's worth.) These solutions usually work fine, but they produce an inherent sensory contradiction: your avatar is moving through the virtual world while your body remains still. The locomotion problem is why most VR games don't require swift character movements and why designers often compensate by having the player sit in a cockpit or otherwise limiting the game environment to a confined space.
For AR, one key hurdle is fine-tuning the technology to ensure that the virtual content you see through, say, a pair of smart glasses is optically consistent with physical objects and spaces. Currently, AR often appears clunky, unrooted from the real world. Incorporating LiDAR (Light Detection and Ranging) into AR devices may do the trick. The futurist Bernard Marr elaborated on his blog:
"[LIDAR] is essentially used to create a 3D map of surroundings, which can seriously boost a device's AR capabilities. It can provide a sense of depth to AR creations — instead of them looking like a flat graphic. It also allows for occlusion, which is where any real physical object located in front of the AR object should, obviously, block the view of it — for example, people's legs blocking out a Pokémon GO character on the street."
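In practice, the occlusion Marr describes reduces to a per-pixel depth comparison: the virtual object is drawn only where it is closer to the camera than the real surface the LiDAR sensor measured. A minimal sketch of that logic (all names and values here are hypothetical, not any AR framework's actual API):

```python
def composite_pixel(virtual_depth, lidar_depth, virtual_color, camera_color):
    """Draw the virtual pixel only if nothing real sits in front of it.

    Depths are distances from the camera in meters; smaller means closer.
    A virtual_depth of None means no virtual content at this pixel.
    """
    if virtual_depth is not None and virtual_depth < lidar_depth:
        return virtual_color   # virtual object is nearer: it occludes reality
    return camera_color        # real surface is nearer (or no virtual content)

# A virtual character 3 m away, with a person standing 2 m away in front of it:
print(composite_pixel(3.0, 2.0, "character", "person"))  # prints "person"
# The same character against a street surface 5 m away:
print(composite_pixel(3.0, 5.0, "character", "street"))  # prints "character"
```

Real AR pipelines run this comparison for every pixel on the GPU, but the principle is the same: a dense depth map is what lets real legs block a Pokémon GO character instead of the character floating on top of them.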
Another broad technological upgrade to XR technologies, especially AR, is likely to be 5G, which will boost the transmission rate of wireless data over networks.
"The adoption of 5G will make a difference in terms of new types of content being able to be viewed by more people," Irena Cronin, CEO of Infinite Retina, a research and advisory firm that helps companies implement spatial computing technologies, said in a 2020 XR survey report. "5G is going to make a difference for more sophisticated, heavy content being viewed live when needed by businesses."
Beyond technological hurdles, the AR sector still has to answer some more abstract questions on the consumer side: From a comfort and style perspective, do people really want to walk around wearing smart glasses or other wearable AR tech? (The failure of Google Glass suggests people were not quite ready in 2014.) What is the value proposition of AR for consumers? How will companies handle the ethical dilemmas associated with AR technology, such as data privacy, motion sickness, and the potential safety hazards created by tinkering with how users see, say, a busy intersection?
Despite the hurdles, it seems likely that the XR industry will steadily — if clumsily — continue to improve these technologies, weaving them into more aspects of our personal and professional lives. The proof is in your pocket: Smartphones can already run AR applications that let you see prehistoric creatures, true-to-size IKEA furniture in your living room, navigation directions overlaid on real streets, paintings at the Vincent Van Gogh exhibit, and, of course, Pokémon. So, what's next?
The future of immersive experiences
When COVID-19 struck, it not only brought a surge in sales of XR devices and applications but also made a case for rethinking how workers interact in physical spaces. Zoom calls quickly became the norm for office jobs. But for some, prolonged video calls became annoying and exhausting; the term "Zoom fatigue" caught on and was even researched in a 2021 study published in Technology, Mind, and Behavior.
The VR company Spatial offered an alternative to Zoom. Instead of talking to 2D images of coworkers on a screen, Spatial virtually recreates office environments where workers — more specifically, their avatars — can talk and interact. The experience isn't perfect: your avatar, which is created by uploading a photo of yourself, looks a bit awkward, as do the body movements. But the experience is good enough to challenge the idea that working in a physical office is worth the trouble.
Cyberspace illustration | tampatra via Adobe Stock
That's probably the most relatable example of an immersive environment people may soon encounter. But the future is wide open. Immersive environments may also be used on a wide scale to:
- Conduct job interviews, potentially with gender- and race-neutral avatars to eliminate possibilities of discriminatory hiring practices
- Ease chronic pain
- Help people overcome phobias through exposure therapy
- Train surgeons to conduct complex procedures, which may be especially beneficial to doctors in nations with weaker healthcare systems
- Prepare inmates for release into society
- Educate students, particularly in ways that cut down on distractions
- Enable people to go on virtual dates
But the biggest transformation XR technologies are likely to bring us is a high-fidelity connection to the "mirror world." The mirror world is essentially a 1:1 digital map of our world, created by the fusion of all the data collected through satellite imagery, cameras, and other modeling techniques. It already exists in crude form. For example, if you needed directions on the street, you could open Google Maps AR, point your camera in a certain direction, and your screen would show you that Main Street is 223 feet in front of you. But the mirror world will likely become far more sophisticated than that.
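The distance readout in that example is ordinary geometry layered on map data: the great-circle distance between the user's GPS fix and a labeled point, computable with the standard haversine formula. A sketch of that calculation (the coordinates below are illustrative, and this is not Google's actual pipeline):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points (degrees)."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Two points about 0.001 degrees of latitude apart: roughly 111 m
print(round(haversine_m(40.7580, -73.9855, 40.7590, -73.9855)))
```

The harder part of the mirror world is not this arithmetic but anchoring the overlay: knowing precisely where the camera is pointing so the label stays pinned to the right street.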
Through the looking glass of AR devices, the outside world could be transformed in any number of ways. Maybe you are hiking through the woods and you notice a rare flower; you could leave a digital note suspended in the air so the next passerby can check it out. Maybe you encounter something like an Amazon Echo in public and, instead of it looking like a cylindrical tube, it appears as an avatar. You could be touring Dresden in Germany and choose to see a flashback representation of how the city looked after the bombings of WWII. You might also run into your friends — in digital avatar form — at the local bar.
Of course, this future poses no shortage of troubling questions, ranging from privacy and pollution from virtual advertisements to the currently impossible-to-answer psychological consequences of creating such an immersive environment. But despite all the uncertainties, the foundations of the mirror world are being built today.
As for what may lie beyond it? Ivan Sutherland, the creator of The Sword of Damocles, once described his idea of an "ultimate" immersive display:
"...a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal. With appropriate programming such a display could literally be the Wonderland into which Alice walked."