Since 1957, the world's space agencies have been polluting the space above us with countless pieces of junk, threatening our technological infrastructure and ability to venture deeper into space.
- Space debris is any human-made object that's orbiting Earth but no longer serves a useful function.
- When space debris collides with other space debris, it can create thousands more pieces of junk, a dangerous phenomenon known as the Kessler syndrome.
- Radical solutions are being proposed to fix the problem, some of which just might work. (See the video embedded toward the end of the article.)
In 1957, the Soviet Union launched a human-made object into orbit for the first time. It marked the dawn of the Space Age. But when Sputnik 1's batteries died and the aluminum satellite began lifelessly orbiting the planet, it marked the end of another era: the billions of years during which space was pristine.
Today, the space above Earth is the world's "largest garbage dump," according to NASA. It's littered with 8,000 tons of human-made junk, called space debris, left by space agencies over the past six decades.
The U.S. now tracks more than 25,000 pieces of space junk. And that's only the debris that ground-based radar technologies can track. The U.S. Space Surveillance Network estimates there could be more than 170 million pieces of space debris currently orbiting Earth, with the majority being tiny fragments smaller than 1 mm.
Space debris: Trashing a planet
Space debris includes all human-made objects, big and small, that are orbiting Earth but no longer serve a useful function. A brief inventory of known space junk includes: a spatula, a glove, a mirror, a bag filled with astronaut tools, spent rocket stages, stray bolts, paint chips, defunct spacecraft, and about 3,000 dead satellites — all of which are orbiting Earth at speeds of roughly 18,000 m.p.h.
By allowing space debris to accumulate unchecked, we could be building a prison that keeps us stranded on Earth for centuries.
Most space junk is floating in low Earth orbit (LEO), the region of space between roughly 100 and 1,200 miles in altitude. LEO is also where most of the world's 3,000 active satellites operate, powering our telecommunications, GPS technologies, and military operations.
"Millions of pieces of orbital debris exist in low Earth orbit (LEO) — at least 26,000 the size of a softball or larger that could destroy a satellite on impact; over 500,000 the size of a marble big enough to cause damage to spacecraft or satellites; and over 100 million the size of a grain of salt that could puncture a spacesuit," wrote NASA's Office of Inspector General, Office of Audits.
If LEO becomes polluted with too much space junk, it could become treacherous for spacecraft, threatening not only our modern technological infrastructure, but also humanity's ability to venture into space at all.
An outsized problem
Space debris of any size poses grave threats to spacecraft. But tiny, untrackable micro-debris presents an especially dreadful problem: A paint fragment chipped off a spacecraft might not seem dangerous, but it careens through space at nearly 10 times the speed of a bullet, packing enough energy to puncture an astronaut's suit, crack a window of the International Space Station, and potentially destroy satellites.
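A back-of-the-envelope calculation makes the danger concrete. The numbers below are illustrative, not NASA figures: a 1-gram paint fleck moving at a typical LEO orbital speed of about 7,800 m/s carries roughly 30,000 joules of kinetic energy, several times that of a rifle bullet.

```python
def kinetic_energy_joules(mass_kg: float, speed_m_s: float) -> float:
    """Classical kinetic energy: E = 1/2 * m * v^2."""
    return 0.5 * mass_kg * speed_m_s ** 2

# A 1-gram paint fleck at ~7,800 m/s, a typical LEO orbital speed
energy = kinetic_energy_joules(0.001, 7800)
print(f"{energy:,.0f} J")  # roughly 30,000 J
```

And because relative closing speeds between two orbiting objects can exceed orbital speed itself, the real impact energies can be far higher still.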
Impacts with space debris are common. During the Space Shuttle era, NASA replaced an average of one to two shuttle windows per mission "due to hypervelocity impacts (HVIs) from space debris." To be sure, some hypervelocity impacts come from natural micrometeoroids. But many are caused by human-made debris, like the fragment that struck the starboard payload bay radiator during the STS-115 flight in 2006.
"The debris penetrated both walls of the honeycomb structure, and the shock wave from the penetration created a crack in the rear surface of the radiator 6.8 mm long," NASA wrote. "Scanning electron microscopy and energy dispersive X-ray detection analysis of residual material around the hole and in the interior of the radiator shows that the impactor was a small fragment of circuit board material."
The European Space Agency notes that any fragment of space debris larger than a centimeter could shatter a spacecraft into pieces.
Impact chip on the ISS. (Credit: ESA)
To dodge space junk, the International Space Station (ISS) has to conduct "avoidance maneuvers" a couple of times every year. In 2014, for example, flight controllers raised the ISS's altitude by half a mile to avoid a collision with part of an old European rocket in its orbital path.
NASA has strict guidelines for how it decides to perform these maneuvers.
"Debris avoidance maneuvers are planned when the probability of collision from a conjunction reaches limits set in the space shuttle and space station flight rules," NASA wrote. "If the probability of collision is greater than 1 in 100,000, a maneuver will be conducted if it will not result in significant impact to mission objectives. If it is greater than 1 in 10,000, a maneuver will be conducted unless it will result in additional risk to the crew."
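The quoted flight rules amount to a simple decision procedure. Here is a minimal sketch in Python; the function name and boolean inputs are illustrative, not NASA's actual software:

```python
def should_maneuver(p_collision: float,
                    impacts_mission: bool,
                    adds_crew_risk: bool) -> bool:
    """Debris-avoidance decision per the quoted NASA flight rules.

    Probability > 1 in 10,000:  maneuver unless it adds risk to the crew.
    Probability > 1 in 100,000: maneuver if it won't significantly
                                impact mission objectives.
    Otherwise: no maneuver required.
    """
    if p_collision > 1 / 10_000:
        return not adds_crew_risk
    if p_collision > 1 / 100_000:
        return not impacts_mission
    return False
```

Under these rules, a 1-in-5,000 conjunction triggers a maneuver even at some cost to mission objectives, so long as moving doesn't add risk to the crew.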
These precautionary measures are becoming increasingly necessary. In 2020, the ISS had to move three times to avoid potential collisions. One of the latest close calls came with so little warning that astronauts were instructed to take shelter in the Russian segment of the space station, in order to be closer to their Soyuz MS-16 spacecraft, which serves as an escape pod in case of an emergency.
The Kessler syndrome
The hazards of space debris grow exponentially over time. That's because of a problem that NASA scientist Donald J. Kessler outlined in 1978. The so-called Kessler syndrome states that as space becomes increasingly packed with spacecraft and debris, collisions become more likely. And because each collision would create more debris, it could trigger a chain reaction of collisions — potentially to the point where near-Earth space becomes a shrapnel field through which safe travel is impossible.
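The runaway dynamic is easy to see in a toy model. If the number of collisions per year scales with the square of the debris population (every piece can, in principle, hit every other piece), and each collision spawns new fragments, growth accelerates year over year. All constants below are invented for illustration; this is not a real orbital-debris model:

```python
def debris_growth(n0: float, years: int,
                  collision_rate: float = 1e-9,
                  fragments_per_collision: int = 100) -> list[float]:
    """Yearly debris counts under a toy quadratic-collision model."""
    counts = [float(n0)]
    for _ in range(years):
        n = counts[-1]
        collisions = collision_rate * n * n  # ~ n^2 pairwise encounters
        counts.append(n + collisions * fragments_per_collision)
    return counts

counts = debris_growth(500_000, years=20)
# Each year's increase is larger than the last: a cascade, not linear growth.
```

The point of the sketch is qualitative: because debris begets debris, the population's growth rate itself grows, which is exactly the feedback loop Kessler warned about.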
The Kessler syndrome may already be playing out. Perhaps it began with the first known case of a spacecraft being severely damaged by artificial space debris, which occurred in 1996 when the French spy satellite Cerise was struck by a piece of an old European Ariane rocket. The collision tore off a 13-foot segment of the satellite.
The next major space debris incident occurred in 2007 when China conducted an anti-satellite missile test in which the nation destroyed one of its own weather satellites, triggering international criticism and creating more than 3,000 pieces of trackable space debris, most of which was still in orbit ten years after the explosion.
Then, in 2009, an unexpected collision between communications satellites — the active Iridium 33 and the defunct Russian Cosmos-2251 — produced at least 2,000 large fragments of space debris and as many as 200,000 smaller pieces, according to NASA. About half of all space debris currently orbiting Earth came from the Iridium-Cosmos collision and China's missile test.
There's more. Russia's BLITS satellite was spun out of its orbital path in 2013 after being struck by a piece of space debris suspected to have come from China's 2007 missile test; the European Space Agency's Copernicus Sentinel-1A satellite was struck by a tiny particle in 2016; and a window of the ISS was hit by a small fragment that same year.
As nations and private companies plan to send more satellites into orbit, collisions and impacts could soon become more common.
The promise and peril of satellite mega-constellations
Space organizations have recently begun launching satellites into low Earth orbit at an unprecedented pace. The goal is to create "mega-constellations" of satellites that provide high-quality internet access to virtually all parts of the planet.
Internet-providing satellites have existed for years, but they're typically expensive and provide slower service than land-based internet infrastructure. That's mainly because many of these satellites sit at the high altitudes of geostationary orbit, so signals take a relatively long time to travel between the satellite and the user.
China and companies like SpaceX, OneWeb, and Amazon aim to solve this problem by launching thousands of satellites into lower orbits in order to reduce signal latency, or the time it takes for the signal to travel to and from the satellite. But some space experts worry satellite mega-constellations could create more space debris.
"We face entirely new challenges as hundreds of satellites are launched every month now — more than we used to launch in a year," Thomas Schildknecht of the International Astronomical Union said at a European Space Agency conference in April. "The mega-constellations are producing huge risks of collisions. We need more stringent rules for traffic management in space and international mechanisms to ensure enforcement of the rules."
A 2017 study funded by the European Space Agency found that the deployment of satellite mega-constellations into low Earth orbit could increase the number of catastrophic collisions by 50 percent. Still, it remains unclear whether sending more satellites into space will necessarily cause more collisions.
SpaceX, for example, claims that Starlink satellites aren't at significant risk of collision because they're equipped with automated collision-avoidance propulsion systems. However, this system seemed to fail in 2019 when a Starlink satellite had a close call with a European science satellite named Aeolus. The company later said it had fixed the bug.
A batch of 60 Starlink test satellites stacked atop a Falcon 9 rocket. (Credit: SpaceX)
Currently, there are no strict international rules governing the deployment and management of satellite mega-constellations. But there are some international efforts to curb space debris risks.
The most concerted effort is the Inter-Agency Space Debris Coordination Committee (IADC), a forum that comprises 13 of the world's space agencies, including those of the U.S., Russia, China, and Japan. The committee aims "to exchange information on space debris research activities between member space agencies, to facilitate opportunities for cooperation in space debris research, to review the progress of ongoing cooperative activities, and to identify debris mitigation options."
The IADC's Space Debris Mitigation Guidelines list three broad goals:
1. Preventing on-orbit break-ups
2. Removing spacecraft from the densely populated orbit regions when they reach the end of their mission
3. Limiting the objects released during normal operations
But even though the world's space agencies recognize the gravity of the space debris problem, they're reluctant to act because of an incentives-based dilemma.
Space debris: A classic tragedy of the commons
Space debris is everyone's problem, but no one entity is obligated to solve it. It's a tragedy of the commons — an economic scenario in which individuals with access to a shared and scarce resource (space) act in their own best interest (spend the least amount of money). Left unchecked, the shared resource is vulnerable to depletion or corruption.
For example, the U.S. by itself could develop a novel method for removing space debris, which, if successful, would benefit all organizations with assets in space. But the odds of this happening are slim because of a game-theoretical dilemma, as a 2018 study of space debris removal explained:
"[In space debris removal] each stakeholder has an incentive to delay its actions and wait for others to respond. This makes the space debris removal setting an interesting strategic dilemma. As all actors share the same environment, actions by one have a potential immediate and future impact on all others. This gives rise to a social dilemma in which the benefits of individual investment are shared by all while the costs are not. This encourages free-riders, who reap the benefits without paying the costs. However, if all involved parties reason this way, the resulting inaction may prove to be far worse for all involved. This is known in the game theory literature as the tragedy of the commons."
Similar to trying to curb climate change, there's no clear answer on how to best incentivize nations to mitigate space debris. (For what it's worth, the game theoretical model in the 2018 study found that a centralized solution — e.g., one where a single actor makes decisions on mitigating space debris, perhaps on behalf of a multinational coalition — is less costly than a decentralized solution.)
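The free-rider logic can be made concrete with a two-actor payoff sketch. The numbers are invented purely for illustration: suppose a removal mission costs the actor who runs it 10 units, while every actor, payer or not, gains 8 units of benefit per removal.

```python
COST = 10.0    # hypothetical cost paid only by the actor who removes debris
BENEFIT = 8.0  # hypothetical benefit gained by EVERY actor per removal

def payoff(i_act: bool, other_acts: bool) -> float:
    """My net payoff given my choice and the other actor's choice."""
    removals = int(i_act) + int(other_acts)
    return removals * BENEFIT - (COST if i_act else 0.0)

# Whatever the other actor does, not acting pays more (8 > 6, and 0 > -2),
# yet mutual action (6 each) beats mutual inaction (0 each).
```

With these numbers, free-riding is the dominant strategy for each actor individually, even though both acting would leave everyone better off, which is the tragedy of the commons in miniature.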
Although space organizations have been slow to act, many have been exploring ways to remove space junk from orbit and prevent new debris from forming.
Cleaning up space debris
Space organizations have proposed and experimented with many ways to remove debris from space. Although the techniques vary, most experts agree on the strategy: get rid of the big stuff first.
That's because collisions involving large objects would create lots of new debris. So, removing big debris first would simultaneously clean up low Earth orbit and slow down the phenomenon of cascading collisions described by the Kessler syndrome.
To clean up low Earth orbit, space organizations have proposed using:
- Electrodynamic tethers: In 2017, the Japanese Aerospace Exploration Agency attempted to remove space debris by outfitting a cargo ship with an electrodynamic tether — essentially a fishing net made of stainless steel and aluminum. The craft then tried to "catch" space debris with the aim of dragging it into lower orbit, where it would eventually crash to Earth. The experiment failed.
- Ultra-thin nets: NASA's Innovative Advanced Concepts program has funded research for a project that would deploy extremely thin nets designed to wrap around space debris and drag them down to Earth's atmosphere.
- "Laser brooms": Since the 1990s, space researchers have proposed using ground-based lasers to strategically heat one side of a piece of space debris, which would change its orbit so that it re-enters Earth's atmosphere sooner. Because the laser systems would be based on Earth, this strategy could prove to be relatively affordable.
- Drag sails: As a relatively passive way to accelerate the de-orbit of space junk, NASA and other space organizations have been exploring the viability of attaching sails to space junk that would help guide debris back to Earth. These sails could either be packed within new satellites, to be deployed once the satellites are no longer useful, or attached to existing space junk.
Illustration of Brane Craft Phase II, which would use thin nets to capture space debris. (Credit: Siegfried Janson via NASA)
But perhaps one of the most promising solutions for space debris is the ESA-funded ClearSpace-1 mission. Set to launch in 2025, ClearSpace-1 intends to be the first mission that successfully removes space debris from orbit. The goal is to launch a satellite into orbit and rendezvous with the upper stage of Europe's Vega launcher, which was left in space after a 2013 flight.
ClearSpace-1 satellite using its robotic arm to capture space debris. (Credit: ClearSpace-1)
Once the satellite meets up with the debris, it will try to capture the junk with a robotic arm and then perform a controlled atmospheric reentry. The task will be challenging, in part because space junk tumbles as it flies above Earth, meaning the satellite will have to match its movements in order to safely capture it.
Freethink recently spoke to the ClearSpace-1 team to get a better understanding of the mission and its challenges.
Video: Catching the Most Dangerous Thing in Space (Freethink via youtube.com)
But not all space debris removal strategies center on technology. A 2020 paper published in PNAS argued that imposing taxes on each satellite in orbit would be the most effective way to clean up space. Called "orbital use fees," the plan would charge space organizations an annual fee of roughly $235,000 for each satellite in orbit. The fee would, in theory, incentivize nations and companies to declutter space over time.
The main hurdle of orbital-use fees is getting all of the world's space organizations to agree to such a plan. If they do, it could help eliminate the tragedy of the commons aspect of space debris and potentially quadruple the value of the space industry by 2040.
"The costly buildup of debris and satellites in low-Earth orbit is fundamentally a problem of incentives — satellite operators currently lack the incentives to factor into their launch decisions the collision risks their satellites impose on other operators," the researchers wrote. "Our analysis suggests that correcting these incentives, via an OUF, could have substantial economic benefits to the satellite industry, and failing to do so could have substantial and escalating economic costs."
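The arithmetic behind an orbital-use fee is straightforward. Using the roughly $235,000-per-satellite annual figure from the study (the satellite count below is a hypothetical example, not a figure from the paper):

```python
FEE_PER_SATELLITE = 235_000  # annual fee in USD, per the 2020 PNAS proposal

def annual_fees(n_satellites: int, fee: int = FEE_PER_SATELLITE) -> int:
    """Total yearly orbital-use fees owed for a fleet of satellites."""
    return n_satellites * fee

# A hypothetical fleet of 3,000 satellites would owe $705 million per year.
print(f"${annual_fees(3_000):,}")
```

At that scale, the fee is large enough to make operators weigh the cost of keeping a defunct satellite in orbit against the cost of deorbiting it, which is precisely the incentive the authors are after.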
No matter the solution, cleaning up space debris will be a complex and expensive challenge that requires a coordinated, international effort. If the global community wants to maintain modern technological infrastructure and venture deeper into space, conducting business as usual isn't an option.
"Imagine how dangerous sailing the high seas would be if all the ships ever lost in history were still drifting on top of the water," Jan Wörner, European Space Agency (ESA) director general, said in a statement. "That is the current situation in orbit, and it cannot be allowed to continue."
It uses radio waves to pinpoint items, even when they're hidden from view.
"Researchers have been giving robots human-like perception," says MIT Associate Professor Fadel Adib. In a new paper, Adib's team is pushing the technology a step further. "We're trying to give robots superhuman perception," he says.
The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfillment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.
The research will be presented in May at the IEEE International Conference on Robotics and Automation. The paper's lead author is Tara Boroushaki, a research assistant in the Signal Kinetics Group at the MIT Media Lab. Her MIT co-authors include Adib, who is the director of the Signal Kinetics Group; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. Other co-authors include Junshan Leng, a research engineer at Harvard University, and Ian Clester, a PhD student at Georgia Tech.
As e-commerce continues to grow, warehouse work is still usually the domain of humans, not robots, despite sometimes-dangerous working conditions. That's in part because robots struggle to locate and grasp objects in such a crowded environment. "Perception and picking are two roadblocks in the industry today," says Rodriguez. Using optical vision alone, robots can't perceive the presence of an item packed away in a box or hidden behind another object on the shelf — visible light waves, of course, don't pass through walls.
But radio waves can.
For decades, radio frequency (RF) identification has been used to track everything from library books to pets. RF identification systems have two main components: a reader and a tag. The tag is a tiny computer chip that gets attached to — or, in the case of pets, implanted in — the item to be tracked. The reader then emits an RF signal, which gets modulated by the tag and reflected back to the reader.
The reflected signal provides information about the location and identity of the tagged item. The technology has gained popularity in retail supply chains — Japan aims to use RF tracking for nearly all retail purchases in a matter of years. The researchers realized this profusion of RF could be a boon for robots, giving them another mode of perception.
"RF is such a different sensing modality than vision," says Rodriguez. "It would be a mistake not to explore what RF can do."
RF Grasp uses both a camera and an RF reader to find and grab tagged objects, even when they're fully blocked from the camera's view. It consists of a robotic arm attached to a grasping hand. The camera sits on the robot's wrist. The RF reader stands independent of the robot and relays tracking information to the robot's control algorithm. So, the robot is constantly collecting both RF tracking data and a visual picture of its surroundings. Integrating these two data streams into the robot's decision making was one of the biggest challenges the researchers faced.
"The robot has to decide, at each point in time, which of these streams is more important to think about," says Boroushaki. "It's not just eye-hand coordination, it's RF-eye-hand coordination. So, the problem gets very complicated."
The robot initiates the seek-and-pluck process by pinging the target object's RF tag for a sense of its whereabouts. "It starts by using RF to focus the attention of vision," says Adib. "Then you use vision to navigate fine maneuvers." The sequence is akin to hearing a siren from behind, then turning to look and get a clearer picture of the siren's source.
With its two complementary senses, RF Grasp zeroes in on the target object. As it gets closer and even starts manipulating the item, vision, which provides much finer detail than RF, dominates the robot's decision making.
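One way to picture this RF-to-vision handoff is as a distance-weighted blend of the two position estimates. This is a loose illustration of the idea, not the actual RF Grasp algorithm; the handoff distance and weighting scheme below are invented:

```python
def fused_estimate(rf_xyz, vision_xyz, distance_m, handoff_m=0.5):
    """Blend a coarse RF position fix with a fine vision estimate.

    Far from the target, trust the wall-penetrating RF fix; as the
    gripper closes in, shift weight to the higher-resolution camera.
    """
    w_rf = min(1.0, distance_m / handoff_m)  # RF weight shrinks near target
    return tuple(w_rf * r + (1.0 - w_rf) * v
                 for r, v in zip(rf_xyz, vision_xyz))
```

Beyond the handoff distance the robot follows the RF fix alone; at the target it relies entirely on the camera, mirroring the "RF focuses the attention of vision" sequence Adib describes.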
RF Grasp proved its efficiency in a battery of tests. Compared to a similar robot equipped with only a camera, RF Grasp was able to pinpoint and grab its target object with about half as much total movement. Plus, RF Grasp displayed the unique ability to "declutter" its environment — removing packing materials and other obstacles in its way in order to access the target. Rodriguez says this demonstrates RF Grasp's "unfair advantage" over robots without penetrative RF sensing. "It has this guidance that other systems simply don't have."
RF Grasp could one day perform fulfilment in packed e-commerce warehouses. Its RF sensing could even instantly verify an item's identity without the need to manipulate the item, expose its barcode, then scan it. "RF has the potential to improve some of those limitations in industry, especially in perception and localization," says Rodriguez.
Adib also envisions potential home applications for the robot, like locating the right Allen wrench to assemble your Ikea chair. "Or you could imagine the robot finding lost items. It's like a super-Roomba that goes and retrieves my keys, wherever the heck I put them."
The research is sponsored by the National Science Foundation, NTT DATA, Toppan, Toppan Forms, and the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS).
"The question is which are okay, which are not okay."
- As the material that makes all living things what they are, DNA is the key to understanding and changing the world. British geneticist Bryan Sykes and Francis Collins (director of the Human Genome Project) explain how, through gene editing, scientists can better treat illnesses, eradicate diseases, and revolutionize personalized medicine.
- But existing and developing gene editing technologies are not without controversy. A major point of debate is whether gene editing oversteps natural and ethical boundaries. Just because scientists can edit DNA, does that mean they should?
- Harvard professor Glenn Cohen introduces another subcategory of gene experiments: mixing human and animal DNA. "The question is which are okay, which are not okay, why can we generate some principles," Cohen says of human-animal chimeras and arguments concerning improving human life versus morality.
Our love-hate relationship with browser tabs drives all of us crazy. There is a solution.
- A new study suggests that tabs can cause people to be flustered as they try to keep track of every website.
- The reason is that tabs are unable to properly organize information.
- The researchers have developed a browser extension that aims to fix the problem.
A lot of ideas that people had about the internet in the 1990s have fallen by the wayside as technology and our usage patterns evolved. Long gone are things like GeoCities, BowieNet, and the belief that letting anybody post whatever they are thinking whenever they want is a fundamentally good idea with no societal repercussions.
While these ideas have been abandoned and the tools that made them possible often replaced by new and improved ones, not every outdated part of our internet experience is gone. A new study by a team at Carnegie Mellon makes the case that the use of tabs in a web browser is one of these outdated concepts that we would do well to get rid of.
How many tabs do you have open right now?
We didn't always have tabs. Introduced in the early 2000s, tabs are now included on all major web browsers, and most users have had access to them for a little over a decade. They've remained pretty much the same since they came out, despite the ever-changing nature of the internet. So, in this new study, researchers interviewed and surveyed 113 people about their use of — and feelings toward — the ubiquitous tabs.
Most people use tabs for the short-term storage of information, particularly if it's information that is needed again soon. Some keep tabs that they know they'll never get around to reading. Others use them as a sort of external memory bank. One participant described this habit to the researchers:
"It's like a manifestation of everything that's on my mind right now. Or the things that should be on my mind right now... So right now, in this browser window, I have a web project that I'm working on. I don't have time to work on it right now, but I know I need to work on it. So it's sitting there reminding me that I need to work on it."
You suffer from tab overload
Unfortunately, trying to use tabs this way can cause a number of problems. A quarter of the interview subjects reported having caused a computer or browser to crash because they had too many tabs open. Others reported feeling flustered by having so many tabs open — a situation called "tab overload" — or feeling ashamed that they appeared disorganized by having so many tabs up at once. More than half of participants reported having problems like this at least two or three times a week.
However, people can become emotionally invested in the tabs. One participant explained, "[E]ven when I'm not using those tabs, I don't want to close them. Maybe it's because it took efforts [sic] to open those tabs and organize them in that way."
So, we have a tool that inefficiently saves web pages that we might visit again while simultaneously reducing our productivity, increasing our anxiety, and crashing our machines. And yet we feel oddly attached to them.
Either the system is crazy or we are.
Skeema: The anti-tab revolution
The researchers concluded that at least part of the problem is caused by tabs not being an ideal way of organizing the work we now do online. They propose a new model that better compartmentalizes tabs by task and subtask, reflects users' mental models, and helps manage the users' attention on what is important right now rather than what might be important later.
To that end, the team also created Skeema, an extension for Google Chrome that treats tabs as tasks and offers a variety of ways to organize them. Users of an early version reported having fewer tabs and windows open at one time and were better able to manage the information they contained.
Tabs were an improvement over having multiple windows open at the same time, but they may have outlived their usefulness. While it might take a paradigm shift to fully replace the concept, the study suggests that taking a different approach to tabs might be worth trying.
And now, excuse me, while I close some of the 87 tabs I currently have open.
- The history of AI shows boom periods (AI summers) followed by busts (AI winters).
- The cyclical nature of AI funding is due to hype and promises not fulfilling expectations.
- This time, we might enter something resembling an AI autumn rather than an AI winter, but fundamental questions remain about whether true AI is even possible.
The dream of building a machine that can think like a human stretches back to the origins of electronic computers. But ever since research into artificial intelligence (AI) began in earnest after World War II, the field has gone through a series of boom and bust cycles called "AI summers" and "AI winters."
Each cycle begins with optimistic claims that a fully, generally intelligent machine is just a decade or so away. Funding pours in and progress seems swift. Then, a decade or so later, progress stalls and funding dries up. Over the last ten years, we've clearly been in an AI summer as vast improvements in computing power and new techniques like deep learning have led to remarkable advances. But now, as we enter the third decade of the 21st century, some who follow AI feel cold winds blowing in, leading them to ask, "Is winter coming?" If so, what went wrong this time?
Video: How to build an A.I. brain that can conceive of itself | Joscha Bach | Big Think (via youtube.com)
A brief history of AI
To see if the winds of winter are really coming for AI, it is useful to look at the field's history. The first real summer can be pegged to 1956 and the famous Dartmouth College workshop where one of the field's pioneers, John McCarthy, coined the term "artificial intelligence." The conference was attended by scientists like Marvin Minsky and Herbert A. Simon, whose names would go on to become synonymous with the field. For those researchers, the task ahead was clear: capture the processes of human reasoning through the manipulation of symbolic systems (i.e., computer programs).
Unless we are talking about very specific tasks, any 6-year-old is infinitely more flexible and general in his or her intelligence than the "smartest" Amazon robot.
Throughout the 1960s, progress seemed to come swiftly as researchers developed computer systems that could play chess, deduce mathematical theorems, and even engage in simple discussions with a person. Government funding flowed generously. Optimism was so high that, in 1970, Minsky famously proclaimed, "In three to eight years we will have a machine with the general intelligence of a human being."
By the mid 1970s, however, it was clear that Minsky's optimism was unwarranted. Progress stalled as many of the innovations of the previous decade proved too narrow in their applicability, seeming more like toys than steps toward a general version of artificial intelligence. Funding dried up so completely that researchers soon took pains not to refer to their work as AI, as the term carried a stink that killed proposals.
The cycle repeated itself in the 1980s with the rise of expert systems and the renewed interest in what we now call neural networks (i.e., programs based on connectivity architectures that mimic neurons in the brain). Once again, there was wild optimism and big increases in funding. What was novel in this cycle was the addition of significant private funding as more companies began to rely on computers as essential components of their business. But, once again, the big promises were never realized, and funding dried up again.
AI: Hype vs. reality
The AI summer we're currently experiencing began sometime in the first decade of the new millennium. Vast increases in both computing speed and storage ushered in the era of deep learning and big data. Deep learning methods use stacked layers of neural networks that pass information to each other to solve complex problems like facial recognition. Big data provides these systems with vast oceans of examples (like images of faces) to train on. The applications of this progress are all around us: Google Maps gives you near-perfect directions; you can talk with Siri anytime you want; IBM's Watson beat Jeopardy!'s greatest human champions.
In response, the hype rose again. True AI, we were told, must be just around the corner. In 2015, for example, The Guardian reported that self-driving cars, the killer app of modern AI, were close at hand. Readers were told, "By 2020 you will become a permanent backseat driver." And just two years ago, Elon Musk claimed that by 2020 "we'd have over a million cars with full self-driving software."
By now, it's obvious that a world of fully self-driving cars is still years away. Likewise, in spite of the remarkable progress we've made in machine learning, we're still far from creating systems that possess general intelligence. The emphasis is on the term general because that's what AI really has been promising all these years: a machine that's flexible in dealing with any situation as it comes up. Instead, what researchers have found is that, despite all their remarkable progress, the systems they've built remain brittle, which is a technical term meaning "they do very wrong things when given unexpected inputs." Try asking Siri to find "restaurants that aren't McDonald's." You won't like the results.
Unless we are talking about very specific tasks, any 6-year-old is infinitely more flexible and general in his or her intelligence than the "smartest" Amazon robot.
Even more important is the sense that, as remarkable as they are, none of the systems we've built understand anything about what they are doing. As philosopher Alva Noë said of Watson's famous Jeopardy! victory, "Watson answered no questions. It participated in no competition. It didn't do anything. All the doing was on our side. We played Jeopardy! with Watson." Considering this fact, some researchers claim that the general intelligence — i.e., the understanding — we humans exhibit may be inseparable from our experiencing. If that's true, then our physical embodiment, enmeshed in a context-rich world, may be difficult if not impossible to capture in symbolic processing systems.
Not the (AI) winter of our discontent
Thus, talk of a new AI winter is popping up again. Given the importance of deep learning and big data in technology, it's hard to imagine funding for these domains drying up any time soon. What we may be seeing, however, is a kind of AI autumn, when researchers wisely recalibrate their expectations and perhaps rethink their perspectives.
A new method could make holograms for virtual reality, 3D printing, and more. It can even run on a smartphone.
Despite years of hype, virtual reality headsets have yet to topple TV or computer screens as the go-to devices for video viewing.
One reason: VR can make users feel sick. Nausea and eye strain can result because VR creates an illusion of 3D viewing although the user is in fact staring at a fixed-distance 2D display. The solution for better 3D visualization could lie in a 60-year-old technology remade for the digital world: holograms.
Holograms deliver an exceptional representation of the 3D world around us. Plus, they're beautiful. (Go ahead — check out the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer's position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.
Researchers have long sought to make computer-generated holograms, but the process has traditionally required a supercomputer to churn through physics simulations, which is time-consuming and can yield less-than-photorealistic results. Now, MIT researchers have developed a new way to produce holograms almost instantly — and the deep learning-based method is so efficient that it can run on a laptop in the blink of an eye, the researchers say.
"People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations," says Liang Shi, the study's lead author and a PhD student in MIT's Department of Electrical Engineering and Computer Science (EECS). "It's often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades."
Shi believes the new approach, which the team calls "tensor holography," will finally bring that elusive 10-year goal within reach. The advance could fuel a spillover of holography into fields like VR and 3D printing.
Shi worked on the study, published today in Nature, with his advisor and co-author Wojciech Matusik. Other co-authors include Beichen Li of EECS and the Computer Science and Artificial Intelligence Laboratory at MIT, as well as former MIT researchers Changil Kim (now at Facebook) and Petr Kellnhofer (now at Stanford University).
The quest for better 3D
A typical lens-based photograph encodes the brightness of each light wave — a photo can faithfully reproduce a scene's colors, but it ultimately yields a flat image.
In contrast, a hologram encodes both the brightness and phase of each light wave. That combination delivers a truer depiction of a scene's parallax and depth. So, while a photograph of Monet's "Water Lilies" can highlight the paintings' color palette, a hologram can bring the work to life, rendering the unique 3D texture of each brush stroke. But despite their realism, holograms are a challenge to make and share.
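The brightness-versus-phase distinction maps neatly onto complex numbers, the standard way optics represents a light wave. A minimal sketch (the amplitude and phase values here are arbitrary illustrations):

```python
import cmath
import math

# A light wave at one point: amplitude (brightness) and phase,
# packed into a single complex number 0.8·e^(iπ/3).
wave = cmath.rect(0.8, math.pi / 3)

brightness = abs(wave) ** 2   # the only quantity a photograph records
phase = cmath.phase(wave)     # what a hologram additionally encodes

# Two waves with equal brightness but opposite phase look identical
# in a photo, yet carry different depth information:
other = cmath.rect(0.8, -math.pi / 3)
assert abs(abs(wave) - abs(other)) < 1e-12
```

A photograph throws the `phase` away; a hologram keeps the full complex field, which is why it can reproduce parallax and depth.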
First developed in the mid-1900s, early holograms were recorded optically. That required splitting a laser beam, with half the beam used to illuminate the subject and the other half used as a reference for the light waves' phase. This reference generates a hologram's unique sense of depth. The resulting images were static, so they couldn't capture motion. And they were hard copy only, making them difficult to reproduce and share.
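Why does mixing in a reference beam capture phase? Because the recorded intensity of two overlapping waves depends on their phase difference. A toy simulation of that interference (amplitudes here are made-up numbers, and a real setup varies across millions of points on film):

```python
import cmath
import math

def fringe_intensity(obj_amp, obj_phase, ref_amp=1.0):
    """Intensity recorded where an object wave meets a plane reference wave.
    Expanding |obj + ref|^2 gives a^2 + r^2 + 2*a*r*cos(phase), so the
    recorded brightness varies with the object wave's phase."""
    obj = cmath.rect(obj_amp, obj_phase)
    ref = complex(ref_amp, 0.0)
    return abs(obj + ref) ** 2

bright = fringe_intensity(0.5, 0.0)       # waves in phase: bright fringe
dark = fringe_intensity(0.5, math.pi)     # waves out of phase: dark fringe
```

Phase information the film could never record directly thus survives as a pattern of bright and dark fringes — the hologram.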
Computer-generated holography sidesteps these challenges by simulating the optical setup. But the process can be a computational slog. "Because each point in the scene has a different depth, you can't apply the same operations for all of them," says Shi. "That increases the complexity significantly." Directing a clustered supercomputer to run these physics-based simulations could take seconds or minutes for a single holographic image. Plus, existing algorithms don't model occlusion with photorealistic precision. So Shi's team took a different approach: letting the computer learn the physics for itself.
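To see why the brute-force simulation is so slow, consider a simplified point-source model (not the team's actual algorithm; the wavelength, coordinates, and amplitudes below are illustrative). Every hologram pixel must sum a contribution from every scene point, and each point's different depth changes its term — so the cost scales as pixels × points:

```python
import cmath
import math

WAVELENGTH = 0.000633  # mm — a typical red laser line, used here for illustration

def hologram_field(points, pixel_xy):
    """Sum each scene point's spherical-wave contribution at one hologram pixel.
    Because every point sits at a different depth z, no single shared
    operation covers them all; a full hologram repeats this sum for
    millions of pixels."""
    x, y = pixel_xy
    field = 0j
    for px, py, pz, amp in points:
        r = math.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)
        field += amp * cmath.exp(2j * math.pi * r / WAVELENGTH) / r
    return field

scene = [(0.0, 0.0, 100.0, 1.0), (0.1, 0.0, 120.0, 0.5)]  # (x, y, z, amplitude)
value = hologram_field(scene, (0.05, 0.02))
```

For a megapixel hologram of a detailed scene, that nested loop becomes billions of evaluations — which is what the neural network learns to shortcut.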
They used deep learning to accelerate computer-generated holography, allowing for real-time hologram generation. The team designed a convolutional neural network — a processing technique that uses a chain of trainable tensors to roughly mimic how humans process visual information. Training a neural network typically requires a large, high-quality dataset, which didn't previously exist for 3D holograms.
The team built a custom database of 4,000 pairs of computer-generated images. Each pair matched a picture — including color and depth information for each pixel — with its corresponding hologram. To create the holograms in the new database, the researchers used scenes with complex and variable shapes and colors, with the depth of pixels distributed evenly from the background to the foreground, and with a new set of physics-based calculations to handle occlusion. That approach resulted in photorealistic training data. Next, the algorithm got to work.
By learning from each image pair, the tensor network tweaked the parameters of its own calculations, successively enhancing its ability to create holograms. The fully optimized network operated orders of magnitude faster than physics-based calculations. That efficiency surprised even the team.
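The parameter-tweaking loop described above can be caricatured with a single trainable weight — plain Python and a made-up dataset, nothing like the real tensor network, but the same learning principle: compare the prediction to the target, and nudge the parameter in the direction that shrinks the error.

```python
# Toy training loop: one trainable parameter, squared error, gradient steps.
pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target); true scale is 2

w = 0.0                                  # the "network's" single parameter
for _ in range(200):                     # passes over the training pairs
    for x, target in pairs:
        pred = w * x
        grad = 2 * (pred - target) * x   # derivative of (pred - target)^2 w.r.t. w
        w -= 0.05 * grad                 # tweak the parameter to reduce error
```

The tensor network does the same thing at vastly larger scale: millions of parameters adjusted against 4,000 image-hologram pairs instead of one weight against three numbers.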
"We are amazed at how well it performs," says Matusik. In mere milliseconds, tensor holography can craft holograms from images with depth information — which is provided by typical computer-generated images and can be calculated from a multicamera setup or LiDAR sensor (both are standard on some new smartphones). This advance paves the way for real-time 3D holography. What's more, the compact tensor network requires less than 1 MB of memory. "It's negligible, considering the tens and hundreds of gigabytes available on the latest cell phone," he says.
The research "shows that true 3D holographic displays are practical with only moderate computational requirements," says Joel Kollin, a principal optical architect at Microsoft who was not involved with the research. He adds that "this paper shows marked improvement in image quality over previous work," which will "add realism and comfort for the viewer." Kollin also hints at the possibility that holographic displays like this could even be customized to a viewer's ophthalmic prescription. "Holographic displays can correct for aberrations in the eye. This makes it possible for a display to produce an image sharper than what the user could see with contacts or glasses, which only correct for low order aberrations like focus and astigmatism."
"A considerable leap"
Real-time 3D holography would enhance a slew of systems, from VR to 3D printing. The team says the new system could help immerse VR viewers in more realistic scenery, while eliminating eye strain and other side effects of long-term VR use. The technology could be easily deployed on displays that modulate the phase of light waves. Currently, most affordable consumer-grade displays modulate only brightness, though the cost of phase-modulating displays would fall if widely adopted.
Three-dimensional holography could also boost the development of volumetric 3D printing, the researchers say. This technology could prove faster and more precise than traditional layer-by-layer 3D printing, since volumetric 3D printing allows for the simultaneous projection of the entire 3D pattern. Other applications include microscopy, visualization of medical data, and the design of surfaces with unique optical properties.
"It's a considerable leap that could completely change people's attitudes toward holography," says Matusik. "We feel like neural networks were born for this task."
The work was supported, in part, by Sony.