New book explores a future populated with robot helpers.
As Covid-19 has made it necessary for people to keep their distance from each other, robots are stepping in to fill essential roles, such as sanitizing warehouses and hospitals, ferrying test samples to laboratories, and serving as telemedicine avatars.
There are signs that people may be increasingly receptive to robotic help, preferring, at least hypothetically, to be picked up by a self-driving taxi or have their food delivered via robot, to reduce their risk of catching the virus.
As more intelligent, independent machines make their way into the public sphere, engineers Julie Shah and Laura Major are urging designers to rethink not just how robots fit in with society, but also how society can change to accommodate these new, "working" robots.
Shah is an associate professor of aeronautics and astronautics at MIT and the associate dean of social and ethical responsibilities of computing in the MIT Schwarzman College of Computing. Major SM '05 is CTO of Motional, a self-driving car venture supported by automotive companies Hyundai and Aptiv. Together, they have written a new book, "What to Expect When You're Expecting Robots: The Future of Human-Robot Collaboration," published this month by Basic Books.
What we can expect, they write, is that robots of the future will no longer work for us, but with us. They will be less like tools, programmed to carry out specific tasks in controlled environments, as factory automatons and domestic Roombas have been, and more like partners, interacting with and working among people in the more complex and chaotic real world. As such, Shah and Major say that robots and humans will have to establish a mutual understanding.
"Part of the book is about designing robotic systems that think more like people, and that can understand the very subtle social signals that we provide to each other, that make our world work," Shah says. "But equal emphasis in the book is on how we have to structure the way we live our lives, from our crosswalks to our social norms, so that robots can more effectively live in our world."
Getting to know you
As robots increasingly enter public spaces, they may do so safely if they have a better understanding of human and social behavior.
Consider a package delivery robot on a busy sidewalk: The robot may be programmed to give a standard berth to obstacles in its path, such as traffic cones and lampposts. But what if the robot is coming upon a person wheeling a stroller while balancing a cup of coffee? A human passerby would read the social cues and perhaps step to the side to let the stroller by. Could a robot pick up the same subtle signals to change course accordingly?
Shah believes the answer is yes. As head of the Interactive Robotics Group at MIT, she is developing tools to help robots understand and predict human behavior, such as where people move, what they do, and who they interact with in physical spaces. She's implemented these tools in robots that can recognize and collaborate with humans in environments such as the factory floor and the hospital ward. She is hoping that robots trained to read social cues can more safely be deployed in more unstructured public spaces.
Major, meanwhile, has been helping to make robots, and specifically self-driving cars, work safely and reliably in the real world, beyond the controlled, gated environments where most driverless cars operate today. About a year ago, she and Shah met for the first time, at a robotics conference.
"We were working in parallel universes, me in industry, and Julie in academia, each trying to galvanize understanding for the need to accommodate machines and robots," Major recalls.
From that first meeting, the seeds of their new book quickly began to sprout.
A cyborg city
In their book, the engineers describe ways that robots and automated systems can perceive and work with humans — but also ways in which our environment and infrastructure can change to accommodate robots.
A cyborg-friendly city, engineered to manage and direct robots, could avoid scenarios such as the one that played out in San Francisco in 2017. Residents there were seeing an uptick in delivery robots deployed by local technology startups. The robots were causing congestion on city sidewalks and posed an unexpected hazard to seniors and people with disabilities. Lawmakers ultimately imposed strict regulations on the number of delivery robots allowed in the city — a move that improved safety, but potentially at the expense of innovation.
If in the near future there are to be multiple robots sharing a sidewalk with humans at any given time, Shah and Major propose that cities might consider installing dedicated robot lanes, similar to bike lanes, to avoid accidents between robots and humans. The engineers also envision a system to organize robots in public spaces, similar to the way airplanes keep track of each other in flight.
In 1958, the Federal Aviation Agency was created, partly in response to a catastrophic crash between two planes flying through a cloud over the Grand Canyon. Prior to that crash, airplanes had been virtually free to fly wherever they pleased. The FAA went on to organize airplanes in the sky through innovations like the traffic collision avoidance system, or TCAS — a system onboard most planes today that detects other planes outfitted with a universal transponder. TCAS alerts the pilot to nearby planes and automatically charts a path, independent of ground control, for the plane to take in order to avoid a collision.
Similarly, Shah and Major say that robots in public spaces could be designed with a sort of universal sensor that enables them to see and communicate with each other, regardless of their software platform or manufacturer. This way, they might stay clear of certain areas, avoiding potential accidents and congestion, if they sense robots nearby.
"There could also be transponders for people that broadcast to robots," Shah says. "For instance, crossing guards could use batons that can signal any robot in the vicinity to pause so that it's safe for children to cross the street."
Whether we are ready for them or not, the trend is clear: The robots are coming, to our sidewalks, our grocery stores, and our homes. And as the book's title suggests, preparing for these new additions to society will take some major changes, in our perception of technology, and in our infrastructure.
"It takes a village to raise a child to be a well-adjusted member of society, capable of realizing his or her full potential," write Shah and Major. "So, too, a robot."
Would you ever have sex with a robot?
- In 2016, "Harmony," the world's first AI sex robot, was designed by the tech firm Realbotix.
- According to 2020 survey data, more than one in five Americans (22 percent) say they would consider having sex with a robot. This is an increase from a survey conducted in 2017.
- Robots (and robotic tech) already play a vital role in speeding up manufacturing, packaging, and processing across various industries.
From homemade dildos to Harmony, the AI sex robot
"...amid an economic crisis, with restaurants and retailers closing their doors and larger companies laying off and furloughing employees, the sex tech industry is booming."
A Bustle article published in April 2020, weeks after COVID-19 was declared a pandemic, explored the drastic boost in the sex tech industry. According to the article, Dame Products (a popular sex toy retailer) saw a 30 percent increase in sales between February and April, and the popular sexual wellness brand Unbound reported selling twice as many toys as usual in that period.
While the novel coronavirus was dragging down much of the economy, the sex tech industry was one of the few that actually saw gains, likely because people all over the world were being advised, encouraged, and in some instances forced to stay at home.
Something similar happened during the 2008 recession: the sex toy industry was one of the few industries at the time that didn't suffer gravely.
The evolution of sex tech, from stone dildos to artificial intelligence
The history of sex toys is quite interesting. A 28,000-year-old siltstone dildo was uncovered in Germany in 2005. Luxury bronze dildos have also been found in China that are at least 2,000 years old.
Aside from various materials being shaped into dildos, there has always been an interest in how to advance sex technology, even before it involved actual technology at all.
- The 1700s: Steam-powered vibrators (such as the Manipulator).
- The 1800s–1900s: The first electric vibrator (the Pulsoson) and "beauty tools" used for sexual satisfaction (such as the Polar Cub massager).
- The 1920s–1940s: Hand-held massagers (the Andis Vibrator) and compact devices (such as the Oster Stim-U-Lax).
- The 1940s–1960s: Japan introduced the "Cadillac of Vibrators" (the Hitachi Magic Wand), which eventually made its way to America.
- 1965: The advent of silicone, the material most modern sex toys are made of.
- The 1980s–1990s: The invention of the rabbit-style vibrator, popularized by one of the first depictions of a sex toy on television ("Sex and the City").
- The 2000s: Pornhub launched, sex toys became increasingly popular, and erotic literature went mainstream with "Fifty Shades of Grey" and books like it.
- The 2010s and beyond: Sex toys and technology began to blend; the world's first internet-controlled sex toy was launched in 2010 by Lovense.
In 2016, "Harmony", the world's first AI sex robot was designed by a tech firm called Realbotix.
In a February 2020 YouGov survey, more than one in five Americans (22 percent) said they would consider having sex with a robot. That's 6 percentage points more than in a similar YouGov survey from 2017.
YouGov points out that the increase in consideration is particularly significant among American adults between the ages of 18 and 34. Additionally, how people feel about having sex with a robot has also changed. In 2020, 27 percent of Americans said they would consider it cheating if they had a partner who had sex with a robot during the relationship, compared to the 32 percent reported in 2017.
"If you had a partner who had sex with a robot, would you consider it cheating?"
The results also reveal that many people (42 percent) believe having sex with a robot is safer than having sex with a human stranger.
Robots (and robotic tech) already play a vital role in speeding up manufacturing, packaging, and processing across various industries. From television shows to real-life applications, artificial intelligence is becoming more and more popular in all areas of human life.
According to YouGov, "a Bloomberg report outlining Amazon's plans for an Alexa-powered robot that follows and helps you around the home may redefine how these machines service humans in the near future."
If A.I.s are as smart as mice or dogs, do they deserve the same rights?
Universities across the world are conducting major research on artificial intelligence (A.I.), as are organisations such as the Allen Institute, and tech companies including Google and Facebook.
A likely result is that we will soon have A.I. approximately as cognitively sophisticated as mice or dogs. Now is the time to start thinking about whether, and under what conditions, these A.I.s might deserve the ethical protections we typically give to animals.
Discussions of "A.I. rights" or "robot rights" have so far been dominated by questions of what ethical obligations we would have to an A.I. of humanlike or superior intelligence – such as the android Data from Star Trek or Dolores from Westworld. But to think this way is to start in the wrong place, and it could have grave moral consequences. Before we create an A.I. with humanlike sophistication deserving humanlike ethical consideration, we will very likely create an A.I. with less-than-human sophistication, deserving some less-than-human ethical consideration.
We are already very cautious in how we do research that uses certain nonhuman animals. Animal care and use committees evaluate research proposals to ensure that vertebrate animals are not needlessly killed or made to suffer unduly. If human stem cells or, especially, human brain cells are involved, the standards of oversight are even more rigorous. Biomedical research is carefully scrutinised, but A.I. research, which might entail some of the same ethical risks, is not currently scrutinised at all. Perhaps it should be.
You might think that A.I.s don't deserve that sort of ethical protection unless they are conscious – that is, unless they have a genuine stream of experience, with real joy and suffering. We agree. But now we face a tricky philosophical question: how will we know when we have created something capable of joy and suffering? If the A.I. is like Data or Dolores, it can complain and defend itself, initiating a discussion of its rights. But if the A.I. is inarticulate, like a mouse or a dog, or if it is for some other reason unable to communicate its inner life to us, it might have no way to report that it is suffering.
A puzzle and difficulty arises here because the scientific study of consciousness has not reached a consensus about what consciousness is, and how we can tell whether or not it is present. On some views – 'liberal' views – consciousness requires nothing but a certain type of well-organised information-processing, such as a flexible informational model of the system in relation to objects in its environment, with guided attentional capacities and long-term action-planning. We might be on the verge of creating such systems already. On other views – 'conservative' views – consciousness might require very specific biological features, such as a brain very much like a mammal brain in its low-level structural details: in which case we are nowhere near creating artificial consciousness.
It is unclear which type of view is correct or whether some other explanation will in the end prevail. However, if a liberal view is correct, we might soon be creating many subhuman AIs who will deserve ethical protection. There lies the moral risk.
The history of research on vulnerable subjects offers cautionary examples of how easily ethical risk is overlooked (consider needless vivisections, the Nazi medical war crimes, and the Tuskegee syphilis study). With A.I., we have a chance to do better. We propose the founding of oversight committees that evaluate cutting-edge A.I. research with these questions in mind. Such committees, much like animal care committees and stem-cell oversight committees, should be composed of a mix of scientists and non-scientists – A.I. designers, consciousness scientists, ethicists and interested community members. These committees would be tasked with identifying and evaluating the ethical risks of new forms of A.I. design, armed with a sophisticated understanding of the scientific and ethical issues, weighing the risks against the benefits of the research.
It is likely that such committees will judge all current A.I. research permissible. On most mainstream theories of consciousness, we are not yet creating A.I. with conscious experiences meriting ethical consideration. But we might – possibly soon – cross that crucial ethical line. We should be prepared for this.
John Basl & Eric Schwitzgebel
This article was originally published at Aeon and has been republished under Creative Commons.
An innovation may lead to lifelike evolving machines.
- Scientists at Cornell University devise a material with 3 key traits of life.
- The goal for the researchers is not to create life but lifelike machines.
- The researchers were able to program metabolism into the material's DNA.
Cornell University engineers have created an artificial material that has three key traits of life — metabolism, self-assembly, and organization. The engineers were able to pull off such a feat by using DNA to make machines from biomaterials that have the characteristics of living things.
Dubbing their process DASH, for "DNA-based Assembly and Synthesis of Hierarchical materials," the scientists made a DNA material that has metabolism — the set of chemical processes that convert food into the energy necessary for the maintenance of life.
The goal for the scientists is not to create a lifeform but a machine with lifelike characteristics. As Dan Luo, professor of biological and environmental engineering, put it: "We are not making something that's alive, but we are creating materials that are much more lifelike than have ever been seen before."
The major innovation here is the programmed metabolism that is coded into the DNA materials. The set of instructions for metabolism and autonomous regeneration allows the material to grow on its own.
In their paper, the scientists described the metabolism as the system by which "the materials comprising life are synthesized, assembled, dissipated, and decomposed autonomously in a controlled, hierarchical manner using biological processes."
To keep going, a living organism must be able to generate new cells while discarding old ones and waste. It is this process that the Cornell scientists duplicated using DASH. They devised a biomaterial that can emerge on its own from nanoscale building blocks, arranging itself first into polymers and then into mesoscale shapes.
The DNA molecules in the material were duplicated hundreds of thousands of times, yielding chains of repeating DNA a few millimeters long. The reaction solution was then injected into a special microfluidic device that facilitated biosynthesis. As the flow washed over the material, the DNA synthesized its own new strands. The material even exhibited its own locomotion: the front end grew while the tail end degraded, causing it to creep forward.
This locomotion allowed the researchers to pit portions of the material against one another in races.
"The designs are still primitive, but they showed a new route to create dynamic machines from biomolecules. We are at a first step of building lifelike robots by artificial metabolism," explained Shogo Hamada, the lead and co-corresponding author of the paper as well as a lecturer and research associate in the Luo lab. "Even from a simple design, we were able to create sophisticated behaviors like racing. Artificial metabolism could open a new frontier in robotics."
The material that was created lasted for two cycles of synthesis and degradation, but the researchers think its longevity can be extended. That could allow more generations of the material, eventually leading to "lifelike self-reproducing machines," said Hamada.
He also foresees that the system could open up a "self-evolutionary possibility."
Next for the material? The engineers are looking at how to get it to react to stimuli and seek out light or food on its own. They also want it to be able to avoid harmful stimuli.
You can read the new paper, "Dynamic DNA Material With Emergent Locomotion Behavior Powered by Artificial Metabolism," in the April 10th issue of Science Robotics.
Upload your mind? Here's a reality check on the Singularity.
- Though computer engineers claim to know what human consciousness is, many neuroscientists say that we're nowhere close to understanding what it is, or its source.
- Scientists are currently trying to upload human minds to silicon chips, or re-create consciousness with algorithms, but this may be hubristic because we still know so little about what it means to be human.
- Is transhumanism a journey forward or an escape from reality?