Some say the proliferation of sex robots could lead to less demand for prostitution, but not all agree.
- A Toronto-based sex robot brothel plans to open another location in Houston.
- Some critics argue that the proliferation of sex robots would lead to increases in prostitution and sex trafficking.
- Others say that such technology could help some people find a degree of much-needed companionship.
There are currently no laws against opening a sex robot brothel in Houston, but recently announced plans to open one have some residents saying there should be.
The owner of Kinky S Dolls, a Toronto-based company where $120 gets customers 80 minutes alone with a robotic sex doll that moves and talks, plans to open another location in the Houston area. It would be the first sex robot brothel in the U.S.
On advice from counsel, owner Yuval Gavriel doesn't call his business a 'sex robot brothel' but rather a kind of try-it-before-you-buy-it shop for realistic sex dolls, which he sells for $2,000 to $5,000.
"I consulted with a lawyer and the lawyer said, 'Listen, there are no rules to it, but if you are smart you don't go out and say you are operating a brothel,'" Gavriel told the Washington Examiner. "He went through all the laws and all of the regulations and currently there are no regulations for this kind of service. The States is a bigger market, and a healthier market, and God bless Trump."
A sex doll sold by Kinky S Dolls for about $3,500.
Sex dolls and robots may be legal in the U.S., but some believe that establishing what's essentially a robot sex brothel would cross a line. In response to Gavriel's plans, Elijah Rising, a Christian organization in Houston that combats sex trafficking, published a petition titled 'Keep Robot Brothels Out Of Houston'.
"As a nonprofit whose mission is to end sex trafficking we have seen the progression as sex buyers go from pornography to strip clubs to purchasing sex—robot brothels will ultimately harm men, their understanding of healthy sexuality, and increase the demand for the prostitution and sexual exploitation of women and children," reads the petition, which currently has nearly 6,000 signatures.
Elijah Rising's argument is based on a paper written by Kathleen Richardson, a professor of ethics and culture of robots at De Montfort University.
"I propose that extending relations of prostitution into machines is neither ethical, nor is it safe," Richardson argues in the paper. "If anything the development of sex robots will further reinforce relations of power that do not recognise both parties as human subjects. Only the buyer of sex is recognised as a subject, the seller of sex (and by virtue the sex-robot) is merely a thing to have sex with."
How would sex robots affect rates of prostitution?
One argument, to which Gavriel subscribes, says that increased availability of sex robots would reduce the demand for human prostitutes. It's an idea tangentially related to the longstanding body of research that shows countries tend to see decreases in sexual assaults and rape after they legalize porn.
In his bestselling book Love and Sex with Robots, A.I. researcher David Levy explores the future of human relationships with robots and suggests that sex robots could reduce prostitution rates, or even someday render it obsolete.
But that's "highly speculative philosophy," according to Richardson.
"The reality is that it will just become a new niche market within the pornography industry and within the prostitution trade," she said in an interview with Feminist Current. "If people buy into the idea that you can have these dolls as part of your sexual fetish, it will become another burden that actual living human beings will have to undergo in the commercial sex trade."
A sex doll sold by Kinky S Dolls.
Richardson elaborated on this idea in her paper.
"...studies have found that the introduction of new technology supports and contributes to the expansion of the sex industry," she wrote. "Prostitution and pornography production also rises with the growth of the internet. In 1990, 5.6 percent of men reported paying for sex in their lifetime, by 2000, this had increased to 8.8 percent."
However, those rates aren't necessarily causally linked.
Richardson also wrote that if sex toys, such as RealDolls and blow-up dolls, actually led to lower prostitution demand then we would have already seen decreases, but "no such correlation is found."
Still, that last point may soon become an apples-to-oranges comparison if technology produces artificially intelligent, lifelike sex robots unlike anything the industry has seen before.
An illusion of companionship
Image: Film4, from the 2015 film 'Ex Machina'
Critics argue that the proliferation of sex robots would serve to reinforce the objectification of women in men's minds, and also reduce the ability for some men to empathize, a necessary component of healthy social interaction.
Houstonian Andrea Paul voiced a simpler objection to the brothel:
"There's kids around here and it's a family-oriented neighborhood and I live right here and to have that here is just gross."
Gross, sure. But to Matt McMullen, creator of the RealDoll, the future of sex robots looks a bit more uplifting.
"My goal, in a very simple way, is to make people happy," McMullen told CNET. "There are a lot of people out there, for one reason or another, who have difficulty forming traditional relationships with other people. It's really all about giving those people some level of companionship—or the illusion of companionship."
Autonomous cars are coming down the pike, and they’re going to change our lives in so many ways. Consider that 94% of all car accidents are due to human error. Self-driving cars are expected to be safer, more reliable, and much more environmentally friendly. They might also cut down on traffic and commutes.
So when will fully autonomous, self-driving cars arrive? Elon Musk said Tesla’s model will be ready by 2019, though he admitted in an August earnings call that the unveiling is likely to be delayed. One advantage: all Tesla models already come with the hardware to become fully autonomous once the capability is available.
A recent analysis predicts that self-driving cars will have a noticeable presence on roads by 2020, yet they aren’t expected to be ubiquitous until 2040, by which point 95% of all cars on the road are projected to be autonomous. Meanwhile, self-driving trucks are already making deliveries from Texas to California.
Self-driving cars use an array of sensors and cameras to maneuver within the environment. Credit: Getty Images.
Although tech companies like Apple, Google, and Uber have jumped into the autonomous vehicle game, analysts say traditional automakers have a leg up, mostly because they already have the infrastructure required to pump out millions of such cars per year. Ford is now going one step further. The company has filed a patent for an autonomous police cruiser.
The website Motor 1 broke the story. Writer Christopher Smith discovered Ford’s plan while leafing through the company’s most recent patent applications. The cruiser will require a complex A.I. system which has yet to be developed.
It needs to operate at Level 4 autonomy or better. At Level 4, the car can handle itself without a human constantly in control, though a driver may be required for some functions. At Level 5, no human is required at all. Ford and GM are developing Level 4 and 5 models, as is a company called Waymo.
The self-driving cop car would be supported by an elaborate system, including on-board and roadside sensors and surveillance cameras. These would detect infractions. Depending on the type and level of violation, the car would decide whether to go in pursuit or remotely issue a citation for an infraction.
Illustration for Ford’s proposed self-driving police car. Credit: U.S. Patent Office.
Relying on wireless, vehicle-to-vehicle communication, the autonomous cruiser would be able to pull up your driver’s license information (if your car wasn’t self-driving), check your speedometer, and even get footage from red light cameras along your route. Then, it would decide what to do.
There’s been no word yet on how it would make such decisions. Not only could this lead to the loss of police jobs, but it would spell the end of the time-honored tradition of roadside arbitration. At least today, you have a chance of talking the cop out of a ticket.
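Ford’s patent doesn’t publish the decision logic, so any sketch of it is purely hypothetical. Here is one guess at its shape, with violation names and thresholds invented for illustration:

```python
# Hypothetical sketch of the citation-vs-pursuit decision described in the
# patent. Assumption: Ford's actual logic is unpublished; the violation
# categories and speed threshold below are invented for illustration.

REMOTE_CITATION = "remote_citation"
PURSUE = "pursue"

def respond(violation, speed_over_limit_mph):
    """Decide the cruiser's response from sensor-detected infraction data."""
    minor = {"expired_registration", "broken_taillight"}
    if violation in minor:
        return REMOTE_CITATION
    if violation == "speeding" and speed_over_limit_mph < 15:
        return REMOTE_CITATION
    # Anything else (reckless driving, major speeding) warrants a stop.
    return PURSUE
```

Whatever the real rules turn out to be, the interesting design question is where that threshold sits and who gets to audit it.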
Ford believes the vehicle could help find highway patrol officers better places to hide from passing motorists, and the self-driving option might free up an on-board officer, who would spend his or her time performing tasks that the computer doesn’t do well. Giving such technology powers over the public generates a lot of questions and anxiety.
Could it get hacked? Would freeing up the officer allow him or her to look more closely into passing motorists’ immigration status, and whether or not they have outstanding warrants? Would the robocar understand special circumstances, such as an anxious husband rushing his pregnant wife to the hospital? Another question is exactly how fair such a system would be.
Although on the surface, A.I. looks as if it’s free of the prejudices normal humans carry, research has shown quite the opposite. A.I. adopts the biases of programmers and any humans it interacts with. Considering problems with racial profiling and systemic racial injustice in the criminal justice system, there’s a fear that such a vehicle would merely extend inherent biases, perhaps in a way that’s trickier to recognize.
Keep in mind that not every patent leads to a finished product. Still, these are elements to weigh carefully, should a Robocop on wheels become a reality for everyday motorists.
Police in Dubai already use self-driving cars.
Poaching takes a brutal toll on the world’s wildlife every year. By the thousands, rhinos are killed for their horns, elephants for their ivory, and tigers for their bones and exotic pelts. To protect these animals, rangers and conservationists must monitor enormous swaths of land, day and night, looking for poachers who trade on a black market estimated to total $40 billion. It’s impossible to stop every poacher.
New technology could bolster the efforts of conservationists, though, by putting a set of eyes in the sky. Air Shepherd, a conservation group, recently field-tested an AI drone system that automatically detects humans and animals through infrared thermal imaging. The SPOT (Systematic POacher deTector) system, developed by researchers from Carnegie Mellon, the University of Southern California, and Microsoft, can be operated on a common laptop with a wireless internet connection, giving park rangers advance knowledge of poachers’ movements so they can be intercepted. It could also give rangers a heads-up when they’re heading toward a large group of armed poachers.
The researchers trained the system through deep learning, a branch of A.I. that seeks to enable computers to learn and recognize patterns in the world — images of animals and poachers, in this case. First, the SPOT system was shown a series of images in which humans had marked where the animals and humans were. Then, the system used that information to learn about what to look for on its mission.
A paper published by the researchers in November 2017 describes the deep-learning process in greater detail.
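The label-then-learn loop described above can be illustrated with a deliberately tiny stand-in model. To be clear about assumptions: the real SPOT system uses deep convolutional networks on infrared video, not a perceptron on 5×5 grids; this toy only shows the supervised-training idea.

```python
# Toy sketch of supervised training on human-labeled "thermal" patches.
# Assumption: this is NOT the real SPOT pipeline, just the labeling idea.

def make_patch(hot):
    """5x5 'thermal' patch; hot=True plants a warm body in the center."""
    patch = [[0.1] * 5 for _ in range(5)]
    if hot:
        for r in range(1, 4):
            for c in range(1, 4):
                patch[r][c] = 0.9
    return patch

def features(patch):
    """Flatten the patch and append a bias term."""
    return [v for row in patch for v in row] + [1.0]

def train_perceptron(labeled_data, epochs=20, lr=0.5):
    """Learn weights from (patch, label) pairs marked by humans."""
    w = [0.0] * 26
    for _ in range(epochs):
        for patch, label in labeled_data:
            x = features(patch)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            err = label - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

def detect(w, patch):
    """Flag a patch as containing a warm body (human or animal)."""
    return sum(wi * xi for wi, xi in zip(w, features(patch))) > 0

# "Humans marked where the animals and humans were": labeled examples.
labeled = [(make_patch(True), 1), (make_patch(False), 0)]
w = train_perceptron(labeled)
```

The real system does the same thing at scale: labeled frames in, learned detector out, then inference on fresh drone footage.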
A machine learning algorithm has shown it can discover planets from weak signals overlooked in the Kepler spacecraft’s database.
For a long time, humans in the Western world thought that Earth was the center of the universe. At one point, it was heresy to think otherwise. After the heliocentric model was adopted, we felt smaller and less self-important. But we’d also gained something: new knowledge and a new avenue for exploring the heavens. That was a paradigm shift in our understanding, and now it’s happening again.
We now know that our planet isn’t special in certain other respects. In terms of inhabiting a “goldilocks zone” that could harbor life, Earth is not the only planet that’s neither too hot nor too cold. Liquid water and an atmosphere are no longer considered luxuries, either. In short, given this and the mathematical possibilities, many scientists believe it’s only a matter of time before we find life somewhere else. After all, hundreds of billions of stars inhabit our galaxy alone, which implies an enormous number of right-sized planets within habitable zones. And even that is a conservative assessment.
Kepler is a space telescope designed to scour a section of the Milky Way for exoplanets, planets found beyond our solar system. Before Kepler, astronomers wondered whether planets were common or rare. Launched in 2009, Kepler has discovered over 2,500 confirmed exoplanets, including 30 in the habitable zone that are each less than twice the size of Earth. In total, there are currently about 3,500 confirmed exoplanets.
The Kepler spacecraft. Credit: NASA.
The problem with Kepler is that it collected reams of data, so much that no one could go through it all. Scientists chose to focus on the candidates with strong signals. The weak signals, however, could be a treasure trove for A.I. So Christopher Shallue, a senior software engineer at Google Brain, and Andrew Vanderburg, a NASA Sagan Postdoctoral Fellow at the University of Texas at Austin, decided to have a crack at it. They employed machine learning, an A.I. field that’s making incredible headway. Their A.I. uses artificial “neural networks” loosely modeled on the brain, albeit a far simpler version.
In a recent press conference, Shallue and Vanderburg explained how they trained their A.I. program to identify exoplanets from the Kepler database. According to the scientists, you train neural networks not by programming them but by exposing them to what you want them to recognize. For instance, if you want it to identify puppies and kittens, you show it plenty of pictures of them. After a while, it’ll get good at recognizing them.
Except, here they didn’t show it pets. Instead, they taught it to read minuscule light changes in the brightness of a star which occur when a planet transits or passes in front of it. After enough practice, they let it loose on light recordings captured by Kepler. What the A.I. discovered is that our solar system isn’t as unique as we thought. Instead of being the only eight-planet one, we’re now one of two (that we know of).
NASA introduced the Kepler 90 system in an historic announcement on December 14, 2017. "The Kepler-90 star system is like a mini version of our solar system. You have small planets inside and big planets outside, but everything is scrunched in much closer," said Andrew Vanderburg, astronomer and NASA Sagan Postdoctoral Fellow at The University of Texas, Austin.
Experts speculate that an eight-planet solar system may in fact be common. It also seems that a solar system with smaller planets closer in and larger ones farther out may not be so rare. Hopefully, future explorations will help us understand exactly how planetary systems form, as the discovery of new exoplanets has disrupted many of the theories astronomers developed from studying our own.
Credit: NASA/Ames Research Center/Wendy Stenzel.
The discovery began with just one planet. The A.I. found a previously missed weak transit signal from a planet known as Kepler-90i. It’s in a planetary system called Kepler-90, located in the constellation Draco, some 2,545 light years from Earth. The new planet is extremely hot, with an average surface temperature of 800 degrees Fahrenheit, about as hot as Mercury. Its year is incredibly short: it orbits its star once every 14.4 days. This system probably isn’t the best candidate for life. What’s game-changing, besides our solar system slipping from its unique place, is the ability to use machine learning to detect previously unrecognized exoplanets. The program also located a sixth planet in the Kepler-80 system.
A.I. had been used previously to scour the Kepler database, but these findings show that artificial neural networks are particularly adept at the task. The idea was first posited by Google’s Shallue who, while studying astronomy in his free time, heard about how the discipline was drowning in data. “Machine learning really shines in situations where there is so much data that humans can’t search it for themselves,” he said.
Paul Hertz is the director of NASA’s Astrophysics Division in Washington. “Just as we expected, there are exciting discoveries lurking in our archived Kepler data, waiting for the right tool or technology to unearth them,” he said. “This finding shows that our data will be a treasure trove available to innovative researchers for years to come.” Shallue and Vanderburg consider this a successful proof-of-concept study. In the study, described in a paper to be published in The Astronomical Journal, the A.I. scanned 670 stars. In the future, they plan to have it study all 150,000 stars Kepler has observed.
Today, we still consider our planet special, as it’s the only known place to harbor life. One wonders how long it will hold that distinction. So far, the A.I. can’t determine whether an exoplanet is a good candidate for life, but with technology and computing power advancing so fast, that ability shouldn’t be far away.
Researchers at Human Longevity have developed technology that can generate images of an individual's face using only their genetic information. But not all are convinced.
What if a computer could generate a realistic image of your face using only your genetic information?
That's precisely the technology researchers at Human Longevity, a San Diego-based company with the world's largest genomic database, claim to have developed. The team, led by genome-sequencing pioneer Craig Venter, reported their findings in a controversial paper published in the journal Proceedings of the National Academy of Sciences.
To train the A.I. to generate facial images, the team first sequenced the genomes of 1,061 people of various ages and ethnicity. They also took high-definition 3D photos of each participant. Finally, they fed the photos and genetic information to an algorithm that taught itself how small differences in DNA relate to facial features, like cheekbone height or protrusion of the brow. The algorithm was then given genomes it hadn't seen before, and it used them to generate images of the individual's face that could be reliably matched to real photos.
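At a toy scale, the pipeline described above amounts to a predictor from genetic variants to face features, plus a nearest-neighbor match of each predicted face against the real photos. Everything below is illustrative by assumption: the cohort, the SNP weights, and the two face features are invented, and the actual model is far more sophisticated.

```python
# Toy sketch of DNA-to-face prediction and photo matching.
# Assumption: invented data and a linear map stand in for the real model.

def predict_face(snps, weights):
    """Map a binary SNP vector to face features (e.g. cheekbone height)."""
    return [sum(w * s for w, s in zip(row, snps)) for row in weights]

def nearest(pred, real_faces):
    """Index of the real face closest to the prediction (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(real_faces)), key=lambda i: dist(pred, real_faces[i]))

# Illustrative cohort: 3 people, 4 SNPs, 2 face features.
weights = [[1.0, 0.0, 0.5, 0.0],
           [0.0, 1.0, 0.0, 0.5]]
snp_db = [[1, 0, 0, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
real_faces = [predict_face(s, weights) for s in snp_db]  # perfect-model toy

# A "match" succeeds when the predicted face is nearest the person's photo.
matches = [nearest(predict_face(s, weights), real_faces) for s in snp_db]
```

The study's eight-out-of-ten figure is exactly this kind of nearest-match accuracy, measured with real photos and a learned, imperfect predictor.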
Well... sort of.
The team successfully matched eight out of ten images to the real photos. However, the rate fell to just five out of ten when researchers restricted the analysis to participants of a single race, suggesting the algorithm leaned on race-linked features that can't distinguish individuals within one race. Judge for yourself how well the algorithm did:
The potential applications of this technology are especially intriguing for fields like forensic science — what if investigators were able to use genetic information left at a crime scene to “see” the perpetrator?
Interesting as the applications may be, Human Longevity is more concerned with the implications its findings have for privacy in genomics research, namely that technologies like this could be used to match people's supposedly anonymous genetic information to their online photos.
“A core belief from the HLI researchers is that there is now no such thing as true deidentification and full privacy in publicly accessible databases,” HLI said in a statement.
Privacy concerns seem to be widely shared in the community. But some scientists say that the paper is misleading. One reason is that the Human Longevity researchers already knew the age, sex and race of the participants — demographic information that could have been used to achieve the same matching rate without using the computer-generated photos at all.
“I don't think this paper raises those risks, because they haven’t demonstrated any ability to individuate this person from DNA,” said Mark Shriver, an anthropologist at Pennsylvania State University in University Park, in an interview with Nature.
Jason Piper, a former employee of Human Longevity, took issue with what he considered a lack of accuracy in the images, writing on Twitter that:
“everyone looks close to the average of their race, everyone looks like their prediction.”
But perhaps the most exhaustive criticism came from computational biologist Yaniv Erlich, who published a paper entitled Major flaws in "Identification of individuals by trait prediction using whole-genome sequencing data", part of which reads:
“The results of the authors are unremarkable. I achieved a similar re-identification accuracy with the Venter cohort in 10 minutes of work without fancy face morphology...”
Just days later, the team behind the original paper issued a rebuttal, titled simply No major flaws in "Identification of individuals by trait prediction using whole-genome sequencing data".
(It may seem mundane to those outside the field, but it's a pretty vicious beef in the scientific community at the moment, as seen by the "shots fired!" and "I'm gonna grab my popcorn..." comments under both papers.)
Access to genomics data
Underlying this whole debate is a question of access. Genomic data is used across various fields of study, but perhaps most importantly in research that seeks to combat diseases. In an interview with Nature, Piper said that Human Longevity has a vested interest in restricting access to DNA databases because it's a for-profit company that's trying to build the largest genome database in the world.
“I think genetic privacy is very important, but the approach being taken is the wrong one,” Piper said. “In order to get more information out of the genome, people have to share.”
Rather than privatizing and restricting access to genomic data, Piper said that a better solution would be to make data public while using techniques that still allow individuals to remain anonymous.