Does conscious AI deserve rights?
If machines develop consciousness, or if we manage to give it to them, the human-robot dynamic will forever be different.
RICHARD DAWKINS: When we come to artificial intelligence and the possibility of their becoming conscious, we reach a profound philosophical difficulty. I am a philosophical naturalist; I'm committed to the view that there is nothing in our brains that violates the laws of physics, there's nothing that could not, in principle, be reproduced in technology. It hasn't been done yet; we're probably quite a long way away from it, but I see no reason why in the future we shouldn't reach the point where a human-made robot is capable of consciousness and of feeling pain.
BABY X: Da. Da.
MARK SAGAR: Yes, that's right. Very good.
BABY X: Da. Da.
MARK SAGAR: Yeah.
BABY X: Da. Da.
MARK SAGAR: That's right.
JOANNA BRYSON: So, one of the things that we did last year, which was pretty cool and made headlines, was replicating some psychology work about implicit bias. Actually, the best headline was something like 'Scientists show that AI is sexist and racist and it's our fault,' which is pretty accurate, because it really is about picking things up from our society. Anyway, the point was: here is an AI system that is so humanlike that it's picked up our prejudices, and it's just vectors. It's not an ape, it's not going to take over the world, it's not going to do anything; it's just a representation, like a photograph. We can't trust our intuitions about these things.
SUSAN SCHNEIDER: So why should we care about whether artificial intelligence is conscious? Well, given the rapid-fire developments in artificial intelligence, it wouldn't be surprising if within the next 30 to 80 years we start developing very sophisticated general intelligences. They may not be precisely like humans, and they may not be as smart as us, but they may be sentient beings. If they could be conscious, we need ways of determining whether that's the case. It would be awful if, for example, we sent them to fight our wars, forced them to clean our houses, made them essentially a slave class. We don't want to make that mistake; we want to be sensitive to those issues, so we have to develop ways to determine whether artificial intelligence is conscious or not.
ALEX GARLAND: The Turing Test was a test set by Alan Turing, the father of modern computing. He understood that at some point the machines they were working on could become thinking machines as opposed to just calculating machines and he devised a very simple test.
DOMHNALL GLEESON (IN CHARACTER): It's when a human interacts with a computer and if the human doesn't know they're interacting with a computer the test is passed.
DOMHNALL GLEESON: And this Turing Test is a real thing and it's never, ever been passed.
ALEX GARLAND: What the film does is engage with the idea that it will, at some point, happen. The question is what that leads to.
MARK SAGAR: So she can see me and hear me. Hey, sweetheart, smile at Dad. Now, she's not copying my smile, she's responding to my smile. We've got different sorts of neuromodulators, which you can see up here. So, for example, I'm going to abandon the baby, I'm just going to go away and she's going to start wondering where I've gone. And if you watch up where the mouse is you should start seeing cortisol levels and other sorts of neuromodulators rising. She's going to get increasingly—this is a mammalian maternal separation distress response. It's okay sweetheart. It's okay. Aw. It's okay. Hey. It's okay.
RICHARD DAWKINS: This is profoundly disturbing because it goes against the grain to think that a machine made of metal and silicon chips could feel pain, but I don't see why they would not. And so, this moral consideration of how to treat artificially intelligent robots will arise in the future and it's a problem which philosophers and moral philosophers are already talking about.
SUSAN SCHNEIDER: So, suppose we figure out ways to devise consciousness in machines. It may be the case that we want to deliberately make sure that certain machines are not conscious. Consider, for example, a machine that we would send to dismantle a nuclear reactor, quite possibly sending it to its death, or a machine that we'd send to a war zone. Would we really want to send conscious machines in those circumstances? Would it be ethical? You might say, well, maybe we can tweak their minds so they enjoy what they're doing or don't mind the sacrifice, but that gets into deep engineering issues that are actually ethical in nature. They go back to Brave New World, where humans were genetically engineered and took a drug called soma so that they would want to live the lives they were given. So, we have to really think about the right approach. It may be the case that we deliberately devise machines for certain tasks that are not conscious.
MAX TEGMARK: Some people might prefer that their future home helper robot is an unconscious zombie so they don't have to feel guilty about giving it boring chores or powering it down, some people might prefer that it's conscious so that there can be a positive experience in there, and so they don't feel creeped out by this machine just faking it and pretending to be conscious even though it's a zombie.
JOANNA BRYSON: When will we know for sure that we need to worry about robots? Well, there are a lot of questions there, but consciousness is another one of those words. The word I like to use is moral patient; it's a technical term the philosophers came up with, and it means exactly something that we are obliged to take care of. So, now we can have this conversation: If you just mean that conscious means moral patient, then it's no great assumption to say, well, if it's conscious, then we need to take care of it. But it's way more interesting if you can ask: Does consciousness necessitate moral patiency? And then we can sit down and say, well, it depends what you mean by consciousness. People use consciousness to mean a lot of different things.
For a lot of people this rubs them the wrong way because they've watched Blade Runner or the movie A.I. or something like that. In a lot of those movies we're not really talking about AI, about something designed from the ground up; we're talking basically about clones. And clones are a different situation. If you have something that's exactly like a person, however it was made, then okay, it's exactly like a person and it needs that kind of protection. But people think it's unethical to create human clones partly because they don't want to burden someone with the knowledge that they're supposed to be someone else, that there was some other person who chose them to be that person. I don't know if we'll be able to stick to that, but I would say that AI clones fall into the same category. If you were really going to make something and then say, hey, congratulations, you're me and you have to do what I say: I wouldn't want myself to tell me what to do, if that makes sense, if there were two of me. I think we'd both like to be equals, and so you don't want that to be an artifact of something that you've deliberately built and that you're going to own. If you have something that's sort of a humanoid servant that you own, then the word for that is slave. And so, I was trying to establish that, look, we are going to own anything we build, and therefore it would be wrong to make it a person, because we've already established that slavery of people is wrong and bad and illegal. It never occurred to me that people would take that to mean, oh, the robots will be people that we just treat really badly. No, that's exactly the opposite.
We give things rights because that's the best way we can find to handle very complicated situations. And the things that we give rights to are basically people. Some people argue about animals, but technically, and again this depends on whose technical definition you use, rights are usually things that come with responsibilities and that you can defend in a court of law. So, normally we talk about animal welfare and we talk about human rights. With artificial intelligence you can even imagine it knowing its rights and defending itself in a court of law, but the question is: Why would we need to protect the artificial intelligence with rights? Why is that the best way to protect it? With humans it's because we're fragile, because there's only one of each of us and, this is horribly reductionist, but I think it's just the best way that we've found to be able to cooperate. It's an acknowledgment of the fact that we're all basically made of the same stuff, and we had to come up with some kind of, the technical term again is equilibrium, some way to share the planet. We haven't managed to do it completely fairly, like everybody gets the same amount of space, but actually we all want to be recognized for our achievements, so even completely fair isn't completely fair, if that makes sense. And I don't mean to be facetious; it really is true that you can't make all the things you would like out of fairness be true at once. That's just a fact about the world; it's a fact about the way we define fairness. So, given how hard it is to be fair, why should we build AI that needs us to be fair to it? What I'm trying to do is make the problem simpler and focus us on the thing we can't help, which is the human condition.
And I'm recommending that if you can specify the context in which something would really need rights, then once we've established that, don't build that thing.
PETER SINGER: Exactly where we would place robots would depend on what capacities we believe they have. I can imagine that we might create robots that are limited to the intelligence level of nonhuman animals, perhaps not even the smartest nonhuman animals. They could still perform routine tasks for us; they could fetch things for us on voice command. That's not very hard to imagine. But I don't think that would necessarily be a sentient being. And so, if it was just a robot whose workings we understood exactly, which is not very far from what we have now, I don't think it would be entitled to any rights or moral status. But if it was at a higher level than that, if we were convinced that it was a conscious being, then the kind of moral status it would have would depend on exactly what level of consciousness and what level of awareness. Is it more like a pig, for example? Well, then it should have the same rights as a pig, which, by the way, I think we are violating every day on a massive scale by the way we treat pigs in factory farms. So, I'm not saying such a robot should be treated like pigs are treated in our society today. On the contrary, it should be treated with respect for its desires and awareness and its capacities to feel pain and its social nature; all of those things that we ought to take into account when we are responsible for the lives of pigs we would also have to take into account when we are responsible for the lives of robots at a similar level. But if we created robots who are at our level, then I think we would have to give them really the same rights that we have. There would be no justification for saying, ah yes, but we're a biological creature and you're a robot; I don't think that has anything to do with the moral status of a being.
GLENN COHEN: One possibility is you say: A necessary condition for being a person is being a human being. Many people are attracted to that argument and say: Only humans can be persons. All persons are humans. Now, it may be that not all humans are persons, but all persons are humans. Well, there's a problem with that, and it's put most forcefully by the bioethicist and philosopher Peter Singer, who says that to deny a being rights or moral consideration on the mere basis that it is not a member of your species is morally equivalent to denying someone rights or moral consideration on the basis of their race. So, he says, speciesism equals racism. And the argument is: Imagine that you encountered someone who is just like you in every possible respect, but it turned out they were not actually a member of the human species; they were a Martian, let's say, or a robot, but truly exactly like you. Why would you be justified in giving them less moral regard?
So, people who believe in capacity X views have to at least be open to the possibility that artificial intelligence could have the relevant capacities, even though it's not human, and would therefore qualify for personhood. On the other side of the continuum, one of the implications is that you might have members of the human species who aren't persons. Anencephalic children, children born with very little above the brain stem in terms of their brain structure, are often given as an example. They're clearly members of the human species, but their abilities to have the kinds of capacities most people think matter are relatively few and far between. So, you get into this uncomfortable position where you might be forced to recognize that some humans are non-persons and some non-humans are persons.
Now again, if you bite the bullet and say 'I'm willing to be a speciesist; being a member of the human species is either necessary or sufficient for being a person,' you avoid this problem entirely. But if not, you at least have to be open to the possibility that artificial intelligence in particular may at one point become person-like and have the rights of persons. And I think that scares a lot of people, but in reality, to me, when you look at the course of human history and see how willy-nilly we were in declaring some people non-persons under the law, slaves in this country, for example, it seems to me a little humility and a little openness to this idea may not be the worst thing in the world.
- Does AI—and, more specifically, conscious AI—deserve moral rights? In this thought exploration, evolutionary biologist Richard Dawkins, ethics and tech professor Joanna Bryson, philosopher and cognitive scientist Susan Schneider, physicist Max Tegmark, philosopher Peter Singer, and bioethicist Glenn Cohen all weigh in on the question of AI rights.
- Given the grave tragedy of slavery throughout human history, philosophers and technologists must answer this question before the technology arrives, so that humanity avoids creating a slave class of conscious beings.
- One potential safeguard against that? Regulation. Once we define the context in which AI requires rights, the simplest solution may be to not build that thing.
The way an elephant manipulates its trunk to eat and drink could lead to better robots, researchers say.
Elephants dilate their nostrils to create more space in their trunks, allowing them to store up to 5.5 liters (1.45 gallons) of water, according to their new study.
They can also suck up three liters (0.79 gallons) of water per second, inhaling at speeds of up to 150 meters per second (330 mph), roughly 30 times faster than a human sneeze, the researchers found.
The researchers wanted to better understand the physics of how elephants use their trunks to move and manipulate air, water, food, and other objects. They also wanted to learn if the mechanics could inspire the creation of more efficient robots that use air motion to hold and move things.
While octopuses use jets of water to propel themselves and archer fish shoot water above the surface to catch insects, elephants are the only animals able to use suction both on land and underwater.
"An elephant eats about 400 pounds of food a day, but very little is known about how they use their trunks to pick up lightweight food and water for 18 hours, every day," says lead author Andrew Schulz, a mechanical engineering PhD student at the Georgia Institute of Technology. "It turns out their trunks act like suitcases, capable of expanding when necessary."
Sucking up tortilla chips without breaking them
Schulz and his colleagues worked with veterinarians at Zoo Atlanta, studying elephants as they ate various foods. For large rutabaga cubes, for example, the animal grabbed and collected them. It sucked up smaller cubes and made a loud vacuuming sound, like the sound of a person slurping noodles, before transferring the vegetables to its mouth.
To learn more about suction, the researchers gave elephants a tortilla chip and measured the applied force. Sometimes the animal pressed down on the chip and breathed in, suspending the chip on the tip of its trunk without breaking it, similar to a person inhaling a piece of paper onto their mouth. Other times the elephant applied suction from a distance, drawing the chip to the edge of its trunk.
"An elephant uses its trunk like a Swiss Army knife," says David Hu, Schulz's advisor and a professor in Georgia Tech's School of Mechanical Engineering. "It can detect scents and grab things. Other times it blows objects away like a leaf blower or sniffs them in like a vacuum."
By watching elephants inhale liquid from an aquarium, the team was able to time the durations and measure volume. In just 1.5 seconds, the trunk sucked up 3.7 liters (just shy of 1 gallon), the equivalent of 20 toilets flushing simultaneously.
Soft robots and elephant conservation
The researchers used an ultrasonic probe to take trunk wall measurements and see how the trunk's inner muscles work. By contracting those muscles, the animal dilates its nostrils up to 30%. This decreases the thickness of the walls and expands nasal volume by 64%.
"At first it didn't make sense: an elephant's nasal passage is relatively small and it was inhaling more water than it should," Schulz says. "It wasn't until we saw the ultrasonographic images and watched the nostrils expand that we realized how they did it. Air makes the walls open, and the animal can store far more water than we originally estimated."
Based on the pressures applied, Schulz and the team suggest that elephants inhale at speeds comparable to Japan's 300-mph bullet trains.
"By investigating the mechanics and physics behind trunk muscle movements, we can apply the physical mechanisms—combinations of suction and grasping—to find new ways to build robots," Schulz says.
"In the meantime, the African elephant is now listed as endangered because of poaching and loss of habitat. Its trunk makes it a unique species to study. By learning more about them, we can learn how to better conserve elephants in the wild."
The paper appears in the Journal of the Royal Society Interface. The US Army Research Laboratory and the US Army Research Office, Mechanical Sciences Division, Complex Dynamics and Systems Program, funded the work. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the sponsoring agency.
Source: Georgia Tech
Original Study DOI: 10.1098/rsif.2021.0215
The experience of life flashing before one's eyes has been reported for well over a century, but where's the science behind it?
At the age of 16, when Tony Kofi was an apprentice builder living in Nottingham, he fell from the third story of a building. Time seemed to slow down massively, and he saw a complex series of images flash before his eyes.
As he described it, “In my mind's eye I saw many, many things: children that I hadn't even had yet, friends that I had never seen but are now my friends. The thing that really stuck in my mind was playing an instrument.” Then Tony landed on his head and lost consciousness.
When he came to at the hospital, he felt like a different person and didn't want to return to his previous life. Over the following weeks, the images kept flashing back into his mind. He felt that he was “being shown something” and that the images represented his future.
Later, Tony saw a picture of a saxophone and recognized it as the instrument he'd seen himself playing. He used his compensation money from the accident to buy one. Now, Tony Kofi is one of the UK's most successful jazz musicians, having won the BBC Jazz awards twice, in 2005 and 2008.
Though Tony's belief that he saw into his future is uncommon, it's by no means uncommon for people to report witnessing multiple scenes from their past during split-second emergency situations. After all, this is where the phrase “my life flashed before my eyes” comes from.
But what explains this phenomenon? Psychologists have proposed a number of explanations, but I'd argue the key to understanding Tony's experience lies in a different interpretation of time itself.
When life flashes before our eyes
The experience of life flashing before one's eyes has been reported for well over a century. In 1892, a Swiss geologist named Albert Heim fell from a precipice while mountain climbing. In his account of the fall, he wrote it was “as if on a distant stage, my whole past life [was] playing itself out in numerous scenes”.
More recently, in July 2005, a young woman called Gill Hicks was sitting near one of the bombs that exploded on the London Underground. In the minutes after the explosion, she hovered on the brink of death where, as she describes it: “my life was flashing before my eyes, flickering through every scene, every happy and sad moment, everything I have ever done, said, experienced”.
In some cases, people don't see a review of their whole lives, but a series of past experiences and events that have special significance to them.
Explaining life reviews
Perhaps surprisingly, given how common it is, the “life review experience” has been studied very little. A handful of theories have been put forward, but they're understandably tentative and rather vague.
For example, a group of Israeli researchers suggested in 2017 that our life events may exist as a continuum in our minds, and may come to the forefront in extreme conditions of psychological and physiological stress.
Another theory is that, when we're close to death, our memories suddenly “unload” themselves, like the contents of a skip being dumped. This could be related to “cortical disinhibition” – a breaking down of the normal regulatory processes of the brain – in highly stressful or dangerous situations, causing a “cascade” of mental impressions.
But the life review is usually reported as a serene and ordered experience, completely unlike the kind of chaotic cascade of experiences associated with cortical disinhibition. And none of these theories explain how it's possible for such a vast amount of information – in many cases, all the events of a person's life – to manifest itself in a period of a few seconds, and often far less.
Thinking in 'spatial' time
An alternative explanation is to think of time in a “spatial” sense. Our commonsense view of time is as an arrow that moves from the past through the present towards the future, in which we only have direct access to the present. But modern physics has cast doubt on this simple linear view of time.
Indeed, since Einstein's theory of relativity, some physicists have adopted a “spatial” view of time. They argue we live in a static “block universe” in which time is spread out in a kind of panorama where the past, the present and the future co-exist simultaneously.
The modern physicist Carlo Rovelli – author of the best-selling The Order of Time – also holds the view that linear time doesn't exist as a universal fact. This idea reflects the view of the philosopher Immanuel Kant, who argued that time is not an objectively real phenomenon, but a construct of the human mind.
This could explain why some people are able to review the events of their whole lives in an instant. A good deal of previous research – including my own – has suggested that our normal perception of time is simply a product of our normal state of consciousness.
In many altered states of consciousness, time slows down so dramatically that seconds seem to stretch out into minutes. This is a common feature of emergency situations, as well as states of deep meditation, experiences on psychedelic drugs and when athletes are “in the zone”.
The limits of understanding
But what about Tony Kofi's apparent visions of his future? Did he really glimpse scenes from his future life? Did he see himself playing the saxophone because somehow his future as a musician was already established?
There are obviously some mundane interpretations of Tony's experience. Perhaps, for instance, he became a saxophone player simply because he saw himself playing it in his vision. But I don't think it's impossible that Tony did glimpse future events.
If time really does exist in a spatial sense – and if it's true that time is a construct of the human mind – then perhaps in some way future events may already be present, just as past events are still present.
Admittedly, this is very difficult to make sense of. But why should everything make sense to us? As I have suggested in a recent book, there must be some aspects of reality that are beyond our comprehension. After all, we're just animals, with a limited awareness of reality. And perhaps more than any other phenomenon, this is especially true of time.
A school lesson leads to more precise measurements of the extinct megalodon shark, one of the largest fish ever.
- A new method estimates the ancient megalodon shark was as long as 65 feet.
- The megalodon was one of the largest fish that ever lived.
- The new model uses the width of shark teeth to estimate its overall size.
A Florida student figured out a way to more accurately measure the size of one of the largest fish that ever lived – the extinct megalodon shark – and found that it was even larger than previously estimated.
The megalodon (officially named Otodus megalodon, which means "Big Tooth") lived between 23 and 3.6 million years ago and was thought to be about 34 feet long on average, reaching a maximum length of 60 feet. Now a new study puts that maximum at up to 65 feet (20 meters).
Homework assignment leads to a discovery
The study, published in Palaeontologia Electronica, used new equations extrapolated from the width of megalodon's teeth to make the improved estimates. The paper's lead author, Victor Perez, developed the revised methodology while he was a doctoral student at the Florida Museum of Natural History. He got the idea while teaching students, noticing a range of discrepancies in the results they were getting.
Students were supposed to calculate the size of megalodon based on the ancient fish's similarities to the modern great white shark. They utilized the commonly accepted method of linking the height of a shark's tooth to its total body length. As the press release from the Florida Museum of Natural History expounds, this method involves locating the anatomical position of a tooth in the shark's jaw, measuring the tooth "from the tip of the crown to the line where root and crown meet," and using that number in an appropriate equation.
But while carrying out calculations in this way, some of Perez's students estimated the shark would have been just 40 feet long, while others arrived at 148 feet. Teeth located toward the back of the mouth were yielding the largest estimates.
"I was going around, checking, like, did you use the wrong equation? Did you forget to convert your units?" said Perez, currently the assistant curator of paleontology at the Calvert Marine Museum in Maryland. "But it very quickly became clear that it was not the students that had made the error. It was simply that the equations were not as accurate as we had predicted."
Found in North Carolina, these 46 fossils are the most complete set of megalodon teeth ever excavated. Credit: Jeff Gage/Florida Museum
The new approach
Perez's math exercise demonstrated that the equations in use since 2002 were generating different size estimates for the same shark based on which tooth was being measured. Because megalodon teeth are most often found as standalone fossils, Perez focused on a nearly complete set of teeth donated by a fossil collector to design a new approach.
Perez also had help from Teddy Badaut, an avocational paleontologist in France, who suggested using tooth width instead of height, since width would be proportional to body length. Another collaborator on the revised method was Ronny Maik Leder, then a postdoctoral researcher at the Florida Museum, who helped develop the new set of equations.
The research team analyzed the widths of fossil teeth that came from 11 individual sharks of five species, which included megalodon and modern great white sharks, and created a model that connects how wide a tooth was to the size of the jaw for each species.
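The approach described above can be sketched as a simple per-species linear fit: relate tooth width to known body length for sharks of documented size, then apply the fitted line to an isolated fossil tooth. The numbers below are illustrative only, not the study's data:

```python
# Illustrative sketch of the width-to-length idea (hypothetical numbers,
# not the study's data): fit a straight line relating tooth width to body
# length for one species, then apply it to an isolated fossil tooth.

def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical (tooth width in cm, body length in m) pairs for one species.
tooth_width_cm = [2.0, 3.5, 5.0, 6.5, 8.0]
body_length_m = [4.1, 7.0, 10.2, 13.1, 16.0]

slope, intercept = fit_line(tooth_width_cm, body_length_m)

def estimate_length(width_cm):
    """Estimated body length (m) for a lone tooth of the given width."""
    return slope * width_cm + intercept

print(round(estimate_length(10.0), 1))  # → 20.0 (m), for a very large tooth
```

In the actual study, the fits were built from teeth of 11 individuals across five species; the sketch only illustrates the shape of the method, and real estimates carry the error ranges discussed below.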
"I was quite surprised that indeed no one had thought of this before," shared Leder, who is now director of the Natural History Museum in Leipzig, Germany. "The simple beauty of this method must have been too obvious to be seen. Our model was much more stable than previous approaches. This collaboration was a wonderful example of why working with amateur and hobby paleontologists is so important."
Why use teeth?
In general, almost nothing of the super-shark survives to this day other than a few vertebrae and a large number of big teeth. The megalodon's skeleton was made of lightweight cartilage that decomposed after death. But teeth, with enamel that preserves very well, are "probably the most structurally stable thing in living organisms," Perez said. Considering that a megalodon lost thousands of teeth during its lifetime, these are the best resources we have for figuring out information about these long-gone giants.
Researchers suggest megalodon's large jaws were very thick, made for grabbing prey and breaking its bones, exerting a bite force of 108,500 to 182,200 newtons.
Megalodon tooth compared to two great white shark teeth. Credit: Brocken Inaglory / Wikimedia.
Limitations of the new model
While the new model is better than previous methods, it's still far from perfect in precisely figuring out the sizes of animals which lived so long ago and left behind few if any full remains. Because individual sharks come in a variety of sizes, Perez warned that even their new estimates have an error range of about 10 feet when it comes to the largest animals.
Other ambiguities may affect the results, such as the width of the megalodon's jaw and the size of the gaps between its teeth, neither of which is accurately known. "There's still more that could be done, but that would probably require finding a complete skeleton at this point," Perez pointed out.
How did the megalodon go extinct?
Environmental changes that led to fluctuations in sea levels and disturbed ecosystems in the oceans likely led to the demise of these enormous ancient sharks. They were just too big to be sustained by diminishing food resources, says the ReefQuest Centre for Shark Research.
A 2018 study suggested that a supernova 2.6 million years ago hit Earth's atmosphere with so much cosmic energy that it resulted in climate change. The cosmic rays that included particles called muons might have caused a mass extinction of giant ocean animals ("the megafauna") that included the megalodon by causing mutations and cancer.
Scientists, led by Adrian Melott, professor emeritus of physics and astronomy at the University of Kansas, estimated that "the cancer rate would go up about 50 percent for something the size of a human — and the bigger you are, the worse it is. For an elephant or a whale, the radiation dose goes way up," as he explained in a press release.
Might as well face it, you're addicted to love.
- Many writers have commented on the addictive qualities of love. Science agrees.
- The reward system of the brain reacts similarly to both love and drugs.
- Someday, it might be possible to treat "love addiction."
People have written about love for as long as they have been writing; the oldest known love poem dates to the 21st century BCE. For most of that time, writers have also apparently been of two (or more) minds about it, declaring that love can be painful, impossible to quit, or even addictive, while also mentioning how nice it is.
The idea of love as an addiction is both familiar and unsettling. Surely the love we share with a partner, something that produces euphoria, consumes a great deal of our time, and that we fear losing, cannot be compared to a drug habit? Yet many scientists have turned their attention to the idea of "love addiction" and to how your brain on drugs might resemble your brain in love.
Love and other drugs
In a 2017 article published in the journal Philosophy, Psychiatry, & Psychology, a team of neuroethicists considered the idea that love is addictive and held it up to scientific scrutiny.
They point out that the leading model of addiction rests on the notion of a drug causing the brain to release an unnatural level of reward chemicals, such as dopamine, effectively hijacking the brain's reward system. This phenomenon isn't strictly limited to drugs, though drugs trigger it more powerfully than most other stimuli. Rats can get a rush from sugar similar to the one they get from cocaine, and they can suffer terrible withdrawal symptoms when the sugar crash kicks in.
On the structural level, there is a fair amount of overlap between the parts of the brain that handle love and pair-bonding and the parts that deal with addiction and reward processing. When inside an MRI machine and asked to think about the person they love romantically, the reward centers of people's brains light up like Broadway.
Love as an addiction
These facts lead the authors to consider two ideas, dubbed the "narrow" and "broad" views of love as an addiction.
The narrow view holds that addiction is the result of abnormal brain processes that simply don't exist in non-addicts. Under this paradigm, "food-seeking or love-seeking behaviors are not truly the result of addiction, no matter how addiction-like they may outwardly appear." It could be that abnormal processes cause the brain's reward system to misfire when exposed to love and to react to it excessively.
If this model is accurate, love addiction would be a rare condition (one study puts its prevalence at roughly five to ten percent of the population) but could be considered a disorder similar to others, caused by faulty wiring in the brain. As with other addictions, this malfunction of the reward system could lead to difficulty living a typical life, trouble maintaining healthy relationships, and a number of other negative consequences.
The broad view looks at addiction differently, perhaps even radically.
It begins with the idea that addiction exists on a spectrum of motivations. All of our appetites, including those for food and water, exist on this spectrum and activate similar parts of the brain when satisfied. We can have appetites for anything that taps into our reward system, including food, gambling, sex, drugs, and love. For most people most of the time, our appetites are fairly temperate, if recurring. I might be slightly "addicted" to food — I do need some a few times per day — but that "addiction" doesn't have any negative effects on my health.
An appetite for cocaine, however, is rarely temperate and usually dangerous. Likewise, a person's appetite for love could reach addiction levels, and a person could be considered "hooked" on relationships (or on a particular person). This would put love addiction at the extreme end of the spectrum.
None of this is to say that the authors think that love is bad for you just because it can resemble an addiction. Love addiction is not the same as cocaine addiction at the neurological level: important differences, like how long it takes for the desire for another "hit" to occur, do exist. Rather, the authors see this as an opportunity to reconsider our approach to addiction in general and to think about how we can help the heartsick when they just can't seem to get over their last relationship.
Is "love addiction" a treatable disorder?
Hypothetically, a neurological basis for an addiction to love could point toward interventions that "correct" for it. If the narrow view of addiction is accurate, some people might one day seek treatment for love addiction the same way others seek help to quit smoking. If the broad view is correct, treatment would be harder to define, since it is difficult to say where on the spectrum of appetites the cutoff for addiction should fall.
Either way, since love is generally held in high regard by all cultures and doesn't quite seem to be in the same category as a bad cocaine habit in terms of social undesirability, the authors doubt we'll be treating anyone for "love addiction" anytime soon.