Literally the Worst Definition of a Word Ever

Stop using 'literally' figuratively!

Ours is a cultural and linguistic moment obsessed with irony.


For just that reason, people need to stop using the word 'literally' to mean 'figuratively'. I am not the first to say this. Yet the point has failed to sink in. Worse, prompted by the addition of the non-literal sense of 'literally' to several dictionaries, including the Oxford English Dictionary Online, the usage is gaining even wider acceptance.

The situation is also getting worse because the flames are being fanned by several reactionary articles claiming that the usage meaning 'figuratively' is perfectly legitimate. They make this claim based on three lines of reasoning: that the usage is very old, that nobody actually confuses the two meanings, and that language evolves naturally and we must simply describe it and conform to it, rather than judge it or make prescriptions for it.

I will describe precisely why this usage is a bad thing for the language. But first, because all of the above arguments are faulty, I want to take the time to point out why:

Bad Reason 1: "Aha!" they point out. "The usage is not some horrible new invention of the millennials, it has been around for a very long time, in the dictionary since 1903, and first used in 1759!"

This point tends to be the primary piece of evidence used to make the non-literal use of 'literally' look legitimate to its detractors. Why the hell does this matter to anyone? What is so much more authoritative about English-speakers of centuries past than those from this century? We do not claim that modern doctors should include a medical procedure in a modern textbook because doctors did so in 1759.

Sure, those who, like me, decry the usage at issue tend to associate it with younger speakers who are diluting the language with their text-speak. But isn't that a legitimate complaint? Original date of usage aside, no one is denying that this usage has exploded in popularity recently, largely due to young people who are not, so to speak, of the lettered classes. So should we really be formalizing our conventional language around people whose most thoughtful comment on a dictionary would be TL;DR?

I say they aren't even invited to the convention.

Bad Reason 2: "And anyway," they say, "it does not devalue the primary definition of the word or the value of the English language at large because one would have to be an idiot to confuse which sense of the word is being employed."

There are a few things wrong with this.

For one, it assumes the faulty premise that a usage is only problematic if it is genuinely misunderstood. That isn't the case. The primary stylistic problem with the non-literal 'literally' is that, though hearers can sort out which sense is being employed, that sorting is jarring.

For example, if I hear that "he literally broke my heart" or that "Transformers 2 is literally the worst thing ever," my brain goes places the speaker doesn't want it to go. Now, of course I am going to extrapolate the intended meaning from context. But noting only that ignores the fact that the speaker, who was trying to express something to me more accurately, not less, by being figurative, has failed. Instead he or she made me visualize the rending of cardiac tissue, or made me compare genocides to blockbusters. It's just not good communication.

(And anyway, the non-literal usage of "literally" is just so damned dramatic. It comes from our worst, most unpleasantly exaggerative linguistic instincts. Quoth a girl I walked by recently: "If I got the wrong shampoo, I'm literally going to kill myself.")

But the bigger problem with this Bad Reason 2 is that it simply isn't true. This reasoning ignores the possibility and the likelihood of people's "talking at cross purposes." That can be a huge mistake. For example: I once heard two very smart people (both, in fact, versed in the philosophy of language, from which the phrase 'sense of a word' comes) have a conversation, which was a fierce disagreement about the artistic merits of a band, for almost ten minutes. Only after that long did they determine that they were each discussing entirely different bands!

So yes, context is a vital and informative part of understanding words, but it is careless to assume that subtleties will never, ever get lost in translation. This inevitable phenomenon even has a historical name, The Inscrutability of Reference (which phrase means nothing more and nothing less than "Our inability to know precisely what the hell each other are talking about").

Bad Reason 3: "All that the maker of a dictionary can do is record uses that come into being organically. We cannot tell people what is and is not 'correct,' because the concept of correct does not even apply to language usage. The job of the dictionary editor/linguist/lexicographer is only to observe and record."

This is the old canard that because language evolves, it is somehow elitist and morally wrong to try to formalize certain usages of language as legitimate while writing others off as illegitimate. Holders of this view are called Descriptivists, while holders of the opposite are called Prescriptivists.

I do not propose to make the case for Prescriptivism writ large here, because I can just as well defeat this point on Descriptivist terms. (If you do want to see that discussion borne out, I highly recommend David Foster Wallace's classic essay "Authority and American Usage", though for the love of god ignore the footnote about Wittgenstein's private language argument, which is all wrong.)

So I have this question for the Descriptivists who draw an Ought from an Is by saying that language evolves, and can therefore never be rightfully subject to authority: In what sense are the efforts of a Prescriptivist dictionary editor to formalize or ban a word not part of that very linguistic evolution you cheer on? If everybody is supposed to have a part in the organic evolution of a living, breathing language, why not understand this as the elites, in their own snooty way, doing just that?

Alternative Reasoning:

A language is better the more things it can say clearly. It should allow us to communicate what we mean. We need the primary definition of 'literally' to be left alone because, without it, we have no other way to say that thing.

Irony is the use of words to express something other than their literal meaning. I chose to open this article by noting that this is a moment in which irony is a cultural fixation. You might have wondered why I think that is relevant.

It's relevant because having one word, 'literally', that is exempt from ironic usage allows us to talk about that very obsession. It allows us to demarcate irony from non-irony. The non-literal definition of 'literally' makes English smaller.

Need proof? Simply consider how many times I have just had to employ the clunker of a phrase "the non-literal usage of 'literally.'" (Tellingly, I stole the phrase from one of the articles arguing against me.)

We can see, then, that at the very best, using 'literally' figuratively needlessly complicates things. At worst, it lessens the very power of the language to describe. We can therefore see why we need to eliminate this usage from our speech and from our dictionaries. Describing is all languages do!

Cambridge scientists create a successful "vaccine" against fake news

A large new study uses an online game to inoculate people against fake news.

  • Researchers from the University of Cambridge used an online game to inoculate people against fake news.
  • The study sample included 15,000 players.
  • The scientists hope to use such tactics to protect whole societies against disinformation.

Yale scientists restore brain function to 32 clinically dead pigs

Researchers hope the technology will further our understanding of the brain, but lawmakers may not be ready for the ethical challenges.

  • Researchers at the Yale School of Medicine successfully restored some functions to pig brains that had been dead for hours.
  • They hope the technology will advance our understanding of the brain, potentially developing new treatments for debilitating diseases and disorders.
  • The research raises many ethical questions and puts to the test our current understanding of death.

The image of an undead brain coming back to life is the stuff of science fiction. Not just any science fiction, but specifically B-grade sci-fi. What instantly springs to mind are the black-and-white horrors of films like Fiend Without a Face. Bad acting. Plastic monstrosities. Visible strings. And a spinal cord that, for some reason, is also a tentacle?

But like any good science fiction, it's only a matter of time before some manner of it seeps into our reality. This week's Nature published the findings of researchers who managed to restore function to pigs' brains that were clinically dead. At least, what we once thought of as dead.

What's dead may never die, it seems

The researchers did not hail from House Greyjoy — "What is dead may never die" — but came largely from the Yale School of Medicine. They connected 32 pig brains to a system called BrainEx. BrainEx is an artificial perfusion system — that is, a system that takes over the support functions the body normally provides to the organ. The pigs had been killed four hours earlier at a U.S. Department of Agriculture slaughterhouse, and their brains had been completely removed from their skulls.

BrainEx pumped an experimental solution into the brains that essentially mimicked blood flow. It brought oxygen and nutrients to the tissues, giving brain cells the resources to resume many normal functions. The cells began consuming and metabolizing sugars. The brains' immune systems kicked in. Neuron samples could carry an electrical signal. Some brain cells even responded to drugs.

The researchers have managed to keep some brains alive for up to 36 hours, and they do not yet know whether BrainEx could sustain the brains for longer. "It is conceivable we are just preventing the inevitable, and the brain won't be able to recover," said Nenad Sestan, Yale neuroscientist and the lead researcher.

As a control, other brains received either a fake solution or no solution at all. None showed revived brain activity, and all deteriorated as normal.

The researchers hope the technology can enhance our ability to study the brain and its cellular functions. One of the main avenues of such studies would be brain disorders and diseases. This could point the way to developing new treatments for the likes of brain injuries, Alzheimer's, Huntington's, and other neurodegenerative conditions.

"This is an extraordinary and very promising breakthrough for neuroscience. It immediately offers a much better model for studying the human brain, which is extraordinarily important, given the vast amount of human suffering from diseases of the mind [and] brain," Nita Farahany, the bioethicists at the Duke University School of Law who wrote the study's commentary, told National Geographic.

An ethical gray matter

Before anyone gets an Island of Dr. Moreau vibe, it's worth noting that the brains did not exhibit neural activity anywhere near the level associated with consciousness.

The BrainEx solution contained chemicals that prevented neurons from firing. To be extra cautious, the researchers also monitored the brains for any such activity and were prepared to administer an anesthetic if they saw signs of consciousness.

Even so, the research signals a massive debate to come regarding medical ethics and our definition of death.

Most countries define death, clinically speaking, as the irreversible loss of brain or circulatory function. This definition was already at odds with some folk- and value-centric understandings, but where do we go if it becomes possible to reverse clinical death with artificial perfusion?

"This is wild," Jonathan Moreno, a bioethicist at the University of Pennsylvania, told the New York Times. "If ever there was an issue that merited big public deliberation on the ethics of science and medicine, this is one."

One possible consequence involves organ donations. Some European countries require emergency responders to use a process that preserves organs when they cannot resuscitate a person. They continue to pump blood throughout the body, but use a "thoracic aortic occlusion balloon" to prevent that blood from reaching the brain.

The system is already controversial because it raises concerns about what caused the patient's death. But what happens when brain death becomes readily reversible? Stuart Younger, a bioethicist at Case Western Reserve University, told Nature that if BrainEx were to become widely available, it could shrink the pool of eligible donors.

"There's a potential conflict here between the interests of potential donors — who might not even be donors — and people who are waiting for organs," he said.

It will be a while before such experiments go anywhere near human subjects. A more immediate ethical question relates to how such experiments harm animal subjects.

Ethical review boards evaluate research protocols and can reject any that cause undue pain, suffering, or distress. Since dead animals feel no pain and suffer no trauma, they are typically approved as subjects. But how do such boards make a judgment regarding the suffering of a "cellularly active" brain? The distress of a partially alive brain?

The dilemma is unprecedented.

Setting new boundaries

Another science fiction story that comes to mind when discussing this story is, of course, Frankenstein. As Farahany told National Geographic: "It is definitely has [sic] a good science-fiction element to it, and it is restoring cellular function where we previously thought impossible. But to have Frankenstein, you need some degree of consciousness, some 'there' there. [The researchers] did not recover any form of consciousness in this study, and it is still unclear if we ever could. But we are one step closer to that possibility."

She's right. The researchers undertook their research for the betterment of humanity, and we may one day reap some unimaginable medical benefits from it. The ethical questions, however, remain as unsettling as the stories they remind us of.

5 facts you should know about the world’s refugees

Many governments do not report, or misreport, the numbers of refugees who enter their country.


Conflict, violence, persecution and human rights violations led to a record high of 70.8 million people being displaced by the end of 2018.
