A ‘Humanely’ Killed Animal Is Still Killed—And That’s Wrong

A newborn calf, isolated from other calves the first days, is pictured on December 14, 2017 at an intensive cattle farm, known as the 1,000-cow farm, in the northeastern French town of Drucat. (Photo credit: PHILIPPE HUGUEN/AFP/Getty Images)

Western conventional wisdom about animal ethics is that killing an animal is not the problem; the problem is making the animal suffer. As long as we have treated and killed an animal in a ‘humane’ way, we have done nothing wrong. A compelling example of this belief is found in the case of dogs and cats, animals particularly valued in Western culture. If someone inflicts suffering on a dog or cat, they are excoriated. But unwanted dogs and cats are routinely ‘put to sleep’ – killed – in shelters with an intravenous injection of sodium pentobarbital, and most people do not object as long as the process is administered properly by a trained person and there is no suffering inflicted on the animal.

Why do we think that killing animals per se is not morally wrong? Why do we think that death is not a harm for non-human animals?

Before the 19th century, animals were mostly regarded as things. Neither our use nor our treatment of them mattered morally or legally. We could have obligations that concerned animals, such as an obligation not to damage our neighbour’s cow, but that obligation was owed to our neighbour as the owner of the cow, not to the cow.

To say that we thought of animals as things didn’t mean that we denied that they were sentient, or subjectively aware, and had interests in not experiencing pain, suffering or distress. But we believed that we could ignore those interests because animals were our inferiors. We could reason; they couldn’t. We could use symbolic communication; they couldn’t.

In the 19th century, a paradigm shift occurred, and the animal welfare theory was born. In a relatively brief period of time as far as major shifts in thinking go, we claimed to reject the notion of animals as things, and to embrace the idea that animals had moral value. Prominent in this paradigm shift was the lawyer/philosopher Jeremy Bentham, who argued in 1789 that, although a full-grown horse or dog is more rational and more able to communicate than a human infant, ‘the question is not, Can they reason? nor, Can they talk? but, Can they suffer?’

Bentham maintained that the fact that animals were cognitively different from humans – that they had different sorts of minds – did not mean that their suffering did not matter morally. He argued that we could no more morally justify ignoring the suffering of animals based on their species than we could ignore the suffering of slaves based on their skin colour.

But Bentham did not advocate that we stop using animals as resources in the manner he had advocated abolition in the case of human slavery. He maintained that it was morally acceptable to use and kill animals for human purposes as long as we treated them well. According to Bentham, animals live in the present and are not aware of what they lose when we take their lives. If we kill and eat them, ‘we are the better for it, and they are never the worse. They have none of those long-protracted anticipations of future misery which we have.’ Bentham maintained that we actually do animals a favour by killing them, as long as we do so in a relatively painless manner: ‘The death they suffer in our hands commonly is, and always may be, a speedier, and by that means a less painful one, than that which would await them in the inevitable course of nature … [W]e should be the worse for their living, and they are never the worse for being dead.’ In other words, the cow does not care that we kill and eat her; she cares only about how we treat and kill her, and her only interest is not to suffer.

And that is precisely what most of us believe today. Killing animals is not the problem. The problem is making them suffer. If we provide a reasonably pleasant life and a relatively painless death, we have done nothing wrong. Interestingly, Bentham’s views are endorsed by Peter Singer, who bases the position he articulates in Animal Liberation (1975) squarely on Bentham. Singer claims that ‘the absence of some form of mental continuity’ makes it difficult to understand why killing an animal is not ‘made good by the creation of a new animal who will lead an equally pleasant life’.

We think that this view is wrong.

To say that a sentient being – any sentient being – is not harmed by death is decidedly odd. Sentience is not a characteristic that has evolved to serve as an end in itself. Rather, it is a trait that allows the beings who have it to identify situations that are harmful and that threaten survival. Sentience is a means to the end of continued existence. Sentient beings, by virtue of their being sentient, have an interest in remaining alive; that is, they prefer, want or desire to remain alive. Continued existence is in their interest. Therefore, to say that a sentient being is not harmed by death denies that the being has the very interest that sentience serves to perpetuate. It would be analogous to saying that a being with eyes does not have an interest in continuing to see or is not harmed by being made blind. Animals in traps will chew their paws or limbs off and thereby inflict excruciating suffering on themselves in order to continue to live.

Singer recognises that ‘an animal may struggle against a threat to its life’, but he concludes that this does not mean that the animal has the mental continuity required for a sense of self. This position begs the question, however, in that it assumes that the only way that an animal can be self-aware is to have the sort of autobiographical sense of self that we associate with normal adult humans. That is certainly one way of being self-aware, but it is not the only way. As the biologist Donald Griffin, one of the most important cognitive ethologists of the 20th century, noted, it is arbitrary to deny animals some sort of self-awareness given that animals who are perceptually conscious must be aware of their own bodies and actions, and must see them as different from the bodies and actions of other animals.

Even if animals live in the ‘eternal present’ that Bentham and Singer think they inhabit, that does not mean that they are not self-aware or that they do not have an interest in continued existence. Animals would still be aware of themselves in each instant of time and have an interest in perpetuating that awareness; they would have an interest in getting to the next second of consciousness. Humans who have a particular form of amnesia might be unable to recall memories or engage in ideation about the future, but that does not mean that they are not self-aware in each moment, or that the cessation of that awareness would not be a harm.

It is time that we rethink this issue. If we saw killing an animal – however painlessly – as raising a moral issue, perhaps that might lead us to start thinking more of whether animal use is morally justifiable, rather than only whether treatment is ‘humane’. Given that animals are property, and we generally protect animal interests only to the extent that it is cost-effective, it is a fantasy to think that ‘humane’ treatment is an attainable standard in any case. So if we take animal interests seriously, we really cannot avoid thinking about the morality of use totally apart from considerations of treatment. 

Anna E Charlton & Gary L Francione

--

This article was originally published at Aeon and has been republished under Creative Commons.

Hack your brain for better problem solving

Tips from neuroscience and psychology can make you an expert thinker.


This article was originally published on Big Think Edge.

Problem-solving skills are in demand. Every job posting lists them under must-have qualifications, and every job candidate claims to possess them, par excellence. Young entrepreneurs make solutions to social and global problems the heart of their mission statements, while parents and teachers push for curricula that encourage critical-thinking methods beyond solving for x.

It's ironic, then, that we continue to cultivate habits that stunt our ability to solve problems. Take, for example, the modern expectation to be "always on." We push ourselves to always be working, always be producing, always be parenting, always be promoting, always be socializing, always be in the know, always be available, always be doing. It's too much, and when everything is on all the time, we deplete the mental resources we need to truly engage with challenges.

If we're serious about solving problems, at work and in our personal lives, then we need to become more adept at tuning out so we can home in.

Solve problems with others (occasionally)

A side effect of being always on is that we are rarely alone. We're connected through the ceaseless chirps of friends texting, social media buzzing, and colleagues pinging us for advice everywhere we go. In some ways, this is a boon. Modern technologies mediate near-endless opportunities for collective learning and social problem-solving. Yet such cooperation has its limits, according to a 2018 study out of Harvard Business School.

In the study, participants were divided into three types of groups and asked to solve traveling salesman problems, in which the goal is to find the shortest route that visits every city exactly once. The first group worked on the problems individually. The second exchanged notes after every round of problem-solving, while the third collaborated only after every three rounds.

The researchers found that lone problem-solvers invented a diverse range of potential solutions. However, their solutions varied wildly in quality, with some being true light bulb moments and others burnt-out duds. Conversely, the always-on group took advantage of their collective learning to tackle more complex problems more effectively. But social influence often led these groups to prematurely converge around a single idea and abandon potentially brilliant outliers.

It was the intermittent collaborators who landed on the Goldilocks strategy. By interacting less frequently, individual group members had more time to nurture their ideas so the best could shine. But when they gathered together, the group managed to improve the overall quality of their solutions thanks to collective learning.
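To see why sharing cadence matters, consider a toy simulation of the three regimes. The Python sketch below is purely illustrative, not the study's actual protocol: the number of solvers, the number of rounds, the swap-based hill climbing, and the rule that "sharing" means everyone adopts the group's current best tour are all assumptions made for the example. Each simulated solver tries to shorten a random tour through 15 cities; the conditions differ only in how often members synchronize on the best solution so far.

# Illustrative only: a toy model of the three collaboration regimes.
# Group sizes, round counts, and sharing rules are assumptions,
# not the Harvard study's actual experimental design.
import math
import random

def tour_length(tour, cities):
    """Total length of a closed tour over 2-D city coordinates."""
    return sum(
        math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def mutate(tour, rng):
    """Propose a neighboring tour by swapping two randomly chosen cities."""
    a, b = rng.sample(range(len(tour)), 2)
    new = tour[:]
    new[a], new[b] = new[b], new[a]
    return new

def run_group(cities, n_solvers=5, rounds=200, share_every=None, seed=0):
    """Each solver hill-climbs independently; every `share_every` rounds,
    all solvers adopt the group's current best tour (social influence).
    share_every=None models the no-sharing, individuals-only condition."""
    rng = random.Random(seed)
    n = len(cities)
    tours = [rng.sample(range(n), n) for _ in range(n_solvers)]
    for r in range(1, rounds + 1):
        for i, tour in enumerate(tours):
            candidate = mutate(tour, rng)
            if tour_length(candidate, cities) < tour_length(tour, cities):
                tours[i] = candidate  # keep the improvement
        if share_every and r % share_every == 0:
            best = min(tours, key=lambda t: tour_length(t, cities))
            tours = [best[:] for _ in tours]  # group converges on one idea
    return min(tour_length(t, cities) for t in tours)

rng = random.Random(42)
cities = [(rng.random(), rng.random()) for _ in range(15)]
print("individuals only :", round(run_group(cities), 3))
print("always-on sharing:", round(run_group(cities, share_every=1), 3))
print("intermittent     :", round(run_group(cities, share_every=3), 3))

On a typical run, the always-on condition collapses the group's diversity early, while intermittent sharing tends to finish with shorter tours. As with the study itself, though, any single run can buck the pattern; the point of the sketch is the mechanism, not the numbers.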

In presenting their work, the study's authors question the value of always-on culture—especially our submissiveness to intrusions. "As we replace those sorts of intermittent cycles with always-on technologies, we might be diminishing our capacity to solve problems well," Ethan Bernstein, an associate professor at Harvard Business School and one of the study's authors, said in a press release.

These findings suggest we should schedule time to ruminate with our inner geniuses and consult the wisdom of the crowd. Rather than dividing our day between productivity output and group problem-solving sessions, we must also create space to focus on problems in isolation. This strategy provides the best of both worlds. It allows us to formulate our ideas before social pressure can push us to abandon them. But it doesn't preclude the group knowledge required to refine those ideas.

And the more distractions you can block out or turn off, the more working memory you'll have to direct at the problem.

A problem-solving booster

The next step is to dedicate time to not dealing with problems. Counterintuitive as it may seem, setting a troublesome task aside and letting your subconscious take a crack at it improves your conscious efforts later.

How should we fill these down hours? That's up to you, but research has shown time and again that healthier habits produce hardier minds. This is especially true regarding executive functions—a catchall term that covers a person's ability to exercise self-control, meet goals, think flexibly, and, yes, solve problems.

"Exercisers outperform couch potatoes in tests that measure long-term memory, reasoning, attention, problem-solving, even so-called fluid-intelligence tasks. These tasks test the ability to reason quickly and think abstractly, improvising off previously learned material to solve a new problem. Essentially, exercise improves a whole host of abilities prized in the classroom and at work," writes John Medina, a developmental molecular biologist at the University of Washington.

One such study, published in Frontiers in Neuroscience, analyzed data collected from more than 4,000 British adults. After controlling for variables, it found a bidirectional relationship between exercise and higher levels of executive function over time. Another study, this one published in Frontiers in Aging Neuroscience, compared fitness data from 128 adults with brain scans taken as they were dual-tasking. Its findings showed that regular exercisers sported more active executive regions.

Research also demonstrates a link between problem-solving, healthy diets, and proper sleep habits. Taken together, these lifestyle choices also help people manage their stress—which is known to impair problem-solving and creativity.

Of course, it can be difficult to untangle the complex relationship between cause and effect. Do people with healthy life habits naturally enjoy strong executive functions? Or do those habits bolster their mental fitness throughout their lives?

That's not an easy question to answer, but the Frontiers in Neuroscience study researchers hypothesize that it's a positive feedback loop. They posit that good sleep, nutritious food, and regular exercise fortify our executive functions. In turn, more potent executive decisions invigorate healthier life choices. And those healthy life choices—you see where this is going.

And while life choices are ultimately up to individuals, organizations have a supportive role to play. They can foster cultures that protect off-hours for relaxing, incentivize healthier habits with PTO, and prompt workers to take time for exercise beyond the usual keyboard calisthenics.

Nor would such initiatives be entirely selfless. They come with the added benefit of boosting a workforce's collective problem-solving capabilities.

Live and learn and learn some more

Another advantage of tuning out is the opportunity it affords for lifelong learning. People who engage in creative or problem-solving activities in their downtime (think playing music, puzzles, and even board games) show improved executive functions and mental acuity as they age. In other words, by learning to enjoy the act of problem-solving, you may enhance your ability to do so.

Similarly, lifelong learners are often interdisciplinary thinkers. By diving into various subjects, they can come to understand the nuances of different skills and bodies of knowledge to see when ideas from one field may provide a solution to a problem in another. That doesn't mean lifelong learners must become experts in every discipline. On the contrary, they are far more likely to understand where the limits of their knowledge lie. But those self-perceived horizons can also provide insight into where collaboration is necessary and when to follow someone else's lead.

In this way, lifelong learning can be key to problem-solving in both business and our personal lives. It pushes us toward self-improvement, gives us an understanding of how things work, hints at what's possible, and, above all, gives us permission to tune out and focus on what matters.

Cultivate lifelong learning at your organization with lessons 'For Business' from Big Think Edge. At Edge, more than 350 experts, academics, and entrepreneurs come together to teach essential skills in career development and lifelong learning. Heighten your problem-solving aptitude with lessons such as:

  • Make Room for Innovation: Key Characteristics of Innovative Companies, with Lisa Bodell, Founder and CEO, FutureThink, and Author, Why Simple Wins
  • Use Design Thinking: An Alternative Approach to Tackling the World's Greatest Problems, with Tim Brown, CEO and President, IDEO
  • The Power of Onlyness: Give Your People Permission to Co-Create the Future, with Nilofer Merchant, Marketing Expert and Author, The Power of Onlyness
  • How to Build a Talent-First Organization: Put People Before Numbers, with Ram Charan, Business Consultant
  • The Science of Successful Things: Case Studies in Product Hits and Flops, with Derek Thompson, Senior Editor, The Atlantic, and Author, Hit Makers

Request a demo today!
