Liberal Metapaternalism and Higher Education

Matt Yglesias replies to an argument from Mike Konczal:

Mike Konczal has a fairly compelling argument that it would make sense to dismantle the entire crazy quilt of "submerged state" tax deductions and credits designed to help make college affordable and just use the money to directly provide free or near-free college education at public universities.

I think, though, that any effort to radically rethink higher education finance does need to go back to first principles. Why spend public money on this at all? Why not dismantle the submerged state exactly as Konczal suggests, and give the money to poor people? Then people could use the money to buy higher education services or not according to whether or not they thought vendors of said services were, all things considered, offering a reasonable value proposition. There are good answers to this question (I think) but the nature of the answer you give helps shape your agenda for higher education reform.

So why not just give poor people money and let them decide how they want to spend it? The obvious answer is that they ought to spend it on education, but they won't. In a smart follow-up post, Konczal tries to find a nice way to say this.

As Konczal has it, the "liberal paternalist" argument against simply giving cash to the poor is that this encourages dependence, but we're trying to encourage independence. I think this is a good argument!

Against this sort of liberal paternalist Konczal offers Peter Frase, who argues from the left that it's better to give the poor money than to make sure they have access to jobs, because most wage labor is demeaning and enervating, and the only reason anyone does it is to make money. So just give them money!

Frase thinks "that having a job gives a person a greater sense of self-worth than getting a handout" only to the extent that "we, as a society, treat wage labor as though it is a unique source of dignity and worth." The suggestion seems to be that if we, as a society, treated as a source of dignity and worth whatever else folks are up to, other than wage-laboring, there'd be no particular problem with the dole. This strikes me as a bit silly.

We, as a society, just aren't going to regard whatever folks want to do as a source of dignity and worth. Lots of us have a fairly narrow, though eminently reasonable, view of what's dignified and worthwhile. The ideal of society as a cooperative venture for mutual advantage is a good one, as is the idea of society as an order of mutual respect and fair reciprocity (which comes to the same thing, by my lights). If you want a cut of the cooperative surplus, you've got to pitch in and cooperate! If you can, but you don't, most of us are going to resent your insisting on a cut anyway, even if we think we owe you something, just because you're a person. There's an important sense in which you don't have it coming, that it's unfair to claim a share, and if you feel a little bad about taking it, most of us are going to be glad you feel a little bad, because probably you should. In a decent order of fair reciprocity, having a job gives a person a greater sense of self-worth than getting a handout because paychecks are compensation for having made others better off -- are hard evidence we are worth something to somebody else -- and handouts, as such, aren't. Being worth something to others gives us good reason to feel we're worth something to ourselves.

Frase talks a bit about the importance of non-market labor, and it is important. But, again, it's not clear that it's money we owe to people who are providing services to their own families, or selflessly volunteering to write Wikipedia entries.

Now, sometimes we need help, and we shouldn't feel too bad about accepting it when we need it. But we should try not to need it, and part of what it means to treat people with respect is to encourage them not to need it. If this is a matter of convention, it's a good convention. Now, like Frase, I favor a guaranteed social minimum, but not because people should get a cut of the surplus no matter what they do or don't do, but because I think (and this is an empirical hypothesis) indemnifying one another against downside risk induces more and better cooperation. My pitching in to put a floor under you is something you can justify to me, and everybody else, if it's likely to put you in a better position to make me, and everybody else, better off than we'd be if you (and we) didn't enjoy the assurance of a floor.

Konczal goes on to quote T.M. Scanlon at length, and Scanlon makes a great point:

The strength of a stranger’s claim on us for aid in the fulfillment of some interest depends upon what that interest is and need not be proportional to the importance he attaches to it. The fact that someone would be willing to forgo a decent diet in order to build a monument to his god does not mean that his claim for aid in his project has the same strength as a claim for aid in obtaining enough to eat (even assuming that the sacrifices required on others would be the same). Perhaps a person does have some claim on others for assistance in a project to which he attaches such great importance. All I need maintain is that it does not have the weight of a claim to aid in the satisfaction of a truly urgent interest even if the person in question assigns these interests equal weight.

Right! And there are facts of the matter about what our interests are. One of the greatest of these is that it is much in our interests to develop the capacity to tell the difference between what we actually need and what we just happen to want. Let's call this capacity "autonomy." Autonomy has real developmental conditions. If we haven't become able to exercise judgment in this way, if we haven't developed what it takes to be a reliable agent of our own interests, it's not always going to be in our interests to be empowered to buy what we want.

This is, to my mind, the best reason not to just give people money and then find out whether they spend it on what they or their children need in order to become full-blooded autonomous agents. Is there something a little paternalistic about this? There sure is! Is this a problem? Yes! It's not easy to come to agreement about the developmental conditions for autonomy. But we do the best we can, and it's not that controversial. We agree, more or less, that a certain measure of economic security, access to decent food, decent health care, and a decent education are generally necessary for the development of the capacities that put us in a position to make robustly autonomous decisions about our lives.

One problem is that those of us who were deprived of these goods may not be well-positioned to make great decisions on behalf of our children. Offering free school, nutritional assistance, subsidized housing, and the like, instead of just giving parents a chunk of change to buy whatever they do or don't want for their kids, is a pretty literal kind of paternalism. I call it "meta-paternalism": paternalism in the service of the development of the sort of robust autonomy it is wrong to paternalistically interfere with, once it's in place. We paternalistically intervene to prevent parents from being bad paternalists to their kids. Kids need their parents to make good decisions on their behalf, and we, as a society, try to help kids the best we can while minimizing the chance parents will make bad decisions to their kids' detriment. Alas, parents can always use their autonomy to screw up the development of their kids' autonomy, but not as much as they might like.

Now, is higher education needed for the development of autonomy? I don't think so. One thorny issue here may be timing. If we've done a good job giving kids the basic goods and opportunities they need for the development of autonomy, they may nevertheless require a little time before it all comes together. Suppose the government gives every kid from a relatively poor family a big check on their eighteenth birthday (how big depends on how poor, say) and tells them they can spend it any way they like. Go to school! Start a business! Whatever!

What's going to happen? I dunno, but I would predict more than a little regret by age twenty-something. Does this mean we should nudge young adults toward college by, say, making it free? I don't think so. This strikes me as an excellent way to subsidize the academy. But if the idea is to help finish off the development of robust autonomy, and/or to subsidize the development of socially valuable human capital, it might be better to just give hard-up kids money with strings attached.

Yale scientists restore brain function to 32 clinically dead pigs

Researchers hope the technology will further our understanding of the brain, but lawmakers may not be ready for the ethical challenges.

Still from John Stephenson's 1999 rendition of Animal Farm.
  • Researchers at the Yale School of Medicine successfully restored some functions to pig brains that had been dead for hours.
  • They hope the technology will advance our understanding of the brain, potentially developing new treatments for debilitating diseases and disorders.
  • The research raises many ethical questions and puts to the test our current understanding of death.

The image of an undead brain coming back to life is the stuff of science fiction. Not just any science fiction, either, but specifically B-grade sci-fi. What instantly springs to mind are the black-and-white horrors of films like Fiend Without a Face. Bad acting. Plastic monstrosities. Visible strings. And a spinal cord that, for some reason, is also a tentacle?

But like any good science fiction, it's only a matter of time before some manner of it seeps into our reality. This week's Nature published the findings of researchers who managed to restore function to pigs' brains that were clinically dead. At least, what we once thought of as dead.

What's dead may never die, it seems

The researchers did not hail from House Greyjoy — "What is dead may never die" — but came largely from the Yale School of Medicine. They connected 32 pig brains to a system called BrainEx. BrainEx is an artificial perfusion system — that is, a system that takes over the functions normally regulated by the organ. Think of it as a dialysis machine for the mind. The pigs had been killed four hours earlier at a U.S. Department of Agriculture slaughterhouse; their brains had been completely removed from their skulls.

BrainEx pumped an experimental solution into the brains that essentially mimics blood flow. It brought oxygen and nutrients to the tissues, giving brain cells the resources to resume many normal functions. The cells began consuming and metabolizing sugars. The brains' immune systems kicked in. Neuron samples could carry an electrical signal. Some brain cells even responded to drugs.

The researchers have managed to keep some brains alive for up to 36 hours, and do not yet know whether BrainEx could sustain the brains longer. "It is conceivable we are just preventing the inevitable, and the brain won't be able to recover," said Nenad Sestan, the Yale neuroscientist who led the research.

As a control, other brains received either a fake solution or no solution at all. None of them showed revived brain activity, and all deteriorated as dead tissue normally would.

The researchers hope the technology can enhance our ability to study the brain and its cellular functions. One of the main avenues for such studies would be brain disorders and diseases. The work could point the way to new treatments for the likes of brain injuries, Alzheimer's, Huntington's, and other neurodegenerative conditions.

"This is an extraordinary and very promising breakthrough for neuroscience. It immediately offers a much better model for studying the human brain, which is extraordinarily important, given the vast amount of human suffering from diseases of the mind [and] brain," Nita Farahany, a bioethicist at the Duke University School of Law who wrote a commentary accompanying the study, told National Geographic.

An ethical gray matter

Before anyone gets an Island of Dr. Moreau vibe, it's worth noting that the brains did not approach neural activity anywhere near consciousness.

The BrainEx solution contained chemicals that prevented neurons from firing. To be extra cautious, the researchers also monitored the brains for any such activity and were prepared to administer an anesthetic should they have seen signs of consciousness.

Even so, the research signals a massive debate to come regarding medical ethics and our definition of death.

Most countries define death, clinically speaking, as the irreversible loss of brain or circulatory function. This definition was already at odds with some folk- and value-centric understandings, but where do we go if it becomes possible to reverse clinical death with artificial perfusion?

"This is wild," Jonathan Moreno, a bioethicist at the University of Pennsylvania, told the New York Times. "If ever there was an issue that merited big public deliberation on the ethics of science and medicine, this is one."

One possible consequence involves organ donations. Some European countries require emergency responders to use a process that preserves organs when they cannot resuscitate a person. They continue to pump blood throughout the body, but use a "thoracic aortic occlusion balloon" to prevent that blood from reaching the brain.

The system is already controversial because it raises concerns about what caused the patient's death. But what happens when brain death becomes readily reversible? Stuart Younger, a bioethicist at Case Western Reserve University, told Nature that if BrainEx were to become widely available, it could shrink the pool of eligible donors.

"There's a potential conflict here between the interests of potential donors — who might not even be donors — and people who are waiting for organs," he said.

It will be a while before such experiments go anywhere near human subjects. A more immediate ethical question relates to how such experiments harm animal subjects.

Ethical review boards evaluate research protocols and can reject any that causes undue pain, suffering, or distress. Since dead animals feel no pain and suffer no trauma, they are typically approved as subjects. But how do such boards make a judgment regarding the suffering of a "cellularly active" brain? The distress of a partially alive brain?

The dilemma is unprecedented.

Setting new boundaries

Another science fiction story that comes to mind when discussing this story is, of course, Frankenstein. As Farahany told National Geographic: "It is definitely has [sic] a good science-fiction element to it, and it is restoring cellular function where we previously thought impossible. But to have Frankenstein, you need some degree of consciousness, some 'there' there. [The researchers] did not recover any form of consciousness in this study, and it is still unclear if we ever could. But we are one step closer to that possibility."

She's right. The researchers undertook their research for the betterment of humanity, and we may one day reap some unimaginable medical benefits from it. The ethical questions, however, remain as unsettling as the stories they remind us of.
