Mixing human + animal DNA and the future of gene editing
"The question is which are okay, which are not okay."
- What might be wrong with mixing human and animal parts? There are changes we could make to human beings by mixing in animal DNA that might make them better, and there are changes that might make them worse.
- You're made out of DNA from thousands and millions of ancestors and it's the collaboration between DNA from all your ancestors that keeps you alive.
- We are experiencing right now a remarkable deluge of discovery in terms of the causes of disease, much of it coming out of genomics.
BRYAN SYKES: Genetics and DNA does get to the central issue of what makes us tick. It's perhaps too determinist to say that your genes determine everything you do. They don't, but, if you like, it's like the deck of cards that you're dealt at birth. What you do with that deck, like any card game, depends a lot on your choices, but it is influenced by those cards, those genes that you got when you were born.
What I've enjoyed about genetics is looking to see what it tells us about where we've come from because those pieces of DNA, they came from somewhere. They weren't just sort of plucked out of the air. They came from ancestors. And it's a very good way of finding out about your ancestors, not only who they are, but just imagining their lives. You're made up of DNA from thousands and millions of ancestors who've lived in the past, most of them now dead, but they've survived, they've got through, they've passed their DNA on to their children, and it's come down to you. It doesn't matter who you are. You could be the President. You could be the Prime Minister. You could be the head of a big corporation. You could be a taxi driver. You could be someone who lives on the street. But the same is true of everybody. I can see a time, long after I've gone, when, in fact, everyone will know their relationship to everybody else. It is possible, if anybody wants to do it or can afford it, you could actually, I think, draw the family tree of the entire world by linking up the segments of DNA. So you could find out in what way everyone was related to everybody else.
No doubt, most of the funding for the advances in genetics, for example, the complete sequencing of the human genome, has come from ambition to learn more about health issues. The technology for exploring that, which is making leaps and bounds, has come through the healthcare benefits. Those are the two main things that people are learning about themselves and who they're related to, where they've come from. And that does, and I know from experience, that does add a lot to people's sense of identity. It's not for everybody, not everyone's very interested in it, but a lot of people are and I think that's a very good thing.
FRANCIS COLLINS: It's too bad that you can't actually see DNA easily under a microscope and scan across the double-helix and read out the sequence of bases that amounts to the information content, because it would be easier, I think, to explain then how a geneticist goes about tracking down the molecular basis of a disease at the DNA level. Our methods are indirect. They're very powerful, they're very highly accurate, but they're not as visual as you might like. We do have methods now, though, that allow you to read out with high accuracy all 3 billion of the letters of the DNA instruction book. Those letters are actually these chemical bases. The chemical language of DNA is a simple one. There are only four letters in the alphabet: the bases we abbreviate A, C, G, and T. And we have methods of being able to compare the DNA sequence of people who have a disease versus people who don't and look for the critical differences in order to nail down something that might be the cause. However, since we all differ in our DNA sequence by about half of 1%, you wouldn't get very far if you basically sequenced my DNA and the DNA of somebody with Parkinson's disease trying to figure out what the differences were, because there would be way too many of them. But if you're willing to do that for a large number of people, you kind of average out all the noise and the difference that matters begins to be more and more clear. That's an overly simplified description of how a geneticist goes about zeroing in on the actual molecular cause of a complex or a simple disease. This has worked most readily for diseases that are highly heritable: cystic fibrosis, Huntington's disease. Those are conditions where a single mutation very reproducibly results in the disease. It's been a lot tougher for diseases where the inheritance is muddy.
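The case-versus-control comparison Collins describes can be sketched as a toy calculation: tally how often each base appears at each position in affected and unaffected groups, and look for the site where the frequencies diverge most. The sequences and the disease-linked site below are invented purely for illustration; real studies work with millions of variants and thousands of samples.

```python
from typing import List

def site_frequency(seqs: List[str], pos: int, base: str) -> float:
    """Fraction of sequences carrying `base` at position `pos`."""
    return sum(1 for s in seqs if s[pos] == base) / len(seqs)

# Invented 6-letter "genomes": position 2 carries a T in every case
# but a G in every control; other positions vary, but similarly in
# both groups (the person-to-person "noise" Collins mentions).
cases    = ["ACTGCA", "ACTGCT", "GCTGCA", "ACTACA"]
controls = ["ACGGCA", "ACGGCT", "GCGGCA", "ACGACA"]

# Scan every position for the biggest case/control frequency gap.
diffs = []
for pos in range(6):
    gap = max(
        abs(site_frequency(cases, pos, b) - site_frequency(controls, pos, b))
        for b in "ACGT"
    )
    diffs.append(gap)

top_site = max(range(6), key=lambda p: diffs[p])
print(top_site)  # position 2, where cases and controls differ systematically
```

With only a handful of samples the noise positions here happen to cancel exactly; in practice it is the averaging over large cohorts that makes the disease-linked site stand out.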
If you take diabetes for instance, which is what my lab primarily works on, or you take asthma or high blood pressure, that is not a set of conditions where one gene is involved in risk. There are dozens of genes involved in that and no single one of them contributes very much, but you put it all together and the consequences to that individual may tip them over the threshold into having the illness.
We are experiencing right now a remarkable deluge of discovery in terms of the causes of disease, much of it coming out of genomics, the ability to pinpoint at the molecular level what pathway has gone awry in causing a particular medical condition. And that in itself is exciting because it's new information, but what you really want to do is to take that and push it forward into clinical benefit. Some of that can be in prevention, by identifying people at highest risk and trying to make sure they have the right preventive strategy. But people are still going to get sick, and so you also want to come up with better treatments than what we have now. And some of it is the ability through personalized medicine to begin to identify individual risks for future illness, to get us beyond the one-size-fits-all approach to prevention, which has not been that effective. People haven't necessarily warmed to these recommendations about what you should do about diet, exercise, colonoscopies, mammograms, and so on because it all sounds very generic. But if you could provide people with information about their personal risks and allow them, therefore, to come up with a personalized plan for maintaining health, that seems to inspire a lot more interest.
Personalized medicine is a term that gets used differently by different people. In my view, this is an effort to take diagnosis, prevention, and treatment and, when possible, factor in individual information about that person in order to optimize the outcome. I think in some instances we're not very far along with that. In others, we're making real progress. Take, for instance, the effort to choose the right drug at the right dose for the right person, what we'd call pharmacogenomics. More than 10% of FDA-approved drugs now have some mention in the label about the importance of paying attention to genetic differences in order to optimize the outcome. Take, for instance, the drug abacavir, which is used to treat HIV/AIDS, a very powerful antiretroviral, but a drug that caused a pretty serious hypersensitivity reaction in about 6% or 7% of those who took it. We now know exactly how to predict that on the basis of a genetic test, and so there is now what's called a black-box warning on the FDA label for this drug, saying you must do that genetic test before you prescribe this drug in order to avoid that outcome. That was unimaginable a few years ago, that you would have that kind of precision in making that choice of a drug.
GLENN COHEN: A recent set of controversies has to do with federal government funding of research that mixes human and animal genetic materials, sometimes called chimeras, but there's actually a broader group. So again, the method is to think about a large number of cases. It's helpful to think about very different cases. So, to use some real cases, imagine you mixed human brain cells, so human brain stem cells in the embryonic stage, into a mouse to create a mouse with a humanized brain. Now, it wouldn't be a human brain. It's not exactly the same. It's much smaller, for example, but it has humanized elements. Another example is a humanized immune system. Take a mouse (and we do this; we have these at Harvard, for example) and create an immune system in order to test drugs. Think about HIV, for example. So not the brain, but just the immune system is very human-like. And the last example is actually heart valve replacements. So Jesse Helms, the Senator, had a pig valve replacement years ago, so there's a piece of an animal in him. So these are some real cases of different kinds of mixing, and the question is: which are okay, which are not okay, and can we generate some principles?
So what might be wrong with mixing human and animal parts? So one thing that might be wrong is that we think it will confuse the boundaries between humans and animals, that right now we have a pretty clear distinction. While many people love their dogs and their cats like members of the family, they are able to say this is not a member of my family; this is not a member that has the same rights as my family members. In a world where we had much more of a continuum between animals and human beings, those distinctions would become difficult. Now, just because they become difficult doesn't mean that that's wrong. It would just pose for us a new problem, and maybe it would illustrate a problem we should be thinking about altogether. So I'm not particularly sympathetic to that argument. A different argument, though, is to say human beings are particular kinds of beings with particular kinds of capacities, and there's a dignity to being a human being. And if we were to mix enough animal material into a human being, the thing that we would have would not be something new, but a human being that could not flourish as a human being. It would be an undignified human being, a kind of entity that is really unable to experience what it is to be human. Now, again, you might push on this and say, well, yes, that's true, they would not be a human being and they would not necessarily have all the capacities of a human being. So imagine having some of the capacities of a human being, but being stuck in a rat body, for example. Sure, there'd be ways in which you would not flourish as a human being, but why not think of you as flourishing as a new kind of entity? And in particular you might actually think there might be an obligation to create some kinds of chimeras. So think, for example, of Big Bird from Sesame Street. It sounds like a silly example, but it's a good one.
Big Bird talks, Big Bird has friends, Big Bird goes to school, been in school a long time on Sesame Street, I guess, but he seems to have a pretty good life. Imagine we could take regular birds and turn them into Big Birds by doing something to them. Would we think of that as improving a little bird's life, or would we think about that as hurting a human being's life through this mixture? Hard questions, but at least it might be possible that we think we're doing animals a favor by doing this. Other answers might say it depends a lot on the specifics of the case. There are changes we could make to human beings by mixing in animal DNA that might make them better, and there are changes we could make that might make them worse, worse from a moral perspective. So, for example, if, to use an example from literature, we could give human beings night vision so they could see at night like some animals by mixing in a little animal DNA, you might think that would be great. We could do more search and rescue. We'd be better drivers. There would be fewer fatalities. On the other hand, if the result was to produce human beings that had much stronger aggression or violence or claws or something like that, you might think that's worse because we're going to do more harm. And that would suggest that the question of whether we ought to have chimeras, and of what kind, can only be answered in a particularistic way, by thinking about the particular case.
I will say, and this is kind of referencing some work by my friend Hank Greely at Stanford, that there are particular kinds of changes which from a sociological perspective seem to bother us more. And he describes them as kind of brains, balls, and faces. So brains: it turns out we're very disturbed by the idea of human brains or humanized brains in animals, much more disturbed by the humanized-brain mice than we are by the humanized-immune-system mice, for example. The other is balls. We tend to be very nervous when we think about the idea, and this is kind of crazy and out there, but imagine you could create an animal that had the ability to reproduce — its gonads, its reproductive system, were human. So that you'd have animals mating and producing human beings and animals. That's the kind of thing that I think disturbs a lot of people as an idea. And the last is faces. The idea of having animals with human faces, for example, I think just disturbs a lot of people, even though you might say a face is a face. But it's a marker of human beings and the way we relate to each other, and I think there's just a strong sociological pushback against that.
- As the material that makes all living things what/who we are, DNA is the key to understanding and changing the world. British geneticist Bryan Sykes and Francis Collins (director of the Human Genome Project) explain how, through gene editing, scientists can better treat illnesses, eradicate diseases, and revolutionize personalized medicine.
- But existing and developing gene editing technologies are not without controversies. A major point of debate deals with the idea that gene editing oversteps natural and ethical boundaries. Just because they can, does that mean that scientists should edit DNA?
- Harvard professor Glenn Cohen introduces another subcategory of gene experiments: mixing human and animal DNA. "The question is which are okay, which are not okay, and can we generate some principles," Cohen says of human-animal chimeras and arguments weighing improvements to human life against morality.
Cross-disciplinary cooperation is needed to save civilization.
- There is a great disconnect between the sciences and the humanities.
- Solutions to most of our real-world problems need both ways of knowing.
- Moving beyond the two-culture divide is an essential step to ensure our project of civilization.
For the past five years, I have run the Institute for Cross-Disciplinary Engagement at Dartmouth, an initiative sponsored by the John Templeton Foundation. Our mission has been to find ways to bring scientists and humanists together, often in public venues or — after Covid-19 — online, to discuss questions that transcend the narrow confines of a single discipline.
It turns out that these questions are at the very center of the much needed and urgent conversation about our collective future. While the complexity of the problems we face calls for a multicultural integration of different ways of knowing, the tools at hand are scarce and mostly ineffective. We need to rethink and learn how to collaborate productively across disciplinary cultures.
The danger of hyper-specialization
The explosive expansion of knowledge that started in the mid-1800s led to hyper-specialization inside and outside academia. Even within a single discipline, say philosophy or physics, professionals often don't understand one another. As I wrote here before, "This fragmentation of knowledge inside and outside of academia is the hallmark of our times, an amplification of the clash of the Two Cultures that physicist and novelist C.P. Snow warned his Cambridge colleagues about in 1959." The loss is palpable, intellectually and socially. Knowledge is not amenable to reductionism. Sure, a specialist will make progress in her chosen field, but the tunnel vision of hyper-specialization creates a loss of context: you do the work not knowing how it fits into the bigger picture or, more alarmingly, how it may impact society.
Many of the existential risks we face today — AI and its impact on the workforce, the dangerous loss of privacy due to data mining and sharing, the threat of cyberwarfare, the threat of biowarfare, the threat of global warming, the threat of nuclear terrorism, the threat to our humanity by the development of genetic engineering — are consequences of the growing ease of access to cutting-edge technologies and the irreversible dependence we all have on our gadgets. Technological innovation is seductive: we want to have the latest "smart" phone, 5k TV, and VR goggles because they are objects of desire and social placement.
Are we ready for the genetic revolution?
When the time comes, and experts believe it is coming sooner than we expect or are prepared for, meddling with the human genome may drive social inequality to an unprecedented level, with differences not just in wealth distribution but in what kind of being you become and who retains power. This is the kind of nightmare that Nobel Prize-winning geneticist Jennifer Doudna talked about in a recent Big Think video.
CRISPR 101: Curing Sickle Cell, Growing Organs, Mosquito Makeovers | Jennifer Doudna | Big Think www.youtube.com
At the heart of these advances is the dual-use nature of science, its light and shadow selves. Most technological developments are perceived and sold as spectacular advances that will either alleviate human suffering or bring increasing levels of comfort and accessibility to a growing number of people. Curing diseases is what motivated Doudna and other scientists involved with CRISPR research. But with that also came the potential for altering the genetic makeup of humanity in ways that, again, can be used for good or evil purposes.
This is not a sci-fi movie plot. The main difference between biohacking and nuclear hacking is one of scale. Nuclear technologies require industrial-level infrastructure, which is very costly and demanding. This is why nuclear research and its technological implementation have been mostly relegated to governments. Biohacking can be done in someone's backyard garage with equipment that is not very costly. The Netflix documentary series Unnatural Selection brings this point home in terrifying ways. The essential problem is this: once the genie is out of the bottle, it is virtually impossible to enforce any kind of control. The genie will not be pushed back in.
Cross-disciplinary cooperation is needed to save civilization
What, then, can be done? Such technological challenges go beyond the reach of a single discipline. CRISPR, for example, may be an invention within genetics, but its impact is vast, calling for oversight and ethical safeguards that are far from our current reality. The same goes for global warming, rampant environmental destruction, and the growing levels of air pollution and greenhouse gas emissions that are fast returning as we crawl into a post-pandemic era. Instead of learning the lessons from our 18 months of seclusion — that we are vulnerable to nature's powers, that we are co-dependent and globally linked in irreversible ways, that our individual choices affect many more than ourselves — we seem bent on decompressing our accumulated urges with impunity.
The experience from our experiment with the Institute for Cross-Disciplinary Engagement has taught us a few lessons that we hope can be extrapolated to the rest of society: (1) that there is huge public interest in this kind of cross-disciplinary conversation between the sciences and the humanities; (2) that there is growing consensus in academia that this conversation is needed and urgent, as similar institutes emerge in other schools; (3) that in order for an open cross-disciplinary exchange to be successful, a common language needs to be established with people talking to each other and not past each other; (4) that university and high school curricula should strive to create more courses where this sort of cross-disciplinary exchange is the norm and not the exception; (5) that this conversation needs to be taken to all sectors of society and not kept within isolated silos of intellectualism.
Moving beyond the two-culture divide is not simply an interesting intellectual exercise; it is, as humanity wrestles with its own indecisions and uncertainties, an essential step to ensure our project of civilization.
New study analyzes gravitational waves to confirm the late Stephen Hawking's black hole area theorem.
- A new paper confirms Stephen Hawking's black hole area theorem.
- The researchers used gravitational wave data to confirm the theorem.
- The data came from Caltech and MIT's Advanced Laser Interferometer Gravitational-Wave Observatory.
The late Stephen Hawking's black hole area theorem is correct, a new study shows. Scientists used gravitational waves to confirm the famous British physicist's idea, which may lead to uncovering more underlying laws of the universe.
The theorem, elaborated by Hawking in 1971, uses Einstein's theory of general relativity as a springboard to conclude that it is not possible for the surface area of a black hole to become smaller over time. The theorem parallels the second law of thermodynamics that says the entropy (disorder) of a closed system can't decrease over time. Since the entropy of a black hole is proportional to its surface area, both must continue to increase.
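The proportionality mentioned here is the standard Bekenstein–Hawking relation, which ties a black hole's entropy S to its horizon area A (k_B is Boltzmann's constant, G Newton's constant, and ħ the reduced Planck constant):

```latex
S_{\mathrm{BH}} = \frac{k_B\, c^3}{4\, G\, \hbar}\, A
```

So any process that never decreases the horizon area also never decreases the hole's entropy, mirroring the second law of thermodynamics.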
As a black hole gobbles up more matter, its mass and surface area grow. But as it grows, it also spins faster, which decreases its surface area. Hawking's theorem maintains that the increase in surface area that comes from the added mass would always be larger than the decrease in surface area because of the added spin.
Will Farr, one of the co-authors of the study that was published in Physical Review Letters, said their finding demonstrates that "black hole areas are something fundamental and important." His colleague Maximiliano Isi agreed in an interview with Live Science: "Black holes have an entropy, and it's proportional to their area. It's not just a funny coincidence, it's a deep fact about the world that they reveal."
What are gravitational waves?
Gravitational waves are "ripples" in spacetime, predicted by Albert Einstein in 1916, that are created by very violent processes happening in space. Einstein showed that very massive, accelerating space objects like neutron stars or black holes that orbit each other could cause disturbances in spacetime. Like the ripples produced by tossing a rock into a lake, they would bring about "waves" of spacetime that would spread in all directions.
As LIGO shared, "These cosmic ripples would travel at the speed of light, carrying with them information about their origins, as well as clues to the nature of gravity itself."
The gravitational waves detected by LIGO, whose two detectors, roughly 3,000 kilometers apart, use kilometers-long laser arms to sense the smallest distortions in spacetime, were generated 1.3 billion years ago by two giant black holes that were quickly spiraling toward each other.
What Stephen Hawking would have discovered if he lived longer | NASA's Michelle Thaller | Big Think www.youtube.com
Confirming Hawking's black hole area theorem
The researchers separated the signal into two parts, depending on whether it was from before or after the black holes merged. This allowed them to figure out the mass and spin of the original black holes as well as the mass and spin of the merged black hole. With this information, they calculated the surface areas of the black holes before and after the merger.
"As they spin around each other faster and faster, the gravitational waves increase in amplitude more and more until they eventually plunge into each other — making this big burst of waves," Isi elaborated. "What you're left with is a new black hole that's in this excited state, which you can then study by analyzing how it's vibrating. It's like if you ping a bell, the specific pitches and durations it rings with will tell you the structure of that bell, and also what it's made out of."
The surface area of the resulting black holes was larger than the combined area of the original black holes. This conformed to Hawking's area law.
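The before-and-after comparison can be illustrated with a back-of-the-envelope check using the standard Kerr horizon-area formula. The masses and spins below are rough, GW150914-like values chosen purely for illustration, not the study's published figures.

```python
import math

def kerr_horizon_area(mass, chi):
    """Horizon area of a Kerr black hole in geometric units (G = c = 1),
    with mass in solar masses and chi the dimensionless spin (0 <= chi < 1):
    A = 8 * pi * M^2 * (1 + sqrt(1 - chi^2))."""
    return 8 * math.pi * mass**2 * (1 + math.sqrt(1 - chi**2))

# Illustrative GW150914-like numbers: two roughly non-spinning black
# holes of about 36 and 29 solar masses merge into a ~62 solar-mass
# remnant spinning at chi ~ 0.67.
area_before = kerr_horizon_area(36, 0.0) + kerr_horizon_area(29, 0.0)
area_after = kerr_horizon_area(62, 0.67)

# Hawking's theorem: the final horizon area must exceed the combined
# initial area, even though the remnant's spin reduces its area.
assert area_after > area_before
```

Note how the spin term works against the mass term: a faster-spinning remnant has a smaller area for the same mass, yet the added mass still wins, exactly as the theorem requires.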
Ever since we've had the technology, we've looked to the stars in search of alien life. It's assumed that we're looking because we want to find other life in the universe, but what if we're looking to make sure there isn't any?
Here's an equation, and a rather distressing one at that: N = R* × fp × ne × fl × fi × fc × L. It's the Drake equation, and it describes the number of alien civilizations in our galaxy with whom we might be able to communicate. Its terms correspond to values such as the fraction of stars with planets, the fraction of planets on which life could emerge, the fraction of planets that can support intelligent life, and so on. Using conservative estimates, the minimum result of this equation is 20. There ought to be 20 intelligent alien civilizations in the Milky Way that we can contact and who can contact us. But there aren't any.
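The equation itself is just a product of factors, easy to evaluate once values are chosen. The inputs below are placeholder guesses for illustration only; they are not the "conservative estimates" the article refers to, and published estimates for several of these factors vary by orders of magnitude.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: N = R* * fp * ne * fl * fi * fc * L."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=1.0,  # stars formed per year in the galaxy
    f_p=0.5,     # fraction of stars with planetary systems
    n_e=2.0,     # planets per system that could support life
    f_l=0.5,     # fraction of those planets where life actually emerges
    f_i=0.1,     # fraction of life-bearing planets that evolve intelligence
    f_c=0.1,     # fraction of intelligent species with detectable technology
    L=10_000,    # years such a civilization remains detectable
)
print(N)  # about 50 civilizations with these placeholder inputs
```

The striking feature is how sensitive N is to the pessimistic factors: drop any single fraction toward zero and the galaxy empties out, which is exactly the lever the Great Filter argument pulls.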
The Drake equation is an example of a broader issue in the scientific community: considering the sheer size of the universe and our knowledge that intelligent life has evolved at least once, there should be evidence for alien life. This is generally referred to as the Fermi paradox, after the physicist Enrico Fermi, who first examined the contradiction between the high probability of alien civilizations and their apparent absence. Fermi summed this up rather succinctly when he asked, "Where is everybody?"
But maybe this was the wrong question. A better question, albeit a more troubling one, might be "What happened to everybody?" Unlike asking where life exists in the universe, there's a clearer potential answer to this question: the Great Filter.
Why the universe is empty
Alien life is likely, but there is none that we can see. Therefore, it could be the case that somewhere along the trajectory of life's development, there is a massive and common challenge that ends alien life before it becomes intelligent enough and widespread enough for us to see—a great filter.
This filter could take many forms. It could be that having a planet in the Goldilocks zone—the narrow band around a star where it is neither too hot nor too cold for life to exist—and having that planet contain organic molecules capable of assembling into life is extremely unlikely. We've observed plenty of planets in the Goldilocks zones of different stars (there are an estimated 40 billion in the Milky Way), but maybe the conditions there still aren't right for life to exist.
The Great Filter could occur at the very earliest stages of life. When you were in high school bio, you might have had the refrain drilled into your head: "Mitochondria are the powerhouse of the cell." I certainly did. However, mitochondria were at one point separate bacteria living their own existence. At some point on Earth, a single-celled organism tried to eat one of these bacteria, but instead of being digested, the bacterium teamed up with the cell, producing extra energy that enabled the cell to develop in ways leading to higher forms of life. An event like this might be so unlikely that it's only happened once in the Milky Way.
Or, the filter could be the development of large brains, as we have. After all, we live on a planet full of many creatures, and the kind of intelligence humans have has only occurred once. It may be overwhelmingly likely that living creatures on other planets simply don't need to evolve the energy-demanding neural structures necessary for intelligence.
What if the filter is ahead of us?
These possibilities assume that the Great Filter is behind us—that humanity is a lucky species that overcame a hurdle almost all other life fails to pass. This might not be the case, however; life might evolve to our level all the time but get wiped out by some unknowable catastrophe. Discovering nuclear power is a likely event for any advanced society, but it also has the potential to destroy such a society. Utilizing a planet's resources to build an advanced civilization also destroys the planet: the current process of climate change serves as an example. Or, it could be something entirely unknown, a major threat that we can't see and won't see until it's too late.
The bleak, counterintuitive suggestion of the Great Filter is that it would be a bad sign for humanity to find alien life, especially alien life with a degree of technological advancement similar to our own. If our galaxy is truly empty and dead, it becomes more likely that we've already passed through the Great Filter. The galaxy could be empty because all other life failed some challenge that humanity passed.
If we find another alien civilization, but not a cosmos teeming with a variety of alien civilizations, the implication is that the Great Filter lies ahead of us. The galaxy should be full of life, but it is not; one other instance of life would suggest that the many other civilizations that should be there were wiped out by some catastrophe that we and our alien counterparts have yet to face.
Fortunately, we haven't found any life. Although it might be lonely, it means humanity's chances at long-term survival are a bit higher than otherwise.
As a form of civil disobedience, hacking can help make the world a better place.
- Hackers' motivations range from altruistic to nihilistic.
- Altruistic hackers expose injustices, while nihilistic ones make society more dangerous.
- The line between ethical and unethical hacking is not always clear.
The following is an excerpt from Coding Democracy by Maureen Webb, which is publishing in paperback on July 21. Reprinted with Permission from The MIT PRESS. Copyright 2020.
As people begin to hack more concertedly at the structures of the status quo, the reactions of those who benefit from things as they are will become more fierce and more punitive, at least until the "hackers" succeed in shifting the relevant power relationships. We know this from the history of social movements. At the dawning of the digital age, farmers who hack tractors will be ruthlessly punished.
Somewhere on the continuum of altruism and transgression is the kind of hacking that might lead the world toward more accountable government and informed citizenries.
Of course, it must be acknowledged that hackers are engaged in a whole range of acts, from the altruistic to the plainly nihilistic and dangerous. On the altruistic side of the continuum, they are creating free software (GNU/Linux and other software under GPL licenses), Creative Commons (Creative Commons licensing), and Open Access (designing digital interfaces to make public records and publicly funded research accessible). They are hacking surveillance and monopoly power (creating privacy tools, alternative services, cooperative platforms, and a new decentralized internet) and electoral politics and decision making (Cinque Stelle, En Comú, Ethelo, Liquid Democracy, and PartidoX). They have engaged in stunts to expose the technical flaws in voting, communications, and security systems widely used by, or imposed on, the public (by playing chess with Germany's election voting machines, hacking the German Bildschirmtext system, and stealing ministers' biometric identifiers). They have punished shady contractors like HackingTeam, HBGary, and Stratfor, spilling their corporate dealings and personal information across the internet. They have exposed the corruption of oligarchs, politicians, and hegemons (through the Panama Papers, WikiLeaks, and Xnet).
More notoriously, they have coordinated distributed denial of service (DDoS) attacks to retaliate against corporate and government conduct (such as the Anonymous DDoS that protested PayPal's boycott of WikiLeaks; the ingenious use of the Internet of Things to DDoS Amazon; and the shutdown of US and Canadian government IT systems). They have hacked into databases (Manning and Snowden), leaked state secrets (Manning, Snowden, and WikiLeaks), and, in doing so, betrayed their own governments (Manning betrayed US war secrets, and Snowden betrayed US security secrets). They have interfered with elections (such as the hack and leak of the Democratic National Committee in the middle of the 2016 US election) and sown disinformation (the Russian hacking of US social media). They have interfered with property rights in order to assert user ownership, self-determination, and free software's four freedoms (farmers have hacked DRM code to repair their tractors, and Geohot unlocked the iPhone and hacked the Samsung phone to allow users administrator-level access to their devices) and to assert open access to publicly funded research. They have created black markets to evade state justice systems (such as Silk Road on the dark web) and cryptocurrencies that could undermine state-regulated monetary systems. They have meddled in geopolitics as free agents (Anonymous and the Arab Spring, and Julian Assange and his conduct with the Trump campaign). They have mucked around in and could potentially impair or shut down critical infrastructure. (The "WANK worm" attack on NASA is an early, notorious example, but hackers could potentially target banking systems, stock exchanges, electrical grids, telecommunications systems, air traffic control, chemical plants, nuclear plants, and even military "doomsday machines.")
It is impossible to calculate where these acts nudge us as a species. Some uses of hacking — such as the malicious, nihilistic hacking that harms critical infrastructure and threatens lives, and the hacking in cyberwarfare that injures the critical interests of other countries and undermines their democratic processes — are abhorrent and cannot be defended. The unfolding digital era looks very grim when one considers the threat this kind of hacking poses to peace and democracy combined with the dystopian direction states and corporations are going with digital tech.
But somewhere on the continuum of altruism and transgression is the kind of hacking that might lead the world toward more accountable government and informed citizenries, less corrupt and unfair economic systems, wiser public uses of digital tech, more self-determination for the ordinary user, fairer commercial contracts, better conditions for innovation and creativity, more decentralized and robust infrastructure systems, and an abolition of doomsday machines. In short, some hacking might move us toward a digital world in which there are more rather than fewer democratic, humanist outcomes.
It is not clear where the line between "good" and "bad" hacking should be drawn or how to regulate it wisely in every instance. Citizens should inform themselves and begin to consider this line-drawing seriously, however, since we will be grappling intensely with it for the next century or more. My personal view is that digital tech should not be used for everything. I think we should go back to simpler ways of running electrical grids and elections, for example. Systems are more resilient when they are not wholly digital and when they are smaller, more local, and modular. Consumers should have analogue options for things like fridges and cars, and design priorities for household goods should be durability and clean energy use, not interconnectedness.
In setting legal standards, prohibiting something and enforcing the prohibition are two different things. Sometimes a desired social norm can be established by prohibiting a thing but not enforcing the prohibition strenuously. And the law can also recognize the constructive role that civil disobedience plays in the evolution of social norms, through prosecutorial discretion and judicial discretion in sentencing.
Wau Holland told the young hackers at the Paradiso that the Chaos Computer Club was "not just a bunch of techno freaks: we've been thinking about the social consequences of technology from the very beginning." Societies themselves, however, are generally just beginning to grapple with the social consequences of digital technology and with how to characterize the various acts performed by hackers, morally and legally. Each act raises a set of complex questions. Societies' responses will be part of the dialectic that determines where we end up. Should these various hacker acts be treated as incidents of public service, free speech, free association, legitimate protest, civil disobedience, and harmless pranksterism? Or should they be treated as trespass, tortious interference, intellectual property infringement, theft, fraud, conspiracy, extortion, espionage, terrorism, and treason? I invite you to think about this as you consider how hacking has been treated by societies to date.