Does Free Will Exist?
Alfred Mele: Yes, it does. But it turns out that not everybody understands the expression "free will" in the same way, and there are lots of different ways of understanding it. Unfortunately, that makes it hard to just say, "Yes, this is true, that isn't." One thing philosophers spend a lot of time doing is trying to sort out the possible meanings of an expression like "free will," and the literature on free will goes back a couple of thousand years. So, when I talk to the general public, one thing I say about free will is, you can think about it on a sort of gas station model, a service station model. When you go to the gas station, you can get regular gas, or mid-grade, or premium. And maybe we could simplify things by thinking of something like regular free will. Well, regular free will would be the sort of thing that is presupposed in courts of law when somebody is judged guilty of an offense: just that you understood what you were doing, you were sane and rational, nobody was forcing or compelling you to do it, and you didn't have any medical condition that forced or compelled you to do it. That would be enough to be acting freely. Now, that's regular free will.
Yeah, okay, so now how do we understand this being able to do otherwise, everything being the same up until that moment? And by everything, I mean the entire history of the universe and all of the laws of nature. So, one way to picture this ability to do otherwise is as follows. If I could have done otherwise at a given moment, then there’s another possible universe. You don’t have to suppose that this universe actually exists. Another scenario where the entire universe is the same up until that moment, and even so, I do something else instead. So maybe what I did was decide to call a taxi, but at that very moment, everything being the same up until then, I could have decided to take a subway instead, and then started heading down the stairs.
Okay. So, some people require that kind of ability for free will. Now, if we're going to have it, then the brain has to work in such a way that, everything being the same up until a given point in time, although I did one thing (I decided to call a taxi), I could have decided to take the subway. And we don't have good evidence that the brain does work that way, but we also don't have good evidence that it doesn't. Right? So, this is a question that is empirically open. It could turn out that the brain doesn't work this way, and if it doesn't, then we're not going to have free will at this mid-grade level, but we could still have regular free will.
So, I'm convinced we have regular free will. The mid-grade thing I'm not convinced we have, because we don't have the empirical evidence that we need. But we don't have the evidence either way.
Question: What is the main experiment that's driven this kind of free will skepticism?
Alfred Mele: So these were originally done starting in the early '80s. They are still being done today; the technology today is better, but it's the same kind of experiment. What you have are subjects seated in a chair like the one I'm sitting in, and they have this task: to flex the wrist whenever they want. They're watching a fast clock. There's a dot on the clock, and it makes a complete revolution in less than 3 seconds, and they're hooked up to two machines. One is measuring EEG, electrical activity on the scalp. And the other measures a muscle burst on the wrist; it's an electromyogram. Okay? So, they're supposed to flex whenever they want, watch this rapidly revolving spot on the clock, and then after they flex, they're supposed to indicate where the spot was on the clock when they first became aware of their urge, intention, or decision to flex. And they indicate it by moving a cursor to that spot on the clock. Okay, is that clear?
All right. Now, when these subjects are regularly reminded to be spontaneous and not to plan in advance when to flex, what you see is that about 500 milliseconds, about half a second, before the muscle burst, you get a marked change in electrical activity on the scalp: you get this ramping-up effect. So, that's about half a second before the muscle burst. On average, subjects say they first became aware of this urge, or this decision, or intention, or whatever, at about 200 milliseconds before the muscle burst. If you average out all those responses that they make by moving the cursor, it's about 206 milliseconds before the muscle burst.
So, Benjamin Libet was the first one to do these studies, and they're very interesting studies. What was innovative is that he had a way, he thought, to measure consciousness, because he was timing this conscious experience. So, the claim is that the brain is deciding over a third of a second before the mind becomes aware of the decision. Conscious free will isn't driving this behavior, isn't generating the flexings. And then Libet generalized, and he said, "Well, you know, this is the way it is for all behavior. The brain unconsciously makes decisions, and the mind becomes aware of them only later."
Now if you think that in order to be acting freely, say it’s an overt action, an action involving bodily motion, like flexing the wrist, a conscious decision has to be causing the behavior and you’re thinking that doesn’t happen, then you’re thinking you never act freely. And so, there is no free will.
Question: What are the mistakes?
Alfred Mele: Okay, it is an interesting result, but what does it really show? Do we know that it's decisions that are being made at -550 milliseconds, that is, about half a second before the muscle burst, as opposed to something else? Well, one thing that could be happening instead of decisions is that a causal process is up and running that increases the probability of a subsequent flexing, but doesn't raise it to one. So, what you might have at -550 is a potential cause of a subsequent flexing, and the decision might be made later than -550. It might even be made around -200, when people say they think they made it. So, that's one problem: we can't really correlate this early spike with a decision at that time. And in fact, the way the study is done, what triggered the computer to make a record of the preceding second or more of brain activity was the muscle burst. So, there is a muscle burst, and that triggers the computer: okay, make a record of this preceding second of brain activity. But if you use that methodology, then you never look for cases where you get this spike about half a second before (call it zero time) but no muscle motion. You don't, because it's the muscle motion that triggers the computer to make a record of the preceding activity.
So that's one problem. And one thing you might wonder too is, how long does it take a decision to flex your wrist now to generate a muscle burst, or a wrist flexing? There's a way to get indirect evidence about that. You could give subjects a reaction-time test. Now they wouldn't be making the decision; they would be responding to a cue with an intention. So the task might be to flex your wrist whenever that clock changes color from red to green. Okay? And they could be watching the clock too. It turns out that reaction-time studies have been done with a Libet clock, and the mean reaction time, in one study anyway, was 231 milliseconds. There was just a 231-millisecond gap between the emission of the go signal, which was a sound in that study, and the muscle burst. But if it took an intention or a decision something like 550 milliseconds to cause a muscle burst, this result would be very surprising; here it's only roughly 200. And of course, after the go signal, it's going to take some time to respond mentally with an intention. It doesn't have to be a conscious intention, but it's a causal process, so it takes time.
So, that's another problem. And then there's a third problem with these studies, and it has to do with the measurement of awareness. So, after they flex, again, subjects move the cursor to the spot and say, that's when I first became aware of it.
Question: How does measurement of awareness become a problem?
Alfred Mele: So, it must have been two-and-a-half to three years ago now that I gave a lecture on the neuroscience of free will at the National Institutes of Health, in a motor control unit. The idea was, I'd do my lecture, and then after that they'd make me a subject in one of these Libet experiments, which was cool; I was interested. And then after that they'd take me out to dinner. So, I gave my lecture, and then it was time to do the experiment. I was sitting in the chair, they set up the clock, and I knew what my task was. And I wanted to pretend to be a naive subject. I wanted to put myself in the shoes of somebody who might do this without really knowing what is going on. And so, I thought, this is what I'll do: I'll sit there and watch the clock, and I'll wait for urges to flex my wrist to pop up in me, to become conscious, and then as soon as I have such an urge, I'll flex, and after I flex I'll move the cursor to where I thought the spot was on the clock when I first became aware of that urge, or intention, or whatever.
So, I was sitting there a little while and nothing was happening. That is, no urges were coming to mind. And I thought, how do they do this? How do these people do it? And then I thought, I'd better think of a way to do it, because otherwise I'm going to be stuck in this chair and I won't get any dinner. Right? Dinner was next. So, I thought, this is what I'll do. I'll just consciously say "now" to myself silently and treat that as an indicator of an urge or a decision, and then I'll flex, and after I flex, I'll report where the spot was on the clock when I said "now" to myself.
Okay. So, I had to remember to do this then: say "now," flex as soon as possible after I say "now," and then do the reporting. And at first the neuroscientist said I was flexing in too wimpy a way, so I had to remember to flex hard too. So, okay, I did all of that. Subjects in these experiments had to do this at least 40 times to get data you could actually read and use. So, I did it about 40 times. And one thing I discovered is that although I could pinpoint the spot on the clock to maybe a range, I don't know, 20% or 25% of the clock, I couldn't pinpoint it to an exact tick, let's say. That was one problem. Also, I had something very definite to look for internally. I was looking for the conscious "now" saying, and I know what that's like. But subjects who are instructed to look for an urge, or an intention, or a decision, or whatever, might wonder, "Well, what the heck was that I was just experiencing? Was I just thinking about doing it? Was it an urge?" So, there could be confusion that they would have that I didn't have.
Question: What is the bottom line of these experiments in terms of where we stand on that free will scale?
Alfred Mele: Well, these experiments are thought to show that there is no free will. And my main point here is, they don't show that, for three different reasons. The judgment times are unreliable, so we don't really know when people first became aware of the urge. We don't have good evidence that what happens at -550 milliseconds, about half a second before the muscle burst, is that a decision is made, as opposed to a potential cause of a decision being present. And we don't have evidence that what's happening half a second before the muscle burst is sufficient for the subsequent muscle burst. So, it just leaves free will wide open.
Another thing too is, notice what we're studying here. We're studying relatively trivial actions: wrist flexions or mouse-button clickings, and decisions to do things now. And it may be that free will mainly isn't at work in that dimension of our lives, but mainly is at work in broader dimensions: think of students who've been accepted into different graduate schools with different scholarship offers and are deciding which one to take. Or maybe thinking about whether to propose marriage, or whether it's finally time to get the divorce, or what house to buy. You know? It may be that free will is more involved in things like that than in wrist flexions and the like. And, now, this is not a criticism of the scientists who do this work, but with the technology we have now, if you're going to be studying something similar to free will, it looks like you're going to be in this domain and not the domain of choosing graduate schools, buying houses, or proposing marriage.
Interviewed by Austin Allen
The question of human autonomy, the alternate universes that our choices can open up, and the problem of measuring awareness.
Cross-disciplinary cooperation is needed to save civilization.
- There is a great disconnect between the sciences and the humanities.
- Solutions to most of our real-world problems need both ways of knowing.
- Moving beyond the two-culture divide is an essential step to ensure our project of civilization.
For the past five years, I have run the Institute for Cross-Disciplinary Engagement at Dartmouth, an initiative sponsored by the John Templeton Foundation. Our mission has been to find ways to bring scientists and humanists together, often in public venues or — after Covid-19 — online, to discuss questions that transcend the narrow confines of a single discipline.
It turns out that these questions are at the very center of the much needed and urgent conversation about our collective future. While the complexity of the problems we face asks for a multi-cultural integration of different ways of knowing, the tools at hand are scarce and mostly ineffective. We need to rethink and learn how to collaborate productively across disciplinary cultures.
The danger of hyper-specialization
The explosive expansion of knowledge that started in the mid-1800s led to hyper-specialization inside and outside academia. Even within a single discipline, say philosophy or physics, professionals often don't understand one another. As I wrote here before, "This fragmentation of knowledge inside and outside of academia is the hallmark of our times, an amplification of the clash of the Two Cultures that physicist and novelist C.P. Snow admonished his Cambridge colleagues about in 1959." The loss is palpable, intellectually and socially. Knowledge does not lend itself to reductionism. Sure, a specialist will make progress in her chosen field, but the tunnel vision of hyper-specialization creates a loss of context: you do the work not knowing how it fits into the bigger picture or, more alarmingly, how it may impact society.
Many of the existential risks we face today — AI and its impact on the workforce, the dangerous loss of privacy due to data mining and sharing, the threat of cyberwarfare, the threat of biowarfare, the threat of global warming, the threat of nuclear terrorism, the threat to our humanity by the development of genetic engineering — are consequences of the growing ease of access to cutting-edge technologies and the irreversible dependence we all have on our gadgets. Technological innovation is seductive: we want to have the latest "smart" phone, 5k TV, and VR goggles because they are objects of desire and social placement.
Are we ready for the genetic revolution?
When the time comes, and experts believe it is coming sooner than we expect or are prepared for, genetic meddling with the human genome may drive social inequality to an unprecedented level with not just differences in wealth distribution but in what kind of being you become and who retains power. This is the kind of nightmare that Nobel Prize-winning geneticist Jennifer Doudna talked about in a recent Big Think video.
CRISPR 101: Curing Sickle Cell, Growing Organs, Mosquito Makeovers | Jennifer Doudna | Big Think
At the heart of these advances is the dual-use nature of science, its light and shadow selves. Most technological developments are perceived and sold as spectacular advances that will either alleviate human suffering or bring increasing levels of comfort and accessibility to a growing number of people. Curing diseases is what motivated Doudna and other scientists involved with CRISPR research. But with that also came the potential for altering the genetic makeup of humanity in ways that, again, can be used for good or evil purposes.
This is not a sci-fi movie plot. The main difference between biohacking and nuclear hacking is one of scale. Nuclear technologies require industrial-level infrastructure, which is very costly and demanding. This is why nuclear research and its technological implementation have been mostly relegated to governments. Biohacking can be done in someone's backyard garage with equipment that is not very costly. The Netflix documentary series Unnatural Selection brings this point home in terrifying ways. The essential problem is this: once the genie is out of the bottle, it is virtually impossible to enforce any kind of control. The genie will not be pushed back in.
Cross-disciplinary cooperation is needed to save civilization
What, then, can be done? Such technological challenges go beyond the reach of a single discipline. CRISPR, for example, may be an invention within genetics, but its impact is vast, asking for oversight and ethical safeguards that are far from our current reality. The same with global warming, rampant environmental destruction, and growing levels of air pollution/greenhouse gas emissions that are fast emerging as we crawl into a post-pandemic era. Instead of learning the lessons from our 18 months of seclusion — that we are fragile to nature's powers, that we are co-dependent and globally linked in irreversible ways, that our individual choices affect many more than ourselves — we seem to be bent on decompressing our accumulated urges with impunity.
The experience from our experiment with the Institute for Cross-Disciplinary Engagement has taught us a few lessons that we hope can be extrapolated to the rest of society: (1) that there is huge public interest in this kind of cross-disciplinary conversation between the sciences and the humanities; (2) that there is growing consensus in academia that this conversation is needed and urgent, as similar institutes emerge in other schools; (3) that in order for an open cross-disciplinary exchange to be successful, a common language needs to be established with people talking to each other and not past each other; (4) that university and high school curricula should strive to create more courses where this sort of cross-disciplinary exchange is the norm and not the exception; (5) that this conversation needs to be taken to all sectors of society and not kept within isolated silos of intellectualism.
Moving beyond the two-culture divide is not simply an interesting intellectual exercise; it is, as humanity wrestles with its own indecisions and uncertainties, an essential step to ensure our project of civilization.
New study analyzes gravitational waves to confirm the late Stephen Hawking's black hole area theorem.
- A new paper confirms Stephen Hawking's black hole area theorem.
- The researchers used gravitational wave data to prove the theorem.
- The data came from Caltech and MIT's Advanced Laser Interferometer Gravitational-Wave Observatory.
The late Stephen Hawking's black hole area theorem is correct, a new study shows. Scientists used gravitational waves to prove the famous British physicist's idea, which may lead to uncovering more underlying laws of the universe.
The theorem, elaborated by Hawking in 1971, uses Einstein's theory of general relativity as a springboard to conclude that it is not possible for the surface area of a black hole to become smaller over time. The theorem parallels the second law of thermodynamics that says the entropy (disorder) of a closed system can't decrease over time. Since the entropy of a black hole is proportional to its surface area, both must continue to increase.
As a black hole gobbles up more matter, its mass and surface area grow. But as it grows, it also spins faster, which decreases its surface area. Hawking's theorem maintains that the increase in surface area that comes from the added mass would always be larger than the decrease in surface area because of the added spin.
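This tradeoff can be made concrete with the standard Kerr horizon-area formula. In geometric units (G = c = 1), a black hole of mass M and dimensionless spin χ (0 for non-spinning, approaching 1 at maximal spin) has horizon area A = 8πM²(1 + √(1 − χ²)). The sketch below is illustrative and not taken from the study itself:

```python
import math

def kerr_horizon_area(mass, chi):
    """Event-horizon area of a Kerr black hole in geometric units (G = c = 1).

    mass: black hole mass (any consistent unit, e.g. solar masses)
    chi:  dimensionless spin parameter, 0 <= chi < 1
    """
    return 8 * math.pi * mass**2 * (1 + math.sqrt(1 - chi**2))

# Adding mass grows the horizon area...
assert kerr_horizon_area(62, 0.0) > kerr_horizon_area(61, 0.0)

# ...while adding spin at fixed mass shrinks it, as described above.
assert kerr_horizon_area(62, 0.67) < kerr_horizon_area(62, 0.0)
```

For χ = 0 the formula reduces to the familiar Schwarzschild result, A = 16πM².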
Will Farr, one of the co-authors of the study that was published in Physical Review Letters, said their finding demonstrates that "black hole areas are something fundamental and important." His colleague Maximiliano Isi agreed in an interview with Live Science: "Black holes have an entropy, and it's proportional to their area. It's not just a funny coincidence, it's a deep fact about the world that they reveal."
What are gravitational waves?
Gravitational waves are "ripples" in spacetime, predicted by Albert Einstein in 1916, that are created by very violent processes happening in space. Einstein showed that very massive, accelerating space objects like neutron stars or black holes that orbit each other could cause disturbances in spacetime. Like the ripples produced by tossing a rock into a lake, they would bring about "waves" of spacetime that would spread in all directions.
As LIGO shared, "These cosmic ripples would travel at the speed of light, carrying with them information about their origins, as well as clues to the nature of gravity itself."
The gravitational waves detected by LIGO, whose two laser interferometers, sited roughly 3,000 kilometers apart, can register the smallest distortions in spacetime, were generated 1.3 billion years ago by two giant black holes that were quickly spiraling toward each other.
What Stephen Hawking would have discovered if he lived longer | NASA's Michelle Thaller | Big Think
Confirming Hawking's black hole area theorem
The researchers separated the signal into two parts, depending on whether it was from before or after the black holes merged. This allowed them to figure out the mass and spin of the original black holes as well as the mass and spin of the merged black hole. With this information, they calculated the surface areas of the black holes before and after the merger.
"As they spin around each other faster and faster, the gravitational waves increase in amplitude more and more until they eventually plunge into each other — making this big burst of waves," Isi elaborated. "What you're left with is a new black hole that's in this excited state, which you can then study by analyzing how it's vibrating. It's like if you ping a bell, the specific pitches and durations it rings with will tell you the structure of that bell, and also what it's made out of."
The surface area of the resulting black holes was larger than the combined area of the original black holes. This conformed to Hawking's area law.
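As a rough numerical illustration of that before-and-after comparison, the area law can be checked with the Kerr horizon-area formula, A = 8πM²(1 + √(1 − χ²)) in geometric units (G = c = 1). The masses and spin below are round numbers in the ballpark of published GW150914 estimates, and the initial holes are treated as non-spinning purely for simplicity; this is a sketch, not the study's actual calculation:

```python
import math

def kerr_horizon_area(mass, chi=0.0):
    """Kerr horizon area in geometric units (G = c = 1); mass in solar masses."""
    return 8 * math.pi * mass**2 * (1 + math.sqrt(1 - chi**2))

# Two initial black holes, treated as non-spinning in this sketch
area_before = kerr_horizon_area(36) + kerr_horizon_area(29)

# Merged remnant: larger mass, but substantial spin (chi ~ 0.67)
area_after = kerr_horizon_area(62, chi=0.67)

# Hawking's area theorem: total horizon area must not decrease
assert area_after > area_before
```

Even though the remnant's spin shrinks its area relative to a non-spinning hole of the same mass, the gain from the added mass dominates, so the total area still grows.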
Ever since we've had the technology, we've looked to the stars in search of alien life. It's assumed that we're looking because we want to find other life in the universe, but what if we're looking to make sure there isn't any?
Here's an equation, and a rather distressing one at that: N = R* × fP × ne × f1 × fi × fc × L. It's the Drake equation, and it describes the number of alien civilizations in our galaxy with whom we might be able to communicate. Its terms correspond to values such as the fraction of stars with planets, the fraction of planets on which life could emerge, the fraction of planets that can support intelligent life, and so on. Using conservative estimates, the minimum result of this equation is 20. There ought to be 20 intelligent alien civilizations in the Milky Way that we can contact and who can contact us. But there aren't any.
The Drake equation is an example of a broader issue in the scientific community: considering the sheer size of the universe and our knowledge that intelligent life has evolved at least once, there should be evidence for alien life. This is generally referred to as the Fermi paradox, after the physicist Enrico Fermi, who first examined the contradiction between the high probability of alien civilizations and their apparent absence. Fermi summed this up rather succinctly when he asked, "Where is everybody?"
But maybe this was the wrong question. A better question, albeit a more troubling one, might be "What happened to everybody?" Unlike asking where life exists in the universe, there's a clearer potential answer to this question: the Great Filter.
Why the universe is empty
Alien life is likely, but there is none that we can see. Therefore, it could be the case that somewhere along the trajectory of life's development, there is a massive and common challenge that ends alien life before it becomes intelligent enough and widespread enough for us to see—a great filter.
This filter could take many forms. It could be that having a planet in the Goldilocks zone (the narrow band around a star where it is neither too hot nor too cold for life to exist) and having that planet contain organic molecules capable of accumulating into life is extremely unlikely. We've observed plenty of planets in the Goldilocks zones of different stars (an estimated 40 billion in the Milky Way alone), but maybe the conditions there still aren't right for life to exist.
The Great Filter could occur at the very earliest stages of life. If you took high school biology, you might have had the refrain drilled into your head that "mitochondria are the powerhouse of the cell." I certainly did. However, mitochondria were at one point free-living bacteria. At some point on Earth, a single-celled organism tried to eat one of these bacteria, but instead of being digested, the bacterium teamed up with the cell, producing extra energy that enabled the cell to develop in ways leading to higher forms of life. An event like this might be so unlikely that it's only happened once in the Milky Way.
Or, the filter could be the development of large brains, as we have. After all, we live on a planet full of many creatures, and the kind of intelligence humans have has only occurred once. It may be overwhelmingly likely that living creatures on other planets simply don't need to evolve the energy-demanding neural structures necessary for intelligence.
What if the filter is ahead of us?
These possibilities assume that the Great Filter is behind us—that humanity is a lucky species that overcame a hurdle almost all other life fails to pass. This might not be the case, however; life might evolve to our level all the time but get wiped out by some unknowable catastrophe. Discovering nuclear power is a likely event for any advanced society, but it also has the potential to destroy such a society. Utilizing a planet's resources to build an advanced civilization also destroys the planet: the current process of climate change serves as an example. Or, it could be something entirely unknown, a major threat that we can't see and won't see until it's too late.
The bleak, counterintuitive suggestion of the Great Filter is that it would be a bad sign for humanity to find alien life, especially alien life with a degree of technological advancement similar to our own. If our galaxy is truly empty and dead, it becomes more likely that we've already passed through the Great Filter. The galaxy could be empty because all other life failed some challenge that humanity passed.
If we find another alien civilization, but not a cosmos teeming with a variety of alien civilizations, the implication is that the Great Filter lies ahead of us. The galaxy should be full of life, but it is not; one other instance of life would suggest that the many other civilizations that should be there were wiped out by some catastrophe that we and our alien counterparts have yet to face.
Fortunately, we haven't found any life. Although it might be lonely, it means humanity's chances at long-term survival are a bit higher than otherwise.
As a form of civil disobedience, hacking can help make the world a better place.
- Hackers' motivations range from altruistic to nihilistic.
- Altruistic hackers expose injustices, while nihilistic ones make society more dangerous.
- The line between ethical and unethical hacking is not always clear.
The following is an excerpt from Coding Democracy by Maureen Webb, which is publishing in paperback on July 21. Reprinted with permission from The MIT Press. Copyright 2020.
As people begin to hack more concertedly at the structures of the status quo, the reactions of those who benefit from things as they are will become more fierce and more punitive, at least until the "hackers" succeed in shifting the relevant power relationships. We know this from the history of social movements. At the dawning of the digital age, farmers who hack tractors will be ruthlessly punished.
Somewhere on the continuum of altruism and transgression is the kind of hacking that might lead the world toward more accountable government and informed citizenries.
Of course, it must be acknowledged that hackers are engaged in a whole range of acts, from the altruistic to the plainly nihilistic and dangerous. On the altruistic side of the continuum, they are creating free software (GNU/Linux and other software under GPL licenses), Creative Commons (Creative Commons licensing), and Open Access (designing digital interfaces to make public records and publicly funded research accessible). They are hacking surveillance and monopoly power (creating privacy tools, alternative services, cooperative platforms, and a new decentralized internet) and electoral politics and decision making (Cinque Stelle, En Comú, Ethelo, Liquid Democracy, and PartidoX). They have engaged in stunts to expose the technical flaws in voting, communications, and security systems widely used by, or imposed on, the public (by playing chess with Germany's election voting machines, hacking the German Bildschirmtext system, and stealing ministers' biometric identifiers). They have punished shady contractors like HackingTeam, HBGary, and Stratfor, spilling their corporate dealings and personal information across the internet. They have exposed the corruption of oligarchs, politicians, and hegemons (through the Panama Papers, WikiLeaks, and Xnet).
More notoriously, they have coordinated distributed denial of service (DDoS) attacks to retaliate against corporate and government conduct (such as the Anonymous DDoS that protested PayPal's boycott of WikiLeaks; the ingenious use of the Internet of Things to DDoS Amazon; and the shutdown of US and Canadian government IT systems). They have hacked into databases (Manning and Snowden), leaked state secrets (Manning, Snowden, and WikiLeaks), and, in doing so, betrayed their own governments (Manning betrayed US war secrets, and Snowden betrayed US security secrets). They have interfered with elections (such as the hack and leak of the Democratic National Committee in the middle of the 2016 US election) and sown disinformation (the Russian hacking of US social media). They have interfered with property rights in order to assert user ownership, self-determination, and free software's four freedoms (farmers have hacked DRM code to repair their tractors, and Geohot unlocked the iPhone and hacked the Samsung phone to allow users administrator-level access to their devices) and to assert open access to publicly funded research. They have created black markets to evade state justice systems (such as Silk Road on the dark web) and cryptocurrencies that could undermine state-regulated monetary systems. They have meddled in geopolitics as free agents (Anonymous and the Arab Spring, and Julian Assange and his conduct with the Trump campaign). They have mucked around in and could potentially impair or shut down critical infrastructure. (The notorious "WANK worm" attack on NASA is an early example, but hackers could potentially target banking systems, stock exchanges, electrical grids, telecommunications systems, air traffic control, chemical plants, nuclear plants, and even military "doomsday machines.")
It is impossible to calculate where these acts nudge us as a species. Some uses of hacking — such as the malicious, nihilistic hacking that harms critical infrastructure and threatens lives, and the hacking in cyberwarfare that injures the critical interests of other countries and undermines their democratic processes — are abhorrent and cannot be defended. The unfolding digital era looks very grim when one considers the threat this kind of hacking poses to peace and democracy combined with the dystopian direction states and corporations are going with digital tech.
But somewhere on the continuum of altruism and transgression is the kind of hacking that might lead the world toward more accountable government and informed citizenries, less corrupt and unfair economic systems, wiser public uses of digital tech, more self-determination for the ordinary user, fairer commercial contracts, better conditions for innovation and creativity, more decentralized and robust infrastructure systems, and an abolition of doomsday machines. In short, some hacking might move us toward a digital world in which there are more rather than fewer democratic, humanist outcomes.
It is not clear where the line between "good" and "bad" hacking should be drawn or how to regulate it wisely in every instance. Citizens should inform themselves and begin to consider this line-drawing seriously, however, since we will be grappling intensely with it for the next century or more. My personal view is that digital tech should not be used for everything. I think we should go back to simpler ways of running electrical grids and elections, for example. Systems are more resilient when they are not wholly digital and when they are smaller, more local, and modular. Consumers should have analogue options for things like fridges and cars, and design priorities for household goods should be durability and clean energy use, not interconnectedness.
In setting legal standards, prohibiting something and enforcing the prohibition are two different things. Sometimes a desired social norm can be struck by prohibiting a thing and not enforcing it strenuously. And the law can also recognize the constructive role that civil disobedience plays in the evolution of social norms, through prosecutorial discretion and judicial discretion in sentencing.
Wau Holland told the young hackers at the Paradiso that the Chaos Computer Club was "not just a bunch of techno freaks: we've been thinking about the social consequences of technology from the very beginning." Societies themselves, however, are generally just beginning to grapple with the social consequences of digital technology and with how to characterize the various acts performed by hackers, morally and legally. Each act raises a set of complex questions. Societies' responses will be part of the dialectic that determines where we end up. Should these various hacker acts be treated as incidents of public service, free speech, free association, legitimate protest, civil disobedience, and harmless pranksterism? Or should they be treated as trespass, tortious interference, intellectual property infringement, theft, fraud, conspiracy, extortion, espionage, terrorism, and treason? I invite you to think about this as you consider how hacking has been treated by societies to date.