As a form of civil disobedience, hacking can help make the world a better place.
- Hackers' motivations range from altruistic to nihilistic.
- Altruistic hackers expose injustices, while nihilistic ones make society more dangerous.
- The line between ethical and unethical hacking is not always clear.
The following is an excerpt from Coding Democracy by Maureen Webb. Reprinted with permission from The MIT Press. Copyright 2020.
As people begin to hack more concertedly at the structures of the status quo, the reactions of those who benefit from things as they are will become more fierce and more punitive, at least until the "hackers" succeed in shifting the relevant power relationships. We know this from the history of social movements. At the dawning of the digital age, farmers who hack tractors will be ruthlessly punished.
Of course, it must be acknowledged that hackers are engaged in a whole range of acts, from the altruistic to the plainly nihilistic and dangerous. On the altruistic side of the continuum, they are creating free software (GNU/Linux and other software under GPL licenses), Creative Commons (Creative Commons licensing), and Open Access (designing digital interfaces to make public records and publicly funded research accessible). They are hacking surveillance and monopoly power (creating privacy tools, alternative services, cooperative platforms, and a new decentralized internet) and electoral politics and decision making (Cinque Stelle, En Comú, Ethelo, Liquid Democracy, and PartidoX). They have engaged in stunts to expose the technical flaws in voting, communications, and security systems widely used by, or imposed on, the public (by playing chess with Germany's election voting machines, hacking the German Bildschirmtext system, and stealing ministers' biometric identifiers). They have punished shady contractors like HackingTeam, HBGary, and Stratfor, spilling their corporate dealings and personal information across the internet. They have exposed the corruption of oligarchs, politicians, and hegemons (through the Panama Papers, WikiLeaks, and Xnet).
More notoriously, they have coordinated distributed denial of service (DDoS) attacks to retaliate against corporate and government conduct (such as the Anonymous DDoS that protested PayPal's boycott of WikiLeaks; the ingenious use of the Internet of Things to DDoS Amazon; and the shutdown of US and Canadian government IT systems). They have hacked into databases (Manning and Snowden), leaked state secrets (Manning, Snowden, and WikiLeaks), and, in doing so, betrayed their own governments (Manning betrayed US war secrets, and Snowden betrayed US security secrets). They have interfered with elections (such as the hack and leak of the Democratic National Committee in the middle of the 2016 US election) and sown disinformation (the Russian hacking of US social media). They have interfered with property rights in order to assert user ownership, self-determination, and free software's four freedoms (farmers have hacked DRM code to repair their tractors, and Geohot unlocked the iPhone and hacked the Samsung phone to allow users administrator-level access to their devices) and to assert open access to publicly funded research. They have created black markets to evade state justice systems (such as Silk Road on the dark web) and cryptocurrencies that could undermine state-regulated monetary systems. They have meddled in geopolitics as free agents (Anonymous and the Arab Spring, and Julian Assange and his conduct with the Trump campaign). They have mucked around in and could potentially impair or shut down critical infrastructure. (The "WANK worm" attack on NASA is an early, notorious example, but hackers could potentially target banking systems, stock exchanges, electrical grids, telecommunications systems, air traffic control, chemical plants, nuclear plants, and even military "doomsday machines.")
It is impossible to calculate where these acts nudge us as a species. Some uses of hacking — such as the malicious, nihilistic hacking that harms critical infrastructure and threatens lives, and the hacking in cyberwarfare that injures the critical interests of other countries and undermines their democratic processes — are abhorrent and cannot be defended. The unfolding digital era looks very grim when one considers the threat this kind of hacking poses to peace and democracy combined with the dystopian direction states and corporations are going with digital tech.
But somewhere on the continuum of altruism and transgression is the kind of hacking that might lead the world toward more accountable government and informed citizenries, less corrupt and unfair economic systems, wiser public uses of digital tech, more self-determination for the ordinary user, fairer commercial contracts, better conditions for innovation and creativity, more decentralized and robust infrastructure systems, and an abolition of doomsday machines. In short, some hacking might move us toward a digital world in which there are more rather than fewer democratic, humanist outcomes.
It is not clear where the line between "good" and "bad" hacking should be drawn or how to regulate it wisely in every instance. Citizens should inform themselves and begin to consider this line-drawing seriously, however, since we will be grappling intensely with it for the next century or more. My personal view is that digital tech should not be used for everything. I think we should go back to simpler ways of running electrical grids and elections, for example. Systems are more resilient when they are not wholly digital and when they are smaller, more local, and modular. Consumers should have analogue options for things like fridges and cars, and design priorities for household goods should be durability and clean energy use, not interconnectedness.
In setting legal standards, prohibiting something and enforcing the prohibition are two different things. Sometimes a desired social norm can be established by prohibiting a thing but not enforcing the prohibition strenuously. And the law can also recognize the constructive role that civil disobedience plays in the evolution of social norms, through prosecutorial discretion and judicial discretion in sentencing.
Wau Holland told the young hackers at the Paradiso that the Chaos Computer Club was "not just a bunch of techno freaks: we've been thinking about the social consequences of technology from the very beginning." Societies themselves, however, are generally just beginning to grapple with the social consequences of digital technology and with how to characterize the various acts performed by hackers, morally and legally. Each act raises a set of complex questions. Societies' responses will be part of the dialectic that determines where we end up. Should these various hacker acts be treated as incidents of public service, free speech, free association, legitimate protest, civil disobedience, and harmless pranksterism? Or should they be treated as trespass, tortious interference, intellectual property infringement, theft, fraud, conspiracy, extortion, espionage, terrorism, and treason? I invite you to think about this as you consider how hacking has been treated by societies to date.
Life is governed by unspoken rules. How do you know you're following them correctly?
- Most parts of everyday life involve accepting and applying various rules, from the words we speak to the cultural norms we insist on.
- These rules are learned largely by observation of others and are very rarely taught explicitly.
- Saul Kripke asks how we can ever be sure that we're following the rules correctly, and whether it even matters.
Imagine you're out with some friends and you have to, for whatever reason, add up two numbers: 432 and 222. It's easy, you think! You were great at arithmetic in school, and you won't even need to get out your phone. In a confident voice, you say, "Oh, that's 654."
There's a pause as everyone looks at you oddly. "You serious?" someone says. Of course you are. That's how addition works, right?
Or is it? According to Saul Kripke, how do you know that you're doing addition correctly?
The games people play
In everyday life, we all follow a series of rules, whether we know it or not. These can be the rules of etiquette, like "don't burp in public" or "don't cook fish in the office microwave," but there are also unspoken rules that apply to our use of words and concepts. For instance, consider the words "anxious" and "scared." The two are similar, but there are also very specific rules about when we cannot use them interchangeably.
Sociologists, anthropologists, and linguists have varying names for these rules, but the Austro-British philosopher Ludwig Wittgenstein called them our "form of life." Although the term is a bit ambiguous, it's taken to mean those rules that we accept to go about our public interactions. They're a bit like the rules of a game before everyone plays — "don't pick up the ball" or "start running when you hear the gun."
We all belong to various forms of life, which give us the values we have and the language we use (which in turn influences how we think). It might be, for instance, that your family has a very particular word for the remote control that other families find odd. Or a certain country might have cultural norms that others do not. It's curious how Scandinavians tend to eat their evening meal around 4 or 5pm, while Spaniards eat nearer 9 pm.
Let's return to the opening example. Mathematics is no different. There are certain rules we have to learn and understand, and then we apply them to new situations. We have axioms, parameters, operators, coefficients, and so on, all of which constitute the "form of life" of mathematics.
Do any of us know what we're doing?
Kripke was a card-carrying Wittgensteinian. While we go about applying these rules all the time, he raised the question of whether we can ever be entirely sure that we're applying them correctly.
For example, if a child or a non-native speaker is learning a language, they will often be corrected by competent speakers. In fact, it's important that they are corrected so that they can, themselves, become the ones who will enforce those rules later. As a speaker learns the proper rules of a language, they will recalibrate what Kripke calls their "rule-following considerations." And yet, it's quite conceivable that someone could misunderstand a word, but use it correctly all the time, by luck, perhaps.
In my own case, I remember using the word "reprehensible" quite correctly for a long time, thinking it meant one (slightly off) thing. I was simply lucky enough to use the word only in the contexts that fit my understanding. I was never "caught out." Most adults have a vocabulary of around 30,000 words, and most haven't taken the time to look up even a fraction of those. And, even if you did, what would that prove? Lexicographers are always playing catch-up — words morph and evolve as well as die, and new ones are born every day.
But, this skepticism is not limited to words. It applies, too, to things like mathematics. No one is ever shown "addition." What happens is that we're given a list of discrete examples of addition at work and are expected to just understand. We say, "2+2=4, 4+3=7, 9+7=16. You got it yet? Good, now go and do that on your own."
A teacher or a group of people competent at math might correct us as we're finding our feet, but it's a wonder how we latch on to the principle of addition. And then we assume that we're doing it right all along.
But what if addition isn't what you think it is? In the opening example, what if addition works differently if the second addend is three repeated numbers? What if addition works differently after you reach a certain number? It might be that you've just never encountered this before.
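This is close to Kripke's own illustration of the worry: a deviant rule he calls "quus," which agrees with ordinary addition on every case you have ever actually checked, but diverges beyond an arbitrary threshold (57, in his example). A minimal Python sketch of the idea (the function names and threshold follow Kripke; the code itself is just an illustration, not anything from the article):

```python
def plus(x, y):
    """Ordinary addition."""
    return x + y

def quus(x, y):
    """Kripke's deviant rule: identical to addition whenever both
    arguments are below 57, but yielding 5 otherwise."""
    if x < 57 and y < 57:
        return x + y
    return 5

# On all the small examples a learner is ever shown, the two rules agree...
assert plus(2, 2) == quus(2, 2) == 4
assert plus(4, 3) == quus(4, 3) == 7

# ...so no finite set of examples settles which rule you were "really" following.
print(plus(432, 222), quus(432, 222))  # 654 5
```

The point of the sketch is that the training data underdetermines the rule: someone who had internalized `quus` would pass every classroom drill and only be "caught out" on large numbers.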
I don't care — it just works
There are some Wittgensteinians who think Kripke misses the point. They argue that when you are part of a form of life, or when you wholesale accept a system of rules, part of doing that means that you don't question it. When you play chess you don't spend all your time asking, "But why do the knights move this way? It makes no sense!" You just play the game.
Likewise, when we speak to each other, we're not crippled by doubt that we might be choosing the wrong word. We just assume that we're right and get on with it. So, too, with Kripke's "rule-following considerations." To understand a rule is to accept it, not to doubt it. Addition is no different.
But, that being true, it's still an interesting thought: How do you know that you're doing anything properly? We all think that we're competent and intelligent, but what if we're just monumentally lucky? What if one day we're exposed as poseurs?

Jonny Thomson teaches philosophy in Oxford. He runs a popular Instagram account called Mini Philosophy (@philosophyminis). His first book is Mini Philosophy: A Small Book of Big Ideas.
The first nation to make bitcoin legal tender will use geothermal energy to mine it.
This article was originally published on our sister site, Freethink.
In June 2021, El Salvador became the first nation in the world to make bitcoin legal tender. Soon after, President Nayib Bukele instructed a state-owned power company to provide bitcoin mining facilities with cheap, clean energy — harnessed from the country's volcanoes.
The challenge: Bitcoin is a cryptocurrency, a digital form of money and a payment system. Crypto has several advantages over physical dollars and cents — it's incredibly difficult to counterfeit, and transactions are more secure — but it also has a major downside.
Crypto transactions are recorded and new coins are added into circulation through a process called mining.
Crypto mining involves computers solving incredibly difficult mathematical puzzles. It is also incredibly energy-intensive — Cambridge University researchers estimate that bitcoin mining alone consumes more electricity every year than Argentina.
Most of that electricity is generated by carbon-emitting fossil fuels. As it stands, bitcoin mining produces an estimated 36.95 megatons of CO2 annually.
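At its core, the "puzzle" miners solve is a brute-force search: keep hashing the block data with different nonces until the hash falls below a difficulty target. A toy sketch of that proof-of-work loop (illustrative only; real bitcoin mining double-SHA-256-hashes an 80-byte block header against a vastly harder target, which is why it consumes so much electricity):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Toy proof-of-work: find a nonce such that the SHA-256 hash of
    the block data plus nonce starts with `difficulty` zero hex digits."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra zero digit multiplies the expected work by 16;
# bitcoin's real difficulty adjusts so the network needs ~10 minutes per block.
nonce = mine("example block", 4)
print(nonce)
```

Finding a valid nonce is expensive, but anyone can verify it with a single hash, which is what makes the scheme work as a decentralized record-keeping mechanism.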
A world first: On June 9, El Salvador became the first nation to make bitcoin legal tender, meaning businesses have to accept it as payment and citizens can use it to pay taxes.
Less than a day later, Bukele tweeted that he'd instructed a state-owned geothermal electric company to put together a plan to provide bitcoin mining facilities with "very cheap, 100% clean, 100% renewable, 0 emissions energy."
Geothermal electricity is produced by capturing heat from the Earth itself. In El Salvador, that heat comes from volcanoes, and an estimated two-thirds of their energy potential is currently untapped.
Why it matters: El Salvador's decision to make bitcoin legal tender could be a win for both the cryptocurrency and the nation itself.
"(W)hat it does for bitcoin is further legitimizes its status as a potential reserve asset for sovereign and super sovereign entities," Greg King, CEO of crypto asset management firm Osprey Funds, told CBS News of the legislation.
Meanwhile, El Salvador is one of the poorest nations in North America, and bitcoin miners — the people who own and operate the computers doing the mining — receive bitcoins as a reward for their efforts.
If El Salvador begins operating bitcoin mining facilities powered by clean, cheap geothermal energy, it could become a global hub for mining — and receive a much-needed economic boost in the process.
The next steps: It remains to be seen whether Salvadorans will fully embrace bitcoin — which is notoriously volatile — or continue business-as-usual with the nation's other legal tender, the U.S. dollar.
Only time will tell if Bukele's plan for volcano-powered bitcoin mining facilities comes to fruition, too — but based on the speed of things so far, we won't have to wait long to find out.
Less than three hours after tweeting about the idea, Bukele followed up with another tweet claiming that the nation's geothermal energy company had already dug a new well and was designing a "mining hub" around it.
"This is going to evolve fast!" the president promised.
In each of our minds, we draw a demarcation line between beliefs that are reasonable and those that are nonsense. Where do you draw your line?
- Conspiracy theories exist on a spectrum, from plausible and mainstream to fringe and unpopular.
- It's very rare to find someone who only believes in one conspiracy theory. They generally believe in every conspiracy theory that's less extreme than their favorite one.
- To some extent, we are all conspiracy theorists.
The following is an excerpt from the book Escaping the Rabbit Hole by Mick West. It is reprinted with permission from the author.
If you want to understand how people fall for conspiracy theories, and if you want to help them, then you have to understand the conspiracy universe. More specifically, you need to know where their favorite theories are on the broader spectrum of conspiracies.
What type of person falls for conspiracy theories? What type of person would think that the World Trade Center was a controlled demolition, or that planes are secretly spraying chemicals to modify the climate, or that nobody died at Sandy Hook, or that the Earth is flat? Are these people crazy? Are they just incredibly gullible? Are they young and impressionable? No, in fact the range of people who believe in conspiracy theories is simply a random slice of the general population.
Many dismiss conspiracy theorists as a bunch of crazy people, or a bunch of stupid people, or a bunch of crazy stupid people. Yet in many ways the belief in a conspiracy theory is as American as apple pie, and like apple pie it comes in all kinds of varieties, and all kinds of normal people like to consume it.
My neighbor down the road is a conspiracy theorist. Yet he's also an engineer, retired after a successful career. I've had dinner at his house, and yet he's a believer in chemtrails, and I'm a chemtrail debunker. It's odd; he even told me after a few glasses of wine that he thinks I'm being paid to debunk chemtrails. He thought this because he googled my name and found some pages that said I was a paid shill. Since he's a conspiracy theorist he tends to trust conspiracy sources more than mainstream sources, so he went with that.
I've met all kinds of conspiracy theorists. At a chemtrails convention I attended there was pretty much the full spectrum. There were sensible and intelligent older people who had discovered their conspiracy anywhere from a few months to several decades earlier. There were highly eccentric people of all ages, including one old gentleman with a pyramid attached to his bike. There were people who channeled aliens, and there were people who were angry that the alien-channeling people were allowed in. There were young people itching for a revolution. There were well-read intellectuals who thought there was a subtle system of persuasion going on in the evening news, and there were people who genuinely thought they were living in a computer simulation.
There's such a wide spectrum of people who believe in conspiracy theories because the spectrum of conspiracy theories itself is very wide. There's a conspiracy theory for everyone, and hence very few people are immune.
The mainstream and the fringe
One unfortunate problem with the term "conspiracy theory" is that it paints with a broad brush. It's tempting to simply divide people up into "conspiracy theorists" and "regular people" — to have tinfoil-hat-wearing paranoids on one side and sensible folk on the other. But the reality is that we are all conspiracy theorists, one way or another. We all know that conspiracies exist; we all suspect people in power of being involved in many kinds of conspiracies, even if it's only something as banal as accepting campaign contributions to vote a certain way on certain types of legislation.
It's also tempting to simply label conspiracy theories as either "mainstream" or "fringe." Journalist Paul Musgrave referenced this dichotomy when he wrote in the Washington Post:
Less than two months into the administration, the danger is no longer that Trump will make conspiracy thinking mainstream. That has already come to pass.
Musgrave obviously does not mean that shape-shifting lizard overlords have become mainstream. Nor does he mean that flat Earth, chemtrails, or even 9/11 truth are mainstream. What he's really talking about is a fairly small shift in a dividing line on the conspiracy spectrum. Most fringe conspiracy theories remain fringe, most mainstream theories remain mainstream. But, Musgrave argues, there's been a shift that's allowed the bottom part of the fringe to enter into the mainstream. Obama being a Kenyan was thought by many to be a silly conspiracy theory, something on the fringe. But if the president of the United States (Trump) keeps bringing it up, then it moves more towards the mainstream.
Both conspiracy theories and conspiracy theorists exist on a spectrum. If we are to communicate effectively with a conspiracy-minded friend we need to get some perspective on the full range of that spectrum, and where our friend's personal blend of theories fit into it.
There are several ways we can classify a conspiracy theory: How scientific is it? How many people believe in it? How plausible is it? But the one I'm going to use is a somewhat subjective measure of how extreme the theory is. I'm going to rank them from 1 to 10, with 1 being entirely mainstream and 10 being the most obscure, extreme fringe theory you can fathom.
This extremeness spectrum is not simply a spectrum of reasonableness or scientific plausibility. Being extreme is being on the fringe, and fringe simply denotes the fact that it's an unusual interpretation and is restricted to a small number of people. A belief in religious supernatural occurrences (like miracles) is a scientifically implausible belief, and yet it is not considered particularly fringe.
Let's start with a simple list of actual conspiracy theories. These are ranked by extremeness in their most typical manifestation, but in reality, the following represent topics that can span several points on the scale, or even the entire scale.
- Big Pharma: The theory that pharmaceutical companies conspire to maximize profit by selling drugs that people do not actually need
- Global Warming Hoax: The theory that climate change is not caused by man-made carbon emissions, and that there's some other motive for claiming this
- JFK: The theory that people in addition to Lee Harvey Oswald were involved in the assassination of John F. Kennedy
- 9/11 Inside Job: The theory that the events of 9/11 were arranged by elements within the US government
- Chemtrails: The theory that the trails left behind aircraft are part of a secret spraying program
- False Flag Shootings: The theory that shootings like Sandy Hook and Las Vegas either never happened or were arranged by people in power
- Moon Landing Hoax: The theory that the Moon landings were faked in a movie studio
- UFO Cover-Up: The theory that the US government has contact with aliens or crashed alien crafts and is keeping it secret
- Flat Earth: The theory that the Earth is flat, but governments, business, and scientists all pretend it is a globe
- Reptile Overlords: The theory that the ruling classes are a race of shape-shifting trans-dimensional reptiles
If your friend subscribes to one of these theories you should not assume they believe in the most extreme version. They could be anywhere within a range. The categories are both rough and complex, and while some are quite narrow and specific, others encapsulate a wide range of variants of the theory that might go nearly all the way from a 1 to a 10. The position on the conspiracy spectrum instead gives us a rough reference point for the center of that belief's range.
Credit: "Escaping the Rabbit Hole" by Mick West
Figure 3 is an illustration (again, somewhat subjective) of the extents of extremeness of the conspiracy theories listed. For some of them the ranges are quite small. Flat Earth and Reptile Overlords are examples of theories that exist only at the far end of the spectrum. It's simply impossible to have a sensible version of the Flat Earth theory due to the fact that the Earth is actually round.
Similarly, there exist theories at the lower end of the spectrum that are fairly narrow in scope. A plot by pharmaceutical companies to maximize profits is hard (but not impossible) to make into a more extreme version.
Other theories are broader in scope. The 9/11 Inside Job theory is the classic example where the various theories go all the way from "they lowered their guard to allow some attack to happen," to "the planes were holograms; the towers were demolished with nuclear bombs." The chemtrail theory also has a wide range, from "additives to the fuel are making contrails last longer" to "nano-machines are being sprayed to decimate the population."
There are also overlapping relationships between the theories. Chemtrails might be spraying poison to help Big Pharma sell more drugs. JFK might have been killed because he was going to reveal that UFOs were real. Fake shootings might have been arranged to distract people from any of the other theories. The conspiracy theory spectrum is continuous and multi-dimensional.
Don't immediately pigeonhole your friend if they express some skepticism about some aspect of the broader theories. For example, having some doubts about a few pieces from a Moon-landing video does not necessarily mean that they think we never went to the Moon, it could just mean that they think a few bits of the footage were mocked up for propaganda purposes. Likewise, if they say we should question the events of 9/11, it does not necessarily mean that they think the Twin Towers were destroyed with explosives, it could just mean they think elements within the CIA helped the hijackers somehow.
Understanding where your friend is on the conspiracy spectrum is not about which topics he is interested in, it's about where he draws the line.
The demarcation line
While conspiracy theorists might individually focus on one particular theory, like 9/11 or chemtrails, it's very rare to find someone who only believes in one conspiracy theory. They generally believe in every conspiracy theory that's less extreme than their favorite one.
In practical terms this means that if someone believes in the chemtrail theory they will also believe that 9/11 was an inside job involving controlled demolition, that Lee Harvey Oswald was just one of several gunmen, and that global warming is a big scam.
The general conspiracy spectrum is complex, with individual theory categories spread out in multiple ways. But for your friend, an individual, they have an internal version of this scale, one that is much less complex. For the individual the conspiracy spectrum breaks down into two sets of beliefs — the reasonable and the ridiculous. Conspiracists, especially those who have been doing it for a while, make increasingly precise distinctions about where they draw the line.
The drawing of such dividing lines is called "demarcation." In philosophy there's a classical problem called the "demarcation problem," which is basically where you draw the line between science and non-science. Conspiracists have a demarcation line on their own personal version of the conspiracy spectrum. On one side of the line there's science and reasonable theories they feel are probably correct. On the other side of the line there's non-science, gibberish, propaganda, lies, and disinformation.
Credit: "Escaping the Rabbit Hole" by Mick West
I have a line of demarcation (probably around 1.5), you have one, your friend has a line. We all draw the line in different places.
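One way to picture this personal demarcation line is as a numeric threshold over the 1-to-10 extremeness scale described above. A small sketch, with the caveat that the scores below are purely illustrative guesses of mine (the book stresses that each theory really spans a range, not a single point):

```python
# Hypothetical extremeness scores on West's 1-10 scale (illustrative only).
theories = {
    "Big Pharma": 2, "Global Warming Hoax": 3, "JFK": 4,
    "9/11 Inside Job": 5, "Chemtrails": 6, "False Flag Shootings": 7,
    "Moon Landing Hoax": 8, "UFO Cover-Up": 8.5,
    "Flat Earth": 9, "Reptile Overlords": 10,
}

def believed(scores: dict, demarcation: float) -> list:
    """Everything below a person's demarcation line reads to them as
    reasonable; everything above it reads as ridiculous."""
    return sorted(name for name, score in scores.items() if score < demarcation)

# Per the book's observation, a chemtrail believer (line around 6.5)
# typically also accepts every less-extreme theory.
print(believed(theories, 6.5))
```

The model is crude, but it captures the key claim: moving someone out of the rabbit hole is less about refuting one pet theory than about where the threshold itself sits.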
A brief passage from a recent UN report describes what could be the first-known case of an autonomous weapon, powered by artificial intelligence, killing on the battlefield.
- Autonomous weapons have been used in war for decades, but artificial intelligence is ushering in a new category of autonomous weapons.
- These weapons are not only capable of moving autonomously but also identifying and attacking targets on their own without oversight from a human.
- There are currently no clear international restrictions on the use of new autonomous weapons, but some nations are calling for preemptive bans.
Nothing transforms warfare more violently than new weapons technology. In prehistoric times, it was the club, the spear, the bow and arrow, the sword. The 16th century brought rifles. The World Wars of the 20th century introduced machine guns, planes, and atomic bombs.
Now we might be seeing the first stages of the next battlefield revolution: autonomous weapons powered by artificial intelligence.
In March, the United Nations Security Council published an extensive report on the Second Libyan War that describes what could be the first-known case of an AI-powered autonomous weapon killing people on the battlefield.
The incident took place in March 2020, when soldiers with the Government of National Accord (GNA) were battling troops supporting the Libyan National Army of Khalifa Haftar (called Haftar Affiliated Forces, or HAF, in the report). One passage describes how GNA troops may have used an autonomous drone to kill retreating HAF soldiers:
"Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2... and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability."
Still, because the GNA forces were also firing surface-to-air missiles at the HAF troops, it's currently difficult to know how many, if any, troops were killed by autonomous drones. It's also unclear whether this incident represents anything new. After all, autonomous weapons have been used in war for decades.
Lethal autonomous weapons
Lethal autonomous weapon systems (LAWS) are weapon systems that can search for and fire upon targets on their own. It's a broad category whose definition is debatable. For example, you could argue that land mines and naval mines, used in battle for centuries, are LAWS, albeit relatively passive and "dumb." Since the 1970s, navies have used active protection systems that identify, track, and shoot down enemy projectiles fired toward ships, if the human controller chooses to pull the trigger.
Then there are drones, an umbrella term that commonly refers to unmanned weapons systems. Introduced in 1991 with unmanned (yet human-controlled) aerial vehicles, drones now represent a broad suite of weapons systems, including unmanned combat aerial vehicles (UCAVs), loitering munitions (commonly called "kamikaze drones"), and unmanned ground vehicles (UGVs), to name a few.
Some unmanned weapons are largely autonomous. The key question to understanding the potential significance of the March 2020 incident is: what exactly was the weapon's level of autonomy? In other words, who made the ultimate decision to kill: human or robot?
The Kargu-2 system
One of the weapons described in the UN report was the Kargu-2 system, a type of loitering munition. This type of unmanned aerial vehicle loiters above potential targets (usually anti-air weapons) and, when it detects radar signals from enemy systems, swoops down and explodes in a kamikaze-style attack.
Kargu-2 is produced by the Turkish defense contractor STM, which says the system can be operated both manually and autonomously using "real-time image processing capabilities and machine learning algorithms" to identify and attack targets on the battlefield.
In other words, STM says its robot can detect targets and autonomously attack them without a human "pulling the trigger." If that's what happened in Libya in March 2020, it'd be the first-known attack of its kind. But the UN report isn't conclusive.
It states that HAF troops suffered "continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems," which were "programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability."
What does that last bit mean? Basically, that a human operator might have programmed the drone to conduct the attack and then sent it a few miles away, where it didn't have connectivity to the operator. Without connectivity to the human operator, the robot would have had the final call on whether to attack.
To be sure, it's unclear if anyone died from such an autonomous attack in Libya. In any case, LAWS technology has evolved to the point where such attacks are possible. What's more, STM is developing swarms of drones that could work together to execute autonomous attacks.
Noah Smith, an economics writer, described what these attacks might look like on his Substack:
"Combined with A.I., tiny cheap little battery-powered drones could be a huge game-changer. Imagine releasing a networked swarm of autonomous quadcopters into an urban area held by enemy infantry, each armed with little rocket-propelled fragmentation grenades and equipped with computer vision technology that allowed it to recognize friend from foe."
But could drones accurately discern friend from foe? After all, computer-vision systems like facial recognition don't identify objects and people with perfect accuracy; one study found that very slightly tweaking an image can lead an AI to miscategorize it. Can LAWS be trusted to differentiate between a soldier with a rifle slung over his back and, say, a kid wearing a backpack?
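The fragility described above can be sketched with a toy model. The snippet below uses a hypothetical linear classifier (a stand-in for a real vision system, not anything from the cited study) and applies a small, structured perturbation in the style of the well-known fast-gradient-sign technique: no single feature moves by more than 0.1, yet the model's confidence collapses. All weights and inputs here are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a vision model: a fixed logistic-regression "classifier".
# Everything here is illustrative; no real system is being modeled.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # hypothetical model weights
x = rng.normal(size=100)   # hypothetical input "image" as a feature vector

def predict(v):
    """The model's confidence that the input belongs to the target class."""
    return 1.0 / (1.0 + np.exp(-(w @ v)))

# Fast-gradient-sign-style tweak: shift each feature by at most 0.1,
# but always in the direction that most lowers the model's score.
eps = 0.1
x_adv = x - eps * np.sign(w)

print(predict(x))      # confidence on the original input
print(predict(x_adv))  # confidence after the barely-visible tweak
```

Because the perturbation lowers the model's logit by `eps` times the L1 norm of the weights, the output can swing dramatically even though every individual feature changed only slightly. That is the kind of brittleness the study alludes to.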
Opposition to LAWS
Unsurprisingly, many humanitarian groups are concerned about introducing a new generation of autonomous weapons to the battlefield. One such group is the Campaign to Stop Killer Robots, whose 2018 survey of roughly 19,000 people across 26 countries found that 61 percent of respondents said they oppose the use of LAWS.
In 2018, the United Nations Convention on Certain Conventional Weapons issued a rather vague set of guidelines aiming to restrict the use of LAWS. One guideline states that "human responsibility must be retained when it comes to decisions on the use of weapons systems." Meanwhile, at least a couple dozen nations have called for preemptive bans on LAWS.
The U.S. and Russia oppose such bans, while China's position is a bit ambiguous. It's impossible to predict how the international community will regulate AI-powered autonomous weapons in the future, but among the world's superpowers, one assumption seems safe: If these weapons provide a clear tactical advantage, they will be used on the battlefield.
Milgram's experiment is rightly famous, but does it show what we think it does?
- In the 1960s, Stanley Milgram was sure that good, law-abiding Americans would never be able to follow orders like the Germans in the Holocaust.
- His experiments proved him spectacularly wrong. They showed just how many of us are willing to do evil if only we're told to by an authority figure.
- Yet, parts of the experiment were set up in such a way that we should perhaps conclude something a bit more nuanced.
Holding a clipboard and wearing a lab coat makes you a very powerful person. Add in a lanyard and a confident voice, and you're pretty much in Ocean's Eleven.
Though we believe ourselves to be contrarians, most of us like to obey authority. We answer questions, help with any number of tasks, and obey commands unthinkingly. The vast majority of the time, this is relatively harmless and even requisite for a functioning society, but it can also lead humanity to very dark places.
It could happen here
As we saw with Asch's experiments on conformity, the post-World War II research community was determined to understand how and why the Holocaust took place. Just after the trial of Adolf Eichmann, the American media and public came to see German society as uniquely monstrous in its willingness to follow orders unthinkingly, against any sense of duty or morality.
Into this climate came Stanley Milgram. In 1961, Milgram designed a series of experiments intended to show that the German people were more susceptible to authoritarianism than Americans were. Milgram believed, as many people did, that Americans would never be capable of such horrendous evil.
The experiment was to be set up in two stages: the first would be on American subjects, to gauge how far they would obey orders; the second would be on Germans, to prove how much they differed. The results stopped Milgram in his tracks.
Shock, shock, horror
Milgram wanted to ensure that his experiment involved as broad and diverse a group of people as possible. In addition to testing the American vs. German mindset, he wanted to see how much age, education, employment, and so on affected a person's willingness to obey orders.
So, the original 40 participants he gathered came from a wide spectrum of society, and each was told that they would take part in a "memory test" designed to determine the extent to which punishment affects learning and the ability to memorize.
The experiment involved three people. First, there was the "experimenter," dressed in a lab coat, who gave instructions and prompts. Second, there was an actor playing the "learner." Third, there was the participant, who believed they were acting as the "teacher" in the memory test. The ostensible setup was that the learner had to recall word pairs after being taught them, and whenever they answered incorrectly, the teacher had to administer an electric shock. (Each teacher was given a sample shock beforehand so they knew the kind of pain the learner would supposedly experience.) The first shock was set at 15 volts.
The learner (actor) made repeated mistakes in every session, and the teacher was told to increase the voltage each time. A tape recording made it sound as if the learner were in pain; as the session went on, the learner would plead and beg for the shocks to stop. The teacher was instructed to keep raising the voltage as punishment, up to a level explicitly described as fatal, a danger made all the starker by the learner desperately saying he had a heart condition.
The question Milgram wanted answered: how far would his participants go?
Just obeying orders
The results were startling. Sixty-five percent of the participants were willing to give a 450-volt shock described as lethal, and all of them administered a 300-volt shock described as traumatically painful. It bears repeating: this occurred despite the learner (actor) begging the teacher (participant) to stop.
In the studies that came after, across a variety of different setups, that roughly 65 percent figure came up again and again. The experiments showed that about two out of three people would be willing to kill someone if told to by an authority figure. Milgram found that people of all genders, ages, and nationalities were depressingly capable of inflicting incredible pain, or worse, on innocent people.
Major limitations in Milgram's experiment
Milgram took many steps to make sure that his experiment was rigorous and fair. He used the same tape recording of the "learner" screaming, begging, and pleading for all participants. He made sure the experimenters used only the same four prompts each time when the participants were reluctant or wanted to stop. He even made sure that he himself was not present at the experiment, lest he interfere with the procedure (something Philip Zimbardo did not do).
But, does the Milgram experiment actually prove what we think it does?
First, the experimenters were permitted to remind the participants that they were not responsible for what they did and that the team would take full blame. This, of course, does not make the study any less shocking, but it does perhaps change the scope of the conclusions. Perhaps the experiment reveals more about our ability to surrender responsibility and our willingness simply to become a tool. The conclusion is still pretty depressing, but it shows what we are capable of when offered absolution rather than when simply following orders.
Second, the experiment took place in a single hour, with very little time either to deliberate or talk things over with someone. In most situations, like the Holocaust, the perpetrators had ample time (years) to reflect on their actions, and yet, they still chose to turn up every day. Milgram perhaps highlights only how far we'll go in the heat of the moment.
Finally, the findings do not tell the whole tale. The participants took no sadistic glee in shocking the learner. They all showed signs of serious distress and anxiety, such as nervous laughing fits; some even had seizures. These were not willing accomplices but participants essentially pressured into acting a certain way. (Many scientists have since argued that Milgram's experiment was hugely unethical.)
The power of authority
That all being said, there's a reason why Milgram's experiment stays with us today. Whether it's evolutionarily or socially drilled into us, it seems that humans are capable of doing terrible things, if only we are told to do so by someone in power — or, at the very least, when we don't feel responsible for the consequences.
One silver lining of Milgram's work is that it can inoculate us against such drone-like obedience. It can help us resist. Simply knowing how far we can be manipulated makes it easier to say, "No."