The Online Economy Is Breaking Businesses, and Stealing Our Time and Energy
It's harder for most people to make a living now than it was before the rise of online businesses like Facebook and Amazon. That's because the digital economy is hurting the real economy.
Douglas Rushkoff is the host of the Team Human podcast and a professor of digital economics at CUNY/Queens. He is also the author of a dozen bestselling books on media, technology, and culture, including Present Shock, Program or Be Programmed, Media Virus, and his latest, Team Human.
Douglas Rushkoff: The digital economy is breaking things because most people running digital companies aren't aware of the operating system beneath what they're doing. There's a good old-fashioned, venture-capital-driven operating system. It goes back to central currency and corporatism and chartered monopolies, really the way our economy has worked for a good 400 or 500 years, promoting everything from the British East India Trading Company right through to Walmart and General Electric. But when you take that and juice it up with digital steroids, weird things start to happen. You end up able to tweak and optimize your business so carefully that you can really see: is it growing? Is it not growing? What can we do to promote growth? Growth. Growth. And if your company is not growing, you end up in big trouble against all the other players that are growing.
It's simple power-law dynamics. It's a winner-takes-all landscape. So you have a company like Twitter, which I would see, in my old-fashioned view, as a successful company: it makes $500 million a quarter based on a 140-character app. Success, right? No. In the current environment that's a failure, because they don't have a growth strategy. They don't know how to turn into a video company, a news company, a social company, an everything company. And the reason the digital economy is breaking our businesses is that we're taking the old agenda of growth and running it on digital platforms, and it ends up amplifying and spinning this priority out of control.
Most of us look at the industrial age as this natural outgrowth of the need to do more and better business. But as I researched it, I found out that most of the innovations we came up with in the industrial age were really for the opposite. There was a thriving peer-to-peer economy right at the end of the Middle Ages that nobody likes to talk about. The soldiers had come back from the Crusades. They had all sorts of new inventions and technologies and mechanisms; there were new trade routes that they had opened up. And they came back to their towns, and they took something that they had found in the Middle East called the bazaar and revived it as something they called the marketplace. So now people who had just been peasants working the land of the lords started coming together and trading the stuff that they made. And they had all of these really interesting instruments, from market money to local currencies to grain-based currencies, and all of these evolved really to promote the exchange of value and the velocity of transactions between people.
And it started to really do well, which was the problem. As the peasants became wealthy, the aristocracy got scared: who are these people? They're not going to be dependent on us anymore. So they came up with two main financial innovations to prevent the rise of this peer-to-peer economy. The first one was the chartered monopoly, really the parent of the modern corporation. All the chartered monopoly was, was a way to say that all of you small businesses are now illegal. If you want to be in the shoe business, you have to work for His Majesty's royal shoe company. If you want to be in the grain business, you have to work for His Majesty's royal grain company. So people who had been small-business people now became employees. Instead of selling the value they created, they now sold their time as servants, as wage laborers.
The second invention they came up with was central currency. Not such a terrible thing in itself. It's great to have a long-distance currency that lots of people can use and value, but the problem was that they made all of the local currencies illegal. So the only way people could trade with each other, the only way the candlestick maker could trade with the chicken farmer, was by borrowing central currency from the treasury. So now you had to borrow money at interest just in order to transact. And that set in motion a real growth cascade. If you have a currency that has to be paid back with interest, then just to make ends meet you need an economy that's growing. You need more money next year than there was this year.
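To put rough numbers on that dynamic (a stylized illustration of the argument, not Rushkoff's own model): if the whole money supply is lent into existence at interest, borrowers collectively owe more than exists, so new value has to be produced or borrowed every lending cycle.

```latex
% Stylized sketch: money supply M issued as loans at interest rate r.
% Borrowers receive M but owe M(1+r); only M is in circulation.
\[
\underbrace{M(1+r)}_{\text{owed}} \;-\; \underbrace{M}_{\text{in circulation}} \;=\; rM \;>\; 0,
\]
% so the economy must generate at least rM of new money per cycle
% just to service the debt, i.e. it is structurally required to grow.
```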
So that worked well for colonial powers: as long as we could extend into Africa and South America and North America, find slaves, find new resources, we could grow. But what happens when you reach the limits of the planet's surface area, as we did really at the end of World War II? Then we started to look for virtual surface area, some new way to grow. And that was technology. We believed that digital technology and the World Wide Web and computers would really create a new place, a new virtual territory for us to colonize. And it turns out that what we've been colonizing for the last 20 years is human attention and human time. And now it looks like we're even running out of that.
Well, when the Internet first emerged, people like me thought, hooray, now we're going to have a way to restore all that peer-to-peer connectivity between people. And some of the earliest Internet businesses actually sought to do that in one way or another. If you look at the path of eBay and PayPal and Etsy and Square, there are a lot of businesses that are asking, how can we connect people in a lateral way? The problem is that as those businesses grew and everybody got interested in the net, other folks, the folks who started Wired magazine or the Global Business Network, a lot of folks from the old NASDAQ stock exchange, which had been depleted ever since the biotech crash of 1987, saw in the Internet a new avenue for growth. So for them it wasn't about how we're going to connect people in new, wonderful ways and let them create and exchange value between themselves; it was really more about how we can use digital platforms to extract value from people and places. So if you compare an eBay, which is promoting exchange, to an Amazon, which is promoting extraction, you can begin to see the difference. What happens is you end up with digital businesses that are started more in the flip-this-house model of business than in the model of starting a business that helps people and keeps going for 20 years.
People put money into an Internet business in order to get to acquisition or IPO. They want to grow this business a hundred times and then get out. It's called an exit strategy. After that, it doesn't matter what happens. So, if you have a ride-sharing company, are you going to pay your drivers enough so that they can have an ecosystem that lasts 20 years? No, you don't care about that. You can adopt a scorched-earth policy, because you only need those drivers and that whole ride-sharing sector long enough to establish a monopoly and then move over into something else. I mean, do you think Amazon cares about booksellers and authors and publishers? No. The book industry was low-hanging fruit in the digital economy. Believe me, I'm in the book business. I know. We barely get by. It's not a growth industry, but it's ripe for the picking. So if you come in and optimize the book industry, you can extract so much value from it that you kill the thing and take it over very, very rapidly. But what's that for? Is it to own the book business? No. It's to then move over into another vertical, like housewares and gardening and toys and drones and Amazon Web Services and everything else.
When the Internet came around, we all thought we were going to work at home in our underwear, on our own time, exchanging value with one another. But instead we've ended up with an Internet that takes more time from us, an Internet that leaves us exhausted and drained when we're done using it. And that's because we're not using it; it's using us. The Internet is really just the technological front end on a whole series of business plans that are looking to extract money from us, time from us, attention from us, and if we have none of those things, at least data from us. And we're not feeling it creating value for us; we're not feeling it enhance our ability to create and exchange value with other people. It's harder for most of us to make a living now, not easier. And that's not because automation is doing things better; it's because we're facing very extractive business plans, extensions of that very same late-medieval squashing of peer-to-peer activity, now amplified in every device in our arsenal.
The reason I'm optimistic is that the metrics are all on the side of doing business well rather than on the scorched-earth, short-term, extractive model that most digital companies use. If you look at the data, family businesses do better than shareholder-owned businesses on every single metric except one: they grow more slowly during bubbles. And actually you kind of want to grow more slowly during bubbles, because if you grow big during a bubble, then you're part of what pops. Now, the reason family businesses do better in the long run than shareholder-owned businesses is that the person running a family business wants it to be around in another generation or two or three. The person running a shareholder-owned business wants to extract enough money from that business so their grandchild can be handed a wad of cash.
The person with a family business wants the business to be healthy enough that the grandchild can then run a successful, thriving business. And because your family name is on the business, you don't want to do really mean things to your employees or to the places where you operate, because then people are going to hate your family. It's your name. It's your face. So a very different dynamic gets set up, one that's ultimately more sustainable as a business. And the founders of startups and technology businesses have to come to understand that taking venture capital and going for the big acquisition, going for that one-in-10,000 chance of hitting a home run, is dumb compared to taking a very small amount of money and hitting the single or the double. In other words, making $10 or $20 million is not tragic; it's still enough. You're going to get by on that. You're going to be able to send your kids to great colleges. It's not a failure to have millions of dollars.
But if you have to aim for the billions, if you are forced to, your probability of success is so low. When I look at these founders running to Sequoia and Flatiron and Y Combinator because they want to go big and get the home run, they look to me like the people you see taking their welfare checks to the bodega and buying lottery tickets because they want the jackpot, the $43 million they have no real chance of winning. They would have a much higher probability of succeeding by taking that $5 or $10 and actually investing it: buy a book, learn how to do something, get a job. And it just breaks my heart when I see a kid with a great app, a great idea, turn to a venture capitalist who gives them a ton of money and a high valuation and then has them pivot. The pivot is not about doing something better; all a pivot means is that you're going to abandon your original business in order to come up with something that's sellable to the next round of investors, the next round of suckers, really.
And do you really want to throw away your business? Do you really want to dispose of your idea in order to turn it into the brand name on a Ponzi scheme? I hope not, because there are still so many great things we can do with these technologies. There's so much money to be made. There's so much revenue in connecting people to one another, in helping people create and exchange value. The mantra I would give any startup and any big business is: make other people rich. If you make your users rich, they will come back. If you make them poor, you've killed your marketplace.
It's harder for most people to make a living now than it was before the rise of online businesses like Facebook and Amazon. That's because the digital economy is hurting the real economy, says media theorist Douglas Rushkoff. Competition is increasingly fierce in just about every industry, and digital technologies have allowed companies to pursue monopolies like never before, because they chase the entire world's population as a customer base.
Businesses have always sought growth, but applying the growth mindset to digital technology yields some very disturbing results. Take Twitter, for instance: as a company, it makes $500 million each quarter, but market observers have questioned the company's value because it doesn't have a growth strategy. Compare that to Amazon or Facebook or Google, each of which spans multiple industries and has grown rapidly over the last decade.
Interestingly, for all our fascination with shareholder-owned businesses, family businesses perform better on just about every metric. The reason, says Rushkoff, is that family businesses are more concerned with the long-term future, not just the next quarter. Rushkoff explains more surprising facts about our digital economy in his book, Throwing Rocks at the Google Bus: How Growth Became the Enemy of Prosperity. It's truly a fascinating read.
The experience of life flashing before one's eyes has been reported for well over a century, but where's the science behind it?
At the age of 16, when Tony Kofi was an apprentice builder living in Nottingham, he fell from the third story of a building. Time seemed to slow down massively, and he saw a complex series of images flash before his eyes.
As he described it, "In my mind's eye I saw many, many things: children that I hadn't even had yet, friends that I had never seen but are now my friends. The thing that really stuck in my mind was playing an instrument." Then Tony landed on his head and lost consciousness.
When he came to at the hospital, he felt like a different person and didn't want to return to his previous life. Over the following weeks, the images kept flashing back into his mind. He felt that he was "being shown something" and that the images represented his future.
Later, Tony saw a picture of a saxophone and recognized it as the instrument he'd seen himself playing. He used his compensation money from the accident to buy one. Now, Tony Kofi is one of the UK's most successful jazz musicians, having won the BBC Jazz awards twice, in 2005 and 2008.
Though Tony's belief that he saw into his future is uncommon, it's by no means uncommon for people to report witnessing multiple scenes from their past during split-second emergency situations. After all, this is where the phrase "my life flashed before my eyes" comes from.
But what explains this phenomenon? Psychologists have proposed a number of explanations, but I'd argue the key to understanding Tony's experience lies in a different interpretation of time itself.
When life flashes before our eyes
The experience of life flashing before one's eyes has been reported for well over a century. In 1892, a Swiss geologist named Albert Heim fell from a precipice while mountain climbing. In his account of the fall, he wrote it was "as if on a distant stage, my whole past life [was] playing itself out in numerous scenes."
More recently, in July 2005, a young woman called Gill Hicks was sitting near one of the bombs that exploded on the London Underground. In the minutes after the explosion, she hovered on the brink of death where, as she describes it: "my life was flashing before my eyes, flickering through every scene, every happy and sad moment, everything I have ever done, said, experienced."
In some cases, people don't see a review of their whole lives, but a series of past experiences and events that have special significance to them.
Explaining life reviews
Perhaps surprisingly, given how common it is, the "life review experience" has been studied very little. A handful of theories have been put forward, but they're understandably tentative and rather vague.
For example, a group of Israeli researchers suggested in 2017 that our life events may exist as a continuum in our minds, and may come to the forefront in extreme conditions of psychological and physiological stress.
Another theory is that, when we're close to death, our memories suddenly "unload" themselves, like the contents of a skip being dumped. This could be related to "cortical disinhibition" – a breaking down of the normal regulatory processes of the brain – in highly stressful or dangerous situations, causing a "cascade" of mental impressions.
But the life review is usually reported as a serene and ordered experience, completely unlike the kind of chaotic cascade of experiences associated with cortical disinhibition. And none of these theories explain how it's possible for such a vast amount of information – in many cases, all the events of a person's life – to manifest themselves in a period of a few seconds, and often far less.
Thinking in 'spatial' time
An alternative explanation is to think of time in a "spatial" sense. Our commonsense view of time is as an arrow that moves from the past through the present towards the future, in which we only have direct access to the present. But modern physics has cast doubt on this simple linear view of time.
Indeed, since Einstein's theory of relativity, some physicists have adopted a "spatial" view of time. They argue we live in a static "block universe" in which time is spread out in a kind of panorama where the past, the present and the future co-exist simultaneously.
The modern physicist Carlo Rovelli – author of the best-selling The Order of Time – also holds the view that linear time doesn't exist as a universal fact. This idea reflects the view of the philosopher Immanuel Kant, who argued that time is not an objectively real phenomenon, but a construct of the human mind.
This could explain why some people are able to review the events of their whole lives in an instant. A good deal of previous research – including my own – has suggested that our normal perception of time is simply a product of our normal state of consciousness.
In many altered states of consciousness, time slows down so dramatically that seconds seem to stretch out into minutes. This is a common feature of emergency situations, as well as states of deep meditation, experiences on psychedelic drugs and when athletes are "in the zone."
The limits of understanding
But what about Tony Kofi's apparent visions of his future? Did he really glimpse scenes from his future life? Did he see himself playing the saxophone because somehow his future as a musician was already established?
There are obviously some mundane interpretations of Tony's experience. Perhaps, for instance, he became a saxophone player simply because he saw himself playing it in his vision. But I don't think it's impossible that Tony did glimpse future events.
If time really does exist in a spatial sense – and if it's true that time is a construct of the human mind – then perhaps in some way future events may already be present, just as past events are still present.
Admittedly, this is very difficult to make sense of. But why should everything make sense to us? As I have suggested in a recent book, there must be some aspects of reality that are beyond our comprehension. After all, we're just animals, with a limited awareness of reality. And perhaps more than any other phenomenon, this is especially true of time.
Might as well face it, you're addicted to love.
- Many writers have commented on the addictive qualities of love. Science agrees.
- The reward system of the brain reacts similarly to both love and drugs.
- Someday, it might be possible to treat "love addiction."
Since people started writing, they've written about love. The oldest love poem known dates back to the 21st century BCE. For most of that time, writers also apparently have been of two (or more) minds about it, announcing that love can be painful, impossible to quit, or even addictive — while also mentioning how nice it is.
The idea of love as an addiction is one that is both familiar and unsettling. Surely it can't be the case that our mutual love with our partner — a thing that can produce euphoria, consumes a great deal of our time, and which we fear losing — can be compared to a drug habit? But indeed, many scientists have turned their attention to the idea of "love addiction" and how your brain on drugs might resemble your brain in love.
Love and other drugs
In a 2017 article published in the journal Philosophy, Psychiatry, & Psychology, a team of neuroethicists considered the idea that love is addictive and held it up to scientific scrutiny.
They point out that the leading model of addiction rests on the notion of a drug causing the brain to release an unnatural level of reward chemicals, such as dopamine, effectively hijacking the brain's reward system. This phenomenon isn't strictly limited to drugs, though drugs trigger it more powerfully than most other things. Rats can get a similar rush from sugar as from cocaine, and they can suffer terrible withdrawal symptoms when the sugar crash kicks in.
On the structural level, there is a fair amount of overlap between the parts of the brain that handle love and pair-bonding and the parts that deal with addiction and reward processing. When inside an MRI machine and asked to think about the person they love romantically, the reward centers of people's brains light up like Broadway.
Love as an addiction
These facts lead the authors to consider two ideas, dubbed the "narrow" and "broad" views of love as an addiction.
The narrow view holds that addiction is the result of abnormal brain processes that simply don't exist in non-addicts. Under this paradigm, "food-seeking or love-seeking behaviors are not truly the result of addiction, no matter how addiction-like they may outwardly appear." It could be that abnormal processes cause the brain's reward system to misfire when exposed to love and to react to it excessively.
If this model is accurate, love addiction would be a rare thing (one study puts it at around five to ten percent of the population) but could be considered a disorder similar to others, caused by faulty wiring in the brain. As with other addictions, this malfunction of the reward system could lead to an inability to live a full, typical life, difficulty maintaining healthy relationships, and a number of other negative consequences.
The broad view looks at addiction differently, perhaps even radically.
It begins with the idea that addiction exists on a spectrum of motivations. All of our appetites, including those for food and water, exist on this spectrum and activate similar parts of the brain when satisfied. We can have appetites for anything that taps into our reward system, including food, gambling, sex, drugs, and love. For most people most of the time, our appetites are fairly temperate, if recurring. I might be slightly "addicted" to food — I do need some a few times per day — but that "addiction" doesn't have any negative effects on my health.
An appetite for cocaine, however, is rarely temperate and usually dangerous. Likewise, a person's appetite for love could reach addiction levels, and a person could be considered "hooked" on relationships (or on a particular person). This would put love addiction at the extreme end of the spectrum.
None of this is to say that the authors think that love is bad for you just because it can resemble an addiction. Love addiction is not the same as cocaine addiction at the neurological level: important differences, like how long it takes for the desire for another "hit" to occur, do exist. Rather, the authors see this as an opportunity to reconsider our approach to addiction in general and to think about how we can help the heartsick when they just can't seem to get over their last relationship.
Is "love addiction" a treatable disorder?
Hypothetically, a neurological basis for an addiction to love could point toward interventions that "correct" for it. If the narrow view of addiction is accurate, perhaps some people will be able to seek treatment for love addiction in the same way that others seek help to quit smoking. If the broad view is correct, treating love addiction would be trickier, since it may be difficult to identify the point on the spectrum where an appetite for love becomes a disorder.
Either way, since love is generally held in high regard by all cultures and doesn't quite seem to be in the same category as a bad cocaine habit in terms of social undesirability, the authors doubt we'll be treating anyone for "love addiction" anytime soon.
A school lesson leads to more precise measurements of the extinct megalodon shark, one of the largest fish ever.
- A new method estimates the ancient megalodon shark was as long as 65 feet.
- The megalodon was one of the largest fish that ever lived.
- The new model uses the width of shark teeth to estimate its overall size.
A Florida student figured out a way to more accurately measure the size of one of the largest fish that ever lived – the extinct megalodon shark – and found that it was even larger than previously estimated.
The megalodon (officially named Otodus megalodon, which means "big tooth") lived between 23 and 3.6 million years ago and was thought to average about 34 feet in length, reaching a maximum of 60 feet. Now a new study puts that maximum at up to 65 feet (20 meters).
Homework assignment leads to a discovery
The study, published in Palaeontologia Electronica, used new equations extrapolated from the width of megalodon's teeth to make the improved estimates. The paper's lead author, Victor Perez, developed the revised methodology while he was a doctoral student at the Florida Museum of Natural History. He got the idea while teaching students, noticing a range of discrepancies in the results they were getting.
Students were supposed to calculate the size of megalodon based on the ancient fish's similarities to the modern great white shark. They utilized the commonly accepted method of linking the height of a shark's tooth to its total body length. As the press release from the Florida Museum of Natural History expounds, this method involves locating the anatomical position of a tooth in the shark's jaw, measuring the tooth "from the tip of the crown to the line where root and crown meet," and using that number in an appropriate equation.
But while carrying out calculations in this way, some of Perez's students found that the shark would have been just 40 feet long, while others arrived at 148 feet. Teeth located toward the back of the mouth were yielding the largest estimates.
"I was going around, checking, like, did you use the wrong equation? Did you forget to convert your units?" said Perez, currently the assistant curator of paleontology at the Calvert Marine Museum in Maryland. "But it very quickly became clear that it was not the students that had made the error. It was simply that the equations were not as accurate as we had predicted."
Found in North Carolina, these 46 fossils are the most complete set of megalodon teeth ever excavated. Credit: Jeff Gage/Florida Museum
The new approach
Perez's math exercise demonstrated that the equations in use since 2002 were generating different size estimates for the same shark based on which tooth was being measured. Because megalodon teeth are most often found as standalone fossils, Perez focused on a nearly complete set of teeth donated by a fossil collector to design a new approach.
Perez also had help from Teddy Badaut, an avocational paleontologist in France, who suggested measuring tooth width instead of height, reasoning that a tooth's width would be proportional to the length of the shark's body. Another collaborator on the revised method was Ronny Maik Leder, then a postdoctoral researcher at the Florida Museum, who helped develop the new set of equations.
The research team analyzed the widths of fossil teeth that came from 11 individual sharks of five species, which included megalodon and modern great white sharks, and created a model that connects how wide a tooth was to the size of the jaw for each species.
"I was quite surprised that indeed no one had thought of this before," shared Leder, who is now director of the Natural History Museum in Leipzig, Germany. "The simple beauty of this method must have been too obvious to be seen. Our model was much more stable than previous approaches. This collaboration was a wonderful example of why working with amateur and hobby paleontologists is so important."
Why use teeth?
In general, almost nothing of the super-shark has survived to this day other than a few vertebrae and a large number of big teeth. The megalodon's skeleton was made of lightweight cartilage that decomposed after death. But teeth, with enamel that preserves very well, are "probably the most structurally stable thing in living organisms," Perez said. Considering that a megalodon shed thousands of teeth over its lifetime, they are the best resource we have for learning about these long-gone giants.
Researchers suggest megalodon's large jaws were very thick, built for grabbing prey and breaking its bones, and could exert a bite force of 108,500 to 182,200 newtons.
Megalodon tooth compared to two great white shark teeth. Credit: Brocken Inaglory / Wikimedia.
Limitations of the new model
While the new model is better than previous methods, it's still far from perfect at estimating the sizes of animals that lived so long ago and left behind few, if any, complete remains. Because individual sharks come in a variety of sizes, Perez warned that even the new estimates have an error range of about 10 feet for the largest animals.
Other ambiguities may affect the results, such as the width of the megalodon's jaw and the size of the gaps between its teeth, neither of which are accurately known. "There's still more that could be done, but that would probably require finding a complete skeleton at this point," Perez pointed out.
How did the megalodon go extinct?
Environmental changes that led to fluctuations in sea levels and disturbed ecosystems in the oceans likely led to the demise of these enormous ancient sharks. They were just too big to be sustained by diminishing food resources, says the ReefQuest Centre for Shark Research.
A 2018 study suggested that a supernova 2.6 million years ago hit Earth's atmosphere with so much cosmic energy that it caused climate change. Cosmic rays containing particles called muons might have triggered a mass extinction of giant ocean animals (the "megafauna"), including the megalodon, by causing mutations and cancer.
Scientists, led by Adrian Melott, professor emeritus of physics and astronomy at the University of Kansas, estimated that "the cancer rate would go up about 50 percent for something the size of a human — and the bigger you are, the worse it is. For an elephant or a whale, the radiation dose goes way up," as he explained in a press release.
A brief passage from a recent UN report describes what could be the first-known case of an autonomous weapon, powered by artificial intelligence, killing on the battlefield.
- Autonomous weapons have been used in war for decades, but artificial intelligence is ushering in a new category of autonomous weapons.
- These weapons are not only capable of moving autonomously but also identifying and attacking targets on their own without oversight from a human.
- There are currently no clear international restrictions on the use of new autonomous weapons, but some nations are calling for preemptive bans.
Nothing transforms warfare more violently than new weapons technology. In prehistoric times, it was the club, the spear, the bow and arrow, the sword. The 16th century brought rifles. The World Wars of the 20th century introduced machine guns, planes, and atomic bombs.
Now we might be seeing the first stages of the next battlefield revolution: autonomous weapons powered by artificial intelligence.
In March, the United Nations Security Council published an extensive report on the Second Libyan War that describes what could be the first-known case of an AI-powered autonomous weapon killing people on the battlefield.
The incident took place in March 2020, when soldiers with the Government of National Accord (GNA) were battling troops supporting the Libyan National Army of Khalifa Haftar (called Haftar Affiliated Forces, or HAF, in the report). One passage describes how GNA troops may have used an autonomous drone to kill retreating HAF soldiers:
"Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2... and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability."
Still, because the GNA forces were also firing surface-to-air missiles at the HAF troops, it's currently difficult to know how many, if any, troops were killed by autonomous drones. It's also unclear whether this incident represents anything new. After all, autonomous weapons have been used in war for decades.
Lethal autonomous weapons
Lethal autonomous weapon systems (LAWS) are weapon systems that can search for and fire upon targets on their own. It's a broad category whose definition is debatable. For example, you could argue that land mines and naval mines, used in battle for centuries, are LAWS, albeit relatively passive and "dumb." Since the 1970s, navies have used active protection systems that identify, track, and shoot down enemy projectiles fired toward ships, if the human controller chooses to pull the trigger.
Then there are drones, an umbrella term that commonly refers to unmanned weapons systems. Introduced in 1991 with unmanned (yet human-controlled) aerial vehicles, drones now represent a broad suite of weapons systems, including unmanned combat aerial vehicles (UCAVs), loitering munitions (commonly called "kamikaze drones"), and unmanned ground vehicles (UGVs), to name a few.
Some unmanned weapons are largely autonomous. The key question to understanding the potential significance of the March 2020 incident is: what exactly was the weapon's level of autonomy? In other words, who made the ultimate decision to kill: human or robot?
The Kargu-2 system
One of the weapons described in the UN report was the Kargu-2 system, which is a type of loitering munitions weapon. This type of unmanned aerial vehicle loiters above potential targets (usually anti-air weapons) and, when it detects radar signals from enemy systems, swoops down and explodes in a kamikaze-style attack.
Kargu-2 is produced by the Turkish defense contractor STM, which says the system can be operated both manually and autonomously using "real-time image processing capabilities and machine learning algorithms" to identify and attack targets on the battlefield.
In other words, STM says its robot can detect targets and autonomously attack them without a human "pulling the trigger." If that's what happened in Libya in March 2020, it'd be the first-known attack of its kind. But the UN report isn't conclusive.
It states that HAF troops suffered "continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems," which were "programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability."
What does that last bit mean? Basically, that a human operator might have programmed the drone to conduct the attack and then sent it a few miles away, where it didn't have connectivity to the operator. Without connectivity to the human operator, the robot would have had the final call on whether to attack.
To be sure, it's unclear if anyone died from such an autonomous attack in Libya. In any case, LAWS technology has evolved to the point where such attacks are possible. What's more, STM is developing swarms of drones that could work together to execute autonomous attacks.
Noah Smith, an economics writer, described what these attacks might look like on his Substack:
"Combined with A.I., tiny cheap little battery-powered drones could be a huge game-changer. Imagine releasing a networked swarm of autonomous quadcopters into an urban area held by enemy infantry, each armed with little rocket-propelled fragmentation grenades and equipped with computer vision technology that allowed it to recognize friend from foe."
But could drones accurately discern friend from foe? After all, computer-vision systems like facial recognition don't identify objects and people with perfect accuracy; one study found that very slightly tweaking an image can lead an AI to miscategorize it. Can LAWS be trusted to differentiate between a soldier with a rifle slung over his back and, say, a kid wearing a backpack?
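One well-studied way to produce such a misleading tweak is the fast gradient sign method, which nudges every pixel slightly in whatever direction most increases the classifier's error. The sketch below (in PyTorch; the study cited above may have used a different technique) shows how little machinery this takes:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Fast Gradient Sign Method: shift each pixel a tiny step in the
    direction that most increases the model's loss. The change is often
    imperceptible to a human but can flip the predicted class."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # how wrong is the model now?
    loss.backward()                              # gradient of loss w.r.t. pixels
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()    # keep pixel values valid
```

A classifier that can be fooled this easily in the lab is a sobering candidate for deciding, without human oversight, who on a battlefield counts as a combatant.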
Opposition to LAWS
Unsurprisingly, many humanitarian groups are concerned about introducing a new generation of autonomous weapons to the battlefield. One such group is the Campaign to Stop Killer Robots, whose 2018 survey of roughly 19,000 people across 26 countries found that 61 percent of respondents said they oppose the use of LAWS.
In 2018, the United Nations Convention on Certain Conventional Weapons issued a rather vague set of guidelines aiming to restrict the use of LAWS. One guideline states that "human responsibility must be retained when it comes to decisions on the use of weapons systems." Meanwhile, at least a couple dozen nations have called for preemptive bans on LAWS.
The U.S. and Russia oppose such bans, while China's position is a bit ambiguous. It's impossible to predict how the international community will regulate AI-powered autonomous weapons in the future, but among the world's superpowers, one assumption seems safe: If these weapons provide a clear tactical advantage, they will be used on the battlefield.