A brief passage from a recent UN report describes what could be the first-known case of an autonomous weapon, powered by artificial intelligence, killing on the battlefield.
- Autonomous weapons have been used in war for decades, but artificial intelligence is ushering in a new category of autonomous weapons.
- These weapons are capable not only of moving autonomously but also of identifying and attacking targets on their own, without human oversight.
- There are currently no clear international restrictions on the use of these new autonomous weapons, but some nations are calling for preemptive bans.
Nothing transforms warfare more violently than new weapons technology. In prehistoric times, it was the club, the spear, the bow and arrow, the sword. The 16th century brought rifles. The World Wars of the 20th century introduced machine guns, planes, and atomic bombs.
Now we might be seeing the first stages of the next battlefield revolution: autonomous weapons powered by artificial intelligence.
In March, the United Nations Security Council published an extensive report on the Second Libyan War that describes what could be the first-known case of an AI-powered autonomous weapon killing people on the battlefield.
The incident took place in March 2020, when soldiers with the Government of National Accord (GNA) were battling troops supporting the Libyan National Army of Khalifa Haftar (called Haftar Affiliated Forces, or HAF, in the report). One passage describes how GNA troops may have used an autonomous drone to kill retreating HAF soldiers:
"Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2... and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability."
Still, because the GNA forces were also firing surface-to-air missiles at the HAF troops, it's currently difficult to know how many, if any, troops were killed by autonomous drones. It's also unclear whether this incident represents anything new. After all, autonomous weapons have been used in war for decades.
Lethal autonomous weapons
Lethal autonomous weapon systems (LAWS) are weapon systems that can search for and fire upon targets on their own. It's a broad category whose definition is debatable. For example, you could argue that land mines and naval mines, used in battle for centuries, are LAWS, albeit relatively passive and "dumb." Since the 1970s, navies have used active protection systems that identify, track, and shoot down enemy projectiles fired toward ships, if the human controller chooses to pull the trigger.
Then there are drones, an umbrella term that commonly refers to unmanned weapons systems. Introduced in 1991 with unmanned (yet human-controlled) aerial vehicles, drones now represent a broad suite of weapons systems, including unmanned combat aerial vehicles (UCAVs), loitering munitions (commonly called "kamikaze drones"), and unmanned ground vehicles (UGVs), to name a few.
Some unmanned weapons are largely autonomous. The key question to understanding the potential significance of the March 2020 incident is: what exactly was the weapon's level of autonomy? In other words, who made the ultimate decision to kill: human or robot?
The Kargu-2 system
One of the weapons described in the UN report was the Kargu-2, a type of loitering munition. This kind of unmanned aerial vehicle loiters above potential targets (usually anti-air weapons) and, when it detects radar signals from enemy systems, swoops down and explodes in a kamikaze-style attack.
Kargu-2 is produced by the Turkish defense contractor STM, which says the system can be operated both manually and autonomously using "real-time image processing capabilities and machine learning algorithms" to identify and attack targets on the battlefield.
In other words, STM says its robot can detect targets and autonomously attack them without a human "pulling the trigger." If that's what happened in Libya in March 2020, it'd be the first-known attack of its kind. But the UN report isn't conclusive.
It states that HAF troops suffered "continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems," which were "programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability."
What does that last bit mean? Basically, that a human operator might have programmed the drone to conduct the attack and then sent it a few miles away, where it didn't have connectivity to the operator. Without connectivity to the human operator, the robot would have had the final call on whether to attack.
To be sure, it's unclear if anyone died from such an autonomous attack in Libya. In any case, LAWS technology has evolved to the point where such attacks are possible. What's more, STM is developing swarms of drones that could work together to execute autonomous attacks.
Noah Smith, an economics writer, described what these attacks might look like on his Substack:
"Combined with A.I., tiny cheap little battery-powered drones could be a huge game-changer. Imagine releasing a networked swarm of autonomous quadcopters into an urban area held by enemy infantry, each armed with little rocket-propelled fragmentation grenades and equipped with computer vision technology that allowed it to recognize friend from foe."
But could drones accurately discern friend from foe? After all, computer-vision systems like facial recognition don't identify objects and people with perfect accuracy; one study found that very slightly tweaking an image can lead an AI to miscategorize it. Can LAWS be trusted to differentiate between a soldier with a rifle slung over his back and, say, a kid wearing a backpack?
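The fragility described above can be sketched in miniature. The toy linear classifier below is entirely hypothetical (its weights, features, and labels are invented for illustration, not drawn from any real targeting system): nudging each input by a tiny amount in the direction that most raises the score flips the decision, which is the same mechanism behind the slight image tweaks that fool vision models.

```python
# Toy illustration of an adversarial perturbation (all numbers invented):
# a small, targeted nudge flips a linear classifier's decision.

def classify(features, weights, bias):
    """Return 'soldier' if the weighted score crosses zero, else 'civilian'."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "soldier" if score > 0 else "civilian"

weights = [0.9, -0.4, 0.2]
bias = -0.05
x = [0.1, 0.3, 0.2]          # an input the model labels 'civilian'

# Adversarial step: move each feature slightly in the sign of its weight,
# the direction that most increases the score (an FGSM-style perturbation).
eps = 0.12
x_adv = [xi + eps * (1 if w > 0 else -1) for xi, w in zip(x, weights)]

print(classify(x, weights, bias))      # civilian
print(classify(x_adv, weights, bias))  # soldier
```

A perturbation of 0.12 per feature is visually negligible, yet it is enough to cross the decision boundary; real image classifiers exhibit the same behavior in far higher dimensions.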
Opposition to LAWS
Unsurprisingly, many humanitarian groups are concerned about introducing a new generation of autonomous weapons to the battlefield. One such group is the Campaign to Stop Killer Robots, whose 2018 survey of roughly 19,000 people across 26 countries found that 61 percent of respondents said they oppose the use of LAWS.
In 2018, the United Nations Convention on Certain Conventional Weapons issued a rather vague set of guidelines aiming to restrict the use of LAWS. One guideline states that "human responsibility must be retained when it comes to decisions on the use of weapons systems." Meanwhile, at least a couple dozen nations have called for preemptive bans on LAWS.
The U.S. and Russia oppose such bans, while China's position is a bit ambiguous. It's impossible to predict how the international community will regulate AI-powered autonomous weapons in the future, but among the world's superpowers, one assumption seems safe: If these weapons provide a clear tactical advantage, they will be used on the battlefield.
Dreams are weird. According to a new theory, that's what makes them useful.
- A new paper suggests that dreaming helps us generalize our experiences so that we can adapt to new circumstances.
- Therefore, the strangeness of dreams is what makes them useful.
- This idea is supported by some data, though new experiments could help confirm it.
Lots of animals dream, but nobody is quite sure why. Researchers are divided over whether dreaming is a mere side effect of other brain functions or whether it serves a purpose of its own.
A number of theories attempting to explain dreaming exist. These include the ideas that dreams are needed to regulate our emotional health (the grandchild of Freudian theory) and that they help us psychologically practice for encountering real-world phenomena. The leading contemporary theory is that dreams are involved with or even caused by memory processing and storage.
A new paper published in the journal Patterns proposes a new hypothesis: dreaming is the brain's attempt to generalize our experiences, much like how randomness must be used to teach computers how to recognize real-world data. The paper also proposes ways to test it.
Perchance to dream?
The author, Erik Hoel, calls his idea the "overfitted brain hypothesis." It is based in part on the learning process of artificial neural networks, which are computer algorithms that seek to find patterns in large data sets. These systems are often given training data that is similar, but not identical, to the data that they will analyze later. Practice data is often purposefully contaminated with extra noise and chaos. This is done in order to prevent "overfitting" — in other words, to prevent the neural network from becoming too "narrow minded" and hence unable to identify the bigger picture.
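The noise-injection trick described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the general technique (the `augment_with_noise` helper and its parameters are invented for this sketch, not taken from the paper):

```python
import random

def augment_with_noise(samples, sigma=0.1, copies=3, seed=0):
    """Return the original samples plus noisy copies of each one,
    a standard trick to keep a model from overfitting to the exact
    training examples it has seen."""
    rng = random.Random(seed)
    augmented = list(samples)
    for _ in range(copies):
        for x in samples:
            # Perturb every feature with small Gaussian noise.
            augmented.append([xi + rng.gauss(0, sigma) for xi in x])
    return augmented

data = [[1.0, 2.0], [3.0, 4.0]]
aug = augment_with_noise(data)
print(len(aug))  # 8: the 2 originals plus 3 noisy copies of each
```

In Hoel's analogy, the "noisy copies" are dreams: deliberately corrupted variations on waking experience that keep the brain's model of the world from becoming too narrow.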
Dr. Hoel's new hypothesis argues that your brain does something similar through dreaming. The "hallucinogenic, category-breaking, and fabulist quality of dreams" allow our brains to introduce "warped or 'corrupted'" sensory input for consideration.
In this way, the strangeness of our dreams is a feature rather than a bug.
By presenting us with occasionally bizarre takes on the world, our brains keep us from getting too fixed on the specifics of a task and make us better able to generalize. Dr. Hoel summarizes this rather poetically by saying, "Dreams are there to keep you from becoming too fitted to the model of the world."
Can Hoel's dream hypothesis be tested?
Dr. Hoel suggests that evidence for this already exists. It has been shown that repeatedly performing a novel task while awake is a good way to ensure that you'll dream about it that night. He proposes that actions like this trigger the brain's defense against overfitting, and the weird dreams are the result.
Dr. Hoel's idea does not necessarily exclude other hypotheses about sleep or dreaming that currently have a fair amount of empirical support. Importantly, he also proposes a few ways to test the predictions made by his hypothesis.
If it is correct, then the effects of sleep deprivation on the ability to memorize would be different from its effects on the ability to generalize. Dr. Hoel suggests that a well-designed test examining whether sleep or dream deprivation impacts the ability of mice to generalize fears could provide evidence for his hypothesis. Tracking synaptic changes in response to dreams could also be an avenue worth exploring.
Additionally, Dr. Hoel proposes that dream-like stimuli, such as virtual reality or video, could provide similar benefits as dreaming if the over-fitting theory is correct. He explains that this could also serve as the foundation of an experiment to test the hypothesis as well as a possible application of it:
"For example, it may be that a pilot who has been flying for a long period of time is beginning to overfit to their task, and a quick but intense exposure to an entirely different sort of visual stimulus (like a dream-like nature scene in VR) could stave off some of the effects of sleep deprivation. The impact of substitutions can be examined both behaviorally but also at the neurophysiological level of REM rebound."
If the hypothesis catches on, we may expect to see future studies that seek to confirm or refute the predictions it makes. Until then, we can only speculate on the idea's possible merits and failings within the framework of what we already know to be true.
Though, I suppose we can also sleep on it.
A new method could make holograms for virtual reality, 3D printing, and more. It can even run on a smartphone.
Despite years of hype, virtual reality headsets have yet to topple TV or computer screens as the go-to devices for video viewing.
One reason: VR can make users feel sick. Nausea and eye strain can result because VR creates an illusion of 3D viewing although the user is in fact staring at a fixed-distance 2D display. The solution for better 3D visualization could lie in a 60-year-old technology remade for the digital world: holograms.
Holograms deliver an exceptional representation of the 3D world around us. Plus, they're beautiful. (Go ahead — check out the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer's position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.
Researchers have long sought to make computer-generated holograms, but the process has traditionally required a supercomputer to churn through physics simulations, which is time-consuming and can yield less-than-photorealistic results. Now, MIT researchers have developed a new way to produce holograms almost instantly — and the deep learning-based method is so efficient that it can run on a laptop in the blink of an eye, the researchers say.
"People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations," says Liang Shi, the study's lead author and a PhD student in MIT's Department of Electrical Engineering and Computer Science (EECS). "It's often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades."
Shi believes the new approach, which the team calls "tensor holography," will finally bring that elusive 10-year goal within reach. The advance could fuel a spillover of holography into fields like VR and 3D printing.
Shi worked on the study, published today in Nature, with his advisor and co-author Wojciech Matusik. Other co-authors include Beichen Li of EECS and the Computer Science and Artificial Intelligence Laboratory at MIT, as well as former MIT researchers Changil Kim (now at Facebook) and Petr Kellnhofer (now at Stanford University).
The quest for better 3D
A typical lens-based photograph encodes the brightness of each light wave — a photo can faithfully reproduce a scene's colors, but it ultimately yields a flat image.
In contrast, a hologram encodes both the brightness and phase of each light wave. That combination delivers a truer depiction of a scene's parallax and depth. So, while a photograph of Monet's "Water Lilies" can highlight the painting's color palette, a hologram can bring the work to life, rendering the unique 3D texture of each brush stroke. But despite their realism, holograms are a challenge to make and share.
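The amplitude-versus-phase distinction can be made concrete by modeling a light wave as a complex number, as is standard in optics. This toy sketch (the values are illustrative, not from the MIT work) shows why a photo cannot tell two waves apart that a hologram can:

```python
import cmath

def wave(amplitude, phase):
    """Model a monochromatic light wave as a complex number."""
    return amplitude * cmath.exp(1j * phase)

a = wave(1.0, 0.0)
b = wave(1.0, cmath.pi / 2)   # same brightness, different phase

# A photograph records only intensity (amplitude squared), so these
# two waves look identical to it:
print(abs(a) ** 2, abs(b) ** 2)        # both ≈ 1.0

# A hologram also retains the phase, which is what carries the
# parallax and depth information:
print(cmath.phase(a), cmath.phase(b))  # 0.0 vs ≈ 1.5708
```

Computer-generated holography amounts to computing the phase pattern that, when imposed on a display, reconstructs the full complex field of the scene.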
First developed in the mid-1900s, early holograms were recorded optically. That required splitting a laser beam, with half the beam used to illuminate the subject and the other half used as a reference for the light waves' phase. This reference generates a hologram's unique sense of depth. The resulting images were static, so they couldn't capture motion. And they were hard copy only, making them difficult to reproduce and share.
Computer-generated holography sidesteps these challenges by simulating the optical setup. But the process can be a computational slog. "Because each point in the scene has a different depth, you can't apply the same operations for all of them," says Shi. "That increases the complexity significantly." Directing a clustered supercomputer to run these physics-based simulations could take seconds or minutes for a single holographic image. Plus, existing algorithms don't model occlusion with photorealistic precision. So Shi's team took a different approach: letting the computer teach physics to itself.
They used deep learning to accelerate computer-generated holography, allowing for real-time hologram generation. The team designed a convolutional neural network — a processing technique that uses a chain of trainable tensors to roughly mimic how humans process visual information. Training a neural network typically requires a large, high-quality dataset, which didn't previously exist for 3D holograms.
The team built a custom database of 4,000 pairs of computer-generated images. Each pair matched a picture — including color and depth information for each pixel — with its corresponding hologram. To create the holograms in the new database, the researchers used scenes with complex and variable shapes and colors, with the depth of pixels distributed evenly from the background to the foreground, and with a new set of physics-based calculations to handle occlusion. That approach resulted in photorealistic training data. Next, the algorithm got to work.
By learning from each image pair, the tensor network tweaked the parameters of its own calculations, successively enhancing its ability to create holograms. The fully optimized network operated orders of magnitude faster than physics-based calculations. That efficiency surprised the team themselves.
"We are amazed at how well it performs," says Matusik. In mere milliseconds, tensor holography can craft holograms from images with depth information — which is provided by typical computer-generated images and can be calculated from a multicamera setup or LiDAR sensor (both are standard on some new smartphones). This advance paves the way for real-time 3D holography. What's more, the compact tensor network requires less than 1 MB of memory. "It's negligible, considering the tens and hundreds of gigabytes available on the latest cell phone," he says.
The research "shows that true 3D holographic displays are practical with only moderate computational requirements," says Joel Kollin, a principal optical architect at Microsoft who was not involved with the research. He adds that "this paper shows marked improvement in image quality over previous work," which will "add realism and comfort for the viewer." Kollin also hints at the possibility that holographic displays like this could even be customized to a viewer's ophthalmic prescription. "Holographic displays can correct for aberrations in the eye. This makes it possible for a display image sharper than what the user could see with contacts or glasses, which only correct for low order aberrations like focus and astigmatism."
"A considerable leap"
Real-time 3D holography would enhance a slew of systems, from VR to 3D printing. The team says the new system could help immerse VR viewers in more realistic scenery, while eliminating eye strain and other side effects of long-term VR use. The technology could be easily deployed on displays that modulate the phase of light waves. Currently, most affordable consumer-grade displays modulate only brightness, though the cost of phase-modulating displays would fall if widely adopted.
Three-dimensional holography could also boost the development of volumetric 3D printing, the researchers say. This technology could prove faster and more precise than traditional layer-by-layer 3D printing, since volumetric 3D printing allows for the simultaneous projection of the entire 3D pattern. Other applications include microscopy, visualization of medical data, and the design of surfaces with unique optical properties.
"It's a considerable leap that could completely change people's attitudes toward holography," says Matusik. "We feel like neural networks were born for this task."
The work was supported, in part, by Sony.
Companies can identify you from your music preferences, as well as influence and profit from your behavior.
- New research discovered that you can be identified from just three song choices.
- This type of information can be exploited by streaming services through targeted advertising.
- The researchers are calling for musical preference to be considered in regulations regarding online privacy.
While the focus on music piracy dominated the media for years, an equally important (and far less discussed) phenomenon occurred during the transition from broadcast radio to streaming. People were no longer beholden to the gatekeepers known as DJs. Today, listeners have the entire history of music at their fingertips. Each person is now their own DJ.
If it's free, you are the product
Though this might appear empowering, every advancement comes at a cost. Because listeners changed how they consumed music (namely, from radio broadcasts to personalized online streams), companies had to change their monetization strategy. Now, you are the product.
When you curate a playlist, you are inadvertently sending tons of data to different companies, with Spotify, YouTube, and Apple Music leading the way. As it turns out, according to a new study from Israeli researchers — Ariel University's Dr. Ron Hirschprung and Tel Aviv University's Dr. Ori Leshman — your musical tastes reveal more about your personality than you likely ever imagined.
Musical selection is a quasi-identifier
There are different ways in which you can be identified. Identifiers, such as your social security number, are highly specific and unique to you. But then there are quasi-identifiers — things like age, gender, and occupation — that can also give away your identity. The authors claim that musical selection is a quasi-identifier, and they argue that, as with other forms of sensitive data, our playlists should be considered when constructing privacy laws.
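The power of quasi-identifiers comes from combination: no single attribute is unique, but a handful taken together often is. The toy records below are invented for illustration (they are not the study's data), yet they show how quickly a few coarse attributes pin a person down:

```python
from collections import Counter

# Hypothetical records: each field alone is shared with others,
# but the combination singles people out.
people = [
    {"name": "Ana",  "age": 21, "gender": "F", "major": "CS"},
    {"name": "Ben",  "age": 21, "gender": "M", "major": "CS"},
    {"name": "Caro", "age": 22, "gender": "F", "major": "CS"},
    {"name": "Dana", "age": 21, "gender": "F", "major": "Bio"},
]

# Count how many people share each (age, gender, major) combination.
combos = Counter((p["age"], p["gender"], p["major"]) for p in people)
unique = sum(1 for count in combos.values() if count == 1)
print(unique, "of", len(people), "are uniquely identified")  # 4 of 4
```

The study's claim is that three favorite songs behave like the tuple above: individually common, jointly distinctive enough to act as a fingerprint.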
In their paper, they write, "[T]he combination of Big-Data, together with the availability of computational power — which is notoriously known for its potential of privacy violation — introduces a privacy threat from an unexpected angle: listening to music."
To prove their point, the researchers divided undergraduate students into four groups of roughly 35 volunteers each. Every member submitted three songs from a playlist of their favorite tracks. The researchers then picked five members at random from each group, and the remaining volunteers were asked to match those five members with their playlists.
Even to the surprise of the researchers, the participants were right between 80 percent and 100 percent of the time. Incredibly, these students did not know one another well and were not aware in advance of anyone's musical preferences.
There are many outward signs that mark us in the eyes of others: what we wear, what we eat, how we style our hair, our mannerisms and posture, and even where we stand at parties. Other people pick up on these subtle clues, which in turn allows them to predict our personalities. In this study, the volunteers were able to identify the musical preferences of strangers simply by observing their outward appearances.
Of course, companies notice similar things and are able to exploit what they learn about us. In a press release, the authors stated:
"Music can become a form of characterization, and even an identifier. It provides commercial companies like Google and Spotify with additional and more in-depth information about us as users of these platforms. In the digital world we live in today, these findings have far-reaching implications on privacy violations, especially since information about people can be inferred from a completely unexpected source, which is therefore lacking in protection against such violations."

Musical preference isn't the only way in which you can be identified online. For instance, your browsing history can give away your identity. Listening to your favorite tunes while searching Google for a new recipe isn't as innocuous as you might think.
Stay in touch with Derek on Twitter and Facebook. His most recent book is "Hero's Dose: The Case For Psychedelics in Ritual and Therapy."
New machine-learning algorithms from Columbia University detect cognitive impairment in older drivers.
An older person's cognitive health is not always obvious. Cognitive impairment and dementia manifest gradually over time, and a person may be unaware of their advance. During this subtle transition, such a person may continue living as they always have, going about their business at home and behind the wheel. But this could lead to a dangerous car accident.
So, researchers from Columbia University have announced the development of AI algorithms that can detect mild cognitive impairment and dementia in older people based on the way they drive. The authors report in the journal Geriatrics that their algorithm is 88 percent accurate.
"Driving is a complex task involving dynamic cognitive processes and requiring essential cognitive functions and perceptual motor skills," says senior author Guohua Li, professor of epidemiology. "Our study indicates that naturalistic driving behaviors can be used as comprehensive and reliable markers for mild cognitive impairment and dementia."
Random forest model
The algorithms the researchers developed were based on a common AI statistical method involving "decision trees" that form a "random forest model." The most successful algorithm, according to lead author Sharon Di, associate professor of civil engineering, was based on "variables derived from the naturalistic driving data and basic demographic characteristics, such as age, sex, race/ethnicity and education level."
Decision trees are often used in memes in which answering "yes" or "no" regarding some attribute leads you down a path to another question, which in turn ultimately leads to a final conclusion.
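The meme analogy can be taken literally: a decision tree is a chain of yes/no questions, and a random forest simply lets many such trees vote. The sketch below is a hypothetical miniature of the idea, using invented thresholds and made-up driver attributes, not the study's actual model:

```python
import statistics

# Each "tree" asks yes/no questions about a driver and outputs
# 1 (flag for possible impairment) or 0 (no flag). Thresholds invented.
def tree_a(driver):
    if driver["age"] > 75:
        return 1 if driver["hard_brakes"] > 5 else 0
    return 0

def tree_b(driver):
    return 1 if driver["pct_trips_near_home"] > 90 else 0

def tree_c(driver):
    return 1 if driver["minutes_per_trip"] < 10 else 0

def forest_predict(driver):
    """Majority vote across the trees: the 'random forest' idea."""
    votes = [tree_a(driver), tree_b(driver), tree_c(driver)]
    return statistics.mode(votes)

driver = {"age": 78, "hard_brakes": 7,
          "pct_trips_near_home": 95, "minutes_per_trip": 25}
print(forest_predict(driver))  # 1: two of the three trees flag this driver
```

In a real random forest, each tree is trained on a random subset of the data and features, which is what makes the ensemble more robust than any single decision tree.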
Data used in the study
The algorithm was developed using data sourced by the Longitudinal Research on Aging Drivers (LongROAD) study sponsored by the AAA Foundation for Traffic Safety. It came from in-vehicle recording devices that captured the driving behaviors of 2,977 participants from August 2015 through March 2019. At the time the project began, the motorists' ages ranged from 65 to 79 years. From the raw data, the authors of the new study derived 29 behavioral variables, which they used to develop cognitive profiles of the drivers.
The researchers then developed a series of machine-learning models to predict cognitive issues, with differing success rates. While models based on driving variables alone were just 66 percent accurate, and demographic models less so at 29 percent, using both models together produced an accuracy rate of 88 percent.
The researchers also explored the validity of individual factors as predictors of cognitive issues. In order of most reliable to least reliable, they were: (1) age; (2) percentage of trips traveled within 15 miles of home; (3) race/ethnicity; (4) minutes per round trip; and (5) number of hard braking events.
Li is hopeful that his team's work can help keep roadways and older drivers safe. "If validated," he says, "the algorithms developed in this study could provide a novel, unobtrusive screening tool for early detection and management of mild cognitive impairment and dementia in older drivers."