Could A.I. detect mass shooters before they strike?
President Trump has called for Silicon Valley to develop digital precogs, but such systems raise efficacy concerns.
- President Donald Trump wants social media companies to develop A.I. that can flag potential mass shooters.
- Experts agree that artificial intelligence is not advanced enough, nor are current moderating systems up to the task.
- A majority of Americans support stricter gun laws, but such policies have yet to make headway.
On August 3, a man in El Paso, Texas, shot and killed 22 people and injured 24 others. Hours later, another man in Dayton, Ohio, shot and killed nine people, including his own sister. Even in a country left numb by countless mass shootings, the news was distressing and painful.
President Donald Trump soon addressed the nation to outline how his administration planned to tackle this uniquely American problem. Listeners hoping the tragedies might finally spur motivation for stricter gun control laws, such as universal background checks or restrictions on high-capacity magazines, were left disappointed.
Trump's plan was a ragbag of typical Republican talking points: red flag laws, mental health concerns, and regulation on violent video games. Tucked among them was an idea straight out of a Philip K. Dick novel.
"We must recognize that the internet has provided a dangerous avenue to radicalize disturbed minds and perform demented acts," Trump said. "First, we must do a better job of identifying and acting on early warning signs. I am directing the Department of Justice to work in partnership with local, state and federal agencies as well as social media companies to develop tools that can detect mass shooters before they strike."
Basically, Trump wants digital precogs. But has artificial intelligence reached such grand, and potentially terrifying, heights?
A digitized state of mind
It's worth noting that A.I. has made impressive strides at reading and quantifying the human mind. Social media is a vast repository of data on how people feel and think. If we can suss out the internal from the performative, we could improve mental health care in the U.S. and abroad.
For example, a study from 2017 found that A.I. could read the predictive markers for depression in Instagram photos. Researchers tasked machine learning tools with analyzing data from 166 individuals, some of whom had been previously diagnosed with depression. The algorithms looked at filter choice, facial expressions, metadata tags, etc., in more than 43,950 photos.
The results? The A.I. outperformed human practitioners at diagnosing depression. These results held even when analyzing images from before the patients' diagnoses. (Of course, Instagram is also the social media platform most likely to make you depressed and anxious, but that's another study.)
Talking with Big Think, Eric Topol, a professor in the Department of Molecular Medicine at Scripps, called this the ability to "digitize our state of mind." In addition to the Instagram study, he pointed out that patients will share more with a self-chosen avatar than a human psychiatrist.
"So when you take this ability to digitize a state of mind and also have a support through an avatar, this could turn out to be a really great way to deal with the problem we have today, which is a lack of mental health professionals with a very extensive burden of depression and other mental health conditions," Topol said.
Detecting mass shooters?
"…mentally ill or deranged people. I am the biggest Second Amendment person there is, but we all must work together…" — Donald J. Trump, on Twitter
However, it's not as simple as turning the A.I. dial from "depression" to "mass shooter." Machine learning tools have gotten excellent at analyzing images, but they lag behind the mind's ability to read language, intonation, and social cues.
As Facebook CEO Mark Zuckerberg said: "One of the pieces of criticism we get that I think is fair is that we're much better able to enforce our nudity policies, for example, than we are hate speech. The reason for that is it's much easier to make an A.I. system that can detect a nipple than it is to determine what is linguistically hate speech."
Trump should know this. During a House Homeland Security subcommittee hearing earlier this year, experts testified that A.I. was not a panacea for curing online extremism. Alex Stamos, Facebook's former chief security officer, likened the world's best A.I. to "a crowd of millions of preschoolers" and the task to demanding those preschoolers "get together to build the Taj Mahal."
None of this is to say that the problem is impossible, but it's certainly intractable.
Yes, we can create an A.I. that plays Go or analyzes stock performance better than any human. That's because we have a lot of data on these activities and they follow predictable input-output patterns. Yet even these "simple" algorithms require some of the brightest minds to develop.
Mass shooters, though far too common in the United States, are still statistically rare. We have records of vastly more games of Go, more stock trades, and more depression diagnoses (a condition millions of Americans struggle with) than mass shootings. That gives machine learning software far more data points on those activities from which to build accurate, responsible predictions, and even those predictions don't perform flawlessly.
Add to this that hate, extremism, and violence don't follow reliable input-output patterns, and you can see why experts are leery of Trump's direction to employ A.I. in the battle against terrorism.
"As we psychological scientists have said repeatedly, the overwhelming majority of people with mental illness are not violent. And there is no single personality profile that can reliably predict who will resort to gun violence," Arthur C. Evans, CEO of the American Psychological Association, said in a release. "Based on the research, we know only that a history of violence is the single best predictor of who will commit future violence. And access to more guns, and deadlier guns, means more lives lost."
Social media can't protect us from ourselves
First Lady Melania Trump visits with the victims of the El Paso, Texas, shooting. Image source: Andrea Hanks / Flickr
One may wonder whether we could utilize current capabilities more aggressively. Unfortunately, social media moderating systems are a hodgepodge, built piecemeal over the last decade. They rely on a mixture of A.I., paid moderators, and community policing. The outcome is an inconsistent system.
For example, the New York Times reported in 2017 that YouTube had removed thousands of videos using machine learning systems. The videos showed atrocities from the Syrian War, such as executions and people spouting Islamic State propaganda. The algorithm flagged and removed them as coming from extremist groups.
In truth, the videos came from humanitarian organizations to document human rights violations. The machine couldn't tell the difference. YouTube reinstated some of the videos after users reported the issue, but mistakes at such a scale do not give one hope that today's moderating systems could accurately identify would-be mass shooters.
That's the conclusion reached in a report from the Partnership on A.I. (PAI). It argued there were "serious shortcomings" in using A.I. as a risk-assessment tool in U.S. criminal justice. Its writers cite three overarching concerns: accuracy and bias; questions of transparency and accountability; and issues with the interface between tools and people.
"Although the use of these tools is in part motivated by the desire to mitigate existing human fallibility in the criminal justice system, it is a serious misunderstanding to view tools as objective or neutral simply because they are based on data," the report states. "While formulas and statistical models provide some degree of consistency and replicability, they still share or amplify many weaknesses of human decision-making."
In addition to the above, there are practical barriers. The technical capabilities of law enforcement vary between locations. Social media platforms deal in massive amounts of traffic and data. And even when the red flags are self-evident — such as when shooters publish manifestos — they offer a narrow window in which to act.
The tools to reduce mass shootings
Protesters at March for Our Lives 2018 in San Francisco. Image source: Gregory Varnum / Wikimedia Commons
Artificial intelligence offers many advantages today and will offer more in the future. But as an answer to extremism and mass shootings, experts agree it's simply the wrong tool. That's the bad news. The good news is we have the tools we need already, and they can be implemented with readily available tech.
"Based on the psychological science, we know some of the steps we need to take. We need to limit civilians' access to assault weapons and high-capacity magazines. We need to institute universal background checks. And we should institute red flag laws that remove guns from people who are at high risk of committing violent acts," Evans wrote.
We don't need advanced A.I. to figure this out. There's only one developed country in the world where someone can legally and easily acquire an armory of guns, and it's the only developed country that suffers mass shootings with such regularity. It's simple arithmetic.
Andy Samberg and Cristin Milioti get stuck in an infinite wedding time loop.
- Two wedding guests discover they're trapped in an infinite time loop, waking up in Palm Springs over and over and over.
- As the reality of their situation sets in, Nyles and Sarah decide to enjoy the repetitive awakenings.
- The film is perfectly timed for a world sheltering at home during a pandemic.
Richard Feynman once asked a silly question. Two MIT students just answered it.
Here's a fun experiment to try. Go to your pantry and see if you have a box of spaghetti. If you do, take out a noodle. Grab both ends of it and bend it until it breaks in half. How many pieces did it break into? If you got two large pieces and at least one small piece you're not alone.
But science loves a good challenge
The mystery remained unsolved until 2005, when French scientists Basile Audoly and Sebastien Neukirch won an Ig Nobel Prize, an award given to scientists for real work that is of a less serious nature than the discoveries that win Nobel Prizes, for finally determining why this happens. Their paper describing the effect is wonderfully funny to read, as it takes such a banal issue so seriously.
They demonstrated that when a rod is bent past a certain point, such as when spaghetti is snapped in half by bending it at the ends, a "snapback effect" is created. This causes energy to reverberate from the initial break to other parts of the rod, often leading to a second break elsewhere.
While this settled the issue of why spaghetti noodles break into three or more pieces, it didn't establish whether they always had to break this way. The question of whether the snapback could be regulated remained unsettled.
Physicists, being themselves, immediately wanted to try to break pasta into two pieces using this info
Ronald Heisser and Vishal Patil, two graduate students currently at Cornell and MIT respectively, read about Feynman's night of noodle snapping in class and were inspired to find out what could be done to make sure the pasta always broke in two.
By placing the noodles in a special machine built for the task and recording the bending with a high-powered camera, the young scientists were able to observe in extreme detail exactly what each change in their snapping method did to the pasta. After breaking more than 500 noodles, they found the solution.
The apparatus the MIT researchers built specifically for the task of snapping hundreds of spaghetti sticks.
(Courtesy of the researchers)
What possible application could this have?
The snapback effect is not limited to uncooked pasta noodles and can be applied to rods of all sorts. The discovery of how to cleanly break them in two could be applied to future engineering projects.
Likewise, knowing how things fragment and fail is always handy when you're trying to build things. Carbon nanotubes, super-strong cylinders often hailed as the building material of the future, are also rods that can be better understood thanks to this odd experiment.
Sometimes big discoveries can be inspired by silly questions. If it hadn't been for Richard Feynman bending noodles seventy years ago, we wouldn't know what we know now about how energy is dispersed through rods and how to control their fracturing. While not all silly questions will lead to such a significant discovery, they can all help us learn.
The multifaceted cerebellum is large — it's just tightly folded.
- A powerful MRI combined with modeling software results in a totally new view of the human cerebellum.
- The so-called 'little brain' is nearly 80% the size of the cerebral cortex when it's unfolded.
- This part of the brain is associated with a lot of things, and a new virtual map is suitably chaotic and complex.
Just under our brain's cortex and close to our brain stem sits the cerebellum, also known as the "little brain." It's an organ many animals have, and we're still learning what it does in humans. It's long been thought to be involved in sensory input and motor control, but recent studies suggest it also plays a role in a lot of other things, including emotion, thought, and pain. After all, about half of the brain's neurons reside there. But it's so small. Except it's not, according to a new study from San Diego State University (SDSU) published in PNAS (Proceedings of the National Academy of Sciences).
A neural crêpe
A new imaging study led by psychology professor and cognitive neuroscientist Martin Sereno of the SDSU MRI Imaging Center reveals that the cerebellum is actually an intricately folded organ that has a surface area equal in size to 78 percent of the cerebral cortex. Sereno, a pioneer in MRI brain imaging, collaborated with other experts from the U.K., Canada, and the Netherlands.
So what does it look like? Unfolded, the cerebellum is reminiscent of a crêpe, according to Sereno, about four inches wide and three feet long.
The team didn't physically unfold a cerebellum in their research. Instead, they worked with brain scans from a 9.4 Tesla MRI machine, and virtually unfolded and mapped the organ. Custom software was developed for the project, based on the open-source FreeSurfer app developed by Sereno and others. Their model allowed the scientists to unpack the virtual cerebellum down to each individual fold, or "folium."
The study's cross-sections of a folded cerebellum
Image source: Sereno, et al.
A complicated map
Sereno tells SDSU NewsCenter that "Until now we only had crude models of what it looked like. We now have a complete map or surface representation of the cerebellum, much like cities, counties, and states."
That map is a bit surprising, too, in that regions associated with different functions are scattered across the organ in peculiar ways, unlike the cortex where it's all pretty orderly. "You get a little chunk of the lip, next to a chunk of the shoulder or face, like jumbled puzzle pieces," says Sereno. This may have to do with the fact that when the cerebellum is folded, its elements line up differently than they do when the organ is unfolded.
It seems the folded structure of the cerebellum is a configuration that facilitates access to information coming from places all over the body. Sereno says, "Now that we have the first high resolution base map of the human cerebellum, there are many possibilities for researchers to start filling in what is certain to be a complex quilt of inputs, from many different parts of the cerebral cortex in more detail than ever before."
This makes sense if the cerebellum is involved in highly complex, advanced cognitive functions, such as handling language or performing abstract reasoning as scientists suspect. "When you think of the cognition required to write a scientific paper or explain a concept," says Sereno, "you have to pull in information from many different sources. And that's just how the cerebellum is set up."
Bigger and bigger
The study also suggests that the large size of their virtual human cerebellum is likely to be related to the sheer number of tasks with which the organ is involved in the complex human brain. The macaque cerebellum that the team analyzed, for example, amounts to just 30 percent the size of the animal's cortex.
"The fact that [the cerebellum] has such a large surface area speaks to the evolution of distinctively human behaviors and cognition," says Sereno. "It has expanded so much that the folding patterns are very complex."
As the study says, "Rather than coordinating sensory signals to execute expert physical movements, parts of the cerebellum may have been extended in humans to help coordinate fictive 'conceptual movements,' such as rapidly mentally rearranging a movement plan — or, in the fullness of time, perhaps even a mathematical equation."
Sereno concludes, "The 'little brain' is quite the jack of all trades. Mapping the cerebellum will be an interesting new frontier for the next decade."
What happens if we consider welfare programs as investments?
- A recently published study suggests that some welfare programs more than pay for themselves.
- It is one of the first major reviews of welfare programs to measure so many by a single metric.
- The findings will likely inform future welfare reform and encourage debate on how to grade success.
Welfare as an investment
The study, carried out by Nathaniel Hendren and Ben Sprung-Keyser of Harvard University, reviews 133 welfare programs through a single lens. The authors measured each program's "Marginal Value of Public Funds" (MVPF), defined as the ratio of the recipients' willingness to pay for a program to its net cost to the government.
A program with an MVPF of one provides precisely as much in net benefits as it costs to deliver those benefits. For an illustration, imagine a program that hands someone a dollar. If getting that dollar doesn't alter their behavior, then the MVPF of that program is one. If it discourages them from working, then the program's cost goes up, as the program causes government tax revenues to fall in addition to costing money upfront. The MVPF falls below one in this case.
Lastly, it is possible that getting the dollar leads the recipient to further their education and get a job that pays more taxes in the future, lowering the cost of the program in the long run and raising the MVPF. The ratio can even hit infinity when a program fully "pays for itself."
These are only a few of many possible examples, but they show the pattern: a very high MVPF means a program "pays for itself," a value of one indicates it "breaks even," and a value below one shows it costs more money than the direct cost of the benefits would suggest.
After determining the programs' costs using the existing literature and the willingness to pay through statistical analysis, the authors examined 133 programs spanning social insurance, education and job training, tax and cash transfers, and in-kind transfers. The results show that some programs turn a "profit" for the government, mainly when they are focused on children:
This figure shows the MVPF for a variety of polices alongside the typical age of the beneficiaries. Clearly, programs targeted at children have a higher payoff.
Nathaniel Hendren and Ben Sprung-Keyser
Programs like child health services and K-12 education spending have infinite MVPF values. The authors argue this is because the programs allow children to live healthier, more productive lives and earn more money, which enables them to pay more taxes later. The preschool initiatives examined don't manage this as well and have a lower "profit" rate despite decent MVPF ratios.
On the other hand, programs like tuition deductions for older adults don't make back the money they cost. This is likely for several reasons, not least that there is less time for the beneficiary to pay the government back in taxes. Disability insurance was likewise "unprofitable," as those collecting it have a reduced need to work and pay less back in taxes.
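The MVPF arithmetic described above can be sketched in a few lines of code. This is a minimal illustration of the ratio as the article defines it, not code from the Hendren and Sprung-Keyser study; the function name and all dollar figures are hypothetical, and the "fiscal externality" term simply stands in for the behavioral effects on tax revenue discussed in the dollar example.

```python
def mvpf(willingness_to_pay: float, upfront_cost: float,
         fiscal_externality: float) -> float:
    """Marginal Value of Public Funds: willingness to pay / net cost.

    fiscal_externality is the change in government cost caused by
    behavioral responses: positive if the program lowers future tax
    receipts (e.g., recipients work less), negative if it raises them
    (e.g., recipients earn and pay more taxes later).
    """
    net_cost = upfront_cost + fiscal_externality
    if net_cost <= 0:
        # Revenue gains exceed the upfront cost: the program
        # fully "pays for itself."
        return float("inf")
    return willingness_to_pay / net_cost

# A dollar handed out with no behavioral response: MVPF = 1 ("breaks even").
assert mvpf(1.0, 1.0, 0.0) == 1.0

# The dollar discourages work, losing $0.25 of tax revenue: MVPF < 1.
assert mvpf(1.0, 1.0, 0.25) == 0.8

# The dollar spurs education that returns $1.50 in later taxes: infinite MVPF.
assert mvpf(1.0, 1.0, -1.5) == float("inf")
```

The three assertions mirror the article's three scenarios for the hypothetical dollar-handout program: break even, cost more than the sticker price, and pay for itself.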