
Could A.I. write a novel like Hemingway?

Artificial Intelligence has come a long way in a short time. So at what point will it be able to emulate the great artists and writers of our time?

Salman Rushdie: You know, I never say never. I mean I remember... I mean I’m sort of an amateur chess player—that’s what I’m interested in: chess, and I remember back in the day when computers were first being taught to play chess that people said that they would never be able to beat the real great, the grandmasters and the world champions. And for a long time that was true—that the world champion players and the great grandmasters were able to overcome the computer. 

Uh, not true anymore. It’s not true anymore. Computers are certainly as good as, if not better than, any human player. As computer memory and sophistication have increased, they have outstripped human memory and sophistication. 

So I don’t know. It seems to me that what makes a writer a good writer is not just technical skill with language, not even being able to find and tell a good story. It seems to me that, first of all, there’s a relationship with language that the best writers have, which is very much their own. 

If we read Hemingway we know it’s Hemingway because he has a particular relationship with the language; if we read James Joyce or William Faulkner we know it’s them; and if we read Garcia Marquez, same thing. 

So that’s the first thing, is when I’m looking at work I’m trying to see what is the relationship with language.

And the second thing is how you see the world—like, do you have a good ear? Are you good at listening to how people really speak? Do you have a good eye? Are you good at seeing the world in an interesting way? 

And then finally the greatest writers, the best writers have a vision of the world that is personal to themselves, they have a kind of take on reality which is theirs and out of which their whole sensibility proceeds. 

Now to have all of that in the form of artificial intelligence—I don’t think we’re anywhere near that yet. 

But what is true I think is there’s beginning to be some sense of AI as developing a moral stance, developing an ability to make good and evil choices, right and wrong choices, and that’s a step on the way towards being what one would call human. 

So I’m not saying never, I’m just saying I don’t see that we’re there yet.

Author and public intellectual Salman Rushdie knows his way around a good word or two. It's made him one of the most celebrated and widely read authors of the last 50 years. But he keeps an open mind about whether a machine might one day be able to emulate him. He remembers an era not too long ago when people derided computer chess programs, saying they would never beat grandmaster human players. It only took a couple of decades until those who chided the chess AI had to eat crow, Rushdie posits, so why should writing be any different? Salman Rushdie's latest book is The Golden House.

Does conscious AI deserve rights?

If machines develop consciousness, or if we manage to give it to them, the human-robot dynamic will forever be different.

  • Does AI—and, more specifically, conscious AI—deserve moral rights? In this thought exploration, evolutionary biologist Richard Dawkins, ethics and tech professor Joanna Bryson, philosopher and cognitive scientist Susan Schneider, physicist Max Tegmark, philosopher Peter Singer, and bioethicist Glenn Cohen all weigh in on the question of AI rights.
  • Given the grave tragedy of slavery throughout human history, philosophers and technologists must answer this question ahead of technological development to avoid humanity creating a slave class of conscious beings.
  • One potential safeguard against that? Regulation. Once we define the context in which AI requires rights, the simplest solution may be to not build that thing.

A new hydrogel might be strong enough for knee replacements

Duke University researchers might have solved a half-century old problem.

  • Duke University researchers created a hydrogel that appears to be as strong and flexible as human cartilage.
  • The blend of three polymers provides enough flexibility and durability to mimic the knee.
  • The next step is to test the hydrogel in sheep; human use is at least three years away.

Hints of the 4th dimension have been detected by physicists

What would it be like to experience the 4th dimension?

Two different experiments show hints of a 4th spatial dimension. Credit: Zilberberg Group / ETH Zürich

Physicists have understood, at least theoretically, that there may be higher dimensions beyond our normal three. The first clue came in 1905, when Einstein developed his theory of special relativity. By dimensions, of course, we're talking about length, width, and height. Generally speaking, when we talk about a fourth dimension, it's considered to be space-time. But here, physicists mean a spatial dimension beyond the normal three, not a parallel universe, as such dimensions are often depicted in popular sci-fi shows.


Predicting PTSD symptoms becomes possible with a new test

An algorithm may allow doctors to assess PTSD candidates for early intervention after traumatic ER visits.

  • 10-15% of people visiting emergency rooms eventually develop symptoms of long-lasting PTSD.
  • Early treatment is available but there's been no way to tell who needs it.
  • Using clinical data already being collected, machine learning can identify who's at risk.

The psychological scars a traumatic experience can leave behind may have a more profound effect on a person than the original traumatic experience. Long after an acute emergency is resolved, victims of post-traumatic stress disorder (PTSD) continue to suffer its consequences.

In the U.S., some 30 million patients are treated annually in emergency departments (EDs) for a range of traumatic injuries. Add to that urgent admissions to the ED with the onset of COVID-19 symptoms. Health experts predict that some 10 to 15 percent of these people will develop long-lasting PTSD within a year of the initial incident. While there are interventions that can help individuals avoid PTSD, there's been no reliable way to identify those most likely to need them.

That may now have changed. A multi-disciplinary team of researchers has developed a method for predicting who is most likely to develop PTSD after a traumatic emergency-room experience. Their study is published in the journal Nature Medicine.

70 data points and machine learning


Study lead author Katharina Schultebraucks of Columbia University's Vagelos College of Physicians and Surgeons says:

"For many trauma patients, the ED visit is often their sole contact with the health care system. The time immediately after a traumatic injury is a critical window for identifying people at risk for PTSD and arranging appropriate follow-up treatment. The earlier we can treat those at risk, the better the likely outcomes."

The new PTSD test uses machine learning and 70 clinical data points plus a clinical stress-level assessment to develop a PTSD score for an individual that identifies their risk of acquiring the condition.

Among the 70 data points are stress hormone levels, inflammatory signals, high blood pressure, and an anxiety-level assessment. Says Schultebraucks, "We selected measures that are routinely collected in the ED and logged in the electronic medical record, plus answers to a few short questions about the psychological stress response. The idea was to create a tool that would be universally available and would add little burden to ED personnel."
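The article doesn't disclose the team's actual model, but the general shape of such a risk score can be sketched as a simple logistic function over weighted clinical inputs. Every feature name, weight, and value below is hypothetical and purely illustrative:

```python
import math

def ptsd_risk_score(features, weights, bias):
    """Logistic risk score: maps weighted clinical inputs to a probability in (0, 1)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical inputs for one patient (not from the study):
# [stress hormone level, inflammatory marker, systolic blood pressure, anxiety score]
patient = [0.8, 0.6, 145.0, 0.9]
weights = [1.2, 0.9, 0.01, 1.5]

score = ptsd_risk_score(patient, weights, bias=-4.0)
print(round(score, 3))
```

A score above some chosen threshold would flag the patient for follow-up. In practice, the study's model is trained on patient data, so its weights and threshold come from that training rather than hand-picked values like these.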

Researchers used data from adult trauma survivors in Atlanta, Georgia (377 individuals) and New York City (221 individuals) to test their system.

Of this cohort, 90 percent of those predicted to be at high risk developed long-lasting PTSD symptoms within a year of the initial traumatic event — just 5 percent of people who never developed PTSD symptoms had been erroneously identified as being at risk.

On the other side of the coin, 29 percent of individuals were "false negatives": tagged by the algorithm as not being at risk of PTSD, but who then developed symptoms.
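The figures above correspond to standard confusion-matrix rates. The article reports only the rates, not the underlying counts, so the counts in this sketch are hypothetical, chosen so the computed rates roughly match the reported 90 percent, 5 percent, and 29 percent:

```python
# Hypothetical confusion-matrix counts (the study's actual counts are not given here)
tp = 45   # flagged high-risk, developed PTSD
fp = 5    # flagged high-risk, never developed PTSD
fn = 18   # not flagged, developed PTSD anyway
tn = 95   # not flagged, never developed PTSD

ppv = tp / (tp + fp)   # share of high-risk flags that were correct (~90%)
fpr = fp / (fp + tn)   # non-PTSD patients erroneously flagged (~5%)
fnr = fn / (fn + tp)   # eventual PTSD cases the model missed (~29%)

print(f"PPV={ppv:.0%}, FPR={fpr:.0%}, FNR={fnr:.0%}")
```

Note that the 90 percent figure is a positive predictive value (how often a high-risk flag was right), which is a different quantity from sensitivity (how many eventual cases were caught); the 29 percent false-negative rate shows the model still misses a meaningful share of cases.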

Going forward


Schultebraucks looks forward to more testing as the researchers continue to refine their algorithm and to instill confidence in the approach among ED clinicians: "Because previous models for predicting PTSD risk have not been validated in independent samples like our model, they haven't been adopted in clinical practice." She expects that, "Testing and validation of our model in larger samples will be necessary for the algorithm to be ready-to-use in the general population."

"Currently only 7% of level-1 trauma centers routinely screen for PTSD," notes Schultebraucks. "We hope that the algorithm will provide ED clinicians with a rapid, automatic readout that they could use for discharge planning and the prevention of PTSD." She envisions the algorithm being implemented in the future as a feature of electronic medical records.

The researchers also plan to test their algorithm at predicting PTSD in people whose traumatic experiences come in the form of health events such as heart attacks and strokes, as opposed to visits to the emergency department.
