Could A.I. write a novel like Hemingway?
Artificial Intelligence has come a long way in a short time. So at what point will it be able to emulate the great artists and writers of our time?
Salman Rushdie is a British-Indian novelist and writer, author of ten novels including Midnight’s Children (Booker Prize, 1981), Two Years Eight Months and Twenty-Eight Nights, and The Golden House. The publication of his fourth novel, The Satanic Verses, in 1988 led to violent protests in the Muslim world over its depiction of the prophet Mohammad. The Supreme Leader of Iran, Ayatollah Khomeini, issued a fatwa calling for Rushdie's death, which sent him into hiding for nearly a decade. Rushdie has weathered countless death threats and assassination attempts.
Salman Rushdie: You know, I never say never. I mean I remember... I mean I’m sort of an amateur chess player—that’s what I’m interested in: chess, and I remember back in the day when computers were first being taught to play chess that people said that they would never be able to beat the real great, the grandmasters and the world champions. And for a long time that was true—that the world champion players and the great grandmasters were able to overcome the computer.
Uh, not true anymore. It’s not true anymore. Computers are certainly as good as, if not better than, any human player. As computer memory and sophistication have increased, they have outstripped human memory and sophistication.
So I don’t know, it seems to me the thing that makes a writer a good writer is not just the technical skill with language, not even being able to find and tell a good story, it seems to me that first of all there’s a relationship with language that the best writers have, which is very much their relationship.
If we read Hemingway we would know it’s Hemingway, because he has a particular relationship with the language; if we read James Joyce or William Faulkner we know it’s them, and if we read García Márquez, same thing.
So that’s the first thing, is when I’m looking at work I’m trying to see what is the relationship with language.
And the second thing is how you see the world—like, do you have a good ear? Are you good at listening to how people really speak? Do you have a good eye? Are you good at seeing the world in an interesting way?
And then finally the greatest writers, the best writers have a vision of the world that is personal to themselves, they have a kind of take on reality which is theirs and out of which their whole sensibility proceeds.
Now to have all of that in the form of artificial intelligence—I don’t think we’re anywhere near that yet.
But what is true I think is there’s beginning to be some sense of AI as developing a moral stance, developing an ability to make good and evil choices, right and wrong choices, and that’s a step on the way towards being what one would call human.
So I’m not saying never, I’m just saying I don’t see that we’re there yet.
Author and public intellectual Salman Rushdie knows his way around a good word or two. It's made him one of the most celebrated and widely read authors of the last 50 years. But he keeps an open mind about whether a machine might one day be able to emulate him. He remembers an era, not too long ago, when people derided computer chess programs, saying they would never beat grandmaster human players. It took only a couple of decades for those who chided chess AI to eat crow, Rushdie posits, so why should writing be any different? Salman Rushdie's latest book is The Golden House.
If machines develop consciousness, or if we manage to give it to them, the human-robot dynamic will forever be different.
- Does AI—and, more specifically, conscious AI—deserve moral rights? In this thought exploration, evolutionary biologist Richard Dawkins, ethics and tech professor Joanna Bryson, philosopher and cognitive scientist Susan Schneider, physicist Max Tegmark, philosopher Peter Singer, and bioethicist Glenn Cohen all weigh in on the question of AI rights.
- Given the grave tragedy of slavery throughout human history, philosophers and technologists must answer this question ahead of technological development to avoid humanity creating a slave class of conscious beings.
- One potential safeguard against that? Regulation. Once we define the context in which AI requires rights, the simplest solution may be to not build that thing.
Duke University researchers might have solved a half-century old problem.
- Duke University researchers created a hydrogel that appears to be as strong and flexible as human cartilage.
- The blend of three polymers provides enough flexibility and durability to mimic the knee.
- The next step is to test this hydrogel in sheep; human use is at least three years away.
Duke researchers have developed the first gel-based synthetic cartilage with the strength of the real thing. A quarter-sized disc of the material can withstand the weight of a 100-pound kettlebell without tearing or losing its shape.
Photo: Feichen Yang

That's the word from a team in the Department of Chemistry and the Department of Mechanical Engineering and Materials Science at Duke University. Their new paper, published in the journal Advanced Functional Materials (https://onlinelibrary.wiley.com/doi/abs/10.1002/adfm.202003451), details this exciting evolution of this frustrating joint.

Researchers have sought materials strong and versatile enough to repair a knee since at least the 1970s. This new hydrogel, composed of three polymers, might be it. When two of the polymers are stretched, a third keeps the entire structure intact. When pulled 100,000 times, the hydrogel held up as well as materials used in bone implants. The team also rubbed the hydrogel against natural cartilage a million times and found it to be as wear-resistant as the real thing.

The hydrogel has the appearance of Jell-O and is 60 percent water. Co-author Feichen Yang says this network of polymers is particularly durable: "Only this combination of all three components is both flexible and stiff and therefore strong."

As with any new material, a lot of testing must be conducted. The researchers don't foresee the hydrogel being implanted into human bodies for at least three years. The next step is to test it in sheep.

Still, this is an exciting step forward in the rehabilitation of one of our trickiest joints. Given the potential reward, the wait is worth it.
What would it be like to experience the 4th dimension?
Physicists have understood, at least theoretically, that there may be higher dimensions beyond our normal three. The first clue came in 1905, when Einstein developed his theory of special relativity. By dimensions, of course, we mean length, width, and height. Generally speaking, when we talk about a fourth dimension, it's considered to be space-time. But here, physicists mean a fourth spatial dimension beyond the normal three, not a parallel universe, which is how such dimensions are often misrepresented in popular sci-fi shows.
An algorithm may allow doctors to assess PTSD candidates for early intervention after traumatic ER visits.
- 10-15% of people visiting emergency rooms eventually develop symptoms of long-lasting PTSD.
- Early treatment is available but there's been no way to tell who needs it.
- Using clinical data already being collected, machine learning can identify who's at risk.
The psychological scars a traumatic experience can leave behind may have a more profound effect on a person than the original traumatic experience. Long after an acute emergency is resolved, victims of post-traumatic stress disorder (PTSD) continue to suffer its consequences.
In the U.S., some 30 million patients are treated annually in emergency departments (EDs) for a range of traumatic injuries. Add to that urgent ED admissions for the onset of COVID-19 symptoms. Health experts predict that some 10 to 15 percent of these people will develop long-lasting PTSD within a year of the initial incident. While there are interventions that can help individuals avoid PTSD, there has been no reliable way to identify those most likely to need them.
That may now have changed. A multi-disciplinary team of researchers has developed a method for predicting who is most likely to develop PTSD after a traumatic emergency-room experience. Their study is published in the journal Nature Medicine.
70 data points and machine learning
Image source: Creators Collective/Unsplash
Study lead author Katharina Schultebraucks of Columbia University's Vagelos College of Physicians and Surgeons says:
"For many trauma patients, the ED visit is often their sole contact with the health care system. The time immediately after a traumatic injury is a critical window for identifying people at risk for PTSD and arranging appropriate follow-up treatment. The earlier we can treat those at risk, the better the likely outcomes."
The new PTSD test uses machine learning on 70 clinical data points, plus a clinical stress-level assessment, to produce an individual PTSD score that quantifies a person's risk of developing the condition.
Among the 70 data points are stress hormone levels, inflammatory signals, high blood pressure, and an anxiety-level assessment. Says Schultebraucks, "We selected measures that are routinely collected in the ED and logged in the electronic medical record, plus answers to a few short questions about the psychological stress response. The idea was to create a tool that would be universally available and would add little burden to ED personnel."
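To make the idea concrete, here is a minimal sketch in Python of how routinely collected ED measures could be combined into a single risk score. The feature names, weights, bias, and threshold below are invented for illustration; the study's actual model uses 70 data points and is not reproduced here.

```python
import math

# Hypothetical feature weights -- illustrative only, not the study's model.
WEIGHTS = {
    "cortisol_ug_dl": 0.08,   # stress hormone level
    "crp_mg_l": 0.05,         # inflammatory signal
    "systolic_bp": 0.01,      # blood pressure reading
    "anxiety_score": 0.20,    # short self-report questionnaire
}
BIAS = -6.0

def ptsd_risk_score(record):
    """Map an electronic medical record (dict of measures) to a 0-1 risk score."""
    z = BIAS + sum(w * record.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash to a probability

# A hypothetical patient pulled from the ED's electronic record.
patient = {"cortisol_ug_dl": 25, "crp_mg_l": 12,
           "systolic_bp": 150, "anxiety_score": 8}
score = ptsd_risk_score(patient)
high_risk = score > 0.5  # flag for follow-up if the score crosses a threshold
```

In practice the weights would be learned from outcome data rather than set by hand; the point is only that the inputs are measures the ED already logs.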
Researchers used data from adult trauma survivors in Atlanta, Georgia (377 individuals) and New York City (221 individuals) to test their system.
Of this cohort, 90 percent of those predicted to be at high risk developed long-lasting PTSD symptoms within a year of the initial traumatic event — just 5 percent of people who never developed PTSD symptoms had been erroneously identified as being at risk.
On the other side of the coin, 29 percent of individuals were "false negatives": tagged by the algorithm as not being at risk of PTSD, but who then developed symptoms.
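The percentages above are standard screening metrics, and it can help to see how they are computed. A minimal sketch, using made-up toy data rather than the study's actual cohort:

```python
def screening_metrics(predicted_high_risk, developed_ptsd):
    """Summarize a binary PTSD screen against one-year outcomes.

    predicted_high_risk, developed_ptsd: parallel lists of booleans,
    one entry per trauma survivor.
    """
    pairs = list(zip(predicted_high_risk, developed_ptsd))
    tp = sum(p and d for p, d in pairs)            # flagged, developed PTSD
    fp = sum(p and not d for p, d in pairs)        # flagged, never developed it
    fn = sum((not p) and d for p, d in pairs)      # missed, developed PTSD
    tn = sum((not p) and (not d) for p, d in pairs)
    return {
        # Of those flagged high-risk, the share that actually developed PTSD.
        "precision": tp / (tp + fp),
        # Of those who never developed PTSD, the share wrongly flagged.
        "false_positive_rate": fp / (fp + tn),
        # Of those who did develop PTSD, the share the screen missed.
        "false_negative_rate": fn / (fn + tp),
    }

# Toy example: 10 patients, 4 flagged by the screen, 5 developed symptoms.
flagged = [True, True, True, True, False, False, False, False, False, False]
outcome = [True, True, True, False, True, True, False, False, False, False]
m = screening_metrics(flagged, outcome)
```

In the study's terms, precision corresponds to the 90 percent figure, the false-positive rate to the 5 percent, and the false-negative rate to the 29 percent.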
Image source: Külli Kittus/Unsplash
Schultebraucks looks forward to more testing as the researchers continue to refine their algorithm and to instill confidence in the approach among ED clinicians: "Because previous models for predicting PTSD risk have not been validated in independent samples like our model, they haven't been adopted in clinical practice." She expects that, "Testing and validation of our model in larger samples will be necessary for the algorithm to be ready-to-use in the general population."
"Currently only 7% of level-1 trauma centers routinely screen for PTSD," notes Schultebraucks. "We hope that the algorithm will provide ED clinicians with a rapid, automatic readout that they could use for discharge planning and the prevention of PTSD." She envisions the algorithm being implemented in the future as a feature of electronic medical records.
The researchers also plan to test their algorithm at predicting PTSD in people whose traumatic experiences come in the form of health events such as heart attacks and strokes, as opposed to visits to the emergency department.