On the Origins of Genius: How Human Consciousness Evolved
The human mind is like a Turing machine, says Daniel Dennett. It's made up of unthinking cogs – but when combined in the right order, their motion gives rise to consciousness.
Daniel C. Dennett is the author of Intuition Pumps and Other Tools for Thinking, Breaking the Spell, Freedom Evolves, and Darwin's Dangerous Idea and is University Professor and Austin B. Fletcher Professor of Philosophy, and Co-Director of the Center for Cognitive Studies at Tufts University. He lives with his wife in North Andover, Massachusetts, and has a daughter, a son, and a grandson. He was born in Boston in 1942, the son of a historian by the same name, and received his B.A. in philosophy from Harvard in 1963. He then went to Oxford to work with Gilbert Ryle, under whose supervision he completed the D.Phil. in philosophy in 1965. He taught at U.C. Irvine from 1965 to 1971, when he moved to Tufts, where he has taught ever since, aside from periods visiting at Harvard, Pittsburgh, Oxford, and the École Normale Supérieure in Paris.
His first book, Content and Consciousness, appeared in 1969, followed by Brainstorms (1978), Elbow Room (1984), The Intentional Stance (1987), Consciousness Explained (1991), Darwin's Dangerous Idea (1995), Kinds of Minds (1996), and Brainchildren: A Collection of Essays 1984-1996. Sweet Dreams: Philosophical Obstacles to a Science of Consciousness was published in 2005. He co-edited The Mind's I with Douglas Hofstadter in 1981, and he is the author of over three hundred scholarly articles on various aspects of the mind, published in journals ranging from Artificial Intelligence and Behavioral and Brain Sciences to Poetics Today and the Journal of Aesthetics and Art Criticism.
Dennett gave the John Locke Lectures at Oxford in 1983, the Gavin David Young Lectures at Adelaide, Australia, in 1985, and the Tanner Lecture at Michigan in 1986, among many others. He has received two Guggenheim Fellowships, a Fulbright Fellowship, and a Fellowship at the Center for Advanced Studies in Behavioral Science. He was elected to the American Academy of Arts and Sciences in 1987.
He was the Co-founder (in 1985) and Co-director of the Curricular Software Studio at Tufts, and has helped to design museum exhibits on computers for the Smithsonian Institution, the Museum of Science in Boston, and the Computer Museum in Boston.
Daniel C. Dennett: In an entirely natural world without any supernatural mysteries you can explain the mind, the human mind, consciousness. It's been my project for 50 years, and what I've come to realize is that the only way to do it right is you have to take evolution a lot more seriously and really look hard at the question of how evolution could have gotten these wonderful projects up and running that have now led to people like you and me and all the great artistic geniuses and scientific geniuses, the real intelligent designers that now inhabit the planet instead of the imaginary intelligent designer who never existed.
For millennia people had it in mind that all the wonderful things they saw in the world, all the beautiful design of the animals and plants and living things, must be due to a fabulously intelligent designer, a creator. And so it was until Darwin came along and turned that upside down and realized that in principle there could be a process with no intelligence, no comprehension, no foresight, no purpose that would just inexorably grind out algorithmically better and better and better designs of all sorts and create the living world where there had been just lifeless matter before. And this was a shocking idea to many people; even to Darwin in some regards it was shocking. But he was right. He had the essentials right, and now 150 years later there's just no question about it: he was right, and we're filling in the details at a breathtaking pace. So that was the first great inversion, the strange inversion of reasoning. And it has been matched in recent years by what I call Alan Turing's strange inversion of reasoning.
When Turing came along, computers were people; that was a job. What do you do for a living? "I'm a computer." And these were human beings, typically math majors, and they were hired to compute various functions, tables, logarithms, celestial navigation tables and so forth. What Turing realized was you didn't have to be intelligent. You didn't have to comprehend. You could make a device which did all the things that the human computers were doing with all the intelligence and all the understanding laundered out of it, except for the most minimal sort of mechanical quasi-understanding. All it had to do was to be able to tell a zero from a one, or a hole in a punched tape from no hole, a very simple discriminator; put it together with the right logic and you have a Universal Turing Machine, which can compute anything computable. And that was the birth of the computer. And the two strange inversions fit together beautifully. What they show, and this is still strange to people, is what I call competence without comprehension.
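The point about a simple discriminator plus the right logic can be made concrete with a toy Turing machine. The sketch below is illustrative and not from the interview: each step is a dumb table lookup that only distinguishes one symbol from another, yet the machine as a whole computes (here, it inverts every bit of a binary string).

```python
# A minimal Turing-machine sketch. Each "cog" is an unthinking table
# lookup on (state, symbol) — no comprehension anywhere — yet the
# machine computes. The rule table below is a made-up example.

def run_tm(tape, rules, state="start", blank="_"):
    """Run a one-tape Turing machine until it reaches the 'halt' state."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        # The only "quasi-understanding" needed: tell one symbol from another.
        state, write, move = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Rule table: flip each bit, halt when the blank is reached.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm("1011", rules))  # -> 0100
```

A Universal Turing Machine is the same idea taken one step further: a fixed rule table that reads another machine's rule table off the tape, which is why one device can compute anything computable.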
We tend to think the reason we send our children to university is so that they can acquire comprehension, which we view as the source of competence. It's out of that well of comprehension that they acquire the competences they do. And we look down our noses at rote learning and drill and practice because that's just competence; we want comprehension. And what Turing and Darwin in a very similar way showed is no, that's just backwards. Comprehension is an effect of multiple competences, not itself a source, an independent source, of competence. So that's the second strange inversion.
If we want to look at human minds we have to add another source of evolutionary power, and that's cultural evolution. We don't get all our intelligence from our genes; in fact relatively little, all things considered. And here's where Turing's ideas really come in handy, because if you take Richard Dawkins's idea of the meme as a unit of cultural evolution and you take Turing's idea about a programmable computer and you put them together, you get the idea of a meme as a thing made of information; it's like an app which you download to your necktop. And a brain filled with apps is a mind, is a human mind. And if you don't download all the apps you're not going to be able to think very well. That's why no creature on the planet, however intelligent in some regards, can hold a candle to us: they can't download the apps of culture because, basically, they don't have a language. And it's language, which is itself composed of memes — words are memes — it's language that is the backbone of cultural evolution. And what it permits is for cultural evolution to become ever less Darwinian, ever more intelligent.
And now we're living in the age of intelligent design. We have scientists and engineers and artists and musicians and composers all these wonderful designers of wonderful things, poems, bridges, airplanes, theories, and they are intelligent designers, but if you want to know how they manage to have that intelligence you have to go back and look at their brains as ultimately like Turing machines. They're composed of actually trillions of moving parts that are all just as stupid as posts. They don't understand a thing; they don't have to understand a thing; you put them together in the right way and you get comprehension and eventually consciousness.
Daniel Dennett has been mulling consciousness over for the last 50 years, and he's ended up where we began: evolution. When this theory was proposed by Darwin, it inverted everything people at the time held to be true — it revealed that we were not created by intelligent design, but rather we evolved into intelligent designers ourselves. The process of evolution worked mindlessly, producing better and better human prototypes, crafting ever-more complex brains until that rhythmic, algorithmic repetition birthed consciousness. This is what Dennett refers to as 'competence without comprehension'.
Daniel Dennett's most recent book is From Bacteria to Bach and Back: The Evolution of Minds.
Duke University researchers might have solved a half-century old problem.
- Duke University researchers created a hydrogel that appears to be as strong and flexible as human cartilage.
- The blend of three polymers provides enough flexibility and durability to mimic the knee.
- The next step is to test this hydrogel in sheep; human use is at least three years away.
Duke researchers have developed the first gel-based synthetic cartilage with the strength of the real thing. A quarter-sized disc of the material can withstand the weight of a 100-pound kettlebell without tearing or losing its shape.
Photo: Feichen Yang.

That's the word from a team in the Department of Chemistry and the Department of Mechanical Engineering and Materials Science at Duke University. Their new paper, published in the journal Advanced Functional Materials, details this exciting development for this frustrating joint.

Researchers have sought materials strong and versatile enough to repair a knee since at least the 1970s. This new hydrogel, composed of three polymers, might be it. When two of the polymers are stretched, a third keeps the entire structure intact. When pulled 100,000 times, the cartilage held up as well as materials used in bone implants. The team also rubbed the hydrogel against natural cartilage a million times and found it to be as wear-resistant as the real thing.

The hydrogel has the appearance of Jell-O and is 60 percent water. Co-author Feichen Yang says this network of polymers is particularly durable: "Only this combination of all three components is both flexible and stiff and therefore strong."

As with any new material, a lot of testing must be conducted. The researchers don't foresee this hydrogel being implanted into human bodies for at least three years. The next step is to test it in sheep.

Still, this is an exciting step forward in the rehabilitation of one of our trickiest joints. Given the potential reward, the wait is worth it.
An algorithm may allow doctors to assess PTSD candidates for early intervention after traumatic ER visits.
- 10-15% of people visiting emergency rooms eventually develop symptoms of long-lasting PTSD.
- Early treatment is available but there's been no way to tell who needs it.
- Using clinical data already being collected, machine learning can identify who's at risk.
The psychological scars a traumatic experience can leave behind may have a more profound effect on a person than the original traumatic experience. Long after an acute emergency is resolved, victims of post-traumatic stress disorder (PTSD) continue to suffer its consequences.
In the U.S. some 30 million patients are annually treated in emergency departments (EDs) for a range of traumatic injuries. Add to that urgent admissions to the ED with the onset of COVID-19 symptoms. Health experts predict that some 10 percent to 15 percent of these people will develop long-lasting PTSD within a year of the initial incident. While there are interventions that can help individuals avoid PTSD, there's been no reliable way to identify those most likely to need it.
That may now have changed. A multi-disciplinary team of researchers has developed a method for predicting who is most likely to develop PTSD after a traumatic emergency-room experience. Their study is published in the journal Nature Medicine.
70 data points and machine learning
Image source: Creators Collective/Unsplash
Study lead author Katharina Schultebraucks of Columbia University Vagelos College of Physicians and Surgeons says:
"For many trauma patients, the ED visit is often their sole contact with the health care system. The time immediately after a traumatic injury is a critical window for identifying people at risk for PTSD and arranging appropriate follow-up treatment. The earlier we can treat those at risk, the better the likely outcomes."
The new PTSD test uses machine learning and 70 clinical data points plus a clinical stress-level assessment to develop a PTSD score for an individual that identifies their risk of acquiring the condition.
Among the 70 data points are stress hormone levels, inflammatory signals, high blood pressure, and an anxiety-level assessment. Says Schultebraucks, "We selected measures that are routinely collected in the ED and logged in the electronic medical record, plus answers to a few short questions about the psychological stress response. The idea was to create a tool that would be universally available and would add little burden to ED personnel."
Researchers used data from adult trauma survivors in Atlanta, Georgia (377 individuals) and New York City (221 individuals) to test their system.
Of this cohort, 90 percent of those predicted to be at high risk developed long-lasting PTSD symptoms within a year of the initial traumatic event — just 5 percent of people who never developed PTSD symptoms had been erroneously identified as being at risk.
On the other side of the coin, 29 percent of individuals were "false negatives," tagged by the algorithm as not being at risk of PTSD but then developing symptoms.
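The percentages above correspond to standard classifier error metrics. As a rough illustration (the counts below are hypothetical, chosen only so the computed rates match the figures quoted in the article; they are not the study's raw data):

```python
# Sketch: the confusion-matrix arithmetic behind the quoted figures.
# tp/fp/fn/tn counts are invented for illustration.

def classifier_metrics(tp, fp, fn, tn):
    """Return precision, false-positive rate, and false-negative rate."""
    precision = tp / (tp + fp)  # of those flagged at risk, how many develop PTSD
    fpr = fp / (fp + tn)        # non-PTSD cases wrongly flagged
    fnr = fn / (fn + tp)        # PTSD cases the model missed
    return precision, fpr, fnr

# Hypothetical counts consistent with the article's 90% / 5% / 29% figures
precision, fpr, fnr = classifier_metrics(tp=63, fp=7, fn=26, tn=133)
print(f"precision={precision:.2f}, FPR={fpr:.2f}, FNR={fnr:.2f}")
# -> precision=0.90, FPR=0.05, FNR=0.29
```

Note that "90 percent of those predicted to be at high risk developed PTSD" is a statement about precision, not sensitivity; the 29 percent false-negative figure shows the two can diverge substantially.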
Image source: Külli Kittus/Unsplash
Schultebraucks looks forward to more testing as the researchers continue to refine their algorithm and to instill confidence in the approach among ED clinicians: "Because previous models for predicting PTSD risk have not been validated in independent samples like our model, they haven't been adopted in clinical practice." She expects that, "Testing and validation of our model in larger samples will be necessary for the algorithm to be ready-to-use in the general population."
"Currently only 7% of level-1 trauma centers routinely screen for PTSD," notes Schultebraucks. "We hope that the algorithm will provide ED clinicians with a rapid, automatic readout that they could use for discharge planning and the prevention of PTSD." She envisions the algorithm being implemented in the future as a feature of electronic medical records.
The researchers also plan to test their algorithm at predicting PTSD in people whose traumatic experiences come in the form of health events such as heart attacks and strokes, as opposed to visits to the emergency department.
What would it be like to experience the 4th dimension?
Physicists have understood, at least theoretically, that there may be higher dimensions beyond our normal three. The first clue came in 1905, when Einstein developed his theory of special relativity. By dimensions, of course, we're talking about length, width, and height. Generally speaking, when we talk about a fourth dimension, it's considered space-time. But here physicists mean a spatial dimension beyond the normal three — not a parallel universe, which is what extra dimensions are often mistaken for in popular sci-fi shows.
Vaccines find more success in development than any other kind of drug, but have been relatively neglected in recent decades.
Vaccines are more likely to get through clinical trials than any other type of drug — but have been given relatively little pharmaceutical industry support during the last two decades, according to a new study by MIT scholars.