New machine-learning algorithms from Columbia University detect cognitive impairment in older drivers.
An older person's cognitive health is not always obvious. Cognitive impairment and dementia develop gradually, and a person may be unaware of their advance. During this subtle transition, such a person may continue living as they always have, going about their business at home and behind the wheel. But driving with undetected impairment can lead to a dangerous car accident.
So, researchers from Columbia University have announced the development of AI algorithms that can detect mild cognitive impairment and dementia in older people based on the way they drive. The authors report in the journal Geriatrics that their algorithm is 88 percent accurate.
"Driving is a complex task involving dynamic cognitive processes and requiring essential cognitive functions and perceptual motor skills," says senior author Guohua Li, professor of epidemiology. "Our study indicates that naturalistic driving behaviors can be used as comprehensive and reliable markers for mild cognitive impairment and dementia."
Random forest model
The algorithms the researchers developed were based on a common AI statistical method involving "decision trees" that form a "random forest model." The most successful algorithm, according to lead author Sharon Di, associate professor of civil engineering, was based on "variables derived from the naturalistic driving data and basic demographic characteristics, such as age, sex, race/ethnicity and education level."
Decision trees are often used in memes in which answering "yes" or "no" regarding some attribute leads you down a path to another question, until you ultimately arrive at a final conclusion.
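A decision tree used for classification works the same way, except the questions and thresholds are learned from data. Here is a minimal Python sketch of the idea; the variables and split points are entirely made up for illustration and do not reflect the study's learned trees:

```python
def toy_decision_tree(age, pct_trips_near_home, hard_braking_events):
    """A hand-written toy decision tree with made-up thresholds.

    A real random forest learns hundreds of such trees from data and
    averages their votes; nothing here reflects the study's actual splits.
    """
    if age >= 75:                          # first yes/no question
        if pct_trips_near_home >= 0.9:     # mostly short, local trips?
            return "flag for cognitive screening"
        return "no flag"
    if hard_braking_events >= 5:           # frequent hard braking?
        return "flag for cognitive screening"
    return "no flag"

print(toy_decision_tree(age=78, pct_trips_near_home=0.95, hard_braking_events=1))
```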
Data used in the study
The algorithm was developed using data collected by the Longitudinal Research on Aging Drivers (LongROAD) study, sponsored by the AAA Foundation for Traffic Safety. The data came from in-vehicle recording devices that captured the driving behaviors of 2,977 participants from August 2015 through March 2019. When the project began, the motorists' ages ranged from 65 to 79 years. From the raw data, the authors of the new study derived 29 behavioral variables, which they used to develop cognitive profiles of the drivers.
The researchers then developed a series of machine-learning models to predict cognitive issues, with differing success rates. Models based on driving variables alone were just 66 percent accurate, and models based on demographic characteristics alone only 29 percent accurate, but combining the two sets of variables produced an accuracy rate of 88 percent.
The researchers also explored the validity of individual factors as predictors of cognitive issues. In order of most reliable to least reliable, they were: (1) age; (2) percentage of trips traveled within 15 miles of home; (3) race/ethnicity; (4) minutes per round trip; and (5) number of hard braking events.
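To make this approach concrete, here is a hedged sketch of the same kind of analysis using scikit-learn's RandomForestClassifier on synthetic data. The column names echo variables mentioned in the article, but the data, labels, and resulting scores are placeholders, not the study's:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "age": rng.integers(65, 80, n),
    "pct_trips_within_15mi": rng.random(n),
    "minutes_per_round_trip": rng.normal(40, 15, n),
    "hard_braking_events": rng.poisson(2, n),
    "education_years": rng.integers(8, 21, n),
})
y = rng.integers(0, 2, n)  # placeholder cognitive-status labels

driving = ["pct_trips_within_15mi", "minutes_per_round_trip", "hard_braking_events"]
demographic = ["age", "education_years"]

# Compare driving-only, demographic-only, and combined feature sets.
for name, cols in [("driving only", driving),
                   ("demographic only", demographic),
                   ("combined", driving + demographic)]:
    Xtr, Xte, ytr, yte = train_test_split(X[cols], y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
    print(f"{name}: accuracy = {model.score(Xte, yte):.2f}")

# Rank predictors, analogous to the reliability ordering reported above.
full = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for col, imp in sorted(zip(X.columns, full.feature_importances_), key=lambda t: -t[1]):
    print(f"{col}: importance = {imp:.3f}")
```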
Li is hopeful that his team's work can help keep roadways and older drivers safe. "If validated," he says, "the algorithms developed in this study could provide a novel, unobtrusive screening tool for early detection and management of mild cognitive impairment and dementia in older drivers."
By measuring a person's movements and poses, smart clothes could be used for athletic training, rehabilitation, or health monitoring.
In recent years there have been exciting breakthroughs in wearable technologies, like smartwatches that can monitor your breathing and blood oxygen levels.
But what about a wearable that can detect how you move as you do a physical activity or play a sport, and could potentially even offer feedback on how to improve your technique?
And, as a major bonus, what if the wearable were something you'd already be wearing anyway, like a shirt or a pair of socks?
That's the idea behind a new set of MIT-designed clothes that use special fibers to sense a person's movement via touch. Among other things, the researchers showed that their clothes can determine whether someone is sitting, walking, or doing particular poses.
The group from MIT's Computer Science and Artificial Intelligence Lab (CSAIL) says that their clothes could be used for athletic training and rehabilitation. With patients' permission, they could even help passively monitor the health of residents in assisted-care facilities and determine if, for example, someone has fallen or is unconscious.
The researchers have developed a range of prototypes, from socks and gloves to a full vest. The team's "tactile electronics" use a mix of more typical textile fibers alongside a small amount of custom-made functional fibers that sense pressure from the person wearing the garment.
According to CSAIL graduate student Yiyue Luo, a key advantage of the team's design is that, unlike many existing wearable electronics, theirs can be incorporated into traditional large-scale clothing production. The machine-knitted tactile textiles are soft, stretchable, breathable, and can take a wide range of forms.
"Traditionally it's been hard to develop a mass-production wearable that provides high-accuracy data across a large number of sensors," says Luo, lead author on a new paper about the project that is appearing in this month's edition of Nature Electronics. "When you manufacture lots of sensor arrays, some of them will not work and some of them will work worse than others, so we developed a self-correcting mechanism that uses a self-supervised machine learning algorithm to recognize and adjust when certain sensors in the design are off-base."
The team's clothes have a range of capabilities. Their socks predict motion by looking at how different sequences of tactile footprints correlate to different poses as the user transitions from one pose to another. The full-sized vest can also detect the wearer's pose, activity, and the texture of the surfaces it contacts.
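The paper doesn't give its exact model here, but the sock idea of mapping short windows of tactile frames to pose labels can be sketched with an off-the-shelf classifier. Everything below, from the window length to the labels, is a synthetic placeholder:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# 300 windows, each of 5 consecutive 16x16 tactile frames, flattened.
windows = rng.random((300, 5 * 16 * 16))
poses = rng.integers(0, 3, 300)  # placeholder labels: 0=stand, 1=walk, 2=lean

clf = KNeighborsClassifier(n_neighbors=5).fit(windows[:250], poses[:250])
print("predicted poses:", clf.predict(windows[250:255]))
```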
The authors imagine a coach using the sensor to analyze people's postures and give suggestions on improvement. It could also be used by an experienced athlete to record their posture so that beginners can learn from them. In the long term, they even imagine that robots could be trained to learn how to do different activities using data from the wearables.
"Imagine robots that are no longer tactilely blind, and that have 'skins' that can provide tactile sensing just like we have as humans," says corresponding author Wan Shou, a postdoc at CSAIL. "Clothing with high-resolution tactile sensing opens up a lot of exciting new application areas for researchers to explore in the years to come."
The paper was co-written by MIT professors Antonio Torralba, Wojciech Matusik, and Tomás Palacios, alongside PhD students Yunzhu Li, Pratyusha Sharma, and Beichen Li; postdoc Kui Wu; and research engineer Michael Foshey.
The work was partially funded by Toyota Research Institute.
The simulation hypothesis is fun to talk about, but believing it requires an act of faith.
- The simulation hypothesis posits that everything we experience was coded by an intelligent being, and we are part of that computer code.
- But we cannot accurately reproduce natural laws with computer simulations.
- Faith is fine, but science requires evidence and logic.
[Note: The following is a transcript of the video embedded at the bottom of this article.]
I quite like the idea that we live in a computer simulation. It gives me hope that things will be better on the next level. Unfortunately, the idea is unscientific. But why do some people believe in the simulation hypothesis? And just exactly what's the problem with it? That's what we'll talk about today.
According to the simulation hypothesis, everything we experience was coded by an intelligent being, and we are part of that computer code. That we live in some kind of computation is, in and of itself, not unscientific. For all we currently know, the laws of nature are mathematical, so you could say the universe is really just computing those laws. You may find this terminology a little weird, and I would agree, but it's not controversial. The controversial bit about the simulation hypothesis is that it assumes there is another level of reality where someone or some thing controls what we believe are the laws of nature, or even interferes with those laws.
The belief in an omniscient being that can interfere with the laws of nature, but for some reason remains hidden from us, is a common element of monotheistic religions. But those who believe in the simulation hypothesis argue they arrived at their belief by reason. The philosopher Nick Boström, for example, claims it's likely that we live in a computer simulation based on an argument that, in a nutshell, goes like this. If there are a) many civilizations, and these civilizations b) build computers that run simulations of conscious beings, then c) there are many more simulated conscious beings than real ones, so you are likely to live in a simulation.
Elon Musk is among those who have bought into it. He has said "it's most likely we're in a simulation." And even Neil deGrasse Tyson gave the simulation hypothesis "better than 50-50 odds" of being correct.
Maybe you're now rolling your eyes because, come on, let the nerds have some fun, right? And, sure, some part of this conversation is just intellectual entertainment. But I don't think popularizing the simulation hypothesis is entirely innocent fun. It's mixing science with religion, which is generally a bad idea, and, really, I think we have better things to worry about than that someone might pull the plug on us. I dare you!
But before I explain why the simulation hypothesis is not a scientific argument, I have a general comment about the difference between religion and science. Take an example from Christian faith, like Jesus healing the blind and lame. It's a religious story, but not because it's impossible to heal blind and lame people. One day we might well be able to do that. It's a religious story because it doesn't explain how the healing supposedly happens. The whole point is that the believers take it on faith. In science, in contrast, we require explanations for how something works.
Let us then have a look at Boström's argument. Here it is again. If there are many civilizations that run many simulations of conscious beings, then you are likely to be simulated.
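The probabilistic core of the argument can be written in one line. This is my paraphrase of the indifference step, not Boström's own notation:

```latex
% If there are N_sim simulated and N_real unsimulated conscious beings,
% and you cannot tell which kind you are, indifference gives
P(\text{you are simulated}) = \frac{N_{\mathrm{sim}}}{N_{\mathrm{sim}} + N_{\mathrm{real}}},
% which approaches 1 whenever N_sim vastly exceeds N_real.
```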
First of all, it could be that one or both of the premises is wrong. Maybe there aren't any other civilizations, or they aren't interested in simulations. That wouldn't make the argument wrong of course; it would just mean that the conclusion can't be drawn. But I will leave aside the possibility that one of the premises is wrong because really I don't think we have good evidence for one side or the other.
The point I have seen people criticize most frequently about Boström's argument is that he just assumes it is possible to simulate human-like consciousness. We don't actually know that this is possible. However, in this case it would require explanation to assume that it is not possible. That's because, for all we currently know, consciousness is simply a property of certain systems that process large amounts of information. It doesn't really matter what physical system does that information processing. It could be neurons, or it could be transistors, or it could be transistors believing they are neurons. So, I don't think simulating consciousness is the problematic part.
The problematic part of Boström's argument is that he assumes it is possible to reproduce all our observations using not the natural laws that physicists have confirmed to extremely high precision, but using a different, underlying algorithm, which the programmer is running. I don't think that's what Boström meant to do, but it's what he did. He implicitly claimed that it's easy to reproduce the foundations of physics with something else.
But nobody presently knows how to reproduce General Relativity and the Standard Model of particle physics from a computer algorithm running on some sort of machine. You can approximate the laws that we know with a computer simulation – we do this all the time – but if that were how nature actually worked, we could see the difference. Indeed, physicists have looked for signs that natural laws really proceed step by step, like in a computer code, but their search has come up empty-handed. It's possible to tell the difference because attempts to algorithmically reproduce natural laws are usually incompatible with the symmetries of Einstein's theories of Special and General Relativity. I'll leave you a reference in the info below the video. The bottom line is that it's not easy to outdo Einstein.
It also doesn't help, by the way, if you assume that the simulation would run on a quantum computer. Quantum computers, as I have explained earlier, are special purpose machines. Nobody currently knows how to put General Relativity on a quantum computer.
A second issue with Boström's argument is that, for it to work, a civilization needs to be able to simulate a lot of conscious beings, and these conscious beings will themselves try to simulate conscious beings, and so on. This means you have to compress the information that we think the universe contains. Boström therefore has to assume that it's somehow possible to not care much about the details in some parts of the world where no one is currently looking, and to just fill them in if someone looks.
Again though, he doesn't explain how this is supposed to work. What kind of computer code can actually do that? What algorithm can identify conscious subsystems and their intention and then quickly fill in the required information without ever producing an observable inconsistency? That's a much more difficult issue than Boström seems to appreciate. You cannot in general just throw away physical processes on short distances and still get the long distances right.
Climate models are an excellent example. We don't currently have the computational capacity to resolve distances below something like 10 kilometers or so. But you can't just throw away all the physics below this scale. This is a non-linear system, so the information from the short scales propagates up into large scales. If you can't compute the short-distance physics, you have to suitably replace it with something. Getting this right even approximately is a big headache. And the only reason climate scientists do get it approximately right is that they have observations which they can use to check whether their approximations work. If you only have a simulation, like the programmer in the simulation hypothesis, you can't do that.
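How strongly short-scale details feed back into the large scale is easy to see in a deliberately tiny toy system, the logistic map (not a climate model, just an illustration of non-linear sensitivity): two runs that differ only in the tenth decimal place soon disagree completely.

```python
# Two logistic-map runs; the "coarse" one drops detail at the 10th decimal.
x_fine, x_coarse = 0.3, 0.3 + 1e-10
for step in range(60):
    x_fine = 4 * x_fine * (1 - x_fine)
    x_coarse = 4 * x_coarse * (1 - x_coarse)
print(f"after 60 steps: {x_fine:.4f} vs {x_coarse:.4f}")
```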
And that's my issue with the simulation hypothesis. Those who believe it make, maybe unknowingly, really big assumptions about what natural laws can be reproduced with computer simulations, and they don't explain how this is supposed to work. But finding alternative explanations that match all our observations to high precision is really difficult. The simulation hypothesis, therefore, just isn't a serious scientific argument. This doesn't mean it's wrong, but it means you'd have to believe it because you have faith, not because you have logic on your side.
Republished with permission of Dr. Sabine Hossenfelder. The original article, "The Simulation Hypothesis is Pseudoscience," is here.
Using machine-learning technology, the genealogy company MyHeritage enables users to animate static images of their relatives.
- Deep Nostalgia uses machine learning to animate static images.
- The AI can animate images by "looking" at a single facial image, and the animations include movements such as blinking, smiling and head tilting.
- As deepfake technology becomes increasingly sophisticated, some are concerned about how bad actors might abuse the technology to manipulate the public.
A new service gives new life to the past by using artificial intelligence to convert static images into moving videos.
Called Deep Nostalgia, the service creates animations by using deep learning to analyze a single facial photo. Then, the system animates the facial image through a "driver" — a pre-determined sequence of movements and gestures, like blinking, smiling and head-turning. The process is completely automated, and the service enhances the images to make the animations run more smoothly.
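The driver idea can be caricatured in a few lines. This toy sketch just applies a pre-set sequence of rotations to one photo to build a GIF; real systems like Deep Nostalgia use learned facial warping rather than whole-image transforms, and the file names below are placeholders:

```python
from PIL import Image

# A pre-determined "driver" sequence: head-tilt angles in degrees.
driver = [0, 2, 4, 2, 0, -2, -4, -2, 0]

photo = Image.open("ancestor.jpg")  # placeholder input photo
frames = [photo.rotate(a, resample=Image.Resampling.BICUBIC) for a in driver]
frames[0].save("animated.gif", save_all=True,
               append_images=frames[1:], duration=100, loop=0)
```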
Deep Nostalgia was launched in February by the Israeli genealogy company MyHeritage, and some of its early results are impressive.
But that's not to say the animations are perfect. As with most deep-fake technology, there's still an uncanny air to the images, with some of the facial movements appearing slightly unnatural. What's more, Deep Nostalgia is only able to create deepfakes of one person's face from the neck up, so you couldn't use it to animate group photos, or photos of people doing any sort of physical activity.
But for a free deep-fake service, Deep Nostalgia is pretty impressive, especially considering you can use it to create deepfakes of any face, human or not.
[Embedded tweets: users sharing their Deep Nostalgia animations, including one made from the Warcraft game cover.]
So, is creating deepfakes of long-dead people a bit creepy? Some people seem to think so.
"Some people love the feature with Deep Nostalgia ™ and consider it magical while others think it is scary and dislike it," My Heritage wrote on its website. "In fact, the results can be controversial and it is difficult to be indifferent to this technology. We invite you to create movies using this feature and share them on social media to see what your friends and relatives think. This feature is intended for nostalgic use, that is, to give life back to beloved ancestors."
Deep Nostalgia isn't the first project to create deepfakes from single images. In 2019, researchers working at the Samsung AI Center in Moscow published a paper describing how machine-learning techniques can produce deepfakes after "looking" at only one or a few images. Using a framework known as a generative adversarial network, the researchers trained a pair of computer models to compete with each other to create convincing deepfakes.
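The competition the researchers set up can be sketched at toy scale. Below is a minimal generative adversarial network in PyTorch that learns a shifted 2-D Gaussian; it shows the generator/discriminator training loop, nothing like the scale or architecture of the Samsung face models:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + 2.0  # "real" data: shifted Gaussian
    fake = G(torch.randn(64, 2))           # generator's forgeries

    # The discriminator learns to tell real from fake...
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # ...while the generator learns to fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print("mean of generated samples:", G(torch.randn(1000, 2)).mean(dim=0))
```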
While the results from the Samsung researchers were impressive, the Deep Nostalgia project shows how deepfake technology is advancing at a rapid pace. As these tools have become increasingly sophisticated, media experts have raised concerns about how bad actors might use deepfakes and "cheap fakes" to manipulate the public.
MyHeritage seemed to sense Deep Nostalgia's potential for abuse, writing:
"Please use this feature on your own historical photos and not on photos of living people without their consent."
Light-emitting tattoos could indicate dehydration in athletes or health conditions in hospital patients.
- Researchers at UCL and IIT have created a temporary tattoo that contains the same OLED technology that is used in TVs and smartphones.
- This technology has already been successfully applied to various materials including glass, food items, plastic, and paper packaging.
- This advance in technology isn't just about aesthetics. "In healthcare, they could emit light when there is a change in a patient's condition - or, if the tattoo was turned the other way into the skin, they could potentially be combined with light-sensitive therapies to target cancer cells, for instance," explains senior author Franco Cacialli of UCL.
Scientists at University College London (UCL) and the IIT (Istituto Italiano di Tecnologia) have created a temporary tattoo that contains the same light-emitting technology used in TVs and smartphone screens.
The technology uses organic light-emitting diodes (OLEDs) and is applied in the same way as simple water-transfer tattoos. The OLEDs are fabricated onto a temporary tattoo paper and then transferred to a new surface by being pressed onto it and dabbed with water.
According to the research, the OLED devices are 2.3 micrometers thick in total, less than one 400th of a millimeter and about one-third the diameter of a single red blood cell. The device consists of an electroluminescent polymer (a polymer that emits light when an electric field is applied) sandwiched between electrodes. An insulating layer then separates the electrodes from the commercial tattoo paper.
This process has already been successfully applied to various materials.
Once the research team had perfected the technology, they applied the tattoo-able OLEDs (which emit green light) onto various surfaces including a pane of glass, a plastic bottle, an orange, and paper packaging. The first OLEDs were used in a flatscreen television more than 20 years ago, and now, through this proof-of-concept study, "smart tattoos" may be a thing of the (very near) future.
Why "smart tattoos" could be beneficial
Decorative body art is perhaps the most obvious use of light-emitting tattoo technology, and the world of tattoo art and design could see a huge surge in exciting new trends based on it.
It's not just about looks—this approach provides a quick and easy method of transferring OLEDs onto practically any surface.
OLEDs are used to create digital displays in devices such as television screens, computer monitors, and smartphones. While some may confuse OLED with LED, the two are quite different: OLED pixels emit their own visible light and can therefore be used without a backlight. The ability to transfer OLEDs onto virtually any surface could be useful in many different applications and settings.
Light-emitting tattoos could be used to indicate (and potentially even treat) various health conditions in the future.
OLED tattoos could eventually be combined with other tattoo electronics to, for instance, emit light when an athlete is dehydrated, or when a person has been exposed to too much sun and risks sunburn.
"In healthcare, they could emit light when there is a change in a patient's condition - or, if the tattoo was turned the other way into the skin, they could potentially be combined with light-sensitive therapies to target cancer cells, for instance." - Professor Franco Cacialli (UCL)
Similarly, this technology could be used on the packaging of various items to give us more information about them.
For example, OLEDs could be tattooed onto fruit packaging to signal when the product is past its expiration date or will soon become inedible.
In reality, creating light-emitting tattoo technology doesn't have to be expensive.
Professor Franco Cacialli explained to EurekAlert!: "The tattooable OLEDs that we have demonstrated for the first time can be made at scale and very cheaply. They can be combined with other forms of tattoo electronics for a very wide range of possible uses. These could be for fashion - for instance, providing glowing tattoos and light-emitting fingernails. In sports, they could be combined with a sweat sensor to signal dehydration."
"Our proof-of-concept study is the first step. Future challenges will include encapsulating the OLEDs as much as possible to stop them from degrading quickly through contact with air, as well as integrating the device with a battery or supercapacitor."