New machine-learning algorithms from Columbia University detect cognitive impairment in older drivers.
An older person's cognitive health is not always obvious. Cognitive impairment and dementia manifest gradually, and a person may be unaware of their progression. During this subtle transition, such a person may continue living as they always have, going about their business at home and behind the wheel. But this could lead to a dangerous car accident.
So, researchers from Columbia University have announced the development of AI algorithms that can detect mild cognitive impairment and dementia in older people based on the way they drive. The authors report in the journal Geriatrics that their algorithm is 88 percent accurate.
"Driving is a complex task involving dynamic cognitive processes and requiring essential cognitive functions and perceptual motor skills," says senior author Guohua Li, professor of epidemiology. "Our study indicates that naturalistic driving behaviors can be used as comprehensive and reliable markers for mild cognitive impairment and dementia."
Random forest model
The algorithms the researchers developed were based on a common AI statistical method involving "decision trees" that form a "random forest model." The most successful algorithm, according to lead author Sharon Di, associate professor of civil engineering, was based on "variables derived from the naturalistic driving data and basic demographic characteristics, such as age, sex, race/ethnicity and education level."
Decision trees are familiar from internet memes in which answering "yes" or "no" about some attribute leads you down a path to the next question, ultimately arriving at a final conclusion.
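To make the idea concrete, here is a toy sketch (my own illustration, not the study's actual model) in which each "tree" is a chain of yes/no questions about a driver's data, and the "forest" takes a majority vote across the trees. All variable names and thresholds are invented:

```python
# Toy random-forest-style classifier: each tree asks a chain of yes/no
# questions, and the forest takes a majority vote. Thresholds are invented.

def tree_a(driver):
    # Question path: age -> share of trips taken close to home
    if driver["age"] > 75:
        return driver["near_home_trip_pct"] > 0.8  # mostly short, local trips
    return False

def tree_b(driver):
    # Question path: hard braking events -> trip duration
    if driver["hard_brakes_per_trip"] > 2:
        return driver["minutes_per_trip"] > 45
    return False

def tree_c(driver):
    # A very shallow tree: age alone
    return driver["age"] > 78

def forest_predicts_impairment(driver):
    # Majority vote of the individual trees, as in a random forest
    votes = [tree_a(driver), tree_b(driver), tree_c(driver)]
    return sum(votes) >= 2

driver = {"age": 79, "near_home_trip_pct": 0.9,
          "hard_brakes_per_trip": 1, "minutes_per_trip": 20}
print(forest_predicts_impairment(driver))  # True: trees A and C vote yes
```

The point of averaging many such trees, each trained on a random subset of the data, is that errors made by individual trees tend to cancel out in the vote.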
Data used in the study
The algorithm was developed using data from the Longitudinal Research on Aging Drivers (LongROAD) study, sponsored by the AAA Foundation for Traffic Safety. The data came from in-vehicle recording devices that captured the driving behaviors of 2,977 participants from August 2015 through March 2019. When the project began, the motorists' ages ranged from 65 to 79 years. From the raw data, the authors of the new study derived 29 behavioral variables, which they used to develop cognitive profiles of the drivers.
Credit: Zoran Zeremski/Adobe Stock
The researchers then developed a series of machine-learning models to predict cognitive issues, with differing success rates. Models based on driving variables alone were just 66 percent accurate, and models based on demographic characteristics alone only 29 percent, but combining the two sets of variables produced an accuracy rate of 88 percent.
The researchers also explored the validity of individual factors as predictors of cognitive issues. In order of most reliable to least reliable, they were: (1) age; (2) percentage of trips traveled within 15 miles of home; (3) race/ethnicity; (4) minutes per round trip; and (5) number of hard braking events.
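The study doesn't specify exactly how it ranked these factors, but one standard way to gauge an individual predictor's reliability is permutation importance: scramble one feature's values and measure how much the model's accuracy drops. A minimal sketch with invented data:

```python
import random

def accuracy(model, rows, labels):
    # Fraction of rows the model classifies correctly
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature, seed=0):
    # Importance = accuracy lost when this feature's values are shuffled
    base = accuracy(model, rows, labels)
    shuffled = [dict(r) for r in rows]
    values = [r[feature] for r in shuffled]
    random.Random(seed).shuffle(values)
    for r, v in zip(shuffled, values):
        r[feature] = v
    return base - accuracy(model, shuffled, labels)

# Toy classifier that looks only at age, so "near_home_pct" should score 0
model = lambda r: r["age"] > 75
rows = [{"age": 80, "near_home_pct": 0.9}, {"age": 70, "near_home_pct": 0.2},
        {"age": 77, "near_home_pct": 0.8}, {"age": 68, "near_home_pct": 0.5}]
labels = [True, False, True, False]

print(permutation_importance(model, rows, labels, "near_home_pct"))  # 0.0
print(permutation_importance(model, rows, labels, "age"))  # may exceed 0
```

A feature the model never consults loses nothing when shuffled, which is why it ranks at the bottom.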
Li is hopeful that his team's work can help keep roadways and older drivers safe. "If validated," he says, "the algorithms developed in this study could provide a novel, unobtrusive screening tool for early detection and management of mild cognitive impairment and dementia in older drivers."
Can spacekime help us make headway on some of the most pernicious inconsistencies in physics?
- Our linear model of time may be holding back scientific progress.
- Spacekime theory can help us better understand the development of diseases, financial and environmental events, and even the human brain.
- This theory could help us better utilize big data, develop AI, and even resolve inconsistencies in physics.
We take for granted the Western concept of linear time. In ancient Greece, time was cyclical, and if the Big Bounce theory is true, the Greeks were right. In Buddhism, there is only the eternal now; both the past and the future are illusions. Meanwhile, the Amondawa people of the Amazon, a group that first made contact with the outside world in 1986, have no abstract concept of time. While we think we know time pretty well, some scientists believe our linear model hobbles scientific progress. In this view, we are missing whole dimensions of time, and our limited perception could be the last obstacle to a sweeping theory of everything.
Theoretical physicist Itzhak Bars of the University of Southern California in Los Angeles is the most famous scientist with such a hypothesis, known as two-time physics. Here, time is 2D, visualized as a curved plane interwoven into the fabric of the "normal" dimensions: up-down, left-right, and backward-forward. While the hypothesis is over a decade old, Bars isn't the only scientist with such an idea. What's different about spacekime theory is that it takes a data-analytics approach rather than a physics one. And while it posits that there are at least two dimensions of time, it allows for up to five.
In the spacekime model, space is 5D. Besides the ones we normally encounter, the extra dimensions are so infinitesimally small, we never notice them. This relates to the Kaluza–Klein theory developed in the early 20th century, which stated that there might be an extra, microscopic dimension of space. In this view, space would be curved like the surface of Earth. And like Earth, those who travel the entire distance would, eventually, loop back to their place of origin.
Kaluza-Klein theory unified electromagnetism and gravity, but wasn't accepted at the time, although it did help in the search for quantum gravity. The concept of additional dimensions was revived in the 1990s with Paul Wesson's Space-Time-Matter Consortium. Today, proponents of superstring theory say there may be as many as 10 different dimensions, including nine of space and one of time.
The Spacekime model
Spacekime theory was developed by two data scientists. Dr. Ivo Dinov is the director of the University of Michigan's Statistics Online Computational Resource (SOCR), as well as a professor of Health Behavior and Biological Sciences and of Computational Medicine and Bioinformatics. He is an expert in "mathematical modeling, statistical analysis, computational processing, scientific visualization of large datasets (Big Data) and predictive health analytics." His research has focused on mathematical modeling, statistical inference, and biomedical computing.
His colleague, Dr. Milen Velchev Velev, is an associate professor at the Prof. Dr. A. Zlatarov University in Bulgaria. He studies relativistic mechanics in multiple time dimensions, and his interests include "applied mathematics, special and general relativity, quantum mechanics, cosmology, philosophy of science, the nature of space and time, chaos theory, mathematical economics, and micro- and macroeconomics."
Drs. Dinov and Velev began developing spacekime theory around four or five years ago, while working with big data in the healthcare field. "We started looking at data that intrinsically has a temporal dimension to it," Dr. Dinov told me during a video chat. "It's called longitudinal or time-varying data, longitudinal time variance—it has many, many names. This is data that varies with time. In biomedicine, this is the de facto standard data. All big health data is characterized by space, time, phenotypes, genotypes, clinical assessments, and so forth."
A better way to manage big data
"We started asking big questions," Dinov said. "Why are our models not really fitting too well? Why do we need so many observations? And then, we started playing around with time. We started digging and experimenting with various things. And then we realized two important facts.
"Number one, if we use what's called color-coded representations of the complex plane, we can define spacekime, or higher dimensional spacetime, in such a way that it agrees with the common observations that we make in (the longitudinal time series in) ordinary spacetime. That agreement was very important to us, because it basically says, yes, the higher dimensional theory does not contradict our common observations.
"The second realization was that, since this extra dimension of time is imperceptible, we needed to approximate, model, or estimate, one of the unobservable time characteristics, which we call the kime phase. After about a year, we discovered that there is a mathematically elegant tool called the Laplace Transform that allows us to analytically represent time series data as kime-surfaces. Turns out, the spacekime mathematical manifold is a natural, higher dimensional extension of classical Minkowski, four-dimensional spacetime."
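To illustrate the tool Dinov mentions (a numerical sketch of my own, not the authors' code): the Laplace transform maps a time series f(t) into a function F(s) of a complex variable s, so evaluating F over a grid of complex s values yields a surface over the complex plane, which is the flavor of the "kime-surface" representation described above.

```python
import cmath
import math

def laplace_transform(samples, dt, s):
    # Left Riemann-sum approximation of F(s) = integral of f(t) * e^(-s*t) dt
    return sum(f * cmath.exp(-s * (k * dt)) * dt
               for k, f in enumerate(samples))

# Sample f(t) = e^(-t), whose exact transform is F(s) = 1 / (s + 1)
dt = 0.001
samples = [math.exp(-k * dt) for k in range(20000)]  # t in [0, 20)

s = complex(0.5, 0.5)
approx = laplace_transform(samples, dt, s)
exact = 1 / (s + 1)
print(abs(approx - exact))  # small: discretization error only
```

Evaluating `laplace_transform` over many values of s traces out the surface; for a sampled real-world series there is no closed form, only this numerical picture.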
Our understanding of the world is becoming more complex, and as a result we have big data to contend with. How do we find new ways to analyze, interpret, and visualize such data? Dinov believes spacekime theory can help in some pretty impressive ways. "The result of this multidimensional manifold generalization is that you can make scientific inferences using smaller data samples. This requires that you have a good model or prior knowledge about the phase distribution," he said. "For instance, we can use spacekime process representation to better understand the development or pathogenesis of certain diseases and to model their distributions.
"Suppose we are evaluating fMRIs of Alzheimer's disease subjects. Assume we know the kime phase distribution for another cohort of patients suffering from amyotrophic lateral sclerosis, Lou Gehrig's disease. The ALS kime-phase distribution could be used for evaluating the Alzheimer's patients" and many other neurodegenerative populations. Dinov also thinks spacekime analytics could help improve political polling, deepen our understanding of complex financial and environmental events, and even illuminate the inner workings of the human brain, all without the huge samples required today to make accurate models or predictions. Spacekime theory even offers opportunities to design novel AI analytical techniques. But it goes beyond that.
The problem of time
Spacekime theory can help us make headway on some of the most pernicious inconsistencies in physics, such as Heisenberg's uncertainty principle and the seemingly irreconcilable rift between quantum physics and general relativity, what's known as "the problem of time."
Dinov wrote that the "approach relies on extending the notions of time, events, particles, and wave functions to complex-time (kime), complex-events (kevents), data, and inference-functions." Essentially, working with two dimensions of time allows you to make inferences about a radius of points associated with a given event. Under this model, Heisenberg's uncertainty principle looks different: because time is a plane, a particle's velocity could be associated with one kime phase and its position with another, rather than both with a single instant.
This idea of hidden dimensions of time is a little like Plato's allegory of the cave or how an X-ray signifies what's underneath, but doesn't convey a 3D image. From a data science perspective, it all comes down to utility. Dinov believes that if we can calculate the true phase dispersion of complex phenomena, we can better understand and control them.
Drs. Dinov and Velev's book on spacekime theory comes out this August. It's called "Data Science: Time Complexity, Inferential Uncertainty, and Spacekime Analytics".
Northwell Health is using insights from website traffic to forecast COVID-19 hospitalizations two weeks in the future.
- The machine-learning algorithm works by analyzing the online behavior of visitors to the Northwell Health website and comparing that data to future COVID-19 hospitalizations.
- The tool, which uses anonymized data, has so far predicted hospitalizations with an accuracy rate of 80 percent.
- Machine-learning tools are helping health-care professionals worldwide better contain and treat COVID-19.
One of the most devastating aspects of the COVID-19 pandemic has been unpredictability. The nation's health systems—especially those in hard-hit areas like New York City—have had to adapt to sudden surges of COVID-19 cases, all while dealing with limited resources, existing patients, and a novel virus that's still not fully understood.
But what if health systems were able to forecast COVID-19 hospitalizations two weeks before they occur? Northwell Health, the largest health care system in New York state, recently deployed a predictive tool that does just that.
Northwell Health's surveillance dashboard predicts COVID-19 hospitalizations using insights from machine learning. In March, Northwell Health's Customer Insights Group developed an algorithm that mines data from online traffic to the Northwell.edu website, which has received more than 20 million visits since then.
The algorithm collects data through 15 different indicators, each of which reflects the online behavior of the website's visitors. For example, the tool analyzes metrics such as the length of time users spend on certain pages, searches for emergency department wait times, and specific symptoms users search for. Combined, this information translates into something like the "public mood" of the website on any given day.
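Northwell hasn't disclosed its actual indicators or weighting, but the basic recipe of combining normalized daily metrics into one composite score can be sketched as follows; the indicator names and numbers here are invented:

```python
def zscore(values):
    # Standardize a series to mean 0, standard deviation 1
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    sd = var ** 0.5 or 1.0  # guard against a constant series
    return [(v - mean) / sd for v in values]

def daily_mood(metrics_by_day):
    # metrics_by_day maps each indicator name to one value per day;
    # the composite "mood" is the per-day average of the standardized series
    normalized = [zscore(series) for series in metrics_by_day.values()]
    days = len(normalized[0])
    return [sum(series[d] for series in normalized) / len(normalized)
            for d in range(days)]

metrics = {
    "er_wait_searches": [10, 12, 30, 55],         # hypothetical daily counts
    "symptom_page_minutes": [3.0, 3.1, 4.5, 6.0], # hypothetical dwell times
}
mood = daily_mood(metrics)
print(mood[-1] > mood[0])  # True: both indicators trend upward
```

Standardizing each indicator first keeps a high-volume metric (page hits) from drowning out a low-volume one (minutes per page).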
Since Northwell Health began using the predictive tool in September, it's predicted COVID-19 hospitalizations with an accuracy of about 80 percent.
To understand how this mood relates to future COVID-19 cases, Northwell Health began comparing its data with a timeline of COVID-19 hospitalizations across its 23 hospitals and nearly 800 outpatient facilities in the metro New York area. This enabled the Customer Insights Group to see patterns of online activity that precede increases or decreases in hospitalizations.
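A minimal sketch of that pattern-finding step, with fabricated numbers: pair each day's activity score with the hospitalization count 14 days later, and measure how strongly the two series track each other.

```python
def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length series
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def lagged_correlation(activity, hospitalizations, lag_days=14):
    # Compare today's activity score with hospitalizations lag_days later
    return pearson(activity[:-lag_days], hospitalizations[lag_days:])

# Fabricated example: hospitalizations echo the activity score 14 days later
activity = list(range(30))
hospitalizations = [max(0, d - 14) * 2 for d in range(30)]
print(round(lagged_correlation(activity, hospitalizations), 3))  # 1.0
```

A strong lagged correlation is what turns the website's "mood" into a two-week forecast.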
"This is really the first tool that I've been exposed to that gives me a sort of guesstimate of what two weeks from now may look like," said Dr. Eric Cruzen, chief medical informatics officer of Northwell's emergency medicine services and chair of the emergency department at Lenox Health Greenwich Village in Manhattan.
"Even if the data can provide an idea of whether to expect an increase, decrease, or stasis, that's valuable. Because every day we're working to estimate what tomorrow's going to bring. Any tool that's going to shed light on that is a good tool in my book."
The value of forecasting
Northwell emergency departments use the dashboard for real-time monitoring.
Credit: Northwell Health
One unique benefit of forecasting COVID-19 hospitalizations is that it allows health systems to better prepare and to manage and allocate resources. For example, if the tool forecast a surge in COVID-19 hospitalizations in two weeks, Northwell Health could begin:
- Making space for an influx of patients
- Moving personal protective equipment to where it's most needed
- Strategically allocating staff during the predicted surge
- Increasing the number of tests offered to asymptomatic patients
The health-care field is increasingly using machine learning. It's already helping doctors develop personalized care plans for diabetes patients, improving cancer screening techniques, and enabling mental health professionals to better predict which patients are at elevated risk of suicide, to name a few applications.
Health systems around the world have already begun exploring how machine learning can help battle the pandemic, including better COVID-19 screening, diagnosis, contact tracing, and drug and vaccine development.
Cruzen said these kinds of tools represent a shift in how health systems can tackle a wide variety of problems.
"Health care has always used the past to predict the future, but not in this mathematical way," Cruzen said. "I think [Northwell Health's new predictive tool] really is a great first example of how we should be attacking a lot of things as we go forward."
Making machine-learning tools openly accessible
Northwell Health has made its predictive tool available for free to any health system that wishes to utilize it.
"COVID is everybody's problem, and I think developing tools that can be used to help others is sort of why people go into health care," Dr. Cruzen said. "It was really consistent with our mission."
Open collaboration is something the world's governments and health systems should be striving for during the pandemic, said Michael Dowling, Northwell Health's president and CEO.
"Whenever you develop anything and somebody else gets it, they improve it and they continue to make it better," Dowling said. "As a country, we lack data. I believe very, very strongly that we should have been and should be now working with other countries, including China, including the European Union, including England and others to figure out how to develop a health surveillance system so you can anticipate way in advance when these things are going to occur."
In all, Northwell Health has treated more than 112,000 COVID patients. During the pandemic, Dowling said he's seen an outpouring of goodwill, collaboration, and sacrifice from the community and the tens of thousands of staff who work across Northwell.
"COVID has changed our perspective on everything—and not just those of us in health care, because it has disrupted everybody's life," Dowling said. "It has demonstrated the value of community, how we help one another."
What lies in store for humanity? Theoretical physicist Michio Kaku explains how different life will be for your descendants—and maybe your future self, if the timing works out.
- Carl Sagan believed humanity needed to become a multi-planet species as an insurance policy against the next huge catastrophe on Earth. Now, Elon Musk is working to see that mission through, starting with a colony of a million humans on Mars. Where will our species go next?
- Theoretical physicist Michio Kaku looks decades into the future and makes three bold predictions about human space travel, the potential of 'brain net', and our coming victory over cancer.
- "[I]n the future, the word 'tumor' will disappear from the English language," says Kaku. "We will have years of warning that there is a colony of cancer cells growing in our body. And our descendants will wonder: How could we fear cancer so much?"
It's hard to stop looking back and forth between these faces and the busts they came from.
- A quarantine project gone wild produces the possibly realistic faces of ancient Roman rulers.
- A designer worked with a machine learning app to produce the images.
- It's impossible to know if they're accurate, but they sure look plausible.
Imaginative as humans are, it's often hard not to see historical figures depicted in black-and-white photos as being somehow of another species. Seeing colorized images can be startling — hey, they look like us — bringing home at last what such people were really like. Maybe that person even looks like someone we know.
The same is true of figures whose appearance we know only from their statues, maybe even more so. We may know their names and something about them, but, again, it's all sort of not quite real. Now cinematographer and virtual-reality designer Daniel Voshart has published amazing, lifelike images of 54 Roman emperors based on their statues. He used machine learning and filled in the (many) remaining blanks with his imagination. While he's careful to point out that his renderings are merely what these individuals may have looked like, they're remarkably plausible, and also remarkably familiar.
Voshart describes the whole thing as a quarantine project that got out of hand, but lots of people are excited about what he's done, and are purchasing posters of his Roman emperors.
How the Roman emperors got faced
Credit: Daniel Voshart
Voshart's imaginings began with an AI/neural-net program called Artbreeder. The freemium online app intelligently generates new images from existing ones and can combine multiple images into…well, who knows. It's addictive — people have so far used it to generate some 72.7 million images, says the site — and it's easy to see how Voshart fell down the rabbit hole.
The Roman emperor project began with Voshart feeding Artbreeder images of 800 busts. Obviously, not all busts have weathered the centuries equally. Voshart told Live Science, "There is a rule of thumb in computer programming called 'garbage in garbage out,' and it applies to Artbreeder. A well-lit, well-sculpted bust with little damage and standard face features is going to be quite easy to get a result." Fortunately, there were multiple busts for some of the emperors, and different angles of busts captured in different photographs.
For the renderings Artbreeder produced, each face required some 15-16 hours of additional input from Voshart, who was left to deduce/guess such details as hair and skin coloring, though in many cases, an individual's features suggested likely pigmentations. Voshart was also aided by written descriptions of some of the rulers.
There's no way to know for sure how frequently Voshart's guesses hit their marks. It is obviously the case, though, that his interpretations look incredibly plausible when you compare one of his emperors to the sculpture(s) from which it was derived.
It's fascinating to feel like you're face-to-face with these ancient and sometimes notorious figures. Here are two examples, along with some of what we think we know about the men behind the faces.
One of numerous sculptures of Caligula, left
Caligula was the third Roman emperor, ruling from AD 37 to 41. His name was actually Gaius Caesar Augustus Germanicus — Caligula is a nickname meaning "Little Boot."
One of the reputed great madmen of history, he was said to have made his horse a consul, to have held conversations with the moon, and to have ravaged his way through his kingdom, including his three sisters. Caligula is known for extreme cruelty and for terrorizing his subjects, and accounts suggest he would deliberately distort his face to surprise and frighten people he wished to intimidate.
A 1928 journal, Studies in Philology, noted that contemporary descriptions of Caligula depicted him as having a "head misshapen, eyes and temples sunken," and "eyes staring and with a glare savage enough to torture." In some sculptures not shown above, his head is a bit acorn-shaped.
One of numerous sculptures of Nero, left
There's a good German word for the face of Nero, the guy famous for fiddling as Rome burned: "Backpfeifengesicht." Properly named Nero Claudius Caesar Augustus Germanicus, he was Rome's fifth emperor, ruling from AD 54 until his suicide in AD 68.
Another Germanicus-family gem, Nero is said to have murdered his own mother, Agrippina the Younger, as well as (maybe) his second wife. As for the fiddling, he was a lover of music and the arts, and there are stories of his charity. And, oh yeah, he may have set the fire as an excuse to rebuild the city center, making it his own.
While it may not be the most historically sound means of assessing an historical personage, Voshart's imagining of Nero does suggest an over-indulged, entitled young man. Backpfeifengesicht.