Collective intelligence out-diagnoses even professionals
The Human Diagnosis Project is building the world's "open medical intelligence" system.
- The Human Diagnosis Project can develop medical diagnoses with startling accuracy.
- The platform combines the knowledge of medical professionals and artificial intelligence.
- The goal of the project is to provide open, readily available high-level guidance and training to health care professionals across the globe.
The world-class Mayo Clinic is often the place patients go for a second opinion on a medical diagnosis. It's a good thing they do. According to a report the clinic issued in 2017, 88 percent of those patients return home with either a completely different diagnosis or a significantly altered one. Only 12 percent receive confirmation of their doctors' original conclusions.
It's hard to overstate the life-and-death importance of medical misdiagnoses, and with all the artificial intelligence and data collection tools out there, you'd think there might be a way to improve on these statistics. That's precisely the goal of the Human Diagnosis Project, or "Human Dx" (a triple pun their site explains): to create the world's open medical intelligence system, a "collective intelligence" that can produce vastly improved diagnostic accuracy.
In early March, JAMA published the results of an experiment conducted by Human Dx in cooperation with Harvard, and the results were impressive. Where 54 individual human medical specialists correctly diagnosed 156 test cases 66.3 percent of the time, collective intelligence achieved an 85.5 percent accuracy rate. Nine medical professionals contributed to the collective intelligence conclusions.
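To see why pooling independent opinions can beat any single diagnostician, here is a toy Monte Carlo sketch — not Human Dx's actual algorithm. Each simulated clinician names the correct diagnosis with the study's individual accuracy of 66.3 percent, otherwise picks one of nine hypothetical distractor diagnoses, and the panel answers by plurality vote:

```python
import random
from collections import Counter

def plurality_diagnosis(n_agents, p_correct, n_wrong=9, rng=random):
    """Each agent independently names the correct diagnosis with
    probability p_correct, otherwise one of n_wrong distractors.
    The panel's answer is the most frequently named diagnosis."""
    votes = Counter()
    for _ in range(n_agents):
        if rng.random() < p_correct:
            votes["correct"] += 1
        else:
            votes[f"wrong-{rng.randrange(n_wrong)}"] += 1
    return votes.most_common(1)[0][0] == "correct"

def accuracy(n_agents, p_correct, trials=20000, seed=0):
    """Estimate panel accuracy over many simulated cases."""
    rng = random.Random(seed)
    hits = sum(plurality_diagnosis(n_agents, p_correct, rng=rng)
               for _ in range(trials))
    return hits / trials

solo = accuracy(1, 0.663)
panel = accuracy(9, 0.663)
print(f"solo: {solo:.3f}, 9-agent panel: {panel:.3f}")
```

Because the clinicians err in different directions, their scattered wrong answers rarely pile up on the same distractor, so the correct diagnosis usually wins the vote — the panel's accuracy climbs well above any individual's.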
Human Dx founder Jayanth Komarneni tells Big Think that, "We can get numbers in the 97th, 98th [percentile], and even — if we have sufficiently large numbers of participants — we can get to super intelligent results. That means that it outperforms 100 percent of individual participants."
About Human Dx
The Human Dx project is a partnership between the social, public, and private sectors — in the U.S., it's a 501(c)(3) not-for-profit/public-benefit corporation. According to Komarneni, Human Dx's business model is to remain as free of cost to users as possible while still generating enough income to be self-sustaining. There are now nearly 20,000 medical professionals in almost 80 countries contributing. Among Human Dx's partners are, as the company states: the American Medical Association, the Association of American Medical Colleges, the American Board of Medical Specialties, and the American Board of Internal Medicine. They're also working in collaboration with researchers at Harvard, Johns Hopkins, the University of California San Francisco, Berkeley, and MIT.
While diagnoses produced by Human Dx do bring together the opinions of multiple medical professionals, it's far from a simple voting system. It incorporates its own massive data set, machine learning, and artificial intelligence in addition to the input from medical professionals to develop its diagnoses. In designing their collective intelligence, says Komarneni, Human Dx had to first re-think the idea of open intelligence itself.
"We believe that open intelligence is the third form of open knowledge," he explains. The first was open source: protocols such as those on which the internet is based, as well as operating systems such as Linux. These protocols enabled the second form, open content: Wikipedia, data libraries, and so on. Open intelligence combines the first two: "And when you think about A.I. in the context of software," says Komarneni, "it really is code which is smartly delivering content to you based on what you put into the system."
The importance of open intelligence is that without it being available at low cost or free, the cost of A.I. is going to be so prohibitive that it'll "exacerbate, as opposed to close, income, health, and other disparities in society," warns Komarneni. Nowhere will the ramifications be more serious than in health care, since "there is nothing we care more about than the well-being of the people we love and ourselves."
How Human Dx collective intelligence works
Collective intelligence in the Human Dx project is not unlike a panel of participants, who are referred to as "agents." Some of these are medical professionals, but they may also include the outputs of other systems. For example, Komarneni mentions that it's entirely possible IBM's Watson could be one of these agents, or even a data set from the National Institutes of Health.
Of course, individual agents, even the human participants, express themselves in their own ways — is a lump "blue" or "blueberry-colored," for example — not to mention that contributions from some agents such as A.I. or datasets may be in the form of raw data. Before any meaningful synthesis of all these opinions can be performed, the first step is to convert them all into a common language of some sort. Human Dx's A.I. uses natural language processing, text prediction, and medical ontologies to perform these translations.
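A minimal sketch of that normalization step, using a hard-coded synonym table as a stand-in for a real medical ontology — the terms and mappings here are illustrative assumptions, not Human Dx's data:

```python
# Hypothetical synonym table standing in for a medical ontology lookup.
CANONICAL = {
    "heart attack": "myocardial infarction",
    "mi": "myocardial infarction",
    "high blood pressure": "hypertension",
    "htn": "hypertension",
}

def normalize(term):
    """Map a free-text finding to a canonical concept name.
    Unknown terms pass through lowercased so they can still be compared."""
    key = term.strip().lower()
    return CANONICAL.get(key, key)

print(normalize("Heart Attack"))  # myocardial infarction
print(normalize("HTN"))           # hypertension
```

In practice this lookup would be replaced by NLP models and ontology services, but the goal is the same: two agents describing the same condition in different words end up contributing to the same bucket.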
Human Dx establishes the capability, or CQ ("clinical quotient"), of each agent. To do this they rank agents' skills using test cases with known diagnoses, including "some of the most wickedly complex cases," says Komarneni. This allows Human Dx to determine how accurate agents' diagnoses can be expected to be, and how heavily they should be weighted against other participants' contributions in solving the current case.
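A toy version of that weighting scheme might look like the following, where agents are first scored on cases with known answers and a new case is then decided by a CQ-weighted vote. The agent names, cases, and diagnoses are made up for illustration; Human Dx's real model is far richer than this sketch:

```python
def estimate_cq(agent_answers, answer_key):
    """Score an agent's accuracy on test cases with known diagnoses."""
    hits = sum(agent_answers[case] == truth
               for case, truth in answer_key.items())
    return hits / len(answer_key)

def weighted_diagnosis(proposals, cq_scores):
    """Each agent proposes a diagnosis for the current case; proposals
    are pooled with each agent's vote weighted by its CQ score."""
    totals = {}
    for agent, dx in proposals.items():
        totals[dx] = totals.get(dx, 0.0) + cq_scores[agent]
    return max(totals, key=totals.get)

# Calibration on three solved cases (all names hypothetical).
answer_key = {"case-1": "influenza", "case-2": "lupus", "case-3": "gout"}
alice = {"case-1": "influenza", "case-2": "lupus", "case-3": "gout"}    # 3/3
bob = {"case-1": "influenza", "case-2": "anemia", "case-3": "sepsis"}   # 1/3
cq = {"alice": estimate_cq(alice, answer_key),
      "bob": estimate_cq(bob, answer_key)}

# On a new case, Alice's proposal outweighs Bob's.
print(weighted_diagnosis({"alice": "pneumonia", "bob": "bronchitis"}, cq))
```

The point of the calibration step is exactly what Komarneni describes: an agent's track record on known cases determines how heavily its opinion counts on unknown ones.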
A.I. joins the panel
At this point, the agents' inputs are synthesized to derive the most likely diagnosis, and this is combined in an A.I. model with all of the aggregated case data that's ever been captured by Human Dx — interactions in the "tens of millions" — including how "lots of other participants over many other cases have solved these cases." This A.I. model then joins the panel in arriving at the final diagnosis.
"And those [agents] combined," says Komarneni, "are how we can get to results that outperform the vast majority of individual participants."
The Harvard and Johns Hopkins studies
The Harvard study published in JAMA is the first public demonstration of the Human Dx system as a diagnostic tool. The study worked with an international cohort of medical students and professionals, and the results were unquestionably amazing. There were 2,069 users working 1,572 cases — again, these were cases with known correct answers — from the Human Dx data set. About 60 percent of the participants were residents or fellows, 20 percent were attending physicians, and another 20 percent were medical students. In the study, as more medical professionals were added to the collective intelligence "panel," up to nine individuals, its accuracy consistently rose. Physicians who weren't specialists in their test-case areas achieved just a 62.5 percent accuracy score.
A previous study published in JAMA in January, and done in cooperation with Johns Hopkins, looked at Human Dx as an automatic platform for assessing the diagnostic abilities of health care professionals and students. That the scores of participants looking at 11,023 case simulations were consistent with their training level shows, in Komarneni's words, "that we provided a valid, quantitative, scalable measure of medical reasoning." While he admits this doesn't sound like a big deal, it is, since it offers a far more accurate and scalable option than current multiple-choice assessments, which have been shown to correspond poorly to real-world diagnostic skills.
The future of health care and Human Dx
Komarneni says that there are basically only two ways to provide global universal health care, a pressing need since, "Almost half the world has no access to essential health services." One way, he says, would be to create a God-like A.I. system to provide health care to everyone, but, "We know that's not going to happen." God-like A.I. is just too hard, since it could require knowing everything about a patient, from the tiniest details — say, the quantum behavior of electrons in mitochondria — to the huge, such as the kind of environment a patient lived in as a child.
In addition, Komarneni says, "In a world where data is locked up in many disparate silos, there isn't going to be a single collective agent. There's going to be a collective of many intelligent agents, both human and machine. The key is how do you integrate intelligence into larger buckets of intelligence that can solve the world's hardest problems."
This is where the Human Dx project, and the second approach, comes in. It actually has two components:
- The first is expanding existing medical professionals' diagnostic accuracy by providing them access to the Human Dx platform and its collective intelligence as a diagnostic tool.
- The second is helping to train new professionals, and Human Dx Training is already offering this on the Human Dx site.
For those concerned with privacy in a system such as Human Dx, Komarneni says it'll be a non-issue, explaining with an example. When two people converse, "We don't have access to the underlying data of each other's minds. We're agents that are interacting with each other to gain relevant and useful information from each other." Similarly, Human Dx's system of interacting agents doesn't require the exposure of patients' personal data. What's shared with Human Dx are the conclusions agents draw from that data, not the data itself. In the case of a dataset operating as an agent, the data would be anonymized.
Human Dx's interest in all this is developing a platform it hopes others find uses for. "We believe we're just building the enabling technology that many other stakeholders could use." As examples, Komarneni imagines, "The VA could implement their own version of this. Kaiser Permanente could implement their own version. Employers could contract with us or with their own insurers. You could even also have individual and group practices use Human Dx software to serve patients directly."
Human Dx is currently looking at ways to open up as much of the project for non-professionals as possible, and they've already made a start: On their home page is a diagnosis cloud — mouse over the various blue bubbles to see different conditions, and then click for further details. In addition, just beneath the cloud is a search field with which you can look up diseases and symptoms.
Richard Feynman once asked a silly question. Two MIT students just answered it.
Here's a fun experiment to try. Go to your pantry and see if you have a box of spaghetti. If you do, take out a noodle. Grab both ends of it and bend it until it breaks in half. How many pieces did it break into? If you got two large pieces and at least one small piece, you're not alone.
But science loves a good challenge
The mystery remained unsolved until 2005, when French scientists Basile Audoly and Sebastien Neukirch won an Ig Nobel Prize, an award given to scientists for real work that is less serious in nature than the discoveries that win Nobel Prizes, for finally determining why this happens. Their paper describing the effect is wonderfully funny to read, as it takes such a banal issue so seriously.
They demonstrated that when a rod is bent past a certain point, such as when spaghetti is snapped in half by bending it at the ends, a "snapback effect" is created. This causes energy to reverberate from the initial break to other parts of the rod, often leading to a second break elsewhere.
While this settled the issue of why spaghetti noodles break into three or more pieces, it didn't establish whether they always had to break this way. The question of whether the snapback could be regulated remained unsettled.
Physicists, being themselves, immediately wanted to break pasta into two pieces using this info
Ronald Heisser and Vishal Patil, two graduate students currently at Cornell and MIT, respectively, read about Feynman's night of noodle snapping in class and were inspired to find out what could be done to make sure the pasta always broke in two.
By placing the noodles in a special machine built for the task and recording the bending with a high-powered camera, the young scientists were able to observe in extreme detail exactly what each change in their snapping method did to the pasta. After breaking more than 500 noodles, they found the solution.
The apparatus the MIT researchers built specifically for the task of snapping hundreds of spaghetti sticks.
(Courtesy of the researchers)
What possible application could this have?
The snapback effect is not limited to uncooked pasta noodles and applies to rods of all sorts. The discovery of how to cleanly break them in two could be applied to future engineering projects.
Likewise, knowing how things fragment and fail is always handy when you're trying to build things. Carbon nanotubes, super-strong cylinders often hailed as the building material of the future, are also rods that can be better understood thanks to this odd experiment.
Sometimes big discoveries can be inspired by silly questions. If it hadn't been for Richard Feynman bending noodles seventy years ago, we wouldn't know what we know now about how energy is dispersed through rods and how to control their fracturing. While not all silly questions will lead to such a significant discovery, they can all help us learn.
The multifaceted cerebellum is large — it's just tightly folded.
- A powerful MRI combined with modeling software results in a totally new view of the human cerebellum.
- The so-called 'little brain' is nearly 80% the size of the cerebral cortex when it's unfolded.
- This part of the brain is associated with a lot of things, and a new virtual map is suitably chaotic and complex.
Just under our brain's cortex and close to our brain stem sits the cerebellum, also known as the "little brain." It's an organ many animals have, and we're still learning what it does in humans. It's long been thought to be involved in sensory input and motor control, but recent studies suggest it also plays a role in a lot of other things, including emotion, thought, and pain. After all, about half of the brain's neurons reside there. But it's so small. Except it's not, according to a new study from San Diego State University (SDSU) published in PNAS (Proceedings of the National Academy of Sciences).
A neural crêpe
A new imaging study led by psychology professor and cognitive neuroscientist Martin Sereno of the SDSU MRI Imaging Center reveals that the cerebellum is actually an intricately folded organ that has a surface area equal in size to 78 percent of the cerebral cortex. Sereno, a pioneer in MRI brain imaging, collaborated with other experts from the U.K., Canada, and the Netherlands.
So what does it look like? Unfolded, the cerebellum is reminiscent of a crêpe, according to Sereno, about four inches wide and three feet long.
The team didn't physically unfold a cerebellum in their research. Instead, they worked with brain scans from a 9.4 Tesla MRI machine, and virtually unfolded and mapped the organ. Custom software was developed for the project, based on the open-source FreeSurfer app developed by Sereno and others. Their model allowed the scientists to unpack the virtual cerebellum down to each individual fold, or "folium."
Study's cross-sections of a folded cerebellum
Image source: Sereno, et al.
A complicated map
Sereno tells SDSU NewsCenter that "Until now we only had crude models of what it looked like. We now have a complete map or surface representation of the cerebellum, much like cities, counties, and states."
That map is a bit surprising, too, in that regions associated with different functions are scattered across the organ in peculiar ways, unlike the cortex where it's all pretty orderly. "You get a little chunk of the lip, next to a chunk of the shoulder or face, like jumbled puzzle pieces," says Sereno. This may have to do with the fact that when the cerebellum is folded, its elements line up differently than they do when the organ is unfolded.
It seems the folded structure of the cerebellum is a configuration that facilitates access to information coming from places all over the body. Sereno says, "Now that we have the first high resolution base map of the human cerebellum, there are many possibilities for researchers to start filling in what is certain to be a complex quilt of inputs, from many different parts of the cerebral cortex in more detail than ever before."
This makes sense if the cerebellum is involved in highly complex, advanced cognitive functions, such as handling language or performing abstract reasoning as scientists suspect. "When you think of the cognition required to write a scientific paper or explain a concept," says Sereno, "you have to pull in information from many different sources. And that's just how the cerebellum is set up."
Bigger and bigger
The study also suggests that the large size of their virtual human cerebellum is likely to be related to the sheer number of tasks with which the organ is involved in the complex human brain. The macaque cerebellum that the team analyzed, for example, amounts to just 30 percent the size of the animal's cortex.
"The fact that [the cerebellum] has such a large surface area speaks to the evolution of distinctively human behaviors and cognition," says Sereno. "It has expanded so much that the folding patterns are very complex."
As the study says, "Rather than coordinating sensory signals to execute expert physical movements, parts of the cerebellum may have been extended in humans to help coordinate fictive 'conceptual movements,' such as rapidly mentally rearranging a movement plan — or, in the fullness of time, perhaps even a mathematical equation."
Sereno concludes, "The 'little brain' is quite the jack of all trades. Mapping the cerebellum will be an interesting new frontier for the next decade."
What happens if we consider welfare programs as investments?
- A recently published study suggests that some welfare programs more than pay for themselves.
- It is one of the first major reviews of welfare programs to measure so many by a single metric.
- The findings will likely inform future welfare reform and encourage debate on how to grade success.
Welfare as an investment
The study, carried out by Nathaniel Hendren and Ben Sprung-Keyser of Harvard University, reviews 133 welfare programs through a single lens. The authors measured each program's "Marginal Value of Public Funds" (MVPF), defined as the ratio of recipients' willingness to pay for a program to its net cost to the government.
A program with an MVPF of one provides precisely as much in net benefits as it costs to deliver those benefits. For an illustration, imagine a program that hands someone a dollar. If getting that dollar doesn't alter their behavior, then the MVPF of that program is one. If it discourages them from working, then the program's cost goes up, as the program causes government tax revenues to fall in addition to costing money upfront; the MVPF drops below one in this case.
Lastly, it is possible that getting the dollar leads the recipient to further their education and get a job that pays more taxes in the future, lowering the cost of the program in the long run and raising the MVPF. The ratio can even hit infinity when a program fully "pays for itself."
While these are only a few examples, they show the pattern: a very high MVPF means a program comes close to paying for itself, a value of one indicates it breaks even, and a value below one shows it costs more money than the direct cost of the benefits would suggest.
After determining the programs' costs from the existing literature and recipients' willingness to pay through statistical analysis, the authors analyzed 133 programs covering social insurance, education and job training, tax and cash transfers, and in-kind transfers. The results show that some programs turn a "profit" for the government, mainly when they are focused on children:
This figure shows the MVPF for a variety of policies alongside the typical age of the beneficiaries. Clearly, programs targeted at children have a higher payoff.
Nathaniel Hendren and Ben Sprung-Keyser
Programs like child health services and K-12 education spending have infinite MVPF values. The authors argue this is because the programs allow children to live healthier, more productive lives and earn more money, which enables them to pay more taxes later. The preschool initiatives examined don't manage this as well and have a lower "profit" rate despite having decent MVPF ratios.
On the other hand, things like tuition deductions for older adults don't make back the money they cost. This is likely for several reasons, not the least of which is that there is less time for the beneficiary to pay the government back in taxes. Disability insurance was likewise "unprofitable," as those collecting it have a reduced need to work and pay less back in taxes.
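The MVPF arithmetic described above can be sketched as a small calculator. This is a simplified reading of the metric, not the authors' code: net cost is taken as the upfront cost minus any change in government revenue the program causes, and a non-positive net cost is reported as an infinite MVPF.

```python
def mvpf(willingness_to_pay, upfront_cost, fiscal_externality=0.0):
    """Marginal Value of Public Funds: willingness to pay / net cost.
    fiscal_externality is the change in government revenue the program
    causes (positive = extra taxes recouped, negative = revenue lost).
    A net cost of zero or less means the program pays for itself,
    conventionally reported as an infinite MVPF."""
    net_cost = upfront_cost - fiscal_externality
    if net_cost <= 0:
        return float("inf")
    return willingness_to_pay / net_cost

# A $1 transfer that changes no behavior: breaks even.
print(mvpf(1.0, 1.0))          # 1.0
# The same dollar discourages work, losing $0.25 in taxes: MVPF < 1.
print(mvpf(1.0, 1.0, -0.25))   # 0.8
# A child program that later recoups more than it cost: MVPF = inf.
print(mvpf(1.0, 1.0, 1.2))     # inf
```

The three calls mirror the dollar-transfer examples in the text: neutral behavior breaks even, discouraged work pushes the ratio below one, and long-run tax gains can make a program pay for itself outright.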