The biggest problem in AI? Machines have no common sense.
Correlation doesn't equal causation — we all know this. Well, except robots.
Dr. Gary Marcus is the director of the NYU Infant Language Learning Center, and a professor of psychology at New York University. He is the author of "The Birth of the Mind," "The Algebraic Mind: Integrating Connectionism and Cognitive Science," and "Kluge: The Haphazard Construction of the Human Mind." Marcus's research on developmental cognitive neuroscience has been published in over forty articles in leading journals, and in 1996 he won the Robert L. Fantz award for new investigators in cognitive development.
Marcus contributed an idea to Big Think's "Dangerous Ideas" blog, suggesting that we should develop Google-like chips to implant in our brains and enhance our memory.
GARY MARCUS: The dominant vision in the field right now is: collect a lot of data, run a lot of statistics, and intelligence will emerge. And I think that's wrong. I think that having a lot of data is important, and collecting a lot of statistics is important. But I think what we also need is deep understanding, not just so-called "deep learning." Deep learning finds what's typically correlated, but we all know that correlation is not the same thing as causation. Everybody learned that in Intro to Psych, or should have learned it there: you don't know whether cigarettes cause cancer just from the statistics we have. We have to make causal inferences and do careful studies. We all know that causation and correlation are not the same thing.
What we have right now as AIs are giant correlation machines. And it works, if you have enough data relative to the problem that you're studying that you can exhaust the problem, beat the problem into submission. So you can do that with Go. You can play this game over and over again; the rules never change. They haven't changed in 2,000 years. And the board is always the same size. So you can just get enough statistics about what tends to work, and you're good to go. But if you want to use the same techniques for natural language understanding, for example, or to guide a domestic robot through your house, it's not going to work. The domestic robot in your house is going to keep seeing new situations. And for your natural language understanding system, every dialogue is going to be different. It's not really going to work. Sure, you can talk to Alexa and say, "please turn on my lights," over and over again, and get statistics on that. That's fine. But there's no machine in the world that can have the kind of conversation that we're having right now. It's just not anywhere near a reality, and you're not going to be able to do it with statistics, because there's not enough similar stuff going on.
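Marcus's point that "giant correlation machines" mistake correlation for causation can be made concrete with a toy simulation. In this illustrative sketch (the variable setup is hypothetical, not from the interview), a hidden confounder Z drives both X and Y, so X and Y correlate strongly even though neither causes the other:

```python
import random

# Toy illustration of "correlation is not causation": a hidden confounder Z
# drives both X and Y, so X and Y correlate even though changing X would
# have no effect whatsoever on Y.
random.seed(0)
z = [random.gauss(0, 1) for _ in range(10_000)]
x = [zi + random.gauss(0, 0.1) for zi in z]   # X depends only on Z
y = [zi + random.gauss(0, 0.1) for zi in z]   # Y depends only on Z

def corr(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    va = sum((ai - ma) ** 2 for ai in a) / n
    vb = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (va * vb) ** 0.5

print(corr(x, y))  # close to 1.0, yet intervening on X would not change Y
```

A statistics-only learner sees only the near-perfect correlation; discovering that Z is the real cause requires the kind of causal inference Marcus is describing.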
Probably the single best thing that we could do to make our machines smarter is to give them common sense, which is much harder than it sounds. I mean, first you might say, what is common sense? And what we settled on in the book that we wrote is that common sense is the knowledge that's commonly held, the knowledge that ordinary people have and machines don't. So machines are really good at things like, I don't know, converting from the English system to the metric system. Things that are nice, and precise, and factual, and easily stated. But things that are a little bit less sharply stated, like how you open a door, machines don't understand the first thing about. There's actually a competition right now for opening doors. Somebody uploaded data sets from 500 different doors, and they're hoping that robots will experiment with all 500 doors and then get it. But what's more likely is that they'll get to door 501, or at least door 601, and they'll actually have a problem. Every ordinary person in the West has opened a bunch of doorknobs and gets the idea, right? I need to turn something, jiggle something -- it might be different for the next one -- until the door itself is free.
So you can give a definition of something like that. It's hard to give a perfect definition. But we all know it. And yet nobody's ever built a robot that can do it. We made a joke about it in the book. We said, you know, in the event of a robot attack do the following six things, and number one was close the door. And we added that you might need to lock it. And that was before this whole database came out. So, you know, the field advances. People are working on that. Maybe next year they'll work on teaching robots about locks. But I bet you it will take a while before robots understand all the little ways you can jiggle the key, and maybe you need to pull in the door to make it just right. We don't even know how to encode that information in a language that a computer can understand. So the big challenge of common sense is to take stuff like that -- like, how do you open a door, why do you want to open a door -- and translate it into the language of the machine. It's a lot harder than a metric converter. It's a lot harder than a database. And right now the field's not even really trying to answer that question. It's so obsessed with what it can do with these big databases, which are exciting in themselves, that it's kind of lost sight of that, even though the question itself goes back to the 1950s, when one of the founders of A.I., John McCarthy, first started writing about it.
But common sense is not getting the attention that it deserves, and that's one of the reasons we wrote the book. Common sense is just one step along the way to intelligence. People talk about artificial intelligence. And sometimes they talk about artificial general intelligence. There's also narrow AI. And narrow AI is the stuff that we're doing pretty well now. So, do the same problem over and over again, just solving one problem. You could think about idiot savants that can do a calendar but can't do anything else, and can tell you what day you were born on if you give them your birthday. We have a lot of narrow AI right now. We can't do narrow AI for everything that we want to do. But the dream is to have broad AI, or general AI, that can solve any problem. Think about the Star Trek computer. You can say, you know, please give me the demographics in this galaxy and cross-correlate it with this, and with that, and tell me this. And the Star Trek computer says, OK. And it figures it out. So the Star Trek computer understands everything about language, and it understands pretty much everything about how the world works. And it can put those together to give you an answer.
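The "calendar savant" flavor of narrow AI that Marcus mentions above really can be built in a couple of lines, which is exactly his point: it answers one question and nothing else. A minimal sketch, using Python's standard library:

```python
from datetime import date

# Narrow AI par excellence: given a date, report the day of the week.
# The program is flawless at this one task and helpless at everything else.
def day_of_week(year: int, month: int, day: int) -> str:
    return date(year, month, day).strftime("%A")

print(day_of_week(1969, 7, 20))  # "Sunday" -- the day of the Apollo 11 landing
```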
It's not like Google, right? Google can search for pages that have the information, but then you have to put the information together. The Star Trek computer can synthesize it. And it's a great example of general intelligence. If I ask you, did George Washington have a cell phone? Well, if you have a kind of common knowledge of when cell phones were introduced, when George Washington was alive, the fact that he's dead, then you can compute the information and give me the right answer. If you Google for it, you might get a wacky answer. If it works for George Washington, maybe you search for, did Thomas Jefferson have a cell phone? And the answer should obviously be the same. But maybe that one won't be on a Google page. So if you have common sense about things like time, and space, and a lot of everyday factual knowledge, that gets you a long way toward intelligence. It doesn't get you all the way there. General intelligence, first of all, has many dimensions to it. Think of how the SAT has verbal and math.
Those are two of the dimensions of intelligence. One that's really not well-established right now in the AI community is common sense. There are other aspects of intelligence, like doing pure calculation, where the A.I. community has done a good job. But there are other things that go into intelligence as well. For example, the ability to read a graph is partly about common sense, partly about understanding what people might intend, and sometimes partly about expert knowledge. So reading a graph is another form of intelligence. And that's going to require putting together better perceptual tools than we have now, better common sense tools, and probably some knowledge about politics, for example, if you're reading a graph that's relevant to the latest political campaign, and so forth. So there's a lot of stuff that goes into it. Right now we're trying to approximate it all with statistics, but it never becomes general knowledge. You could learn to read one graph, and that's not going to help you read the next graph. So general intelligence is going to mean putting together a lot of things, both some that we already understand pretty well in the A.I. community and some we just haven't been working on and really need to get back to.
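The George Washington cell phone question that Marcus uses can be sketched as a tiny temporal-reasoning program. This is an illustrative toy, not anyone's actual system; the table of lifespans and the 1973 introduction date (the first handheld mobile phone call) are the "common knowledge" the machine would need:

```python
# Temporal common-sense inference: combine a person's lifespan with an
# invention date to answer a question no single web page may state outright.
KNOWN_LIFESPANS = {          # name -> (birth year, death year)
    "George Washington": (1732, 1799),
    "Thomas Jefferson": (1743, 1826),
}
CELL_PHONE_INTRODUCED = 1973  # first handheld mobile phone call

def could_have_had_cell_phone(name: str) -> bool:
    """True only if the person was still alive when cell phones existed."""
    birth, death = KNOWN_LIFESPANS[name]
    return death >= CELL_PHONE_INTRODUCED

print(could_have_had_cell_phone("George Washington"))  # False
print(could_have_had_cell_phone("Thomas Jefferson"))   # False
```

The hard part, of course, is not this ten-line inference but acquiring and representing the millions of facts and rules it presupposes; that is the common-sense problem.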
- There are a lot of people in the tech world who think that if we collect as much data as possible and run a lot of statistics, we will be able to develop robots in which artificial "intelligence" organically emerges.
- However, many A.I. systems that currently exist aren't close to being "intelligent"; it's difficult to even program common sense into them. Because correlation doesn't always equal causation, robots that operate on correlation alone may end up with skewed models of the real world they operate in.
- When it comes to performing simple tasks, such as opening a door, we currently don't know how to encode that information — the varied process that is sometimes required in differing situations, e.g., jiggling the key or turning it just right — into a language that a computer can understand.
Gary Marcus is the author of Rebooting AI: Building Artificial Intelligence We Can Trust.
What is human dignity? Here's a primer, told through 200 years of great essays, lectures, and novels.
- Human dignity means that each of our lives have an unimpeachable value simply because we are human, and therefore we are deserving of a baseline level of respect.
- That baseline requires more than the absence of violence, discrimination, and authoritarianism. It means giving individuals the freedom to pursue their own happiness and purpose.
- We look at incredible writings from the last 200 years that illustrate the push for human dignity in regards to slavery, equality, communism, free speech and education.
The inherent worth of all human beings<p>Human dignity is the inherent worth of each individual human being. Recognizing human dignity means respecting human beings' special value—value that sets us apart from other animals; value that is intrinsic and cannot be lost.</p> <p>Liberalism—the broad political philosophy that organizes society around liberty, justice, and equality—is rooted in the idea of human dignity. Liberalism assumes each of our lives, plans, and preferences have some unimpeachable value, not because of any objective evaluation or contribution to a greater good, but simply because they belong to a human being. We are human, and therefore deserving of a baseline level of respect. </p> <p>Because so many of us take human dignity for granted—just a fact of our humanness—it's usually only when someone's dignity is ignored or violated that we feel compelled to talk about it. </p> <p>But human dignity means more than the absence of violence, discrimination, and authoritarianism. It means giving individuals the freedom to pursue their own happiness and purpose—a freedom that can be hampered by restrictive social institutions or the tyranny of the majority. The liberal ideal of the good society is not just peaceful but also pluralistic: It is a society in which we respect others' right to think and live differently than we do.</p>
From the 19th century to today<p>With <a href="https://books.google.com/ngrams/graph?year_start=1800&year_end=2019&content=human+dignity&corpus=26&smoothing=3&direct_url=t1%3B%2Chuman%20dignity%3B%2Cc0" target="_blank" rel="noopener noreferrer">Google Books Ngram Viewer</a>, we can chart mentions of human dignity from 1800-2019.</p><img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNDg0ODU0My9vcmlnaW4ucG5nIiwiZXhwaXJlc19hdCI6MTY1MTUwMzE4MX0.bu0D_0uQuyNLyJjfRESNhu7twkJ5nxu8pQtfa1w3hZs/img.png?width=980" id="7ef38" class="rm-shortcode" data-rm-shortcode-id="9974c7bef3812fcb36858f325889e3c6" data-rm-shortcode-name="rebelmouse-image" />
American novelist, writer, playwright, poet, essayist and civil rights activist James Baldwin at his home in Saint-Paul-de-Vence, southern France, on November 6, 1979.
Credit: Ralph Gatti/AFP via Getty Images
The future of dignity<p>Around the world, people are still working toward the full and equal recognition of human dignity. Every year, new speeches and writings help us understand what dignity is—not only what it looks like when dignity is violated but also what it looks like when dignity is honored. In his posthumous essay, Congressman Lewis wrote, "When historians pick up their pens to write the story of the 21st century, let them say that it was your generation who laid down the heavy burdens of hate at last and that peace finally triumphed over violence, aggression and war."</p> <p>The more we talk about human dignity, the better we understand it. And the sooner we can make progress toward a shared vision of peace, freedom, and mutual respect for all. </p>
Researchers dramatically improve the accuracy of a number that connects fundamental forces.
- A team of physicists carried out experiments to determine the precise value of the fine-structure constant.
- This pure number describes the strength of the electromagnetic forces between elementary particles.
- The scientists improved the accuracy of this measurement by a factor of 2.5.
The process for measuring the fine-structure constant involved a beam of light from a laser that caused an atom to recoil. The red and blue colors indicate the light wave's peaks and troughs, respectively.
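For a sense of what is being measured: the fine-structure constant is a dimensionless combination of other fundamental constants. The quick computation below is only an illustrative sanity check using published CODATA/SI values, not the team's recoil-based measurement method:

```python
import math

# alpha = e^2 / (4 * pi * eps0 * hbar * c), a pure (unitless) number.
e    = 1.602176634e-19      # elementary charge, C (exact in SI since 2019)
hbar = 1.054571817e-34      # reduced Planck constant, J*s
c    = 299792458            # speed of light in vacuum, m/s (exact)
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m (CODATA 2018)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)  # ~0.0072973..., i.e. roughly 1/137.036
```

Because every unit cancels, alpha has the same value in any system of units, which is why small improvements in its measured precision matter so much for testing theory.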
Scientists at Washington University are patenting a new electrolyzer designed for frigid Martian water.
- Mars explorers will need more oxygen and hydrogen than they can carry to the Red Planet.
- Martian water may be able to provide these elements, but it is extremely salty.
- The new method can pull oxygen and hydrogen for breathing and fuel from Martian brine.
The WashU electrolyzer<iframe src='https://mars.nasa.gov/layout/embed/model/?s=6' width='800' height='450' scrolling='no' frameborder='0' allowfullscreen></iframe><p>The WashU electrolyzer—it has no snappy acronym yet—will not be the first device capable of extracting oxygen from Martian water. That honor goes to the Mars Oxygen In-Situ Resource Utilization Experiment, or <a href="https://mars.nasa.gov/mars2020/spacecraft/instruments/moxie/" target="_blank">MOXIE</a>, which is en route to Mars onboard NASA's <a href="https://mars.nasa.gov/mars2020/" target="_blank">Perseverance</a> rover. The rover was launched on July 30, 2020. It will arrive on February 18, 2021, and will perform high-temperature <a href="https://en.wikipedia.org/wiki/Electrolysis_of_water" target="_blank">electrolysis</a> to extract pure oxygen, but no hydrogen.</p><p>In addition to being able to capture hydrogen, the WashU system can even do a better job with oxygen than MOXIE can, extracting 25 times as much from the same amount of water.</p><p>The new system has no problem with Mars' magnesium perchlorate-laced water. On the contrary, the researchers say it ultimately makes their system work better, since such high concentrations of salt keep water from freezing on so cold a planet by lowering the liquid's freezing point to -60 °C. They add that it may "also improve the performance of the electrolyzer system by lowering the electrical resistance."</p><p>Cold itself is no issue for the WashU system. 
It's been tested in a sub-zero (-33 ⁰F, or -36 ⁰C) environment that simulates Mars'.</p><p>"Our novel brine electrolyzer incorporates a lead <a href="https://www.sciencedirect.com/science/article/abs/pii/S0926337318311299" target="_blank">ruthenate pyrochlore</a> <a href="https://en.wikipedia.org/wiki/Anode" target="_blank" rel="noopener noreferrer">anode</a> developed by our team in conjunction with a platinum on carbon <a href="https://en.wikipedia.org/wiki/Cathode" target="_blank">cathode</a>," explains Ramani. He adds, "These carefully designed components coupled with the optimal use of traditional electrochemical engineering principles has yielded this high performance."</p>
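For a rough sense of scale, ideal electrolysis stoichiometry (2 H2O → 2 H2 + O2) fixes how much of each gas a kilogram of water can yield at most. This is a back-of-the-envelope sketch, not a figure from the WashU team:

```python
# Ideal gas yield from fully electrolyzing water: 2 H2O -> 2 H2 + O2.
M_H2O, M_H2, M_O2 = 18.015, 2.016, 31.998  # molar masses, g/mol

def gases_from_water(kg_water: float) -> tuple:
    """Return (kg H2, kg O2) from fully electrolyzing kg_water of water."""
    mol_h2o = kg_water * 1000 / M_H2O
    kg_h2 = mol_h2o * M_H2 / 1000          # one H2 per H2O
    kg_o2 = (mol_h2o / 2) * M_O2 / 1000    # one O2 per two H2O
    return kg_h2, kg_o2

h2, o2 = gases_from_water(1.0)
print(f"{h2:.3f} kg H2, {o2:.3f} kg O2 per kg of water")  # ~0.112 and ~0.888
```

By mass, water is almost nine-tenths oxygen, which is why in-situ water splitting is so attractive for breathing gas and oxidizer compared with hauling tanks from Earth.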
Back home<p>"This technology is equally useful on Earth where it opens up the oceans as a viable oxygen and fuel source," Ramani notes. His colleagues foresee potential applications such as producing oxygen in deep-sea habitats with ample water available, such as underwater research facilities and submarines.</p><p>The study's joint first author Pralay Gayen says that "having demonstrated these electrolyzers under demanding Martian conditions, we intend to also deploy them under much milder conditions on Earth to utilize brackish or salt water feeds to produce hydrogen and oxygen, for example, through seawater electrolysis."</p>
Scientists find that gamma-ray burst jets may exceed the speed of light in their surrounding medium, which could produce time-reversed features in the bursts' light curves.
- Astrophysicists propose that gamma-ray bursts may exceed the speed of light.
- The superluminal jets may also be responsible for time-reversibility.
- The finding doesn't go against Einstein's theory, because this effect happens in the jet medium, not in a vacuum.
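The third bullet is the crux: in a medium with refractive index n, light propagates at c/n, so matter can outrun light *in the medium* while still staying below c, which is all special relativity requires (this is the same regime that produces Cherenkov radiation). A small illustrative check, with a deliberately arbitrary index standing in for the jet medium's effective one:

```python
C = 299_792_458  # speed of light in vacuum, m/s

# In a medium of refractive index n, light travels at C / n. A jet moving at
# v < C can still exceed C / n, i.e. be "superluminal" relative to the medium
# without violating relativity. n = 1.33 (water) is just an illustrative value.
def is_superluminal_in_medium(v: float, n: float) -> bool:
    assert v < C, "nothing with mass reaches c in vacuum"
    return v > C / n

print(is_superluminal_in_medium(0.9 * C, 1.33))  # True: faster than light-in-medium
print(is_superluminal_in_medium(0.5 * C, 1.33))  # False: slower than light-in-medium
```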
Jet bursting out of a blazar. Black-hole-powered galaxies called blazars are the most common sources detected by NASA's Fermi Gamma-ray Space Telescope.
Cosmic death beams: Understanding gamma ray bursts<div class="rm-shortcode" data-media_id="cu2knVEk" data-player_id="FvQKszTI" data-rm-shortcode-id="c6cfd20fdf31c82cb206ade8ce21ba3f"> <div id="botr_cu2knVEk_FvQKszTI_div" class="jwplayer-media" data-jwplayer-video-src="https://content.jwplatform.com/players/cu2knVEk-FvQKszTI.js"> <img src="https://cdn.jwplayer.com/thumbs/cu2knVEk-1920.jpg" class="jwplayer-media-preview" /> </div> <script src="https://content.jwplatform.com/players/cu2knVEk-FvQKszTI.js"></script> </div>
Pfizer's vaccine needs to be kept at -100°F until it's administered. Can caregivers deliver?
- Fair distribution of the Moderna and Pfizer vaccines is especially challenging because they need to be stored at extremely cold temperatures.
- Back in 2018, the WHO reported that over half of all vaccines are wasted worldwide due to lack of cold storage, and they were only talking about vaccines that need to be chilled or kept at standard freezer temperatures.
- Real-time logistics data, location tracking, and information about movements are crucial for tracking shipment progress, product temperature, and other conditions.
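The temperature-monitoring piece of that logistics problem reduces to a simple check over a shipment's data-logger readings. A minimal sketch, assuming a hypothetical ultra-cold storage band of -80 °C to -60 °C standing in for a vaccine's real specification:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    timestamp: str   # ISO-8601 string from the shipment's data logger
    celsius: float

def excursions(log, low=-80.0, high=-60.0):
    """Return every reading that falls outside the [low, high] storage band."""
    return [r for r in log if not (low <= r.celsius <= high)]

log = [
    Reading("2020-12-14T08:00", -72.5),
    Reading("2020-12-14T09:00", -55.0),   # excursion: warmer than allowed
    Reading("2020-12-14T10:00", -70.1),
]
print([r.timestamp for r in excursions(log)])  # ['2020-12-14T09:00']
```

Real cold-chain systems layer alerting, provenance, and location data on top of exactly this kind of band check, so that a single warm hour in transit is caught before doses are administered.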