
Why Scientists are Training AI to Take Standardized Tests

Researchers hope training machines to the test will allow for advances in imbuing software with basic common sense.

Computer software has proven itself far better than humans at a whole lot of things: search queries, indexing, calculations, etc. But common sense is not currently one of them. That’s why computer scientists are experimenting with a host of clever new strategies for instilling in AI the key cognitive ability we possess that it lacks: the ability to learn.


For example, a team of researchers at the Allen Institute for Artificial Intelligence in Seattle is training its AI program, named Aristo, to take the New York State fourth-grade standardized science exams. Oren Etzioni, the Allen Institute’s CEO, argues that standardized tests offer a strong benchmark for tracking the progress of machine learning.

To understand what he means, let’s return quickly to standardized tests. They get a bad rap around here, and deservedly so: they’re not a great way to guide schoolchildren toward creative thinking or a lifelong love of learning. Luckily for computer scientists, AI isn’t like your typical fourth grader.

Microsoft Director of Search Stefan Weitz explains that the future of machine learning consists of teaching artificial intelligence to identify patterns.

There’s a reason some kids are better at taking tests than others — and it’s not all brains. It’s a matter of finding the most efficient way to interpret questions and deliver the best possible answers. Take, for example, your garden-variety multiple-choice question. If you don’t already know the solution, the best strategy is to winnow down the choices until you’ve found the one that’s most likely correct. In many ways it’s a matter of common sense, which is why Etzioni is so keen on making sure Aristo passes the exam.
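That winnowing strategy can be sketched in a few lines of code. This is purely illustrative and not how Aristo actually works: the `score` function below is a toy word-overlap heuristic standing in for a real learned model, and the example question and evidence sentence are made up for demonstration.

```python
def score(evidence: str, answer: str) -> float:
    """Toy relevance score: fraction of the answer's words found in the evidence."""
    e_words = set(evidence.lower().split())
    a_words = set(answer.lower().split())
    if not a_words:
        return 0.0
    return len(e_words & a_words) / len(a_words)

def pick_answer(evidence: str, choices: list[str]) -> str:
    """Winnow the choices down to the single highest-scoring candidate."""
    return max(choices, key=lambda c: score(evidence, c))

# Hypothetical fourth-grade-style question with a retrieved evidence sentence.
evidence = "plants use energy from light to make their own food"
choices = ["sound energy", "light energy", "heat energy", "electrical energy"]
print(pick_answer(evidence, choices))  # → light energy
```

Every choice shares the word "energy" with the evidence, but only "light energy" matches on both words, so it scores highest and survives the winnowing. A real system would replace the word-overlap heuristic with a trained model, but the eliminate-and-rank structure is the same.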

Why is common sense the current golden fleece for computer scientists? The Siri program on your phone might be able to interpret your voice and deliver action, but it’s not applying rational thought to assist you. It’s incapable of figuring things out for itself or of interpreting requests in ways it wasn’t initially programmed to. The same applies to plenty of computer systems more advanced and important than personal-assistant software. Imagine how useful effective and replicable machine learning could be if scientists can make major progress in teaching AI to teach itself.

The future of AI and machine learning is going to be a lot more impressive than a simple search engine. The only reason we’re not there yet is that teaching software how to reason is a lot more difficult than assigning it mindless busywork. If the folks at Allen and others like them prove successful in their endeavors, the future of AI may not be so far into the future after all.

Read more at MIT Technology Review.

Want to meet Aristo and see it in action? Check it out at the Allen Institute website.

Image credit: Vergeles_Andrey / Getty iStock

