
What's the difference between A.I., machine learning, and robotics?

There's a lot of confusion as to what AI, machine learning, and robotics do. Sometimes, they can all be used together.

Boston Dynamics, Big Think

Artificial intelligence is everywhere: on your screens, in your pockets, and one day it may even walk up to a home near you. Headlines tend to lump this vast and diverse field into a single subject: robots emerging from labs, algorithms winning ancient games, AI and its promises becoming part of our everyday lives. While all of these instances have some relationship to AI, the field is not monolithic; it comprises many separate and distinct disciplines.

We often use the term artificial intelligence as an all-encompassing umbrella that covers everything. That's not exactly the case. A.I., machine learning, deep learning, and robotics are all fascinating and separate topics. Each is an integral piece of the future of our tech, and the categories often overlap and complement one another.

The broader AI field of study offers a great deal to explore and choose from. Understanding the differences between these four areas is foundational to grasping the field as a whole.

Blade Runner 2049 depicts a world overrun, and heavily populated, with robots.

Artificial intelligence

At the root of AI technology is the ability of machines to perform tasks characteristic of human intelligence. These include planning, pattern recognition, understanding natural language, learning, and problem-solving.

There are two main types of AI: general and narrow. Our current technological capabilities fall under the latter. Narrow AI exhibits a sliver of some kind of intelligence, be it reminiscent of an animal's or a human's. Its expertise is, as the name suggests, narrow in scope. Usually this type of AI can do only one thing extremely well, like recognize images or search through databases at lightning speed.

General intelligence would be able to perform any task as well as or better than humans can. This is the goal of many AI researchers, but it is a ways down the road.

Current AI technology is responsible for a lot of amazing things. These algorithms help Amazon give you personalized recommendations and make sure your Google searches are relevant to what you're looking for. Nearly any technologically literate person uses this type of tech every day.

One of the main differentiators between AI and conventional programming is that non-AI programs follow a set of explicitly defined instructions, while AI learns without being explicitly programmed.
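To make that distinction concrete, here is a purely illustrative sketch in Python (the "spam" rule, function names, and toy data are all made up): the conventional program applies a rule a programmer wrote by hand, while the toy "learner" derives its rule from labeled examples.

```python
def rule_based_is_spam(subject):
    # Conventional programming: the rule is written by a human.
    return "free money" in subject.lower()

def train_threshold(examples):
    # Toy "learning": pick the exclamation-mark count that best
    # separates spam from non-spam in the labeled training data.
    best_t, best_acc = 0, 0.0
    for t in range(10):
        correct = sum((subj.count("!") >= t) == label
                      for subj, label in examples)
        acc = correct / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

data = [("Hello!!!", True), ("Meeting notes", False),
        ("WIN NOW!!!!", True), ("Lunch?", False)]

print(rule_based_is_spam("Get free money"))  # True (fixed rule)
print(train_threshold(data))                 # 1 (rule learned from data)
```

The learned threshold comes entirely from the examples; change the data and the rule changes, with no reprogramming.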

Here is where the confusion begins. Often, but not always, AI utilizes machine learning, which is a subset of the AI field. Going a little deeper still, we get deep learning, which is one way of implementing machine learning.

Furthermore, when we think about robotics, we tend to treat "robot" and "AI" as interchangeable terms. In reality, AI algorithms are usually only one part of a larger technological matrix of hardware, electronics, and non-AI code inside a robot.

Ex Machina, A24

Robot... or artificially intelligent robot?  

Robotics is a branch of technology that concerns itself strictly with robots. A robot is a programmable machine that carries out a set of tasks autonomously in some way. Robots are not computers, nor are they necessarily artificially intelligent.

Many experts cannot agree on what exactly constitutes a robot. But for our purposes, we’ll consider that it has a physical presence, is programmable and has some level of autonomy. Here are a few different examples of some robots we have today:

  • Roomba (Vacuum Cleaning Robot)

  • Automobile Assembly Line Arm

  • Surgery Robots

  • Atlas (Humanoid Robot)    

Some of these robots, for example the assembly-line arm or the surgery bot, are explicitly programmed to do a job. They do not learn, so we could not consider them artificially intelligent.

Others are controlled by built-in AI programs. This is a recent development, as most industrial robots were programmed only to carry out repetitive tasks without learning. Self-learning robots with machine-learning logic inside them would be considered AI; they need it in order to perform increasingly complex tasks.

"I'm sorry, Dave..." — HAL 9000 from Stanley Kubrick's 2001: A Space Odyssey

What’s the difference between Artificial Intelligence and Machine Learning?

At its foundation, machine learning is a subset of, and a way of achieving, true AI. Arthur Samuel coined the term in 1959, describing it as giving computers "the ability to learn without being explicitly programmed."

The idea is to get the algorithm to learn, or be trained, to do something without being hardcoded with a set of particular directions. Machine learning paves the way for artificial intelligence.

Arthur Samuel wanted to create a computer program that could enable his computer to beat him at checkers. Rather than write a detailed, long-winded program to do it, he tried a different idea: his algorithm gave the computer the ability to learn as it played thousands of games against itself. This has been the crux of the idea ever since. By the early 1960s, the program was able to beat champions at the game.

Over the years, machine learning developed into a number of different methods:

  1. Supervised

  2. Semi-supervised

  3. Unsupervised

  4. Reinforcement  

In a supervised setting, a program is given labeled data and asked to learn a rule for sorting it. It might be shown pictures of different animals, for example, guessing their labels and correcting itself as it trains. In a semi-supervised setting, only a few of the images are labeled; the program must use what it learned from the labeled data to classify the rest.

Unsupervised machine learning doesn't involve any preliminary labeled data. The program is thrown into the database and must sort the different classes of animals for itself, grouping similar-looking objects together and creating rules from the similarities it finds along the way.
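The supervised/unsupervised distinction can be sketched in a few lines of Python. This is a minimal illustration with made-up 1-D "animal size" numbers, not a real classifier: the supervised routine has labels to lean on, while the unsupervised one (a tiny 1-D two-cluster grouping) must discover the groups itself.

```python
def supervised_classify(x, labeled):
    # Supervised: labels are given; predict the label of the
    # nearest labeled example.
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

def unsupervised_cluster(points, iters=10):
    # Unsupervised: no labels; repeatedly group points around two
    # centers (a toy 1-D k-means, assuming two separable groups).
    c1, c2 = min(points), max(points)
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

labeled = [(2.0, "cat"), (2.5, "cat"), (30.0, "horse"), (28.0, "horse")]

print(supervised_classify(3.0, labeled))          # cat
print(unsupervised_cluster([2.0, 2.5, 30.0, 28.0]))  # ([2.0, 2.5], [28.0, 30.0])
```

The unsupervised routine recovers the same two groups, but it can never name them "cat" and "horse"; labels are exactly what it was never given.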

Reinforcement learning is a little different from these other subsets of machine learning. A great example is chess: the program knows a set of rules and judges its progress by the end result of either winning or losing.
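A toy reinforcement-learning sketch, far simpler than chess and with an illustrative reward scheme of my own choosing: an agent on a five-square board learns by trial and error, receiving feedback only when it reaches the winning square (a minimal tabular Q-learning loop).

```python
import random

N = 5  # squares 0..4; square 4 is the "winning" state
# Q maps (square, action) to an estimated value; actions are -1 (left), +1 (right)
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}
alpha, gamma = 0.5, 0.9  # learning rate and discount factor

random.seed(0)
for episode in range(300):
    s = 0
    while s != N - 1:
        # Mostly act greedily, but explore a random move 20% of the time
        if random.random() < 0.2:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N - 1)
        reward = 1.0 if s2 == N - 1 else 0.0  # feedback only at the end
        # Standard Q-learning update
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, -1)], Q[(s2, 1)])
                              - Q[(s, a)])
        s = s2

# Greedy action per square (1 = right); after training it tends to be
# "move right" everywhere, even though the agent was never told the rule.
policy = {s: max((-1, 1), key=lambda a: Q[(s, a)]) for s in range(N - 1)}
print(policy)
```

The agent is never shown a labeled "correct move"; the win/lose signal at the end of each episode is slowly propagated backward through the Q-values, which is the same shape of feedback a game-playing system gets from winning or losing.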

A.I. Artificial Intelligence, 2001, Steven Spielberg

Deep learning

Deep learning is an even deeper subset of machine learning, tasked with far harder problems than rudimentary sorting. It works with vast amounts of data and reaches its conclusions with essentially no previous knowledge.

If it were to differentiate between two animals, it would distinguish them differently than regular machine learning does. First, every picture of the animals would be scanned, pixel by pixel. Then it would pick out edges and shapes, ranking these features in a hierarchy to determine the difference.
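The edge-finding step can be sketched with a hand-written convolution filter. In a real deep network the filter values are learned from data, not hand-picked as they are in this illustration; the tiny "image" below is just a dark region next to a bright one.

```python
# A 4x4 "image": two dark columns (0) next to two bright columns (1)
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

# Hand-picked vertical-edge detector; a deep network would learn these weights
kernel = [
    [-1, 1],
    [-1, 1],
]

def convolve(img, k):
    # Slide the kernel over every position and sum the weighted pixels
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(k[a][b] * img[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

print(convolve(image, kernel))  # [[0, 2, 0], [0, 2, 0], [0, 2, 0]]
```

The output is large only in the middle column, exactly where dark meets bright: the filter "lights up" on the edge. Stacking many learned filters like this, layer after layer, is how deep networks build from raw pixels up to edges, shapes, and eventually whole objects.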

Deep learning tends to require much more hardware power; the machines that run it are usually housed in large data centers. Programs that use deep learning are essentially starting from scratch.

Of all the AI disciplines, deep learning is the most promising for one day creating a generalized artificial intelligence. Among the applications deep learning has spawned are the many chatbots we see today: Alexa, Siri, and Microsoft's Cortana owe their brains to this nifty tech.

A new cohesive approach

There have been many seismic shifts in the tech world this past century, from the computing age to the internet to the world of mobile devices. These different categories of tech will pave the way for a new future. Or, as Google CEO Sundar Pichai put it quite nicely:

“Over time, the computer itself—whatever its form factor—will be an intelligent assistant helping you through your day. We will move from mobile first to an A.I. first world.”

Artificial intelligence in all of its many forms combined together will take us on our next technological leap forward.
