New computing theory allows artificial intelligences to store memories.
- To become autonomous, robots need to perceive the world around them and move at the same time.
- Researchers use hyperdimensional computing theory to store robot perceptions and movements in high-dimensional vectors.
- This improvement in perception will allow artificial intelligences to create memories.
Do androids dream of electric sheep? Philip K. Dick famously wondered that in his stories that explored what it meant to be human and robot in the age of advanced and widespread artificial intelligence. We aren't quite in "Blade Runner" reality just yet, but now a team of researchers came up with a new way for robots to remember that may close the gap between robots and us for good.
For robots to be as proficient as humans in various tasks, they need to coordinate sensory data with motor capabilities. Scientists from the University of Maryland published a paper in the journal Science Robotics describing a potentially revolutionary approach to improve how AI handles sensorimotor representation using hyperdimensional computing theory.
What the researchers set out to create was a way to improve a robot's "active perception" - its ability to integrate how it perceives the world around it with how it moves in that world. As they wrote in their paper, "we find that action and perception are often kept in separated spaces," which they attribute to traditional thinking.
They proposed instead "a method of encoding actions and perceptions together into a single space that is meaningful, semantically informed, and consistent by using hyperdimensional binary vectors (HBVs)."
As their press release explains, HBVs live in very high-dimensional spaces, each containing a wealth of information about a discrete item such as an image, a sound, or a command. These can be further combined into sequences of discrete items and into groupings of items and sequences.
By using these vectors, the researchers aim to keep all sensory information the robot receives in one place, essentially creating its memories. As more information gets stored, "history" vectors would be created, increasing the robot's memory content.
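To make the idea concrete, here is a minimal sketch of the standard hyperdimensional computing operations (binding by XOR, bundling by majority vote); the percept and action names are invented for illustration, and this is not the authors' exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # very high dimensionality is what makes the method work

def random_hbv():
    """A random binary vector; any two such vectors are nearly orthogonal."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """XOR binding combines two items into one vector dissimilar to both;
    XOR-ing again with either item recovers (approximately) the other."""
    return a ^ b

def bundle(vectors):
    """Majority vote: the result stays similar to every bundled input."""
    return (np.sum(vectors, axis=0) * 2 >= len(vectors)).astype(np.uint8)

def similarity(a, b):
    """Fraction of agreeing bits: 1.0 = identical, ~0.5 = unrelated."""
    return float(np.mean(a == b))

# Hypothetical sensorimotor pairs: each percept bound to the action taken.
percepts = {name: random_hbv() for name in ["wall_ahead", "open_corridor"]}
actions = {name: random_hbv() for name in ["turn_left", "go_forward"]}

# A single "memory" vector holding both experiences at once.
memory = bundle([
    bind(percepts["wall_ahead"], actions["turn_left"]),
    bind(percepts["open_corridor"], actions["go_forward"]),
])

# Querying the memory with a percept recovers the associated action.
recalled = bind(memory, percepts["wall_ahead"])
assert similarity(recalled, actions["turn_left"]) > similarity(recalled, actions["go_forward"])
```

The key property is that a single fixed-size vector can store many item-pairs and still answer queries about each of them, which is what makes growing "history" vectors plausible as a memory substrate.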
The scientists think that active perception and memories would make robots better at making autonomous decisions, anticipating future situations, and completing tasks.
The Hyperdimensional "pipeline"
This "pipeline" describes how data from a drone flight is recorded and translated into binary vectors that are integrated into memory through vector operations. This memory can then be recalled.
Credit: Perception and Robotics Group, University of Maryland.
"An active perceiver knows why it wishes to sense, then chooses what to perceive, and determines how, when and where to achieve the perception," said Aloimonos. "It selects and fixates on scenes, moments in time, and episodes. Then it aligns its mechanisms, sensors, and other components to act on what it wants to see, and selects viewpoints from which to best capture what it intends. Our hyperdimensional framework can address each of these goals."
Outside of robots, the scientists also see an application of their theories in deep learning AI methods employed in data mining and visual recognition.
To test the theory, the team employed a dynamic vision sensor (DVS) which continually captures the edges of objects in event clouds as they move by. By quickly focusing on the contours of the scene and the movement, this sensor is well-suited for autonomous navigation of robots. The data from the event clouds is stored in binary vectors, allowing the scientists to apply hyperdimensional computing.
Here’s a video of how DVS works:
The research was carried out by the computer science Ph.D. students Anton Mitrokhin and Peter Sutor, Jr., along with Cornelia Fermüller, an associate research scientist with the University of Maryland Institute for Advanced Computer Studies, and the computer science professor Yiannis Aloimonos, who advised Mitrokhin and Sutor.
Check out their paper "Learning sensorimotor control with neuromorphic sensors: Toward hyperdimensional active perception" in Science Robotics.
A new AI-produced commercial from Lexus shows how AI might be particularly suited for the advertising industry.
- The commercial was written by IBM's Watson. It was acted and directed by humans.
- Lexus says humans played a minimal part in influencing Watson, in terms of the writing.
- Advertising, with its clearly defined goals and troves of data, seems like one creative field in which AI would prove particularly useful.
IBM's Watson supercomputer is the author of a script recently used in a Lexus commercial, marking a first for the advertising and AI industries.
To create the commercial, which was acted and directed by humans, Watson was given 15 years' worth of award-winning commercials, a trove of data showing when consumers tended to connect most emotionally with advertisements, and further data from an experiment conducted by the University of New South Wales' MindX, which analyzed how highly intuitive people respond to car advertising.
Strangely, the ad is about a car that's about to be destroyed by humans but manages to save itself through its sentience.
It depicts an emotional Takumi (Japanese artisan) expressing a silent goodbye to the AI-equipped Lexus he created. The car ventures out into the world, but dark clouds soon appear. It becomes clear the Lexus must undergo a crash test in a menacing facility, which it passes by using its AI-equipped braking system, much to the relief of the Takumi watching the test on TV at home.
"When I was handed the script, the melodrama of the story convinced me of its potential," director Kevin Macdonald, who's directed films such as The Last King of Scotland and State of Play, said in a statement. "The fact that AI gave a fellow machine sentience, placed it in a sort of combat situation, and then had it escaping into the sunset was such an emotional response from what is essentially a digital platform."
Macdonald also noted that humans did intervene in the project with "a nudge here and there." But, considering Lexus hasn't released the script, it's unclear how closely the finished commercial matches up with Watson's original script. Those involved with the project simply suggested Watson was able to extract the best aspects of manmade commercials from the past.
"The magic of storytelling will always come to life in the human creative process," said Reece Medway, media and entertainment specialist for IBM Watson in the U.K. and Ireland. "Using Watson to identify the common attributes for truly award-winning creative work is an example of how man and machine will collaborate in the AI era."
You can watch the commercial below.
Is AI coming after creative jobs?
It seems likely that AI could eventually replace jobs like cashier, truck driver, data analyst and even accountant. What's harder to imagine, but increasingly plausible, is how AI could soon begin replacing jobs in more creative fields—journalism, entertainment and, especially, advertising.
AI has already made breakthroughs—some more impressive than others—in multiple creative endeavors. In music, an AI has combined the mathematical properties of disparate instruments to create sounds never before heard by humans. In visual art, an auction house has already sold the first AI-produced artwork, for a price of $432,000, and there's also an algorithm that can draw anything you want, though the results aren't always intelligible. And in entertainment, AIs have also written scripts, including one that, while mostly ridiculous, managed to capture some of the rhythm and conventions of science fiction writing.
But it's in advertising—a field with clearly defined parameters and goals—that AI seems likely to be most effective.
"Advertising, more than music, movies, art or entertainment, is the perfect incubation bed for this kind of technology," wrote Loz Blain for New Atlas. "Where you have a measurable result to grade the art against, it's easy for an algorithm to decide what has been effective and what hasn't, and tune itself up to improve its performance over time. Advertising is an art form designed purely to manipulate. You better believe that ad agencies will use every tool in their arsenal to get the job done."
Still, considering Lexus admitted giving a "nudge" here and there to Watson, it could be a long time before the industry's top copywriters start fearing for their jobs.
The controversy around the Torah codes gets a new life.
- Mathematicians claim to see a predictive pattern in the ancient Torah texts.
- The code is revealed by a method found with special computer software.
- Some events described by reading the code took place after the code was written.
Searching for patterns is how we make sense of the world. We look for meaning in the often-overwhelming chaos by making connections between symbols and events. Sometimes these are meaningful discoveries, resulting in good science and breakthrough insights. Other times, these patterns may lead nowhere but still help us focus our energies on what's important.
One intriguing source of patterns that has emerged thanks to our development of computers is the Bible. Among humanity's oldest and arguably most influential pieces of writing, the Bible has been studied and analyzed phrase by phrase by countless scholars and devotees. But what computers have allowed us to do, thanks to the work of Israeli mathematicians, is to see that the ancient text may be not only an intricately woven collection of spiritual stories and teachings but a code that speaks to the inner workings of history.
"The Bible Code," a 1997 book by the reporter Michael Drosnin, popularized the idea. His book claimed to use the earliest parts of the Bible to predict the assassination of the Israeli Prime Minister Yitzhak Rabin, the Gulf War, and comet collisions. It also seemed to contain information about the Holocaust and various other assassinations, such as those of JFK and his brother Robert. It similarly suggested a nuclear war was looming – a theme the author explored in subsequent books of the "Bible Code" series.
The inspiration for Drosnin's book came from the 1994 paper "Equidistant Letter Sequences in the Book of Genesis," published in the journal Statistical Science by mathematicians Doron Witztum, Eliyahu Rips and Yoav Rosenberg. They presented statistical evidence that information about the lives of famous rabbis was encoded in the Hebrew text of the Book of Genesis, hundreds of years before those rabbis lived.
Dr. Eliyahu Rips is one of the world's leading experts on group theory and the scientist most closely associated with the "Bible Code" hypothesis, even though the software used to implement the word search was designed by both Rips and Witztum.
Dr. Eliyahu Rips. 2017.
Rips later distanced himself from Drosnin's book. In a 1997 statement on the matter, he pointed out that he didn't make or support some of the specific predictions Drosnin claimed. Nonetheless, Rips wrote quite clearly that "the only conclusion that can be drawn from the scientific research regarding the Torah codes is that they exist and that they are not a mere coincidence."
The method the scientists used to arrive at their conclusions is the Equidistant Letter Sequence (ELS). To find a word with some meaning, the method calls for you to pick a starting point in a text and a skip number, and then select letters while skipping the same number of spaces every time (in pretty much any direction). If you're lucky, a sensible word will be spelled out. This method works well if letters are arranged in an array, like this one –
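The basic ELS procedure can be sketched in a few lines of code. This is an illustrative toy: the sample string and the word it hides are invented for the example, and the actual studies search the Hebrew text (and also scan rows, columns, and diagonals of a letter array):

```python
def els(text, start, skip, length):
    """Equidistant Letter Sequence: take every `skip`-th letter,
    beginning at index `start`, ignoring spaces and punctuation."""
    letters = [c for c in text if c.isalpha()]
    return "".join(letters[start + i * skip] for i in range(length))

# A contrived English sample with a word hidden at skip 3.
sample = "car one dim eye"
print(els(sample, start=0, skip=3, length=4))  # prints "code"
```

The statistical controversy is not over whether such sequences exist (in a long enough text, they always do) but over whether the ones found in the Torah are more meaningful than chance would predict.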
The Bible Code made a recent reappearance in the public consciousness thanks to the work of author and fourth-generation antiques expert Timothy Smith. His 2017 book "The Chamberlain Key" describes how, following 25 years of research, he unlocked a "God code" in the Bible. He calls his book "the Da Vinci Code on steroids, but it's true."
Smith's decoding work is based on his own ancient copy of the Bible titled "The Leningrad Codex" - it's the oldest complete manuscript of the Hebrew Old Testament. Smith used a computer-driven application of the ELS method, as well as code-breaking techniques and his intimate knowledge of ancient and aboriginal ceremonial devices like scepters, crowns and thrones to arrive at his reading of the Bible.
Smith is a devout Christian and his conclusions revolve around Christian motifs. In particular, he claims to have found detailed information about Jesus's birth, crucifixion and resurrection within a passage in Genesis.
The book was the subject of a special on the History Channel, and a documentary series is being made about the travels leading to Smith's discoveries.
David McKillop, the executive producer for Jupiter Entertainment, which is creating the TV series, said that "Tim's quest is the ultimate treasure hunt for one of history's greatest mysteries, and his map is an ancient text that could possibly be talking to us."
Here's the History Channel's teaser for Smith's TV special
If you suspect that any sufficiently long text would yield similar "patterns," there are studies for you too. The Australian computer scientist Brendan McKay famously produced a table of assassination predictions from "Moby Dick."
While the Bible or "Torah Codes" can be criticized, there is scholarly evidence that ancient writers of the Bible, like Matthew, "consciously used numerical patterns or codes in their compositions," writes Dr. Randall Buth, the director of the Biblical Language Center and a lecturer at the Hebrew University in Jerusalem.
Another factor to keep in mind is that our understanding of how time and history work depends very much on our frame of reference. If time flows differently, for example as proposed by the Block Universe theory, all bets would be off and a book could theoretically contain the code of history, both past and future.
An Ivy League education without the Ivy League price tag.
We recently published an article outlining how you can take Yale University courses for free. Given the response to that article, we have decided to show you more classes that you can access at no cost. Just like last time, a certificate of completion is available for all of these classes for a fee, if you want to prove that you have bettered yourself this way.
So, here are 8 Harvard University courses you can take right now, for free.
Introduction to Computer Science
Knowing how to code is a vital skill in today's digital world. This entry-level course teaches the basics of computational thinking, programming problem solving, data structures, and web development, among other things. It will leave the learner able to code in several languages, including C, Python, and Java.
The class is self-paced and requires 10-20 hours of work to finish nine problem sets and a final project, all done online. This class will help you learn several of the five programming languages that Bjarne Stroustrup, inventor of C++, says you should learn in his Big Think interview.
The Architectural Imagination
Art and science are often viewed in opposition to one another, but in the field of architecture they meet in fantastic and beautiful ways. In this class, students will learn both the technical and cultural aspects of architecture, and gain a better understanding of how the buildings we inhabit relate to history, values, and pragmatic concerns.
The class is self-paced and consists of 3-5 hours of work over 10 weeks.
Super-Earths and Life
What life lies beyond our small world? Thirty years ago we knew of only nine planets; today we know of thousands orbiting nearby stars. In this course, students will learn about exoplanets, which ones might be the best candidates for harboring life, and why those planets are of the greatest interest. Combining concepts in astronomy and biology that have rarely been put together before, the class is an excellent introduction to one of the most interesting eras in astrobiology: today.
This class is self-paced and is offered over six weeks of 3-5 hours investment. What might life on those exoplanets look like? Jonathan Losos, also of Harvard, explains in his Big Think interview.
Justice
What is the right thing to do in a given situation? Would you still act justly if you could get away with acting horribly? These are some of the oldest and most important questions in philosophy. In this class, students will learn differing perspectives on justice from thinkers like Aristotle, John Locke, Immanuel Kant, John Stuart Mill, and John Rawls. The class is taught in English, but subtitles are available in Chinese, German, Portuguese, and Spanish.
It requires a time commitment of 2-4 hours a week for 12 weeks.
Leaders of Learning
How do you learn? Why do you learn? Can you name three people who would share your answers? In this class, students will identify their own style of learning and find out how that style fits into the ever-changing landscape of education. Later lectures focus on how to apply that knowledge to leadership, organizational structure, and the future of learning.
This course is self-paced, and is taken for 2-4 hours per week for six weeks.
Using Python for Research
Do you want to learn to code, and then learn how to actually use it? In this course, students will review the basics of the Python coding language and then learn how to apply that knowledge to research projects by means of tools such as NumPy and SciPy. This class is an intermediate level course, and a basic understanding of the Python language is ideal before beginning.
This course is self-paced, and is taken for 4-8 hours a week for four weeks.
American Government
The federal government of the United States can seem like a far-off and alien system, one which acts in strange ways; but it is a powerful force in the life of every American. To not understand how it works, and your place in it as a citizen and voter, is to be an irresponsible citizen. This course introduces students to the function, history, institutions, and inner workings of American government. No previous study or understanding of American politics is required, making the course ideal for non-American students who want to understand what exactly is going on there.
This course is self-paced and is taken for 3 hours per week for 16 weeks. It is a great start on issues that NYU law professor Kenji Yoshino finds can be remarkably difficult to interpret:
Humanitarian Response to Conflict and Disaster
We live in a world with staggering humanitarian crises, and responses to them that are often lacking. In this class, students will ask how to deal with humanitarian disasters through case studies of Zaire, Syria, the Balkans, and elsewhere. The history of humanitarian responses, and the frameworks in which those responses, past and present, operate, will be covered as well, and students will be challenged to ask whether those frameworks remain sufficient.
This course lasts five weeks and requires 3-4 hours of time investment per week.
The Opioid Crisis in America
One of the greatest challenges facing the United States today is the spike in opioid addiction. In this course, medical experts explain the causes of the crisis, the science of getting hooked, the realities of addiction, treatment options, and more. The class is free, and currently offers credit for SHRM-CP.
This course requires a 1-2 hour commitment per week, for seven weeks.
Many other great courses are available if these subjects aren't quite what you're looking for. They're free, they're great, and you're looking at them right now: what are you waiting for?
Elon Musk, Sam Harris, Ray Kurzweil and other visionaries discuss AI superintelligence at a recent conference.
A fascinating conference on artificial intelligence was recently hosted by the Future of Life Institute, an organization aimed at promoting “optimistic visions of the future” while anticipating “existential risks” from artificial intelligence and other directions.
The conference “Superintelligence: Science or Fiction?” featured a panel of Elon Musk from Tesla Motors and SpaceX, futurist Ray Kurzweil, Demis Hassabis of Google’s DeepMind, neuroscientist and author Sam Harris, philosopher Nick Bostrom, philosopher and cognitive scientist David Chalmers, Skype co-founder Jaan Tallinn, as well as computer scientists Stuart Russell and Bart Selman. The discussion was led by MIT cosmologist Max Tegmark.
The conference participants offered a number of prognostications and warnings about the coming superintelligence, an artificial intelligence that will far surpass the brightest human.
Most agreed that such an AI (or AGI, for Artificial General Intelligence) will come into existence; it is just a matter of when. The predictions ranged widely, with Elon Musk saying that one day an AI will reach “a threshold where it's as smart as the smartest most inventive human,” which it will then surpass in a “matter of days,” becoming smarter than all of humanity.
Ray Kurzweil’s view is that however long it takes, AI will be here before we know it:
“Every time there is an advance in AI, we dismiss it as 'oh, well that's not really AI:' chess, go, self-driving cars. An AI, as you know, is the field of things we haven't done yet. That will continue when we actually reach AGI. There will be lots of controversy. By the time the controversy settles down, we will realize that it's been around for a few years," says Kurzweil [5:00].
Neuroscientist and author Sam Harris acknowledges that his perspective comes from outside the AI field, but sees that there are valid concerns about how to control AI. He thinks that people don’t really take the potential issues with AI seriously yet. Many think it’s something that is not going to affect them in their lifetime - what he calls the “illusion that the time horizon matters.”
“If you feel that this is 50 or a 100 years away that is totally consoling, but there is an implicit assumption there, the assumption is that you know how long it will take to build this safely. And that 50 or a 100 years is enough time,” he says [16:25].
On the other hand, Harris points out that also at stake here is how much intelligence humans actually need. If we had more intelligence, wouldn't we be able to solve more of our problems, like cancer? If AI could help us get rid of diseases, then humanity is currently in the "pain of not having enough intelligence."
Elon Musk’s view is that we should be looking for the best possible future - the “good future,” as he calls it. He thinks we are headed either for “superintelligence or civilization ending,” and it’s up to us to envision the world we want to live in.
“We have to figure out, what is a world that we would like to be in where there is this digital superintelligence?,” says Musk [at 33:15].
He also brings up an interesting perspective that we are already cyborgs because we utilize “machine extensions” of ourselves like phones and computers.
Musk expands on his vision of the future by saying it will require two things - “solving the machine-brain bandwidth constraint and democratization of AI”. If these are achieved, the future will be “good” according to the SpaceX and Tesla Motors magnate [51:30].
By the “bandwidth constraint,” he means that as we become more cyborg-like, in order for humans to achieve a true symbiosis with machines, they need a high-bandwidth neural interface to the cortex so that the “digital tertiary layer” would send and receive information quickly.
At the same time, it’s important for the AI to be available equally to everyone or a smaller group with such powers could become “dictators”.
He brings up an illuminating quote about how he sees the future going:
“There was a great quote by Lord Acton which is that 'freedom consists of the distribution of power and despotism in its concentration.' And I think as long as we have - as long as AI powers, like anyone can get it if they want it, and we've got something faster than meat sticks to communicate with, then I think the future will be good,” says Musk [51:47]
You can see the whole great conversation here: