
Elon Musk wants testers for Tesla’s long-awaited ‘full self-driving’ A.I. chip

The Tesla CEO said the Hardware 3 upgrade has "1000 percent more capability" than the current hardware.

  • Elon Musk is looking for a few hundred more people to test and provide feedback about Tesla's long-awaited Hardware 3 update, according to an internal company message.
  • Hardware 3, first announced in August, will likely expand the autonomous abilities of Tesla cars.
  • It's still unclear just what those expanded capabilities will be, however.

Tesla CEO Elon Musk wants "a few hundred" more people to test out the company's new Autopilot Hardware 3, the long-awaited neural-net technology that the company has said will give cars "Full Self-Driving Capability" as an $8,000 option on top of the price of the car.

It's been hard to pin down details about Hardware 3. Musk had once promised that Tesla cars would be able to provide a coast-to-coast autonomous drive by the end of 2017. Of course, that never happened, though the cars do offer limited self-driving capabilities, including:

  • Autosteer, which detects painted lane lines and cars to keep you in your lane automatically
  • Auto-lane change
  • Summon, an option that starts the car and brings it to you
  • The ability to take on/off ramps, pass slow cars, and park automatically

In September, Tesla announced that it planned to have a team of internal testers try an early version of Hardware 3. One month later, the company removed an option from its website that let customers pre-order "Full Self-Driving Capability." Musk said the controversial option was "causing too much confusion."

Now, an internal Tesla message, ostensibly from Musk, shows that the company is offering to swap out Hardware 2 for Hardware 3 in the cars of anyone, customers or employees, who chooses to participate in the testing program. It's the "last time the offer will be made," the message reads.

Will the update really make Teslas "full self-driving"?

Tesla CEO Elon Musk unveils a new vehicle. Photo credit: Kevork Djansezian / Getty Images

It's unclear, exactly. The website 1reddrop, which covers Tesla news and technology, wrote that Tesla is focusing on improving GPS technology, and that increased computing power in Hardware 3 will translate to lower latency and quicker reaction times. Also, it could enable the cars to keep better live maps of roads:

Every Tesla with Autopilot engaged sends a ton of information back to the company's servers, and this information can be used to maintain live maps that are constantly being updated.

The biggest problem for Tesla right now is reliability. Autopilot hardware 2 and 2.5 are now stretched to their physical limits, but Autopilot Hardware 3 will be far more powerful in that it can process a lot of the information on-board.
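
The article doesn't quantify "lower latency," but it is easy to see why it matters. Here is a rough back-of-the-envelope sketch in Python, with made-up latency figures (not Tesla specifications), showing how much road a car covers while its perception stack is still processing:

```python
# Rough back-of-the-envelope illustration (not from the article): how far
# a car travels while an autonomy stack is still processing a frame.
# The latency figures below are made-up placeholders, not Tesla specs.

def distance_during_latency(speed_mph: float, latency_ms: float) -> float:
    """Distance traveled, in meters, during a given processing latency."""
    speed_m_per_s = speed_mph * 1609.344 / 3600  # mph -> m/s
    return speed_m_per_s * (latency_ms / 1000.0)

for latency_ms in (200.0, 50.0):  # hypothetical "old" vs. "new" pipeline
    d = distance_during_latency(65.0, latency_ms)
    print(f"At 65 mph, {latency_ms:.0f} ms of latency = {d:.1f} m traveled")
```

At highway speed, every 100 milliseconds shaved off the pipeline is roughly three meters of earlier reaction.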

Whether this means Teslas will be fully self-driving remains an open question. It also depends on how you define the term.

The Society of Automotive Engineers' six levels of driving automation

The Society of Automotive Engineers maintains six categories of self-driving capability, described below; a minimal code sketch of the taxonomy follows the list. Right now, Tesla cars arguably sit between Levels 2 and 3.

  • Level 0: Automated system issues warnings and may momentarily intervene but has no sustained vehicle control.
  • Level 1 ("hands on"): The driver and the automated system share control of the vehicle. Examples are Adaptive Cruise Control (ACC), where the driver controls steering and the automated system controls speed; and Parking Assistance, where steering is automated while speed is under manual control. The driver must be ready to retake full control at any time. Lane Keeping Assistance (LKA) Type II is a further example of level 1 self-driving.
  • Level 2 ("hands off"): The automated system takes full control of the vehicle (accelerating, braking, and steering). The driver must monitor the driving and be prepared to intervene immediately at any time if the automated system fails to respond properly. The shorthand "hands off" is not meant to be taken literally. In fact, contact between hand and wheel is often mandatory during SAE 2 driving, to confirm that the driver is ready to intervene.
  • Level 3 ("eyes off"): The driver can safely turn their attention away from the driving tasks, e.g. the driver can text or watch a movie. The vehicle will handle situations that call for an immediate response, like emergency braking. The driver must still be prepared to intervene within some limited time, specified by the manufacturer, when called upon by the vehicle to do so. As an example, the 2018 Audi A8 Luxury Sedan was the first commercial car to claim to be capable of level 3 self-driving. This particular car has a so-called Traffic Jam Pilot. When activated by the human driver, the car takes full control of all aspects of driving in slow-moving traffic at up to 60 kilometres per hour (37 mph). The function works only on highways with a physical barrier separating one stream of traffic from oncoming traffic.
  • Level 4 ("mind off"): As level 3, but no driver attention is ever required for safety, e.g. the driver may safely go to sleep or leave the driver's seat. Self-driving is supported only in limited spatial areas (geofenced) or under special circumstances, like traffic jams. Outside of these areas or circumstances, the vehicle must be able to safely abort the trip, e.g. park the car, if the driver does not retake control.
  • Level 5 ("steering wheel optional"): No human intervention is required at all. An example would be a robotic taxi.
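
For readers who prefer code to prose, here is one way to encode the taxonomy above as a data structure. This is a minimal sketch: the enum names and the helper function are illustrative paraphrases of the list, not an official SAE artifact.

```python
from enum import IntEnum

# A minimal sketch of the SAE taxonomy above as a data structure. The
# names and the helper are illustrative paraphrases, not official SAE text.

class SAELevel(IntEnum):
    NO_AUTOMATION = 0            # warnings and momentary intervention only
    HANDS_ON = 1                 # driver and system share control (e.g., ACC)
    HANDS_OFF = 2                # system drives; driver must monitor constantly
    EYES_OFF = 3                 # driver may look away, must retake control on request
    MIND_OFF = 4                 # no attention needed within geofenced areas
    STEERING_WHEEL_OPTIONAL = 5  # no human intervention required at all

def driver_attention_required(level: SAELevel) -> bool:
    """Per the list above, Levels 0-2 demand constant driver attention."""
    return level <= SAELevel.HANDS_OFF

print(driver_attention_required(SAELevel.HANDS_OFF))  # True
print(driver_attention_required(SAELevel.EYES_OFF))   # False
```

The IntEnum makes the ordering explicit: the threshold between Levels 2 and 3 is exactly where responsibility for monitoring the road shifts from the driver to the system.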
1reddrop wrote that the new update could "decisively bring Tesla into Level 4 territory. If they succeed within the projected timeline, Tesla could become the world's first automaker with a fully autonomous (by SAE standards) fleet on the road in 2019."

Neom, Saudi Arabia's $500 billion megacity, reaches its next phase

Construction of the $500 billion tech city-state of the future is moving ahead.

  • The futuristic megacity Neom is being built in Saudi Arabia.
  • The city will be fully automated, leading in health, education and quality of life.
  • It will feature an artificial moon, cloud seeding, robotic gladiators and flying taxis.

Human brains remember certain words more easily than others

A study of how memory works turns up a surprise.

  • Researchers have found that some basic words appear to be more memorable than others.
  • Some faces are also easier to commit to memory.
  • Scientists suggest that these words serve as semantic bridges when the brain is searching for a memory.

Cognitive psychologist Weizhen Xie (Zane) of the NIH's National Institute of Neurological Disorders and Stroke (NINDS) works with people who have intractable epilepsy, a form of the disorder that can't be controlled with medications. During research into the brain activity of patients, he and his colleagues discovered something odd about human memory: It appears that certain basic words are consistently more memorable than other basic words.

The research is published in Nature Human Behaviour.

An odd find


Xie's team re-analyzed memory tests of 30 epilepsy patients that had been conducted by Kareem Zaghloul of NINDS.

"Our goal is to find and eliminate the source of these harmful and debilitating seizures," Zaghloul said. "The monitoring period also provides a rare opportunity to record the neural activity that controls other parts of our lives. With the help of these patient volunteers we have been able to uncover some of the blueprints behind our memories."

Specifically, the participants were shown word pairs, such as "hand" and "apple." To probe how the brain remembers such pairings, the participants were shown one of the two words after a brief interval and asked to recall its partner. Of the 300 words used in the tests, five proved to be five times more likely to be recalled than the rest: pig, tank, doll, pond, and door.
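
A minimal sketch of that paired-associate procedure in Python may help (the word pairs here are placeholders; the real study drew from a 300-word pool):

```python
import random

# A toy version of the paired-associate test described above.
# The word pairs are placeholders; the real study used a 300-word pool.

pairs = [("hand", "apple"), ("pig", "door"), ("tank", "pond")]

def run_trial(pairs, recall):
    """Cue one word from each pair; score whether `recall` names its partner."""
    score = 0
    for a, b in pairs:
        cue, target = random.choice([(a, b), (b, a)])
        if recall(cue) == target:
            score += 1
    return score / len(pairs)

# A "participant" with a perfect memory of the study list:
lookup = {}
for a, b in pairs:
    lookup[a], lookup[b] = b, a

print(run_trial(pairs, lookup.get))  # 1.0
```

The researchers' finding, in these terms, is that real participants' hit rates were far from uniform across cue words.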

The scientists were perplexed that these words were so much more memorable than words like "cat," "street," "stair," "couch," and "cloud."

Intrigued, the researchers looked at a second data source from a word test taken by 2,623 healthy individuals via Amazon's Mechanical Turk and found essentially the same thing.

"We saw that some things — in this case, words — may be inherently easier for our brains to recall than others," Zaghloul said. That the Mechanical Turk results were so similar may "provide the strongest evidence to date that what we discovered about how the brain controls memory in this set of patients may also be true for people outside of the study."

Why understanding memory matters


"Our memories play a fundamental role in who we are and how our brains work," Xie said. "However, one of the biggest challenges of studying memory is that people often remember the same things in different ways, making it difficult for researchers to compare people's performances on memory tests." He added that the search for some kind of unified theory of memory has been going on for over a century.

The researchers say that if "we can predict what people should remember in advance and understand how our brains do this, then we might be able to develop better ways to evaluate someone's overall brain health."

Party chat


Xie's interest in this was piqued during a conversation with Wilma Bainbridge of the University of Chicago at a Christmas party a couple of years ago. Bainbridge was, at the time, wrapping up a study of 1,000 volunteers that suggested certain faces are universally more memorable than others.

Bainbridge recalls, "Our exciting finding is that there are some images of people or places that are inherently memorable for all people, even though we have each seen different things in our lives. And if image memorability is so powerful, this means we can know in advance what people are likely to remember or forget."


At first, the scientists suspected that the memorable words and faces were simply encountered more frequently and were thus easier to recall. They envisioned them as being akin to "highly trafficked spots connected to smaller spots representing the less memorable words." They developed a modeling program based on word frequencies found in books, news articles, and Wikipedia pages. Unfortunately, the model was unable to predict or duplicate the results they saw in their clinical experiments.
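
The researchers' modeling program isn't public, but the general shape of such a frequency baseline is simple. A sketch, with a stand-in corpus where they used books, news articles, and Wikipedia:

```python
from collections import Counter

# A sketch of the frequency baseline described above: predict that more
# frequent words are more memorable. The corpus is a stand-in; the
# researchers used books, news articles, and Wikipedia pages.

corpus = "the pig walked to the pond and the pig saw a door".split()
freq = Counter(corpus)

def predicted_memorability(word: str) -> float:
    """Relative corpus frequency as a (failed, per the study) memorability proxy."""
    return freq[word] / sum(freq.values())

print(predicted_memorability("pig"))   # ~0.17: frequent, so predicted memorable
print(predicted_memorability("door"))  # ~0.08: rarer, so predicted less so
```

It was precisely this kind of prediction that failed to match the clinical data, pushing the team toward the semantic-hub explanation below.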

Eventually, the researchers came to suspect that the memorability of certain words was linked to the frequency with which the brain uses them as semantic links between other memories. That would make them often-visited hubs in individuals' memory networks, and therefore places the brain jumps to early and often when retrieving memories. This idea was supported by observed activity in participants' anterior temporal lobe, a language center.

In epilepsy patients, these words were so frequently recalled that subjects often shouted them out even when they were incorrect responses to word-pair inquiries.

Seek, find

Modern search engines no longer simply match raw words when resolving a query: They also look for semantic connections, of context and meaning, so that the results they present can better anticipate what you're looking for. Xie suggests something similar may be happening in the brain: "You know when you type words into a search engine, and it shows you a list of highly relevant guesses? It feels like the search engine is reading your mind. Well, our results suggest that the brains of the subjects in this study did something similar when they tried to recall a paired word, and we think that this may happen when we remember many of our past experiences."
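
To make the analogy concrete, here is a toy sketch of similarity-based retrieval. The three-dimensional "feature" vectors are invented for illustration; a real engine would learn high-dimensional embeddings from data.

```python
import math

# A toy illustration of the search-engine analogy above: rank candidate
# words by similarity to a cue word. The 3-D "feature" vectors below are
# invented for illustration; real systems learn embeddings from data.

vectors = {
    "pig":  [1.0, 0.6, 0.8],
    "pond": [0.1, 0.5, 1.0],
    "door": [0.0, 0.1, 0.0],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def semantic_guesses(cue):
    """Candidates ranked by similarity to the cue, most similar first."""
    return sorted((w for w in vectors if w != cue),
                  key=lambda w: cosine(vectors[cue], vectors[w]),
                  reverse=True)

print(semantic_guesses("pond"))  # ['pig', 'door'] with these toy vectors
```

In the study's terms, a hub word is one that lands near the top of many such rankings, so the brain passes through it on the way to many different memories.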

He also notes that it may one day be possible to leverage individuals' apparently wired-in knowledge of their language as a fixed point against which to assess the health of their memory and brain.

Does conscious AI deserve rights?

If machines develop consciousness, or if we manage to give it to them, the human-robot dynamic will forever be different.

  • Does AI—and, more specifically, conscious AI—deserve moral rights? In this thought exploration, evolutionary biologist Richard Dawkins, ethics and tech professor Joanna Bryson, philosopher and cognitive scientist Susan Schneider, physicist Max Tegmark, philosopher Peter Singer, and bioethicist Glenn Cohen all weigh in on the question of AI rights.
  • Given the grave tragedy of slavery throughout human history, philosophers and technologists must answer this question ahead of the technology's development to avoid creating a slave class of conscious beings.
  • One potential safeguard against that? Regulation. Once we define the context in which AI requires rights, the simplest solution may be to not build that thing.
