The Technological Singularity and Merging With Machines
The idea of a coming Singularity refers to a point in time of such radical, exponential progress that our minds cannot imagine what lies beyond it: the technological counterpart of a black hole's event horizon.
The term "singularity," which is often heard today, comes originally from my field, theoretical physics. It denotes a point in space and time where the gravitational field becomes infinite; at the center of a black hole, for example, we might find a singularity. In mathematics, it likewise refers to a point where a function becomes infinite. But the type of singularity you have probably been hearing about the most lately is the "Technological Singularity," and although it's not a new concept, it is definitely becoming a mainstream topic of conversation.
Countless books on the subject are being published, and Ray Kurzweil recently launched his documentary, "Transcendent Man," which shares his vision of a world in which humans merge with machines; it is currently playing to sold-out screenings around the planet and being discussed on web forums, blogs, and video sites.
It was also recently the subject of a TIME magazine cover story, "2045: The Year Man Becomes Immortal," which includes a five-page narrative. There is, moreover, a growing number of institutes, dozens of annual singularity conferences, and even Singularity University, founded in 2008 by the X PRIZE's Peter Diamandis and Ray Kurzweil and based at the NASA Ames campus in Silicon Valley. Singularity University offers a variety of programs, including the "Exponential Technologies Executive Program," whose stated goal is to "educate, inform, and prepare executives to recognize the opportunities and disruptive influences of exponentially growing technologies and understand how these fields affect their future, business, and industry."
My television series Sci Fi Science, on the Science Channel, aired an episode entitled "A.I. Uprising," which focused on the coming technological singularity and on the fear that mankind will one day create a machine that could threaten our very existence. One cannot rule out the possibility that machine intelligence will eventually surpass human intelligence. These superintelligent machines may become self-aware, pursue their own agendas, and one day even create copies of themselves that are more intelligent than they are.
But the road to the singularity is not going to be a smooth one. As I mentioned in my Big Think interview, "How to Stop Robots from Killing Us," Moore's law states that computing power doubles about every 18 months, a curve that has held sway for about 50 years. Chip manufacturing will eventually hit a wall: transistors will become so small, so densely packed, and so hot that chips will melt down, and electrons will leak out of the circuits due to the Heisenberg uncertainty principle.
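The scale of that 50-year curve is easy to underestimate. A minimal sketch, assuming nothing beyond the doubling rate stated above (the function name and time span are my own illustration, not figures from a chip-industry dataset):

```python
def moores_law_growth(years, doubling_period_years=1.5):
    """Growth factor after `years` of doubling every 18 months (1.5 years)."""
    return 2 ** (years / doubling_period_years)

# Over the roughly 50 years the curve has held, that is about 33 doublings,
# i.e. a growth factor on the order of ten billion.
factor = moores_law_growth(50)
print(f"Doublings in 50 years: {50 / 1.5:.1f}, growth factor: {factor:.3g}x")
```

The point of the exercise is that exponential curves like this cannot continue indefinitely on one physical substrate, which is why the heat and quantum-leakage limits above matter.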
Needless to say, it is time to find a replacement for silicon, and I believe its eventual replacement will take computing to the next level. Graphene, a single-atom-thick sheet of carbon, is one candidate with properties far superior to those of silicon, but the technology for manufacturing it at large scale is still up in the air. It's not clear at all what will replace silicon: a variety of technologies have been proposed, including molecular transistors, DNA computers, protein computers, quantum dot computers, and quantum computers. However, none of them is ready for prime time. Each has formidable technical problems that, at present, keep it on the drawing board.
Because of all these uncertainties, no one knows exactly when this tipping point will happen, although there are many predictions of when computing power will finally match, and then tower above, human intelligence. For example, Ray Kurzweil, whom I've interviewed several times on my radio programs, stated in his Big Think interview that by 2020 we will have computers powerful enough to simulate the human brain, but that we won't finish reverse engineering the brain until about 2029. He also estimates that by 2045, we will have expanded the intelligence of our human-machine civilization a billion fold.
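That billion-fold figure is roughly consistent with simple doubling arithmetic. A back-of-the-envelope sketch, assuming (as a simplification of my own, not Kurzweil's stated method) one doubling per 18 months:

```python
import math

def doublings_needed(factor):
    """Number of doublings required to reach a given growth factor."""
    return math.log2(factor)

# A billion-fold expansion is about 30 doublings (2**30 is roughly 1.07e9).
# At one doubling every 18 months, that takes about 45 years, which is
# close to the span between 2000 and 2045.
d = doublings_needed(1e9)
years = d * 1.5
print(f"{d:.1f} doublings, about {years:.0f} years at 18 months per doubling")
```

None of this proves the prediction, of course; it only shows that the number is what a sustained exponential trend would deliver on that timescale.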
But in all fairness, we should also point out that there are many different points of view on this question. The New York Times asked a variety of experts at the recent Asilomar Conference on AI in California when machines might become as powerful as humans, and the answers were quite surprising, ranging from 20 years to 1,000 years. I once interviewed Marvin Minsky for my national science radio show and asked him the same question. He was very careful to say that he does not make predictions like that.
We should also point out that AI specialists have proposed a variety of measures to deal with this. One simple proposal is to put a chip in the brains of our robots that automatically shuts them off if they get murderous thoughts. Right now, our most advanced robots have the intellectual capability of a cockroach (a mentally challenged cockroach, at that). But over the years, they will become as intelligent as a mouse, rabbit, fox, dog, cat, and eventually a monkey. When they become that smart, they will be able to set their own goals and agendas, and could be dangerous. We might also put a fail-safe device in them so that any human could shut them off with a simple verbal command. Or we might create an elite corps of robot fighters, as in Blade Runner, with superior powers to track down and hunt errant robots.
But the proposal getting the most traction is merging with our creations. Perhaps one day in the future, we might find ourselves waking up with a superior body and intellect, living forever. For more, visit the Facebook fan page for my latest book, Physics of the Future.