Stephen Wolfram is a distinguished scientist, inventor, author, and business leader. Born in London in 1959, Wolfram was educated at Eton, Oxford, and Caltech. He published his first scientific paper at the age of 15, and had received his PhD in theoretical physics from Caltech by the age of 20. Having started to use computers in 1973, Wolfram rapidly became a leader in the emerging field of scientific computing, and in 1979 he began the construction of SMP—the first modern computer algebra system—which he released commercially in 1981. In recognition of his early work in physics and computing, in 1981 Wolfram became the youngest recipient of a MacArthur Prize Fellowship.
That same year, Wolfram set out on an ambitious new direction in science aimed at understanding the origins of complexity in nature. Through the mid-1980s, Wolfram continued this work, discovering a number of fundamental connections between computation and nature, and inventing such concepts as computational irreducibility. Following his scientific work on complex systems, in 1986 Wolfram founded the first research center and the first journal in the field, "Complex Systems."
In 1987, Wolfram launched Wolfram Research, Inc., which soon distinguished itself as a premier software company with the release of the first version of "Mathematica." A major advance in computing, "Mathematica" is a computational software program used in science, mathematics, and engineering.
By the mid-1990s his discoveries led him to develop a fundamentally new conceptual framework, which he then spent the remainder of the 1990s applying not only to new kinds of questions, but also to many existing foundational problems in physics, biology, computer science, mathematics, and several other fields. And after more than ten years of highly concentrated work, Wolfram finally described his achievements in his 1200-page book "A New Kind of Science."
Building on these previous projects, Wolfram in May 2009 launched Wolfram|Alpha—an ambitious, long-term project to make as much of the world's knowledge as possible computable, and accessible to everyone.
Question: What is the future of science that you envision in “A New Kind of Science”?
Stephen Wolfram: So a number of years ago I published this very big book called "A New Kind of Science." What is this book about? Well, it's really about a new direction, a new kind of science, as the title says. The premise of the book is that if we look back at the history of science, about 300 years ago there was this big idea: we can take the things we see in the natural world and start to describe them not just in terms of philosophy and logic, but by using mathematics. We have this language and methodology for thinking about nature, which is mathematics, mathematical equations and so on, and that has been a very successful idea, one that has led to a lot of modern physics and spread out into a bunch of different areas across the exact sciences. So maybe 30 years ago now, I got very involved in thinking about: is that really enough? Is this mathematical paradigm that we've developed enough to explain all the things we're interested in in the natural world?
And what I realized is that if we're going to have a theory about what happens in, for example, nature, there ultimately has to be some rule by which nature operates. But the issue is: does that rule have to correspond to something like a mathematical equation, something we have created in our human mathematics, or is there some more general source of possible rules by which nature can operate? And what I realized is that now, with our understanding of computation and computer programs, there is actually a much bigger universe of possible rules for describing the natural world than just these mathematical-equation kinds of things. And we can think about them in terms of computer programs: we can say, just imagine that nature is described by rules that correspond to computer programs. What are the possibilities for those rules?
Now, normally when we set up computer programs we build them for very particular purposes. We're constructing a program to perform a particular task, and we write this big complicated program with thousands, maybe millions, of lines of code to perform it. But there is a basic science question here: if we just look at the possible programs we could construct, say we pick at random a few little instructions to put into our program, what do all these possible programs do? What does this computational universe of possible programs look like?
So I started exploring this many years ago, and what I discovered really surprised me. When you look at this computational universe of possible programs, you find that while some of the very simple programs do very simple things, there are very simple programs that do immensely complicated things. It's a big surprise relative to our usual intuition from engineering, because usually when we think about making something complicated in engineering, we think we have to go to a lot of effort: we have to set up complicated rules, put in all these different steps and so on, to build our complicated thing. But what one discovers is that in this computational universe of possible programs, it's really easy to find simple programs that effortlessly produce immense complexity in their behavior.
So I first noticed my favorite example of this, a thing called the Rule 30 cellular automaton, in 1984, and for me that was a critical moment in the development of my scientific thinking. I view it a little bit like having had the luck of using computers to do experiments on this computational universe, and for the first time being able to see what was out there in it. It's a little bit like what Galileo got to do 400 years ago when he took a telescope and pointed it at the sky: it was pretty easy to see that there were mountains on the moon, and then the moons of Jupiter and things like that.
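The flavor of that experiment is easy to reproduce today. Here is a minimal Python sketch of the Rule 30 cellular automaton (an editor's illustration, not Wolfram's original code): each cell's next state is determined by itself and its two neighbors, according to the eight cases encoded in the binary digits of the number 30, and from a single black cell the pattern quickly becomes intricate and irregular.

```python
# Rule 30: a cell's next state is the bit of the number 30 indexed by
# the 3-bit neighborhood (left, center, right). Edges wrap around.
def step(cells, rule=30):
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single black cell and print a few generations.
row = [0] * 31
row[31 // 2] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

The entire rule fits in a single integer, yet the output does not settle into any simple repetitive pattern; that gap between the size of the rule and the complexity of the behavior is the point of the example.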
So, pointing a computer out into this computational universe, it was in a sense pretty easy to see that there were these very simple programs doing immensely complex things. And I then got to explore what this means. What happens when you look at this new paradigm, created from realizing that in this computational universe there is a lot of richness and complexity readily at hand? What I discovered is that a lot of problems in the natural sciences, whether in physics or biology or other areas, things where it looked as if nature had some mysterious secret about how it produces what we see, whether complexity in a physical system, a biological system, whatever, a lot of this we can now understand in terms of this paradigm of looking at the computational universe and seeing simple programs do very complex things. So the first big result of this new kind of science is to have a much broader supply of potential models for the natural world, and to understand how it can be that nature produces so much apparent complexity in lots of different kinds of systems.
So, having looked at this from a science point of view... and there are many directions to that science: trying to understand whether one can use it to find the fundamental theory of physics, trying to use it to understand things about the foundations of mathematics, things about biology and the importance, or lack thereof, of Darwinian evolution in the production of complexity in biology, lots of different applications in the natural sciences. But then one can ask: what else can one do, given this paradigm of finding simple programs in the computational universe that do such sophisticated things?
And one of the directions of application is in technology. In a sense, what is technology? Technology is finding things that exist in the world and applying them for human purposes, to achieve some human objective. We do that with materials we find in the natural world, whether it's magnets or liquid crystals or whatever else: we find these materials and then realize how to apply them to create things that are useful to us as humans.
So what one realizes is that in the computational universe it's the same story. Out in the computational universe there are all these programs that effectively perform algorithms that are rich and complex, and the question is: are they useful for human purposes? One of the things that has happened over the last decade or so is that increasingly we've been able to make use of these kinds of programs, just searching the computational universe for technology, finding programs that achieve particular purposes. So in our Mathematica system, an increasing fraction of the algorithms that we introduce are ones that were not constructed by a human software engineer going step by step, but instead were found by big searches in the computational universe.
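A toy illustration of that kind of search (an editor's sketch, not Wolfram Research's actual procedure): enumerate all 256 elementary cellular automaton rules and keep the ones that happen to perform a chosen task. Here the task is rotating the tape one cell to the left, and the search should find rule 170, the shift map.

```python
import random

# Each cell's next state is the bit of `rule` indexed by the 3-bit
# neighborhood (left, center, right). Edges wrap around.
def step(cells, rule):
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Task: rotate the tape one cell to the left. Test every candidate
# rule against random inputs and keep those that get all of them right.
random.seed(0)
tests = [[random.randint(0, 1) for _ in range(11)] for _ in range(20)]

found = [rule for rule in range(256)
         if all(step(t, rule) == t[1:] + t[:1] for t in tests)]
print(found)  # rule 170, the "shift left" rule, should survive
```

Nothing here was designed step by step; the program that does the job is simply discovered by exhaustive search over a space of candidates, which is the pattern the passage above describes, scaled down to a space small enough to enumerate completely.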
Recorded July 26, 2010
Interviewed by Max Miller