Question: What balance do you strike between your teaching, writing, science, and art?
David Gelernter: Nowadays, I spend my time mainly painting. I have an exhibition coming up. Generally speaking, I spend more time painting than doing anything else, except for writing. I’ve been writing some pieces connected with DLD, where I got to meet Frank Schirrmacher, who has been associated with the meeting for some time, is an editor and publisher at the Frankfurter Allgemeine Zeitung, and is a remarkable guy. So I agreed to write a series of pieces for them. He’s a wonderful guy, and I think Europe is more interested in the implications of technology, as opposed to its immediate market meaning. I don’t think people there are better educated or more thoughtful or any different; they’ve just got a somewhat different focus. The nature of the European market, and the origin of so much of the technology in the United States, gives them one degree of remove, which I think is useful. There are a lot of thoughtful people over there.
Question: What is the focus of your new art exhibition?
David Gelernter: Well, let’s see, this is the latest of a series of exhibits at Yale, which is a good place for me to exhibit. I like to sell paintings not from galleries but in a more informal, one-on-one way, so a non-commercial gallery space in which to exhibit is, for me, very useful. I’ve been trying for many years, I should say for many decades at this point, to figure out what Jewish art is, if there is such a thing. It’s come to seem to me that Jewish art is paintings of words. Not just paintings in which words appear, or words on a wall, but paintings in which the words themselves have meaning and decorative significance and conceptual weight. It’s hard to describe an image, especially one that is somewhat idiosyncratic, but anyway. That’s the general idea.
Question: How does Judaism shape your work?
David Gelernter: Genetically, to begin with. My own job description is image thinker, as many people’s has been; when I think up pictures, what I do is a matter of the images that float through my head. Many people think in images; it’s hard to say how many. Certainly many people think in images some of the time, and some think in images virtually all the time. When I’m working in software, I’m thinking of the picture that needs to appear on the screen, or that needs to appear in the user’s head, in order to make sense of the software. In the studio, more directly, I try, as any painter does, as any artist does, to take what is in my head and make it concrete, which is a struggle, which isn’t easy, but which is what art has always been about.
When I write, I tend to write vividly or try to write vividly, and it’s also a matter of the images that drift through one’s head. So, this is the way I deal with the world, picture-wise.
Question: Why do you believe Judaism is the central intellectual development in Western history?
David Gelernter: It seems to me, beyond doubt, that Judaism is the most important intellectual development in Western history, for two reasons: one having to do with the aesthetic and spiritual, and the other having to do with the ethical. To begin with ethical and moral issues: Judaism invented the idea of mankind as an entity. We see striking differences between ancient Israelite literature and Greek literature in, let’s say, the first millennium B.C. There is a word in Greek that has no equivalent in Hebrew, namely “barbarian”: a Greek word meaning someone who babbles, who doesn’t speak Greek, who is foreign, who is culturally inferior by definition and of very little interest. Not only different, but boring. Judaism, meanwhile, developed the idea of a single god, which was a revolutionary and bizarre idea when it first emerged some 3,000 years ago. The reasoning was that if there is really only one god in the world, he has to be everybody’s god. Everybody should have the right to say, “This is my god.” Everybody must have that right. And if you look at who that community is, who the faithful are in principle, it’s everybody. So Judaism has the idea that ethical laws, moral rules and strictures, apply to everybody. Not that everybody bears the same obligations to carry them out: there were stricter requirements of Jews, or Israelites, than there are of people in general. Judaism has never been a proselytizing religion. It doesn’t really care, as a matter of fact it is indifferent, whether people become Jews or join the Jewish community, but it is very clear on what the basic moral obligations of mankind are: respect for life, respect for justice, kindness to animals, familial and sexual fidelity, and refraining from sexual crimes. These are the so-called “Seven Laws of the Sons of Noah,” meaning that they apply to everybody.
So, without going into a lengthy disquisition, Judaism has the idea that there is a simple moral code which applies not only to the Israelite people, or the Israelite nation, but to everybody, and the revolutionary idea that not only is there one god, but there is essentially one man, one mankind, the whole world. So on festival occasions at the Temple in Jerusalem, 70 sacrifices would be brought at certain points, one for each of the 70 nations then thought to exist in the world.
Judaism has an aesthetic and spiritual side also, of course. Judaism is obsessed with imagery. One often finds that stereotypes about it are either basically right or exactly wrong; they are rarely in between. Judaism is often described as being hostile to imagery. But we know that can’t be right, because the Hebrew Bible underlies Western literature. Hebrew poetry, the poetry of the Psalms, the Prophets, the Book of Job, is the basis of Western literature. Hebrew prose narrative is the basis of Western narrative. There is no such thing as great poetry without imagery; the idea is absurd. There is no such thing as great writing that isn’t vivid and vibrant, and that means based on images. And we find, in fact, that the imagery of the Bible is the imagery that recurs throughout Western literature and Western art, from the split-open Red Sea, to the handwriting on the wall, to the chariot of fire. These are images that are not only painted in the developing tradition of medieval art and Western realist painting, but recur in Western literature in all languages down to this afternoon.
So for both of these reasons, Judaism has a commanding role in the creation of the culture and civilization that we’ve occupied for several thousand years, and especially so with the emergence of the liberal modern nation, which is a sort of joint invention of the United States and of Great Britain in the 17th and 18th centuries. These were Christian nations, but the Christianity of early America, and of Britain in the Elizabethan age and especially the age of the civil wars and Cromwell, is what is often called “Hebraic Christianity,” or “Old Testament Christianity.” It was a profoundly Hebrew-inspired sort of Christianity. Not that people thought of themselves as Jews, because they did not, but both the early United States and early Britain repeatedly referred to themselves as “The New Israel.” The idea of freedom and liberty emerges in the United States on the basis of the story of the Exodus. The biblical verse “Let my people go,” which is repeated many times by Moses to Pharaoh, becomes fundamental in American history: when religious zealots who were persecuted in England immigrate to America in the 17th century, when the United States declares its own independence and freedom as a nation, during the Civil War when the North becomes gradually resolved under President Lincoln to free the slaves, and then again in the Civil Rights Movement of the late ’50s and ’60s.
So the notion of freedom and the notion of equality are derived by the founders of English and American liberalism from the opening of the Bible, which says that all men are created in God’s image; therefore you are not allowed to make distinctions on the basis of race, color, and creed. All men, being in God’s image, are to be treated justly and fairly. Abraham Lincoln put that most concisely. And interestingly, the idea of democracy too: if you read the early literature in the United States developing the idea of modern democracy in the 1600s, especially in New England and to some extent in Virginia, biblical verses are quoted constantly. Not only the ones in which Moses sets up what is described as a Jewish commonwealth, where each tribe is essentially told to furnish its own leaders, to tell Moses who its leaders will be. It is also the case that the Hebrew Bible is an aggressively anti-monarchy book. There are vivid denunciations of the idea of a king, the rights of kings, an absolute king. Prophets in the Bible confront kings, demanding in the name of God that they be fair and just and honorable, and in fact Israel was told that if it had any sense, it wouldn’t have a king to begin with.
So in lots of ways, and this is something that used to be well known, the last couple of generations in Western culture, I would say since the Second World War, have been secularizing generations in which we were more apt to look at ancient Greece than at ancient Israel. But as a matter of historical record, it’s easy to trace these ideas, also in the philosophy of the English Enlightenment. It’s easy to open a book of Locke and notice that he keeps quoting the Bible, or Hobbes, or Selden, or others of the English philosophers who provided the intellectual counterweight to the active and pragmatic liberalism of the founding fathers.
Question: What is cloud computing, and what do you believe its future will be?
David Gelernter: The idea of the cloud is that I compute on many platforms in many places. I use many different machines: I have a machine at home, a machine at work, a couple of laptops, maybe a cell phone which is itself a computing platform, an iPod, an iPad, a Blackberry, or whatever it is. There are a lot of different platforms. I travel; I need to compute in a lot of different places. So, for practical reasons, rather than taking my information and putting it in the file system on my new laptop, or on my machine at home, or on my cell phone, it’s much easier for me just to let the information float off somewhere so that it’s always sort of overhead, in some intangible place around me, and I can tune it in. In the sense that I can tune in C-SPAN from any cable-connected TV, I want to be able to tune in my information and see it from any internet-connected computer. It’s important in terms of portability, and it has other major pragmatic advantages, some of which have not yet been fully realized. It is still an enormous nuisance to buy a new computer, which is absurd. When I get a new computer, it sits in my front hall for three weeks while I work up the courage to install it. I usually wait until one of my sons is home so he can do the work for me. It should be trivial: what I want is to get a new computer, take it out of the box, plug it in, take a sledgehammer, smash the old one to bits, and be online. But because the cloud doesn’t really function the way I want it to yet, one has to copy, painfully, the entire file system from one computer to the other, and even if one rigs up a special connection, that’s a nuisance. One always winds up missing things.
So, anyway, you need a cloud because you have a lot of computers. You need a cloud because you often get new computers that are born empty. Maybe most important, you need a cloud for security. More and more of people’s lives are going online. For security and privacy, I need the same sort of serious protection for my information that my money gets in a bank. If I have money, I’m not going to shove it in a drawer under my bed and protect it with a shotgun or something like that. I’m just going to assume that there are reasonably trustworthy institutions that can take care of the money for me. By the same token, I don’t want to worry about these issues, particularly with machines that are always on, always connected to the network, easy to break into. I don’t want to manage the security on my machine. I don’t want to worry about encryption; I don’t want to worry about other techniques to frustrate thieves and spies. If my information is out on the cloud, not only can somebody else worry about encryption and encoding it, not only can somebody else worry about barriers and logon protections, but, going back to Linda and the idea of parallelism and a network server existing not on one machine but spread out over many, I’d like each line of text that I have to be spread out over a thousand computers, let’s say, or over a million.
So if I’m a hacker and I break into one computer, I may be able to read a vertical strip of a document or a photograph, which is meaningless in itself, and I have to break into another 999,999 computers to get the other strips. Or it may be more computers than that. The cost of computers is going asymptotically to zero; of course it will always cost money to connect them and keep them running. So, not only for matters of convenience, which are very important (I need to be able to get my data anywhere, on any platform), but even more for privacy and security, when people talk about a cloud they mean information that’s available on any platform, managed not by me but by an organization in which I can place as much trust as in the institutions of my community or my city that patrol the streets, bank my money, and generally keep civilization running. They need to do the same thing with respect to the information landscape and privacy and security and so forth.
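The vertical-strip idea can be sketched in a few lines of Python. This is a toy illustration, not Gelernter’s actual design: the round-robin striping scheme and the function names here are invented for the example, and a real system would use cryptographic secret sharing on top of (or instead of) plain striping.

```python
def stripe(text: str, n: int) -> list[str]:
    """Split text into n vertical strips: strip i holds characters i, i+n, i+2n, ...
    Each strip alone is a near-meaningless scatter of characters."""
    return [text[i::n] for i in range(n)]

def reassemble(strips: list[str]) -> str:
    """Rebuild the original text; every strip is required.
    Character j of the original lives at strips[j % n][j // n]."""
    n = len(strips)
    length = sum(len(s) for s in strips)
    return "".join(strips[j % n][j // n] for j in range(length))

strips = stripe("Attack at dawn", 3)   # three servers each hold one strip
original = reassemble(strips)          # all three strips together restore it
```

An attacker who compromises one server sees only every third character; missing any single strip makes reassembly fail. A robust version of the same idea is Shamir secret sharing, where any k of n shares suffice and fewer than k reveal nothing.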
Question: How can we fix ingrained software flaws rather than accepting them?
David Gelernter: Well, there are a bunch of big, juicy flaws one could talk about. I think the most important flaw that haunts us today is the way that software is stuck in 1984. The hardware world has changed dramatically; the world of computer users, everything, has changed, but when I turn on a computer I see a desktop with windows and icons and a menu, and I have a mouse. This so-called desktop UI or GUI (graphical user interface) was a brilliant invention at Xerox’s Palo Alto Research Center in the 1970s, where a group led by Alan Kay and others developed the GUI for the first personal computer, called the Alto. So in 1976, ’77, ’78, this was an extraordinary invention. Apple saw the Xerox demo in, I think, 1979, first implemented the idea on the Lisa in 1983, and then in a more commercially successful form on the Mac in 1984; the brilliant move, the brilliant idea, was the GUI. However, 1984 was a long time ago. It’s a full generation ago. If I sat a freshman in one of my classes today down with a 1984 Mac, he wouldn’t even know it was a computer. It’s got this tiny little screen, it’s sort of like an upright shoebox, it doesn’t have a hard drive in it, it ordinarily has no internet connection; it’s obviously wildly different in terms of its capabilities. However, if I turn it on for him and he looks at the display, he’s going to be right at home, because it’s exactly what he sees on his Mac or his PC, or whatever platform he’s got today. He’s got windows, and they overlap, and there are menus and icons, and you drag the mouse around.
And so this interface, the windows interface, the desktop interface, was a brilliant innovation, the most important in computing, when Xerox came up with it in the 1970s. It was a tremendously important step forward when Apple popularized and commercialized it with the Mac in 1984, and it was still reasonably fresh and new when Microsoft took over the world with Windows 3.0 in 1990. Microsoft was already a company, but that’s the product that made it the dominant power in computing for the next period of years. The interface was still pretty good in 1990. By 1995, the Web was up and running, the uses of computers were becoming different, and email was changing the way people communicated. By the late ’90s, the Web and Web commerce had become essential to the economy in a lot of ways. Cell phones were becoming ubiquitous, as were computing platforms of other sorts; there were all sorts of new ways to communicate. In the last 10 years, the emergence of social networking sites and the like has once again changed the picture.
So users today are a radically different group of people. They use computers for different purposes than they did in 1984. Today I use a computer first of all as a communication device, and second to manage the information that I have. It’s essential for me in dealing with the world in a lot of different ways, in finding out what’s going on, in keeping my own information up to date. I certainly don’t use a computer for computing. Back in the 1980s, people still did compute with computers. The major users were people who had computationally serious applications at the time: they were running spreadsheets or simulations, and video games have always been major consumers of cycles, going back then too. But the emergence of the network, the radical increase in the power of the hardware, the radical increase in the size of the user base, the radical change in the purposes to which computers are put, the radical difference in graphical capabilities, the modern high-definition screen as opposed to the small, low-definition, not even grey-scale but black-and-white screen that the first Mac had and that Xerox was working with: all this suggests that 1984 software is probably not going to be the right solution for 2010 computing.
However, what can you say about an industry which makes so much money so fast, has so many stockholders, and is responsible to so many owners and employees, except that it’s naturally reactionary? All the companies that depend on Microsoft, all the companies that depend on Apple, and the few other producers at the edges dealing with Linux machines and such, have heavy, what do you want to call it, fiduciary responsibilities that make them reactionary. If you are a very successful company, you’re slow to change except at the edges. You want everybody to think you’re on the leading edge; you do your best to look as if you’re changing. But nonetheless, you’ve got your windows, and you’ve got your menus, and you’ve got your mouse. You’ve got a display which treats the screen as an opaque surface, a flat surface, a desktop, with windows sort of tacked to it as if it were a bulletin board.
What I want is not an opaque surface; I want to think of the screen as a viewport that I can look through, to a multi-dimensional information landscape on the other side. I don’t want a little bulletin board or a little desktop to put stuff on. That was a brilliant idea in 1984; today it’s a constraining view, and it’s obsolete.
Question: How can a grasp of emotional subtext be built into artificial intelligence?
David Gelernter: AI used to be, in effect, a completely separate field. If you were working in Artificial Intelligence, which had an obvious interface to the neurosciences, to cognitive science, cognitive psychology, experimental psychology, and the philosophy of mind, that was a very different field from software: software engineering, software design, software architecture. Today the fields are increasingly in contact, as techniques developed in AI have become ubiquitous in modern software. Generally not very fancy techniques, but systems of rules: what are called, or used to be called, expert systems. They’re fundamental in a lot of ways. AI, going back to the 1950s, certainly did want to build useful software, and it still does; that remains an important goal. But at the heart of the field there was also a philosophical or scientific question: how does the mind work? What does it mean to think? Finding a definition for thinking, understanding thinking, had proven enormously elusive over the centuries. Since Descartes, with his opening bid, essentially created the field of philosophy of mind in the 17th century, it has been very difficult to understand what the mind is, what consciousness is, what understanding is, and what it means to understand and communicate as opposed to merely using language in a robotic or mechanical way.
Now, AI people, in trying to understand thought and mind, generally approach it by attempting to build a working model in software. The computer has seemed mind-like to people since the 1950s. During the 1950s it was widely called an electronic brain; people don’t use that phrase anymore, but they still think of the computer in the same terms. It was an interesting idea, trying to build a working model of human thought out of software: if I could get a computer to act as if it were thinking, then I could look at the software and maybe learn something. Not necessarily, because software is very different from brain-ware, but I might get some idea of what thought really consists of, and maybe what consciousness consists of.
Now, my argument with the direction AI has taken is this. Borrowing from the standard approaches in philosophy of mind, and in most cases, though not all, from approaches in cognitive psychology and experimental psychology, AI has tended to say, “We want to know about thinking; then we’ll move on to emotion, or maybe consciousness.” Thinking is what we want to understand; the really important activity of the mind is thinking, solving problems. A lot of people in AI used problem-solving as an equivalent of thinking. They said, “Here’s what we’re working on: artificial thought, artificial intelligence, problem-solving software.” But it’s obvious to anybody that the mind does much more than solve problems. It’s very rare for anybody to go about solving a problem formally. Certainly we don’t do a lot of mathematical problem-solving, or problem sets in physics, most of the time, and to the extent that we are confronted with problems to solve, we almost always first have recourse to experience and think, “Well, what did I do the last time?” But in a more fundamental way, it is obvious to anybody, maybe to anybody who is not in AI, that if I am working at my computer and I get tired and lean back and look out the window and just watch the passing scene, I’m still thinking; my mind hasn’t shut down. I watch what’s happening; I react in subtle, cognitive ways to what I see. When I get tired, when my mind starts to drift, I move into the free-associative state that was studied by Freud and that we know precedes falling asleep; free association is a kind of thinking also. My mind doesn’t shut off, but I’m certainly not solving problems; I’m wandering around. And we also know that when we sleep, we think. Sleep thought is different from waking thought. Sleep thought is not solving problems in mathematics, or solving any kind of problem in a methodical way.
Sleep thought is image thought, for the most part, and sleep thought is hallucinatory. I see things that aren’t there.
So we need to understand the connection, the spectrum that connects wide-awake, focused, alert, problem-solving thought with what happens to my mind as I get tired, as my focus decreases, as I approach sleep. Actually, the brain goes through several oscillations like this during the day, but there is a continuous spectrum connecting, on the one hand, the most focused, sharpest kind of analytical thought of which I am capable, and on the other, the least focused kind of thought, in which my mind drifts and ultimately I find myself asleep and dreaming.
The field of Artificial Intelligence has studied only the very top end of the spectrum, and still tends to study only the very top end. It tends to say: what is thinking? It is this highly focused, wide-awake, alert, problem-solving state of mind. But not only is that not the whole story; the biggest unsolved problem, the one that has tended to haunt philosophy of mind, cognitive psychology, and AI, is creativity. People have always been fascinated: what makes for a creative person? What explains a creative leap, which is a well-defined psychological event? People know when it happens to them. There is general agreement that to be creative is to have the ability to invent new analogies: to connect two things that are not obviously related, but such that once you have made the connection, you can see that, yes, there is a relationship, and other people can see the relationship too. Creativity flows from that.
Now, we know that the invention of analogy comes not from highly focused, analytic, problem-solving thought. Creating analogy means connecting thoughts, letting your mind drift essentially from one thought to another. So not only do we need to study the entire cognitive spectrum, the cognitive continuum, because that is what human beings are: they are not problem-solving machines; they do many kinds of thinking that are not problem-solving. But also, if we ever want to know what creativity is, if we ever want to know what goes into the invention of a new analogy, we are going to have to study the free-associative states in which the mind drifts from state to state to state.
And finally, to bring in the question of emotion: how do we connect thoughts? I’ve argued, and I won’t go through the whole argument now, that as your focus declines, as you become less alert, as you look out the window and your mind starts to wander and you start to get drowsy, emotion plays a more and more important role in your thought. Emotion is what allows us to take two thoughts or ideas that seem very different and connect them, because emotion is a tremendously subtle thing: a subtle kind of code, or tag, that can be attached to a very complicated scene. If I ask, “What is your emotion on the first really warm day in March or April, when you go out and you don’t need a coat, and you can smell the flowers blooming, and there may be remnants of snow but you know it’s not going to snow anymore, and there’s a certain springiness in the air, what do you feel?” It’s not that you feel happy, exactly. There are a million kinds of happiness. It’s a particular shade of emotion, different from what you feel on a million other occasions. You go to the mailbox and see a letter from a girlfriend of 15 years ago whom you haven’t heard from since. Or you read something or other in the newspaper, or your computer breaks just when you need it. There are many, many circumstances in which I can say I’m happy or I’m sad, but I need to go much further, beyond language. I can’t use ordinary language to describe the nuanced emotions I feel on these occasions. They’re much more subtle than elation or depression. But the mind knows what they are even if they can’t be reduced to words, and I can connect two scenes that seem, and in fact are, very different, where the actors are different, the settings are different, the colors are different, the sounds are different, but which made me feel the same way; I can connect them and create analogies. So that is how it ultimately fits together.
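The emotion-as-tag idea can be caricatured in code: represent each remembered scene by a small vector of emotional coordinates, and connect scenes whose vectors point the same way. This is only a toy sketch of the mechanism described above; the three axes, the numbers, and all the names here are invented for illustration and are not Gelernter’s model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length emotion vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical emotion tags on remembered scenes, as (warmth, nostalgia, anxiety).
memories = {
    "first warm day of spring": (0.9, 0.6, 0.1),
    "letter from an old girlfriend": (0.8, 0.7, 0.2),
    "computer breaks at a deadline": (0.1, 0.0, 0.9),
}

def most_analogous(query, memories):
    """Return the remembered scene whose emotional tag best matches the query."""
    return max(memories, key=lambda scene: cosine(memories[scene], query))
```

The point is that retrieval here ignores surface content entirely: two scenes with different actors, colors, and sounds are linked purely because they carry nearly the same feeling, which is the analogy-making step the passage describes.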
Question: How has surviving the Unabomber attack changed your life?
David Gelernter: Zero. It was my responsibility, as I think it would be anybody’s who had been attacked in a particularly cowardly and despicable fashion, to go on. If I had said, “This attack has changed me in the following 10 ways”... I’m not interested in being changed by criminals, murderers, and terrorists. I’m interested in being whoever I was destined to be as a member of my family and my community, and that’s what I’ve been doing. It slowed me down and presented physical challenges, but it didn’t change my worldview. Worldview in the sense of that tremendously useful German word used in philosophy, Weltanschauung: not just looking around, but how I make sense of things, how I put them together in a coherent way. My worldview is the same.
Question: Has being the victim of an attack changed your feelings about terrorism?
David Gelernter: I’m not a victim. I never was and never will be. Victimhood is something you choose or something you reject. I have rejected it, as so many others have done before me and are doing today. I hate the tendency of society to glorify victimhood and to speak of oppression and victimhood and persecution as some sort of badge of honor. I’m not a victim.
On terrorism, on the other hand, I guess it’s fair to say that I had a close-up, personal look at terrorism. I don’t think my views have changed any. The fact is, any member of the American Jewish community has relatives who lived through the Holocaust and, more important, has relatives or close friends in Israel who were either attacked themselves or whose families have experienced terrorist attack. Terrorism goes back many centuries, but it has always been a weapon of choice of Jew-haters and Israel-haters. So the tragic fact is that the reality of terrorism, its fundamental cowardice, its fundamentally anti-human character, is familiar to everybody, and I should say not just in the American Jewish community. The fact is that America is unique in its sympathy for Israel. Europe certainly doesn’t feel this way; Asia doesn’t feel this way. This is not a feeling only of American Jews. In fact, in many cases the Christian community has shown itself much more interested in Israel’s fate and well-being than the Jewish community, which has its own political axes to grind. I think America in general has felt close to Israel, in some ways because the states are so similar: no nation in the world aside from the United States and Israel has been set up, by people with Bibles in their back pockets, as a New Israel. So there has always been that sympathy, and growing up in this country, back in the 1960s and ’70s, one had the feeling that terrorist attacks on Israel were hitting close to home. It was impossible not to be aware of the nature of terrorism, the threat of terrorism. It’s something that I’ve always lived with, tragically, as has everybody who has felt close to Israel.
Question: What makes you optimistic about the century ahead?
David Gelernter: In one word, maybe, I should say: graphics. Not only computer graphics or animation, but the enormously increased scope for pictures: for showing pictures, for seeing pictures, for seeing things. Seeing is a source of wisdom and pleasure in a lot of ways. Mankind really has had no vocabulary to discuss color, because if you look at art history, until two generations ago nobody knew what paintings looked like. They could be reproduced in black and white going back to the 19th century; before then they couldn’t be reproduced at all. Until, say, the 1930s, ’40s, ’50s, there was no way to show color. You could say that Titian is a great colorist, or that Velazquez has extraordinarily subtle browns, or that the 13th-century glass at Chartres is unique because of its special blue, but you couldn’t see it. You had to travel to France, or to Venice, or wherever, to see it. And even once you were there, you’re not going to stay planted in front of a picture in a museum, nor are you going to camp out in a cathedral. But computers have made not only displaying images on screens but printing them on paper, color printing, vastly better and less expensive.
The possibility that we have now of seeing what mankind has done, the art that has been made, the cities that have been built, the landscapes that have drawn people, is tremendously exciting. And to see each other, because ultimately what people want to see most of all is other people. That’s exciting. It opens up a new world that mankind has longed for ever since he first felt, “Colors are good, and I want to make my world colorful, and I want to see my fellow human beings, and I want to build things, and I want the horizons to be further than what I can see from my front door.”
Recorded on April 1, 2010.