Who's in the Video
David Gelernter is professor of computer science at Yale, chief scientist at Mirror Worlds Technologies, a contributing editor at the Weekly Standard, and a member of the National Council on the Arts.[…]

The basic user interface of our personal computers has stayed the same for a generation. How can we move beyond the desktop?

Question: How can we fix ingrained software flaws rather than accepting them?

David Gelernter: Well, there are a bunch of big, juicy flaws one could talk about. I think the most important flaw that haunts us today is the way that software is stuck in 1984. The hardware world has changed dramatically. The world of computer users, everything has changed, but when I turn on a computer I see a desktop with windows and icons and a menu, and I have a mouse. This so-called desktop UI, or GUI (graphical user interface), was a brilliant invention at Xerox's Palo Alto Research Center in the 1970s. A group led by Alan Kay and others developed this idea of the GUI for the first personal computer, called the Alto. So in 1976, '77, '78, this was an extraordinary invention. Apple saw the Xerox demo in, I think it was 1979, and first implemented this thing on the Lisa in 1983, and in a more commercially successful form on the Mac in 1984; the brilliant move, the brilliant idea, was the GUI. However, 1984 was a long time ago. It's a full generation ago. If I sat a freshman in one of my classes today down with a 1984 Mac, he wouldn't even know it was a computer. I mean, it's got this tiny little screen, it's sort of like an upright shoebox, it doesn't have a hard drive in it. It's obviously wildly different in terms of its capabilities. It has no internet connection, ordinarily. However, if I turn it on for him and he looks at the display, he's going to be right at home, because it's exactly what he sees on his Mac or his PC, or whatever platform he's got today. He's got windows, and they overlap, and there are menus and icons and you drag the mouse around.

And so this interface, the windows interface, the desktop interface, was a brilliant innovation, the most important in computing, when Xerox came up with it in the 1970s. It was a tremendously important step forward when Apple popularized and commercialized it with the Mac in 1984. It was still reasonably fresh and new when Microsoft took over the world with Windows 3.0 in 1990, which was—I mean, Microsoft was already a company, but that's the product that made Microsoft the dominant power in computing for the next period of years. That was still pretty good in 1990. By 1995, the Web was up and running, the uses of computers were becoming different, email was changing the way in which people communicated. By the late '90s, Web commerce was emerging and the Web became essential to the economy in a lot of ways. Cell phones were becoming ubiquitous, as were computing platforms of other sorts, and there were all sorts of new ways to communicate. In the last 10 years, the emergence of social networking sites and things like that has once again changed the picture.

So, users today are a radically different group of people. They use computers for different purposes than they did in 1984. I mean, today I use a computer first of all as a communication device, and second of all to manage the information that I have. It's essential for me in dealing with the world in a lot of different ways: finding out what's going on, keeping my own information up to date. I certainly don't use a computer for computing. Back in the 1980s, people still did compute with computers. The major users were people who had reasonably serious computational applications at the time: either they were running spreadsheets or they were running simulations, and video games have always been major consumers of cycles, of course, going back then too. But the emergence of the network, the radical increase in the power of the hardware, the radical increase in the size of the user base, the radical change in the purposes to which computers are put, the radical difference in graphical capabilities—a modern high-definition screen as opposed to the small, low-definition screen, not even grayscale but black and white, that the first Mac had and that Xerox was working with—all this suggests that 1984 software is probably not going to be the right solution for 2010 computing.

However, the industry is naturally reactionary. What can you say about an industry that makes so much money so fast, has so many stockholders, and is responsible to so many owners and employees? You know, all the companies that depend on Microsoft, all the companies that depend on Apple, and the few other producers at the edges dealing with Linux machines and things like that, they have heavy, what do you want to call it, fiduciary responsibilities that make them reactionary. I mean, if you are a very successful company, you're slow to change except at the edges. You want everybody to think you're leading edge, you want to do your best to look as if you're changing, but nonetheless you've got your windows, and you've got your menus, and you've got your mouse. You've got a display that treats the screen as an opaque surface, a flat surface, a desktop, with windows sort of tacked to it as if it were a bulletin board.

What I want is not an opaque surface; I want to think of the screen as a viewport that I can look through. I want to be able to look through it to a multi-dimensional information landscape on the other side. I don't want a little bulletin board or a little desktop to put stuff on. That was a brilliant idea in 1984; today it's a constraining view, and it's obsolete.

Recorded on April 1, 2010.
