Big Think Interview With David Gelernter
David Gelernter is professor of computer science at Yale, chief scientist at Mirror Worlds Technologies, contributing editor at the Weekly Standard, and a member of the National Council on the Arts. He is the author of several books and many technical articles, as well as essays, art criticism, and fiction. The "tuple spaces" introduced in Carriero and Gelernter's Linda system (1983) are the basis of many computer-communication and distributed programming systems worldwide. According to Reuters, his book "Mirror Worlds" (Oxford University Press, 1991) "foresaw" the World Wide Web and was "one of the inspirations for Java"; the "lifestreams" system (first implemented by Eric Freeman at Yale) is the basis for Mirror Worlds Technologies' software. Gelernter is also the author of "The Muse in the Machine" (Free Press, 1994), the novel "1939" (Harper Perennial, 1995), "Machine Beauty" (Basic Books, 1998), and most recently, "Judaism: A Way of Being" (Yale University Press, 2010).
Question: What will be the next big technological innovation?
David Gelernter: Software unification. So that I no longer care what computing device I pick up, whether it’s a laptop or desktop, whether it’s one I own or one in a public place, whether it has a small screen or a large screen. I sit down, I identify myself, I tune in my information world, I get it the same way on a no-screen computer in a car. Uniform access across platforms, computers, telephones, to my entire information life. It’s easy to achieve in hardware; there are hard software issues, but they should have been solved 15 years ago.
Question: If you had to invest in digital media, which companies would you invest in?
David Gelernter: That's a question from the other side of the fence for me. If I were to invest in a new company? I would invest in some of the companies that are working on so-called digital furnaces. In five years, everybody throws out all their CDs and DVDs; larger MP3 players, home stereo systems, and TV systems are integrated, and this is a multi-billion-dollar transition. As things stand, it will be Sony and Google and Toshiba, very large companies, doing it, and they won't do it badly, but they won't do it well. I would invest in one of the small companies that have imaginative rather than recycled ideas about how to make that transition.
Question: Can print and digital media coexist in the future?
David Gelernter: In two ways. The service that print media traditionally makes available is editing, which can turn almost unintelligible random words into intelligible language, depending on how good a writer you are dealing with. The vast majority of mankind today writes badly; people may have good ideas, but they find it difficult to write more than an articulate sentence. I'm really judging by my students. Each year, my students at Yale, which is a good university that gets very good students, are less and less able to express themselves in writing. So, the print media more than ever provide editing, and they provide authority, respectability. If I went to a newsstand 30 years ago and there were five newspapers, it was important for me to know that I trusted this newspaper and didn't trust that one. But if today I go to the Web and there are 30,000 sites, it's even more important for me to go to a site that I have some reason to trust consistently over the long term. I think the print media have failed to make the most of their opportunity. They may all disappear because they are approaching things in the wrong way. But editing to produce coherent, readable text, and authority so that readers know how to spend their time: no matter how much information there is, I have the same number of minutes in my day, and I need to look to sources I trust, beginning with the print media (the newspapers I read, the magazines I read, the publishers I trust), to give me advice on how to spend those minutes. Print media will flourish if they do that. I don't know if they will.
Question: What is “lifestreaming,” and are modern social networking tools making it universal?
David Gelernter: To the first part of your question: lifestreaming, as we defined it and first implemented it back in the mid-1990s, was a time-based rather than space- or surface-based organization of information. The idea was that every electronic asset you had, every piece of information, whether it was an email or an instant message, a file or a spreadsheet, a photo or an MP3, would appear in a single time-ordered stream that mirrored the evolution of your life. So, in principle, the first thing on the stream would be my birth certificate, a little electronic version of it; my parents would put my school records, health records, whatever records of their child, onto the stream. And the stream continues to flow forward through time, so the future is available to me as well as the past. I can search the past to find whatever I want, and everything is fully indexed. When I schedule things, when I know things are coming up, I put them in the future. When I have something I need to return to that I don't have time for now, I put it in the future.
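The organizing idea described above, a single time-ordered stream of heterogeneous items that can hold future-dated entries and be searched by content, can be sketched in a few lines. This is a minimal illustrative sketch, not the original Lifestreams implementation; the `Item` and `Lifestream` names are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime
import bisect

@dataclass(order=True)
class Item:
    """One entry on the stream; ordering is by timestamp only."""
    timestamp: datetime
    kind: str = field(compare=False)      # e.g. "email", "photo", "mp3"
    content: str = field(compare=False)

class Lifestream:
    """A single time-ordered stream of heterogeneous items."""

    def __init__(self):
        self._items: list[Item] = []

    def add(self, item: Item) -> None:
        # Items may be dated in the past or the future
        # (reminders, scheduled events); insertion keeps time order.
        bisect.insort(self._items, item)

    def search(self, text: str) -> list[Item]:
        # Find items by content rather than by name or folder.
        return [i for i in self._items if text.lower() in i.content.lower()]

    def past(self, now: datetime) -> list[Item]:
        return [i for i in self._items if i.timestamp <= now]

    def future(self, now: datetime) -> list[Item]:
        return [i for i in self._items if i.timestamp > now]
```

For example, a birth certificate dated 1970, an email dated 2020, and a reminder dated 2030 all live on the same stream; searching or slicing at "now" separates past from future without any folder hierarchy.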
And together with that, we had an interface. We thought it was important not only to make use of the surface of the screen, but to present displays in depth, to make use of the virtual depth of the screen. So I wasn't looking at a surface; I was looking through a window into an arbitrarily large information landscape. Instead of the mouse just moving a cursor over the surface, it would be like a robot vehicle moving through free space, or some arbitrary space.
But the basic idea was to assemble a single heterogeneous, chronological timeline with absolutely everything on it. We have seen it commercialized in two different ways. One way is the one that I'm allowed to talk about: I no longer own these patents. The patents have changed hands and are now the subject of an enormous lawsuit against Apple, which is not mine because I don't own the patents; I'm told it is the largest lawsuit in patent history. Apple took these ideas; I'm not in a position to say they stole them, I don't know if they did or not legally, but these ideas are the basis of Apple's Cover Flow, the way they originally displayed songs in iTunes. Cover Flow has now become a standard way of displaying files on virtually all Apple platforms; of Spotlight, which allows you to find files not by name or folder but by content; and of Time Machine, which keeps a series of archived versions of your files.
Without commenting on the legal aspects, which I’m not capable of doing, those are lifestreams and there are other companies that have done similar things. That makes me angry personally, not because of the money, but because of the deliberate failure to acknowledge work that we would have made freely available as academics and that companies will not acknowledge because there is so much money involved.
At the same time, on the networks, there are thousands of groups that are building lifestreams, or lifestreaming, for themselves in their own way. We love to see this activity. There's a lifestream blog talking about all the different lifestreams. So, that's great. And for that matter, we'd like to see Apple do it too. But when large companies work on it, which is also the case with FriendFeed, with the event stream on Facebook, and with the AOL stream, which they actually call a lifestream, or lifestreaming, it's not as if we want to stop that activity or shut it down, but we'd like to see credit where credit is due. Not just to me, or mainly to me, but to the graduate students who actually built the software, worked tremendously hard, published the papers, and made them available. We'd like to see credit awarded.
Question: Why have you claimed that students should be taught largely from books and not computers?
David Gelernter: There are certain subjects which, it seems to me, can be taught very effectively online, although they aren't. I would love to see writing taught online, because at a university like Yale, much less at a high school like the one my boys attended, a public high school in New Haven, Connecticut, there are not enough teachers who are able to teach writing well; in some cases there are none, or, well, there are always a few. Teaching somebody to write is a labor-intensive activity. I have to go through somebody's paper and mark it up as a copy editor would, sentence by sentence. And I have to do that repeatedly. Now, I could have a student take the paper and send it to somebody in Alaska, or India, or anywhere, who is capable of doing it; a worldwide network of writing teachers would be very effective. It doesn't exist. We don't have the right software tools; we will at some point. Similarly for marking exercises, quantitative exercises, maybe not so much in mathematics, but certainly problem sets in physics, chemistry, engineering, and the like, where answers and methods are clear-cut. Absolutely, I would like to see that done online.
I think the universities as we know them will be dead in 10 or 15 years. I'd like to see them replaced by something better instead of something worse, and it's not clear which way it will go. But abolishing the book is like abolishing the symphony, or sonata form, or the sonnet, or the wall painting. The book is a form in which some of the greatest masterpieces mankind has ever achieved are expressed: not only fiction, but the great biographies, the work of the great historians. There are great science books that were conceived as books; Feynman's famous introductory lectures on physics, which have a beginning and an end, which are written with style. The book is a unit, and it is such a brilliant ergonomic unit. I can judge a book by its cover: I can glance at it from the outside and know what it is. I can tell if it's a novel, or a textbook, or a history book. I can look at the side and tell about how long it is. I can flip through it, and I don't need a map to know where the table of contents is, where the index is. I can find a photograph, if there is a section of photographs. I can write in it, which is tremendously important. If I read a book, the value to me of having the book later on is to remember what I said, what I wanted to know, and so forth. It's portable. I don't have to worry about stepping on it accidentally; I can use it on the beach; I can use it anywhere, standing up on a bus. It is the greatest design in the history of ergonomics; I wrote a piece on it a long time ago. We still have books because they are so brilliantly suited to the way human beings absorb information, and at their best they are among the most beautiful things we have.
It’s terrible to think they’re disappearing, surviving only in libraries, but that’s not going to happen. People are too smart to allow it, even if the industry sometimes seems so oblivious that it wouldn’t care.
A conversation with the professor of computer science at Yale University.