Big Think Interview With Nicholas Carr
Nicholas Carr writes on the social, economic, and business implications of technology. He is the author of the 2008 Wall Street Journal bestseller "The Big Switch: Rewiring the World, from Edison to Google," which is "widely considered to be the most influential book so far on the cloud computing movement," according to the Christian Science Monitor. His earlier book, "Does IT Matter?," published in 2004, "lays out the simple truths of the economics of information technology in a lucid way, with cogent examples and clear analysis," said The New York Times. His new book is "The Shallows: What the Internet Is Doing to Our Brains."
Carr has also written for many periodicals, including The Atlantic Monthly, The New York Times Magazine, Wired, The Financial Times, Die Zeit, The Futurist, and Advertising Age, and has been a columnist for The Guardian and The Industry Standard. His much-discussed essay "Is Google Making Us Stupid?," which appeared as the cover story of the Atlantic Monthly's Ideas issue in the summer of 2008, has been collected in three popular anthologies. Carr has written a personal blog, Rough Type, since 2005. He is a member of the Encyclopaedia Britannica's editorial board of advisors and is on the steering board of the World Economic Forum's cloud computing project.
Carr holds a B.A. from Dartmouth College and an M.A., in English and American literature and language, from Harvard University.
Question: What are some technologies, prior to the Internet, that have radically reshaped the way our brains work?
Nicholas Carr: I think that if you look across the entire world of tools and technologies, what you see is that there are different categories. One category is what I call intellectual technologies. And these are the tools we use to think with, to find information, gather information, exchange information and so forth. And I think if you look back through the intellectual history of human beings you can trace the way that these intellectual technologies influence the way we think. And that’s true all the way back to, for instance, the arrival of the map, which actually predates history. We don’t know who invented the map, but somebody at some point had to invent it.
And before the map came along, people understood where they were and where they were going purely through their sensory perceptions: through what they saw, what they heard, and so forth. As soon as the map came along, we suddenly had a very different way to think about where we were in space. Pure visual and auditory perception was supplemented by an abstract picture, which is a radically different way to think about space. And of course, there were all sorts of practical uses of maps (and still are) for charting routes and establishing boundaries, but what happened at a deeper level is that the map trained us to think more abstractly in general. So it helped give human beings a more abstract mind, more attuned to the hidden patterns that lay behind what we saw and what we heard and what we felt, and so forth.
And I think you see a similar thing when the mechanical clock comes around. Now this is much later, in the 1300s or 1400s or so. Before the mechanical clock came along, people experienced time as a natural flow; to the extent that they measured it, they did so by watching the stars or the moon or the sun, things that emphasized the natural flow of time. As soon as you introduce the mechanical clock, you get a radically different view of time. Suddenly, it's not a flow; it's a series of discrete, precisely measurable units: seconds, minutes, hours, and so forth. And again, there are all sorts of practical uses of the tool, for scheduling a person's time, for coordinating work and other activities among a large number of people. But what we see again is that this new tool, this new intellectual technology, gave us, in general, a different way of thinking: a much more scientific way of thinking, very much focused on measurement and on precise cause and effect across long chains.
So here again, we see an intellectual technology, that beyond its practical uses really changed in a kind of fundamental way, I think, the way people think. And it’s no coincidence, I think, that after the arrival of the mechanical clock we see an explosion in scientific thinking and scientific discovery.
At about the same time, a little after the arrival of the mechanical clock, we saw the introduction of the printing press and hence printed books, which replaced handwritten books. And I think that the book in some ways is the most interesting from our own present standpoint, particularly when we want to think about the way the internet is changing us. It’s interesting to think about how the book changed us.
I think what the book did, in addition to its practical uses, is it gave us a more attentive way of thinking. What the book does as a technology is shield us from distraction. The only thing going on is, you know, the progression of words and sentences across page after page, and so suddenly we see this immersive, very attentive kind of thinking, whether you are paying attention to a story or to an argument, or whatever. And what we know about the brain is that the brain adapts to these types of tools.
And so the ways of thinking that we learn from the tools we can then apply in other areas of our lives. So after the arrival of the printing press we become, in general, more attentive, more attuned to contemplative ways of thinking. And that's a very unnatural way of using our mind, you know, paying attention, filtering out distractions. So the book, I think, like the map before it, like the clock, created or helped create a revolution in our habits of mind and ultimately in the way we use our brains.
Question: Neurologically, how does our brain adapt itself to new technologies?
Nicholas Carr: A couple of types of adaptations take place in your brain. One is a strengthening of the synaptic connections between the neurons involved in using that instrument, in using that tool. And basically these are neurochemical changes. So, you know, cells in our brain communicate by transmitting electrical signals between them, and those electrical signals are actually activated by the exchange of chemicals, neurotransmitters, in our synapses.
And so when you begin to use a tool, for instance, you have much stronger electrochemical signals being processed through those synaptic connections.
And then the second, and even more interesting, adaptation is in actual physical changes, anatomical changes. You may grow new neurons that are then recruited into these circuits, or your existing neurons may grow new synaptic terminals. And again, that also serves to strengthen the activity in those particular pathways that are being used.
On the other hand, you know, the brain likes to be efficient, and so even as it's strengthening the pathways you're exercising, it's weakening the connections between the cells that supported old ways of thinking or working or behaving that you're not exercising so much.
So that adaptation – I mean, there are a whole lot of reasons to be very happy that our brains are able to adapt, and adapt so readily, because we do strengthen and become more efficient at the things we do a lot, and we can take on changed ways of thinking that we might need. On the other hand, there is a cost. We lose – we begin to lose the faculties that we don't exercise. So adaptation has a very, very positive side, but also a potentially negative side, because ultimately our brain is qualitatively neutral. It doesn't care what it's strengthening or what it's weakening; it just responds to the way we're exercising our mind.
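Carr's point about strengthening exercised pathways while unexercised ones fade can be captured in a loose computational analogy. This is a toy model, not the brain's actual chemistry: the function, the pathway names, and the `grow`/`decay` rates below are purely illustrative assumptions, chosen only to show the trade-off he describes.

```python
def update_strengths(strengths, used, grow=0.1, decay=0.02):
    """Toy model of use-dependent plasticity: pathways named in `used`
    are strengthened; all others slowly decay. Strengths stay in [0, 1].
    The rates `grow` and `decay` are illustrative, not empirical."""
    return {
        name: min(1.0, s + grow) if name in used else max(0.0, s - decay)
        for name, s in strengths.items()
    }

skills = {"deep_reading": 0.8, "skimming": 0.2}
for _ in range(7):  # a week spent skimming, with no deep reading
    skills = update_strengths(skills, used={"skimming"})
# "skimming" has strengthened while "deep_reading" has quietly eroded
```

The model is "qualitatively neutral" in exactly Carr's sense: it doesn't care which pathway it strengthens or weakens; it only responds to which ones are exercised.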
Question: What skills are we losing because of the Internet?
Nicholas Carr: The Internet, like all intellectual technologies, involves a trade-off. As we train our brains to use it, as we adapt to the environment of the internet, which is an environment of kind of constant immersion in information and constant distractions, interruptions, juggling lots of messages, lots of bits of information. As we adapt to that information environment, so to speak, we gain certain skills, but we lose other ones. And if you look at the scientific evidence, it's pretty clear, particularly from studies of video games, that use of online media enhances some of our visual-cognitive abilities: our ability to spot patterns in arrays of visual information, to keep track of lots of things going on at once on a screen. But along with that, what we lose is the ability to pay deep attention to one thing for a sustained period of time, to filter out distractions.
And the ability to pay attention not only underpins kind of ways of thinking that are pretty obvious, contemplativeness, reflection, introspection, all of those kind of solitary ways of thinking, but what we know from brain studies is that the ability to pay attention also is very important for our ability to build memories, to transfer information from our short-term memory to our long-term memory. And only when we do that do we weave new information into everything else we have stored in our brains. All the other facts we’ve learned, all the other experiences we’ve had, emotions we’ve felt. And that’s how you build, I think, a rich intellect and a rich intellectual life.
Question: Is losing the capacity for solitary thought necessarily a bad thing?
Nicholas Carr: I think there is a reason that, you know, 100 years ago when Rodin sculpted his great figure of The Thinker, The Thinker was in a contemplative pose and was concentrating deeply, and wasn't, you know, multi-tasking. Because that is something that, until recently anyway, people always thought was the deepest and most distinctly human way of thinking. And that doesn't mean that I believe that all of us should sit in darkened rooms and think big thoughts without any stimuli coming at us all day. I think it's important to have a balance of those ways of thinking.
But what the web seems to be doing, and what a lot of the proponents of the web seem to be completely comfortable with, is pushing us all in the direction of skimming and scanning and multi-tasking, and it's not encouraging us, or even giving us an opportunity, to engage in more attentive ways of thinking. And so, to me, if we lose those abilities, we may end up finding that those were actually the most valuable ways of thinking available to us as human beings.
Question: How can we resist the Internet’s effect on our brains?
Nicholas Carr: I think the solution, so to speak, to this problem is pretty simple to state. I mean, if you want to change your brain, you change your habits. You change your habits of thinking. And that means, you know, setting aside time to engage in more contemplative, more reflective ways of thinking, to screen out distractions. And that means retreating from digital media and from the web and from smartphones and texting and Facebook and tweeting and everything else.
And so that’s a pretty obvious solution. What’s hard is actually doing it. Because it’s no longer just a matter of personal choice, of personal discipline, though obviously those things are always important, but what we’re seeing and we see this over and over again in the history of technology, is that the technology – the technology of the web, the technology of digital media, gets entwined very, very deeply into social processes, into expectations. So more and more, for instance in our work lives. You know, if our boss and all our colleagues are constantly exchanging messages, constantly checking email on their Blackberry or iPhone or their Droid or whatever, then it becomes very difficult to say, I’m not going to be as connected because you feel like you’re career is going to take a hit. And that same expectation is now moving over into our social lives, particularly for young people.
If all your friends are planning their social lives through texts and Facebook and Twitter and so forth, then to back away from that means to feel socially isolated. And of course, for all people, but particularly for young people, there's kind of nothing worse than feeling socially isolated, knowing that your friends are having these conversations and you're not involved. So it's easy to state the solution, which is to become a little bit more disconnected. What's hard is actually doing that. And I think that all of us who try, including myself, find that it's really a struggle, because we're so used to craving constant streams of new information that it's kind of bewildering to be alone with our thoughts these days.
Question: How has the technology of reading evolved from papyrus to the iPad?
Nicholas Carr: One of the most important things to realize about reading is that it is a fairly new invention in human history, a couple of millennia old, arriving only after the invention of the alphabet. And for a long time, reading was really just a kind of adjunct to oral communication, because, you know, for most of human history you just conversed and exchanged information through speech.
And so one of the fascinating things about early writing, on slates, on papyrus, even in early handwritten books, is that there were no spaces between the words. People just wrote in continuous script. And that's because that's the way we hear speech. You know, when somebody's talking to us, they're not carefully putting pauses between words. It all flows together. The problem with that, though, is that it's very hard to read. A lot of your mental energy goes to figuring out where one word ends and the next begins. And as a result, in the early years all reading was done out loud; there was no such thing as silent reading, because you had to read out loud in order to figure out where one word was ending and the next was beginning.
And it was only around the year 800 or 900 that we saw the introduction of word spaces. Suddenly reading became, in a sense, easier, and suddenly you had the arrival of silent reading, which changed the act of reading from just a transcription of speech to something that every individual did on their own. And suddenly you had this whole ideal of the silent, solitary reader who was improving their mind, expanding their horizons, and so forth. And when Gutenberg invented the printing press around 1450, what that served to do was take this new, very attentive, very deep form of reading, which had been limited to monasteries and universities, and, by making books much cheaper and much more available, spread that way of reading out to a much larger mass audience. And so for the last 500 years or so, one of the central facts of culture was deep, solitary reading: the immersion of ourselves in books, in long articles, and so forth.
With the transfer now of text more and more onto screens, we see, I think, a new and in some ways more primitive way of reading. In order to take in information off a screen, when you are also being bombarded with all sorts of other information, and when there are links in the text where you have to think, even for just a fraction of a second, you know, do I click on this link or not, suddenly reading again becomes a more cognitively intensive act, the way it was back when there were no spaces between words. And as a result, I think we begin to lose the ability to read in the deepest, most interpretive ways, because we're not calming our mind and just focusing on the argument or the story.
Recorded November 10, 2010
Interviewed by Max Miller
A conversation with the technology writer.
The multifaceted cerebellum is large — it's just tightly folded.
- A powerful MRI combined with modeling software results in a totally new view of the human cerebellum.
- The so-called 'little brain' is nearly 80% the size of the cerebral cortex when it's unfolded.
- This part of the brain is associated with a wide range of functions, and a new virtual map is suitably chaotic and complex.
Just under our brain's cortex and close to our brain stem sits the cerebellum, also known as the "little brain." It's an organ many animals have, and we're still learning what it does in humans. It's long been thought to be involved in sensory input and motor control, but recent studies suggest it also plays a role in a lot of other things, including emotion, thought, and pain. After all, about half of the brain's neurons reside there. But it's so small. Except it's not, according to a new study from San Diego State University (SDSU) published in PNAS (Proceedings of the National Academy of Sciences).
A neural crêpe
A new imaging study led by psychology professor and cognitive neuroscientist Martin Sereno of the SDSU MRI Imaging Center reveals that the cerebellum is actually an intricately folded organ that has a surface area equal in size to 78 percent of the cerebral cortex. Sereno, a pioneer in MRI brain imaging, collaborated with other experts from the U.K., Canada, and the Netherlands.
So what does it look like? Unfolded, the cerebellum is reminiscent of a crêpe, according to Sereno, about four inches wide and three feet long.
The team didn't physically unfold a cerebellum in their research. Instead, they worked with brain scans from a 9.4 Tesla MRI machine, and virtually unfolded and mapped the organ. Custom software was developed for the project, based on the open-source FreeSurfer app developed by Sereno and others. Their model allowed the scientists to unpack the virtual cerebellum down to each individual fold, or "folia."
Study's cross-sections of a folded cerebellum
Image source: Sereno, et al.
A complicated map
Sereno tells SDSU NewsCenter that "Until now we only had crude models of what it looked like. We now have a complete map or surface representation of the cerebellum, much like cities, counties, and states."
That map is a bit surprising, too, in that regions associated with different functions are scattered across the organ in peculiar ways, unlike the cortex where it's all pretty orderly. "You get a little chunk of the lip, next to a chunk of the shoulder or face, like jumbled puzzle pieces," says Sereno. This may have to do with the fact that when the cerebellum is folded, its elements line up differently than they do when the organ is unfolded.
It seems the folded structure of the cerebellum is a configuration that facilitates access to information coming from places all over the body. Sereno says, "Now that we have the first high resolution base map of the human cerebellum, there are many possibilities for researchers to start filling in what is certain to be a complex quilt of inputs, from many different parts of the cerebral cortex in more detail than ever before."
This makes sense if the cerebellum is involved in highly complex, advanced cognitive functions, such as handling language or performing abstract reasoning as scientists suspect. "When you think of the cognition required to write a scientific paper or explain a concept," says Sereno, "you have to pull in information from many different sources. And that's just how the cerebellum is set up."
Bigger and bigger
The study also suggests that the large size of the virtual human cerebellum is likely related to the sheer number of tasks in which the organ is involved in the complex human brain. The macaque cerebellum that the team analyzed, for example, amounts to just 30 percent of the size of the animal's cortex.
"The fact that [the cerebellum] has such a large surface area speaks to the evolution of distinctively human behaviors and cognition," says Sereno. "It has expanded so much that the folding patterns are very complex."
As the study says, "Rather than coordinating sensory signals to execute expert physical movements, parts of the cerebellum may have been extended in humans to help coordinate fictive 'conceptual movements,' such as rapidly mentally rearranging a movement plan — or, in the fullness of time, perhaps even a mathematical equation."
Sereno concludes, "The 'little brain' is quite the jack of all trades. Mapping the cerebellum will be an interesting new frontier for the next decade."
What happens if we consider welfare programs as investments?
- A recently published study suggests that some welfare programs more than pay for themselves.
- It is one of the first major reviews of welfare programs to measure so many by a single metric.
- The findings will likely inform future welfare reform and encourage debate on how to grade success.
Welfare as an investment<p>The <a href="https://scholar.harvard.edu/files/hendren/files/welfare_vnber.pdf" target="_blank">study</a>, carried out by Nathaniel Hendren and Ben Sprung-Keyser of Harvard University, reviews 133 welfare programs through a single lens. The authors measured each program's "Marginal Value of Public Funds" (MVPF), defined as the ratio of the recipients' willingness to pay for a program to its net cost to the government.</p><p>A program with an MVPF of one provides precisely as much in net benefits as it costs to deliver those benefits. For an illustration, imagine a program that hands someone a dollar. If getting that dollar doesn't alter their behavior, then the MVPF of that program is one. If it discourages them from working, then the program's cost goes up, since the program causes government tax revenues to fall in addition to costing money upfront; the MVPF drops below one. <br> <br> Lastly, it is possible that getting the dollar causes the recipient to further their education and get a job that pays more taxes in the future, lowering the net cost of the program in the long run and raising the MVPF. The ratio can even hit infinity when a program fully "pays for itself."</p><p>These are only a few examples, but they illustrate the scale: a very high MVPF means a program comes close to paying for itself, a value of one means it "breaks even," and a value below one means it costs more than the direct cost of the benefits would suggest.</p> After determining the programs' costs from the existing literature and the willingness to pay through statistical analysis, the authors examined 133 programs spanning social insurance, education and job training, tax and cash transfers, and in-kind transfers. The results show that some programs turn a "profit" for the government, mainly when they are focused on children:
This figure shows the MVPF for a variety of policies alongside the typical age of the beneficiaries. Clearly, programs targeted at children have a higher payoff.
Nathaniel Hendren and Ben Sprung-Keyser<p>Programs like child health services and K-12 education spending have infinite MVPF values. The authors argue this is because the programs allow children to live healthier, more productive lives and earn more money, which enables them to pay more taxes later. Programs like the preschool initiatives examined don't manage to do this as well and have a lower "profit" rate despite having decent MVPF ratios.</p><p>On the other hand, things like tuition deductions for older adults don't make back the money they cost. This is likely for several reasons, not the least of which is that there is less time for the beneficiary to pay the government back in taxes. Disability insurance was likewise "unprofitable," as those collecting it have a reduced need to work and pay less back in taxes. </p>
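The MVPF arithmetic described above can be sketched in a few lines. This is a simplified illustration of the ratio as the article explains it; the function name, sign convention, and dollar figures are assumptions for this example, not numbers from the study.

```python
def mvpf(willingness_to_pay, upfront_cost, fiscal_offset):
    """Marginal Value of Public Funds: beneficiaries' willingness to pay
    divided by the net cost to the government.
    fiscal_offset: extra tax revenue the program generates down the line
    (negative if the program reduces tax revenue, e.g. by discouraging work)."""
    net_cost = upfront_cost - fiscal_offset
    if net_cost <= 0:
        return float("inf")  # the program fully "pays for itself"
    return willingness_to_pay / net_cost

# A $1 transfer with no behavioral response: breaks even (MVPF = 1)
assert mvpf(1.0, 1.0, 0.0) == 1.0
# A $1 transfer that also costs $0.25 in lost taxes: MVPF falls below one
assert mvpf(1.0, 1.0, -0.25) == 0.8
# A $1 transfer that later yields $1.10 in extra taxes: infinite MVPF
assert mvpf(1.0, 1.0, 1.10) == float("inf")
```

The third case is how child-focused programs in the study can register infinite MVPFs: the long-run tax gains exceed the upfront outlay, so the net cost to the government is zero or negative.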
What are the implications of all this?<p>Firstly, it shows that direct investments in children in a variety of areas generate very high MVPFs. Likewise, the above chart shows that a large number of the programs considered pay for themselves, particularly ones that "invest in human capital" by promoting education, health, or similar things. While programs that focus on adults tend to have lower MVPF values, this isn't a hard and fast rule.</p><p>It also shows us that many programs don't "pay for themselves," and some even fall below an MVPF of one. However, this study and its authors do not suggest that we abolish programs like disability payments just because they don't turn a profit.</p><p>Different motivations exist behind various programs, and the fact that something doesn't pay for itself isn't a definitive reason to abolish it. The returns on investment for a welfare program are diverse and often challenging to reckon in terms of money gained or lost. The point of this study was merely to provide a comprehensive review of a wide range of programs from a single perspective, one of dollars and cents. </p><p>The authors suggest that this study can be used as a starting point for further analysis of other programs not necessarily related to welfare. </p><p>It can be difficult to measure the success or failure of a government program, given how many metrics there are to choose from and how many different stakeholders are fighting for their preferred metric to be used. This study provides a comprehensive look, through one possible lens, at how some of our largest welfare programs are doing. </p><p>As America debates whether we should expand or contract our welfare state, the findings of this study offer an essential insight into how much we spend and how much we gain from these programs. </p>
Richard Feynman once asked a silly question. Two MIT students just answered it.
Here's a fun experiment to try. Go to your pantry and see if you have a box of spaghetti. If you do, take out a noodle. Grab both ends of it and bend it until it breaks in half. How many pieces did it break into? If you got two large pieces and at least one small piece, you're not alone.
But science loves a good challenge<p>The mystery remained unsolved until 2005, when French scientists <a href="http://www.lmm.jussieu.fr/~audoly/" target="_blank">Basile Audoly</a> and <a href="http://www.lmm.jussieu.fr/~neukirch/" target="_blank">Sebastien Neukirch </a>won an <a href="https://www.improbable.com/ig/" target="_blank">Ig Nobel Prize</a>, an award given to scientists for real work of a less serious nature than the discoveries that win Nobel prizes, for finally determining why this happens. <a href="http://www.lmm.jussieu.fr/spaghetti/audoly_neukirch_fragmentation.pdf" target="_blank">Their paper describing the effect is wonderfully funny to read</a>, as it takes such a banal issue so seriously. </p><p>They demonstrated that when a rod is bent past a certain point, such as when spaghetti is snapped in half by bending it at the ends, a "snapback effect" is created. This causes energy to reverberate from the initial break to other parts of the rod, often leading to a second break elsewhere.</p><p>While this settled the issue of <em>why </em>spaghetti noodles break into three or more pieces, it didn't establish whether they always had to break this way. The question of whether the snapback could be regulated remained unsettled.</p>
Physicists, being themselves, immediately wanted to try and break pasta into two pieces using this info<p><a href="https://roheiss.wordpress.com/fun/" target="_blank">Ronald Heisser</a> and <a href="https://math.mit.edu/directory/profile.php?pid=1787" target="_blank">Vishal Patil</a>, two graduate students currently at Cornell and MIT respectively, read about Feynman's night of noodle snapping in class and were inspired to try and find what could be done to make sure the pasta always broke in two.</p><p><a href="http://news.mit.edu/2018/mit-mathematicians-solve-age-old-spaghetti-mystery-0813" target="_blank">By placing the noodles in a special machine</a> built for the task and recording the bending with a high-powered camera, the young scientists were able to observe in extreme detail exactly what each change in their snapping method did to the pasta. After breaking more than 500 noodles, they found the solution.</p>
The apparatus the MIT researchers built specifically for the task of snapping hundreds of spaghetti sticks.
(Courtesy of the researchers)
What possible application could this have?<p>The snapback effect is not limited to uncooked pasta noodles and can be applied to rods of all sorts. The discovery of how to cleanly break them in two could be applied to future engineering projects.</p><p>Likewise, knowing how things fragment and fail is always handy when you're trying to build things. Carbon nanotubes, <a href="https://bigthink.com/ideafeed/carbon-nanotube-space-elevator" target="_self">super-strong cylinders often hailed as the building material of the future</a>, are also rods which can be better understood thanks to this odd experiment.</p><p>Sometimes big discoveries can be inspired by silly questions. If it hadn't been for Richard Feynman bending noodles seventy years ago, we wouldn't know what we know now about how energy is dispersed through rods and how to control their fracturing. While not all silly questions will lead to such a significant discovery, they can all help us learn.</p>
Finding a balance between job satisfaction, money, and lifestyle is not easy.
- When most of your life is spent doing one thing, it matters if that thing is unfulfilling or if it makes you unhappy. According to research, most people are not thrilled with their jobs. However, there are ways to find purpose in your work and to reduce the negative impact that the daily grind has on your mental health.
- "The evidence is that about 70 percent of people are not engaged in what they do all day long, and about 18 percent of people are repulsed," London Business School professor Dan Cable says, calling the current state of work unhappiness an epidemic. In this video, he and other big thinkers consider what it means to find meaning in your work, discuss the parts of the brain that fuel creativity, and share strategies for reassessing your relationship to your job.
- Author James Citrin offers a career triangle model that sees work as a balance of three forces: job satisfaction, money, and lifestyle. While it is possible to have all three, Citrin says that they are not always possible at the same time, especially not early on in your career.