Big Think interview with Clay Shirky
Clay Shirky: Cognitive surplus is my somewhat unwieldy description for the surplus free time and talents of the developed world considered as a whole. Within the adult population of the developed world, there is well in excess of a trillion hours a year of free time. It hasn’t mainly been experienced as a surplus up until now because there was no way to pool it together in aggregate, and there was no way to introduce people with disparate but complementary skills or interests, until we got a network that was natively good at supporting social communication, not just broadcast media.
So, the cognitive surplus is two different things, in a way. It is that collection of time and talents, and it is the current historical fact that we can address that collection of time and talents in aggregate now that we have digital media, whereas before it was dispersed.
Question: Why is the world experiencing a cognitive surplus now?
Clay Shirky: Well, the argument in the book really comes in three phases. It started with a conversation with a TV producer who, when I was telling her about Wikipedia, instead of being interested in how it worked or why anybody would do it, just sort of shook her head and asked me, "Where do people find the time?" And I kind of snapped. I certainly wasn’t planning on saying this, it just kind of came out. I said, "Nobody who works in TV gets to ask that question." TV people know where the time comes from, because watching television has been a sort of half-time job for every man, woman, and child in the developed world for, you know, decades now.
And so the first part of the argument, essentially "Where do people find the time?", is pointing out that the time has always been there. Right? From literally the structuring of the 40-hour work week, which was explicitly argued for with the idea that there would be eight hours for work, eight hours for sleep, and eight hours for what we will, as the labor song went. We’ve always had that free time.
What we haven’t had until now are ways of pooling that free time in aggregate. So, the question "Where do people find the time?" is partly about people not understanding that the time has always been there. And it’s partly about people understanding that we now have the tools to join that time up.
The second part of the argument, after "Where do people find the time?", is really: "Why do people find the time?" Right? "Why would anybody participate in an open source software movement, add to a wiki, upload their photos to Flickr, upload their videos to YouTube?" And there the answer is fairly simple. We’ve always had intrinsic motivations. We’ve always liked to do things with family and friends; we’ve always liked to do things as hobbies. But the radius and half-life of those activities have been really small. Right? If I’m being generous with my friends, they may experience it and be happy about it if I invite them over to dinner, or what have you.
But my generosity doesn’t move much beyond my house, and it doesn’t move much beyond my circle of friends. What the network has done now is given us a space where we can be generous at large scale, over long periods of time. Right? And someone who contributes to Wikipedia can do so with the sense that they’re participating in a resource that a third of a billion people rely on in a month. That’s a powerful incentive. So, we’re seeing all of these non-financial motivations bring people into environments where they are not only able to contribute their free time and talents, but actively want to.
And then the third argument, the most speculative, is really: "What do we as a society want to get out of this resource?", now that free time can be addressed in aggregate. And there I make a distinction between communal value, where a bunch of people come together and do something together, and they get the value from it. So, you see, for example, these medical mailing lists where people who share a particular chronic illness get together and swap tips and techniques for dealing with symptoms, but also provide a lot of emotional support. This is a terrific thing, but it doesn’t really create a lot of value outside of the participants.
At the other end of the scale, there’s civic value, where a group of people come together to do something, but their explicit goal is to change society. So, you look at PatientsLikeMe, where people with medical conditions also come together, but there they’re not just commiserating and sharing tips and emotional support; they’re actually trying to change the way medical research works. And it seems to me the communal value is pretty well set. We now have a society where the creation of communal value is no longer much in doubt, but the civic value, what we want to get out of cognitive surplus as a form of civic participation, is the big open question in the book.
Question: How do we measure the value of cognitive surplus?
Clay Shirky: The most immediate value from the cognitive surplus comes from satisfying intrinsic desires. Right? The kinds of things that we’re motivated to do that are different from, “My boss told me so,” or “This is what I’m paid to do.” Intrinsic motivations have both personal and social components. Personal motivations tend to be autonomy and competence. Right? The idea that I am the author of my own actions, or that I’m good at something. Social motivations tend to be membership and generosity. I am part of a like-minded group that recognizes me and accepts me as a member, or my activities are creating benefits for other people who are grateful for the work that I’ve done.
So, the primary value driving all of this stuff is really some positive sense of self that comes from participation in public action, whether it’s personal or social. As long as enough people want to get some of that value out of uploading photos to Flickr, you know, uploading videos to YouTube, uploading pictures of cats to ICanHasCheezburger, those aggregations will do well. Downstream from that, there’s a whole range of questions about value. Right?
So, the value of ICanHasCheezburger.com, the purveyor of LOLcats, of the pictures of cute cats with cute captions, is completely slight, right? Not world-changing, not really doing much other than giving people something to laugh at during a coffee break.
The value of Wikipedia? You know, in less than 10 years it has become the most important reference work in the English language. So, on that range—the range of social utility downstream of the participants—you’ve got everything from really nothing more than a bit of fun on a work day to reshaping people’s sense of what’s possible.
Question: How do the Internet's many distractions affect workday productivity?
Clay Shirky: There are several different trends at work on the work day. My friend Dalton Conley over at NYU, the sociologist, in fact, has just written a book about the way in which the formerly relatively sharp dividing line between work and home has blurred. That was a transition that, in a way, started long before the Internet, although the Internet has certainly accelerated it. In a way, Minesweeper, right, the old time-waster, has been replaced by Facebook, the new time-waster. But Facebook is certainly a more pleasantly addictive pastime than Minesweeper was.
But to the larger point about going into your workday, spending all day answering emails, dealing with interruptions, and then leaving feeling as if you got nothing done... it seems to me that we are at the crux of a fairly significant social change in the way we conduct ourselves in the workplace, because, to make a bold prediction, things that can’t last, don’t. Right? Since it takes longer to answer a question than to ask one, we can actually all make each other too busy to get anything done just by asking each other a bunch of questions. The initial assumption when email, and later instant messaging and other forms of group communication, came into the workplace was that now, finally, we could be better coordinated. But better coordination means more and more communications interfaces, thus leaving your friend, and in fact all of us, ending the workday feeling like, oh my god, all I did today was communicate, but I accomplished nothing.
What we’ve seen in the kind of vanguard of this social movement—the open source software movement has the largest sort of collection of participatory tools—is that open source software projects have consistently grown to a size where they can’t actually host all of their internal communications. And what they do is they then subdivide themselves and develop tools, not to help them communicate, but rather to help them not communicate. Which is to say, tools that allow individual workers to get their jobs done with a minimum of coordination. And there’s going to be a competition among businesses over who can create the best environment for their workers, one that minimizes interrupt logic and minimizes coordination. Because I think that the pain your friend is feeling, and again, that all of us feel, is really indicative of something quite deep, which is that we can now communicate as much as we always thought we needed to in the business environment, and it turns out to be catastrophic.
So, in large-scale enterprises, the trick is now starting to be to figure out which kinds of communication are critical and which are just sort of “cover your ass,” constantly-“cc”-everybody, occupational-spam uses of the tools. And to start fairly rigorously stamping out that second category, because if we all communicate with one another as much as we think we need to, we’ll all swamp each other. Right? The source of your friend not getting anything done is other people, including him, on instant messages and email threads. But he is also himself the source of other people not getting anything done. And it’s going to take coordinated action, probably by the leadership of those companies, to put the company back on a footing where coordination and collaboration are reserved for the critical moments rather than swamping everybody.
Question: How should companies deal with these online distractions?
Clay Shirky: You know, different companies deal with it differently. I think increasingly, between the cultural expectations and the difficulty of shutting off access, this is becoming like the personal computer, like email, like instant messaging. Every one of those things—and, you know, now Facebook and Twitter—was brought into the business not because somebody in the executive suite said, “Now we have to have personal computers.” They were dragged into the business because the accountants hated talking to the mainframe guys. And so, once VisiCalc came along, they just brought their own PCs into the enterprise and hid them for a while.
If you went and talked to somebody about email in the mid-‘90s, you know, maybe they’d heard about it, maybe they hadn’t. There would be some, “Oh, maybe someday we’ll get an email address.” Right? But go down and talk to the sales guys, and their business cards all have AOL addresses on them, because their clients have demanded it.
Instant messaging: if you talked to the Wall Street guys about instant messaging in the late ‘90s, “Do you ever talk to your clients on IM?” “Oh, no, no. The SEC would never let us do that.” Right? Meanwhile, the brokers all have ICQ numbers. So, the second phase of all of that is the business then panicking and saying, our employees are doing something that we didn’t allow them to do. At which point the hurdle the technology has to cross is: this is embedded enough in the cultural and business logic of this company that you can’t not do it.
People in call centers will lose that battle. Right? If you’re in a call center, you’re in a cubicle farm and you’ve got your script, and if you’re, you know, spending a lot of time on Facebook when you should be on the phone, they’re going to shut that down. People in magazines, people in newspapers, people in the media are at the other extreme. Of course they’re going to have maximum access. But my guess is that, as with the personal computer, email, and instant messaging, participating in social networks as a way of figuring out what your customers are doing, figuring out what your vendors are doing, figuring out what your clients are doing, recruiting new hires: all of these uses are going to seem to have enough value that after a while most companies are going to capitulate and reopen the firewall insofar as they’ve shut it down.
Question: What will the workplace of the future look like?
Clay Shirky: Well, the “everybody can work from wherever they are” logic has been around for a long time, and in fact well predates the Internet. I mean, really, in every year since the 1964 World’s Fair, when AT&T, you know, unveiled their video phone, we’ve been promised that video conferencing is going to mean that there doesn’t even need to be any business travel anymore. And that has turned out not merely to be wrong, but actually exactly backwards. Which is to say, communications and transportation are not substitutes for one another except at the margins. They’re mainly complements, right? If you talk to somebody for a long time, after a while you want to meet them face-to-face. And if you meet someone face-to-face and like them, or have business to do with them, and then you separate, guess what? You want to stay in touch. So, more transportation drives more communication, and more communication drives more transportation. In particular, the ability to connect with the home office using these tools has meant more people spend more time on the road, because face time with clients is often more valuable than face time with co-workers.
So, I don’t believe that there’s any world coming in which the telecommuting model becomes the normal case for most workers. Getting humans in the same room creates a kind of coordinating value that’s impossible to replicate in software right now. Again, to the open source people... even open source projects will periodically all fly to the same city to sit around and, you know, work together in the same room. So, I think the big change in the workplace of the future is an increasingly loft-like flexibility. Right? Look at what Jennifer McNolty is doing at Herman Miller, the research on configurable work spaces. I think what we’ve learned about businesses inhabiting existing loft spaces, such as the one in which we are doing this interview, is that the flexibility of the business to periodically reconfigure itself matters more than the kind of "anybody can work from anywhere" logic, which has not played out very well.
So, I think the premium is going to be on designing work spaces that are a good fit for whatever the local work climate is, but still using the space as a place to get people together face-to-face, because social tools, you know, social software, are not better than face-to-face; they’re just better than nothing.
Question: How has business leadership changed because of the Internet?
Clay Shirky: The question of leadership is really interesting, because for most businesses, really, at this point, the loss of control they fear is already in the past. Right? There was a media environment in which almost any message about IBM that was in public was created by IBM and then circulated via press release, or reported by a newspaper, or what have you.
And then of course there was, you know, word-of-mouth, chatter-on-the-street kinds of stuff, but that all operated at a level so much smaller than anything a large company could produce. The biggest change in leadership, I think, is that those days are over, and the range of choices leaders have about the perception of their company has been quite restricted, because the counter-story will always get out as well, and it’s just much more of a dialogue with the public.
So the two great visions of leadership we have, like the "grand visionary" or the "micro-manager," now seem to me not to work as well. The Internet has kind of compressed the range. And leadership has become instead a combination of infusing a company with whatever the core imperatives are and making sure that the company doesn’t overbalance too far in one direction or another.
So, Amazon, to take just one example—Amazon has my favorite corporate award ever in the history of corporate awards. They have an award that you can only win as an employee if you do something great and you didn’t ask permission first. Right? Other awards you can get if you asked permission, if you cleared things with your bosses, but if you do something really good, and you just saw that it was a possibility and you did it, you get a special award for not having asked permission. And that’s an example of something that, to your earlier point about your friend, lowers the amount of internal communication required, and also sets a cultural norm for the business that no amount of memos and mission statements could possibly convey. And that kind of leadership, what Bezos does, I think, in terms of creating a cultural climate where good ideas are rewarded, matters so much more than, you know, either "grand visionary" or "micro-manager" in this environment.
Question: What is the value of being an active consumer rather than a passive consumer?
Clay Shirky: I think the great driver of all of these participatory changes is the choice by individuals to be active instead of passive. We fed ourselves on a notion that humans were office drones at work and couch potatoes at home, just kind of shuttling back and forth from one to the other. And then the opportunity to participate came about, even in relatively small ways, right? "I’m going to share photos of this thing," or "I wrote a poem, I’m going to put it up on my blog, and maybe only three people read it, but that’s more than none." Almost all of that is personal drive. That’s the big producer of the change, and the individual participants are the beneficiaries. Right? The choice to be active benefits the person who makes that choice.
The beneficiaries downstream of that, as we talked about earlier, really depend on what kind of sharing is being done. So, if I make a new LOLcat, or I find the cutest picture ever and give it the funniest caption ever, it makes some people laugh. Right? You can’t really claim a big social benefit there. If, at the other extreme, I’m at PatientsLikeMe and I’m trying to change medical research culture, I might get a disease understood and a cure developed faster. And that benefits not only everyone who has the disease; it benefits society as a whole by having fewer resources go into the treatment of whatever disease it is. There’s been a lot of work recently on ALS.
And so once you get past an individual who chooses to be active, getting benefit for themselves, you really have to look at what the participation is doing to see who benefits, but the range of benefits that are possible from, you know, harnessing this cognitive surplus is quite extraordinary.
Question: At what point does too much productivity or activity become a bad thing?
Clay Shirky: I’d say it’s bad when it becomes addictive. All right, and there’s a big discussion about whether or not "addiction" is an appropriate word, a large clinical conversation about Internet addiction and so on. But I will tell you that in the early ‘90s I felt it. I used the word “addict” to describe myself in a completely unironic way. I was addicted to something called Usenet, which is a global set of bulletin boards, and I had the addict's classic pathology, which is that I needed to be on Usenet every day, not because it made me happy, but, after a while, because if I didn’t do it, it made me feel bad. And I remember it, literally. I remember the morning when I woke up and I did not need to check my email. And I thought, oh... it was as if a fever had broken. The deeply wired pleasures of social interaction, coupled with the kind of mediated space we live in, can create these kinds of addictions in people, whether it’s updating their profile page on a social network or playing a game. And it can lead people to do many of the things that the pathology of addiction does, which is to cut themselves off from their friends, to neglect their social life, neglect their schoolwork, neglect their work, and so forth. And those are the kinds of things that society has always grappled with. Right? You know, it’s the problem around Gamblers Anonymous, right? It’s not that you’re injecting a chemical into your body; it’s not like alcoholism, or cocaine, or what have you. But it is the activity that’s giving you the endorphin rush, or giving you the chemical consistency, like, in your brain.
And as with gambling, there are kinds of activities that can become addicting. My personal guess, although I am not a clinician, is that Internet addiction, and even more than Internet addiction, particular classes of addiction—social gambling addiction, social network addiction, and so forth—are going to be understood better in the next five years. They’ll turn out to be much rarer than the hand-wringing in the press would currently have you believe, where everyone who uses Facebook is an addict or a computer addict. But there will be a non-trivial number of cases where people are genuinely addicted. I think as a society we’ll then have to find ways to convince those people, or help them out of that addiction, in the same way we have done with gambling as gambling has spread outside of Vegas to include most of the country.
Question: Will concerns about privacy be the undoing of Facebook?
Clay Shirky: No, I think Facebook is going to be fine. Facebook has a long history of planning a change in the service that’s good for them in some way or another, overstepping their bounds, apologizing, and scaling back—but not scaling back to the point they were at before the change. In a way, Facebook now uses the overstep-apology-reaction pattern as a way of seeing how far they can go at any given cultural moment.
The other thing about Facebook is... Facebook is in a way our current target for our worries about privacy, in exactly the same way the music industry obsessed about Napster and newspapers obsessed about Craigslist. Which is to say, the logic that Facebook is exposing is in many ways logic that’s implicit in the Internet itself—Facebook just happens to be its current corporate avatar. But if Craigslist had died out in 2005, it wouldn’t have helped the classified ad business much, because somebody else would have figured it out and done it.
So, it seems to me that a lot of the trouble Facebook is in right now is really people grappling with what the Internet means for privacy, rather than with Facebook itself. What I do think is that Facebook is probably close to the outer limit of what it can get away with in terms of privacy. I wish, as many people do, that they were a better actor on the subject of privacy than they have been, but their business model is pretty clearly: maximize sharing, maximize disclosure, maximize the number of Facebook URLs in circulation on the open Internet. I think they will have to fight harder to get out from under the problems they’ve currently created. But I don’t think any significant challenger to Facebook is going to arise in the next couple of years. And however much bowing and scraping they have to do now, including possibly before a Congressional committee, I don’t believe it’s going to clip their business much in the long term.
Question: What will privacy be like in ten years?
Clay Shirky: The big change in privacy, in my view, has already happened with the flow of socially coordinating activity online, right? It was actually the moment when we stopped being virtual, right, when the Internet stopped being a “what happens in Vegas stays in Vegas” place and started just being a tool for coordinating regular life. The Internet is no longer an alternative to real life; it’s a tool for arranging it. And at that moment, we lost something that we used to call personal life. Right?
Personal life was: you could walk down the street, you could be out in the park, you could be at a party. You were in public, but you were unobserved. Right? So, you could say things to your friends, and if someone overheard you, you wouldn’t react as if you had a right to privacy while walking down Fifth Avenue, but you would assume, quite reasonably, that you weren’t under any kind of surveillance. And that’s gone, right? What the network does is collapse that whole spectrum of personal life into a single dichotomy: "private" or "public." And you have to stand on one side of that line or the other. So, right away, that’s something we’re not used to and we’re not good at. I mean, prior to Facebook, Greta Garbo was the only person any of us had ever heard of who had anything that could be called privacy preferences. We just kind of knew when to say something in relative confidence, we knew when we could say something on the street corner, and we knew when to shout things from the rooftops. But there was a spectrum there, and now there’s not.
That collapse into a dichotomy between public and private would be remarkable enough, but the second thing that happened at the same time was that the cost structure didn’t just change, it reversed. I tell this to my students and they nod politely; I think maybe they believe me, but I can tell they can’t really feel what this was like. In the old days, if you were a citizen and you had something to say in public, you couldn’t. Period. There was no place to upload anything, there was no place to put your opinions as there is in the blogosphere, there was no way to make a video and share it. You were locked out of public expression. And as a result, anybody who did overcome the barriers to public expression we regarded as either a narcissist or a kook. Right? You were either a rich but untalented self-published author, or you were walking around Times Square with a literal signboard. Either way, people sort of wrote you off.
So the cost of making something public was extraordinarily high. And in the space of less than a generation, the cost of making things public has fallen to zero. Any number of free services out there will, for the price of a two-minute sign-up and some typing, broadcast your thoughts globally, to be stored for all time on Google's servers and archive.org and so forth. They’re lining up to help you with public disclosure.
If you want to keep something private, that’s the hard part. And so, in addition to collapsing to this dichotomy of public and private, we’ve also got a world where keeping things private is a costly, effortful activity, and making things public is effortless and cheap.
So privacy, I think, in the future looks a little bit like privacy looks in big cities now. Which is to say, a series of services will set themselves up that allow for relatively private communications. Right? If you go to a club, in the sense of either a membership club or a nightclub, you’re doing it partly because of the enclosure that that environment creates for you. And I think there are going to be an increasing number of services that, in one way or another, set themselves up as creators of value precisely because they allow for the sort of shielded personal life that we used to enjoy offline to come about online. And in fact, I think a lot of the emotional backlash against Facebook right now comes from the fact that Facebook set itself up exactly as one of those spaces. Right? I mean, god forbid there be a search engine for 18-year-old girls. And so when it was set up as a college site, a lot of its value was, "We’re shielding the rest of the world from this conversation." And as it’s grown, the market incentive for Facebook has been to maximize incentives and defaults toward disclosure. That is, I think, the one place where Facebook will probably add more services that allow not just individuals, but groups, to opt into relatively private areas, as a way of shielding themselves from the pressure toward being public.
And so the big open question, I guess, is: in an environment where making things public is one of two defaults, and the easier one, what does the market for privacy look like? There hasn’t been much of a market for privacy so far, because we’ve relied on the inconvenience of the real world to keep a lot of what we do private. But I have a feeling that more and more people are going to make fairly formal calculations that here is a conversation, or here is a group of people, or here is a topic that I want to be semi-private, and they will reward spaces and services that offer that kind of privacy as an option.
Question: How are we changing our choices because of the Internet?
Clay Shirky: I had a campaign years ago called “Give your kid a GUID,” a Globally Unique ID, as they’re called. That kind of Google calculation, about providing a unique or rare ID for your kid. I’ve grown up with that, right? People write me: "Are you the Clay Shirky I knew in high school?" Well, I can’t exactly say no. I mean, it’s an unusual enough name. And providing that kind of legibility, as the sociologists call it, for a society is a big calculation, because everybody’s realized now you can’t get the login name “Susan.” Right? You’re just Susan1234567@AOL.com, or you’re SCrawford, or whatever. But the awareness that these name spaces are all global all the time is, I think, affecting how people think about the world.
I think the biggest differences we’ve seen so far in calculations that involve the Internet come from people with something significantly public at stake. Right? There are lots and lots of private changes, from "I don’t need to own a cookbook because I can get recipes from the Internet" to "Match.com will help me find someone to date." The public stuff, though, is quite remarkable. In particular, politicians, after the Trent Lott defenestration and after the "macaca" moment in Virginia, have seen that saying one thing to one group and another thing to another group can be a career-ending moment. And as a result, for I believe the first time in American history, politicians are on message all the time, and that message is always national. The upside of this is that there’s a little bit less of the preaching-trade-barriers-in-Michigan and open-borders-in-Florida kind of stuff, depending on whether you’re talking to import-driven or export-driven parts of the U.S. economy. But the downside is that we know less about what a politician thinks when they get into office than if there had been lots of different environments they’d been talking in. I was delighted when Obama won; I’d given him money, I voted for him. But at the same time, I was a little dismayed that the hope-and-change message was so dominant and so universally adhered to. The rhetoric that he used, he'd correctly assessed, had to work for every voter all the time wherever he was, but it meant that we knew less of him than we would have, I think, in an earlier campaign, even though that was the winning strategy.
Question: How is the Internet affecting business in emerging markets?
Clay Shirky: I teach a class—I’m down at the Interactive Telecommunications Program at NYU—and I teach a class in partnership with UNICEF. And one of the interesting things about thinking about access on mobile phones, is you take a lot of web design principles, right? The idea of two-way data-driven social communications and you end up repurposing them for the simplest possible devices, right? SMS back and forth with mobile phones. And in many cases, you get really astonishing bits of value, like the Fisherman’s Marketplace in Kenya, right? Fishermen coming off the ocean can SMS ahead and figure out which port has the best price for the fish they currently have. All you need for that is SMS. This is fantastic. All of the M-Pesa mobile banking stuff coming out of Africa, again designed to work with the simplest possible phones.
On the other hand, the kind of rich, face-oriented social interactions that we’ve all gotten used to from the Web get attenuated on the mobile phone. And so, I think the question becomes—and it’s still an open question—which of the design principles that were pioneered on the Web were pioneered because we had an open system, but, now that we have those principles, can move to anything? The closer something is to being market-oriented or transactional, the easier it is to make it appear on the phone. And which of those things only work on a web-like interface, or on a PC-like interface with, you know, a really visible large-scale screen, camera, etc., etc.? The separation of the PC and the phone is now ending, right, as netbooks, iPads, and iPhones converge not on a single point but on a range that covers the difference between sitting in front of a large screen at home and carrying around a tiny screen on your phone. And one of the great design challenges of the next five years is essentially to figure out at what point along that scale you have to say, "I can’t squeeze a Web site down any further; I have to go custom," versus "It’s actually the social interaction pattern I care about; it will work on any device."
So Facebook squeezes down to the phone less well than Twitter, for the obvious reasons, and mobile banking and payment systems—in fact, they’re kind of born to be on the phone; it’s not even a question of squeezing—work on the phone beautifully and don’t really get much benefit from being on the Web.
And so rather than being in the lumpy world we were in in ’95—here’s your computer, bang! Here’s your phone, it’s crappy and you can hardly hear anything. There’s nothing in between—now we’ve got this whole range between the biggest and smallest devices. What we don’t yet know is where there are real break points and where it’s just a spectrum.
Recorded on May 26, 2010
Interviewed by Victoria Brown
A conversation with the writer and NYU Interactive Telecommunications Professor.
A new study used functional near-infrared spectroscopy (fNIRS) to measure brain activity as inexperienced and experienced soccer players took penalty kicks.
- The new study is the first to use in-the-field imaging technology to measure brain activity as people delivered penalty kicks.
- Participants were asked to kick a total of 15 penalty shots under three different scenarios, each designed to be increasingly stressful.
- Kickers who missed shots showed higher activity in brain areas that were irrelevant to kicking a soccer ball, suggesting they were overthinking.
In a 2019 soccer match, Swansea City was down 1-0 against West Brom late in the first half. A penalty was called against West Brom. Swansea midfielder Bersant Celina was preparing to deliver a penalty kick. He scuttled up to the ball, but his foot only made partial contact, lobbing it weakly to the right.
Was it a simple mistake? Maybe. But there might be deeper explanations for why professional athletes choke under high-pressure situations.
A new study published in Frontiers in Computer Science used functional near-infrared spectroscopy (fNIRS) to analyze the brain activity of inexperienced and experienced soccer players as they missed penalty shots. Although past research has explored why soccer players miss penalty shots, the recent study is the first to do so using in-the-field fNIRS measurement.
The results showed that kickers who choked were activating parts of their brain associated with long-term thinking, self-instruction, and self-reflection. The chokers, in other words, were overthinking it.
The psychology of penalty kicks
Penalty shots offer an interesting case study of how mental pressure affects physical performance. After all, there's a lot at stake, not only because the kick can sometimes render a win or loss, but also because there are sometimes millions of people anxiously watching, some of whom might have a financial interest in the outcome.
That pressure is no joke. For example, research on Men's World Cup penalty shoot-outs has shown that when the score is tied and a goal means an immediate win, players score 92 percent of kicks. But when teams are facing elimination in a shootout, and the kick determines an immediate tie or loss, players only score 60 percent of the time.
"How can it be that football players with a near perfect control over the ball (they can very precisely kick a ball over more than 50 meters) fail to score a penalty kick from only 11 meters?" study co-author Max Slutter, of the University of Twente in the Netherlands, said in a press release.
"Obviously, huge psychological pressure plays a role, but why does this pressure cause a missed penalty? We tried to answer this by measuring the brain activity of football players during the physical execution of a penalty kick."
In the new study, the researchers aimed to answer two key questions about choking under pressure among both experienced and inexperienced players: (1) What is the difference in brain activity between success (scoring) and failure (missing) when taking a penalty kick? (2) What brain activity is associated with performing under pressure during a penalty kick situation?
To find out, the researchers asked ten experienced soccer players and twelve inexperienced players to participate in a penalty-kicking task. The task was divided into three rounds, each of which was designed to be increasingly stressful:
- Round 1 had no goalkeeper and was labeled as a practice round.
- Round 2 had a friendly goalkeeper who wasn't allowed to distract the kicker.
- Round 3 had a competitive goalkeeper who was allowed to distract the kicker, and kickers were also competing for a prize.
Participants kicked five shots in each round. They wore a fNIRS-equipped headset during the task that measured activity in various parts of the brain.
All participants performed worse in the second and third rounds and reported experiencing the most pressure in the third round. Inexperienced players performed worse than experienced players, which might suggest that they were less able to deal with the mental stress.
The locations in which experienced and inexperienced players kicked the ball in each round. Red dots represent missed penalties; green dots represent scored penalties. (Slutter et al., Frontiers in Computer Science, 2021)
The neuroscience of choke artists
So, what types of brain activity were associated with missed shots?
The most noticeable result was that kickers missed more shots when they showed higher activity in their prefrontal cortex (PFC), an area of the brain associated with long-term planning. This was especially true among participants who reported higher levels of anxiety. More specifically, experienced soccer players who missed shots showed high activity in the left temporal cortex, which is related to self-instruction and self-reflection.
"By activating the left temporal cortex more, experienced players neglect their automated skills and start to overthink the situation," the researchers wrote. "This increase can be seen as a distracting factor."
Also, when players of all experience levels felt anxious and missed shots, they showed less activity in the motor cortex, which is the brain area most directly associated with kicking a penalty shot.
Don't overthink it
The results suggest that mental pressure can activate parts of the brain that are irrelevant to the task at hand. In general, expert athletes show more efficient brain activity — that is, more activity in relevant areas, and less activity in irrelevant areas — and therefore experience fewer distractions. This is likely one reason why they were more successful at penalties than inexperienced players in high-stress situations.
This principle is described by neural efficiency theory, and it applies not only to athletes but experts in any field. As you gain mastery over something, you can rely more on automatic brain processes rather than deliberate thinking, which can lead to distractions. The authors of the study concluded that their results provide supporting evidence for neural efficiency theory.
Still, as long as our experts are human, it seems that high-pressure situations can turn anyone into a choke artist.
What's the difference between brainwashing and rehabilitation?
- The book and movie, A Clockwork Orange, powerfully asks us to consider the murky lines between rehabilitation, brainwashing, and dehumanization.
- There are a variety of ways, from hormonal treatment to surgical lobotomies, to force a person to be more law abiding, calm, or moral.
- Is a world with less free will but also with less suffering one in which we would want to live?
Alex is a criminal. A violent and sadistic criminal. So, we decide to do something about it. We're going to "rehabilitate" him.
Using a new and exciting "Ludovico" technique, we'll change his brain chemistry to make him an upstanding, moral citizen. Alex will be forced to watch violent movies as his body is pumped with nausea-inducing drugs. After a while, he'll come to associate violence with this horrible sickness. And, after a course of Ludovico, Alex can happily return to society, never again doing an immoral or illegal act. He'll no longer be a danger to himself or anyone else.
This is the story of A Clockwork Orange by Anthony Burgess, and it raises important questions about the nature of moral decisions, free will, and the limits of rehabilitation.
Today's Clockwork Orange
This might seem like unbelievable science fiction, but it might be truer — and nearer — than we think. In 2010, Dr. Molly Crockett did a series of experiments on moral decision-making and serotonin levels. Her results showed that people with more serotonin were less aggressive or confrontational and much more easy-going and forgiving. When we're full of serotonin, we let insults pass, are more empathetic, and are less willing to do harm.
The idea that biology affects moral decisions is obvious. Most of us are more likely to be short-tempered and spiteful if we're tired or hungry, for instance. Conversely, we have the patience of a saint if we just have received some good news, had half a bottle of wine, or had sex.
If our decision-making can be manipulated or determined by our biology, should we not try various interventions to prevent the criminally inclined from harming others?
What is the point of prison? This is itself no easy question, and it's one with a rich philosophical debate. Surely one of the biggest reasons is to protect society by preventing criminals from reoffending. This might be achievable by manipulating a felon's serotonin levels, but why not go even further?
Today, we know enough about the brain to have identified a very particular part of the prefrontal cortex responsible for aggressive behavior. We know that certain abnormalities in the amygdala can result in anti-social behavior and rule breaking. If the purpose of the penal system is to rehabilitate, then why not "edit" these parts of the brain in some way? This could be done in a variety of ways.
Credit: Otis Historical Archives National Museum of Health and Medicine via Flickr / Wikipedia
Electroconvulsive therapy (ECT) is a surprisingly common practice in much of the developed world. Its supporters say that it can help relieve major mental health issues such as depression or bipolar disorder as well as alleviate certain types of seizures. Historically, and controversially, it has been used to "treat" homosexuality and was used to threaten those misbehaving in hospitals in the 1950s (as notoriously depicted in One Flew Over the Cuckoo's Nest). Of course, these early and crude efforts at ECT were damaging, immoral, and often left patients barely able to function as humans. Today, neuroscience and ECT are much more sophisticated. If we could easily "treat" those with aggressive or anti-social behavior, then why not?
Ideally, we might use techniques such as ECT or hormonal supplementation, but failing that, why not go even further? Why not perform a lobotomy? If the purpose of the penal system is to change the felon for the better, we should surely use all the tools at our disposal. With one fairly straightforward surgery to the prefrontal cortex, we could turn a violent, murderous criminal into a docile and law-abiding citizen. Should we do it?
Is free will worth it?
As Burgess, who penned A Clockwork Orange, wrote, "Is a man who chooses to be bad perhaps in some way better than a man who has the good imposed upon him?"
Intuitively, many say yes. Moral decisions must, in some way, be our own. Even if we know that our brains determine our actions, it's still me who controls my brain, no one else. Forcing someone to be good, by molding or changing their brain, is not creating a moral citizen. It's creating a law-abiding automaton. And robots are not humans.
And yet, it begs the question: is "free choice" worth all the evil in the world?
If my being brainwashed or "rehabilitated" means children won't die malnourished or the Holocaust would never happen, then so be it. If lobotomizing or neuro-editing a serial killer will prevent them from killing again, is that not a sacrifice worth making? There's no obvious reason why we should value free will above morality or the right to life. A world without murder and evil — even if it meant a world without free choices for some — might not be such a bad place.
As Fyodor Dostoyevsky wrote in The Brothers Karamazov, if the "entrance fee" for having free will is the horrendous suffering we see all around us, then "I hasten to return my ticket." Free will's not worth it.
Do you think the Ludovico technique from A Clockwork Orange is a great idea? Should we turn people into moral citizens and shape their brains to choose only what is good? Or is free choice more important than all the evil in the world?
A Harvard professor's study discovers the worst year to be alive.
- Harvard professor Michael McCormick argues the worst year to be alive was 536 AD.
- The year was terrible due to cataclysmic eruptions that blocked out the sun and the spread of the plague.
- 536 ushered in the coldest decade in thousands of years and started a century of economic devastation.
The past year has been one of the worst in the lives of many people around the globe: a rampaging pandemic, dangerous political instability, weather catastrophes, and a profound change in lifestyle that most have never experienced or imagined.
But was it the worst year ever?
Nope. Not even close. In the eyes of the historian and archaeologist Michael McCormick, the absolute "worst year to be alive" was 536.
Why was 536 so bad? You could certainly argue that 1918, the last year of World War I when the Spanish Flu killed up to 100 million people around the world, was a terrible year by all accounts. 1349 could also be considered on this morbid list as the year when the Black Death wiped out half of Europe, with up to 20 million dead from the plague. Most of the years of World War II could probably lay claim to the "worst year" title as well. But 536 was in a category of its own, argues the historian.
It all began with an eruption...
According to McCormick, Professor of Medieval History at Harvard University, 536 was the precursor year to one of the worst periods of human history. It featured a volcanic eruption early in the year that took place in Iceland, as established by a study of a Swiss glacier carried out by McCormick and the glaciologist Paul Mayewski from the Climate Change Institute of The University of Maine (UM) in Orono.
The ash spewed out by the volcano likely led to a fog that brought an 18-month-long stretch of daytime darkness across Europe, the Middle East, and portions of Asia. As the Byzantine historian Procopius wrote, "For the sun gave forth its light without brightness, like the moon, during the whole year." He also recounted that it looked like the sun was always in eclipse.
Cassiodorus, a Roman politician of that time, wrote that the sun had a "bluish" color, the moon had no luster, and "seasons seem to be all jumbled up together." What's even creepier, he described, "We marvel to see no shadows of our bodies at noon."
...that led to famine...
The dark days also brought a period of coldness, with summer temperatures falling by 1.5°C to 2.5°C. This started the coldest decade in the past 2,300 years, reports Science, leading to the devastation of crops and worldwide hunger.
...and the fall of an empire
In 541, the bubonic plague added considerably to the world's misery. Spreading from the Roman port of Pelusium in Egypt, the so-called Plague of Justinian caused the deaths of up to one half of the population of the eastern Roman Empire. This, in turn, sped up its eventual collapse, writes McCormick.
Between the environmental cataclysms, with massive volcanic eruptions also in 540 and 547, and the devastation brought on by the plague, Europe was in for an economic downturn for nearly all of the next century, until 640 when silver mining gave it a boost.
Was that the worst time in history?
Of course, the absolute worst time in history depends on who you were and where you lived.
Native Americans can easily point to 1520, when smallpox, brought over by the Spanish, killed millions of indigenous people. By 1600, up to 90 percent of the population of the Americas (about 55 million people) was wiped out by various European pathogens.
Like all things, the grisly title of "worst year ever" comes down to historical perspective.
A simple trick allowed marine biologists to prove a long-held suspicion.
- It's long been suspected that sharks navigate the oceans using Earth's magnetic field.
- Sharks are, however, difficult to experiment with.
- Using magnetism, marine biologists figured out a clever way to fool sharks into thinking they're somewhere that they're not.
For some time, scientists have suspected that sharks belong among the growing number of animals known to navigate using Earth's magnetic field. Testing anything with a shark, though, requires some care.
The key was selecting the right candidate. Keller and his colleagues chose the bonnethead shark, Sphyrna tiburo, a small critter that summers at Turkey Point Shoal off the coast of the Florida State University Coastal and Marine Laboratory with which Keller is affiliated.
Bonnetheads elsewhere have been known to complete 620-mile roundtrip migrations. As the lab's Dean Grubbs puts it, "That's not bad for a shark that is only two to three feet long. The question is how do they find their way back to that same estuary year after year." There's a report of a great white shark migrating between two locations, one in South Africa and another in Australia, year after year.
The research is published in Current Biology.
Keller and his team rounded up 20 local juvenile bonnetheads and transported them to a holding tank at the marine lab. For the tests, the researchers simulated three real-world magnetic fields. As the various magnetic fields were activated, the sharks' movements were captured by GoPro cameras and their average swimming orientations were calculated by software.
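The study doesn't publish its analysis code, but averaging headings is a small, well-known computation worth spelling out: because compass directions wrap around at 360°, a plain arithmetic mean is wrong (350° and 10° would "average" to 180°). A minimal sketch of the standard fix, the circular mean, is below; the function name and example values are hypothetical, not taken from the paper.

```python
import math

def circular_mean(headings_deg):
    """Average a set of compass headings (in degrees) on the circle.

    Each heading is converted to a unit vector; the vectors are summed
    and the angle of the resultant is the circular mean. This correctly
    averages 350° and 10° to 0° (due north) rather than 180°.
    """
    sin_sum = sum(math.sin(math.radians(h)) for h in headings_deg)
    cos_sum = sum(math.cos(math.radians(h)) for h in headings_deg)
    return math.degrees(math.atan2(sin_sum, cos_sum)) % 360

# Headings straddling north average to (approximately) 0°, not 180°:
print(circular_mean([350, 10]))
```

SciPy users would typically reach for `scipy.stats.circmean` instead of hand-rolling this, but the vector-sum idea is the same.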
The first simulation, serving as a control, mimicked the magnetic field of the nearby shoal from which the sharks had been captured. When this field was activated, the sharks essentially acted like they were "home," just swimming around as they do.
A second field was the magnetic equivalent of a location 600 kilometers south of the lab within the Gulf of Mexico. When this field was activated, the sharks, apparently taking themselves to be far south in the Gulf, began swimming northward toward the shoal.
The opposite occurred with a field standing in for a location in continental North America 600 km north of their home shoal — the sharks began swimming southward.
"For 50 years," says Keller, "scientists have hypothesized that sharks use the magnetic field as a navigational aid. This theory has been so popular because sharks, skates, and rays have been shown to be very sensitive to magnetic fields. They have also been trained to react to unique geomagnetic signatures, so we know they are capable of detecting and reacting to variation in the magnetic field."
His team's experiments confirm what's long been suspected, Keller says: "Sharks use map-like information from the geomagnetic field as a navigational aid. This ability is useful for navigation and possibly maintaining population structure."