Are Trolls Just Playing a Different Game Than the Rest of Us?
Trolling isn't just the actions of ornery black sheep on the web. Jonathan Zittrain explains that it's a set of behaviors due to be studied more intently in the coming years.
Jonathan Zittrain is a Professor of Law at Harvard Law School, Professor of Computer Science at the Harvard School of Engineering and Applied Sciences, Vice Dean for Library and Information Resources for the Harvard Law School Library, and Co-Founder of the Berkman Center for Internet & Society. Previously, he was the Chair in Internet Governance and Regulation at Oxford University and a principal of the Oxford Internet Institute. He was also a visiting professor at the New York University School of Law and Stanford Law School.
Zittrain’s research interests include battles for control of digital property and content, cryptography, electronic privacy, the roles of intermediaries within Internet architecture, and the useful and unobtrusive deployment of technology in education.
He is also the author of The Future of the Internet and How to Stop It, as well as co-editor of the books Access Denied (MIT Press, 2008), Access Controlled (MIT Press, 2010), and Access Contested (MIT Press, 2011).
Jonathan Zittrain: Trolling is certainly the topic of the year. If we were having this conversation in 1995, for one thing the screen would be about one-sixteenth the size and the video quality would be much poorer, but it would be a very standard configuration. The story would be: hey, there's now an internet; people can say anything; sometimes it gets a little out of control, and maybe the government will come in and try to regulate in some way. But it's basically the government trying to stop illegal or really extreme things from happening, and otherwise it's just a free-for-all out there. Away we go. Fast-forward 10, 15, 20 years and we see an environment now in which many, many people want to participate and maybe have views they want to share. And if you stick your head up out of the gopher hole and happen to say something intemperate or wrong or that others may disapprove of, they might not just say, "Gee, I disagree. Let's talk about this." They may decide to try to doxx you: find out where you live and the names of the rest of your family, to threaten you with the goal of making you feel insecure. They might try to swat you: tell the police that there's something terrible going on in your house, and before you know it there are blue and red lights outside your window and people ready to kick in the door.
This is a strange state of affairs, and it's one I think with multiple causes, and one that is actually susceptible to pretty sustained study, most of which hasn't happened yet (in the next few years I think it will), asking what makes people want to react that way. My best cut on it, at the most abstract level, is that when we are online we may be undertaking very different activities than the people we're talking to. One model for being online is: I'm entertaining myself. I'm having fun. I am in some form or another playing a game. And if that's the case, playing the game means picking a side. And that's a very different model from: I'm online because I really believe earnestly in something, and I want to convince you of why I'm right, or have you maybe convince me. Or find people who are of like mind so we can strategize about the thing we really care about. Those are totally different activities. And when you try to mix the two, that's almost like saying, ahead of a Super Bowl between Seattle Seahawks fans and New England Patriots fans, can't we just settle this with a discussion? Can't we just earnestly air our differences and come to some compromise that the Patriots should win, but only by five points? That's nuts. The purpose of the conversation is just the chest-thumping that comes from the play of being a fan of a team and trash-talking the other team, and then you play the game.
I think that for a lot of trolling there's a sense that it's nothing personal; we're just out to have some fun. Others have referred to this as 4chan culture, named of course after the site 4chan, for which this is largely the culture. Figuring out how to have people avoid category errors might be a good way to relieve a little bit the initial conditions that give rise to a situation where you fast-forward and somehow death or rape threats are being made, personal information is getting scattered everywhere, and the parties behind it still think it's no big deal. They just go to sleep at night, and tomorrow they'll find a new cause to get involved in. I imagine we will also see some of the intermediary platforms that are commonly used for many purposes, like Twitter, start taking a heavier hand in intervening when discussions get rough. And it might be as simple as being more willing to delete accounts, which, let's be clear, is going to mean that others will be more willing to try to importune Twitter to delete accounts as part of the battle between people who are arguing about something.
Over the longer term, I think we may see coming into view something that has been promised for a while (and be careful what you ask for; I think we're going to get it), which is some form of single- or multi-platform reputation system. Think how different one’s Twitter or Facebook experience might be if the feed were populated on the basis of people who are known to participate with a bare minimum of civility. Let's define it easily: don't engage in serious and imminent death threats. And if you do, you might not have your account deleted, but you will find that your tweets will not have the same reach as those of people whose behavior, either on the Twitter platform or on other platforms, has been deemed civil and productive. And I will say that once people know the basis on which they're being graded, it can have a really instant effect on their behavior.
Some of us may remember the moment Uber mistakenly made it easy for somebody to find his or her own rating as a passenger (drivers rate passengers, not just the other way around), and people are maybe much more self-conscious now about how they behave in an Uber if they want to retain a good rating, so that they will be more likely to be picked up when they need a ride in the future. Those reputation systems hold promise as a way of having trolling be, if not punished, at least not rewarded, with good behavior (whatever that means in the eyes of the people rating) raising you up the scale and giving you more reach. Of course, they also carry with them a whole host of new and difficult questions: who runs this rating system, and what if the people rating me are using criteria that are unfair, criteria that might be based on the color of my skin or my ethnicity or something like that? We're not exactly there yet, so we have to see what solutions will come about to problems like trolling, and what problems those solutions will raise.
Zittrain, a professor of both computer science and law, hazards a guess that most people consider the internet a medium for entertainment. Thus, their behavior online varies from the norm because the focus is less on obtaining social acceptance and more on getting their kicks. In this video, Zittrain tackles topics including online gaming, 4chan, Twitter wars, and various internet subcultures.