Who's in the Video
Douglas Rushkoff is the host of the Team Human podcast and a professor of digital economics at CUNY/Queens. He is also the author of a dozen bestselling books on media,[…]

DOUGLAS RUSHKOFF: Well, I'm actually the guy who came up with the term "media virus," in a book I wrote in 1994. And I look at viral media and weaponized memetics as kind of my problem child. I originally saw it as the province of the counterculture: that what we could do is identify these sort of unresolved issues in society and then nest in them and provoke an immune response-- so whether it's racism or poverty or corporate malfeasance, all the kinds of things that don't really get on the tube or don't really get discussed in an appropriate way. The only people who ended up buying that book, really, were marketers and, like, Russian propagandists.

So they took these ideas and used them not to promote cultural growth or awareness, but just to provoke a response, to sensationalize anything by any means necessary. And propagandists got really good at this. They look for whatever issues are creating the most tension in America, and then ask: how do we spread it? So, you know, gun control or abortion. You look for things. And what you do is really operationalize conflict. How do we get people to look at those who disagree with them as less than human? And whether you're a blue-state Left person or a red-state Right person, if you're watching this, I encourage you to think about how you think about those others. Are you thinking about them as less than you? You probably are. It's really hard not to these days.

Well, they're not. They have similar fears. They're just expressing those fears in a different way. And if we can begin to see the other not the way our weaponized memes are encouraging us to, but to see them as humans-- well, then we have a chance. Weaponized memes are the clearest example of human beings becoming the medium: we are no longer the user. We are being played. So we create algorithms to accomplish some human goal, something we want. Go out online and find this. You're my intelligent agent. You're going to find information and all that.

But we've turned those algorithms against humans. And now, what the algorithms are there to do is to find what computer hackers used to call exploits. Only they're not looking for an exploit in a computer program or an exploit on a server; they're looking for exploits in humans. And where do you find those exploits? You find them in our painstakingly evolved social mechanisms for connection. So the algorithms look for what we use to establish rapport. What do we use to connect with another person? What are the mechanisms that provoke fear or self-defensive measures? And the algorithms will-- not knowingly, but just because they're trying everything-- eventually find those and leverage them.
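To make that "trying everything" concrete, here is a minimal, hypothetical sketch of an epsilon-greedy bandit loop, the kind of trial-and-error engagement optimization being described. Nothing here is any platform's actual code; the variant names, click rates, and the EPSILON parameter are all invented for illustration. The point is that the algorithm never "knows" anything about exes or outrage-- it only measures clicks, and drifts toward whatever provokes use.

```python
import random

# Hypothetical content variants a feed could show a user.
# These names are illustrative stand-ins, not real categories.
VARIANTS = ["neutral_news", "outrage_bait", "ex_partner_photo", "cute_animal"]

# Hidden per-variant click probabilities the algorithm does NOT know.
# It only ever observes clicks, yet still converges on the "exploit."
TRUE_CLICK_RATE = {"neutral_news": 0.05, "outrage_bait": 0.22,
                   "ex_partner_photo": 0.30, "cute_animal": 0.12}

EPSILON = 0.1  # fraction of impressions spent "trying everything"

def run(impressions: int = 100_000) -> dict:
    shows = {v: 0 for v in VARIANTS}
    clicks = {v: 0 for v in VARIANTS}
    for _ in range(impressions):
        if random.random() < EPSILON:
            # Explore: show something at random, probing for a reaction.
            choice = random.choice(VARIANTS)
        else:
            # Exploit: show whatever has the best observed click rate so far.
            choice = max(VARIANTS,
                         key=lambda v: clicks[v] / shows[v] if shows[v] else 0.0)
        shows[choice] += 1
        if random.random() < TRUE_CLICK_RATE[choice]:
            clicks[choice] += 1  # a click is the only signal it optimizes
    return shows

if __name__ == "__main__":
    for variant, n in sorted(run().items(), key=lambda kv: -kv[1]):
        print(f"{variant:18s} shown {n:6d} times")
```

Run this and the overwhelming share of impressions ends up on whichever variant happens to hook people-- which is exactly the dynamic described next.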

So the algorithms on Facebook have found out that people click when they see pictures of their ex-lovers having fun. If you see a picture of your ex having fun, you'll click. If you see your ex not having fun, you don't, apparently. But anything positive about an ex-- your feed is going to get those. Even if you've unfollowed that person, those things are going to slip in there, because they provoke use. Is it something you need to see or want to see? Is it good for you when you're trying to leave that thing behind? Of course it's not, but it pulls you in. And all the algorithms want is to get you to do the behavior their programmers have asked them for. So we've spent-- and we're all still investing, through our S&P mutual funds-- trillions of dollars in companies that are developing algorithms specifically designed to make us unhappy, to play us, to abuse us, and to compromise our humanity by leveraging our most important social instincts for really isolating, atomizing purposes.

