AI is leaving human needs and democracy behind in its race to accomplish its current profit-generating goals.
It doesn't have to be this way, but for now it is: AI's primary purpose is to maximize profits. For all of the predictions of its benefits to society, right now, that's just window-dressing—a pie-in-the-sky vision of a world we don't actually inhabit. While some like Elon Musk issue dire warnings against finding ourselves beneath the silicon thumbs of robot overlords, the fact is we're already under threat. As long as AI is dedicated to economic goals and not societal concerns, its tunnel vision is a problem. And as so often seems to be the case these days, the benefits will go to the already wealthy and powerful.
Right now, while artificial intelligence is focusing on profit-generation, natural intelligence has proven to be more than up to the task of manipulating it, as if sneaking up behind someone distracted by a shiny object.
We're coming to understand just how adroitly AI can be played as we learn more and more about Russia's manipulation of social media during the 2016 presidential election. Facebook's much-lauded AI was working to “consume as much of your time and conscious attention as possible,” as Facebook's first president Sean Parker recently put it to Mike Allen. After all, as we've often been told, “You're not the customer—you're the product” meant to draw advertisers to the platform. Cleverly parsing our newsfeeds for clues to our most addictive interests and associations, Facebook's AI somehow completely failed to notice it was being gamed by Russia, as was laid bare in a stunning exchange between Senator Al Franken and Facebook General Counsel Colin Stretch.
What neither man explicitly said is that it was not the job of Facebook's AI to do anything but maximize the platform's profits. Democracy? Not Facebook's problem—until it was. Stretch's response, in classic euphemistic tech-speak, was that Facebook's algorithms should have had a “broader lens.”
This lack of a broader lens is at the root of growing concerns that automation is going to mean the loss of a significant number of jobs. Katherine Dempsey, writing for The Nation, discussed the issue via email with deep-learning expert Yoshua Bengio, and he summed up the end game this way:
“AI will probably exacerbate inequalities, first with job disruptions—a few people will benefit greatly from the wealth created, [while] a large number will suffer because of job loss—and second because wealth created by AI is likely to be concentrated in a few companies and a few countries."
The future currently under construction is frightening if you're not among those few people. Dempsey cites a McKinsey & Company report, “A Future That Works,” describing a time in which fewer actually will. According to that report, 51% of all the work done in the U.S. economy could be automated at a savings for companies—and loss in workers' salaries—of $2.7 trillion. While only about 5% of all occupations could be fully automated, about a third of the work in 60% of them could be taken over by machines.
Dempsey also notes that AI is reinforcing existing biases. Whether its mistakes stem from the narrowness of programmers' intentions and sensitivities or from something else, the algorithms are simply not that smart yet. The New York Times cites Google Photos tagging black people as gorillas and the algorithms in Nikon cameras assuming Asian people are blinking, and a terrifying exposé by ProPublica revealed that AI is being used to identify future criminals.
A Princeton study found that a “machine-learning program associated female names more than male names with familial attributes such as 'parents' and 'wedding.' Male names had stronger associations with career-related words such as 'professional' and 'salary.'" No surprise, then, that, as a Carnegie Mellon study found, Google is targeting ads for high-paying jobs primarily at men. Still, as Michael Carl Tschantz of the International Computer Science Institute admits, “We can't look inside the black box that makes the decisions."
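The kind of association the Princeton study measured can be sketched in a few lines. The vectors below are tiny, hand-made stand-ins, not the real GloVe embeddings the study analyzed, and the names and numbers are illustrative assumptions; only the measurement logic (comparing a name's average cosine similarity to career words versus family words) mirrors the study's approach.

```python
import math

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of vector lengths.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Hypothetical 3-d "embeddings" (axes roughly: career-ness, family-ness, noise).
vectors = {
    "john": (0.9, 0.1, 0.2), "amy": (0.2, 0.8, 0.1),
    "salary": (1.0, 0.0, 0.1), "professional": (0.9, 0.1, 0.0),
    "wedding": (0.1, 0.9, 0.2), "parents": (0.0, 1.0, 0.1),
}

def association(name, attrs):
    # Mean cosine similarity between a name and a set of attribute words.
    return sum(cosine(vectors[name], vectors[a]) for a in attrs) / len(attrs)

career = ["salary", "professional"]
family = ["wedding", "parents"]

def bias(name):
    # Positive means the name leans toward career words, negative toward family words.
    return association(name, career) - association(name, family)

print(bias("john") > bias("amy"))  # the "male" name skews toward career words
```

In the real study the bias emerges from word co-occurrence statistics in large text corpora, not from anyone hand-coding the vectors; that is precisely why it is so easy for such associations to slip into deployed systems unnoticed.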
And there's the problem at its basic level. As long as AI is primarily dedicated to advancing economic goals, its workings are likely to remain largely proprietary and thus unavailable for scrutiny—that's assuming its creators even know how it works. Our best—and maybe only—defense against this danger to our society is to educate ourselves and our children about AI and machine-learning technology so we aren't treating AI as some sacred form of modern magic whose workings and effects we're forced to unquestioningly accept. Forget robot overlords for now—it's the short-sighted greed of our human ones that should worry us.
Nothing reflects the complex mood of our era like gaming, says Nato Thompson: the establishment has worked its way into the anti-establishment ethos.
What’s common to most movements of dissent is that they don’t stay pure for long. Art curator and cultural critic Nato Thompson uses gaming to show how the anti-establishment ethos within those games has been commandeered by the very thing it sought to subvert: "the man". The same goes for big corporations like Coca-Cola and Apple, which position themselves as allies of the ordinary person. Institutions use dissenting art and culture to ensure profits. "The spirit of anti-establishment gets into the establishment," says Thompson, and he perceives that as a broad phenomenon. America, especially in 2017, is deeply anti-establishment. Thompson wonders whether that once-useful ethos has tipped over from constructive to destructive. Nato Thompson’s most recent book is Culture as Weapon: The Art of Influence in Everyday Life.
This will be music to the ears of anyone who's ever worked in customer service. Is this old managerial adage doing companies more harm than good?
The tired old adage many businesses run by is that "the customer is always right", but Simon Sinek is here to tell us we’ve got it all wrong. All companies must make and increase profits to survive, but what’s missing is an understanding of profit as the end of a chain, not the starting point. Rather than staring at the end goal, it literally pays to see it as a chain effect. When managers put their employees first, employees are empowered to deliver the ideal customer service a top company strives for. Through an anecdote about one service industry worker employed at two differently run establishments, Sinek illuminates how the best managerial method is to prioritize the wellbeing of employees first. Simon Sinek's most recent book is Start With Why: How Great Leaders Inspire Everyone to Take Action.
Despite our romanticized vision of social media as a global town square overflowing with diversity, the reality is that each user’s experience is hyper-filtered.
Are you living a segregated digital life? If you're on major social media platforms like Facebook and Twitter, the answer is probably yes.
A new report from the MIT Media Lab’s Electome project shows just how segregated we have become on platforms like Twitter. Using Twitter’s complete data set, the report illustrates how both Clinton and Trump supporters formed distinct clusters, with Trump supporters more isolated from a diverse set of opinions. While the report drew no definitive conclusions about why the two camps clustered, the impact of the clustering is clearer: each user’s information flow is skewed toward less diversity of opinion. We are digitally segregated.
There may be many reasons why we either are, or choose to be, segregated on Twitter. Our ability to be connected to diverse perspectives doesn’t mean we will be exposed to diverse opinions. Speaking to VICE News about the report, MIT Media Lab data journalist John West, who worked on the study, stated that, “All of this paints a bleak picture of online political discourse. It is one balkanized by ideology and issue-interest, with little potential for information flow between the online cocoons.”
How can we understand other people when we are not interacting with them? Social media, as we have been told, was supposed to bring us together, not create online cocoons.
In 2013, former Twitter CEO Dick Costolo waxed poetic to the Brookings Institution about Twitter as a global town square. Costolo set up an analogy with the Greek Agora. “You came and talked about what was going on in your part of the village, and I came and talked about what was going on in mine, and the politician was there, and we listened to the issues of the day, and a musician was there and a preacher was there, etcetera, and it was multidirectional and it was unfiltered, and it was inside out, meaning the news was coming from the people it was happening to, not some observer.”
Giving an optimistic gloss of social media’s ability to eliminate time and distance, Costolo stated that, “along comes a service like Twitter that has the elimination of time and distance built into it, but also brings back all those capabilities of the Agora. It’s inside out again, it’s coming from the participants.”
Here is the problem: the platforms we utilize for our modern-day Agora have shareholders. We are expecting a public town square, but experiencing a publicly traded company. In a town square, you are walking into an environment. On social media, an environment is created for you. The business model for major social media companies, which is based on data monetization and ads instead of a monthly fee, may run counter to your own desire for diverse opinions.
“Ad-based businesses distort our online interactions,” wrote tech sociologist Zeynep Tufekci in her New York Times op-ed “Mark Zuckerberg, Let Me Pay for Facebook.” “People flock to Internet platforms because they help us connect with one another or the world’s bounty of information — a crucial, valuable function. Yet ad-based financing means that the companies have an interest in manipulating our attention on behalf of advertisers, instead of letting us connect as we wish. Many users think their feed shows everything that their friends post. It doesn’t.”
Our potential exposure to diversity doesn’t equate to actual exposure to diversity.
This was the experience of Eli Pariser, whose 2011 TED talk “Beware online filter bubbles” seems extremely prescient. “I'm progressive, politically… but I've always gone out of my way to meet conservatives. I like hearing what they're thinking about; I like seeing what they link to; I like learning a thing or two. And so I was surprised when I noticed one day that the conservatives had disappeared from my Facebook feed. And what it turned out was going on was that Facebook was looking at which links I clicked on, and it was noticing that, actually, I was clicking more on my liberal friends' links than on my conservative friends' links. And without consulting me about it, it had edited them out. They disappeared.”
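The filtering Pariser describes can be sketched as a simple ranking rule: score each friend's posts by how often the user has clicked that friend's links, then silently drop the rest. This is a minimal illustrative sketch under that assumption; the function names, threshold, and scoring here are hypothetical, not Facebook's actual News Feed algorithm.

```python
from collections import Counter

def personalize(feed, clicks, keep_ratio=0.5):
    """Keep only the posts from the friends a user clicks on most.

    feed   -- list of (friend, post) pairs in the raw, unfiltered feed
    clicks -- list of friends whose past links the user clicked
    """
    click_counts = Counter(clicks)
    # Score each post by the user's click history with that friend.
    ranked = sorted(feed, key=lambda item: click_counts[item[0]], reverse=True)
    # Quietly cut the feed down to the top fraction; no one consults the user.
    cutoff = max(1, int(len(ranked) * keep_ratio))
    return ranked[:cutoff]

feed = [("liberal_friend", "post A"), ("conservative_friend", "post B"),
        ("liberal_friend", "post C"), ("conservative_friend", "post D")]
clicks = ["liberal_friend", "liberal_friend", "liberal_friend", "conservative_friend"]

visible = personalize(feed, clicks)
# The less-clicked friend's posts drop out of the visible feed entirely.
print([friend for friend, _ in visible])
```

Run on this toy data, the less-clicked conservative friend vanishes from the visible feed, which is exactly the mechanism Pariser noticed: optimizing for engagement, not for diversity, quietly edits people out.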
There is a wide gulf between the potential for social media platforms to expose us to diverse opinions, and the reality and running of publicly traded companies. What if showing you diverse opinions would be bad for business?
Instead of trying to remake social media companies in the image of the town square, we need to come to terms with the fact that we are not in a public space. Social media is not a town square, and it never will be.