
A.I. will serve humans—but only about 1% of them

AI is leaving human needs and democracy behind in its race to accomplish its current profit-generating goals.

It doesn’t have to be this way, but for now it is: AI’s primary purpose is to maximize profits. For all of the predictions of its benefits to society, right now, that’s just window-dressing—a pie-in-the-sky vision of a world we don’t actually inhabit. While some like Elon Musk issue dire warnings against finding ourselves beneath the silicon thumbs of robot overlords, the fact is we’re already under threat. As long as AI is dedicated to economic goals and not societal concerns, its tunnel vision is a problem. And as so often seems to be the case these days, the benefits will go to the already wealthy and powerful.


Right now, while artificial intelligence is focusing on profit-generation, natural intelligence has proven to be more than up to the task of manipulating it, as if sneaking up behind someone distracted by a shiny object.

We’re coming to understand just how adroitly AI can be played as we learn more and more about Russia’s manipulation of social media during the 2016 presidential election. Facebook’s much-lauded AI was working to “consume as much of your time and conscious attention as possible,” as Facebook’s first president Sean Parker recently put it to Mike Allen. After all, as we’ve often been told, you’re not the customer, you’re the product: the thing meant to draw advertisers to the platform. Cleverly parsing our newsfeeds for clues to our most addictive interests and associations, Facebook’s AI somehow completely failed to notice it was being gamed by Russia, as revealed in this stunning exchange between Senator Al Franken and Facebook General Counsel Colin Stretch:

[Embedded clip of the Franken and Stretch exchange is unavailable.]

What neither man explicitly says is that it was not the job of Facebook’s AI to do anything but maximize the platform’s profits. Democracy? Not Facebook’s problem, until it was. Stretch’s answer, a classic tech euphemism, is that Facebook’s algorithms should have had a “broader lens.”

This lack of a broader lens is at the root of growing concerns that automation is going to mean the loss of a significant number of jobs. Katherine Dempsey, writing for The Nation, discussed the issue via email with deep-learning expert Yoshua Bengio, and he summed up the end game this way:

“AI will probably exacerbate inequalities, first with job disruptions—a few people will benefit greatly from the wealth created, [while] a large number will suffer because of job loss—and second because wealth created by AI is likely to be concentrated in a few companies and a few countries.”

The future currently under construction is frightening if you’re not among those few people. Dempsey cites a McKinsey & Company report, “A Future That Works,” describing a time in which fewer of us actually will. According to that report, 51% of all the work done in the U.S. economy could be automated; the savings for companies, and the loss in workers’ salaries, would come to $2.7 trillion. While only about 5% of all occupations could be fully automated, about a third of the work in 60% of them could be taken over by machines.

Dempsey also notes that AI is reinforcing existing biases. Whether or not its mistakes are attributable to the narrowness of programmers’ intentions and sensitivities, the algorithms are simply not that smart so far. The New York Times cites Google Photos tagging black people as gorillas, the algorithms in Nikon cameras assuming Asian people are blinking, and a terrifying exposé by ProPublica revealing that AI is being used to identify future criminals.



A Princeton study found that a “machine-learning program associated female names more than male names with familial attributes such as ‘parents’ and ‘wedding.’ Male names had stronger associations with career-related words such as ‘professional’ and ‘salary.’” No surprise then that, as a Carnegie Mellon study found, Google is targeting ads for high-paying jobs primarily at men. Still, as Michael Carl Tschantz of the International Computer Science Institute admits, “We can’t look inside the black box that makes the decisions.”
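To make the Princeton finding concrete: this kind of bias is typically measured by comparing distances between word vectors. The snippet below is a minimal sketch of that idea, an association test in the spirit of the study, run over invented toy embeddings; the names, words, and vectors are hypothetical placeholders, not the study’s data or code. With real pretrained embeddings (for example, vectors trained on web text), the same arithmetic is what surfaces the reported associations.

```python
# Illustrative sketch only: an association test over word embeddings.
# The vectors below are invented placeholders; in practice you would
# load pretrained embeddings trained on large text corpora.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, attr_a, attr_b):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    return (np.mean([cosine(word_vec, a) for a in attr_a]) -
            np.mean([cosine(word_vec, b) for b in attr_b]))

def bias_score(targets_x, targets_y, attr_a, attr_b):
    """Positive when X-words lean toward set A and Y-words toward set B."""
    return (np.mean([association(x, attr_a, attr_b) for x in targets_x]) -
            np.mean([association(y, attr_a, attr_b) for y in targets_y]))

# Hypothetical 3-dimensional embeddings, purely for illustration.
emb = {
    "amy":          np.array([0.9, 0.1, 0.0]),
    "john":         np.array([0.1, 0.9, 0.0]),
    "wedding":      np.array([0.8, 0.2, 0.1]),
    "parents":      np.array([0.7, 0.3, 0.1]),
    "salary":       np.array([0.2, 0.8, 0.1]),
    "professional": np.array([0.1, 0.9, 0.2]),
}

female_names = [emb["amy"]]
male_names   = [emb["john"]]
family_words = [emb["wedding"], emb["parents"]]
career_words = [emb["salary"], emb["professional"]]

# A positive score means the female names sit closer to family words and
# the male names closer to career words, mirroring the reported finding.
print(bias_score(female_names, male_names, family_words, career_words))
```

The point of the exercise is that nothing in the pipeline is malicious; the associations are simply absorbed from the text the model was trained on, which is exactly why they go unnoticed until someone measures them.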

And there’s the problem at its basic level. As long as AI is primarily dedicated to advancing economic goals, its workings are likely to remain largely proprietary and thus unavailable for scrutiny—that’s assuming its creators even know how it works. Our best—and maybe only—defense against this danger to our society is to educate ourselves and our children about AI and machine-learning technology so we aren’t treating AI as some sacred form of modern magic whose workings and effects we’re forced to unquestioningly accept. Forget robot overlords for now—it’s the short-sighted greed of our human ones that should worry us.

