Big Think Books

Can Silicon Valley save the world? Or are social networks ruining everything?

“At times, it seems as if we are condemned to try to understand our own time with conceptual frameworks more than half a century old.” Historian Niall Ferguson says it’s time for an update. 

At times, it seems as if we are condemned to try to understand our own time with conceptual frameworks more than half a century old. Since the financial crisis, many economists have been reduced to recycling the ideas of John Maynard Keynes, who died in 1946. Confronted with populism, writers on American and European politics repeatedly confuse it with fascism, as if the era of the world wars is the only history they have ever studied. Analysts of international relations seem to be stuck with terminology that dates from roughly the same period: realism or idealism, containment or appeasement, deterrence or disarmament. George Kennan’s ‘Long Telegram’ was dispatched just two months before Keynes’s death; Hugh Trevor-Roper’s The Last Days of Hitler was published the following year. Yet all this was seventy years ago. Our own era is profoundly different from the mid-twentieth century. The near-autarkic, commanding and controlling states that emerged from the Depression, the Second World War and the early Cold War exist today, if at all, only as pale shadows of their former selves. The bureaucracies and party machines that ran them are defunct or in decay. The administrative state is their final incarnation. Today, the combination of technological innovation and international economic integration has created entirely new forms of network—ranging from the criminal underworld to the rarefied ‘overworld’ of Davos—that were scarcely dreamt of by Keynes, Kennan or Trevor-Roper.

Winston Churchill famously observed, ‘The longer you can look back, the farther you can look forward.’ We, too, must look back longer and ask ourselves the question: is our age likely to repeat the experience of the period after 1500, when the printing revolution unleashed wave after wave of revolution? Will the new networks liberate us from the shackles of the administrative state as the revolutionary networks of the sixteenth, seventeenth and eighteenth centuries freed our ancestors from the shackles of spiritual and temporal hierarchy? Or will the established hierarchies of our time succeed more quickly than their imperial predecessors in co-opting the networks, and enlist them in their ancient vice of waging war?

A libertarian utopia of free and equal netizens—all interconnected, sharing all available data with maximum transparency and minimal privacy settings—has a certain appeal, especially to the young. It is romantic to imagine these netizens, like the workers in [Fritz] Lang’s “Metropolis”, spontaneously rising up against the world’s corrupt elites, then unleashing the might of artificial intelligence to liberate themselves from the drudgery of work, too. Those who try to look forward without looking back very easily fall into the trap of such wishful thinking. Since the mid-1990s, computer scientists and others have fantasized about the possibility of a ‘global brain’—a self-organizing ‘planetary superorganism’. In 1997 Michael Dertouzos looked forward to an era of ‘computer-aided peace’. ‘New information technologies open up new vistas of non-zero sumness,’ wrote one enthusiast in 2000. Governments that did not react swiftly by decentralizing would be ‘swiftly…punished’. N. Katherine Hayles was almost euphoric. ‘As inhabitants of globally interconnected networks,’ she wrote in 2006, ‘we are joined in a dynamic coevolutionary spiral with intelligent machines as well as with the other biological species with whom we share the planet.’ This virtuous upward spiral would ultimately produce a new ‘cognisphere’. Three years later, Ian Tomlin envisioned ‘infinite forms of federations between people…that overlook…differences in religion and culture to deliver the global compassion and cooperation that is vital to the survival of the planet’. ‘The social instincts of humans to meet and share ideas,’ he declared, ‘might one day be the single thing that saves our race from its own self destruction.’ ‘Informatization,’ wrote another author, would be the third wave of globalization. ‘Web 3.0’ would produce ‘a contemporary version of a “Cambrian explosion”’ and act as ‘the power-steering for our collective intelligence’.

The masters of Silicon Valley have every incentive to romanticize the future. Balaji Srinivasan conjures up heady visions of the millennial generation collaborating in computer ‘clouds’, freed from geography, and paying each other in digital tokens, emancipated from the state payment systems. Speaking at the 2017 Harvard Commencement, Mark Zuckerberg called on the new graduates to help ‘create a world where everyone has a sense of purpose: by taking on big meaningful projects together, by redefining equality so everyone has the freedom to pursue purpose, and by building community across the world’. Yet Zuckerberg personifies the inequality of superstar economics. Most of the remedies he envisages for inequality—’universal basic income, affordable childcare, healthcare that [isn’t] tied to one company…continuous education’—cannot be achieved globally but are only viable as national policies delivered by the old twentieth-century welfare state. And when he says that ‘the struggle of our time’ is between ‘the forces of freedom, openness and global community against the forces of authoritarianism, isolationism and nationalism,’ he seems to have forgotten just how helpful his company has been to the latter.

Histories of futurology give us little reason to expect much, if any, of the Silicon Valley vision of utopia to be realized. Certainly, if Moore’s Law continues to hold, computers should be able to simulate the human brain by around 2030. But why would we expect this to have the sort of utopian outcomes imagined in the preceding paragraph? Moore’s Law has arguably been in operation since Charles Babbage’s ‘Analytical Engine’ was (partly) built before his death in 1871, and certainly since the Second World War. It cannot be said that there has been commensurate exponential improvement in our productivity, much less our moral conduct as a species. There is a powerful case to be made that the innovations of the earlier industrial revolutions were of more benefit to mankind than those of the most recent one. And if the principal consequence of advanced robotics and artificial intelligence really is going to be large-scale unemployment, the chances are surely quite low that a majority of mankind will uncomplainingly devote themselves to harmless leisure pursuits in return for some modest but sufficient basic income. Only the sedative-based totalitarianism imagined by Aldous Huxley would make such a social arrangement viable. A more likely outcome is a repeat of the violent upheavals that ultimately plunged the last great Networked Age into the chaos that was the French Revolution.

Moreover, the suspicion cannot be dismissed that, despite all the utopian hype, less benign forces have already learned how to use and abuse the ‘cognisphere’ to their advantage. In practice, the Internet depends for its operation on submarine cables, fibre-optic wires, satellite links and enormous warehouses full of servers. There is nothing utopian about the ownership of that infrastructure, nor the oligopolistic arrangements that make ownership of major web platforms so profitable. Vast new networks have been made possible but, like the networks of the past, they are hierarchical in structure, with small numbers of super-connected hubs towering over the mass of sparsely connected nodes. And it is no longer a mere possibility that this network can be instrumentalized by corrupt oligarchs or religious fanatics to wage a new and unpredictable kind of war in cyberspace. That war has commenced. Indices of geopolitical risk suggest that conventional and even nuclear war may not be far behind. Nor can it be ruled out that a ‘planetary superorganism’ created by the Dr Strangeloves of artificial intelligence may one day run amok, calculating—not incorrectly—that the human race is by far the biggest threat to the long-run survival of the planet itself and exterminating the lot of us.

‘I thought once everybody could speak freely and exchange information and ideas, the world is automatically going to be a better place,’ said Evan Williams, one of the co-founders of Twitter, in May 2017. ‘I was wrong about that.’ The lesson of history is that trusting in networks to run the world is a recipe for anarchy: at best, power ends up in the hands of the Illuminati, but more likely it ends up in the hands of the Jacobins. Some today are tempted to give at least ‘two cheers for anarchism’. Those who lived through the wars of the 1790s and 1800s learned an important lesson that we would do well to re-learn: unless one wishes to reap one revolutionary whirlwind after another, it is better to impose some kind of hierarchical order on the world and to give it some legitimacy. At the Congress of Vienna, the five great powers agreed to establish such an order, and the pentarchy they formed provided a remarkable stability for the better part of the century that followed. Just over 200 years later, we confront the same choice they faced. Those who favour a world run by networks will end up not with the interconnected utopia of their dreams but with a world divided between FANG and BAT and prone to all the pathologies discussed above, in which malignant sub-networks exploit the opportunities of the World Wide Web to spread virus-like memes and mendacities.

The alternative is that another pentarchy of great powers recognizes their common interest in resisting the spread of jihadism, criminality and cyber-vandalism, to say nothing of climate change. In the wake of the 2017 WannaCry episode, even the Russian government must understand that no state can hope to rule Cyberia for long: that malware was developed by the American NSA as a cyber weapon called EternalBlue, but was stolen and leaked by a group calling themselves the Shadow Brokers. It took a British researcher to find its ‘kill switch’, but only after hundreds of thousands of computers had been infected, including American, British, Chinese, French and Russian machines. What could better illustrate the common interest of the great powers in combating Internet anarchy? Conveniently, the architects of the post-1945 order created the institutional basis for such a new pentarchy in the form of the permanent members of the UN Security Council, an institution that retains the all-important ingredient of legitimacy. Whether or not these five great powers can make common cause once again, as their predecessors did in the nineteenth century, is the great geopolitical question of our time.

Six centuries ago, in Siena, the Torre del Mangia of the Palazzo Pubblico cast a long shadow over the Piazza del Campo, the fan-like space that was by turns a marketplace, a meeting place and, twice a year, a racetrack. The tower’s height was to make a point: it reached exactly the same elevation as the city’s cathedral, which stood on Siena’s highest hill, symbolizing the parity of temporal and spiritual hierarchies. A century ago, in Lang’s “Metropolis”, hierarchical power was symbolized by the skyscrapers of Manhattan, which still keep the south and east of Central Park in shade for large portions of the day. When the first great towers were built in New York, they seemed to be appropriately imposing accommodation for the hierarchical corporations that dominated the US economy.

By contrast, today’s dominant technology companies eschew the vertical. Facebook’s headquarters in Menlo Park, designed by Frank Gehry, is a sprawling campus of open-plan offices and play-areas—a ‘single room that fits thousands of people’, in Mark Zuckerberg’s words, or (perhaps more accurately) an immense kindergarten for geeks. The main building at the new ‘Apple Park’ in Cupertino is a gigantic circular spaceship with only four storeys (above ground)—’a centre for creativity and collaboration’, designed by the late Steve Jobs, Norman Foster and Jonathan Ive as if to host a lattice-like network, each node an equal, with a uniform number of edges, but just one restaurant. Google’s new headquarters in Mountain View, set amid ‘trees, landscaping, cafes, and bike paths’, will be made of ‘lightweight block-like structures which can be moved around easily’, as if constructed from Lego and located in a nature reserve: an office without foundations or a floor-plan, mimicking the constantly evolving network it hosts. Silicon Valley prefers to lie low, and not only for fear of earthquakes. Its horizontal architecture reflects the reality that it is the most important hub of a global network: the world’s town square.

On the other side of the United States, however—on New York City’s 5th Avenue—there looms a fifty-eight-storey building that represents an altogether different organizational tradition. And no one individual in the world has a bigger say in the choice between networked anarchy and world order than the absent owner of that dark tower.

Adapted from THE SQUARE AND THE TOWER: Networks and Power, from Freemasons to Facebook by Niall Ferguson, published by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2017 by Niall Ferguson.

