Now that computers can challenge humans on Jeopardy! and defeat a Grandmaster at chess, it's time to ask serious philosophical questions about what it means to be "human" in an era of accelerating technological change. If the Singularity is near, what impact will this new wave of computational power have on the way we think about emotions, personality and even mortality? Just as Wall Street banks routinely hire mathematics and theoretical-physics graduates to build complex financial models, and consumer-goods companies hire anthropologists to study consumer habits, Silicon Valley should start hiring philosophers.
Recently, a number of fascinating articles have addressed philosophical issues in the computational realm. For example, a wonderful adapted excerpt from Brian Christian’s The Most Human Human recently appeared in The Atlantic. In it, Christian recounts his experience at a Turing Test competition in Britain, where humans must convince a panel of "blind" judges that they are, indeed, more "human" than a computer. This is harder than it sounds: computers are surprisingly good at holding quick five-minute IM chats. Even in conversations lasting up to 30 minutes, they can sometimes fool most judges by relying on a number of algorithms and conversational tricks. In one instance cited by Christian, a judge was so convinced he was conversing with another human that he invited the computer out for a beer afterwards (at which point the computer promptly broke down and started spewing gibberish).
If we accept that the companies of Silicon Valley need philosophers, what would they actually do? First, they would address the changing dimensions of human-computer interaction: What should a human-computer interface look like? Which tasks should be handled by computers, and which by humans? Second, the philosophers of Silicon Valley would tackle the inevitable moral and existential questions: To what degree should humans integrate technology to become "super-human"? Should we create certain technologies if we know they will produce a new caste system of haves and have-nots?
Bringing philosophers into the corporation is not an entirely new idea. For example, celebrated thinker Alain de Botton has been raising important philosophical questions at the intersection of art and commerce for years: he's told us how Proust can change our lives, debated the pleasures and sorrows of work, and explored how status anxiety plagues us all. He routinely refers to the philosophical teachings of Epicurus, Montaigne, Nietzsche, Schopenhauer, Seneca, and Socrates. And then there’s John Armstrong, who holds the intriguing title of “philosopher-in-residence” at Melbourne Business School.
Who would have thought that the Gordon Gekkos of Wall Street would ever pay top dollar for theoretical physicists to devise sophisticated hedging strategies? Maybe one day a few years from now, the new "hot" major on liberal-arts campuses across America will be philosophy. After all, the philosopher-kings of the computational world will be responsible for far more than carving out a visionary future for their own companies: they will be determining how each of us lives, thinks and feels.