Can AI write authentic poetry?

Cognitive psychologist and poet Keith Holyoak explores whether artificial intelligence could ever achieve poetic authenticity.
An AI-generated illustration of a man sitting at a desk, accompanied by thought-provoking poetry. (Credit: Stefano Ussi / Adobe Stock / Big Think / Ana Kova)

“Time — a few centuries here or there — means very little in the world of poems.” There is something reassuring about Mary Oliver’s words. Especially in an era of rapid change, there is comfort to be had in those things that move slowly. But oceans rise and mountains fall; nothing stays the same. Not even the way poetry is made.

The disappearance of the author in 20th-century literary criticism can perhaps be traced back to the surrealist movement and its game of “exquisite corpse.” The surrealists believed that a poem can emerge not only from the unconscious mind of an individual, but from the collective mind of many individuals working in concert — even, or perhaps especially, if each individual has minimal knowledge of what the others are doing. Soon the idea of making art from recycled objects emerged. In the realm of literature, this approach took the form of found poetry.

To create a found poem, one or more people collect bits of text encountered anywhere at all, and with a little editing stitch the pieces together to form a collage-like poem. Examining this generative activity, we may find it difficult to identify who, if anyone, is the “poet” who writes the found poem (or for that matter, to be confident that “writing” is an apt name for the process). Still, even if no one’s consciousness guided the initial creation of the constituent phrases, one or more humans will have exercised their sensitivity and discrimination in selecting the bits to include, and in the way these pieces are ordered and linked to form a new whole. The author (or authors) at a minimum must do the work of a careful reader. Can the human be pushed still further into the background, or even out of the picture?

The most radical technological advance of the 20th century might seem to have nothing at all to do with the writing of poetry. If we make a list of the great leaps that led to modern civilization — control of fire, agriculture, the wheel, electricity, and perhaps a few more — the most recent addition is a machine that uses electrons to do computation. The first functioning digital computers were constructed midcentury by Alan Turing and a few others. Over the next not-quite-a-century-yet, computers became enormously faster and more powerful, began to process information in parallel rather than just sequentially, and were linked together into a vast worldwide network known as the internet. Along the way, these devices enabled the creation of artificial versions of a trait previously found only in biological life forms, most notably humans — intelligence.

Artificial intelligence (AI) is in the process of changing the world and its societies in ways no one can fully predict. On the hazier side of the present horizon, there may come a tipping point at which AI surpasses the general intelligence of humans. (In various specific domains, notably mathematical calculation, the intersection point was passed decades ago.) Many people anticipate this technological moment, dubbed the Singularity, as a kind of Second Coming — though whether of a savior or of Yeats’s rough beast is less clear. Perhaps by constructing an artificial human, computer scientists will finally realize Mary Shelley’s vision.

Of all the actual and potential consequences of AI, surely the least significant is that AI programs are beginning to write poetry. But that effort happens to be the AI application most relevant to our theme. And in a certain sense, poetry may serve as a kind of canary in the coal mine — an early indicator of the extent to which AI promises (threatens?) to challenge humans as artistic creators. If AI can be a poet, what other previously human-only roles will it slip into?

So, what is the current state of AI and computer-generated poetry? This is a less central question than might be supposed. Especially in this time of rapid AI advances, the current state of the artificial poetic arts is merely a transitory benchmark. We need to set aside the old stereotype that computer programs simply follow fixed rules and do what humans have programmed them to do, and so lack any capacity for creativity. Computer programs can now learn from enormous sets of data using methods called deep learning. What the programs learn, and how they will behave after learning, is very difficult (perhaps impossible) to predict in advance. The question has arisen (semi-seriously) whether computer programs ought to be listed as coauthors of scientific papers reporting discoveries to which they contributed. There is no doubt that some forms of creativity are within the reach, and indeed the grasp, of computer programs.

But what about poetry? To evaluate computer-generated poetry, let’s pause to remind ourselves what makes a text work as a poem. A successful poem combines compelling content (what Coleridge called “good sense”) with aesthetically pleasing wordplay (metaphor and other varieties of symbolism), coupled with the various types of sound similarities and constraints of form.

In broad strokes, an automated approach to constructing poems can operate using a generate-then-select method. First, lots of candidate texts are produced, out of which some (a very few, or just one) are then selected as winners worth keeping. Roughly, computer programs can be very prolific in generating, but (to date) have proved less capable at selecting. At the risk of caricature, the computer poet can be likened to the proverbial monkey at the typewriter, pounding out reams of garbage within which the occasional Shakespearean sonnet might be found — with the key difference that the computer operates far more rapidly than any monkey (or human) could. To be fair, the program’s search can be made much less random than the monkey’s typing. Current computer poetry programs usually bring in one or more humans to help in selecting poetic gems embedded in vast quantities of computer-generated ore. An important question, of course, is whether an authentic creator requires some ability to evaluate their own creations. Perhaps, as Oscar Wilde argued, there is a sense in which an artist must act as their own critic — or not be a true artist at all.
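
To make the generate-then-select idea concrete, here is a minimal sketch in Python. Everything in it is a stand-in of my own: the tiny vocabulary, the random line generator, and the toy scoring function are illustrative assumptions, not the workings of any actual poetry program. The point is simply that generation is cheap, while the critic carries the real burden.

```python
import random

# A tiny stand-in vocabulary; a real generator would draw on a large corpus.
WORDS = ["moon", "river", "glass", "night", "salt", "wind", "ember", "door"]

def generate_line(length=5):
    """Produce one candidate line by sampling words at random."""
    return " ".join(random.choice(WORDS) for _ in range(length))

def score(line):
    """A toy 'critic': reward lines that avoid repeating a word.
    Real selection is far harder, which is why humans are usually enlisted."""
    words = line.split()
    return len(set(words)) / len(words)

# Generate a flood of candidates, then keep only the top few.
candidates = [generate_line() for _ in range(10_000)]
for line in sorted(candidates, key=score, reverse=True)[:3]:
    print(line)
```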

One use of computers is simply to provide a platform for human generation and selection. The internet makes it easy for large groups of people to collaborate on projects. The kind of collective poetry writing encouraged by the surrealists has evolved into crowdsourcing websites that allow anyone to edit an emerging collective poem. Each contributor gets to play a bit part as author/editor. No doubt some people enjoy participating in the creation of poems by crowdsourcing. It’s less clear whether Sylvia Plath would have associated this activity with “the most ingrown and intense of the creative arts.”

But can computers write poetry on their own, or even make substantial contributions as partners with humans? Not surprisingly, computers are better able to generate and select poems that impose minimal constraints — the less sense and the less form the text requires, the easier for a machine to generate it. A cynic might suggest that the extremes of 20th-century free verse set the stage for AI poets by lowering the bar. (I’m reminded of an old Chinese saying, “A blind cat can catch a dead mouse.”) If that classic line of surrealism, “The exquisite corpse shall drink the new wine,” strikes you as a fine contribution to poetry, then AI is ready to get to work — there are plenty more quasi-random associations to be found by brute search.

As another example, since the 1960s computers have been creating poems in the form of haiku in English. Defined in the crudest possible way, an English haiku consists of words that total 17 syllables. Rather than actually composing haiku, some computer programs simply look for found poems of 17 syllables (a rough sketch of such a syllable-count filter follows the example). One program retrieved this haunting gem from the electronic pages of the New York Times:

We’re going to start

winning again, believe me.

We’re going to win.
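
Here is that syllable-count filter, reduced to a rough Python sketch. The vowel-group counter and the sample sentences are my own simplifications; a real program would need a pronunciation dictionary and a vast supply of source text.

```python
import re

def count_syllables(word):
    """Very crude heuristic: count groups of adjacent vowels.
    It miscounts words such as "we're" or "believe", so a serious
    filter would consult a pronunciation dictionary instead."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def syllable_total(sentence):
    """Total the heuristic syllable counts for every word in a sentence."""
    return sum(count_syllables(w) for w in re.findall(r"[a-zA-Z']+", sentence))

# Stand-in sentences; a found-haiku hunter would scan news articles instead.
candidates = [
    "The committee will reconvene on Tuesday afternoon.",
    "Rain keeps falling on the roof of the old train station tonight.",
]
for sentence in candidates:
    total = syllable_total(sentence)
    label = "possible found haiku" if total == 17 else "skip"
    print(f"{total:2d} syllables ({label}): {sentence}")
```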

The current state-of-the-art AI poets can actually generate text, rather than just retrieve it. The techniques vary, but most are founded on a mathematical discipline not typically viewed as poetic — statistics. The “big data” available to current AI systems includes massive electronic text corpora, such as Google News (which at the moment contains upward of 100 billion word tokens, ever-growing). Recall those constraints that govern language — the rules of syntax, the semantics of word meanings, the sounds described by phonology, the knowledge about context and social situations that constitutes pragmatics. All of those constraints, plus the linguistic choices and styles of individual writers, collectively yield the actual text produced by human writers — which accumulates as electronic data available for AI systems.

This massive body of text can be described by complex distributional statistics — “You shall know a word by the company it keeps,” to quote linguist John Firth. Quite amazingly, machine learning equipped with advanced statistics can recover all sorts of hidden regularities by analyzing how words are distributed in texts. These methods can determine that dog and cat are very similar in meaning, that run and running are variants of the same verb, and that king and queen are related in the same way as man and woman.
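
As a toy illustration of Firth’s dictum, the sketch below builds co-occurrence vectors from a handful of made-up sentences and compares them with cosine similarity. The miniature corpus is my own invention, and real systems operate on billions of words with far more sophisticated models, but even at this scale "dog" and "cat" end up closer to each other than either is to "king".

```python
from collections import Counter
from itertools import combinations
import math

# A miniature "corpus"; real systems learn from billions of words.
corpus = [
    "the dog chased the ball in the park",
    "the cat chased the mouse in the house",
    "the dog slept on the rug",
    "the cat slept on the rug",
    "the king addressed the court",
    "the queen addressed the court",
]

# Count how often each pair of words appears in the same sentence.
cooc = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        cooc[(a, b)] += 1

def vector(word, vocab):
    """Represent a word by its co-occurrence counts with every other word."""
    return [cooc.get(tuple(sorted((word, other))), 0) for other in vocab]

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm if norm else 0.0

vocab = sorted({w for s in corpus for w in s.split()})
dog, cat, king = (vector(w, vocab) for w in ("dog", "cat", "king"))

# "dog" and "cat" keep similar company, so their vectors are more alike.
print("dog~cat :", round(cosine(dog, cat), 2))
print("dog~king:", round(cosine(dog, king), 2))
```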

Closer to poetry, machine learning can identify words that are often linked metaphorically. As I discuss in my book, many metaphors are conventional. Legions of writers have generated expressions that make use of such common metaphors, leaving traces in the patterns of co-occurrence among words in texts. Armed with statistical analyses of text corpora, AI has entered the humanities. Machine learning can help identify features of word meaning that predict the rated goodness of literary metaphors and the historical period in which texts were written. New tools for writers are becoming available — besides having access to an automated thesaurus and rhyming dictionary, a poet may be able to enlist a computerized metaphor suggester. The statistics of word patterns have entered into debates about Shakespeare’s possible collaborations. A basic program for anticipating the next word a person will type can allow a human to select among possible “Shakespearean” continuations of a sentence, thereby creating a sort of parody of a sonnet by the immortal bard. A program that can find associates linking a target topic with a metaphorical source, and render them into approximate English (with some aid from a human), was able to produce a text that begins:

My marriage is an emotional prison

   Barred visitors do marriages allow

The most unitary collective scarcely organizes so much

Intimidate me with the official regulation of your prison

   Let your sexual degradation charm me. …
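
Returning to the next-word predictor mentioned just before this excerpt: a toy version can be written as a bigram lookup table. The three training lines below are stand-ins (a real tool would ingest Shakespeare's complete plays and sonnets), and the function name is my own, but the mechanism, counting which words follow which, is the essential idea.

```python
from collections import Counter, defaultdict

# Stand-in training lines; a real tool would be trained on all of Shakespeare.
training_lines = [
    "shall i compare thee to a summer's day",
    "shall i compare thee to a winter's night",
    "thou art more lovely and more temperate",
]

# For each word, count which words follow it.
followers = defaultdict(Counter)
for line in training_lines:
    words = line.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1

def suggest_next(word, k=3):
    """Offer up to k of the most frequent continuations for a word,
    from which a human co-author can pick."""
    return [w for w, _ in followers[word].most_common(k)]

print(suggest_next("compare"))  # ['thee']
print(suggest_next("a"))        # ["summer's", "winter's"]
```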

It’s easy to dismiss AI poetry on the grounds that it has so far failed to produce any good poems. Coleridge would doubtless have seen current AI poetry as the operation of fancy — mechanical recombination of elements — rather than the active imagination. But the fact that AI programs have yet to reach the level of human makars is not conclusive evidence that AI can never do so. For the moment, let’s grant AI poets the benefit of the doubt, and assume that (with human guidance) they will continue to improve in generating humanlike texts. If an AI were to compose a text that a human could appreciate as a poem, would that poem be authentic?

To assess the implications of a potential AI poet, let’s start by looking at AI as applied to more basic tasks, asking our entering question: shoe or footprint? For example, suppose we have a six-legged robot that can move around an open environment, climb stairs, and so on. Does it walk? Of course it does. What walking means is to use leglike appendages to traverse the solid surfaces of an environment. Walking is fundamentally defined by success in achieving its function.

Suppose further that our robot uses camera eyes to detect objects in its environment — it can avoid bumping into them and can report what kinds of objects they are. Can our robot see? Of course it can — vision is the use of information from reflected light to recognize objects and determine their locations. Once again, the definition is based on function. We don’t need to inquire as to how the robot’s vision system works, or whether its eyes are humanlike in any important way. The function of vision has been achieved.

In fact, the philosophical doctrine that has generally guided cognitive science and AI is called functionalism. Intelligence is fundamentally defined as the achievement of certain functions. There is certainly a further question as to whether the robot/computer is accomplishing these functions in a humanlike manner. Very roughly, cognitive scientists aim to develop humanlike computer models, whereas pure AI researchers just want to get the job done. But in general, functionalism grants that walking is walking, and vision is vision, regardless of the physical nature of the entity performing those functions. An eye can be built out of rods and cones or from photodiodes — as long as that eye serves the critical function of translating light into information about external objects, the entity using it has sight. From the perspective of functionalism, intelligence, like a shoe, is to be judged by what it accomplishes.

We can ask parallel questions as we consider major cognitive functions one by one. Perception? Attention? Memory? Reasoning? Decision making? Problem solving? For each it seems a functional definition is compelling. If an AI can look at a chessboard, attend to the critical pieces, remember similar positions encountered in previous games, and choose a winning move, then that AI is playing chess. And indeed, ever since 1997 when the computer program Deep Blue bested the human world champion, AI has reigned as the emperor of chess over all players on earth.

So far, the shoe fits AI just fine. But on the way to poetry we’re forced to go further and dive into the murkiest depths of the philosophy of mind. What about understanding? Emotion? Consciousness? Can these aspects of the human mind — all arguably central to poetry and other art forms — be given a strictly functional definition?

The most vexed topic of all is the nature of consciousness. Many aspects of this concept — the ability to focus or shift attention, to evaluate one’s own mental states, to perform self-regulation, and to make decisions — appear amenable to functional definitions. Cognitive scientists create computer simulations of these human capabilities, and AI programs certainly exhibit forms of them. But the aspect of consciousness that seems most important — the phenomenal sense of a rose being red, the night sky overwhelming, a loss heartbreaking — has so far defied scientific understanding.

This has been called the Hard Problem of consciousness — a more accurate term would be the Intractable Problem. Before its stony face every sensible philosophical position seems to sink into absurdity. Consider: if consciousness has a purely functional definition — the capacity for self-regulation, let’s say — then a thermostat is at least minimally conscious. Absurd. If functionalism falters, suppose we substitute materialism — the doctrine that everything in the natural world (including the subjective experience of an individual) is some form of physical matter. Could it be that consciousness is not only dependent on neural activity, but actually is neural activity? Then a neurosurgeon should be able to open up a living brain and see the consciousness inside it. Absurd. So it might seem we’re left with the famous dualism of Descartes — matter (brain included) is one kind of thing, and consciousness is something else altogether. But then the obvious correlations between neural activity and states of consciousness would appear to be some sort of odd coincidence. And how could a poet’s ethereal consciousness reach across the dark unfathomable chasm to cause an emerging poem to be scribbled on paper — an event situated firmly in the realm of matter? All absurd.

The only indisputable statement to be made about consciousness is that no consensus exists among philosophers, psychologists, neuroscientists, and AI researchers. Some neuroscientists believe they are on the threshold of providing a materialist account of consciousness, and some AI researchers believe machines will inevitably acquire consciousness as an emergent property when they reach some critical level of complexity. For what it’s worth (which I freely admit to be very little), I have no faith in these claims. The complexity argument seemed plausible a half century ago when computers were in their infancy. Today, consider a simple thought experiment: Which is more complex, the internet (including every computer attached to it), or the brain of a frog? The internet, I would say. And which is more likely to have some sort of inner experience, the internet or the frog? I’ll put my money on the amphibian.

If every available philosophical account of consciousness appears to be absurd, logic leaves us with two possibilities: At least one of these accounts isn’t truly absurd after all (we just need to clarify things), or the correct account awaits an insight humankind has yet to be granted. Though I remain officially agnostic, for the purpose of the specific question that presently concerns us — can AI write authentic poetry? — the preponderance of evidence leads me to answer “no.” AI has no apparent path to inner experience, which I (and many others) take to be the ultimate source of authentic poetry. A major corollary of this conclusion deserves to be stated: Inner experience can’t be defined as a computational process.

What AI has already accomplished is spectacular, and its further advances will continue to change the world. But for all the functions an AI can potentially achieve — the ability to converse intelligently with humans in their natural languages, to interpret their emotions based on facial expression and tone of voice, even to create new artistic works that give humans pleasure — an intelligent program will fall short of authenticity as a poet. AI lacks what is most needed to place the footprints of its own consciousness on another mind: inner experience. That is, experience shaded by the accumulated memories derived over a lifetime. The absence of inner experience also means that AI lacks what is most needed to appreciate poetry: a sense of poetic truth, which is grounded not in objective reality but rather in subjective experience.

It remains to be seen whether AI poems will eventually go beyond intended or unintended parody and trigger emotional responses in human readers that run deeper than wry amusement. Considered as texts, will AI poems reach some level confusable with human greatness, or will poetry created without inner experience inevitably press in vain against some inherent limit? Current AI poets are in essence mining the metaphors that humans have already formed and planted in texts — in the view of Robert Frost, “The richest accumulation of the ages is the noble metaphors we have rolled up.” Could AI create genuinely novel metaphors, rather than only variations of those we humans have come up with already?

Borges thought that truly new metaphors still await discovery. New variations of old metaphors can be very beautiful, he acknowledged, “and only a few critics like myself would take the trouble to say, ‘Well, there you have eyes and stars and there you have time and the river over and over again.’ The metaphors will strike the imagination. But it may also be given to us — and why not hope for this as well? — it may also be given to us to invent metaphors that do not belong, or that do not yet belong, to accepted patterns.”

Keith J. Holyoak, Distinguished Professor of Psychology at the University of California, Los Angeles, is a psychologist and poet. He is the coauthor or editor of a number of books on cognitive psychology and has published four volumes of poetry. This article is adapted from his book “The Spider’s Thread.”

This article was originally published on MIT Press Reader.

