
AI and brain interfaces may be about to change how we make music

Computer control in the form of AI and brain-computer interfaces is being introduced to the art of composing.
(Image credit: Yamaha Corporation)

When Yamaha demonstrated an AI system that let a dancer play a piano through his movements at a Tokyo concert hall in November 2017, it was the latest example of the ways in which computers are increasingly getting involved in music-making. We’re not talking about the synthesizers and other CPU-based instruments of contemporary music. We’re talking about computers’ potential as composers’ tools, or as composers themselves.


Yamaha’s use of AI is quite different from AI-as-composer platforms: it keeps a human performer at the center. In the recent performance, world-renowned dancer Kaiji Moriyama was outfitted with electrodes on his back, wrists, and ankles, and set free to express himself as AI algorithms converted his movements into musical phrases, transmitted to Yamaha’s Disklavier piano as MIDI messages. (MIDI, the Musical Instrument Digital Interface, is a standard protocol through which electronic instruments can be controlled.)

(Image credit: Yamaha Corporation)
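Yamaha hasn’t published the details of its movement-to-music mapping, but the MIDI half of the pipeline is standard. Here’s a minimal sketch in Python using the mido library, with an invented motion-to-pitch mapping standing in for Yamaha’s actual algorithm:

```python
# Minimal sketch: turn a stream of motion-sensor readings into MIDI notes.
# Uses the mido library (pip install mido python-rtmidi). The mapping from
# motion intensity to pitch and velocity below is invented for illustration;
# Yamaha's actual movement-to-phrase algorithm has not been published.
import time
import mido

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C-major scale, MIDI note numbers

def motion_to_note(intensity: float) -> mido.Message:
    """Map a normalized motion intensity (0.0-1.0) to a note_on message."""
    pitch = SCALE[min(int(intensity * len(SCALE)), len(SCALE) - 1)]
    velocity = 40 + int(intensity * 80)  # harder movement -> louder note
    return mido.Message('note_on', note=pitch, velocity=velocity)

with mido.open_output() as port:             # default MIDI output port
    for intensity in [0.2, 0.5, 0.9, 0.7]:   # stand-in for sensor readings
        msg = motion_to_note(intensity)
        port.send(msg)
        time.sleep(0.3)
        port.send(mido.Message('note_off', note=msg.note))
```

A Disklavier, like any MIDI instrument, simply plays whatever note messages arrive, which is why the interesting design work lives entirely in the mapping function.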

Yamaha’s AI, which is still in development, worked from a database of linked musical phrases, selecting phrases and assembling melodies to send to the instrument based on Moriyama’s motions.
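The actual data structure is likewise unpublished, but one plausible way to picture a “database of linked musical phrases” is as a graph in which each phrase points to phrases that can musically follow it, with a motion feature choosing among the links. The phrase names, notes, and selection rule below are all invented for illustration:

```python
# Illustrative sketch of a "linked musical phrase" database: each phrase
# lists phrases that may follow it, and a motion feature picks among them.
# This structure is a guess for illustration, not Yamaha's design.
PHRASES = {
    "calm_a":   {"notes": [60, 62, 64],     "links": ["calm_b", "lively_a"]},
    "calm_b":   {"notes": [64, 62, 60],     "links": ["calm_a", "lively_a"]},
    "lively_a": {"notes": [67, 69, 71, 72], "links": ["lively_b", "calm_a"]},
    "lively_b": {"notes": [72, 71, 69, 67], "links": ["calm_b"]},
}

def next_phrase(current: str, motion_energy: float) -> str:
    """Follow a link: energetic movement favors 'lively' phrases."""
    links = PHRASES[current]["links"]
    wanted = "lively" if motion_energy > 0.5 else "calm"
    for name in links:
        if name.startswith(wanted):
            return name
    return links[0]  # fall back to any legal continuation

phrase = "calm_a"
for energy in [0.1, 0.8, 0.9, 0.3]:  # stand-in motion readings
    phrase = next_phrase(phrase, energy)
    print(phrase, PHRASES[phrase]["notes"])
```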

Moriyama was accompanied onstage by the Berlin Philharmonic Orchestra Scharoun Ensemble.

(Image credit: Yamaha Corporation)

We’ve written previously about other musical AI, such as Amper, a web-based platform that composes passages from descriptors of style, instrumentation, and mood. Singer-songwriter Taryn Southern has been using Amper as her primary collaborator in writing an album.

Another fascinating avenue being explored is the use of brain-computer interfaces (BCIs) that allow wearers to think music into existence. It’s an intriguing way for anyone to play music, but it’s especially promising for people whose physical limitations would otherwise make creating music difficult or even impossible.

Certain electroencephalogram (EEG) signals correspond to known brain activities, such as the P300 ERP (for “event-related potential”), which signifies a person’s reaction to a stimulus. It has previously been used in BCI applications for spelling, operating local environmental controls, browsing the web, and painting. In September 2017, researchers led by BCI expert Gernot Müller-Putz of TU Graz’s Institute of Neural Engineering published research in PLOS ONE describing their “Brain Composer” project, which leveraged the P300 to bring musical ideas directly from composers’ minds to notated sheet music. The team works in collaboration with the MoreGrasp and “Feel Your Reach” projects.
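To make the P300 idea concrete, here’s a toy sketch of the principle behind P300 spellers (and, by extension, the Brain Composer): candidate items are flashed repeatedly, the EEG epochs following each flash are averaged, and the attended item is the one whose averaged response shows the largest positive deflection around 300 milliseconds. The window, sampling rate, and simulated signal below are illustrative assumptions, not the Graz group’s published pipeline:

```python
# Toy illustration of the P300 principle used by BCI spellers. Simplified
# sketch with simulated data; real pipelines add filtering and classifiers.
import numpy as np

FS = 250                       # sampling rate in Hz (assumed)
EPOCH = int(0.6 * FS)          # 600 ms of samples after each flash
P300_WIN = slice(int(0.25 * FS), int(0.40 * FS))  # ~250-400 ms window

def pick_attended(epochs_by_item: dict[str, np.ndarray]) -> str:
    """epochs_by_item maps item -> array of shape (n_flashes, EPOCH)."""
    scores = {
        item: np.mean(epochs, axis=0)[P300_WIN].mean()  # averaged ERP amplitude
        for item, epochs in epochs_by_item.items()
    }
    return max(scores, key=scores.get)

# Simulate 20 flashes per item; the attended item carries a P300-like bump.
rng = np.random.default_rng(0)
def simulate(attended: bool) -> np.ndarray:
    eeg = rng.normal(0, 1.0, (20, EPOCH))  # background noise
    if attended:
        eeg[:, P300_WIN] += 1.0            # small positive deflection
    return eeg

epochs = {item: simulate(item == "C") for item in "ABCD"}
print(pick_attended(epochs))  # -> "C", the attended item
```

Averaging across repeated flashes is the key trick: the P300 is far smaller than background EEG noise on any single trial, but the noise averages toward zero while the response does not.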

The researchers’ first step was training the BCI application to recognize alphabetical letters before moving on to note length and pitch, along with notation symbols such as rests and ties. In this video, the researchers demonstrate the successful outcome of just 90 minutes’ work with a motor-impaired subject.

(Video: i-KNOW Conference)
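Once the speller’s vocabulary is musical symbols rather than letters, each BCI selection maps to a notation event. A hypothetical sketch of that final transcription step, with invented symbol names:

```python
# Sketch of the step from BCI selections to notation: each selected symbol
# becomes a (pitch, duration) event. Symbol names are invented for
# illustration; the paper's actual symbol set may differ.
SYMBOLS = {
    "C4/quarter":   ("C4", 1.0),
    "D4/quarter":   ("D4", 1.0),
    "E4/half":      ("E4", 2.0),
    "rest/quarter": (None, 1.0),  # rests carry duration but no pitch
}

def transcribe(selections: list[str]) -> list[tuple]:
    """Turn a sequence of BCI selections into (pitch, beats) note events."""
    return [SYMBOLS[s] for s in selections]

score = transcribe(["C4/quarter", "D4/quarter", "E4/half", "rest/quarter"])
for pitch, beats in score:
    print(f"{'rest' if pitch is None else pitch}: {beats} beat(s)")
```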

This is all exciting stuff, and a godsend for musical souls with physical limitations, even if the results are a little odd, as in Yamaha’s case. (The BCI example sounds remarkably like the theme from Fringe.)

We’ve been augmenting our natural musical capabilities with technology ever since we picked up our first rock, and certainly by the time we were honking steampunk-looking saxophones. We should have no issue with adding AI and BCIs to our toolbox.

If AI comes up with music we might not, that’s fine. The workings of music remain pretty mysterious in any event, so here’s an intriguing thought. Though many of us basically prefer music that’s catchy and sonorous, the stuff that really gets us in the gut tends to have something unexpected in it: a surprising dissonance or rhythm, an odd “hair” out of place, that makes the moment leap from our speakers or the stage and into our lives as more of an experience than a piece of art, leaving us a little startled and even moved. So if AI can beat a flesh-and-blood chess player by not thinking like a human, imagine what we’re about to hear.

