AI and brain interfaces may be about to change how we make music

Computer control, in the form of AI and brain-computer interfaces, is being introduced to the art of composing.

When Yamaha demonstrated an AI that let a dancer play the piano through movement at a Tokyo concert hall in November 2017, it was the latest example of the ways in which computers are increasingly getting involved in music-making. We’re not talking about the synthesizers and other CPU-based instruments of contemporary music. We’re talking about computers’ potential as composers’ tools, or as composers themselves.


Yamaha’s use of AI is quite different: the computer acts less as a composer than as a translator between body and instrument. In the November performance, world-renowned dancer Kaiji Moriyama was outfitted with electrodes on his back, wrists, and ankles, and set free to express himself as AI algorithms converted his movements into musical phrases for transmission to Yamaha’s Disklavier piano via MIDI messages. (MIDI, short for Musical Instrument Digital Interface, is the standard protocol through which electronic musical instruments are controlled.)
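Yamaha hasn’t published how its mapping works, but the instrument-facing plumbing is ordinary MIDI. Here’s a minimal Python sketch using the mido library, in which made-up, normalized motion-sensor readings are mapped onto a scale and sent to the default MIDI output; the mapping function is a hypothetical stand-in, not Yamaha’s algorithm.

```python
import time
import mido  # pip install mido python-rtmidi

def sensor_to_note(value: float) -> int:
    """Map a normalized sensor reading (0.0-1.0) onto a one-octave
    C-major scale. Purely illustrative; Yamaha's actual mapping is
    phrase-based and far more sophisticated."""
    scale = [60, 62, 64, 65, 67, 69, 71, 72]  # C4..C5
    index = min(int(value * len(scale)), len(scale) - 1)
    return scale[index]

def play_gesture(port: mido.ports.BaseOutput, readings: list[float]) -> None:
    """Send each motion reading to the instrument as a short MIDI note."""
    for value in readings:
        note = sensor_to_note(value)
        port.send(mido.Message('note_on', note=note, velocity=80))
        time.sleep(0.25)  # quarter-second note length
        port.send(mido.Message('note_off', note=note))

if __name__ == '__main__':
    # Hypothetical readings standing in for a dancer's sensor stream.
    with mido.open_output() as port:  # default MIDI output device
        play_gesture(port, [0.1, 0.4, 0.8, 0.5, 0.9, 0.3])
```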


Yamaha’s AI, which is still in development, worked from a database of linked musical phrases, selecting melodies from it to send to the instrument based on Moriyama’s motions.
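To make the idea of “linked” phrases concrete, here’s a toy sketch; the phrase names, feature vectors, and links are invented for illustration, since Yamaha’s database isn’t public. Each phrase declares which phrases may musically follow it, and the selector picks the allowed successor whose features sit closest to the current motion:

```python
import math

# A toy "linked phrase" database: each phrase carries a feature vector
# describing the motion it suits (speed, height) and a list of phrases
# that may musically follow it. All names and values are hypothetical.
PHRASES = {
    'calm_arpeggio':  {'features': (0.2, 0.3), 'next': ['calm_arpeggio', 'rising_run']},
    'rising_run':     {'features': (0.6, 0.7), 'next': ['leaping_chords', 'calm_arpeggio']},
    'leaping_chords': {'features': (0.9, 0.8), 'next': ['rising_run']},
}

def pick_phrase(motion: tuple[float, float], previous: str | None) -> str:
    """Choose the linked phrase closest to the current motion, restricted
    to the phrases the previous one is allowed to lead into."""
    candidates = PHRASES[previous]['next'] if previous else PHRASES.keys()
    return min(
        candidates,
        key=lambda name: math.dist(PHRASES[name]['features'], motion),
    )

previous = None
for motion in [(0.1, 0.2), (0.7, 0.6), (0.95, 0.9)]:
    previous = pick_phrase(motion, previous)
    print(previous)  # calm_arpeggio -> rising_run -> leaping_chords
```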

Moriyama was accompanied onstage by the Berlin Philharmonic Orchestra Scharoun Ensemble.


We’ve written previously about other musical AI, including Amper, a web-based platform that composes passages based on descriptors of style, instrumentation, and mood. Singer-songwriter Taryn Southern has used Amper as her primary collaborator in writing an album.

Another fascinating avenue being explored is the use of brain-computer interfaces (BCIs) that allow wearers to think music into existence. It’s an intriguing way for anyone to play music, but it’s especially promising for people whose physical limitations would otherwise make creating music difficult or even impossible.

Certain electroencephalogram (EEG) signals correspond to known brain activities. One such signal, the P300 ERP (“ERP” for “event-related potential”), signifies a person’s reaction to a stimulus, and it’s previously been used in BCI applications for spelling, operating local environmental controls, web browsing, and painting. In September 2017, researchers led by BCI expert Gernot Müller-Putz of TU Graz's Institute of Neural Engineering published research in PLOS ONE describing their “Brain Composer” project, which leveraged the P300 to bring musical ideas directly from composers’ minds to notated sheet music. The group works in collaboration with the MoreGrasp and “Feel Your Reach” projects.
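In rough outline, a P300 application flashes candidate items at the user, averages the EEG epochs recorded after each item’s flashes to suppress background activity, and selects the item whose averaged response shows the strongest positivity around 300 milliseconds after the flash. The numpy sketch below illustrates only that averaging-and-scoring idea, with an assumed sampling rate, a single channel, and synthetic data; real pipelines, including the Graz group’s, involve many channels, filtering, and trained classifiers.

```python
import numpy as np

FS = 256  # assumed sampling rate (Hz)
WINDOW = slice(int(0.25 * FS), int(0.45 * FS))  # ~250-450 ms post-stimulus

def pick_target(epochs_by_item: dict[str, np.ndarray]) -> str:
    """Given single-channel EEG epochs grouped by flashed item
    (shape: trials x samples), average each item's epochs to suppress
    background EEG, then choose the item whose average shows the
    largest positivity in the P300 window."""
    scores = {
        item: epochs.mean(axis=0)[WINDOW].mean()
        for item, epochs in epochs_by_item.items()
    }
    return max(scores, key=scores.get)

# Toy demo: the 'G' epochs carry a small synthetic P300-like bump.
rng = np.random.default_rng(0)
n_trials, n_samples = 15, FS  # fifteen 1-second epochs per item
epochs = {item: rng.normal(0, 1, (n_trials, n_samples)) for item in 'CEG'}
epochs['G'][:, WINDOW] += 1.0  # simulated event-related positivity
print(pick_target(epochs))  # -> 'G'
```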

The researchers’ first step was training the BCI application to recognize alphabetical letters before moving on to note length and pitch, as well as notation elements such as rests and ties. In this video from the i-KNOW Conference, the researchers demonstrate the successful outcome of just 90 minutes’ work with a motor-impaired subject.
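Calibration in systems like this typically has the user attend a known target while everything flashes, yielding labeled “target” and “non-target” epochs on which a classifier is trained; linear discriminant analysis is a common choice for P300 spellers, though we can’t say it’s exactly what the Brain Composer uses. A synthetic-data sketch with scikit-learn:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy calibration data: attended flashes ("target") carry a P300-like
# shift; unattended flashes don't. Shapes and values are synthetic,
# not the Graz pipeline's actual features.
rng = np.random.default_rng(1)
n_features = 64  # e.g., downsampled samples x channels, flattened

target = rng.normal(0.8, 1.0, (200, n_features))       # epochs with P300
non_target = rng.normal(0.0, 1.0, (1000, n_features))  # epochs without

X = np.vstack([target, non_target])
y = np.array([1] * len(target) + [0] * len(non_target))

clf = LinearDiscriminantAnalysis().fit(X, y)

# At composition time, each candidate (a letter during training, later
# a pitch, note length, or rest) gets the mean classifier score of its
# flashes; the highest-scoring candidate goes onto the score sheet.
candidates = {'quarter note': rng.normal(0.8, 1.0, (10, n_features)),
              'rest':         rng.normal(0.0, 1.0, (10, n_features))}
scores = {name: clf.decision_function(ep).mean()
          for name, ep in candidates.items()}
print(max(scores, key=scores.get))  # -> 'quarter note'
```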


This is all exciting stuff, and a godsend for musical souls with physical limitations, even if the results are a little odd, as in Yamaha’s case. (The BCI example sounds remarkably like the theme from Fringe.)

We’ve been augmenting our natural musical capabilities with technology ever since we picked up our first rock, and certainly by the time we honked our first steampunk-looking saxophones. We should have no issue with adding AI and BCIs to our toolbox.

If AI comes up with music we might not, that’s fine. The workings of music remain pretty mysterious in any event, so here’s an intriguing thought. Though many of us basically prefer music that’s catchy and sonorous, the stuff that really gets us in the gut tends to have something unexpected about it: a surprising dissonance or rhythm, an odd “hair” out of place that makes the moment leap from our speakers or the stage and into our lives as more of an experience than a piece of art, leaving us a little startled and even moved. So if AI can beat a flesh-and-blood chess player by not thinking like a human, imagine what we’re about to hear.
