Google is working on software to turn brain scans into music
It's part of a joint research project between the multinational company and Osaka University
Google has unveiled new technology that will allow people to turn brain scans into music.
The development follows a research project involving the company and researchers at Osaka University, in which five volunteers were played over 500 different tracks across 10 musical genres while lying inside an fMRI scanner.
Scans of the volunteers' brain activity while they listened were then captured and fed into an AI model called Brain2Music, which learned to create songs similar to those the volunteers had heard, generating the audio with Google's AI music generator MusicLM.
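For a rough sense of how this kind of decoding pipeline can work, the sketch below fits a simple linear model that maps fMRI responses to a music-embedding space, the sort of representation a generator like MusicLM can be conditioned on. This is a hypothetical illustration on synthetic data, not Google's actual code; all the names and dimensions here are invented for demonstration.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical sketch (not Google's code): learn a linear map from fMRI
# voxel responses to a music-embedding space. In the real pipeline, the
# predicted embeddings would condition a generator such as MusicLM.

rng = np.random.default_rng(0)
n_clips, n_voxels, embed_dim = 500, 2000, 128  # invented sizes

# Synthetic stand-ins: embeddings of the music clips a volunteer heard,
# and noisy fMRI responses recorded while they listened.
true_embeddings = rng.standard_normal((n_clips, embed_dim))
mixing = rng.standard_normal((embed_dim, n_voxels))
fmri_responses = true_embeddings @ mixing + 0.5 * rng.standard_normal((n_clips, n_voxels))

# Fit the decoder on most clips, holding some out to test generalisation.
train, test = slice(0, 450), slice(450, 500)
decoder = Ridge(alpha=10.0)
decoder.fit(fmri_responses[train], true_embeddings[train])

# Predict music embeddings for unseen brain scans.
predicted = decoder.predict(fmri_responses[test])

# Sanity check: cosine similarity between predicted and actual embeddings.
cos = np.sum(predicted * true_embeddings[test], axis=1) / (
    np.linalg.norm(predicted, axis=1) * np.linalg.norm(true_embeddings[test], axis=1)
)
print(f"mean cosine similarity on held-out clips: {cos.mean():.2f}")
```

The key idea the sketch captures is that the system does not generate audio directly from raw scans: it first decodes brain activity into a compact description of the music, and a separate generative model turns that description into sound.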
Writing in the study, the team behind the research project said "the generated music resembles the musical stimuli that human subjects experienced, with respect to semantic properties like genre, instrumentation, and mood". You can read the full report here.
It's hoped that further research and development could eventually let people turn their thoughts directly into music using AI, without any need for musical stimuli, though current studies remain some way off that point.
For more on this subject, revisit DJ Mag's 2021 piece on how artificial intelligence will help shape music production here.