Algorithm Is A Dancer: How artificial intelligence is reshaping electronic music
Artificial intelligence, the doom-mongers say, will make many human jobs obsolete, and some believe it will destroy the music industry too. But the flipside is a brave new world where AI and people can co-exist, collaborate, and make beautiful music together. We investigate the weird and potentially wonderful world of algorithmic sounds.
Picture this... It’s 2030, and the DJ Mag Top 100 has just been topped, for the first time ever, by an artist created with artificial intelligence. The furore is palpable. The DJ community and scene at large are up in arms.
“They took our jobs!” goes the cry of the headphone-wielding rabble, as circuit boards, computers, their fancy algorithms and all related Kraftwerk memorabilia are burned on massive pyres around the world. “Dance music will never be the same again!” claims the fictitious crowd of enraged future DJs. But they’re wrong. Seen from another perspective, AI could make our culture even more engaging, dynamic and collaborative.
Bottom line: if you make music or DJ, or do anything creative such as making films, taking photos, writing or painting, the chances of robots taking your job are pretty slim, for now. And while a 2017 study suggesting automation could displace up to 800 million jobs worldwide by 2030 points to another epic and potentially turbulent shift in workforce dynamics, job types, economics and lifestyles the world over, this particular aspect of what often feels like a scary future shouldn’t be the primary cause for concern for anyone active in the arts.
In fact, if creativity genuinely does spring from unique aspects of the human experience, such as experimentalism, artistry and genuine personal expression, AI has the potential to open up a whole new realm of creative opportunities and modes of artistic articulation. But only if we embrace AI tools with the same adventurous spirit with which the original dance music pioneers embraced digital tools, experimenting with them and pushing them to their limits to take the music into the future: the destination that has always been electronic music’s core aesthetic, source of inspiration and reference point. DJ Stingray touched on this imminent change in the tech climate when we interviewed him for a DJ Mag cover feature last year.
“It’s more work to be a nerd,” he told us. “It’s more work to be a techy. You have to have some proficiency mentally. You can’t just be a consumer end person. I think maybe we have got a little lazy. We’ve lost our vision. This music started around futurism. And if we lose that we become pop music. Bad pop music.”
Future Shocks
He was talking specifically about how electro had lost its connection with science, technology and futurism, but the same crux lies at the heart of any concerns about AI affecting electronic music. The mathematics and science behind the tech might be a hell of a lot more complex and harder to get your head around, but that’s balanced by the abundance of information out there and the access we now have to the technology. The drive and ethos remain the same: electronic genres such as electro, techno, acid house and jungle were forged by innovative creators who wanted to make something that had never been heard before, which is exactly where AI-led electronic music is heading. What Stingray is effectively saying is that it’s our job as creators to step up alongside the tech and learn with it, just as it learns from us.
This has already been happening for decades. In the 1950s, the composer Lejaren Hiller created the first computer-generated score, the ‘Illiac Suite’. In the ’90s, David Bowie helped develop the Verbasizer, a program that randomised literary source material to create lyrics, and as recently as 2016, Sony’s Flow Machines AI helped to write ‘Daddy’s Car’, a song in the style of The Beatles that is kinda convincing, until you scratch the surface and realise the lyrics don’t mean anything and it all feels a bit soulless. Meanwhile, a little closer to our own world of electronic music, it’s also happening at various extremities of the genre spectrum.
In the commercial world of EDM, Baauer recently collaborated with Miquela Sousa, the slightly eerie and crazily-followed computer-generated Instagram ‘influencer’, on his hit ‘Hate Me’. Admittedly, Miquela sounds like any interchangeable Auto-Tuned vocalist you might find on a trap-influenced pop record, but conceptually the result is an intriguing combination, and an interesting adventure into new musical possibilities. In terms of its millions of streams, it’s most definitely been a commercial success, too. Elsewhere, in much more experimental pastures, Holly Herndon and Mat Dryhurst have been toying with AI concepts and new tech discourses and tools for years, taking in ASMR, science fiction politics, meta imagery and visual language along the way. The forthcoming ‘PROTO’, Herndon’s third album, lands in May; it comprises a choir of human and AI voices, and was co-written with the pair’s own AI ‘baby’, Spawn.
Unlike Baauer and Miquela’s pop collabo, ‘PROTO’ sounds like nothing you’ve heard before. It hurls you straight into the heart of the future that electronic music was so obsessed with to begin with. Any interview with Herndon is worth your time, but a recent discussion with Mary Anne Hobbs on 6 Music was especially thought-provoking, as she explained how AI “challenges the understanding of the origin of ideas and idea sources”. She relates this to the sampling culture on which electronic music is founded, how AI can help negate copyright law, and how it “amplifies questions around sampling x 1000”.
Automaton (and on and on)...
In another quote, this time from the promotional material attached to her album, Herndon captures an AI/music idyll, explaining how she’d “like for people to have a sense of agency when approaching technology in their lives. I want them to know there’s a future that doesn’t sound like the past. It doesn’t have to be some sort of sci-fi hellscape where the machines take over. It can be beautiful.”
This is the ultimate positive example of how AI can encourage and help to create whole new forms, sounds and approaches to electronic music... but it also hints at a darker flipside. If artists and music lovers don’t have a sense of agency when approaching technology, we will get left behind and potentially duped by those who are savvy with it; for every innovative experiment and collaboration with AI, there are many more cynical commercial applications within mainstream music.
In an interesting, in-depth feature on The Verge last year, musician and DJ Mag writer Dani Deahl took a deep dive into AI-based music software such as Amper, IBM Watson Beat and Google Magenta: programs that digest mountains of data from decades of recorded music and its successes to try to formulate, or help create, hits. She neatly captured the weird sensation of watching code produce a passable version of a creative expression we understand to be uniquely human, and experimented with Amper to create her own jingles. She asked the question: “If AI is currently good enough to make jingly elevator music, how long until it can create a number one hit?”
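To give a rough flavour of what “digesting mountains of data” looks like at its most basic, here’s a deliberately tiny, hypothetical sketch: a first-order Markov chain that learns note-to-note transitions from a few hard-coded melodies and then generates a new one. This is not how Amper, Watson Beat or Magenta actually work under the hood; it’s just the simplest possible version of the learn-from-a-corpus-then-generate idea.

```python
# Toy illustration only: a first-order Markov chain over MIDI note numbers,
# "trained" on a handful of made-up melodies. Real systems such as Amper,
# IBM Watson Beat or Google Magenta use vastly bigger datasets and far more
# sophisticated models; this just shows the digest-then-generate idea.
import random
from collections import defaultdict

training_melodies = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],  # invented example phrases,
    [60, 64, 67, 72, 67, 64, 60],          # not real chart data
    [62, 65, 69, 65, 62, 60],
]

# "Digest" the corpus: record which note tends to follow which.
transitions = defaultdict(list)
for melody in training_melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def generate(start=60, length=16):
    """Generate a new melody by sampling the learned note-to-note transitions."""
    note, out = start, [start]
    for _ in range(length - 1):
        candidates = transitions.get(note) or [start]  # fall back at dead ends
        note = random.choice(candidates)
        out.append(note)
    return out

print(generate())
```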
This is where the future isn’t quite so rosy for musicians commercially, and certain jobs will perhaps be taken by robots. For instance, AI-generated music is already passable enough for adverts, backing music for videos and broadcasts, jingles, hold music and ‘lite’ music for playlists in public places. It’s only a matter of years before the software becomes sophisticated and learned enough to create pop hits, which could cost songwriters, top-line writers, players and collaborators certain opportunities and royalties. But that’s only looking at one side of the AI paradigm, without taking in the much wider picture of how technology affects all of our lifestyles. We’re moving in a much more co-creative direction, where accessibility and experimentation are encouraged. One example in the mainstream pop world is American Idol singer Taryn Southern, who collaborates with AI to create the music she sings over on her debut album ‘I Am AI’. In Taryn’s case, AI has enabled her to collaborate and to create her own material.
Another interesting concept built on access and co-creativity comes from UK firm AI Music, which applies artificial intelligence to understand mood, location, activity and time of day, then creates different versions of original songs to fit that moment: a sunset chill-out version of a Ceephax Acid Crew banger you usually love at 3am, for example. AI Music claims its algorithms can, through nuanced learning, potentially create thousands of different versions of a song, hyper-customising your experience of music, your relationship with it, and how you interact with it.
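For the code-minded reader, the context-to-version idea can be sketched in a few lines. Everything below is invented for illustration (the class names, the rules, the parameters); AI Music’s actual system learns these mappings through its algorithms rather than hard-coding them.

```python
# Hypothetical sketch of context-aware song versioning; not AI Music's API.
from dataclasses import dataclass

@dataclass
class ListeningContext:
    hour: int       # 0-23, local time
    activity: str   # e.g. "workout", "commute", "winding_down"

@dataclass
class VersionParams:
    tempo_scale: float  # multiplier applied to the original track's BPM
    energy: str         # rough arrangement density: "full" or "stripped"

def pick_version(ctx: ListeningContext) -> VersionParams:
    """Map a listener's context to remix parameters for the same source song."""
    if ctx.activity == "workout":
        return VersionParams(tempo_scale=1.05, energy="full")
    if ctx.hour >= 22 or ctx.hour < 6:
        # The late-night / sunset chill-out treatment of the 3am banger
        return VersionParams(tempo_scale=0.9, energy="stripped")
    return VersionParams(tempo_scale=1.0, energy="full")

print(pick_version(ListeningContext(hour=23, activity="winding_down")))
```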
AI Music founder and CEO Siavash Mahdavi has explained in interviews how people increasingly pick music according to their activity and mood, and how the company’s software encourages collaboration and experimentation from the user. Because the original song is still being used, the writers and players still get royalties and recognition. In a recent interview on BBC 5 Live, he compared it to pre-smartphone photography: we used to take a tiny fraction of the billions of photos now taken every day, and it’s the smartphone’s AI-assisted software that has enabled so many of us to take creative photos. Yet professional photographers are still very much in demand, and their jobs don’t look in danger of being taken at all; the website Will Robots Take My Job? gives them a mere two per cent chance of their role becoming automated in the future.
“What we’re looking at doing,” he says, “is shifting music to a similar paradigm, so we get more and more people playing with music, lowering the barriers of entry to music creation using these tools. Looking at photography, we still have the artist level photography. That’s there to stay, and it’ll be a similar thing to music. But we’ll have more people playing with and creating music.”
Co-creation, accessibility and a dialogue between fan and artist, rather than passive consumption: Stingray was right. We are lazy; we are often end consumers. But AI doesn’t necessarily have to exacerbate and amplify that. When applied, explored and experimented with effectively, by the likes of AI Music, Herndon, Baauer and any other artist willing to take the plunge with the same spirit of the unknown that the OG producers had over 30 years ago, when the seeds of this movement were sown, AI can take us out of the cyclical tropes music has become bound by and enable us to explore vastly different aesthetics, approaches, structures and mindsets, which can only be a positive and progressive thing. See you at the Top 100 party, 2030.