The music industry is preparing for war over AI-generated songs—and streaming services are the first battleground

By David Salazar

When a new K-pop artist debuts, fans can typically expect that the singing will be done in Korean, with a little bit of English mixed in. It’s a trend that has emerged as the genre has gained traction globally—but one that still leaves a lot of potential fans out of the linguistic loop.

Not so with Midnatt, the latest artist from Hybe, the Korean entertainment company behind K-pop juggernaut BTS. Midnatt’s debut single, “Masquerade,” dropped in May in six languages simultaneously—Korean, Chinese, English, Japanese, Spanish, and Vietnamese. It was a feat made possible by Hybe’s $36 million acquisition last fall of an AI audio company called Supertone. Its technology can realistically generate and even replicate voices, and producers can use it to augment the voice and pronunciation of a singer—in this case to help Midnatt sound like a native speaker.

For years, synthetic voice technology was something of a novelty, used to make voice-over clips or memes, and in Japan to create synthetic pop stars, or Vocaloids (named for the Yamaha software behind them). But in recent months, it’s begun to cross the uncanny valley, with songs like “Heart on My Sleeve,” the unsanctioned and entirely artificial collaboration between Drake and the Weeknd. Created by a TikTok user named Ghostwriter977, “Heart on My Sleeve” amassed 10 million views on TikTok and a quarter million Spotify streams this spring before it was scrubbed in response to copyright claims from Republic Records, the Universal Music Group (UMG) label that represents both Drake and the Weeknd.

“We’re going to protect our artists, we’re going to protect their brands,” says Michael Nash, executive vice president and chief digital officer at UMG. (Supertone, for its part, says it won’t replicate a voice without an artist’s permission.)

The prospect of any artist having their voice copied by AI—with or without their consent—is suddenly very real. Some artists and producers are eyeing opportunities: Singer Holly Herndon has created Holly+, a synthetic voice double, and Grimes launched Elf.Tech, software that lets people create new tracks with her vocals—in exchange for 50% of the royalties. Producer Timbaland, meanwhile, used AI to resurrect the voice of Biggie Smalls to rap over a new beat.

But many artists are understandably wary. In April, when confronted by yet another unsanctioned tune using his voice, Drake declared on Instagram: “This the final straw AI.”

AI-cloned voices are merely the start. While text tools like ChatGPT threaten knowledge workers, and DALL-E and Midjourney rattle visual artists, a growing suite of generative AI tools has the potential to upend just about every aspect of the way music is made and distributed. Dozens of music-creation software companies, often aimed at amateurs, already offer AI-powered song starters and templates, with which users can specify key, tempo, and genre, and even layer on convincing synthetic vocals. On their heels are programs like Google’s MusicLM, a yet-to-be-released text-to-music generator that can create entire compositions in response to captions.

Many of these generative AI tools are rudimentary, though likely not for long. And when more sophisticated AI-generated music starts flooding streaming and social media platforms, it will disrupt the industry’s already fragile ecosystem. Voice synthesis is the ripple. The tsunami is on its way.


The last time a new technology revolutionized music, industry leaders were largely caught unprepared. The digitization of music in the form of MP3s, followed by peer-to-peer sharing and then streaming, transformed how music was distributed and valued. Today, Spotify pays roughly half a penny per stream—a pittance (a million streams yields roughly $5,000) that then must be split among artists, labels, and all other rights-holders. This time, the industry’s heavyweights are coming together quickly to build a fence around themselves and their artists.

At South by Southwest in March, the Recording Industry Association of America, which represents more than 1,600 labels, launched the Human Artistry Campaign (HAC), which now has more than 100 partners. The effort seems poised to become a lobbying tool for the recording industry, with a set of principles that outline the importance of human involvement in music making. The campaign is focused on three areas: preserving copyright for human-made music, which gives labels and artists ammunition to wield against AI clones; pushing back on tech companies that use copyrighted music to train their AI models; and calling for transparency around content “generated solely by AI.” Though much of this effort will likely require cooperation from streaming services, none had signed on to the campaign at press time. (Neither Spotify nor Apple Music responded to Fast Company’s requests for comment.)

[Illustration: Yoshi Sodeoka]

A few weeks after HAC’s launch, UMG sent a letter to streamers, including Apple Music and Spotify, asking them to block AI companies from accessing the music on their services. “Copyright dictates that you can’t create a database for the purpose of training AI,” Nash says. “And certainly you cannot commercially exploit the output of AI trained on copyrighted material to directly compete with that artist on streaming platforms.” (Google’s MusicLM tool remains unreleased, in part, because researchers found that 1% of its output mimicked copyrighted work.)

The threat of AI music being trained to imitate and compete with specific artists is still small. But labels are already concerned about how their artists’ presence on streaming services is being diluted. The heads of UMG and Warner Music Group have said that more than 100,000 songs are uploaded to streamers every day, while a Billboard analysis found that an average of 49,000 songs hit Spotify each day in 2022. Today there are 100 million songs on Apple Music, and reportedly a similar number on Spotify.

Increasingly, this oversupply consists of AI tracks that have been engineered explicitly to game streamers’ recommendation algorithms to secure listens—and royalties. One Spotify user identified 48 instances in which snippets of the same song were uploaded with different titles, attributed to different artists. “It’s almost like the people making the generative scam tracks are programmers writing viruses that are designed to run on Spotify,” says Jaime Brooks, a musician and producer who writes a column about streamers for The New Inquiry, as well as a Substack about the industry.

Instrumental, mood-based playlists are particularly fertile ground for AI-generated music. UMG has decried this aural clutter and is trying to diminish it: The company is partnering with a generative AI startup called Endel to create “soundscapes” that incorporate elements of its artists’ music. Nash says it’s a way for UMG to gain a foothold in “a category that’s known for poor quality and gaming the streaming model.” In early May, Spotify removed thousands of songs uploaded via AI music company Boomy, among others—but not because of their provenance. The songs were taken down as part of a larger effort to combat potential stream manipulation. (Boomy worked with Spotify to resume uploading to the platform within several days, and says it’s working to address the industry-wide issue of streaming fraud.)

One thing is clear: The bottom line is eroding. According to Spotify’s 2022 annual report, the three major labels (UMG, Sony Music Entertainment, and Warner Music Group) and Merlin, the largest digital-rights agency used by independent labels, are receiving a diminishing share of streams. Their cumulative slice dropped from 87% of Spotify’s streams in 2017 to 75% in 2022. The remainder went to independent musicians and others with labels too small to be part of Merlin—and, of course, the growing ranks of artificial “artists.”

It’s not hard to imagine what comes next. “The scale at which these AI machines can anonymously and by rote pump out recording after recording can really harm the fundamental economics of how the streaming ecosystem works,” says Michael Huppe, president and CEO of SoundExchange, which helps to ensure that artists receive fair royalties from streaming services. Humans may find the artistry of AI-made music lacking, but the algorithms increasingly curating their playlists and recommendation feeds may not be quite as discriminating.

To maintain the status quo, labels need streamers to draw a line between AI-generated and human-made music, either by restricting the flow of AI songs or by paying humans more. Meng Ru Kuok, cofounder and CEO of music-making app BandLab (which includes AI tools), says labels are well positioned to lead this push. “In a time of massive volume, you need trusted entities,” he says. “That’s where the cycle is reverting.”

Already UMG is working toward new royalty models with smaller services, including Deezer and Tidal, that “realign incentives so the focus is on the artists that are driving value,” according to Nash. The intent, he says, is to differentiate “real” music from generative AI copies. But Brooks fears that whatever measures are taken to restrict AI—including the largest labels using royalty-rate distinctions to ensure their artists are compensated—will put independent musicians in an even more precarious position. Such moves could create “a permanent advantage for those [larger] companies,” she says.

Within this revamped model, emerging and experimental artists may find even less opportunity on the major streamers—and see even more reason to put their creative energy into alternative platforms like Bandcamp, which allows them to sell music to and engage directly with fans, and TikTok, an increasingly powerful discovery tool. That’s something Ghostwriter977 seemed to understand in choosing TikTok as the springboard for “Heart on My Sleeve.”

If the labels’ work to keep AI at bay simply becomes an effort to build a raft for established artists, the industry’s largest players could find themselves safe—but still out at sea.
