
How big of a threat is AI to musicians and the music industry?

Artificial intelligence in the creative industries is a significant concern, especially in music. But should AI worry musicians, and what can be done to protect them?

by Sonia Chien of Chartmetric Blog

With the market for artificial intelligence expected to reach $184 billion this year, there has been increasing public uncertainty about the technology’s potential effects on our lives. The impact is highly visible in the creative industries, with the music industry being among the most vulnerable. Meanwhile, regulations are just beginning to catch up to the risks that artists are facing. 

In May 2024, British musician FKA twigs delivered testimony to the US Senate in support of the draft NO FAKES Act, which would prevent the unauthorized use of names, images, and likenesses of public figures using AI technologies. Alongside her testimony, she also announced that later this year she would be using her own deepfake, “AI Twigs,” in order to “extend [her] reach and handle [her] social media interactions.”

FKA twigs’ reclamation of her own deepfake, besides being a formidable power move, poses some interesting questions. To what extent should artists be accepting — or even embracing — AI, and to what degree does AI pose a real danger to the well-being of the music industry that should be resisted?

As music historian Ted Gioia puts it, the fact that AI development is so shrouded in mystery is not the best omen. “This is probably the biggest warning sign for me. If AI is so wonderful, why do they have to keep it a secret?” 

Gioia goes on to describe how, as AI music continues to flood streaming platforms, we are seeing an oversaturation of music that sounds eerily similar. He gives an example from Spotify user adamfaze, who has compiled a playlist of AI-generated music called “these are all the same song,” featuring 49 tracks that all sound basically identical. 

Judging by an average track popularity of 0/100, these are by no means hit songs. Many were released on the same day as each other, and the names sound almost comically computer-generated — just look at “Blettid” by Moditarians, “Aubad” by Dergraf, or “Bumble Mistytwill” by Parkley Newberry. Nine of the songs are no longer streamable, and the album covers for nearly all of the playlist’s tracks appear to be generic stock images of either nature or people. 

While some forms of AI do have practical applications for musicians, such as increased efficiency in music production or as stand-in promotion (such as FKA twigs’ deepfake), it is also true that use cases such as passive listening to playlisted AI music are taking away airtime and revenue from real artists. As Gioia notes: “AI is the hot thing in music but not because it’s great music. [No one is saying] I love this AI stuff. It’s being used to save costs in a deceptive way.”

Does AI pose a threat to artists?

On the topic of what we can expect in the music AI industry, Chartmetric spoke with music culture researcher, professor, and author Eric Drott. In his study “Copyright, Compensation, and Commons in the Music AI Industry,” he discusses the two prevailing firm models increasingly seen in the music AI business. 

One is a consumer-facing model, describing services like Amper, AIVA, Endel, and BandLab, which can do things like make mood-based playlists or combine a mix of musical elements to create a song on command. Some industry experts such as YouTuber Vaughn George predict that technologies like the latter will be quick to spread in popularity in the next five years — think “Hey (platform), make a song sung by David Bowie and Aretha Franklin, produced by Nile Rodgers in the style of 1930s jazz swing.”

The second is a business-facing model: companies that market royalty-free library music for use in games, ads, and other online content. Since library music is generic by nature, generative AI is often used in this context as well. 

To describe the current tone towards AI in the music industry, Eric mentions his visit this year to South by Southwest, an experience which gave him a “weird sense that [music industry people] have been through the five stages of grief [with AI], and have gotten to the resignation portion of it.” He acknowledges that to some degree, this is not a misguided sentiment.

“In a certain way these things are going to be imposed upon us, and by that I mean the music industry, artists and music listeners are going to have to contend with it.” 

On the other hand, he also stresses that damage to the music industry from AI is by no means necessary or inevitable, and does not have to be something that we “fatalistically accept.” It is entirely possible, he says — while not making any predictions — that it is a bubble that bursts in the coming years. 

“If you look at the history of AI music, there were a number of times when AI seemed to be getting off the ground in the ’50s and ‘60s, but in the ‘70s a lot of people looked at the results and said, ‘This isn’t living up to the hype’.”

In the ’80s and ‘90s this happened again, when major investors in the arts, government, military, and universities once more pulled the plug on funding. This points to the possibility that AI could just be trending again until investors eventually lose confidence. 

In the meantime, the hype is still going strong, with platforms like Spotify pouring resources into projects such as the Creator Technology Research Lab, whose AI specialist director François Pachet was poached from Sony Labs in 2017. Pachet was also a key player behind Hello World, billed as the first full album composed with AI, released in 2018. The most popular song from the project, “Magic Man,” has over 6.2 million Spotify streams. 

Why is the music industry a perfect target for AI? 

One thing that AI is very good at is compiling information from a large body of content and creating predictions from said content. Conversely, one thing that it is very bad at — and nowhere near being good at — is evaluation tasks, or determining whether something is true or false. For example, there is no way for AI to identify satire, which has led to AI-generated text responses suggesting that people eat rocks as part of a healthy diet. 
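To make that asymmetry concrete, here is a toy sketch (not the code of any real product; the `train` and `generate` functions and the three-melody corpus are invented for illustration). A first-order Markov model “learns” which note tends to follow which in a small corpus and then generates a plausible new sequence. It predicts purely from observed patterns; nothing in it can judge whether its output is good, original, or “true.”

```python
import random
from collections import defaultdict

def train(corpus):
    """Count note-to-note transitions across all melodies in the corpus."""
    transitions = defaultdict(list)
    for melody in corpus:
        for a, b in zip(melody, melody[1:]):
            transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Walk the transition table to produce a statistically plausible sequence."""
    rng = random.Random(seed)
    note, out = start, [start]
    for _ in range(length - 1):
        options = transitions.get(note)
        if not options:  # dead end: no observed continuation
            break
        note = rng.choice(options)
        out.append(note)
    return out

# Tiny invented "corpus" of melodies as note names
corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "E", "G", "C"],
    ["E", "G", "E", "C"],
]
model = train(corpus)
print(generate(model, "C", 8))
```

Scaled up from three toy melodies to millions of recordings, this is the basic bet of generative music AI: prediction from accumulated patterns, with no built-in mechanism for evaluation.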

“Truth is not something that’s easily verifiable. It requires judgment, reflection, experience, and all of these intangibles that they are nowhere near modeling in these AI systems,” says Eric. However, the same problem is not applicable to music: “We don’t play music on the basis of whether it’s true or not. [AI] works really well with music because there is no ‘true’ or ‘false’ valuation.”

Another reason that AI has developed so quickly in music is that ever since the advent of the MP3, music has become a highly shareable medium. In his study, Eric describes the existence of a musical creative commons, or the result of the accumulated works of musicians past and present. 

The musical commons faces a major weakness in that it cannot be protected by the current copyright framework, which is largely designed to protect the rights of individuals. This has created an “in” for AI companies to take advantage and use the knowledge of the commons to develop their AI models.

Aside from the more obvious generative uses of AI, it also has big applications in trend prediction, such as determining artists who are likely on track to becoming stars — a practice that traditionally has been a very inexact science in the music industry. 

Now, with software like Musiio, which was recently acquired by SoundCloud, increasingly reliable predictions can be made about which music is most likely to attain hit status. Eric argues that non-hits are just as important in identifying break-out artists like Billie Eilish, who got her start on SoundCloud: “[Billie’s] music only stands out as exceptional if you have this entire body of music as the norm against which it defines itself as an exception. Should those artists be penalized if their music is generating data? It’s actually going to end up marginalizing them, in a way.”


Other applications of AI include South Korean entertainment company HYBE using the AI technology Supertone to create a likeness of the late folk-rock singer Kim Kwang-seok, as well as the company’s 2023 announcement of its move to Weverse DM, a platform that allows artists to communicate directly with fans. It is not out of the question that these systems are all AI-run, or alternatively run with a substantial amount of hidden human labor by impersonators. 

Where is the music industry most at risk?

There have been some instances in recent years of big artists being copied by AI, such as the AI track “Heart on My Sleeve,” which imitated the style of Drake and The Weeknd.

One of the best-known above-board examples is The Beatles’ 2023 song “Now And Then.” The single, which used restorative AI software to isolate the late John Lennon’s voice, has nearly 70 million streams. 

However, it is not really the big stars that are most at risk of losses due to AI development. Most threatened are behind-the-scenes production jobs, or jobs in the “generic music” field. While this may not be the sexiest department in the industry, it does embody a large area of potential revenue for artists who are just starting out and can generate some part-time income through producing backing tracks, loops, or beats.

Eric notes that the differentiation between “generic” and “creative” music in this sense is a dangerous one, especially when it comes to the music industry’s well-being. 

“The argument I see some people make is that you don’t have to worry if you’re “truly creative.” I think that kind of distinction is intensely problematic because [this is the area] where you develop your craft. So if we’re going to take that away from people [and their means of] earning money on the side, you’re eating your seed corn, so to speak.”

At the same time, the US in particular is seeing more legislation aiming to protect the interests of artists. Federal regulations such as the NO FAKES Act, the No AI FRAUD Act, and the Music Modernization Act have attempted to give artists more control over the use of their voice and likeness, target AI use of artist likenesses, and create means for artists to collect royalty payments respectively, with mixed results. The strongest legislation has primarily occurred on a state-by-state basis, with Tennessee becoming the first state to protect artists from AI impersonation this March. 

What should artists look out for, legally speaking?


A glaring issue under US musical copyright law remains that while there are protections for the actual content of an artist’s musical performances and compositions, their name, image, and likeness (or “NIL”) remain largely unprotected. This poses an issue for artists in terms of their control over potential revenue streams, reputation management, rights to intellectual property, and preventing violations of privacy. For this reason, Eric notes, artists should be “very, very careful” about contractual language that signs over NIL rights. 

One of the downsides to the codification of NIL laws on a federal level is that it introduces a notion of transferability akin to copyright, which would also make it easier for exploitative record labels to work this into their contracts. If an artist has passed away, for example, labels could then conceivably use AI to legally generate new content from their discography post-mortem, even if it was against their wishes. 

Also legally fuzzy is how much recourse artists have when it comes to preventing their music from being used as training material for AI. This is in part due to the closed-door nature of music AI. While there have been some cases of AI companies using in-house composers to create the basis for their content, as was once the case for the generative music app Endel, the degree to which AI companies are dipping into the musical commons goes largely unreported, likely because the practice is more extensive than these companies would care to admit. 

More publicly, there is an increasing number of partnerships between AI companies and big record labels, like the one between Endel and Warner Music. In 2023, the two signed a deal to collaborate on 50 AI-generated wellness-themed albums. One result was a series of remixes of Roberta Flack’s GRAMMY Award-winning cover of “Killing Me Softly With His Song” for its 50th anniversary.

As with the reboot of “Killing Me Softly,” taking an old recording and finding new ways to monetize it is likely to become an increasingly common practice.

While artists with large platforms such as Roberta Flack and Grimes have been supportive of partnerships involving AI, it’s the lesser-known artists entering into exploitative contracts who have the most to lose without legal protection. While an artist with a large fan base would at least have some informal protection through bad PR if they encountered a contract issue, smaller artists who don’t read the fine print may face a career-ending problem or a betrayal of their principles.

What’s the solution?

As big as AI is in the modern day, one thing it can’t replace is the relationship between an artist and their fans.

“We listen to artists because we like their music, but also because there’s a relationship between the artists and the music,” says Eric. “A Taylor Swift song sung by Taylor Swift has a certain kind of resonance for her fanbase. So even if [AI] is able to generate something that’s just as good musically, it wouldn’t have that human relationship that’s built into it.”

Another positive is that there is legal precedent for supporting artists. In a 1942 case between the American Federation of Musicians and the prominent radio and record companies of the time, the AFM won the right to a public trust that paid musicians to play free concerts across North America. In addition to providing paid work to artists, the ruling also channeled value back into the musical commons. 

It is time to bring back the kinds of 20th-century legal decisions that supported artists, believes Eric. “This was a very broad practice in the past. I think we lost sight of that. In the US in particular, there is a sense that these entities are too big or too out of control.”

He suggests that governments start taxing AI companies in order to restore the value that the musical commons has lost and to make up for the damage these companies have caused to the economy and the environment. With the funds, as in the 1942 case founding the Music Performance Trust Fund (which still exists, by the way), artists could receive benefits such as health care, insurance, scholarships, and resources for their careers. 

While AI may be a strong force in modern industry, there is still hope for the future of the music industry. As long as listeners are interested in creativity and supporting real artists, and artists are interested in creating music that pushes creative boundaries, there will be a place for continued innovation in music. 


Visualizations by Nicki Camberg and cover image by Crasianne Tirado. Data as of July 15, 2024.
