Music production is undergoing a rapid transformation driven by artificial intelligence. These technologies, ranging from algorithms that compose instrumental music to "deepfake" AI voices mimicking well-known singers, present novel copyright challenges. Producers and musicians are asking: who owns a song created by AI? And is it lawful to train an AI on existing music, or to teach it to mimic an artist's voice? This article examines what recent developments in AI and music copyright mean for creators.
Copyright Ownership of AI-Composed Music
A crucial question is whether music produced primarily by AI can be protected by copyright at all, and if so, by whom. Different nations are adopting different approaches. US copyright law requires a human author, so compositions that are wholly AI-generated (with little or no human creative input) may not qualify for copyright at all; the U.S. Copyright Office has recently refused to register works in which the AI performed the bulk of the creative labour. By contrast, UK law grants copyright in a computer-generated work to the person who made the arrangements necessary for its creation, with a 50-year term of protection. This approach, unusual internationally, gives AI-generated music some protection even in the absence of a conventional human author. Musicians who use AI tools should therefore make sure they contribute original creative input, such as editing the AI's output, writing lyrics, or arranging the song, so they can claim authorship of the resulting work. That way, ownership is not left in limbo and the music clearly qualifies for copyright protection.
Using Existing Music to Train AI: Legal Issues
Another contentious topic is the use of vast music collections to train AI systems. Training an AI model involves copying large numbers of recordings, which may infringe copyright if done without consent. Technology companies argue that using songs to train AI is a transformative fair use, while artists and labels counter that unlicensed training misappropriates their creative work, leaving the practice's legal standing uncertain. In late 2024 the UK government went so far as to propose allowing AI developers to mine any copyrighted music for training unless the rightsholder opted out. The proposal caused an outcry: over a thousand musicians, including Paul McCartney, Dua Lipa, and Ed Sheeran, released a "silent" protest album whose track titles spelled out their opposition to "legalised music theft" for AI. In response to the criticism, officials hinted in early 2025 that the opt-out approach might be reconsidered. No final rule has yet been made, but the controversy underlines how important it is for musicians to watch this area: the rules governing AI training on music will determine whether your songs can be used to feed AI models, and whether you are compensated for, or have any control over, the process.
AI Music Imitations and Voice Clones
In 2023 an unidentified producer released "Heart on My Sleeve," an AI-generated song that mimicked the voices of Drake and The Weeknd. The track went viral online before being swiftly taken down once the record label stepped in. The incident highlights another frontier problem: AI-generated vocals that imitate real artists. A person's voice is not protected by copyright, but using AI to mimic a singer's distinctive vocals can still land you in legal hot water. Labels may argue that such clones are unlawful derivative performances, and where an AI song causes confusion, artists may invoke their trademark or publicity rights. There is not yet a well-established precedent for AI voice imitations in music. Some artists have embraced the technology: Grimes, for instance, announced that fans may use her voice to create AI songs provided she receives a share of the royalties. Treating the AI as a tool that the artist licenses and controls is one way forward, but the legal options for artists who withhold consent are still developing. The safest assumption is that releasing an AI-generated song imitating another artist without that artist's consent may result in takedowns or legal action under existing provisions, even if the legal theory is complex.
What Performers Need to Do
Artists and composers should stay informed and proactive as AI becomes more integrated into the music-making process. When experimenting with AI composition tools, keep a human element in the creative process so that your work is genuinely yours and eligible for protection. If you train an AI on music, use only material that is in the public domain or that you hold the rights to, in order to avoid infringement. And if an AI copies your distinctive sound or style without consent, you likely have grounds to object or to seek its removal. Courts and lawmakers are still catching up with these changes, and the music industry is pushing for solutions such as new legislation or voluntary licences for AI use, to ensure that creators are not left behind. AI presents exciting opportunities, but musicians must navigate the legal issues with care. Keep up with the changing regulations and you can use AI as a tool while protecting your rights.
Michael Coyle is a solicitor advocate with a master's degree in copyright law. He formerly taught at Guildford School of Music (ACM) and Solent University. He can be contacted at Michael.Coyle@lawdit.co.uk.