In early 2026, Bandcamp released a statement saying it had banned AI-generated music from being streamed on its platform, and the world went berserk. There were heated debates on whether this was necessary or just an overreaction. As those discussions spread, one question kept recurring: who should own such music, the AI model or the original artist?
Music has been with us for as long as humans have walked this earth. Long before streaming playlists and algorithm-recommended hits, music was the sound of community. We had people beating drums, singing together, and passing rhythms from one generation to the next.
With every technological leap, music evolved — from oral traditions to vinyl, tapes, CDs, digital files, and now streaming in every pocket. Throughout history, music has changed, but not its purpose. It has always been about storytelling, emotion, and connection. And for me, music has always been more than that; it’s a way to connect directly with artists and their souls. You can feel what they’re feeling, hear their raw emotions, and experience their world through sound.
And then came AI. We watched artificial intelligence creep into many creative spaces, and inevitably, it gained a foothold in music. If we had known ten years ago that a company or person could feed an entire catalogue of music into a machine, generate new music from it, and that audiences might even enjoy it more than the original, we would have called it absurd. Yet here we are, watching it happen in real time and trying to catch up with what it all means.
Let’s back up a bit and clarify what we mean by AI-generated music. The frenzy started around late 2023. We see artists' original work being fed into programmes and repurposed to mimic a genre, an artist’s style, or a specific mood. Thanks to rapid adoption, these AI-generated tracks can end up in playlists, on streaming charts, and in front of listeners, more often than not without a label telling anyone who, or what, actually made them.
Take the case of “Papaoutai (Afro Soul AI version)”, a reimagining of Stromae’s classic song that began circulating widely in late 2025 and early 2026. The version garnered millions of streams globally. Listeners shared it across platforms, some even saying it sounded better than the original. But it raised an uncomfortable question: if an algorithm can produce something emotionally resonant, does that diminish the value of the original creator? The AI version’s chart success and widespread sharing show that machine-generated tracks now compete for attention alongside human creations.
Similarly, “Intentions” by Nigerian artist Fave was reinterpreted by AI and quickly gained traction on TikTok, with altered harmonies and vocals that people engaged with as if it were a new song. Fave responded by recording her own official version that leaned into the viral interest, a smart move to reclaim the narrative, but the episode also highlights the economic and cultural tension beneath the surface.
Many artists do not earn enough from streams to sustain themselves. In 2025, platforms like Spotify reported paying more than $11 billion to rights holders, the largest annual payout in music history. That sounds huge, but these numbers mask the reality for everyday creators.
On average, an artist earns just $0.0035 per stream on major streaming services. After deducting label fees, producer splits, and distribution cuts, what’s left can be minuscule. Even in Nigeria, where Afrobeats’ global reach is booming, total streaming payouts for artists reached ₦58 billion, double the previous year’s figure. But that sum is shared among all Nigerian artists, labels, and rights holders. For individual creators, it often doesn’t translate to a livable income.
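To make that per-stream figure concrete, here is a rough back-of-the-envelope sketch in Python. The $0.0035 rate is the average cited above; the monthly income target and the share an artist keeps after label, producer, and distribution cuts are illustrative assumptions I have chosen for the arithmetic, not reported figures.

```python
# Back-of-the-envelope: how many streams does a "livable" month require?
# The per-stream rate is the average cited in the paragraph above; the
# income target and the artist's post-cut share are assumptions.

PER_STREAM_PAYOUT = 0.0035   # USD per stream (average cited above)
MONTHLY_TARGET = 2_000       # hypothetical livable monthly income, USD
ARTIST_SHARE = 0.25          # assume the artist keeps ~25% after all cuts

def streams_needed(target: float, share: float, rate: float) -> int:
    """Streams per month needed for the artist's cut to reach the target."""
    gross_needed = target / share      # payout required before cuts
    return round(gross_needed / rate)  # streams that would generate it

if __name__ == "__main__":
    n = streams_needed(MONTHLY_TARGET, ARTIST_SHARE, PER_STREAM_PAYOUT)
    print(f"Roughly {n:,} streams per month")  # ~2.3 million streams
```

Under those assumptions, a single independent artist would need over two million streams every month just to clear a modest income, which is why headline payout totals say so little about individual livelihoods.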
Now imagine seeing AI-generated versions of your songs spreading faster, engaging more listeners, and monetising — all without your involvement. That would hurt a ton.
So, what happens legally? The question is not really whether AI can be creative or whether it produces better music than humans. The real issue is how society values machine-generated patterns versus human emotion and storytelling. Right now, charts, streams, playlists, and recommendations do not distinguish between human and AI output; they only measure engagement.
In Nigeria, there are copyright protections; the Copyright Act of 2022 protects musical works resulting from original creative effort. But this law predates generative AI and assumes a human author. It doesn’t address cases where the core creative output comes from automated systems trained on millions of existing songs. Nor does it clarify whether AI-generated remixes of human works are derivative works requiring consent.
This gap leaves creators exposed. Platforms and AI developers can operate with significant leverage, while individual artists lack clear tools to defend their economic interests. Enforcement is weak, and legal action is often expensive and inaccessible, leaving many creatives with the law existing more as theory than protection.
This is not to say we are anti-technology; this is more about economics, fairness, and clarity. Artists have always borrowed from one another. There’s been sampling, influence, paying homage to the OGs; these are all part of artistic evolution. But in those cases, there was negotiation, licensing, and compensation. A human composer cleared a sample, got credit, and got paid. AI remixing upends that work at scale, and doing so without those mechanisms in place is a structural problem.
You may argue that AI is simply repurposing existing work, and that it doesn’t “steal” in a literal sense. Some people even believe that it's just another form of creativity at some level. And that’s true — humans themselves borrow ideas all the time. But humans have agency and rights over their work. AI, as it exists today, can ingest a catalogue of millions of songs and remix them without asking permission, paying royalties, or acknowledging human authorship. That is not remix culture, it’s extraction at scale with very little return for the people who created the input data.

While doing this research, I saw a lot of people question whether AI will stifle human creativity. My answer is a big NO. People will not stop writing songs, singing, performing, or connecting emotionally to music. Human creativity predates technology and will outlive this particular wave of tools.
The bigger question is whether creators can still make a living when machines can produce sound faster, cheaper, and without the emotional or economic investment humans carry. AI devalues creative work because it separates value from effort and story. When a machine can generate eight songs in the time it takes a human to write one, the market — which favours novelty and quantity — will increasingly reward volume and algorithmic predictability over messy, human processes.
But what if we just license AI training the way we license samples? That seems like a sensible suggestion, yeah? Creators get paid for the use of their work in AI models. However, licensing is only as good as its enforcement. In jurisdictions with weak enforcement or unclear legal frameworks, licensing becomes almost meaningless, like having a rulebook no one follows. And in places where artists already struggle to collect what they are owed, adding another layer of complexity without strong enforcement simply maintains the status quo.
So does this mean we are “done” for as creators? What exactly is the next step to take?
To be fair, the honest reality is that AI is not leaving; if anything, it will keep evolving. And no, creativity will not disappear. The key is building systems that protect human artists. Otherwise, the economic and cultural value of creativity risks being harvested by machines and the platforms that control them, rather than by the humans whose lived experiences made the art meaningful.
We can demand clearer labels on AI content, better royalty distribution systems, and legal frameworks that recognise both human effort and machine aid. We can treat AI as a tool, not a replacement, and insist that platforms transparently disclose when a track was generated with machine assistance. When listeners know what they are hearing, they make more informed choices. Likewise, when artists are compensated for how their work is used, they can sustain careers.
At the end of the day, music is a human language of feeling. Machines can simulate it and mimic its patterns. But nothing, and I mean absolutely no artificial intelligence, can truly live, feel, or own it in a human sense. Human experiences of love, heartbreak, joy, and anger are what make music meaningful. AI can never replace that. So let’s not panic, and let’s not romanticise the past either. The better path is to build an ecosystem where technology serves artists rather than supplanting them. Creativity is not going away, but the way we value it might, unless we choose differently.
What are your thoughts?