In the summer of 1999, millions of people got their hands on a magic trick. You could get any song, at any time, for absolutely nothing. The trick was called Napster, and it was about to shake the then $20 billion music industry to its very core. Napster wasn’t just a killer app of the internet; it was the grenade that cracked the music industry’s illusion of control. Now, twenty-some years later, we’re standing at a similar cliff’s edge—but this time it’s the publishing industry, and the grenade is Generative AI. The Napster of our time? Tools like ChatGPT, the poster child of generative AI. History loves repeating itself, and the publishers, writers, and creators of the world should pay close attention—because the lesson here is pretty simple: you either evolve, or you die.
Napster: The Original Rebel with a Cause
If you weren’t around during the Napster years, let me paint you a picture. You’re sitting in your bedroom, your teenage fingers drumming a keyboard, while you search for Metallica’s latest album. You hit “download,” and a minute later—boom—you’re listening to a song that you didn’t pay a dime for. Your parents paid for the internet connection, your friends burned you a CD to swap tracks, and nobody ever really thought about paying the artist. Why? Because it was so easy, it was almost unreal. Napster made this happen. It was the first P2P (peer-to-peer) technology that went mainstream, allowing millions of people to share their music collections like it was some secret party everyone was suddenly invited to.
At its peak, Napster had around 80 million users—an absolutely staggering number at the time. Sean Parker, who would later become famous for helping Mark Zuckerberg take Facebook to the next level, got involved with Napster’s young founder, Shawn Fanning, and soon, Napster wasn’t just a phenomenon; it was a revolution. A revolution the music industry didn’t particularly appreciate.
Then came the lawsuits. The big one, of course, was Metallica vs. Napster, Inc. Imagine this: Lars Ulrich, Metallica’s drummer, walking into a courtroom with reams of paper listing users who illegally downloaded their songs—angry, fired up, and ready to go to war. And why wouldn’t they be? Napster was freely distributing their hard work, and they weren’t getting a cent. Napster eventually folded under the weight of court orders and injunctions. But like any good rebellion, the idea didn’t die—Kazaa, LimeWire, and other imitators popped up, each taking a turn at dancing around copyright laws until they, too, faced similar ends.
But let’s be real here—Napster wasn’t just about music. It was about a shift. It was about proving that people wanted music instantly, conveniently, and—let’s be honest—for free. The music industry had to either learn from it or watch as its empire crumbled.
In ’99, I was still in Zimbabwe, and we were all fixated on Y2K and the world ending. But by the time I came to the UK in 2002, I was using Kazaa and heard tales about Napster. It was fascinating—almost like discovering a hidden treasure chest of music that everyone somehow knew about. My friends spoke of Napster like it was the stuff of legend, the kind of thing that had to be experienced to be believed.
How Spotify and iTunes Took Napster’s Dream and Made It Legal
The difference between Napster and services like Spotify or iTunes is pretty simple: the latter came with a permission slip. Apple and Spotify looked at Napster, recognized the brilliance behind the chaos, and figured out how to bring the record labels on board. The iTunes Music Store launched in 2003, after Steve Jobs convinced the industry to sell individual songs for 99 cents each. Sure, it wasn’t free, but it was easy. It was fast. And importantly, it was legal.
Spotify went even further: offer almost every song ever recorded, let people stream it without ever owning it, and pay the labels a share of the subscription revenue. Napster’s rebellion was tamed, and a new compromise was struck: people would pay for music, as long as paying was easier than pirating. The record labels got paid, and we got convenience. Everyone won—well, except maybe for artists complaining about fractions of a cent per stream, but that’s a topic for another day.
Welcome to Generative AI’s Napster Moment
Fast forward to today. Instead of music, we’re talking about text—millions and millions of words published on the internet by journalists, bloggers, authors, researchers, and creatives of every stripe. Enter Generative AI, with ChatGPT leading the charge. Now, if Napster was the killer app for music, ChatGPT is the killer app for content creation. Suddenly, you can generate an article, a poem, or even a script—for free—with just a simple prompt.
So, what is Generative AI? Let’s break it down like we’re explaining it to a class of eighth-graders. AI, or artificial intelligence, is just a fancy term for computers doing things that we normally think require human intelligence—like recognizing a face or playing chess. Generative AI is a type of AI that doesn’t just analyze or recognize—it creates. It learns from all sorts of examples (like billions of web pages), figures out patterns, and then generates something new based on those patterns. It’s like if your computer got really good at reading every book ever written, and then started writing its own stories by mimicking the style, tone, and ideas of the originals.
ChatGPT, for example, is powered by a Large Language Model (LLM), which means it’s been trained on an absurd amount of text—like, basically half the internet. It takes in all that information, processes it, and when you ask it something, it spits out what it thinks is the best response, based on everything it’s learned. It doesn’t actually “know” things the way you or I do—it just predicts the next word in a sequence based on the context of everything it’s seen before.
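That “predict the next word” idea can be made concrete with a toy example. The sketch below builds a bigram model from a ten-word corpus: it counts which word tends to follow which, then guesses the most frequent successor. Real LLMs use neural networks trained on billions of documents rather than frequency tables, and the corpus and function names here are invented for illustration, but the prediction-from-context core is the same.

```python
from collections import defaultdict, Counter

# A tiny, made-up corpus standing in for "half the internet".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- it follows "the" twice, more than any other word
```

An LLM does the same thing at vastly greater scale, scoring every possible next token instead of just looking up the most frequent one, which is why it can produce fluent text about topics no single source spelled out.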
The Publishing Industry’s Napster Nightmare
Here’s the rub. Just like Napster used Metallica’s music without paying them, Generative AI tools like ChatGPT have used billions of written words without paying the people who wrote them. Articles, blog posts, even obscure fanfiction—all scraped up, analyzed, and regurgitated into something “new.” And right now, nobody’s getting paid for it. The publishers—the websites and content creators whose material has been used to train these AIs—are getting zilch.
Sure, there’s an argument to be made for “fair use.” After all, isn’t this just like quoting a book in a research paper? Not quite. The difference here is in the scale and the intent. Generative AI isn’t quoting a few lines for commentary or analysis—it’s swallowing entire libraries of content and then offering that knowledge back up, often bypassing the original creators entirely. And the argument that it’s “fair use” starts to feel shaky when you think about the sheer volume of material involved—and the fact that this material is then used to generate profit.
The industry has already seen shades of this fight. Google, for example, reportedly pays Reddit around $60 million per year for exclusive access to its content to help train AI models like Gemini. The deal gives Google real-time access to Reddit’s data, and in return, Google helps Reddit use AI in its own search tools. Reddit’s forums are filled with human conversations—raw, unfiltered, and deeply insightful. Google realized that if it wanted to use that data, it’d better pay up. But here’s the kicker: this sort of deal only happens with big players like Reddit. What about the smaller publishers, the personal blogs, the independent writers? Who’s standing up for them?
Let’s also look at some of the landmark cases that have shaped the copyright landscape for Generative AI. One example is Authors Guild v. Google, which effectively ended in 2016 when the U.S. Supreme Court declined to review a lower-court ruling in Google’s favor. Google was found to be within its rights to scan books and provide snippets in search results without explicit permission from the authors—with “transformative use” as a key argument. While Google won that case, it set an uneasy precedent for using creative content without compensation. Another relevant case is Andy Warhol Foundation v. Goldsmith, where the Supreme Court ruled in 2023 in favor of the photographer Lynn Goldsmith, holding that Warhol’s adaptation of her photograph wasn’t transformative enough to qualify as fair use when licensed commercially. These cases illustrate just how precarious and nuanced the concept of “fair use” is—especially when massive datasets and automated content generation are involved.
Generative AI’s Defense: Nothing New Under the Sun
Generative AI companies will argue that there’s nothing new under the sun. They will claim they only trained their models on data that was already freely available on the internet. They will also argue that their models transform the content—just like art often builds upon existing influences. But is that really enough? The transformation is often minimal, and the original creators still get nothing.
Interestingly, Generative AI is also being used in other creative spaces like music. For example, I used Suno, a music-generating AI, to make a soundtrack for my book called ‘Clickonomics.’ I cheekily named the AI pop star behind the soundtrack T(ai)ylor Swift. It was fun, but it also made me think about how much of this content is built upon the existing work of real creators—musicians who spend years perfecting their craft, only for an AI to mimic it in minutes.
There’s also an existential question for Generative AI companies themselves. Once LLMs have trained on all the data that’s available, what’s next? At some stage, they will run out of new data to train on. Will they have to create synthetic data for further training? And if so, will that synthetic data be of any real use, or will it just be an echo chamber of existing content? For instance, GPT-3 was highly transformative and a real leap forward, but GPT-4 feels more like an incremental improvement. What then? How do these AI models continue to evolve when they hit the data ceiling?
Generative AI companies are burning cash at an astonishing rate to stay ahead. So maybe publishers should just be patient—perhaps Generative AI and its poster child, ChatGPT, will burn to the ground under their unsustainable costs, leaving a space where genuine creators can thrive once again.
What’s the Solution? Let’s Not Kill the Revolution—Let’s Make It Fair
So, what do we do? How do we avoid crushing innovation while also making sure the people who create the raw material—the writers, bloggers, and journalists—actually get compensated?
Maybe we take a page out of Spotify’s book. Imagine if there was a system where every time a Generative AI used a piece of content, a fraction of a penny went back to the original creator. Imagine if publishers could opt-in (or out) of having their material used to train these models, and those who opted in would get royalties whenever their content contributed to a response. Sure, it’s not perfect, and sure, it’d require a massive restructuring of how these AIs are built—but it’s a start. And it’s better than the nothing that creators are currently getting.
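The mechanics of such a split are simple enough to sketch. The toy function below divides a per-response fee pool among opted-in creators in proportion to attribution weights; the creator names, weights, and fee are all hypothetical, and working out credible attribution for a real AI response is the genuinely hard, unsolved part.

```python
def split_royalty(fee_pool_cents, contributions):
    """Divide a per-response fee pool among creators in proportion to
    their (hypothetical) attribution weights."""
    total_weight = sum(contributions.values())
    return {creator: fee_pool_cents * weight / total_weight
            for creator, weight in contributions.items()}

# One AI answer drew on three opted-in sources; the fee pool is 0.1 cents.
payouts = split_royalty(0.1, {"blog_a": 3, "newspaper_b": 6, "forum_c": 1})
# newspaper_b contributed the most, so it receives the largest share.
```

The arithmetic is the easy bit; the open question is how to measure, at scale, how much any given source actually contributed to a generated response.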
The Napster moment of Generative AI is here. And just like the music industry had to change, adapt, and find a way to thrive in the digital age, the publishing industry needs to do the same. We can either fight it tooth and nail, watching as countless Kazaa-style AI clones pop up—or we can figure out a way to make it work for everyone. Because if we’ve learned anything from Napster, it’s that you can’t put the genie back in the bottle—but you can find a way to share the magic.