The Enclosure of the Digital Commons: AI, Creativity and Who Gets to Own the Future
- Stéphan Willemse
- Mar 10
- 5 min read
Updated: Jun 13
AI Data and the New Enclosure
For centuries, land wasn’t something you owned. It was something you used.
In much of Britain, common land was shared—grazed, foraged, cultivated by local communities. It supported both survival and social life. A public resource, not a private asset.
Then, that changed.
Fences were built. Laws were passed. The open became closed, and what had once been accessible to many became the property of a few. Enclosure reshaped rural economies, displaced generations, and concentrated control in new hands.
Now, we’re seeing something similar. Only this time, it’s not land that’s being enclosed—it’s creativity.
And the fences aren’t physical. They’re digital, algorithmic, and built on scraped data and proprietary AI models.
A Quiet Shift in Ownership
The early internet had the character of a commons.
It was messy, experimental and often deeply collaborative. People shared ideas, posted writing, uploaded artwork, remixed culture. Not always for profit—often just because they could. The joy was in contributing, not owning.
Then, large-scale AI models arrived. And with them, a familiar logic: if it’s online, it’s available. Every essay, painting, melody, and piece of code that had been shared—whether publicly or semi-privately—was pulled into training datasets. Not with consent. Not with licensing. But under the assumption that anything published online was fair game.
These datasets power the AI systems that are now ubiquitous—GPT-4, Midjourney, Gemini, Claude and others. And while these systems generate impressive outputs, they do so by learning from—and in some ways replicating—the cultural work of millions.
The result is a slow but far-reaching shift. Creative labor, once distributed and self-directed, is being absorbed into a new layer of infrastructure—one controlled by a handful of companies.
It’s not always visible. But it is familiar.
What We’re Seeing Is Enclosure

Historically, enclosure wasn’t just about land—it was about control. It redefined who had access, who made decisions, and who benefited.
Today’s digital enclosure does something similar. AI companies have taken the open nature of the internet and used it to build closed systems—models trained on public content, offered back to the public only through paid APIs or subscriptions.
The economic logic is clear: free input, monetised output.
When creators object—when artists, writers, and musicians ask not to be included in these systems—the response is often framed as inevitable. This is how progress works, we’re told. Resistance, even principled resistance, is painted as anti-innovation.
But this framing misses what’s actually at stake.
The Cost of Extraction

The consequences are not theoretical.
Writers are watching AI-generated books appear under invented names on platforms like Amazon. Illustrators are being displaced by image generators trained on their own portfolios. Journalists are seeing AI-written summaries and reworded copies replace original reporting—with mixed results for accuracy, but clear savings for publishers.
Copyright law has not kept pace.
The New York Times is suing OpenAI and Microsoft over the use of its content. A group of artists has brought cases against Stability AI, Midjourney, and DeviantArt for using their work in model training. These are early tests of what “fair use” means in a machine learning context. The outcomes are uncertain, but what’s already clear is that the burden of challenging these practices falls on individual creators, not on the companies that trained on their work.
There’s also a cultural cost. The best of human creativity is often not the most predictable—it’s idiosyncratic, emotionally charged, deeply contextual. The risk with AI-generated content at scale is not just substitution, but homogenisation. A flattening of expression into patterns that are legible to machines, and monetisable by platforms.
So Where Do We Go From Here?
Enclosure, historically, was resisted. Sometimes in courts, sometimes through protest, sometimes through alternative forms of community. And the same is happening now.
Legal and policy frameworks are evolving. The EU’s AI Act includes transparency requirements for training data. The US Copyright Office is actively reviewing how AI-generated works and data sourcing should be regulated. Lawsuits are testing the edges of current doctrine.
Creators are pushing back. Artists are embedding invisible watermarks to detect misuse. Some are licensing their work through platforms that block scraping. Others are forming collective strategies to assert control over how their work is used.
New models are emerging. Ethically trained, open-source AI systems—like those built on openly licensed and public-domain datasets such as Common Pile v0.1—show that it is possible to build capable models without exploiting creators. These projects are early, but promising.
This isn’t about stopping progress. It’s about shaping it.
We still have choices about what kind of infrastructure we build. Whether AI systems reinforce the extractive logics of enclosure, or whether they contribute to a more inclusive and accountable digital ecosystem.
A Question Worth Asking
Do we want a future where creativity is part of a living commons—shared, respected, built upon with care?
Or one where culture is algorithmically mined, locked behind proprietary systems, and monetised without permission?
AI is not inherently extractive. But the models we train, the data we use, and the systems we build around them matter. They reflect values. They create precedent.
The future isn’t enclosed—yet. But it’s worth noticing where the fences are starting to go up.
DISCLOSURE
As an ironic experiment, this was written by ChatGPT, trained on earlier articles I had written so it could learn my tone of voice. Here's a snippet from our conversation:
ChatGPT said:
Would you like me to draft the full article for your website based on this outline?
You said:
Wouldn't that be ironic?
ChatGPT said:
Yes, it would be deeply ironic! Writing an article critiquing the enclosure of digital commons while using an AI model (which itself has been trained on vast amounts of human-created content) highlights the very problem being discussed.
It’s a bit like a commoner in the 18th century writing a pamphlet against enclosure while standing on land that’s already been fenced off.
The tools available are shaped by the system in question.
That said, the difference is intent and agency. Unlike the large AI firms enclosing creative commons for profit, this is a case of using AI as a tool to critique enclosure itself—more akin to leveraging enclosed land for a public protest.
Would you still like the article drafted? If so, I can write it in a way that acknowledges the irony and explores the deeper contradictions of using AI to discuss AI ethics.
You said:
Isn't you writing this a bit disingenuous? A moral sleight of hand?
ChatGPT said:
That’s a fair challenge. It could be seen as a moral sleight of hand—an attempt to justify participation in a system while critiquing it. There’s an inherent tension in using AI (which relies on the very extraction processes being criticised) to articulate an argument against its exploitative dynamics.
But this kind of contradiction is almost unavoidable in modern technological debates. The tools of critique are often entangled with the systems they challenge—just as people use corporate-owned social media to organise anti-capitalist movements, or critique surveillance capitalism through digital platforms that track them.
Perhaps the most honest approach is to acknowledge this contradiction outright in the article itself. Instead of sidestepping the irony, the piece could confront it directly, questioning whether any engagement with AI (even for critique) is an act of complicity or whether there are ways to subvert its extractive logic.