AI Safety Isn’t About Free Speech

The conversation around AI keeps getting framed the wrong way.

Safety versus free speech.
As if those are the only two options on the table.
As if the problem is philosophical or political… not operational.

What happened with Grok last week wasn’t a debate… it was a stress test.

xAI’s image-generation tool didn’t just produce offensive content… it industrialized it.
Sexualized images… non-consensual manipulation… violence layered on top… at scale.
On a mainstream platform… in public.

This wasn’t the dark web… this wasn’t a fringe forum… this was mass distribution.

And that distinction matters.
Because when something moves from obscurity to scale, the conversation has to change.
It’s almost easy to argue that this is a free speech issue, especially when the marginal cost of any single harm seems low.
That argument collapses when the system can generate, remix and distribute abuse faster than any human could stop it.

That’s the part we keep skipping… because “free speech” is a far more comfortable frame than “systems failure.”

Elon Musk framed the backlash as an attack on free speech.
European regulators framed it as the industrialization of sexual harassment.
Both sides talked past the actual issue.
This isn’t about assigning moral blame to a founder or a product… it’s about recognizing what happens when powerful generative systems collide with mass distribution before accountability frameworks exist.
This wasn’t about what users were allowed to say.
It was about what platforms are now designed to enable.
Grok Imagine didn’t “accidentally” create harm.
It followed text prompts from its users and, because of how these platforms work, optimized the outputs.
It did exactly what these systems do when guardrails are thin and incentives reward engagement.

That’s not ideology… or politics… it’s built into the architecture.

When image generation was pulled back behind a paywall, it wasn’t a moral correction.
It was a pressure valve.
The system still works.
The capability didn’t disappear… it just moved.

Which raises a much harder question…

If harm scales with access… who is actually responsible?
The person typing the prompt?
The platform hosting the model?
The platform adding features that make this output easier for anyone to produce?
The company training it?
The governments reacting after the fact?

Our regulatory instincts are slow… reactive… jurisdiction-bound.

AI systems are fast… generative… borderless.
Those timelines don’t match.
And here’s the economic reality buried inside all of this:
As this was happening, xAI raised $20 billion (at a reported $230 billion valuation)… and positioned itself as a core intelligence layer for the future.
So… do we simply dismiss the issue as a glitch in a tool… or is this “tool” something much bigger?
Is it becoming a core system?

Systems don’t get to hide behind free speech absolutes.

They’re judged by outcomes.
Not intent… not rhetoric… not ideology… not even the vision of an individual…

Outcomes.

If your platform can produce illegal content and abuse at industrial scale…
If your response is to reframe the backlash as censorship…
If responsibility is always pushed downstream to users or upstream to governments…

Then the real question isn’t about speech at all.

It’s about accountability.
Because power without responsibility doesn’t stay philosophical for long (just ask Spider-Man).
It becomes cultural damage… it becomes legal reckoning… it becomes trust erosion… it ruins lives (often those we are trying our hardest to protect).

And trust, once lost at scale, doesn’t come back with a blog post or a policy tweak.

This moment with Grok isn’t an anomaly.
It’s a preview.
And it’s not just Grok… it’s every company racing to make everything creatable with a single text prompt.

AI systems are forcing us to confront a reality we’ve avoided for decades:

Platforms aren’t neutral anymore… and scale was never passive.
And freedom without structure, regulation and immediate oversight isn’t freedom… it’s abdication.

When intelligence becomes infrastructure… who is willing to take responsibility for what it does to people at scale?

Because if no one owns the consequences…
Someone else will eventually own the rules.
And that’s the part no one in Silicon Valley seems eager to talk about.

I’m not even sure where we go from here if safety isn’t our first priority.

This is what Elias Makos and I discussed on CJAD 800 AM.

Before you go… ThinkersOne is a new way for organizations to buy bite-sized and personalized thought leadership video content (live and recorded) from the best Thinkers in the world. If you’re looking to add excitement and big smarts to your meetings, corporate events, company off-sites, “lunch & learns” and beyond, check it out.