“Too dangerous to release.”
We’ve heard it before… but it’s happening much more often these days.
Does anyone else remember when OpenAI said something similar about an early language model… warning that releasing it could be a dangerous moment?
Just last week, Anthropic did it again with its latest model, Mythos.
A model (they claim) that is so powerful… it’s being held back from the public.
Instead, there’s something called Project Glasswing where access is limited, partners (and even competitors) are curated…
But the framing is clear: this is not for everyone.
Things sure are getting interesting in tech these days.
But is it just me or are these moments starting to look less like a one-off decision for humanity… and more like a pattern designed to leave humanity panting?
So what is this?
Positioning? Signalling? Marketing? Reality?
Sure… it feels like a signal to the market that says: this is not just another model… this is something different.
And, to be fair, there may be real substance behind it (I do not have the access/technical chops to pass judgement).
We’re being told that this one uncovered vulnerabilities across almost all operating systems, browsers, and core infrastructure at a scale that was previously unimaginable.
*Shudders*
That this new model can chain weaknesses together in ways that could give attackers full control of systems.
*Double shudders*
That’s not trivial… that’s a different category of risk.
But it also creates a very strange duality.
Because the same capability that can defend systems… can also break them.
The same model that helps patch vulnerabilities… could accelerate how quickly those vulnerabilities are exploited.
And that’s the part that should make us pause.
Not the idea of AI going rogue.
But the idea of AI making highly specialized, high-impact work… routine… 24/7…
What used to require deep expertise, time and intent (with a lot of cat-and-mouse game theory in the cybercrime industry)… could become fully automated (and much faster, cheaper and more accessible).
And when that happens… the levee breaks.
So, Anthropic chose the moral high road.
Limit access… control distribution… create tiers of who gets to use what… and, most importantly, give the main players time to lock down the critical infrastructure.
This is good… right?
Maybe… but think about it…
Who decides what counts as “access” to these models?
Who decides what counts as “oversight”?
Who decides what counts as “the right hands”?
Who decides what counts as “too dangerous to release”?
Because these aren’t just product decisions anymore.
Now we’re in the world of geopolitics, security, public health, economic power and governance.
And if you zoom out even further… they’re also narrative decisions.
Because framing a model as “too dangerous to release” does two things at once: it signals responsibility… and it advertises a capability nobody else has.
That’s a compelling narrative… especially with trillions of dollars at play.
Especially in a market where every major tech company is trying to prove that their model is the one that matters.
So yes… maybe this is about safety… but it’s also about positioning.
And we’re about to find out which one matters more.
Because if these models truly represent a step-change in capability…
Then holding them back won’t be a long-term strategy.
It will be a temporary one.
Eventually, the pressure to release… to compete… to monetize… will win.
And when that happens… that line will inevitably start to blur.
So maybe the real shift isn’t that AI is becoming more powerful.
It’s that access to that power is becoming the real product.
And if that’s true…
What happens when the most important technology in the world isn’t defined by what it can do…
But by who is given access to it?
This is what Elias Makos and I discussed on CJAD 800 AM.
Before you go… ThinkersOne is a new way for organizations to buy bite-sized and personalized thought leadership video content (live and recorded) from the best Thinkers in the world. If you’re looking to add excitement and big smarts to your meetings, corporate events, company off-sites, “lunch & learns” and beyond, check it out.