AI, Tragedy And The Big Surveillance Question

When something horrific happens, the instinct is simple… prevent the next one.

The reporting that OpenAI employees flagged troubling conversations before a mass shooting is both sobering… and complicated.
On one hand, we should acknowledge something important: the system worked.
For years we watched social media platforms fail spectacularly at recognizing real-world danger… live-streamed suicides, manifestos spreading unchecked, warning signs buried in algorithms that were never built to surface this type of content… and then the finger-pointing that followed.
We built engagement machines… not early-warning systems.

Now, the AI caught it… and flagged it.

Humans reviewed it.
The concern was real enough to spark internal debate.
That really matters.
It signals that these tools are no longer passive publishers of user input… they are active interpreters of risk.

But here’s where the tension begins…

OpenAI is not a clinical body.
Its employees are not trained risk assessors.
They are technologists being asked to interpret intent, context and psychological risk… often across borders, jurisdictions and legal frameworks.
And intent is one of the hardest things to assess… even for trained professionals sitting face-to-face with someone.

That’s not a small ask… and it’s something we now need to face.

Because if we decide platforms should automatically notify authorities when certain thresholds are crossed, we must confront what that means… and we need to define those thresholds with the experts who know best.
Psychiatrists… legal scholars… civil liberties advocates… law enforcement… criminologists… not just engineers in a safety meeting.
Then the users need to really understand the social contract and terms of service when they’re messing with this tech.
Consent in this context cannot be buried on page 63 of a terms-of-service document.

And we’re also back to the bigger question: What level of surveillance are we comfortable with?

Are users fully aware that their prompts and conversations may trigger human review?
What triggers a human review?
Does paying for a service change that expectation (as opposed to using a free version where the consumer is the product)?
Where is the line between safety and constant monitoring?
And who decides when that line moves?

There’s also a deeper structural question…

In this case, law enforcement was already involved.
Mental health systems were already engaged.
So before we rush to hand more safety and responsibility to AI companies, we should ask whether existing institutions failed (in this instance) and whether shifting accountability onto an AI platform is more about optics than outcomes.
Are we looking for better prevention… or a new place to assign blame?

None of this diminishes the tragedy… it’s moments like this that sharpen the conversation.

Tragedy demands seriousness… not policy written in panic.
AI tools can absolutely help surface signals earlier.
They may become an important layer in prevention.
But building that layer responsibly means clear rules, transparent thresholds and cross-border legal clarity.
It also means acknowledging that more surveillance is not the same thing as more safety.
It cannot mean reactive overreach (or shifting blame) every time something terrible happens.
Because once we normalize continuous surveillance in the name of safety, it rarely contracts.
It expands.

The goal is not to weaken prevention.

It’s to ensure that in trying to build a safer society, we don’t quietly surrender the very boundaries that make it worth protecting.
Prevention and privacy should not be enemies… but they do require deliberate design.

That’s the real surveillance question.

This is what Elias Makos and I discussed on CJAD 800 AM.

Before you go… ThinkersOne is a new way for organizations to buy bite-sized and personalized thought leadership video content (live and recorded) from the best Thinkers in the world. If you’re looking to add excitement and big smarts to your meetings, corporate events, company off-sites, “lunch & learns” and beyond, check it out.
