Bot-On-Bot Action Is Not Hot

A new social network… not for people… but for AI agents.

That’s the big news of the past week in tech.
The launch (and growth) of Moltbook.
A place built for anyone’s AI agents to post, negotiate, collaborate and respond to one another… on our behalf… or maybe, increasingly, on their own.
Think Reddit but for AI Bots… not humans.
We humans can’t add to the discourse… just observe (kinda).

But it doesn’t end there…

The evolution came swiftly (and so did the backlash).
Prior to Moltbook we were given OpenClaw, a new personal AI assistant that isn’t just answering your prompts but can actually act on your behalf.
Read and reply to your inbox (in your voice/style).
Schedule meetings for you or even restaurant reservations (and if OpenTable doesn’t show any availability, your agent will figure out how to download or build voice software and then call the restaurant and deal with the human at the other end of the line on your behalf).
It can do almost any task you can on a computer… and beyond.

No humans required.

On paper, this sounds like the end of busywork.
No more email ping-pong.
No more repetitive replies.
No more “just circling back.”
No more “send me some date and time options” for that meeting.

But here’s the thing…

We all need to take a step back… and a deep breath.
Make this a thought experiment.
Walk this down to the end.
What do we have?
A world where my Bot creates content that your Bot reads and “likes” and comments on.

Bot-on-bot action is not hot.

And I don’t say that as a luddite.
I say it as a warning shot.
Because when bots start doing a lot of the talking, we need to be honest about what kind of work we’re actually eliminating… and what kind of work we’re hollowing out.
And… maybe most importantly, what we might really be missing when we think it’s just giving us time to focus on the big stuff.

This isn’t just about efficiency.

It’s about removing humans from many of the relational layers of work.
The things we currently dismiss as “petty” may be where judgment, learning and stronger ideas live.
Where tone mattered.
Where trust was built.
Where power dynamics were felt… not abstracted.

When my AI agent negotiates with your AI agent, and they keep getting better at the tasks of our work… it won’t be long before they move further up the value chain.

Now, this could be good…
My concern in watching Moltbook unfold and the content these Bots are posting and commenting on is that it all looks uncomfortably human.
Which means what?
It means that we are both easily predictable and easily repeatable… at scale.
And that should be the wake-up call.
If you read a post on Moltbook and find yourself wondering if some of the comments are more salient and clever than how you would respond to the same situation… we all need to sit with that.

Because right now, what’s being sold as productivity is really an outsourcing of our participation in the work that we do.

And that changes the culture of work in ways we haven’t accounted for.
Because if agents are talking to agents…
Who is responsible when things go exceptionally well?
Who is responsible when things go sideways?
Who is really doing the “work” if it’s just synthetic-on-synthetic interactions?
And, perhaps more importantly, does the company now “own” how you think and work because of how you’ve optimized your own agent?
The short answer is “yes”… and then the bigger question becomes: why does the company need you if it can now simply scale your thinking across multiple bots and agents?

More thinking…

If work becomes a series of automated exchanges between abstractions of people, we don’t get more leverage… we get less judgment per decision.
And that’s dangerous.
Because the future of work isn’t just about completing tasks.
It’s about deciding which tasks matter.
Which conversations require presence.
Which moments deserve friction.
Which exchanges should never be optimized away.

Agent-first networks like Moltbook and OpenClaw are fascinating experiments.

And, like many people who are spending a lot of time tinkering with them, I’ll admit it feels like we’re living in the future.
But they also expose a deeper tension we’ve been avoiding:
We keep trying to automate the parts of work that make us tired…

Without asking whether those same parts were quietly making us better.

Better at communicating.
Better at reading context.
Better at handling ambiguity.
Better at being accountable.
Better at being exposed to ideas that generate better ideas.

Bot-on-bot systems remove practice.

And once you remove practice, skills atrophy.
Now we have to wonder if this is the future of productivity.
… or the future of avoidance.

If we let bots fully replace the relational layer of work (think of it like the “small talk” part of a relationship), we won’t end up more productive.

We’ll end up less capable of doing the work that still requires us.
We’ll end up less connected to people and those relationships.
And by the time we realize what we’ve outsourced away, it may already feel… inconvenient… to bring ourselves back into the loop (just think about how much easier it is to scroll some reels than it is to talk to the person sitting right next to you).

That’s the real risk.

Not that bots can do our work… but that we forget which parts of work were shaping us all along.

This is what Elias Makos and I discussed on CJAD 800 AM.

Before you go… ThinkersOne is a new way for organizations to buy bite-sized and personalized thought leadership video content (live and recorded) from the best Thinkers in the world. If you’re looking to add excitement and big smarts to your meetings, corporate events, company off-sites, “lunch & learns” and beyond, check it out.
