We resent AI for imitating us — and crave it when it doesn’t imitate us enough
Every time we build something new, we push it closer to ourselves — our face, our voice, our way of speaking. Even when it makes us uncomfortable.
Our Chomsky post made this visible again:
we react strongly when AI gets language almost right, because language feels like the last thing that separates “us” from “them.”
So here’s a tiny thought experiment:
If you had to choose a helper for your daily work —
A) a machine-like assistant that plainly acknowledges its artificial nature, or
B) a human-shaped, human-voiced system that presents itself almost like a colleague —
which one would you pick?
History shows our fascination with option B goes back far beyond AI.
The Greeks gave us Talos, da Vinci sketched a mechanical knight, and 18th-century Europe had its automatons.
We keep circling the same idea: a helper that resembles us, but is not quite us.
And yet, here’s the contradiction:
Human-shaped AI is scary… and comforting.
We want the psychological comfort of R2-D2…
but we keep trying to build Optimus… with a smile.
We chase the human shape for a reason.
Not because machines need it — but because we do.
We understand minds by analogy, so when something looks or sounds like us, we instinctively read intention and intelligence into it.
And yet, at this stage there is no such thing in AI. If there is any intention, it is the one we humans have put into it. We mislead ourselves with the human shape and the natural language:
machines look intelligent, sound intelligent… and still aren’t.
So I’m curious:
When you imagine AI in your life — do you want something clearly “machine-like,” or something that behaves almost human?
(If this topic resonates, the book I just released, “AI: The Hunt for Intelligence — Beyond the Hype and Fear,” goes into this tension — link on my profile.)