Could a whale ever prove Chomsky wrong?
Most people know Noam Chomsky as the father of modern linguistics, a revolution he launched in the late 1950s. What fewer remember is that for sixty years he has also been the most stubborn critic of the whole AI project.
His point, in three short sentences:
Children arrive on this planet with some kind of built-in language module. Expose a toddler to remarkably little data and, in no time, the child will understand and produce sentences they have never heard. According to Chomsky, no amount of statistical pattern-matching (the only trick ChatGPT, Claude, Gemini & co. actually have) can explain that.
A tiny example that still trips up the biggest models:
✔️ “Who do you think won the race?”
❌ “Who do you think that won the race?”
Any English-speaking three-year-old instantly feels the second one is wrong (linguists call this the “that-trace effect”). The models? They often treat both as equally acceptable unless the pattern has been hammered into their training data a million times. LLMs learn by example and pattern recognition; they do not derive hard rules.
Chomsky keeps rubbing our noses in this: today’s AI can write poems, pass the bar exam, and sound eerily human… yet it still stumbles on deep grammatical constraints that a child masters without effort and without explicit teaching.
Language remains one of the most stubbornly human things we do. If he is right, something fundamentally biological is missing from our machines.
So, what do you think? Are we one clever trick away from closing that gap, or is true language understanding forever outside the reach of pure statistics?
And by the way: Chomsky has spent those same 60 years insisting no animal will ever cross the line into true language. Project CETI is now recording millions of sperm-whale ‘codas’, looking for exactly that line. What happens to the theory if they find it?
Curious what you think.