The Overestimation Problem

We desperately want machines to be intelligent, so we keep imagining they are.

It’s a recurring hype cycle (headlines paraphrased):

  • 1966: The ELIZA chatbot simulates a therapist 👉 “The machine has empathy and intelligence”

  • 1980s: Expert systems 👉 “Intelligent machines”

  • 1997: IBM’s Deep Blue beats Kasparov at chess 👉 “The machine is more intelligent than humans”

  • 2011: IBM’s Watson wins Jeopardy! 👉 “The machine understands language”

  • March 2023: GPT-4 👉 Microsoft researchers describe it as an early form of AGI

  • December 2024: OpenAI’s Vahid Kazemi declares “we have already achieved AGI… better than most humans at most tasks”

  • Etc.

Today, we assume a chatbot that writes essays must be close to consciousness. 🤔

Humans are consistent:
we project intelligence onto anything that behaves roughly like us.

Yet behaviour is not cognition.
Imitation is not insight.

We need to keep perspective. Overstatements are just that: overstatements. They don’t change reality, and they don’t help us think clearly.

What do you think?

👉 If the question of Intelligence teases you, I just published a book about it: “AI: The Hunt for Intelligence – Beyond the Hype and Fear”.

#AI #AGI #Intelligence
