What Language Models Can Do Without Thought

Claims of “intelligence” in AI focus on function: if AI behaves as if it has a mind, it’s “intelligent.” But behavior is a useless framework when these models are explicitly designed to pass behavioral tests. Labeling – and debating whether to label – these language machines as AI or AGI achieves the opposite of clarity: it suggests there is a human-like experience of being an LLM.

An LLM is language without intelligence. It turns out that language severed from a mind is remarkably powerful. The trick is that the output is merely the user's own text, obscured: the model takes your text, blows it up, and calibrates the debris to land nearby. Then the interface says, "Here, I wrote this for you."

In my latest piece for Tech Policy Press, I show where this intelligence framework comes from and why it no longer offers clarity in our thinking about this technology. I propose a different way to think about these systems: as language engines, rather than thinking machines.

The Illusion of AGI, or What Language Models Can Do Without Thought
It is not simple stubbornness to insist that LLMs are not “intelligent,” much less a form of “general” intelligence, writes Eryk Salvaggio.