There Is No Intelligence in Artificial Intelligence

By Ricky Nelson · 3 April 2026

The more I use AI, the more I understand what it actually is. And what it is not. I've heard it described as a math function that predicts the next word, and after months of using it daily for real work, that description lines up perfectly with what I've seen.

A lightbulb filled with mathematical formulas instead of a filament

A very good guesser

That's really all it is. AI takes your input, runs it through a model trained on enormous amounts of text, and produces the sequence of words most likely to be what you're looking for. It's really good at picking which words to put together. Impressively good. But that is all that it's doing.

Think about where this started. If you wanted to identify pictures of cats, you trained a model on hundreds of thousands of images of cats. Early on, those models had a hard time telling a house cat from a lion. More data, more training, and they got better. Scale that up to where we are today with language. More data, better training, and it's gotten really good at identifying patterns and formulating responses. The jump from image recognition to language generation is a massive leap in capability, but the underlying mechanism is the same. Pattern matching. Statistics. Math.

It never reaches out

Here's the thing that keeps me grounded about all of this. AI has no initiative. It never reaches out to you. It only ever responds. It sits there, completely inert, until you give it something to work with. Then it assembles a response based on what its training says you probably want to hear. That's not intelligence. That's a very sophisticated lookup.

I don't think people appreciate how significant that distinction is. Intelligence, real intelligence, involves curiosity. It involves noticing something is wrong without being told. It involves reaching out. AI does none of that.

The branch problem

Here's an engineering example that made this click for me.

Say you're working on a feature. You create a branch, or have your AI do it, complete the task, commit, and merge it back to main. Normal workflow. But now you start the next task without telling the AI that you merged. What happens? The AI doesn't check which branch it's on. It doesn't notice that the previous work has already been committed. It doesn't suggest creating a new branch. It just starts solving whatever problem you've put in front of it, blindly assembling the next words to give you what you asked for.

You have to tell it to check which branch it's on. You have to tell it to create a new one. It will never figure that out on its own, because there's nothing to figure out. There's no awareness. It's a function. Input goes in, output comes out.
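The fix is boringly explicit. Here's a minimal Python sketch of the checks you end up spelling out yourself (the function names are mine, invented for illustration): ask git which branch you're on, then create and switch to a fresh branch before starting the next task.

```python
import subprocess

def current_branch(repo: str) -> str:
    # The check an assistant never makes unprompted: which branch are we on?
    out = subprocess.run(
        ["git", "branch", "--show-current"],
        cwd=repo, capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def start_task(repo: str, branch: str) -> None:
    # Explicitly create and switch to a new branch for the next piece of work.
    subprocess.run(["git", "switch", "-c", branch], cwd=repo, check=True)
```

Nothing here is clever. That's the point: the "awareness" has to live in your instructions, because it doesn't live in the model.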

This isn't a bug or a limitation that will get fixed in the next release. It's fundamental to what these systems are. They respond. They don't observe. They don't act on their own. They have no model of the world that persists between your prompts.

What AGI actually means

You hear a lot about AGI, Artificial General Intelligence. The idea is a system that can understand, learn, and apply knowledge across any domain the way a human can. Not just answering questions about code or generating text, but actually reasoning about new situations, making decisions without being prompted, and adapting to things it's never seen before.

We are nowhere close to that. What we have today is pattern matching at an extraordinary scale. It's useful. I use it every day and I've written about how much it's changed my workflow. But calling it intelligence is generous. It doesn't reason. It doesn't understand. It doesn't know anything. It predicts what word comes next.

The sales pitch

But that's not what the AI companies are selling the world. They're selling "intelligence." They're selling the idea that these models think, that they understand, that they're on the path to something that can replace human judgment. The marketing is way out ahead of the reality.

I wonder how long that lasts. At some point the gap between what's being promised and what's being delivered has to close, one way or the other. These models are genuinely useful tools. I'm proof of that. But a useful tool is very different from an intelligent system, and conflating the two is something worth pushing back on.

AI is a math function. A very good one. And that's fine. Just don't let anyone tell you it's something more.

Written by me, assisted by the AI.
