
Claude Code is not a chat

Mike Bermeo ⏱ 9 min read

The first time you open Claude Code, it’s easy to think you already understand what you’re looking at.

A terminal.

A text box.

An AI that responds.

In other words: a chat.

And honestly, that reading makes sense — because on the surface, that’s exactly what it looks like. You type something, the system responds, you type again, it responds again.

But that’s just the facade.

The reason Claude Code feels different isn’t simply that it “writes better” or “seems smarter.” The real difference is elsewhere: Claude Code wasn’t built just to converse. It was built to work.

The most common mistake when looking at it

When we’re in front of a conversational interface, we tend to put everything in the same mental bucket.

If there’s a text box and AI responses, we assume the system works like any other chatbot: you ask, it answers, end of cycle.

That mental model is enough to understand part of the behavior — but not the most important part.

A classic chat lives, above all, in the realm of language. You give it text and it gives you text back. Sometimes it does this very well, sometimes less so, but the basic pattern stays the same.

Claude Code uses language too, of course. But language isn’t the end of the process. It’s the medium through which it organizes actions.

And that difference changes everything.

The clue is in what it can do

There’s a very simple way to notice that something more than a conversation is happening here.

Claude Code can:

| Capability | What it means |
| --- | --- |
| Read files | It sees the real code, it doesn’t guess |
| Search the project | It navigates the structure before acting |
| Execute commands | It touches the environment, not just talks about it |
| Review results | It observes what happened before continuing |
| Change course | It corrects its trajectory if something fails |

That doesn’t quite fit the “chat” idea anymore.

Because a normal chat responds. Claude Code, on the other hand, can enter a small work cycle.

First it looks at something. Then it decides on an action. It uses a tool. It looks at the result. And from that it decides what to do next.
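To make that cycle concrete, here is what a single turn of it can look like as data. The shapes below are illustrative, loosely modeled on how tool-calling APIs represent a tool call and its result; the field names are assumptions, not Claude Code’s actual wire format.

```typescript
// One illustrative turn of the look → act → observe cycle.
// These types are hypothetical, not Claude Code's real protocol.
type ToolCall = { type: "tool_use"; name: string; input: Record<string, string> };
type ToolResult = { type: "tool_result"; output: string };

// 1. The model decides on an action instead of answering directly.
const call: ToolCall = {
  type: "tool_use",
  name: "read_file",
  input: { path: "src/app.ts" },
};

// 2. The environment executes it and hands back what actually happened.
const result: ToolResult = {
  type: "tool_result",
  output: "export function handleAuth() { ... }",
};

// 3. Only after observing the result does the model decide the next step.
const summary = `${call.name} -> ${result.output.slice(0, 24)}...`;
```

The important part isn’t the field names: it’s that the result is observed data, not generated text.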

```mermaid
flowchart LR
  U(["👤 User"]):::user --> S
  S(["🧠 Model"]):::model --> T
  T(["🔧 Tool"]):::tool --> R
  R(["📋 Result"]):::result --> S
  S --> O(["💬 Response"]):::output

  classDef user   fill:#1e1b4b,stroke:#818cf8,color:#e2e2f0
  classDef model  fill:#312e81,stroke:#818cf8,color:#e2e2f0,stroke-width:2px
  classDef tool   fill:#164e63,stroke:#22d3ee,color:#e2e2f0
  classDef result fill:#1c1917,stroke:#78716c,color:#a8a29e
  classDef output fill:#14532d,stroke:#4ade80,color:#e2e2f0

  linkStyle 2,3 stroke:#22d3ee,stroke-width:2px
  linkStyle 0,1 stroke:#818cf8,stroke-width:2px
```

That pattern is much more important than it looks, because that’s where the word agent starts to make sense.

Not because there’s some kind of magical will hidden inside the system. Not because it “thinks like a person.” But because it doesn’t limit itself to producing text. It uses language to coordinate work.

A chat and an agent don’t follow the same path

If I had to draw the difference in the simplest way possible, it would be this:

```mermaid
flowchart TD
  A(["❓ User asks a question"]):::neutral --> B{{"What kind of system?"}}:::decision

  B -->|"Classic chat"| C(["✍️ Generates response"]):::chat
  B -->|"Agent"| D(["🤔 Decides an action"]):::agent

  D --> E(["🔧 Uses a tool"]):::tool
  E --> F(["📋 Receives result"]):::tool
  F --> G{{"Enough?"}}:::decision
  G -->|"No"| D
  G -->|"Yes"| H(["💬 Responds"]):::agent

  classDef neutral  fill:#1e1b4b,stroke:#818cf8,color:#e2e2f0
  classDef decision fill:#1c1917,stroke:#f59e0b,color:#fbbf24
  classDef chat     fill:#1c1917,stroke:#78716c,color:#a8a29e
  classDef agent    fill:#312e81,stroke:#818cf8,color:#e2e2f0
  classDef tool     fill:#164e63,stroke:#22d3ee,color:#e2e2f0

  linkStyle 1 stroke:#78716c,stroke-dasharray:4
  linkStyle 2,3,4,5,6,7 stroke:#818cf8,stroke-width:2px
```

In a chat, the path is shorter. In an agent, the path has one extra loop — and that loop is the most important of all: acting before responding.

That extra loop is precisely what makes the system stop feeling like a fancy text box and start feeling like something that can actually work.

Even if you reduce it to very simple pseudocode, the difference is immediate:

chat-vs-agent.ts

```ts
// Classic chat
function respond(question: string) {
  return model.generateText(question)
}

// Agent
async function resolveTask(task: string) {
  let state = task
  while (!isResolved(state)) {
    const action = model.decideNextStep(state)
    const result = await executeTool(action)
    state = incorporateResult(state, result)
  }
  return model.draftFinalResponse(state)
}
```

You don’t need to understand every line to get the idea.

The “chat” version receives text and returns text. The “agent” version has an intermediate phase where it does things, looks at results, and only then concludes.
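If you want to actually run the agent shape, you can stub the missing pieces. Everything below is a toy under my own assumptions: the “model” decides mechanically, and the only tool is a fake file reader. It exists only to show the loop terminating after one act-observe round.

```typescript
// Toy stubs that make the agent loop executable.
// All decision logic here is illustrative, not how Claude Code decides anything.
const model = {
  decideNextStep: (_state: string) => ({ tool: "read", arg: "src/app.ts" }),
  draftFinalResponse: (state: string) => `done: ${state}`,
};

async function executeTool(action: { tool: string; arg: string }): Promise<string> {
  return `read ${action.arg}`; // fake tool output
}

function isResolved(state: string): boolean {
  return state.includes("read"); // resolved once we've acted at least once
}

function incorporateResult(state: string, result: string): string {
  return `${state} | ${result}`; // fold the observation into the state
}

async function resolveTask(task: string): Promise<string> {
  let state = task;
  while (!isResolved(state)) {
    const action = model.decideNextStep(state);
    const result = await executeTool(action);
    state = incorporateResult(state, result);
  }
  return model.draftFinalResponse(state);
}

// resolveTask("fix the bug") → "done: fix the bug | read src/app.ts"
```

Even in this toy, the agent’s answer carries a trace of what it did, not just what it generated.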

Not a floating brain

I think a much more useful image than “smart chat” is this:

Claude Code is more like a small operational office.

There’s a part that interprets what you asked. There’s a part with access to tools. There are rules. There are limits. There’s memory. And there’s a sequence of steps that connects all of it.
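As a sketch, with made-up names, that “office” might look like the following. This is a mental model only, not Claude Code’s actual internals: the interpreter, rules, and memory here are all hypothetical stand-ins.

```typescript
// A toy "operational office": tools, rules, and memory around a loop.
// Every name here is an illustrative assumption, not a real internal.
type Tool = (input: string) => string;

const tools: Record<string, Tool> = {
  readFile: (path) => `contents of ${path}`, // stubbed tool
};

const rules = { maxSteps: 10 };  // limits: the loop can't run forever
const memory: string[] = [];     // what the system has observed so far

function step(request: string): string {
  if (memory.length >= rules.maxSteps) return "step limit reached";
  const observation = tools.readFile(request); // the part with tool access
  memory.push(observation);                    // memory connects the steps
  return observation;
}

const out = step("src/app.ts");
```

The point of the sketch: the interesting behavior comes from how these parts are wired together, not from any single one of them.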

```mermaid
flowchart TB
  U(["👤 User"]):::user --> P(["📝 Prompt"]):::neutral
  P --> M(["🧠 Model"]):::model
  M --> D(["🤔 Decision"]):::decision

  D -->|"needs more info"| T(["🔧 Tools"]):::tool
  T --> C(["🔄 Updated context"]):::context
  C -->|"loop"| M

  D -->|"ready to respond"| O(["💬 Visible response"]):::output

  classDef user     fill:#1e1b4b,stroke:#818cf8,color:#e2e2f0
  classDef neutral  fill:#1c1917,stroke:#44403c,color:#d6d3d1
  classDef model    fill:#312e81,stroke:#818cf8,color:#e2e2f0,stroke-width:2px
  classDef decision fill:#1c1917,stroke:#f59e0b,color:#fbbf24
  classDef tool     fill:#164e63,stroke:#22d3ee,color:#e2e2f0
  classDef context  fill:#1a2e1a,stroke:#4ade80,color:#86efac
  classDef output   fill:#14532d,stroke:#4ade80,color:#e2e2f0

  linkStyle 4 stroke:#22d3ee,stroke-width:2px
  linkStyle 5 stroke:#4ade80,stroke-width:2px,stroke-dasharray:4
  linkStyle 6 stroke:#4ade80,stroke-width:2px
```

Seen this way, the language model is still important — but it stops being “the whole system.”

This also helps you understand something that sometimes gets lost in the public conversation about agents: a large part of the value isn’t only in the model, it’s in the architecture that surrounds it.

The value isn’t only in the text

When an AI expresses itself well, we tend to give all the credit to what it wrote.

We see a clear, organized, convincing response and think that’s where the magic is.

But in systems like Claude Code, a huge part of the value isn’t only in the writing. It’s in the structure that makes action possible.

It’s in being able to review a file before responding. It’s in being able to verify something instead of inventing it. It’s in being able to inspect the state of a project before making a decision.

In other words: if you really want to understand a system like this, the right question isn’t just “how good is the model.”

The more useful question is:

what system surrounds the model.

That’s where almost all the important pieces appear.

If you want to see it in more concrete terms, the architecture looks less like “question and answer” and more like this:

a-very-simplified-view.ts

```ts
async function reviewProject() {
  const userRequest = "Review this project and tell me what's broken"

  const initialDecision = model.decide(userRequest)
  // "First I need to read files"
  const files = await tool.readFile("src/app.ts")

  const nextDecision = model.decideWithContext(files)
  // "Now I need to find where this function is used"
  const matches = await tool.grep("handleAuth", "src/")

  const finalDecision = model.decideWithContext(matches)
  // "I have enough to respond now"
  return model.draftFinalResponse(finalDecision)
}
```

This isn’t Claude Code’s real source code. But it represents the logic behind it very well: observe, act, incorporate the result, and continue.

Why understanding this matters

It might seem like a fairly abstract discussion, but it actually changes a lot about how you interpret what you see on screen.

For example, once you understand that Claude Code doesn’t simply emit one big brilliant response in a single pass, you stop being surprised when it doesn’t immediately reply with a final conclusion.

Maybe at that moment it’s not concluding. Maybe it’s inspecting. Maybe it’s testing. Maybe it’s reading. Maybe it’s not yet at the point in the process where it makes sense to speak.

You also start to see why the tools matter so much.

Without tools, a system like this stays on the side of words. With tools, it starts to touch the environment.
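The difference is easy to see with a single question. In the sketch below, the “filesystem” is a plain object standing in for real files, and both function names are invented for illustration: one answer is produced from words alone, the other from an observed fact.

```typescript
// Contrast: answering from text alone vs. checking the environment first.
// The "filesystem" is a plain object standing in for real files.
const files: Record<string, string> = {
  "package.json": '{ "name": "demo", "version": "2.1.0" }',
};

// Without tools: the system can only produce plausible words.
function guessVersion(): string {
  return "probably 1.x"; // invented, unverifiable
}

// With a tool: it reads the actual state before answering.
function checkVersion(): string {
  const pkg = JSON.parse(files["package.json"]);
  return pkg.version; // observed, verifiable
}

const guessed = guessVersion(); // plausible text
const checked = checkVersion(); // "2.1.0", grounded in the environment
```

Both answers look like text on the way out. Only one of them is anchored to something the system actually saw.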

And at that point it no longer feels like a simple text generator. It feels like an operator.

A better way to look at Claude Code

If I had to summarize it in one sentence, I’d say this:

it’s not worth looking at Claude Code as an intelligent conversation.

It’s worth looking at it as a work system that communicates with you using language.

That shift in perspective, though it might seem small, clarifies a great deal.

You no longer just ask “what did it respond.” You start asking:

  • what did it review before responding
  • what tool did it use
  • what result did it observe
  • what changed as a result of that

And there you’re already much closer to understanding how it really works.
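Those four questions map naturally onto a per-step trace. The record shape below is hypothetical, not anything Claude Code exposes, but it shows what “reading the work, not just the answer” could look like:

```typescript
// A hypothetical trace entry answering each of those questions per step.
type TraceStep = {
  reviewed: string; // what it reviewed before responding
  tool: string;     // what tool it used
  observed: string; // what result it observed
  changed: string;  // what changed as a result
};

// Invented example data for a two-step session.
const trace: TraceStep[] = [
  { reviewed: "src/app.ts", tool: "readFile", observed: "handleAuth defined", changed: "context updated" },
  { reviewed: "src/",       tool: "grep",     observed: "3 call sites",       changed: "scope narrowed" },
];

// Reading the trace tells you how the answer was produced, not just what it says.
const toolsUsed = trace.map((s) => s.tool); // ["readFile", "grep"]
```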

What comes next

Understanding this is just the beginning.

Once you see that Claude Code isn’t just a conversation, the really interesting question appears:

what’s the internal cycle that makes that behavior possible?

Because if the response doesn’t come all at once, and if the system can observe, act, and decide again, then there must be a mechanism that connects those stages.

And that mechanism is exactly what’s worth opening up next.

The loop.

That’s the next step.