Agents and Tool Use, Demystified
An agent is just a chat that talks to itself. Once you understand the loop, the rest is plumbing.
"Agent" is one of those words that gets thrown around so much it stops meaning anything. You read a tweet about an "AI agent" and have no idea whether they mean ChatGPT, a Zapier workflow, or a self-driving research assistant. So before we touch any code, let's nail down what an agent actually is.
An agent is an LLM in a loop that can call tools, observe results, and decide what to do next. That's the whole definition. Everything else is decoration. Once that clicks, the rest of this topic — ReAct, sub-agents, MCP, tool schemas — falls into place naturally.
01 The ReAct loop in three lines
The standard agent control flow is ReAct, short for Reason + Act. Three steps, repeated:
- Reason — the model thinks about what to do given the goal and what's happened so far.
- Act — it calls a tool with structured arguments (web_search, run_code, send_email).
- Observe — it reads the tool's result and updates its mental state.
Then it loops. The agent keeps going until it decides it's done — typically by emitting a final answer instead of another tool call. That's the entire control flow. Claude, ChatGPT agents, Devin, Cursor's agent mode — they all run some variant of this.
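That control flow fits in a few lines of code. This is a toy sketch, not any vendor's SDK: callModel and the tools map are hypothetical stand-ins for a real LLM call and real tool implementations, and the hard-coded "model" just searches once and then answers with what it observed.

```typescript
// A model turn either calls a tool or emits the final answer.
type ModelTurn =
  | { kind: 'tool_call'; tool: string; input: string }
  | { kind: 'final'; answer: string };

type Tool = (input: string) => string;

// Toy "model": if it hasn't observed anything yet, it searches;
// otherwise it answers with what it observed.
function callModel(history: string[]): ModelTurn {
  const observation = history.find((h) => h.startsWith('observe: '));
  if (!observation) {
    return { kind: 'tool_call', tool: 'web_search', input: 'capital of France' };
  }
  return { kind: 'final', answer: observation.replace('observe: ', '') };
}

function runAgent(goal: string, tools: Record<string, Tool>): string {
  const history: string[] = [`goal: ${goal}`];
  for (let step = 0; step < 10; step++) {        // hard cap guards against infinite loops
    const turn = callModel(history);             // Reason
    if (turn.kind === 'final') return turn.answer;
    const result = tools[turn.tool](turn.input); // Act
    history.push(`observe: ${result}`);          // Observe, then loop
  }
  return 'gave up: step limit reached';
}

const answer = runAgent('What is the capital of France?', {
  web_search: () => 'Paris',
});
```

The loop terminates when the model emits a final answer instead of a tool call, exactly as described above; a step cap is the usual safety net against an agent that never decides it's done.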
┌─────────────┐
│   Reason    │ ←──────────┐
└──────┬──────┘            │
       │                   │
┌──────▼──────┐            │
│     Act     │ (call tool)│
└──────┬──────┘            │
       │                   │
┌──────▼──────┐            │
│   Observe   │ ───────────┘
└─────────────┘ (or emit final answer)

02 What a tool actually is
A tool is a typed capability you expose to the model: a name, an input schema, an output type. web_search(query: string) => { results: SearchHit[] }. run_code(code: string, lang: string) => { stdout, stderr }. send_email(to, subject, body) => { messageId }.
The schema matters. The model uses it both to decide whether the tool is relevant and to construct valid arguments. Vague schemas produce vague tool calls. Tight schemas with clear descriptions produce reliable behavior.
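Here's what a tight schema looks like in practice. The shape below is loosely modeled on the JSON-Schema-based tool definitions most LLM APIs accept; exact field names vary by provider, so treat this as illustrative rather than any specific API.

```typescript
// A tool definition: name, human-readable description, typed input schema.
// The description fields are what the model reads to decide relevance
// and to construct valid arguments.
const webSearchTool = {
  name: 'web_search',
  description:
    'Search the web for current information. Use for facts that may have changed recently.',
  inputSchema: {
    type: 'object',
    properties: {
      query: {
        type: 'string',
        description: 'A focused search query, e.g. "TypeScript 5.5 release date"',
      },
      maxResults: {
        type: 'number',
        description: 'How many results to return (1-10). Defaults to 5.',
      },
    },
    required: ['query'],
  },
} as const;
```

Compare this with a schema whose description reads just "search" and whose input is a bare untyped string: the model has far less to work with, and the tool calls show it.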
On the frontend, your job is to render tool calls and their outputs differently per tool type. A web_search call should render as result cards. A run_code call should render as a code block with output below. An error response should be visually distinct from a success. Use TypeScript discriminated unions to model this; each tool gets its own renderer.
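A minimal sketch of that pattern, with renderers returning plain strings so it stays self-contained; in a real app each case would return JSX, but the discriminated-union shape is identical:

```typescript
// Each tool result is tagged by tool name so TypeScript can narrow the type.
type ToolResult =
  | { tool: 'web_search'; results: { title: string; url: string }[] }
  | { tool: 'run_code'; stdout: string; stderr: string }
  | { tool: 'error'; message: string };

// One renderer per variant. Because the switch is exhaustive over the union,
// adding a new tool without a renderer becomes a compile-time error.
function renderToolResult(r: ToolResult): string {
  switch (r.tool) {
    case 'web_search':
      return r.results.map((hit) => `[card] ${hit.title} (${hit.url})`).join('\n');
    case 'run_code':
      return r.stderr ? `[code:error] ${r.stderr}` : `[code] ${r.stdout}`;
    case 'error':
      return `[error] ${r.message}`;
  }
}
```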
03 The trace UI: how the agent narrates itself
The user can't see what's happening inside the agent loop. The trace UI is their only window — and it's the difference between "this product feels magical" and "this product feels like a black box."
The pattern that's emerged across products: a vertical list of expandable steps. Each step shows status (pending → running → done | error), tool name, optional thinking text, and the structured input/output. The currently-running step pulses; finished steps are static; errors halt the run (or trigger retry).
If your steps run for tens of seconds, show elapsed time per step. Users tolerate slowness when they can see what took how long. "Searched the web · 1.4s" makes a 30-second total feel reasonable. Without per-step timing, the same 30 seconds feels like the product is broken.
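Producing that label is trivial; the point is just to record start and finish timestamps per step. A hypothetical helper:

```typescript
// Format a millisecond duration the way trace UIs usually do:
// sub-second in ms, otherwise one decimal of seconds.
function formatElapsed(ms: number): string {
  return ms < 1000 ? `${ms}ms` : `${(ms / 1000).toFixed(1)}s`;
}

// Build the per-step label, e.g. "Searched the web · 1.4s".
function stepLabel(summary: string, startedAtMs: number, finishedAtMs: number): string {
  return `${summary} · ${formatElapsed(finishedAtMs - startedAtMs)}`;
}
```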
04 Sub-agents and recursive traces
Production agents (Devin, Claude Code, Manus) often spawn sub-agents for delegated work — research a sub-question, refactor a single file, browse a website. Each sub-agent has its own ReAct loop, its own tool calls, its own trace.
UI implication: traces aren't flat lists. They're trees. Plan your data model accordingly:
type Step = {
  id: string;
  status: 'pending' | 'running' | 'done' | 'error';
  tool: string;
  input?: unknown;
  output?: unknown;
  thinking?: string;
  children?: Step[]; // sub-agent's trace lives here
};
The renderer then becomes a recursive component (same shape as nested-comments). Sub-agents render indented inside their parent step.
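To keep the sketch self-contained (and testable without React), here's the recursive walk as a plain function that emits indented text lines, using a trimmed-down version of the Step type above. A React component has the same shape: a StepView that maps over step.children and renders itself one level deeper.

```typescript
// Trimmed version of the trace Step type; input/output/thinking omitted.
type Step = {
  id: string;
  status: 'pending' | 'running' | 'done' | 'error';
  tool: string;
  children?: Step[];
};

// Recursive renderer: one line per step, children indented under their parent,
// exactly like a nested-comments tree.
function renderTrace(step: Step, depth = 0): string[] {
  const indent = '  '.repeat(depth);
  const lines = [`${indent}[${step.status}] ${step.tool}`];
  for (const child of step.children ?? []) {
    lines.push(...renderTrace(child, depth + 1)); // sub-agent trace nests here
  }
  return lines;
}
```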
05 MCP: what it is and why people are excited
MCP (Model Context Protocol) is Anthropic's open standard for connecting LLMs to tools and data sources. The pitch: instead of every model integrating with every tool individually (the N×M problem), you have a small adapter — an MCP server — that exposes a tool or data source to any MCP-compatible client.
You connect an MCP server for your filesystem, your Postgres, your Linear, your Figma. Any MCP-aware client (Claude Desktop, Cursor, Zed) can talk to all of them. The ecosystem of servers is growing fast, and frontend teams are starting to build MCP-aware UIs that let users add and configure servers.
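For a concrete flavor, wiring a server into Claude Desktop looks roughly like the config below. Treat the exact file location, field names, and package name as illustrative, since they can change between client versions, and the filesystem path is a made-up example:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    }
  }
}
```

The client launches each listed server as a subprocess and speaks the protocol over stdio; the model then sees whatever tools the server advertises.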
You don't need to be an MCP expert tomorrow. But know what it is when someone mentions it, and know that "this product supports MCP" means it can plug into the growing tool ecosystem without bespoke integrations.
Key Takeaways
- 01 An agent is an LLM in a loop that calls tools, observes results, and decides when it is done.
- 02 ReAct = Reason → Act → Observe, repeat. Almost every production agent runs some variant of this.
- 03 Tools are typed capabilities. Tight schemas + clear descriptions produce reliable tool calls.
- 04 Render tool calls differently per tool type using a TypeScript discriminated union.
- 05 Real traces are trees, not lists. Sub-agents spawn nested traces.
- 06 MCP is a growing standard for plugging tools into any model. Worth knowing the term and the direction.