# What Are AI Agents?
Understand agents conceptually — how they differ from chatbots and why they matter.
## From Chatbot to Agent
A chatbot responds to messages. An agent pursues goals.
The key difference: agents take actions in the world — browsing the web, writing files, calling APIs, running code, sending messages. They don't just generate text; they do things.
An AI agent has three components:
- Perception — it receives inputs (messages, data, tool results)
- Reasoning — it decides what to do next
- Action — it uses tools to affect the world
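The three components above can be sketched as a single loop. This is a minimal illustration, not a real framework: the names (`run_agent`, `reason`, `TOOLS`) are invented here, and the `reason` function is a stub standing in for what would be an LLM call in a real agent.

```python
# Toy tool: evaluate a simple arithmetic expression.
def calculator(expression: str) -> str:
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def reason(observation: str) -> tuple[str, str]:
    """Stand-in for an LLM call: decide which tool to use and with what
    input. A real agent would prompt a model here."""
    if any(ch.isdigit() for ch in observation):
        return "calculator", observation
    return "none", observation

def run_agent(goal: str) -> str:
    # Perception: receive the input
    observation = goal
    # Reasoning: decide what to do next
    tool_name, tool_input = reason(observation)
    # Action: use a tool to affect the world
    if tool_name in TOOLS:
        return TOOLS[tool_name](tool_input)
    return observation

result = run_agent("2 + 3 * 4")
```

A production agent would repeat this loop, feeding each tool result back in as the next observation, until the goal is met.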
## The Autonomy Spectrum
Agents exist on a spectrum from assistive to autonomous:
| Level | Description | Example |
|-------|-------------|---------|
| L0 | Responds only | FAQ chatbot |
| L1 | Makes suggestions | Code completion |
| L2 | Takes single actions | Email drafting |
| L3 | Multi-step tasks | Research + summary |
| L4 | Goal-directed | Software engineer |
| L5 | Fully autonomous | Runs a business function |
Most production agents today operate at L2–L4. L5 remains aspirational.
## What Makes an Agent Powerful
Tool use — agents extend their capabilities through tools: web search, code execution, API calls, file operations, database queries.
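One common way to expose tools is a registry pairing each callable with a description the model can read when deciding what to call. A hypothetical sketch (the tool names and stubbed return values here are illustrative, not a real API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str   # read by the model when choosing a tool
    run: Callable[[str], str]

# Stubbed tools for illustration; real ones would hit a search API, disk, etc.
TOOLS = {
    "web_search": Tool("web_search", "Search the web for a query.",
                       lambda q: f"results for {q!r}"),
    "read_file": Tool("read_file", "Read a local file by path.",
                      lambda p: f"contents of {p}"),
}

def describe_tools() -> str:
    """Render tool descriptions for inclusion in a model prompt."""
    return "\n".join(f"- {t.name}: {t.description}" for t in TOOLS.values())

def call_tool(name: str, arg: str) -> str:
    return TOOLS[name].run(arg)
```

The registry keeps tool selection (a reasoning problem) separate from tool execution (plain code), which makes adding a new capability a one-line change.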
Memory — agents that remember past interactions and context can build on prior work rather than starting fresh every session.
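A minimal sketch of session memory, assuming the simplest design: store past turns and prepend the most recent ones to the next prompt. The class and method names are invented for illustration.

```python
class Memory:
    """Keep a transcript of turns and replay recent ones as context."""

    def __init__(self) -> None:
        self.turns: list[str] = []

    def remember(self, role: str, text: str) -> None:
        self.turns.append(f"{role}: {text}")

    def as_context(self, limit: int = 10) -> str:
        # Keep only the most recent turns to bound prompt size
        return "\n".join(self.turns[-limit:])

mem = Memory()
mem.remember("user", "My project is called Atlas.")
mem.remember("agent", "Noted: project Atlas.")
prompt = mem.as_context() + "\nuser: What is my project called?"
```

Real systems layer on summarization or retrieval so old context survives the `limit` cutoff, but the principle is the same: carry state forward instead of starting fresh.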
Planning — sophisticated agents break goals into steps, monitor progress, and adapt when plans fail.
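The plan-monitor-adapt cycle can be sketched as follows. Both `plan` and `execute` are stubs standing in for model calls; here `execute` is rigged to fail the draft step once so the retry path is visible.

```python
def plan(goal: str) -> list[str]:
    """Stub planner: break a goal into fixed steps."""
    return [f"research {goal}", f"draft {goal}", f"review {goal}"]

def execute(step: str, attempt: int) -> bool:
    # Stub executor: pretend the draft step fails on its first attempt
    return not (step.startswith("draft") and attempt == 0)

def run(goal: str, max_retries: int = 2) -> list[str]:
    log = []
    for step in plan(goal):
        for attempt in range(max_retries + 1):
            if execute(step, attempt):
                log.append(f"done: {step}")
                break
            log.append(f"retry: {step}")  # adapt: attempt the step again
        else:
            log.append(f"failed: {step}")  # exhausted retries, record and move on
    return log

log = run("summary")
```

A more capable agent would replan rather than blindly retry, but the control structure (execute, check, adjust) is the core of it.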
## When to Use Agents vs Simpler Approaches
Use an agent when: the task requires multiple steps, requires real-world actions, or benefits from persistent context.
Use a simple prompt when: a single LLM call with good prompting is sufficient. Agents add complexity and cost — adopt them only when that complexity earns its keep.