The Signal

AI agent vs chatbot — what's the actual difference? (And which does your business need?)

The honest comparison between AI agents and chatbots — architecture, cost, capability, and compliance — so you can make the right decision for your business without overspending.

Most businesses don't need a full AI agent. Some businesses genuinely do. The problem is that every vendor is calling their product an "AI agent" right now — including products that are very much just chatbots — which makes it almost impossible to calibrate what you actually need.

This guide gives you the real comparison: architecture, capability, cost and compliance. By the end, you should know exactly which one your business needs — and be able to spot a chatbot being sold as an agent.

Definitions that actually hold up

Let's start with definitions that are precise enough to be useful:

A chatbot is software that conducts text-based conversations using either pre-scripted decision trees or a language model. The key constraint is that it can only generate text — it cannot access external data in real time, take actions outside the conversation, or chain multiple steps to complete a goal. A chatbot's knowledge is fixed at the time it's configured or trained.

An AI agent is software built on a language model that can use tools — external APIs, databases, calculations, or other systems — to retrieve real-time information and take actions, chaining multiple steps as needed to complete a complex goal. An agent's outputs are informed by live data, not just its training.

The distinction is not about how sophisticated the conversation feels. It's about whether the software can do things in the world beyond generating text. A chatbot with GPT-4 under the hood is still a chatbot if it can't access your product database or run a calculation. An agent using a smaller model with well-designed tools can be far more useful in the right context.
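The structural difference can be sketched in a few lines. Everything here is illustrative and hypothetical — `fake_llm` stands in for a real model API, and `current_rates` is an invented tool — but the loop is the point: an agent lets the model request tool calls and feeds the results back in until it can answer.

```python
def fake_llm(history, tools):
    """Stand-in for a real model API (purely illustrative): it requests
    the rates tool once, then answers from the tool result it was shown."""
    if not any("current_rates returned" in turn for turn in history):
        return {"type": "tool_call", "tool": "current_rates",
                "args": {"lender": "ExampleBank"}}
    return {"type": "final_answer",
            "text": "ExampleBank's current rate is 5.99% p.a."}

def agent_reply(message, tools):
    """An agent: loop until the model produces a final answer, executing
    any tool calls it requests along the way."""
    history = [message]
    while True:
        step = fake_llm(history, tools)
        if step["type"] == "final_answer":
            return step["text"]
        result = tools[step["tool"]](**step["args"])  # live data or action
        history.append(f"{step['tool']} returned: {result}")

# A "tool" is just a function the model is allowed to call.
tools = {"current_rates": lambda lender: {"ExampleBank": "5.99% p.a."}[lender]}

print(agent_reply("What's ExampleBank's rate today?", tools))
```

A chatbot, by contrast, is a single model call with no tools and no loop — whatever it says about rates comes from training data.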


Side-by-side comparison

Capability | Traditional chatbot | LLM chatbot (GPT/Claude with no tools) | AI agent (LLM + tools)
Answers FAQs | ✓ (scripted) | ✓ (generated) | ✓ (generated)
Accesses live data | ✗ | ✗ | ✓
Runs calculations | Sometimes (hardcoded) | Approximates only | ✓ (exact)
Personalised answers | Limited | Limited | ✓
Multi-step reasoning | ✗ | ✓ (within conversation) | ✓ (with actions)
Current information | ✗ | ✗ (knowledge cutoff) | ✓
Takes external actions | Sometimes (scripted) | ✗ | ✓
Build cost | $1,000–$5,000 | $2,000–$15,000 | $15,000–$80,000+
Monthly running cost | $50–$300 | $100–$500 | $500–$15,000+

When a chatbot is the right choice

A chatbot — including a sophisticated LLM-powered chatbot without tool-calling — is often the right choice. Here are the situations where a chatbot outperforms an agent on the value-for-money calculation:

Customer support from a static knowledge base

If your customers ask variations of the same 50–100 questions, and the answers come from your documentation, policies or product catalogue — a chatbot trained on that content is faster to build, cheaper to run, and easier to maintain than an agent. The agent's tool-calling capability adds no value if the only "tool" needed is a search over a known document set.

Lead qualification with known criteria

If you want to qualify inbound leads by asking a structured series of questions (budget, timeline, use case, location) and routing them to the right salesperson — a chatbot is appropriate. The conversation is linear enough that the agent loop adds cost without adding capability.

Appointment booking and intake forms

Collecting information from customers in a conversational format — name, contact details, reason for enquiry — is a scripted task that a chatbot handles well. An agent with tool-calling would be appropriate only if the booking requires checking real-time availability against a live calendar API.

General information about a topic (no live data needed)

If users are asking general questions about your industry, your products or your processes — and the answers are the same regardless of who's asking and when — an LLM chatbot with a well-crafted system prompt is appropriate. The agent's tool-calling adds no value if there's no live data to retrieve.

When an AI agent is genuinely necessary

An AI agent is the right choice when the task requires information or actions that go beyond what can be handled in a conversation alone:

Live data retrieval

Any question where the answer depends on what's happening in the market right now. "What's the best home loan rate available today?" cannot be answered correctly by a chatbot — its knowledge was frozen at training or configuration time. An agent that calls a live rates API gives a useful answer. A chatbot gives a wrong one.

Personalised calculations

Borrowing power calculations, insurance premium estimates, property yield calculations, business financial projections — these require specific inputs from the user and exact arithmetic. An LLM without tools will approximate. An agent with a calculation tool will be exact.
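To make "approximate vs exact" concrete, here is a repayment-based borrowing estimate as a calculation tool. The formula is the standard present-value-of-an-annuity identity and the figures are illustrative only — not any lender's actual serviceability model — but a tool computes it exactly every time, where an untooled LLM would estimate.

```python
def borrowing_power(net_monthly_income, monthly_expenses,
                    annual_rate, years=30):
    """Loan size whose monthly repayment equals the borrower's surplus,
    via the present-value-of-an-annuity formula (illustrative only)."""
    surplus = net_monthly_income - monthly_expenses
    if surplus <= 0:
        return 0.0
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # number of repayments
    return surplus * (1 - (1 + r) ** -n) / r

# $8,000 income, $5,000 expenses, 6% p.a. over 30 years -> roughly $500,000
print(round(borrowing_power(8_000, 5_000, 0.06)))
```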

Comparisons across multiple data sources

Comparing 22 lenders, 40 insurers, or 15,000 suburbs requires accessing and synthesising data from external sources. A chatbot cannot do this — it can only work with what it was trained on. An agent can query each source in real time and return a current, accurate comparison.

Actions with real-world effects

Checking calendar availability, submitting a form, updating a record, sending a notification — any behaviour that changes state outside the conversation requires tool-calling. This is agent territory.

The thing vendors don't tell you

Many products currently marketed as "AI agents" are chatbots with one narrow tool — typically a search over a document. This is a retrieval-augmented chatbot, sometimes called RAG (Retrieval-Augmented Generation). It's a valuable product and significantly better than a static chatbot, but it is not an agent in the full sense described above.
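Here is that retrieval-augmented pattern in miniature. The keyword-overlap scoring is a deliberate toy (production RAG uses embeddings and vector search), and the final string stands in for an LLM call — the point is that the only capability added is search over a fixed document set:

```python
DOCS = [
    "Refunds are available within 30 days of purchase.",
    "Shipping to metro areas takes 2-4 business days.",
]

def retrieve(question):
    """Pick the document sharing the most words with the question.
    (Toy scoring -- real systems use embeddings and vector search.)"""
    words = set(question.lower().split())
    return max(DOCS, key=lambda doc: len(words & set(doc.lower().split())))

def rag_reply(question):
    context = retrieve(question)
    # A real RAG chatbot would pass `context` to an LLM to phrase the
    # answer; either way, it can only say what the documents already say.
    return f"Based on our docs: {context}"

print(rag_reply("How long does shipping take?"))
```

No live APIs, no calculations, no actions — which is exactly the test described next.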

The test: ask the vendor what tools the agent can call, what external APIs it connects to, and what actions it can take. If the only "tool" is a search over documents you provide, it's a sophisticated chatbot. That might be exactly what you need — but it should be priced accordingly ($2,000–$10,000 to build, $100–$500/month to run), not at agent prices.

Quick test for any AI product: Ask it the current interest rate for a specific home loan product from a named lender. If it gives you a confident, specific answer without acknowledging that it may be outdated, it's working from training data — it's a chatbot. If it calls a tool, retrieves the current rate, and cites the source and date — it's an agent.

Australian compliance implications of the choice

The choice between chatbot and agent has compliance implications that are often overlooked:

Chatbots in regulated industries that work from static training data are risky because the information can become stale without your knowledge. A chatbot trained on interest rate data from six months ago will give wrong information — and there's no visible signal that it's wrong. For financial or insurance information, stale data is a compliance problem as well as a user experience problem.

Agents with live data are more complex to build and audit, but they produce information that is current and attributable to a specific source. When a user asks Finley about a lender's rate, the agent retrieves it from the lender's current feed — so the information is correct at the time of the query, and the source is logged. This is a stronger compliance posture than a chatbot working from a cached training set.

In short: for regulated Australian industries, the additional cost of an agent over a chatbot is often justified by the compliance advantage alone — not just the capability advantage.

The decision framework

Work through these four questions in order:

Q1: Does the question require live or personalised data to answer correctly?
If no → LLM chatbot is sufficient. If yes → continue.

Q2: Does answering require running calculations or fetching from multiple sources?
If no → retrieval-augmented chatbot (RAG) may be sufficient. If yes → continue.

Q3: Does the task require taking actions beyond the conversation?
If no → a single-tool retrieval agent may be sufficient. If yes → full multi-tool agent architecture.

Q4: Do the running cost savings or revenue opportunities justify the agent build cost?
An agent costs $15,000–$35,000 to build. A chatbot costs $2,000–$10,000. The difference is $5,000–$25,000. If the agent generates or saves more than that difference over 12 months, build the agent. If not, build the chatbot and revisit when you have more data.
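The first three questions collapse into a few lines of Python (the build labels are this guide's shorthand, not an industry standard); Q4, the ROI check, still needs your own numbers:

```python
def recommend(needs_live_or_personal_data,
              needs_calcs_or_multi_source,
              needs_external_actions):
    """Q1-Q3 of the framework above; each 'no' stops at a cheaper build."""
    if not needs_live_or_personal_data:
        return "LLM chatbot"
    if not needs_calcs_or_multi_source:
        return "RAG chatbot"
    if not needs_external_actions:
        return "single-tool retrieval agent"
    return "full multi-tool agent"

# e.g. live rate comparisons with calculations, but no bookings:
print(recommend(True, True, False))  # single-tool retrieval agent
```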

The honest recommendation

If you're not sure, build the simpler thing first. An LLM chatbot with a good knowledge base gets you 60–70% of the user experience benefit of an agent at 20–30% of the cost. Validate that users engage with it, that it improves your business metrics, and that the capability gap with a full agent is actually holding you back — before committing to the larger build.

The agents we've built (Finley, Archie and Perry) needed to be full action agents because the core value proposition — accurate, real-time comparisons and calculations — is impossible to deliver without tool-calling. But for many Australian businesses, a well-designed chatbot is the right first move.

The goal is the business outcome. The technology is just the tool to get there.

Ready to build your own AI agent?

We build compliance-ready AI agents for Australian businesses — from $5,000.

Talk to us about your build →

Frequently asked questions

What's the actual difference between an AI agent and a chatbot?

A chatbot generates text responses from a fixed knowledge base or training data. An AI agent uses a language model plus external tools — APIs, databases, calculation functions — to retrieve real-time information and take actions. The key difference is whether the software can do things in the world beyond generating text. A chatbot that uses GPT-4 is still a chatbot if it cannot access live data or take external actions.

Is ChatGPT a chatbot or an AI agent?

In its base form (without plugins or tool access), ChatGPT is an AI assistant — a language model that generates responses from its training data. With tools enabled (web search, code execution, custom GPT actions), it exhibits agent-like behaviour. The distinction matters: ChatGPT without tools cannot tell you the current interest rate for a specific lender, because its knowledge has a cutoff date.

Does my business need an AI agent or just a chatbot?

Most Australian businesses need a chatbot for their website — not a full AI agent. An AI agent is justified when the task requires live data retrieval (current rates, availability, market data), personalised calculations, or comparisons across multiple sources. For general customer support, FAQ answering, and lead qualification, an LLM-powered chatbot is faster to build, cheaper to run, and sufficient for the task.

How much more does an AI agent cost than a chatbot?

A basic LLM chatbot costs $2,000–$10,000 to build and $100–$500/month to run. A Tier 2 action agent costs $15,000–$35,000 to build and $800–$3,000/month to run. The build cost difference is $5,000–$25,000. This is justified when the agent generates or saves more than that difference within 12 months — typically through referral revenue, labour cost reduction, or significantly better conversion rates.

What is a RAG chatbot, and is it an agent?

A RAG (Retrieval-Augmented Generation) chatbot is an LLM that retrieves relevant information from a document database before generating a response. It's more accurate than a standard chatbot for knowledge base queries, but it's still a chatbot — it cannot access live external APIs, run calculations, or take actions. It's the right choice for Q&A over a document library. A full agent with multiple tools is the right choice when live data, calculations, or multi-source comparisons are required.