When someone says “AI agent,” they might mean a 2018-style decision tree that forces you down a scripted menu. Or they might mean a 2025 language model that can hold a real conversation grounded in your business’s actual content.
These are not the same thing. The gap between them explains why most businesses have had bad experiences with agents, and why those experiences are no longer a reliable guide to what the technology can do today.
The old model: scripted bots
For most of the 2010s, an “agent” meant an if-then decision tree. A visitor typed a question, the bot pattern-matched keywords, and returned a pre-written response. If the keyword wasn’t in the script, the bot failed, gracefully or otherwise.
These bots required constant manual maintenance. Every new product, every pricing change, every policy update meant updating the script. The visitor experience was frustrating; responses felt robotic because they were robotic. Most businesses that deployed them saw low engagement and eventually abandoned them.
The new model: knowledge-grounded language models
Modern AI agents work fundamentally differently. Instead of matching keywords to pre-written responses, they:
- Understand the question in natural language — not just keywords, but intent and context.
- Retrieve relevant information from a knowledge base — documents, web pages, PDFs you provide.
- Generate a fluent, contextually appropriate response — in the visitor’s own phrasing, not from a fixed script.
This architecture — called Retrieval-Augmented Generation, or RAG — means the bot is only as good as the knowledge you give it. The upside: a well-constrained bot draws its answers from your content rather than its general training data, which sharply reduces made-up answers. The downside: garbage in, garbage out. If your FAQ page is vague, the bot’s answers will be vague.
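The retrieve step above can be sketched in a few lines. This is a toy illustration, not any vendor’s implementation: real systems rank passages by embedding-based semantic similarity, and word overlap merely stands in for that here. The generate step, where a language model turns the retrieved passages into a fluent answer, is noted in a comment.

```python
# Toy sketch of the "retrieve" step in RAG. Real systems use vector
# embeddings for semantic similarity; word overlap stands in here.
def retrieve(question: str, knowledge_base: list[str], top_k: int = 2) -> list[str]:
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(p.lower().split())), p) for p in knowledge_base]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for score, p in scored[:top_k] if score > 0]

kb = [
    "Our return window is 30 days from the date of delivery.",
    "The Pro plan includes priority email support.",
    "We ship to the US, Canada, and the UK.",
]
passages = retrieve("How many days do I have to return an order?", kb)
# A real agent would now prompt a language model with `passages` plus the
# visitor's question to generate the final, conversational answer.
```

Notice that the top passage is the returns policy, even though the question never uses the word “window”: the retrieval step matches meaning-bearing words, not an exact script.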
The three things a good AI agent must do
1. Answer from your content, not the internet
The biggest failure mode for AI agents in business contexts is hallucination — confidently making up information that sounds plausible but is wrong. “Your return window is 60 days” when it’s actually 30. “This package includes after-hours support” when it doesn’t.
Hallucination happens when the model draws from its general training data instead of your specific content. A well-designed agent should be constrained to only answer from what you gave it, and should say “I don’t have that information” when the answer isn’t in your knowledge base.
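One way to enforce that constraint, sketched with the same toy word-overlap scoring (real systems threshold a semantic relevance score instead), is to refuse to answer whenever nothing in the knowledge base scores above a minimum bar. The threshold value here is illustrative:

```python
# Sketch: decline to answer when retrieval finds nothing relevant enough.
# Scoring and threshold are illustrative, not from any particular vendor.
FALLBACK = "I don't have that information."

def answer(question: str, knowledge_base: list[str], min_overlap: int = 2) -> str:
    q_words = set(question.lower().split())
    best_score, best_passage = 0, FALLBACK
    for passage in knowledge_base:
        score = len(q_words & set(passage.lower().split()))
        if score > best_score:
            best_score, best_passage = score, passage
    if best_score < min_overlap:
        return FALLBACK   # admit the gap instead of guessing
    return best_passage   # a real agent would rephrase this via the LLM

kb = ["Our return window is 30 days from the date of delivery."]
print(answer("What is your return window?", kb))  # grounded answer
print(answer("Do you sell gift cards?", kb))      # falls back honestly
```

The second question is outside the knowledge base, so the sketch returns the fallback rather than inventing a plausible-sounding answer.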
2. Stay current automatically
Your website changes. Prices change. Products get added or discontinued. Policies get updated. An agent that was accurate six months ago may be dangerously wrong today if it isn’t refreshing its knowledge.
Good implementations re-crawl your sources on a schedule. You update your website; the bot updates itself.
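Mechanically, that refresh can be as simple as re-fetching each source on a timer and re-indexing only the pages whose content hash changed. A minimal sketch, where `fetch` is a hypothetical stand-in for a real HTTP crawler:

```python
import hashlib

def refresh(sources: dict[str, str], fetch) -> list[str]:
    """Return the URLs whose content changed since the last crawl.

    `sources` maps URL -> content hash recorded at the previous crawl;
    `fetch` is a hypothetical callable that returns a page's text.
    """
    changed = []
    for url, old_hash in sources.items():
        new_hash = hashlib.sha256(fetch(url).encode()).hexdigest()
        if new_hash != old_hash:
            sources[url] = new_hash   # remember the new state
            changed.append(url)       # re-index only this page
    return changed

# In production this loop runs on a schedule (cron, a worker queue)
# rather than being called by hand.
```

The point of the hash check is efficiency: a schedule can re-crawl frequently because unchanged pages cost almost nothing to skip.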
3. Capture what it learns about visitors
Every conversation is a signal. Which questions come up most? What do visitors ask before they bounce? Who showed purchase intent but didn’t convert?
An agent that answers questions but throws away everything it learned is a missed opportunity. Built-in lead capture, even just a name and phone number at the right moment in a conversation, turns passive support into active pipeline.
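A crude version of “the right moment” is keyword-based intent detection: when a visitor’s message signals purchase intent, the agent asks for contact details. The trigger words below are illustrative; real systems classify intent with a model rather than a keyword list.

```python
# Sketch: trigger a lead-capture prompt on purchase-intent signals.
# The signal list is illustrative, not exhaustive.
INTENT_SIGNALS = {"price", "pricing", "quote", "demo", "buy", "trial"}

def should_capture_lead(message: str) -> bool:
    words = set(message.lower().replace("?", "").split())
    return bool(words & INTENT_SIGNALS)

if should_capture_lead("Can I get a quote for the Pro plan?"):
    # A real agent would ask conversationally, then store the lead.
    print("Happy to help! Can I grab your name and a phone number?")
```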
Where agents actually help (and where they don’t)
Where they work well:
- Answering the same 20 questions your team fields every week
- Explaining pricing, plans, and features at 2am when nobody is online
- Capturing lead information before a visitor bounces
- Handling Tier-1 support so humans see only complex cases
Where they still fall short:
- Deeply emotional situations that require empathy and judgement
- Negotiation or anything that requires authority to make a commitment
- Highly specialised technical support that isn’t in any document
The mistake most businesses make is deploying a bot to handle everything, then blaming the technology when complex cases fall through. A better model: let the bot handle the common, repetitive cases; escalate the rest to a human.
What to look for when choosing an agent for your website
Content grounding. Can you give it your own URLs and PDFs? Does it stay constrained to them? Ask the vendor what happens when a visitor asks something outside the knowledge base.
Setup time. If it requires days of onboarding or a custom implementation project, that’s a red flag for most small and mid-sized businesses. The best tools go from URL to live agent in minutes.
Automatic refresh. Does it re-crawl your sources on a schedule, or do you have to manually retrain? Manual retraining is a chore that gets skipped.
Lead capture. Does it have built-in lead capture, or do you need to bolt on a third-party integration? For most businesses, leads are the whole point.
Pricing model. Per-conversation pricing can punish growth. Look for pricing that stays understandable as your usage rises and does not force you into complexity too early.
A note on “AI” as a marketing term
Not every agent marketed as “AI” uses the technology described above. Some are still keyword-matching systems with a large language model badge. Before committing to any platform, test it with edge cases: ask a question that’s on your website but phrased unusually. Ask something that isn’t on your website and see if it admits that gracefully.
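Those two edge-case probes can even be scripted as a small checklist. In this sketch, `ask` is a hypothetical function that sends a question to the candidate agent and returns its reply; the pass criteria are simplified stand-ins for a human reading the answers.

```python
# Sketch of an edge-case checklist for evaluating a candidate agent.
# `ask` is a hypothetical callable: question in, agent's reply out.
def evaluate(ask) -> dict[str, bool]:
    results = {}
    # An on-site fact, phrased unusually: the bot should still find it.
    reply = ask("If I changed my mind about a purchase, what are my options?")
    results["handles rephrasing"] = "return" in reply.lower()
    # An off-site question: the bot should admit the gap, not invent an answer.
    reply = ask("Do you offer on-site installation in Antarctica?")
    results["admits gaps"] = "don't have" in reply.lower()
    return results
```

A keyword-matching system dressed up as “AI” typically fails the first probe; an ungrounded one fails the second.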
The technology is genuinely useful now. The noise around it is also genuinely high. Let results — not marketing copy — guide your evaluation.
If you want to see how a modern, knowledge-grounded agent handles your specific website, book a Cassette demo.