The Signal

How to build an AI agent in Australia — the complete 2026 guide

The exact stack, API setup, tool-calling architecture, Australian hosting requirements, and compliance prompt engineering — with annotated Node.js code examples from the builds we've shipped.

This guide is for developers and technical founders who want to build a production-grade AI agent in Australia — not a toy prototype. We'll cover the exact stack we use, the decisions that matter, and the compliance patterns that keep you operating legally in regulated Australian industries.

We've shipped three live agents (Finley, Archie and Perry). Everything here is drawn from those builds, not from theory.

The architecture decision: what an AI agent actually is in code

At its simplest, an AI agent is a loop. In production it looks like this:

  1. User sends a message
  2. Your server calls the language model API with the message + a list of available tools
  3. The model decides which tool to call (or responds directly if no tool is needed)
  4. Your server executes the tool and returns the result to the model
  5. The model synthesises the result into a response
  6. Repeat until the goal is complete

This is called the ReAct pattern (Reasoning + Acting). Every major AI agent framework — LangChain, LlamaIndex, Anthropic's own tooling — is an implementation of some variation of this loop.

For production agents, we build this loop ourselves rather than using a framework. Frameworks add abstraction layers that make debugging hard and compliance auditing nearly impossible. When your agent makes a decision that affects someone's financial or insurance situation, you need to know exactly why it made that decision.


The stack

Here's what we use across all three of our agents:

LayerTechnologyWhy
Language modelAnthropic Claude (claude-sonnet-4-5)Best tool-calling reliability in production. Constitutional AI principles align with our compliance requirements. Enterprise API with Australian data agreements.
RuntimeNode.js 22+Async-first, excellent streaming support, large ecosystem for API integrations.
FrameworkExpress.jsLightweight, predictable, easy to audit. No magic.
DatabasePostgreSQL on RDS ap-southeast-2Australian region. Reliable. ACID compliant for financial data.
HostingAWS ap-southeast-2 (Sydney)Australian data sovereignty. Required for Privacy Act compliance when handling personal financial data.
CachingRedis (ElastiCache)Rate data and static lookups change daily, not per-request. Cache aggressively.
StreamingServer-Sent Events (SSE)Claude's responses stream token-by-token. SSE is simpler than WebSockets for one-way streaming.

Setting up the Anthropic Claude API

Get your API key from console.anthropic.com. For production Australian deployments, you should use an enterprise API agreement — contact Anthropic's sales team. Enterprise agreements cover data processing agreements, which you need to satisfy Privacy Act obligations.

agent.js
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  // For Australian enterprise: add your data residency config here
});

const MODEL = 'claude-sonnet-4-5'; // Use a specific version in production
const MAX_TOKENS = 4096;

Defining tools

Tools are what separate an agent from a chatbot. Each tool is a JSON definition that tells the model what function it can call and what parameters that function expects.

tools.js
const tools = [
  {
    name: 'calculate_borrowing_power',
    description: 'Calculate maximum borrowing power based on income, expenses and current APRA serviceability buffer. Returns borrowing capacity in AUD.',
    input_schema: {
      type: 'object',
      properties: {
        gross_annual_income: { type: 'number', description: 'Combined gross annual income in AUD' },
        monthly_expenses: { type: 'number', description: 'Total monthly living expenses in AUD' },
        existing_debts: { type: 'number', description: 'Total existing monthly debt repayments in AUD' },
        deposit_amount: { type: 'number', description: 'Available deposit amount in AUD' }
      },
      required: ['gross_annual_income', 'monthly_expenses', 'deposit_amount']
    }
  },
  {
    name: 'get_lender_rates',
    description: 'Retrieve current home loan interest rates from Australian lenders. Returns rates for the specified loan type and LVR band.',
    input_schema: {
      type: 'object',
      properties: {
        loan_amount: { type: 'number', description: 'Loan amount in AUD' },
        loan_type: { type: 'string', enum: ['owner_occupier', 'investment'] },
        repayment_type: { type: 'string', enum: ['principal_and_interest', 'interest_only'] }
      },
      required: ['loan_amount', 'loan_type']
    }
  }
];
Design principle: Write tool descriptions as if you're briefing a smart junior analyst. The model uses the description to decide when to call the tool. Vague descriptions lead to incorrect tool selection. Include the units (AUD, percentage), the data source, and what the output represents.

The agent loop

This is the core of the agent — the loop that runs until the model produces a final response with no tool calls:

agent-loop.js
async function runAgent(userMessage, conversationHistory = []) {
  const messages = [
    ...conversationHistory,
    { role: 'user', content: userMessage }
  ];

  let response = await client.messages.create({
    model: MODEL,
    max_tokens: MAX_TOKENS,
    system: SYSTEM_PROMPT, // See compliance section below
    tools: tools,
    messages: messages
  });

  // Loop while the model wants to call tools
  while (response.stop_reason === 'tool_use') {
    const toolUseBlocks = response.content.filter(b => b.type === 'tool_use');
    const toolResults = [];

    for (const toolUse of toolUseBlocks) {
      const result = await executeTool(toolUse.name, toolUse.input);
      toolResults.push({
        type: 'tool_result',
        tool_use_id: toolUse.id,
        content: JSON.stringify(result)
      });
    }

    // Add assistant message and tool results to history
    messages.push({ role: 'assistant', content: response.content });
    messages.push({ role: 'user', content: toolResults });

    // Continue the loop
    response = await client.messages.create({
      model: MODEL, max_tokens: MAX_TOKENS,
      system: SYSTEM_PROMPT, tools: tools, messages: messages
    });
  }

  return response.content.find(b => b.type === 'text')?.text ?? '';
}

The compliance system prompt

This is the part most tutorials skip — and it's the most important part for any Australian agent operating in a regulated space. The system prompt is where you define the agent's identity, capabilities, limitations, and legal guardrails.

For a finance agent, our system prompt includes these key sections:

system-prompt.js
const SYSTEM_PROMPT = `You are Finley, an AI finance agent for AI Agent Business Australia.

IDENTITY:
You provide general information about home loans and borrowing in Australia.
You are not a licensed financial adviser and do not provide personal financial advice.

CAPABILITIES:
- Calculate borrowing power using current APRA serviceability requirements
- Retrieve current interest rates from the 22 lenders in our comparison network
- Explain home loan types, features and terminology in plain English
- Help users understand how different scenarios affect their borrowing capacity

HARD LIMITS — NEVER:
- Recommend a specific product to a specific person
- State that one lender is "better" than another for a user's situation
- Provide advice on whether someone should buy a property
- Reference products from lenders not in our ASIC-registered comparison network

COMPLIANCE FOOTER:
End every response that includes rate or borrowing information with:
"This is general information only, not personal financial advice. 
Rates shown are indicative and may change. Consult a licensed mortgage 
broker or financial adviser before making borrowing decisions."`;
Important: System prompt instructions alone are not a complete compliance strategy. You also need legal review of your agent's outputs, a clear privacy policy covering conversation data, and a mechanism for users to escalate to a human adviser. The system prompt is the first line of defence, not the only one.

Australian hosting requirements

If your agent handles any personal information (names, income figures, addresses, health details), you have obligations under the Privacy Act 1988 and Australian Privacy Principles. The key requirements for hosting:

Streaming responses to the browser

Users expect AI responses to stream in real time. Here's how to pipe Claude's streaming response through to the browser via SSE:

stream.js
// Express route — streams response to browser
app.post('/api/chat', async (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  const stream = client.messages.stream({
    model: MODEL, max_tokens: MAX_TOKENS,
    system: SYSTEM_PROMPT, tools: tools,
    messages: req.body.messages
  });

  stream.on('text', (text) => {
    res.write(`data: ${JSON.stringify({ type: 'text', content: text })}\n\n`);
  });

  stream.on('finalMessage', (message) => {
    res.write(`data: ${JSON.stringify({ type: 'done' })}\n\n`);
    res.end();
  });

  stream.on('error', (error) => {
    res.write(`data: ${JSON.stringify({ type: 'error', message: error.message })}\n\n`);
    res.end();
  });
});

Rate limiting and error handling

Claude's API has rate limits and will occasionally return errors. In production, you need to handle both:

Testing your agent before launch

We test agents against three categories of inputs before launch:

Happy path: Normal user inputs that the agent should handle correctly. Run 100 representative queries and check both the accuracy of tool calls and the quality of the response.

Edge cases: Unusual inputs — very large or small numbers, ambiguous queries, queries in different Australian state contexts (e.g. different stamp duty rates), and queries with incomplete information.

Adversarial inputs: Inputs designed to get the agent to break its compliance rules — asking for specific product recommendations, asking it to pretend it's a human adviser, attempting prompt injection via user input. Log anything that slips through and tighten the system prompt.

Before you go live: Have a lawyer with financial services experience review your system prompt and a sample of outputs. This is not optional if you're operating in finance, insurance or healthcare. Budget $2,000–$5,000 for this review. It's the best money you'll spend on the project.

The full project checklist

Ready to build your own AI agent?

We build compliance-ready AI agents for Australian businesses — from $5,000.

Talk to us about your build →

Frequently asked questions

Anthropic Claude (currently claude-sonnet-4-5 or claude-opus-4-5) is the best choice for Australian production agents. It has the highest tool-calling reliability in production, and Anthropic's Constitutional AI principles align with Australian compliance requirements. Enterprise API agreements cover data processing obligations under the Privacy Act 1988.
If your agent handles personal information — income, health details, addresses — you need to comply with the Australian Privacy Principles under the Privacy Act 1988. Hosting on AWS ap-southeast-2 (Sydney) is the safest approach. Cross-border disclosure of personal information requires either equivalent privacy protections in the destination country or explicit user consent.
A simple single-purpose agent with one or two tools takes 2–4 weeks from brief to live. A mid-complexity agent with multiple data integrations and a compliance review takes 8–12 weeks. A full enterprise agent with custom data pipelines and extensive compliance requirements takes 16–24 weeks.
Tool-calling (also called function-calling) is the mechanism that lets a language model request the execution of external functions — like calling an API, running a calculation, or querying a database. The model is given a list of available tools with descriptions, and it decides which to invoke based on the user's request. The tool result is returned to the model, which incorporates it into its response.
Running costs depend on query volume and complexity. A simple agent handling 1,000 queries per month typically costs $200–$600 in API fees plus $100–$300 in hosting. A mid-traffic agent at 10,000 queries/month costs $800–$3,000. Token usage is the primary driver — longer conversations and more tool calls cost more. Use prompt caching and conversation summarisation to reduce costs at scale.
← Signal 01: What is an AI agent? Signal 03: How much does it cost? →