LLM Agents
Use language models for reasoning, decision-making, and tool usage
LLM agents are the core "thinking" components in ADK TypeScript. They use Large Language Models (LLMs) for reasoning, understanding natural language, making decisions, and interacting with tools.
Unlike deterministic workflow agents that follow predefined paths, LLM agents are dynamic. They interpret instructions and context to decide how to proceed, which tools to use (if any), or whether to transfer control to another agent.
Building an effective LLM agent involves defining its identity, guiding its behavior through instructions, and equipping it with the necessary tools and capabilities.
Defining Identity and Purpose
First, establish what the agent is and what it is for.
- Name (required): A unique identifier for the agent. Critical for multi-agent systems where agents may delegate tasks. Avoid reserved names like `user`.
- Description (optional; recommended for multi-agent): A concise summary of the agent's capabilities, used by other agents for routing. Make it specific enough to differentiate it from its peers.
- Model (required): The LLM powering the agent's reasoning (for example, `gemini-2.0-flash`). Model choice affects capability, cost, and latency. See Models.
```typescript
import { LlmAgent } from "@iqai/adk";

const capitalAgent = new LlmAgent({
  name: "capital_agent",
  model: "gemini-2.5-flash",
  description: "Answers questions about the capital city of a given country",
});
```
Guiding Behavior with Instructions
The `instruction` shapes the agent's behavior. It can be a string or a function that returns a string, and it should clarify:
- Core task or goal
- Persona (for example, helpful assistant, succinct analyst)
- Constraints (for example, scope limits, safety guidelines)
- Tool usage guidance: when and why to call specific tools
- Output format (for example, JSON, bullet list)
Tips for Effective Instructions
Be clear and specific, structure complex guidance with Markdown, include examples for tricky formats, and explicitly describe when and why tools should be used.
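Since the instruction can also be a function, you can compute the prompt at runtime, for example from session state. A minimal sketch — the exact context argument ADK passes to instruction functions is an assumption here, so adapt the type to your version's signature:

```typescript
// Hypothetical context shape — the real type ADK passes to an
// instruction function may differ; check your ADK version.
type InstructionContext = { state: Record<string, unknown> };

// Build the instruction dynamically from session state.
const capitalInstruction = (ctx: InstructionContext): string => {
  const tone = (ctx.state["tone"] as string) ?? "concise";
  return [
    "You are an agent that provides the capital city of a country.",
    `Answer in a ${tone} tone.`,
  ].join("\n");
};
```

The returned string is used as the prompt; you would pass the function as `instruction: capitalInstruction` when constructing the agent.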
You can interpolate state into instruction templates when needed:
- `{var}` inserts the value of a state variable named `var`
- `{artifact.var}` inserts the text content of an artifact named `var`
- Append `?` (for example, `{var?}`) to ignore missing values instead of raising an error
```typescript
import { LlmAgent } from "@iqai/adk";
import dedent from "dedent";

const capitalAgent = new LlmAgent({
  name: "capital_agent",
  model: "gemini-2.5-flash",
  description: "Answers questions about capital cities",
  instruction: dedent`
    You are an agent that provides the capital city of a country.

    When a user asks for the capital of a country:
    1. Identify the country name from the user's query.
    2. Use available tools if needed to look up the capital.
    3. Respond clearly, stating the capital city.

    Example Query: "What's the capital of {country?}?"
    Example Response: "The capital of France is Paris."
  `,
});
```
Equipping the Agent with Tools
Tools extend the agent beyond text-only reasoning to perform actions, fetch data, or compute results.
- tools (optional): A list of callable tools available to the agent. Items can be:
  - Function-based tools (for computations and simple integrations)
  - Class-based tools extending a base tool abstraction
  - Agent tools enabling delegation to other agents (see Multi-Agents)
The LLM decides which tool to call using tool names, descriptions, and parameter schemas, guided by the agent's instructions and conversation context. Learn more in Tools.
```typescript
import { createTool, LlmAgent } from "@iqai/adk";
import { z } from "zod";

// Define a function tool with a typed parameter schema
const getCapitalCity = createTool({
  name: "get_capital_city",
  description: "Retrieves the capital city for a given country",
  schema: z.object({ country: z.string().describe("Country name") }),
  fn: async ({ country }) => {
    const capitals: Record<string, string> = {
      france: "Paris",
      japan: "Tokyo",
      canada: "Ottawa",
    };
    const key = country.toLowerCase();
    return capitals[key] || `Unknown capital for ${country}`;
  },
});

// Attach the tool to the agent
const capitalAgent = new LlmAgent({
  name: "capital_agent",
  model: "gemini-2.5-flash",
  description: "Answers questions about capital cities",
  instruction: "Use tools when necessary to ensure accuracy.",
  tools: [getCapitalCity],
});
```
Advanced Configuration and Control
Fine-tune agent behavior with the following options.
Generation Control
Configure the underlying model's generation (temperature, output length, sampling, and safety policies). Refer to provider-specific options in Models.
```typescript
import { LlmAgent } from "@iqai/adk";

const agent = new LlmAgent({
  name: "controlled_generation",
  model: "gemini-2.5-flash",
  description: "Demonstrates generation controls",
  generateContentConfig: {
    temperature: 0.2,
    maxOutputTokens: 250,
  },
});
```
Structured Data: Input/Output Schemas
For structured exchanges:
- Input schema (optional): Enforce the expected input shape. If set, upstream messages must provide JSON matching this schema.
- Output schema (optional): Enforce the agent's final response to match a schema (for example, strict JSON response contracts).
- Output key (optional): Persist the agent's final text output into session state under a given key to pass results between steps or agents.
```typescript
import { LlmAgent } from "@iqai/adk";
import { z } from "zod";

const CapitalOutput = z.object({
  capital: z.string().describe("The capital of the country"),
});

const structuredAgent = new LlmAgent({
  name: "structured_capital_agent",
  model: "gemini-2.5-flash",
  description: "Returns capital in JSON format",
  instruction:
    'Return ONLY a JSON object matching schema: { "capital": "..." }',
  outputSchema: CapitalOutput,
  outputKey: "found_capital",
});
```
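An input schema can be enforced in the same way. The sketch below assumes an `inputSchema` option that mirrors `outputSchema`; verify the property name against your ADK version:

```typescript
import { LlmAgent } from "@iqai/adk";
import { z } from "zod";

// Assumption: `inputSchema` mirrors `outputSchema` — check your ADK version.
const CountryInput = z.object({
  country: z.string().describe("The country to look up"),
});

const validatedAgent = new LlmAgent({
  name: "validated_capital_agent",
  model: "gemini-2.5-flash",
  description: "Expects structured country input",
  instruction:
    'Input arrives as JSON like { "country": "..." }. Reply with the capital.',
  inputSchema: CountryInput,
});
```

With this in place, upstream messages that do not parse against `CountryInput` are rejected before the model sees them.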
Managing Context
Control whether prior conversation history is sent to the LLM via `includeContents`:

- `default`: Include relevant conversation history
- `none`: Do not include prior contents; the agent operates only on the current input and instruction (useful for stateless or isolated tasks)
```typescript
import { LlmAgent } from "@iqai/adk";

const statelessAgent = new LlmAgent({
  name: "stateless_agent",
  model: "gemini-2.5-flash",
  description: "Stateless responses",
  instruction: "Answer based only on the current query.",
  includeContents: "none",
});
```
Planning
Enable multi-step reasoning and planning before execution.
- Built-in planning: Use model-native planning capabilities when available
- Plan-and-Act patterns: Instruct the model to outline a plan, then take actions (like calling tools), and produce a final answer
Choose a planning strategy based on model capabilities and the complexity of your tasks.
```typescript
import { LlmAgent, BuiltInPlanner, PlanReActPlanner } from "@iqai/adk";

// Built-in planner (model-native thinking when available)
const thinkingAgent = new LlmAgent({
  name: "thinking_agent",
  model: "gemini-2.5-flash",
  planner: new BuiltInPlanner({ thinkingConfig: { includeThinking: true } }),
});

// PlanReAct pattern (explicit plan → actions → final answer)
const planreactAgent = new LlmAgent({
  name: "strategic_planner",
  model: "gemini-2.5-flash",
  planner: new PlanReActPlanner(),
});
```
Code Execution
Enable execution of code blocks emitted by the model to evaluate snippets or run tasks during agent execution. See Built-in Tools for available executors and usage considerations.
```typescript
import { LlmAgent, BuiltInCodeExecutor } from "@iqai/adk";
import dedent from "dedent";

const codeAgent = new LlmAgent({
  name: "code_executor",
  model: "gemini-2.5-flash",
  description: "Agent with code execution capabilities",
  instruction: dedent`
    When solving computational tasks, write Python code blocks and execute them.
    Explain results clearly and handle errors gracefully.
  `,
  codeExecutor: new BuiltInCodeExecutor(),
  disallowTransferToParent: true,
  disallowTransferToPeers: true,
});
```
Putting It Together
A typical LLM agent configuration combines identity (name, description, model), a clear instruction, optional tools, and—when needed—generation controls, schemas, context management, planning, or code execution.
```typescript
import { LlmAgent, createTool } from "@iqai/adk";
import { z } from "zod";
import dedent from "dedent";

// Tool
const getCapitalCity = createTool({
  name: "get_capital_city",
  description: "Retrieves the capital city for a given country",
  schema: z.object({ country: z.string() }),
  fn: async ({ country }) =>
    ({ france: "Paris", japan: "Tokyo" }[country.toLowerCase()] || "Unknown"),
});

// Output schema
const CapitalOutput = z.object({ capital: z.string() });

// Agent
const capitalAgent = new LlmAgent({
  name: "capital_agent",
  model: "gemini-2.5-flash",
  description: "Provides capital cities with tool assistance",
  instruction: dedent`
    Identify the country and provide its capital.
    Prefer tool lookups when uncertain. Respond clearly.
  `,
  tools: [getCapitalCity],
  generateContentConfig: { temperature: 0.2, maxOutputTokens: 200 },
  outputSchema: CapitalOutput,
  outputKey: "found_capital",
});
```
Related Topics
- Generation Control: Configure temperature, max tokens, sampling, and safety policies
- Structured Data: Define input/output schemas and persist outputs via state keys
- Context Management: Control conversation history inclusion for stateless or stateful tasks
- Tools: Available tools, agent tools, and how to create custom tools
- Multi-Agent Systems: Coordinate agents, delegate tasks, and use system-wide instructions
- Callbacks: Hook into agent execution for monitoring and control