Agent Builder
Fluent API for rapid agent creation with automatic session management and smart defaults
AgentBuilder provides a fluent, chainable API for rapid agent creation and configuration. While LLM Agents give you maximum control and are recommended for most use cases, AgentBuilder shines when you need quick prototyping, automatic session management, or want to create multi-agent workflows without boilerplate.
Unlike direct agent instantiation, AgentBuilder handles session creation, memory management, and configuration defaults automatically, letting you focus on building great agent experiences rather than infrastructure setup.
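The fluent pattern behind this API can be sketched in a few lines of plain TypeScript. This is an illustration of the chaining technique only, not AgentBuilder's actual internals:

```typescript
// Minimal sketch of a fluent builder: each `with*` method records
// configuration and returns `this`, so calls chain naturally.
class TinyBuilder {
  private config: { name?: string; model?: string; instruction?: string } = {};

  static create(name: string): TinyBuilder {
    const builder = new TinyBuilder();
    builder.config.name = name;
    return builder;
  }

  withModel(model: string): this {
    this.config.model = model;
    return this;
  }

  withInstruction(instruction: string): this {
    this.config.instruction = instruction;
    return this;
  }

  build(): { name?: string; model?: string; instruction?: string } {
    // A real builder would also create sessions, runners, etc. here.
    return { ...this.config };
  }
}

const built = TinyBuilder.create("demo")
  .withModel("gemini-2.5-flash")
  .withInstruction("Be helpful")
  .build();
```

The key design choice is that every configuration method returns `this`, which is what lets AgentBuilder defer all real work (session creation, runner wiring) to a single `build()` call at the end of the chain.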
When to Use AgentBuilder
Use AgentBuilder for rapid prototyping, automatic session management, multi-agent workflows, or when you want smart defaults. Use LLM Agents directly when you need maximum control over configuration, memory, or sessions, or when building production systems with specific requirements.
Quick Start
The simplest way to get started is with the convenience method:
import { AgentBuilder } from "@iqai/adk";
import * as dotenv from "dotenv";
dotenv.config();
async function main() {
const response = await AgentBuilder.withModel("gemini-2.5-flash").ask(
"Hello, what can you help me with?"
);
console.log("response: ", response);
}
main().catch(console.error);
For more control, use the full builder pattern:
import { AgentBuilder } from "@iqai/adk";
import * as dotenv from "dotenv";
dotenv.config();
async function main() {
const { runner } = await AgentBuilder.create("my_assistant")
.withModel("gemini-2.5-flash")
.withInstruction("You are a helpful research assistant")
.build();
const response = await runner.ask("What is quantum computing?");
console.log("response: ", response);
}
main().catch(console.error);
Configuration Options
AgentBuilder provides a comprehensive set of configuration methods organized by functionality. All methods are optional except where noted in the usage patterns below.
Core Configuration
Basic agent setup and behavior:
| Method | Type | Description |
|---|---|---|
| `create(name)` | `string` | Creates a named builder instance |
| `withModel(model)` | `string \| BaseLlm \| LanguageModel` | Sets the LLM model |
| `withDescription(description)` | `string` | Adds agent description |
| `withInstruction(instruction)` | `string \| InstructionProvider` | Sets behavior instructions |
| `withAgent(agent)` | `BaseAgent` | Wraps an existing agent |
Input/Output Configuration
Data validation, schemas, and response formatting:
| Method | Type | Description |
|---|---|---|
| `withInputSchema(schema)` | `ZodSchema` | Input validation schema |
| `withOutputSchema(schema)` | `ZodType<T>` | Structured output format |
| `withOutputKey(outputKey)` | `string` | Session-state key under which the agent's output is stored |
Tools & Execution
External capabilities and code execution:
| Method | Type | Description |
|---|---|---|
| `withTools(...tools)` | `BaseTool[]` | Adds tools to the agent |
| `withSubAgents(subAgents)` | `BaseAgent[]` | Adds sub-agents to the agent |
| `withPlanner(planner)` | `BasePlanner` | Sets the planner for the agent |
| `withCodeExecutor(codeExecutor)` | `BaseCodeExecutor` | Enables code execution |
Callback Methods
Monitoring and customization hooks:
| Method | Type | Description |
|---|---|---|
| `withBeforeAgentCallback(callback)` | `BeforeAgentCallback` | Runs before agent execution |
| `withAfterAgentCallback(callback)` | `AfterAgentCallback` | Runs after agent execution |
| `withBeforeModelCallback(callback)` | `BeforeModelCallback` | Runs before each model call |
| `withAfterModelCallback(callback)` | `AfterModelCallback` | Runs after each model response |
| `withBeforeToolCallback(callback)` | `BeforeToolCallback` | Runs before each tool execution |
| `withAfterToolCallback(callback)` | `AfterToolCallback` | Runs after each tool execution |
Session & Memory
State management and persistence:
| Method | Type | Description |
|---|---|---|
| `withMemory(memoryService)` | `BaseMemoryService` | Adds long-term memory |
| `withSessionService(service, options)` | `BaseSessionService, SessionOptions` | Custom session management |
| `withSession(session)` | `Session` | Uses an existing session instance |
| `withQuickSession(options)` | `SessionOptions` | In-memory session with custom IDs |
| `withArtifactService(artifactService)` | `BaseArtifactService` | File storage capability |
| `withRunConfig(config)` | `RunConfig \| Partial<RunConfig>` | Configures runtime behavior |
Multi-Agent Workflows
Agent orchestration patterns (mutually exclusive - choose one):
| Method | Type | Description |
|---|---|---|
| `asSequential(subAgents)` | `BaseAgent[]` | Creates a sequential workflow |
| `asParallel(subAgents)` | `BaseAgent[]` | Creates parallel execution |
| `asLoop(subAgents, maxIterations)` | `BaseAgent[], number` | Creates iterative execution |
| `asLangGraph(nodes, rootNode)` | `LangGraphNode[], string` | Creates graph-based workflows |
Build Methods
Final agent construction (choose one to complete the configuration):
| Method | Type | Description |
|---|---|---|
| `build()` | `Promise<BuiltAgent>` | Builds the configured agent |
| `buildWithSchema()` | `Promise<BuiltAgent>` | Type-safe build with schema |
| `ask(message)` | `Promise<RunnerAskReturn>` | Quick execution helper |
Requirement Patterns
AgentBuilder supports three different usage patterns, each with different requirements:
✅ Pattern 1: Named Agent (Recommended)
- Required: `create(name)` + `withModel()`
- Use when: You want a named agent for multi-agent systems or production use
const { runner } = await AgentBuilder.create("my_agent")
.withModel("gemini-2.5-flash")
.build();
✅ Pattern 2: Quick Start
- Required: `withModel()` only
- Use when: Rapid prototyping or simple one-off tasks
const response = await AgentBuilder.withModel("gemini-2.5-flash").ask("Hello!");
console.log("response: ", response);
✅ Pattern 3: Wrap Existing Agent
- Required: `create(name)` + `withAgent()`
- Use when: Adding AgentBuilder features to existing agents
import { existingLlmAgent } from "./existingLlmAgent";
const agent = await AgentBuilder.create("my_assistant")
.withAgent(existingLlmAgent)
.build();
All other configuration methods are optional and can be chained as needed with any pattern. Note that when wrapping an existing agent, its own configuration (model, tools, instructions) is used as-is.
Configuration Options Details
create(name)
Type: string | Default: Auto-generated name
Creates a named builder instance that will generate an LLM agent with the specified name. The name serves as both a unique identifier and helps with debugging in multi-agent systems where multiple agents interact.
The name must follow JavaScript identifier rules (start with letter/underscore, contain only letters, numbers, underscores). Choose descriptive names that clearly indicate the agent's purpose.
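The identifier rule above can be checked with a simple regex. This is our own sketch of the documented rule, not the library's actual validation code:

```typescript
// Hypothetical validator mirroring the documented naming rules:
// start with a letter or underscore, then letters, digits, or underscores.
const isValidAgentName = (name: string): boolean =>
  /^[A-Za-z_][A-Za-z0-9_]*$/.test(name);

const validExample = isValidAgentName("research_assistant"); // passes
const invalidHyphen = isValidAgentName("my-agent"); // hyphens are rejected
const invalidLeadingDigit = isValidAgentName("123agent"); // cannot start with a digit
```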
const builder = await AgentBuilder.create("research_assistant").withModel(
"gemini-2.5-flash"
);
const builder2 = await AgentBuilder.create("_data_processor2").withModel(
"gemini-2.5-flash"
);
// Invalid names like "123agent" or "my-agent" would throw errors
withModel(model)
Type: string | BaseLlm | LanguageModel
Specifies the Large Language Model that powers your agent's reasoning and text generation capabilities. You can provide a simple string identifier for common models, a configured BaseLlm instance for custom settings, or a Vercel AI SDK LanguageModel object for advanced features.
Your model choice significantly affects response quality, speed, and cost. See Models & Providers for detailed configuration options and available models.
import { AgentBuilder, OpenAiLlm } from "@iqai/adk";
// Example 1: Using a string model identifier (most common)
const agent1 = await AgentBuilder.create("my_agent")
.withModel("gemini-2.5-flash")
.withDescription("An agent using Gemini")
.build();
// Example 2: Using a custom LLM instance
const agent2 = await AgentBuilder.create("my_agent")
.withModel(new OpenAiLlm("gpt-4o"))
.withDescription("An agent using OpenAI GPT-4")
.build();
withDescription(description)
Type: string | Default: ""
Adds a brief description that explains the agent's capabilities and purpose. This is particularly important in multi-agent systems where parent agents use descriptions to make intelligent routing decisions. The description should clearly differentiate your agent from others.
const agent = await AgentBuilder.create("financial_analyst")
.withModel("gemini-2.5-flash")
.withDescription(
"Specializes in financial data analysis and investment recommendations"
)
.build();
withInstruction(instruction)
Type: string | InstructionProvider | Default: ""
Defines the agent's behavior, decision-making patterns, and interaction style. This is your most important configuration as it transforms a generic LLM into a specialized agent with distinct expertise and personality.
Instructions can be static strings for consistent behavior or dynamic functions that adapt based on context. See Agent Instructions Guide for comprehensive guidance on writing effective instructions.
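Placeholders such as `{user_preferences}` are filled in from session state at runtime. The substitution idea can be sketched as follows; this is a simplification for intuition, not ADK's actual templating code:

```typescript
// Replace {key} placeholders in an instruction template with
// values from a session-state object; unknown keys are left intact.
const interpolate = (
  template: string,
  state: Record<string, string>
): string =>
  template.replace(/\{(\w+)\}/g, (match, key) => state[key] ?? match);

const instruction = interpolate(
  "You are a travel guide. User Preferences: {user_preferences}",
  { user_preferences: "budget trips, warm weather" }
);
```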
// Static instruction
const financeAgent = await AgentBuilder.create("financial_analyst")
.withModel("gemini-2.5-flash")
.withInstruction(
"You are a financial advisor. Provide clear, actionable investment advice."
)
.build();
// Dynamic instruction using template literals
const travelAgent = await AgentBuilder.create("travel_agent")
.withModel("gemini-2.5-flash")
.withInstruction(
`You are a travel guide.
User Preferences: {user_preferences}
Suggest personalized travel itineraries based on user preferences.`
)
.build();
withAgent(agent)
Type: BaseAgent
Wraps an existing agent instance with AgentBuilder's session management and runner interface. This is useful when you have pre-configured agents but want AgentBuilder's automatic session management and convenient runner interface.
import { AgentBuilder, LlmAgent } from "@iqai/adk";
const existingAgent = new LlmAgent({
name: "my_agent",
model: "gemini-2.5-flash",
description: "An agent for general tasks",
});
const agent = await AgentBuilder.create("my_agent")
.withAgent(existingAgent)
.build();
Configuration After Wrapping
When using withAgent(), subsequent configuration methods like withModel()
or withTools() are ignored. The existing agent's configuration is used
as-is.
withInputSchema(schema)
Type: ZodSchema | Default: undefined
Validates and structures input data before processing, ensuring your agent receives properly formatted data. This provides type safety and automatic validation for complex input requirements.
import { z } from "zod";
const inputSchema = z.object({
query: z.string().min(1),
filters: z.array(z.string()).optional(),
maxResults: z.number().min(1).max(100).default(10),
});
const agent = await AgentBuilder.create("coordinator_agent")
.withModel("gemini-2.5-flash")
.withDescription("A coordinator agent that processes user queries")
.withInputSchema(inputSchema)
.build();
// The agent will validate inputs against the defined schema
withOutputSchema(schema)
Type: ZodType<T> | Default: undefined
Enforces structured JSON output with TypeScript type safety, ensuring predictable, parseable responses. Perfect for API integrations and data processing workflows. Important: This disables tool usage as the agent can only generate structured responses.
import { z } from "zod";
const outputSchema = z.object({
summary: z.string(),
confidence: z.number().min(0).max(1),
categories: z.array(z.string()),
});
const agent = await AgentBuilder.create("coordinator_agent")
.withModel("gemini-2.5-flash")
.withDescription("An agent that summarizes and categorizes text")
.withOutputSchema(outputSchema)
.build();
// The agent's responses will now conform to the defined output schema
withOutputKey(outputKey)
Type: string | Default: undefined
Sets the key in session state where the agent's output will be stored. This makes the agent's response available to subsequent agents in multi-agent workflows, enabling data flow between agents.
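The mechanism can be pictured as a shared state object keyed by the output key: one agent writes under the key, and later agents read from it. This is a conceptual sketch only, not the session implementation:

```typescript
// Conceptual sketch: session state as a plain object shared across steps.
const sessionState: Record<string, string> = {};

const runAnalyzer = (input: string): void => {
  // An agent configured with withOutputKey("analysis") would store
  // its response under that key, roughly like this:
  sessionState["analysis"] = `Analysis of: ${input}`;
};

runAnalyzer("quarterly report");
// A subsequent agent in the workflow can now read the stored output:
const nextAgentInput = sessionState["analysis"];
```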
const agent = await AgentBuilder.create("analyzer_agent")
.withModel("gemini-2.5-flash")
.withDescription(
"An agent that analyzes documents and provides structured output"
)
.withOutputKey("analysis")
.build();
// The agent's output will be stored in session state under the "analysis" key
withTools(...tools)
Type: BaseTool[] | Default: []
Adds tools that dramatically extend your agent's capabilities beyond text generation, enabling interaction with external systems, API calls, calculations, and code execution. You can provide multiple tools in a single call, mixing different tool types.
Accepts BaseTool instances (built-in or custom), FunctionTool wrappers for easy integration, or raw async functions that get automatically wrapped. The agent's LLM intelligently decides when and how to use each tool based on context. See Tools documentation for available tools and creating custom ones.
import {
AgentBuilder,
FileOperationsTool,
GoogleSearch,
createTool,
} from "@iqai/adk";
import { z } from "zod";
import * as dotenv from "dotenv";
dotenv.config();
const getWeather = async (city: string): Promise<string> => {
// Mock implementation of a weather fetching function
return `The current weather in ${city} is sunny with a temperature of 25°C.`;
};
const weatherTool = createTool({
name: "get_weather",
description: "Get current weather",
schema: z.object({
city: z.string().describe("The city to get weather for"),
}),
fn: async (params) => getWeather(params.city),
});
async function main() {
const { runner } = await AgentBuilder.create("my_agent")
.withModel("gemini-2.5-flash")
.withTools(
new GoogleSearch(), // Built-in tool
new FileOperationsTool(), // Another built-in tool
weatherTool
)
.build();
const response = await runner.ask("Hello, what can you help me with?");
console.log("response: ", response);
}
main().catch(console.error);
withSubAgents(subAgents)
Type: BaseAgent[] | Default: []
Adds specialized sub-agents that the main agent can delegate specific tasks to. This creates a hierarchical agent system where the main agent acts as a coordinator.
import { AgentBuilder, LlmAgent } from "@iqai/adk";
const researchAgent = new LlmAgent({
name: "researcher_agent",
model: "gemini-2.5-flash",
description: "Specialize in research tasks",
});
const analysisAgent = new LlmAgent({
name: "analyst_agent",
model: "gemini-2.5-flash",
description: "Specialize in data analysis tasks",
});
async function main() {
const agent = await AgentBuilder.create("coordinator_agent")
.withModel("gemini-2.5-flash")
.withDescription("An agent coordinating research and analysis tasks")
.withSubAgents([researchAgent, analysisAgent])
.build();
}
main().catch(console.error);
withPlanner(planner)
Type: BasePlanner | Default: undefined
Adds strategic planning capabilities that help agents break down complex tasks into manageable steps. Planners analyze requests and create execution strategies before proceeding with the actual work.
import {
AgentBuilder,
FileOperationsTool,
GoogleSearch,
PlanReActPlanner,
} from "@iqai/adk";
const agent = await AgentBuilder.create("strategic_agent")
.withModel("gemini-2.5-flash")
.withDescription(
"A strategic agent that uses planning and tools to accomplish complex tasks"
)
.withPlanner(new PlanReActPlanner())
.withTools(new GoogleSearch(), new FileOperationsTool())
.build();
withCodeExecutor(codeExecutor)
Type: BaseCodeExecutor | Default: undefined
Enables your agent to write and execute code in a secure, sandboxed environment. This dramatically expands problem-solving capabilities beyond text generation, making agents capable of data analysis, calculations, visualizations, and dynamic computation.
import { AgentBuilder, BuiltInCodeExecutor } from "@iqai/adk";
const agent = await AgentBuilder.create("strategic_agent")
.withModel("gemini-2.5-flash")
.withDescription(
"A strategic agent that uses planning and tools to accomplish complex tasks"
)
.withCodeExecutor(new BuiltInCodeExecutor())
.build();
withBeforeAgentCallback(callback)
Type: BeforeAgentCallback | Default: undefined
Executes before the agent processes any request. Perfect for logging, authentication, or context preparation.
const agent = await AgentBuilder.create("monitored_agent")
.withModel("gemini-2.5-flash")
.withDescription("An agent with monitoring capabilities")
.withBeforeAgentCallback(async (context) => {
console.log("Before agent execution:", context);
return undefined;
})
.build();
withAfterAgentCallback(callback)
Type: AfterAgentCallback | Default: undefined
Executes after the agent completes processing. Ideal for logging results, cleanup, or post-processing.
const agent = await AgentBuilder.create("monitored_agent")
.withModel("gemini-2.5-flash")
.withDescription("An agent with monitoring capabilities")
.withAfterAgentCallback(async (context) => {
console.log("After agent execution:", context);
return undefined;
})
.build();
withBeforeModelCallback(callback)
Type: BeforeModelCallback | Default: undefined
Executes before each LLM model call. Useful for request modification, caching checks, or usage tracking.
const agent = await AgentBuilder.create("monitored_agent")
.withModel("gemini-2.5-flash")
.withDescription("An agent with monitoring capabilities")
.withBeforeModelCallback(async ({ callbackContext, llmRequest }) => {
console.log("Before model execution:", { callbackContext, llmRequest });
return undefined;
})
.build();
withAfterModelCallback(callback)
Type: AfterModelCallback | Default: undefined
Executes after each LLM model response. Perfect for response processing, caching, or usage analytics.
const agent = await AgentBuilder.create("monitored_agent")
.withModel("gemini-2.5-flash")
.withDescription("An agent with monitoring capabilities")
.withAfterModelCallback(async ({ callbackContext, llmResponse }) => {
console.log("After model execution:", { callbackContext, llmResponse });
return undefined;
})
.build();
withBeforeToolCallback(callback)
Type: BeforeToolCallback | Default: undefined
Executes before each tool execution. Useful for authorization, logging, or parameter validation.
const agent = await AgentBuilder.create("monitored_agent")
.withModel("gemini-2.5-flash")
.withDescription("An agent with monitoring capabilities")
.withBeforeToolCallback(async (tool, args, context) => {
console.log("Before tool execution:", context);
return undefined;
})
.build();
withAfterToolCallback(callback)
Type: AfterToolCallback | Default: undefined
Executes after each tool execution. Perfect for result processing, error handling, or usage tracking.
const agent = await AgentBuilder.create("monitored_agent")
.withModel("gemini-2.5-flash")
.withDescription("An agent with monitoring capabilities")
.withAfterToolCallback(async (tool, args, context, response) => {
console.log("After tool execution:", context);
return undefined;
})
.build();
withMemory(memoryService)
Type: BaseMemoryService | Default: undefined
Adds long-term memory storage that persists information across conversations and sessions. Memory enables agents to remember user preferences, learned insights, and important context from previous interactions.
Available memory services include InMemoryMemoryService for simple in-memory storage and VertexAiRagMemoryService for Vertex AI RAG-based memory with semantic search capabilities. Particularly valuable for personal assistants, customer service agents, and knowledge workers that need to build relationships over time. See Sessions & Memory for memory configuration options.
import { AgentBuilder, InMemoryMemoryService } from "@iqai/adk";
const agent = await AgentBuilder.create("my_agent")
.withModel("gemini-2.5-flash")
.withDescription(
"An agent that utilizes memory for enhanced context retention"
)
.withMemory(new InMemoryMemoryService())
.build();
withSessionService(service, options)
Type: BaseSessionService, SessionOptions | Default: Auto-created in-memory session
Provides custom session management for conversation state, message history, and ephemeral data during a single session. AgentBuilder creates in-memory sessions automatically, but you can customize this for persistence or multi-tenant applications.
Available session services include InMemorySessionService for simple in-memory storage, DatabaseSessionService for database-backed persistence, and VertexAiSessionService for Vertex AI integration. Sessions are lighter-weight than memory and typically reset between conversations. See Sessions & Memory for session configuration.
import { AgentBuilder, createDatabaseSessionService } from "@iqai/adk";
const agent = await AgentBuilder.create("my_agent")
.withModel("gemini-2.5-flash")
.withDescription("An agent with persistent session storage")
.withSessionService(createDatabaseSessionService("sqlite://sessions.db"))
.build();
Using Together: You can combine both for comprehensive state management - sessions for current conversation context and memory for long-term retention:
import {
AgentBuilder,
InMemoryMemoryService,
createDatabaseSessionService,
} from "@iqai/adk";
const agent = await AgentBuilder.create("my_agent")
.withModel("gemini-2.5-flash")
.withDescription("An advanced agent with memory and session management")
.withMemory(new InMemoryMemoryService()) // Long-term insights
.withSessionService(createDatabaseSessionService("sqlite://sessions.db")) // Persistent sessions
.build();
withSession(session)
Type: Session | Default: undefined
Provides a specific session instance for conversation state management. Use this when you need to continue an existing conversation or share sessions between agents. Requires withSessionService() to be called first.
import { AgentBuilder, InMemorySessionService } from "@iqai/adk";
import * as dotenv from "dotenv";
dotenv.config();
// First set up the session service
const sessionService = new InMemorySessionService();
async function main() {
// Create a session with some initial state
const existingSession = await sessionService.createSession(
"my_app",
"user-123",
{ conversationCount: 1 },
"session-456"
);
const agent = await AgentBuilder.create("continuing_agent")
.withModel("gemini-2.5-flash")
.withDescription("An agent that continues existing conversations")
.withSessionService(sessionService) // Required before withSession()
.withSession(existingSession) // Use the existing session
.build();
// User asks the agent to pick up the previous conversation
const response = await agent.runner.ask(
"Can you summarize where we left off?"
);
console.log("response: ", response);
}
main().catch(console.error);
withQuickSession(options)
Type: SessionOptions | Default: {}
Creates an in-memory session with optional identifiers for user, app, state, or a custom session ID. Handy when you need lightweight session tracking without configuring a dedicated session service.
const { runner } = await AgentBuilder.create("quick_agent")
.withDescription("An agent that uses quick sessions for user interactions")
.withModel("gemini-2.5-flash")
.withQuickSession({
userId: "user-456",
appName: "quick-chat",
state: { greetingCount: 1 },
sessionId: "session-456",
})
.build();
// Automatically manages the session using the provided identifiers
// User asks the agent to recall their prior exchange
const response = await runner.ask("Remember this conversation");
console.log("response: ", response);
withArtifactService(artifactService)
Type: BaseArtifactService | Default: undefined
Provides artifact storage and management capabilities for documents, images, and generated content. Use InMemoryArtifactService for in-memory storage or GcsArtifactService for cloud-based persistence. Essential for agents that work with files across multiple conversations.
import { AgentBuilder, InMemoryArtifactService } from "@iqai/adk";
const { runner } = await AgentBuilder.create("document_agent")
.withModel("gemini-2.5-flash")
.withDescription("An agent that manages files and documents")
.withArtifactService(new InMemoryArtifactService())
.build();
// User asks the agent to work with files
const response = await runner.ask("Process and store this document");
console.log("response: ", response);
withRunConfig(config)
Type: RunConfig | Partial<RunConfig> | Default: {}
Configures runtime behavior including streaming mode, LLM call limits, input/output audio transcription, and other execution parameters. Essential for production deployments with specific performance and execution requirements.
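A limit like `maxLlmCalls` can be understood as a call counter that aborts the run once the budget is exhausted. This is a conceptual sketch of the safeguard, not the runner's real implementation:

```typescript
// Sketch of an LLM-call budget: count each call, throw past the cap.
const makeCallGuard = (maxCalls: number) => {
  let calls = 0;
  return (): void => {
    calls += 1;
    if (calls > maxCalls) {
      throw new Error(`Exceeded maxLlmCalls (${maxCalls})`);
    }
  };
};

const guard = makeCallGuard(3);
guard(); // call 1: fine
guard(); // call 2: fine
guard(); // call 3: fine
// a fourth guard() call would throw
```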
import { AgentBuilder, StreamingMode } from "@iqai/adk";
import * as dotenv from "dotenv";
dotenv.config();
async function main() {
const { runner } = await AgentBuilder.create("production_agent")
.withModel("gemini-2.5-flash")
.withRunConfig({
streamingMode: StreamingMode.SSE,
maxLlmCalls: 100,
saveInputBlobsAsArtifacts: true,
})
.build();
// User asks the agent to process a request with configured limits
const response = await runner.ask("Process this data with safeguards");
console.log("response: ", response);
}
main().catch(console.error);
asSequential(subAgents)
Type: BaseAgent[]
Transforms your builder into a sequential workflow where agents execute in order, with each agent receiving the output of the previous one. Perfect for pipeline workflows like research → analysis → report generation.
See Sequential Agents for detailed patterns and examples.
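The pipeline semantics, where each agent consumes the previous agent's output, can be sketched with plain async functions. This illustrates the execution order only, not ADK's workflow engine:

```typescript
// Each stage is an async transform; its output feeds the next stage.
type Stage = (input: string) => Promise<string>;

const runSequential = async (
  stages: Stage[],
  input: string
): Promise<string> => {
  let current = input;
  for (const stage of stages) {
    current = await stage(current); // strictly in order
  }
  return current;
};

const research: Stage = async (topic) => `findings about ${topic}`;
const analyze: Stage = async (findings) => `analysis of ${findings}`;
const report: Stage = async (analysis) => `report on ${analysis}`;

const result = await runSequential(
  [research, analyze, report],
  "quantum computing"
);
```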
import { getResearchAgent } from "./agents/researcher-agent.js";
import { getAnalysisAgent } from "./agents/analyzer-agent.js";
import { getReportingAgent } from "./agents/reporter-agent.js";
const researcherAgent = getResearchAgent();
const analyzerAgent = getAnalysisAgent();
const reporterAgent = getReportingAgent();
const agent = await AgentBuilder.create("my_agent")
.withModel("gemini-2.5-flash")
.withDescription("A multi-step agent for research, analysis, and reporting")
.asSequential([researcherAgent, analyzerAgent, reporterAgent])
.build();
asParallel(subAgents)
Type: BaseAgent[]
Creates a parallel execution pattern where multiple agents run simultaneously on the same input. Useful when you need different perspectives or want to process multiple aspects of a task concurrently.
See Parallel Agents for coordination patterns.
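The fan-out behavior, where every agent receives the same input and runs concurrently, maps naturally onto `Promise.all`. A conceptual sketch, not ADK's scheduler:

```typescript
// All agents receive the same input and run concurrently;
// results come back in the same order as the agents array.
type Agent = (input: string) => Promise<string>;

const runParallel = (agents: Agent[], input: string): Promise<string[]> =>
  Promise.all(agents.map((agent) => agent(input)));

const sentiment: Agent = async (text) => `sentiment(${text})`;
const topics: Agent = async (text) => `topics(${text})`;

const results = await runParallel([sentiment, topics], "launch announcement");
```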
import { getAnalysisAgent } from "./agents/analysis-agent.js";
import { getResearchAgent } from "./agents/research-agent.js";
import { getReportingAgent } from "./agents/reporting-agent.js";
const sentimentAnalyzer = getAnalysisAgent();
const topicExtractor = getResearchAgent();
const summaryGenerator = getReportingAgent();
const agent = await AgentBuilder.create("my_agent")
.withModel("gemini-2.5-flash")
.withDescription(
"An agent that performs sentiment analysis, topic extraction, and summarization"
)
.asParallel([sentimentAnalyzer, topicExtractor, summaryGenerator])
.build();
asLoop(subAgents, maxIterations)
Type: BaseAgent[], number
Creates an iterative execution pattern where agents repeat until a condition is met or maximum iterations reached. Essential for problem-solving workflows that require refinement and improvement.
See Loop Agents for termination conditions and patterns.
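The solve-then-validate cycle with an iteration cap can be sketched as a bounded loop. This shows the control flow only; real loop agents signal termination through ADK's own mechanisms:

```typescript
// Repeat solve -> validate until the validator accepts or we hit the cap.
type Step = {
  solve: (draft: string) => string;
  validate: (draft: string) => boolean;
};

const runLoop = (step: Step, input: string, maxIterations: number): string => {
  let draft = input;
  for (let i = 0; i < maxIterations; i++) {
    draft = step.solve(draft);
    if (step.validate(draft)) break; // termination condition met early
  }
  return draft;
};

const result = runLoop(
  {
    solve: (d) => d + "+", // hypothetical "refinement" step
    validate: (d) => d.length >= 5, // stop once "good enough"
  },
  "x",
  10
);
```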
import { getProblemSolverAgent } from "./agents/problem-solver-agent.js";
import { getValidatorAgent } from "./agents/validator-agent.js";
const problemSolver = getProblemSolverAgent();
const validator = getValidatorAgent();
const agent = await AgentBuilder.create("my_agent")
.withModel("gemini-2.5-flash")
.withDescription(
"An agent that iteratively solves and validates problems"
)
.asLoop([problemSolver, validator], 5) // Max 5 iterations
.build();
asLangGraph(nodes, rootNode)
Type: LangGraphNode[], string
Creates complex, graph-based workflows with conditional branching, loops, and dynamic routing. This is the most powerful option for sophisticated multi-agent orchestration. See LangGraph Agents for detailed examples and patterns.
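For the linear case shown in the example below, execution amounts to following `next` pointers from the root node until a node has no successor. A simplified traversal sketch (real LangGraph workflows also support conditional routing):

```typescript
// Follow `next` pointers from the root node, threading the value through.
interface GraphNode {
  id: string;
  next: string | null;
  run: (input: string) => string;
}

const runGraph = (
  nodes: GraphNode[],
  rootId: string,
  input: string
): string => {
  const byId = new Map<string, GraphNode>(nodes.map((n) => [n.id, n]));
  let current = byId.get(rootId);
  let value = input;
  while (current) {
    value = current.run(value);
    current = current.next ? byId.get(current.next) : undefined;
  }
  return value;
};

const output = runGraph(
  [
    { id: "a", next: "b", run: (v) => v + " then A" },
    { id: "b", next: null, run: (v) => v + " then B" },
  ],
  "a",
  "start"
);
```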
import { AgentBuilder, LlmAgent, LangGraphAgent } from "@iqai/adk";
const sentimentAgent = new LlmAgent({
name: "sentiment_analysis_agent",
description: "Analyzes sentiment of text",
});
const topicAgent = new LlmAgent({
name: "topic_extraction_agent",
description: "Extracts topics from text",
});
const summaryAgent = new LlmAgent({
name: "summarization_agent",
description: "Summarizes text",
});
const workflowNodes = [
{
id: "start-node",
name: "sentiment_analysis",
agent: sentimentAgent,
next: "topic-extraction-node",
},
{
id: "topic-extraction-node",
name: "topic_extraction",
agent: topicAgent,
next: "summarization-node",
},
{
id: "summarization-node",
name: "summarization",
agent: summaryAgent,
next: null,
},
];
const agent = await AgentBuilder.create("my_agent")
.withModel("gemini-2.5-flash")
.withDescription(
"An agent that performs sentiment analysis, topic extraction, and summarization"
)
.asLangGraph(workflowNodes, "start-node")
.build();
Choosing Agent Types
You can only use ONE agent type method per builder. If you don't use any
agent type method, AgentBuilder creates a single LLM agent with your
configuration.
build()
Type: Promise<BuiltAgent>
Creates the configured agent and optionally returns the runner and session for immediate use. This is the most common way to finalize your agent configuration.
const { agent, runner, session } = await AgentBuilder.create("my_agent")
.withModel("gemini-2.5-flash")
.withDescription("A helpful research assistant")
.build();
// Use runner for conversations
// User asks the agent for the latest AI news
const response = await runner.ask("Search for recent AI news");
// Access agent directly for advanced usage
console.log("Agent name:", agent.name);
console.log("Session app:", session.appName);
console.log("Session user:", session.userId);
buildWithSchema()
Type: <T>() => Promise<BuiltAgent<T, TMulti>>
Builds an agent with typed output schema, providing full TypeScript type safety for responses. The runner's ask() method returns properly typed results. Use this after calling withOutputSchema() to define the response structure.
import { z } from "zod";
const responseSchema = z.object({
summary: z.string(),
confidence: z.number(),
sources: z.array(z.string()),
});
const { runner } = await AgentBuilder.create("typed_agent")
.withModel("gemini-2.5-flash")
.withOutputSchema(responseSchema)
.buildWithSchema();
// TypeScript knows the exact return type
const result = await runner.ask("What is machine learning?");
console.log(result.summary, result.confidence, result.sources);
ask(message)
Type: (message: string | FullMessage) => Promise<string>
Convenience method that builds the agent and immediately processes a single message. Returns a string response directly without needing to access a runner object. Perfect for one-off queries, rapid prototyping, or simple interactions.
import { AgentBuilder, GoogleSearch } from "@iqai/adk";
// Quick one-liner for simple queries
const response = await AgentBuilder.create("quick_agent")
.withModel("gemini-2.5-flash")
.withTools(new GoogleSearch())
.ask("What are the latest AI developments in 2025?");
console.log("response: ", response); // Direct string response
When to Use AgentBuilder vs LLM Agents
Use AgentBuilder When
- Rapid prototyping - Need to test ideas quickly without configuration overhead
- Automatic session management - Want sessions handled automatically with smart defaults
- Multi-agent workflows - Building sequential, parallel, or loop patterns
- Learning and experimentation - Getting started with ADK-TS concepts
- Simple applications - Basic agents without complex requirements
Use LLM Agents When
- Production systems - Need precise control over configuration and behavior
- Custom memory/sessions - Specific requirements for data persistence and management
- Complex integrations - Integrating with existing systems and architectures
- Performance optimization - Fine-tuning for specific performance requirements
- Advanced features - Need access to all configuration options and callbacks
Migration Path
Start with AgentBuilder for rapid development, then migrate to direct LLM
Agents when you need more control. AgentBuilder essentially creates LLM
agents under the hood with smart defaults.
Complete Configuration Example
Here's AgentBuilder with multiple configuration options showcasing the full range of capabilities:
import {
AgentBuilder,
InMemoryMemoryService,
BuiltInCodeExecutor,
GoogleSearch,
StreamingMode,
} from "@iqai/adk";
import { config } from "dotenv";
// Load environment variables from .env file
config();
const { runner } = await AgentBuilder.create("advanced_agent")
.withModel("gemini-2.5-flash")
.withDescription("Advanced research and analysis agent")
.withInstruction("You are a thorough research assistant")
.withTools(new GoogleSearch())
.withCodeExecutor(new BuiltInCodeExecutor())
.withMemory(new InMemoryMemoryService())
.withQuickSession({
userId: "user-123",
appName: "research-app",
})
.withRunConfig({
streamingMode: StreamingMode.SSE,
maxLlmCalls: 50,
saveInputBlobsAsArtifacts: true,
})
.withBeforeAgentCallback(async (context) => {
console.log(`Starting research task: ${context.sessionId}`);
return undefined;
})
.withAfterToolCallback(async (tool, args, context, response) => {
console.log(`Tool ${tool.name} completed`);
return undefined;
})
.build();
// User asks the agent to research the requested topic
const result = await runner.ask("Research quantum computing trends");
Related Topics
🤖 LLM Agents
Direct agent configuration with maximum control
🛠️ Tools
Available tools and creating custom ones
🔗 Sequential Agents
Execute agents in order for pipeline workflows
⚡ Parallel Agents
Run multiple agents simultaneously
🔄 Loop Agents
Repeat agent execution until conditions are met
🧠 Sessions & Memory
Manage conversation state and long-term memory