Quickstart
Build your first AI agent with ADK TypeScript in minutes
Prerequisites
Make sure you have installed ADK TypeScript before continuing.
Your First Agent
Let's start with the simplest possible agent - one that can answer questions:
Create Your Agent File
Create a new file called my-first-agent.ts:
import { env } from "node:process";
import { AgentBuilder } from "@iqai/adk";
import dedent from "dedent";
/**
 * Simple Agent Example
 *
 * The simplest way to create and use an AI agent with AgentBuilder.
 */
async function main() {
  const question = "What is the capital of France?";

  // The simplest possible usage - just model and ask!
  const response = await AgentBuilder.withModel(
    env.LLM_MODEL || "gemini-2.5-flash",
  ).ask(question);

  console.log(dedent`
    🤖 Simple Agent Example
    ═══════════════════════
    📝 Question: ${question}
    🤖 Response: ${response}
  `);
}

main().catch(console.error);
Run Your Agent
Run the file with tsx:
npx tsx my-first-agent.ts
Or with ts-node:
npx ts-node my-first-agent.ts
Or compile it with tsc and run the output with Node:
npx tsc my-first-agent.ts
node my-first-agent.js
Expected Output
You should see something like:
🤖 Simple Agent Example
═══════════════════════
📝 Question: What is the capital of France?
🤖 Response: The capital of France is Paris.
Understanding the Code
Let's break down what's happening:
Code Explanation
// 1. Import the AgentBuilder class
import { AgentBuilder } from "@iqai/adk";

// 2. Create an agent with a model and ask a question
const response = await AgentBuilder
  .withModel("gemini-2.5-flash") // Specify the LLM model
  .ask(question);                // Ask your question
That's it! AgentBuilder provides the simplest way to get started with AI agents.
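For a quick way to experiment, you can pass the question on the command line instead of hardcoding it. The following is a minimal sketch, not part of the quickstart files: it reuses the exact AgentBuilder call from above, and the argument handling is plain Node (the file name ask.ts is just a placeholder).

import { argv, env } from "node:process";
import { AgentBuilder } from "@iqai/adk";

async function main() {
  // Take the question from the command-line arguments, with a fallback default
  const question = argv.slice(2).join(" ") || "What is the capital of France?";

  // Same minimal AgentBuilder usage as in the example above
  const response = await AgentBuilder.withModel(
    env.LLM_MODEL || "gemini-2.5-flash",
  ).ask(question);

  console.log(response);
}

main().catch(console.error);

Run it with, for example, npx tsx ask.ts "Why is the sky blue?".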
Adding Tools
Now let's make your agent more powerful by adding tools. Create a new file:
import { env } from "node:process";
import {
  GoogleSearch,
  InMemorySessionService,
  LlmAgent,
  Runner,
} from "@iqai/adk";

async function main() {
  console.log("🛠️ Agent with Tools Example");
  console.log("═══════════════════════════");

  // Create an agent with tools
  const agent = new LlmAgent({
    name: "research_agent",
    model: env.LLM_MODEL || "gemini-2.5-flash",
    description: "A research agent that can search the web",
    instruction:
      "You are a helpful research assistant. Use the Google Search tool " +
      "when you need current information. Provide clear, well-sourced answers.",
    tools: [new GoogleSearch()],
  });

  // Set up a session and runner
  const sessionService = new InMemorySessionService();
  const session = await sessionService.createSession("quickstart-app", "user-1");
  const runner = new Runner({ agent, sessionService });

  // Ask a question that requires web search
  const query = "What are the latest developments in AI agents this year?";
  console.log(`📝 Query: ${query}\n`);

  // Run the agent and stream the response
  for await (const event of runner.runAsync({
    userId: "user-1",
    sessionId: session.id,
    newMessage: {
      parts: [{ text: query }],
    },
  })) {
    if (event.author === "research_agent" && event.content?.parts) {
      const content = event.content.parts
        .map((part) => part.text || "")
        .join("");
      if (content && !event.partial) {
        console.log("🤖 Response:", content);
        break;
      }
    }
  }
}

main().catch(console.error);
What's New Here?
- Tools: Added GoogleSearch for web searching capabilities
- Sessions: Using InMemorySessionService for conversation memory (see the follow-up sketch below)
- Streaming: Processing agent responses as they come in
- Instructions: Giving the agent specific guidance on how to behave
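Because the runner is bound to a session, you can keep the conversation going and the agent will see the earlier turns. Here is a rough sketch of a follow-up question, meant to go inside main() after the first loop in the example above; it assumes the same runner, session, and event shape shown there:

// Follow-up question in the same session - the agent can refer back to the earlier answer
const followUp = "Summarize that in one sentence.";

for await (const event of runner.runAsync({
  userId: "user-1",
  sessionId: session.id, // same session as the first query
  newMessage: {
    parts: [{ text: followUp }],
  },
})) {
  if (event.author === "research_agent" && event.content?.parts && !event.partial) {
    const content = event.content.parts.map((part) => part.text || "").join("");
    console.log("🤖 Follow-up:", content);
    break;
  }
}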
Configuration Options
You can customize your agent in many ways:
// Minimal agent
const response = await AgentBuilder
  .withModel("gemini-2.5-flash")
  .ask("Hello!");

// Agent with custom behavior
const response = await AgentBuilder
  .create("helpful_assistant")
  .withModel("gemini-2.5-flash")
  .withDescription("A friendly and helpful assistant")
  .withInstruction(
    "You are a helpful assistant. Always be polite and " +
    "provide clear, accurate information."
  )
  .ask("Hello!");

// Full-featured agent with tools and sessions
const { runner, session } = await AgentBuilder
  .create("research_agent")
  .withModel("gemini-2.5-flash")
  .withDescription("A research agent with web search")
  .withInstruction("Use tools to find accurate information")
  .withTools(new GoogleSearch())
  .withQuickSession("my-app", "user-123")
  .build();
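The full-featured variant returns a runner and session instead of a direct answer. As a sketch of how you might drive them, continuing directly from the snippet above inside an async function and assuming the same event shape as the streaming example earlier:

// Use the runner and session returned by .build()
for await (const event of runner.runAsync({
  userId: "user-123",
  sessionId: session.id,
  newMessage: { parts: [{ text: "What happened in AI research this week?" }] },
})) {
  if (event.author === "research_agent" && event.content?.parts && !event.partial) {
    console.log(event.content.parts.map((part) => part.text || "").join(""));
    break;
  }
}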
Common Patterns
Here are some common patterns you'll use:
Environment Configuration
# Specify your preferred model
LLM_MODEL=gemini-2.5-flash
# Or use other supported models
# LLM_MODEL=gemini-1.5-pro
# LLM_MODEL=gpt-4
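The examples read these values with env from node:process, which only sees variables that are actually set in the process environment. If you keep them in a .env file, one option (an assumption about your setup, not something ADK requires) is to load it with the dotenv package at the top of your entry file:

// Load .env into process.env before anything reads it (npm install dotenv)
import "dotenv/config";
import { env } from "node:process";

console.log(env.LLM_MODEL); // now reflects the value from .env, if set

Recent Node versions can also load the file natively via the --env-file flag.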
Error Handling
async function safeAgentCall() {
  try {
    const response = await AgentBuilder
      .withModel("gemini-2.5-flash")
      .ask("What is the weather like?");
    console.log("Success:", response);
  } catch (error) {
    console.error("Agent error:", error);
    // Handle the error appropriately
  }
}
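For transient failures such as rate limits or network hiccups, you may prefer to retry instead of giving up immediately. A minimal retry sketch - the attempt count and backoff are arbitrary choices, and only the AgentBuilder call itself comes from the examples above:

import { AgentBuilder } from "@iqai/adk";

async function askWithRetry(question: string, attempts = 3) {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await AgentBuilder
        .withModel("gemini-2.5-flash")
        .ask(question);
    } catch (error) {
      lastError = error;
      // Simple linear backoff: wait 1s, 2s, 3s... before the next attempt
      await new Promise((resolve) => setTimeout(resolve, 1000 * (i + 1)));
    }
  }
  throw lastError;
}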
Multiple Questions
async function multipleQuestions() {
  const agent = AgentBuilder
    .create("qa_agent")
    .withModel("gemini-2.5-flash")
    .withInstruction("Provide concise, accurate answers");

  const questions = [
    "What is TypeScript?",
    "How do AI agents work?",
    "What is the capital of Japan?",
  ];

  for (const question of questions) {
    const answer = await agent.ask(question);
    console.log(`Q: ${question}`);
    console.log(`A: ${answer}\n`);
  }
}
Next Steps
Congratulations! You've created your first AI agent. Here's what to explore next:
- 🛠️ Add More Tools: Learn about built-in tools and how to create custom ones
- 🤖 Advanced Agents: Explore LLM agents, workflow agents, and multi-agent systems
- 💾 Sessions & Memory: Add persistent conversations and memory to your agents
- 📊 Real Examples: See complete working examples with different patterns
Troubleshooting
Common Issues
Model Access
If you get authentication errors, make sure you have access to the Gemini models or configure appropriate API keys in your environment.
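One way to fail fast is to check for the key before running anything. The variable name below is only an assumption - use whatever your model provider expects (for example, Gemini setups commonly use GOOGLE_API_KEY):

import { env, exit } from "node:process";

// Hypothetical check - adjust GOOGLE_API_KEY to the variable your provider requires
if (!env.GOOGLE_API_KEY) {
  console.error("Missing GOOGLE_API_KEY - set it in your environment before running the agent.");
  exit(1);
}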
Dependencies
If you encounter import errors, ensure all dependencies are installed:
npm install @iqai/adk dedent