Models & Providers
Configure LLM models from Gemini, OpenAI, Anthropic, and other providers with ADK-TS agents
ADK-TS provides flexible model integration, allowing you to use various Large Language Models (LLMs) with your agents. The framework defaults to Google Gemini models but supports extensive customization through two main approaches.
Model Integration Options
ADK-TS supports two primary ways to configure models:
🎯 Option 1: Direct Model Names
Pass model names directly to agents. Gemini is default, others require environment configuration
🔌 Option 2: Vercel AI SDK
Use model instances from Vercel AI SDK for extensive provider support
Option 1: Direct Model Names
The simplest approach: pass model names as strings directly to your agents. ADK-TS defaults to Gemini models but supports other providers when properly configured.
Default: Google Gemini Models (Easiest Setup)
For Gemini models (default), you only need to set the API key:
```bash
# .env file
GOOGLE_API_KEY=your_google_api_key_here
```

That's it! You can now use Gemini models with agents. The framework defaults to `gemini-2.0-flash`:
```typescript
import { LlmAgent } from "@iqai/adk";

// Uses the default Gemini model (gemini-2.0-flash)
const agent = new LlmAgent({
  name: "my_agent",
  description: "An agent using the default Gemini model",
  instruction: "You are a helpful assistant",
});

// Use a different Gemini model
const advancedAgent = new LlmAgent({
  name: "advanced_agent",
  description: "Using a more powerful Gemini model",
  model: "gemini-2.5-pro", // Just pass the model name
  instruction: "You are an expert analyst",
});

export { agent, advancedAgent };
```

Using Other Providers or Different Gemini Models
To use non-Gemini models or change the default Gemini model, you must configure both the model name and API key:
1. Set both the model name and corresponding API key in your .env file:
```bash
# .env file

# For OpenAI:
LLM_MODEL=gpt-4o
OPENAI_API_KEY=your_openai_api_key_here

# Or for Claude:
LLM_MODEL=claude-sonnet-4-5-20250929
ANTHROPIC_API_KEY=your_anthropic_api_key_here

# Or for Groq:
LLM_MODEL=llama-3.3-70b-versatile
GROQ_API_KEY=your_groq_api_key_here

# Or for a different Gemini model:
LLM_MODEL=gemini-2.5-pro
GOOGLE_API_KEY=your_google_api_key_here
```

2. Use the model in your agents:
```typescript
import { LlmAgent } from "@iqai/adk";

const { LLM_MODEL } = process.env;

// Using the environment-configured model
const agent = new LlmAgent({
  name: "my_agent",
  model: LLM_MODEL, // Will use whatever is set in .env
  instruction: "You are a helpful assistant",
});

// Or directly specify a model name
const openAiAgent = new LlmAgent({
  name: "openai_agent",
  model: "gpt-4o", // Direct model name
  instruction: "You are an expert assistant",
});

export { agent, openAiAgent };
```

How It Works
The framework automatically detects which LLM provider to use from the model name you pass. Set the API key for that provider and pass the model name; the framework handles the rest.
- Default Gemini: Only need `GOOGLE_API_KEY` (the framework defaults to `gemini-2.0-flash`)
- Different model: Set the corresponding API key and pass the model name (e.g., `model: "gpt-4o"`, `model: "claude-3-5-sonnet-20241022"`)
- Provider detection: The framework automatically recognizes OpenAI, Claude, Groq, and other providers from the model name
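The detection described above can be pictured as a simple prefix match on the model name. The sketch below is illustrative only; `detectProvider` and its mapping are assumptions for explanation, not ADK-TS's actual routing logic:

```typescript
// Hypothetical sketch of name-based provider detection: map a model-name
// prefix to the provider whose API key would be required.
type Provider = "google" | "openai" | "anthropic" | "groq" | "unknown";

function detectProvider(model: string): Provider {
  if (model.startsWith("gemini")) return "google";
  if (model.startsWith("gpt")) return "openai";
  if (model.startsWith("claude")) return "anthropic";
  if (model.startsWith("llama") || model.startsWith("mixtral")) return "groq";
  return "unknown";
}
```

A lookup like this is why no explicit provider field is needed: the model string alone carries enough information to choose the backend.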
Option 2: Vercel AI SDK Integration
For more control and advanced features, use model instances from the Vercel AI SDK. This approach provides access to multiple providers with consistent APIs and advanced capabilities.
Setup Requirements
1. Install Provider Packages:
```bash
# Install the providers you want to use
npm install @ai-sdk/openai     # For OpenAI models
npm install @ai-sdk/anthropic  # For Anthropic models
npm install @ai-sdk/mistral    # For Mistral models
```

2. Configure API Keys:
```bash
# .env file
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
MISTRAL_API_KEY=your_mistral_api_key_here
```

3. Use Model Instances:
```typescript
import { LlmAgent } from "@iqai/adk";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { mistral } from "@ai-sdk/mistral";

// OpenAI models
const gpt4Agent = new LlmAgent({
  name: "gpt4_agent",
  description: "GPT-4 powered assistant",
  model: openai("gpt-4o"),
  instruction: "You are a helpful assistant",
});

// Anthropic models
const claudeAgent = new LlmAgent({
  name: "claude_agent",
  description: "Claude powered assistant",
  model: anthropic("claude-3-5-sonnet-20241022"),
  instruction: "You are a helpful assistant",
});

// Mistral models
const mistralAgent = new LlmAgent({
  name: "mistral_agent",
  description: "Mistral powered assistant",
  model: mistral("mistral-large-latest"),
  instruction: "You are a helpful assistant",
});
```

Supported Providers
🤖 OpenAI
GPT-4o, GPT-4, GPT-3.5, and latest ChatGPT models
🧠 Anthropic
Claude 3.5 Sonnet, Claude 3 Opus, and Haiku models
🔥 Mistral
Mistral Large, Codestral, and specialized models
⚡ Groq
Ultra-fast inference for Llama, Mixtral, and Gemma models
🌐 Many Others
Google, Perplexity, Cohere, and other providers
The Vercel AI SDK supports many more providers beyond what's shown here. Check the official documentation for the complete list of supported providers and models.
Local & Open Source Models
Local and open-source models (such as Ollama or other self-hosted models) are also supported through the Vercel AI SDK approach. Install the appropriate provider package (`@ai-sdk/ollama`, etc.) and configure as needed. Note that not all local models support function calling reliably.
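As an illustrative sketch, a locally hosted model plugs in the same way as any other model instance. The provider package name follows the note above and the model ID `llama3.2` is an assumption; check the Vercel AI SDK provider list for the current package and your local model names:

```typescript
import { LlmAgent } from "@iqai/adk";
// Provider package name per the note above; verify it against the
// Vercel AI SDK provider list before installing.
import { ollama } from "@ai-sdk/ollama";

// Assumes an Ollama server is running locally and the model has
// already been pulled (e.g., `ollama pull llama3.2`).
const localAgent = new LlmAgent({
  name: "local_agent",
  description: "Assistant backed by a locally hosted model",
  model: ollama("llama3.2"),
  instruction: "You are a helpful assistant",
});

export { localAgent };
```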
Which Option Should You Choose?
| Use Case | Recommended Option | Why |
|---|---|---|
| Getting Started | Option 1 (Gemini default) | Simple setup, just need GOOGLE_API_KEY |
| Production Apps | Option 1 with env config | Simple, reliable, fewer dependencies |
| Multi-Provider | Option 2 (Vercel AI SDK) | Unified interface, consistent APIs |
| Advanced Features | Option 2 (Vercel AI SDK) | Streaming, advanced config, type safety |
| Local/Private Models | Option 2 (Vercel AI SDK) | Only option that supports local deployment |
Next Steps
🤖 Create Your First LLM Agent
Learn how to use models with LLM agents and get started building
🔧 Use Agent Builder
Rapidly create agents with the fluent API and model configuration
🛠️ Add Tools to Your Agents
Integrate tools with different model types for enhanced capabilities
👥 Build Multi-Agent Systems
Coordinate multiple agents with different models and specializations