Models

Configure and use different LLM models with your agents

ADK TypeScript provides flexible model integration, allowing you to use various Large Language Models (LLMs) with your agents. You can use Google Gemini models directly or integrate models from other providers through the Vercel AI SDK.

Model Integration Methods

ADK TypeScript supports two primary integration approaches:

🎯 Direct Integration

Use Google Gemini models with simple string identifiers

🔌 Vercel AI SDK

Access models from multiple providers through Vercel AI SDK

Google Gemini Models

The most straightforward way to use Google's flagship models with ADK TypeScript.

Access Methods

Google AI Studio

Best for: Rapid prototyping and development

Requirements:

  • Google API key
  • Simple environment variable setup

Features:

  • Easy to get started
  • Quick iteration and testing
  • Direct API access

Vertex AI

Best for: Production applications

Requirements:

  • Google Cloud project
  • Application Default Credentials
  • Enterprise-grade setup

Features:

  • Enterprise security and compliance
  • Advanced monitoring and scaling
  • Integration with Google Cloud services

Available Models

  • Gemini 2.0 Flash: Latest high-speed model for most use cases
  • Gemini 2.5 Pro: Powerful model for complex reasoning tasks
  • Gemini 1.5 Pro: Stable model with large context windows
  • Live API Models: Special models supporting voice/video streaming

Model Selection

Choose Flash models for speed and efficiency, Pro models for complex reasoning, and Live API models for real-time audio/video applications.
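
In code, direct integration just means passing the model identifier string to LlmAgent. A minimal sketch, assuming GOOGLE_API_KEY is set in your environment for Google AI Studio access:

import { LlmAgent } from "@iqai/adk";

// Gemini models are referenced by their string identifier.
// Assumes GOOGLE_API_KEY is set for Google AI Studio access.
const agent = new LlmAgent({
  name: "gemini_agent",
  model: "gemini-2.0-flash",
  instruction: "You are a helpful assistant"
});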

Vercel AI SDK Integration

Access models from OpenAI, Anthropic, Google, and other providers through the Vercel AI SDK integration.

Supported Providers

🤖 OpenAI

GPT-4o, GPT-4, GPT-3.5, and latest ChatGPT models

🧠 Anthropic

Claude 3.5 Sonnet, Claude 3 Opus, and Haiku models

🔥 Mistral

Mistral Large, Codestral, and specialized models

🌐 Many Others

Google, Groq, Perplexity, Cohere, and other providers

Setup Requirements

  1. Install Provider Packages: Install the AI SDK provider package for each provider you want to use:
    • OpenAI: npm install @ai-sdk/openai
    • Anthropic: npm install @ai-sdk/anthropic
    • Google: npm install @ai-sdk/google
    • Mistral: npm install @ai-sdk/mistral
  2. Configure API Keys: Set the environment variables your chosen providers expect (for example OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY, MISTRAL_API_KEY)
  3. Use in LlmAgent: Pass the configured model instance to LlmAgent

Example Usage

First, install the required provider packages:

# Install the providers you want to use
npm install @ai-sdk/openai @ai-sdk/anthropic @ai-sdk/google

Then use them in your agents:

import { LlmAgent } from "@iqai/adk";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { google } from "@ai-sdk/google";

// OpenAI GPT-4.1
const openaiAgent = new LlmAgent({
  name: "openai_agent",
  model: openai("gpt-4.1"),
  instruction: "You are a helpful assistant"
});

// Anthropic Claude
const claudeAgent = new LlmAgent({
  name: "claude_agent",
  model: anthropic("claude-3-5-sonnet-20241022"),
  instruction: "You are a helpful assistant"
});

// Google Gemini
const geminiAgent = new LlmAgent({
  name: "gemini_agent",
  model: google("gemini-2.0-flash"),
  instruction: "You are a helpful assistant"
});

Benefits of Vercel AI SDK Integration

  • Unified Interface: Consistent API across all providers
  • Type Safety: Full TypeScript support with proper type definitions
  • Streaming Support: Built-in streaming capabilities for real-time responses
  • Tool Calling: Native function calling support across providers
  • Active Maintenance: Regularly updated with new providers and features
  • Performance Optimized: Efficient request handling and response processing

Local and Open Source Models

Run models locally for privacy, cost control, or offline operation.

Local Deployment Options

🦙 Ollama

Easy local model deployment and management

🏗️ Self-Hosted

Custom model server deployments

☁️ Private Cloud

Models hosted in your own infrastructure

Considerations for Local Models

  • Tool Support: Ensure your chosen model supports function calling
  • Performance: Consider hardware requirements for model size
  • Reliability: Local deployments depend on your own hardware and uptime rather than a managed service
  • Model Quality: Open source models vary in capability and consistency

Tool Compatibility

When using local models with tools, verify that the model supports function calling. Not all open source models have reliable tool support.
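
As one concrete illustration, Ollama exposes an OpenAI-compatible endpoint, so a locally served model can reuse the AI SDK integration shown above. This is a sketch under assumptions: Ollama is running locally (ollama serve) and the llama3.1 model has been pulled; verify that the model you choose supports function calling before attaching tools.

import { LlmAgent } from "@iqai/adk";
import { createOpenAI } from "@ai-sdk/openai";

// Point the OpenAI provider at Ollama's OpenAI-compatible endpoint.
// Assumes `ollama serve` is running and `ollama pull llama3.1` was done.
const ollama = createOpenAI({
  baseURL: "http://localhost:11434/v1",
  apiKey: "ollama" // Ollama ignores the key, but the provider expects one
});

const localAgent = new LlmAgent({
  name: "local_agent",
  model: ollama("llama3.1"),
  instruction: "You are a helpful assistant"
});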

Model Configuration

Generation Parameters

Control how models generate responses:

  • Temperature: Controls randomness; lower values (near 0.0) give focused, repeatable output, higher values give more varied, creative output
  • Max Tokens: Maximum response length
  • Top-P/Top-K: Advanced sampling parameters
  • Safety Settings: Content filtering and safety controls
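
In ADK TypeScript these parameters are typically set on the agent's generation config. The sketch below assumes a generateContentConfig option that mirrors the Google GenAI SDK's GenerateContentConfig; treat the field names as assumptions and check the LlmAgent API reference for your version:

import { LlmAgent } from "@iqai/adk";

// Assumed configuration shape; verify against the LlmAgent API reference.
const tunedAgent = new LlmAgent({
  name: "tuned_agent",
  model: "gemini-2.0-flash",
  instruction: "You are a helpful assistant",
  generateContentConfig: {
    temperature: 0.2,      // low randomness for repeatable answers
    maxOutputTokens: 1024, // cap response length
    topP: 0.95             // nucleus sampling cutoff
  }
});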

Performance Optimization

  • Model Selection: Choose appropriate model size for your use case
  • Caching: Implement response caching for repeated queries (see the sketch after this list)
  • Batching: Group requests when possible
  • Rate Limiting: Respect provider rate limits
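
Caching in particular can live entirely outside the agent. A minimal in-memory sketch, where the key scheme and helper are illustrative rather than part of ADK:

// Minimal in-memory response cache keyed by the exact prompt text.
// Illustrative only; production code would bound the size and expire entries.
const responseCache = new Map<string, string>();

async function cachedAsk(
  prompt: string,
  ask: (prompt: string) => Promise<string>
): Promise<string> {
  const hit = responseCache.get(prompt);
  if (hit !== undefined) return hit; // repeated query: skip the model call
  const answer = await ask(prompt);
  responseCache.set(prompt, answer);
  return answer;
}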

Cost Management

  • Model Tier Selection: Balance cost vs capability
  • Request Optimization: Minimize unnecessary model calls
  • Usage Monitoring: Track costs and usage patterns
  • Efficient Prompting: Design prompts for optimal token usage

Best Practices

Model Selection Strategy

  1. Start Simple: Begin with fast, lower-cost models (such as Gemini Flash) for development
  2. Test Thoroughly: Validate model performance with your specific use case
  3. Consider Latency: Factor in response time requirements
  4. Evaluate Quality: Test output quality across different scenarios

Environment Management

  • Separate Environments: Use different models for dev/staging/production
  • Configuration Management: Use environment variables for model selection (see the sketch after this list)
  • Fallback Strategies: Implement backup models for reliability
  • Monitoring: Track model performance and availability
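
For example, model selection can be driven by an environment variable with a safe default, so dev, staging, and production can use different models without code changes. The variable name ADK_MODEL is illustrative:

import { LlmAgent } from "@iqai/adk";

// Hypothetical variable name; pick whatever fits your config scheme.
const modelName = process.env.ADK_MODEL ?? "gemini-2.0-flash";

const agent = new LlmAgent({
  name: "env_configured_agent",
  model: modelName,
  instruction: "You are a helpful assistant"
});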

Security Considerations

  • API Key Management: Secure storage and rotation of API keys
  • Network Security: Protect communications with model providers
  • Data Privacy: Consider data residency and privacy requirements
  • Audit Trails: Log model interactions for compliance

Troubleshooting

Common Issues

  • Authentication Errors: Check API keys and credentials
  • Rate Limiting: Implement proper retry logic (see the sketch after this list)
  • Model Availability: Verify model endpoints are accessible
  • Version Compatibility: Ensure model versions match your requirements
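
Retry logic for rate limits can be a small helper wrapped around your model calls. A generic exponential-backoff sketch, not an ADK API:

// Generic exponential-backoff retry; not part of ADK itself.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err;
      // Wait 500ms, 1s, 2s, ... before the next attempt.
      const delay = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}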

Performance Issues

  • Monitor response times and adjust model selection
  • Check network connectivity and latency
  • Verify proper resource allocation for local models
  • Consider model warming strategies for improved performance
