Callbacks
Observe, customize, and control agent behavior with powerful callback mechanisms
Callbacks provide powerful mechanisms to hook into agent execution processes, allowing you to observe, customize, and control agent behavior at specific points without modifying core framework code.
Overview
Callbacks are standard TypeScript functions that you define and associate with agents during creation. The ADK framework automatically calls these functions at predefined execution stages, acting as checkpoints where you can intervene with custom logic.
Core Capabilities
- Observation: Monitor agent behavior and execution flow
- Customization: Modify data flowing through the agent
- Control: Bypass or override default agent behaviors
- Integration: Trigger external actions or services
- State Management: Read and update session state during execution
- Security: Implement guardrails and policy enforcement
Checkpoint Pattern
Callbacks function like checkpoints during agent execution: specific moments where you can inspect what's happening and decide whether to intervene.
How Callbacks Work
Callback Registration
Callbacks are registered during agent creation:
```typescript
import { LlmAgent, CallbackContext, LlmRequest, LlmResponse } from '@iqai/adk';

// Define callback functions
const beforeModelCallback = ({ callbackContext, llmRequest }: {
  callbackContext: CallbackContext;
  llmRequest: LlmRequest;
}): LlmResponse | undefined => {
  console.log(`Processing request for agent: ${callbackContext.agentName}`);

  // Return undefined to proceed normally
  // Return LlmResponse to skip the LLM call
  return undefined;
};

const afterAgentCallback = (callbackContext: CallbackContext) => {
  console.log(`Agent ${callbackContext.agentName} completed processing`);
  return undefined; // Allow normal completion
};

// Create agent with callbacks
const agent = new LlmAgent({
  name: "callback_agent",
  model: "gemini-2.5-flash",
  description: "Agent with callbacks",
  instruction: "You are helpful",
  beforeModelCallback,
  afterAgentCallback
});
```
Control Flow
Callbacks control execution through their return values:
- Return undefined: Allow normal execution to continue
- Return a specific object: Override or skip the default behavior
  - beforeAgentCallback → Content: Skip agent execution and use the returned content
  - beforeModelCallback → LlmResponse: Skip the LLM call and use the returned response
  - afterAgentCallback → Content: Replace the agent's output
  - afterModelCallback → LlmResponse: Replace the LLM response
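For example, a beforeAgentCallback can short-circuit an agent entirely by returning Content. The sketch below assumes beforeAgentCallback receives the CallbackContext directly (mirroring afterAgentCallback above) and uses a hypothetical maintenance_mode state flag; verify the exact signature against your @iqai/adk version.

```typescript
import { LlmAgent, CallbackContext, Content } from '@iqai/adk';

// Assumed signature: beforeAgentCallback receives the CallbackContext directly
const maintenanceGate = (callbackContext: CallbackContext): Content | undefined => {
  // Hypothetical state flag set elsewhere in your application
  if (callbackContext.state.get('maintenance_mode')) {
    // Returning Content skips agent execution and becomes the final output
    return {
      role: "model",
      parts: [{ text: "The assistant is temporarily unavailable for maintenance." }]
    };
  }
  return undefined; // Proceed with normal agent execution
};

const gatedAgent = new LlmAgent({
  name: "gated_agent",
  model: "gemini-2.5-flash",
  description: "Agent that can be short-circuited via a state flag",
  instruction: "You are helpful",
  beforeAgentCallback: maintenanceGate
});
```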
Callback Types
The framework provides different callback types for various execution stages:
🚀 Agent Lifecycle
Before and after complete agent processing with CallbackContext
🧠 Model Interactions
Before and after LLM communications with request/response access
🔧 Tool Execution
Before and after tool invocations with ToolContext
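Tool callbacks are registered the same way as the agent and model callbacks shown above. The sketch below assumes the agent accepts a beforeToolCallback option and that the callback receives a single object carrying the tool, its arguments, and a ToolContext; treat the parameter shape as an assumption and check it against your installed @iqai/adk version.

```typescript
import { LlmAgent } from '@iqai/adk';

// Assumed parameter shape: a single object with the tool, its args, and a ToolContext
// (ToolContext capabilities are covered later on this page)
const beforeToolCallback = ({ tool, args }: {
  tool: { name: string };
  args: Record<string, unknown>;
}) => {
  console.log(`Tool ${tool.name} invoked with`, args);
  // Return undefined to run the tool normally;
  // return an object to use it as the tool result and skip execution
  return undefined;
};

const toolHookedAgent = new LlmAgent({
  name: "tool_hooked_agent",
  model: "gemini-2.5-flash",
  description: "Agent that logs tool invocations",
  instruction: "Use your tools when helpful",
  beforeToolCallback
});
```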
Documentation Structure
📋 Callback Types
Detailed reference for all callback types and their contexts
🎨 Design Patterns
Common patterns for guardrails, caching, logging, and more
🔧 Context Patterns
Working with CallbackContext and ToolContext effectively
Quick Example
Here's a simple callback that adds logging and implements a basic guardrail:
```typescript
import { LlmAgent, CallbackContext, LlmRequest, LlmResponse, Content } from '@iqai/adk';

// Before model callback with guardrail
const guardrailCallback = ({ callbackContext, llmRequest }: {
  callbackContext: CallbackContext;
  llmRequest: LlmRequest;
}): LlmResponse | undefined => {
  // Log the request
  console.log(`[${callbackContext.invocationId}] Processing LLM request`);

  // Check for blocked content in the last user message
  const lastContent = llmRequest.contents?.[llmRequest.contents.length - 1];
  const lastMessage = lastContent?.parts?.[0]?.text || "";

  if (lastMessage.toLowerCase().includes("blocked")) {
    // Return a response to skip the LLM call
    return new LlmResponse({
      content: {
        role: "model",
        parts: [{ text: "I cannot process requests containing blocked content." }]
      }
    });
  }

  return undefined; // Proceed with normal LLM call
};

// After agent callback for state management
const stateManagementCallback = (callbackContext: CallbackContext): Content | undefined => {
  // Update interaction count in state
  const currentCount = callbackContext.state.get('interaction_count') || 0;
  callbackContext.state.set('interaction_count', currentCount + 1);

  console.log(`Interaction count: ${currentCount + 1}`);
  return undefined; // Use agent's original output
};

const agent = new LlmAgent({
  name: "guarded_agent",
  model: "gemini-2.5-flash",
  description: "Agent with guardrails and state tracking",
  instruction: "You are a helpful assistant",
  beforeModelCallback: guardrailCallback,
  afterAgentCallback: stateManagementCallback
});
```
Context Objects
Callbacks receive different context objects depending on their type:
CallbackContext
Used in agent lifecycle and LLM interaction callbacks:
- State Management: Read/write session state with automatic delta tracking
- Artifact Operations: Save and load files with saveArtifact() and loadArtifact()
- Invocation Metadata: Access to invocation ID, agent name, user content
- Event Actions: Control event generation and side effects
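A minimal sketch tying these capabilities together, assuming callbacks may be async and that saveArtifact(filename, part) accepts a filename plus a Part-like value such as { text }:

```typescript
import { CallbackContext, Content } from '@iqai/adk';

// An afterAgentCallback exercising CallbackContext (sketch; signatures assumed as noted above)
const auditCallback = async (callbackContext: CallbackContext): Promise<Content | undefined> => {
  // State management: read and write with automatic delta tracking
  const runs = (callbackContext.state.get('run_count') || 0) + 1;
  callbackContext.state.set('run_count', runs);

  // Invocation metadata
  console.log(`[${callbackContext.invocationId}] ${callbackContext.agentName} finished run #${runs}`);

  // Artifact operations: persist a small audit record (hypothetical filename);
  // loadArtifact('audit.txt') would read it back later
  await callbackContext.saveArtifact('audit.txt', {
    text: `Run ${runs} completed at ${new Date().toISOString()}`
  });

  return undefined; // Keep the agent's original output
};
```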
ToolContext
Used in tool execution callbacks, extends CallbackContext with:
- Function Call ID: Identifier for the specific tool invocation
- Authentication: Credential management with requestCredential()
- Artifact Listing: Enumerate session artifacts with listArtifacts()
- Memory Search: Query memory service with searchMemory()
- Summarization Control: Skip LLM summarization of tool results
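The sketch below shows a tool callback leaning on these extensions. It assumes the callback receives the ToolContext, that listArtifacts() resolves to a list of filenames, and that searchMemory(query) queries the configured memory service; check the exact signatures in your @iqai/adk version.

```typescript
import { ToolContext } from '@iqai/adk';

// A before-tool callback exercising ToolContext (sketch; signatures assumed as noted above)
const inspectToolContext = async ({ toolContext }: { toolContext: ToolContext }) => {
  // Enumerate artifacts already written in this session
  const artifacts = await toolContext.listArtifacts();
  console.log(`Session currently has ${artifacts.length} artifact(s)`);

  // Pull related context from the memory service (hypothetical query)
  const related = await toolContext.searchMemory("previous tool results");
  console.log("Related memories:", related);

  return undefined; // Run the tool normally
};
```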
Common Use Cases
Security & Compliance
- Input Validation: Check user inputs for safety and compliance
- Output Filtering: Remove sensitive information from responses (see the sketch after this list)
- Access Control: Verify user permissions before tool execution
- Audit Logging: Track all interactions for compliance
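The quick example above covers input validation; output filtering works the same way from the other side, in an afterModelCallback. The sketch below assumes that callback receives { callbackContext, llmResponse } and that the response content is exposed as llmResponse.content:

```typescript
import { CallbackContext, LlmResponse } from '@iqai/adk';

// Redact a simple SSN-like pattern before the response reaches the user
const outputFilterCallback = ({ callbackContext, llmResponse }: {
  callbackContext: CallbackContext;
  llmResponse: LlmResponse;
}): LlmResponse | undefined => {
  const text = llmResponse.content?.parts?.[0]?.text || "";

  if (/\b\d{3}-\d{2}-\d{4}\b/.test(text)) {
    console.log(`[${callbackContext.invocationId}] Redacted sensitive content`);
    // Returning an LlmResponse replaces the model's response
    return new LlmResponse({
      content: {
        role: "model",
        parts: [{ text: text.replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED]") }]
      }
    });
  }

  return undefined; // Pass the original response through unchanged
};
```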
Performance Optimization
- Response Caching: Cache LLM responses to avoid redundant calls (see the sketch after this list)
- Request Batching: Combine multiple requests for efficiency
- Resource Management: Monitor and control resource usage
- Early Termination: Stop processing when conditions are met
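As a concrete example of response caching, the pair of callbacks below uses session state as a naive cache keyed on the last user message. This is a sketch: it assumes an LlmResponse exposes its content and can be reconstructed from it, and that caching model output in session state is acceptable for your deployment.

```typescript
import { CallbackContext, LlmRequest, LlmResponse } from '@iqai/adk';

// Naive cache key: the text of the last message in the request
const cacheKey = (llmRequest: LlmRequest) =>
  `cache:${llmRequest.contents?.[llmRequest.contents.length - 1]?.parts?.[0]?.text ?? ""}`;

const cacheLookupCallback = ({ callbackContext, llmRequest }: {
  callbackContext: CallbackContext;
  llmRequest: LlmRequest;
}): LlmResponse | undefined => {
  const key = cacheKey(llmRequest);
  const cached = callbackContext.state.get(key);
  if (cached) {
    // Returning an LlmResponse skips the LLM call entirely
    return new LlmResponse({ content: cached });
  }
  // Remember the key so the after-model callback can store the fresh response
  callbackContext.state.set('pending_cache_key', key);
  return undefined;
};

const cacheStoreCallback = ({ callbackContext, llmResponse }: {
  callbackContext: CallbackContext;
  llmResponse: LlmResponse;
}): LlmResponse | undefined => {
  const key = callbackContext.state.get('pending_cache_key');
  if (key) {
    callbackContext.state.set(key, llmResponse.content);
  }
  return undefined; // Keep the original response
};
```

Register cacheLookupCallback as beforeModelCallback and cacheStoreCallback as afterModelCallback on the same agent.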
User Experience
- Dynamic Instructions: Adapt agent behavior based on user context
- Progress Tracking: Update users on long-running operations
- Error Recovery: Provide graceful fallbacks for failures
- Personalization: Customize responses based on user preferences (see the sketch after this list)
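A sketch of the personalization idea, assuming a hypothetical preferred_tone state key and that mutating llmRequest.contents in a beforeModelCallback is honored by the framework:

```typescript
import { CallbackContext, LlmRequest, LlmResponse } from '@iqai/adk';

const personalizationCallback = ({ callbackContext, llmRequest }: {
  callbackContext: CallbackContext;
  llmRequest: LlmRequest;
}): LlmResponse | undefined => {
  // Hypothetical state key set elsewhere (e.g., from a user profile)
  const preferredTone = callbackContext.state.get('preferred_tone');

  if (preferredTone && llmRequest.contents) {
    // Prepend a lightweight steering message before the LLM sees the request
    llmRequest.contents.unshift({
      role: "user",
      parts: [{ text: `Please respond in a ${preferredTone} tone.` }]
    });
  }

  return undefined; // Continue with the (possibly modified) request
};
```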
Best Practices
Design Principles
- Single Responsibility: Each callback should have one clear purpose
- Performance Awareness: Avoid blocking operations in callbacks
- Error Handling: Use try-catch blocks and graceful degradation (see the sketch after this list)
- State Management: Be deliberate about state changes and their scope
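For instance, a callback can wrap its own logic in try-catch so that a failure degrades to normal execution instead of breaking the run. The policy_json state key below is hypothetical:

```typescript
import { CallbackContext, LlmResponse } from '@iqai/adk';

const safeBeforeModelCallback = ({ callbackContext }: {
  callbackContext: CallbackContext;
}): LlmResponse | undefined => {
  try {
    // Hypothetical check that might throw (e.g., a malformed state entry)
    const policy = JSON.parse(callbackContext.state.get('policy_json') || '{}');
    if (policy.blockAll) {
      return new LlmResponse({
        content: { role: "model", parts: [{ text: "Requests are currently blocked by policy." }] }
      });
    }
  } catch (error) {
    // Graceful degradation: log and fall back to the normal LLM call
    console.error(`[${callbackContext.invocationId}] Callback error, continuing normally:`, error);
  }
  return undefined;
};
```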
Implementation Guidelines
- Type Safety: Use proper TypeScript types for all callback parameters
- Testing: Unit test callbacks with mock context objects (see the sketch below)
- Documentation: Document callback behavior and side effects clearly
- Idempotency: Design callbacks to be safe for retry scenarios
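For example, a callback that only touches state can be unit tested with a small mock cast to CallbackContext. The cast is a shortcut for this sketch; the real class has more members, and you can swap node:assert for your preferred test framework.

```typescript
import assert from 'node:assert/strict';
import { CallbackContext } from '@iqai/adk';

// Callback under test: same logic as the stateManagementCallback in the quick example
const stateManagementCallback = (callbackContext: CallbackContext) => {
  const currentCount = callbackContext.state.get('interaction_count') || 0;
  callbackContext.state.set('interaction_count', currentCount + 1);
  return undefined;
};

// Minimal mock providing only the members the callback uses
const store = new Map<string, unknown>();
const mockContext = {
  agentName: "test_agent",
  invocationId: "test-invocation",
  state: {
    get: (key: string) => store.get(key),
    set: (key: string, value: unknown) => { store.set(key, value); }
  }
} as unknown as CallbackContext;

const result = stateManagementCallback(mockContext);

assert.equal(result, undefined);                 // original output is kept
assert.equal(store.get('interaction_count'), 1); // state was incremented
```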