Callback Types
Understanding different types of callbacks and when each type fires in the agent lifecycle
The framework provides different types of callbacks that trigger at various stages of an agent's execution. Understanding when each callback fires and what context it receives is key to using them effectively.
Agent Lifecycle Callbacks
These callbacks are available on any agent that inherits from BaseAgent (including LlmAgent, SequentialAgent, ParallelAgent, LoopAgent, etc.).
Before Agent Callback
When: Called immediately before the agent's runAsyncImpl method is executed. It runs after the agent's InvocationContext is created but before its core logic begins.
Purpose: Ideal for setting up resources or state needed only for this specific agent's run, performing validation checks on the session state, logging the entry point of the agent's activity, or potentially skipping the agent execution entirely.
Signature:
type BeforeAgentCallback = (callbackContext: CallbackContext) => Promise<Content | undefined> | Content | undefined;
Example:
import { CallbackContext, Content } from '@iqai/adk';

const beforeAgentCallback = (callbackContext: CallbackContext): Content | undefined => {
  // Check if the agent should be skipped
  const skipAgent = callbackContext.state.get('skip_agent');
  if (skipAgent) {
    // Return content to skip agent execution
    return {
      role: 'model',
      parts: [{ text: 'Agent execution was skipped based on state.' }]
    };
  }

  // Log agent start
  console.log(`Starting agent: ${callbackContext.agentName}`);
  return undefined; // Proceed with normal execution
};
Gatekeeping Pattern: beforeAgentCallback acts as a gatekeeper, allowing you to intercept execution before a major step and potentially prevent it based on checks like state, input validation, or permissions.
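To wire the gatekeeper in, pass it as beforeAgentCallback when constructing the agent. A minimal sketch that reuses the callback defined above; the agent name, model, and instruction are placeholder values chosen for this example:

import { LlmAgent } from '@iqai/adk';

// Hypothetical wiring; beforeAgentCallback is the part that matters here
const guardedAgent = new LlmAgent({
  name: 'guarded_agent',
  model: 'gemini-2.5-flash',
  description: 'Agent guarded by a before-agent check',
  instruction: 'You are helpful',
  beforeAgentCallback // returns Content to short-circuit, undefined to proceed
});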
After Agent Callback
When: Called immediately after the agent's runAsyncImpl method successfully completes. It does not run if the agent was skipped due to beforeAgentCallback returning content, or if endInvocation was set during the agent's run.
Purpose: Useful for cleanup tasks, post-execution validation, logging the completion of an agent's activity, modifying final state, or augmenting/replacing the agent's final output.
Signature:
type AfterAgentCallback = (callbackContext: CallbackContext) => Promise<Content | undefined> | Content | undefined;
Example:
const afterAgentCallback = (callbackContext: CallbackContext): Content | undefined => {
  // Update completion count
  const count = callbackContext.state.get('completion_count') || 0;
  callbackContext.state.set('completion_count', count + 1);

  // Log completion
  console.log(`Agent ${callbackContext.agentName} completed. Total completions: ${count + 1}`);

  // Optionally modify the output
  const shouldAddNote = callbackContext.state.get('add_completion_note');
  if (shouldAddNote) {
    return {
      role: 'model',
      parts: [{ text: 'Task completed successfully with additional processing.' }]
    };
  }

  return undefined; // Use original output
};
Post-Processing Pattern: afterAgentCallback allows post-processing or modification of the agent's output. You can inspect the result of a step and decide whether to let it pass through, change it, or completely replace it.
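As an illustration of the two lifecycle hooks cooperating through state, the following sketch records a start timestamp in a before-agent callback and logs the elapsed time in the matching after-agent callback. The run_started_at state key is an arbitrary name invented for this example:

import { CallbackContext, Content } from '@iqai/adk';

const timingBeforeAgent = (callbackContext: CallbackContext): Content | undefined => {
  // Stash the start time under an example-specific state key
  callbackContext.state.set('run_started_at', Date.now());
  return undefined; // Proceed with normal execution
};

const timingAfterAgent = (callbackContext: CallbackContext): Content | undefined => {
  const startedAt = callbackContext.state.get('run_started_at');
  if (typeof startedAt === 'number') {
    console.log(`${callbackContext.agentName} ran for ${Date.now() - startedAt} ms`);
  }
  return undefined; // Keep the agent's original output
};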
LLM Interaction Callbacks
These callbacks are specific to LlmAgent and provide hooks around the interaction with the Large Language Model.
Before Model Callback
When: Called just before the generateContentAsync request is sent to the LLM within an LlmAgent's flow.
Purpose: Allows inspection and modification of the request going to the LLM. Use cases include adding dynamic instructions, injecting few-shot examples based on state, modifying model config, implementing guardrails, or implementing request-level caching.
Signature:
type BeforeModelCallback = ({
  callbackContext,
  llmRequest
}: {
  callbackContext: CallbackContext;
  llmRequest: LlmRequest;
}) => Promise<LlmResponse | undefined> | LlmResponse | undefined;
Example:
import { CallbackContext, LlmRequest, LlmResponse } from '@iqai/adk';

const beforeModelCallback = ({ callbackContext, llmRequest }: {
  callbackContext: CallbackContext;
  llmRequest: LlmRequest;
}): LlmResponse | undefined => {
  // Add user preferences to the system instruction
  const userLang = callbackContext.state.get('user_language');
  if (userLang && llmRequest.config.systemInstruction) {
    const existingInstruction = llmRequest.config.systemInstruction.parts?.[0]?.text || '';
    llmRequest.config.systemInstruction.parts = [{
      text: `${existingInstruction}\n\nUser language preference: ${userLang}`
    }];
  }

  // Check for blocked content
  const lastContent = llmRequest.contents?.[llmRequest.contents.length - 1];
  const userMessage = lastContent?.parts?.[0]?.text || '';
  if (userMessage.toLowerCase().includes('blocked_keyword')) {
    // Return a response to skip the LLM call
    return new LlmResponse({
      content: {
        role: 'model',
        parts: [{ text: 'I cannot process requests containing blocked content.' }]
      }
    });
  }

  return undefined; // Proceed with LLM call
};
Return Value Effect:
- If the callback returns undefined, the LLM call proceeds as normal
- If the callback returns an LlmResponse object, the call to the LLM is skipped and the returned response is used directly
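This short-circuit behaviour is what makes request-level caching possible. The sketch below is a minimal illustration rather than a production cache: the in-memory Map, the JSON-stringified cache key, and the llm_cache_key state key are all inventions of this example, and the matching after-model callback (covered in the next section) stores fresh responses for later reuse.

import { CallbackContext, LlmRequest, LlmResponse } from '@iqai/adk';

// Naive in-memory cache keyed by the serialized request contents
const responseCache = new Map<string, LlmResponse>();

const cachingBeforeModel = ({ callbackContext, llmRequest }: {
  callbackContext: CallbackContext;
  llmRequest: LlmRequest;
}): LlmResponse | undefined => {
  const key = JSON.stringify(llmRequest.contents ?? []);
  const cached = responseCache.get(key);
  if (cached) {
    return cached; // Skip the LLM call and reuse the cached response
  }
  callbackContext.state.set('llm_cache_key', key); // Remember the key for the after callback
  return undefined; // Proceed with the LLM call
};

const cachingAfterModel = ({ callbackContext, llmResponse }: {
  callbackContext: CallbackContext;
  llmResponse: LlmResponse;
}): LlmResponse | undefined => {
  const key = callbackContext.state.get('llm_cache_key');
  if (typeof key === 'string') {
    responseCache.set(key, llmResponse); // Store the fresh response for next time
  }
  return undefined; // Pass the response through unchanged
};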
After Model Callback
When: Called just after a response (LlmResponse) is received from the LLM, before it's processed further by the invoking agent.
Purpose: Allows inspection or modification of the raw LLM response. Use cases include logging model outputs, reformatting responses, censoring sensitive information generated by the model, parsing structured data from the LLM response and storing it in state, or handling specific error codes.
Signature:
type AfterModelCallback = ({
  callbackContext,
  llmResponse
}: {
  callbackContext: CallbackContext;
  llmResponse: LlmResponse;
}) => Promise<LlmResponse | undefined> | LlmResponse | undefined;
Example:
const afterModelCallback = ({ callbackContext, llmResponse }: {
  callbackContext: CallbackContext;
  llmResponse: LlmResponse;
}): LlmResponse | undefined => {
  // Log model response
  console.log(`LLM responded for agent: ${callbackContext.agentName}`);

  // Extract and save structured data
  const responseText = llmResponse.content?.parts?.[0]?.text || '';
  const jsonMatch = responseText.match(/```json\n(.*?)\n```/s);
  if (jsonMatch) {
    try {
      const data = JSON.parse(jsonMatch[1]);
      callbackContext.state.set('extracted_data', data);
    } catch (error) {
      console.warn('Failed to parse JSON from response:', error);
    }
  }

  // Add disclaimer if needed
  const addDisclaimer = callbackContext.state.get('add_disclaimer');
  if (addDisclaimer && llmResponse.content) {
    const originalText = llmResponse.content.parts?.[0]?.text || '';
    // Copy the content object as well, so the original response is not mutated in place
    return {
      ...llmResponse,
      content: {
        ...llmResponse.content,
        parts: [{
          text: `${originalText}\n\n*This response was generated by AI and should be verified.*`
        }]
      }
    };
  }

  return undefined; // Use original response
};
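The same hook can censor sensitive information before the response reaches the rest of the agent. A minimal sketch, assuming a simple regex is sufficient for the data you care about; the pattern below only masks email-like strings:

import { CallbackContext, LlmResponse } from '@iqai/adk';

const redactingAfterModel = ({ llmResponse }: {
  callbackContext: CallbackContext;
  llmResponse: LlmResponse;
}): LlmResponse | undefined => {
  const content = llmResponse.content;
  const text = content?.parts?.[0]?.text;
  if (!content || !text) {
    return undefined; // Nothing to inspect
  }

  // Example pattern only: mask anything that looks like an email address
  const redacted = text.replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, '[redacted email]');
  if (redacted === text) {
    return undefined; // Nothing sensitive found, use the original response
  }

  return {
    ...llmResponse,
    content: { ...content, parts: [{ text: redacted }] }
  };
};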
Tool Execution Callbacks
These callbacks are also specific to LlmAgent and trigger around the execution of tools (including FunctionTool, agent tools, etc.) that the LLM might request.
Tool execution callbacks are not yet fully implemented in the current TypeScript version of ADK. The callback infrastructure exists but tool-specific callback invocation is coming soon.
Before Tool Callback
When: Called just before a specific tool's runAsync method is invoked, after the LLM has generated a function call for it.
Purpose: Allows inspection and modification of tool arguments, performing authorization checks before execution, logging tool usage attempts, or implementing tool-level caching.
Signature (Coming Soon):
type BeforeToolCallback = ({
  toolContext,
  toolArgs
}: {
  toolContext: ToolContext;
  toolArgs: Record<string, any>;
}) => Promise<Record<string, any> | undefined> | Record<string, any> | undefined;
Example (Coming Soon):
const beforeToolCallback = ({ toolContext, toolArgs }: {
  toolContext: ToolContext;
  toolArgs: Record<string, any>;
}): Record<string, any> | undefined => {
  // Check authentication
  const hasAuth = toolContext.state.get('api_token');
  if (!hasAuth) {
    // Return error result to skip tool execution
    return {
      error: 'Authentication required for this tool'
    };
  }

  // Log tool usage
  console.log(`Executing tool with function call ID: ${toolContext.functionCallId}`);

  // Validate arguments
  if (!toolArgs.query || toolArgs.query.length < 3) {
    return {
      error: 'Query must be at least 3 characters long'
    };
  }

  return undefined; // Proceed with tool execution
};
Return Value Effect:
- If the callback returns undefined, the tool's runAsync method is executed with the (potentially modified) toolArgs
- If an object is returned, the tool's runAsync method is skipped and the returned object is used directly as the tool result
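Once the tool callback wiring lands, the "potentially modified toolArgs" path could look like the sketch below, assuming the proposed signature above holds: mutating toolArgs and returning undefined lets the tool run with the cleaned values, whereas returning an object would replace the tool call entirely.

const normalizeArgsBeforeTool = ({ toolArgs }: {
  toolContext: ToolContext;
  toolArgs: Record<string, any>;
}): Record<string, any> | undefined => {
  // Mutate the arguments in place; the tool will receive the normalized values
  if (typeof toolArgs.query === 'string') {
    toolArgs.query = toolArgs.query.trim().toLowerCase();
  }
  return undefined; // Proceed with tool execution
};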
After Tool Callback
When: Called just after the tool's runAsync method completes successfully.
Purpose: Allows inspection and modification of the tool's result before it's sent back to the LLM. Useful for logging tool results, post-processing or formatting results, or saving specific parts of the result to the session state.
Signature (Coming Soon):
type AfterToolCallback = ({
  toolContext,
  toolResult
}: {
  toolContext: ToolContext;
  toolResult: Record<string, any>;
}) => Promise<Record<string, any> | undefined> | Record<string, any> | undefined;
Example (Coming Soon):
const afterToolCallback = ({ toolContext, toolResult }: {
  toolContext: ToolContext;
  toolResult: Record<string, any>;
}): Record<string, any> | undefined => {
  // Log tool result
  console.log(`Tool completed with function call ID: ${toolContext.functionCallId}`);

  // Save important data to state
  if (toolResult.transaction_id) {
    toolContext.state.set('last_transaction_id', toolResult.transaction_id);
  }

  // Filter sensitive information
  if (toolResult.api_key) {
    const filteredResult = { ...toolResult };
    delete filteredResult.api_key;
    return filteredResult;
  }

  return undefined; // Use original result
};
Return Value Effect:
- If the callback returns undefined, the original toolResult is used
- If a new object is returned, it replaces the original toolResult
Context Objects
Callbacks receive different context objects depending on their type:
CallbackContext
Used in agent lifecycle and LLM interaction callbacks:
class CallbackContext {
  // Invocation metadata
  readonly invocationId: string;
  readonly agentName: string;
  readonly appName: string;
  readonly userId: string;
  readonly sessionId: string;

  // State management (delta-aware)
  readonly state: State;

  // Artifact operations
  async loadArtifact(filename: string, version?: number): Promise<Part | undefined>;
  async saveArtifact(filename: string, artifact: Part): Promise<number>;

  // Event actions (for tracking changes)
  readonly eventActions: EventActions;
}
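A brief sketch of the artifact helpers used from an after-agent callback. The filename, the run_summary state key, and the idea of persisting a summary are inventions of this example; the artifact payload relies on structural typing against the Part shape shown above:

import { CallbackContext, Content } from '@iqai/adk';

const persistSummaryAfterAgent = async (
  callbackContext: CallbackContext
): Promise<Content | undefined> => {
  const summary = callbackContext.state.get('run_summary');
  if (typeof summary === 'string') {
    // saveArtifact returns the version number assigned to the stored artifact
    const version = await callbackContext.saveArtifact('run_summary.txt', { text: summary });
    console.log(`Saved run_summary.txt as version ${version}`);
  }
  return undefined; // Keep the agent's original output
};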
ToolContext
Used in tool execution callbacks, extends CallbackContext:
class ToolContext extends CallbackContext {
  // Tool-specific properties
  readonly functionCallId?: string;

  // Additional tool operations
  async listArtifacts(): Promise<string[]>;
  async searchMemory(query: string): Promise<SearchMemoryResponse>;

  // Action controls
  readonly actions: EventActions; // alias for eventActions
}
Always use the specific context type provided (CallbackContext for agent and model callbacks, ToolContext for tool callbacks) to ensure access to the appropriate methods and properties.
Callback Arrays
All callback types support both single callbacks and arrays of callbacks:
const agent = new LlmAgent({
  name: "multi_callback_agent",
  model: "gemini-2.5-flash",
  description: "Agent with multiple callbacks",
  instruction: "You are helpful",

  // Single callback
  beforeAgentCallback: singleBeforeCallback,

  // Array of callbacks (executed in order until one returns a value)
  afterAgentCallback: [
    firstAfterCallback,
    secondAfterCallback,
    thirdAfterCallback
  ]
});
When using callback arrays:
- Callbacks are executed in the order they appear in the array
- Execution stops when a callback returns a non-undefined value
- If all callbacks return undefined, normal execution continues
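This short-circuit behaviour lets you build a small chain of checks, each handling one concern. A sketch; both callbacks and the had_error state key are illustrative:

import { CallbackContext, Content } from '@iqai/adk';

// Runs first: overrides the output only when an error flag was set during the run
const errorBanner = (callbackContext: CallbackContext): Content | undefined => {
  if (!callbackContext.state.get('had_error')) {
    return undefined; // Fall through to the next callback
  }
  return {
    role: 'model',
    parts: [{ text: 'The run finished with recoverable errors.' }]
  };
};

// Runs only if errorBanner returned undefined
const auditLog = (callbackContext: CallbackContext): Content | undefined => {
  console.log(`Audit: ${callbackContext.agentName} finished invocation ${callbackContext.invocationId}`);
  return undefined; // Normal output is kept
};

// Usage: afterAgentCallback: [errorBanner, auditLog]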