# Runtime Configuration

Configure streaming, speech, transcription, and execution limits for your agents.
`RunConfig` controls how agents execute at runtime. Use it to enable streaming responses, configure speech and transcription, save incoming media as artifacts, and set safety limits on LLM calls.
## Basic Usage

Apply runtime configuration when calling `runner.runAsync()`:
```typescript
import { RunConfig, StreamingMode } from "@iqai/adk";

// Enable SSE streaming
const runConfig = new RunConfig({
  streamingMode: StreamingMode.SSE,
  maxLlmCalls: 100,
});

// Use with runner
for await (const event of runner.runAsync({
  userId: "user_123",
  sessionId: "session_456",
  newMessage: { parts: [{ text: "Hello!" }] },
  runConfig,
})) {
  console.log(event);
}
```

Or set it globally via `AgentBuilder`:
```typescript
import { AgentBuilder, StreamingMode } from "@iqai/adk";

const { runner } = await AgentBuilder.create("my-agent")
  .withModel("gemini-2.0-flash-exp")
  .withRunConfig({
    streamingMode: StreamingMode.SSE,
    saveInputBlobsAsArtifacts: true,
  })
  .build();
```

## Configuration Options
### Streaming Mode

Control how responses are delivered:

| Mode | Behavior | Use Case |
|---|---|---|
| `NONE` | Single complete event per response | Simple applications, testing |
| `SSE` | Multiple partial events + final complete event | Real-time UI updates |
| `BIDI` | Bidirectional streaming for live conversations | Voice/video applications |
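To make the SSE behavior concrete, here is a minimal sketch in plain TypeScript of how a consumer might assemble partial events into the final text. The `StreamEvent` shape and its `partial` flag are illustrative assumptions, not the actual ADK event type:

```typescript
// Hypothetical event shape for illustration - the real ADK event type differs.
interface StreamEvent {
  partial: boolean; // true for intermediate chunks, false for the final event
  text: string;
}

// Accumulate partial chunks; a final (non-partial) event carries the full text.
function assembleResponse(events: StreamEvent[]): string {
  let buffer = "";
  for (const event of events) {
    if (event.partial) {
      buffer += event.text; // a progressive UI would render `buffer` here
    } else {
      return event.text; // the final event supersedes the accumulated chunks
    }
  }
  return buffer;
}

const events: StreamEvent[] = [
  { partial: true, text: "Hel" },
  { partial: true, text: "lo!" },
  { partial: false, text: "Hello!" },
];
console.log(assembleResponse(events)); // "Hello!"
```

With `NONE`, only the final complete event arrives; with `SSE`, the partial events let the UI update as text is generated.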
```typescript
import { RunConfig, StreamingMode } from "@iqai/adk";

// No streaming - simple and straightforward
const config1 = new RunConfig({
  streamingMode: StreamingMode.NONE,
});

// Server-Sent Events - progressive updates
const config2 = new RunConfig({
  streamingMode: StreamingMode.SSE,
});

// Bidirectional - live conversations
const config3 = new RunConfig({
  streamingMode: StreamingMode.BIDI,
});
```

### Execution Limits
Prevent runaway agent loops with the `maxLlmCalls` limit:

```typescript
const runConfig = new RunConfig({
  maxLlmCalls: 50, // Stop after 50 LLM calls
});
```

Default: 500 calls per invocation.

> **Safety First**
> Always set `maxLlmCalls` in production to prevent infinite loops and unexpected costs. Values ≤ 0 disable the limit and log a warning.
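The guard behaves like a simple call budget. The following standalone sketch illustrates the idea (it is not the ADK implementation); note how a limit ≤ 0 disables the check entirely:

```typescript
// Minimal sketch of a per-invocation LLM call budget - illustration only.
class LlmCallBudget {
  private calls = 0;
  constructor(private readonly maxCalls: number) {}

  // Call before each LLM request; throws once the budget is exhausted.
  checkAndIncrement(): void {
    if (this.maxCalls <= 0) return; // limit disabled
    this.calls += 1;
    if (this.calls > this.maxCalls) {
      throw new Error(`LLM call limit of ${this.maxCalls} exceeded`);
    }
  }
}

const budget = new LlmCallBudget(2);
budget.checkAndIncrement(); // call 1 - ok
budget.checkAndIncrement(); // call 2 - ok
try {
  budget.checkAndIncrement(); // call 3 - throws
} catch (e) {
  console.log((e as Error).message); // "LLM call limit of 2 exceeded"
}
```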
### Media and Artifacts

Automatically save user-provided media (images, audio, video) as artifacts:

```typescript
const runConfig = new RunConfig({
  saveInputBlobsAsArtifacts: true,
});
```

Requirements:

- The runner must have an `artifactService` configured
- User messages must contain `inlineData` parts
Example flow:

```typescript
// User sends an image
const message = {
  parts: [
    {
      inlineData: {
        mimeType: "image/png",
        data: base64ImageData,
      },
    },
  ],
};

// With saveInputBlobsAsArtifacts: true
// → Image automatically saved to the ArtifactService
// → Artifact version recorded in the session
// → Agent can reference it later
```

## Advanced Options
### Speech Configuration

Configure voice and language for speech-enabled agents:

```typescript
import { RunConfig, StreamingMode } from "@iqai/adk";
import type { SpeechConfig } from "@google/genai";

const runConfig = new RunConfig({
  streamingMode: StreamingMode.SSE,
  responseModalities: ["AUDIO", "TEXT"],
  speechConfig: {
    languageCode: "en-US",
    voiceConfig: {
      prebuiltVoiceConfig: {
        voiceName: "Kore",
      },
    },
  } satisfies SpeechConfig,
});
```

Available voices: check your LLM provider's documentation for supported voice names.
### Audio Transcription

Enable transcription for input and/or output audio:

```typescript
const runConfig = new RunConfig({
  // Transcribe the user's audio input to text
  inputAudioTranscription: {
    enable: true,
  },
  // Transcribe the agent's audio output to text
  outputAudioTranscription: {
    enable: true,
  },
});
```

Use cases:

- Display text alongside audio for accessibility
- Log conversation transcripts
- Enable text-based search of audio conversations
### Realtime Input

Configure how realtime audio/video input is handled:

```typescript
import type { RealtimeInputConfig } from "@google/genai";

const runConfig = new RunConfig({
  realtimeInputConfig: {
    // Configuration for live audio/video input
  } satisfies RealtimeInputConfig,
});
```

## Experimental Features
> **Experimental**
> These features are under active development and may change in future releases.
**Compositional Function Calling (CFC):**

```typescript
const runConfig = new RunConfig({
  streamingMode: StreamingMode.SSE, // Required for CFC
  supportCFC: true,
});
```

CFC allows the LLM to compose multiple tool calls in a single response.
**Affective Dialog:**

```typescript
const runConfig = new RunConfig({
  enableAffectiveDialog: true,
});
```

Enables emotion-aware responses in live modes.
**Proactivity:**

```typescript
import type { ProactivityConfig } from "@google/genai";

const runConfig = new RunConfig({
  proactivity: {
    // Allow the agent to initiate responses
    // and ignore irrelevant user input
  } satisfies ProactivityConfig,
});
```

## Complete Example
Voice-enabled agent with full configuration:

```typescript
import { RunConfig, StreamingMode } from "@iqai/adk";
import type { SpeechConfig } from "@google/genai";

// Define runtime configuration
const runConfig = new RunConfig({
  // Streaming
  streamingMode: StreamingMode.SSE,

  // Response format
  responseModalities: ["AUDIO", "TEXT"],

  // Speech
  speechConfig: {
    languageCode: "en-US",
    voiceConfig: {
      prebuiltVoiceConfig: { voiceName: "Kore" },
    },
  } satisfies SpeechConfig,

  // Transcription
  inputAudioTranscription: { enable: true },
  outputAudioTranscription: { enable: true },

  // Artifacts
  saveInputBlobsAsArtifacts: true,

  // Safety
  maxLlmCalls: 200,
});

// Use with runner
for await (const event of runner.runAsync({
  userId: "user_123",
  sessionId: "session_456",
  newMessage: { parts: [{ text: "Hello!" }] },
  runConfig,
})) {
  console.log(event);
}
```

## Configuration Reference
### All Options

| Option | Type | Default | Description |
|---|---|---|---|
| `streamingMode` | `StreamingMode` | `NONE` | How responses are delivered |
| `maxLlmCalls` | `number` | `500` | Maximum LLM calls per invocation |
| `saveInputBlobsAsArtifacts` | `boolean` | `false` | Auto-save user media as artifacts |
| `responseModalities` | `string[]` | — | Output channels (e.g. `["AUDIO", "TEXT"]`) |
| `speechConfig` | `SpeechConfig` | — | Voice and language settings |
| `inputAudioTranscription` | `AudioTranscriptionConfig` | — | Transcribe user audio |
| `outputAudioTranscription` | `AudioTranscriptionConfig` | — | Transcribe agent audio |
| `realtimeInputConfig` | `RealtimeInputConfig` | — | Realtime audio/video handling |
| `enableAffectiveDialog` | `boolean` | — | Emotion-aware responses |
| `proactivity` | `ProactivityConfig` | — | Proactive agent behavior |
| `supportCFC` | `boolean` | `false` | Compositional Function Calling (SSE only, experimental) |
### Validation Rules

- `maxLlmCalls` must be less than `Number.MAX_SAFE_INTEGER`
- Values ≤ 0 for `maxLlmCalls` disable the limit (logs a warning)
- `supportCFC` only works with `streamingMode: StreamingMode.SSE`
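The rules above can be expressed as a small checker. The following `validateRunConfig` helper is a hypothetical sketch that mirrors the documented rules; it is not the actual ADK validation code:

```typescript
// Hypothetical validator mirroring the documented rules - illustration only.
function validateRunConfig(opts: {
  maxLlmCalls: number;
  supportCFC?: boolean;
  streamingMode?: string;
}): string[] {
  const warnings: string[] = [];
  // Rule 1: maxLlmCalls must be below Number.MAX_SAFE_INTEGER
  if (opts.maxLlmCalls >= Number.MAX_SAFE_INTEGER) {
    throw new Error("maxLlmCalls must be less than Number.MAX_SAFE_INTEGER");
  }
  // Rule 2: values <= 0 disable the limit, with a warning
  if (opts.maxLlmCalls <= 0) {
    warnings.push("maxLlmCalls <= 0 disables the limit");
  }
  // Rule 3: CFC requires SSE streaming
  if (opts.supportCFC && opts.streamingMode !== "SSE") {
    throw new Error("supportCFC requires streamingMode SSE");
  }
  return warnings;
}

console.log(validateRunConfig({ maxLlmCalls: 0 })); // ["maxLlmCalls <= 0 disables the limit"]
```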
## Best Practices

- Always set `maxLlmCalls` in production to prevent runaway costs
- Use streaming for better UX: users see progress in real time
- Enable transcription for accessibility when using speech
- Save important media as artifacts for future reference
- Test experimental features in development before deploying
- Choose the streaming mode that fits your use case:
  - `NONE`: simple request/response
  - `SSE`: web applications with real-time updates
  - `BIDI`: voice/video conversations