
Runtime Configuration

Configure streaming, speech, transcription, and execution limits for your agents

RunConfig controls how agents execute at runtime. Use it to enable streaming responses, configure speech and transcription, save incoming media as artifacts, and set safety limits on LLM calls.

Basic Usage

Apply runtime configuration when calling runner.runAsync():

import { RunConfig, StreamingMode } from "@iqai/adk";

// Enable SSE streaming
const runConfig = new RunConfig({
  streamingMode: StreamingMode.SSE,
  maxLlmCalls: 100,
});

// Use with runner
for await (const event of runner.runAsync({
  userId: "user_123",
  sessionId: "session_456",
  newMessage: { parts: [{ text: "Hello!" }] },
  runConfig,
})) {
  console.log(event);
}

Or set it globally via AgentBuilder:

import { AgentBuilder, StreamingMode } from "@iqai/adk";

const { runner } = await AgentBuilder.create("my-agent")
  .withModel("gemini-2.0-flash-exp")
  .withRunConfig({
    streamingMode: StreamingMode.SSE,
    saveInputBlobsAsArtifacts: true,
  })
  .build();

Configuration Options

Streaming Mode

Control how responses are delivered:

| Mode | Behavior | Use Case |
| --- | --- | --- |
| NONE | Single complete event per response | Simple applications, testing |
| SSE | Multiple partial events + final complete event | Real-time UI updates |
| BIDI | Bidirectional streaming for live conversations | Voice/video applications |

import { RunConfig, StreamingMode } from "@iqai/adk";

// No streaming - simple and straightforward
const config1 = new RunConfig({
  streamingMode: StreamingMode.NONE,
});

// Server-Sent Events - progressive updates
const config2 = new RunConfig({
  streamingMode: StreamingMode.SSE,
});

// Bidirectional - live conversations
const config3 = new RunConfig({
  streamingMode: StreamingMode.BIDI,
});
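In SSE mode, a UI typically appends partial events as they arrive and treats the final event as authoritative. The sketch below illustrates that pattern with a mock stream standing in for runner.runAsync(); the event shape (a partial flag plus a text field) is an assumption here — check the ADK's Event type for the exact fields in your version.

```typescript
// Assumed event shape for this sketch -- not the ADK's actual Event type.
interface StreamEvent {
  partial: boolean;
  text: string;
}

// Mock stream standing in for runner.runAsync(...) in SSE mode:
// several partial chunks followed by one final complete event.
async function* mockSseStream(): AsyncGenerator<StreamEvent> {
  yield { partial: true, text: "Hel" };
  yield { partial: true, text: "lo!" };
  yield { partial: false, text: "Hello!" };
}

async function render(stream: AsyncIterable<StreamEvent>): Promise<string> {
  let liveText = ""; // what the UI shows while streaming
  let finalText = "";
  for await (const event of stream) {
    if (event.partial) {
      liveText += event.text; // append each chunk to the live view
    } else {
      finalText = event.text; // the final event carries the complete text
    }
  }
  return finalText;
}
```

With NONE you would receive only the single final event, so the loop body collapses to the else branch.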

Execution Limits

Prevent runaway loops with the maxLlmCalls limit:

const runConfig = new RunConfig({
  maxLlmCalls: 50, // Stop after 50 LLM calls
});

Default: 500 calls per invocation

Safety First

Always set maxLlmCalls in production to prevent infinite loops and unexpected costs. Values ≤ 0 disable the limit and log a warning.
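To see why the cap matters, here is a conceptual sketch of the counting behavior that maxLlmCalls enforces — not the ADK's actual internals: each LLM call consumes budget, exceeding the budget aborts the invocation, and a limit ≤ 0 disables the check entirely.

```typescript
// Conceptual sketch of the maxLlmCalls guard -- not the ADK implementation.
class LlmCallLimiter {
  private calls = 0;

  constructor(private readonly maxLlmCalls: number) {}

  // Call before each LLM request; throws once the budget is spent.
  recordCall(): void {
    if (this.maxLlmCalls <= 0) return; // limit disabled (the ADK logs a warning)
    this.calls += 1;
    if (this.calls > this.maxLlmCalls) {
      throw new Error(`Exceeded maxLlmCalls limit of ${this.maxLlmCalls}`);
    }
  }
}

const limiter = new LlmCallLimiter(3);
limiter.recordCall();
limiter.recordCall();
limiter.recordCall();
// A fourth recordCall() would throw, stopping a runaway agent loop.
```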

Media and Artifacts

Automatically save user-provided media (images, audio, video) as artifacts:

const runConfig = new RunConfig({
  saveInputBlobsAsArtifacts: true,
});

Requirements:

  • Runner must have an artifactService configured
  • User messages must contain inlineData parts

Example flow:

// User sends an image
const message = {
  parts: [
    {
      inlineData: {
        mimeType: "image/png",
        data: base64ImageData,
      },
    },
  ],
};

// With saveInputBlobsAsArtifacts: true
// → Image automatically saved to ArtifactService
// → Artifact version recorded in session
// → Agent can reference it later
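Building the inlineData part from raw bytes is a one-liner with base64 encoding. The helper below matches the part shape shown above; toInlinePart itself is a hypothetical convenience function, not an ADK API.

```typescript
import { Buffer } from "node:buffer";

// Part shape from the example above.
interface InlineDataPart {
  inlineData: { mimeType: string; data: string };
}

// Hypothetical helper: wrap raw bytes as a base64-encoded inlineData part.
function toInlinePart(bytes: Uint8Array, mimeType: string): InlineDataPart {
  return {
    inlineData: {
      mimeType,
      data: Buffer.from(bytes).toString("base64"), // inlineData.data is base64
    },
  };
}

// Example: wrap a (tiny, fake) PNG payload into a message.
const pngBytes = new Uint8Array([0x89, 0x50, 0x4e, 0x47]);
const message = { parts: [toInlinePart(pngBytes, "image/png")] };
```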

Advanced Options

Speech Configuration

Configure voice and language for speech-enabled agents:

import { RunConfig, StreamingMode } from "@iqai/adk";
import type { SpeechConfig } from "@google/genai";

const runConfig = new RunConfig({
  streamingMode: StreamingMode.SSE,
  responseModalities: ["AUDIO", "TEXT"],
  speechConfig: {
    languageCode: "en-US",
    voiceConfig: {
      prebuiltVoiceConfig: {
        voiceName: "Kore",
      },
    },
  } satisfies SpeechConfig,
});

Available voices: Check your LLM provider's documentation for supported voice names.

Audio Transcription

Enable transcription for input and/or output audio:

const runConfig = new RunConfig({
  // Transcribe user's audio input to text
  inputAudioTranscription: {
    enable: true,
  },

  // Transcribe agent's audio output to text
  outputAudioTranscription: {
    enable: true,
  },
});

Use cases:

  • Display text alongside audio for accessibility
  • Log conversation transcripts
  • Enable text-based search of audio conversations

Realtime Input

Configure how realtime audio/video input is handled:

import { RunConfig } from "@iqai/adk";
import type { RealtimeInputConfig } from "@google/genai";

const runConfig = new RunConfig({
  realtimeInputConfig: {
    // Configuration for live audio/video
  } satisfies RealtimeInputConfig,
});

Experimental Features

Experimental

These features are under active development and may change in future releases.

Compositional Function Calling (CFC):

const runConfig = new RunConfig({
  streamingMode: StreamingMode.SSE, // Required for CFC
  supportCFC: true,
});

CFC allows the LLM to compose multiple tool calls in a single response.

Affective Dialog:

const runConfig = new RunConfig({
  enableAffectiveDialog: true,
});

Enables emotion-aware responses in live modes.

Proactivity:

import { RunConfig } from "@iqai/adk";
import type { ProactivityConfig } from "@google/genai";

const runConfig = new RunConfig({
  proactivity: {
    // Allow agent to initiate responses
    // Ignore irrelevant user input
  } satisfies ProactivityConfig,
});

Complete Example

Voice-enabled agent with full configuration:

import { RunConfig, StreamingMode } from "@iqai/adk";
import type { SpeechConfig } from "@google/genai";

// Define runtime configuration
const runConfig = new RunConfig({
  // Streaming
  streamingMode: StreamingMode.SSE,

  // Response format
  responseModalities: ["AUDIO", "TEXT"],

  // Speech
  speechConfig: {
    languageCode: "en-US",
    voiceConfig: {
      prebuiltVoiceConfig: { voiceName: "Kore" },
    },
  } satisfies SpeechConfig,

  // Transcription
  inputAudioTranscription: { enable: true },
  outputAudioTranscription: { enable: true },

  // Artifacts
  saveInputBlobsAsArtifacts: true,

  // Safety
  maxLlmCalls: 200,
});

// Use with runner
for await (const event of runner.runAsync({
  userId: "user_123",
  sessionId: "session_456",
  newMessage: { parts: [{ text: "Hello!" }] },
  runConfig,
})) {
  console.log(event);
}

Configuration Reference

All Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| streamingMode | StreamingMode | NONE | How responses are delivered |
| maxLlmCalls | number | 500 | Maximum LLM calls per invocation |
| saveInputBlobsAsArtifacts | boolean | false | Auto-save user media as artifacts |
| responseModalities | string[] | — | Output channels (e.g. ['AUDIO', 'TEXT']) |
| speechConfig | SpeechConfig | — | Voice and language settings |
| inputAudioTranscription | AudioTranscriptionConfig | — | Transcribe user audio |
| outputAudioTranscription | AudioTranscriptionConfig | — | Transcribe agent audio |
| realtimeInputConfig | RealtimeInputConfig | — | Realtime audio/video handling |
| enableAffectiveDialog | boolean | — | Emotion-aware responses |
| proactivity | ProactivityConfig | — | Proactive agent behavior |
| supportCFC | boolean | false | Compositional Function Calling (SSE only, experimental) |

Validation Rules

  • maxLlmCalls must be less than Number.MAX_SAFE_INTEGER
  • Values ≤ 0 for maxLlmCalls disable the limit (logs warning)
  • supportCFC only works with streamingMode: StreamingMode.SSE
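The rules above can be expressed as a small pre-flight check. checkRunConfig below is a hypothetical helper mirroring those rules against a plain config object — the ADK applies equivalent validation internally, so this is illustration, not required code.

```typescript
// Plain config shape for this sketch (not the ADK's RunConfig class).
interface RunConfigLike {
  streamingMode?: "NONE" | "SSE" | "BIDI";
  maxLlmCalls?: number;
  supportCFC?: boolean;
}

// Hypothetical pre-flight check mirroring the validation rules above.
function checkRunConfig(config: RunConfigLike): string[] {
  const warnings: string[] = [];
  const { maxLlmCalls = 500, streamingMode = "NONE", supportCFC = false } = config;

  if (maxLlmCalls >= Number.MAX_SAFE_INTEGER) {
    warnings.push("maxLlmCalls must be less than Number.MAX_SAFE_INTEGER");
  }
  if (maxLlmCalls <= 0) {
    warnings.push("maxLlmCalls <= 0 disables the limit");
  }
  if (supportCFC && streamingMode !== "SSE") {
    warnings.push("supportCFC requires streamingMode: SSE");
  }
  return warnings;
}
```

For example, `checkRunConfig({ supportCFC: true })` flags the missing SSE mode, while a config with `streamingMode: "SSE"` and a positive maxLlmCalls passes cleanly.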

Best Practices

  1. Always set maxLlmCalls in production to prevent runaway costs
  2. Use streaming for better UX - users see progress in real-time
  3. Enable transcription for accessibility when using speech
  4. Save important media as artifacts for future reference
  5. Test experimental features in development before deploying
  6. Choose appropriate streaming mode based on your use case:
    • NONE: Simple request/response
    • SSE: Web applications with real-time updates
    • BIDI: Voice/video conversations

Next Steps