
Getting Started

Initialize telemetry and start collecting traces and metrics for your AI agents

This guide covers initializing the telemetry service and collecting traces and metrics. The patterns shown here work for both local development and production environments.

For production-specific considerations like privacy controls and performance tuning, see the Production Deployment guide.

Prerequisites

Telemetry dependencies are included in @iqai/adk—no additional packages required.

Basic Initialization

Initialize the telemetry service before any agent operations:

import { telemetryService } from '@iqai/adk';

await telemetryService.initialize({
  appName: 'my-agent-app',
  otlpEndpoint: 'http://localhost:4318/v1/traces',
  appVersion: '1.0.0',
  enableMetrics: true,
  enableTracing: true,
});

For reference, here is the full set of configuration options:

import { telemetryService } from '@iqai/adk';

await telemetryService.initialize({
  // Required
  appName: 'my-agent-app',
  otlpEndpoint: 'http://localhost:4318/v1/traces',

  // Optional
  appVersion: '1.0.0',
  environment: 'development',

  // Feature flags
  enableTracing: true,
  enableMetrics: true,
  enableAutoInstrumentation: true, // Enable HTTP/database auto-tracing

  // Privacy controls
  captureMessageContent: true, // Set false for production

  // Performance tuning
  samplingRatio: 1.0, // 1.0 = 100% sampling
  metricExportIntervalMs: 60000, // 1 minute

  // Custom resource attributes
  resourceAttributes: {
    'deployment.name': 'local',
    'team': 'platform',
  },
});

You can also drive the configuration from standard environment variables:

import { telemetryService } from '@iqai/adk';

// The telemetry system respects standard OpenTelemetry environment variables
// OTEL_SERVICE_NAME, OTEL_RESOURCE_ATTRIBUTES, NODE_ENV, etc.

await telemetryService.initialize({
  appName: process.env.OTEL_SERVICE_NAME || 'my-agent-app',
  otlpEndpoint: process.env.OTEL_EXPORTER_OTLP_ENDPOINT || 'http://localhost:4318/v1/traces',
  environment: process.env.NODE_ENV || 'development',
});

Set the variables in your shell before starting the application:

# Set environment variables (shell session)
export OTEL_SERVICE_NAME=my-agent-app
export OTEL_RESOURCE_ATTRIBUTES=deployment.environment=development,team=platform
export NODE_ENV=development
export ADK_CAPTURE_MESSAGE_CONTENT=true

Using .env Files

For local development, you can also set these variables in a .env file in your project root. ADK-TS automatically loads .env files using dotenv.
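
For example, the same variables shown in the examples above could live in a .env file:

# .env
OTEL_SERVICE_NAME=my-agent-app
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318/v1/traces
OTEL_RESOURCE_ATTRIBUTES=deployment.environment=development,team=platform
NODE_ENV=development
ADK_CAPTURE_MESSAGE_CONTENT=true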

Initialize Early

Always initialize telemetry before any agent operations to ensure all traces and metrics are captured.
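
One way to make this ordering hard to get wrong is to run the initialization in a small bootstrap module and import it before anything that builds agents. A minimal sketch, assuming an ESM project with top-level await; the file name instrumentation.ts is just a placeholder:

// instrumentation.ts - initializes telemetry as a side effect of being imported
import { telemetryService } from '@iqai/adk';

await telemetryService.initialize({
  appName: 'my-agent-app',
  otlpEndpoint: 'http://localhost:4318/v1/traces',
});

// index.ts - the bootstrap import comes first, so telemetry is ready
// before any agent code runs
import './instrumentation';
import { AgentBuilder } from '@iqai/adk';

const answer = await AgentBuilder.withModel('gemini-2.5-flash').ask('Hello!');
console.log(answer);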

Configuration Reference

Required Options

Option         Type     Description
appName        string   Service name for identification in traces
otlpEndpoint   string   OTLP HTTP endpoint URL (e.g., http://localhost:4318/v1/traces)

Optional Options

Option                     Type                    Default         Description
appVersion                 string                  "unknown"       Application version
environment                string                  Auto-detected   Deployment environment (development, staging, production)
enableTracing              boolean                 true            Enable distributed tracing
enableMetrics              boolean                 true            Enable metrics collection
enableAutoInstrumentation  boolean                 false           Enable automatic HTTP/database tracing
captureMessageContent      boolean                 true            Capture LLM prompts and completions
samplingRatio              number                  1.0             Trace sampling ratio (0.0-1.0)
metricExportIntervalMs     number                  60000           Metrics export interval in milliseconds
otlpHeaders                Record<string, string>  -               Custom headers for OTLP requests
resourceAttributes         Record<string, string>  -               Custom resource attributes
debug                      boolean                 false           Enable in-memory exporter for debugging
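
The otlpHeaders option is typically how you authenticate against a hosted OTLP backend. A minimal sketch; the endpoint and the OTLP_API_TOKEN variable are placeholders, and the exact header your backend expects may differ:

import { telemetryService } from '@iqai/adk';

await telemetryService.initialize({
  appName: 'my-agent-app',
  // Placeholder endpoint for a hosted OTLP backend
  otlpEndpoint: 'https://otlp.example.com/v1/traces',
  // Sent with every OTLP export request
  otlpHeaders: {
    Authorization: `Bearer ${process.env.OTLP_API_TOKEN ?? ''}`,
  },
});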

Privacy in Production

Set captureMessageContent: false in production to avoid capturing sensitive user data in traces.
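
A minimal sketch of an environment-conditional setup; the 0.1 sampling ratio is only an illustrative value:

import { telemetryService } from '@iqai/adk';

const isProduction = process.env.NODE_ENV === 'production';

await telemetryService.initialize({
  appName: 'my-agent-app',
  otlpEndpoint: process.env.OTEL_EXPORTER_OTLP_ENDPOINT || 'http://localhost:4318/v1/traces',
  environment: isProduction ? 'production' : 'development',
  // Keep prompts and completions out of traces in production
  captureMessageContent: !isProduction,
  // Sample a fraction of traces in production to limit volume
  samplingRatio: isProduction ? 0.1 : 1.0,
});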

Local Development Setup

Running with Jaeger

Jaeger is a popular open-source distributed tracing system. Use it for local development:

# Start Jaeger all-in-one (includes OTLP receiver)
docker run -d \
  --name jaeger \
  -p 4318:4318 \
  -p 16686:16686 \
  jaegertracing/all-in-one:latest

Your ADK-TS app will send traces to http://localhost:4318/v1/traces, and you can view them at http://localhost:16686.
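
If no traces appear, one quick sanity check (assuming curl is installed) is to POST an empty OTLP payload to the endpoint; any 2xx response means the OTLP receiver is listening:

# An empty OTLP/HTTP request should be accepted with a 2xx status
curl -i -X POST http://localhost:4318/v1/traces \
  -H 'Content-Type: application/json' \
  -d '{"resourceSpans":[]}'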

Running with OpenTelemetry Collector

For more advanced setups that need to route telemetry to multiple backends or apply processing, use the OpenTelemetry Collector:

1. Create collector configuration (otel-collector-config.yaml):

receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

exporters:
  # The debug exporter prints received telemetry to the collector's own logs
  # (it replaces the deprecated logging exporter in recent collector releases)
  debug:
    verbosity: detailed
  otlp/jaeger:
    # Jaeger's OTLP gRPC endpoint; if the collector runs in Docker, replace
    # localhost with a hostname the container can resolve (e.g. host.docker.internal)
    endpoint: http://localhost:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug, otlp/jaeger]
    metrics:
      receivers: [otlp]
      exporters: [debug]

2. Run the collector:

docker run -d \
  -v $(pwd)/otel-collector-config.yaml:/etc/otel-collector-config.yaml \
  -p 4318:4318 \
  otel/opentelemetry-collector:latest \
  --config=/etc/otel-collector-config.yaml

When to Use the Collector

For simple setups, sending directly to Jaeger is sufficient. Use the OpenTelemetry Collector when you need to:

  • Route telemetry to multiple backends
  • Apply processing, filtering, or sampling (see the sketch below)
  • Transform or enrich telemetry data
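
To illustrate the processing and sampling point, here is a sketch of additions to the configuration file from step 1. Note that the probabilistic_sampler processor ships with the contrib distribution (otel/opentelemetry-collector-contrib), so use that image if you enable it; the 25% value is only illustrative:

processors:
  batch:
  probabilistic_sampler:
    sampling_percentage: 25

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler, batch]
      exporters: [debug, otlp/jaeger]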

Complete Example

Here's a complete example that initializes telemetry and runs an agent:

import { telemetryService, AgentBuilder } from "@iqai/adk";

async function main() {
  // Initialize telemetry first
  await telemetryService.initialize({
    appName: "example-agent",
    appVersion: "1.0.0",
    otlpEndpoint: "http://localhost:4318/v1/traces",
    enableMetrics: true,
    enableTracing: true,
  });

  // Build and run agent
  const response = await AgentBuilder.withModel("gemini-2.5-flash").ask(
    "What is the capital of France?",
  );
  console.log(response);

  // Graceful shutdown on termination signals
  process.on("SIGTERM", async () => {
    await telemetryService.shutdown(5000);
    process.exit(0);
  });

  process.on("SIGINT", async () => {
    await telemetryService.shutdown(5000);
    process.exit(0);
  });

  // Flush buffered telemetry before the process exits normally
  await telemetryService.shutdown(5000);
}

main().catch(console.error);

Viewing Traces

Prerequisites

Make sure Jaeger is running before viewing traces. If you haven't set it up yet, see the "Running with Jaeger" section above for setup instructions.

Once your application is running and sending traces:

  1. Open Jaeger UI: Navigate to http://localhost:16686
  2. Select Service: Choose your service name (e.g., example-agent)
  3. Find Traces: Click "Find Traces" to see all traces
  4. Explore: Click on any trace to see the detailed execution flow

You'll see:

  • Agent invocation spans
  • Tool execution spans
  • LLM call spans with token usage
  • Auto-instrumented HTTP calls (when enableAutoInstrumentation is enabled)