Langfuse Plugin

Capture, track, and analyze agent and tool interactions with Langfuse in @iqai/adk

The Langfuse Plugin integrates @iqai/adk agents with Langfuse for robust tracing, monitoring, and observability of LLM and tool executions. It automatically records user messages, agent actions, model generations, and tool calls, including errors and token usage.

This plugin is especially useful for:

  • Observability of agent workflows
  • Debugging tool or model errors
  • Tracking model usage and token consumption
  • Correlating events across complex agent hierarchies

Key Features

  • Automatic tracing: Tracks every invocation, agent, tool, and model generation.
  • Error monitoring: Captures LLM or tool errors with full context.
  • Token and model usage tracking: Aggregates input/output tokens and models used per invocation.
  • Hierarchical spans: Supports nested agents and tool calls with parent-child relationships.
  • Flexible serialization: Converts complex Content and Event objects into structured logs for Langfuse.
  • Flush and shutdown control: Manually flush or shut down the Langfuse client when needed.

Installation

pnpm add @iqai/adk langfuse
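
The examples below read the Langfuse credentials from the LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY environment variables. If you keep them in a .env file, one option (assuming the separate dotenv package, which is not part of @iqai/adk) is to load it before constructing the plugin:

// Loads variables from .env into process.env (requires installing dotenv)
import "dotenv/config";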

Setup and Usage

You can attach the plugin to your agent using AgentBuilder or directly with LlmAgent.

import { AgentBuilder, InMemorySessionService, LangfusePlugin } from "@iqai/adk";
import { openrouter } from "@openrouter/ai-sdk-provider";
import { randomUUID } from "crypto";

const langfusePlugin = new LangfusePlugin({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  flushAt: 5,
});

const sessionService = new InMemorySessionService();

const { runner } = await AgentBuilder.withModel(
  openrouter("openai/gpt-4.1-mini"),
)
  .withDescription("Calendar scheduling assistant")
  .withInstruction("Help users manage calendar events.")
  .withTools(/* calendar tools */)
  .withSessionService(sessionService)
  .withQuickSession({
    sessionId: randomUUID(),
    appName: "calendar-agent",
  })
  .withPlugins(langfusePlugin)
  .build();

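Once built, every turn handled by the runner is traced automatically. Assuming the enhanced runner's ask() helper (as used elsewhere in the @iqai/adk examples), a traced run might look like:

const response = await runner.ask(
  "Schedule a 30 minute sync with Dana tomorrow at 10am",
);
console.log(response);

Each call is recorded in Langfuse as an invocation with nested agent, model, and tool spans.
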
Using LlmAgent directly

import { LlmAgent, LangfusePlugin } from "@iqai/adk";

const agent = new LlmAgent({
  name: "calendar_specialist",
  description: "Agent for calendar scheduling",
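  // model, instruction, and tools omitted for brevity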
  plugins: [
    new LangfusePlugin({
      publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
      secretKey: process.env.LANGFUSE_SECRET_KEY!,
    }),
  ],
});

Configuration Options

Option        | Type   | Default                         | Description
------------- | ------ | ------------------------------- | -------------------------------
name          | string | "langfuse_plugin"               | Plugin name for identification
publicKey     | string | -                               | Langfuse public API key
secretKey     | string | -                               | Langfuse secret API key
baseUrl       | string | "https://us.cloud.langfuse.com" | Langfuse API endpoint
release       | string | undefined                       | Optional release tag for traces
flushAt       | number | 1                               | Flush after this many events
flushInterval | number | 1000                            | Flush interval in milliseconds
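
For reference, a plugin that sets every option might look like the sketch below; the values are illustrative (for example, EU-hosted Langfuse projects use https://cloud.langfuse.com instead of the US default):

import { LangfusePlugin } from "@iqai/adk";

const plugin = new LangfusePlugin({
  name: "calendar_langfuse",             // defaults to "langfuse_plugin"
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  baseUrl: "https://cloud.langfuse.com", // EU endpoint instead of the US default
  release: process.env.GIT_SHA,          // tag traces with a release identifier
  flushAt: 20,                           // buffer up to 20 events per batch
  flushInterval: 5000,                   // flush at least every 5 seconds
});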

Callback Hooks

The plugin intercepts key agent events to send structured logs to Langfuse:

User messages

  • onUserMessageCallback: Records user input and triggers a user_message event.

Agent execution

  • beforeAgentCallback / afterAgentCallback: Starts and ends agent spans. Tracks input, output, models used, and sub-agent hierarchy.

Model calls

  • beforeModelCallback / afterModelCallback: Captures model request and response, including token usage and generation metadata.
  • onModelErrorCallback: Logs errors for LLM calls with full context.

Tool calls

  • beforeToolCallback / afterToolCallback: Captures tool invocation and results.
  • onToolErrorCallback: Records tool failures and metadata.

Run lifecycle

  • beforeRunCallback / afterRunCallback: Tracks overall invocation start and completion.
  • onEventCallback: Captures all events in the invocation, including function calls and final outputs.

Advanced Usage

Customizing serialization

LangfusePlugin converts complex Content and Event objects into structured logs before sending them to Langfuse. You can extend the plugin to customize this serialization:

import { LangfusePlugin } from "@iqai/adk";

class CustomLangfusePlugin extends LangfusePlugin {
  protected toPlainText(data: any): string {
    // Custom logic to extract plain text
    return super.toPlainText(data);
  }
}
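
Because the subclass inherits the constructor, it is a drop-in replacement for LangfusePlugin wherever the base plugin is accepted, for example with withPlugins():

const customPlugin = new CustomLangfusePlugin({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
});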

Manual flush and shutdown

By default, the Langfuse client batches events and flushes them automatically based on the flushAt and flushInterval configuration options. In some scenarios, however, you may want explicit control over when data is sent or when the client is shut down.

flush()

await langfusePlugin.flush();

Forces all buffered traces, spans, and events to be sent to Langfuse immediately.

Use this when:

  • You are running in short-lived environments (e.g. serverless functions, scripts, or cron jobs)
  • You want to ensure all telemetry is delivered before continuing execution
  • You need deterministic delivery for testing or debugging

Internally, this calls flushAsync() on the Langfuse client and does not shut it down, so the plugin can continue recording events afterward.

close()

await langfusePlugin.close();

Gracefully shuts down the Langfuse client after flushing any remaining buffered data.

Use this when:

  • Your application or worker is terminating
  • You no longer plan to record any additional traces or events
  • You want to release resources and prevent further writes

Once close() is called, the plugin should be considered inactive.

In long-running applications (e.g. servers), you typically do not need to call either method manually. In short-lived processes, a common pattern is:

await langfusePlugin.flush();
await langfusePlugin.close();

This ensures all telemetry is delivered before the process exits.
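
Wrapping the work in try/finally guarantees delivery even when the run throws. A minimal sketch for a one-off script, where the ask() call stands in for whatever work your process does:

async function main() {
  try {
    await runner.ask("Summarize my meetings for today");
  } finally {
    // Deliver buffered telemetry and release the client before the process exits.
    await langfusePlugin.flush();
    await langfusePlugin.close();
  }
}

main().catch(console.error);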

Benefits

  • Centralized observability for multi-agent workflows
  • Debugging complex tool and LLM interactions
  • Token and model usage analytics
  • Error tracking with detailed context
  • Support for nested agent and tool spans