AI Kit integrates with Langfuse to trace generate/stream calls. This guide covers dependency setup, processor initialisation, and agent-level configuration.

Dependencies

pnpm add @langfuse/otel @opentelemetry/sdk-trace-node
# or
npm install @langfuse/otel @opentelemetry/sdk-trace-node

Environment variables

LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
# optional
# LANGFUSE_BASE_URL=https://us.cloud.langfuse.com
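
Both keys are required; the base URL is optional. If you prefer to fail fast when credentials are missing rather than hit an export error later, a small guard before initialisation is enough (a sketch, not part of the SDK):

// Optional fail-fast check for Langfuse credentials
for (const name of ["LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY"]) {
  if (!process.env[name]) {
    throw new Error(`Missing environment variable: ${name}`);
  }
}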

Initialise Langfuse

// instrumentation.ts
import { ensureLangfuseTelemetry } from "@ai_kit/core";

const telemetry = await ensureLangfuseTelemetry({
  shouldExportSpan: ({ otelSpan }) =>
    otelSpan?.instrumentationScope?.name !== "next.js",
});

export const flushLangfuse = () => telemetry.flush();

  • ensureLangfuseTelemetry dynamically loads @langfuse/otel, registers a NodeTracerProvider, and returns a reusable handle.
  • autoFlush defaults to "process": beforeExit, SIGINT, and SIGTERM trigger flush()/shutdown() automatically.
  • Initialise as early as possible (server entrypoint, framework plugin, …).
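
Beyond the automatic hooks, the returned handle exposes flush() and shutdown() directly (both appear in the examples below). A quick sketch:

// Export pending spans at a request boundary without stopping the exporter
await telemetry.flush();

// Tear the provider down explicitly (tests, one-off scripts); with the
// standard OTEL batch processor this also exports any remaining spans
await telemetry.shutdown();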

Next.js example

import { after } from "next/server";
import { ensureLangfuseTelemetry } from "@ai_kit/core";

const telemetry = await ensureLangfuseTelemetry();

export async function POST(req: Request) {
  // `agent` is an Agent instance configured as in "Enable telemetry on an agent" below
  const result = await agent.stream({ /* ... */ });

  after(async () => {
    await telemetry.flush();
  });

  return result.toAIStreamResponse();
}
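
Note that Next.js expects instrumentation.ts to export a register() hook rather than run top-level code. A minimal sketch reusing the span filter from above (the NEXT_RUNTIME guard follows Next.js's instrumentation convention):

// instrumentation.ts
import { ensureLangfuseTelemetry } from "@ai_kit/core";

export async function register() {
  // Skip the edge runtime; the Node tracer provider needs Node.js
  if (process.env.NEXT_RUNTIME === "nodejs") {
    await ensureLangfuseTelemetry({
      shouldExportSpan: ({ otelSpan }) =>
        otelSpan?.instrumentationScope?.name !== "next.js",
    });
  }
}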

Enable telemetry on an agent

import { Agent } from "@ai_kit/core";
import { openai } from "@ai-sdk/openai";

const supportAgent = new Agent({
  name: "support-assistant",
  model: openai("gpt-4.1-mini"),
  telemetry: true,
});

Adjust it dynamically, or override settings for a single call:

supportAgent.withTelemetry(false);

await supportAgent.generate({
  prompt: "Create an incident ticket for user #42",
  telemetry: {
    functionId: "support-ticket",
    metadata: { ticketId: "42", priority: "high" },
    recordInputs: false,
  },
});

  • The per-call telemetry object merges into the AI SDK's experimental_telemetry settings.
  • Overrides are ignored while the agent's telemetry flag is off, unless you explicitly set experimental_telemetry.isEnabled.
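
For example, with the agent flag switched off via withTelemetry(false) above, a single call can still opt back in. A minimal sketch, assuming experimental_telemetry is accepted alongside telemetry on generate calls:

// Tracing is off at the agent level, but this call still exports spans
await supportAgent.generate({
  prompt: "Summarise ticket #42",
  experimental_telemetry: { isEnabled: true },
});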

Instrument workflows

Workflows use the same OTEL stack. See Workflow telemetry for configuration details (traceName, metadata, userId, …).
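
Purely as a hypothetical shape (only the option names come from this page; the ticketWorkflow instance and run() signature are illustrative assumptions):

// Hypothetical sketch: see the Workflow telemetry page for the real option surface
await ticketWorkflow.run({
  input: { ticketId: "42" },
  telemetry: {
    traceName: "support-ticket-flow",
    metadata: { priority: "high" },
    userId: "user-42",
  },
});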

Nitro / Nuxt example

// server/plugins/langfuse-telemetry.ts
import { ensureLangfuseTelemetry } from "@ai_kit/core";

type Handle = Awaited<ReturnType<typeof ensureLangfuseTelemetry>>;
let telemetryPromise: Promise<Handle> | undefined;

const getTelemetry = () => {
  telemetryPromise ||= ensureLangfuseTelemetry({
    shouldExportSpan: ({ otelSpan }) => {
      const scope = otelSpan?.instrumentationScope?.name;
      return scope !== "nuxt" && scope !== "nitro";
    },
  });
  return telemetryPromise;
};

export const flushLangfuse = async () => {
  const telemetry = await getTelemetry();
  await telemetry.flush();
};

export default defineNitroPlugin(async nitroApp => {
  const telemetry = await getTelemetry();

  nitroApp.hooks.hook("close", async () => {
    await telemetry.shutdown();
  });
});
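
Route handlers can then reuse the exported flushLangfuse once a response has been produced. A sketch (the handler path, request shape, and result.text are assumptions):

// server/api/chat.post.ts (sketch)
import { flushLangfuse } from "../plugins/langfuse-telemetry";

export default defineEventHandler(async event => {
  const { prompt } = await readBody(event);

  // `agent` is an AI Kit Agent instance, configured as in the agent section above
  const result = await agent.generate({ prompt });

  // Export spans before a serverless runtime freezes the process
  await flushLangfuse();

  return { text: result.text };
});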
