Context
You want to trace your Vercel AI SDK calls (such as generateText and streamText) in LangSmith to monitor performance and costs and to debug your AI applications. There are multiple approaches available depending on your AI SDK version and requirements.
Answer
There are two main approaches to trace Vercel AI SDK calls with LangSmith, depending on your setup and preferences:
Option 1: Using wrapAISDK (Recommended for AI SDK v5)
This is the simplest and most reliable approach for AI SDK v5:
1. Install the latest LangSmith SDK:

```bash
npm install langsmith@latest
```

2. Set up your environment variables:

```bash
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=your_api_key
export LANGSMITH_PROJECT=your_project_name
```

3. Wrap the AI SDK in your code:

```typescript
import { wrapAISDK } from "langsmith/experimental/vercel";
import * as ai from "ai";
import { Client } from "langsmith";
import { openai } from "@ai-sdk/openai";

const client = new Client();

// Wrap the AI SDK functions
const { generateText, streamText } = wrapAISDK(ai, { client });

// Use the wrapped functions normally
const result = await generateText({
  model: openai("gpt-4"),
  prompt: "Hello, world!",
});

// Ensure traces are sent before the process exits
await client.awaitPendingTraceBatches();
```
Option 2: Using OpenTelemetry Integration
For more complex setups or when you need OpenTelemetry compatibility:
1. Create an `instrumentation.ts` file in your project root:

```typescript
import { registerOTel } from '@vercel/otel';
import { initializeOTEL } from 'langsmith/experimental/otel/setup';

const { DEFAULT_LANGSMITH_SPAN_PROCESSOR } = initializeOTEL();

export function register() {
  registerOTel({
    serviceName: 'your-project-name',
    spanProcessors: [DEFAULT_LANGSMITH_SPAN_PROCESSOR],
  });
}
```

2. In your route files, initialize OTEL and use the AI SDK with telemetry enabled:

```typescript
import { initializeOTEL } from "langsmith/experimental/otel/setup";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

initializeOTEL();

export async function POST(request: Request) {
  const result = await generateText({
    model: openai("gpt-4"),
    prompt: "Hello, world!",
    experimental_telemetry: { isEnabled: true },
  });
  return Response.json(result);
}
```
Grouping Multiple AI Calls
To group multiple AI SDK calls under a single trace, wrap them with `traceable`:

```typescript
import { traceable } from "langsmith/traceable";

const myWorkflow = traceable(async (input: string) => {
  const result1 = await generateText({
    model: openai("gpt-4"),
    prompt: input,
    experimental_telemetry: { isEnabled: true },
  });
  const result2 = await generateText({
    model: openai("gpt-4"),
    prompt: result1.text,
    experimental_telemetry: { isEnabled: true },
  });
  return result2;
}, { name: "my-workflow" });

await myWorkflow("Hello, world!");
```

Common Issues and Solutions
Missing token counts or timing data: Ensure you're using the latest LangSmith SDK version and that `LANGSMITH_TRACING=true` is set.

Can't add traces to the playground or datasets: This is a known limitation with some AI SDK integrations. The `wrapAISDK` approach provides better compatibility.

Images not displaying in traces: This has been fixed in recent versions. Update to the latest LangSmith SDK.

Environment variables not loading: If you set environment variables dynamically at runtime, make sure they are in place before any tracing code runs; adding a small delay after setting them can help.
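When environment variables are the suspect, a fail-fast check at startup can save debugging time. The helper below is an illustrative sketch, not part of the LangSmith API; the variable names come from the setup steps above, and the function name is an assumption:

```typescript
// Hypothetical helper: report which required LangSmith env vars are unset or empty.
const REQUIRED_VARS = ["LANGSMITH_TRACING", "LANGSMITH_API_KEY"] as const;

function missingLangSmithEnv(
  env: Record<string, string | undefined>
): string[] {
  // Keep only the names whose value is missing or empty.
  return REQUIRED_VARS.filter((name) => !env[name]);
}

// Run this before initializing any tracing code.
const missing = missingLangSmithEnv(process.env);
console.log(
  missing.length === 0
    ? "LangSmith env ok"
    : `Missing LangSmith env vars: ${missing.join(", ")}`
);
```

Calling this before `new Client()` or `initializeOTEL()` makes a missing variable an obvious startup message instead of a silently empty trace view.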
For the most reliable experience, we recommend using AI SDK v5 with the wrapAISDK approach, as it provides better compatibility and fewer edge cases than the OpenTelemetry integration.