Sometimes, you need to trace a request across multiple services. LangSmith supports distributed tracing out of the box, linking runs within a trace across services using context propagation headers (`langsmith-trace` and optional `baggage` for metadata/tags).

Example client-server setup:
```python
# client.py
from langsmith.run_helpers import get_current_run_tree, traceable
import httpx

@traceable
async def my_client_function():
    headers = {}
    async with httpx.AsyncClient(base_url="...") as client:
        if run_tree := get_current_run_tree():
            # serialize the current run's context into the
            # langsmith-trace (and optional baggage) headers
            headers.update(run_tree.to_headers())
        return await client.post("/my-route", headers=headers)
```
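Because `my_client_function` is itself decorated with `@traceable`, `get_current_run_tree()` returns the run created for this call, and `to_headers()` serializes its context into the outgoing request. As a minimal sketch (the metadata and tag values are placeholders), metadata and tags set with `tracing_context` are inherited by the run and travel to the server in the `baggage` header:

```python
import asyncio

import langsmith as ls

async def main():
    # Placeholder metadata/tags; these ride along in the `baggage`
    # header emitted by run_tree.to_headers().
    with ls.tracing_context(metadata={"session_id": "abc-123"}, tags=["distributed"]):
        await my_client_function()

asyncio.run(main())
```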
Then the server (or another service) can continue the trace by handling these headers appropriately. If you are using an ASGI app such as Starlette or FastAPI, you can connect the distributed trace using LangSmith's `TracingMiddleware`.
The `TracingMiddleware` class was added in `langsmith==0.1.133`.
Example using FastAPI:
```python
from langsmith import traceable
from langsmith.middleware import TracingMiddleware
from fastapi import FastAPI, Request

app = FastAPI()  # Or Starlette, or any other ASGI framework
app.add_middleware(TracingMiddleware)

@traceable
async def some_function():
    ...

@app.post("/my-route")
async def fake_route(request: Request):
    return await some_function()
```
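With the middleware installed, any request arriving with `langsmith-trace` (and optional `baggage`) headers is stitched into the caller's trace automatically. A sketch of exercising it end to end, assuming the app above is served at `http://localhost:8000` (the URL and port are assumptions):

```python
# Hypothetical smoke test; start the server first, e.g.:
#   uvicorn server:app --port 8000
import asyncio

import httpx
from langsmith.run_helpers import get_current_run_tree, traceable

@traceable
async def call_my_route():
    headers = {}
    if run_tree := get_current_run_tree():
        headers.update(run_tree.to_headers())  # langsmith-trace + optional baggage
    async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
        return await client.post("/my-route", headers=headers)

asyncio.run(call_my_route())
```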
If you are using another server framework, you can always "receive" the distributed trace by passing the incoming headers to the `tracing_context` context manager:
```python
# server.py
import langsmith as ls
from fastapi import FastAPI, Request

@ls.traceable
async def my_application():
    ...

app = FastAPI()  # Or Flask, Django, or any other framework

@app.post("/my-route")
async def fake_route(request: Request):
    # request.headers: {"langsmith-trace": "..."}
    # as well as optional metadata/tags in `baggage`
    with ls.tracing_context(parent=request.headers):
        return await my_application()
```
The example above uses the `tracing_context` context manager. You can also specify the parent run context directly in the `langsmith_extra` parameter of a method wrapped with `@traceable`:
```python
# ... same as above
@app.post("/my-route")
async def fake_route(request: Request):
    # request.headers: {"langsmith-trace": "..."}
    return await my_application(langsmith_extra={"parent": request.headers})
```
In TypeScript, the server converts the headers back into a run tree, which it uses to continue the trace. To pass the newly created run tree to a traceable function, use the `withRunTree` helper, which ensures the run tree is propagated within `traceable` invocations.
```typescript
// server.mts
import { RunTree } from "langsmith";
import { traceable, withRunTree } from "langsmith/traceable";
import express from "express";
import bodyParser from "body-parser";

const server = traceable(
  (text: string) => `Hello from the server! Received "${text}"`,
  { name: "server" }
);

const app = express();

app.use(bodyParser.text());

app.post("/", async (req, res) => {
  // Reconstruct the run tree from the incoming langsmith-trace headers
  const runTree = RunTree.fromHeaders(req.headers);
  // Propagate the run tree into the traceable invocation
  const result = await withRunTree(runTree, () => server(req.body));
  res.send(result);
});
```
In Java, distributed tracing uses the W3C Trace Context standard (`traceparent` and `tracestate` headers) via OpenTelemetry's context propagation API. The trace context is propagated across service boundaries when you inject it into outgoing requests and extract it from incoming ones using `GlobalOpenTelemetry.getPropagators()`.
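The inject/extract flow looks like this (sketched in Python with `opentelemetry-api`/`opentelemetry-sdk` to match the rest of this page; the Java SDK's `GlobalOpenTelemetry.getPropagators()` exposes the equivalent operations, and the tracer name here is a placeholder):

```python
# Sketch of W3C Trace Context propagation via OpenTelemetry propagators.
from opentelemetry import propagate, trace
from opentelemetry.sdk.trace import TracerProvider

# A real tracer provider must be configured; otherwise spans are
# no-ops and nothing is injected into the headers.
trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer("distributed-tracing-sketch")

# Client side: inject the active span context into outgoing HTTP headers.
with tracer.start_as_current_span("client-call"):
    headers: dict[str, str] = {}
    propagate.inject(headers)  # writes "traceparent" (and "tracestate" if set)

# Server side: extract the context from the incoming headers and open
# child spans under it, continuing the same distributed trace.
ctx = propagate.extract(headers)
with tracer.start_as_current_span("server-handler", context=ctx):
    pass  # spans created here share the client's trace ID
```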