



Production-grade observability and evaluation platform for LLM applications from LangChain, providing tracing, debugging, prompt evaluation, and performance monitoring for reliable LLM workflows in development and production.
LangSmith is a product created by the LangChain team, designed specifically for observability of LLM-powered applications. It adds tracing, prompt evaluation, and performance monitoring to your apps, helping developers debug faster, improve output quality, and keep LLM workflows reliable in production.
Traces let you review the full sequence of prompts, model outputs, and tool calls in a run, which is invaluable when debugging complicated chains.
Benefits:
Development: Run evaluations on curated datasets during development to compare versions, benchmark performance, and catch regressions.
Production: Monitor real user interactions in real time to detect issues and measure quality on live traffic.
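The development-time loop above can be sketched with a toy evaluator: run a curated dataset of input/expected pairs through the application and score the outputs. This is a conceptual illustration using only the standard library, not LangSmith's evaluation API; the dataset, the `app` target, and the exact-match scorer are all hypothetical.

```python
# Conceptual sketch of dataset-based evaluation (not LangSmith's API).
# The dataset, target app, and exact-match scorer are all hypothetical.

dataset = [
    {"input": "2 + 2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]

def app(prompt: str) -> str:
    # Stand-in for an LLM call; a real target would invoke a model.
    canned = {"2 + 2": "4", "capital of France": "Paris"}
    return canned.get(prompt, "")

def exact_match(output: str, expected: str) -> bool:
    return output.strip() == expected.strip()

def evaluate(dataset, target, scorer):
    # Score every example and return the fraction that pass.
    results = [scorer(target(ex["input"]), ex["expected"]) for ex in dataset]
    return sum(results) / len(results)

score = evaluate(dataset, app, exact_match)
print(f"exact-match score: {score:.2f}")
```

Running two application versions against the same dataset and comparing their scores is the regression-catching loop described above.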
Tracing provides visibility into each step of LLM chain execution and helps you identify issues quickly.
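As a rough mental model, each step of a chain (prompt rendering, model call, tool call) becomes a timed span nested under its parent, and the resulting tree is what you inspect when debugging. The stdlib-only sketch below mimics that structure; it is not LangSmith's internals, and all names in it are illustrative.

```python
import time
from contextlib import contextmanager

# Illustrative mock of a trace tree: each step records its name, nesting
# depth, and duration. Not LangSmith's actual data model.
trace_log = []

@contextmanager
def span(name, _depth=[0]):
    entry = {"name": name, "depth": _depth[0], "ms": None}
    trace_log.append(entry)  # pre-order: parents before children
    _depth[0] += 1
    start = time.perf_counter()
    try:
        yield
    finally:
        entry["ms"] = (time.perf_counter() - start) * 1000
        _depth[0] -= 1

with span("chain"):
    with span("prompt_template"):
        pass  # render the prompt
    with span("llm_call"):
        pass  # call the model
    with span("tool_call"):
        pass  # execute a tool

for e in trace_log:
    print("  " * e["depth"] + f'{e["name"]}: {e["ms"]:.2f} ms')
```

The indented printout is the flattened view of what a trace UI renders as an expandable run tree.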
Framework-agnostic: Use with LangChain, LangGraph, or custom code.
You don't need to build your agent on LangChain to take advantage of LangSmith's tracing and evaluation features.
To enable tracing, set the following environment variables:
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="your-api-key"
export LANGCHAIN_PROJECT="your-project-name"
Or configure them in Python:
import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "lsv2_..."
# Use the @traceable decorator to trace any function
from langsmith import traceable

@traceable
def my_function(input_text):
    # Your LLM logic goes here; this placeholder stands in for a model call
    result = input_text.upper()
    return result