Observability for OpenAI SDK (Python)

If you use the OpenAI Python SDK, you can use the Langfuse drop-in replacement to get full logging by changing only the import.

- import openai
+ from langfuse.openai import openai

Langfuse automatically tracks:

  • All prompts/completions with support for streaming, async and functions
  • Latencies
  • API errors
  • Model usage (tokens) and cost (USD)

[Screenshot: a traced completion in the Langfuse Console]

Currently only Python is supported by this integration. If you are interested in an integration with the Node.js/TypeScript OpenAI SDK, add your upvote/comments to this issue.

How it works

Install Langfuse SDK

The integration is compatible with OpenAI SDK versions >=0.27.8. It supports async functions and streaming for OpenAI SDK versions >=1.0.0.

pip install langfuse openai

Switch to Langfuse Wrapped OpenAI SDK

Add Langfuse credentials to your environment variables

.env
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_HOST="https://cloud.langfuse.com" # 🇪🇺 EU region
# LANGFUSE_HOST="https://us.cloud.langfuse.com" # 🇺🇸 US region
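
If you keep these credentials in a .env file, one way to load them is with python-dotenv before the first Langfuse import (a minimal sketch; python-dotenv is an assumption, not a requirement, and any mechanism that sets the environment variables works):

# pip install python-dotenv  (assumed helper, not required by Langfuse)
from dotenv import load_dotenv

load_dotenv()  # reads .env and populates the process environment

from langfuse.openai import openai  # picks up the LANGFUSE_* variables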

Change import

- import openai
+ from langfuse.openai import openai

Optional: check that the SDK is correctly connected to the Langfuse server. Not recommended for production usage.

openai.langfuse_auth_check()

Use OpenAI SDK as usual

No changes required.
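
For example, a regular chat completion call is traced automatically (a minimal sketch; model and prompt are illustrative):

from langfuse.openai import openai

completion = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(completion.choices[0].message.content)  # the call is logged to Langfuse in the background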

Check out the notebook for end-to-end examples of the integration.

Troubleshooting

Queuing and batching of events

The Langfuse SDKs queue and batch events in the background to reduce the number of network requests and improve overall performance. In a long-running application, this works without any additional configuration.

If you are running a short-lived application, you need to flush the Langfuse queue to ensure that all events are sent before the application exits.

openai.flush_langfuse()

Learn more about queuing and batching of events here.
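
For example, a short-lived script would end with a flush (a minimal sketch; the prompt is illustrative):

from langfuse.openai import openai

completion = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "1 + 1 = "}],
)
print(completion.choices[0].message.content)

# ensure all queued events are sent before the process exits
openai.flush_langfuse()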

Debug mode

If you are having issues with the integration, you can enable debug mode to get more information about the requests and responses.

openai.langfuse_debug = True

Advanced usage

Custom trace properties

You can pass the following properties to openai method calls, e.g. openai.chat.completions.create(), to use additional Langfuse features:

Property                Description
name                    Set name to identify a specific type of generation.
metadata                Set metadata with additional information that you want to see in Langfuse.
session_id              The current session.
user_id                 The current user_id.
tags                    Set tags to categorize and filter traces.
trace_id                See "Interoperability with Langfuse Python SDK" (below) for more details.
parent_observation_id   See "Interoperability with Langfuse Python SDK" (below) for more details.

Example:

openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
      {"role": "system", "content": "You are a very accurate calculator. You output only the result of the calculation."},
      {"role": "user", "content": "1 + 1 = "}],
    name="test-chat",
    metadata={"someMetadataKey": "someValue"},
)
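
The remaining properties work the same way; a sketch with illustrative values for session_id, user_id, and tags:

openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "1 + 1 = "}],
    name="test-chat",
    session_id="session-1234",    # illustrative value
    user_id="user-5678",          # illustrative value
    tags=["calculator", "test"],  # illustrative values
)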

Use Traces

Langfuse Tracing groups multiple observations (any LLM or non-LLM call) into a single trace. By default, this integration creates a single trace for each openai call. Creating traces yourself lets you:

  • Add non-OpenAI related observations to the trace.
  • Group multiple OpenAI calls into a single trace while customizing the trace.
  • Have more control over the trace structure.
  • Use all Langfuse Tracing features.

New to Langfuse Tracing? Check out this introduction to the basic concepts.

You can use any of the following three options:

  1. Python @observe() decorator
  2. Set the trace_id property, best if you have an existing id from your application (see the sketch after the implementation below).
  3. Use the low-level SDK to create traces manually and add OpenAI calls to it.

Desired trace structure:

TRACE: capital_poem_generator(input="Bulgaria")
|
|-- GENERATION: get-capital
|
|-- GENERATION: generate-poem

Implementation (option 1, via the @observe() decorator):

from langfuse.decorators import observe
from langfuse.openai import openai
 
@observe()
def capital_poem_generator(country):
  capital = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "What is the capital of the country?"},
        {"role": "user", "content": country}],
    name="get-capital",
  ).choices[0].message.content
 
  poem = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a poet. Create a poem about this city."},
        {"role": "user", "content": capital}],
    name="generate-poem",
  ).choices[0].message.content
  return poem
 
capital_poem_generator("Bulgaria")
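
For comparison, option 2 groups the same two calls by passing a shared trace_id (a minimal sketch; "my-trace-id" is an illustrative placeholder, typically an existing id from your application):

from langfuse.openai import openai

trace_id = "my-trace-id"  # illustrative; usually an existing id from your application

capital = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "What is the capital of the country?"},
        {"role": "user", "content": "Bulgaria"}],
    name="get-capital",
    trace_id=trace_id,  # both calls share this id, so they land in one trace
).choices[0].message.content

poem = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a poet. Create a poem about this city."},
        {"role": "user", "content": capital}],
    name="generate-poem",
    trace_id=trace_id,
).choices[0].message.content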
