LangChain (Python)

Overview

Floopy is a zero-SDK gateway. The langchain-openai package already supports custom base URLs, so you can route all LangChain requests through Floopy without any extra dependencies. You get caching, rate limiting, fallbacks, and observability for free.

Installation

pip install langchain-openai

Configuration

import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://api.floopy.ai/v1",
    api_key=os.environ["FLOOPY_API_KEY"],  # starts with fp_
    model="gpt-4o",
)

Set FLOOPY_API_KEY in your environment. You can create a key in the dashboard.

Basic Request

response = llm.invoke("Explain quantum computing in one sentence.")
print(response.content)

Switch providers by changing the model name:

llm_anthropic = ChatOpenAI(
    base_url="https://api.floopy.ai/v1",
    api_key=os.environ["FLOOPY_API_KEY"],
    model="claude-sonnet-4-20250514",
)
response = llm_anthropic.invoke("Hello!")

Streaming

for chunk in llm.stream("Write a short poem about AI."):
    print(chunk.content, end="")
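If you want the complete text as well as incremental output, you can accumulate chunks as they arrive. A minimal sketch; the collect_stream helper is hypothetical and works with any iterable of chunk-like objects that expose a .content attribute, such as the chunks yielded by llm.stream above.

```python
def collect_stream(chunks):
    """Print each chunk's content as it arrives and return the full text (hypothetical helper)."""
    parts = []
    for chunk in chunks:
        text = getattr(chunk, "content", str(chunk))
        print(text, end="", flush=True)
        parts.append(text)
    return "".join(parts)


# Usage with the llm defined above:
# full_text = collect_stream(llm.stream("Write a short poem about AI."))
```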

Custom Headers

Pass Floopy-specific headers using default_headers:

llm = ChatOpenAI(
    base_url="https://api.floopy.ai/v1",
    api_key=os.environ["FLOOPY_API_KEY"],
    model="gpt-4o",
    default_headers={
        "Floopy-Cache": "semantic",
        "floopy-property-environment": "production",
        "floopy-property-feature": "chat",
        "floopy-fallback": "claude-sonnet-4-20250514",
    },
)
Header                 Description
Floopy-Cache           Cache strategy: semantic or exact
floopy-property-*      Attach custom metadata for filtering in the dashboard
floopy-fallback        Fallback model if the primary provider fails
floopy-session-id      Group related requests into a session
floopy-user-id         Associate requests with an end user

See the Headers Reference for the full list.
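If you build clients with different header combinations in several places, a small helper can keep the header names consistent. This is a sketch under the header names listed above; the build_floopy_headers function itself is hypothetical, not part of Floopy or LangChain.

```python
def build_floopy_headers(cache=None, fallback=None, session_id=None, user_id=None, **properties):
    """Assemble a default_headers dict for ChatOpenAI (hypothetical helper).

    Keyword arguments beyond the named ones become floopy-property-* headers,
    with underscores in the key converted to hyphens.
    """
    headers = {}
    if cache:
        headers["Floopy-Cache"] = cache
    if fallback:
        headers["floopy-fallback"] = fallback
    if session_id:
        headers["floopy-session-id"] = session_id
    if user_id:
        headers["floopy-user-id"] = user_id
    for key, value in properties.items():
        headers[f"floopy-property-{key.replace('_', '-')}"] = value
    return headers


# Usage:
# llm = ChatOpenAI(
#     base_url="https://api.floopy.ai/v1",
#     api_key=os.environ["FLOOPY_API_KEY"],
#     model="gpt-4o",
#     default_headers=build_floopy_headers(cache="semantic", environment="production"),
# )
```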