
Integration

Overview

Floopy is compatible with the OpenAI SDK. To start routing requests through the gateway, change your baseURL to https://api.floopy.ai/v1 and use your Floopy API key. No additional SDK or library is required.

Quick Start

import { OpenAI } from "openai";

const client = new OpenAI({
  baseURL: "https://api.floopy.ai/v1",
  apiKey: process.env.FLOOPY_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Explain quantum computing in one sentence." }],
});

console.log(response.choices[0].message.content);

Session Tracking

Track conversations across multiple requests by passing a session ID in the floopy-session-id header. You can also set floopy-session-name and floopy-session-path for richer context. These group related requests together in the dashboard logs, making it easy to follow a full conversation flow.

const response = await client.chat.completions.create(
  {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    headers: {
      "floopy-session-id": "session_abc123",
      "floopy-session-name": "Onboarding Chat",
      "floopy-session-path": "/app/onboarding",
    },
  },
);
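Since every turn of a conversation must carry the same session id, it can help to build the headers once and reuse them. A minimal sketch, assuming a small helper of your own (the function name is illustrative, not part of any SDK):

```typescript
// Illustrative helper: build the Floopy session headers once, then pass
// the same object to every request in the conversation so all turns are
// grouped under one session in the dashboard logs.
function sessionHeaders(id: string, name?: string, path?: string): Record<string, string> {
  const headers: Record<string, string> = { "floopy-session-id": id };
  if (name) headers["floopy-session-name"] = name;
  if (path) headers["floopy-session-path"] = path;
  return headers;
}

const headers = sessionHeaders("session_abc123", "Onboarding Chat", "/app/onboarding");

// Pass the same object as request options on every turn:
// await client.chat.completions.create({ model, messages }, { headers });
```

Reusing one headers object avoids typos in the session id, which would otherwise split one conversation into several sessions in the logs.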

Project Tracking

Segment requests by project by passing the floopy-project-id header. This tags the request with a specific project for per-project cost tracking, dashboards, and analytics. If your API key is hard-locked to a project, this header is optional — the locked project is used automatically.

const response = await client.chat.completions.create(
  {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    headers: {
      "floopy-project-id": "a1b2c3d4-5678-9abc-def0-123456789abc",
    },
  },
);

See the Projects feature guide for fallback chain details, per-project API keys, and environment model.

User Tracking

Use the floopy-user-id header (or the OpenAI user field) to associate requests with a specific end user. This appears in your dashboard logs and helps with per-user analytics and abuse detection.

const response = await client.chat.completions.create(
  {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    headers: {
      "floopy-user-id": "user_12345",
    },
  },
);
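As noted above, the standard OpenAI `user` field works as well, so no custom header is needed. A minimal sketch:

```typescript
// Alternative: set the standard OpenAI `user` field in the request body
// instead of the floopy-user-id header.
const params = {
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
  user: "user_12345",
};

// const response = await client.chat.completions.create(params);
```

The body field travels with the request even through tooling that strips custom headers, which can make it the more robust choice in layered setups.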

Custom Properties

Attach arbitrary metadata to requests using individual floopy-property-* headers. Each header follows the pattern floopy-property-<name>: <value>. You can add as many properties as you need.

const response = await client.chat.completions.create(
  {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    headers: {
      "floopy-property-environment": "production",
      "floopy-property-feature": "chat-widget",
      "floopy-property-version": "2.1.0",
      "floopy-property-usertier": "premium",
    },
  },
);

These properties are searchable and filterable in the dashboard logs.
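When the same properties are attached to many requests, a small helper can map a plain object onto the `floopy-property-<name>` pattern. A sketch, assuming a helper of your own (the function name is illustrative):

```typescript
// Illustrative helper: convert a plain object into floopy-property-*
// headers, following the floopy-property-<name>: <value> pattern.
function propertyHeaders(props: Record<string, string>): Record<string, string> {
  const headers: Record<string, string> = {};
  for (const [name, value] of Object.entries(props)) {
    headers[`floopy-property-${name.toLowerCase()}`] = value;
  }
  return headers;
}

const headers = propertyHeaders({ environment: "production", version: "2.1.0" });
// headers is { "floopy-property-environment": "production",
//              "floopy-property-version": "2.1.0" }
```

Centralizing the mapping keeps property names consistent, which matters because filtering in the dashboard is only useful when the same property is spelled the same way everywhere.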

Switching Providers

Because Floopy translates all requests into a unified format, you can switch providers by changing the model name. No other code changes are needed:

const messages = [{ role: "user", content: "Hello" }];

// Use OpenAI
const a = await client.chat.completions.create({
  model: "gpt-4o",
  messages,
});

// Use Anthropic
const b = await client.chat.completions.create({
  model: "claude-3-5-sonnet-20241022",
  messages,
});

// Use Google Gemini
const c = await client.chat.completions.create({
  model: "gemini-2.5-pro",
  messages,
});

Make sure the corresponding provider is configured in Settings > Providers. See the Providers guide for setup instructions.

Model Override

Use the floopy-model-override header to override the model specified in the request body. This lets you change which model handles the request without modifying your application code.

import { OpenAI } from "openai";

const client = new OpenAI({
  baseURL: "https://api.floopy.ai/v1",
  apiKey: process.env.FLOOPY_API_KEY,
});

// Request body says gpt-4o, but the gateway will use claude-3-5-sonnet-20241022
const response = await client.chat.completions.create(
  {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    headers: {
      "floopy-model-override": "claude-3-5-sonnet-20241022",
    },
  },
);

console.log(response.choices[0].message.content);

Routing Rule Override

Use the floopy-routing-rule header to override the default routing configuration for a request. This directs the request to a specific routing rule you have configured in the dashboard.

const response = await client.chat.completions.create(
  {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    headers: {
      "floopy-routing-rule": "low-latency-us-east",
    },
  },
);

Response Headers

The gateway includes informational headers in every response. These tell you which provider and model handled the request, and whether a fallback was used.

Header                Description
Floopy-Provider       The provider that handled the request (e.g. openai, anthropic, google)
Floopy-Model          The model that was used (e.g. gpt-4o, claude-3-5-sonnet-20241022)
Floopy-Fallback-Used  "true" if the primary provider failed and a fallback provider handled the request

const res = await fetch("https://api.floopy.ai/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.FLOOPY_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  }),
});

console.log("Provider:", res.headers.get("Floopy-Provider"));
console.log("Model:", res.headers.get("Floopy-Model"));
console.log("Fallback Used:", res.headers.get("Floopy-Fallback-Used"));

const data = await res.json();
console.log(data.choices[0].message.content);
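Raw fetch is not required to read these headers: recent versions of the openai Node SDK (v4+) expose `.withResponse()`, which returns the parsed data alongside the raw `Response`. A sketch, with a small helper of our own (the `floopyInfo` name is illustrative) that extracts the three gateway headers:

```typescript
// Illustrative helper: pull the three Floopy gateway headers out of any
// standard Headers object.
function floopyInfo(headers: Headers) {
  return {
    provider: headers.get("Floopy-Provider"),
    model: headers.get("Floopy-Model"),
    fallbackUsed: headers.get("Floopy-Fallback-Used") === "true",
  };
}

// With the openai SDK, .withResponse() yields the raw Response as well:
// const { data, response } = await client.chat.completions
//   .create({ model: "gpt-4o", messages: [{ role: "user", content: "Hello" }] })
//   .withResponse();
// console.log(floopyInfo(response.headers));
```

Logging `fallbackUsed` on every response is a cheap way to notice when your primary provider is degrading before it shows up in user-facing latency.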

Streaming

Floopy supports streaming responses. Use the stream parameter as you normally would:

const stream = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a short poem." }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}