# OpenAI

## Overview
OpenAI provides the GPT series of large language models, including GPT-4o and the reasoning-focused o1/o3 families. Floopy proxies requests to OpenAI’s Chat Completions API natively with no format translation required.
## Supported Models

| Model | Context Window | Notes |
|---|---|---|
| o3-pro | 200K | Reasoning model; supports `x-floopy-reasoning-effort` |
| o3 | 200K | Reasoning model; supports `x-floopy-reasoning-effort` |
| o3-mini | 200K | Reasoning model; supports `x-floopy-reasoning-effort` |
| o1-pro | 200K | Reasoning model; supports `x-floopy-reasoning-effort` |
| o1 | 200K | Reasoning model; supports `x-floopy-reasoning-effort` |
| o1-mini | 128K | Reasoning model; supports `x-floopy-reasoning-effort` |
| gpt-4.5-preview | 128K | Latest preview model |
| gpt-4o | 128K | Flagship multimodal model |
| gpt-4o-mini | 128K | Fast and affordable |
| gpt-4-turbo | 128K | Previous-generation turbo |
| gpt-4 | 8K | Original GPT-4 |
| gpt-3.5-turbo | 16K | Legacy; cost-effective |
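A rough pre-flight check against these limits can be encoded directly from the table. This is an illustrative sketch, not part of any Floopy SDK: the numbers below simply mirror the rounded values above, and real token counts must come from a tokenizer such as `tiktoken`.

```python
# Context window sizes (tokens), mirroring the table above.
# These are rounded documentation values, not fetched from the API.
CONTEXT_WINDOWS = {
    "o3-pro": 200_000,
    "o3": 200_000,
    "o3-mini": 200_000,
    "o1-pro": 200_000,
    "o1": 200_000,
    "o1-mini": 128_000,
    "gpt-4.5-preview": 128_000,
    "gpt-4o": 128_000,
    "gpt-4o-mini": 128_000,
    "gpt-4-turbo": 128_000,
    "gpt-4": 8_000,
    "gpt-3.5-turbo": 16_000,
}


def fits_context(model: str, prompt_tokens: int, reserve_for_output: int = 1_024) -> bool:
    """Return True if a prompt of prompt_tokens fits the model's window,
    leaving reserve_for_output tokens of headroom for the completion."""
    window = CONTEXT_WINDOWS.get(model)
    if window is None:
        raise KeyError(f"unknown model: {model}")
    return prompt_tokens + reserve_for_output <= window
```

For example, a 100K-token prompt fits gpt-4o's 128K window but would overflow gpt-4's 8K window.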
## Setup

1. Go to **Settings > Providers** in the dashboard.
2. Click **Add provider** and select **OpenAI**.
3. Paste your OpenAI API key and click **Save**.
## Usage

```ts
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.floopy.ai/v1",
  apiKey: process.env.FLOOPY_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Explain quantum computing." }],
});
```

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.floopy.ai/v1",
    api_key=os.environ["FLOOPY_API_KEY"],
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain quantum computing."}],
)
```

```bash
curl https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Explain quantum computing."}]}'
```

## Provider-Specific Features
- **Reasoning effort**: Set the `x-floopy-reasoning-effort` header to `low`, `medium`, or `high` when using o1/o3 models to control how much reasoning the model performs.
- **Fine-tuned models**: Use the `ft:` prefix in the model name (e.g. `ft:gpt-4o-mini:my-org:my-model:abc123`) to route to your fine-tuned models.
## Fallback

Route to Anthropic if OpenAI is unavailable by setting the fallback headers:

```bash
curl https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "x-floopy-fallback-provider: anthropic" \
  -H "x-floopy-fallback-model: claude-sonnet-4-6" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
```
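The same headers can be attached from the OpenAI Python SDK via `extra_headers`. A minimal sketch, using the header names from the curl example above (the helper name is illustrative):

```python
# Sketch: assemble Floopy fallback headers for use with extra_headers.
# Header names are taken from the curl example above.


def fallback_headers(provider: str, model: str) -> dict:
    """Headers telling Floopy where to retry if the primary provider is down."""
    return {
        "x-floopy-fallback-provider": provider,
        "x-floopy-fallback-model": model,
    }


headers = fallback_headers("anthropic", "claude-sonnet-4-6")
```

Pass the result to any call, e.g. `client.chat.completions.create(model="gpt-4o", messages=messages, extra_headers=headers)`.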