OpenAI

Overview

OpenAI provides the GPT series of large language models, including GPT-4o and the reasoning-focused o1/o3 families. Floopy proxies requests to OpenAI’s Chat Completions API natively with no format translation required.

Supported Models

| Model | Context Window | Notes |
| --- | --- | --- |
| o3-pro | 200K | Reasoning model, supports reasoning effort header |
| o3 | 200K | Reasoning model, supports reasoning effort header |
| o3-mini | 200K | Reasoning model, supports reasoning effort header |
| o1-pro | 200K | Reasoning model, supports reasoning effort header |
| o1 | 200K | Reasoning model, supports reasoning effort header |
| o1-mini | 128K | Reasoning model, supports reasoning effort header |
| gpt-4.5-preview | 128K | Latest preview model |
| gpt-4o | 128K | Flagship multimodal model |
| gpt-4o-mini | 128K | Fast and affordable |
| gpt-4-turbo | 128K | Previous generation turbo |
| gpt-4 | 8K | Original GPT-4 |
| gpt-3.5-turbo | 16K | Legacy, cost-effective |

Setup

  1. Go to Settings > Providers in the dashboard.
  2. Click Add provider and select OpenAI.
  3. Paste your OpenAI API key and click Save.
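
The examples below authenticate against Floopy (not OpenAI directly) with a key read from an environment variable. A minimal sketch, assuming your Floopy key is exported as `FLOOPY_API_KEY`:

```sh
# Export your Floopy API key so the SDK and curl examples below can read it.
export FLOOPY_API_KEY="..."
```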

Usage

```ts
import OpenAI from "openai";

// Point the official OpenAI SDK at Floopy's proxy endpoint.
const client = new OpenAI({
  baseURL: "https://api.floopy.ai/v1",
  apiKey: process.env.FLOOPY_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Explain quantum computing." }],
});

console.log(response.choices[0].message.content);
```

Provider-Specific Features

  • Reasoning effort — Set the x-floopy-reasoning-effort header to low, medium, or high when using o1/o3 models to control how much reasoning the model performs.
  • Fine-tuned models — Use the ft: prefix in the model name (e.g. ft:gpt-4o-mini:my-org:my-model:abc123) to route to your fine-tuned models.
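
For example, a request that pins an o3-family model to high reasoning effort might look like the following sketch (the header name and model name come from the bullet and table above; the prompt is illustrative):

```sh
curl https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "x-floopy-reasoning-effort: high" \
  -H "Content-Type: application/json" \
  -d '{"model": "o3-mini", "messages": [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]}'
```

The same request shape works for fine-tuned models: replace the `model` value with an `ft:`-prefixed name such as `ft:gpt-4o-mini:my-org:my-model:abc123`.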

Fallback

Route to Anthropic if OpenAI is unavailable by setting the fallback headers:

```sh
curl https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "x-floopy-fallback-provider: anthropic" \
  -H "x-floopy-fallback-model: claude-sonnet-4-6" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
```