# Integration

## Overview

Floopy is compatible with the OpenAI SDK. To start routing requests through the gateway, change your baseURL to https://api.floopy.ai/v1 and use your Floopy API key. No additional SDK or library is required.
## Quick Start

```typescript
import { OpenAI } from "openai";

const client = new OpenAI({
  baseURL: "https://api.floopy.ai/v1",
  apiKey: process.env.FLOOPY_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Explain quantum computing in one sentence." }],
});

console.log(response.choices[0].message.content);
```

```python
from openai import OpenAI
import os

client = OpenAI(
    base_url="https://api.floopy.ai/v1",
    api_key=os.environ["FLOOPY_API_KEY"],
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain quantum computing in one sentence."}],
)

print(response.choices[0].message.content)
```

```bash
curl https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Explain quantum computing in one sentence."}]
  }'
```

## Session Tracking
Track conversations across multiple requests by passing a session ID in the floopy-session-id header. You can also set floopy-session-name and floopy-session-path for richer context. These headers group related requests in the dashboard logs, making it easy to follow a complete conversation flow.
```typescript
const response = await client.chat.completions.create(
  {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    headers: {
      "floopy-session-id": "session_abc123",
      "floopy-session-name": "Onboarding Chat",
      "floopy-session-path": "/app/onboarding",
    },
  },
);
```

```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={
        "floopy-session-id": "session_abc123",
        "floopy-session-name": "Onboarding Chat",
        "floopy-session-path": "/app/onboarding",
    },
)
```

```bash
curl https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "Content-Type: application/json" \
  -H "floopy-session-id: session_abc123" \
  -H "floopy-session-name: Onboarding Chat" \
  -H "floopy-session-path: /app/onboarding" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

## Project Tracking
Segment requests by project by sending the floopy-project-id header. This tags the request with a specific project for per-project cost tracking, dashboards, and analytics. If your API key is locked to a project, this header is optional; the locked project is used automatically.
```typescript
const response = await client.chat.completions.create(
  {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    headers: {
      "floopy-project-id": "a1b2c3d4-5678-9abc-def0-123456789abc",
    },
  },
);
```

```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={
        "floopy-project-id": "a1b2c3d4-5678-9abc-def0-123456789abc",
    },
)
```

```bash
curl https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "Content-Type: application/json" \
  -H "floopy-project-id: a1b2c3d4-5678-9abc-def0-123456789abc" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

```go
req, _ := http.NewRequest("POST", "https://api.floopy.ai/v1/chat/completions", body)
req.Header.Set("Authorization", "Bearer "+os.Getenv("FLOOPY_API_KEY"))
req.Header.Set("Content-Type", "application/json")
req.Header.Set("floopy-project-id", "a1b2c3d4-5678-9abc-def0-123456789abc")

resp, err := http.DefaultClient.Do(req)
```

See the Projects guide for details on the fallback chain, per-project API keys, and the environments model.
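The project fallback described above can be pictured as a simple precedence rule. The sketch below is purely illustrative: the `resolve_project` helper is ours, and the assumption that an explicit header wins over a key-locked project is ours too, not a documented guarantee of the gateway.

```python
def resolve_project(header_project, key_locked_project):
    """Illustrative precedence sketch: use an explicit floopy-project-id
    header when present; otherwise fall back to the project the API key
    is locked to (if any)."""
    return header_project or key_locked_project

# With a key locked to a project, omitting the header still resolves one:
print(resolve_project(None, "proj_locked"))    # proj_locked
print(resolve_project("proj_explicit", None))  # proj_explicit
```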
## User Tracking
Use the floopy-user-id header (or OpenAI's `user` field) to associate requests with a specific end user. This appears in the dashboard logs and helps with per-user analytics and abuse detection.
```typescript
const response = await client.chat.completions.create(
  {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    headers: {
      "floopy-user-id": "user_12345",
    },
  },
);
```

```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={
        "floopy-user-id": "user_12345",
    },
)
```

```bash
curl https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "Content-Type: application/json" \
  -H "floopy-user-id: user_12345" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

## Custom Properties
Attach arbitrary metadata to requests using individual floopy-property-* headers. Each header follows the pattern `floopy-property-<name>: <value>`. You can add as many properties as you need.
```typescript
const response = await client.chat.completions.create(
  {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    headers: {
      "floopy-property-environment": "production",
      "floopy-property-feature": "chat-widget",
      "floopy-property-version": "2.1.0",
      "floopy-property-usertier": "premium",
    },
  },
);
```

```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={
        "floopy-property-environment": "production",
        "floopy-property-feature": "chat-widget",
        "floopy-property-version": "2.1.0",
        "floopy-property-usertier": "premium",
    },
)
```

```bash
curl https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "Content-Type: application/json" \
  -H "floopy-property-environment: production" \
  -H "floopy-property-feature: chat-widget" \
  -H "floopy-property-version: 2.1.0" \
  -H "floopy-property-usertier: premium" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

These properties are searchable and filterable in the dashboard logs.
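Since each property is its own header, it can be convenient to build the header map from a plain dict. The `property_headers` helper below is a hypothetical convenience of ours, not part of any Floopy SDK; it only assumes the documented `floopy-property-<name>: <value>` pattern.

```python
def property_headers(props):
    """Hypothetical helper: turn {"environment": "production"} into
    {"floopy-property-environment": "production"}."""
    return {f"floopy-property-{name}": str(value) for name, value in props.items()}

headers = property_headers({"environment": "production", "version": "2.1.0"})
# The result can be passed as `extra_headers` (Python SDK) or `headers` (TypeScript SDK).
```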
## Switching Providers

Because Floopy translates every request into a unified format, you can switch providers just by changing the model name. No other code changes are required:
```typescript
// Use OpenAI
const a = await client.chat.completions.create({
  model: "gpt-4o",
  messages,
});

// Use Anthropic
const b = await client.chat.completions.create({
  model: "claude-3-5-sonnet-20241022",
  messages,
});

// Use Google Gemini
const c = await client.chat.completions.create({
  model: "gemini-2.5-pro",
  messages,
});
```

```python
# Use OpenAI
a = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
)

# Use Anthropic
b = client.chat.completions.create(
    model="claude-3-5-sonnet-20241022",
    messages=messages,
)

# Use Google Gemini
c = client.chat.completions.create(
    model="gemini-2.5-pro",
    messages=messages,
)
```

```bash
# Use OpenAI
curl https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'

# Use Anthropic
curl https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "claude-3-5-sonnet-20241022", "messages": [{"role": "user", "content": "Hello"}]}'

# Use Google Gemini
curl https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gemini-2.5-pro", "messages": [{"role": "user", "content": "Hello"}]}'
```

Make sure the corresponding provider is configured under Settings > Providers. See the Providers guide for setup instructions.
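Because only the model name changes between providers, a retry-across-models wrapper falls out naturally on the application side. This is a sketch of client logic, not a Floopy feature; `create` stands in for any callable with the `chat.completions.create` signature.

```python
def complete_with_fallback(create, models, messages):
    """Try each model in order and return the first successful response."""
    last_error = None
    for model in models:
        try:
            return create(model=model, messages=messages)
        except Exception as err:  # in real code, catch the SDK's specific error types
            last_error = err
    raise last_error

# Usage sketch, assuming an OpenAI-compatible client pointed at the gateway:
# response = complete_with_fallback(
#     client.chat.completions.create,
#     ["gpt-4o", "claude-3-5-sonnet-20241022", "gemini-2.5-pro"],
#     messages,
# )
```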
## Model Override

Use the floopy-model-override header to replace the model specified in the request body. This lets you change which model handles a request without modifying your application code.
```typescript
import { OpenAI } from "openai";

const client = new OpenAI({
  baseURL: "https://api.floopy.ai/v1",
  apiKey: process.env.FLOOPY_API_KEY,
});

// Request body says gpt-4o, but the gateway will use claude-3-5-sonnet-20241022
const response = await client.chat.completions.create(
  {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    headers: {
      "floopy-model-override": "claude-3-5-sonnet-20241022",
    },
  },
);

console.log(response.choices[0].message.content);
```

```python
from openai import OpenAI
import os

client = OpenAI(
    base_url="https://api.floopy.ai/v1",
    api_key=os.environ["FLOOPY_API_KEY"],
)

# Request body says gpt-4o, but the gateway will use claude-3-5-sonnet-20241022
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={
        "floopy-model-override": "claude-3-5-sonnet-20241022",
    },
)

print(response.choices[0].message.content)
```

```bash
curl https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "Content-Type: application/json" \
  -H "floopy-model-override: claude-3-5-sonnet-20241022" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

## Routing Rule Override
Use the floopy-routing-rule header to override the default routing configuration for a request. This directs the request to a specific routing rule you have configured in the dashboard.
```typescript
const response = await client.chat.completions.create(
  {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    headers: {
      "floopy-routing-rule": "low-latency-us-east",
    },
  },
);
```

```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={
        "floopy-routing-rule": "low-latency-us-east",
    },
)
```

```bash
curl https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "Content-Type: application/json" \
  -H "floopy-routing-rule: low-latency-us-east" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

## Response Headers
The gateway includes informational headers in every response. They indicate which provider and model handled the request, and whether a fallback was used.
| Header | Description |
|---|---|
| `Floopy-Provider` | The provider that handled the request (e.g. `openai`, `anthropic`, `google`) |
| `Floopy-Model` | The model that was used (e.g. `gpt-4o`, `claude-3-5-sonnet-20241022`) |
| `Floopy-Fallback-Used` | `"true"` if the primary provider failed and a fallback provider handled the request |
```typescript
const res = await fetch("https://api.floopy.ai/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.FLOOPY_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  }),
});

console.log("Provider:", res.headers.get("Floopy-Provider"));
console.log("Model:", res.headers.get("Floopy-Model"));
console.log("Fallback Used:", res.headers.get("Floopy-Fallback-Used"));

const data = await res.json();
console.log(data.choices[0].message.content);
```

```python
from openai import OpenAI
import os

client = OpenAI(
    base_url="https://api.floopy.ai/v1",
    api_key=os.environ["FLOOPY_API_KEY"],
)

response = client.chat.completions.with_raw_response.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)

print("Provider:", response.headers.get("Floopy-Provider"))
print("Model:", response.headers.get("Floopy-Model"))
print("Fallback Used:", response.headers.get("Floopy-Fallback-Used"))

completion = response.parse()
print(completion.choices[0].message.content)
```

```bash
curl -i https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

# The -i flag prints response headers, including:
# Floopy-Provider: openai
# Floopy-Model: gpt-4o
# Floopy-Fallback-Used: false
```

## Streaming
Floopy supports streaming responses. Use the `stream` parameter as you normally would:
```typescript
const stream = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a short poem." }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
```

```python
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a short poem."}],
    stream=True,
)

for chunk in stream:
    content = chunk.choices[0].delta.content
    if content:
        print(content, end="")
```

```bash
curl https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Write a short poem."}],
    "stream": true
  }'
```
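When you need the complete text as well as incremental output, you can accumulate the deltas as they arrive. A minimal sketch: the `collect_stream` helper is ours, and it assumes the chunk shape shown in the loop above.

```python
def collect_stream(chunks):
    """Join the delta content of every streamed chunk into the full completion text."""
    parts = []
    for chunk in chunks:
        content = chunk.choices[0].delta.content
        if content:  # some chunks (e.g. the final one) carry no content
            parts.append(content)
    return "".join(parts)

# Usage sketch: full_text = collect_stream(stream)
```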