# Perplexity

## Overview
Perplexity provides search-augmented language models that combine LLM capabilities with real-time web search, returning grounded responses with citations. Floopy proxies requests to Perplexity’s OpenAI-compatible API.
## Supported Models
| Model | Context Window | Notes |
|---|---|---|
| `sonar` | 128K | Lightweight search-augmented model |
| `sonar-pro` | 200K | Advanced research with multi-source synthesis |
| `sonar-reasoning-pro` | 128K | Chain-of-thought reasoning with search |
| `sonar-deep-research` | 128K | Deep multi-step research |
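The context windows in the table above can be captured as a simple lookup for validating prompts before sending a request. This is an illustrative helper, not part of Floopy's SDK; the model names and window sizes come from the table:

```python
# Context windows (in tokens) for the Sonar models listed above.
SONAR_CONTEXT_WINDOWS = {
    "sonar": 128_000,
    "sonar-pro": 200_000,
    "sonar-reasoning-pro": 128_000,
    "sonar-deep-research": 128_000,
}


def fits_context(model: str, prompt_tokens: int) -> bool:
    """Return True if a prompt of the given token count fits the model's window."""
    try:
        return prompt_tokens <= SONAR_CONTEXT_WINDOWS[model]
    except KeyError:
        raise ValueError(f"Unknown Perplexity model: {model}")
```

For example, a 150K-token prompt fits `sonar-pro` (200K window) but not `sonar` (128K window).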
## Setup
1. Go to Settings > Providers in the dashboard.
2. Click Add provider and select Perplexity.
3. Paste your Perplexity API key and click Save.
## Usage
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.floopy.ai/v1",
  apiKey: process.env.FLOOPY_API_KEY,
});

const response = await client.chat.completions.create({
  model: "sonar-pro",
  messages: [{ role: "user", content: "What are the latest developments in fusion energy?" }],
});
```

```python
import os

from openai import OpenAI

client = OpenAI(base_url="https://api.floopy.ai/v1", api_key=os.environ["FLOOPY_API_KEY"])

response = client.chat.completions.create(
    model="sonar-pro",
    messages=[{"role": "user", "content": "What are the latest developments in fusion energy?"}],
)
```

```bash
curl https://api.floopy.ai/v1/chat/completions \
  -H "Authorization: Bearer $FLOOPY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "sonar-pro", "messages": [{"role": "user", "content": "What are the latest developments in fusion energy?"}]}'
```

## Provider-Specific Features
- Search-grounded responses — All Sonar models include real-time web search, returning responses grounded in current information with source citations.
- Per-request fees — In addition to token costs, Perplexity charges per-request search fees ($0.005-$0.022 depending on model and search depth).
- Reasoning mode — `sonar-reasoning-pro` combines chain-of-thought reasoning with web search for complex analytical tasks.
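Since Sonar responses are search-grounded, a common post-processing step is rendering the answer together with its sources. The sketch below assumes Perplexity's response shape at the time of writing: an OpenAI-compatible `choices` list plus a top-level `citations` array of source URLs (verify against the current Perplexity API reference before relying on it):

```python
def format_with_citations(response: dict) -> str:
    """Append numbered source citations to the model's answer.

    Assumed response shape (Perplexity-style): an OpenAI-compatible
    `choices` list plus a top-level `citations` list of URLs.
    """
    answer = response["choices"][0]["message"]["content"]
    citations = response.get("citations", [])
    if not citations:
        return answer
    sources = "\n".join(f"[{i}] {url}" for i, url in enumerate(citations, start=1))
    return f"{answer}\n\nSources:\n{sources}"


# Example with a mock response (the shape is an assumption, not live output):
mock = {
    "choices": [{"message": {"content": "Fusion milestone reported."}}],
    "citations": ["https://example.com/a", "https://example.com/b"],
}
print(format_with_citations(mock))
```

If the gateway strips or renames the `citations` field, the helper degrades gracefully and returns the bare answer.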