Playground
Overview
The Playground lets you send prompts to any configured model and see the response in real time, all within the Floopy dashboard. It is the fastest way to test a prompt, compare model behavior, or debug an unexpected output without writing any code.
Using the Playground
- Open Playground from the dashboard sidebar.
- Select a model and provider from the dropdowns. Only models and providers configured in your organization are available.
- Write your prompt in the input area. You can set a system message and one or more user messages.
- Click Run to send the request.
The response appears in the output panel alongside metadata: token usage, latency, and cost.
Model Parameters
You can adjust model parameters before running a prompt:
- Temperature (0–2) — Controls randomness. Lower values produce more deterministic outputs; higher values increase creativity and variation.
- Max Tokens — Maximum number of tokens the model can generate in the response.
- Top P (0–1) — Nucleus sampling threshold. The model samples only from the smallest set of most-likely tokens whose cumulative probability reaches this threshold.
- Frequency Penalty (-2 to 2) — Reduces repetition by penalizing tokens that have already appeared. Positive values decrease repetition; negative values encourage it.
- Presence Penalty (-2 to 2) — Encourages the model to talk about new topics by penalizing tokens that have appeared at all, regardless of frequency.
- Stop Sequences — Comma-separated strings that cause the model to stop generating when encountered.
- Reasoning Effort — Controls how much computation the model spends reasoning before responding. Options: None, Minimal, Low, Medium, High.
- Response Format — The format of the model’s response: Plain text or JSON mode. JSON mode constrains the output to valid JSON.
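To make Temperature and Top P concrete, here is a toy sketch of how the two interact during sampling. This is an illustration of the general technique, not Floopy's or any provider's actual implementation:

```python
import math
import random

def sample_token(logits, temperature=1.0, top_p=1.0):
    """Pick one token index from raw logits using temperature scaling
    followed by nucleus (top-p) filtering."""
    # Temperature scaling: lower values sharpen the distribution
    # (more deterministic), higher values flatten it (more varied).
    scaled = [l / max(temperature, 1e-6) for l in logits]
    # Softmax to probabilities (shifted by the max for stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus filtering: keep the smallest set of highest-probability
    # tokens whose cumulative mass reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    # Renormalize over the kept set and sample from it.
    mass = sum(probs[i] for i in kept)
    r = random.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

Note that a very low Top P collapses the nucleus to a single token, which is why extreme parameter values can make outputs repetitive regardless of temperature.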
These parameters are set per request and do not affect your production configuration.
Multi-Turn Conversations
The Playground supports multi-turn conversations. After receiving a response, add another user message to continue the conversation. The full message history is sent with each request, so the model has context from previous turns.
Click Clear to reset the conversation and start fresh.
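The history that gets resent each turn can be pictured as a growing list of role-tagged messages. The sketch below uses a common chat-message shape for illustration; the helper names are hypothetical, not a Floopy API:

```python
def build_history(system_message):
    """Start a conversation with a system message."""
    return [{"role": "system", "content": system_message}]

def add_turn(history, user_message, assistant_reply):
    """Append one user/assistant exchange to the history."""
    history.append({"role": "user", "content": user_message})
    history.append({"role": "assistant", "content": assistant_reply})
    return history

history = build_history("You are a helpful assistant.")
add_turn(history, "What is nucleus sampling?", "It limits sampling to ...")
# The next Run sends all of the messages above plus the new user
# message, which is how the model keeps context from earlier turns.
add_turn(history, "Give an example.", "Sure: ...")
```

Clicking Clear corresponds to throwing this list away and starting a new one.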
Credit Usage
Playground requests use your organization’s credits, just like production API calls. Token usage and cost are displayed after each response so you can track spending during testing.
Cached responses in the Playground are free, just as they are in production.
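As a rough sketch of how a request's cost follows from its token usage, the function below multiplies token counts by per-1K-token rates. The rates in the example are made up for illustration; they are not Floopy or provider pricing:

```python
def estimate_cost(prompt_tokens, completion_tokens,
                  input_rate_per_1k, output_rate_per_1k):
    """Estimate a request's cost from token counts and per-1K-token
    rates. Rates are caller-supplied, not real pricing."""
    return ((prompt_tokens / 1000) * input_rate_per_1k
            + (completion_tokens / 1000) * output_rate_per_1k)

# Illustrative (made-up) rates: $0.01 / 1K input, $0.03 / 1K output.
cost = estimate_cost(1200, 300, input_rate_per_1k=0.01,
                     output_rate_per_1k=0.03)
```

Tracking the displayed usage this way during a testing session makes it easy to see which prompts or models dominate spend.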
Common Use Cases
- Prompt iteration — quickly test different phrasings and see how the model responds.
- Model comparison — switch between models to compare output quality, speed, and cost for the same prompt.
- Debugging — reproduce a production issue by replaying the same prompt and parameters.
- Onboarding — let new team members explore available models without setting up a development environment.
Tips
- Use the Playground to test prompts before saving them in Prompt Management. This avoids creating unnecessary versions.
- If a response seems wrong, check the model parameters. Temperature and max tokens have a significant impact on output quality.
- Copy the response directly from the output panel to share with teammates.