Portkey
Portkey is a solid AI gateway with prompt management, observability, and request routing — widely used by teams that need unified LLM access and cost tracking. Floopy includes those gateway capabilities, but its core product is continuous agent optimization: session-level feedback propagation plus dynamic multi-source weighting decide which model handles each call. If you need proxying and a prompt library, Portkey fits; if you want the gateway to also learn from end-user signal, Floopy is the better match.
Helicone
Helicone is a strong observability layer — per-request logging, caching, and feedback APIs for debugging and fine-tune data collection. Floopy also logs every request and accepts scores, but the unit of learning is different: one NPS per session propagated to every routing decision in that session, instead of per-request thumbs up/down. If your goal is per-request debug data, Helicone works well; if your goal is session-level routing that improves against end-user outcomes, Floopy is built for it.
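To make the session-level unit of learning concrete, here is a minimal sketch of the fan-out idea: one NPS score submitted at the end of a session is attributed to every routing decision made during that session. The class and method names are illustrative, not Floopy's actual API.

```python
from collections import defaultdict

class SessionFeedbackLog:
    """Illustrative sketch: session-level score propagation."""

    def __init__(self):
        # session_id -> list of (request_id, model) routing decisions
        self._decisions = defaultdict(list)
        # model -> list of propagated session scores
        self.model_scores = defaultdict(list)

    def record_decision(self, session_id, request_id, model):
        self._decisions[session_id].append((request_id, model))

    def submit_nps(self, session_id, nps):
        # One score per session fans out to every decision in it,
        # instead of each request collecting its own thumbs up/down.
        for _, model in self._decisions[session_id]:
            self.model_scores[model].append(nps)

log = SessionFeedbackLog()
log.record_decision("s1", "r1", "model-a")
log.record_decision("s1", "r2", "model-b")
log.record_decision("s1", "r3", "model-a")
log.submit_nps("s1", 9)
# model-a is credited twice (two decisions in the session), model-b once
```

The point of the sketch is the attribution model: a model that handled more of a well-rated session accumulates more of that session's credit.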
LiteLLM
LiteLLM is an excellent open-source proxy for unifying multi-provider SDK calls with retry and fallback rules — a natural fit if you run your own infrastructure and want static routing. Floopy is managed SaaS and goes further: the router learns from session NPS, LLM-as-judge scoring, manual ratings, and public benchmarks, with weights that shift as signal accumulates. Use LiteLLM when you want self-hosted proxy ergonomics; use Floopy when you want feedback-driven routing without running the infrastructure.
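The "weights that shift as signal accumulates" idea can be sketched as a blended score per model: each source (session NPS, LLM-as-judge, manual ratings, public benchmarks) contributes a weighted mean, and a source's effective weight grows with its sample count. The function, source names, and shrinkage constant below are all illustrative assumptions, not Floopy internals.

```python
def blended_score(sources, k=50):
    """Blend per-source scores into one model rank.

    sources: {name: (mean_score, n_samples, base_weight)}
    k: shrinkage constant — roughly how many samples a source
       needs before its base weight is fully trusted.
    """
    num = den = 0.0
    for mean, n, base_w in sources.values():
        # Effective weight scales with accumulated signal: n / (n + k)
        w = base_w * (n / (n + k)) if n else 0.0
        num += w * mean
        den += w
    return num / den if den else 0.0

# Early on, a high-sample public benchmark dominates; as live session
# NPS accumulates, the blend shifts toward end-user outcomes.
early = blended_score({
    "benchmark": (0.80, 1000, 1.0),
    "session_nps": (0.60, 5, 2.0),
})
late = blended_score({
    "benchmark": (0.80, 1000, 1.0),
    "session_nps": (0.60, 500, 2.0),
})
# late sits closer to the 0.60 live signal than early does
```

Any monotone scheme with the same property (feedback sources gaining influence as samples accumulate) would serve the same role; the n / (n + k) form is just a compact way to show it.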
Maxim
Maxim focuses on evaluation, experimentation, and prompt testing — a helpful tool during development to compare model outputs and measure prompt quality offline. Floopy is a production-time feedback loop: your live session NPS and auto scoring continuously re-rank models so routing improves after deploy, not just before. Maxim and Floopy are complementary — eval pipelines on one side, runtime optimization on the other.
Bifrost
Bifrost is a fast Rust LLM gateway focused on low-latency request proxying. Floopy keeps latency overhead low too (see the benchmark page), but the core difference is what the gateway does with that latency budget: Floopy runs a feedback-driven routing decision per request informed by session-level signal, rather than a purely static proxy. If you need the thinnest possible proxy, Bifrost wins on latency; if you want a gateway that learns, Floopy is designed for it.
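A feedback-driven decision need not be expensive: if the blended scores are maintained out-of-band as feedback arrives, the per-request choice reduces to an in-memory lookup. This is a hedged sketch of that shape, with invented names and scores; it says nothing about Floopy's actual latency numbers.

```python
# Illustrative: current blended scores, kept up to date asynchronously
# by the feedback pipeline, so the hot path never recomputes them.
SCORES = {"model-a": 0.72, "model-b": 0.81, "model-c": 0.64}

def route(scores):
    """Per-request decision: pick the best-scoring model right now."""
    return max(scores, key=scores.get)

choice = route(SCORES)  # "model-b" under the scores above
```

The design point is the split: learning happens off the request path, so the gateway spends its latency budget on a dictionary argmax rather than a scoring pass.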
TensorZero
TensorZero pioneered the open-source feedback-loop approach in 2024 with excellent engineering and a self-hosted architecture. If your team has the DevOps capacity and wants full infrastructure control, it's a solid choice. Floopy takes a different path: managed SaaS, session-level end-user NPS as the primary signal (rather than developer-defined metrics), and cross-tenant intelligence that improves every customer's routing as the platform grows. Choose based on whether you want to run infrastructure yourself and what feedback source you trust most.