Hi @Oceania2018, I'm the maintainer of LiteLLM - we provide an open-source proxy for load balancing across Azure, OpenAI, Bedrock, Vertex, and 100+ other LLMs.
It can process 500+ requests/second.
From this thread it looks like you're trying to maximize throughput - I hope our solution makes that easier for you. (I'd love feedback if you're trying to do this.)
Here's the quick start for using the LiteLLM Proxy for load balancing.
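For reference, here's a minimal sketch of what that load balancing can look like with LiteLLM's Python Router (the same model_list format the proxy's config file uses). The deployment names, endpoints, and environment variable names below are placeholders, not real values:

```python
import os

from litellm import Router

# Two deployments registered under one public model name; the Router
# spreads requests across them and respects per-deployment rate limits.
model_list = [
    {
        "model_name": "gpt-3.5-turbo",  # alias that callers use
        "litellm_params": {
            "model": "azure/my-azure-deployment",  # placeholder Azure deployment
            "api_base": "https://my-endpoint.openai.azure.com/",
            "api_key": os.getenv("AZURE_API_KEY"),
        },
        "tpm": 240_000,  # tokens-per-minute budget for this deployment
        "rpm": 1_800,    # requests-per-minute budget
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",  # plain OpenAI deployment
            "api_key": os.getenv("OPENAI_API_KEY"),
        },
        "tpm": 1_000_000,
        "rpm": 9_000,
    },
]

router = Router(model_list=model_list)

# Requests are routed to whichever deployment has capacity, so a single
# key's token limit no longer caps total throughput.
response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, world"}],
)
print(response)
```

The proxy server exposes the same routing logic behind an OpenAI-compatible HTTP endpoint, so existing clients can point at it without code changes.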
Inspired by this load balancing idea.
Load balancing across multiple models, providers, and keys would avoid token limits.