
(perf) use aiohttp for custom_openai #7514

Merged 9 commits into main on Jan 3, 2025

Conversation

@ishaan-jaff (Contributor) commented on Jan 3, 2025

(beta) 10x higher RPS with aiohttp

  • BETA: adds a new custom_llm_provider=aiohttp_openai that uses aiohttp for the HTTP calling logic
  • We see roughly 10x higher RPS when using aiohttp
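Much of the throughput gain from aiohttp comes from sharing one pooled ClientSession across many concurrent requests instead of opening a fresh connection per call. A minimal sketch of that pattern against an OpenAI-compatible chat completions endpoint (this is not litellm's implementation; the base URL, model name, and helper names here are illustrative, and aiohttp is assumed to be installed):

```python
import asyncio

import aiohttp


def build_chat_payload(model: str, prompt: str) -> dict:
    # Illustrative OpenAI-compatible chat completions request body.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


async def complete(
    session: aiohttp.ClientSession, base_url: str, api_key: str, payload: dict
) -> dict:
    headers = {"Authorization": f"Bearer {api_key}"}
    async with session.post(
        f"{base_url}/chat/completions", json=payload, headers=headers
    ) as resp:
        resp.raise_for_status()
        return await resp.json()


async def main() -> None:
    # One session shared across all requests: connections are pooled
    # and reused, which is where the RPS improvement comes from.
    async with aiohttp.ClientSession() as session:
        payloads = [build_chat_payload("gpt-4o", f"question {i}") for i in range(10)]
        await asyncio.gather(
            *(complete(session, "https://api.openai.com/v1", "sk-placeholder", p)
              for p in payloads)
        )

# To run against a live endpoint: asyncio.run(main())
```

The key design point is that the session outlives individual calls; creating a ClientSession per request would forfeit connection reuse.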

Relevant issues

Type

🆕 New Feature
✅ Test

Changes

[REQUIRED] Testing - Attach a screenshot of any new tests passing locally

If UI changes, send a screenshot/GIF of working UI fixes


vercel bot commented Jan 3, 2025

litellm preview deployment: ✅ Ready, updated Jan 3, 2025 6:01am (UTC)

@ishaan-jaff ishaan-jaff merged commit d861aa8 into main Jan 3, 2025
23 of 29 checks passed
@RyanMarten

Fixes #6592

rajatvig pushed a commit to rajatvig/litellm that referenced this pull request on Jan 16, 2025
* use aiohttp handler

* BaseLLMAIOHTTPHandler

* use CustomOpenAIChatConfig

* CustomOpenAIChatConfig

* CustomOpenAIChatConfig

* fix linting

* AiohttpOpenAIChatConfig

* fix order

* aiohttp_openai
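The commit titles above (BaseLLMAIOHTTPHandler, AiohttpOpenAIChatConfig) suggest a split between a shared transport handler and a per-provider config that shapes the request. A hypothetical sketch of what such a config might look like (class and method names beyond those in the commit titles are assumptions for illustration, not litellm's actual API):

```python
class AiohttpOpenAIChatConfigSketch:
    """Hypothetical provider config: maps generic call parameters onto an
    OpenAI-compatible request body and auth headers. The transport handler
    would then send the result with aiohttp."""

    def validate_environment(self, api_key: str) -> dict:
        # OpenAI-compatible endpoints authenticate with a Bearer token.
        return {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        }

    def transform_request(
        self, model: str, messages: list, optional_params: dict
    ) -> dict:
        # Pass optional params (temperature, max_tokens, ...) straight
        # through, since the downstream API is OpenAI-compatible.
        return {"model": model, "messages": messages, **optional_params}
```

Keeping request shaping in a config class lets one aiohttp handler serve any OpenAI-compatible provider.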