
[Bug]: Cannot get past 50 RPS #6592

Open
vutrung96 opened this issue Nov 5, 2024 · 21 comments
Assignees
Labels
bug Something isn't working

Comments

@vutrung96

vutrung96 commented Nov 5, 2024

What happened?

I have OpenAI tier 5 usage, which should give me 30,000 RPM = 500 RPS with "gpt-4o-mini". However, I struggle to get past 50 RPS.

The minimal replication:

import asyncio
from litellm import acompletion

async def main():
    tasks = [
        acompletion(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "You're an agent who answers yes or no"},
                {"role": "user", "content": "Is the sky blue?"},
            ],
        )
        for _ in range(2000)
    ]
    # Fire all 2000 requests concurrently and wait for them to finish
    await asyncio.gather(*tasks)

asyncio.run(main())

I only get 50 items/second as opposed to ~500 items/second when sending raw HTTP requests.
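For reference, the raw-HTTP path looks roughly like the sketch below (using aiohttp against the standard chat completions endpoint; the details here are illustrative rather than my exact script):

import asyncio
import os

import aiohttp

async def raw_request(session: aiohttp.ClientSession) -> dict:
    # One chat completion over plain aiohttp, bypassing any SDK
    async with session.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": "Is the sky blue?"}],
        },
    ) as resp:
        return await resp.json()

async def main():
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(raw_request(session) for _ in range(2000)))

asyncio.run(main())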

Relevant log output

 16%|█████████████████████▌                                                                                                                 | 320/2000 [00:09<00:40, 41.49it/s]

Twitter / LinkedIn details

No response

@ishaan-jaff
Contributor

Hi @vutrung96, looking into this. How do you get the % complete log output?

@vutrung96
Author

Hi @ishaan-jaff, I was just using tqdm.
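Roughly like this (a sketch from memory of how the progress bar was wired up, not the exact script):

import asyncio
from tqdm import tqdm
from litellm import acompletion

async def main():
    tasks = [
        acompletion(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Is the sky blue?"}],
        )
        for _ in range(2000)
    ]
    # tqdm wraps as_completed, so the bar advances as each request finishes
    for coro in tqdm(asyncio.as_completed(tasks), total=len(tasks)):
        await coro

asyncio.run(main())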

@CharlieJCJ

Hi @ishaan-jaff, any updates on this? Also facing this issue!

@ishaan-jaff
Contributor

Hi @vutrung96 @CharlieJCJ, do you see the issue with the litellm Router too? https://docs.litellm.ai/docs/routing

It would help me if you could test with the litellm Router too.
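A minimal Router setup looks roughly like this (a sketch based on the routing docs; the single gpt-4o-mini deployment in model_list is just an example):

import asyncio
from litellm import Router

# One deployment of gpt-4o-mini; the Router load-balances across model_list entries
router = Router(
    model_list=[
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {"model": "gpt-4o-mini"},
        }
    ]
)

async def main():
    tasks = [
        router.acompletion(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Is the sky blue?"}],
        )
        for _ in range(2000)
    ]
    await asyncio.gather(*tasks)

asyncio.run(main())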

@RyanMarten

Hi @ishaan-jaff
We tracked down the root cause of the issue.

Litellm uses the official OpenAI python client

client: Optional[Union[OpenAI, AsyncOpenAI]] = None,

The official OpenAI client has performance issues with high numbers of concurrent requests due to issues in httpx

The issues in httpx are due to a number of factors related to anyio vs asyncio

Which are addressed in the open PRs below

We saw this when implementing litellm as the backend for our synthetic data engine.

When using our own OpenAI client (with aiohttp instead of httpx), we saturate the highest rate limits (30,000 requests per minute on gpt-4o-mini tier 5). When using litellm, the performance issues cap us well under the highest rate limit, at about 200 queries per second (12,000 requests per minute).
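For context, that client parameter means a pre-configured AsyncOpenAI with relaxed httpx connection limits can be passed in. A sketch, assuming litellm forwards client on the async path (the limit values are illustrative, and this does not remove the anyio overhead described above):

import asyncio
import httpx
from openai import AsyncOpenAI
from litellm import acompletion

# Raise httpx's default connection limits; this reduces connection churn,
# but the anyio-related overhead inside httpx still applies.
custom_client = AsyncOpenAI(
    http_client=httpx.AsyncClient(
        limits=httpx.Limits(max_connections=1000, max_keepalive_connections=1000),
        timeout=60.0,
    )
)

async def main():
    tasks = [
        acompletion(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Is the sky blue?"}],
            client=custom_client,  # assumed to be forwarded to the OpenAI SDK call
        )
        for _ in range(2000)
    ]
    await asyncio.gather(*tasks)

asyncio.run(main())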

@ishaan-jaff
Contributor

ishaan-jaff commented Jan 2, 2025

@RyanMarten you are right! Just ran a load test to confirm - the right-hand side is with aiohttp, and it's 10x more RPS.

Screenshot 2025-01-02 at 3 36 31 PM

@ishaan-jaff ishaan-jaff self-assigned this Jan 2, 2025
@ishaan-jaff
Contributor

@RyanMarten started work on this:

#7514

  • added a new custom_llm_provider=aiohttp_openai that uses aiohttp for calling logic

@RyanMarten, @vutrung96 and @CharlieJCJ, can y'all help us test this change as we start rolling it out?

As of now we just added support for non-streaming. I can let you know once streaming support is added too.
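Usage should look roughly like this (a sketch; just prefix the model name with aiohttp_openai/):

import asyncio
from litellm import acompletion

async def main():
    # Route through the new aiohttp-based provider by prefixing the model name
    response = await acompletion(
        model="aiohttp_openai/gpt-4o-mini",
        messages=[{"role": "user", "content": "Is the sky blue?"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())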

@RyanMarten

RyanMarten commented Jan 7, 2025

@ishaan-jaff Thanks for creating a PR for this! We can certainly help test the change 😄. I'll run a benchmarking test with model=aiohttp_openai/gpt-4o-mini.

Our use case is non-streaming, so that shouldn't be a problem.

@RyanMarten

Here is our benchmarking using the curator request processor and viewer (with different backends). I see that this was released in https://github.com/BerriAI/litellm/releases/tag/v1.56.8. I upgraded litellm to the latest version with poetry add litellm@latest, which is 1.57.0.

from bespokelabs.curator import LLM
from datasets import Dataset

dataset = Dataset.from_dict({"prompt": ["write me a poem"] * 100_000})

(1) our own aiohttp backend

llm = LLM(
    prompt_func=lambda row: row["prompt"],
    model_name="gpt-4o-mini",
    backend="openai",
)
dataset = llm(dataset)
Screenshot 2025-01-06 at 4 51 08 PM

(2) default litellm backend

llm = LLM(
    prompt_func=lambda row: row["prompt"],
    model_name="gpt-4o-mini",
    backend="litellm",
)
dataset = llm(dataset)
Screenshot 2025-01-06 at 5 00 56 PM

(3) litellm backend with aiohttp_openai

llm = LLM(
    prompt_func=lambda row: row["prompt"],
    model_name="aiohttp_openai/gpt-4o-mini",
    backend="litellm",
)
dataset = llm(dataset)
Screenshot 2025-01-06 at 5 07 50 PM

For some reason I'm not seeing an improvement in performance.

RyanMarten added a commit to bespokelabsai/curator that referenced this issue Jan 7, 2025
@ishaan-jaff
Contributor

Hmm, that's odd - we see RPS going much higher in our testing.

Do you see anything off with our implementation? (I know you mentioned you also implemented aiohttp.)

https://github.com/BerriAI/litellm/blob/main/litellm/llms/custom_httpx/aiohttp_handler.py#L30

@ishaan-jaff
Contributor

Ohh - I think I know the issue: it's still getting routed to the OpenAI SDK when you pass aiohttp_openai/gpt-4o-mini.

(We route to the OpenAI SDK if the model is recognized as an OpenAI model.)

In my testing I was using aiohttp_openai/mock_model.

Will update this thread to ensure aiohttp_openai/gpt-4o-mini uses aiohttp_openai.
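The kind of check described is roughly this (a simplified, hypothetical sketch of the routing decision, not litellm's actual code):

# Hypothetical sketch of the routing decision described above - not litellm's actual code.
KNOWN_OPENAI_MODELS = {"gpt-4o", "gpt-4o-mini"}

def pick_handler(model: str) -> str:
    provider, _, model_name = model.partition("/")
    # The bug: a recognized OpenAI model name wins over the explicit
    # aiohttp_openai/ prefix, so the request falls back to the OpenAI SDK.
    if model_name in KNOWN_OPENAI_MODELS:
        return "openai_sdk"
    if provider == "aiohttp_openai":
        return "aiohttp_handler"
    return "openai_sdk"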

@RyanMarten

RyanMarten commented Jan 7, 2025

Ah yes, what you said about the routing makes sense!

When the fix is in, I'll try my benchmark again and post the results 👍

@ishaan-jaff
Contributor

ishaan-jaff commented Jan 7, 2025

Fixed here @RyanMarten #7598

Could you test on our new release, v1.57.2? (It will be out in 12 hrs.)

@RyanMarten

@ishaan-jaff - yes absolutely (looking out for the release)

@ishaan-jaff
Contributor

Sorry, CI/CD is causing issues - will update here once the new release is out.

@RyanMarten

> @RyanMarten you are right! Just ran a load test to confirm - the right-hand side is with aiohttp, and it's 10x more RPS.
>
> Screenshot 2025-01-02 at 3 36 31 PM

@ishaan-jaff Also curious, what software / visualization are you using for your load tests?

@ishaan-jaff
Contributor

@RyanMarten - can you help test this: https://github.com/BerriAI/litellm/releases/tag/v1.57.3

> Also curious, what software / visualization are you using for your load tests?

I was using Locust.
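A minimal locustfile for this kind of test looks roughly like the sketch below (illustrative only - the target host, path, and payload are assumptions, not the exact script used):

# locustfile.py - illustrative sketch, not the exact load test used above
import os
from locust import HttpUser, task, constant

class ChatCompletionUser(HttpUser):
    wait_time = constant(0)  # fire requests back-to-back
    host = "http://localhost:4000"  # e.g. a local litellm proxy; adjust as needed

    @task
    def chat_completion(self):
        self.client.post(
            "/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', 'sk-1234')}"},
            json={
                "model": "gpt-4o-mini",
                "messages": [{"role": "user", "content": "Is the sky blue?"}],
            },
        )

Run it with locust -f locustfile.py and set the user count and spawn rate in the web UI.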

@RyanMarten

RyanMarten commented Jan 10, 2025

poetry add litellm@latest
Using version ^1.57.4 for litellm

from bespokelabs.curator import LLM
from datasets import Dataset

dataset = Dataset.from_dict({"prompt": ["write me a poem"] * 100_000})

llm = LLM(
    prompt_func=lambda row: row["prompt"],
    model_name="aiohttp_openai/gpt-4o-mini",
    backend="litellm",
)

dataset = llm(dataset)

I'm getting this error now, which I wasn't getting before. I think this is an issue on our side - let me test.

Traceback (most recent call last):
  File "/Users/ryan/curator/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/aiohttp_handler.py", line 112, in _make_common_sync_call
    response = sync_httpx_client.post(
               ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ryan/curator/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/http_handler.py", line 528, in post
    raise e
  File "/Users/ryan/curator/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/http_handler.py", line 509, in post
    response.raise_for_status()
  File "/Users/ryan/curator/.venv/lib/python3.12/site-packages/httpx/_models.py", line 763, in raise_for_status
    raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '400 Bad Request' for url 'https://api.openai.com/v1/chat/completions'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/ryan/curator/.venv/lib/python3.12/site-packages/litellm/main.py", line 1501, in completion
    response = base_llm_aiohttp_handler.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ryan/curator/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/aiohttp_handler.py", line 302, in completion
    response = self._make_common_sync_call(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ryan/curator/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/aiohttp_handler.py", line 132, in _make_common_sync_call
    raise self._handle_error(e=e, provider_config=provider_config)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ryan/curator/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/aiohttp_handler.py", line 389, in _handle_error
    raise provider_config.get_error_class(
litellm.llms.openai.common_utils.OpenAIError: {
    "error": {
        "message": "you must provide a model parameter",
        "type": "invalid_request_error",
        "param": null,
        "code": null
    }
}


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/ryan/curator/../dcft_dump/SPEED_TEST.py", line 6, in <module>
    llm = LLM(
          ^^^^
  File "/Users/ryan/curator/src/bespokelabs/curator/llm/llm.py", line 111, in __init__
    self._request_processor = _RequestProcessorFactory.create(backend_params, batch=batch, response_format=response_format, backend=backend)
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ryan/curator/src/bespokelabs/curator/request_processor/_factory.py", line 127, in create
    _request_processor = LiteLLMOnlineRequestProcessor(config)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ryan/curator/src/bespokelabs/curator/request_processor/online/litellm_online_request_processor.py", line 46, in __init__
    self.header_based_max_requests_per_minute, self.header_based_max_tokens_per_minute = self.get_header_based_rate_limits()
                                                                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ryan/curator/src/bespokelabs/curator/request_processor/online/litellm_online_request_processor.py", line 154, in get_header_based_rate_limits
    headers = self.test_call()
              ^^^^^^^^^^^^^^^^
  File "/Users/ryan/curator/src/bespokelabs/curator/request_processor/online/litellm_online_request_processor.py", line 127, in test_call
    completion = litellm.completion(
                 ^^^^^^^^^^^^^^^^^^^
  File "/Users/ryan/curator/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1022, in wrapper
    raise e
  File "/Users/ryan/curator/.venv/lib/python3.12/site-packages/litellm/utils.py", line 900, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ryan/curator/.venv/lib/python3.12/site-packages/litellm/main.py", line 2955, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/Users/ryan/curator/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2189, in exception_type
    raise e
  File "/Users/ryan/curator/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2158, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: Aiohttp_openaiException - {
    "error": {
        "message": "you must provide a model parameter",
        "type": "invalid_request_error",
        "param": null,
        "code": null
    }
}

@RyanMarten

RyanMarten commented Jan 10, 2025

Ah, this is because we do a test call with completion instead of acompletion.

completion = litellm.completion(model="aiohttp_openai/gpt-4o-mini", messages=[{"role": "user", "content": "hi"}])

fails with an unintuitive error message:

litellm.exceptions.APIConnectionError: litellm.APIConnectionError: Aiohttp_openaiException - {
    "error": {
        "message": "you must provide a model parameter",
        "type": "invalid_request_error",
        "param": null,
        "code": null
    }
}

What I can do is just switch this call to use acompletion as well.
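Roughly this change on our side (a sketch; the asyncio.run wrapper is an assumption about how our sync code path would call it):

import asyncio
import litellm

def test_call():
    # Use the async entry point; the sync completion path is what hits the
    # "you must provide a model parameter" error above
    return asyncio.run(
        litellm.acompletion(
            model="aiohttp_openai/gpt-4o-mini",
            messages=[{"role": "user", "content": "hi"}],
        )
    )

completion = test_call()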

@RyanMarten

OK, now I'm running into an issue in the main loop:

2025-01-09 16:26:13,066 - bespokelabs.curator.request_processor.online.base_online_request_processor - WARNING - Encountered 'APIConnectionError: litellm.APIConnectionError: Aiohttp_openaiException - Event loop is closed' during attempt 1 of 10 while processing request 0

@vutrung96 could you take a look at this since you wrote the custom event loop handling?
