
[Bug]: How to use Local downloaded Hugging face model #7645

Open
mustangs0786 opened this issue Jan 9, 2025 · 0 comments
Labels
bug Something isn't working

Comments

@mustangs0786

What happened?

Hi team,

I have downloaded the Phi model from Hugging Face into a local folder. Now I want to pass that local path to the completion call so it uses the local copy, without hitting Hugging Face again.

How can I do that?

from litellm import completion

messages = [{"content": "There's a llama in my garden 😱 What should I do?", "role": "user"}]

response = completion(
    # local folder where the model was downloaded
    model="huggingface/Users/rahul/Qwen2.5-3B-Instruct",
    messages=messages,
    stream=True,
)
print(response)

I am getting an error here. Can anyone help?
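One possible workaround (not a confirmed answer from the maintainers in this thread): the huggingface/ prefix routes the request to Hugging Face's hosted inference or a TGI endpoint, so a filesystem path after it is treated as a repo id rather than loaded from disk. A pattern that should work is to serve the downloaded weights yourself with an OpenAI-compatible server and point LiteLLM at it via api_base. Below is a minimal sketch assuming vLLM is installed and the model sits in /Users/rahul/Qwen2.5-3B-Instruct; the port, the served model name, and the dummy API key are all assumptions, not values from this issue.

# First serve the local checkpoint with vLLM's OpenAI-compatible server
# (run in a shell; --served-model-name just gives the model a short alias):
#
#   vllm serve /Users/rahul/Qwen2.5-3B-Instruct --served-model-name qwen2.5-3b --port 8000

from litellm import completion

# The "openai/" prefix plus api_base tells LiteLLM to treat the server as a
# generic OpenAI-compatible endpoint, so nothing is sent to Hugging Face.
response = completion(
    model="openai/qwen2.5-3b",
    api_base="http://localhost:8000/v1",
    api_key="dummy",  # vLLM does not check the key by default
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    stream=True,
)

# With stream=True, completion returns an iterator of chunks.
for chunk in response:
    print(chunk)

The same idea should work with any OpenAI-compatible server (text-generation-inference, llama.cpp's server, Ollama, and so on); only the serve command and the api_base change.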

Relevant log output

No response

Are you a ML Ops Team?

No

What LiteLLM version are you on?

latest

Twitter / LinkedIn details

No response

@mustangs0786 mustangs0786 added the bug Something isn't working label Jan 9, 2025