upd llama.cpp docs
Vaibhavs10 committed Jan 17, 2025
1 parent cffc8ee commit e7dfdfc
Showing 1 changed file with 3 additions and 5 deletions.
8 changes: 3 additions & 5 deletions docs/hub/gguf-llamacpp.md
@@ -31,8 +31,7 @@ Once installed, you can use the `llama-cli` or `llama-server` as follows:

 ```bash
 llama-cli
---hf-repo lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF \
---hf-file Meta-Llama-3-8B-Instruct-Q8_0.gguf \
+-hf bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0 \
 -p "You are a helpful assistant" -cnv
 ```

@@ -42,8 +41,7 @@ Additionally, you can invoke an OpenAI spec chat completions endpoint directly u

 ```bash
 llama-server \
---hf-repo lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF \
---hf-file Meta-Llama-3-8B-Instruct-Q8_0.gguf
+-hf bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
 ```

After running the server you can simply utilise the endpoint as below:
@@ -66,6 +64,6 @@ curl http://localhost:8080/v1/chat/completions \
}'
```
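The middle of the curl hunk above is collapsed on the rendered page; a request of the same shape against a running `llama-server` can be sketched as follows (the message body and default port 8080 are illustrative assumptions, not the commit's exact text):

```shell
# Assumes llama-server is already running locally on its default port 8080.
# The JSON body is illustrative; llama-server exposes an OpenAI-spec chat
# completions endpoint, so "messages" is the key field in the request.
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Write a limerick about GGUF."}
        ]
    }'
```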

-Replace `--hf-repo` with any valid Hugging Face hub repo name and `--hf-file` with the GGUF file name in the hub repo - off you go! 🦙
+Replace `-hf` with any valid Hugging Face hub repo name - off you go! 🦙

Note: Remember to `build` llama.cpp with `LLAMA_CURL=1` :)
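A minimal sketch of that build step, assuming a checked-out llama.cpp tree (the CMake spelling of the same option is shown alongside the Makefile form from the note):

```shell
# Makefile-style build with CURL support, as the note suggests
LLAMA_CURL=1 make llama-cli llama-server

# equivalent CMake configuration (LLAMA_CURL is the option name there too)
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```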
