From f6226d6b7c1812122c2808686d2f5ea2e1c81556 Mon Sep 17 00:00:00 2001
From: Mishig
Date: Fri, 15 Mar 2024 03:59:47 -0700
Subject: [PATCH] Add GGUF doc page (#1247)

* Add GGUF docs
* add more info
* better wording
* tip
* full url
* wording
* tip
* Add `## Inference with llama.cpp`
* word consistency
* link to https://github.com/ggerganov/llama.cpp/blob/master/scripts/hf.sh
* Update docs/hub/models-gguf.md Co-authored-by: Pedro Cuenca
* Update docs/hub/models-gguf.md Co-authored-by: Pedro Cuenca
* Update docs/hub/models-gguf.md Co-authored-by: Pedro Cuenca
* Update docs/hub/models-gguf.md Co-authored-by: Pedro Cuenca
* Update docs/hub/models-gguf.md Co-authored-by: Pedro Cuenca
* Apply suggestions from code review Co-authored-by: Julien Chaumond
* rn `gguf`.md & add under `Advanced` sections
* Viewer for metadata & tensors info

---------

Co-authored-by: Pedro Cuenca
Co-authored-by: Julien Chaumond
---
 docs/hub/gguf.md            | 71 +++++++++++++++++++++++++++++++++++++
 docs/hub/models-advanced.md |  3 +-
 2 files changed, 73 insertions(+), 1 deletion(-)
 create mode 100644 docs/hub/gguf.md

diff --git a/docs/hub/gguf.md b/docs/hub/gguf.md
new file mode 100644
index 000000000..95cc1b555
--- /dev/null
+++ b/docs/hub/gguf.md
@@ -0,0 +1,71 @@
# GGUF

The Hugging Face Hub supports all file formats, but has built-in features for the [GGUF format](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md), a binary format optimized for quick loading and saving of models, which makes it highly efficient for inference purposes. GGUF is designed for use with GGML and other executors. GGUF was developed by [@ggerganov](https://huggingface.co/ggerganov), who is also the developer of [llama.cpp](https://github.com/ggerganov/llama.cpp), a popular C/C++ LLM inference framework. Models initially developed in frameworks like PyTorch can be converted to GGUF format for use with those engines.
As we can see in this graph, unlike tensor-only file formats like [safetensors](https://huggingface.co/docs/safetensors) – which is also a recommended model format for the Hub – GGUF encodes both the tensors and a standardized set of metadata.

## Finding GGUF files

You can browse all models with GGUF files by filtering for the GGUF tag: [hf.co/models?library=gguf](https://huggingface.co/models?library=gguf).
For example, you can check out [TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF) to see GGUF files in action.
## Viewer for metadata & tensors info

The Hub has a viewer for GGUF files that lets users inspect the metadata & tensors info (name, shape, precision).
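This is possible because every GGUF file starts with a small fixed-layout header. As a minimal sketch of that layout, assuming the header fields from the GGUF spec (a 4-byte `GGUF` magic, then a little-endian `uint32` version, `uint64` tensor count, and `uint64` metadata key/value count; the function name here is illustrative):

```ts
// Sketch: parse the fixed 24-byte GGUF header from the start of a file.
function parseGgufHeader(bytes: Uint8Array): { version: number; tensorCount: bigint; metadataKvCount: bigint } {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const magic = new TextDecoder().decode(bytes.subarray(0, 4));
  if (magic !== "GGUF") throw new Error("Not a GGUF file");
  return {
    version: view.getUint32(4, true),           // little-endian uint32
    tensorCount: view.getBigUint64(8, true),    // little-endian uint64
    metadataKvCount: view.getBigUint64(16, true),
  };
}
```

Per the spec, the metadata key/value pairs and the tensor infos shown in the viewer follow immediately after these 24 bytes.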
## Usage with llama.cpp

llama.cpp has a helper script, [`scripts/hf.sh`](https://github.com/ggerganov/llama.cpp/blob/master/scripts/hf.sh), that makes it easy to download GGUF files from the Hugging Face Hub. You can use it with a repo and file name, or with a URL to the GGUF file entry on the Hub:

```bash
./main \
  -m $(./scripts/hf.sh --repo TheBloke/Mixtral-8x7B-v0.1-GGUF --file mixtral-8x7b-v0.1.Q4_K_M.gguf) \
  -p "I believe the meaning of life is" -n 64

./main \
  -m $(./scripts/hf.sh https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q4_K_M.gguf) \
  -p "I believe the meaning of life is" -n 64

./main \
  -m $(./scripts/hf.sh --url https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q4_K_M.gguf) \
  -p "I believe the meaning of life is" -n 64
```

Find more information [here](https://github.com/ggerganov/llama.cpp/pull/5501).

## Parsing the metadata with @huggingface/gguf

We've also created a JavaScript GGUF parser that works on remotely hosted files (e.g. the Hugging Face Hub).

```bash
npm install @huggingface/gguf
```

```ts
import { gguf } from "@huggingface/gguf";
// remote GGUF file from https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF
const URL_LLAMA = "https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/191239b/llama-2-7b-chat.Q2_K.gguf";
const { metadata, tensorInfos } = await gguf(URL_LLAMA);
```

Find more information [here](https://github.com/huggingface/huggingface.js/tree/main/packages/gguf).
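Note that `URL_LLAMA` is a direct download link following the Hub's `https://huggingface.co/{repo_id}/resolve/{revision}/{filename}` pattern, as opposed to the `/blob/` URLs used by the web UI. A tiny helper for building such URLs, assuming only that pattern (the function name is our own):

```ts
// Sketch: build a Hub "resolve" (direct download) URL for a file in a repo,
// following the pattern https://huggingface.co/{repo_id}/resolve/{revision}/{filename}.
function hubResolveUrl(repoId: string, filename: string, revision = "main"): string {
  return `https://huggingface.co/${repoId}/resolve/${revision}/${filename}`;
}
```

For example, `hubResolveUrl("TheBloke/Llama-2-7B-Chat-GGUF", "llama-2-7b-chat.Q2_K.gguf", "191239b")` reproduces the `URL_LLAMA` value used above.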
\ No newline at end of file
diff --git a/docs/hub/models-advanced.md b/docs/hub/models-advanced.md
index 5d51d5514..1d84dd305 100644
--- a/docs/hub/models-advanced.md
+++ b/docs/hub/models-advanced.md
@@ -3,4 +3,5 @@
 ## Contents
 
 - [Integrate your library with the Hub](./models-adding-libraries)
-- [Adding new tasks to the Hub](./models-tasks)
\ No newline at end of file
+- [Adding new tasks to the Hub](./models-tasks)
+- [GGUF format](./gguf)
\ No newline at end of file