Draft to add text-generation parameters #1393
Merged
Changes shown are from 6 of the 9 commits:

- 1f1df45 first draft to add text-generation parameters (Wauplin)
- 98a1b54 headers (Wauplin)
- 97c7a8b more structure (Wauplin)
- 0e37b4c add chat-completion (Wauplin)
- 458879f better handling of arrays (Wauplin)
- 330312e better handling of parameters (Wauplin)
- d44d7f3 Add new tasks pages (fill mask, summarization, question answering, se… (Wauplin)
- 3eb85e8 mention chat completion in text generation docs (Wauplin)
- 486e809 fix chat completion snippets (Wauplin)

## Chat completion

Generate a response given a list of messages.
This is a subtask of [`text-generation`](./text_generation) designed to generate responses in a conversational context.

### Recommended models

- [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it): A text-generation model trained to follow instructions.
- [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct): Very powerful text generation model trained to follow instructions.
- [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct): Small yet powerful text generation model.
- [AI-MO/NuminaMath-7B-TIR](https://huggingface.co/AI-MO/NuminaMath-7B-TIR): A very powerful model that can solve mathematical problems.
- [HuggingFaceH4/starchat2-15b-v0.1](https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1): Strong coding assistant model.
- [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407): Very strong open-source large language model.

### API specification

#### Request

| Payload | | |
| :--- | :--- | :--- |
| **frequency_penalty** | _number_ | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
| **logprobs** | _boolean_ | Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message. |
| **max_tokens** | _integer_ | The maximum number of tokens that can be generated in the chat completion. |
| **messages*** | _object[]_ | A list of messages comprising the conversation so far. |
| ** content** | _string_ | |
| ** name** | _string_ | |
| ** role*** | _string_ | |
| ** tool_calls** | _object[]_ | |
| ** function*** | _object_ | |
| ** arguments*** | _object_ | |
| ** description** | _string_ | |
| ** name*** | _string_ | |
| ** id*** | _integer_ | |
| ** type*** | _string_ | |
| **presence_penalty** | _number_ | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
| **seed** | _integer_ | |
| **stop** | _string[]_ | Up to 4 sequences where the API will stop generating further tokens. |
| **stream** | _boolean_ | |
| **temperature** | _number_ | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. |
| **tool_choice** | _object_ | One of the following: |
| ** (#1)** | | |
| ** FunctionName*** | _string_ | |
| ** (#2)** | | Possible values: OneOf |
| **tool_prompt** | _string_ | A prompt to be appended before the tools |
| **tools** | _object[]_ | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. |
| ** function*** | _object_ | |
| ** arguments*** | _object_ | |
| ** description** | _string_ | |
| ** name*** | _string_ | |
| ** type*** | _string_ | |
| **top_logprobs** | _integer_ | An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used. |
| **top_p** | _number_ | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
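
For illustration, here is a minimal sketch of a request body built from the fields above, written as a Python dict. The message contents and parameter values are placeholders, not recommendations:

```py
# Sketch of a chat-completion payload; only `messages` (with `role`) is required.
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    "max_tokens": 150,   # cap on generated tokens
    "temperature": 0.7,  # 0-2; higher is more random
    "top_p": 0.9,        # nucleus sampling; usually tune this or temperature, not both
    "stop": ["\n\n"],    # up to 4 stop sequences
    "stream": False,     # set to True for Server-Sent Events streaming
}
```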

Some options can be configured by passing headers to the Inference API. Here are the available headers:

| Headers | | |
| :--- | :--- | :--- |
| **authorization** | _string_ | Authentication header in the form `'Bearer hf_****'` where `hf_****` is a personal user access token with Inference API permission. You can generate one from [your settings page](https://huggingface.co/settings/tokens). |
| **x-use-cache** | _boolean, default to `true`_ | There is a cache layer on the Inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). However, if you use a nondeterministic model, you can set this parameter to prevent the caching mechanism from being used, resulting in a real new query. Read more about caching [here](../parameters#caching). |
| **x-wait-for-model** | _boolean, default to `false`_ | If the model is not ready, wait for it instead of receiving a 503 error. It limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. Read more about model availability [here](../overview#eligibility). |
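
As a sketch, these headers can be sent alongside the JSON payload with `requests`; this assumes the OpenAI-compatible `/v1/chat/completions` route, a model taken from the recommended list above, and a placeholder token:

```py
import requests

# Assumed chat-completion route for a model from the recommended list above.
API_URL = "https://api-inference.huggingface.co/models/google/gemma-2-2b-it/v1/chat/completions"

headers = {
    "Authorization": "Bearer hf_***",  # personal user access token (placeholder)
    "x-use-cache": "false",            # skip the cache layer for nondeterministic runs
    "x-wait-for-model": "true",        # wait for the model to load instead of getting a 503
}

payload = {
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 64,
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
```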

For more information about Inference API headers, check out the parameters [guide](../parameters).

#### Response

Output type depends on the `stream` input parameter.
If `stream` is `false` (default), the response will be a JSON object with the following fields:

| Body | | |
| :--- | :--- | :--- |
| **choices** | _object[]_ | |
| ** finish_reason** | _string_ | |
| ** index** | _integer_ | |
| ** logprobs** | _object_ | |
| ** content** | _object[]_ | |
| ** logprob** | _number_ | |
| ** token** | _string_ | |
| ** top_logprobs** | _object[]_ | |
| ** logprob** | _number_ | |
| ** token** | _string_ | |
| ** message** | _object_ | |
| ** content** | _string_ | |
| ** name** | _string_ | |
| ** role** | _string_ | |
| ** tool_calls** | _object[]_ | |
| ** function** | _object_ | |
| ** arguments** | _object_ | |
| ** description** | _string_ | |
| ** name** | _string_ | |
| ** id** | _integer_ | |
| ** type** | _string_ | |
| **created** | _integer_ | |
| **id** | _string_ | |
| **model** | _string_ | |
| **object** | _string_ | |
| **system_fingerprint** | _string_ | |
| **usage** | _object_ | |
| ** completion_tokens** | _integer_ | |
| ** prompt_tokens** | _integer_ | |
| ** total_tokens** | _integer_ | |
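
As a rough sketch, assuming `output` holds the decoded JSON body of a non-streaming response, the main fields above can be read like this:

```py
# `output` is assumed to be the parsed JSON response, e.g. response.json().
answer = output["choices"][0]["message"]["content"]    # generated assistant reply
finish_reason = output["choices"][0]["finish_reason"]  # why generation stopped
total_tokens = output["usage"]["total_tokens"]         # prompt + completion token count
print(answer, finish_reason, total_tokens)
```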

If `stream` is `true`, generated tokens are returned as a stream, using Server-Sent Events (SSE).
For more information about streaming, check out [this guide](https://huggingface.co/docs/text-generation-inference/conceptual/streaming).

| Body | | |
| :--- | :--- | :--- |
| **choices** | _object[]_ | |
| ** delta** | _object_ | |
| ** content** | _string_ | |
| ** role** | _string_ | |
| ** tool_calls** | _object_ | |
| ** function** | _object_ | |
| ** arguments** | _string_ | |
| ** name** | _string_ | |
| ** id** | _string_ | |
| ** index** | _integer_ | |
| ** type** | _string_ | |
| ** finish_reason** | _string_ | |
| ** index** | _integer_ | |
| ** logprobs** | _object_ | |
| ** content** | _object[]_ | |
| ** logprob** | _number_ | |
| ** token** | _string_ | |
| ** top_logprobs** | _object[]_ | |
| ** logprob** | _number_ | |
| ** token** | _string_ | |
| **created** | _integer_ | |
| **id** | _string_ | |
| **model** | _string_ | |
| **object** | _string_ | |
| **system_fingerprint** | _string_ | |
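
One way to consume such a stream is sketched below with `huggingface_hub`'s `InferenceClient`, which handles the SSE parsing and yields one chunk per event; the model id, token, and prompt are placeholders:

```py
from huggingface_hub import InferenceClient

# Placeholder model (from the recommended list above) and token.
client = InferenceClient("google/gemma-2-2b-it", token="hf_***")

# Each streamed chunk mirrors the body above: choices[0].delta carries the new text.
for chunk in client.chat_completion(
    messages=[{"role": "user", "content": "Tell me a short story."}],
    max_tokens=200,
    stream=True,
):
    print(chunk.choices[0].delta.content or "", end="")
```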

### Using the API

<inferencesnippet>

<curl>
```bash
curl https://api-inference.huggingface.co/models/google/gemma-2-2b-it \
	-X POST \
	-d '{"inputs": "Can you please let us know more details about your "}' \
	-H 'Content-Type: application/json' \
	-H "Authorization: Bearer hf_***"
```
</curl>

<python>
```py
import requests

API_URL = "https://api-inference.huggingface.co/models/google/gemma-2-2b-it"
headers = {"Authorization": "Bearer hf_***"}

def query(payload):
	response = requests.post(API_URL, headers=headers, json=payload)
	return response.json()

output = query({
	"inputs": "Can you please let us know more details about your ",
})
```

Review note on this snippet: "To update to conversational API before release." Author's reply: "that was a mistake on my side. Fixed in 486e809."

To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion).
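
For instance, a minimal sketch with that client (model id and token are placeholders; `chat_completion` takes a `messages` list like the payload described above):

```py
from huggingface_hub import InferenceClient

# Placeholder model (from the recommended list above) and token.
client = InferenceClient("google/gemma-2-2b-it", token="hf_***")

result = client.chat_completion(
    messages=[{"role": "user", "content": "Can you please let us know more details about your product?"}],
    max_tokens=500,
)
print(result.choices[0].message.content)
```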
</python>

<js>
```js
async function query(data) {
	const response = await fetch(
		"https://api-inference.huggingface.co/models/google/gemma-2-2b-it",
		{
			headers: {
				Authorization: "Bearer hf_***",
				"Content-Type": "application/json",
			},
			method: "POST",
			body: JSON.stringify(data),
		}
	);
	const result = await response.json();
	return result;
}

query({"inputs": "Can you please let us know more details about your "}).then((response) => {
	console.log(JSON.stringify(response));
});
```

To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#chatcompletion).
</js>

</inferencesnippet>
Review comment: "As discussed in a follow-up, we should prevent having frozen models here."
Reply: "yes, related thread: #1384 (comment)"
Follow-up: https://github.com/huggingface/hub-docs/pull/1396/files