Improve readability of the quick tour. #501
base: main
Conversation
Actually, I'm not sure running 2 models at the same time works. When I run it, only the second model is evaluated. It seems the code doc in
In the doc you're referring to, you can see that the keys are not the same: you cannot evaluate 2 models at the same time, but you can specify a range of parameters to use for one model.
Tasks details can be found in the
All supported tasks can be found at the [tasks_list](available-tasks). For more details, you can have a look at the
We also support tasks that are community-provided in the extended folder.
the [tasks_list](available-tasks) in the format:
Here, the first argument specifies which model(s) to run, and the second argument specifies how to evaluate them.
Multiple models can be evaluated at the same time by using a comma-separated list. For example:
Nope, we can only evaluate one model at a time. However, we can specify precision, peft weights, ...
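To make the "one model, many parameters" point concrete: model options are typically passed as a single comma-separated `key=value` string. Here is a minimal sketch of how such a string could be split into options (the `pretrained` and `dtype` keys below are illustrative assumptions, not a guaranteed part of the CLI):

```python
def parse_model_args(args: str) -> dict:
    """Split a comma-separated key=value string into a dict of model options."""
    return dict(pair.split("=", 1) for pair in args.split(","))

# Illustrative keys only; check the docs for the options your backend accepts.
print(parse_model_args("pretrained=gpt2,dtype=float16"))
# → {'pretrained': 'gpt2', 'dtype': 'float16'}
```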
The task specification might be a bit hard to grasp at first. The format is as follows:

```bash
{suite}|{task}|{num_few_shot}|{0 or 1 to automatically reduce `num_few_shot` if prompt is too long}
```
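For illustration, the four fields of that string can be pulled apart with a small hypothetical helper (not part of the library; the `leaderboard|arc:challenge|25|0` value is just an example spec):

```python
def parse_task_spec(spec: str) -> dict:
    """Parse a '{suite}|{task}|{num_few_shot}|{0 or 1}' task string into its fields."""
    suite, task, num_few_shot, auto_reduce = spec.split("|")
    return {
        "suite": suite,
        "task": task,
        "num_few_shot": int(num_few_shot),
        # "1" means: automatically reduce num_few_shot if the prompt is too long
        "auto_reduce_few_shot": auto_reduce == "1",
    }

print(parse_task_spec("leaderboard|arc:challenge|25|0"))
# → {'suite': 'leaderboard', 'task': 'arc:challenge', 'num_few_shot': 25, 'auto_reduce_few_shot': False}
```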
automatically adapt the number of few shot examples presented to the model if the prompt is too long for the context size of the task or the model
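A rough sketch of what that adaptation could look like (the helper names and toy token counter are hypothetical; the actual implementation lives in the library):

```python
def fit_few_shot(build_prompt, count_tokens, num_few_shot: int, max_tokens: int) -> int:
    """Drop few-shot examples one by one until the prompt fits the context window."""
    n = num_few_shot
    while n > 0 and count_tokens(build_prompt(n)) > max_tokens:
        n -= 1
    return n

# Toy stand-ins: each example adds one "token", plus one for the question.
build_prompt = lambda n: "example " * n + "question"
count_tokens = lambda s: len(s.split())
print(fit_few_shot(build_prompt, count_tokens, num_few_shot=10, max_tokens=5))  # → 4
```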
(I would add this explanation on another line)
No description provided.