Functionary on Qwen 2.5 #271
Comments
Thanks @themrzmaster! We are already trying the Qwen 2.5 models. We will release them if they are good.
@jeffreymeetkai any update about this? Very interested in it.
The Qwen 2.5 finetunes are not better than Llama 3.1, so we decided not to release them.
@jeffreymeetkai I'm very interested in how the latest Functionary model's performance compares to firefunctionv2.
@xdevfaheem You can refer to this PR from Berkeley's Function-Calling Leaderboard. We are currently still the world's best OSS function-calling model. Although the PR has not yet been updated with firefunctionv2, our previous records indicate that our performance is better than firefunctionv2's.
I'm curious why Functionary-v3.2 isn't appearing on the latest BFCL leaderboard.
We will contact the BFCL team soon to ask if they can evaluate the v3.2 models. Nevertheless, our earlier evaluations show that small-v3.2 performs similarly to small-v3.1 across BFCL and ToolSandbox. That evaluation was done before medium-v3.2 was released, but it should perform similarly to medium-v3.1.
Again, thank you for your work!
I think this project does not get the attention it deserves. This is by far the best open-source model that can serve as a general agent in my use cases.
Would be curious to see how Qwen 2.5 32B would behave. It's a nice model size for consumer-grade GPUs, and the license is Apache 2.0:
https://huggingface.co/Qwen/Qwen2.5-32B
Thanks again