
About the official evaluation script #28

Open
bozheng-hit opened this issue Jul 16, 2022 · 2 comments

Comments

@bozheng-hit

Is there an evaluation script that can directly compare a prediction file against the gold (reference) file, i.e., an official evaluation script?

@massive-dev-amz commented Jul 20, 2022

Hi @bozheng-hit, we recommend using eval.ai for official test results. We will be opening submissions to the MMNLU-22 phases soon.

@bozheng-hit (Author)

> Hi @bozheng-hit, we recommend using eval.ai for official test results. We will be opening submissions to the MMNLU-22 phases soon.

Is it possible to return results for all languages separately?
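For reference, a quick local comparison can be scripted while waiting on the eval.ai phases. Below is a minimal sketch in Python that compares a prediction file against a gold file and reports intent accuracy broken down per locale. The JSONL layout and the field names (`id`, `locale`, `intent`) are assumptions for illustration only, not the official MASSIVE schema; slot-level metrics would additionally need the actual annotation format.

```python
# Hypothetical sketch: compare a prediction file against a gold file and
# report intent accuracy per locale. The field names ("id", "locale",
# "intent") are assumed, not taken from the official data format.
import json
from collections import defaultdict


def load_jsonl(path):
    """Read one JSON object per line, keyed by example id."""
    with open(path, encoding="utf-8") as f:
        return {rec["id"]: rec for rec in map(json.loads, f)}


def per_locale_accuracy(gold_path, pred_path):
    """Return {locale: intent accuracy} over all gold examples."""
    gold = load_jsonl(gold_path)
    pred = load_jsonl(pred_path)
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex_id, g in gold.items():
        p = pred.get(ex_id)
        locale = g["locale"]
        total[locale] += 1
        if p is not None and p["intent"] == g["intent"]:
            correct[locale] += 1
    return {loc: correct[loc] / total[loc] for loc in sorted(total)}


if __name__ == "__main__":
    for locale, acc in per_locale_accuracy("gold.jsonl", "pred.jsonl").items():
        print(f"{locale}\t{acc:.4f}")
```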
