add evaluation of PRM trained by TRL #12

Open
wants to merge 2 commits into main
Conversation

CJReinforce

Add the evaluation of PRM trained by the PRMTrainer of TRL.

I reproduced Qwen2.5-Math-7B-PRM800K using TRL's PRMTrainer. The performance of the reproduced PRM, evaluated with run_eval_prm_trl.py, is:

| Dataset | Error acc | Correct acc | F1 |
| --- | --- | --- | --- |
| gsm8k | 46.9 | 96.4 | 63.1 |
| math | 55.9 | 82.0 | 66.5 |
| olympiadbench | 39.0 | 67.8 | 49.6 |
| omnimath | 34.4 | 66.8 | 45.4 |

ProcessBench average F1: 56.1

@CJReinforce
Author

In the 05769fa commit, I fixed a bug where tokens were concatenated incorrectly: the separator token is now appended after each step's tokens, instead of concatenating the separator string with each step string before tokenization. This modification aligns with the training process in TRL.
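To illustrate why the two orderings differ, here is a toy sketch (not the PR's actual code; the whitespace "tokenizer" and `SEP` stand in for the real tokenizer and separator). Tokenizing `step + separator` as one string can let the separator fuse into the last step token, whereas tokenizing the step alone and then appending the separator's token ids keeps it as its own token:

```python
SEP = "<sep>"

def toy_encode(text):
    # Stand-in for tokenizer.encode: splits on whitespace.
    return text.split()

def buggy_concat(steps):
    # Old path: separator string concatenated before tokenization,
    # so it fuses into the final token of each step.
    ids = []
    for step in steps:
        ids += toy_encode(step + SEP)
    return ids

def fixed_concat(steps):
    # Fixed path: tokenize each step alone, then append the
    # separator's own token id(s), as TRL's PRMTrainer does.
    sep_ids = toy_encode(SEP)
    ids = []
    for step in steps:
        ids += toy_encode(step) + sep_ids
    return ids

steps = ["2 + 2 = 4", "so x = 4"]
print(buggy_concat(steps))  # last step token ends with "4<sep>"
print(fixed_concat(steps))  # "<sep>" is a separate token
```

With a real subword tokenizer the fused case produces different (and often out-of-distribution) token ids at every step boundary, which is why the fix changes the evaluation scores.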

After fixing the bug, the performance of my reproduced PRM, evaluated with run_eval_prm_trl.py, is:

| Dataset | Error acc | Correct acc | F1 |
| --- | --- | --- | --- |
| gsm8k | 52.2 | 96.9 | 67.8 |
| math | 58.9 | 81.3 | 68.3 |
| olympiadbench | 45.1 | 64.3 | 53.0 |
| omnimath | 42.7 | 66.4 | 52.0 |

ProcessBench average F1: 60.3
