Did your own tests give the same result? #8
Comments
You could try retraining, or replacing bert-base-chinese with hfl/chinese-roberta-wwm-ext

Following up on the previous comment... could you share the hyperparameters?

I switched to hfl/chinese-roberta-wwm-ext and it still falls slightly short

Yes. After tuning the hyperparameters with batch_size=128, best_f1=0.7923
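The fix discussed above amounts to two changes: swapping the pretrained encoder for hfl/chinese-roberta-wwm-ext and raising the batch size to 128. A minimal sketch of that configuration change, assuming a HuggingFace-style setup (the `TrainConfig` names are illustrative, not this repo's actual CLI):

```python
# Hypothetical config sketch for the swap suggested in this thread.
# Field names are assumptions for illustration, not the repo's real options.
from dataclasses import dataclass

@dataclass
class TrainConfig:
    pretrained_model: str = "bert-base-chinese"  # original baseline encoder
    batch_size: int = 32                         # assumed default
    epochs: int = 33                             # as reported in the issue

# The change discussed above: stronger Chinese pretrained weights plus the
# larger batch size at which the maintainer reports best_f1=0.7923.
cfg = TrainConfig(pretrained_model="hfl/chinese-roberta-wwm-ext", batch_size=128)

# Loading would then look like this (requires network or cached weights):
# from transformers import AutoTokenizer, AutoModel
# tokenizer = AutoTokenizer.from_pretrained(cfg.pretrained_model)
# encoder = AutoModel.from_pretrained(cfg.pretrained_model)
print(cfg.pretrained_model, cfg.batch_size)
```

Both checkpoints share the BERT architecture, so `AutoModel.from_pretrained` makes the swap a one-line change; the rest of the training loop is unaffected.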
After training for 33 epochs I got the following results:
avg_precision: 0.7733637138826565, avg_recall: 0.7902737446924301, avg_f1: 0.7806660288039
Best F1: 0.7875311899437892
Isn't this result still slightly worse than BERT+CRF?
For comparison, the BERT+CRF results from https://github.com/lonePatient/BERT-NER-Pytorch:

| Accuracy (entity) | Recall (entity) | F1 score (entity) |
| --- | --- | --- |
| 0.7977 | 0.8177 | 0.8076 |
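The scores being compared are entity-level metrics: a prediction counts as correct only on an exact (type, start, end) span match, and precision/recall/F1 are micro-averaged over all spans. A minimal sketch of that computation (the exact scorer used by either repo may differ in details):

```python
# Entity-level micro-averaged precision/recall/F1 over exact span matches.
# gold/pred: lists of sets of (entity_type, start, end) tuples, one set per sentence.
def entity_prf(gold, pred):
    tp = sum(len(g & p) for g, p in zip(gold, pred))  # exact-match true positives
    n_gold = sum(len(g) for g in gold)                # total gold entities
    n_pred = sum(len(p) for p in pred)                # total predicted entities
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy check: 2 of the 3 predicted spans exactly match 2 of the 4 gold spans.
gold = [{("PER", 0, 2), ("LOC", 5, 7)}, {("ORG", 1, 4), ("PER", 6, 8)}]
pred = [{("PER", 0, 2), ("LOC", 4, 7)}, {("ORG", 1, 4)}]
p, r, f = entity_prf(gold, pred)
print(round(p, 4), round(r, 4), round(f, 4))  # 0.6667 0.5 0.5714
```

Note that under this scheme a span with the right boundaries but the wrong type (or vice versa) counts as both a false positive and a false negative, which is why boundary-sensitive models like BERT+CRF tend to score higher here.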