Hey, are your own test results like this too? #8

Open
1a2cjitenfei opened this issue Apr 18, 2022 · 6 comments

Comments


1a2cjitenfei commented Apr 18, 2022

After training for 33 epochs I got the following results:
avg_precision: 0.7733637138826565, avg_recall: 0.7902737446924301, avg_f1: 0.7806660288039
Best F1: 0.7875311899437892

Isn't this result still a little worse than BERT+CRF? For reference, the BERT+CRF results from https://github.com/lonePatient/BERT-NER-Pytorch:

| Accuracy (entity) | Recall (entity) | F1 score (entity) |
| --- | --- | --- |
| 0.7977 | 0.8177 | 0.8076 |

gaohongkui (Owner) commented May 4, 2022

My results are consistent with Su Jianlin's (苏神).

[screenshot of results]

You can try retraining, or replace bert-base-chinese with hfl/chinese-roberta-wwm-ext.
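For anyone unsure what that swap involves, here is a minimal sketch assuming the project loads its encoder through Hugging Face transformers (the repo's actual loading code may differ). Note that hfl/chinese-roberta-wwm-ext is loaded with the BERT classes, per HFL's model card:

```python
# Sketch of swapping the pretrained encoder. Assumes Hugging Face transformers;
# not necessarily the loading code this repo actually uses.
from transformers import BertModel, BertTokenizerFast

# Before:
# model_name = "bert-base-chinese"

# After: HFL's whole-word-masking Chinese RoBERTa (still uses the BERT classes).
model_name = "hfl/chinese-roberta-wwm-ext"

tokenizer = BertTokenizerFast.from_pretrained(model_name)
encoder = BertModel.from_pretrained(model_name)
```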


nlper01 commented Jul 18, 2022

[screenshot of results]
I trained several times and switched to hfl/chinese-roberta-wwm-ext, but it still falls a bit short.

@feitboiling

[screenshot of results]
Hi, my results look the same. Besides switching to RoBERTa, is there anything else I can try?

@feitboiling

Following up on the above... could you share the hyperparameters you used?


nlper01 commented Aug 6, 2022

Switching to hfl/chinese-roberta-wwm-ext still falls a bit short.
As I recall, the model had already converged after 10 (or maybe 33) epochs.

@feitboiling

Yes. I tuned the hyperparameters a bit: with batch_size=128, best_f1=0.7923.
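For anyone reproducing this, a minimal sketch of where that batch size would plug in, assuming a standard PyTorch DataLoader; the dataset below is a placeholder, not this repo's actual data pipeline:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder tensors standing in for the tokenized NER training set
# (512 examples, sequence length 128) -- purely illustrative.
# 21128 is the bert-base-chinese vocabulary size.
dummy_input_ids = torch.randint(0, 21128, (512, 128))
dummy_labels = torch.zeros(512, 128, dtype=torch.long)
train_dataset = TensorDataset(dummy_input_ids, dummy_labels)

# batch_size=128 is the value reported above (best_f1=0.7923).
train_loader = DataLoader(train_dataset, batch_size=128, shuffle=True)
```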
