Some errors happened in the evaluate mode in test.py #11
Asked by BugMaker2002 in Q&A
Answered by Dylan-H-Wang on Dec 9, 2023
The provided weights are SSL-pre-trained weights and are expected to be fine-tuned on a downstream dataset. The `--evaluate` flag is used for inference, which means the weights passed to `--pretrained` should come from a fine-tuned model. In short: first run `test.py` with the provided pre-trained weights and without the `--evaluate` flag. Then run `test.py` with `--evaluate`, using the output weights from that first `test.py` run.
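A minimal sketch of the two-step workflow described above. The `--pretrained` and `--evaluate` flags come from this discussion, but the checkpoint file names (`ssl_pretrained.pth`, `finetuned.pth`) are hypothetical placeholders, and any other arguments `test.py` may require (dataset paths, output directories) are omitted here:

```bash
# Step 1: fine-tune, starting from the provided SSL-pre-trained weights.
# No --evaluate flag, so test.py runs in fine-tuning mode.
# "ssl_pretrained.pth" is a placeholder for the provided weights file.
python test.py --pretrained ssl_pretrained.pth

# Step 2: run inference with --evaluate, pointing --pretrained at the
# fine-tuned weights written out by step 1 ("finetuned.pth" is a placeholder).
python test.py --evaluate --pretrained finetuned.pth
```

The key point is the ordering: `--evaluate` only makes sense once a fine-tuned checkpoint exists, so the plain `test.py` run must come first.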