Dimension mismatch when using Coca for VQA task #516
Comments
Hi @jemmyshin, I think there was a similar issue that was fixed some time ago. Any chance you are using an older version? Otherwise this is a bug; I will check what the issue is.
I used the code in the CoCa Colab, so it should be the latest version.
Hi @jemmyshin, so indeed there is a small bug in some sense; however, if I understand correctly, you can probably already do what you want without any changes to the codebase. In the meantime I will open a PR. If you are not getting an answer after your prompt, the reason is the tokenizer: it adds padding and an end-of-text token by default. If you replace the default tokenization with one that does not append the padding and end-of-text tokens, you should get the answer after the prompt. I will make a PR to fix this, but you should be able to try it already. Let me know if this actually works!
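As a rough illustration of the workaround described above (the exact snippet from the thread is not reproduced here), stripping the padding and end-of-text tokens from an already-tokenized prompt might look like the sketch below. The function name is hypothetical, and the ids 49406/49407 are the CLIP BPE start/end-of-text token ids with 0 assumed as padding:

```python
def strip_special_tokens(token_ids, eot=49407, pad=0):
    # Keep the start token and the prompt tokens; drop the end-of-text
    # token and the trailing padding that the tokenizer adds by default,
    # so generation continues from the prompt instead of stopping at EOT.
    return [t for t in token_ids if t not in (eot, pad)]

# Tokenized prompt in the style of "a photo of" plus EOT and padding.
prompt_tokens = [49406, 320, 1125, 539, 49407, 0, 0, 0]
print(strip_special_tokens(prompt_tokens))  # [49406, 320, 1125, 539]
```

The trimmed id list can then be turned back into a tensor and passed to `generate` as the text prompt.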
@jemmyshin Hi, can you share the full VQA code for CoCa? Thanks!
I use the `generate` endpoint to do a VQA task with the CoCa model, but got a dimension-mismatch error. It seems that this issue does not happen in `beam_search` mode but appears in `top_k` or `top_p` mode.

Also, when I change the `max_seq_len` parameter in `generate` I get different outputs. For example, `max_seq_len = 20` with `generation_type = "top_p"` does not raise this error, but `max_seq_len = 78` with `generation_type = "top_p"` does. Am I using this in the wrong way?
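For background on the generation modes contrasted above: `top_p` (nucleus) sampling restricts each decoding step to the smallest set of tokens whose cumulative probability exceeds `p`, then samples from that renormalized set. A minimal self-contained sketch of the filtering step (not open_clip's actual implementation) is:

```python
def top_p_filter(probs, p=0.9):
    # Sort token indices by probability, descending.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    # Keep the smallest prefix whose cumulative probability reaches p.
    kept, total = [], 0.0
    for i in order:
        kept.append(i)
        total += probs[i]
        if total >= p:
            break
    # Renormalize the surviving probabilities so they sum to 1.
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}

# With p=0.8, only the two most likely tokens survive.
print(top_p_filter([0.5, 0.3, 0.15, 0.05], p=0.8))  # {0: 0.625, 1: 0.375}
```

By contrast, beam search keeps a fixed number of deterministic candidate sequences, which is why the two modes can behave differently for the same prompt.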