Replace the operator of "torch.einsum" #4
Hi @zhangnju
I updated the library; does everything work now?
I am able to run your model with your ONNX code, but I similarly have a problem converting to TensorRT. I tried TensorRT versions 8.5.2.2 and 8.6.1, and both fail with an error at the aten_unsqueeze op:
q, k, v = (torch.einsum("tbh, oh -> tbo", x, self.attn.in_proj_weight)
           + self.attn.in_proj_bias).contiguous().chunk(3, dim=-1)
@Lednik7 Thanks for your great work on Clip-ONNX. Regarding the PyTorch operator torch.einsum: if we don't want to use this operator, do you have alternative code that could replace it?
This operator is not friendly to some inference engines, such as NVIDIA TensorRT, so a replacement for einsum would be better.
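As a possible workaround: the pattern "tbh, oh -> tbo" contracts only over the last dimension, so it is equivalent to a linear projection, which most inference engines handle natively. A minimal sketch (the shapes t, b, h and the standalone weight/bias tensors are hypothetical stand-ins for x, self.attn.in_proj_weight, and self.attn.in_proj_bias from the snippet above):

```python
import torch
import torch.nn.functional as F

t, b, h = 4, 2, 8                 # hypothetical sequence, batch, hidden sizes
x = torch.randn(t, b, h)
weight = torch.randn(3 * h, h)    # stands in for self.attn.in_proj_weight
bias = torch.randn(3 * h)         # stands in for self.attn.in_proj_bias

# einsum form from the snippet: out[t, b, o] = sum_h x[t, b, h] * weight[o, h]
ref = torch.einsum("tbh, oh -> tbo", x, weight) + bias

# equivalent linear form: the contraction over h is exactly x @ weight.T + bias
alt = F.linear(x, weight, bias)

print(torch.allclose(ref, alt, atol=1e-5))  # True
q, k, v = alt.contiguous().chunk(3, dim=-1)
```

F.linear exports to ONNX as plain MatMul/Add (or Gemm) nodes, which avoids the einsum op entirely; whether this resolves the separate aten_unsqueeze error mentioned above is a different question.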