We recently added a commit that raises Dynamo's accumulated cache size limit so that torch.compile works with large models such as 70B, whose num_layer exceeds the default limit of 64: #45 (comment).
This limit has now been raised upstream in PyTorch (related PR) from 64 to 256, so the new default is sufficient and we can revert that commit.
I am holding off on the revert because the PyTorch PR only landed in today's nightly, so most environments do not have this fix yet.
We should revisit this once that PR has been picked up by most of the environments we support, at which point the commit can be reverted.
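For context, here is a minimal sketch of the kind of override the commit applies, assuming the standard torch._dynamo.config knob; the exact value set in the commit may differ:

```python
import torch

# Raise Dynamo's accumulated cache size limit so torch.compile does not
# fall back to eager once a model accumulates more compiled variants
# (roughly one per transformer layer) than the default allows.
# Older PyTorch builds default this to 64; recent nightlies default to 256,
# which is why this override can eventually be removed.
torch._dynamo.config.accumulated_cache_size_limit = 256  # value assumed for illustration
```

Once the upstream default of 256 is available in the environments we care about, dropping this override keeps our configuration aligned with stock PyTorch behavior.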