
Avoid duplicated signatures when building transactions #4230

Open · grarco wants to merge 5 commits into main

Conversation

@grarco (Collaborator) commented Jan 10, 2025

Describe your changes

Closes #4203.

This PR adds a new method on the Tx type to remove duplicated sections. This method is then called in the signing function before producing the wrapper signature (which is computed over all the sections) and in the dump_tx function (in case the dumped tx needs to be signed offline).

To support this logic, the PartialEq implementation for the Authorization type is now provided manually: it ignores the way the signer is specified and focuses instead on the targets and the actual signatures.

As a side note, the PartialEq implementation for SigningTxData has also been updated to account for the shielded hash.
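The pruning step described above can be sketched as follows. This is a minimal standalone illustration, not the actual Namada API: Section, Tx, and prune_duplicated_sections are simplified stand-ins for the real types and method in crates/tx.

```rust
// Sketch of the dedup idea: before signing, drop sections that compare
// equal (per PartialEq) so the wrapper signature is computed over a
// canonical list of sections. Types here are illustrative stand-ins.
#[derive(Clone, Debug, PartialEq)]
enum Section {
    Data(Vec<u8>),
    Authorization(Vec<u8>), // stand-in for the real Authorization type
}

#[derive(Clone, Debug, Default)]
struct Tx {
    sections: Vec<Section>,
}

impl Tx {
    /// Remove any section equal to an earlier one, keeping the first
    /// occurrence and preserving the original relative order.
    fn prune_duplicated_sections(&mut self) {
        let mut kept: Vec<Section> = Vec::new();
        for section in self.sections.drain(..) {
            if !kept.contains(&section) {
                kept.push(section);
            }
        }
        self.sections = kept;
    }
}
```

In the PR, the analogous call happens both before producing the wrapper signature and inside dump_tx, so an offline signing flow also starts from a pruned transaction.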

Checklist before merging

  • If this PR has some consensus breaking changes, I added the corresponding breaking:: labels
    • This will require 2 reviewers to approve the changes
  • If this PR requires changes to the docs or specs, a corresponding PR is opened in the namada-docs repo
    • Relevant PR if applies:
  • If this PR affects services such as namada-indexer or namada-masp-indexer, a corresponding PR is opened in that repo
    • Relevant PR if applies:

@grarco grarco marked this pull request as ready for review January 10, 2025 19:46
@grarco grarco requested review from tzemanovic and murisi January 10, 2025 19:47

codecov bot commented Jan 10, 2025

Codecov Report

Attention: Patch coverage is 98.71795% with 1 line in your changes missing coverage. Please review.

Project coverage is 74.60%. Comparing base (9527ea9) to head (fa5bdbf).
Report is 5 commits behind head on main.

Files with missing lines Patch % Lines
crates/tx/src/section.rs 96.87% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4230      +/-   ##
==========================================
+ Coverage   74.59%   74.60%   +0.01%     
==========================================
  Files         342      342              
  Lines      108724   108792      +68     
==========================================
+ Hits        81101    81169      +68     
  Misses      27623    27623              


@murisi (Collaborator) left a comment

While the overall approach taken here to avoid duplicate signatures makes sense, checking whether Authorization sections are duplicates after they have been produced (by checking structural equality) seems less robust than not producing duplicates in the first place. But perhaps this (situation where a signing process is redundantly being done twice) is unavoidable...

crates/sdk/src/signing.rs (outdated diff)
  .iter()
- .all(|pubkey| other.public_keys.contains(pubkey))
+ .all(|pubkey| other_public_keys.contains(pubkey))
@murisi (Collaborator):

Doesn't this code only prove that public_keys is a subset of other_public_keys? Imagine that public_keys = vec![x, x, y, y] and other_public_keys = vec![w,x,y,z], wouldn't this logic conclude that public_keys and other_public_keys are equivalent (because both vectors have the same length and all the elements of the former are contained in the latter)?
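The failure mode raised here can be reproduced with a small standalone example (naive_eq and multiset_eq are illustrative names, not Namada functions): a length check plus a one-directional containment check does not imply multiset equality when one side holds duplicates, while comparing element counts does.

```rust
use std::collections::BTreeMap;

// The pattern under review: same length, and every element of `a`
// appears somewhere in `b`. This is only a subset check in disguise.
fn naive_eq<T: PartialEq>(a: &[T], b: &[T]) -> bool {
    a.len() == b.len() && a.iter().all(|x| b.contains(x))
}

// Count occurrences of each element.
fn counts<T: Ord>(xs: &[T]) -> BTreeMap<&T, usize> {
    let mut m = BTreeMap::new();
    for x in xs {
        *m.entry(x).or_insert(0) += 1;
    }
    m
}

// Multiset equality: both sides must contain each element the same
// number of times, regardless of order.
fn multiset_eq<T: Ord>(a: &[T], b: &[T]) -> bool {
    counts(a) == counts(b)
}
```

With the exact counterexample from the comment, `naive_eq(&['x','x','y','y'], &['w','x','y','z'])` returns true (a false positive) while `multiset_eq` correctly returns false.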

@grarco (Collaborator, Author):

You're right! I tried a different approach in fa5bdbf

// target(s) and the signatures match, regardless of how the signer is
// expressed

// Check equivalence of the targets ignoring the specific ordering
@murisi (Collaborator):

Doesn't this code only prove that targets is a subset of other_targets? Imagine that targets = vec![x, x, y, y] and other_targets = vec![w,x,y,z], wouldn't this logic conclude that targets and other_targets are equivalent (because both vectors have the same length and all the elements of the former are contained in the latter)?

@grarco (Collaborator, Author):

Same as above, thanks!

signatures: other_signatures,
} = other;

// Two authorizations are equal when they are computed over the same
@murisi (Collaborator) commented Jan 13, 2025:

This definition could work. But it has the implication that if Authorization::signer contains the variant Signer::PubKeys and someone tampers with one of the public keys in that list so that Authorization::verify_signature can no longer pass, this PartialEq implementation would still say that the tampered instance is still equal to the original. No? If so, is this desirable?

Also, given that this function defines a partial equivalence relation, I guess false should be returned in those cases where there is insufficient information to check/conclude equality of the signers? No?

@grarco (Collaborator, Author) commented Jan 14, 2025:

That's correct. The duplicated authorizations I observed on mainnet differed only in how the signer was represented, which is what motivated this change to the trait implementation. My reasoning was that once we establish the equality of the signatures' targets and of the signatures themselves, we can safely consider the two Authorizations to be the same even if the signers are expressed differently. If the signers are genuinely different (which would cause one of the two instances to fail the signature verification check), I believe we are looking at something other than mere equality: one should check the integrity of the sections at the byte level using a digest or a similar technique (see also my answer to your next question, since it is related to this one).

@@ -442,6 +441,39 @@ pub struct Authorization {
pub signatures: BTreeMap<u8, common::Signature>,
}

impl PartialEq for Authorization {
fn eq(&self, other: &Self) -> bool {
@murisi (Collaborator):

Does it matter that partially equivalent Authorizations might not have the same cryptographic hash? Or is this impossible? Could this sort of definition of partial equality confuse people in the future? (This definition seems to be calculated specifically to deal with our duplicate signatures issue (no?), but could be unintuitive in other cases/situations...)

@grarco (Collaborator, Author):

I don't believe PartialEq should guarantee a byte-level match between the two instances, but rather a logical equivalence (as I said in my previous answer, when safety is critical one should explicitly rely on the digest of the expected instance).

I do see your point about this implementation being potentially confusing in other situations: I could try to reintroduce the check on the signer with more refined logic. My only concern is the case where a multisig signer is expressed as a set of keys in one instance and as an address in the other: in that case there's no way to resolve the equivalence without a query to storage (though there's a chance we always express multisig signers as a set of keys, which would solve this issue).

An alternative could be to introduce a separate function for this comparison and restore the previous implementation of PartialEq (aside from ignoring the order of the set of keys, which I think is a required update regardless).
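The comparison being debated can be sketched on simplified stand-in types (the real Authorization uses Namada's Signer, Hash, and common::Signature types; the field names follow the excerpt above, everything else is illustrative). Equality deliberately ignores how the signer is expressed and compares the targets order-insensitively, by element counts, together with the signatures.

```rust
use std::collections::BTreeMap;

// Stand-in for the two ways a signer can be expressed.
#[derive(Debug, Clone, PartialEq, Eq)]
enum Signer {
    Address(String),
    PubKeys(Vec<String>),
}

#[derive(Debug, Clone)]
struct Authorization {
    targets: Vec<[u8; 32]>,
    signer: Signer,
    signatures: BTreeMap<u8, Vec<u8>>,
}

// Count occurrences so that targets compare as multisets, not sets.
fn counts<T: Ord>(xs: &[T]) -> BTreeMap<&T, usize> {
    let mut m = BTreeMap::new();
    for x in xs {
        *m.entry(x).or_insert(0) += 1;
    }
    m
}

impl PartialEq for Authorization {
    fn eq(&self, other: &Self) -> bool {
        // The signer field is deliberately not compared: two sections
        // signing the same targets with the same signatures are equal
        // even if one names the signer by address and the other by keys.
        counts(&self.targets) == counts(&other.targets)
            && self.signatures == other.signatures
    }
}
```

Under this definition, two authorizations that differ only in signer representation (Signer::Address vs Signer::PubKeys) and in target ordering compare equal, which is exactly the mainnet duplication case described above; a tampered signer list would also compare equal, which is the trade-off murisi flags.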

@grarco (Collaborator, Author) commented Jan 14, 2025:

> While the overall approach taken here to avoid duplicate signatures makes sense, checking whether Authorization sections are duplicates after they have been produced (by checking structural equality) seems less robust than not producing duplicates in the first place. But perhaps this (situation where a signing process is redundantly being done twice) is unavoidable...

We already try to filter duplicated instances of SigningTxData, but there are cases where this logic fails (such as different wrapper signers, even though we should only produce a single wrapper signature, see #3901, or an offline procedure where we could get duplicated signatures from the members of a multisig account): hence the need to complement this logic with a pruning mechanism for duplicated sections. I do agree with you, though, and we should try to improve the signing procedure over time so that it produces duplicated signatures as rarely as possible.


Successfully merging this pull request may close these issues:

  • Duplicated signatures in transactions