Upgrade the World #358
Conversation
Force-pushed from 2968699 to 840a385.
Force-pushed from 840a385 to 10b193e.
VSS CI failure is unrelated and should be fixed by #357.
Force-pushed from 10b193e to 912e657.
fix(core): calling `CheckPoint::insert` with existing block must succeed

3ae9ecb test: fix off-by-one in `checkpoint_insert_existing` (valued mammal)
e6aeaea doc(core): document panic for `CheckPoint::insert` (valued mammal)
2970b83 fix(core): calling `CheckPoint::insert` with existing block must succeed (志宇)

Pull request description:

### Description

Previously, we were panicking when the caller tried to insert a block at height 0. However, this should succeed if the block hash does not change.

### Notes to the reviewers

For context:
* lightningdevkit/ldk-node#358
* https://discord.com/channels/753336465005608961/978744259693916230/1283050849429356629

### Changelog notice

* Fix `CheckPoint::insert` to not panic when inserting existing genesis block (same block height and hash).

### Checklists

#### All Submissions:

* [x] I've signed all my commits
* [x] I followed the [contribution guidelines](https://github.com/bitcoindevkit/bdk/blob/master/CONTRIBUTING.md)
* [x] I ran `cargo fmt` and `cargo clippy` before committing

#### Bugfixes:

~~* [ ] This pull request breaks the existing API~~
* [x] I've added tests to reproduce the issue which are now passing

ACKs for top commit:
ValuedMammal: Self ACK 3ae9ecb
notmandatory: ACK 3ae9ecb

Tree-SHA512: 638d8aacac59ad18b5ee369d08f39bb12cc37fa9b3f936404b60dbf44938ef850ca341d00566840b5a77909d31c9291ab56299d986d832005ca96cd91a0b0e55
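For illustration, a minimal sketch of the fixed behavior (assuming the `bdk_chain` `CheckPoint`/`BlockId` API; the all-zero hash is just a placeholder):

```rust
use bdk_chain::{local_chain::CheckPoint, BlockId};
use bitcoin::{hashes::Hash, BlockHash};

fn main() {
	// Placeholder genesis hash, for illustration only.
	let genesis_hash = BlockHash::all_zeros();
	let cp = CheckPoint::new(BlockId { height: 0, hash: genesis_hash });

	// Re-inserting the identical block at height 0 used to panic;
	// after the fix it succeeds as a no-op.
	let cp = cp.insert(BlockId { height: 0, hash: genesis_hash });
	assert_eq!(cp.height(), 0);
}
```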
Almost through the LDK upgrade commits.
```rust
/// transaction fee) this value will be zero. For channels created prior to LDK Node 0.4
/// the channel is always treated as outbound (and thus this value is never zero).
```
> the channel is always treated as outbound (and thus this value is never zero).

Not sure what is meant by this. Is this because LDK Node only allowed outbound channels? If so, why do we need to say this?
No, it has nothing to do with LDK Node per se. This too is directly copied from the LDK docs to account for the behavior change:
> Note that if this channel is inbound (and thus our counterparty pays the commitment transaction fee) this value will be zero. For `ChannelMonitor`s created prior to LDK 0.0.124, the channel is always treated as outbound (and thus this value is never zero).
```rust
/// The amount available to claim, in satoshis, excluding the on-chain fees which will be
/// required to do so.
```
Should this mention other amounts no longer included here?
Mh, I left the docs to mirror LDK's docs, and there they seem unchanged?
```diff
@@ -164,7 +166,8 @@ where
 			ConfirmationTarget::OnchainPayment => 5000,
 			ConfirmationTarget::ChannelFunding => 1000,
 			ConfirmationTarget::Lightning(ldk_target) => match ldk_target {
-				LdkConfirmationTarget::OnChainSweep => 5000,
+				LdkConfirmationTarget::MaximumFeeEstimate => 8000,
```
What went into picking this number out of curiosity?
Mhh, not much tbh. For the maximum I mostly put a number that's higher in relation to the prior confirmation target, as I'm not super sure how to estimate these fallbacks best. That said, note that we a) currently should never use these numbers in the first place, as we enforce/block on updating fee rate estimations on startup (could even consider putting a `debug_assert` here for the time being), and b) should potentially re-evaluate all fee-rate related magic numbers in a follow-up eventually, to make sure they still reflect the current fee market. Although, the latter should probably go hand-in-hand with updates to LDK to re-enable interop with other implementations on regtest/signet/testnet, which currently seems more or less broken for many users (cf. lightningdevkit/rust-lightning#2925).
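For the record, a rough sketch of the `debug_assert` idea mentioned above (hypothetical; the function shape and the fallback values are illustrative, not this PR's actual code):

```rust
use lightning::chain::chaininterface::ConfirmationTarget as LdkConfirmationTarget;

// Hypothetical sketch: fallback fee rates (sats per 1000 weight units) that
// should never actually be hit, since fee estimations are refreshed on startup.
fn fallback_sats_per_kwu(target: LdkConfirmationTarget) -> u32 {
	debug_assert!(false, "Fee estimation fallbacks should be unused");
	match target {
		LdkConfirmationTarget::MaximumFeeEstimate => 8000,
		_ => 5000,
	}
}
```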
Force-pushed from 912e657 to 9db3046.
Rebased to resolve minor conflict.
Reviewed up to but not including the first persistence commit.
Force-pushed from 98fc9b5 to 97784c5.
Made a few more changes (in the respective fixup commits):

```diff
diff --git a/src/builder.rs b/src/builder.rs
index 97c34f0f..123f9a0d 100644
--- a/src/builder.rs
+++ b/src/builder.rs
@@ -343,4 +343,6 @@ impl NodeBuilder {
 	#[cfg(any(vss, vss_test))]
 	pub fn build_with_vss_store(&self, url: String, store_id: String) -> Result<Node, BuildError> {
+		use bitcoin::key::Secp256k1;
+
 		let logger = setup_logger(&self.config)?;
@@ -352,12 +354,11 @@ impl NodeBuilder {
 		let config = Arc::new(self.config.clone());
-		let xprv = bitcoin::bip32::ExtendedPrivKey::new_master(config.network.into(), &seed_bytes)
-			.map_err(|e| {
-				log_error!(logger, "Failed to derive master secret: {}", e);
-				BuildError::InvalidSeedBytes
-			})?;
+		let xprv = bitcoin::bip32::Xpriv::new_master(config.network, &seed_bytes).map_err(|e| {
+			log_error!(logger, "Failed to derive master secret: {}", e);
+			BuildError::InvalidSeedBytes
+		})?;
 		let vss_xprv = xprv
-			.ckd_priv(&Secp256k1::new(), ChildNumber::Hardened { index: 877 })
+			.derive_priv(&Secp256k1::new(), &[ChildNumber::Hardened { index: 877 }])
 			.map_err(|e| {
 				log_error!(logger, "Failed to derive VSS secret: {}", e);
diff --git a/src/io/vss_store.rs b/src/io/vss_store.rs
index d7308ae6..474f7dbc 100644
--- a/src/io/vss_store.rs
+++ b/src/io/vss_store.rs
@@ -138,5 +138,12 @@ impl KVStore for VssStore {
 		// unwrap safety: resp.value must be always present for a non-erroneous VSS response, otherwise
 		// it is an API-violation which is converted to [`VssError::InternalServerError`] in [`VssClient`]
-		let storable = Storable::decode(&resp.value.unwrap().value[..])?;
+		let storable = Storable::decode(&resp.value.unwrap().value[..]).map_err(|e| {
+			let msg = format!(
+				"Failed to decode data read from key {}/{}/{}: {}",
+				primary_namespace, secondary_namespace, key, e
+			);
+			Error::new(ErrorKind::Other, msg)
+		})?;
+
 		Ok(self.storable_builder.deconstruct(storable)?.0)
 	}
```
src/wallet/ser.rs (outdated)
```rust
let txs_len = ChangeSetSerWrapper(&self.0.txs).serialized_length() as u64;
let txouts_len = self.0.txouts.serialized_length() as u64;
let anchors_len = ChangeSetSerWrapper(&self.0.anchors).serialized_length() as u64;
let last_seen_len = self.0.last_seen.serialized_length() as u64;
let total_len = BigSize(txs_len + txouts_len + anchors_len + last_seen_len);
total_len.write(writer)?;

ChangeSetSerWrapper(&self.0.txs).write(writer)?;
self.0.txouts.write(writer)?;
ChangeSetSerWrapper(&self.0.anchors).write(writer)?;
self.0.last_seen.write(writer)?;
```
Should we use the TLV macros instead? Or at least write a length for each field? That way we can accommodate removed fields.
> Should we use the TLV macros instead?

I don't think we can, at least not simply `impl_writeable_tlv_based`, if I'm not mistaken? I don't want to write even more boilerplate newtypes for all the individual types that the `ChangeSet`s are composed of (many of which we don't use and hence don't have serialization logic for in LDK), but we're also unable to implement `Writeable`/`Readable` directly for them. So it's more convenient to implement `Readable`/`Writeable` manually and use the `De/SerWrapper`s where needed?
> Or at least write a length for each field? That way we can accommodate removed fields.

Hmm, yeah, we could. I now added a fixup that replaces the total length with per-field lengths, using some new read/write macros to DRY it up a bit. Note that all this might be over the top, as BDK committed to make only additive, backwards-compatible changes to the `ChangeSet` types in the form of `Option`s. But I'd rather be safe than sorry here.
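The macros themselves aren't shown in this thread; hypothetically, a per-field length prefix could look something like this (only the macro's name comes from the later diff, the body is assumed):

```rust
// Hypothetical sketch: prefix each field with its encoded length as a
// `BigSize`, so a future reader can skip fields it no longer understands.
macro_rules! write_len_prefixed_field {
	($writer:expr, $field:expr) => {{
		use lightning::util::ser::{BigSize, Writeable};
		let len = BigSize($field.serialized_length() as u64);
		len.write($writer).and_then(|_| $field.write($writer))
	}};
}
```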
Mh, nevermind the "I don't think we can" part. Went down the rabbit hole to clean up the serialization based on macros, until I realized that everything I wrote was already available in LDK. While I still don't think we can just use the high-level macros like `impl_writeable_tlv_based`, I now switched to use `encode_tlv_stream`/`decode_tlv_stream`.
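For illustration, a minimal, self-contained sketch of writing via the stream macro (the toy struct and the TLV type numbers are placeholders, not this PR's actual wrappers):

```rust
use lightning::encode_tlv_stream;
use lightning::util::ser::{Writeable, Writer};

// Toy stand-in for a changeset sub-type; `u64`/`u32` already implement
// `Writeable`, so this compiles standalone.
struct ToyChangeSet {
	last_seen: u64,
	height: u32,
}

impl Writeable for ToyChangeSet {
	fn write<W: Writer>(&self, writer: &mut W) -> Result<(), lightning::io::Error> {
		// Each record is written as (type, length, value); type numbers
		// here are placeholders for illustration.
		encode_tlv_stream!(writer, {
			(0, self.last_seen, required),
			(2, self.height, required),
		});
		Ok(())
	}
}
```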
```rust
// We require a descriptor and return `None` to signal creation of a new wallet otherwise.
if let Some(descriptor) =
	read_bdk_wallet_descriptor(Arc::clone(&kv_store), Arc::clone(&logger))?
{
	change_set.descriptor = Some(descriptor);
} else {
	return Ok(None);
}

// We require a change_descriptor and return `None` to signal creation of a new wallet otherwise.
if let Some(change_descriptor) =
	read_bdk_wallet_change_descriptor(Arc::clone(&kv_store), Arc::clone(&logger))?
{
	change_set.change_descriptor = Some(change_descriptor);
} else {
	return Ok(None);
}

// We require a network and return `None` to signal creation of a new wallet otherwise.
if let Some(network) = read_bdk_wallet_network(Arc::clone(&kv_store), Arc::clone(&logger))? {
	change_set.network = Some(network);
} else {
	return Ok(None);
}
```
So this is saying if we ever failed to write one of these, then we will start with a fresh wallet? Presumably this is safe because the data below would never have been written?
> Presumably this is safe because the data below would never have been written?

Right, `descriptor`/`change_descriptor`/`network` are mandatory and we'll always write them before any of the other data. I now added a `debug_assert` to check this invariant in `persist`.
Force-pushed from 4687ee2 to 0fcd231.
src/wallet/ser.rs (outdated)
```diff
 let len = BigSize(self.0.len() as u64);
 len.write(writer)?;
 for (time, txid) in self.0.iter() {
-	ChangeSetSerWrapper(time).write(writer)?;
-	txid.write(writer)?;
+	write_len_prefixed_field!(writer, ChangeSetSerWrapper(time))?;
+	write_len_prefixed_field!(writer, txid)?;
 }
```
Maybe we could use `WithoutLength` and `Iterable` using `encoding` in `encode_tlv_stream`? `Iterable` is `pub(crate)` unfortunately, so we would need to copy the code. Likewise elsewhere.
Mhh, I didn't consider `Iterable` so far, but I had a look at `WithoutLength` before. The issue is that we don't have an implementation of `Writeable`/`Readable` for `WithoutLength<BTreeSet>` either, so we'd at the very least need to find a workaround for it (i.e., duplicating the code, and dropping it after we added it upstream?).

Could you expand on how `Iterable` would help in this context?
Hmm... right, on second thought we probably don't need to use `Iterable` or `WithoutLength`. But shouldn't we not write the length prefix for the `BTreeSet` size? Any `BTreeSet` or `BTreeMap` will always be written in the context of an `encode_tlv_stream`, so we'd end up writing the length twice. In LDK, we use `WithoutLength` to avoid this, but that's only because the serialization format predated the use of `encode_tlv_stream`.
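To illustrate the point: a record in a TLV stream already carries its own length, so the elements can be written bare and the count recovered on read from the record boundary. A sketch:

```rust
use std::collections::BTreeSet;

use lightning::util::ser::{Writeable, Writer};

// Sketch: write only the elements; the enclosing TLV record's length already
// bounds the value, so an explicit element count would be redundant.
fn write_set_without_count<W: Writer, T: Writeable + Ord>(
	writer: &mut W, set: &BTreeSet<T>,
) -> Result<(), lightning::io::Error> {
	for item in set.iter() {
		item.write(writer)?;
	}
	Ok(())
}
```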
Now pinned the BDK dependencies as we're stuck on them until we get an LDK version with a compatible
src/wallet/ser.rs (outdated)
```rust
pub(crate) struct ChangeSetSerWrapper<'a, T>(pub &'a T);
pub(crate) struct ChangeSetDeserWrapper<T>(pub T);
```
Could we give this a different name, given it isn't used exclusively with `ChangeSet` types? Or maybe create a separate wrapper for the other types?
Mhh, I named it that way as it's all `ChangeSet`-related/-sub-types. If we deem it more general we might want to move the `ser` module somewhere more central, too, but for now I kept it in `wallet`?
src/wallet/ser.rs (outdated)
```rust
write_len_prefixed_field!(writer, ChangeSetSerWrapper(time))?;
write_len_prefixed_field!(writer, txid)?;
```
When writing `BTreeSet` items, I guess only using length-prefixes means the item type can grow (e.g., by becoming a 3-tuple) but can never shrink or change a portion of the tuple. Maybe this is ok though, since any change should probably result in using a different TLV type in the TLV stream containing the `BTreeSet`. Or should we treat the tuple itself as a type by writing it as a TLV stream?
Mhh, good point. I now switched to `{read,write}_tlv_fields` and also introduced a `CHANGESET_SERIALIZATION_VERSION` byte for top-level `ChangeSet` sub-types, to make sure we're potentially able to write some migration logic if it ever becomes necessary.
Force-pushed from 0fcd231 to 0423965.
Force-pushed from 0423965 to 5f1379d.
Rebased this PR on top of #366 (which needs to land first) for now to avoid introducing too many rebase conflicts.
```rust
let mut locked_persister = self.persister.lock().unwrap();
locked_wallet.persist(&mut locked_persister).map_err(|e| {
	log_error!(self.logger, "Failed to persist wallet: {}", e);
	Error::PersistenceFailed
})?;
```
If `psbt.extract_tx()` fails below, are there any implications of persisting here? Will the wallet think the inputs have been spent? Will it think a change output has been used when it really hasn't?
> Will the wallet think the inputs have been spent?

No, BDK will only pick up on the created transaction through chain syncing, i.e., once it has been broadcast and at least landed in the mempool.

> Will it think a change output has been used when it really hasn't?

Yes, IIUC it may result in the internal descriptor being advanced, i.e., changes may be persisted via the `IndexerChangeSet`. I think we could consider moving the persistence down to be the very last step, checking after acquiring the wallet `Mutex` that no unpersisted changes are staged to begin with, and then calling `take_staged` in case of any failure during this flow to undo the changes before skipping persistence. This seems feasible, but I'm not totally sure whether we should go down the road of such micro-optimizations already, or if we should do so at a later date, when we have more overall experience with the new syncing and wallet logic (and BDK has possibly shipped the final 1.0, as more breaking changes are about to come when upgrading to later beta releases, hence the pins in `Cargo.toml`)?
Now experimented with the mentioned approach (see example here: tnull@8ed0783), but I don't think we want to go this way as of now: FWIW, even if we jump through all these hoops to leave a clean stage behind in case of a failure, the next step in case of a success will be broadcasting the transaction, which might just fail outright, in which case we would still have advanced and persisted the indexer needlessly. We might want to revisit this though when we come around to actually implementing rebroadcasting for the on-chain wallet transactions. Now tracking at #367.
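For reference, the shape of the experimented-with approach (a sketch assuming the `bdk_wallet` 1.0 beta `PersistedWallet` API; error types are simplified):

```rust
use bdk_wallet::{PersistedWallet, WalletPersister};
use bitcoin::{Psbt, Transaction};

// Sketch: persist only once extraction succeeded; on failure, take (and drop)
// the staged changes so the advanced indexer state isn't persisted needlessly.
fn extract_and_persist<P: WalletPersister>(
	wallet: &mut PersistedWallet<P>, persister: &mut P, psbt: Psbt,
) -> Result<Transaction, ()> {
	match psbt.extract_tx() {
		Ok(tx) => {
			wallet.persist(persister).map_err(|_| ())?;
			Ok(tx)
		},
		Err(_) => {
			// "Unstage" the pending changes instead of persisting them.
			let _ = wallet.take_staged();
			Err(())
		},
	}
}
```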
If we never call `take_staged`, does that mean the `ChangeSet` returned by `staged` will continuously grow? I assume the purpose of it is to communicate what needs to be persisted, but it wasn't 100% clear from the docs. It mentions "committed" in some places and "persisted" in others, though I assume those are equivalent?
> If we never call `take_staged`, does that mean the `ChangeSet` returned by `staged` will continuously grow?

Nope, it will be taken internally by `persist`: https://github.com/bitcoindevkit/bdk/blob/ec7342d80775408fec68adb3da4816bd62efcf15/crates/wallet/src/wallet/persisted.rs#L183-L190
> I assume the purpose of it is to communicate what needs to be persisted, but it wasn't 100% clear from the docs. It mentions "committed" in some places and "persisted" in others, though I assume those are equivalent?

Yeah, I agree the docs leave some room for improvement here and there. Also see bitcoindevkit/bdk#1600 and bitcoindevkit/bdk#1542.
Ah, so `take_staged` is basically "unstage".
> Ah, so `take_staged` is basically "unstage".

Yes, when used like in the linked commit above.
@jkczyz Let me know when I can squash.
Sure, go ahead!
Force-pushed from 5f1379d to 4ab06c2.
Squashed all the fixups without further changes.
We will be adding some wallet persistence/serialization related types in a separate module down the line, so here we prepare for it by already moving the wallet code to a module directory.
... we update LDK, lightning-liquidity, BDK, rust-bitcoin, rust-esplora-client, rust-electrum-client, etc.
Force-pushed from 4ab06c2 to d0de144.
Rebased after #366 landed.
Closes #195.
... we update LDK, lightning-liquidity, BDK, rust-bitcoin, rust-esplora-client, rust-electrum-client, etc.
Notably, we mostly maintain the existing functionality here; any feature upgrades and refactors (e.g., adding additional chain data sources, which we now can finally do) will happen in follow-up PRs.
Should be ready for review. The last commit can be dropped if BDK ships their next release during the review process; otherwise we'll (partially) revert it afterwards (see #359).