feat: shard aware peer store pruning #2167
Conversation
Roughly looks fine to me. I cannot immediately think of another simple mechanism to prune with shard load balancing. However, simulating this (and other connection/peer behaviour) becomes more and more important to really understand the impact of changes. For example, I mentioned in a comment below that I think this approach makes us more likely to prune peers that support multiple shards, which should in fact be a reason for us to prefer keeping them.
for peer in peers[0..count]:
  if peersToPrune.len >= pruningCount:
    break

  peersToPrune.incl(peer)
One thing to remember is that these peers more than likely also support other shards that we may be interested in, so removing them here also removes them from the other supported shards' slots. Not sure if it's a good idea to make this more complicated in this PR, but a possible issue I foresee is that peers that support more shards (good!) are more likely to be pruned overall. Perhaps a future improvement is to sort the peers here by number of supported shards and prune first the peers that support fewer or no other shards?
On the flip side, peers that support fewer shards might be more decentralized (not running in a data center on beefy hardware).
A peer that supports all shards == a central server (an exaggeration, but still). We have to be careful about what we incentivize.
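If that future improvement is explored, a minimal Nim sketch of the idea could look like the following. PeerId, PeerInfo, its shards field, and pruneLeastSharedFirst are hypothetical names used only for illustration here, not the actual nwaku API: candidates are sorted by how many shards they advertise, and the ones advertising the fewest are pruned first.

import std/[algorithm, sets]

type
  PeerId = string
  PeerInfo = object
    id: PeerId
    shards: seq[uint16]   # shards this peer advertises support for

proc pruneLeastSharedFirst(candidates: seq[PeerInfo],
                           pruningCount: int): HashSet[PeerId] =
  ## Sketch only: prune peers that serve the fewest shards first,
  ## so peers covering many shards are kept longer.
  result = initHashSet[PeerId]()
  var ordered = candidates
  ordered.sort(proc (a, b: PeerInfo): int = cmp(a.shards.len, b.shards.len))
  for peer in ordered:
    if result.len >= pruningCount:
      break
    result.incl(peer.id)

As noted above, though, favouring many-shard peers has its own incentive trade-offs, so the sorting key is a design choice rather than an obvious win.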
lgtm!
Blocked by #2151
Description
With the addition of sharded peer management, peer store pruning must anticipate sharding changes (keep some peers for each shard, just in case).
I went with removing peers on shards with a higher-than-average peer count. Doing this in a loop should eventually balance the number of peers per shard.
I also remove all unconnected peers without shards.
Let me know if another strategy would make more sense.
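A rough Nim sketch of this strategy, using assumed helper structures (a Table of shard to peer IDs and a list of unconnected peers without shards) rather than the actual PeerManager types; it only illustrates the average-based balancing described above.

import std/[tables, sets]

type
  PeerId = string
  ShardId = uint16

proc selectPeersToPrune(peersPerShard: Table[ShardId, seq[PeerId]],
                        unconnectedNoShard: seq[PeerId]): HashSet[PeerId] =
  ## Sketch only: prune unconnected peers without shards, then trim
  ## over-populated shards down toward the average peer count.
  result = initHashSet[PeerId]()

  # Unconnected peers that advertise no shard are always pruned.
  for peer in unconnectedNoShard:
    result.incl(peer)

  if peersPerShard.len == 0:
    return

  # Average number of peers per shard.
  var total = 0
  for peers in peersPerShard.values:
    total += peers.len
  let avg = total div peersPerShard.len

  # On each shard above the average, prune the excess.
  for peers in peersPerShard.values:
    if peers.len <= avg:
      continue
    var pruned = 0
    for peer in peers:
      if pruned >= peers.len - avg:
        break
      result.incl(peer)
      inc pruned

Repeating this on every pruning round should converge toward an even distribution, though, as noted in the review comments, a peer pruned on one shard also disappears from every other shard it serves.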
Changes
How to test
Simulations will be done. All unit tests already pass.
Tracking #1940