Doesn't scale to many txs, nor to large data. The read/write cache locks become contended with many txs, and large account data causes large memory copies.
For real organic content, where the working set of accounts isn't large (say 10-100x more txs but only ~2x more account data), we can mitigate the copy overhead by being smarter about allocating memory.
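One way "smarter allocation" could look (a hypothetical sketch, not the actual accountsdb code): a per-batch bump arena that reserves one contiguous region per batch of txs and hands out slices, so account-data copies reuse the same buffer instead of paying a heap allocation and free per account.

```rust
/// Hypothetical sketch: a bump arena for per-batch account-data copies.
/// All names here are illustrative, not the real accountsdb API.
pub struct BumpArena {
    buf: Vec<u8>,
    offset: usize,
}

impl BumpArena {
    pub fn with_capacity(cap: usize) -> Self {
        Self { buf: vec![0u8; cap], offset: 0 }
    }

    /// Copy `data` into the arena and return its slice, or None if the
    /// arena is full (caller would fall back to a regular allocation).
    pub fn alloc_copy(&mut self, data: &[u8]) -> Option<&[u8]> {
        let end = self.offset.checked_add(data.len())?;
        if end > self.buf.len() {
            return None;
        }
        self.buf[self.offset..end].copy_from_slice(data);
        let slice = &self.buf[self.offset..end];
        self.offset = end;
        Some(slice)
    }

    /// Reset between batches: the memory is reused, never freed.
    pub fn reset(&mut self) {
        self.offset = 0;
    }
}
```

The point of the design is that with a modest working set, one arena sized for the batch absorbs all the copies, and `reset` makes the next batch's allocations free.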
For the read/write cache contention, we don't have a mitigation at the moment.
We probably have to rewrite accountsdb as a proper database. The current design won't scale to 1M TPS and can't be fixed incrementally. Alessandro has ideas that should be easy to prototype.
A "pragmatic" LSM-tree DB would work well here. ClickHouse is one option (read this paper); even RocksDB would be an improvement. "Pragmatic" here means not adhering to a pure LSM design: instead, mixing keys and data, or keys and split data (a la whisper), depending on account sizes, etc.
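A sketch of what the mixed layout might mean in practice (all names and the size cutoff are assumptions for illustration): small accounts stored inline next to their key, so a lookup is one read; large accounts store only a pointer next to the key, with the data in a separate append-only blob log, so LSM compaction rewrites keys rather than megabytes of account data.

```rust
/// Assumed cutoff between inline and split storage; tune per workload.
const INLINE_THRESHOLD: usize = 256;

/// What sits next to the key in the (hypothetical) LSM run.
pub enum Record {
    /// Small account: data lives with the key.
    Inline(Vec<u8>),
    /// Large account: the run holds only (offset, len) into the blob log.
    Blob { offset: u64, len: u32 },
}

/// Stand-in for an append-only blob file.
pub struct BlobLog {
    pub bytes: Vec<u8>,
}

impl BlobLog {
    pub fn append(&mut self, data: &[u8]) -> (u64, u32) {
        let offset = self.bytes.len() as u64;
        self.bytes.extend_from_slice(data);
        (offset, data.len() as u32)
    }
    pub fn read(&self, offset: u64, len: u32) -> &[u8] {
        &self.bytes[offset as usize..offset as usize + len as usize]
    }
}

/// Route a write by account size.
pub fn write_account(log: &mut BlobLog, data: &[u8]) -> Record {
    if data.len() <= INLINE_THRESHOLD {
        Record::Inline(data.to_vec())
    } else {
        let (offset, len) = log.append(data);
        Record::Blob { offset, len }
    }
}

/// Resolve a record back to its data.
pub fn read_account<'a>(log: &'a BlobLog, rec: &'a Record) -> &'a [u8] {
    match rec {
        Record::Inline(data) => data,
        Record::Blob { offset, len } => log.read(*offset, *len),
    }
}
```

The key/value-separation half of this is the standard trade: compaction write amplification drops for large values, at the cost of one extra read indirection for them.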
Read-only cache LRU contention is a problem. Alessandro suspects we could move to a sampled LRU that approximates LRU while reducing contention.
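The sampled-LRU idea can be sketched as follows (hypothetical code; Redis uses this style of approximate eviction). Instead of a global recency list, which every read must update under contention, each entry carries a last-access counter; eviction samples a handful of entries and removes the least recently used of the sample, so reads only touch their own entry.

```rust
use std::collections::HashMap;

/// Assumed sample size; Redis defaults to 5 for its approximated LRU.
const SAMPLE_SIZE: usize = 5;

/// Hypothetical sketch of a sampled (approximate) LRU cache.
pub struct SampledLru<V> {
    map: HashMap<u64, (V, u64)>, // key -> (value, last_access)
    clock: u64,                  // logical clock, bumped on access
}

impl<V> SampledLru<V> {
    pub fn new() -> Self {
        Self { map: HashMap::new(), clock: 0 }
    }

    pub fn get(&mut self, key: u64) -> Option<&V> {
        self.clock += 1;
        let clock = self.clock;
        match self.map.get_mut(&key) {
            Some((v, t)) => {
                *t = clock; // per-entry update, no shared recency list
                Some(v)
            }
            None => None,
        }
    }

    pub fn insert(&mut self, key: u64, value: V) {
        self.clock += 1;
        self.map.insert(key, (value, self.clock));
    }

    /// Evict the oldest of up to SAMPLE_SIZE sampled entries.
    /// (HashMap iteration order stands in for random sampling here.)
    pub fn evict_one(&mut self) -> Option<u64> {
        let victim = self
            .map
            .iter()
            .take(SAMPLE_SIZE)
            .min_by_key(|(_, (_, t))| *t)
            .map(|(k, _)| *k)?;
        self.map.remove(&victim);
        Some(victim)
    }
}
```

In a concurrent version, the per-entry timestamp would be an atomic and the map sharded, so a `get` never takes a cache-wide lock; that is the contention reduction being suggested.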