
Initial list of priorities


Maintenance

A pragmatic plan is needed for handling maintenance and releases. The things to consider:

  1. CI (Continuous Integration)
  2. CD (Continuous Deployment)
  3. HF (Hard Fork) coordination
  4. Security incidents

Adaptation for applied research and innovation

Insert a mechanism for maintaining a dual database layout, enabling experimentation with a different database (e.g. LMDB, the Lightning Memory-Mapped Database: https://github.com/LMDB/lmdb).
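A minimal sketch of what such a dual layout could look like, assuming a hypothetical backend-agnostic trait (none of these names are the actual OpenEthereum API):

```rust
/// Hypothetical backend-agnostic key-value interface; higher layers
/// (state, blockchain) would depend only on this trait, never on a
/// concrete database.
pub trait KeyValueDB: Send + Sync {
    fn get(&self, column: u32, key: &[u8]) -> Result<Option<Vec<u8>>, String>;
    fn put(&self, column: u32, key: &[u8], value: &[u8]) -> Result<(), String>;
    fn delete(&self, column: u32, key: &[u8]) -> Result<(), String>;
}

/// An LMDB-backed implementation would live behind the same trait as the
/// current RocksDB one, so both layouts can be maintained side by side
/// while experiments compare them.
pub struct LmdbBackend { /* LMDB environment and database handles */ }
pub struct RocksDbBackend { /* column families and options */ }

/// Client code holds the trait object, so swapping backends becomes a
/// construction-time choice rather than a code change.
pub struct Chain {
    pub db: std::sync::Arc<dyn KeyValueDB>,
}
```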

Alignment with the Stateless Ethereum Research and Development

Generation of conformance/consensus tests via retesteth (https://github.com/ethereum/retesteth) integration; such integration currently exists for go-ethereum and Aleth (the C++ client, which is likely to be decommissioned).

Burning part of the tx fees for a more predictable fee market

Experimental implementation of EIP-1559: https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1559.md or Skinny EIP-1559: https://ethereum-magicians.org/t/skinny-eip-1559/3738
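As a rough illustration, here is a minimal sketch of the base-fee adjustment at the core of EIP-1559, following the update rule described in the EIP (the base fee itself is burned rather than paid to the miner; the exact constants are part of an evolving proposal and may change):

```rust
// Constants as given in the EIP text; subject to change while the
// proposal is being refined.
const BASE_FEE_MAX_CHANGE_DENOMINATOR: u64 = 8;
const ELASTICITY_MULTIPLIER: u64 = 2;

/// Computes the next block's base fee from the parent block: the fee
/// rises when the parent used more gas than its target and falls when
/// it used less, bounded to a 1/8 change per block.
fn next_base_fee(parent_base_fee: u64, parent_gas_used: u64, parent_gas_limit: u64) -> u64 {
    let gas_target = parent_gas_limit / ELASTICITY_MULTIPLIER;
    if parent_gas_used > gas_target {
        let delta = parent_base_fee * (parent_gas_used - gas_target)
            / gas_target / BASE_FEE_MAX_CHANGE_DENOMINATOR;
        parent_base_fee + delta.max(1)
    } else if parent_gas_used < gas_target {
        let delta = parent_base_fee * (gas_target - parent_gas_used)
            / gas_target / BASE_FEE_MAX_CHANGE_DENOMINATOR;
        parent_base_fee.saturating_sub(delta)
    } else {
        parent_base_fee
    }
}
```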

Leadership in upcoming Ethereum changes

Integration (a Rust implementation already exists here: https://github.com/matter-labs/eip1962) and testing of EIP-1962 (EC arithmetic and pairings with runtime definition): https://eips.ethereum.org/EIPS/eip-1962
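A hedged sketch of how such a precompile could dispatch operations; the operation byte values and parameter layout below are illustrative rather than the EIP's exact ABI, and the arithmetic itself would be delegated to a backend such as the matter-labs crate:

```rust
/// Illustrative subset of the operations EIP-1962 defines.
enum EcOperation {
    G1Add,
    G1Mul,
    G1MultiExp,
    Pairing,
}

/// Hypothetical precompile entry point: the first byte selects the
/// operation, and the curve itself (modulus, coefficients, group order)
/// is decoded from the remaining call data at runtime, unlike the
/// existing BN254 precompiles, which hard-code one curve.
fn execute(input: &[u8]) -> Result<Vec<u8>, &'static str> {
    let (&op_byte, params) = input.split_first().ok_or("empty input")?;
    let op = match op_byte {
        0x01 => EcOperation::G1Add,
        0x02 => EcOperation::G1Mul,
        0x03 => EcOperation::G1MultiExp,
        0x07 => EcOperation::Pairing,
        _ => return Err("unknown operation"),
    };
    // A real implementation would parse `params` into curve parameters
    // and hand (op, curve, points) to the arithmetic backend.
    let _ = (op, params);
    Err("arithmetic backend not wired in this sketch")
}
```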

Separation of Secret Store into a component

The Secret Store lives in the same codebase, but it is not really an Ethereum-specific component. Work is under way to pull it out so that it is usable by anything, including the Ethereum client: https://wiki.parity.io/Secret-Store

Improvements to POSDAO

POSDAO is a DPOS (Delegated Proof of Stake) protocol for sidechains. POSDAO can run on AuRa (the first Proof of Authority algorithm built into OpenEthereum, used in the Kovan network) or HBBFT (Honey Badger Byzantine Fault Tolerance: https://medium.com/poa-network/poa-network-how-honey-badger-bft-consensus-works-4b16c0f1ff94). The AuRa part is mostly complete. The integration of POSDAO with HBBFT has not yet been proposed in the form of Pull Requests; it is work in progress. It is important to keep AuRa working.

Improvements in the snapshot sync

Currently, warp sync is the main mechanism. Warp sync is more efficient than "fast sync", but it has two main downsides. Firstly, the warp snapshots need to be pre-generated before being served. Secondly, the recipient of a warp snapshot can only verify its correctness after it has been downloaded in its entirety. In the short term, fast sync could be implemented if needed. In the longer term, a more efficient version can be implemented, for example Red Queen/Firehose, in collaboration with turbo-geth. This, however, requires non-trivial changes to the database (see "Adaptation for applied research and innovation").
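For context, a simplified sketch of the warp snapshot layout that produces the second downside: individual chunks can be integrity-checked against the manifest as they arrive, but whether the reassembled state actually matches the advertised root is only known once everything has been processed (field names loosely follow the warp format; this is not the exact wire structure):

```rust
/// Simplified warp manifest: the snapshot is pre-generated as a set of
/// chunks whose hashes are listed up front, plus the state root the
/// snapshot claims to reconstruct.
struct Manifest {
    state_root: [u8; 32],
    state_chunk_hashes: Vec<[u8; 32]>,
    block_chunk_hashes: Vec<[u8; 32]>,
}

/// Per-chunk integrity check: cheap and incremental...
fn chunk_listed(m: &Manifest, chunk_hash: &[u8; 32]) -> bool {
    m.state_chunk_hashes
        .iter()
        .chain(m.block_chunk_hashes.iter())
        .any(|h| h == chunk_hash)
}

/// ...but the end-to-end check is only possible after every chunk has
/// been downloaded and applied, which is the verification gap described
/// above.
fn fully_verified(m: &Manifest, rebuilt_state_root: &[u8; 32]) -> bool {
    rebuilt_state_root == &m.state_root
}
```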

Practical chain pruning

It is believed that GDPR (General Data Protection Regulation) will be one of the biggest issues for mainstream adoption of old and new chains. Practical pruning of the old chain data needs to be implemented to comply. We can look for techniques required to deal with the ever-growing size of the blockchain, in terms of three main components:

  • block headers
  • block bodies
  • transaction receipts

Unlike the State, these data are almost append-only (the only time they get rewritten is during chain reorganisations, which are relatively rare).
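A minimal sketch of what pruning these components could look like, assuming a hypothetical storage handle (all names here are illustrative, not an existing API):

```rust
/// Hypothetical wrapper over the chain database.
struct ChainDb {
    oldest_unpruned: u64,
}

impl ChainDb {
    fn delete_body(&mut self, _block: u64) { /* drop transactions and uncles */ }
    fn delete_receipts(&mut self, _block: u64) { /* drop receipts */ }
}

/// Hypothetical retention window: roughly 30 days of blocks at ~13s each.
const KEEP_RECENT_BLOCKS: u64 = 200_000;

/// Because this data is almost append-only, pruning is a simple sweep:
/// bodies and receipts older than the cutoff are deleted, while headers
/// are kept so the hash chain remains verifiable.
fn prune(db: &mut ChainDb, head: u64) {
    let cutoff = head.saturating_sub(KEEP_RECENT_BLOCKS);
    for n in db.oldest_unpruned..cutoff {
        db.delete_body(n);
        db.delete_receipts(n);
    }
    db.oldest_unpruned = db.oldest_unpruned.max(cutoff);
}
```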

The chain of block headers grows the slowest (in terms of its size), but it is necessary to download even for the light clients. There are two potential solutions to that:

  1. Regular checkpointing. Once in a while, let's say once a year (a cadence of once every 100,000 blocks has also been suggested), a new checkpoint hash is publicly agreed and hard-coded into the Ethereum implementation code. That way, light clients and others do not need to download all the headers; a minimal sketch of this idea appears below. This also matters for GDPR: under Article 12.3, an organisation has 30 days to state what action it will take on a legitimate erasure request, extendable up to 60 days depending on the complexity of the request, so pruning on an automatic 30-day cycle could make public blockchains GDPR-compliant.

  2. There could be a potential to generate a compact proof of header chain ancestry. This would allow clients not to download the entire chain of headers and verify PoW on every single header, but instead to use proof systems like SNARK or STARK to obtain a compact proof that a certain header is correct, in the sense that it has ancestry all the way to the genesis block (or some other “checkpoint” block), with all PoW correct along the way.

The sequence of block bodies grows fast, and it is expected to grow even faster after the “Istanbul” upgrade, which lowers the cost of including data in a block from 68 gas per byte to 16 gas per byte.
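A minimal sketch of the checkpointing idea from point 1 above; the block number and hash below are placeholders, not real checkpoint values:

```rust
/// Publicly agreed (block number, header hash) checkpoints, hard-coded
/// at release time. The entry below is a placeholder, not a real value.
const CHECKPOINTS: &[(u64, [u8; 32])] = &[
    (9_000_000, [0u8; 32]),
];

/// Header sync starts from the latest checkpoint instead of genesis, so
/// clients never need to download or verify headers older than it.
fn sync_start() -> (u64, [u8; 32]) {
    *CHECKPOINTS
        .last()
        .expect("at least one checkpoint is compiled in")
}
```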

The sequence of transaction receipts grows the fastest, but most users don’t require the entire sequence. Also, any receipt can be regenerated by re-executing the corresponding transaction in the context of the historical state. The history of state tends to be more compact to store than receipts, though re-generation of a receipt tends to be 10 times slower than simply fetching it from the database (data from tests on turbo-geth from September 2018).
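A short sketch of the resulting fetch-or-regenerate trade-off (hypothetical types and trait; the 10x figure comes from the turbo-geth tests mentioned above):

```rust
struct Receipt; // placeholder for a full receipt structure

/// Hypothetical source of receipts once pruning is in place.
trait ReceiptSource {
    /// Fast path: the receipt was kept in the database.
    fn fetch(&self, tx_hash: &[u8; 32]) -> Option<Receipt>;
    /// Slow path (roughly an order of magnitude slower): re-execute the
    /// transaction in the context of the historical state and rebuild
    /// the receipt.
    fn regenerate(&self, tx_hash: &[u8; 32]) -> Receipt;
}

/// Prefer the stored copy; fall back to re-execution if it was pruned.
fn receipt(src: &dyn ReceiptSource, tx_hash: &[u8; 32]) -> Receipt {
    src.fetch(tx_hash)
        .unwrap_or_else(|| src.regenerate(tx_hash))
}
```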