feat: L1Reader #1099
Conversation
Walkthrough

This pull request introduces a comprehensive refactoring of the Scroll rollup's Layer 1 (L1) interaction mechanisms. The changes primarily focus on creating a new `l1` package that centralizes reading L1 data behind a `Reader` struct.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant L1Reader
    participant ScrollChain
    participant L1MessageQueue
    Client->>L1Reader: Initialize with contract addresses
    L1Reader->>ScrollChain: Fetch latest finalized batch
    L1Reader->>L1MessageQueue: Get next unfinalized message index
    L1Reader->>ScrollChain: Fetch rollup events in block range
    L1Reader-->>Client: Return processed events
```
Actionable comments posted: 3
🔭 Outside diff range comments (1)
rollup/da_syncer/blob_client/beacon_node_client.go (1)
Line range hint 100-151: Prevent wrap-around when calculating the slot.

If `blockTime < c.genesisTime`, subtracting `c.genesisTime` from `blockTime` in an unsigned context can lead to wrap-around. Consider adding a boundary check:

```diff
+ if blockTime < c.genesisTime {
+     return nil, fmt.Errorf("invalid blockTime: must be >= genesisTime (got %d < %d)", blockTime, c.genesisTime)
+ }
```

This ensures the calculated slot remains valid and prevents downstream errors.
🧹 Nitpick comments (13)
common/heapmap.go (3)

9-15: Make initial capacity configurable.
Currently, the constructor hardcodes the `NewShrinkingMap` capacity to `1000`. If you anticipate different use cases with varying sizes, consider allowing the caller to customize it:

```diff
-func NewHeapMap[K comparable, T Comparable[T]](keyFromElement func(T) K) *HeapMap[K, T] {
-	return &HeapMap[K, T]{
-		h:              NewHeap[T](),
-		m:              NewShrinkingMap[K, *HeapElement[T]](1000),
-		keyFromElement: keyFromElement,
-	}
+func NewHeapMap[K comparable, T Comparable[T]](keyFromElement func(T) K, capacity int) *HeapMap[K, T] {
+	return &HeapMap[K, T]{
+		h:              NewHeap[T](),
+		m:              NewShrinkingMap[K, *HeapElement[T]](capacity),
+		keyFromElement: keyFromElement,
+	}
 }
```
21-32: Clarify behavior on duplicate keys.
When the key already exists, `Push` silently returns `false` instead of updating the element or returning an error. Confirm that this behavior aligns with your data structure requirements for handling duplicates; a test sketch follows.
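To make the duplicate-key semantics concrete, here is a minimal test sketch. `batchIndex` is a hypothetical element type, and the `Comparable[T]`/`CompareTo(T) int` shape is assumed rather than taken from the PR:

```go
package common

import "testing"

// batchIndex is a hypothetical element type used only for this sketch.
type batchIndex uint64

// CompareTo satisfies the assumed Comparable[T] constraint.
func (b batchIndex) CompareTo(other batchIndex) int {
	switch {
	case b < other:
		return -1
	case b > other:
		return 1
	default:
		return 0
	}
}

func TestPushDuplicateKey(t *testing.T) {
	hm := NewHeapMap[uint64, batchIndex](func(b batchIndex) uint64 { return uint64(b) })
	if !hm.Push(batchIndex(5)) {
		t.Fatal("first Push of a new key should return true")
	}
	// A second Push with the same key is silently rejected.
	if hm.Push(batchIndex(5)) {
		t.Fatal("duplicate Push should return false")
	}
}
```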
59-69: Unify removal code to avoid duplication.
`RemoveByKey` and `RemoveByElement` share similar logic. You could consolidate them to reduce duplication:

```diff
 func (hm *HeapMap[K, T]) remove(key K) bool {
 	heapElement, exists := hm.m.Get(key)
 	if !exists {
 		return false
 	}
 	hm.h.Remove(heapElement)
 	hm.m.Delete(key)
 	return true
 }

-func (hm *HeapMap[K, T]) RemoveByElement(element T) bool {
-	key := hm.keyFromElement(element)
-	return hm.remove(key)
-}
-
-func (hm *HeapMap[K, T]) RemoveByKey(key K) bool {
-	return hm.remove(key)
-}
+func (hm *HeapMap[K, T]) RemoveByElement(element T) bool {
+	return hm.remove(hm.keyFromElement(element))
+}
+
+func (hm *HeapMap[K, T]) RemoveByKey(key K) bool {
+	return hm.remove(key)
+}
```

common/shrinkingmap.go (1)
50-64: Pre-allocate slices for improved performance.
When gathering keys or values, you already know the final slice size. Consider using `make([]K, 0, len(s.m))` or `make([]V, 0, len(s.m))` to reduce allocations:

```diff
 func (s *ShrinkingMap[K, V]) Keys() []K {
-	var keys []K
+	keys := make([]K, 0, len(s.m))
 	for k := range s.m {
 		keys = append(keys, k)
 	}
 	return keys
 }

 func (s *ShrinkingMap[K, V]) Values() []V {
-	var values []V
+	values := make([]V, 0, len(s.m))
 	for _, v := range s.m {
 		values = append(values, v)
 	}
 	return values
 }
```

rollup/da_syncer/da/calldata_blob_source.go (2)
18-18: Hardcoded block range.
Using 500 as a default range is acceptable, but consider future configurability to handle varying network conditions.

53-53: Calculating next fetch boundary.
Incrementing `ds.l1Height` by a predefined range is straightforward; verify that the logic handles edge cases near chain heads, as sketched below.
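A minimal sketch of how the boundary could be clamped near the chain head; `nextFetchWindow` and its parameters are illustrative, not the PR's code:

```go
package da

// nextFetchWindow is an illustrative helper showing one way to clamp the next
// fetch window so it never runs past the finalized head.
func nextFetchWindow(current, blockRange, finalized uint64) (from, to uint64) {
	from = current
	to = current + blockRange
	if to > finalized {
		to = finalized // never query beyond the finalized chain head
	}
	return from, to
}
```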
rollup/l1/abi.go (5)

25-34: Document the metadata usage.
Adding clear documentation for `ScrollChainMetaData` and `L1MessageQueueMetaDataManual` helps maintainers quickly understand their usage.

60-76: Aggregate event definitions.
The definitions for CommitBatch events look good. Consider adding doc comments to clarify usage, especially if these structures are used across multiple packages.

118-142: Consider code reuse.
`RevertBatchEvent` has methods that directly mirror `CommitBatchEvent`. If usage expands, consider factoring out common logic into a helper.

161-192: Check potential expansions.
These getters are consistent. If finalizing logic changes, consider how to avoid updating all event types individually. Potential extension point.

225-237: Unified code path.
`newCommitBatchArgsFromCommitBatchWithProof` mostly duplicates logic from `newCommitBatchArgs`. If possible, unify them so you only maintain one code path.
rollup/l1/reader_test.go (1)

11-125: Expand test coverage with edge cases.
The table-driven tests are well-structured and comprehensive for typical scenarios. Consider adding additional test cases, such as:

- `fromBlock > toBlock` (verifying no calls or an immediate return).
- `fromBlock == toBlock` (verifying a single-block batch).

These edge cases help ensure robust, error-proof behavior for extreme or out-of-range inputs.
rollup/da_syncer/blob_client/beacon_node_client.go (1)

Line range hint 29-66: Use a context-aware client to handle cancellations and timeouts.
The constructor currently performs direct `http.Get` requests, which disregard the passed `context.Context`. Consider creating an HTTP request attached to the context, or reusing `http.Client.Do(request.WithContext(ctx))`. This practice aligns better with common patterns for cancelable or timeout-bound operations; a sketch follows.
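For reference, a minimal sketch of a context-aware GET using only standard-library calls; the function name and package are illustrative:

```go
package blobclient

import (
	"context"
	"io"
	"net/http"
)

// fetchWithContext performs a GET bounded by the caller's ctx, so cancellation
// and timeouts propagate to the underlying HTTP request.
func fetchWithContext(ctx context.Context, url string) ([]byte, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}
```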
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
- common/heapmap.go (1 hunks)
- common/shrinkingmap.go (1 hunks)
- eth/backend.go (2 hunks)
- rollup/da_syncer/blob_client/beacon_node_client.go (3 hunks)
- rollup/da_syncer/blob_client/blob_client.go (2 hunks)
- rollup/da_syncer/blob_client/blob_scan_client.go (1 hunks)
- rollup/da_syncer/blob_client/block_native_client.go (1 hunks)
- rollup/da_syncer/da/calldata_blob_source.go (1 hunks)
- rollup/da_syncer/da/commitV0.go (3 hunks)
- rollup/da_syncer/da/commitV1.go (3 hunks)
- rollup/da_syncer/data_source.go (2 hunks)
- rollup/da_syncer/syncing_pipeline.go (3 hunks)
- rollup/l1/abi.go (1 hunks)
- rollup/l1/abi_test.go (1 hunks)
- rollup/l1/l1msg_bindings.go (1 hunks)
- rollup/l1/reader.go (1 hunks)
- rollup/l1/reader_test.go (1 hunks)
- rollup/l1/types.go (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- rollup/l1/l1msg_bindings.go
🔇 Additional comments (81)
common/heapmap.go (9)
1-2: No issues with package declaration.
The package name "common" is clear and concise.

3-7: Consider concurrency safety or usage clarifications.
The `HeapMap` structure doesn't implement any locking. If concurrent access is intended, you might need to safeguard it with synchronization (see the sketch below), or otherwise document it as single-threaded only.
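If concurrent use is ever needed, here is a minimal sketch of a locked wrapper; it assumes `Push` returns `bool` and `Pop` returns `T`, which may differ from the actual API:

```go
package common

import "sync"

// SafeHeapMap is a hypothetical wrapper showing one way to add
// synchronization around HeapMap if concurrent access is required.
type SafeHeapMap[K comparable, T Comparable[T]] struct {
	mu sync.Mutex
	hm *HeapMap[K, T]
}

// Push locks around the underlying HeapMap.Push.
func (s *SafeHeapMap[K, T]) Push(element T) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.hm.Push(element)
}

// Pop locks around the underlying HeapMap.Pop.
func (s *SafeHeapMap[K, T]) Pop() T {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.hm.Pop()
}
```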
17-19: Length method is straightforward and clear.
No issues found.

34-40: Handle empty heap scenario for Pop.
If the heap is empty, `Pop()` may return an invalid element or panic, depending on `Heap[T]`'s implementation. Consider adding a protective check or clarifying usage.

42-44: Handle empty heap scenario for Peek.
Similar to `Pop`, if no elements exist, `Peek()` might cause unexpected behavior. Ensure you handle or document empty-heap usage.

46-57: Element-based removal looks good.
The logic cleanly prevents errors by checking the map first and ensures consistent removal in both the heap and map.

71-74: Clear method re-initializes map effectively.
Using a fresh `ShrinkingMap` is a solid approach to ensure any deleted-key counters are reset.

76-86: Exposure of all keys and elements.
`Keys()` and `Elements()` provide convenient snapshot functionality. The order is not guaranteed, which is fine for typical map usage.

88-90: `Has` method is straightforward and concise.
The name conveys its purpose. No issues noted.
rollup/da_syncer/da/commitV1.go (8)

11-11: Consistent import.
This import is required for the revised `l1.Reader` usage and is aligned with the new dependency structure.

23-26: Function signature updated to use `l1.Reader` and `commitEvent`.
Switching from separate parameters to an event-driven approach enhances maintainability and clarity.

33-33: Enhanced error context.
Including the batch index from `commitEvent.BatchIndex()` in the error message aids debugging.

36-36: Direct call to `FetchTxBlobHash`.
Utilizing the dedicated API from `l1Reader` centralizes the logic for retrieving transaction blob hashes.

41-44: Retrieving block header with `l1Reader`.
The error handling is concise, and referencing the block number via `commitEvent.BlockNumber()` keeps the logic well-scoped to the event context.

45-45: Context-based blob retrieval.
Fetching blobs via `GetBlobByVersionedHashAndBlockTime` ensures timing consistency with block data.

50-50: Detailed validation error.
Including `commitEvent.BatchIndex().Uint64()` and `versionedHash` in the message ensures critical context for troubleshooting missing blobs.

73-73: Creating V0 commit batch with updated parameters.
Passing the event-driven parameters (`commitEvent.BlockNumber()`, etc.) provides a coherent flow from the new approach.
rollup/da_syncer/da/calldata_blob_source.go (13)

14-14: New import usage.
Allows referencing event types and the `l1.Reader` struct, consistent with the updated design.

26-31: New fields in CalldataBlobSource.
Storing `l1Reader` and `ctx` directly in the struct fosters simpler, more direct access to external dependencies and context.

36-47: Revised constructor.
Adopting the `l1Reader` reference instead of the older client interface nicely encapsulates L1 interactions under the new architecture.

59-59: Fetching finalized block number.
Using `ds.l1Reader.GetLatestFinalizedBlockNumber()` helps ensure the pipeline only processes confirmed blocks.

67-67: Boundary check.
Returning `ErrSourceExhausted` early if `l1Height` surpasses the target block is an elegant exit condition.

71-71: Retrieving rollup events.
`ds.l1Reader.FetchRollupEventsInRange(ds.l1Height, to)` neatly offloads event retrieval while staying consistent with the new L1 code.

73-73: Graceful error handling.
Raising a `TemporaryError` ensures the process can retry if the event fetch fails.

75-75: Consolidated DA processing.
Delegating to `processRollupEventsToDA` keeps the data flow from L1 events to DA entries straightforward.

77-77: Uniform error wrapping.
Maintaining consistent error handling patterns facilitates smoother backoff/retry in upstream components; a generic sketch follows.
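A generic sketch of the backoff/retry loop that a temporary error enables upstream; names and signature are illustrative, not the PR's code:

```go
package da

import (
	"context"
	"time"
)

// retryTemporary retries fn up to attempts times, doubling the backoff
// between tries and respecting context cancellation.
func retryTemporary(ctx context.Context, attempts int, backoff time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(backoff):
			backoff *= 2 // exponential backoff between retries
		}
	}
	return err
}
```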
80-80: Incrementing `ds.l1Height`.
Advancing the local L1 height after consumption helps ensure the next query won't repeat blocks.

85-85: Exposing updated height.
Returning the latest `l1Height` offers a clear checkpoint to external callers.

88-111: Refined processRollupEventsToDA.
Switching on the event types, extracting `commitEvent` if applicable, and returning specialized entries is both robust and cleanly partitioned.

119-141: `getCommitBatchDA` with event-driven input.
The function references `commitEvent` directly, removing redundant parameters. This approach is consistent with the new event-based design.
rollup/da_syncer/da/commitV0.go (4)

13-13: Import alignment.
Including `github.com/scroll-tech/go-ethereum/rollup/l1` is essential for the event-driven changes.

29-29: Eliminating duplicate fields.
Leveraging the `l1.CommitBatchEvent` struct reduces parameter clutter and keeps everything tied to the event source.

36-36: Concise error message.
Tying the error directly to `commitEvent.BatchIndex()` ensures that logs accurately identify problematic batches.

39-39: Use of `NewCommitBatchDAV0WithChunks`.
Constructing the V0 batch object from a single call consolidates logic under one helper function.
rollup/da_syncer/syncing_pipeline.go (5)

18-18: Import for rollup/l1.
Establishes the pipeline's connection to `l1.Client` and the new `Reader` interface.

44-48: `NewSyncingPipeline` refactor.
Replacing the previous client with `l1.Client` and constructing an `l1.Reader` clarifies the boundary between L1 data retrieval and pipeline logic.

50-50: Error context.
Providing a direct message on `l1.Reader` creation failure aids in diagnosing setup issues.

55-55: Conditional beacon node client.
Adding this client only if an endpoint is provided is a practical approach to optional DA sources.

72-72: Data source factory.
Passing `l1Reader` into `NewDataSourceFactory` unifies L1 reads under a consistent framework throughout the pipeline.
rollup/l1/reader.go (18)
1-2: New `l1` package.
Self-contained design for reading L1 data, reinforcing modularity and separation of concerns.

3-15: Comprehensive imports.
All relevant packages for ABI interactions, transaction handling, and logging are properly included.

17-25: Constant definitions.
Provides event names and fallback block-range constants in a single place, preventing magic numbers scattered throughout the code.

27-37: `Reader` struct.
Encapsulating context, config, and ABIs inside a dedicated struct centralizes L1 interactions neatly.

39-44: `Config` struct.
Collecting addresses into a structured type simplifies passing them around and clarifies usage.

45-68: `NewReader` constructor.
Validates nonzero addresses and initializes critical ABIs, helping avoid misconfiguration at runtime.

97-117: `LatestFinalizedBatch`.
Fetches the last finalized batch index by invoking an ABI-packed contract call, consistently mirroring patterns used elsewhere.

119-129: `GetLatestFinalizedBlockNumber`.
Uses the special `rpc.FinalizedBlockNumber` to retrieve finality info, while verifying that block numbers fit within an `int64`.

131-134: `FetchBlockHeaderByNumber`.
Lightweight wrapper around the client call, ensuring a consistent pattern for block header retrieval.

136-143: `FetchTxData`.
Acquires the raw data payload from the transaction, bridging from a log reference (`txHash`, `blockHash`) to the actual bytecode.

145-156: `FetchTxBlobHash`.
Extracts the blob hash from the transaction and returns an error if none are present, guaranteeing proper usage of blob-based TX data.

158-188: `FetchRollupEventsInRange`.
Leverages a batched approach for filter logs, handling large block ranges without single-query bottlenecks.

190-231: `FetchRollupEventsInRangeWithCallback`.
Adds a callback-based version for more granular control, processing each batch of events as soon as they're retrieved.

233-291: `processLogsToRollupEvents`.
Comprehensively maps log topics to typed events (commit, revert, or finalize), returning descriptive errors if logs can't be unpacked.

293-315: `queryInBatches`.
Implements a generic batching utility that gracefully stops on errors or if the callback signals to discontinue; a sketch of the pattern follows.
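The pattern, as a self-contained sketch (the PR's actual signature may differ):

```go
package l1

// queryInBatchesSketch walks [from, to] in fixed-size windows, stopping on
// error or when the callback returns false. Illustrative, not the PR's code.
func queryInBatchesSketch(from, to, batchSize uint64, queryFunc func(from, to uint64) (bool, error)) error {
	if batchSize == 0 {
		batchSize = 1 // guard against an infinite loop
	}
	for start := from; start <= to; start += batchSize {
		end := start + batchSize - 1
		if end > to {
			end = to // clamp the final window to the requested range
		}
		keepGoing, err := queryFunc(start, end)
		if err != nil {
			return err
		}
		if !keepGoing {
			return nil
		}
	}
	return nil
}
```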
317-342: `fetchTx`.
Safely falls back to block retrieval if direct transaction fetch fails, covering unindexed or reorg scenarios.

344-380: `FetchCommitTxData`.
Method ID parsing and multiple `commitBatch` method variants are handled carefully, ensuring the function is extensible for future changes.

70-95: `FinalizedL1MessageQueueIndex`.
Effectively retrieves the unfinalized queue index, adjusting for an off-by-one scenario when zero is returned.
Run the following script to locate all references in the project and confirm their usage:
✅ Verification successful
Based on the analysis of the codebase, I can now provide the final response:
`FinalizedL1MessageQueueIndex` correctly implements L1 message queue index retrieval.

The function is correctly implemented to retrieve the finalized L1 message queue index by:

- Calling the `nextUnfinalizedQueueIndex` method on the L1MessageQueue contract
- Properly handling the zero case (returning 0)
- Adjusting the result by subtracting 1 from non-zero values to get the last finalized index

The function is self-contained, and its only usage is through the Reader interface, which is appropriately used by other components in the codebase for L1 message synchronization. A sketch of the adjustment follows.
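The off-by-one adjustment, as a standalone sketch (the function name is illustrative):

```go
package l1

// finalizedIndexFromNext captures the adjustment described above:
// nextUnfinalizedQueueIndex points one past the last finalized message,
// so subtract 1 unless nothing has been finalized yet.
func finalizedIndexFromNext(nextUnfinalized uint64) uint64 {
	if nextUnfinalized == 0 {
		return 0 // no messages finalized yet
	}
	return nextUnfinalized - 1
}
```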
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
rg -A 5 $'FinalizedL1MessageQueueIndex'
```

Length of output: 409

Script:

```shell
#!/bin/bash
# Search for the interface or struct definition of Reader
ast-grep --pattern 'type Reader struct {
  $$$
}'

# Look for nextUnfinalizedQueueIndex usage and definition
rg -A 3 'nextUnfinalizedQueueIndex'

# Search for L1MessageQueue references
rg -A 3 'L1MessageQueue'
```

Length of output: 55205

Script:

```shell
#!/bin/bash
# Look for references to FinalizedL1MessageQueueIndex in other files
rg -l "FinalizedL1MessageQueueIndex"

# Search for files that might use this function
ast-grep --pattern 'r.FinalizedL1MessageQueueIndex'

# Look for related functions in the Reader struct
ast-grep --pattern 'func (r *Reader) $$$($$$) $$$'
```

Length of output: 16041
rollup/l1/abi.go (11)
1-11: Ensure error handling when importing packages.
These lines look appropriate for the package definition and imports. However, watch for potential import conflicts in the future. Everything is fine for now.

35-48: Constant clarity.
The constants are descriptive. Ensure they are referenced correctly in your codebase to avoid magic numbers.

50-58: Interface usage.
The RollupEvent interface is straightforward and appears consistent. Confirm that each event type implements these methods thoroughly.

77-104: Validate ordering in CompareTo.
The `CompareTo` method on line 101 is a simple wrapper around `Cmp`. This is fine, but ensure that all places using `CompareTo` handle negative, zero, and positive results consistently.

105-117: Revert events.
The struct naming and field usage are consistent with the commit event. The separation between unpacked and final event types is clean.

143-148: Finalize batch event (unpacked) structure.
Looks good for capturing minimal event data. No issues here.

150-160: Finalize batch event struct.
Similar to commit and revert events, ensure consistent usage across the codebase.

193-210: Validate event ID before data unpack.
`UnpackLog` is well-structured. It checks event IDs and unpacks data carefully. Good job.

212-218: Algebraic naming.
`CommitBatchArgs` is an excellent name clarifying usage. The fields are concise. Overall, well done.

219-223: Consider top-level usage.
`newCommitBatchArgs` is convenient. Ensure you handle a possible mismatch in parameter arrays.

239-245: Separation of concerns.
`commitBatchWithBlobProofArgs` is well-defined. Just ensure you test scenarios with invalid proof data.
rollup/l1/types.go (2)
1-10: Package structuring.
The package name `l1` is consistent with the file location. Imports are minimal and essential. Approved.

12-22: Interface completeness.
The `Client` interface covers essential Ethereum client functionality. It's broad but looks apt for an L1 interface. Ensure you test each method thoroughly, especially subscriptions. An illustrative subset is sketched below.
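For orientation, an illustrative subset of the methods such a client interface typically covers, mirroring standard `ethclient` signatures; the PR's actual method set may differ:

```go
package l1

import (
	"context"
	"math/big"

	ethereum "github.com/scroll-tech/go-ethereum"
	"github.com/scroll-tech/go-ethereum/common"
	"github.com/scroll-tech/go-ethereum/core/types"
)

// exampleClient is a hypothetical subset of an L1 client interface; it is
// not the PR's Client definition, only a sketch of the typical surface.
type exampleClient interface {
	BlockNumber(ctx context.Context) (uint64, error)
	HeaderByNumber(ctx context.Context, number *big.Int) (*types.Header, error)
	FilterLogs(ctx context.Context, q ethereum.FilterQuery) ([]types.Log, error)
	TransactionByHash(ctx context.Context, txHash common.Hash) (tx *types.Transaction, isPending bool, err error)
	CallContract(ctx context.Context, msg ethereum.CallMsg, blockNumber *big.Int) ([]byte, error)
}
```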
rollup/da_syncer/data_source.go (4)
11-11: Import usage.
Importing `l1` is good. Confirm that no cyclical dependencies appear.

22-22: Renaming clarity.
Renaming `l1Client` to `l1Reader` clarifies its role. Good alignment with the refactor.

27-31: Constructor coherence.
`NewDataSourceFactory` clearly injects `l1Reader` and the rest. Well-structured for dependency injection.

38-38: Factory method.
Using `ds.l1Reader` in `OpenDataSource` keeps the logic consistent. If you expand to multiple data sources, keep them consistent with the new naming.
rollup/da_syncer/blob_client/blob_client.go (2)
20-20: Method signature update.
`GetBlobByVersionedHashAndBlockTime` is more expressive than block number usage. Good domain naming, assuming time-based retrieval is valid.

35-41: Round-robin logic.
`GetBlobByVersionedHashAndBlockTime` tries each client, rotating if one fails. This is a neat fallback approach; a sketch of the rotation pattern follows. Ensure you have adequate logging for each failure.
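A self-contained sketch of the rotation pattern; the `fetcher` interface and `[]byte` return type are simplifications, since the real interface returns a blob type:

```go
package blobclient

import (
	"context"
	"errors"
	"fmt"
)

// fetcher abstracts a single blob client for this sketch only.
type fetcher interface {
	Fetch(ctx context.Context, versionedHash string, blockTime uint64) ([]byte, error)
}

// roundRobin tries each client in turn, remembering the last healthy position.
type roundRobin struct {
	clients []fetcher
	next    int
}

func (rr *roundRobin) Fetch(ctx context.Context, versionedHash string, blockTime uint64) ([]byte, error) {
	if len(rr.clients) == 0 {
		return nil, errors.New("no blob clients configured")
	}
	var lastErr error
	for i := 0; i < len(rr.clients); i++ {
		idx := (rr.next + i) % len(rr.clients)
		blob, err := rr.clients[idx].Fetch(ctx, versionedHash, blockTime)
		if err == nil {
			rr.next = idx // keep using the client that worked
			return blob, nil
		}
		lastErr = fmt.Errorf("blob client %d failed: %w", idx, err)
	}
	return nil, lastErr
}
```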
rollup/l1/abi_test.go (3)
1-12: Test structure.
Package `l1` with common testing packages. Good separation.

14-18: Event signature tests.
Verifies event IDs thoroughly. This guards against ABI drift, which is excellent.

20-79: UnpackLog tests.
Comprehensive tests for each event. Potential expansions exist (e.g., invalid data). Overall, well executed.
eth/backend.go (2)
60-60: Use of new `l1` import looks appropriate.
The addition of the import statement for `rollup/l1` seamlessly aligns with the overall refactoring, ensuring that any reliance on the new `l1.Client` interface is resolved.

113-113: Validate the change to the `l1Client` parameter.
Switching from the old `sync_service.EthClient` to the new `l1.Client` aligns well with the broader refactor. Confirm that other callsites or constructors referencing the `New` function also adopt the new interface.

Run the following script to ensure references to `sync_service.EthClient` have been transitioned correctly:
✅ Verification successful
The change to the `l1Client` parameter in eth.New is properly transitioned.

Based on the verification results:

- The old `sync_service.EthClient` is now only used within the rollup sync service components (`rollup_sync_service/rollup_sync_service.go` and `rollup_sync_service/l1client.go`).
- All callsites to `eth.New` have been updated to handle the new `l1.Client` parameter, with most calls passing `nil` for non-rollup scenarios.
- The only production callsite that passes a non-nil client is in `cmd/utils/flags.go`, which correctly uses the new interface.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Verify that references to `sync_service.EthClient` are removed where appropriate
rg "sync_service.EthClient" -A 3 -B 3
```

Length of output: 2234

Script:

```shell
#!/bin/bash
# Check for any remaining imports of sync_service.EthClient
rg "\".*sync_service\"" --type go

# Find all calls to eth.New to verify parameter types
ast-grep --pattern 'New($$$)'
```

Length of output: 10110

Script:

```shell
#!/bin/bash
# Find direct calls to eth.New to verify parameter types
rg "eth\.New\(" -A 2 -B 2
```

Length of output: 3672
1. Purpose or design rationale of this PR

This PR implements L1Reader: a streamlined way to read L1 data without worrying about the type of events and logs. It also utilizes the L1Reader in the L1 follower mode syncing pipeline to greatly simplify the code.

Replaces #1018, as this PR is based on top of the `develop` branch.

2. PR title

Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:

3. Deployment tag versioning

Has the version in `params/version.go` been updated?

4. Breaking change label

Does this PR have the `breaking-change` label?

Summary by CodeRabbit
New Features

- `HeapMap` data structure for efficient storage and retrieval of elements.
- New methods on `ShrinkingMap` for retrieving keys and values.
- New `l1` package with functionalities for interacting with the ScrollChain and L1MessageQueue contracts.
- `Reader` struct for fetching and processing rollup events from the L1 blockchain.

Method Updates

- Updated method signatures and logic to use the new `l1` package.

Tests