feat(relayer): adapt to CodecV7 for EuclidV2 #1583
base: develop
Conversation
…submit multiple batches in a single transaction
Walkthrough

The changes span build configuration updates, ABI and event signature revisions, enhanced error handling, and new configuration options. Docker build commands now include an explicit platform flag, and the base images have been updated. Several functions in the fetcher and event parser now integrate blob client creation with improved error management. ABI definitions, tests, and utility methods were modified to support versioned events and additional parameters. In addition, new configuration entries, database migration scripts, and rollup logic enhancements were introduced, along with a version bump.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Fetcher App
    participant L1F as NewL1MessageFetcher
    participant Blob as BlobClient
    participant Logic as L1FetcherLogic
    participant Parser as L1EventParser
    App->>L1F: Initialize L1MessageFetcher
    L1F->>Blob: Create BlobClient instance
    Blob-->>L1F: Return BlobClient / Error
    L1F->>Logic: Instantiate L1FetcherLogic with BlobClient
    Logic->>Parser: Pass BlobClient to L1EventParser
    Parser->>Blob: Call getBatchBlockRangeFromBlob when needed
```
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (1.62.2): level=warning msg="[runner] Can't run linter goanalysis_metalinter: buildir: failed to load package zstd: could not load export data: no export data for "github.com/scroll-tech/da-codec/encoding/zstd""
…ntextID logic to support multiple batches by concatenating batch hashes
	return
}

dbParentBatch, err = r.batchOrm.GetBatchByIndex(r.ctx, dbBatch.Index-1)
Most of these batches are already in memory (dbBatches).
Changed to use dbBatches except for i==0 in 5a479c3.
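The pattern agreed in this thread can be sketched as follows. This is a simplified illustration, not the relayer's actual code: `Batch`, `parentOf`, and the `fetchFromDB` callback are hypothetical stand-ins for the ORM types and `GetBatchByIndex` call; the point is that only the first batch in the slice needs a database lookup.

```go
package main

import "fmt"

// Batch is a simplified stand-in for the ORM batch record.
type Batch struct{ Index uint64 }

// parentOf returns the parent of dbBatches[i]: for i > 0 the parent is
// already in memory, so only the first batch needs a database lookup.
// fetchFromDB is an assumed callback, not the real ORM call.
func parentOf(dbBatches []*Batch, i int, fetchFromDB func(index uint64) (*Batch, error)) (*Batch, error) {
	if i > 0 {
		return dbBatches[i-1], nil
	}
	return fetchFromDB(dbBatches[0].Index - 1)
}

func main() {
	batches := []*Batch{{Index: 10}, {Index: 11}, {Index: 12}}
	dbCalls := 0
	fetch := func(index uint64) (*Batch, error) {
		dbCalls++
		return &Batch{Index: index}, nil
	}
	for i := range batches {
		p, _ := parentOf(batches, i, fetch)
		fmt.Println("batch", batches[i].Index, "parent", p.Index)
	}
	fmt.Println("db calls:", dbCalls) // only the i==0 lookup hits the database
}
```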
r.metrics.rollupL2RelayerCommitBlockHeight.Set(float64(maxBlockHeight))
r.metrics.rollupL2RelayerCommitThroughput.Add(float64(totalGasUsed))
r.metrics.rollupL2RelayerProcessPendingBatchSuccessTotal.Add(float64(len(batchesToSubmit)))
Might also want to monitor the time series of len(batchesToSubmit) (to see the average submission size).
done in 5a479c3
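In production this would be a Prometheus histogram or summary observing len(batchesToSubmit); the stdlib-only sketch below (all names hypothetical) just shows the arithmetic behind the "average submission size" the reviewer asks for.

```go
package main

import "fmt"

// submissionStats is a stdlib-only sketch of tracking the average number of
// batches per commit transaction. In real code this would typically be a
// Prometheus histogram observing len(batchesToSubmit) on every submission.
type submissionStats struct {
	count int // number of commit transactions observed
	total int // total batches across all observed transactions
}

// Observe records one commit transaction carrying `batches` batches.
func (s *submissionStats) Observe(batches int) {
	s.count++
	s.total += batches
}

// Average returns the mean number of batches per transaction so far.
func (s *submissionStats) Average() float64 {
	if s.count == 0 {
		return 0
	}
	return float64(s.total) / float64(s.count)
}

func main() {
	var s submissionStats
	for _, n := range []int{1, 3, 6, 2} {
		s.Observe(n)
	}
	fmt.Println(s.Average()) // 3
}
```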
Codecov Report

Attention: Patch coverage is
Additional details and impacted files:

@@           Coverage Diff            @@
## develop #1583 +/- ##
===========================================
- Coverage 42.12% 41.28% -0.85%
===========================================
Files 222 222
Lines 17739 18225 +486
===========================================
+ Hits 7473 7524 +51
- Misses 9557 9975 +418
- Partials 709 726 +17
Flags with carried forward coverage won't be shown.
Force-pushed from f277483 to e324f2a.
Force-pushed from e324f2a to f4e17bc.
Actionable comments posted: 5
🔭 Outside diff range comments (3)
build/dockerfiles/coordinator-api.Dockerfile (1)

Lines 18-19: ⚠️ Potential issue: Inconsistent base image versions detected.
The libzkp build stage (lines 1-17) uses the updated Go 1.22 image (the chef stage), yet the dependencies download stage (line 19) still uses go-1.21-rust-nightly-2023-12-03. To maintain consistency and reduce potential compatibility issues, please update the base image in the latter stage to Go 1.22.

coordinator/internal/orm/chunk.go (2)

Lines 140-141: ⚠️ Potential issue: Method visibility should be updated to match usage patterns.
The method getLatestChunk is currently private (unexported), but according to the PR summary it should be renamed to GetLatestChunk to make it exported. This change is needed to maintain API consistency as part of the CodecV7 adaptation.

-func (o *Chunk) getLatestChunk(ctx context.Context) (*Chunk, error) {
+func (o *Chunk) GetLatestChunk(ctx context.Context) (*Chunk, error) {

Line 235: ⚠️ Potential issue: Update the method call to use the exported version.
This call to getLatestChunk should be updated to use GetLatestChunk if the method is renamed as suggested in the API changes.

-parentChunk, err := o.getLatestChunk(ctx)
+parentChunk, err := o.GetLatestChunk(ctx)
🧹 Nitpick comments (18)
build/dockerfiles/bridgehistoryapi-db-cli.Dockerfile (1)

Line 17: CGO_LDFLAGS inconsistency.
Unlike the fetcher and API Dockerfiles, which set CGO_LDFLAGS to "-Wl,--no-as-needed -ldl", this file uses only "-ldl". Please verify whether this difference is intentional for the db_cli build or whether the flag should be standardized for consistency.

bridge-history-api/internal/utils/utils_test.go (1)

Lines 50-61: Validates the commitBatches flow with explicit version handling.
The new test confirms that calls to the function without chunk data return start and finish as zero, matching the logic for "commitBatches". Consider adding a scenario that tests actual chunk data under "commitBatches" if future requirements introduce such usage.

rollup/internal/orm/chunk.go (1)

Lines 33-34: Ensure consistent handling of queue hash strings.
Storing the L1 message queue hash as a string is consistent with the other hash fields in this struct. However, if you plan to enforce a canonical encoding (e.g., always including the "0x" prefix and ensuring fixed length), consider validating these fields before saving them to the database.
rollup/internal/controller/sender/sender_test.go (1)

Lines 917-930: Ensure test determinism vs. randomness.
randBlobs helps simulate real-world usage. However, random tests risk flakiness if specific outcomes are required. If determinism is needed for reproducible tests, consider seeding the random generator or using fixed data in certain tests.

rollup/internal/controller/watcher/bundle_proposer.go (1)

Line 136: Confirm boundary checks for batch retrieval.
Using GetCommittedBatchesGEIndexGECodecVersion with maxBatchesThisBundle is sensible. Consider verifying that extreme limits (e.g., zero) are handled gracefully, though this might be unlikely given the implementation.

rollup/tests/rollup_test.go (2)
Lines 425-429: TODO items should be addressed before merging.
The commented TODO regarding mock bridge contract ABI updates should be addressed to ensure complete test coverage of the finalization flow. Would you like me to help implement the missing mock bridge contract ABI updates to support the new methods for testing?

Lines 386-399: Consider using proper transaction mocking.
Instead of simulating batch commits with hardcoded transaction hashes, consider extending the tests to use proper transaction submission through the mock contracts for more realistic testing.
rollup/internal/controller/relayer/l2_relayer_metrics.go (1)
Lines 43-46: Consider adding alerting thresholds for this new metric.
Since this metric tracks a critical aspect of the new functionality, consider setting up alerting thresholds to detect when batches per transaction fall outside expected ranges.
rollup/internal/controller/sender/sender.go (1)
Lines 173-200: Ensure consistent handling of empty blob slices.
Currently, the code checks "blobs != nil" before calling makeSidecar(blobs). However, an empty slice ([]*kzg4844.Blob{}) is not nil but will still trigger a call to makeSidecar, resulting in an immediate error. This behavior might be intentional, but if your intention is to skip sidecar creation for an empty slice, consider checking len(blobs) > 0 as well.

-if blobs != nil {
+if blobs != nil && len(blobs) > 0 {
 	sidecar, err = makeSidecar(blobs)
 	...
 }

rollup/abi/bridge_abi_test.go (1)
Lines 175-190: Practical debugging utility for ABI methods and errors.
TestPrintABISignatures is a convenient function to list all ABI methods and error signatures. Including this test is beneficial for diagnostic and development workflows. Consider adding an explanatory comment on how to run or interpret these printed signatures for any new contributors.
rollup/internal/controller/relayer/l2_relayer.go (3)
Lines 449-475: Enforcing a timeout-based forced submission.
Introducing forceSubmit after r.cfg.BatchSubmission.TimeoutSec ensures that old batches don't linger indefinitely, which is crucial for timely submission. However, consider logging the exact "aging" in seconds (or a best-effort timestamp) so operators or logs can see how long a batch waited before being forced.

 log.Info("Forcing submission of batches due to timeout",
-	"batch index", batchesToSubmit[0].Batch.Index,
-	"created at", batchesToSubmit[0].Batch.CreatedAt)
+	"batch index", batchesToSubmit[0].Batch.Index,
+	"created at", batchesToSubmit[0].Batch.CreatedAt,
+	"elapsedSeconds", time.Since(batchesToSubmit[0].Batch.CreatedAt).Seconds())
Lines 543-551: Context ID strategy for multiple batch commits.
Building the context ID as "v7-..." is straightforward for version-bound grouping. Consider whether future versions (e.g., "v8-...") might cause logic collisions if the prefix alone is used as a segmentation key. A label-based scheme or suffix might help if you plan to grow multiple versions simultaneously.
Lines 1182-1187: Multi-batch commit payload for EuclidV2.
Packing commitBatches with the new signature properly aggregates version, firstBatchHash, and lastBatchHash. This is consistent with the shift to a multi-batch approach. Just ensure instructive logs for any revert or mismatch between first/last batch to ease on-call diagnosis.
rollup/abi/bridge_abi.go (1)

Line 27: Refactor the proliferation of finalize methods.
The ABI includes numerous finalization methods (e.g., finalizeBatch, finalizeBatch4844, finalizeBatchWithProof, etc.). Consider consolidating logic or adopting a parameterized approach to reduce duplication and improve maintainability. Multiple specialized entry points may complicate the contract's external interface and testing.

bridge-history-api/cmd/fetcher/app/app.go (1)
Lines 71-74: Error handling improvement is good, but there is an inconsistency.
The addition of error handling for L1MessageFetcher is a good improvement. However, the L2MessageFetcher (line 77) does not follow the same error handling pattern. Consider updating L2MessageFetcher to use the same approach for consistency.

-l2MessageFetcher := fetcher.NewL2MessageFetcher(subCtx, cfg.L2, db, l2Client)
+l2MessageFetcher, err := fetcher.NewL2MessageFetcher(subCtx, cfg.L2, db, l2Client)
+if err != nil {
+	log.Crit("failed to create L2MessageFetcher", "err", err)
+}

bridge-history-api/internal/controller/fetcher/l1_fetcher.go (1)
Lines 40-59: Good implementation of blob client handling.
The blob client creation and configuration are well done:

- Multiple client types are supported (BeaconNode, BlobScan, BlockNative)
- Errors from BeaconNodeClient creation are handled properly
- The code returns an error if no blob clients are configured

One enhancement to consider: add debug logging when successfully creating a blob client, not just when there is an error. This would make debugging configurations easier.

 if cfg.BlobScanAPIEndpoint != "" {
 	blobClient.AddBlobClient(blob_client.NewBlobScanClient(cfg.BlobScanAPIEndpoint))
+	log.Debug("Successfully created BlobScanClient", "endpoint", cfg.BlobScanAPIEndpoint)
 }
bridge-history-api/abi/backend_abi.go (1)
Line 168: Multiple events sharing the same name could cause ambiguity.
There are two separate "RevertBatch" events in the ABI (one with (batchIndex, batchHash), another with (startBatchIndex, finishBatchIndex)). Overloading event names, even with differing parameters, may confuse some indexers or block explorers. It can be beneficial to rename one of them (e.g., "RevertBatchRange") for clarity and consistency.
bridge-history-api/internal/logic/l1_event_parser.go (1)

Lines 473-495: getBatchBlockRangeFromBlob efficiently leverages blob data.
The decoding flow is clear and includes checks for nil blobs, empty blocks, and decode failures. For additional robustness, consider a context timeout or a retry strategy for large or delayed blob retrievals.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)

- bridge-history-api/go.sum is excluded by !**/*.sum
- go.work.sum is excluded by !**/*.sum
- rollup/go.sum is excluded by !**/*.sum
📒 Files selected for processing (48)

- bridge-history-api/Makefile (1 hunks)
- bridge-history-api/abi/backend_abi.go (4 hunks)
- bridge-history-api/abi/backend_abi_test.go (1 hunks)
- bridge-history-api/cmd/fetcher/app/app.go (1 hunks)
- bridge-history-api/conf/config.json (1 hunks)
- bridge-history-api/go.mod (1 hunks)
- bridge-history-api/internal/config/config.go (1 hunks)
- bridge-history-api/internal/controller/fetcher/l1_fetcher.go (4 hunks)
- bridge-history-api/internal/logic/l1_event_parser.go (7 hunks)
- bridge-history-api/internal/logic/l1_fetcher.go (12 hunks)
- bridge-history-api/internal/orm/batch_event.go (1 hunks)
- bridge-history-api/internal/utils/utils.go (5 hunks)
- bridge-history-api/internal/utils/utils_test.go (1 hunks)
- build/.golangci.yml (2 hunks)
- build/dockerfiles/bridgehistoryapi-api.Dockerfile (1 hunks)
- build/dockerfiles/bridgehistoryapi-db-cli.Dockerfile (1 hunks)
- build/dockerfiles/bridgehistoryapi-fetcher.Dockerfile (1 hunks)
- build/dockerfiles/coordinator-api.Dockerfile (1 hunks)
- build/dockerfiles/coordinator-cron.Dockerfile (1 hunks)
- build/dockerfiles/db_cli.Dockerfile (1 hunks)
- build/dockerfiles/gas_oracle.Dockerfile (1 hunks)
- build/dockerfiles/rollup_relayer.Dockerfile (1 hunks)
- common/types/message/message.go (2 hunks)
- coordinator/internal/logic/provertask/batch_prover_task.go (3 hunks)
- coordinator/internal/orm/chunk.go (1 hunks)
- database/migrate/migrate_test.go (1 hunks)
- database/migrate/migrations/00025_add_l1_message_queue_hash.sql (1 hunks)
- rollup/Makefile (1 hunks)
- rollup/abi/bridge_abi.go (1 hunks)
- rollup/abi/bridge_abi_test.go (2 hunks)
- rollup/cmd/rollup_relayer/app/app.go (1 hunks)
- rollup/conf/config.json (1 hunks)
- rollup/go.mod (1 hunks)
- rollup/internal/config/relayer.go (2 hunks)
- rollup/internal/controller/relayer/l2_relayer.go (8 hunks)
- rollup/internal/controller/relayer/l2_relayer_metrics.go (2 hunks)
- rollup/internal/controller/sender/sender.go (3 hunks)
- rollup/internal/controller/sender/sender_test.go (17 hunks)
- rollup/internal/controller/watcher/batch_proposer.go (3 hunks)
- rollup/internal/controller/watcher/bundle_proposer.go (5 hunks)
- rollup/internal/controller/watcher/bundle_proposer_test.go (3 hunks)
- rollup/internal/controller/watcher/chunk_proposer.go (3 hunks)
- rollup/internal/controller/watcher/l2_watcher.go (2 hunks)
- rollup/internal/orm/batch.go (5 hunks)
- rollup/internal/orm/chunk.go (5 hunks)
- rollup/internal/orm/orm_test.go (3 hunks)
- rollup/tests/bridge_test.go (2 hunks)
- rollup/tests/rollup_test.go (5 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: tests
🔇 Additional comments (114)
rollup/internal/controller/watcher/l2_watcher.go (3)

Lines 104-121: LGTM: Addition of the AuthorizationList field to TransactionData.
The addition of the AuthorizationList field, populated via tx.SetCodeAuthorizations(), correctly adapts the TransactionData struct to the updated interface requirements. This change aligns with the PR objective of adapting to CodecV7 for EuclidV2.

Lines 139-145: Great addition of an L1 message transaction counter.
The new counter for L1 message transactions provides valuable visibility into block composition. This enhanced logging will be helpful for monitoring and debugging, especially when handling multiple batches in a single transaction.

Line 147: Would this be affected after migrating to message queue v2?
Note: This line was previously discussed in an earlier review, but it appears the concern was resolved, as this is for L2.
build/.golangci.yml (2)

Lines 93-94: Complexity threshold increased.
The gocyclo linter's min-complexity value has been raised from 30 to 40. Please verify that this higher threshold aligns with the project's standards and that it won't overlook overly complex functions that might benefit from refactoring.

Lines 257-260: New exclusion rule for govet shadow warnings.
A new exclusion rule filtering govet messages for shadowed declarations of err or ctx has been added. This can help reduce noise in reports; however, ensure that suppressing these warnings won't hide genuine issues related to variable shadowing.

build/dockerfiles/gas_oracle.Dockerfile (1)
Line 2: Base image updated for Go 1.22.
The base image has been updated to scrolltech/go-rust-builder:go-1.22-rust-nightly-2023-12-03, aligning with the new Go version. Confirm that all dependencies and build tools are compatible with this upgrade.

build/dockerfiles/coordinator-cron.Dockerfile (1)

Line 2: Base image update in the coordinator-cron Dockerfile.
The Dockerfile now uses the updated base image for Go 1.22, ensuring consistency with other services. Please test the build to ensure no compatibility issues arise from this change.

build/dockerfiles/db_cli.Dockerfile (1)

Lines 2-3: Base image updated for db_cli.
The base image has been updated to scrolltech/go-rust-builder:go-1.22-rust-nightly-2023-12-03, which helps standardize the build environments across components. Verify that this change does not affect dependency resolution or build stability for the db_cli component.

build/dockerfiles/bridgehistoryapi-fetcher.Dockerfile (1)
Line 2: Base image upgrade to Go 1.22.
The update to scrolltech/go-rust-builder:go-1.22-rust-nightly-2023-12-03 is correctly applied. This change aligns with the overall upgrade strategy and should help ensure compatibility with the newer Go environment.

build/dockerfiles/bridgehistoryapi-api.Dockerfile (1)

Line 2: Base image update.
The Dockerfile now uses the updated base image scrolltech/go-rust-builder:go-1.22-rust-nightly-2023-12-03. This is consistent with the other components and helps maintain a uniform build environment.

build/dockerfiles/bridgehistoryapi-db-cli.Dockerfile (1)

Line 2: Base image upgrade.
The Dockerfile's base image has been updated to scrolltech/go-rust-builder:go-1.22-rust-nightly-2023-12-03, aligning it with the other images.

bridge-history-api/Makefile (1)
Lines 40-42: Docker build platform specification.
The addition of the --platform=linux/amd64 option to the docker build commands for bridgehistoryapi-fetcher, bridgehistoryapi-api, and bridgehistoryapi-db-cli is a positive enhancement. It ensures that the built images target a consistent architecture, which can help mitigate cross-platform build issues.

rollup/Makefile (1)

Lines 35-36: Enforcing the linux/amd64 build platform.
Including --platform=linux/amd64 in the docker build commands for both the gas_oracle and rollup-relayer targets promotes consistency and ensures that the images are built for the intended architecture. This change complements similar updates in other Makefiles.

build/dockerfiles/rollup_relayer.Dockerfile (1)

Line 2: Base image version update verification.
The base image has been successfully updated to go-1.22 (scrolltech/go-rust-builder:go-1.22-rust-nightly-2023-12-03). Please verify that all build dependencies and steps remain compatible with Go 1.22, and that this update aligns with the changes introduced for CodecV7 in the EuclidV2 project.

bridge-history-api/internal/utils/utils_test.go (5)
Lines 23-27: Good integration test for the single-chunk scenario.
The test updates successfully verify the new return signature, ensuring version is correctly captured alongside start and finish.

Lines 30-34: The multiple-chunk scenario looks correct.
This test effectively validates version, start, and finish when handling multiple chunk inputs.

Lines 37-41: The genesis batch test is thorough.
Verifying that version, start, and finish are zero for a genesis batch aligns with the updated function.

Lines 44-48: Checks for commitBatchWithBlobProof are consistent.
The test ensures that a non-zero version is properly parsed and that the block range is accurate.

Lines 63-74: The parent batch hash extraction test appears correct.
The test properly verifies that the second argument is interpreted as the parent batch hash for commitBatches, reinforcing safe parsing.

bridge-history-api/internal/utils/utils.go (11)
Lines 69-70: Renaming to GetBatchVersionAndBlockRangeFromCalldata is clear.
Returning an additional version parameter alongside block range data neatly aligns with new usage needs.

Lines 73-81: Error checks ensure robust input validation.
These explicit error returns guard against short txData or ABI misinterpretations, reducing the risk of malformed calls.

Line 85: The local declaration of version is properly scoped.
This avoids accidental cross-function usage and maintains clarity in the codebase.

Line 99: Error handling for commitBatch argument decoding is clear.
Providing structured error messages helps debugging potential ABI item mismatches.

Line 103: Assigning version = args.Version is aligned with the new function design.
Good job capturing the batch version from the arguments.

Line 115: Error handling for commitBatchWithBlobProof argument decoding.
Similar structured error logic ensures clarity when arguments fail to decode.

Lines 119-120: Extraction of version from commitBatchWithBlobProof arguments.
Consistently sets version as needed before calculating block ranges.

Lines 121-130: Handling the commitBatches and commitAndFinalizeBatch methods with minimal arguments.
Returning version alongside a (0, 0) range is consistent with the test coverage.

Line 140: The "invalid chunks" error ensures chunk presence is mandatory for certain methods.
Provides explicit feedback for callers attempting to commit empty blocks.

Lines 151-152: Returning the previously computed err ensures consistent error propagation.
Exposes the same error context raised in prior lines, preventing silent failures.

Line 88: ✅ Verification successful: Returning a zero range for importGenesisBatch is acceptable.
Confirming that importGenesisBatch doesn't require a block range. Ensure any future usage doesn't need further modifications here. Please verify whether importGenesisBatch should also return the version from calldata, if any, or whether zero is always a safe default.

🏁 Script executed:

#!/bin/bash
# Searching references to importGenesisBatch usage in codebase
rg -A 5 'importGenesisBatch'

(Length of output: 30852)

Verification complete: returning zeros for importGenesisBatch is correct.
After reviewing the ABI and usage throughout the codebase, it's clear that the importGenesisBatch function accepts only two parameters (_batchHeader and _stateRoot) and does not include a version in its interface. The branch in bridge-history-api/internal/utils/utils.go that returns 0, 0, 0, nil correctly aligns with its expected behavior. No additional version extraction is needed at this time.

database/migrate/migrate_test.go (3)
Line 62: Version number updated correctly for the database migration.
The expected database version has been updated from 24 to 25, which aligns with the addition of the new migration script 00025_add_l1_message_queue_hash.sql that introduces the message queue hash columns.

Line 69: Version number updated consistently.
Consistent update of the expected database version to 25 in the testMigrate function, maintaining alignment with the other test cases.

Line 75: Version number updated correctly for the rollback test.
The expected database version has been correctly updated to 25 in the testRollback function, ensuring consistency across all test cases that check the database version.
rollup/conf/config.json (1)
Lines 54-58: Added a new batch submission configuration section.
The new batch_submission configuration section enables multiple batches to be submitted in a single transaction, with proper constraints:

- min_batches: 1 (ensures at least one batch is submitted)
- max_batches: 6 (limits the maximum number of batches per transaction)
- timeout: 300 (provides a timeout mechanism in seconds)

This configuration enhancement improves system efficiency by allowing batch aggregation while maintaining control over submission behavior.
coordinator/internal/orm/chunk.go (1)
Lines 31-32: ❓ Verification inconclusive: Added L1 message queue hash fields to the Chunk struct.
These new fields store the L1 message queue hash state before and after a chunk is processed, which is essential for tracking state transitions and ensuring data integrity in the EuclidV2 implementation with CodecV7. Verify that these new fields are properly populated during chunk creation by checking the InsertChunk method implementation.

🏁 Script executed:

#!/bin/bash
# Check if InsertChunk method populates the new L1 message queue hash fields
rg -A 5 -B 5 "PrevL1MessageQueueHash|PostL1MessageQueueHash" coordinator/internal/orm/chunk.go

(Length of output: 1229)

Attention: verify L1 message queue hash field population.
The new fields (PrevL1MessageQueueHash and PostL1MessageQueueHash) have been correctly added to the Chunk struct (lines 31-32 in coordinator/internal/orm/chunk.go) to support tracking state transitions in the EuclidV2 implementation with CodecV7. However, while the grep output confirms that these fields are present in the struct definition, please verify that the InsertChunk method properly initializes these fields during chunk creation. In particular:

- Ensure that InsertChunk assigns valid queue hash values to both PrevL1MessageQueueHash and PostL1MessageQueueHash.
- If the InsertChunk method is defined in a different file, double-check that it has been updated accordingly.

rollup/tests/bridge_test.go (2)
Lines 25-29: Import statement reordering.
The import statements have been reordered for better organization, which maintains code consistency but doesn't affect functionality.

Line 211: ✅ Verification successful: Added a new test case for CodecV7.
A new test case, TestCommitBatchAndFinalizeBundleCodecV7, verifies the CodecV7 implementation for EuclidV2. This test complements the existing codec tests and ensures the new functionality works correctly.

🏁 Script executed:

#!/bin/bash
# Search for the implementation of the TestCommitBatchAndFinalizeBundleCodecV7 function
rg -A 5 "func testCommitBatchAndFinalizeBundleCodecV7" rollup/tests/

(Length of output: 377)

Test implementation verification OK for CodecV7:

- Verified that the implementation of testCommitBatchAndFinalizeBundleCodecV7 exists in rollup/tests/rollup_test.go.
- The test function appropriately verifies the new CodecV7 functionality for EuclidV2 as intended.
- No further changes or removals are required.
database/migrate/migrations/00025_add_l1_message_queue_hash.sql (1)
Lines 1-27: Migration script looks good for tracking L1 message queue hashes.
The migration script properly adds the prev_l1_message_queue_hash and post_l1_message_queue_hash columns to both the chunk and batch tables, with appropriate default values. The rollback script uses IF EXISTS clauses, which is a good practice to prevent errors during rollback.

common/types/message/message.go (2)
Lines 47-56: BatchTaskDetail struct enhanced with KZG-related fields.
The BatchTaskDetail struct has been properly extended with KZG proof, commitment, and challenge fields. These additions align well with the PR objective of adapting to CodecV7 for EuclidV2.

Lines 64-74: ChunkInfo struct updated with message queue hash tracking.
The addition of the PrevMsgQueueHash field to the ChunkInfo struct allows for better tracking of the message queue state between chunks. This complements the database schema changes in migration 00025.

rollup/cmd/rollup_relayer/app/app.go (1)
Lines 85-93: Good configuration validation for batch submission settings.
The added sanity checks for the batch submission configuration ensure that:

- the BatchSubmission configuration is not nil,
- MinBatches is at least 1, and
- MaxBatches is at least 1.

These checks are important to prevent runtime issues with invalid configuration values.
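The sanity checks above can be sketched as a small validator. The struct shape and method name below are assumptions for illustration (the real fields live in rollup/internal/config/relayer.go); the min<=max relation is an extra check not listed in the review.

```go
package main

import (
	"errors"
	"fmt"
)

// BatchSubmission mirrors the fields described in the review; the exact
// struct shape here is an assumption for illustration.
type BatchSubmission struct {
	MinBatches int
	MaxBatches int
	TimeoutSec int
}

// validate applies the sanity checks the review lists, plus the natural
// min <= max relation.
func (c *BatchSubmission) validate() error {
	if c == nil {
		return errors.New("batch_submission config must not be nil")
	}
	if c.MinBatches < 1 {
		return errors.New("min_batches must be at least 1")
	}
	if c.MaxBatches < 1 {
		return errors.New("max_batches must be at least 1")
	}
	if c.MinBatches > c.MaxBatches {
		return errors.New("min_batches must not exceed max_batches")
	}
	return nil
}

func main() {
	cfg := &BatchSubmission{MinBatches: 1, MaxBatches: 6, TimeoutSec: 300}
	fmt.Println(cfg.validate()) // <nil>: matches the sample config values
}
```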
rollup/internal/controller/watcher/bundle_proposer_test.go (1)

Lines 125-142: Test improvement to simulate batch submissions in separate transactions.
This enhancement to the test correctly simulates real-world batch submission by:

- retrieving all batches and filtering out the genesis batch,
- using a database transaction to update the commit transaction hash and rollup status, and
- adding proper error handling with detailed error messages.

The test now better validates the functionality that allows for the submission of multiple batches within a single transaction.
rollup/internal/orm/chunk.go (4)
Lines 102-103: Exported method name looks good.
Renaming getLatestChunk to GetLatestChunk is appropriate for external consumption. The implementation aligns well with the rest of the ORM usage.

Line 121: Usage of the new GetLatestChunk method looks fine.
No issues spotted; this call pattern is consistent with the rest of the code.

Line 193: Same usage pattern.
This usage mirrors the earlier GetLatestChunk logic. No additional concerns.

Lines 235-236: Consistent field assignment for queue hashes.
Storing the hex-encoded queue hashes via Hex() is in line with how the other fields in this struct are managed. No issues here.

rollup/internal/config/relayer.go (2)
Lines 34-41: The new BatchSubmission struct appears well-defined.
The fields and JSON tags are clear. Ensure validation for potential edge cases, such as negative values or extremely high limits.

Lines 60-61: The RelayerConfig extension is consistent.
Adding the BatchSubmission pointer aligns with the new struct. The configuration now supports more flexible batch submissions.

rollup/internal/controller/sender/sender_test.go (12)
47-47
: Clarify intention of multiple nil entries and picking one blob.Using
[nil, nil, randBlobs(2)[0]]
fortxBlob
may raise questions about why only the first blob of the generated array is chosen. Verify that this meets your test coverage requirements and is not simply leftover debugging code.
183-192
: Conditional blob usage is appropriate.Conditionally creating a blob slice if
txBlob[i]
is non-nil effectively tests both blob and non-blob scenarios. This helps vary coverage without duplicating test logic.
231-234
: Repeated conditional pattern.This logic duplicates the earlier approach of passing a single blob when available. No new concerns.
275-277
: Skipping blob-based tests is reasonable here.The skip allows focusing on zero-blob scenarios. This is fine for selectively testing different transaction types.
352-355
: Sidecar creation looks correct.Passing a single blob from
txBlob[i]
ensures we test the sidecar path only when a blob is available.
394-396
: Similar sidecar creation pattern. This mirrors the previous implementation of a single-blob sidecar. No new remarks.
604-606
: Reusing the “blobs if not nil” logic. Again, this repeats the pattern of only constructing the slice for non-nil blobs. Behavior is consistent.
607-607
: Transaction submission path. Submission logic aligns with the rest of the test structure and matches the updated blob approach.
651-651
: Good coverage with `randBlobs(1)`. Generating a single random blob here ensures the tests handle blob transactions thoroughly.
693-693
: Continued usage of random blob generation. This call further extends coverage of blob-handling paths.
753-753
: Maintaining a consistent test approach. Again, calling `randBlobs(1)` ensures each test scenario is validated. Implementation looks good.
944-959
: Valid check for `MaxPendingBlobTxs`. This test ensures that sending more blob-carrying transactions than allowed triggers the correct error. No issues spotted.
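The admission rule this test exercises can be sketched as a small predicate. The function name and the convention that a non-positive limit disables the check are assumptions for illustration, not the sender's actual API:

```go
package main

import "fmt"

// allowNewBlobTx sketches the back-pressure rule: refuse a new blob-carrying
// transaction once the count of pending blob txs reaches the configured
// ceiling. A non-positive limit disables the check (an assumed convention).
func allowNewBlobTx(pendingBlobTxs, maxPendingBlobTxs int64) bool {
	if maxPendingBlobTxs <= 0 {
		return true // limit disabled
	}
	return pendingBlobTxs < maxPendingBlobTxs
}

func main() {
	fmt.Println(allowNewBlobTx(2, 3)) // under the limit
	fmt.Println(allowNewBlobTx(3, 3)) // at the limit: refused
	fmt.Println(allowNewBlobTx(9, 0)) // limit disabled
}
```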
rollup/internal/controller/watcher/chunk_proposer.go (4)
11-11
: Import usage confirmed. Importing `common` for handling `HexToHash` is justified by the new queue hash logic. No concerns here.
303-320
: Handling message queue hashes for CodecV7 is logical. The fallback to an empty hash for the first chunk of the new fork is sensible. However, consider whether an entirely missing parent chunk should be recoverable without error (e.g., in a new chain scenario). Currently, this returns an error. Confirm that a missing parent chunk is indeed invalid in all flows.
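To illustrate the fallback, here is a minimal stand-in for the rolling queue hash: sha256 substitutes for the production keccak256, and the helper names are hypothetical, not the proposer's real functions.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// rollQueueHash folds one L1 message hash into the rolling queue hash.
// sha256 stands in for the production hash; this only shows the shape.
func rollQueueHash(prev, msg [32]byte) [32]byte {
	h := sha256.New()
	h.Write(prev[:])
	h.Write(msg[:])
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

// prevQueueHashForChunk mirrors the described fallback: a parent chunk from
// before the new fork contributes an empty (zero) hash instead of its value.
func prevQueueHashForChunk(parentIsPreFork bool, parentPostHash [32]byte) [32]byte {
	if parentIsPreFork {
		return [32]byte{} // first chunk of the fork starts from the empty hash
	}
	return parentPostHash
}

func main() {
	var msg [32]byte
	msg[31] = 1

	start := prevQueueHashForChunk(true, msg) // pre-fork parent is ignored
	fmt.Println(hex.EncodeToString(start[:]))

	next := rollQueueHash(start, msg)
	fmt.Println(next != start)
}
```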
326-334
: Incremental `PostL1MessageQueueHash` updates. Applying L1 messages per block is coherent with the new approach. The error check and fallback are appropriately handled.
372-372
: Reverting `PostL1MessageQueueHash` on block removal. Clearing out the last block’s changes to keep the queue hash consistent is correct. No further issues.
rollup/internal/orm/orm_test.go (3)
313-314
: Straightforward update call
Performing the commit transaction hash and rollup status update looks solid, and the subsequent assertion ensures correctness.
332-351
: Well-tested retrieval logic
These assertions comprehensively cover various offsets and codec versions, validating that `GetCommittedBatchesGEIndexGECodecVersion` correctly filters and orders committed batches.
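The filtering semantics under test can be mimicked in memory as follows. This is an illustrative pure-Go stand-in for the ORM query, not the actual GORM implementation:

```go
package main

import (
	"fmt"
	"sort"
)

type batch struct {
	Index        uint64
	CodecVersion int
	Committed    bool
}

// selectBatches mimics the query: committed batches with index >= startIndex
// and codec version >= minCodec, ordered by index ascending.
func selectBatches(all []batch, startIndex uint64, minCodec int) []batch {
	var out []batch
	for _, b := range all {
		if b.Committed && b.Index >= startIndex && b.CodecVersion >= minCodec {
			out = append(out, b)
		}
	}
	sort.Slice(out, func(i, j int) bool { return out[i].Index < out[j].Index })
	return out
}

func main() {
	all := []batch{
		{3, 7, true}, {1, 7, true}, {2, 6, true}, {4, 7, false},
	}
	got := selectBatches(all, 1, 7)
	fmt.Println(len(got), got[0].Index, got[1].Index)
}
```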
361-370
: Good verification of updated statuses
By confirming the proving and rollup statuses after updates, you ensure the new method behaves as intended under different status transitions.
rollup/internal/controller/watcher/bundle_proposer.go (2)
190-193
: Ensuring all batches from the same commit TX
Including all same-transaction batches together prevents partial updates and maintains atomic consistency in each proposed bundle.
204-207
: Consistent inclusion on block timeout
Applying the same all-or-nothing bundling for a timing threshold helps prevent partial commits of related batches.
rollup/internal/controller/watcher/batch_proposer.go (4)
296-297
: Adding message queue integration fields
Including `TotalL1MessagePoppedBefore` and `PrevL1MessageQueueHash` gives improved insight into prior L1 message state within each batch.
301-305
: Conditional block aggregation by codec version
Appending chunk blocks only for `CodecV7` or newer maintains backward compatibility and handles advanced features for newer codecs.
333-342
: Handling chunk limit logic
Truncating the last chunk and recalculating metrics ensures the batch stays within gas and size constraints.
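The truncate-and-recalculate step can be sketched like this, with illustrative size accounting standing in for the proposer's real gas and payload metrics:

```go
package main

import "fmt"

// truncateChunk drops tail blocks until the accumulated size fits the limit,
// then returns the kept blocks and the recomputed total. Sizes and the helper
// name are assumptions for illustration.
func truncateChunk(blockSizes []uint64, maxSize uint64) ([]uint64, uint64) {
	var total uint64
	kept := 0
	for _, s := range blockSizes {
		if total+s > maxSize {
			break // this block would exceed the limit: truncate here
		}
		total += s
		kept++
	}
	return blockSizes[:kept], total
}

func main() {
	blocks, total := truncateChunk([]uint64{40, 30, 50, 20}, 100)
	fmt.Println(len(blocks), total)
}
```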
383-385
: Consistent chunk field usage
Assigning `PrevL1MessageQueueHash` and `PostL1MessageQueueHash` in the `Chunk` struct aligns well with the expanded data model for L1 message queues.
rollup/tests/rollup_test.go (2)
222-450
: Good test implementation for CodecV7 and multi-batch transactions. This test function thoroughly verifies the new functionality for committing multiple batches in a single transaction with CodecV7. The structure is clear and uses proper assertions.
405-423
: Well-designed test for multi-batch transaction handling. This section effectively tests the core functionality of submitting multiple batches in a single transaction, which is a key feature of this PR. The test correctly verifies that batches committed in the same transaction are grouped into a single bundle.
coordinator/internal/logic/provertask/batch_prover_task.go (3)
227-235
: Good addition of `PrevMsgQueueHash` for CodecV7 compatibility. The addition of `PrevMsgQueueHash` to the `ChunkInfo` struct properly supports the new field required for EuclidV2 with CodecV7.
295-306
: Good validation and extraction of blob data proof components. Adding validation for blob data proof length and properly extracting KZG proof, commitment and challenge values improves the robustness of the code.
The memory layout comment is particularly helpful for future maintainers.
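A sketch of the kind of length check and slicing described above. The 32/32/48/48 layout (challenge point, evaluation, KZG commitment, KZG proof) is an assumption for this sketch and should be checked against the production memory layout comment:

```go
package main

import (
	"errors"
	"fmt"
)

// splitBlobDataProof validates the proof length and slices out the components.
// The assumed layout is: challenge (32) | y (32) | commitment (48) | proof (48).
func splitBlobDataProof(proof []byte) (challenge, y, commitment, kzgProof []byte, err error) {
	const want = 32 + 32 + 48 + 48 // 160 bytes under the assumed layout
	if len(proof) != want {
		return nil, nil, nil, nil, errors.New("unexpected blob data proof length")
	}
	return proof[0:32], proof[32:64], proof[64:112], proof[112:160], nil
}

func main() {
	buf := make([]byte, 160)
	c, y, cm, p, err := splitBlobDataProof(buf)
	fmt.Println(err == nil, len(c), len(y), len(cm), len(p))

	_, _, _, _, err = splitBlobDataProof(buf[:10])
	fmt.Println(err != nil)
}
```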
278-279
:✅ Verification successful
Confirm CodecV7 compatibility across all code paths.
The method now explicitly handles CodecV7, but ensure that all other processing logic throughout the codebase is also compatible with the new codec version.
🏁 Script executed:
#!/bin/bash
# Check for other places where codec versions are handled to ensure CodecV7 is included
rg "case encoding\.Codec(V3|V4|V5|V6|V7)" --type go

Length of output: 555
CodecV7 is correctly integrated across all related code paths.
The search results show that both the `coordinator/internal/logic/provertask/batch_prover_task.go` file and the relayer controller (`rollup/internal/controller/relayer/l2_relayer.go`) explicitly handle `CodecV7`. This confirms that the necessary branching logic has been updated to incorporate the new codec version without leaving any missing references.
rollup/internal/controller/relayer/l2_relayer_metrics.go (1)
11-12
: New metric aligns with the multi-batch transaction feature. Adding `rollupL2RelayerProcessBatchesPerTxCount` is a good way to track the number of batches processed per transaction, which is a key metric for the new feature.
rollup/internal/orm/batch.go (4)
38-39
: Good addition of L1 message queue hash fields. The addition of `PrevL1MessageQueueHash` and `PostL1MessageQueueHash` fields to the `Batch` struct properly supports the tracking of message queue state for EuclidV2.
168-176
: Improved method name clarity and filtering. Renaming to `GetCommittedBatchesGEIndexGECodecVersion` and filtering for committed batches makes the intent clearer and ensures only valid batches are included in bundle operations.
304-305
: Message queue hash field initialization in `InsertBatch`. Proper initialization of the new message queue hash fields ensures data consistency across the system.
403-415
: Database transaction support for multi-batch commits. The updated `UpdateCommitTxHashAndRollupStatus` method with transaction support is essential for the multi-batch transaction feature, enabling atomic updates to multiple batches.
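The all-or-nothing semantics a DB transaction provides here can be modeled with an in-memory stand-in; this is illustrative only and not the ORM code:

```go
package main

import (
	"errors"
	"fmt"
)

// applyAll mimics atomic multi-batch commit updates: either every batch gets
// the commit tx hash, or the original state is left untouched. The "already
// has a commit tx" rule is an assumed invariant for this sketch.
func applyAll(commitTx map[uint64]string, indices []uint64, txHash string) error {
	staged := make(map[uint64]string, len(indices))
	for _, idx := range indices {
		if _, ok := commitTx[idx]; ok {
			return errors.New("batch already has a commit tx: rolling back")
		}
		staged[idx] = txHash
	}
	for idx, h := range staged { // "commit" only after every check passed
		commitTx[idx] = h
	}
	return nil
}

func main() {
	state := map[uint64]string{1: "0xold"}
	err := applyAll(state, []uint64{2, 3}, "0xabc")
	fmt.Println(err == nil, state[2], state[3])

	err = applyAll(state, []uint64{4, 1}, "0xdef")
	fmt.Println(err != nil, state[4] == "") // nothing from the failed update applied
}
```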
684-695
: Validate nil checks and blob copying logic. The function `makeSidecar` robustly guards against an empty or partially nil `blobsInput` slice. The per-blob checks and error reporting are correct and help surface invalid data early. The loop’s approach is clear, and partial sidecar creation is rightly disallowed.
rollup/abi/bridge_abi_test.go (2)
26-38
: Showcasing improved test reliability with explicit test parameter usage. Switching to `assert.NoError(t, err)` correctly associates the assertion with the test instance, enhancing test clarity. These changes follow standard practices in Go testing with `testify`.
118-137
: Good coverage for new Pack-based tests. These newly introduced tests (e.g., `TestPackCommitAndFinalizeBatch`) broaden coverage for multiple parameters and ensure ABI consistency. The structured approach of verifying each Pack invocation is helpful for preventing regression in ABI calls.
rollup/internal/controller/relayer/l2_relayer.go (3)
414-417
: Avoid committing a batch with index 0. Emitting an error for a batch with index 0 is consistent with the notion that genesis handling is separate. This check prevents accidental double handling of genesis, which is good. Keep the error descriptive if future expansions remove or rename the “genesis” concept.
684-687
: Improved `finalizeBundle` method for CodecV7 with end-chunk usage. Encapsulating calls to `constructFinalizeBundlePayloadCodecV7` ensures that finalizing complex bundles is separate from earlier versions. The reliance on chunk-based checks for message queue correctness is a good approach. Double-check your logic if the final chunk is missing or partially verified.
381-394
:✅ Verification successful
Efficient retrieval of pending batches, but watch out for older codec versions.
By beginning your check with `if i == 0 && encoding.CodecVersion(dbBatch.CodecVersion) < encoding.CodecV7`, you are effectively deferring older versions to `processPendingBatchesV4`. This branching is logical for separating legacy vs. newer flows, but ensure that any intermediate codec (e.g., `CodecV6`) does not slip into an unintended path, particularly if future expansions happen. Please confirm that no path uses V6 incorrectly by searching usage references:
🏁 Script executed:
#!/bin/bash
ast-grep --pattern $'encoding.CodecV6'

Length of output: 2223
Legacy Flow Handling Verified: No unintended CodecV6 paths
- The conditional check with `if i == 0 && encoding.CodecVersion(dbBatch.CodecVersion) < encoding.CodecV7` properly diverts legacy flows to `processPendingBatchesV4`.
- An `ast-grep` search confirms that references to `encoding.CodecV6` appear only within migration SQL files and do not affect runtime logic.
- Continue to watch for any future additions of intermediate codec versions.
rollup/abi/bridge_abi.go (2)
27-27
: Consider verifying overloaded function behavior for `revertBatch`. There are two `revertBatch` entries with different signatures in this ABI. While function overloading is allowed in Solidity, it can introduce confusion and potential maintenance overhead. Verify that callers accurately distinguish between these overloaded functions and that this duplication is truly necessary. Do you want me to generate a shell script to search the codebase for usage of each overloaded signature to confirm correctness?
27-27
:❓ Verification inconclusive
Ensure comprehensive test coverage for newly added and modified functions.
The newly introduced `commitAndFinalizeBatch` function and other methods (e.g., `commitBatches`, `finalizeBatch4844`) may not be covered by tests yet. Given that Codecov indicates patch coverage issues in other parts of the codebase, ensure these functions are adequately tested to maintain or improve coverage. Below is a script to help locate references to these newly added methods and assess overall coverage:
🏁 Script executed:
#!/bin/bash
# Search for references to commitAndFinalizeBatch and similar newly added functions.
rg -A 5 "commitAndFinalizeBatch|commitBatches|finalizeBatch4844"

# Optionally, run Go coverage across the project (assuming a Go-based monorepo).
go test ./... -coverprofile=coverage.out && go tool cover -func=coverage.out

Length of output: 34236
Attention: Verify Comprehensive Test Coverage for New Functions
The new references for the functions (e.g., `commitAndFinalizeBatch`, `commitBatches`, and `finalizeBatch4844`) have been found in various test files (such as `rollup/abi/bridge_abi_test.go` and `bridge-history-api/internal/utils/utils_test.go`). However, the automated coverage command returned a module-related configuration error (indicating that `./...` isn’t recognized by the current `go.work` setup). Please take the following actions:
Review Test Coverage:
- Confirm that the tests invoked in files like `rollup/abi/bridge_abi_test.go` adequately exercise the new functions.
- Check that utility tests (e.g., in `bridge-history-api/internal/utils/utils_test.go`) reliably cover both `commitBatches` and `commitAndFinalizeBatch`.
Resolve CI/Test Configuration:
- Investigate and address the module configuration issue reported by the `go test ./... -coverprofile=coverage.out` command to ensure that coverage metrics are correctly gathered.
Ensure these verifications are performed to maintain or improve overall test coverage.
bridge-history-api/go.mod (1)
13-14
: Dependency updates look appropriate. Added a direct dependency on the da-codec package and updated go-ethereum to include the header hash compatibility hotfix. These changes align with the PR objective to adapt to CodecV7 for EuclidV2.
rollup/go.mod (1)
16-16
:❓ Verification inconclusive
Verify version consistency across modules.
The version of da-codec used here (`v0.1.3-0.20250227072756-a1482833595f`) is slightly newer than the one used in bridge-history-api (`v0.1.3-0.20250226072559-f8a8d3898f54`). While this may be intentional, using different versions across modules could potentially lead to compatibility issues.
🏁 Script executed:
#!/bin/bash
# Check for any incompatible differences between the two da-codec versions
echo "Comparing da-codec versions between modules:"
echo "bridge-history-api: v0.1.3-0.20250226072559-f8a8d3898f54"
echo "rollup: v0.1.3-0.20250227072756-a1482833595f"

# Check if these versions exist in the da-codec repository
echo -e "\nChecking commits in da-codec repository:"
git ls-remote https://github.com/scroll-tech/da-codec.git f8a8d3898f54
git ls-remote https://github.com/scroll-tech/da-codec.git a1482833595f

# Look for relevant imports in the codebase
echo -e "\nFiles importing da-codec in the project:"
rg -l "github.com/scroll-tech/da-codec"

Length of output: 2381
Action Required: Confirm da-codec Version Intent
The dependency version discrepancy between the rollup (`v0.1.3-0.20250227072756-a1482833595f`) and bridge-history-api (`v0.1.3-0.20250226072559-f8a8d3898f54`) modules has been noted. The verification script confirms that both commits exist and are referenced across multiple parts of the codebase. However, this version mismatch might lead to compatibility issues if not intentional. Please verify that using two different versions of da-codec is deliberate. If this is an oversight, consider aligning both modules to a consistent da-codec version.
- rollup/go.mod (line 16):
github.com/scroll-tech/da-codec v0.1.3-0.20250227072756-a1482833595f
- bridge-history-api/go.mod:
github.com/scroll-tech/da-codec v0.1.3-0.20250226072559-f8a8d3898f54
bridge-history-api/abi/backend_abi_test.go (1)
10-13
: Good test coverage for event signatures. This test verifies that the event signatures for both V0 and V7 RevertBatch events match their expected Keccak256 hashes. This helps ensure event handling compatibility with different versions of the system.
bridge-history-api/internal/orm/batch_event.go (1)
119-121
: Improved logic for handling reverted batches. The change from filtering by batch_hash to excluding finalized batches is a logical improvement. It enforces the constraint that once a batch is finalized, it cannot be reverted, which is an important business rule for maintaining data integrity.
bridge-history-api/conf/config.json (2)
22-22
: Default zero address for `MessageQueueV2Addr`: is this intentional? The `MessageQueueV2Addr` is set to the zero address (0x0000000000000000000000000000000000000000). While this might be intentional for a feature that's not yet fully deployed, please confirm if this is the expected default value.
25-26
: LGTM for `BlobScanAPIEndpoint` configuration. The `BlobScanAPIEndpoint` is properly configured with a valid URL format. This addition aligns well with the new fields in the `FetcherConfig` struct.
bridge-history-api/internal/config/config.go (1)
33-33
: New configuration fields added correctly. The new fields (`MessageQueueV2Addr`, `BeaconNodeAPIEndpoint`, `BlobScanAPIEndpoint`, and `BlockNativeAPIEndpoint`) are properly added to the `FetcherConfig` struct with appropriate JSON tags. These align with the changes in the config.json file.
Minor style note: There's an empty line at line 37 that seems unnecessary but doesn't affect functionality.
Also applies to: 37-40
bridge-history-api/internal/controller/fetcher/l1_fetcher.go (1)
65-65
: Updated function signature and return values look good. The update to include the `blobClient` in the `l1FetcherLogic` creation and the modified return statement to include the error are implemented correctly. This is consistent with the error handling added in app.go.
Also applies to: 82-82
bridge-history-api/abi/backend_abi.go (2)
58-59
: Add event signature variables for revert batch. These two new hash variables are logically separated for the legacy (V0) and new (V7) "RevertBatch" events, which is a clean approach to handling versioned events.
275-284
: Versioned revert event types look consistent and well structured. The introduction of `L1RevertBatchV0Event` and `L1RevertBatchV7Event` illustrates the evolution of the revert batch approach. This clearly delineates the fields needed for each version.
bridge-history-api/internal/logic/l1_event_parser.go (6)
25-27
: New `blobClient` field strengthens the parser’s data retrieval flow. Adding this field is a neat way to gather batch-related blob data directly within the parser. Just ensure the `blobClient` has robust error handling to avoid unexpected panics.
31-37
: Constructor updated to accept `blobClient`. Passing in the `blobClient` parameter upon parser creation avoids global state and enhances maintainability. This is a good design choice.
240-255
: Informative comment block clearly explains the multi-blob approach. Your inline documentation for batch events ≥ CodecV7 is thorough, making it easier to understand the logic behind multi-blob flows and how parent batch hashes must be tracked.
269-321
: Codec-based parsing handles multi-blobs gracefully. The approach of matching each “CommitBatch” event to one blob by indexing via `currentIndex` is well done, including the bounds check on `commitTx.BlobHashes()`. Handling the parent batch hash for each sub-batch also looks correct.
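The index-based pairing can be sketched as follows; function and variable names are hypothetical, and string hashes stand in for the real types:

```go
package main

import (
	"errors"
	"fmt"
)

// pairEventsWithBlobs assigns the i-th CommitBatch event in a transaction to
// the i-th blob hash, with the bounds check the comment above highlights.
func pairEventsWithBlobs(eventBatchHashes, txBlobHashes []string) (map[string]string, error) {
	if len(eventBatchHashes) > len(txBlobHashes) {
		return nil, errors.New("more CommitBatch events than blobs in tx")
	}
	paired := make(map[string]string, len(eventBatchHashes))
	for i, h := range eventBatchHashes {
		paired[h] = txBlobHashes[i] // positional match: event i <-> blob i
	}
	return paired, nil
}

func main() {
	m, err := pairEventsWithBlobs([]string{"b1", "b2"}, []string{"blobA", "blobB", "blobC"})
	fmt.Println(err == nil, m["b1"], m["b2"])

	_, err = pairEventsWithBlobs([]string{"b1", "b2"}, []string{"blobA"})
	fmt.Println(err != nil)
}
```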
330-342
: Dedicated case for `L1RevertBatchV0EventSig` ensures backward compatibility. This case cleanly unpacks the original revert event version (V0). Looks good.
342-355
:❓ Verification inconclusive
Range-based reversion for `L1RevertBatchV7EventSig`. Iterating from `StartBatchIndex` to `FinishBatchIndex` is logical for mass reverts. However, consider clarifying whether scenarios with `startBatchIndex > finishBatchIndex` can occur, and if so, confirm that no partial revert events slip through.
🏁 Script executed:
#!/bin/bash
# Searching for references to RevertBatch0 usage and verifying potential constraints on StartBatchIndex/FinishBatchIndex
rg "RevertBatch0"

Length of output: 278
Clarify the boundary check for batch reversion range
Iterating from `StartBatchIndex` to `FinishBatchIndex` works well for batching reverts, but please confirm whether it's possible for `StartBatchIndex` to be greater than `FinishBatchIndex`. If such cases are valid per the protocol, ensure that they're either explicitly handled (to avoid silently skipping ranges) or that the underlying system enforces `StartBatchIndex <= FinishBatchIndex`. For clarity and future maintainability, adding a comment or a runtime check to document this invariant would be beneficial.
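A runtime guard documenting the invariant might look like this (illustrative names, not the parser's code):

```go
package main

import "fmt"

// revertRange returns the batch indices covered by a range-based revert event,
// rejecting reversed ranges instead of silently skipping them.
func revertRange(start, finish uint64) ([]uint64, error) {
	if start > finish {
		return nil, fmt.Errorf("invalid revert range: start %d > finish %d", start, finish)
	}
	out := make([]uint64, 0, finish-start+1)
	for i := start; i <= finish; i++ {
		out = append(out, i)
	}
	return out, nil
}

func main() {
	idx, err := revertRange(5, 7)
	fmt.Println(err == nil, idx)

	_, err = revertRange(7, 5) // reversed range: explicit error, no partial revert
	fmt.Println(err != nil)
}
```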
- Verify if reversed index ranges (`StartBatchIndex > FinishBatchIndex`) are ever emitted.
- If so, ensure no unintended partial reverts occur.
- Otherwise, document the assumption that `StartBatchIndex` is always less than or equal to `FinishBatchIndex`.
bridge-history-api/internal/logic/l1_fetcher.go (7)
53-53
: BlobClient parameter extends fetcher functionality. Injecting `blobClient` in `NewL1FetcherLogic` centralizes all data fetching in a consistent workflow, paving the way for batch block range retrieval.
123-125
: Optional MessageQueueV2 address inclusion. Appending `MessageQueueV2Addr` only when provided ensures no extraneous tracking occurs. This is a safe way to handle optional addresses.
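The optional-inclusion rule can be sketched with strings standing in for `common.Address`; names and the zero-address convention are assumptions for illustration:

```go
package main

import "fmt"

const zeroAddr = "0x0000000000000000000000000000000000000000"

// watchedAddresses tracks the V2 queue address only when it is configured to
// a non-empty, non-zero value, mirroring the optional-address handling.
func watchedAddresses(base []string, messageQueueV2Addr string) []string {
	out := append([]string{}, base...) // copy so base is never mutated
	if messageQueueV2Addr != "" && messageQueueV2Addr != zeroAddr {
		out = append(out, messageQueueV2Addr)
	}
	return out
}

func main() {
	base := []string{"0xqueueV1", "0xrollup"}
	fmt.Println(len(watchedAddresses(base, zeroAddr))) // zero address: not tracked
	fmt.Println(len(watchedAddresses(base, "0xabc")))  // configured: tracked
}
```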
137-137
: Initializing the parser with `blobClient`. Great that you reuse the same `blobClient` for the parser to avoid multiple clients and reduce redundancy.
176-218
: Refined error handling in `getRevertedTxs`. Breaking early and returning `(nil, err)` after logging ensures consistent error propagation. This approach helps avoid partial data states if the receipt or sender retrieval fails.
228-228
: Expanding topics array from 16 to 17. This change accommodates new events (V0 and V7 revert batch, etc.), ensuring logs for both are captured.
237-246
: Comprehensive event signature coverage. Adding both revert batch events, finalize batch, queue transaction, and deposit-wrapped-token ensures you don’t miss new logs. This is the correct approach given the fresh additions in `backend_abi.go`.
269-269
: Incorporate reverted transactions into fetch results. Storing `l1RevertedTxs` in the final `L1FilterResult` clarifies which transactions had reverts, improving insights and debugging.
Actionable comments posted: 0
🧹 Nitpick comments (2)
rollup/internal/controller/watcher/bundle_proposer.go (2)
170-174
: Validate empty CommitTxHash with a targeted test.
This check properly references the correct batch when reporting the error. Consider adding or updating a test to ensure this path is exercised when a batch’s `CommitTxHash` is unexpectedly empty.
187-198
: Refactor repeated logic to avoid duplication.
The flow that includes calling `allBatchesCommittedInSameTXIncluded(batches)` and then updating metrics and DB is repeated again at lines 200–212. Extracting a helper method could reduce code duplication and simplify maintenance. Example refactor outline:
+func (p *BundleProposer) handleCommittedTXIncludedAndUpdate(batches []*orm.Batch, codecVersion encoding.CodecVersion) error {
+	updatedBatches, err := p.allBatchesCommittedInSameTXIncluded(batches)
+	if err != nil {
+		return fmt.Errorf("failed to include all batches committed in the same tx: %w", err)
+	}
+	p.bundleFirstBlockTimeoutReached.Inc()
+	p.bundleBatchesNum.Set(float64(len(updatedBatches)))
+	return p.updateDBBundleInfo(updatedBatches, codecVersion)
+}

// Inside proposeBundle():
-if uint64(len(batches)) == maxBatchesThisBundle {
-	...
-	return p.updateDBBundleInfo(batches, codecVersion)
+if uint64(len(batches)) == maxBatchesThisBundle {
+	return p.handleCommittedTXIncludedAndUpdate(batches, codecVersion)
 }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
rollup/internal/controller/watcher/bundle_proposer.go
(5 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: tests
🔇 Additional comments (3)
rollup/internal/controller/watcher/bundle_proposer.go (3)
136-136
: Ensure only committed batches are included.
Good enhancement calling `GetCommittedBatchesGEIndexGECodecVersion` here instead of fetching uncommitted batches. This strengthens data integrity and consistency in the bundling logic.
200-212
: Duplicated block identified above.
This logic block repeats the same sequence of calls and metric updates as lines 187–198. See the previous comment for a possible approach to refactor and reduce repetition.
219-262
: Comprehensive handling of committed batches in the same transaction.
This new helper method gracefully checks for matching commit hashes, removes partial batches if needed, and prevents slice out-of-bounds. It is well-documented and appears robust.
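The all-or-nothing trimming behavior can be sketched as follows; the types and names are hypothetical stand-ins for the ORM batch, not the helper's real signature:

```go
package main

import "fmt"

type batch struct {
	Index        uint64
	CommitTxHash string
}

// trimPartialTx drops trailing batches whose commit transaction is not fully
// contained in the candidate window, so a bundle never splits a multi-batch
// commit tx. nextOutside is the first batch beyond the window, if any.
func trimPartialTx(window []batch, nextOutside *batch) []batch {
	if len(window) == 0 || nextOutside == nil {
		return window
	}
	if nextOutside.CommitTxHash != window[len(window)-1].CommitTxHash {
		return window // last tx group is complete
	}
	// the tx spills past the window: remove its batches from the tail
	tx := nextOutside.CommitTxHash
	i := len(window)
	for i > 0 && window[i-1].CommitTxHash == tx {
		i--
	}
	return window[:i]
}

func main() {
	w := []batch{{1, "a"}, {2, "b"}, {3, "b"}}
	fmt.Println(len(trimPartialTx(w, &batch{4, "b"}))) // tx "b" spills: trimmed
	fmt.Println(len(trimPartialTx(w, &batch{4, "c"}))) // complete: kept
}
```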
Purpose or design rationale of this PR
This PR implements changes according to `CodecV7` for `EuclidV2`. Depends on scroll-tech/da-codec#33.
- `CodecV7`: `min` and `max` batches in combination with `timeout`
PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
Deployment tag versioning
Has `tag` in `common/version.go` been updated, or have you added the `bump-version` label to this PR?
Breaking change label
Does this PR have the `breaking-change` label?
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Chores