Compute eoa worker ID independent of twmq lease token #38
Conversation
Caution: Review failed. The pull request is closed.

Walkthrough

Introduces a ResetNoncesTransaction to atomically reset EOA nonce state in Redis, adds tracing and worker_id instrumentation, adjusts lock acquisition to use worker_id, modifies the pending-to-borrowed move to optionally return and set a new optimistic nonce, updates the confirm/send flows for nonce sync and recycled-nonce handling, and refines error classification to make "oversized" errors non-retryable.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Worker as Worker
    participant Store as AtomicEoaExecutorStore
    participant Redis as Redis
    note over Worker,Store: Reset nonces
    Worker->>Store: reset_nonces(current_chain_tx_count)
    Store->>Redis: WATCH optimistic,last,recycled,manual_reset
    Store->>Redis: GET health / nonce_resets
    Store->>Redis: PIPELINE SET health (optional)<br/>SET optimistic = chain_tx_count<br/>SET cached = chain_tx_count<br/>DEL recycled<br/>DEL manual_reset
    Redis-->>Store: OK / Retry on conflict
    Store-->>Worker: Result
```

```mermaid
sequenceDiagram
    autonumber
    participant Confirm as ConfirmFlow
    participant Store as EoaExecutorStore
    participant Chain as ChainClient
    note over Confirm,Store: Handle NonceSyncRequired
    Confirm->>Store: get_cached_transaction_count()
    alt NonceSyncRequired
        Confirm->>Chain: get_current_tx_count()
        Chain-->>Confirm: current_chain_tx_count
        Confirm->>Store: update_cached_transaction_count(current_chain_tx_count)
        Confirm-->>Confirm: use current_chain_tx_count as cached
    else OK
        Store-->>Confirm: cached_transaction_count
    end
    Confirm-->>Confirm: proceed with subsequent logic
```

```mermaid
sequenceDiagram
    autonumber
    participant Send as SendFlow
    participant Store as EoaExecutorStore
    note over Send,Store: Recycled nonce check
    Send->>Store: clean_and_get_recycled_nonces()
    Store-->>Send: recycled_nonces (cleaned)
    Send-->>Send: branch based on remaining count
```

```mermaid
sequenceDiagram
    autonumber
    participant Worker as Worker
    participant Store as EoaExecutorStore
    participant Redis as Redis
    note over Worker,Store: EOA lock with worker_id
    Worker->>Store: acquire_eoa_lock_aggressively(worker_id)
    Store->>Redis: SETNX lock_key = worker_id EX lock_ttl
    alt Lock acquired
        Redis-->>Store: OK
    else Locked
        Store->>Redis: GET lock_key
        Redis-->>Store: conflict_worker_id
        Store-->>Worker: Not acquired (logs conflict_worker_id)
    end
```

```mermaid
sequenceDiagram
    autonumber
    participant Sender as Sender
    participant Classifier as SendErrorClassifier
    participant Result as SubmissionResult
    note over Sender,Result: Oversized -> non-retryable
    Sender->>Classifier: classify(error_str)
    alt contains "oversized"
        Classifier-->>Sender: DeterministicFailureNonRetryable
        Sender->>Result: map to Fail (non-retryable)
    else other deterministic
        Classifier-->>Sender: DeterministicFailure (retryable path depends)
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 4
🧹 Nitpick comments (6)
executors/src/eoa/worker/error.rs (1)

Lines 154-204: Optional: broaden deterministic bucket, keep default safety

Consider adding other well-known non-retryable phrases (e.g., "max fee per gas less than block base fee") if they apply in your environments. The current default-to-possibly-sent remains a safe fallback.
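To make the bucketing concrete, here is a minimal, self-contained sketch of string-based send-error classification with an "oversized" bucket. The variant names mirror those discussed in this review, but the phrase lists and function are illustrative assumptions, not the crate's actual API.

```rust
// Hypothetical sketch of send-error classification; phrase lists are
// illustrative, not the crate's real constants.
#[derive(Debug, PartialEq)]
enum SendErrorClassification {
    /// Deterministic failure; whether to retry depends on context.
    DeterministicFailure,
    /// The RPC will reject this payload every time; never re-submit.
    DeterministicFailureNonRetryable,
    /// Safe default: the transaction may have landed on chain.
    PossiblySent,
}

fn classify_send_error(error_str: &str) -> SendErrorClassification {
    let msg = error_str.to_lowercase();
    // Non-retryable: retrying an oversized payload can never succeed.
    if msg.contains("oversized") || msg.contains("exceeds block gas limit") {
        return SendErrorClassification::DeterministicFailureNonRetryable;
    }
    // Deterministic but context-dependent (fees/nonce may change next attempt).
    if msg.contains("max fee per gas less than block base fee") || msg.contains("nonce too low") {
        return SendErrorClassification::DeterministicFailure;
    }
    // Defaulting to "possibly sent" avoids ever double-spending a nonce.
    SendErrorClassification::PossiblySent
}

fn main() {
    println!("{:?}", classify_send_error("tx oversized: too many bytes"));
}
```

The important design property (kept from the review's reasoning) is that the unknown-error branch falls through to `PossiblySent`, so misclassification errs on the side of caution.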
executors/src/eoa/worker/send.rs (1)

Lines 231-291: Typo: clean_prepration_results -> clean_preparation_results

The method name has a spelling mistake; consider renaming for clarity and consistency, and update its two call sites in this file.

```diff
- async fn clean_prepration_results(
+ async fn clean_preparation_results(
      &self,
@@
-        let cleaned_results = self
-            .clean_prepration_results(prepared_results_with_pending, false)
+        let cleaned_results = self
+            .clean_preparation_results(prepared_results_with_pending, false)
             .await?;
@@
-        let cleaned_results = self
-            .clean_prepration_results(prepared_results_with_pending, true)
+        let cleaned_results = self
+            .clean_preparation_results(prepared_results_with_pending, true)
             .await?;
```

executors/src/eoa/store/atomic.rs (4)
Lines 662-670: Watch the health key to avoid losing nonce_resets updates

During reset, you read and then write `eoa_health`. If another operation (even from the same lock owner on a different task) updates the health between validation and operation, the last write wins and a `nonce_resets` timestamp can be lost. Including the health key in `WATCH` makes the pipeline retry instead of dropping that data.

Apply this diff:

```diff
 fn watch_keys(&self) -> Vec<String> {
     vec![
         self.keys.optimistic_transaction_count_key_name(),
         self.keys.last_transaction_count_key_name(),
         self.keys.recycled_nonces_zset_name(),
         self.keys.manual_reset_key_name(),
+        self.keys.eoa_health_key_name(),
     ]
 }
```

Context: per prior learnings, lock-based ownership ensures only the holder mutates state; this change just protects health metrics from concurrent updates by the same worker/tasks.
Lines 671-692: Use the provided Redis connection for validation reads (connection-scoped WATCH semantics)

`validation` currently ignores the supplied `ConnectionManager` and calls `store.get_eoa_health()`, which likely uses a different connection. Reading via the same connection you used to `WATCH` gives you consistent semantics (and avoids an extra roundtrip/connection borrow). This also pairs with watching the health key.

Apply this diff:

```diff
-    async fn validation(
-        &self,
-        _conn: &mut ConnectionManager,
-        store: &EoaExecutorStore,
-    ) -> Result<Self::ValidationData, TransactionStoreError> {
-        let now = chrono::Utc::now().timestamp_millis().max(0) as u64;
-
-        // Get current health data to prepare update
-        let current_health = store.get_eoa_health().await?;
-        let health_update = if let Some(mut health) = current_health {
+    async fn validation(
+        &self,
+        conn: &mut ConnectionManager,
+        store: &EoaExecutorStore,
+    ) -> Result<Self::ValidationData, TransactionStoreError> {
+        let now = chrono::Utc::now().timestamp_millis().max(0) as u64;
+
+        // Read health via the same connection that was WATCH'ed
+        let health_key = store.eoa_health_key_name();
+        let health_json: Option<String> = conn.get(&health_key).await?;
+        let health_update = if let Some(json) = health_json {
+            let mut health: EoaHealth = serde_json::from_str(&json)?;
             health.nonce_resets.push(now);
             // Keep only the last 5 nonce reset timestamps
             if health.nonce_resets.len() > 5 {
                 health.nonce_resets.drain(0..health.nonce_resets.len() - 5);
             }
             Some(serde_json::to_string(&health)?)
         } else {
             None
         };
         Ok(health_update)
     }
```
Lines 710-721: Minor: be consistent with borrowing when deleting keys

`pipeline.del(recycled_key)` moves the `String` while the rest use borrowed `&str`. Uniformly borrowing avoids unnecessary moves and keeps style consistent.

Apply this diff:

```diff
-        pipeline.del(recycled_key);
+        pipeline.del(&recycled_key);
```
Lines 512-515: Consistent `keys` reference in `ResetNoncesTransaction`

The only occurrence of `&self.store.keys` in this impl should be `&self.keys` to match every other call (and because `AtomicEoaExecutorStore` has its own `keys` field).

• File: executors/src/eoa/store/atomic.rs, line 513

Apply this diff:

```diff
-        let reset_tx = ResetNoncesTransaction {
-            keys: &self.store.keys,
+        let reset_tx = ResetNoncesTransaction {
+            keys: &self.keys,
             current_chain_tx_count,
         };
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (6)

- executors/src/eoa/store/atomic.rs (4 hunks)
- executors/src/eoa/store/mod.rs (2 hunks)
- executors/src/eoa/worker/confirm.rs (2 hunks)
- executors/src/eoa/worker/error.rs (3 hunks)
- executors/src/eoa/worker/mod.rs (3 hunks)
- executors/src/eoa/worker/send.rs (1 hunk)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-07-06T15:44:13.701Z
Learnt from: d4mr
PR: thirdweb-dev/engine-core#5
File: executors/src/eoa/worker.rs:173-176
Timestamp: 2025-07-06T15:44:13.701Z
Learning: The EOA executor store uses comprehensive WATCH-based coordination where every Redis state mutation watches the lock key and validates ownership before proceeding. If the lock is lost during any operation, the transaction fails with LockLost error. This makes aggressive lock acquisition safe because only the actual lock holder can successfully perform state mutations, regardless of who claims the lock.
Applied to files:
executors/src/eoa/store/mod.rs
executors/src/eoa/store/atomic.rs
📚 Learning: 2025-07-06T15:44:13.701Z
Learnt from: d4mr
PR: thirdweb-dev/engine-core#5
File: executors/src/eoa/worker.rs:173-176
Timestamp: 2025-07-06T15:44:13.701Z
Learning: In the EOA executor system, aggressive lock acquisition is safe because every Redis state mutation uses WATCH operations on the lock key. If the lock is lost during a transaction, the WATCH causes the transaction to fail and the worker exits gracefully. This provides coordination between workers even when using forceful lock takeover.
Applied to files:
executors/src/eoa/store/mod.rs
🧬 Code graph analysis (5)

- executors/src/eoa/worker/send.rs → executors/src/eoa/store/atomic.rs: worker_id (93-95)
- executors/src/eoa/worker/confirm.rs → executors/src/eoa/store/atomic.rs: worker_id (93-95)
- executors/src/eoa/store/mod.rs → executors/src/external_bundler/deployment.rs: lock_key (73-75)
- executors/src/eoa/worker/mod.rs → executors/src/eoa/store/atomic.rs: worker_id (93-95); executors/src/eoa/store/mod.rs: new (112-118), new (296-310)
- executors/src/eoa/store/atomic.rs → twmq/src/lib.rs: name (174-176), redis (1184-1186, 1192, 1284-1286, 1292, 1322-1324, 1330-1332); executors/src/eoa/store/mod.rs: optimistic_transaction_count_key_name (235-246), last_transaction_count_key_name (255-263), recycled_nonces_zset_name (213-224), manual_reset_key_name (281-292)
🔇 Additional comments (11)

executors/src/eoa/store/mod.rs (1)

Lines 287-291: Formatting-only tweak looks good. No behavior change; consistent with the other key formatters.

executors/src/eoa/worker/confirm.rs (1)

Lines 56-64: Good fallback on NonceSyncRequired. Initializing the cached transaction count from the fresh chain count is a sensible recovery path, and it is logged with sufficient context.

executors/src/eoa/worker/mod.rs (4)

Lines 26-28: Metrics import rework is fine. Consolidated imports are clear; no functional change.

Lines 162-165: Lock acquisition now parameterized by worker_id, good. Passing the computed `worker_id` into aggressive lock acquisition matches the new identity flow.

Lines 169-178: Async minimal-account check with error downgrade is a good tradeoff. Failing closed to non-minimal on errors is pragmatic for inflight caps; the cause is logged.

Lines 199-200: Including worker_id in the lock-release error log improves traceability for lock lifecycle events.

executors/src/eoa/worker/error.rs (3)

Lines 143-146: New non-retryable classification is appropriate. A distinct `DeterministicFailureNonRetryable` clarifies handling and prevents futile retries.

Lines 193-196: Capturing "oversized" as non-retryable is correct; RPCs won't accept such payloads after retries.

Lines 312-320: Mapping non-retryable deterministic failures to Fail in batch processing ensures oversized and similar errors short-circuit to failure without requeue storms.

executors/src/eoa/store/atomic.rs (2)

Lines 397-401: Nice addition: tracing on update_cached_transaction_count. The instrumentation with current_chain_tx_count will help correlate nonce sync events in logs without adding overhead. LGTM.

Lines 648-723: Packaging the reset into a SafeRedisTransaction looks solid. Clear separation of validation and operation, the correct keys are updated (optimistic, cached, recycled, manual reset), and the `nonce_resets` history is bounded. The logging added in `reset_nonces` is also helpful for ops.
```rust
#[tracing::instrument(skip_all, fields(worker_id = self.store.worker_id))]
pub async fn confirm_flow(&self) -> Result<CleanupReport, EoaExecutorWorkerError> {
```
Fix tracing field: call the accessor and format properly

Same as the send flow: use the accessor and the `%` formatter.

```diff
- #[tracing::instrument(skip_all, fields(worker_id = self.store.worker_id))]
+ #[tracing::instrument(skip_all, fields(worker_id = %self.store.worker_id()))]
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```rust
#[tracing::instrument(skip_all, fields(worker_id = %self.store.worker_id()))]
pub async fn confirm_flow(&self) -> Result<CleanupReport, EoaExecutorWorkerError> {
```
🤖 Prompt for AI Agents

In executors/src/eoa/worker/confirm.rs around lines 30 to 31, the tracing attribute uses the store field directly instead of calling the accessor and using the percent formatter; change the attribute to call the accessor and format with `%` (i.e., replace `fields(worker_id = self.store.worker_id)` with `fields(worker_id = %self.store.worker_id())`) so the worker_id is obtained via its accessor and displayed correctly in the span.
```rust
let worker_id = format!("{}:{}", uuid::Uuid::new_v4(), job.lease_token);
```
Align worker_id with the PR objective: remove the dependency on the twmq lease_token

PR title: "Compute eoa worker ID independent of twmq lease token." The current code still concatenates `job.lease_token`. Generate a worker_id that is not derived from lease tokens; optionally include the EOA/chain for traceability.

Minimal fix:

```diff
- let worker_id = format!("{}:{}", uuid::Uuid::new_v4(), job.lease_token);
+ let worker_id = uuid::Uuid::new_v4().to_string();
```

Alternative (better observability):

```diff
- let worker_id = format!("{}:{}", uuid::Uuid::new_v4(), job.lease_token);
+ let worker_id = format!("{}:{}:{}", data.chain_id, data.eoa_address, uuid::Uuid::new_v4());
```

Verification script to ensure no other lease-token-derived worker IDs remain:

🏁 Script executed:

```bash
#!/bin/bash
# Find worker_id constructions that include lease_token
rg -n -C2 --type=rust 'worker_id.*lease_token'
```

Length of output: 443
Remove lease_token from worker_id generation

The code at executors/src/eoa/worker/mod.rs:153 still concatenates `job.lease_token` into the `worker_id`, which contradicts the PR's objective to "compute eoa worker ID independent of twmq lease token." Please update this to drop any dependency on `lease_token`.

Suggested fixes:

Minimal change:

```diff
- let worker_id = format!("{}:{}", uuid::Uuid::new_v4(), job.lease_token);
+ let worker_id = uuid::Uuid::new_v4().to_string();
```

Better observability (include chain and EOA address):

```diff
- let worker_id = format!("{}:{}", uuid::Uuid::new_v4(), job.lease_token);
+ let worker_id = format!(
+     "{}:{}:{}",
+     data.chain_id,
+     data.eoa_address,
+     uuid::Uuid::new_v4()
+ );
```

Verification confirms this is the only occurrence of `lease_token` in any `worker_id` construction:

```
$ rg -n -C2 --type=rust 'worker_id.*lease_token'
executors/src/eoa/worker/mod.rs:153: let worker_id = format!("{}:{}", uuid::Uuid::new_v4(), job.lease_token);
```
🤖 Prompt for AI Agents

In executors/src/eoa/worker/mod.rs around lines 153 to 154, the worker_id is currently built by concatenating a UUID with job.lease_token; remove the lease_token dependency so the worker ID is computed independent of the twmq lease token. Fix by changing the construction to either (minimal) use only a fresh UUID or (preferred for observability) format the ID to include the chain and the EOA address (e.g., UUID:<chain>:<eoa_address>) instead of including job.lease_token; ensure any variables used (chain or EOA address) are available in scope and update any tests/logs that assert the old format.
```rust
#[tracing::instrument(skip_all, fields(worker_id = self.store.worker_id))]
pub async fn send_flow(&self) -> Result<u32, EoaExecutorWorkerError> {
```
Fix tracing field: call the accessor and format properly

`AtomicEoaExecutorStore` exposes `worker_id()`; using `self.store.worker_id` will fail if the field isn't public. Also prefer `%` to use Display formatting.

Apply:

```diff
- #[tracing::instrument(skip_all, fields(worker_id = self.store.worker_id))]
+ #[tracing::instrument(skip_all, fields(worker_id = %self.store.worker_id()))]
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```rust
#[tracing::instrument(skip_all, fields(worker_id = %self.store.worker_id()))]
pub async fn send_flow(&self) -> Result<u32, EoaExecutorWorkerError> {
```
🤖 Prompt for AI Agents

In executors/src/eoa/worker/send.rs around lines 19 to 20, the tracing field references a non-public struct field and doesn't use Display formatting; change the attribute to call the public accessor and use `%` for Display formatting so it reads the worker id via `self.store.worker_id()` and formats it with `%` (i.e., `fields(worker_id = %self.store.worker_id())`); keep `skip_all` and the rest of the attribute the same.
Actionable comments posted: 1
🧹 Nitpick comments (6)
executors/src/eoa/store/pending.rs (2)

Lines 40-45: Defensive watching: include the pending ZSET in WATCH keys

Even though the EOA lock should serialize writers, adding the pending ZSET strengthens atomicity if any non-locked code ever touches it. This also future-proofs the operation against accidental writes from new code paths.

Proposed change:

```diff
 fn watch_keys(&self) -> Vec<String> {
     vec![
         self.keys.optimistic_transaction_count_key_name(),
         self.keys.borrowed_transactions_hashmap_name(),
+        self.keys.pending_transactions_zset_name(),
     ]
 }
```

If you're confident all writers respect the EOA lock, this can be considered optional.

Line 34: API shape: `Option<u64>` is effectively always `Some` here

Validation rejects an empty `transactions` slice, so `new_optimistic_tx_count` will always be `Some`. If you don't anticipate empty slices reaching `operation`, consider returning `u64` directly to reduce downstream branching. If you keep `Option<u64>` for API symmetry, that's fine; just be aware it's non-None in practice.

Would you like me to open a follow-up to align the return type?

executors/src/eoa/store/atomic.rs (4)

Lines 144-145: Trim span fields to avoid logging large/PII-ish payloads

`#[tracing::instrument(..., fields(transactions = ?transactions))]` will record the full BorrowedTransactionData list via Debug, which may be large and include calldata. Prefer lighter fields (counts, nonce ranges, tx_ids) and log details at debug level when needed.

Example adjustment (attribute plus a targeted debug log inside the function):

```diff
-    #[tracing::instrument(skip_all, fields(transactions = ?transactions))]
+    #[tracing::instrument(skip_all, fields(tx_count = %transactions.len()))]
     pub async fn atomic_move_pending_to_borrowed_with_incremented_nonces(
         &self,
         transactions: &[BorrowedTransactionData],
     ) -> Result<usize, TransactionStoreError> {
+        tracing::debug!(
+            tx_ids = ?transactions.iter().map(|t| &t.transaction_id).collect::<Vec<_>>(),
+            nonces = ?transactions.iter().map(|t| t.signed_transaction.nonce()).collect::<Vec<_>>(),
+            "moving pending->borrowed with incremented nonces"
+        );
```

Lines 673-681: Consider WATCHing the health key as well

`operation` updates `eoa_health_key_name()` when available, but it's not included in `watch_keys()`. The EOA lock should serialize writers, so this is optional; adding it would catch any accidental out-of-band writes.

```diff
 fn watch_keys(&self) -> Vec<String> {
     vec![
         self.keys.optimistic_transaction_count_key_name(),
         self.keys.last_transaction_count_key_name(),
         self.keys.recycled_nonces_zset_name(),
         self.keys.manual_reset_key_name(),
+        self.keys.eoa_health_key_name(),
     ]
 }
```

Lines 721-733: Minor consistency nit: pass keys by reference uniformly

You use `pipeline.del(recycled_key)` (owned) but `pipeline.del(&manual_reset_key)` (borrowed). Both work; using `&` for both improves consistency/readability.

```diff
-        pipeline.del(recycled_key);
+        pipeline.del(&recycled_key);
```

Lines 721-726: Reset semantics: ensure downstream flows tolerate live pending/borrowed state

Resetting the optimistic and cached nonces to `current_chain_tx_count` without touching the pending/borrowed queues is likely intentional. Verify that post-reset processors won't resurrect stale items with nonces ≥ the new baseline.

If needed, we can add a follow-up operation to quarantine or recheck borrowed/pending after a reset.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)

- executors/src/eoa/store/atomic.rs (5 hunks)
- executors/src/eoa/store/mod.rs (2 hunks)
- executors/src/eoa/store/pending.rs (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- executors/src/eoa/store/mod.rs
🧰 Additional context used
🧬 Code graph analysis (1)

- executors/src/eoa/store/atomic.rs → executors/src/eoa/store/pending.rs: name (36-38, 171-173), watch_keys (40-45, 175-180), validation (47-119, 182-237), operation (121-149, 239-262); executors/src/eoa/store/mod.rs: optimistic_transaction_count_key_name (235-246), last_transaction_count_key_name (255-263), recycled_nonces_zset_name (213-224), manual_reset_key_name (281-292), now (786-788), serde_json (685), pipeline (739)
🔇 Additional comments (5)

executors/src/eoa/store/pending.rs (1)

Lines 62-66: Good guard: require a synchronized optimistic nonce before proceeding. Failing fast with `NonceSyncRequired { eoa, chain_id }` prevents state drift when the optimistic nonce is missing, and aligns with the invariant that incremented-nonce moves must start from a known baseline.

executors/src/eoa/store/atomic.rs (4)

Lines 149-166: Return handling and logging look good. Destructuring the new `(usize, Option<u64>)` and logging when the optimistic count advances is clear and non-invasive to callers returning `usize`. Once the pending.rs bug is fixed, this will consistently log the correct value.

Lines 405-412: Nice: instrumenting the chain tx count in nonce sync. Adding `current_chain_tx_count` to the span helps correlate sync events and diagnose drift.

Lines 518-547: Reset flow composition looks correct; good separation via SafeRedisTransaction. You create the transaction, execute with retry, and log outcomes; cleanly encapsulated.

Lines 682-704: Validation is cohesive; bounded growth for `nonce_resets` is good. Capturing a timestamp and trimming to the last 5 protects against unbounded health growth, and serialization errors are surfaced properly.
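The push-then-trim pattern praised above is easy to show in isolation. The following standalone sketch keeps only the last 5 reset timestamps on a plain `Vec<u64>`; the function and constant names are hypothetical, since the real logic lives inside the transaction's validation step.

```rust
// Hypothetical sketch of the bounded nonce-reset history: push the new
// timestamp, then drop the oldest entries so at most 5 remain.
const MAX_NONCE_RESETS: usize = 5;

fn record_nonce_reset(nonce_resets: &mut Vec<u64>, now_ms: u64) {
    nonce_resets.push(now_ms);
    if nonce_resets.len() > MAX_NONCE_RESETS {
        // Remove the oldest entries from the front of the history.
        let excess = nonce_resets.len() - MAX_NONCE_RESETS;
        nonce_resets.drain(0..excess);
    }
}

fn main() {
    let mut resets = vec![1, 2, 3, 4, 5];
    record_nonce_reset(&mut resets, 6);
    println!("{:?}", resets); // the oldest timestamp has been dropped
}
```

Bounding the history this way keeps the serialized health record a fixed size regardless of how often resets occur.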
```rust
let new_optimistic_tx_count = self
    .transactions
    .last()
    .map(|tx| tx.signed_transaction.nonce() + 1);

// Update optimistic tx count to highest nonce + 1, if we have a new optimistic nonce
if let Some(new_optimistic_tx_count) = new_optimistic_tx_count {
    pipeline.set(&optimistic_key, new_optimistic_tx_count);
}

(self.transactions.len(), new_optimistic_tx_count)
```
Correctness: deriving new_optimistic_tx_count from `last()` is order-dependent; use the max nonce instead

`self.transactions.last()` assumes the input slice is sorted by nonce. Validation guarantees the set of nonces is sequential, but not that `self.transactions` is ordered. If the slice is unsorted, the optimistic nonce can be set too low, leading to duplicate/non-monotonic optimistic nonces on the next enqueue. Compute from the maximum nonce instead.

Apply this diff:

```diff
- let new_optimistic_tx_count = self
-     .transactions
-     .last()
-     .map(|tx| tx.signed_transaction.nonce() + 1);
+ let new_optimistic_tx_count = self
+     .transactions
+     .iter()
+     .map(|tx| tx.signed_transaction.nonce())
+     .max()
+     .map(|n| n + 1);
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```rust
let new_optimistic_tx_count = self
    .transactions
    .iter()
    .map(|tx| tx.signed_transaction.nonce())
    .max()
    .map(|n| n + 1);

// Update optimistic tx count to highest nonce + 1, if we have a new optimistic nonce
if let Some(new_optimistic_tx_count) = new_optimistic_tx_count {
    pipeline.set(&optimistic_key, new_optimistic_tx_count);
}

(self.transactions.len(), new_optimistic_tx_count)
```
🤖 Prompt for AI Agents

In executors/src/eoa/store/pending.rs around lines 138-149, the code uses self.transactions.last() to compute new_optimistic_tx_count, which is order-dependent and wrong when the transactions are unsorted; replace that logic by iterating all transactions to compute the maximum signed_transaction.nonce(), add 1 to that max to form new_optimistic_tx_count, set that value on the pipeline (pipeline.set(&optimistic_key, new_optimistic_tx_count)), and return it (wrapped the same way as before) so the optimistic nonce always advances to max_nonce + 1 regardless of input order; preserve the Option semantics (None if no transactions) and any type conversions expected by pipeline.set and the function return.
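The order-dependence bug called out above can be demonstrated in isolation. This sketch (with illustrative names, operating on bare nonce values rather than signed transactions) contrasts the `last()`-based and `max()`-based computations on an unsorted slice:

```rust
// Order-dependent variant: only correct if the slice is sorted by nonce.
fn next_optimistic_from_last(nonces: &[u64]) -> Option<u64> {
    nonces.last().map(|n| n + 1)
}

// Order-independent variant: always advances to max nonce + 1.
fn next_optimistic_from_max(nonces: &[u64]) -> Option<u64> {
    nonces.iter().max().map(|n| n + 1)
}

fn main() {
    let unsorted = [12u64, 10, 11];
    println!("{:?}", next_optimistic_from_last(&unsorted)); // Some(12): too low
    println!("{:?}", next_optimistic_from_max(&unsorted));  // Some(13): correct
}
```

Both variants return `None` for an empty slice, preserving the `Option` semantics the review asks to keep.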
Summary by CodeRabbit
Bug Fixes
Refactor