Petera/handle resourceexhaused ingestion #686
base: feature/local-tx-reexecution
Conversation
Walkthrough
The pull request introduces significant changes across multiple files, focusing on refining the mock generation process, updating the API structure, and enhancing documentation. Key modifications include the removal of certain commands.

Changes
Sequence Diagram(s)

sequenceDiagram
participant User
participant API
participant Storage
participant EventSubscriber
User->>API: Request transaction
API->>Storage: Store transaction
Storage-->>API: Confirm transaction stored
API->>EventSubscriber: Notify transaction event
EventSubscriber-->>API: Confirm event processed
API-->>User: Respond with transaction details
Nice 🚀
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.
🛑 Comments failed to post (63)
services/requester/utils.go (1)
11-12:
⚠️ Potential issue: Add input validation and error handling.
The function should validate its input parameters and handle potential errors.
Consider this improvement:

-func replaceAddresses(script []byte, chainID flow.ChainID) []byte {
+func replaceAddresses(script []byte, chainID flow.ChainID) ([]byte, error) {
+	if len(script) == 0 {
+		return nil, fmt.Errorf("empty script")
+	}

Committable suggestion skipped: line range outside the PR's diff.
tests/web3js/eth_rate_limit_test.js (2)
5-6:
⚠️ Potential issue: Increase the process exit timeout
The current 5-second timeout might be insufficient for processing 1000 requests plus the 1-second delay between batches. Consider increasing it to at least 15 seconds to prevent premature exits.
- setTimeout(() => process.exit(0), 5000) // make sure the process exits
+ setTimeout(() => process.exit(0), 15000) // make sure the process exits

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

this.timeout(0)
setTimeout(() => process.exit(0), 15000) // make sure the process exits
28-29: 🛠️ Refactor suggestion
Make assertions more resilient
The strict equality assertions might make the test flaky due to timing and network conditions. Consider using range assertions instead.
- assert.equal(requestsMade, requestLimit, 'more requests made than the limit') - assert.equal(requestsFailed, requests - requestLimit, 'failed requests don\'t match expected value') + assert.isAtMost(requestsMade, requestLimit, 'more requests made than the limit') + assert.approximately(requestsFailed, requests - requestLimit, 5, 'failed requests significantly deviate from expected value')📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.assert.isAtMost(requestsMade, requestLimit, 'more requests made than the limit') assert.approximately(requestsFailed, requests - requestLimit, 5, 'failed requests significantly deviate from expected value')
storage/pebble/db.go (1)
12-13: 🛠️ Refactor suggestion
Consider making cache size configurable
The hardcoded 1MB cache size (1 << 20) seems quite small for a production database. A small cache could lead to increased disk I/O and potential performance issues under load.

Consider:
- Making the cache size configurable through environment variables or configuration
- Setting a more reasonable default size based on system resources
- Adding documentation about cache size implications
+// DefaultCacheSize is the default size of the block cache (64MB)
+const DefaultCacheSize = 64 << 20
+
 // OpenDB opens a new pebble database at the provided directory.
-func OpenDB(dir string) (*pebble.DB, error) {
+func OpenDB(dir string, opts ...Option) (*pebble.DB, error) {
-	cache := pebble.NewCache(1 << 20)
+	cache := pebble.NewCache(DefaultCacheSize)

Committable suggestion skipped: line range outside the PR's diff.
tests/web3js/eth_get_storage_at_test.js (1)
38-50: 🛠️ Refactor suggestion
⚠️ Potential issue: Fix variable declaration and consider refactoring repeated code.

- The variable newValue on line 39 is missing a let or const declaration, which could lead to unexpected behavior
- Consider extracting the contract value update logic into a helper function since it's used multiple times

- newValue = 100
+ const newValue = 100

Consider adding a helper function:

async function updateContractValue(contract, address, value) {
    const updateData = contract.methods.store(value).encodeABI()
    const res = await helpers.signAndSend({
        from: conf.eoa.address,
        to: address,
        data: updateData,
        value: '0',
        gasPrice: conf.minGasPrice,
    })
    assert.equal(res.receipt.status, conf.successStatus)
    return res
}

tests/web3js/eth_revert_reason_test.js (2)
80-93: 🛠️ Refactor suggestion
Refactor duplicated polling logic into a helper function
This polling logic is duplicated from the previous implementation. To improve maintainability and reduce code duplication, consider extracting it into a reusable helper function.
Create a helper function in helpers.js:

// Add to helpers.js
async function waitForTransactionReceipt(txHash, maxRetries = 30, interval = 1000) {
    let receipt = null;
    let attempts = 0;
    while (receipt == null && attempts < maxRetries) {
        const response = await callRPCMethod(
            'eth_getTransactionReceipt',
            [txHash]
        );
        receipt = response.body.result;
        if (receipt == null) {
            attempts++;
            await new Promise(resolve => setTimeout(resolve, interval));
        }
    }
    if (receipt == null) {
        throw new Error('Transaction receipt not found after maximum retries');
    }
    return { body: { result: receipt } };
}

Then simplify both polling sections to:
- let rcp = null - while (rcp == null) { - rcp = await helpers.callRPCMethod( - 'eth_getTransactionReceipt', - [txHash] - ) - if (rcp.body.result == null) { - rcp = null - } - } + let rcp = await helpers.waitForTransactionReceipt(txHash)
45-56:
⚠️ Potential issue: Add timeout and delay to prevent resource exhaustion
The polling mechanism could potentially lead to resource exhaustion issues:
- No timeout mechanism could result in infinite loops
- Continuous polling without delays could overwhelm the node
Consider implementing this improved version:
let rcp = null + const MAX_RETRIES = 30 + const POLLING_INTERVAL = 1000 // ms + let attempts = 0 // wait until the transaction is executed & indexed, and its // receipt becomes available. - while (rcp == null) { + while (rcp == null && attempts < MAX_RETRIES) { rcp = await helpers.callRPCMethod( 'eth_getTransactionReceipt', [txHash] ) if (rcp.body.result == null) { rcp = null + attempts++ + await new Promise(resolve => setTimeout(resolve, POLLING_INTERVAL)) } } + if (rcp == null) { + throw new Error('Transaction receipt not found after maximum retries') + }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.let rcp = null const MAX_RETRIES = 30 const POLLING_INTERVAL = 1000 // ms let attempts = 0 // wait until the transaction is executed & indexed, and its // receipt becomes available. while (rcp == null && attempts < MAX_RETRIES) { rcp = await helpers.callRPCMethod( 'eth_getTransactionReceipt', [txHash] ) if (rcp.body.result == null) { rcp = null attempts++ await new Promise(resolve => setTimeout(resolve, POLLING_INTERVAL)) } } if (rcp == null) { throw new Error('Transaction receipt not found after maximum retries') }
storage/pebble/register_storage_test.go (1)
93-160: 🛠️ Refactor suggestion
Add test cases for error handling in storage getter.
The current tests don't verify the behavior when the storage getter function returns an error.
Consider adding a test case like this:
runDB("should handle storage getter errors", t, func(t *testing.T, db *Storage) { t.Parallel() storageGet := func(id flow.RegisterID, height uint64) (flow.RegisterValue, error) { return nil, fmt.Errorf("storage error") } s := NewStorageSnapshot(storageGet, 0) v, err := s.GetValue(owner, key) require.Error(t, err) require.Empty(t, v) // Verify the error is not cached count := 0 storageGet = func(id flow.RegisterID, height uint64) (flow.RegisterValue, error) { count++ return []byte{1}, nil } s = NewStorageSnapshot(storageGet, 0) v, err = s.GetValue(owner, key) require.NoError(t, err) require.Equal(t, []byte{1}, v) require.Equal(t, 1, count) })tests/integration_test.go (1)
63-74: 🛠️ Refactor suggestion
Consider adding resource exhaustion test scenarios
Given that this PR focuses on handling resource exhaustion during ingestion, consider enhancing this test configuration to:
- Test with non-zero gas prices to simulate real-world conditions
- Add test cases for resource exhaustion scenarios (e.g., memory limits, transaction queue limits)
- Verify proper error handling when resources are exhausted
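For illustration, a minimal, self-contained sketch of a resource-exhaustion test; the fetchWithRetry helper and the use of gRPC status codes here are assumptions made for the example, not the gateway's actual ingestion API:

package ingestion_test

import (
	"testing"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// fetchWithRetry retries fn while it returns ResourceExhausted, up to maxRetries attempts.
// It stands in for whatever retry logic the ingestion engine uses.
func fetchWithRetry(fn func() error, maxRetries int) error {
	var err error
	for i := 0; i <= maxRetries; i++ {
		err = fn()
		if status.Code(err) != codes.ResourceExhausted {
			return err
		}
	}
	return err
}

func TestResourceExhaustedIsRetried(t *testing.T) {
	calls := 0
	fn := func() error {
		calls++
		if calls < 3 {
			return status.Error(codes.ResourceExhausted, "rate limited by access node")
		}
		return nil
	}
	if err := fetchWithRetry(fn, 5); err != nil {
		t.Fatalf("expected retries to eventually succeed, got: %v", err)
	}
	if calls != 3 {
		t.Fatalf("expected 3 attempts, got %d", calls)
	}
}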
go.mod (1)
198-202: 💡 Codebase verification
Security packages need version updates
The current versions in go.mod are not the latest available:
- golang.org/x/crypto v0.28.0 → v0.29.0 available
- golang.org/x/sys v0.26.0 → v0.27.0 available
- golang.org/x/text v0.19.0 → v0.20.0 available

Consider updating to the latest stable versions for improved security and bug fixes.
🔗 Analysis chain
Security-related package updates
The updates to core security packages (crypto, sys, text) are good security practices. However, let's verify if these versions are the latest stable releases.

Since we can't use go commands in the sandbox environment, let's try to verify the versions against the official Go proxy index:

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Check if we're using the latest stable versions of security-critical packages echo "Checking latest versions of security packages:" go list -m -versions golang.org/x/crypto | tail -n 1 go list -m -versions golang.org/x/sys | tail -n 1 go list -m -versions golang.org/x/text | tail -n 1Length of output: 541
Script:
#!/bin/bash # Query the Go proxy index for latest versions echo "Checking golang.org/x/crypto versions:" curl -s "https://proxy.golang.org/golang.org/x/crypto/@v/list" | sort -V | tail -n 5 echo -e "\nChecking golang.org/x/sys versions:" curl -s "https://proxy.golang.org/golang.org/x/sys/@v/list" | sort -V | tail -n 5 echo -e "\nChecking golang.org/x/text versions:" curl -s "https://proxy.golang.org/golang.org/x/text/@v/list" | sort -V | tail -n 5Length of output: 632
tests/helpers.go (1)
141-158: 💡 Codebase verification
Rate limit increase requires careful consideration
The rate limit change from 50 to 500 requests/second in the test configuration requires attention:
- The default production rate limit is 50 req/s (from cmd/run/cmd.go)
- The README documents the default as 50 req/s for client requests
- The test eth_rate_limit_test.js specifically tests rate limiting behavior
- The change represents a 10x increase which could mask potential rate-limiting issues
Consider:
- Keeping the test rate limit aligned with production defaults (50)
- Or documenting why tests need the increased limit
- Ensuring rate limit tests still effectively validate the rate limiting behavior
🔗 Analysis chain
LGTM! Verify rate limit increase impact
The configuration changes look good, particularly the increased rate limit which should help handle resource exhaustion. The use of t.TempDir() for the database directory is a good testing practice.

Let's verify the rate limit change impact:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for any rate limit related configurations or constants in the codebase # to ensure consistency and identify potential impacts rg -g '!{vendor,node_modules}/**' -A 3 'RateLimit.*=.*[0-9]+'Length of output: 63
Script:
#!/bin/bash # Let's try a broader search for rate limit related code rg -i "ratelimit|rate.*limit" -g '!{vendor,node_modules}/**' # Also search for the specific config struct field ast-grep --pattern 'RateLimit $$$'Length of output: 5800
storage/pebble/storage.go (2)
19-23:
⚠️ Potential issue: Add nil check for db in the New function

The New function does not check if the provided db is nil. If db is nil, it could lead to nil pointer dereferences in methods that use s.db. Please add a nil check to prevent potential runtime errors.

Apply this diff to add the nil check:

 func New(db *pebble.DB, log zerolog.Logger) *Storage {
+	if db == nil {
+		return nil
+	}
 	return &Storage{
 		db:  db,
 		log: log,
 	}
 }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.func New(db *pebble.DB, log zerolog.Logger) *Storage { if db == nil { return nil } return &Storage{ db: db, log: log, }
31-33:
⚠️ Potential issue: Ensure batch is not nil before using it

In the set method, batch is used without checking if it's nil. If batch is nil, calling batch.Set will result in a nil pointer dereference. Consider adding a check or enforcing that batch must not be nil.

Apply this diff to add a nil check for batch:

 func (s *Storage) set(keyCode byte, key []byte, value []byte, batch *pebble.Batch) error {
 	prefixedKey := makePrefix(keyCode, key)
+	if batch == nil {
+		return errors.New("batch cannot be nil")
+	}
 	return batch.Set(prefixedKey, value, nil)
 }

Alternatively, update the method's documentation to specify that batch must not be nil.

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.prefixedKey := makePrefix(keyCode, key) if batch == nil { return errors.New("batch cannot be nil") } return batch.Set(prefixedKey, value, nil)
services/replayer/blocks_provider.go (3)
28-38:
⚠️ Potential issue: Handle errors when retrieving block hashes
In the anonymous function that retrieves block hashes, errors are being silently ignored by returning an empty hash. This could lead to unexpected behavior downstream if the hash is expected to be valid.
Consider modifying the function to return an error when a block cannot be retrieved or its hash cannot be computed. This allows for proper error handling and prevents silent failures.
func (bs *blockSnapshot) BlockContext() (evmTypes.BlockContext, error) { return blocks.NewBlockContext( bs.chainID, bs.block.Height, bs.block.Timestamp, - func(n uint64) gethCommon.Hash { + func(n uint64) (gethCommon.Hash, error) { block, err := bs.blocks.GetByHeight(n) if err != nil { - return gethCommon.Hash{} + return gethCommon.Hash{}, err } blockHash, err := block.Hash() if err != nil { - return gethCommon.Hash{} + return gethCommon.Hash{}, err } - return blockHash + return blockHash, nil }, bs.block.PrevRandao, bs.tracer, ) }This change will require updating
blocks.NewBlockContext
to handle the error returned by the hash function.Committable suggestion skipped: line range outside the PR's diff.
22-42: 🛠️ Refactor suggestion
Propagate errors from BlockContext

The BlockContext method currently does not handle errors returned by blocks.NewBlockContext, which might occur due to issues in the block data or parameters.

Consider checking and propagating errors returned by blocks.NewBlockContext
to ensure any issues are caught and handled appropriately.func (bs *blockSnapshot) BlockContext() (evmTypes.BlockContext, error) { - return blocks.NewBlockContext( + blockCtx, err := blocks.NewBlockContext( bs.chainID, bs.block.Height, bs.block.Timestamp, func(n uint64) gethCommon.Hash { // existing code... }, bs.block.PrevRandao, bs.tracer, ) + if err != nil { + return nil, err + } + return blockCtx, nil }Committable suggestion skipped: line range outside the PR's diff.
65-76:
⚠️ Potential issue: Ensure thread safety of BlocksProvider

The BlocksProvider maintains latestBlock, which is modified in OnBlockReceived and read in GetSnapshotAt. If BlocksProvider is accessed by multiple goroutines, this could lead to data races.

Consider adding synchronization mechanisms to protect access to latestBlock. Using a sync.RWMutex
can ensure that reads and writes are properly synchronized.+import ( + "fmt" + "sync" + + "github.com/onflow/flow-evm-gateway/models" + // other imports... +) type BlocksProvider struct { blocks storage.BlockIndexer chainID flowGo.ChainID tracer *tracers.Tracer latestBlock *models.Block + mu sync.RWMutex } func (bp *BlocksProvider) OnBlockReceived(block *models.Block) error { + bp.mu.Lock() if bp.latestBlock != nil && bp.latestBlock.Height != (block.Height-1) { + bp.mu.Unlock() return fmt.Errorf( "%w: received new block: %d, non-sequential of latest block: %d", models.ErrInvalidHeight, block.Height, bp.latestBlock.Height, ) } bp.latestBlock = block + bp.mu.Unlock() return nil } func (bp *BlocksProvider) GetSnapshotAt(height uint64) ( evmTypes.BlockSnapshot, error, ) { + bp.mu.RLock() if bp.latestBlock != nil && bp.latestBlock.Height == height { snapshot := &blockSnapshot{ BlocksProvider: bp, block: *bp.latestBlock, } + bp.mu.RUnlock() return snapshot, nil } + bp.mu.RUnlock() block, err := bp.blocks.GetByHeight(height) if err != nil { return nil, err } return &blockSnapshot{ BlocksProvider: bp, block: *block, }, nil }Committable suggestion skipped: line range outside the PR's diff.
services/requester/remote_cadence_arch.go (1)
29-34:
⚠️ Potential issue: Potential Data Race on cachedCalls Map

The cachedCalls map is accessed and modified without synchronization. If instances of RemoteCadenceArch are used concurrently, this could lead to data races and undefined behavior.

Apply this diff to add synchronization using a mutex:
+import ( + "sync" +) type RemoteCadenceArch struct { blockHeight uint64 client *CrossSporkClient chainID flow.ChainID cachedCalls map[string]evmTypes.Data + mu sync.RWMutex }Update the
Run
method to use the mutex:func (rca *RemoteCadenceArch) Run(input []byte) ([]byte, error) { key := hex.EncodeToString(crypto.Keccak256(input)) + rca.mu.RLock() if result, ok := rca.cachedCalls[key]; ok { + rca.mu.RUnlock() return result, nil } + rca.mu.RUnlock() evmResult, err := rca.runCall(input) if err != nil { return nil, err } + rca.mu.Lock() rca.cachedCalls[key] = evmResult.ReturnedData + rca.mu.Unlock() return evmResult.ReturnedData, nil }Ensure all other accesses to
cachedCalls
are properly synchronized.

Committable suggestion skipped: line range outside the PR's diff.
services/evm/executor.go (2)
128-129:
⚠️ Potential issueCorrect block height comparison for EVM
BLOCKHASH
opcodeThe EVM specification states that
BLOCKHASH
should return zero when the block number is not in the range[current_block_number - 256, current_block_number - 1]
. The current comparison excludes the block atcurrent_block_number - 256
. Adjust the condition to include it.Apply this diff to include the block at
current_block_number - 256
:// If the given block height is more than 256 blocks // in the past, return an empty block hash. - if s.block.Height-n > 256 { + if s.block.Height-n >= 256 { return common.Hash{} }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.if s.block.Height-n >= 256 { return common.Hash{}
93-95:
⚠️ Potential issuePrevent potential nil pointer dereference in
res.Invalid()
Before calling
res.Invalid()
, ensure thatres
is not nil to avoid a runtime panic ifbv.DirectCall()
orbv.RunTransaction()
return a nil result without an error.Apply this diff to check if
res
is nil:if err != nil { return err } + if res == nil { + return fmt.Errorf("nil result for transaction %s", tx.Hash()) + } // we should never produce invalid transaction, since if the transaction was emitted from the evm core // it must have either been successful or failed, invalid transactions are not emitted if res.Invalid() {Committable suggestion skipped: line range outside the PR's diff.
services/replayer/call_tracer_collector.go (2)
95-112: 🛠️ Refactor suggestion
Refactor duplicated panic recovery code in tracing methods.
The deferred panic recovery blocks in OnTxStart, OnTxEnd, OnEnter, OnExit, and OnLog are duplicated. Refactoring this code into a helper function will reduce duplication and improve maintainability.

You can create a helper function for panic recovery:
func recoverPanic(l zerolog.Logger, context string) { if r := recover(); r != nil { err, ok := r.(error) if !ok { err = fmt.Errorf("panic: %v", r) } l.Err(err).Stack().Msg(fmt.Sprintf("%s trace collection failed", context)) } }Then, update the deferred functions in each method:
wrapped.OnTxStart = func(vm *tracing.VMContext, tx *types.Transaction, from common.Address) { - defer func() { - if r := recover(); r != nil { - err, ok := r.(error) - if !ok { - err = fmt.Errorf("panic: %v", r) - } - l.Err(err).Stack().Msg("OnTxStart trace collection failed") - } - }() + defer recoverPanic(l, "OnTxStart") // existing code } wrapped.OnTxEnd = func(receipt *types.Receipt, err error) { - defer func() { /* ... */ }() + defer recoverPanic(l, "OnTxEnd") // existing code } // Repeat for OnEnter, OnExit, and OnLogAlso applies to: 114-127, 143-163, 165-178, 180-193
41-41:
⚠️ Potential issue: Ensure thread safety when accessing the resultsByTxID map.

The resultsByTxID map is accessed and modified without synchronization. If CallTracerCollector is used concurrently, this could lead to data races and panics. Introduce synchronization to ensure thread safety.

Add a mutex to the struct:
type CallTracerCollector struct { tracer *tracers.Tracer + mu sync.Mutex resultsByTxID map[common.Hash]json.RawMessage logger zerolog.Logger }
Modify the
Collect
method to use the mutex:func (ct *CallTracerCollector) Collect(txID common.Hash) (json.RawMessage, error) { + ct.mu.Lock() + defer ct.mu.Unlock() // collect the trace result result, found := ct.resultsByTxID[txID] if !found { return nil, fmt.Errorf("trace result for tx: %s, not found", txID.String()) } // remove the result delete(ct.resultsByTxID, txID) return result, nil }And in the
OnTxEnd
function withinNewSafeTxTracer
:// collect results for the tracer res, err := ct.tracer.GetResult() if err != nil { l.Error().Err(err).Msg("failed to produce trace results") return } +ct.mu.Lock() ct.resultsByTxID[receipt.TxHash] = res +ct.mu.Unlock() // reset tracing to have fresh state if err := ct.ResetTracer(); err != nil { l.Error().Err(err).Msg("failed to reset tracer") return }Ensure you import the
sync
package:import ( // existing imports "sync" )This will protect concurrent access to
resultsByTxID
and prevent potential race conditions.Also applies to: 73-84, 128-141
storage/pebble/receipts.go (2)
93-93:
⚠️ Potential issue: Concurrency concern: Accessing shared resources without synchronization in GetByTransactionID

The removal of mutex locks in the GetByTransactionID method may lead to data races when accessed concurrently. Accessing r.store.get without synchronization could cause inconsistent state or panics in a multi-threaded environment.

Consider reintroducing synchronization mechanisms or ensuring that the underlying r.store operations are thread-safe.
112-112:
⚠️ Potential issue: Concurrency concern: Unsynchronized access in GetByBlockHeight

The GetByBlockHeight method no longer uses mutex locks after the changes. This might introduce race conditions if the method is called concurrently by multiple goroutines.

Ensure that concurrent accesses to this method are safe, possibly by adding synchronization or verifying the thread safety of the underlying storage.
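As a rough illustration of the locking pattern being suggested, here is a self-contained sketch; the receiptStore type is hypothetical and only demonstrates guarding reads with a sync.RWMutex, it is not the gateway's pebble-backed storage:

package main

import (
	"fmt"
	"sync"
)

// receiptStore is a hypothetical, simplified store used only to illustrate the locking pattern.
type receiptStore struct {
	mu   sync.RWMutex
	data map[string][]byte
}

// GetByTransactionID takes a read lock so concurrent readers don't race with writers.
func (r *receiptStore) GetByTransactionID(txID string) ([]byte, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	v, ok := r.data[txID]
	return v, ok
}

// Store takes the write lock for mutations.
func (r *receiptStore) Store(txID string, raw []byte) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.data[txID] = raw
}

func main() {
	s := &receiptStore{data: map[string][]byte{}}
	s.Store("0xabc", []byte(`{"status":1}`))
	if v, ok := s.GetByTransactionID("0xabc"); ok {
		fmt.Println(string(v))
	}
}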
storage/pebble/register_storage.go (1)
66-68: 🛠️ Refactor suggestion
Avoid variable shadowing by renaming the recovered variable
In the deferred function, the variable r shadows the method receiver r, which can lead to confusion. Renaming the variable enhances code clarity.
defer func() { - if r := recover(); r != nil { - err = fmt.Errorf("panic: %v", r) + if rec := recover(); rec != nil { + err = fmt.Errorf("panic: %v", rec) } }()📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.if rec := recover(); rec != nil { err = fmt.Errorf("panic: %v", rec) }
services/ingestion/engine.go (1)
152-157: 🛠️ Refactor suggestion
Avoid using log.Fatal in defer; handle the batch.Close error appropriately

Using e.log.Fatal inside a defer function may cause the application to exit unexpectedly. It's advisable to handle the error without terminating the application abruptly, especially within a defer statement.

Apply this diff to handle the error gracefully:
defer func(batch *pebbleDB.Batch) { if err := batch.Close(); err != nil { - e.log.Fatal().Err(err).Msg("failed to close batch") + e.log.Error().Err(err).Msg("failed to close batch") + // Consider propagating the error or handling it appropriately } }(batch)Committable suggestion skipped: line range outside the PR's diff.
api/debug.go (1)
436-445:
⚠️ Potential issue: Prevent nil pointer dereference in isDefaultCallTracer

In the isDefaultCallTracer function, you dereference *config.Tracer without checking if config.Tracer is nil. This could cause a nil pointer dereference if config.Tracer is nil.

Apply this diff to fix the issue:
func isDefaultCallTracer(config *tracers.TraceConfig) bool { if config == nil { return false } + if config.Tracer == nil { + return false + } if *config.Tracer != replayer.TracerName { return false } tracerConfig := json.RawMessage(replayer.TracerConfig) return slices.Equal(config.TracerConfig, tracerConfig) }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.if config == nil { return false } if config.Tracer == nil { return false } if *config.Tracer != replayer.TracerName { return false } tracerConfig := json.RawMessage(replayer.TracerConfig) return slices.Equal(config.TracerConfig, tracerConfig)
services/requester/requester.go (1)
423-428:
⚠️ Potential issue: Prevent potential overflow in gas limit calculation

In the EstimateGas function, there is a possibility of integer overflow when calculating mid = failingGasLimit * 2. To ensure robustness, add a check to prevent overflow for large gas limit values.

Apply this diff to address the potential overflow:
if mid > failingGasLimit*2 { + if failingGasLimit > math.MaxUint64/2 { + mid = passingGasLimit + } else { mid = failingGasLimit * 2 + } }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.if mid > failingGasLimit*2 { // Most txs don't need much higher gas limit than their gas used, and most txs don't // require near the full block limit of gas, so the selection of where to bisect the // range here is skewed to favor the low side. if failingGasLimit > math.MaxUint64/2 { mid = passingGasLimit } else { mid = failingGasLimit * 2 } }
services/ingestion/event_subscriber.go (4)
278-282: 🛠️ Refactor suggestion
Implement Exponential Backoff for Rate Limiting
Currently, when a ResourceExhausted error occurs, the code sleeps for 100 milliseconds before retrying. Implementing an exponential backoff strategy can be more effective in handling rate limits imposed by the access node.

Apply this change to implement exponential backoff:
- time.Sleep(100 * time.Millisecond) + // Implement exponential backoff with a maximum delay + maxBackoff := time.Second * 5 + if retryCount < 10 { + backoffDuration := time.Duration(math.Pow(2, float64(retryCount))) * time.Millisecond + if backoffDuration > maxBackoff { + backoffDuration = maxBackoff + } + time.Sleep(backoffDuration) + retryCount++ + } else { + return 0, fmt.Errorf("exceeded maximum retries due to rate limiting") + }Ensure you initialize and increment
retryCount
appropriately within the retry loop.Also applies to: 287-293, 362-368, 377-383
166-174:
⚠️ Potential issue: Potential Infinite Recovery Loop

In the subscribe method, if evmEvents.Err != nil or r.recovery is true, and after calling recover, r.recovery remains true, the loop continues without advancing. Ensure that there is logic to prevent an infinite loop in recovery mode.

Consider adding a maximum retry count or timeout mechanism to prevent indefinite looping. For example:
retryLimit := 5 retryCount := 0 for { // existing logic if r.recovery { retryCount++ if retryCount > retryLimit { eventsChan <- models.NewBlockEventsError(fmt.Errorf("exceeded maximum recovery attempts")) return } continue } // reset retryCount if recovery is successful retryCount = 0 }
208-229: 🛠️ Refactor suggestion
Handle Context Cancellation in Backfill Loop
In the backfill method, the loop does not explicitly check for context cancellation, which could lead to unnecessary iterations after the context is canceled. Add context cancellation checks to exit promptly when the context is done.

Apply this change:
for { select { case <-ctx.Done(): r.logger.Info().Msg("Backfill cancelled") return default: // Existing logic } }This ensures that the backfill process responds promptly to cancellation signals.
347-349: 🛠️ Refactor suggestion
Remove Unused Parameters in Function Signature
The parameters blockExecutedEventType and txExecutedEventType are passed to accumulateBlockEvents but are not used within the function. Remove unused parameters to clean up the code.

Apply this diff to remove the unused parameters:
-func (r *RPCEventSubscriber) accumulateBlockEvents( - ctx context.Context, - block flow.BlockEvents, - blockExecutedEventType string, - txExecutedEventType string, -) (models.BlockEvents, error) { +func (r *RPCEventSubscriber) accumulateBlockEvents( + ctx context.Context, + block flow.BlockEvents, +) (models.BlockEvents, error) {Also, ensure that any calls to this function are updated accordingly.
Committable suggestion skipped: line range outside the PR's diff.
services/ingestion/engine_test.go (1)
657-659: 🛠️ Refactor suggestion
Use assert.NoError instead of require.NoError inside deferred functions

Using require.NoError within a defer function can cause the test to fail after it has already completed, which can lead to confusing test results and make debugging more difficult. Replace require.NoError with assert.NoError to log any errors without abruptly failing the test.

Apply this diff to make the change:
defer func() { - require.NoError(t, batch.Close()) + assert.NoError(t, batch.Close()) }()📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.defer func() { assert.NoError(t, batch.Close()) }()
tests/web3js/debug_traces_test.js (1)
41-42:
⚠️ Potential issue: Use numbers instead of BigInt literals in assert.lengthOf

The assert.lengthOf function expects a number as the length. Using BigInt literals (9856n, 9806n) may lead to unexpected behavior. Consider changing them to regular numbers if the values are within the safe integer range.

Apply this diff to fix the issue:
- assert.lengthOf(txTrace.input, 9856n) - assert.lengthOf(txTrace.output, 9806n) + assert.lengthOf(txTrace.input, 9856) + assert.lengthOf(txTrace.output, 9806)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.assert.lengthOf(txTrace.input, 9856) assert.lengthOf(txTrace.output, 9806)
storage/index_test.go (24)
406-409:
⚠️ Potential issueClose batches after committing in
TestBloomsForBlockRange
Ensure batches are closed after committing when storing receipts in the blooms test.
Apply this diff:
batch := s.DB.NewBatch() err := s.ReceiptIndexer.Store([]*models.Receipt{r}, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) + err = batch.Close() + s.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.err := s.ReceiptIndexer.Store([]*models.Receipt{r}, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) err = batch.Close() s.Require().NoError(err)
289-292:
⚠️ Potential issueClose batches after commit in
TestStoreReceipt
In the
ReceiptTestSuite
, ensure batches are closed after committing when storing receipts.Apply this diff:
batch := s.DB.NewBatch() err := s.ReceiptIndexer.Store([]*models.Receipt{receipt}, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) + err = batch.Close() + s.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.err := s.ReceiptIndexer.Store([]*models.Receipt{receipt}, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) err = batch.Close() s.Require().NoError(err)
97-101:
⚠️ Potential issueClose batches after committing in
TestGet
In the
TestGet
method ofBlockTestSuite
, the batch is not closed after committing. This could lead to resource leaks.Apply this diff:
batch := b.DB.NewBatch() err := b.Blocks.Store(height+1, flowID, block, batch) b.Require().NoError(err) err = batch.Commit(pebble2.Sync) b.Require().NoError(err) + err = batch.Close() + b.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.err := b.Blocks.Store(height+1, flowID, block, batch) b.Require().NoError(err) err = batch.Commit(pebble2.Sync) b.Require().NoError(err) err = batch.Close() b.Require().NoError(err)
588-592:
⚠️ Potential issueClose batches after committing in
TransactionTestSuite
In
TestStoreTransaction
, ensure the batch is closed after committing.Apply this diff:
batch := s.DB.NewBatch() err := s.TransactionIndexer.Store(tx, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) + err = batch.Close() + s.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.err := s.TransactionIndexer.Store(tx, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) err = batch.Close() s.Require().NoError(err)
449-451:
⚠️ Potential issueClose batches after committing when storing multiple receipts
In storing multiple receipts per block, close the batch after committing.
Apply this diff:
batch := s.DB.NewBatch() s.Require().NoError(s.ReceiptIndexer.Store(receipts, batch)) err := batch.Commit(pebble2.Sync) s.Require().NoError(err) + err = batch.Close() + s.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.s.Require().NoError(s.ReceiptIndexer.Store(receipts, batch)) err := batch.Commit(pebble2.Sync) s.Require().NoError(err) err = batch.Close() s.Require().NoError(err)
343-346:
⚠️ Potential issueClose batches after committing when storing single receipt
Ensure batches are closed after committing in
TestGetReceiptByTransactionID
.Apply this diff:
batch := s.DB.NewBatch() err := s.ReceiptIndexer.Store([]*models.Receipt{receipt}, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) + err = batch.Close() + s.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.err := s.ReceiptIndexer.Store([]*models.Receipt{receipt}, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) err = batch.Close() s.Require().NoError(err)
134-141:
⚠️ Potential issueEnsure batch closure after commit in
TestStore
In
TestStore
, the batch is not closed after committing. Closing the batch is necessary to release resources.Apply this diff:
batch := b.DB.NewBatch() err := b.Blocks.Store(2, flowID, block, batch) b.Require().NoError(err) err = batch.Commit(pebble2.Sync) b.Require().NoError(err) + err = batch.Close() + b.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.batch := b.DB.NewBatch() err := b.Blocks.Store(2, flowID, block, batch) b.Require().NoError(err) err = batch.Commit(pebble2.Sync) b.Require().NoError(err) err = batch.Close() b.Require().NoError(err)
501-504:
⚠️ Potential issueEnsure batch closure in single height range tests
Close batches after committing in the
single height range
test case.Apply this diff:
batch := s.DB.NewBatch() s.Require().NoError(s.ReceiptIndexer.Store(receipts, batch)) err := batch.Commit(pebble2.Sync) s.Require().NoError(err) + err = batch.Close() + s.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.s.Require().NoError(s.ReceiptIndexer.Store(receipts, batch)) err := batch.Commit(pebble2.Sync) s.Require().NoError(err) err = batch.Close() s.Require().NoError(err)
601-604:
⚠️ Potential issueClose batches after committing when storing transactions
In
TestGetTransaction
, close the batch after committing.Apply this diff:
batch := s.DB.NewBatch() err := s.TransactionIndexer.Store(tx, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) + err = batch.Close() + s.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.err := s.TransactionIndexer.Store(tx, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) err = batch.Close() s.Require().NoError(err)
664-668:
⚠️ Potential issueClose batches after committing in
TraceTestSuite
In
TestStore
, ensure batches are closed after committing to prevent resource leaks.Apply this diff:
batch := s.DB.NewBatch() err := s.TraceIndexer.StoreTransaction(id, trace, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) + err = batch.Close() + s.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.err := s.TraceIndexer.StoreTransaction(id, trace, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) err = batch.Close() s.Require().NoError(err)
29-36:
⚠️ Potential issueEnsure batches are closed after commit to prevent resource leaks
In
TestBlocks
, the batch created withdb.NewBatch()
is not closed after committing. Batches should be closed to release resources and prevent memory leaks.Apply this diff to close the batch:
err := bl.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch) require.NoError(t, err) err = batch.Commit(pebble2.Sync) require.NoError(t, err) + err = batch.Close() + require.NoError(t, err) suite.Run(t, &BlockTestSuite{ Blocks: bl, DB: db, })📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.batch := db.NewBatch() err := bl.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch) require.NoError(t, err) err = batch.Commit(pebble2.Sync) require.NoError(t, err) err = batch.Close() require.NoError(t, err)
262-266:
⚠️ Potential issueClose batches after committing when retrieving Cadence IDs
In
Cadence ID from EVM height
test case, batches should be closed after committing.Apply this diff:
batch := b.DB.NewBatch() err := b.Blocks.Store(uint64(i), cadenceIDs[i], mocks.NewBlock(evmHeight), batch) b.Require().NoError(err) err = batch.Commit(pebble2.Sync) b.Require().NoError(err) + err = batch.Close() + b.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.batch := b.DB.NewBatch() err := b.Blocks.Store(uint64(i), cadenceIDs[i], mocks.NewBlock(evmHeight), batch) b.Require().NoError(err) err = batch.Commit(pebble2.Sync) b.Require().NoError(err) err = batch.Close() b.Require().NoError(err)
181-185:
⚠️ Potential issueClose batches after commit in
TestHeights
In
TestHeights
, batches are not closed after committing. This practice should be corrected to prevent resource leaks.Apply this diff:
batch := b.DB.NewBatch() err := b.Blocks.Store(lastHeight+10, flow.Identifier{byte(i)}, mocks.NewBlock(lastHeight), batch) b.Require().NoError(err) err = batch.Commit(pebble2.Sync) b.Require().NoError(err) + err = batch.Close() + b.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.err := b.Blocks.Store(lastHeight+10, flow.Identifier{byte(i)}, mocks.NewBlock(lastHeight), batch) b.Require().NoError(err) err = batch.Commit(pebble2.Sync) b.Require().NoError(err) err = batch.Close() b.Require().NoError(err)
226-230:
⚠️ Potential issueClose batches after commit when updating latest Cadence height
In the
last Cadence height
test case, batches are not closed after committing.Apply this diff:
batch := b.DB.NewBatch() err := b.Blocks.Store(lastHeight, flow.Identifier{byte(i)}, mocks.NewBlock(lastHeight-10), batch) b.Require().NoError(err) err = batch.Commit(pebble2.Sync) b.Require().NoError(err) + err = batch.Close() + b.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.batch := b.DB.NewBatch() err := b.Blocks.Store(lastHeight, flow.Identifier{byte(i)}, mocks.NewBlock(lastHeight-10), batch) b.Require().NoError(err) err = batch.Commit(pebble2.Sync) b.Require().NoError(err) err = batch.Close() b.Require().NoError(err)
155-160:
⚠️ Potential issueClose batches within loops to prevent resource leaks
In the loop within
TestStore
, batches are created and committed but not closed. Ensure that each batch is closed after committing.Apply this diff:
for i := 0; i < 10; i++ { batch := b.DB.NewBatch() err := b.Blocks.Store(uint64(i+5), flow.Identifier{byte(i)}, mocks.NewBlock(uint64(10+i)), batch) b.Require().NoError(err) err = batch.Commit(pebble2.Sync) b.Require().NoError(err) + err = batch.Close() + b.Require().NoError(err) }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.batch := b.DB.NewBatch() err := b.Blocks.Store(uint64(i+5), flow.Identifier{byte(i)}, mocks.NewBlock(uint64(10+i)), batch) b.Require().NoError(err) err = batch.Commit(pebble2.Sync) b.Require().NoError(err) err = batch.Close() b.Require().NoError(err)
205-210:
⚠️ Potential issueClose batches after commit when storing blocks
Ensure batches are closed after committing in the
get height by ID
test case withinTestHeights
.Apply this diff:
batch := b.DB.NewBatch() err := b.Blocks.Store(uint64(i), cadenceIDs[i], blocks[i], batch) b.Require().NoError(err) err = batch.Commit(pebble2.Sync) b.Require().NoError(err) + err = batch.Close() + b.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.batch := b.DB.NewBatch() err := b.Blocks.Store(uint64(i), cadenceIDs[i], blocks[i], batch) b.Require().NoError(err) err = batch.Commit(pebble2.Sync) b.Require().NoError(err) err = batch.Close() b.Require().NoError(err)
243-247:
⚠️ Potential issueClose batches after committing when mapping Cadence heights
In
Cadence height from EVM height
test case, ensure batches are closed after committing.Apply this diff:
batch := b.DB.NewBatch() err := b.Blocks.Store(cadenceHeights[i], flow.Identifier{byte(i)}, mocks.NewBlock(evmHeight), batch) b.Require().NoError(err) err = batch.Commit(pebble2.Sync) b.Require().NoError(err) + err = batch.Close() + b.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.batch := b.DB.NewBatch() err := b.Blocks.Store(cadenceHeights[i], flow.Identifier{byte(i)}, mocks.NewBlock(evmHeight), batch) b.Require().NoError(err) err = batch.Commit(pebble2.Sync) b.Require().NoError(err) err = batch.Close() b.Require().NoError(err)
313-316:
⚠️ Potential issueClose batches after committing in multiple receipt storage
When storing multiple receipts, ensure the batch is closed after committing.
Apply this diff:
batch := s.DB.NewBatch() err := s.ReceiptIndexer.Store(receipts, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) + err = batch.Close() + s.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.err := s.ReceiptIndexer.Store(receipts, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) err = batch.Close() s.Require().NoError(err)
677-682:
⚠️ Potential issueClose batches within loops in
TestStore
When overwriting existing traces in a loop, ensure each batch is closed after committing.
Apply this diff:
for i := 0; i < 2; i++ { id := common.Hash{0x01} trace := json.RawMessage(`{ "test": "foo" }`) batch := s.DB.NewBatch() err := s.TraceIndexer.StoreTransaction(id, trace, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) + err = batch.Close() + s.Require().NoError(err) }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.batch := s.DB.NewBatch() err := s.TraceIndexer.StoreTransaction(id, trace, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) err = batch.Close() s.Require().NoError(err)
694-698:
⚠️ Potential issueClose batches after committing when testing trace retrieval
In
TestGet
, ensure the batch is closed after committing.Apply this diff:
batch := s.DB.NewBatch() err := s.TraceIndexer.StoreTransaction(id, trace, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) + err = batch.Close() + s.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.err := s.TraceIndexer.StoreTransaction(id, trace, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) err = batch.Close() s.Require().NoError(err)
333-334:
⚠️ Potential issueClose batches even on error during receipt storage
Even when an error occurs during receipt storage, the batch should be closed to prevent resource leaks.
Apply this diff:
batch := s.DB.NewBatch() err := s.ReceiptIndexer.Store(receipts, batch) s.Require().EqualError(err, "can't store receipts for multiple heights") + err = batch.Close() + s.Require().NoError(err)Committable suggestion skipped: line range outside the PR's diff.
48-59:
⚠️ Potential issueClose batches after use to prevent resource leaks in
TestReceipts
In
TestReceipts
, the batch created is not closed after committing. Ensure that the batch is closed to free resources.Apply this diff:
err = bl.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch) require.NoError(t, err) err = bl.Store(30, flow.Identifier{0x1}, mocks.NewBlock(10), batch) require.NoError(t, err) err = bl.Store(30, flow.Identifier{0x1}, mocks.NewBlock(300), batch) require.NoError(t, err) err = batch.Commit(pebble2.Sync) require.NoError(t, err) + err = batch.Close() + require.NoError(t, err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.batch := db.NewBatch() err := bl.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch) require.NoError(t, err) err = bl.Store(30, flow.Identifier{0x1}, mocks.NewBlock(10), batch) // update first and latest height require.NoError(t, err) err = bl.Store(30, flow.Identifier{0x1}, mocks.NewBlock(300), batch) // update latest require.NoError(t, err) err = batch.Commit(pebble2.Sync) require.NoError(t, err) err = batch.Close() require.NoError(t, err)
366-379:
⚠️ Potential issueClose batches after committing when storing receipts within tests
In
TestGetReceiptByBlockHeight
, batches need to be closed after committing to prevent resource leaks.Apply this diff:
batch := s.DB.NewBatch() err := s.ReceiptIndexer.Store([]*models.Receipt{receipt}, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) + err = batch.Close() + s.Require().NoError(err) batch = s.DB.NewBatch() // add one more receipt that shouldn't be retrieved r := mocks.NewReceipt(4, common.HexToHash("0x2")) s.Require().NoError(s.ReceiptIndexer.Store([]*models.Receipt{r}, batch)) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) + err = batch.Close() + s.Require().NoError(err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.err := s.ReceiptIndexer.Store([]*models.Receipt{receipt}, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) err = batch.Close() s.Require().NoError(err) batch = s.DB.NewBatch() // add one more receipt that shouldn't be retrieved r := mocks.NewReceipt(4, common.HexToHash("0x2")) s.Require().NoError(s.ReceiptIndexer.Store([]*models.Receipt{r}, batch)) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) err = batch.Close() s.Require().NoError(err)
628-631:
⚠️ Potential issueEnsure batch closure in transaction loop
When storing multiple transactions in a loop, close each batch after committing.
Apply this diff:
for i := 0; i < 10; i++ { tx = mocks.NewTransaction(uint64(10 + i)) batch := s.DB.NewBatch() err := s.TransactionIndexer.Store(tx, batch) s.Require().NoError(err) err = batch.Commit(pebble2.Sync) s.Require().NoError(err) + err = batch.Close() + s.Require().NoError(err) }Committable suggestion skipped: line range outside the PR's diff.
api/api.go (1)
346-346:
⚠️ Potential issue: Check for potential nil transaction in block.
Ensure that the transaction retrieved is not nil before proceeding. Add necessary checks to handle cases where a transaction might not be found at the given index.
Apply this diff to handle nil transactions:
if err != nil { return handleError[*ethTypes.Transaction](err, l, b.collector) } +if tx == nil { + return nil, nil +}Committable suggestion skipped: line range outside the PR's diff.
bootstrap/bootstrap.go (5)
361-361:
⚠️ Potential issue: Potential Deadlock in
StopMetricsServer
In the
StopMetricsServer
method, using<-b.metrics.Done()
to wait for the metrics server to finish may cause a deadlock if theDone()
channel is not properly closed or signaled. Ensure that the metrics server correctly closes or signals theDone()
channel to prevent the application from hanging during shutdown.
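One way to avoid hanging on Done() is to bound the wait. A minimal, self-contained sketch of the pattern (not the gateway's actual StopMetricsServer code; the channel and timeout are illustrative):

package main

import (
	"fmt"
	"time"
)

// waitWithTimeout bounds a shutdown wait on a done channel so a signal
// that never arrives cannot hang the caller.
func waitWithTimeout(done <-chan struct{}, timeout time.Duration) error {
	select {
	case <-done:
		return nil
	case <-time.After(timeout):
		return fmt.Errorf("timed out after %s waiting for shutdown", timeout)
	}
}

func main() {
	done := make(chan struct{})
	go func() {
		time.Sleep(100 * time.Millisecond) // simulated server shutdown work
		close(done)
	}()
	if err := waitWithTimeout(done, 5*time.Second); err != nil {
		fmt.Println("warning:", err)
		return
	}
	fmt.Println("metrics server stopped cleanly")
}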
396-404: 🛠️ Refactor suggestion
Enhance Error Handling in
StopDB
MethodIn the
StopDB
method, the error returned byb.db.Close()
is logged but not acted upon. Consider handling the error more robustly, such as implementing a retry mechanism or triggering alerts if the database fails to close properly. This will ensure resource cleanup and prevent potential data corruption.
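A minimal, self-contained sketch of the retry idea (the closeWithRetry helper and its parameters are illustrative, not the gateway's actual StopDB code):

package main

import (
	"fmt"
	"time"
)

// closeWithRetry retries a close function a few times with a short delay,
// returning the last error if it never succeeds.
func closeWithRetry(closeFn func() error, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = closeFn(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("close failed after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	flaky := func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("still flushing")
		}
		return nil
	}
	if err := closeWithRetry(flaky, 5, 10*time.Millisecond); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("closed after %d attempts\n", calls)
}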
485-569:
⚠️ Potential issue: Prevent Resource Leaks in
setupStorage
In the
setupStorage
function, if an error occurs after openingdb
, the database may not be closed properly, leading to resource leaks.Consider deferring the closure of
db
at the point of successful opening:db, err := pebble.OpenDB(config.DatabaseDir) if err != nil { return nil, nil, err } defer func() { if err != nil { db.Close() } }()This ensures that
db
is closed if an error occurs later in the function.
137-141: 🛠️ Refactor suggestion
Check for Nil
callTracerCollector
After InitializationAfter initializing
callTracerCollector
, consider checking if it'snil
before using it to prevent potential nil pointer dereferences.
523-526:
⚠️ Potential issue: Correct Typographical Error in Variable Name
There's a typo in the variable
evmBlokcHeight
; it should beevmBlockHeight
. This could lead to confusion and errors in the code.Apply this diff to fix the typo:
- evmBlokcHeight := uint64(0) + evmBlockHeight := uint64(0)Ensure all references to
evmBlokcHeight
are updated accordingly.Committable suggestion skipped: line range outside the PR's diff.
This will loop forever until the AN is available. Is this ok?
Could we put this in a gRPC interceptor?
Looks good otherwise
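For what it's worth, a retry with backoff on ResourceExhausted could live in a client-side gRPC interceptor. A minimal sketch of that idea (illustrative only; not the gateway's actual client wiring):

package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// retryOnResourceExhausted retries a unary call with exponential backoff
// whenever the remote node responds with codes.ResourceExhausted.
func retryOnResourceExhausted(maxRetries int, baseDelay time.Duration) grpc.UnaryClientInterceptor {
	return func(
		ctx context.Context,
		method string,
		req, reply interface{},
		cc *grpc.ClientConn,
		invoker grpc.UnaryInvoker,
		opts ...grpc.CallOption,
	) error {
		var err error
		delay := baseDelay
		for attempt := 0; attempt <= maxRetries; attempt++ {
			err = invoker(ctx, method, req, reply, cc, opts...)
			if status.Code(err) != codes.ResourceExhausted {
				return err // success or a non-retryable error
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(delay):
				delay *= 2
			}
		}
		return err
	}
}

func main() {
	// Illustrative wiring; pass the option when dialing the access node.
	_ = grpc.WithUnaryInterceptor(retryOnResourceExhausted(5, 100*time.Millisecond))
}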
Closes: #???
Description
For contributor use:
- master branch
- Files changed in the Github PR explorer

Summary by CodeRabbit
Release Notes
New Features
- debug_flowHeightByBlock to enhance block height retrieval.
- TraceCall method.
- EventSubscriber interface.

Bug Fixes

Documentation

Chores