Append log events for all inner transactions failure #6176
Conversation
…ompletedTxEventIdentifier for completed inner move balance of v3
process/transaction/shardProcess.go
Outdated
@@ -662,7 +662,12 @@ func (txProc *txProcessor) processRelayedTxV3(
	if check.IfNil(relayerAcnt) {
		return vmcommon.UserError, txProc.executingFailedTransaction(tx, relayerAcnt, process.ErrNilRelayerAccount)
	}
	err := txProc.relayedTxV3Processor.CheckRelayedTx(tx)
	funcName, _, err := txProc.argsParser.ParseCallData(string(tx.Data))
Can't you add this check inside the relayedTxV3Processor.CheckRelayedTx function?
process/transaction/shardProcess.go
Outdated
@@ -970,6 +982,10 @@ func (txProc *txProcessor) processUserTx(
	switch txType {
	case process.MoveBalance:
		err = txProc.processMoveBalance(userTx, acntSnd, acntDst, dstShardTxType, originalTxHash, true)
		isUserTxOfRelayedV3 := len(originalTx.InnerTransactions) > 0
This `if` can be deleted; instead, add a completed event log for each such processing.
But actually, the completed event log should be added only if the destination shard is the current shard; otherwise there may be execution on the destination shard that creates another log event.
So add the log event only if the inner tx is intra-shard; otherwise the completed log event is added by scProcessor. Also, add an integration test and verify how many completed log events are generated.
Updated as suggested: the event is added only for intra-shard inner transactions of type move balance.
process/transactionLog/process.go
Outdated
	if oldLogs.Address == nil {
		oldLogs.Address = newLog.Address
	}
	oldLogs.Events = append(oldLogs.Events, newLog.Events...)
This is super inefficient. Until now, for a given hash, we added the log event only once, when all the execution was completed. Please do the same here.
In its current form this PR is not ok.
Pushed.
Updated the flow a bit so this only happens for failed intra-shard inner transactions. The append functionality is still needed because, for failed intra-shard inner transactions, we don't have SCRs, only events on the big relayed tx (key = relayed tx hash).
But you could accumulate the logs at the transaction processor level and push them here once.
Pushed. Added a new component able to accumulate all logs during execution; at the end, only one SaveLog call is done.
Also removed the AppendLogs functionality.
update tx unmarshaller to support relayed v3 as well
…iversx/mx-chain-go into append_log_events
# Conflicts:
#	factory/processing/processComponents.go
#	go.mod
#	go.sum
#	process/errors.go
…nent that accumulates logs during execution of relayed tx from all failed inner txs
type failedTxLogsAccumulator struct {
	mut     sync.RWMutex
	logsMap map[string]*logData
How will this map be used? Is it possible for it to use a lot of memory?
Maybe evaluate using an LRU mechanism to clean up old entries.
It should not use a lot of memory. This is how it works: during the execution of a relayed tx v3, all inner tx SCRs are pushed here; at the end of the execution, all the accumulated SCRs are forwarded to txLogsProcessor, so that component is written to only once. Once they are forwarded, the entry for the current tx is deleted from the map.
	if isRelayed {
		ignorableError = sc.failedTxLogsAccumulator.SaveLogs(logsTxHash, tx, processIfErrorLogs)
	} else {
		ignorableError = sc.txLogsProcessor.SaveLog(logsTxHash, tx, processIfErrorLogs)
Before the newly added isRelayed check, the relayedTx logs were saved to txLogsProcessor, but now they will be saved only to failedTxLogsAccumulator. Is it ok to save them only to failedTxLogsAccumulator?
failedTxLogsAccumulator only accumulates them during the execution of the tx. At the end, the accumulated logs are forwarded to the txLogsProcessor. There should be no difference for relayedV1 and relayedV2.
if len(tx.InnerTransactions) > 0 { |
Should we also check here that relayed v2 is not allowed? Or is that already checked?
It should be impossible to have v1 and v2 on the same tx, as they are both part of the data field. Thus, no need for the check.
	if err != nil {
		scrHash = originalTxHash
Do we need a log message for this error?
Not needed.
	}
// SaveLogs saves the logs into the internal map
func (accumulator *failedTxLogsAccumulator) SaveLogs(txHash []byte, tx data.TransactionHandler, logs []*vmcommon.LogEntry) error {
Since we are not saving the logs to persistent storage, is it possible that we unintentionally lose failed tx logs in case of a restart? Or will the logs be saved later via the tx logs processor if needed?
I don't think we can lose logs. If something fails during tx execution and we somehow lose this, the tx won't be seen as executed either.
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##   feat/relayedv3    #6176      +/-  ##
==================================================
- Coverage    78.78%   78.76%   -0.03%
==================================================
  Files          753      755       +2
  Lines        98341    99289     +948
==================================================
+ Hits         77481    78207     +726
- Misses       15626    15825     +199
- Partials      5234     5257      +23

☔ View full report in Codecov by Sentry.
Reasoning behind the pull request
Proposed changes
Testing procedure
Pre-requisites
Based on the Contributing Guidelines, the PR author and the reviewers must check that the following requirements are met:
- feat branch created?
- before feat branch merging, do all satellite projects have a proper tag inside go.mod?