
core/state: move state log mechanism to a separate layer #30569

Merged: 6 commits merged into ethereum:master on Oct 23, 2024

Conversation

@holiman (Contributor) commented on Oct 10, 2024

In this PR, I have moved the logging facilities out of *state.StateDB into a wrapping struct which implements vm.StateDB instead.

In most places, it was pretty straightforward.

  • First, hoisting the invocations from state objects up to the statedb.
  • Then making the mutation-methods simply return the previous value, so that the external logging layer could log everything.

Some internal code uses the direct object accessors to mutate the state, particularly in testing and in setting up state overrides, which means that these changes are unobservable by the hooked layer. This is fine: how we configure the overrides is not necessarily part of the API we want to publish.

The trickiest part about the layering is that when self-destructed accounts are finally deleted during Finalise, there's the possibility that someone sent some ether to one of them, which is burnt at that point and thus needs to be logged. The hooked layer reaches into the inner layer to figure out these events.

In package vm, the conversion from state.StateDB + hooks into a hooked vm.StateDB is performed where needed.
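
Roughly, the wrapping looks like this. This is a condensed sketch, not the exact final code; the wrapper name matches the hookedStateDB shown later in the review, but the precise method set and signatures are illustrative:

package state

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/tracing"
	"github.com/holiman/uint256"
)

// hookedStateDB wraps an inner *StateDB and emits tracing hooks on mutations.
type hookedStateDB struct {
	inner *StateDB
	hooks *tracing.Hooks
}

// AddBalance delegates to the inner statedb, which now returns the previous
// balance, and emits the balance-change hook from the wrapping layer.
func (s *hookedStateDB) AddBalance(addr common.Address, amount *uint256.Int, reason tracing.BalanceChangeReason) uint256.Int {
	prev := s.inner.AddBalance(addr, amount, reason)
	if s.hooks.OnBalanceChange != nil && !amount.IsZero() {
		newBalance := new(uint256.Int).Add(&prev, amount)
		s.hooks.OnBalanceChange(addr, prev.ToBig(), newBalance.ToBig(), reason)
	}
	return prev
}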

@holiman mentioned this pull request on Oct 10, 2024
@holiman (Contributor, Author) commented on Oct 11, 2024

After having considered it some more, I am even more convinced that the approach of #30441, adding read hooks inside the state.StateDB, is bad, for more reasons than I previously thought.

It does not discriminate between event sources.

  1. In the state processing, between transactions, we check balance and nonce. This would trigger the read hooks. However, I'd argue that this is bad: it leaks implementation details and is not useful information. For example: maybe we check all tx nonces in a block first, before we start executing transactions. Or maybe we verify each transaction (balance + nonce etc.) right before execution. And maybe we invoke GetNonce twice in some place, rather than saving it as a temp variable and passing it to another callsite. These things are internals, and should not be observable by the tracers. It makes the tracers more complex, since they have to figure out which events are actually useful and which are not.
  2. There's an element of recursiveness: if, in an OnOpCode or step function, the tracer decides to look up an account, it will then trigger a read hook, making the call graph: evm interpreter loop -> tracer engine OnOpCode -> statedb -> tracer engine OnBalanceRead. Even worse: if the balance read hook for address X queries the balance of address X+1, then we have a truly recursive situation.

The "solution" to these sorts of problems would be to disable the logger in certain situations: while executing the read hook, the logger would be removed and then added back in a defer. I think it becomes pretty messy.
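
As a hypothetical illustration of that dance (OnBalanceRead is the read hook proposed in #30441, not an existing hook, and the exact fields here are made up for the example):

// Hypothetical: a read hook living inside *state.StateDB itself.
// To keep the hook from re-triggering itself, the logger has to be
// detached while the hook runs and re-attached afterwards, on every
// read path that can be reached from a tracer.
func (s *StateDB) GetBalance(addr common.Address) *uint256.Int {
	balance := uint256.NewInt(0)
	if obj := s.getStateObject(addr); obj != nil {
		balance = obj.Balance()
	}
	if s.logger != nil && s.logger.OnBalanceRead != nil {
		saved := s.logger
		s.logger = nil // detach, so reads made inside the hook don't recurse
		saved.OnBalanceRead(addr, balance.ToBig())
		s.logger = saved // re-attach (really needs a defer on every such path)
	}
	return balance
}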


With the layered solution, there are no such complexities, as long as we can switch between the logging-statedb and the non-logging-statedb.

  1. We can simply use a non-logging-statedb while processing the tx envelopes in state processor.
  2. We can simply pass a non-logging-statedb to the tracer engine.

Switching between one and the other can probably be done in many ways; I'm open to suggestions. One way would be to have two interfaces:

type LoggingEnabled interface {
    WithLoggingDisabled() vm.StateDB
}
type LoggingDisabled interface {
    WithLoggingEnabled() vm.StateDB
}

An example of how that would look, going from a dual-layered logging statedb to a single-layered raw statedb:

	if evm.Config.Tracer != nil && evm.Config.Tracer.OnTxStart != nil {
		ctx := evm.GetVMContext()
		ctxcopy := *ctx // value copy: &(*ctx) would just alias the same pointer
		newctx := &ctxcopy
		if sdb, ok := ctx.StateDB.(vm.LoggingEnabled); ok {
			newctx.StateDB = sdb.WithLoggingDisabled()
		}
		evm.Config.Tracer.OnTxStart(newctx, tx, msg.From)
		if evm.Config.Tracer.OnTxEnd != nil {
			defer func() {
				evm.Config.Tracer.OnTxEnd(receipt, err)
			}()
		}
	}
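
The hooked layer could then satisfy LoggingEnabled simply by handing out its raw inner statedb. Going the other way is less obvious, since the raw statedb would need to know which hooks to re-attach, so this sketch deviates from the parameterless WithLoggingEnabled above:

// Continuing the hookedStateDB sketch: drop the logging layer by
// handing out the raw inner statedb.
func (s *hookedStateDB) WithLoggingDisabled() vm.StateDB {
	return s.inner
}

// The reverse direction needs the hooks from somewhere, e.g. as a
// parameter (which the parameterless interface above would not allow).
func (s *StateDB) WithLoggingEnabled(hooks *tracing.Hooks) vm.StateDB {
	return &hookedStateDB{inner: s, hooks: hooks}
}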

@holiman (Contributor, Author) commented on Oct 11, 2024

Minus the ugliness regarding swapping between shimmed and non-shimmed state, this PR is mostly done. Ideas for how to make the switching nicer are appreciated.

@holiman marked this pull request as ready for review on October 11, 2024 09:33
@holiman changed the title from "core/state: wip move state log mechanism in to a separate layer" to "core/state: move state log mechanism to a separate layer" on Oct 11, 2024
@s1na (Contributor) commented on Oct 11, 2024

> In the state processing, between transactions, we check balance and nonce. This would trigger the read hooks. However, I'd argue that this is bad: it leaks implementation details and is not useful information. For example: maybe we check all tx nonces in a block first, before we start executing transactions. Or maybe we verify each transaction (balance + nonce etc.) right before execution. And maybe we invoke GetNonce twice in some place, rather than saving it as a temp variable and passing it to another callsite. These things are internals, and should not be observable by the tracers. It makes the tracers more complex, since they have to figure out which events are actually useful and which are not.

My understanding is that the main use case for the read hooks is to collect the prestate for the transaction/call. So the ordering, and how often we emit an OnBalanceRead, doesn't matter to the tracers. If it were to matter, you are right that we would need to add a reason to specify what the read is about.

> There's an element of recursiveness: if, in an OnOpCode or step function, the tracer decides to look up an account, it will then trigger a read hook, making the call graph: evm interpreter loop -> tracer engine OnOpCode -> statedb -> tracer engine OnBalanceRead. Even worse: if the balance read hook for address X queries the balance of address X+1, then we have a truly recursive situation.

This was an interesting realization. I think the friction point here is the statedb emitting logs for the same methods that are exposed to the tracers via a statedb instance. Honestly, I don't much like that we are exposing the statedb to the tracers. We had to do it precisely to fetch prestate values. So IMO, if we add read hooks, we can drop the statedb. But I'd ask for opinions from users before committing to that.

Generally I am ok with your approach if it allows us to keep the read hooks :)

@holiman (Contributor, Author) commented on Oct 11, 2024 via email

@fjl (Contributor) commented on Oct 11, 2024

I think if it's possible to remove state read access from tracers, we should do it. Full state access will become impossible later with stateless clients, so it will have to be removed at that time anyway.

@namiloh commented on Oct 12, 2024

Well, I would assume users of somewhat advanced tracing to be using stateful clients.

Anyway, that is a decision unrelated to this PR. It is relevant to the other PR, since that one doesn't have the ability to switch between a logging and a non-logging statedb.

If we want to remove state access, let someone make a PR and we can discuss it then, IMO.

@fjl, you are a type artist. Any ideas for making the hacks in this PR neater?

@holiman (Contributor, Author) commented on Oct 12, 2024

@rjl493456442 implemented an alternative state override mechanism in #29950. In that implementation, we switch out the backend from underneath the *state.StateDB, so that the Reader it uses checks the overrides before reaching into the snapdata or triedata.
The Reader interface:

// Reader defines the interface for accessing accounts and storage slots
// associated with a specific state.
type Reader interface {
	// Account retrieves the account associated with a particular address.
	//
	// - Returns a nil account if it does not exist
	// - Returns an error only if an unexpected issue occurs
	// - The returned account is safe to modify after the call
	Account(addr common.Address) (*types.StateAccount, error)

	// Storage retrieves the storage slot associated with a particular account
	// address and slot key.
	//
	// - Returns an empty slot if it does not exist
	// - Returns an error only if an unexpected issue occurs
	// - The returned storage slot is safe to modify after the call
	Storage(addr common.Address, slot common.Hash) (common.Hash, error)

	// Copy returns a deep-copied state reader.
	Copy() Reader
}

But if the core parts of the vm now access the statedb through the vm.StateDB interface, instead of through the struct directly, and we now have the ability to layer them, then we have an alternative option: layer the overrides on top of the statedb instead.

So we could have e.g. logging layer <-> override layer <-> regular statedb <-> snapdata/trie .
Or we could have e.g. logging layer <-> regular statedb <-> override layer <-> snapdata/trie .

I am not really sure what the pros and cons of either approach are. @rjl493456442 any thoughts?
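
For comparison, here is a hypothetical sketch of what the second layering (regular statedb <-> override layer <-> snapdata/trie) could look like at the Reader level. The overrideReader name and fields are made up for illustration, not taken from #29950, and the usual common/types imports from go-ethereum are assumed:

// overrideReader wraps another Reader and serves overridden values first.
type overrideReader struct {
	accounts map[common.Address]*types.StateAccount
	storage  map[common.Address]map[common.Hash]common.Hash
	inner    Reader
}

func (r *overrideReader) Account(addr common.Address) (*types.StateAccount, error) {
	if acct, ok := r.accounts[addr]; ok {
		return acct, nil
	}
	return r.inner.Account(addr)
}

func (r *overrideReader) Storage(addr common.Address, slot common.Hash) (common.Hash, error) {
	if slots, ok := r.storage[addr]; ok {
		if val, ok := slots[slot]; ok {
			return val, nil
		}
	}
	return r.inner.Storage(addr, slot)
}

func (r *overrideReader) Copy() Reader {
	// The override maps are treated as read-only here, so only the inner
	// reader needs a deep copy.
	return &overrideReader{accounts: r.accounts, storage: r.storage, inner: r.inner.Copy()}
}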

@s1na (Contributor) left a comment

LGTM

@fjl (Contributor) commented on Oct 17, 2024

Notes:

  • wrap the StateDB deeper into the processing call stack, that way we can avoid the extra methods like IntermediateRoot
  • pass logger + statedb in ApplyTransactionWithEVM and wrap there (see the sketch after this list)
  • remove SetLogger on StateDB
  • duplicate vm.StateDB methods in the wrapper to avoid embedding of StateDB. It's more boilerplate but safer for the future.
  • for the burn tracking of deleted accounts in Finalise, perhaps it would be acceptable to keep Finalise in the vm.StateDB interface. In the wrapper implementation, we could iterate the mutated state and emit a balance change for all deleted accounts that still have a balance.
  • alternative: pass the burn callback into Finalise as a parameter
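
A sketch of that wrapping point; the constructor name state.NewHookedState is my assumption for whatever the wrapper's exported constructor ends up being called:

// wrapIfTracing returns a statedb that emits tracing hooks when hooks are
// configured, and the raw statedb otherwise; ApplyTransactionWithEVM (or
// wherever the wrapping ends up) would call this instead of taking a
// pre-wrapped statedb.
func wrapIfTracing(statedb *state.StateDB, hooks *tracing.Hooks) vm.StateDB {
	if hooks == nil {
		return statedb
	}
	return state.NewHookedState(statedb, hooks)
}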

@holiman (Contributor, Author) commented on Oct 18, 2024

Notes:

  • wrap the StateDB deeper into the processing call stack, that way we can avoid the extra methods like IntermediateRoot
  • pass logger + statedb in ApplyTransactionWithEVM and wrap there (logger passed implicitly via the evm config)
  • remove SetLogger on StateDB
  • duplicate vm.StateDB methods in the wrapper to avoid embedding of StateDB. It's more boilerplate but safer for the future.
  • for the burn tracking of deleted accounts in Finalise, perhaps it would be acceptable to keep Finalise in the vm.StateDB interface. In the wrapper implementation, we could iterate the mutated state and emit a balance change for all deleted accounts that still have a balance.
    • Instead of iterating mutations, I iterate s.inner.journal.dirties; the reason is that the mutations are per-block, not per-tx, while the journal dirties are in the correct scope (rough sketch below)
    • alternative: pass the burn callback into Finalise as a parameter
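
Roughly, the burn emission then looks like this. The internal names (journal.dirties, stateObjects, selfDestructed) and the BalanceDecreaseSelfdestructBurn reason are my reading of the inner layer, which the wrapper can reach into since it lives in the same package; treat them as assumptions rather than the exact PR code:

// Sketch: before the inner Finalise removes self-destructed accounts,
// emit a burn event for any of them that still carries a balance. Only
// journal-dirty accounts are considered, since the journal is scoped to
// the current transaction.
func (s *hookedStateDB) Finalise(deleteEmptyObjects bool) {
	if s.hooks.OnBalanceChange != nil {
		for addr := range s.inner.journal.dirties {
			obj := s.inner.stateObjects[addr]
			if obj != nil && obj.selfDestructed && obj.Balance().Sign() > 0 {
				// The remaining balance is burnt when the account is removed.
				s.hooks.OnBalanceChange(addr, obj.Balance().ToBig(), new(big.Int), tracing.BalanceDecreaseSelfdestructBurn)
			}
		}
	}
	s.inner.Finalise(deleteEmptyObjects)
}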

Commits pushed to this pull request:

  • core/state: wip move state log mechanism in to a separate layer
  • core/state: fix tests
  • core/state: fix miscalc in OnBalanceChange
  • eth/tracers: fix tests
  • core/state: re-enable verkle witness generation
  • internal/ethapi: fix simulation + new logging schema
  • core/state, core/vm: refactor statedb hooking
  • core/state: trace consensus finalize and system calls
  • eth/tracers/internal/tracetest: fix tests after refactor
  • core/state: some renaming and cleanup of statedb-hooking system
  • core/state: remove unecessary methods, implement hooked subbalance, more testing
Review comment on func (s *hookedStateDB) SetNonce(address common.Address, nonce uint64):

A reviewer (Member): Should we rename this SetNonce to IncreaseNonce? The semantics always expect the nonce to be incremented by 1.

Reply: Yes, good idea!

@holiman (Contributor, Author): About this: I'd prefer to leave that change out of this PR.

@holiman (Contributor, Author) commented on Oct 21, 2024

@rjl493456442 your commit 8c7526c undoes an intentional change:

> duplicate vm.StateDB methods in the wrapper to avoid embedding of StateDB. It's more boilerplate but safer for the future.

@rjl493456442 (Member) commented

@holiman I'm undoing my cleanup commit and pushing various fixes on top.

@lightclient (Member) left a comment
This seems like a really nice refactor. Left a couple thoughts. Looking forward to trying another statedb wrapper for witness gathering.


Review comment on this snippet:

// hookedStateDB represents a statedb which emits calls to tracing-hooks
// on state operations.
type hookedStateDB struct {
A reviewer (Member) suggested:
tracingStateDB?

hookedStateDB means very little to me. I would have kinda expected it to have generic hooks support instead of just tracing hooks.

Reply: I initially called it that. But then I thought: it's using the *tracing.Hooks. I can go with either.

@holiman added this to the 1.14.12 milestone on Oct 22, 2024
@holiman (Contributor, Author) commented on Oct 22, 2024

> Looking forward to trying another statedb wrapper for witness gathering.

For witness gathering, this PR will not make any noticeable difference. Witnesses are not collected at the level of the statedb API; they're collected more at the MPT node level.

@holiman merged commit 459bb4a into ethereum:master on Oct 23, 2024
3 checks passed
@holiman added a commit that referenced this pull request on Nov 19, 2024
This PR moves the logging/tracing-facilities out of `*state.StateDB`,
into a wrapping struct which implements `vm.StateDB` instead.

In most places, it is a pretty straightforward change: 
- First, hoisting the invocations from state objects up to the statedb. 
- Then making the mutation-methods simply return the previous value, so
that the external logging layer could log everything.

Some internal code uses the direct object-accessors to mutate the state,
particularly in testing and in setting up state overrides, which means
that these changes are unobservable by the hooked layer. Thus, configuring
the overrides is not necessarily part of the API we want to publish.

The trickiest part about the layering is that when the self-destructed
accounts are finally deleted during `Finalise`, there's the possibility
that someone sent some ether to one of them, which is burnt at that point
and thus needs to be logged. The hooked layer reaches into the inner layer
to figure out these events.

In package `vm`, the conversion from `state.StateDB + hooks` into a
hooked `vm.StateDB` is performed where needed.

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>