Upstream merge v1.14.12 #432
Conversation
This pull request removes the `fsync` of index files in the freezer's `ModifyAncients` function for a performance gain.

Originally, an `fsync` was issued after each freezer write operation to ensure the written data had truly reached the disk. Unfortunately, `fsync` turns out to be relatively slow, especially on macOS (see ethereum/go-ethereum#28754 for more information).

In this pull request, the `fsync` for the index file is removed, since the index file can be recovered even after an unclean shutdown. The `fsync` for the data file is kept, because we have no meaningful way to validate the data's correctness after an unclean shutdown.

---

**But why do we need the `fsync` in the first place?**

It's necessary for the freezer to survive and recover after a machine crash (e.g. a power failure). On Linux, when a file write is performed, the file metadata update and the data update are not necessarily performed at the same time. Typically, the metadata is flushed/journalled ahead of the file data. We therefore make the pessimistic assumption that the file is first extended with invalid "garbage" data (normally zero bytes) and only afterwards does the correct data replace the garbage.

We have observed that the freezer's index file often contains garbage entries with zero values (filenumber = 0, offset = 0) after a machine power failure. This proves that the index file was extended without the data being flushed, and this corruption can eventually destroy the whole freezer. Performing an `fsync` after each write operation shrinks the window during which data is in transit to the disk and ensures the correctness of the on-disk data to the greatest possible extent.

---

**How can we maintain this guarantee without relying on fsync?**

Because the items in the index file are strictly ordered, we can leverage this property to detect corruption and truncate it away when the freezer is opened. Specifically, the following validation rules are applied to each index file, for every two consecutive index items (a minimal sketch of these checks follows at the end of this description):

- If their file numbers are the same, the offset of the latter MUST NOT be less than that of the former.
- If the file number of the latter equals that of the former plus one, the offset of the latter MUST NOT be 0.
- If their file numbers are not equal and the latter's file number is not equal to the former's plus one, the latter item is considered valid.

Additionally, the first non-head item must refer to the earliest data file, or to the next file if the earliest file is not large enough to hold the first item (a very special case, only theoretically possible in tests). With these validation rules, we can detect invalid items in the index file with high probability.

---

Unfortunately, two scenarios are not covered and could still lead to freezer corruption:

**All items in the index file are zero-valued.** It is impossible to distinguish whether they are genuinely zero (e.g. all the data entries maintained in the freezer are zero-sized) or merely garbage left by the OS. In this case, the index items are kept and the entire data file is truncated, i.e. the freezer is corrupted. However, the probability of this situation occurring is quite low, and even if it occurs, the freezer is close to an empty state, so rerunning the state sync should be acceptable.

**The index file is intact while the corresponding data file is corrupted.** The data file may be corrupted even though its size was extended correctly, with garbage (e.g. zero bytes) filled in. In this case, the corruption cannot be detected by index validation. We can either `fsync` the data file, or blindly trust that an intact index file implies an intact data file with very high probability. This pull request takes the first option.
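For illustration, here is a minimal sketch of the consecutive-entry rules described above, assuming each index entry is a (filenumber, offset) pair; the names and layout are illustrative, not geth's actual freezer code:

```go
package main

import "fmt"

// indexEntry mirrors the idea of the freezer index format: which data file
// an item lives in, and the byte offset where it ends.
type indexEntry struct {
	filenum uint32
	offset  uint32
}

// validNext reports whether entry b may legally follow entry a, applying
// the three rules from the description above.
func validNext(a, b indexEntry) bool {
	switch {
	case b.filenum == a.filenum:
		// Same data file: offsets must be monotonically non-decreasing.
		return b.offset >= a.offset
	case b.filenum == a.filenum+1:
		// Rolled over to the next data file: a zero offset indicates a
		// garbage (unflushed) entry.
		return b.offset != 0
	default:
		// Any other jump in file numbers is accepted as valid.
		return true
	}
}

func main() {
	fmt.Println(validNext(indexEntry{2, 100}, indexEntry{2, 50}))  // false: offset regressed
	fmt.Println(validNext(indexEntry{2, 100}, indexEntry{3, 0}))   // false: zero offset in next file
	fmt.Println(validNext(indexEntry{2, 100}, indexEntry{2, 180})) // true
}
```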
Remove console extensions for already deleted API namespaces (les, vflux and ethash).
The bulk of this PR was authored by @lightclient in the original EOF work. More recently, the code has been picked up and reworked for the new EOF specification by @MariusVanDerWijden in ethereum/go-ethereum#29518, and @shemnon has also contributed fixes. This PR is an attempt to start eating the elephant one small bite at a time, by selecting only the EOF validation as a standalone piece that can be merged without interfering too much with the core code. In this PR:

- [x] Validation of EOF containers, lifted from #29518, along with test vectors from consensus tests and fuzzing, to ensure that the move did not lose any functionality.
- [x] Definition of EOF opcodes, which is a prerequisite for validation.
- [x] Addition of `undefined` to a jump table entry item (see the sketch below). I'm not super happy with this, but for the moment it seems the least invasive way to do it. A better way might be to go back to allowing nil items or nil execute functions to denote "undefined".
- [x] Benchmarks of EOF validation speed.

---------

Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Danno Ferrin <danno.ferrin@shemnon.com>
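A schematic of the `undefined` jump table marker mentioned in the checklist above; geth's real `operation` struct carries gas, stack, and execution metadata, so this only shows the shape of the idea:

```go
package main

import "fmt"

// operation is a cut-down jump table entry; an explicit undefined flag
// avoids overloading nil entries or nil execute functions to mean
// "not a valid opcode".
type operation struct {
	name      string
	undefined bool
}

func main() {
	// Start from a table where every opcode is undefined, then fill in
	// the defined ones.
	var jumpTable [256]operation
	for i := range jumpTable {
		jumpTable[i] = operation{name: fmt.Sprintf("0x%02x", i), undefined: true}
	}
	jumpTable[0x00] = operation{name: "STOP"}

	fmt.Println(jumpTable[0x00].undefined) // false: defined
	fmt.Println(jumpTable[0x0c].undefined) // true: would fail EOF validation
}
```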
Minimizes the time during which the lock is held.
This implements recent changes to EIP-7685, EIP-6110, and execution-apis.

---------

Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: Shude Li <islishude@gmail.com>
…(#30520) This fixes the `debug_traceBlock` methods for JS tracers by correctly applying the beacon block root processing to the state.
A couple of tests set the debug level to `TRACE` on stdout, and all subsequent tests in the same package are also affected by that, resulting in outputs of tens of megabytes. This PR removes such calls from two packages where they were prevalent. This makes getting a summary of failing tests simpler, and possibly reduces some strain on the CI pipeline.
Block no longer has Requests. This PR just removes some code that wasn't removed in #30425.
Allows live custom tracers to access contract transient storage through the StateDB interface.
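A hypothetical live tracer using this, sketched under the assumption that the new accessor is named `GetTransientState`, mirroring the regular `GetState` accessor on the tracing `StateDB` interface:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/tracing"
	"github.com/ethereum/go-ethereum/core/types"
)

// myTracer is a hypothetical live tracer that keeps the StateDB handle it
// receives at transaction start.
type myTracer struct {
	statedb tracing.StateDB
}

// OnTxStart captures the StateDB from the VM context so later hooks can
// query state, now including transient storage.
func (t *myTracer) OnTxStart(vmctx *tracing.VMContext, _ *types.Transaction, _ common.Address) {
	t.statedb = vmctx.StateDB
}

// logTransientSlot reads one EIP-1153 transient slot; the method name is an
// assumption, per the change described above.
func (t *myTracer) logTransientSlot(addr common.Address, slot common.Hash) {
	fmt.Println(t.statedb.GetTransientState(addr, slot))
}

func main() {
	// In a real setup, the tracer's hooks are registered via tracing.Hooks.
	_ = myTracer{}
}
```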
This is a redo of #29052 based on newer specs. Here we implement EIPs scheduled for the Prague fork:

- EIP-7002: Execution layer triggerable withdrawals
- EIP-7251: Increase the MAX_EFFECTIVE_BALANCE

Co-authored-by: lightclient <lightclient@protonmail.com>
This fixes a few issues missed in #29052:

* `requests` must be hex encoded, so a helper was added to marshal them.
* The statedb was committed too early, so the result of the system calls was lost.
* For devnet-4 we need to pull the type byte prefix off the request data.
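On the hex-encoding point: the idiomatic fix in geth's codebase is `hexutil.Bytes`, which marshals `[]byte` as a 0x-prefixed hex string rather than base64 (the `encoding/json` default). A toy reproduction with an assumed `payload` struct:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

// payload is a stand-in type; only the field's marshaling behavior matters.
type payload struct {
	Requests []hexutil.Bytes `json:"requests"`
}

func main() {
	p := payload{Requests: []hexutil.Bytes{{0x01, 0xde, 0xad}}}
	out, _ := json.Marshal(p)
	fmt.Println(string(out)) // {"requests":["0x01dead"]}
}
```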
This change makes the trie commit operation concurrent if the number of changes exceeds 100.

Co-authored-by: stevemilk <wangpeculiar@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
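A schematic of the threshold pattern, not geth's actual trie code: the 16 children of a branch node are committed independently, in parallel only when the change set is large enough to amortize the goroutine overhead.

```go
package main

import (
	"fmt"
	"sync"
)

// commitThreshold mirrors the cutoff described above.
const commitThreshold = 100

// commitChildren commits each non-nil child, sequentially for small change
// sets and concurrently for large ones.
func commitChildren(dirty int, children [16]func()) {
	if dirty <= commitThreshold {
		for _, commit := range children {
			if commit != nil {
				commit()
			}
		}
		return
	}
	var wg sync.WaitGroup
	for _, commit := range children {
		if commit == nil {
			continue
		}
		wg.Add(1)
		go func(fn func()) {
			defer wg.Done()
			fn()
		}(commit)
	}
	wg.Wait()
}

func main() {
	var children [16]func()
	children[0] = func() { fmt.Println("child 0 committed") }
	commitChildren(150, children)
}
```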
Changelog: https://golangci-lint.run/product/changelog/#1610

Removes `exportloopref` (no longer needed) and replaces it with `copyloopvar`, which is basically the opposite. Also adds:

- `durationcheck`
- `gocheckcompilerdirectives`
- `reassign`
- `mirror`
- `tenv`

---------

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
This change brings geth into compliance with the current engine API specification for the Prague fork. I have moved the assignment of ExecutionPayloadEnvelope.Requests into BlockToExecutableData to ensure there is a single place where the type is removed. While doing so, I noticed that handling of requests in the miner was not quite correct for the empty payload. It would return `nil` requests for the empty payload even for blocks after the Prague fork. To fix this, I have added the emptyRequests field in miner.Payload.
Calculating a reasonable tx blob fee cap (`max_blob_fee_per_gas * total_blob_gas`) depends only on the excess blob gas of the parent header. The parent header is assumed to be correct, so the method cannot meaningfully fail and should not return an error.
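To see why the parent's excess blob gas is the only chain input, here is a self-contained sketch of the calculation using the EIP-4844 `fake_exponential` formula and spec constants (written out here rather than imported from geth; the function name `blobFeeCap` is illustrative):

```go
package main

import (
	"fmt"
	"math/big"
)

// EIP-4844 constants, per the spec.
var (
	minBlobGasPrice            = big.NewInt(1)
	blobGasPriceUpdateFraction = big.NewInt(3338477)
)

// fakeExponential approximates factor * e**(numerator/denominator) via
// Taylor expansion, as defined in EIP-4844.
func fakeExponential(factor, numerator, denominator *big.Int) *big.Int {
	output := new(big.Int)
	accum := new(big.Int).Mul(factor, denominator)
	for i := 1; accum.Sign() > 0; i++ {
		output.Add(output, accum)
		accum.Mul(accum, numerator)
		accum.Div(accum, denominator)
		accum.Div(accum, big.NewInt(int64(i)))
	}
	return output.Div(output, denominator)
}

// blobFeeCap computes max_blob_fee_per_gas * total_blob_gas. The only chain
// input is the parent's excess blob gas, which is why the computation can
// be made infallible, as described above.
func blobFeeCap(parentExcessBlobGas, totalBlobGas uint64) *big.Int {
	feePerGas := fakeExponential(minBlobGasPrice,
		new(big.Int).SetUint64(parentExcessBlobGas), blobGasPriceUpdateFraction)
	return feePerGas.Mul(feePerGas, new(big.Int).SetUint64(totalBlobGas))
}

func main() {
	fmt.Println(blobFeeCap(0, 131072)) // one blob at the minimum price: 131072
}
```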
Use `github.com/decred/dcrd/dcrec/secp256k1/v4` directly rather than `github.com/btcsuite/btcd/btcec/v2`, which is just a wrapper around the underlying decred library. Inspired by cosmos/cosmos-sdk#15018.

`github.com/btcsuite/btcd/btcec/v2` has a very annoying breaking change when upgrading from `v2.3.3` to `v2.3.4`. The easiest way to work around this is to just remove the wrapper. It would be very nice if you could backport this to the release branches.

References:

- btcsuite/btcd#2221
- cometbft/cometbft#4294
- cometbft/cometbft#3728
- zeta-chain/node#2934
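For illustration, a minimal program using the decred library directly; the btcec wrapper merely re-exported these same types:

```go
package main

import (
	"fmt"

	"github.com/decred/dcrd/dcrec/secp256k1/v4"
)

func main() {
	// Generate a secp256k1 private key and derive its public key, calling
	// the decred package with no btcec indirection.
	priv, err := secp256k1.GeneratePrivateKey()
	if err != nil {
		panic(err)
	}
	pub := priv.PubKey()
	fmt.Printf("compressed pubkey: %x\n", pub.SerializeCompressed())
}
```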
## Description

Omit null `witness` field from payload envelope.

## Motivation

Currently, JSON encoded payload types always include `"witness": null`, which, I believe, is not intentional.
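The usual Go fix for this is an `omitempty` tag on the pointer field; a toy reproduction (the `envelope` type here is a cut-down stand-in, not the real engine API type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// envelope: with `omitempty`, a nil Witness is dropped from the output
// instead of being encoded as "witness": null.
type envelope struct {
	ExecutionPayload string  `json:"executionPayload"`
	Witness          *string `json:"witness,omitempty"`
}

func main() {
	out, _ := json.Marshal(envelope{ExecutionPayload: "0x..."})
	fmt.Println(string(out)) // {"executionPayload":"0x..."}
}
```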
~~Opening this as a draft to have a discussion.~~ Pressed the wrong button.

I had [a previous PR](ethereum/go-ethereum#24616) a long time ago which reduced the peak memory used during reorgs by not accumulating all transactions and logs. This PR reduces the peak memory further by not storing the blocks in memory. However, this means we need to pull the blocks back up from storage multiple times during the reorg. I collected the following numbers on peak memory usage:

```
// Master:
BenchmarkReorg-8   10000    899591 ns/op   820154 B/op   1440 allocs/op   1549443072 bytes of heap used
// WithoutOldChain:
BenchmarkReorg-8   10000   1147281 ns/op   943163 B/op   1564 allocs/op   1163870208 bytes of heap used
// WithoutNewChain:
BenchmarkReorg-8   10000   1018922 ns/op   943580 B/op   1564 allocs/op   1171890176 bytes of heap used
```

Each block contains a transaction with ~50k bytes and we're doing a 10k-block reorg, so the chain should be ~500MB in size.

---------

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
Breaking changes:

- The ChainConfig was exposed to tracers via the VMContext passed in `OnTxStart`. This is unnecessary, especially through the lens of live tracers, as the chain config remains the same throughout the lifetime of the program. It was there so that native API-invoked tracers could access it, so instead we moved it to the constructor of API tracers.

Non-breaking:

- Change the default config of the tracers to be `{}` instead of nil. This way an extra nil check can be avoided.

Refactoring:

- Rename the `supply` struct to `supplyTracer`.
- Un-export some hook definitions.
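A sketch of the two config-related points above; `newSupplyTracer` and the `chainConfig` type are hypothetical stand-ins, not geth's actual definitions:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// chainConfig stands in for params.ChainConfig; only the injection site
// matters here.
type chainConfig struct{ ChainID uint64 }

// newSupplyTracer receives the chain config once, at construction time,
// rather than later via VMContext in OnTxStart.
func newSupplyTracer(cfg json.RawMessage, chain *chainConfig) error {
	// Default the config to {} instead of nil so decoding needs no nil check.
	if cfg == nil {
		cfg = json.RawMessage("{}")
	}
	var opts struct{ Verbose bool }
	if err := json.Unmarshal(cfg, &opts); err != nil {
		return err
	}
	fmt.Println("tracing chain", chain.ChainID, "verbose:", opts.Verbose)
	return nil
}

func main() {
	_ = newSupplyTracer(nil, &chainConfig{ChainID: 1})
}
```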
Fixes an issue missed in #30576 where we send empty requests for a full payload being resolved, causing hash mismatch later on when we get the payload back via `NewPayload`.
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Martin HS <martin@swende.se>
Adds a protocol handler fuzzer to fuzz the ETH68 protocol handlers
Fix panic in tests
Add unit tests for `p2p/addrutil`.

---------

Co-authored-by: Martin HS <martin@swende.se>
Fixes a typo in one of the postmortems.
Fixes an error in the binary iterator, adds additional test cases.

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This is one further step towards removing account management from `geth`. This PR deprecates the `unlock` flag and makes it moot: unlocking via geth is no longer possible.
The [kilic](https://github.com/kilic/bls12-381) bls12381 implementation has been archived. It shouldn't be necessary to include it as a fuzzing target any longer. This also adds fuzzers for G1/G2 mul that use inputs that are guaranteed to be valid. Previously, we just did random-input fuzzing for these precompiles.
This adds an API method `DropTransactions` to the legacy pool, the blob pool, and the txpool interface. This method removes all txs currently tracked in the pools. It modifies the simulated beacon to use the new method in `Rollback`, which removes the previous hacky implementation that also erroneously reset the gas tip to 1 gwei.

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
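The contract of the new method, sketched against a schematic stand-in for the txpool subpool interface (geth's real interface has many more methods, elided here), with a toy implementation:

```go
package main

import "fmt"

// SubPool is a cut-down stand-in for the txpool subpool interface; only the
// new method is shown.
type SubPool interface {
	// DropTransactions removes all transactions currently tracked by the pool.
	DropTransactions()
}

// memPool is a toy implementation illustrating the contract.
type memPool struct {
	txs map[string]struct{}
}

func (p *memPool) DropTransactions() {
	clear(p.txs)
}

func main() {
	var pool SubPool = &memPool{txs: map[string]struct{}{"0xabc": {}}}
	pool.DropTransactions() // e.g. called by the simulated beacon's Rollback
	fmt.Println(len(pool.(*memPool).txs)) // 0
}
```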
Continuation of ethereum/go-ethereum#30381
force-pushed from 0d08eb1 to d9ef08f
force-pushed from d9ef08f to fdfd720
Reviewing this with the protocol team! Sorry about the delay, we are changing some processes.
Reviewed by redoing the merge and resolving the merge conflicts by comparison with this branch, and also reviewing the critical parts of the upstream geth diff itself that changed in v1.14.12. Found some minor issues in your merge commit and implemented fixes along the way. Closing this PR now in favor of #452.
Description
Merge commits from upstream geth v1.14.12
Corresponding monorepo PR: ethereum-optimism/optimism#13002