fix: Print StateRecords corresponding to the range of a state part generated
#8798
Conversation
The dump needs to be of the state at the beginning of the last block of the previous epoch.
Using `get_state_response_header()` instead of reimplementing it.
…p` also print the beginning and end of the corresponding StateRecords.
@marcelo-gonzalez: can you review please?
```
for item in trie.iter().unwrap() {
    if let Ok((key, value)) = item {
        let key_len = key.len();
        if let Some(sr) = StateRecord::from_raw_key_value(key, value) {
```
Totally fine if this is what's actually useful, but is printing out the first and last state record exactly required? It looks like it's only printed as debugging info, so it feels like quite a bit of work to call `StateRecord::from_raw_key_value` on every node just to print that in the end. For the debugging purposes here, would it do to just call `Trie::print_recursive()` with a low max depth?
After trying this code out, I simplified:
- Removed the node key length
- Print only the first state record, not the last

`print_recursive()` would be good, but I don't want too much output going to stderr.
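For illustration, here is a minimal, self-contained sketch of the pattern the thread converges on: decode key/value pairs only until the first one parses as a record, instead of decoding every node just to print one line of debug output. The `StateRecord` type and decoder below are simplified stand-ins, not nearcore's actual definitions.

```
// Stand-in for nearcore's StateRecord; only the shape of the pattern matters.
#[derive(Debug)]
struct StateRecord(String);

impl StateRecord {
    // Stand-in decoder: real decoding can fail for keys that aren't records.
    fn from_raw_key_value(key: Vec<u8>, value: Vec<u8>) -> Option<StateRecord> {
        let k = String::from_utf8(key).ok()?;
        let v = String::from_utf8(value).ok()?;
        Some(StateRecord(format!("{k} = {v}")))
    }
}

fn main() {
    // Stand-in for the trie iterator: raw (key, value) pairs.
    let items: Vec<(Vec<u8>, Vec<u8>)> = vec![
        (b"alice.near".to_vec(), b"100".to_vec()),
        (b"bob.near".to_vec(), b"7".to_vec()),
    ];

    // Decode only until the first record parses, rather than calling the
    // decoder on every node.
    let first_record = items
        .into_iter()
        .find_map(|(key, value)| StateRecord::from_raw_key_value(key, value));

    eprintln!("first state record of the part: {first_record:?}");
}
```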
Are we still on track to increase the protocol version with each release?
Making two changes to the fuzzer at once: adding the 1.32.1 and 1.33.0 releases.
This was added in #6778, but it's honestly really not needed. There's been some discussion that it's easier to just run mock-node directly, and that the benchmarks don't add a ton of value. Also it's not great to have added binary files to the tree.
ClientActor::syncing_info() prints log messages when the bool it returns (indicating whether syncing is needed) doesn't match the current syncing state. The problem is that this doesn't always line up with when the client actually changes its sync status. This function is called in sync_wait_period(), which is just meant to tell how long to wait before trying to call run_sync_step() again. The result is that when all peers have disconnected from a node, it prints terrible log spam like:

```
2023-03-25T01:20:27.598443Z WARN client: Sync: no peers available, disabling sync
2023-03-25T01:20:27.678527Z WARN client: Sync: no peers available, disabling sync
2023-03-25T01:20:27.701984Z WARN client: Sync: no peers available, disabling sync
...
```

and you get similar log spam right after the node finds a peer when it previously had none:

```
2023-03-25T01:30:50.957743Z INFO client: Sync: height: 14, peer id/height: ed25519:GyUjwR2XmFHrhsQsrW69NLKnZym6mVYqB4BLcHudv6gx/100, enabling sync
2023-03-25T01:30:50.976113Z INFO client: Sync: height: 14, peer id/height: ed25519:GyUjwR2XmFHrhsQsrW69NLKnZym6mVYqB4BLcHudv6gx/100, enabling sync
2023-03-25T01:30:51.057412Z INFO client: Sync: height: 14, peer id/height: ed25519:GyUjwR2XmFHrhsQsrW69NLKnZym6mVYqB4BLcHudv6gx/100, enabling sync
...
```

To fix it, we can have ClientActor::syncing_info() return a type that says whether we need to sync, and only print a log message in run_sync_step() when the sync status actually changes.
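A hedged sketch of that shape of fix (the names `SyncRequirement`, `NoPeers`, etc. are hypothetical here, not necessarily what the PR uses): return a comparable value describing the sync requirement, and log only on transitions.

```
// Hypothetical return type for syncing_info(): says whether sync is needed
// and why, so callers can compare against the previous value.
#[derive(Clone, Debug, PartialEq, Eq)]
enum SyncRequirement {
    SyncNeeded { highest_peer_height: u64 },
    NoPeers,
    AlreadyCaughtUp,
}

struct ClientActor {
    last_requirement: Option<SyncRequirement>,
}

impl ClientActor {
    fn run_sync_step(&mut self, current: SyncRequirement) {
        // Log only when the requirement actually changes, not on every
        // invocation, which is what caused the log spam above.
        if self.last_requirement.as_ref() != Some(&current) {
            println!("sync status changed: {current:?}");
            self.last_requirement = Some(current.clone());
        }
        if matches!(current, SyncRequirement::SyncNeeded { .. }) {
            // ... perform one sync step ...
        }
    }
}

fn main() {
    let mut client = ClientActor { last_requirement: None };
    // Repeated identical inputs now produce exactly one log line.
    client.run_sync_step(SyncRequirement::NoPeers);
    client.run_sync_step(SyncRequirement::NoPeers);
    client.run_sync_step(SyncRequirement::SyncNeeded { highest_peer_height: 100 });
}
```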
* fix: Epoch Sync implementation is delayed
This version adds support for revoked SSH keys and revokes the leaked GitHub key. Seems like a thing to be better-safe-than-sorry about.
State Sync remains important functionality that needs to be tested, and it is expected to work on networks that are smaller than testnet/mainnet. TODO: run Nayduck before merging.
#8818 First off, we should always use `safe_add_gas` when adding gas, to avoid bugs and for code consistency, even in cases where we know there won't be an overflow. Moreover, in this particular case I actually wasn't able to strictly prove that the value never overflows, as it depends on the send fees set in the config. This is technically a protocol change. In practice we don't expect this addition to ever overflow, and if it actually does happen, I'd argue it's better to fail loudly rather than corrupt the state of the network by acting on the overflowed value of `total_gas_burnt`.
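A minimal sketch of the checked-addition pattern described above. The `Gas` alias and error type here are simplified stand-ins for nearcore's own, assuming `safe_add_gas` has roughly this shape: overflow surfaces as a loud error instead of a silently wrapped value.

```
// Simplified stand-ins; nearcore defines its own Gas alias and error type.
type Gas = u64;

#[derive(Debug)]
struct IntegerOverflowError;

// Checked addition: fail loudly on overflow rather than wrapping and then
// acting on a corrupted total_gas_burnt.
fn safe_add_gas(a: Gas, b: Gas) -> Result<Gas, IntegerOverflowError> {
    a.checked_add(b).ok_or(IntegerOverflowError)
}

fn main() -> Result<(), IntegerOverflowError> {
    let total_gas_burnt: Gas = 5_000_000_000_000;
    let send_fee: Gas = 2_500_000_000_000;

    // Propagate the error with `?` instead of assuming no overflow.
    let total_gas_burnt = safe_add_gas(total_gas_burnt, send_fee)?;
    println!("total_gas_burnt = {total_gas_burnt}");

    // An overflowing addition returns Err rather than a wrapped value.
    assert!(safe_add_gas(Gas::MAX, 1).is_err());
    Ok(())
}
```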
* protocol_upgrade_num_epochs
* reduce_wait_for_missing_block
* announce_account_horizon

epoch_sync_enabled is also useless; I will follow up on it separately, since untangling it is too involved to be included in the same PR. #7842
Using `get_state_response_header()` instead of reimplementing it.
@akhi3030 please take a look
To help us understand why creation of certain parts is as slow as 10 seconds, look at the corresponding `StateRecord`s for the first and the last node of the state part. Based on #8794