
Add is_syncing to /node/syncing #121

Merged: 1 commit, merged into ethereum:master on Feb 2, 2021
Conversation

paulhauner (Contributor)

Proposed Changes

Adds the is_syncing: bool field to the ../node/syncing endpoint, as per the reasons documented here: #77 (comment)
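For concreteness, a rough sketch of the response shape with the new field. Rust is used only for illustration here; the struct name and integer types are invented, and the actual schema is defined by the OpenAPI spec.

```rust
// Illustrative sketch of the /node/syncing response after this change.
// Names and types are hypothetical; the spec's OpenAPI schema is authoritative.
struct SyncingStatus {
    // Slot of the node's current head.
    head_slot: u64,
    // Distance, in slots, that the node considers itself behind.
    sync_distance: u64,
    // New in this PR: an explicit flag, so consumers don't have to
    // infer sync state from sync_distance alone.
    is_syncing: bool,
}
```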

Additional Info

I would like to change head_slot to refer to the block at the head of the canonical chain, for consistency with the other uses of "head" in this API. I've left that out of this PR since it's a non-backwards-compatible change and I'd like to get this merged regardless of that change.

@ajsutton (Contributor) left a comment

Big fan of this change (and the suggested change to make head slot mean the slot of the best block).

@mpetrunic requested a review from djrtwo on January 13, 2021
@mcdee (Contributor) commented Jan 13, 2021

From the description in the related comment "I might only have a few blocks left to sync and I'm synced-enough, so lets turn on all the duties endpoints and health endpoint" this doesn't sound like it is an is_syncing flag, it sounds like some sort of ready_to_serve flag. Would it make sense to rename and clarify appropriately?

@paulhauner (Contributor, Author)

> From the description in the related comment "I might only have a few blocks left to sync and I'm synced-enough, so lets turn on all the duties endpoints and health endpoint" this doesn't sound like it is an is_syncing flag, it sounds like some sort of ready_to_serve flag. Would it make sense to rename and clarify appropriately?

I see the appeal, but I feel like we'd end up just doing a s/syncing/ready_to_serve/ across the entire repo. Perhaps I haven't understood correctly, though.

The concept of a node being "in sync" is pretty complex; you can't say you're syncing just because someone on the network says they have a block for you, you need to be doing some sort of majority-ish heuristics. This makes me feel that saying "I'm synced" when there are still a few blocks left isn't really a lie, because "being synced" is already rather subjective.
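To make the "majority-ish heuristics" point concrete, here is one hypothetical way a node could decide it is syncing. Every name and threshold below is invented for illustration; no client is claimed to work this way.

```rust
// Hypothetical heuristic: only report "syncing" if a majority of peers
// claim to be meaningfully ahead of us. A single peer advertising a high
// slot shouldn't flip us into the syncing state.
fn is_syncing(our_head_slot: u64, peer_head_slots: &[u64], tolerance: u64) -> bool {
    if peer_head_slots.is_empty() {
        // No peers to compare against; claiming "syncing" would be arbitrary.
        return false;
    }
    let ahead = peer_head_slots
        .iter()
        .filter(|&&slot| slot > our_head_slot + tolerance)
        .count();
    // Majority vote: more than half of peers say we are behind.
    ahead * 2 > peer_head_slots.len()
}
```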

@mpetrunic (Collaborator)

@paulhauner I see that you are calculating sync_distance as sync_distance = self.slot_clock.now() - chain.head_info().slot, which makes sync_distance a bit useless since you already get the head slot and you know the wall slot. When specifying that field I was aiming for sync_distance = bestPeerHeadSlot - chain.head_info().slot, since in the case of a lot of skipped slots this would indicate you are not synced.
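The two candidate definitions, side by side as a sketch. The input names (wall_clock_slot, our_head_slot, best_peer_head_slot) are hypothetical, not real client APIs.

```rust
// What the PR computes: distance from the wall clock. Redundant if the
// caller already knows head_slot and the current slot.
fn sync_distance_wall_clock(wall_clock_slot: u64, our_head_slot: u64) -> u64 {
    wall_clock_slot.saturating_sub(our_head_slot)
}

// What @mpetrunic intended: distance from the best peer's head, which
// stays near zero across skipped slots, since no peer has those blocks.
fn sync_distance_best_peer(best_peer_head_slot: u64, our_head_slot: u64) -> u64 {
    best_peer_head_slot.saturating_sub(our_head_slot)
}
```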

@paulhauner (Contributor, Author)

> I was aiming for that field to be sync_distance = bestPeerHeadSlot - chain.head_info().slot

I agree that this would be nice, but consider the case where you're syncing two chains at once. There's nothing to say that your head is on the same chain as your peer's best head. This makes the basic definition of bestPeerHeadSlot - chain.head_info().slot insufficient in some scenarios.

> this would indicate you are not synced.

Since we have the is_syncing field, this is not the case: we are explicit about whether we are syncing, regardless of sync_distance.

Perhaps we can revisit the rest of this struct in another issue/PR, @mpetrunic? I agree it's not ideal at the moment. However, adding this is_syncing field should be trivial for clients: if they don't already have a concept of it internally, they can declare is_syncing = sync_distance > 0.
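A minimal sketch of that fallback, assuming the client already reports sync_distance (the function name is illustrative):

```rust
// A client with no internal notion of "syncing" can derive the new field
// directly from the sync_distance it already computes.
fn derive_is_syncing(sync_distance: u64) -> bool {
    sync_distance > 0
}
```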

@djrtwo djrtwo merged commit e36b026 into ethereum:master Feb 2, 2021