
Gracefully reject mining of already existing blocks #174

Open
dforsten opened this issue Jul 22, 2019 · 10 comments

@dforsten

With Honey Badger BFT all nodes mine the same block simultaneously, while syncing each other's chain.

Since some nodes will be faster than others it can happen that the block to be mined has already been synced from another validator, causing an attempt to mine a block that is already imported.

This triggers a verification error on the block to be mined, because the block is in fact identical to its parent, and it surfaces as a "TimestampOverflow" error.

Parity needs to be adjusted to expect this case and to gracefully reject adding a block to be mined if it is identical to the parent block - this is a common and expected situation under the Honey Badger BFT protocol.

@dforsten dforsten self-assigned this Jul 22, 2019
@afck
Collaborator

afck commented Jul 22, 2019

Good point!
But it sounds like the proper solution would be to select the correct parent, based on the block number and epoch: The block in epoch n should always set the one from epoch n - 1 as its parent, even if we've already imported a later one (and it should wait if we don't have the block from n - 1 yet—but I guess that can't happen).
Would we still receive the above error with that? Because then the new block would really be fully identical to the existing one (and not a child of it). And if it isn't, that would actually be a good reason to panic.
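The parent-selection rule described above could be sketched like this (a minimal sketch in Rust; `BlockHeader` and `parent_for_epoch` are hypothetical names for illustration, not taken from the Parity codebase):

```rust
use std::collections::HashMap;

// Hypothetical minimal header type; a real header has many more fields.
#[derive(Clone, Debug, PartialEq)]
struct BlockHeader {
    number: u64,
    hash: u64,
}

/// Pick the parent for the block of `epoch` strictly by number, ignoring
/// any later blocks we may already have imported.
fn parent_for_epoch(chain: &HashMap<u64, BlockHeader>, epoch: u64) -> Option<&BlockHeader> {
    let parent_number = epoch.checked_sub(1)?;
    chain.get(&parent_number)
}

fn main() {
    let mut chain = HashMap::new();
    chain.insert(4, BlockHeader { number: 4, hash: 0xa4 });
    chain.insert(5, BlockHeader { number: 5, hash: 0xa5 }); // synced from a faster validator
    // Even though block 5 is already imported, the block for epoch 5
    // still parents on block 4.
    let parent = parent_for_epoch(&chain, 5).expect("parent present");
    assert_eq!(parent.number, 4);
    println!("parent of epoch 5 block: #{}", parent.number);
}
```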

@dforsten
Author

I think conceptually it is as if the new block could come from any of the validator clients, the current validator client just being one of them.
In that sense the block which is about to be added should be treated the same as any block coming from another validator, and follow the same rules.
In this case the rule is to discard the mined block if it is identical to one which has already been imported.
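As a sketch of that rule (hypothetical `Block` and `import_block` types for illustration, not the actual Parity import path):

```rust
// Hypothetical minimal block type.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Block {
    number: u64,
    hash: u64,
}

#[derive(Debug, PartialEq)]
enum ImportResult {
    Imported,
    AlreadyInChain,
}

/// Apply the same rule to our own mined block as to blocks received from
/// other validators: if an identical block is already in the chain,
/// reject it gracefully instead of failing verification.
fn import_block(chain: &mut Vec<Block>, block: Block) -> ImportResult {
    if chain.iter().any(|b| b.hash == block.hash) {
        return ImportResult::AlreadyInChain;
    }
    chain.push(block);
    ImportResult::Imported
}

fn main() {
    let mut chain = Vec::new();
    let mined = Block { number: 1, hash: 0xb1 };
    // A faster validator delivers the block first...
    assert_eq!(import_block(&mut chain, mined), ImportResult::Imported);
    // ...so our own attempt to add the identical block is rejected gracefully.
    assert_eq!(import_block(&mut chain, mined), ImportResult::AlreadyInChain);
    println!("chain length: {}", chain.len());
}
```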

But I see what you mean: in principle the new pending block should be guaranteed to be parented on the block that was the chain head at the time the Honey Badger epoch started. That is not guaranteed right now, since the creation of the pending block is deferred until the transaction batch is available, by which point another validator client may already have sent us the block.

@dforsten
Author

So what we really have to do is create the pending block at the start of the epoch, and simply update it once the actual transactions to be included are available.
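That flow might look roughly like this (hypothetical `PendingBlock` type; the real pending block carries far more state):

```rust
/// Sketch of the proposed flow: open an empty pending block as soon as the
/// hbbft epoch starts, pinning its parent immediately, and fill in the
/// transactions once the agreed batch arrives.
struct PendingBlock {
    epoch: u64,
    parent_hash: u64,
    txs: Vec<u64>, // transaction ids stand in for real transactions
}

impl PendingBlock {
    /// Called at the start of the epoch; the parent is fixed here.
    fn open(epoch: u64, parent_hash: u64) -> Self {
        PendingBlock { epoch, parent_hash, txs: Vec::new() }
    }

    /// Called when the hbbft transaction batch becomes available; only the
    /// contents change, never the parent.
    fn set_batch(&mut self, batch: Vec<u64>) {
        self.txs = batch;
    }
}

fn main() {
    let mut pending = PendingBlock::open(5, 0xa4);
    // The batch arrives later, possibly after other blocks were imported.
    pending.set_batch(vec![10, 11, 12]);
    assert_eq!(pending.parent_hash, 0xa4);
    assert_eq!(pending.txs.len(), 3);
    println!("epoch {} pending block has {} txs", pending.epoch, pending.txs.len());
}
```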

@dforsten
Author

What I need to investigate is what will happen if a block from another validator gets imported while there is a pending block - will the imported block be ignored in that case?

@afck
Collaborator

afck commented Jul 22, 2019

I'm not sure about the empty pending block… but I think the Parity code has something like this already, doesn't it? So if it fits better into the codebase, that makes sense, of course.

> What I need to investigate is what will happen if a block from another validator gets imported while there is a pending block - will the imported block be ignored in that case?

I'm pretty sure it won't be ignored: In Ethereum if we're currently mining block n, and we receive another, valid block n, we'll stop our mining work and start mining n + 1 instead.

Regarding the block author:
Does the current code (in the hbbft branch, I mean) still set the block author to our current node? Because it mustn't: If two validators create otherwise identical blocks with different authors, they would have different hashes, wouldn't they? So the chain would diverge at this point, and we wouldn't be able to generate a seal using threshold signatures.
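A toy illustration of why the author field matters for hash agreement (hypothetical `Header` type, using Rust's `DefaultHasher` as a stand-in for the real header hash):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical header with only the fields needed to show the point.
#[derive(Hash)]
struct Header {
    parent_hash: u64,
    number: u64,
    author: u64,
}

/// Hash all header fields, as the real header hash covers the author field.
fn header_hash(h: &Header) -> u64 {
    let mut hasher = DefaultHasher::new();
    h.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let a = Header { parent_hash: 0xa4, number: 5, author: 1 };
    let b = Header { parent_hash: 0xa4, number: 5, author: 2 };
    // Otherwise identical blocks with different authors hash differently,
    // so the chain would diverge and threshold signing would fail...
    assert_ne!(header_hash(&a), header_hash(&b));
    // ...while a fixed author (e.g. 0) keeps the headers identical.
    let z1 = Header { parent_hash: 0xa4, number: 5, author: 0 };
    let z2 = Header { parent_hash: 0xa4, number: 5, author: 0 };
    assert_eq!(header_hash(&z1), header_hash(&z2));
    println!("hashes diverge with distinct authors: {}", header_hash(&a) != header_hash(&b));
}
```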

@dforsten
Author

Let me check, but I think the author is simply set to 0 in the hbbft branch.

@dforsten
Author

Yes, the author is 0 currently.

I can send transactions and produce blocks in a multi-node setup without problems - and it runs stable except for those occasional "TimestampOverflow" errors when a block is imported from another node before it is mined.

@dforsten
Author

> I'm pretty sure it won't be ignored: In Ethereum if we're currently mining block n, and we receive another, valid block n, we'll stop our mining work and start mining n + 1 instead.

That is an interesting point - so maybe the best way would be to just cancel the current hbbft epoch and/or the current pending block if a new block has been imported.
We probably have to be super careful not to run into race conditions when hbbft processing happens in a different thread from the block import notifications, or we will see the same error again, just very infrequently.
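One way to make that cancellation check race-free is an atomic best-block counter shared between the import-notification thread and the mining thread (a sketch with hypothetical names, not the actual Parity types):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Sketch, assuming the hbbft epoch produces a block at a known number:
/// track the best imported block atomically so the mining thread can tell
/// that its epoch became stale when another validator's block arrived.
struct EpochTracker {
    best_block: AtomicU64,
}

impl EpochTracker {
    fn new() -> Self {
        EpochTracker { best_block: AtomicU64::new(0) }
    }

    /// Called from the import-notification thread.
    fn on_block_imported(&self, number: u64) {
        self.best_block.fetch_max(number, Ordering::SeqCst);
    }

    /// Checked by the mining thread before sealing the epoch's block; a
    /// stale epoch is simply cancelled instead of producing a duplicate.
    fn epoch_is_stale(&self, epoch_block_number: u64) -> bool {
        self.best_block.load(Ordering::SeqCst) >= epoch_block_number
    }
}

fn main() {
    let tracker = EpochTracker::new();
    // We are working on the block for epoch 5...
    assert!(!tracker.epoch_is_stale(5));
    // ...but another validator's block 5 gets imported first.
    tracker.on_block_imported(5);
    assert!(tracker.epoch_is_stale(5));
    println!("epoch 5 stale: {}", tracker.epoch_is_stale(5));
}
```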

@afck
Collaborator

afck commented Jul 22, 2019

You're right, we should probably just cancel the epoch. The hbbft crate will need an API for that first, though: poanetwork/hbbft#413

@dforsten
Author

Note to self:
To avoid race conditions I have to absolutely ensure that the notification about imported blocks is sent before the blocks are imported, and then carefully synchronize the notification thread with any thread that may be attempting to create a new pending block at the same time.
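The synchronization described in that note could be sketched with a single mutex guarding the chain head (hypothetical types; the real client has a far more elaborate locking scheme):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical chain-head state shared between threads.
struct ChainHead {
    number: u64,
}

/// Runs on the import-notification thread.
fn on_import_notification(head: &Mutex<ChainHead>, imported: u64) {
    let mut h = head.lock().unwrap();
    if imported > h.number {
        h.number = imported;
    }
}

/// Returns the number of the pending block to create. Holding the lock
/// while reading the head means no import can slip in between reading the
/// parent and committing to it.
fn next_pending_block(head: &Mutex<ChainHead>) -> u64 {
    let h = head.lock().unwrap();
    h.number + 1
}

fn main() {
    let head = Arc::new(Mutex::new(ChainHead { number: 4 }));
    let importer = {
        let head = Arc::clone(&head);
        thread::spawn(move || on_import_notification(&head, 5))
    };
    importer.join().unwrap();
    // After the import is processed, the pending block moves to 6.
    assert_eq!(next_pending_block(&head), 6);
    println!("next pending block: {}", next_pending_block(&head));
}
```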
