Minimal implementation of backward verification for IBC relayer #709
Conversation
Force-pushed from b4e7118 to 9920070
Force-pushed from 02d917b to 7e29ddc
Force-pushed from 2f26850 to 84a43de
Force-pushed from 7a8d97f to b8d2153
Force-pushed from a4e2edb to 551ba8c
Force-pushed from 2474ed9 to c333662
Force-pushed from 64da11e to 14ed02d
Codecov Report
```diff
@@            Coverage Diff            @@
##           master     #709     +/-  ##
=========================================
+ Coverage     53.6%    54.0%    +0.3%
=========================================
  Files          197      198       +1
  Lines        13682    13861     +179
  Branches      3482     3515      +33
=========================================
+ Hits          7341     7489     +148
- Misses        5942     5971      +29
- Partials       399      401       +2
```
Continue to review full report at Codecov.
Did a first pass here, looks awesome so far! Will take another look a little later today.
```diff
@@ -14,3 +14,6 @@ Cargo.lock
 
 # RPC probe results
 /rpc-probe/probe-results/
+
+# Proptest regressions dumps
+**/*.proptest-regressions
```
Aren't we supposed to keep the regressions in version control?
Good question. I would have thought not, since those are evidence of a test failing and would typically be fixed, and thus obsolete, before the PR gets merged. But perhaps I am understanding this wrong?
My understanding of proptest's failure persistence is that it re-runs any previous known failure cases before trying to generate any more novel ones, which provides additional assurance in future that the bug has been eliminated.
I haven't used proptest extensively though and may not be using it properly yet. In playing around with it today I found that it works well when you write failing tests for the first time (it persists the random seed it used to create that failure as claimed in the docs). When you then introduce a new failing test, it doesn't update the regressions file for some reason.
But if you delete the regressions folder entirely and both tests fail, it saves the seeds for both failing tests 🤷‍♂️
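For reference, a minimal illustration of how proptest's failure persistence works (the property and ranges here are made up for the example): when a case fails, proptest records the seed in a `*.proptest-regressions` file next to the test file and replays that exact case first on subsequent runs.

```rust
use proptest::prelude::*;

proptest! {
    // If this property ever fails, proptest persists the failing seed in a
    // `*.proptest-regressions` file and re-runs that case before generating
    // new random inputs on later test runs.
    #[test]
    fn addition_is_commutative(a in 0u32..1_000, b in 0u32..1_000) {
        prop_assert_eq!(a + b, b + a);
    }
}
```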
Oh, I had missed that. Really neat feature, actually, though it does seem a bit brittle in some cases.
Then it makes sense not to ignore those files and to leave it up to developers to decide whether to check in these regressions (one may not want to check in a regression for a broken test).
What do you think?
Yeah I'd say that, with property-based testing, any regressions anyone finds should be captured and committed.
Not checking it in is analogous to writing a test locally that fails on your machine (potentially showing that something's broken), but not adding it to version control, so the bug/broken code never gets found by others 😁
I think this looks great. I think that we'll pick up any further issues on the IBC side and we can fix them as we encounter them.
Closes: #361
Description
This PR introduces support for backward verification, i.e. verification of a block whose height is lower than the height of the highest trusted state.
Backward verification is implemented by taking a sliding window of length two between the trusted state and the target block, and checking whether the `last_block_id` hash of the higher block matches the computed hash of the lower block.
This feature is only available if the `unstable` flag of the `tendermint-light-client` crate is enabled. If the flag is disabled, then any attempt to verify a block whose height is lower than the highest trusted state will result in a `TargetLowerThanTrustedState` error.
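As a rough illustration of the sliding-window check described above, here is a minimal, self-contained sketch; the `Header` struct, its `hash` field (standing in for the computed header hash), and the `fetch_header` callback are simplified stand-ins, not the actual `tendermint-light-client` API:

```rust
/// Illustrative stand-in for a block header; not the real tendermint type.
#[derive(Clone)]
struct Header {
    height: u64,
    hash: [u8; 32],               // stand-in for the computed hash of this header
    last_block_id_hash: [u8; 32], // hash this header commits to for height - 1
}

/// Walk backwards from a trusted header down to `target_height`, checking
/// each adjacent pair: the higher block's last_block_id hash must match the
/// (computed) hash of the block directly below it.
fn verify_backwards(
    mut trusted: Header,
    target_height: u64,
    fetch_header: impl Fn(u64) -> Header,
) -> Result<Header, String> {
    while trusted.height > target_height {
        let lower = fetch_header(trusted.height - 1);
        if trusted.last_block_id_hash != lower.hash {
            return Err(format!("hash mismatch below height {}", trusted.height));
        }
        // Slide the window down: the lower block is now trusted.
        trusted = lower;
    }
    Ok(trusted)
}
```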
Tests
This PR comes with two sets of tests:
- Tests which use `testgen` to synthesize chains of blocks of various lengths, exercising both the happy path and failure cases by mutating either the `last_block_id` hash or some property of a block which influences the computed hash, in order to get a hash mismatch during backward verification (a toy version of this mutation idea is sketched below).
- An integration test in the `kvstore-test` crate which exercises the happy path against the `kvstore` proxy app in a running Tendermint node.
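To make the mutation idea concrete, here is a toy test built on the illustrative `Header`/`verify_backwards` sketch from the Description above (again, not the real `testgen` API): build a consistent chain, corrupt one block's hash, and expect backward verification to fail.

```rust
#[cfg(test)]
mod tests {
    use super::{verify_backwards, Header};

    /// Build a toy chain of `len` headers with consistent hash links.
    fn chain(len: u64) -> Vec<Header> {
        let mut headers: Vec<Header> = Vec::new();
        for h in 1..=len {
            let mut hash = [0u8; 32];
            hash[0] = h as u8; // toy stand-in for a real header hash
            let last = headers.last().map(|p| p.hash).unwrap_or([0u8; 32]);
            headers.push(Header { height: h, hash, last_block_id_hash: last });
        }
        headers
    }

    #[test]
    fn tampered_block_is_rejected() {
        let mut headers = chain(5);
        headers[2].hash = [0xFF; 32]; // mutate height 3 so its hash no longer matches
        let fetch = |h: u64| headers[(h - 1) as usize].clone();
        let trusted = headers[4].clone();
        assert!(verify_backwards(trusted, 1, fetch).is_err());
    }
}
```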
Performance
The algorithm implemented in this PR is very inefficient when the target block is much lower than the highest trusted state. For a trusted state at height `T` and a target block at height `H`, it will fetch and check the hashes of `T - H` blocks.
The specification details a strategy for minimizing the height of the trusted state the algorithm starts from. Implementing this strategy would require substantial changes to the `LightStore` API and is thus out of scope for this PR, but it should be tackled in a follow-up before this feature is stabilized (i.e. made available without the `unstable` flag).
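In essence, that follow-up strategy amounts to starting backward verification from the lowest trusted state that still sits above the target, rather than from the highest one. A hypothetical sketch, where the `trusted_heights` slice stands in for whatever query the revised `LightStore` API would expose:

```rust
/// Pick the lowest trusted height that is still above the target, so that
/// backward verification walks through as few blocks as possible.
/// `trusted_heights` is a hypothetical stand-in for a future LightStore query.
fn best_starting_height(trusted_heights: &[u64], target: u64) -> Option<u64> {
    trusted_heights.iter().copied().filter(|&h| h > target).min()
}

fn main() {
    // With trusted states at heights 10, 500, and 10_000 and a target of 100,
    // starting from 500 means checking 400 blocks instead of 9_900.
    assert_eq!(best_starting_height(&[10, 500, 10_000], 100), Some(500));
}
```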