docs/src/tutorial-first-steps.md
Kani is unlike the testing tools you may already be familiar with.
Much of testing is concerned with thinking of new corner cases that need to be covered.
With Kani, all the corner cases are covered from the start, and the new concern is narrowing down the scope to something manageable for the verifier.
Consider this first program (which can be found under [`first-steps-v1`](https://github.com/model-checking/kani/tree/main/docs/src/tutorial/first-steps-v1/)):
Think about the test harness you would need to write to test this function.
And if this function was more complicated—for example, if some of the branches
We can try to property test a function like this, but if we're naive about it (and consider all possible `u32` inputs), then it's unlikely we'll ever find the bug.
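To see why naive random testing is so unlikely to work, consider the odds: a bug triggered by exactly one of the 2^32 possible `u32` values (1023, as in the trace shown later). The sketch below is illustrative, not from the tutorial: the generator and trial count are arbitrary choices, and a tiny linear congruential generator stands in for a property-testing library's random source, since the standard library has no RNG.

```rust
// Naive random testing: draw 100,000 random u32 values and count how often
// we hit one specific "buggy" input.
fn lcg_next(state: &mut u64) -> u32 {
    // Small linear congruential generator; the constants are Knuth's MMIX
    // multiplier and increment.
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    (*state >> 32) as u32
}

fn main() {
    let mut state = 0xdead_beef_u64;
    let mut hits = 0u32;
    let trials = 100_000;
    for _ in 0..trials {
        if lcg_next(&mut state) == 1023 {
            hits += 1;
        }
    }
    // Each draw hits the bug with probability 1/2^32, so the expected number
    // of hits over 100k trials is about 0.000023: effectively never.
    println!("hits: {}", hits);
}
```

Raising the trial count doesn't help much: even a billion samples would find the bug less than a quarter of the time.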
Notably, we haven't had to write explicit assertions in our proof harness: by default, Kani will find a host of erroneous conditions which include a reachable call to `panic` or a failing `assert`.
Had Kani run successfully on this harness, that would have amounted to a mathematical proof that there is no input that could cause a panic in `estimate_size`.
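For reference, a proof harness of the kind used here can be sketched as follows. The function body below is a simplified stand-in (the real one lives in the linked `first-steps-v1` example); the `#[kani::proof]` attribute and `kani::any()` are the actual Kani APIs.

```rust
// Simplified stand-in for the tutorial's estimate_size: it panics on
// exactly one input out of 2^32.
fn estimate_size(x: u32) -> u32 {
    if x == 1023 {
        panic!("Oh no, a failing corner case!");
    }
    3
}

// The proof harness. The #[cfg(kani)] gate means plain `cargo build`
// skips it; `cargo kani` compiles and verifies it.
#[cfg(kani)]
#[kani::proof]
fn check_estimate_size() {
    // kani::any() produces a symbolic u32: Kani reasons about all 2^32
    // values at once, so the single failing input cannot hide.
    let x: u32 = kani::any();
    estimate_size(x);
}

fn main() {
    println!("{}", estimate_size(42)); // prints 3
}
```

Note that the harness contains no explicit `assert!`: reaching the `panic!` is itself a failure Kani reports.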
### Getting a trace
By default, Kani only reports failures, not how the failure happened.
In this running example, it seems obvious what we're interested in (the value of `x` that caused the failure) because we just have one unknown input at the start (similar to the property test), but that's kind of a special case.
In general, understanding how a failure happened requires exploring a full (potentially large) _execution trace_.
An execution trace is a record of exactly how a failure can occur.
Nondeterminism (like a call to `kani::any()`, which could return any value) can appear in the middle of an execution.
A trace is a record of exactly how execution proceeded, including concrete choices (like `1023`) for all of these nondeterministic values.

To get a trace for a failing check in Kani, run:
```
cargo kani --visualize
```
This command runs Kani and generates an HTML report that includes a trace.
Open the report with your preferred browser.
Under the "Errors" heading, click on the "trace" link to find the trace for this failure.

We can filter through this trace report to find the relevant lines.
A good rule of thumb is to search for either `kani::any()` or assignments to variables you're interested in.
At present, an unfortunate amount of generated code is present in the trace.
This code isn't a part of the Rust code you wrote, but is an internal implementation detail of how Kani runs proof harnesses.
Still, searching for `kani::any()` quickly finds us these lines:
```
let x: u32 = kani::any();
x = 1023u
```
Here we're seeing the line of code and the value assigned in this particular trace.
Like property testing, this is just one **example** of a failure.
To proceed, we recommend fixing the code to avoid this particular issue and then re-running Kani to see if you find more issues.
### Exercise: Try other failures
But Kani still catches the issue:
It seems a bit odd that our example function is tested against billions of possible inputs, when it really only seems to be designed to handle a few thousand.
Let's encode this fact about our function by asserting some reasonable upper bound on our input, after we've fixed our bug.
(New code available under [`first-steps-v2`](https://github.com/model-checking/kani/tree/main/docs/src/tutorial/first-steps-v2/)):
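As a sketch of the shape of that change (the full function is in the linked example; the body below is a simplified stand-in), the new assertion goes at the top of the function:

```rust
fn estimate_size(x: u32) -> u32 {
    // New in this version: state the implicit upper bound explicitly, so
    // out-of-range inputs fail immediately.
    assert!(x < 4096);
    if x < 1024 { 5 } else { 9 }
}

fn main() {
    println!("{}", estimate_size(100)); // prints 5
}
```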
Now we've explicitly stated our previously implicit expectation: this function should never be called with inputs that are too big.
But if we attempt to verify this modified function, we run into a problem:
```
[...]
RESULTS:
[...]
Check 3: estimate_size.assertion.1
- Status: FAILURE
- Description: "assertion failed: x < 4096"
[...]
VERIFICATION:- FAILED
```
What we want is a _precondition_ for `estimate_size`.
That is, something that should always be true every time we call the function.
By putting the assertion at the beginning, we ensure the function immediately fails if that expectation is not met.

But our proof harness will still call this function with any integer, even ones that just don't meet the function's preconditions.
That's... not a useful or interesting result.
We already know that won't work.
How do we go back to successfully verifying this function?
This is the purpose of writing a proof harness.
Much like property testing (which would also fail in this assertion), we need to set up our preconditions, call the function in question, then assert our postconditions.
Here's a revised example of the proof harness, one that now succeeds:
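A sketch of such a harness, consistent with the linked `first-steps-v2` example (the function body is again a simplified stand-in): `kani::assume` constrains the symbolic input, so Kani only explores values that satisfy the precondition.

```rust
fn estimate_size(x: u32) -> u32 {
    assert!(x < 4096); // the precondition from the previous section
    if x < 1024 { 5 } else { 9 }
}

// Only compiled under `cargo kani`; plain builds skip it.
#[cfg(kani)]
#[kani::proof]
fn verify_success() {
    let x: u32 = kani::any();
    kani::assume(x < 4096);   // set up preconditions
    let y = estimate_size(x); // call the function in question
    assert!(y < 10);          // assert postconditions
}

fn main() {
    println!("{}", estimate_size(1)); // prints 5
}
```

With the assumption in place, the precondition assertion inside `estimate_size` can no longer fire, so verification succeeds.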