# SMT context pruning (#3440, merged)

+5,447 −2,081
Commits:

- …n an SMT query, relative to what is explicitly opened
- …T encoding
- …lib to with with context_pruning
- …llow using_facts_from and pruning to compose
- …t include decls at prior levels: an exception is the top-most level for pruned query
- …ster with an unsat core, disabling context pruning for it for now
This PR provides support for SMT context pruning, enabled with `--ext context_pruning`.
In summary: When encoding a query to the SMT solver, with this option, F* computes the subset of the context (all facts from the dependence graph of the current query) that are reachable from the free names of the query, pruning away the rest. This results in significantly smaller SMT contexts, leading to smaller queries that the SMT solver can, in many cases, decide faster.
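As a rough illustration (this is not F*'s actual implementation; the fact names and data shapes here are hypothetical), the pruning can be modeled as a fixpoint reachability computation: starting from the free names of the query, a fact is kept once one of its trigger names becomes reachable, and the names it mentions then become reachable in turn.

```python
# Hypothetical sketch of reachability-based context pruning.
# Each fact has a set of "trigger" names (keep the fact once any of
# these is reachable) and a set of names it mentions (which become
# reachable once the fact is kept).

def prune_context(facts, query_free_names):
    """facts: dict fact_id -> (trigger_names, mentioned_names).
    Returns the subset of fact ids reachable from the query."""
    reachable = set(query_free_names)
    kept = set()
    changed = True
    while changed:  # iterate to a fixpoint
        changed = False
        for fid, (triggers, mentions) in facts.items():
            if fid not in kept and reachable & triggers:
                kept.add(fid)
                reachable |= mentions
                changed = True
    return kept

# Example: a query mentioning `append` pulls in the append lemma,
# which mentions `length`, which in turn pulls in the length lemma;
# the unrelated `map` lemma is pruned away.
facts = {
    "append_length": ({"append"}, {"length"}),
    "length_nonneg": ({"length"}, set()),
    "map_fusion":    ({"map"}, {"compose"}),
}
pruned = prune_context(facts, {"append"})
```

The fixpoint matters: `length_nonneg` is reachable only transitively, through a name introduced by a fact that was itself kept.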
The speedups are particularly noticeable for small queries: e.g., while in default mode a typical Pulse query can take 180--200ms, with context pruning it can take 1--2ms. The result is an end-to-end verification time speedup on Pulse files of anywhere from 2x to 10x; e.g., for Pulse.Lib.LinkedList, the default-mode verification time on my machine is around 75 seconds, while with context pruning it goes through in 8 seconds.
For details on how the context pruning algorithm works, see the comment in FStar.SMTEncoding.Pruning.fsti. Many thanks to @Chris-Hawblitzel and @mtzguido for discussions that led to this design.
Some further remarks:
### Incremental contexts and restart-solver

The SMT context is computed incrementally. That is, for a sequence of SMT queries `q1, q2, ...` in a file, we compute the context for `q1` and provide it to the solver; then push a context, check-sat on `q1`, and pop the context. For `q2`, the context of `q1` remains visible to the solver, so we compute the context for `q2` and provide to the solver only the additional facts reachable from `q2`, and so on. That is, for a sequence of queries, the context grows monotonically. This seems to lead to the best overall performance, since queries in a file share a lot of context that needs to be provided to the solver only once. However, later queries in a file may have larger contexts than strictly necessary. In this case, issuing a `#restart-solver` directive clears the context and initializes a new solver state, which then starts accumulating facts again. As such, `#restart-solver` is a useful mechanism for trimming the context of a proof. If you want a fresh, minimal context for every query (which may be useful for the stability of particularly tricky VCs), you can run with the `--z3refresh` option, and every query gets a fresh solver instance with its own query-specific context.
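The incremental, monotonically growing context can be sketched as follows (a hypothetical model, not F*'s actual solver-state code; `context_for` stands in for the reachability computation's output):

```python
# Hypothetical model of the monotonically growing SMT context: the
# solver is only ever sent facts it has not already seen, and a
# restart drops everything accumulated so far.

class PruningSolverState:
    def __init__(self):
        self.provided = set()   # facts already sent to the solver

    def context_for(self, needed_facts):
        """Return just the new facts to send for this query."""
        fresh = needed_facts - self.provided
        self.provided |= fresh  # the context grows monotonically
        return fresh

    def restart(self):
        """Model of #restart-solver: drop the accumulated context."""
        self.provided = set()

s = PruningSolverState()
first = s.context_for({"f1", "f2"})   # both facts are new: send both
second = s.context_for({"f2", "f3"})  # f2 already visible: send only f3
s.restart()
third = s.context_for({"f2"})         # after the restart, f2 is sent again
```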
### Enabling & disabling context pruning locally

You can set `--ext context_pruning` to enable it, and `--ext context_pruning=` to disable it (the latter assigns the empty value to the `context_pruning` key). This can be done within a file, with `#push-options` etc. Changing the value of this option implicitly causes the solver to be restarted for the next query.
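The enable/disable convention above can be modeled as a key-value store where a bare key gets a default value and an explicitly empty value reads as disabled (a hypothetical sketch; the helper names are not from F*'s codebase):

```python
# Hypothetical model of the `--ext key[=value]` convention:
# `--ext context_pruning` sets the key to a non-empty default, while
# `--ext context_pruning=` sets it to the empty string, i.e. disabled.

def set_ext(opts, setting):
    key, sep, value = setting.partition("=")
    # a bare key (no '=') gets a non-empty default value
    opts[key] = value if sep else "1"

def ext_enabled(opts, key):
    # unset and empty both count as disabled
    return opts.get(key, "") != ""

opts = {}
set_ext(opts, "context_pruning")    # enabled
set_ext(opts, "context_pruning=")   # empty value: disabled again
```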
### Solver state

This PR also revises the way solver states are managed internally, making the handling more modular, though this should not be user-visible.
### Fixing a bug in using_facts_from

`--using_facts_from` is still supported and composes with context pruning. For example, you can still pass `--ext context_pruning --using_facts_from '* -FStar.Seq.slice_slice'` to specifically exclude `slice_slice` even if it is reachable for a given query. In revising this, I fixed a bug in the handling of `using_facts_from` which would cause too many facts to be pruned away. So, if you're using `--using_facts_from`, you may see a difference in behavior even if you are not using `--ext context_pruning`.
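The composition of the two mechanisms can be sketched like this (a hypothetical model of the filter semantics, where later entries in the filter take precedence; the parsing details are an assumption, not F*'s exact implementation):

```python
# Hypothetical sketch of composing a --using_facts_from filter with
# pruning: a fact is kept only if it is both reachable AND allowed by
# the filter. The spec is a space-separated list like
# '* -FStar.Seq.slice_slice'; a leading '-' means exclude, and later
# entries override earlier ones.

def parse_filter(spec):
    rules = []
    for tok in spec.split():
        if tok.startswith("-"):
            rules.append((tok[1:], False))  # exclusion rule
        else:
            rules.append((tok, True))       # inclusion rule
    return rules

def allowed(rules, fact_name):
    verdict = False
    for prefix, include in rules:
        # '*' matches everything; otherwise match by name prefix
        if prefix == "*" or fact_name.startswith(prefix):
            verdict = include  # last matching rule wins
    return verdict

def final_context(reachable_facts, spec):
    rules = parse_filter(spec)
    return {f for f in reachable_facts if allowed(rules, f)}

# slice_slice is dropped even if reachable; everything else survives
kept = final_context(
    {"FStar.Seq.slice_slice", "FStar.Seq.Base.lemma_len_append"},
    "* -FStar.Seq.slice_slice",
)
```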
### Interaction with hints

Using `--ext context_pruning` disables `--use_hints`. This PR also removes support for the `--hint_hook` feature, which was an experimental addition last summer but remains unused, as far as I can tell.
### Usage in F* and other repos

- All files in the F* repo now use `--ext context_pruning`, except for the `assert false` in Queens.fst, which intentionally aims to prove false from the entire context.
- The Pulse repo works with `--ext context_pruning`, with significant speedups: expect a follow-up PR there.
- The GPUVer repo also works with `--ext context_pruning`, also with significant speedups.
- EverParse works with `--ext context_pruning`.
- HACL*: it works on ~847 build targets, but fails on a handful of files. Some of the files required running with `--z3refresh`, for tighter contexts. I didn't investigate the remaining failures further.
I would like to start evaluating this in other repositories, improving it, and rolling it out progressively.
I have checked that this PR works with other repos in the Check World build. There is one breakage in Steel, for a flaky proof in an old file, CSL.Semantics. I have admitted that for now; once this PR is merged, that file can use `--ext context_pruning` and the admit can be removed.
### Future improvements
There are many F*-specific quirks that are handled in FStar.SMTEncoding.Pruning.fst. I spent a lot of time tuning how top-level squashed facts and assumptions are included in the pruned context. This is largely to ensure compatibility with existing developments that (ab?)use top-level facts. In the future, I would like to have a stricter mode for top-level squashed facts, e.g., requiring users to explicitly introduce them using Pervasives.intro_ambient and providing explicit triggers.
Other improvements could be related to the handling of deeply embedded quantifiers, i.e., l_quant_interp assertions. There is an abundance of these and they get pulled into the context, often needlessly, though I retain them for backwards compatibility, since a non-trivial number of proofs need them. I would like to tighten the handling of this, perhaps revising how they are encoded so that newer developments do not have to bring these into the context.
Currently, context pruning removes only assertions; it does not prune the SMT declarations of function symbols. Those declarations still lead to some bloat in SMT queries, which I would like to remove.