
SMT context pruning #3440

Merged
78 commits merged into master on Sep 5, 2024

Conversation

nikswamy (Collaborator) commented on Sep 5, 2024

This PR provides support for SMT context pruning, enabled with --ext context_pruning.

In summary: When encoding a query to the SMT solver, with this option, F* computes the subset of the context (all facts from the dependence graph of the current query) that are reachable from the free names of the query, pruning away the rest. This results in significantly smaller SMT contexts, leading to smaller queries that the SMT solver can, in many cases, decide faster.
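As a rough illustration of the idea (not F*'s actual implementation, which lives in FStar.SMTEncoding.Pruning.fst), the pruning can be modeled as a reachability fixpoint over the context: a fact is admitted once one of its guard names is reachable, and admitting it makes every name it mentions reachable in turn. The fact/trigger representation below is a hypothetical simplification:

```python
def prune_context(facts, query_free_names):
    """Toy model of context pruning as a reachability fixpoint.

    facts: dict mapping a fact name to a pair (triggers, mentions),
    where `triggers` is the set of names that guard the fact and
    `mentions` is the set of names the fact's body refers to.
    A fact is admitted once any trigger name is reachable; admitting
    it makes all of its mentioned names reachable as well.
    """
    reached = set(query_free_names)
    admitted = set()
    changed = True
    while changed:
        changed = False
        for fact_name, (triggers, mentions) in facts.items():
            if fact_name in admitted:
                continue
            if triggers & reached:          # some guard name is reachable
                admitted.add(fact_name)     # keep this fact in the context
                reached |= mentions         # its mentions become reachable
                changed = True
    return admitted
```

For example, a query mentioning only f would pull in a lemma about f, and transitively any lemma guarded by the names that lemma mentions, while facts about unrelated names are pruned away.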

The speedups are particularly noticeable for small queries: e.g., while in default mode a typical Pulse query can take 180--200ms, with context pruning it can take 1-2ms. The result is an end-to-end verification time speedup on Pulse files of anywhere from 2x to 10x; e.g., for Pulse.Lib.LinkedList, the default mode verification time on my machine is around 75 seconds, while with context pruning it goes through in 8 seconds.

For details on how the context pruning algorithm works, see the comment in FStar.SMTEncoding.Pruning.fsti. Many thanks to @Chris-Hawblitzel and @mtzguido for discussions that led to this design.

Some further remarks:

Incremental contexts and restart-solver

The SMT context is computed incrementally. That is, for a sequence of SMT queries in a file, q1, q2, ..., we compute the context for q1 and provide it to the solver; then push a context, check-sat on q1, and pop the context. For q2, the context of q1 remains visible to the solver, so we compute the context for q2 and provide the solver only the additional facts reachable from q2, and so on. That is, for a sequence of queries, the context grows monotonically. This seems to lead to the best overall performance, since queries in a file share a lot of context that then needs to be provided to the solver only once.

However, later queries in a file may have larger contexts than strictly necessary. In this case, issuing a #restart-solver directive clears the context and initializes a new solver state, which then starts accumulating facts again. As such, #restart-solver is a useful mechanism for trimming the context of a proof. If you want a fresh, minimal context for every query (which may be useful for the stability of particularly tricky VCs), you can run with the --z3refresh option, so that every query gets a fresh solver instance with its own query-specific context.
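The monotone accumulation described above can be sketched as follows. The class and method names here are illustrative, not F*'s actual solver API:

```python
class IncrementalSolverContext:
    """Toy model of monotone context accumulation across queries."""

    def __init__(self):
        self.sent = set()  # facts already provided to the solver

    def facts_for_query(self, pruned_context):
        """Return only the facts not yet sent. The solver retains
        everything from earlier queries, so the visible context
        grows monotonically until a restart."""
        delta = pruned_context - self.sent
        self.sent |= delta
        return delta

    def restart(self):
        """Models #restart-solver (or --z3refresh per query):
        a fresh solver state that accumulates facts from scratch."""
        self.sent = set()
```

A second query whose pruned context overlaps the first only sends the difference; after a restart, the next query repopulates the context from nothing.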

Enabling & Disabling context pruning locally

You can set --ext context_pruning to enable it, and --ext context_pruning= to disable it (the latter assigns the empty value to the context_pruning key). This can be done within a file, with #push-options and #pop-options. Changing the value of this option implicitly causes the solver to be restarted for the next query.
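The enable/disable convention above (a set key enables, the empty value disables) can be modeled with a tiny helper. This is a hypothetical sketch of the key/value semantics described in this PR, not F*'s real option API:

```python
def ext_enabled(ext_opts, key):
    """Toy model of --ext key/value semantics: a key set to a
    non-empty value is enabled; a key set to the empty string
    (as in --ext context_pruning=) or absent is disabled."""
    return bool(ext_opts.get(key, ""))
```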

Solver State

This PR also revises the way solver states are managed internally, making the handling more modular---though this should not be user-visible.

Fixing a bug in using_facts_from

--using_facts_from is still supported and composes with context pruning. For example, you can still use --ext context_pruning --using_facts_from '* -FStar.Seq.slice_slice' to specifically exclude slice_slice even when it is reachable for a given query.
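One way to picture the composition is as an include/exclude filter applied on top of the reachable set: a fact survives only if it is both reachable and selected. The last-match-wins reading of the selectors below is a simplifying assumption for illustration, not a statement of F*'s exact semantics:

```python
def compose_filters(reachable_facts, selectors):
    """Sketch of composing a using_facts_from-style filter with pruning.

    selectors: ordered list of (namespace_prefix, include_flag) pairs,
    mirroring a selector string like '* -FStar.Seq.slice_slice';
    here, the last matching selector wins. A fact is kept only if it
    is reachable (already pruned) AND selected by the filter.
    """
    def selected(name):
        keep = False
        for prefix, include in selectors:
            if prefix == "*" or name.startswith(prefix):
                keep = include
        return keep
    return {f for f in reachable_facts if selected(f)}
```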

In revising this, I fixed a bug in the handling of using_facts_from that caused too many facts to be pruned away. So, if you use --using_facts_from, you may see a difference in behavior even if you are not using --ext context_pruning.

Interaction with hints

Using --ext context_pruning disables --use_hints. This PR also removes support for the --hint_hook feature, which was an experimental addition last summer but remains unused, as far as I can tell.

Usage in F* and other repos

  • All files in the F* repo now use --ext context_pruning except:

    • FStar.ModifiesGen, which is very fragile and continues to use unsat cores
    • ulib/legacy/LowStarToFStarBuffer.fst which has some unstable proofs and continues to use unsat cores
    • One top-level assert false in Queens.fst, which intentionally tries to prove false from the entire context
    • examples/layered_effects/Z3EncodingIssue.fst which has a quirky proof relying on some unusual unfolding behavior
  • The Pulse repo works with --ext context_pruning with significant speedups: Expect a follow up PR there

  • The GPUVer repo also works with --ext context_pruning, also with significant speedups

  • EverParse works with --ext context_pruning

  • HACL*: it works on ~847 build targets but fails on a handful of files. Some files required running with --z3refresh, for tighter contexts. I didn't investigate the failures below further:

    • Vale.AsLowStar.Test.fst
    • Vale.SHA.SHA_helpers.fst
    • Vale.AES.GCM_helpers_BE.fst
    • Hacl.Poly1305.Field32xN.Lemmas0.fst
    • Hacl.Spec.K256.Field52.Lemmas5.fst
    • Hacl.Impl.Frodo.KEM.Encaps.fst
    • Hacl.Meta.Curve25519.fst

I would like to start evaluating this in other repositories, improving it, and rolling it out progressively.

I have checked that this PR works with other repos in the Check World build. There is one breakage in steel, for a flaky proof in an old file, CSL.Semantics; I have admitted that for now. But once this PR is merged, that file can use --ext context_pruning and the admit can be removed.

Future improvements

There are many F* specific quirks that are handled in FStar.SMTEncoding.Pruning.fst. I spent a lot of time tuning how top-level squashed facts and assumptions are included in the pruned context. This is largely to ensure compatibility with existing developments that (ab?)use top-level facts. In the future, I would like to have a stricter mode for top-level squashed facts, e.g., requiring users to explicitly introduce them using Pervasives.intro_ambient and providing explicit triggers.

Other improvements could be related to the handling of deeply embedded quantifiers, i.e., l_quant_interp assertions. There is an abundance of these and they get pulled into the context, often needlessly, though I retain them for backwards compat since a non-trivial number of proofs need them. I would like to tighten the handling of this, perhaps revising how they are encoded so that newer developments do not have to bring these into the context.

Currently, the context does not prune SMT declarations of function symbols---it only prunes assertions. So, the declarations still lead to some bloat in SMT queries. I would like to remove that.

@nikswamy nikswamy enabled auto-merge September 5, 2024 18:44
@nikswamy nikswamy merged commit b600d8a into master Sep 5, 2024
3 checks passed
@nikswamy nikswamy deleted the nik_context_pruning branch September 5, 2024 20:35