CLEVER is a benchmark suite for end-to-end code generation and formal verification in Lean 4, adapted from the HumanEval dataset. The goal is to move beyond test-case-driven evaluation by requiring models to generate not only implementations but also formal specifications and proofs — all verifiable by Lean’s type checker.
CLEVER evaluates models across a staged verification pipeline:
- Task 1: Generate a formal specification from a natural language description.
- Task 2: Prove the generated specification semantically equivalent to a hidden ground-truth specification.
- Task 3: Synthesize a Lean implementation that satisfies the specification.
- Task 4: Prove the implementation's correctness against the reference specification.
Each step is independently verified, allowing fine-grained diagnosis of model performance.
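To make the four tasks concrete, here is a minimal, hypothetical Lean 4 sketch for a toy addition problem. The names (`add_spec`, `add_impl`, and so on), the toy specification, and the proof style are illustrative assumptions only, not CLEVER's actual problem format; it also assumes a recent Lean 4 toolchain where the `omega` tactic is available.

```lean
-- Hypothetical illustration of the four kinds of artifacts CLEVER verifies,
-- shown for a toy "add two integers" problem (not an actual benchmark entry).

-- Task 1: a formal specification, stated as a Prop rather than executable code.
def add_spec (x y result : Int) : Prop :=
  result = x + y

-- Task 2: prove the generated spec equivalent to a (normally hidden) reference spec.
def add_spec_reference (x y result : Int) : Prop :=
  result - y = x

theorem spec_equiv (x y result : Int) :
    add_spec x y result ↔ add_spec_reference x y result := by
  simp only [add_spec, add_spec_reference]
  constructor <;> intro h <;> omega

-- Task 3: a candidate implementation.
def add_impl (x y : Int) : Int :=
  x + y

-- Task 4: prove the implementation satisfies the specification.
theorem add_impl_correct (x y : Int) : add_spec x y (add_impl x y) := by
  simp [add_spec, add_impl]
```

Stating the specification as a `Prop` rather than an executable function means a model cannot simply echo the spec back as an implementation; it has to reason about what the proposition means.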
Key features of the benchmark include:
- Non-computable specifications that prevent implementation leakage and enforce semantic reasoning.
- Staged certification design to isolate failures across spec generation, implementation, and proof.
- Support for symbolic proof search with agents such as COPRA, enabling deeper analysis of proof automation.
To evaluate your LLM-generated solutions against the CLEVER benchmark, use the Python API to package and submit them as `LeanProblemView` objects. Each submission is compiled and verified using Lean 4, and results are returned as structured `ValidationResult` objects.
- Installing the Python package: you can simply run `pip install clever-bench` to install the package from PyPI. Alternatively, you can install from source by cloning the repository and running `pip install -e .`. After installing the package, run `clever-bench-install` (on Linux this works with no prerequisite of installing Lean 4). If you are not on Linux, you will have to install the Lean 4 dependency yourself. After installing Lean 4, run `cd <path-to-clever-package-installation/lean4-dir> && lake exe cache get && lake build` (or the equivalent command) to build the Lean 4 environment; this is a one-time step.
- Load the benchmark:

  ```python
  from clever_bench.benchmark import Benchmark

  benchmark = Benchmark(is_sample=True)  # or is_sample=False for actual HumanEval problems in `src/lean4/human_eval`
  benchmark.load_all()
  ```
- Select a task (e.g., proof generation):

  ```python
  from clever_bench.task import ProblemViewTask, TaskComponent

  task = ProblemViewTask(benchmark, TaskComponent.PROOF_GENERATION)
  ```
- Get a problem and fill in your solution:

  ```python
  # The view hides the staged problem details and only shows the fields
  # relevant to the selected task for the problem with id 3.
  problem = task.get_view(3)
  problem.implementation = "<your Lean implementation>"
  problem.correctness_proof = "<your proof>"
  ```
- Submit the solution:

  ```python
  import asyncio

  result = asyncio.run(task.submit_async(problem, timeout_in_ms=30000))
  print(result.correctness_ok, result.error_message)
  ```
The returned `ValidationResult` will tell you whether your implementation compiled and whether the proofs were accepted by Lean (i.e., no `sorry`).
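As a small example, you might branch on the two fields shown above (the object may carry additional fields, but only `correctness_ok` and `error_message` appear in this walkthrough):

```python
# Minimal inspection of the ValidationResult returned by submit_async.
if result.correctness_ok:
    print("Implementation compiled and the proof was accepted by Lean.")
else:
    print(f"Verification failed: {result.error_message}")
```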
CLEVER also supports multi-stage verification: the Python API automatically hides irrelevant fields during each task (e.g., only showing the natural language description field for spec generation), enabling clean task-specific prompting and evaluation.
This process allows you to validate your solutions programmatically—whether you're using LLMs, proof agents, or writing Lean by hand.
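Putting the steps together, a minimal end-to-end driver might look like the following sketch. The placeholder Lean strings and the problem id are stand-ins you would replace with real model output.

```python
import asyncio

from clever_bench.benchmark import Benchmark
from clever_bench.task import ProblemViewTask, TaskComponent

# Load the sample split and select the proof-generation task.
benchmark = Benchmark(is_sample=True)
benchmark.load_all()
task = ProblemViewTask(benchmark, TaskComponent.PROOF_GENERATION)

# Fetch a problem view, fill in the model-generated Lean artifacts, and submit.
problem = task.get_view(3)
problem.implementation = "<your Lean implementation>"
problem.correctness_proof = "<your proof>"

result = asyncio.run(task.submit_async(problem, timeout_in_ms=30000))
print(result.correctness_ok, result.error_message)
```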
You can try our baselines here: Baseline Provers
View the latest results and submit your own: CLEVER Leaderboard
We welcome submissions from the community! To add your results to the leaderboard:
- Evaluate your approach using the Python API described above
- Document your methodology with a preprint or publication
- Submit your results by contacting us with your evaluation metrics and methodology details
For submissions, please contact: amitayush@utexas.edu
To work with the Lean 4 project directly:
- Install Lean 4: https://leanprover.github.io/lean4/doc/quickstart.html
- Run `lake build clever` to build the project after changing the directory to `src/lean4`, or use VS Code with the Lean 4 extension to build the project.
- All the formalizations are in the `src/lean4/human_eval` directory.
- Some extra formalizations are provided in the `src/lean4/sample_examples/` directory, some of which were used in prompts created for evaluating the benchmark.
You can find the paper describing CLEVER at https://arxiv.org/abs/2505.13938.
```bibtex
@inproceedings{thakur2025clever,
  title={CLEVER: A Curated Benchmark for Formally Verified Code Generation},
  author={Amitayush Thakur and Jasper Lee and George Tsoukalas and Meghana Sistla and Matthew Zhao and Stefan Zetzsche and Greg Durrett and Yisong Yue and Swarat Chaudhuri},
  booktitle={39th Conference on Neural Information Processing Systems (NeurIPS 2025)},
  year={2025}
}
```