Benchmark number of ACIR opcodes per PR #3182

Closed
Savio-Sou opened this issue Oct 16, 2023 · 1 comment · Fixed by #3250
Labels
CI, enhancement (New feature or request)

Comments

@Savio-Sou
Collaborator

Problem

As surfaced in #2720 (comment), the CI does not currently check for changes in the number of ACIR opcodes generated by a set of benchmark Noir programs.

This leads to situations where a PR could increase the number of ACIR opcodes generated for certain Noir keywords / language features (and hence degrade compilation and proving speeds), and core contributors:

  • Would not notice until the community reports it
  • Would have to go through a painful debugging process to identify the PR that caused the bump

Happy Case

A set of benchmarking programs should be agreed upon, after which the CI should:

  1. Automatically benchmark the number of ACIR opcodes generated by the programs on each PR
  2. Highlight programs that came with considerable bumps (e.g. ≥ +10%) in the number of ACIR opcodes (a comparison sketch follows below)
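
For step 2, a minimal comparison sketch in Python could look like the following. It assumes two JSON reports mapping benchmark program names to their ACIR opcode counts, one built from the base branch and one from the PR branch; the file names, the report shape, and the 10% threshold are illustrative assumptions here, not the actual workflow that ended up in #3250.

```python
#!/usr/bin/env python3
"""Sketch: flag benchmark programs whose ACIR opcode count grew by >= 10%.

Assumes two JSON files mapping program name -> ACIR opcode count, one from
the base branch and one from the PR branch. File names, report shape, and
the threshold are illustrative assumptions.
"""
import json
import sys

THRESHOLD = 0.10  # flag programs with a >= +10% bump in ACIR opcodes


def load(path):
    with open(path) as f:
        # expected shape: {"program_name": opcode_count, ...}
        return json.load(f)


def main(base_path, pr_path):
    base, pr = load(base_path), load(pr_path)
    regressions = []
    for name, pr_count in pr.items():
        base_count = base.get(name)
        if not base_count:
            continue  # new or previously empty program; nothing to compare
        change = (pr_count - base_count) / base_count
        if change >= THRESHOLD:
            regressions.append((name, base_count, pr_count, change))

    for name, old, new, change in regressions:
        print(f"{name}: {old} -> {new} ACIR opcodes (+{change:.1%})")

    # Non-zero exit so the CI job fails and the regression surfaces on the PR.
    sys.exit(1 if regressions else 0)


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```

Usage (with hypothetical report paths): `python compare_acir.py base.json pr.json`.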

Alternatives Considered

No response

Additional Context

A significant portion of the work could be borrowed from Aztec's implementation: AztecProtocol/aztec-packages#2733

Would you like to submit a PR for this Issue?

No

Support Needs

No response

Savio-Sou added the CI and enhancement (New feature or request) labels on Oct 16, 2023
Savio-Sou added this to the Noir project on Oct 16, 2023
github-project-automation bot moved this to 📋 Backlog in Noir on Oct 16, 2023
@TomAFrench
Member

Aztec's implementation of benchmarking doesn't work for us as it's a very different use case. I have a side project to benchmark Noir programs which we could use with a couple of days of prioritisation.
