
perf: add Regorus ACI benchmark tests #7298

Merged 1 commit on Jan 22, 2025

Conversation

@anderseknert (Member) commented Jan 22, 2025

Seeing the performance comparison of Regorus and OPA in the Regorus [docs](https://github.com/microsoft/regorus?tab=readme-ov-file#performance) made me curious to try it out myself. This PR adds the test files from their repo (MIT licensed) and two benchmark tests that evaluate the policy using the same parameters as the `opa eval` command from their example.

The benchmarks show that doing the full `opa eval` dance, where we build/compile the bundle from scratch, is about as fast/slow as their example shows (~30 milliseconds here, but that's without the overhead of the `eval` command-line tool).

However, building the bundle and query beforehand — as would be done in the case of a real OPA deployment — and measuring only evaluation time, shows something else entirely: ~0.09 milliseconds, or 90 microseconds.

So while the numbers they present aren't misleading in any way, they're not representative of OPA's evaluation performance as experienced running OPA as a server, or through the Go API.

That they compile the bundle in that short time is impressive nonetheless!

Saving the benchmark here as it's a good one to use for future improvements.

Signed-off-by: Anders Eknert <anders@styra.com>
@anderseknert (Member, Author):

Can also be tested using `opa bench` to benchmark only evaluation time:

```
opa bench --v0-compatible -b tests/aci -i tests/aci/input.json data.framework.mount_overlay=x
+-------------------------------------------------+------------+
| samples                                         |      12486 |
| ns/op                                           |      95293 |
| B/op                                            |      53898 |
| allocs/op                                       |       1140 |
+-------------------------------------------------+------------+
```

Although you should do so with OPA built from `main`, where this issue is fixed.

@johanfylling (Contributor) left a comment:

👍

@anderseknert merged commit 43590a4 into open-policy-agent:main on Jan 22, 2025 (28 checks passed) and deleted the regorus-benchmark branch January 22, 2025 17:44.