opa eval: add '--count=#' flag

`opa eval --count=# --profile` will now evaluate the query # times,
and display aggregated results for metrics and the expression-time
profile.

Before, `opa eval --profile` would yield output like this (JSON):

    "metrics": {
      "timer_rego_external_resolve_ns": 233,
      "timer_rego_load_files_ns": 138866,
      "timer_rego_module_compile_ns": 432654,
      "timer_rego_module_parse_ns": 81454,
      "timer_rego_query_compile_ns": 54892,
      "timer_rego_query_eval_ns": 135624,
      "timer_rego_query_parse_ns": 3886
    },
    "profile": [
      {
        "total_time_ns": 75705,
        "num_eval": 4,
        "num_redo": 2,
        "location": {
          "file": "t.rego",
          "row": 8,
          "col": 2
        }
      },

Now, both of these are replaced by "aggregated_" variants that
include some statistics:

    "aggregated_metrics": {
      "timer_rego_external_resolve_ns": {
        "75%": 1618.5,
        "90%": 1933.8000000000002,
        "95%": 1954,
        "99%": 1954,
        "99.9%": 1954,
        "99.99%": 1954,
        "count": 10,
        "max": 1954,
        "mean": 1137.1,
        "median": 1140,
        "min": 311,
        "stddev": 514.9278493148337
      },

and

    "aggregated_profile": [
      {
        "total_time_ns_stats": {
          "75%": 63369.75,
          "90%": 69523.5,
          "95%": 69691,
          "99%": 69691,
          "99.9%": 69691,
          "99.99%": 69691,
          "count": 10,
          "max": 69691,
          "mean": 39030.9,
          "median": 53758.5,
          "min": 3390,
          "stddev": 27635.790954666016
        },
        "num_eval": 1,
        "num_redo": 1,
        "location": {
          "file": "t.rego",
          "row": 9,
          "col": 2
        }
      }
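
Aggregation is keyed on the expression's source location: the `total_time_ns`
value each run records for the same file/row/col lands in one sample set,
which the `total_time_ns_stats` object then summarizes. A rough sketch of that
grouping step (the types are illustrative stand-ins, not OPA's profiler API):

    package main

    import "fmt"

    // Illustrative stand-ins for the profiler's per-expression records; these
    // are not the actual types from github.com/open-policy-agent/opa/profiler.
    type location struct {
        File     string
        Row, Col int
    }

    type exprStats struct {
        Location    location
        TotalTimeNS int64
    }

    // aggregate takes one profile per run and groups the total_time_ns samples
    // by expression location; those per-location sample sets are what the
    // "total_time_ns_stats" objects above summarize.
    func aggregate(runs ...[]exprStats) map[location][]int64 {
        out := map[location][]int64{}
        for _, run := range runs {
            for _, es := range run {
                out[es.Location] = append(out[es.Location], es.TotalTimeNS)
            }
        }
        return out
    }

    func main() {
        run1 := []exprStats{{Location: location{"t.rego", 9, 2}, TotalTimeNS: 3390}}
        run2 := []exprStats{{Location: location{"t.rego", 9, 2}, TotalTimeNS: 69691}}
        for loc, samples := range aggregate(run1, run2) {
            fmt.Printf("%s:%d:%d -> %v\n", loc.File, loc.Row, loc.Col, samples)
        }
    }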

The table format has been adjusted as well, and it only displays a subset of
the statistics gathered: the 99.9 and 99.99 percentiles in particular hardly
matter for the sample sizes I'd expect to be used here, since they will almost
certainly match "max".
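
For intuition: with an interpolation scheme that places percentile p at sample
position p*(n+1) and clamps positions past the last sample to the maximum (an
assumption inferred from the output above, not OPA's documented behavior), any
p >= n/(n+1) reports "max"; for n=10 that is every percentile from roughly
90.9% upward. A sketch with made-up samples:

    package main

    import (
        "fmt"
        "sort"
    )

    // percentile interpolates at sample position p*(n+1) and clamps past the
    // last sample. The scheme is assumed from the output above, not taken
    // from OPA's source.
    func percentile(samples []int64, p float64) float64 {
        n := len(samples)
        s := append([]int64(nil), samples...)
        sort.Slice(s, func(i, j int) bool { return s[i] < s[j] })
        pos := p * float64(n+1)
        switch {
        case pos < 1:
            return float64(s[0])
        case pos >= float64(n): // p >= n/(n+1): clamped to the maximum
            return float64(s[n-1])
        default:
            lo, hi := float64(s[int(pos)-1]), float64(s[int(pos)])
            return lo + (pos-float64(int(pos)))*(hi-lo)
        }
    }

    func main() {
        // Made-up samples; only min (311) and max (1954) match the
        // timer_rego_external_resolve_ns output above.
        samples := []int64{311, 650, 900, 1100, 1180, 1260, 1500, 1700, 1752, 1954}
        for _, p := range []float64{0.75, 0.90, 0.95, 0.99, 0.999, 0.9999} {
            fmt.Printf("%7.2f%% -> %.1f\n", p*100, percentile(samples, p))
        }
        // Everything from 95% up prints 1954.0, identical to "max".
    }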

The output is less unwieldy now:

    +------------------------------+---------+----------+---------------+----------------+---------------+
    |            METRIC            |   MIN   |   MAX    |     MEAN      |      90%       |      99%      |
    +------------------------------+---------+----------+---------------+----------------+---------------+
    | timer_rego_load_files_ns     | 349969  | 2549399  | 1.4760619e+06 | 2.5312689e+06  | 2.549399e+06  |
    | timer_rego_module_compile_ns | 1087507 | 24537496 | 1.120074e+07  | 2.41699473e+07 | 2.4537496e+07 |
    | timer_rego_module_parse_ns   | 275531  | 1915263  | 1.126406e+06  | 1.9016968e+06  | 1.915263e+06  |
    | timer_rego_query_compile_ns  | 61663   | 64395    | 63062.5       | 64374.1        | 64395         |
    | timer_rego_query_eval_ns     | 161812  | 1198092  | 637754        | 1.1846622e+06  | 1.198092e+06  |
    | timer_rego_query_parse_ns    | 6078    | 6078     | 6078          | 6078           | 6078          |
    +------------------------------+---------+----------+---------------+----------------+---------------+
    +----------+-------------+-------------+-------------+-------------+----------+----------+-----------------+
    |   MIN    |     MAX     |    MEAN     |     90%     |     99%     | NUM EVAL | NUM REDO |    LOCATION     |
    +----------+-------------+-------------+-------------+-------------+----------+----------+-----------------+
    | 43.875µs | 26.135469ms | 11.494512ms | 25.746215ms | 26.135469ms | 1        | 1        | data.rbac.allow |
    | 21.478µs | 211.461µs   | 98.102µs    | 205.72µs    | 211.461µs   | 1        | 1        | rbac.rego:13    |
    | 19.652µs | 123.537µs   | 73.161µs    | 122.75µs    | 123.537µs   | 1        | 1        | rbac.rego:40    |
    | 12.303µs | 117.277µs   | 61.59µs     | 116.733µs   | 117.277µs   | 2        | 1        | rbac.rego:50    |
    | 12.224µs | 93.214µs    | 51.289µs    | 92.217µs    | 93.214µs    | 1        | 1        | rbac.rego:44    |
    | 5.561µs  | 84.121µs    | 43.002µs    | 83.469µs    | 84.121µs    | 1        | 1        | rbac.rego:51    |
    | 5.56µs   | 71.712µs    | 36.545µs    | 71.158µs    | 71.712µs    | 1        | 0        | rbac.rego:45    |
    | 4.958µs  | 66.04µs     | 33.161µs    | 65.636µs    | 66.04µs     | 1        | 2        | rbac.rego:49    |
    | 4.326µs  | 65.836µs    | 30.461µs    | 65.083µs    | 65.836µs    | 1        | 1        | rbac.rego:6     |
    | 3.948µs  | 43.399µs    | 24.167µs    | 43.055µs    | 43.399µs    | 1        | 2        | rbac.rego:55    |
    +----------+-------------+-------------+-------------+-------------+----------+----------+-----------------+

Fixes #3651.

Signed-off-by: Stephan Renatus <stephan.renatus@gmail.com>
srenatus committed Jul 21, 2021
1 parent 452bc83 commit 3de4bbd
Showing 7 changed files with 294 additions and 66 deletions.
8 changes: 4 additions & 4 deletions cmd/bench.go
@@ -13,11 +13,10 @@ import (
"sort"
"testing"

"github.com/open-policy-agent/opa/compile"

"github.com/olekukonko/tablewriter"
"github.com/spf13/cobra"

"github.com/open-policy-agent/opa/compile"
"github.com/open-policy-agent/opa/internal/presentation"
"github.com/open-policy-agent/opa/metrics"
"github.com/open-policy-agent/opa/rego"
@@ -121,10 +120,11 @@ func benchMain(args []string, params benchmarkCommandParams, w io.Writer, r benc

ctx := context.Background()
var benchFunc func(context.Context, ...rego.EvalOption) error
rg := rego.New(ectx.regoArgs...)

if !params.partial {
// Take the eval context and prepare anything else we possibly can before benchmarking the evaluation
pq, err := ectx.r.PrepareForEval(ctx)
pq, err := rg.PrepareForEval(ctx)
if err != nil {
errRender := renderBenchmarkError(params, err, w)
return 1, errRender
@@ -141,7 +141,7 @@
}
} else {
// As with normal evaluation, prepare as much as possible up front.
pq, err := ectx.r.PrepareForPartial(ctx)
pq, err := rg.PrepareForPartial(ctx)
if err != nil {
errRender := renderBenchmarkError(params, err, w)
return 1, errRender
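
The point of threading `regoArgs` through instead of a prepared `*rego.Rego`:
each `--count` run constructs and prepares a fresh object, so state cached for
one run cannot skew the timings of the next. A minimal sketch of the pattern
(the struct below is a trimmed-down stand-in, not the real `evalContext`):

```go
package main

import (
	"context"
	"fmt"

	"github.com/open-policy-agent/opa/rego"
)

// Trimmed-down stand-in for evalContext (the real struct carries more
// fields): keep the constructor options rather than a constructed
// *rego.Rego, so every run starts from scratch.
type evalContext struct {
	regoArgs []func(*rego.Rego)
}

func (e *evalContext) runOnce(ctx context.Context) error {
	r := rego.New(e.regoArgs...)     // fresh object for this run
	pq, err := r.PrepareForEval(ctx) // preparation is repeated per run
	if err != nil {
		return err
	}
	rs, err := pq.Eval(ctx)
	if err != nil {
		return err
	}
	fmt.Println(rs)
	return nil
}

func main() {
	ectx := evalContext{regoArgs: []func(*rego.Rego){rego.Query("x := 1 + 2")}}
	if err := ectx.runOnce(context.Background()); err != nil {
		panic(err)
	}
}
```
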
3 changes: 2 additions & 1 deletion cmd/bench_test.go
@@ -216,7 +216,8 @@ func validateBenchMainPrep(t *testing.T, args []string, params benchmarkCommandP
mockRunner.onRun = func(ctx context.Context, ectx *evalContext, params benchmarkCommandParams, f func(context.Context, ...rego.EvalOption) error) (testing.BenchmarkResult, error) {

// cheat and use the ectx to evaluate the query to ensure the input setup on it was valid
pq, err := ectx.r.PrepareForEval(ctx)
r := rego.New(ectx.regoArgs...)
pq, err := r.PrepareForEval(ctx)
if err != nil {
return testing.BenchmarkResult{}, err
}
139 changes: 98 additions & 41 deletions cmd/eval.go
@@ -54,6 +54,7 @@ type evalCommandParams struct {
profile bool
profileCriteria repeatedStringFlag
profileLimit intFlag
count int
prettyLimit intFlag
fail bool
failDefined bool
@@ -74,6 +75,7 @@ func newEvalCommandParams() evalCommandParams {
}),
explain: newExplainFlag([]string{explainModeOff, explainModeFull, explainModeNotes, explainModeFails}),
target: util.NewEnumFlag(compile.TargetRego, []string{compile.TargetRego, compile.TargetWasm}),
count: 1,
}
}

@@ -262,34 +264,105 @@ Loads a single JSON file, applying it to the input document; or all the schema f
setExplainFlag(evalCommand.Flags(), params.explain)
addSchemaFlag(evalCommand.Flags(), &params.schemaPath)
addTargetFlag(evalCommand.Flags(), params.target)
addCountFlag(evalCommand.Flags(), &params.count, "benchmark")

RootCommand.AddCommand(evalCommand)
}

func eval(args []string, params evalCommandParams, w io.Writer) (bool, error) {

ctx := context.Background()

ectx, err := setupEval(args, params)
if err != nil {
return false, err
}

ctx := context.Background()
results := make([]pr.Output, ectx.params.count)
profiles := make([][]profiler.ExprStats, ectx.params.count)
timers := make([]map[string]interface{}, ectx.params.count)

for i := 0; i < ectx.params.count; i++ {
results[i] = evalOnce(ctx, ectx)
profiles[i] = results[i].Profile
if ts, ok := results[i].Metrics.(metrics.TimerMetrics); ok {
timers[i] = ts.Timers()
}
}

result := results[0]

if ectx.params.count > 1 {
result.Profile = nil
result.Metrics = nil
result.AggregatedProfile = profiler.AggregateProfiles(profiles...)
timersAggregated := map[string]interface{}{}
for name := range timers[0] {
var vals []int64
for _, t := range timers {
val, ok := t[name].(int64)
if !ok {
return false, fmt.Errorf("missing timer for %s", name)
}
vals = append(vals, val)
}
timersAggregated[name] = metrics.Statistics(vals...)
}
result.AggregatedMetrics = timersAggregated
}

switch ectx.params.outputFormat.String() {
case evalBindingsOutput:
err = pr.Bindings(w, result)
case evalValuesOutput:
err = pr.Values(w, result)
case evalPrettyOutput:
err = pr.Pretty(w, result)
case evalSourceOutput:
err = pr.Source(w, result)
case evalRawOutput:
err = pr.Raw(w, result)
default:
err = pr.JSON(w, result)
}

if err != nil {
return false, err
} else if len(result.Errors) > 0 {
// If the rego package returned an error, return a special error here so
// that the command doesn't print the same error twice. The error will
// have been printed above by the presentation package.
return false, regoError{}
} else if len(result.Result) == 0 {
return false, nil
}

return true, nil
}

func evalOnce(ctx context.Context, ectx *evalContext) pr.Output {
var result pr.Output
var resultErr error

var parsedModules map[string]*ast.Module

if ectx.metrics != nil {
ectx.metrics.Clear()
}
if ectx.profiler != nil {
ectx.profiler.reset()
}
r := rego.New(ectx.regoArgs...)

if !ectx.params.partial {
var pq rego.PreparedEvalQuery
pq, resultErr = ectx.r.PrepareForEval(ctx)
pq, resultErr = r.PrepareForEval(ctx)
if resultErr == nil {
parsedModules = pq.Modules()
result.Result, resultErr = pq.Eval(ctx, ectx.evalArgs...)
}
} else {
var pq rego.PreparedPartialQuery
pq, resultErr = ectx.r.PrepareForPartial(ctx)
pq, resultErr = r.PrepareForPartial(ctx)
if resultErr == nil {
parsedModules = pq.Modules()
result.Partial, resultErr = pq.Partial(ctx, ectx.evalArgs...)
@@ -321,50 +394,24 @@ func eval(args []string, params evalCommandParams, w io.Writer) (bool, error) {
sortOrder = getProfileSortOrder(strings.Split(ectx.params.profileCriteria.String(), ","))
}

result.Profile = ectx.profiler.ReportTopNResults(ectx.params.profileLimit.v, sortOrder)
result.Profile = ectx.profiler.p.ReportTopNResults(ectx.params.profileLimit.v, sortOrder)
}

if ectx.params.coverage {
report := ectx.cover.Report(parsedModules)
result.Coverage = &report
}

switch params.outputFormat.String() {
case evalBindingsOutput:
err = pr.Bindings(w, result)
case evalValuesOutput:
err = pr.Values(w, result)
case evalPrettyOutput:
err = pr.Pretty(w, result)
case evalSourceOutput:
err = pr.Source(w, result)
case evalRawOutput:
err = pr.Raw(w, result)
default:
err = pr.JSON(w, result)
}

if err != nil {
return false, err
} else if len(result.Errors) > 0 {
// If the rego package returned an error, return a special error here so
// that the command doesn't print the same error twice. The error will
// have been printed above by the presentation package.
return false, regoError{}
} else if len(result.Result) == 0 {
return false, nil
} else {
return true, nil
}
return result
}

type evalContext struct {
params evalCommandParams
metrics metrics.Metrics
profiler *profiler.Profiler
profiler *resettableProfiler
cover *cover.Cover
tracer *topdown.BufferTracer
r *rego.Rego
regoArgs []func(*rego.Rego)
evalArgs []rego.EvalOption
}

@@ -470,10 +517,10 @@ func setupEval(args []string, params evalCommandParams) (*evalContext, error) {
evalArgs = append(evalArgs, rego.EvalInstrument(true))
}

var p *profiler.Profiler
rp := resettableProfiler{}
if params.profile {
p = profiler.New()
evalArgs = append(evalArgs, rego.EvalQueryTracer(p))
rp.p = profiler.New()
evalArgs = append(evalArgs, rego.EvalQueryTracer(&rp))
}

if params.partial {
@@ -493,21 +540,31 @@
regoArgs = append(regoArgs, rego.StrictBuiltinErrors(true))
}

eval := rego.New(regoArgs...)

evalCtx := &evalContext{
params: params,
metrics: m,
profiler: p,
profiler: &rp,
cover: c,
tracer: tracer,
r: eval,
regoArgs: regoArgs,
evalArgs: evalArgs,
}

return evalCtx, nil
}

type resettableProfiler struct {
p *profiler.Profiler
}

func (r *resettableProfiler) reset() {
r.p = profiler.New()
}

func (*resettableProfiler) Enabled() bool { return true }
func (r *resettableProfiler) TraceEvent(ev topdown.Event) { r.p.TraceEvent(ev) }
func (r *resettableProfiler) Config() topdown.TraceConfig { return r.p.Config() }

func getProfileSortOrder(sortOrder []string) []string {

// convert the sort order slice to a map for faster lookups
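
Downstream tooling can decode the new JSON shape with a few struct tags. A
sketch: the `stats` type below mirrors the sample output in the commit message
rather than any type exported by OPA, and `out.json` is a hypothetical file:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// stats mirrors the per-metric statistics object shown in the commit message
// above; it is a consumer-side sketch, not a type exported by OPA.
type stats struct {
	Count  int     `json:"count"`
	Min    float64 `json:"min"`
	Max    float64 `json:"max"`
	Mean   float64 `json:"mean"`
	Median float64 `json:"median"`
	P90    float64 `json:"90%"`
	P99    float64 `json:"99%"`
}

func main() {
	// Hypothetical input, e.g. produced by:
	//   opa eval --data rbac.rego --profile --count=10 --format=json \
	//       'data.rbac.allow' > out.json
	blob, err := os.ReadFile("out.json")
	if err != nil {
		panic(err)
	}
	var out struct {
		AggregatedMetrics map[string]stats `json:"aggregated_metrics"`
	}
	if err := json.Unmarshal(blob, &out); err != nil {
		panic(err)
	}
	for name, s := range out.AggregatedMetrics {
		fmt.Printf("%s: mean=%.0fns p99=%.0fns (n=%d)\n", name, s.Mean, s.P99, s.Count)
	}
}
```
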
38 changes: 38 additions & 0 deletions docs/content/policy-performance.md
@@ -286,6 +286,7 @@ The `opa eval` command provides the following profiler options:
| <span class="opa-keep-it-together">`--profile`</span> | Enables expression profiling and outputs profiler results. | off |
| <span class="opa-keep-it-together">`--profile-sort`</span> | Criteria to sort the expression profiling results. This option implies `--profile`. | total_time_ns => num_eval => num_redo => file => line |
| <span class="opa-keep-it-together">`--profile-limit`</span> | Desired number of profiling results sorted on the given criteria. This option implies `--profile`. | 10 |
| <span class="opa-keep-it-together">`--count`</span> | Desired number of evaluations that profiling metrics are to be captured for. With `--format=pretty`, the output will contain min, max, mean and the 90th and 99th percentile. All collected percentiles can be found in the JSON output. | 1 |

#### Sort criteria for the profile results

@@ -402,6 +403,43 @@ false
As seen from the above table, all results are displayed. The profile results are
sorted on the default sort criteria.

To evaluate the policy multiple times and aggregate the profiling data over
those runs, pass `--count=NUMBER`:

```bash
opa eval --data rbac.rego --profile --format=pretty --count=10 'data.rbac.allow'
```

**Sample Output**
```ruby
false
+------------------------------+---------+----------+---------------+----------------+---------------+
| METRIC | MIN | MAX | MEAN | 90% | 99% |
+------------------------------+---------+----------+---------------+----------------+---------------+
| timer_rego_load_files_ns | 349969 | 2549399 | 1.4760619e+06 | 2.5312689e+06 | 2.549399e+06 |
| timer_rego_module_compile_ns | 1087507 | 24537496 | 1.120074e+07 | 2.41699473e+07 | 2.4537496e+07 |
| timer_rego_module_parse_ns | 275531 | 1915263 | 1.126406e+06 | 1.9016968e+06 | 1.915263e+06 |
| timer_rego_query_compile_ns | 61663 | 64395 | 63062.5 | 64374.1 | 64395 |
| timer_rego_query_eval_ns | 161812 | 1198092 | 637754 | 1.1846622e+06 | 1.198092e+06 |
| timer_rego_query_parse_ns | 6078 | 6078 | 6078 | 6078 | 6078 |
+------------------------------+---------+----------+---------------+----------------+---------------+
+----------+-------------+-------------+-------------+-------------+----------+----------+-----------------+
| MIN | MAX | MEAN | 90% | 99% | NUM EVAL | NUM REDO | LOCATION |
+----------+-------------+-------------+-------------+-------------+----------+----------+-----------------+
| 43.875µs | 26.135469ms | 11.494512ms | 25.746215ms | 26.135469ms | 1 | 1 | data.rbac.allow |
| 21.478µs | 211.461µs | 98.102µs | 205.72µs | 211.461µs | 1 | 1 | rbac.rego:13 |
| 19.652µs | 123.537µs | 73.161µs | 122.75µs | 123.537µs | 1 | 1 | rbac.rego:40 |
| 12.303µs | 117.277µs | 61.59µs | 116.733µs | 117.277µs | 2 | 1 | rbac.rego:50 |
| 12.224µs | 93.214µs | 51.289µs | 92.217µs | 93.214µs | 1 | 1 | rbac.rego:44 |
| 5.561µs | 84.121µs | 43.002µs | 83.469µs | 84.121µs | 1 | 1 | rbac.rego:51 |
| 5.56µs | 71.712µs | 36.545µs | 71.158µs | 71.712µs | 1 | 0 | rbac.rego:45 |
| 4.958µs | 66.04µs | 33.161µs | 65.636µs | 66.04µs | 1 | 2 | rbac.rego:49 |
| 4.326µs | 65.836µs | 30.461µs | 65.083µs | 65.836µs | 1 | 1 | rbac.rego:6 |
| 3.948µs | 43.399µs | 24.167µs | 43.055µs | 43.399µs | 1 | 2 | rbac.rego:55 |
+----------+-------------+-------------+-------------+-------------+----------+----------+-----------------+
```

##### Example: Display top `5` profile results
