diff --git a/docs/index.md b/docs/index.md
index 495e987d5..fe06b6bcc 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -4948,7 +4948,7 @@ When you spin up a process yourself you should generally have it pipe its output
 Go's built-in `testing` package provides support for running `Benchmark`s. Earlier versions of Ginkgo had subject-node variants that were able to mimic Go's `Benchmark` tests. As of Ginkgo 2.0 these nodes are no longer available. Instead, Ginkgo users can benchmark their code using Gomega's substantially more flexible `gmeasure` package. If you're interested, check out the `gmeasure` [docs](https://onsi.github.io/gomega/#gmeasure-benchmarking-code). Here we'll just provide a quick example to show how `gmeasure` integrates into Ginkgo's reporting infrastructure.
 
-`gmeasure` is structured around the metaphor of Experiments. With `gmeasure` you create ``Experiments` that can record multiple named `Measurements`. Each named `Measurement` can record multiple values (either `float64` or `duration`). `Experiments` can then produce reports to show the statistical distribution of their `Measurements` and different `Measurements`, potentially from different `Experiments` can be ranked and compared. `Experiments` can also be cached using an `ExperimentCache` - this can be helpful to avoid rerunning expensive experiments _and_ to save off "gold-master" experiments to compare against to identify potential regressions in performance - orchestrating all that is left to the user.
+`gmeasure` is structured around the metaphor of Experiments. With `gmeasure` you create `Experiments` that can record multiple named `Measurements`. Each named `Measurement` can record multiple values (either `float64` or `duration`). `Experiments` can then produce reports to show the statistical distribution of their `Measurements`, and different `Measurements`, potentially from different `Experiments`, can be ranked and compared. `Experiments` can also be cached using an `ExperimentCache` - this can be helpful to avoid rerunning expensive experiments _and_ to save off "gold-master" experiments to compare against to identify potential regressions in performance - orchestrating all that is left to the user.
 
 Here's an example where we profile how long it takes to repaginate books:
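The repagination example referenced by the hunk's final context line lives just after this passage in `docs/index.md` and is not part of the diff. As a minimal sketch of what such a `gmeasure` experiment typically looks like inside a Ginkgo spec (the `books` slice and `Repaginate()` method are assumed stand-ins, not taken from this diff), it might read:

```go
import (
	"time"

	. "github.com/onsi/ginkgo/v2"
	"github.com/onsi/gomega/gmeasure"
)

var _ = Describe("Book", func() {
	It("repaginates books efficiently", func() {
		// Create an Experiment and attach it to the spec report so its
		// statistics appear in Ginkgo's output and generated reports.
		experiment := gmeasure.NewExperiment("Repaginating Books")
		AddReportEntry(experiment.Name, experiment)

		// Take repeated samples; each MeasureDuration call records a duration
		// under the named Measurement "repagination".
		experiment.Sample(func(idx int) {
			book := books[idx%len(books)] // `books` is assumed shared suite state
			experiment.MeasureDuration("repagination", func() {
				book.Repaginate() // hypothetical method being benchmarked
			})
		}, gmeasure.SamplingConfig{N: 20, Duration: time.Minute})
	})
})
```

Because the `Experiment` is registered with `AddReportEntry`, its statistical summary is rendered through Ginkgo's normal reporting infrastructure rather than requiring any bespoke benchmark output.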