Random Benchmark

This is not a random benchmark! ... or is it? The suite was developed to track the performance progress of xorshift, which @AndreasMadsen and I co-developed during a Node.js hackathon in Copenhagen.

Suite

The benchmark includes a number of npm packages, each with its own wrapper in wrapper/.

Methodology: To keep comparisons consistent, all packages are benchmarked on their ability to generate doubles in the range [0, 1). If a package does not provide this out of the box, normalisation is done in the appropriate wrapper.

Each package is sampled 100 times; each sample runs 1e6 iterations, and the resulting mean and standard deviation are normalised by the number of iterations to get a figure for a single operation. This might, however, be slightly misleading, because all the operations within a sample are batched, so per-operation timings are derived rather than measured individually.
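To make the normalisation and sampling concrete, here is a minimal sketch of what a wrapper's normalisation and a single batched sample might look like. The package name some-uint32-rng, its API, and the sample helper are hypothetical and only stand in for the real wrappers and benchmark.js:

```js
// Purely illustrative: `some-uint32-rng` and its API are hypothetical,
// standing in for whichever package is being wrapped.
var makeRng = require('some-uint32-rng')
var rng = makeRng()

// Scale an unsigned 32-bit integer into [0, 1), like Math.random().
function randomDouble () {
  return rng.nextUint32() / 4294967296 // 2^32
}

// Per-operation figures are derived from a batched sample: time
// `iterations` calls in one go and divide by `iterations`.
function sample (iterations) {
  var start = process.hrtime()
  for (var i = 0; i < iterations; i++) randomDouble()
  var diff = process.hrtime(start)
  var elapsedNs = diff[0] * 1e9 + diff[1]
  return elapsedNs / iterations // approximate time per operation, in ns
}
```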

Installation

npm install random-benchmark
cd node_modules/random-benchmark
npm install

Usage

npm test

If you're developing your own RNG, you can symlink your package into random-benchmark/node_modules/ and write a wrapper (see the example under Development) so you can test it against the suite.

Development

The benchmark is strongly inspired by htmlparser-benchmark and levinstein-benchmark. It is composed of four layers:

  • index.js is the general CLI interface. The available wrappers are loaded here and spawned as workers.
  • worker.js is responsible for taking a given wrapper and turning it into a benchmark as well as monitoring progress.
  • benchmark.js is the abstract "class", where the nitty-gritty details of running each wrapper are implemented, as well as the statistics, which are calculated using summary.
  • wrapper/*.js contains one file for each benchmark in the suite. A wrapper follows the signature fn(iterations, callback), where callback is a standard Node.js style callback and iterations is how many times the operation should be repeated for the current sample. benchmark.js will repeat this several times to calculate a sample mean.
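A minimal wrapper might look roughly like the sketch below; the package name my-rng and its random() method are placeholders for whatever generator is being tested, not part of the actual suite:

```js
// wrapper/my-rng.js — a sketch only; `my-rng` and its `random()` method are
// placeholders for the symlinked package under test.
var myRng = require('my-rng')

module.exports = function (iterations, callback) {
  for (var i = 0; i < iterations; i++) {
    // Each call should produce a double in [0, 1); normalise here if the
    // package returns something else (e.g. 32-bit integers).
    myRng.random()
  }
  // Standard Node.js style callback: report completion (or an error).
  callback(null)
}
```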

License

ISC © 2014, Emil Bay github@tixz.dk
