This repository contains the benchmarks used in the comparison between pest and nom.
All the files are in `benches/`:

- `benches/pest.rs` is the "pest" benchmark (the fastest one: it recognizes the AST but does not transform to numbers, strings, vectors, etc.)
- `benches/full_pest.rs` is the "pest (custom AST)" benchmark (parses the values)
- `benches/nom.rs` is the test JSON parser for nom
- `benches/nom_f64.rs` is a version of the nom parser where numbers are parsed as `f64` instead of `f32`
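As a rough illustration (not the repository's actual code), a bench file in this style typically has the shape sketched below, assuming the nightly `test::Bencher` harness that produces the `ns/iter` numbers shown further down; the `parse_json` function and the input file path are placeholders.

```rust
#![feature(test)]
extern crate test;

use std::fs::File;
use std::io::Read;

use test::Bencher;

// Placeholder for the parser under test (pest or nom); not the
// repository's actual function name.
fn parse_json(data: &[u8]) {
    let _ = data;
}

#[bench]
fn bench_parse(b: &mut Bencher) {
    // Load the JSON input once, outside the measured loop
    // ("benches/data.json" is a hypothetical path).
    let mut data = Vec::new();
    File::open("benches/data.json")
        .expect("benchmark input")
        .read_to_end(&mut data)
        .expect("read benchmark input");

    // Measure only the parsing work; black_box keeps the optimizer honest.
    b.iter(|| parse_json(test::black_box(&data)));
}
```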
```
$ cargo bench
   Compiling pestvsnom v0.1.0 (file:///Users/geal/dev/rust/projects/pestvsnom)
    Finished release [optimized] target(s) in 7.57 secs
     Running target/release/deps/pestvsnom-97707cfcfc27f95f

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

     Running target/release/deps/full_pest-d19d1d9a2d0599f4

running 1 test
test full_pest ... bench: 82,444,592 ns/iter (+/- 11,316,792)

test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured; 0 filtered out

     Running target/release/deps/nom-8768745f161d04c9

running 1 test
test nom_f32 ... bench: 296,648,189 ns/iter (+/- 22,709,297)

test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured; 0 filtered out

     Running target/release/deps/nom_f64-1730d2858dff8973

running 1 test
test nom_f64 ... bench: 63,573,729 ns/iter (+/- 17,063,945)

test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured; 0 filtered out

     Running target/release/deps/pest-46b9649508cd945f

running 1 test
test pest ... bench: 38,894,560 ns/iter (+/- 14,268,167)

test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured; 0 filtered out

     Running target/release/deps/ujson4c-8a2da84cc6e075c6

running 1 test
test ujson4c ... bench: 10,099,504 ns/iter (+/- 7,062,072)

test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured; 0 filtered out
```
From these results (run on a late 2013 MacBook Pro with a 2.3 GHz Intel Core i7, using rustc 1.21.0-nightly (b75d1f0ce 2017-08-02)), we can see that the "nom_f64" and "full_pest" benchmarks are in the same range, that the "pest" parser is, as expected, faster, and that the original nom parser is much slower.
As it turns out, the main cost comes from converting to an `f32` in the `FromStr` implementation for `f32`.
There might be an interesting investigation (and possibly an optimization of float parsing?) to do there.
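To make that investigation concrete, here is a minimal, hypothetical micro-benchmark (not part of this repository) that isolates the float conversion itself: it parses the same decimal literals once as `f32` and once as `f64` through their `FromStr` implementations, again assuming the nightly `test` bencher; the `NUMBERS` list is an arbitrary example input.

```rust
#![feature(test)]
extern crate test;

use test::Bencher;

// Arbitrary example inputs; any representative set of JSON number literals works.
const NUMBERS: &[&str] = &["0", "3.14159", "-12345.6789", "1e-10", "2.718281828459045"];

#[bench]
fn parse_f32(b: &mut Bencher) {
    b.iter(|| {
        for s in NUMBERS {
            // Goes through the `FromStr` implementation for `f32`.
            let v: f32 = test::black_box(s).parse().unwrap();
            test::black_box(v);
        }
    });
}

#[bench]
fn parse_f64(b: &mut Bencher) {
    b.iter(|| {
        for s in NUMBERS {
            // Same inputs, through the `FromStr` implementation for `f64`.
            let v: f64 = test::black_box(s).parse().unwrap();
            test::black_box(v);
        }
    });
}
```

Comparing the two benches directly would show how much of the gap between the "nom" and "nom_f64" benchmarks is attributable to `FromStr for f32` alone, independently of the surrounding JSON parser.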