Comparison diagram isn't very informative and makes pest look bad. #5
Comments
Thanks for raising this issue. Here's my stance on the matter: both parsers listed in the comparison offer great performance, and pest's goal is not to compete with them, but merely to show that pest can offer performance in the same order of magnitude. Now, things will be different in 3.0. I have rewritten the generator and I'm working on the optimizer with great results.
I can't find any new benchmarks on pest - are there any updates on the performance benchmarks mentioned above?
There's a nom_benchmark repository on GitHub under geal/gcouprie (nom's author) that could be used (it runs nom, pest, and one or two others as far as I can remember), but it's quite outdated and needs some work.
Good idea! It'd be nice if we could get an official repo created for that.
One more repo that's more up to date: https://github.com/rosetta-rs/parse-rosetta-rs#results
It only lists two competitors, and only one of those is in the same parser family. Benchmarks against more of the common parsers like combine and lalrpop would also be useful.
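For reference, a minimal criterion harness for such a comparison might look like the sketch below. The `parse_with_pest` and `parse_with_nom` functions are hypothetical placeholders for the real parser entry points; only the shape of the benchmark is the point here.

```rust
// Hypothetical benches/compare.rs using the criterion crate.
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Placeholder signatures standing in for the actual pest/nom parsers.
fn parse_with_pest(input: &str) -> usize { input.len() }
fn parse_with_nom(input: &str) -> usize { input.len() }

fn bench_parsers(c: &mut Criterion) {
    // Small stand-in input; a real harness would load a shared corpus.
    let input = "{\"key\": [1, 2, 3]}";
    c.bench_function("pest", |b| b.iter(|| parse_with_pest(black_box(input))));
    c.bench_function("nom", |b| b.iter(|| parse_with_nom(black_box(input))));
}

criterion_group!(benches, bench_parsers);
criterion_main!(benches);
```

Adding combine, lalrpop, etc. would just mean more `bench_function` entries over the same input, so the results stay directly comparable.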
Pest is listed as the slowest solution, with barely any explanation of how it could be better. Making excuses for why the others run faster is a weak argument; highlighting Pest's strengths is a lot better.