Add std/esm benchmark. #33
Conversation
Thanks for the pull request. There's one thing I'm wondering about though: does `@std/esm` count as a web developer tool? And does it even make sense to consider it outside of Node?
```diff
@@ -34,6 +34,7 @@
   "license": "BSD-3-Clause",
   "dependencies": {
     "@babel/standalone": "7.0.0-beta.32",
+    "@std/esm": "^0.21.1",
```
We use exact dependencies.
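As a minimal sketch of what the reviewer is asking for (version numbers taken from this diff; the caret range is what gets flagged), the dependency would be pinned to an exact version rather than a semver range:

```json
{
  "dependencies": {
    "@babel/standalone": "7.0.0-beta.32",
    "@std/esm": "0.21.1"
  }
}
```

With a `^` range, `npm install` may resolve a newer 0.x release over time, which makes benchmark runs harder to reproduce; an exact version keeps every checkout on the same bits.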
> Thanks for the pull request. There's one thing I'm wondering about though: Does @std/esm count as a web developer tool? And does it even make sense to consider outside of Node?
Not outside Node, no. It's Node for sure. Is there a more appropriate suite?
The suite description states:

> This is a benchmark suite designed to measure the JavaScript related workloads commonly used by Web Developers nowadays, for example the core workloads in popular tools like Babel or TypeScript.
Lodash is on more than 2 million websites and impacts over 150,000 npm packages (many of those used by web devs). Since this will be used to load it and others, in the Node context, it seems like a good fit.
Yeah, good point. I do think we should have a `lodash` benchmark in there for sure. `lodash` is one of the most popular `npm` (frontend) dependencies (if not the most popular).

As for `@std/esm`, we should probably create a Node tooling benchmark for Node-only/mostly packages. WDYT?
Do you have a good `lodash` benchmark suite already?
> Do you have a good lodash benchmark suite already?
I have a pretty robust one. It allows swapping out different versions of Lodash/Underscore. It's a long-running benchmark, though, designed to ensure the things Lodash is good at stay good and the things it's bad at stay reasonable.
> As for @std/esm, we should probably create a Node tooling benchmark for Node only/mostly packages. WDYT?
That's cool too, though. So, testing the Node flavors of Babel, Webpack, etc.?
I'm very interested in the `lodash` benchmark. As long as the runtime is roughly comparable to the other tests it should be fine, and if not we could reduce the iteration count a bit, I guess.
> That's cool too though so testing the Node flavors of Babel, Webpack, etc.?
I don't think it makes sense to have webpack there. This is definitely something that should be owned and operated by the benchmarking WG. Maybe you can join the call on Monday to discuss the idea?
> As long as the runtime is roughly comparable to the other tests it should be fine, and if not we could reduce the iteration count a bit I guess.
It'd need some tweaking; it's a pretty massive suite.
> This is definitely something that should be owned and operated by the benchmarking WG. Maybe you can join the call on Monday to discuss the idea?
Sure thing! You can DM details if you'd like.
> It'd need some tweaking; it's a pretty massive suite.
It also seems to mess with the global state of benchmark.js, which needs to be fixed.
> Sure thing! You can DM details if you'd like.
Added a note to nodejs/benchmarking#198 for Monday's meeting.
```diff
@@ -45,6 +46,7 @@
   "esprima": "4.0.0",
   "jshint": "2.9.5",
   "lebab": "2.7.7",
+  "lodash-es": "^4.17.5",
```
Same here. Do we even need the explicit `lodash-es` dependency?
Would committing the entire package to the vendor folder be a better place?
```diff
@@ -0,0 +1,3 @@
+{
```
This file is not bundled with the webpack build? Can we avoid external configuration files?
Yep, I can remove it and specify options via its API.
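As a hedged sketch of what "options via its API" could look like: `@std/esm` exposed a loader-creating function that accepts an options object, so a config file isn't strictly needed. The exact option names here (e.g. `esm: "js"`) are assumptions drawn from the package's README of that era and may differ across 0.x releases:

```javascript
// Instead of an external .esmrc-style config file, create the
// loader-enabled require and pass options inline.
// NOTE: the options shown are assumptions, not verified against 0.21.1.
const esmRequire = require("@std/esm")(module, { esm: "js" });

// Load the ESM build of lodash through the @std/esm loader.
const _ = esmRequire("lodash-es");
```

Keeping the options in the benchmark script itself means the harness has no extra file to discover, which addresses the reviewer's concern about external configuration.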
Cool. I'll close this, then, to go down the WG route.
Awesome, thanks. Looking forward to it.
I'd love to hear more about why you think adding a lodash benchmark to WTB makes sense. lodash is an immensely popular and important project for the ecosystem. It's not, however, a "tool" that's used in build scripts (at least not directly) in the same way that acorn, babel, babylon, buble, chai, coffeescript, espree, esprima, jshint, lebab, prepack, prettier, source-map, typescript, uglify-es, uglify-js are. lodash fits better in the category of frameworks/libraries IMHO, and therefore I'm thinking perhaps Speedometer would be a more appropriate benchmark in which to include lodash. WDYT? I worry that adding a lodash benchmark to WTB would not necessarily represent its real-world usage in web developer tools; it would be more akin to a synthetic benchmark. Am I looking at this the wrong way?
Okay, will do too!
I was thinking of Besides that, I guess
Resolution from the benchmarking WG meeting: we will create a separate benchmark suite with the Node-specific tools/libraries/loaders/runtimes, and it'll live in https://github.com/nodejs/benchmarking. @jdalton signed up for this.
This adds a basic `@std/esm` benchmark of loading 643 `lodash-es` modules. The `@std/esm` module loader will be used to power the next major release of lodash. It has a bunch of moving parts. On first load it does a parse using a highly tweaked version of acorn. On subsequent loads it simulates the ESM runtime with its own module loading pipeline.
This test is for the CLI runs and should be skipped for browser runs.