[WIP] Preparing prepublishOnly #1064
Conversation
Just a quick sketch - running out in a minute. |
Curious if there's a way to make it impossible (or difficult) to version/publish if tests are failing locally. |
Add `npm test` in prepublish, as in the sketch below, but we never relied heavily on all tests passing before... |
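A minimal sketch of that hook, using the `prepublishOnly` name from the PR title (unlike the older `prepublish`, it fires only on `npm publish`, not on plain `npm install`); the `test` command body shown is an assumption for illustration, not a confirmed repo script:

```json
{
  "scripts": {
    "test": "node test",
    "prepublishOnly": "npm test"
  }
}
```

If `npm test` exits non-zero, `npm publish` aborts, which is the "impossible to publish with failing tests" behavior asked about above.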
As far as publishing on npm goes, I've never done it so I'm not familiar with the workflow yet :( |
@Feder1co5oave: We'll get that sorted soon. The main thing to keep in mind is that npm is its own thing...totally unrelated to GitHub. Unlike something like Composer (Packagist) for PHP, which ties more directly to the repository. |
@Feder1co5oave: The two big commands on publishing are `npm version` and `npm publish`. |
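In rough order, the manual flow looks like this (`patch` is a stand-in for major/minor/patch, per the note further down):

```
$ npm version patch   # bumps package.json, commits, and tags (when run in a git repo)
$ npm publish         # packs the tarball and uploads it to the registry
```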
That's why it feels weird to me coming from Packagist. Normally, I just push to the git repository and Packagist picks up new tags automatically. |
It seems there's no need to run `npm pack` separately before publishing. |
I think when you publish from a folder, as in `npm publish <folder>`, packing happens as part of the publish. Since minification needs uglifyjs and takes a few seconds, I think we should use it sparingly on the user side. |
When I was looking at the docs re pack and prepack, it says it's fired during both pack and publish; so, think your assessment is valid re doing a "test run".

The way I normally do manual publishing is to run the tests (we have one failing), run the build (to get the minified `marked.min.js`), then version and publish. Most of the hooks seem to jump in before one or more of the npm-provided commands (the only non-npm ones we have are build and bench...wonder if those could hook into the lifecycle too).

Also, when we get Travis running, if lint and tests fail we won't be able to merge a PR; so, gonna start getting pretty hardline about things passing. |
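A sketch of how those hooks could be wired up - the `build`/`minify` script bodies here are assumptions for illustration, not the repo's confirmed scripts:

```json
{
  "scripts": {
    "test": "node test",
    "minify": "uglifyjs lib/marked.js -c -m -o marked.min.js",
    "build": "npm run minify",
    "prepack": "npm run build",
    "prepublishOnly": "npm test"
  }
}
```

Since `prepack` fires on both `npm pack` and `npm publish`, the minified file gets rebuilt either way, even if someone forgets to run the build by hand.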
from docs: `prepack` runs before a tarball is packed, i.e. on both `npm pack` and `npm publish`. |
On a CI note, would be nice to tie lint, tests, and bench into the build...not sure what to do about the bench. |
I was thinking we could do something where we bench just marked and set a threshold it needs to meet or the build fails, as sketched below. |
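A hedged sketch of what that could look like - the `--max-ms` flag is hypothetical, not something the current bench script supports:

```
$ npm run bench -- --max-ms 5000   # hypothetical: exit non-zero if marked's own bench run exceeds 5 seconds
```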
I think benching on a virtualized free hosted service is not a very good idea. |
@Feder1co5oave: Why not? Wouldn't we get a more consistent machine setup? I'm remembering your bench against mine being a 3-second difference or something like that. Definitely leaning toward not benching against anything else - just marked itself. |
Replaced by #1065 |
Because running on virtualized shared services is nothing like running on a personal device (the server might easily be overloaded). And they also might get annoyed that you're hogging resources to repeatedly run the same converter on the same input a thousand times (that's what `npm run bench` does). |
@Feder1co5oave: the idea behind running benchmarks on CI is written up here: https://beachape.com/blog/2016/11/02/rust-performance-testing-on-travis-ci/ |
Three issues come to my mind: |
I've never done performance testing on CI before either, but it seems like it is something we should try to figure out. |
Trying to think of a decent modular concept for the lifecycle here - might help with CI as well??
- `$ npm test`
- `$ npm run bench`
- `$ npm pack` - creates `min` (`prepack` - if understanding correctly, this means we would be covered even if accidentally skipping the `minify` step, yeah?)
- `$ npm version` - need to be able to specify whether major, minor, or patch
- `$ npm publish` - creates `min` as well

https://docs.npmjs.com/misc/scripts |
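Pulling that lifecycle together, a sketch of the scripts block (the `bench` and `minify` bodies are assumptions, per the caveats above):

```json
{
  "scripts": {
    "test": "node test",
    "bench": "node test --bench",
    "minify": "uglifyjs lib/marked.js -c -m -o marked.min.js",
    "prepack": "npm run minify",
    "preversion": "npm test",
    "prepublishOnly": "npm test"
  }
}
```

With `prepack` in place, both `npm pack` and `npm publish` rebuild `min` automatically, which covers the accidental-skip case above; `preversion` guards `npm version` the same way `prepublishOnly` guards publish.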