[WIP] Preparing prepublishOnly #1064

Closed
wants to merge 3 commits into from

Conversation

joshbruce
Member

@joshbruce joshbruce commented Feb 19, 2018

Trying to think of a decent modular concept for the lifecycle here - might help with CI as well??

  1. $ npm test
  2. $ npm run bench
  3. $ npm pack - creates min (prepack - if I understand correctly, this means we'd be covered even if we accidentally skip the minify step, yeah?)
  4. $ npm version - need to be able to specify whether major, minor, or patch
  5. $ npm publish - creates min as well

https://docs.npmjs.com/misc/scripts
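
For reference, a minimal sketch of how those hooks could be wired up in package.json, assuming the existing test and build scripts (the wiring below is a placeholder, not marked's current config):

  "scripts": {
    "prepublishOnly": "npm test && npm run build"
  }

Since prepublishOnly runs only on npm publish and aborts the publish if it exits non-zero, a failing test or a skipped minify step would block the release.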

@joshbruce
Member Author

Just a quick sketch - running out in a minute.

@joshbruce
Member Author

Curious if there's a way to make it impossible (or difficult) to version/publish if tests are failing locally.

@Feder1co5oave
Contributor

Add npm test in prepublish, as in

npm test && npm run build && git commit ...

but we've never relied strictly on all tests passing before...

@Feder1co5oave
Contributor

As far as publishing on npm goes, I've never done it so I'm not familiar with the workflow yet :(

@joshbruce
Member Author

@Feder1co5oave: We'll get that sorted soon. The main thing to keep in mind is that npm is its own thing...totally unrelated to GitHub. Unlike something like Composer (Packagist) for PHP, which ties in a bit more directly.

@joshbruce
Member Author

@Feder1co5oave: The two big commands for publishing are

npm version [major|minor|patch] and npm publish

That's why it feels weird to me coming from Packagist. Normally, I push to master - Packagist handles the deployment and availability and all I have to do is create the new Release on GitHub.

@Feder1co5oave
Contributor

Docs

It seems there's no need to npm pack before npm publishing?
So I'm not sure prepack is fired when publishing.
I'm not sure dependencies are available when prepack is called on a fresh copy installed from npm or GitHub.

@Feder1co5oave
Contributor

Feder1co5oave commented Feb 19, 2018

I think when you publish from a folder, as in npm publish ., the prepack script is not fired (?)

Since minification needs uglifyjs and takes a few seconds, I think we should use it sparingly on the user side.
When npm installing marked, the marked.js file is usually used, so there's no real need to generate an up-to-date minified version (?)

@joshbruce
Member Author

When I was looking at the docs re pack and prepack, it says it's fired during both pack and publish; so, I think your assessment is valid re doing a "test run".

The way I normally do manual publishing is to run the tests (we have one failing), run the build (to get min - not sure why it didn’t work out this time), run version (which you can do manually by updating package.json), then run publish (which actually pushes the contents to NPM).

Most of the hooks seem to jump in before one or more of the NPM-provided commands (the only non-NPM ones we have are build and bench...wonder if pack could be equivalent to our build??)

Also, when we get Travis running, if lint and tests fail we won't be able to merge a PR; so, gonna start getting pretty hardline about things passing.
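
A minimal sketch of that pack-as-build idea, assuming the existing build script does the uglifyjs minification (placeholder wiring, not marked's current config):

  "scripts": {
    "prepack": "npm run build"
  }

One caveat that ties into the user-side concern above: prepack also fires when installing git dependencies, so putting the minify step there would run uglifyjs for anyone installing marked straight from GitHub, whereas prepublishOnly runs only on npm publish.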

@UziTech
Member

UziTech commented Feb 19, 2018

From the docs:

prepack: run BEFORE a tarball is packed (on npm pack, npm publish, and when installing git dependencies)

@joshbruce
Member Author

On a CI note, would be nice to tie in lint, tests, and bench...not sure what to do about the bench.

@UziTech
Member

UziTech commented Feb 19, 2018

I was thinking we could do something where we bench just marked and have a threshold that it needs to pass or it fails

@Feder1co5oave
Contributor

Feder1co5oave commented Feb 19, 2018 via email

@joshbruce
Member Author

@Feder1co5oave: Why not? Wouldn't we get a more consistent device setup? I'm remembering your bench against mine being a 3 second difference or something like that. Definitely leaning toward the concept of not benching against anything - just marked itself.

@joshbruce joshbruce mentioned this pull request Feb 19, 2018
@joshbruce joshbruce closed this Feb 19, 2018
@joshbruce
Member Author

Replaced by #1065

@joshbruce
Member Author

See #963 re benching. Agree with @UziTech. Benchmarking against ourselves and a goal is probably a good way to go.

@Feder1co5oave
Contributor

Feder1co5oave commented Feb 26, 2018

Because running on virtualized shared services is nothing like running on a personal device (the server might easily be overloaded). And they also might get annoyed by the fact that you're hogging resources to repeatedly run the same converter on the same input a thousand times (that's what npm test --bench does).

@UziTech
Member

UziTech commented Feb 26, 2018

@Feder1co5oave the idea behind running --bench in CI is not to make sure it goes fast but to make sure a PR doesn't make it much slower. If a PR passes all the tests but makes marked run 2x slower, it would be nice to notice that before merging it.

https://beachape.com/blog/2016/11/02/rust-performance-testing-on-travis-ci/

@Feder1co5oave
Contributor

Three issues come to mind:

  1. Adding tests makes benching slower, because the input size is bigger
  2. I still don't trust Travis to run things within predictable times. I see running times in my builds vary significantly.
  3. I wouldn't know how to set this up

@UziTech
Member

UziTech commented Feb 26, 2018

I've never done performance testing on CI before either, but it seems like it is something we should try to figure out.

  1. The performance tests could be run on only certain tests, for consistency.
  2. It wouldn't be perfect, but maybe better than nothing.
  3. The idea is to run the tests on each PR alongside code without the PR (probably the latest released version) and compare the results to see if there is any big hit to performance.

@joshbruce
Member Author

@Feder1co5oave:

  1. I think in this case we're not looking to bench against showdown, for example, just to be clear. We would do it against a static, known markdown string each time and compare to a static number we set, or something else.
  2. I don't trust Travis because of that failing test. I think we should either fix the issue or remove the test for now. (Our contributors won't trust Travis or us if we keep merging things that fail.)
  3. That's the conundrum. I'm getting more familiar with Circle, not Travis; so, not sure how much help I would be on that score.

@UziTech

  1. Agreed.
  2. Agreed. I know we're engineers and like the perfection of math and computation, but sometimes good enough is good enough.
  3. Agreed. So, I'm running a continuous deployment build right now that installs Angular CLI every time it is run, then runs commands against that install. Maybe we could do something like that with Travis. Have it grab the latest release from NPM - run an NPM script - grab that time (say it's 5s). Then run the same NPM script against the submission - grab that time (say it's either 3s or 10s). In the first case, we totally take that PR - in the second case, Travis fails. If the contributor asked, we could point to Travis and say the PR doubled the time it takes to run the bench against the current release. I don't know if that's possible, strictly speaking, but I know I'm grabbing a package and installing it straight from package.json. Not sure what the full-on build environment is doing, though, compared to something like Circle and Travis. (Rough sketch of the comparison below.)
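
A rough sketch of what that comparison step could look like, to make the idea concrete. The bench script, its flags, and the 2x threshold here are assumptions for illustration, not existing marked scripts:

  # Time the latest published release (hypothetical bench script that prints elapsed milliseconds).
  npm install marked@latest --no-save
  baseline=$(node test/bench.js --use node_modules/marked --time-only)

  # Time the code in the PR checkout.
  current=$(node test/bench.js --time-only)

  # Fail the build if the PR more than doubles the bench time.
  node -e "process.exit(Number('$current') > 2 * Number('$baseline') ? 1 : 0)"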
