Smoke testing with select npm packages #82
Comments
I would love to help out, though I'm not sure where all this would be done... a DigitalOcean instance maybe? Or could we set up a task in Travis CI?
If I can feed the static analysis output into a queryable database, we should be able to get a list of "affected modules" given a changelist fairly cheaply. If we further filter that against a minimum number of downloads or dependents, we can at least get automated smoke tests. I'd love to help out with this effort.
@therebelrobot it's an open question how it actually gets done, but I suspect we'd be best to use our own CI resources, so at the moment it might look like a combination of Jenkins and build servers somewhere. DigitalOcean and/or Rackspace for Linux, Rackspace for possible Windows testing (might be best to defer that one!).
oh, if anybody wants to make a start on this then I can provide server resources on one of our free accounts, as long as it's legitimately for this effort and won't be a wasted resource.
That makes sense. I would love to take a look at it, though I may not be the best person to take point on it, given how little experience I have with automated regression testing.
@rvagg how about a new job in Jenkins that we run as part of the RC/release procedure? We could check in a list of packages (or have it as part of documentation that we can parse somehow) that we pass to the build and run.
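For illustration, a minimal sketch of how such a checked-in list could be parsed by the job; the file name `packages.txt` and its one-name-per-line layout are hypothetical, not an existing part of the build:

```js
// parse-list.js -- read a hypothetical checked-in packages.txt (one npm
// package name per line, '#' for comments) and print what the job would run.
'use strict';
const fs = require('fs');

const packages = fs.readFileSync('packages.txt', 'utf8')
  .split('\n')
  .map(line => line.trim())
  .filter(line => line && line[0] !== '#');

console.log('smoke-testing:', packages.join(', '));
```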
Subbing to this issue. The big question I have is: is there any way we could know how to run (or at least try to run) any arbitrary module's tests? I suppose we could try a bunch of the most common options?
Pre-select a group of modules, top 50 downloaded from npm to start, and choose only modules where `npm test` works out of the box.
I think selecting 10 packages – pure JS and native – that don't have external dependencies such as mysql and live in the top 100 would be a good enough start. Edit: ...with the added benefit of being able to trust their test suites enough to make them part of our verification.
I started work on this here: https://github.com/rvagg/iojs-smoke-tests. It uses Docker for isolation, so it's limited to Linux atm. The procedure could also work on a dedicated machine, so perhaps if we wanted to multi-OS this we could. For now though, the tricky bit is making the test runs informative and non-flaky.
We already do this in our internal (IBM) builds for a list of modules, and @jasnell was working to pull this over into the Node builds. It was originally in Perl, so he was going to port it over to run under Node. Right now we run it in two flavours: 1) just do the install; 2) do the install and run the module's built-in tests. The second is a subset of the first, as it's not always easy to automatically identify and run the built-in tests. One thought we had along the way was that it would help if there was guidance from the Node community to module developers on how to include tests so that they can be easily run. More recently we've also broken it out into two different types of runs: one fixes the module versions so that we can tell if it's the runtime that causes regressions; the other runs the latest versions of the modules on a fixed runtime level so that we can identify regressions in the modules themselves.
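A small sketch of those two run types; the package pins below are examples, not the actual list used in the IBM builds:

```js
// Flavour 1: pinned module versions -- if results change between runtime
// builds, the runtime is the likely culprit. (Versions here are examples.)
const pinnedRun = ['express@4.12.3', 'lodash@3.9.3', 'through2@0.6.5'];

// Flavour 2: latest module versions on a fixed runtime -- if results change
// between runs, the modules themselves are the likely culprit.
const floatingRun = ['express@latest', 'lodash@latest', 'through2@latest'];

// Either list feeds the same `npm install <spec>` / `npm test` machinery.
module.exports = { pinnedRun, floatingRun };
```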
The Node port of this is still in the plan; it just ended up being pushed to a later date.
I'm not sure if this is what you mean, but (as previously mentioned in this thread) not all of these modules ship a working `npm test`.
Right, the suggestion is to just start with the ones that do.
In reply to @mhdawson's comment a couple of comments above: for simple installation testing, I think what we have is quite solid. It takes a list of npm modules and iteratively does `npm install` on each. The way we do the testing is something like: install each module into a clean directory and check whether the install succeeded.
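A rough sketch of what an install-only check along those lines might look like, assuming plain exit-code checking (which, as noted a couple of comments below, is not always reliable):

```js
// install-check.js -- install each package into a scratch directory and treat
// npm's exit code as pass/fail (naive, for the reasons discussed below).
'use strict';
const os = require('os');
const fs = require('fs');
const path = require('path');
const { spawnSync } = require('child_process');

function installOk(pkg) {
  // fresh working directory per package so installs don't interfere
  const dir = path.join(os.tmpdir(), `smoke-install-${pkg.replace(/[^\w]/g, '_')}-${Date.now()}`);
  fs.mkdirSync(dir);
  const result = spawnSync('npm', ['install', pkg], { cwd: dir, stdio: 'inherit' });
  return result.status === 0;
}

for (const pkg of ['express', 'lodash']) {
  console.log(pkg, installOk(pkg) ? 'installed OK' : 'FAILED to install');
}
```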
However, there are a couple of modules which don't come with tests packaged when you do `npm install`. So what we do is something more like: clone the module's repository (using the repo info from its package.json) and run the tests from the checkout.
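A sketch of that clone-and-test variant, assuming `git` is available and the package's `repository` field on the registry points at a clonable URL:

```js
// repo-test.js -- for modules that don't ship tests in the npm tarball:
// look up the repository URL, clone it, install dev dependencies, run tests.
'use strict';
const { execSync } = require('child_process');

function testFromRepo(pkg, workdir) {
  // `npm view <pkg> repository.url` asks the registry for the repo URL,
  // e.g. "git+https://github.com/expressjs/express.git".
  const repo = execSync(`npm view ${pkg} repository.url`, { encoding: 'utf8' })
    .trim()
    .replace(/^git\+/, '');
  execSync(`git clone --depth 1 ${repo} ${workdir}`, { stdio: 'inherit' });
  execSync('npm install', { cwd: workdir, stdio: 'inherit' }); // dev deps too
  execSync('npm test', { cwd: workdir, stdio: 'inherit' });    // throws on non-zero exit
}

testFromRepo('express', '/tmp/smoke-express');
```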
I think this sort of approach is sane in principle, but the implementation we have is not bulletproof, primarily because there can be inconsistencies with the versions and other stuff which I haven't quite covered here. For example, sometimes it's very difficult to tell if a module installation failed or succeeded because exit codes are not reliable. You can easily get false positives or false negatives, since there are modules which don't install the 'standard' way. What we currently have is written in Python, and it went from being a ~30-line script to ~700 lines at the moment as these sorts of issues arose. Depending on what you want to test exactly, you may find that this is really easy to overthink, as I have found out myself.
Good point, I know of a lot of express submodules that do not bundle tests.. :/ All of them should have repo info in package.json though. Also, we shouldn't have to clone them, we can probably just grab GitHub tarballs of the repos.
Another key challenge is that several of the modules require additional dependencies that aren't necessarily part of the install. Testing the redis module, for instance, requires a redis server. It's certainly not a difficult problem to solve if we know in advance what additional resources the modules are going to require, but it does make fully automating the process more difficult. The other challenge is that the test output is not standardized. While many modules do produce consistent output, quite a few do not. To appropriately determine the status of the tests when we're done, unless we're looking for a simple pass/fail, we would need to either (a) get all the modules updated to produce consistent test output or (b) implement one-off parsers to read the output of specific modules. As @CGavrila says, it's easy to overthink it though. The tl;dr version is this: install-and-test for a curated list is very doable, but external service dependencies and non-standard test output mean we either stick to simple pass/fail or accept some per-module setup and parsing.
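One way to capture that per-module knowledge would be a small lookup table kept next to the package list; the layout and entries below are purely illustrative:

```js
// module-config.js -- hypothetical per-module metadata: the external services
// a module's tests need, and how its results can be interpreted.
module.exports = {
  redis: {
    services: ['redis'],        // containers/daemons to start before `npm test`
    resultCheck: 'exit-code'    // simple pass/fail is all we attempt
  },
  mysql: {
    services: ['mysql'],
    resultCheck: 'exit-code'
  },
  tape: {
    services: [],
    resultCheck: 'tap'          // TAP output, so individual failures can be counted
  }
};
```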
I'm not entirely sure how a more consistent test output format like TAP would make much difference in this instance. When some package's test suite fails, a developer will most likely have to inspect the failing test anyway, and most of them provide a fairly understandable message to pinpoint where the test failed. Also, since the packages are selected manually ahead of time, I would say it is fairly easy to set up a Docker environment which can spin up any required external services before running the tests. I would imagine a simple Docker setup per package would cover most of it.
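A sketch of that idea: start any required services as Docker containers, run the tests, then clean up. The `runWithServices` helper, the package path, and the overall wiring are hypothetical; only the `docker` CLI calls themselves are standard:

```js
// with-services.js -- start required services as Docker containers, run the
// module's tests, then clean up. Assumes the `docker` CLI is on the PATH.
'use strict';
const { execSync } = require('child_process');

function runWithServices(pkgDir, images) {
  const started = [];
  try {
    for (const image of images) {
      const name = `smoke-${image}-${Date.now()}`;
      // --net=host keeps things simple: tests can reach the service on localhost.
      execSync(`docker run -d --name ${name} --net=host ${image}`, { stdio: 'inherit' });
      started.push(name);
    }
    execSync('npm test', { cwd: pkgDir, stdio: 'inherit' }); // throws on failure
    return true;
  } catch (err) {
    return false;
  } finally {
    for (const name of started) {
      execSync(`docker rm -f ${name}`, { stdio: 'inherit' });
    }
  }
}

console.log(runWithServices('/tmp/smoke-redis-module', ['redis']) ? 'pass' : 'fail');
```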
As I said, it's not a huge problem ;) going with a strictly pass/fail approach is fine.
Late to the party. FWIW @othiym23 made this suggestion almost a year ago. Does it still stand as best practice? |
I used https://github.com/rvagg/iojs-smoke-tests on the last release; @ceejbot contributed a bunch of additional popular packages and it's been helpful, but there's still some work to be done to make it more useful.
I was going to take a look at some of this for the docker-iojs working group. I'll report back here with what I find 😄
Now lives in the citgm repository and is run on ci.nodejs.org! ☀️
Awesome!
AFAIK the smoker job is not fully working. Should we open another issue?
This discussion came up at the TC meeting today, prompted by a question on IRC. Noting it here as a TODO in case someone has spare energy and time to devote to starting this effort.
It would be ideal if io.js were regularly tested against a list of npm packages to catch breakage. Perhaps the list could comprise some of the most popular packages and/or some of the most interesting use cases of Node/io.js, to cover edge cases. The tests could simply run the test suites of specific package versions against the given version of io.js.
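For a rough end-to-end sketch of that idea (a hypothetical driver, not the tool later built as citgm): pin a handful of popular packages, install each one with the io.js build under test on the PATH, run its bundled test suite, and report pass/fail. The package pins are examples only.

```js
// smoke-run.js -- run a curated set of package test suites against whatever
// node/iojs binary is first on PATH. The package pins below are examples.
// Caveat (discussed in the comments): not every published package bundles its tests.
'use strict';
const os = require('os');
const fs = require('fs');
const path = require('path');
const { spawnSync } = require('child_process');

const packages = ['express@4.12.3', 'lodash@3.9.3', 'through2@0.6.5'];
let failures = 0;

for (const spec of packages) {
  const name = spec.split('@')[0];
  const dir = path.join(os.tmpdir(), `smoke-${name}-${Date.now()}`);
  fs.mkdirSync(dir);

  // Install the pinned version, then run its bundled test suite (if any).
  const install = spawnSync('npm', ['install', spec], { cwd: dir, stdio: 'inherit' });
  const test = install.status === 0
    ? spawnSync('npm', ['test'], { cwd: path.join(dir, 'node_modules', name), stdio: 'inherit' })
    : { status: 1 };

  const ok = install.status === 0 && test.status === 0;
  console.log(`${ok ? 'PASS' : 'FAIL'} ${spec}`);
  if (!ok) failures++;
}

process.exit(failures ? 1 : 0);
```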