Make the tests stable #239
Conversation
Debug data is a short-term patch so we can debug potential (likely) later failures better - it's noisy, but it'll be useful. Retries are a medium-term fix, until we can work out why the tests are actually failing.
Before this, the tests ran on Linux, but not OSX/Windows (because the `env` preprocessor order isn't defined, and it was valid to run it before Browserify, which apparently doesn't work).
I see the following in the CI server logs:
Is that expected?
Notice that the debug logs are currently causing the authentication token to be printed in the CI servers, which means that anyone looking can get access to our testing accounts.
Can you edit the PR and make a PR to my branch please?
Heh, it's because of
```diff
@@ -12,7 +12,7 @@ module.exports = (config) ->
     captureConsole: true

   karmaConfig.plugins.push(require('karma-env-preprocessor'))
-  karmaConfig.preprocessors['**/*.spec.coffee'] = [ 'env' ]
+  karmaConfig.preprocessors['**/*.spec.coffee'] = [ 'browserify', 'env' ]
```
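The fix reorders the preprocessor array so Browserify bundles the spec files before the `env` preprocessor runs. In plain-JavaScript Karma config terms, the same idea looks roughly like this (a sketch, not the project's actual file; framework and plugin names are assumptions based on the packages mentioned in the diff):

```javascript
// karma.conf.js sketch: the order of entries in a preprocessors array is
// the order Karma runs them in, so 'browserify' must come before 'env'.
module.exports = function (config) {
  config.set({
    frameworks: ['browserify', 'mocha'],
    preprocessors: {
      // Browserify first, then the env preprocessor on the bundled output.
      '**/*.spec.coffee': ['browserify', 'env']
    },
    plugins: [
      'karma-browserify',
      'karma-env-preprocessor',
      'karma-mocha'
    ]
  });
};
```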
aha
@emirotin Base branch changed. @jviotti Yes, you're right, that's not good. What should we do about this? Since it does initially look like this is now stable, I'm no longer that worried about further debugging, so I could turn debugging off, and we can change the password. That doesn't totally solve the preexisting problem though - presumably anybody could submit a PR to this project that turns on debug, and immediately grab any new password we use?
@pimterry PRs from forks will not have access to the secure environment variables we put in the CI servers, and we can safely trust people with write access to these repos (whose PRs can read the secure environment variables), since they are the resin.io team.
Let's remove
Ah, and also wrap the
Hm, it shouldn't actually throw, it should be a warning, and
Force-pushed from 6afc41d to c25fdd0
Debug removed, dotenv silenced (all these builds are a great way of testing if the tests really are stable now!)
Also @jviotti you're totally right about the hiding secure vars thing for Travis, but the docs for Appveyor don't mention anything about that. They make it sound like it's all PRs or none for them, and you should only turn on vars in PRs for private repos: https://www.appveyor.com/docs/build-configuration/#secure-variables. Haven't tested though - is it just that the docs don't tell the whole story?
@pimterry Looks like a docs-induced confusion :) I use secure env vars in Appveyor for some projects, and under settings, I see: Which is exactly what we need.
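For reference, AppVeyor secure variables are declared in `appveyor.yml` with an encrypted value produced by AppVeyor's UI, and they are not decrypted for pull requests from forks. A fragment of what that looks like (the variable name and the encrypted placeholder are illustrative, not this project's actual config):

```yaml
# appveyor.yml fragment (illustrative). "secure:" values are encrypted via
# AppVeyor's "Encrypt YAML" tool and stay hidden from fork PRs.
environment:
  RESIN_TEST_PASSWORD:
    secure: <encrypted-value-from-appveyor-ui>
```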
CI tests seem to be hung? No activity in Travis
Not sure why Travis has hung, but it's queued in their infrastructure, not our code, so there's not much we can do. We're in the queue. There were a few pushes in quick succession for this, and for each one we seem to trigger two builds (the push on the branch and the PR itself), on both Travis & Appveyor, each of which runs two jobs (for two node versions), each of which runs the entire test suite for Node and the Browser, twice (because of

I could kill jobs for all commits but the very latest batch, but I quite want to let the builds all run, just to confirm they're definitely now stable.

I'm not sure why we're notifying on branches and PRs and duplicating these builds. Could we stop doing that? It'd also be good to find a way to not run the whole 20 minute test suite twice with each build (as discussed on Flowdock). Sounds like that's not practical for now, but we need to be a little careful - if the builds go from the current ~50 minutes to 60 then we'll hit the Travis time limit, and the builds won't run at all.
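One reading of the multiplication above, just to make the fan-out concrete (the counts come from the comment; the variable names are illustrative, and the "twice" factor is one possible interpretation of the truncated parenthetical):

```javascript
// Rough count of full test-suite runs triggered by a single push,
// per the comment above. All names are illustrative.
const ciProviders = 2;   // Travis + Appveyor
const triggers = 2;      // branch build + PR build
const nodeVersions = 2;  // two jobs per build
const suitesPerJob = 2;  // Node suite + Browser suite

const suiteRunsPerPush = ciProviders * triggers * nodeVersions * suitesPerJob;
console.log(suiteRunsPerPush); // 16 full test-suite runs for every push
```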
Interestingly, one of the builds has failed: https://ci.appveyor.com/project/resin-io/resin-sdk/build/1.0.938/job/4w6t2nxcoxtcyg4g#L2414. No longer because of ETIMEDOUT, but because:
I'm pretty sure I actually know what this is: I'm running tests locally, and that test just failed randomly for me too, I think because it uses fixed device uuids. Presumably device uuids must be unique across all devices, not just per-user? That would mean these tests will fail if anybody happens to be running those same few tests at the same time anywhere, which isn't great. Separate to the ETIMEDOUT issue, and unlikely to come up often normally, but still pretty messy.
Yeah, tests are not isolated and it's not great at all. Ideally we'd like to run each of them against a separate fresh DB :(
Autogenerated ones will be, but the test explicitly provides fixed UUIDs: https://github.com/resin-io/resin-sdk/blob/af90835dee327bb450239bc768aeea56b09cdc6d/tests/integration.spec.coffee#L1129-L1130. I was assuming that that's why they're conflicting: both create the same device, and end up failing at some point because of the existing FK relationships (not sure where - maybe when you create a new device with an existing UUID but within a different application it explodes? Sounds like it should, imo). Does that sound plausible? I think we can find a way to isolate these though, at the cost of some extra complexity (assuming the above is right). I'll take a look tomorrow - let me know if I'm totally off track or if this isn't worth fixing.
The "data is referenced" is when you try to delete, as far as I remember, but maybe it's also thrown when you violate the unique constraint? @Page- any ideas? I'll merge this one now into my branch.
The "data is referenced" is when you try to delete something that is referenced by a foreign key; as far as I'm aware that's the only case.
This fixes #237, building on top of #236.
This change is very simple, just fixes a Karma config bug (arguably a Karma bug, see karma-runner/karma#2505), brings in the new resin-request and puts balena-io-modules/balena-request#78 and balena-io-modules/balena-request#79 to use.
End result is that we now output some debug info during the tests about the requests being made, and retry failing requests. The debug output is just so we can debug this better if it does start to go wrong later - it's a bit noisy for now, but we can remove it soon once we're sure this change makes the tests stable.
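The retry behaviour described above can be sketched as a simple async retry wrapper (an illustrative version only - the actual logic lives in the resin-request changes linked above, and the attempt count and delay here are made-up defaults):

```javascript
// Sketch: retry an async operation a few times with a short delay between
// attempts. Illustrative only; not the resin-request implementation.
async function withRetries(operation, retries = 3, delayMs = 100) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt > retries) throw err; // out of retries: surface the error
      console.warn(`Attempt ${attempt} failed (${err.message}), retrying...`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage: a request that fails twice (e.g. a simulated ETIMEDOUT) and then
// succeeds is transparently retried until it returns.
let calls = 0;
withRetries(async () => {
  calls += 1;
  if (calls < 3) throw new Error('ETIMEDOUT');
  return 'ok';
}).then((result) => console.log(result, calls)); // ok 3
```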