Gaps in test coverage #280
#278 fixed a bug that should have been caught by our tests. The previous commit was adc8ab956605e8167b3597dd63d338b3b08a2fe0. Here's the postsubmit DAG run ID: 2018-01-10T15:10:22. Here are the logs from the run_tests step. It doesn't look like the actual tests ran; I don't see any output indicating commands being run (e.g. helm test), although this could be an issue with stdout/stderr not being properly captured.
Looks like a red herring; I think it's just an issue of not grabbing logs from the subprocess.
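Something along these lines would surface the child process's output in the step logs. This is only a minimal sketch assuming the harness shells out from Python; `run_and_stream` and the helm release name are made up for illustration:

```python
import logging
import subprocess


def run_and_stream(cmd):
    """Run a command, forwarding its stdout/stderr into our own logs.

    Without redirecting the child's streams, output from commands like
    `helm test` never shows up in the step logs.
    """
    logging.info("Running: %s", " ".join(cmd))
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    for line in iter(proc.stdout.readline, b""):
        logging.info(line.decode("utf-8").rstrip())
    return proc.wait()


# Hypothetical usage; "tf-job" is a placeholder release name.
run_and_stream(["helm", "test", "tf-job"])
```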
So the junit file does indicate a test failure, but gubernator is reporting all tests as having passed.
It looks like we might need to have a test case in the XML.
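For reference, here is a minimal sketch (the suite and case names are illustrative, not our actual ones) of writing a junit file that always contains at least one `<testcase>`, with an explicit `<failure>` element when something went wrong, so the parser has something to report:

```python
from xml.etree import ElementTree as ET


def write_junit(path, case_name, failure_message=None):
    # Always emit at least one <testcase>; an empty <testsuite> tends
    # to parse as "everything passed".
    suite = ET.Element("testsuite", name="e2e", tests="1",
                       failures="1" if failure_message else "0")
    case = ET.SubElement(suite, "testcase", name=case_name)
    if failure_message:
        ET.SubElement(case, "failure").text = failure_message
    ET.ElementTree(suite).write(path)


# Illustrative usage only.
write_junit("junit_e2e.xml", "deploy-and-train",
            failure_message="controller crash looped")
```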
FYI, #278's test still failed: https://storage.googleapis.com/kubernetes-jenkins/pr-logs/pull/tensorflow_k8s/278/tf-k8s-presubmit/340/artifacts/junit_e2e.xml
There are so many errors on master now; could we cut a release from the prior commit that still works?
@gaocegege The test isn't fixed yet. That's a good idea; I think the last known good release is gs://tf-on-k8s-dogfood-releases/v20171223-37af20d/
Prow still reports success even when some of the junit files indicate a failure. It looks like gubernator's status for the job is based on the exit code of the prow job, so we need to update the E2E tests to set the exit code based on whether any tests failed.
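Something like this sketch could run as the last step of the job; it assumes the junit files land in an `artifacts/` directory, which may not match our actual layout:

```python
import glob
import sys
from xml.etree import ElementTree as ET


def any_failures(artifacts_dir):
    """Return True if any junit file records a failure or error."""
    for path in glob.glob(artifacts_dir + "/junit_*.xml"):
        root = ET.parse(path).getroot()
        if root.findall(".//failure") or root.findall(".//error"):
            return True
    return False


if __name__ == "__main__":
    # A non-zero exit code makes prow (and thus gubernator) report failure.
    sys.exit(1 if any_failures("artifacts") else 0)
```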
Our tests aren't catching certain bugs. For example #274: in that case the controller was crash looping because the spec was incorrect.
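A check along these lines might have caught it. This is only a sketch using the official Kubernetes Python client; the namespace and label selector are assumptions, not the operator's actual values:

```python
from kubernetes import client, config


def assert_not_crash_looping(namespace="default",
                             selector="name=tf-job-operator"):
    """Fail if any matching pod's container is in CrashLoopBackOff."""
    config.load_kube_config()
    pods = client.CoreV1Api().list_namespaced_pod(
        namespace, label_selector=selector)
    for pod in pods.items:
        for status in pod.status.container_statuses or []:
            waiting = status.state.waiting
            if waiting and waiting.reason == "CrashLoopBackOff":
                raise RuntimeError(
                    "pod %s is crash looping" % pod.metadata.name)
```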