Removing KubernetesPipelineTest.runWithSlaveConnectTimeout #506
Removing a test added in #421 (and lightly edited in #447).
In #503 I saw that it had flaked, which I was able to reproduce.
This is because it was waiting for `podTemplate` to run and then grabbing metadata, but that metadata will soon be deleted. That aspect is easily correctable using `SemaphoreStep`. But then I started to see the `assertLogNotContains` fail. Why should the log not contain this message? We are setting a 10s timeout on agent connection, and either the agent connects in <10s or it connects in >10s; we cannot predict that in the test. So then I was going to delete the log assertion and just use a 101s timeout (distinct from `DEFAULT_SLAVE_JENKINS_CONNECTION_TIMEOUT`), but wondered what the whole point of the test was to begin with. Presumably to verify that some configuration was taking effect. But looking again at #421 I see that it was patching `KubernetesDeclarativeAgent`, normally tested in `KubernetesDeclarativeAgentTest`, yet this test was using a Scripted Pipeline, not even touching the `src/main/` code being patched! So it does not seem to have had much if any value.
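For reference, the rendezvous idea behind the `SemaphoreStep` fix can be modeled with a plain `java.util.concurrent.Semaphore`: the "build" pauses at a known checkpoint, the test inspects state that would otherwise be cleaned up underneath it, and only then does the test let the build finish. This is just an illustrative sketch in JDK terms (the `RendezvousDemo` class and its strings are hypothetical); the real `SemaphoreStep` lives in the `workflow-support` test harness and pauses an actual Pipeline run.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicReference;

public class RendezvousDemo {

    /** Runs the paused-build scenario; returns {observedWhilePaused, observedAfterCleanup}. */
    public static String[] run() throws InterruptedException {
        Semaphore reached = new Semaphore(0); // "build reached the checkpoint"
        Semaphore proceed = new Semaphore(0); // "test allows the build to continue"
        AtomicReference<String> metadata = new AtomicReference<>();

        Thread build = new Thread(() -> {
            metadata.set("pod-metadata");     // analogous to the pod metadata becoming available
            reached.release();                // like the build hitting a SemaphoreStep
            try {
                proceed.acquire();            // stay parked until the test releases us
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            metadata.set(null);               // cleanup: the metadata is deleted after this point
        });
        build.start();

        reached.acquire();                    // wait for the checkpoint
        // Inspect while the build is parked; without the rendezvous this read races the cleanup.
        String observed = metadata.get();
        proceed.release();                    // let the build finish and clean up
        build.join();
        return new String[] { observed, metadata.get() };
    }

    public static void main(String[] args) throws InterruptedException {
        String[] r = run();
        System.out.println("while paused: " + r[0] + ", after cleanup: " + r[1]);
    }
}
```

The flaky version of the test is the variant without the two semaphores: it observed the metadata on a timer and sometimes lost the race against deletion.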