Fix pod template ID inconsistency over master reboots #988

Open · fredcooke wants to merge 7 commits into master from fix-pod-template-id-situation

Conversation

@fredcooke commented May 7, 2021

Since @Vlatombe's commit 2cfefd1, our dynamically defined pod templates have been broken after a master reboot, with every job failing with the message Not expecting pod template to be null at this point from line 92 of KubernetesSlave.java. Because the template gets a new unique ID after every reboot, rather than a consistent one, the agent pods fail to look up the existing pod's template. In our stack that uniqueness is already provided by the label, so an additional unique ID is not required; in fact, because these templates are not configured in Jenkins on a persistent basis, the random ID is what causes this issue. Allowing us to set the ID to our already-unique label value should solve our issue. This change should be backward compatible and should not break other users using the system in a different manner (hence no null check on the ID field).

With respect to the check list:

  • Whilst there may be related issues raised in Jira/GitHub, I've not explored them and do not have any such reference handy.
  • I have not provided tests; however, I'm happy to be schooled in how I might do that, provided it's not overly arduous.
  • Question about master-to-master PRs: is this item motivated by people accidentally opening PRs with the wrong changes, or is it so that the merge commit carries the branch name in the history post-merge? I don't understand this one.

Checklist:

  • Make sure you are opening from a topic/feature/bugfix branch (right side) and not your master branch!
  • Ensure that the pull request title represents the desired changelog entry
  • Please describe what you did
  • Link to relevant issues in GitHub or Jira
  • Link to relevant pull requests, esp. upstream and downstream changes
  • Ensure you have provided tests that demonstrate the feature works or the issue is fixed

@fredcooke (Author)

Actually, looking at the code again: if the constructor just always set the ID from the PodTemplateStep object, whether it was null or not, that would have the same effect and reduce the change's size even further. Then if the ID is set it will be used, and if not it will be null, which is the same as the no-args constructor, since that just calls the one-arg constructor with a cast null anyway. If you'd like me to rewrite it like that and force-push it up, I shall do so. Cheers.
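Roughly, the simplification would look something like this (an illustrative sketch only, not the actual diff; the field and getter names are assumptions):

    // Sketch: wherever the PodTemplate is built from the PodTemplateStep, always pass
    // the step's id through and let PodTemplate's existing one-arg constructor fall
    // back to a random UUID when the value is null or empty.
    PodTemplate newTemplate = new PodTemplate(podTemplateStep.getId());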

@fredcooke (Author)

Nearly an hour to complete all builds; the tests must be super thorough, or the build environment is on the slow side, I guess. Impressive either way.

@Vlatombe (Member) commented May 7, 2021

Are you running the latest version? The ID should be consistent across restarts as of #900 (1.28.2 and above)

@fredcooke (Author)

Hi Vincent, yes, we're on 1.29.4 in both of the instances I have access to. Looking at the change you linked, the block is as follows:

        if (id == null) {
            // Use the label and a digest of the current object representation to get the same value every restart if the object isn't saved.
            id = getLabel() + "-" + Util.getDigestOf(toString());
        }

This sits in the readResolve method near the bottom of the file, and only runs, when readResolve is called, if the ID is null. However, because the no-args constructor calls the id-only-arg constructor with null, and that constructor tests for null and sets the ID to a totally random value at object-creation time, your code will never run.

    public PodTemplate() {
        this((String) null);
    }

    @DataBoundConstructor
    public PodTemplate(@CheckForNull String id) {
        if (Util.fixEmpty(id) == null) {
            this.id = UUID.randomUUID().toString();
        } else {
            this.id = id;
        }
        recomputeLabelDerivedFields();
    }

I'm not sure how it ever worked, to be honest, as the set-ID-if-null-on-construct behaviour predates the if-null-then-use-label+hash change. Hmmmmm. Thoughts?

@fredcooke (Author)

Of course, another way to solve this would be to move your line of code to the constructor in place of the line that's there and do away with the block you referenced. I'm 99% confident that that would work just fine for us, too. If you'd prefer to leave the ID out of the PodTemplateStep class and just do that, I'll discard this and do a new PR.

@fredcooke (Author)

My previous comment was wrong; that would not work, as the label is always null in those two constructors. In any case, just allowing us to set the ID would be sufficient to solve this; then it's on the user to make sure the IDs are unique. So much for 99% confidence :-D

@fredcooke (Author)

@Vlatombe any thoughts on the above? Have I missed something or am I correct in my analysis? Cheers :)

@Vlatombe (Member) commented May 10, 2021

I'm not sure how it ever worked

That's because Jenkins uses xstream to deserialize objects from disk on startup, which bypasses constructors.
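To illustrate the mechanism with a standalone example (this is not plugin code, just the general pattern): XStream restores the fields directly from XML and then invokes readResolve(), so the null check there only ever fires for objects loaded from disk, never for objects built with new.

    import java.io.Serializable;
    import java.util.UUID;

    // Standalone illustration: the constructor never leaves id null, but
    // deserialization bypasses the constructor entirely, so readResolve() is the
    // place where a missing id can be filled in for objects restored from disk.
    public class IdBearer implements Serializable {
        private String id;

        public IdBearer(String id) {
            this.id = (id == null) ? UUID.randomUUID().toString() : id;
        }

        protected Object readResolve() {
            if (id == null) {
                id = "some-stable-value"; // e.g. label + a digest of the object's contents
            }
            return this;
        }
    }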

I suggest you take a look at https://github.com/jenkinsci/kubernetes-plugin/blob/cf954c3825c722de2722e59b0870d9dc40da1099/src/test/java/org/csanchez/jenkins/plugins/kubernetes/pipeline/RestartPipelineTest.java, understand what makes your failing scenario different, write a new test, and then we can iterate.

@fredcooke (Author)

Okay, once again then: these are dynamic, and they are never saved to the Jenkins state. They come in fresh every time and we want them to be consistent every time, which is currently impossible as far as I can tell. Is the PR okay, given that it changes nothing for existing users and just allows that to happen naturally, from the place where I put it before I realised that podTemplate() actually creates a PodTemplateStep object? :-D

@fredcooke (Author)

I've figured out your test regime and am going to try to add a meaningful test on top of this to seal the deal. Chances are it will pass, but it should be obvious that without the earlier commit it would not have. The example you linked wasn't the ideal example, but I found some that are more useful to guide me. Cheers.

@Vlatombe (Member)

they are never saved to the Jenkins state

Actually, the pod template object is persisted as part of the step execution, then it is re-registered on resume.
See

super.onResume();
try {
    KubernetesCloud cloud = resolveCloud();
    TaskListener listener = getContext().get(TaskListener.class);
    newTemplate.setListener(listener);
    LOGGER.log(Level.FINE, "Injecting template after resume: " + newTemplate);
    cloud.addDynamicTemplate(newTemplate);
} catch (AbortException e) {
    throw new RuntimeException(e.getMessage(), e.getCause());
} catch (IOException | InterruptedException e) {
    LOGGER.log(Level.WARNING, "Unable to inject task listener", e);
}

@fredcooke (Author)

@Vlatombe Hmmmm, can you tell me exactly where I should be looking on-disk to prove/disprove the not saved thing?

I've been sick, so I haven't made much progress on producing a test case. Before putting together the simple branch/PR I did look on disk, and there was an empty XML tag where I expected the data to be, but perhaps I was looking in the wrong place. I can't remember what the tag was right now (I'm up for medicine in the middle of the night) and won't be looking until tomorrow. I guess it could be that the entire cloud is not persisted, and the whole thing is done dynamically for us, and that breaks some expectations.

I wish there was an easy way to do a forked build and try out this change, but IMO it's always better to get fixes upstream than to diverge for short-term gain.

Looking at that code you linked, it calls a method on the newTemplate object which is null at class load time and only ever set by the start method so the onResume would NPE if start hadn't run first. And if start runs first then everything I said is still relevant. Am I missing something?

@fredcooke (Author)

Okay, here is a redacted snippet from the Jenkins home config.xml with the empty element highlighted in red:

[screenshot omitted: redacted config.xml snippet showing the empty element, captioned "Not saved"]

I assume that that section is where you expect the data to be? And once again, this is because it's 100% dynamic: we don't want it to be saved, we just want it to be consistent and repeatable. Hope that helps clear up the save/restore thing.

@Vlatombe (Member) commented May 19, 2021

Looking at that code you linked, it calls a method on the newTemplate object which is null at class load time and only ever set by the start method so the onResume would NPE if start hadn't run first. And if start runs first then everything I said is still relevant. Am I missing something?

You don't understand how pipeline restarts work, so you're making incorrect assumptions. All StepExecution objects are serialized as part of the pipeline execution (JENKINS_HOME/jobs/<name>/builds/<buildnumber>/program.dat). When Jenkins restarts, these objects are deserialized, and the onResume method is called. The PodTemplate instance is brought back as it was created (including its id).

I assume that that section is where you expect the data to be?

No.

You should focus on providing a relevant test that is failing for your use case (or at least provide a simple Jenkinsfile and a list of steps to reproduce).

@fredcooke (Author)

Hi Vincent, FYI, I've been sick for a week, hence no attempt at a unit test; however, from your latest comment I think we're getting closer to bridging the communication gap :-D

I assume program.dat only exists for jobs that were interrupted when Jenkins restarted, right? The job failures we're getting are brand new builds, not builds being resumed from a prior state. The error is thrown from cold, with a click of "Build Now" after a reboot, with several valid agent pods waiting to be used but not recognisable by the jobs, for the reason we're going to uncover together :-D

There are no program.dat files at the path pattern you stated, but there are a few nested a bit more deeply.

On the instance I'm looking at for example material the following is the case:

  • 160 jobs
  • 446 branches between them
  • More than 131072 builds between them (the exact number is harder to get; the argument-list length limit was reached)
  • Only 14 program.dat files

So let's assume it's a brand new build: no program.dat exists, pods do exist, and because of the code structure I'm trying to add to, the IDs are inherently different between what's running and what needs to run, yet something else somehow matches up, causing a failure rather than a new instance spawning slowly and repopulating its caches.

Clearly, then, we're not deserialising from that file, right? So in that case my logic/assertion about the class structure is correct, is it not? If not, please help me, as I'm a bit lost. I will try to add a test, as I'm a big fan of having them, and if it prevents us being broken again in future that's a win-win :-) I'm not sure when I can spend time on that though, and I'd like to understand what's going on fully before even trying. Thanks for your time so far! I think we're getting close :-)

…pplied by dynamic groovy and then applied if present in the step execution, thus leaving functionality unchanged for existing use cases such as manually configuring pod templates in a Jenkins UI.
… as the default constructor which just passes null directly into this constructor anyway. So if there's a value we get what we want, and if there's not, we still get what we want. Simpler is better.
@fredcooke force-pushed the fix-pod-template-id-situation branch from fd64777 to 3f3178e on May 25, 2021 02:31
@fredcooke (Author)

@Vlatombe rebased and force-pushed. Any thoughts on the above? Did I understand your comments correctly? Are my assertions correct? I really need to understand that to write an effective test, so your feedback is most welcome. Also, is there anyone else super knowledgeable, and perhaps a little less busy, working on this plugin who could chime in? If so, please tag them :-)

@fredcooke (Author)

@jglick perhaps you have time to respond to my second to last comment? I'd greatly appreciate some input on this. Cheers.

@fredcooke (Author)

I have found the issue with the tests you linked, @Vlatombe, I have them running locally and doing what they should, and I am working on a set of tests to try to illustrate the problem. I don't expect they'll get merged as-is, but they are a point for discussion and can be run on your end to illustrate things and hopefully show the real need for consistent IDs for this key class, given the way it's used on lines 90-92 and 105 of https://github.com/jenkinsci/kubernetes-plugin/blob/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/KubernetesSlave.java

To reiterate:

Before 2cfefd1, agent pods survived controller restarts and picked up new jobs just fine, and new agent pods were sourced if none were available. All good.

After 2cfefd1 post restart, agent pod jobs fail with Not expecting pod template to be null at this point for brand new jobs (not restored from saved files), but they will happily source new agent pods if the old ones are deleted, and work fine on them.

The only things required for the failure to occur are an agent of a certain type being present before the restart and required after it, and a restart fast enough that the agents don't clean themselves up. They never clean up without manual intervention, because they communicate with the host just fine; they just can't get the template, because the ID is not consistent.
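In other words, paraphrasing the failure path (with a hypothetical helper name, not the exact KubernetesSlave code): the agent only remembers the template ID it was created with, and after a restart the re-created dynamic template carries a freshly minted random ID, so the lookup comes back empty.

    // Paraphrased sketch; findTemplateById is a hypothetical stand-in for the
    // plugin's actual lookup. The id stored on the agent was minted before the
    // restart, the re-registered dynamic template has a new random id, so no match.
    PodTemplate template = findTemplateById(cloud, templateId);
    if (template == null) {
        throw new IllegalStateException("Not expecting pod template to be null at this point");
    }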

In a Kubernetes world, where all pods are transient and the master can go away briefly from time to time as hosts are retired and/or the cluster autoscaler reshuffles resources, this is a big deal. It is causing a lot of pain for something in the order of 50 devs and half a dozen or more SRE/DevOps engineers, who have to resolve it by nuking stale/broken agents. That workaround is sub-ideal, as caches are empty on new agents and initial builds are slow; that was not true before that commit, when agents could be reused across restarts.

Hopefully I can get something useful pushed today/tonight. Any feedback on the comment from nearly 2 weeks ago (20 May) is still welcome if someone has time. Cheers.

@fredcooke (Author)

import static java.util.Arrays.*;
import static org.csanchez.jenkins.plugins.kubernetes.KubernetesTestUtil.*;

ugh, not cool.

…ffs and locally used methods can always be found explicitly by name.
…w a faster CI build on the PR as the full suite is massive.
… percent confident in that, but can't seem to run it locally, seems like a precondition is for a legit kube env to be available for this particular test. That's both good and bad as it's more realistic but I don't currently have that available myself.
…it into the next part of the same test. The message wasn't important and I've clearly misconfigured it somehow.
@Vlatombe (Member) commented Jun 1, 2021

After 2cfefd1 post restart, agent pod jobs fail with Not expecting pod template to be null at this point for brand new jobs

Use unique labels, and your brand new jobs won't ever be scheduled on an agent that should be garbage collected.

@fredcooke (Author)

@Vlatombe they are unique; the pod-definition-to-label relationship is 1:1, as intended, and optimal for stateless-style Jenkins use. Multiple types of jobs can run inside the same container, so it's decoupled at that level: for example, pod spec A is used by job styles 1, 2, and 3, whereas pod spec B is used by job styles 4, 5, and 6.

I may have miscommunicated above: they're new builds of an existing job, triggered via Build Now or a push callback.

Thanks for dropping by, but can you please respond to the comment on May 20?

I'm making progress on the tests, but they're not doing what I need just yet. Still working on it.

@fredcooke (Author)

To add to that, essentially the only differences between the pod styles are memory/CPU allocation and dind/not-dind. Check out this file; perhaps it will help make sense of what I mean by "100% dynamic" and why it's a problem right now:

https://github.com/jenkinsci/kubernetes-plugin/blob/f9e55068655ebabac22f4be69183a035d5ee2b2e/src/test/resources/org/csanchez/jenkins/plugins/kubernetes/pipeline/dynamicPodTemplateStepSupportsRestart.groovy

This isn't far off what we actually use, just simplified and redacted as I never share employment info on the web nor divulge employer IP in public.

@Vlatombe (Member) commented Jun 1, 2021

f9e5506/src/test/resources/org/csanchez/jenkins/plugins/kubernetes/pipeline/dynamicPodTemplateStepSupportsRestart.groovy

This can't work. The label must be unique per-build.

@fredcooke (Author)

Ummmm, the label is for a pod template, not a build; why must the pod template be unique per build? I don't understand. This worked just fine until that commit I keep referring to, and it provided immutable, from-source, declarative build definitions that didn't rely on Jenkins master storage. I'm a bit shocked to hear that. What's your alternative suggestion? And why do you even say that, given that a pod template label belongs to a pod template that pods are spawned from, and a job and its builds just consume those based on a match?

@Vlatombe (Member) commented Jun 1, 2021

Ummmm, the label is for a pod template

If you define it in the cloud configuration, yes. This can be managed as code using JCasC, and given how you described your use case, this would be a better way if you want to keep 1 label <> multiple jobs.

If you use the podTemplate DSL, the pod template only lives within its block, even if technically, due to Jenkins architecture limitations, it is available for use by other jobs. So it's really 1 pod template <> 1 build.

What's your alternative suggestion?

Use the recommended idiom https://github.com/jenkinsci/kubernetes-plugin#pipeline-support, or use pod templates declared in the cloud configuration.

@fredcooke (Author)

If you use the podTemplate DSL, the pod template only lives within its block, even if technically, due to Jenkins architecture limitations, it is available for use by other jobs. So it's really 1 pod template <> 1 build.

I understand that it's one pod template per build, but why won't you allow us to make that pod template match and be verifiably the same, and therefore able to look up the matching pods, just as it always did when the lookup was by label and not by a forcibly random ID? You made a breaking change to the plugin API without bumping the major version; that's not good practice. I've seen (don't ask me for links right now, I don't have them on hand) a number of other conversations about (seemingly) this exact issue starting around the same time as that fateful commit. I believe this is not just affecting us.

The actual change in this branch is (unlike the fateful commit) backward compatible and results in NO change in behaviour for any existing configurations out there. But by adding an ID to the podTemplate() PodTemplateStep definition, we can return to how it used to work, which was repeatable, reliable, and efficient. I'm going to keep working on these tests until they illustrate the problem. If at that point you still want to veto it, then we'll look at the situation again.

Use the recommended idiom https://github.com/jenkinsci/kubernetes-plugin#pipeline-support, or use pod templates declared in the cloud configuration.

I'll give this a read and think about it and get back to you.

@jglick (Member) left a comment

Not clear to me what issue this purports to solve.

Comment on lines +32 to +39
def yamlPodHash = '#{project.artifactId}-#{project.version}-maven-image'

// one build per job to ensure tag consistency
try {
    lock(resource: env.JOB_NAME.split('/')[1], inversePrecedence: true) {
        milestone()
        podTemplate(label: yamlPodHash, instanceCap: 6, idleMinutes: 60, cloud: 'dynamicPodTemplateStepSupportsRestart', yaml: yamlPod) {
            node(yamlPodHash) {
Member

You can just remove the deprecated label parameter

Suggested change

    -def yamlPodHash = '#{project.artifactId}-#{project.version}-maven-image'
     // one build per job to ensure tag consistency
     try {
         lock(resource: env.JOB_NAME.split('/')[1], inversePrecedence: true) {
             milestone()
    -        podTemplate(label: yamlPodHash, instanceCap: 6, idleMinutes: 60, cloud: 'dynamicPodTemplateStepSupportsRestart', yaml: yamlPod) {
    -            node(yamlPodHash) {
    +        podTemplate(instanceCap: 6, idleMinutes: 60, cloud: 'dynamicPodTemplateStepSupportsRestart', yaml: yamlPod) {
    +            node(POD_LABEL) {

Author

Hi Jesse, thanks for taking a look! I don't understand how that can function without the label in a dynamic way (i.e. not hard-coded into the org setup).

The class used to work by looking up the pod template, and thus the available pods, by the label. Then it changed to looking up by ID, but with enforced randomness in the ID. Removing the label would be fine if we could control the ID, but we currently can't control that.

The components equivalent to the above snippet are reusable libraries of pod definitions (podTemplates, if you will), defined in a totally separate and independent package from the organisation config/setup; that setup is rewritten completely on every boot, is always the same, and always works precisely because it is always the same. On top of the reusable pod-template blocks ride a bunch of job types with a many-to-one relationship to a pod spec, i.e. three different build styles might use one pod spec, and another three build styles might use a second, different pod spec. Different sets of build styles and pod specs are available in different packages, with the final Jenkins assembly including those that are useful to a particular team/project.

Without putting these pod specs in the organisation config, where IMO they do not belong, how can you get reusable pods across N jobs that share M build styles that share one pod spec/template/definition implemented by P agent pods?

The real version of this has the body passed in from the outside and run as body() in place of the println on line 42 of that Groovy file.

How does POD_LABEL provide the same experience for us that we had before this got broken, and what is the perceived benefit of these items being given random IDs at creation, with no bearing on the YAML or name included? Help! :-D

Member

How does POD_LABEL provide the same experience for us that we had before

I am afraid I am unable to follow what you are trying to accomplish, or, more to the point, what you think does not already work. Simple

podTemplate(yaml: whatever) {
  node(POD_LABEL) {
    sh 'true' // whatever body
  }
}

should work reliably, including across controller restarts, regardless of the number of jobs, concurrent builds, different YAML contents, etc. etc. You should not need to pay any attention to “id”, “label”, etc., or have any global configuration beyond the physical connection to the K8s cluster & namespace (not even that, if the controller itself is running inside K8s and using a service account with appropriate permissions). The above snippet will create a Pod with the definition you specify, wait for its agent to connect to the controller, do work, then delete the pod and clean up, with no interference with other builds or jobs or global settings.

In the future we could introduce a separate step that bypasses the Jenkins queue altogether and makes things even simpler:

pod(yaml: whatever) {
  sh 'true'
}

by analogy with https://www.jenkins.io/doc/pipeline/steps/docker-slaves/#dockernode-allocate-a-docker-node but for now the POD_LABEL idiom is the glue needed to make dynamic pod configuration work with the stock node step.

Author

Hey Jesse, we're somewhat in the same boat here :-) You don't understand what we're trying to do and we don't understand why you're not letting us do it :-D But the above clarifies your angle so hopefully I can build on that and explain what our intent is:

The above snippet will create a Pod with the definition you specify, wait for its agent to connect to the controller, do work, then delete the pod and clean up, with no interference with other builds or jobs or global settings.

This is 100% not what we want :-D The idea is that we specify the definition in such a way that it matches existing pods and can then just use them. The result of the change Vincent made is that it doesn't match or fail to match; it just fails.

It seems like you've interpreted that Groovy file as intending to create a bespoke single-use pod, use it, and dump it. Is that right? That's not the intent, and this setup worked flawlessly for years before the commit, which changed the flow from:

  • set the label in a unique-to-the-definition way
  • look up by label in the KubernetesSlave class
  • just work

to:

  • set the label in a unique-to-the-definition way, get a random ID
  • look up by ID in the KubernetesSlave class
  • fail

Allowing us to set that ID would restore perfectly clean functionality for us, but both of you seem pretty heavily against that, so I need to understand what a solution looks like that both meets your idea of how the plugin should be used AND meets our idea of an acceptable way to use it; if that can be found, we'll rework our stuff and roll it out everywhere. If not, maybe we just fork it, add the one line, and hope no further breakage occurs, though I really, really don't want to do that, for a lot of reasons.

One of the reasons we don't want brand new pods every time is that the cache that is held in the pod between builds will be empty every time, and every build will be ultra slow, not just those first thing in the working day. Even if/when we do something to improve the cache-seeding situation, that seeding will be a fairly heavy overhead (a largish data copy) for a new pod, and not something we'd want to do hundreds or thousands of times per day.

Our goals are:

  1. Our Jenkins (actually many of them) is fresh/new every time it boots, always the same; those random IDs appear to break that, but perhaps there's another way.
  2. Our builds specify the pod style they use in a self-contained way.
  3. Pods are reused throughout the day until their inactivity timeout is reached, at which point they're killed and the cluster can scale down: hot caches, fast builds. With a 1-hour inactivity timeout and regular use through the day, a pod's life span is roughly 9 or 10 hours for a single-time-zone team.
  4. The Jenkins assembly can be packaged together as a composition, including sets of rules/pod-defs that can immediately be used.
  5. After a restart, IF the pod defs are the same, the old pods should be successfully reused.
  6. Ideally, minimal rework of our stack, as we're pretty happy with how it is at the moment; however, we're open to change, it's just not clear yet what a new acceptable solution would look like.

I don't understand how any fresh-on-boot system can be consistent and match the pod template at run time if it's not possible to create those templates and either specify the ID or have the ID derived from a hash of the contents. The latter would be fine, too.
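For example (a sketch only, assuming the label + digest scheme already used in readResolve could be applied at creation time; the variable names here are placeholders):

    // Derive a stable id from the label plus a digest of the pod definition, so that
    // identical definitions always produce identical ids across controller restarts.
    String stableId = label + "-" + Util.getDigestOf(yaml);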

Maybe it IS possible, but in a different way, not via the build command spec as now, where it goes through the PodTemplateStep object (created, confusingly, by the podTemplate() call) rather than directly into a PodTemplate object, where that ID can already be set.

I hope that clarified a few things for you as your post did for me, and I look forward to your response! :-D

Member

we don't want brand new pods every time

The kubernetes plugin is designed to create a pod for every build (or more precisely, every node block), and then throw it away once the build (block) exits. Reusing pods is not a supported use case. (There is an option to keep a pod after exiting, but this is solely for diagnostic purposes, and even that is a dubious practice.)

the cache that is held in the pod between builds will be empty every time

You can specify a persistent volume to reuse between pods, assuming the storage class supports concurrent mounts. This is generally a quite fragile setup, though (special requirements on volume drivers, UID mismatches, file locking issues, ad nauseam) so you are generally better off using a mirror (if dependency downloads are the problem) or a distributed-friendly build system with a cache server (if incremental builds are what you are looking for).

Member

My take on this, with the given plugin state:

  • Static pod templates defined in the Jenkins UI or through JCasc can be reused (I think this is actually the original use case introduced in the plugin)
  • Dynamic pod templates defined in Jenkinsfiles or shared libraries through the DSL are single-use. They're not intended to be reused or shared with other projects or builds.

The fact that dynamic pod templates (or their corresponding pods) are accessible to other jobs is an undesirable side-effect of using the Cloud API. Unfortunately this is what we have today, and, as Jesse mentioned in his earlier response, this is something we want to eliminate if we have the opportunity.

For your use case, I think it can fit with the current plugin if you manage static pod templates using the JCasC approach. I'm not in favour of making any plugin change that would facilitate re-use of pods if they have been defined in a Jenkinsfile.

Member

For your use case, I think it can fit with the current plugin if you manage static pod templates

IIUC no, the request was to reuse pods, not just pod templates, for performance reasons. Certainly reusing pod templates across jobs is well supported (even if we would be inclined to recommend the podTemplate step going forward), but that is still assuming one pod per node block.

Member

See

private RetentionStrategy determineRetentionStrategy() {
    if (podTemplate.getIdleMinutes() == 0) {
        return new OnceRetentionStrategy(cloud.getRetentionTimeout());
    } else {
        return new CloudRetentionStrategy(podTemplate.getIdleMinutes());
    }
}

Member

OnceRetentionStrategy is the normal usage, but the option to use CloudRetentionStrategy suggests that the long-lived-agent style was deliberately added, albeit without tests, documentation, or the ability to use multiple executors? Bisecting points to #91, whose description seems to match this request.

Author

Hi Jesse, Vincent, long time no talk! I did see all of these updates but have been busy with other stuff.

This is still burning everyone, however, and the number of people it's burning keeps growing. Please stop calling it a rare demand. I speak for 50+ people across 7 or so organisations when I comment here. I'd rather not repeat that a third or subsequent time. This is a big deal for a lot of people across two banks and various other companies.

More efficient for very short builds, at the cost of a higher risk of cross-build interference and other maintenance overhead.

Please, this is your personal perception and not any part of our reality. These aspects are managed properly and never give us any trouble.

this VM-like use case with reusable agents is outside the design space of this plugin

It's not "vm like" it's normal for well composed software that is assembled from small reusable pieces.

You have personally, and I commend you for your curiosity and diligence in researching it, proven that that's not the case. It was added to this plugin by the founder of the plugin, and for good reason. Let's call that the design space. There are another dozen people wanting it on the various links you provided, in addition to the multiple dozens I bring to the table.

I'm not in favour of making any plugin change that would facilitate re-use of pods if they have been defined in a Jenkinsfile.

Our Jenkinsfiles are one or two lines; paraphrasing:

buildACertainWayWithACertainAmountOfResources
triggerOtherDownstreamBuilds(["X", "Y", "Z"])

And to your other comment:

Dynamic pod templates defined in Jenkinsfiles or shared libraries through the DSL are single-use. They're not intended to be reused or shared with other projects or builds.

How are pod templates defined in shared libraries that are static with respect to a thousand builds not static in reality?

I understand that you can't guarantee that, but can you make the assumption that someone neck deep in configuring this has the talent to ensure they don't hang themselves with the rope they're given, and just give them the rope?

All of our pod templates are named and versioned, unique and immutable. If they don't match, you get a new pod. If they do match, whether during a single master/controller run or (before the fateful change) even after a restart, they get reused.

I'd like to take this moment to remind you that it is ONLY across restarts that there is now an issue. It STILL works as required during a single master-node execution cycle. Please do not cripple it any further.

My change just lets me control the ID so that it's not random but deliberate and meaningful, in a controlled and safe way. For existing users this makes no difference, unless by random chance they are currently passing a field of the same name and seeing a "field doesn't exist" error; only then could behaviour change.

My change simply matches the available-to-me API with the underlying object API; it doesn't add anything new, it just exposes a TINY bit more control that wasn't previously required, back when the default behaviour made sense and wasn't broken. I understand the desire to make them unique for your throw-away style of builds, but please understand our desire to keep them matching, reusable, and look-up-able for our builds, which work the exact same way, guaranteed by their own semantics, whether there is cache available or not; just MUCH slower when not.

Please let us move on and let this PR die in a good way.

@jglick (Member) commented Jun 18, 2021

Some discussion in JENKINS-42422 also.
