Consider triggering follow-on pipelines on success of earlier pipelines #68
Pipelines in Pipelines (Experimental) could work well in this scenario. I just don't know how I feel about having to fill out 15+ params in one pipeline, and I also don't know how painful debugging would be when you're dealing with multiple pipelines in one. Otherwise, this seems like the best approach.
As this StackOverflow answer says: "Tekton indeed doesn't really offer any easy way to trigger one pipeline from another". Most solutions to this that I can find involve one of two approaches:
Since both of these approaches involve modifying the underlying pipelines, I would strongly recommend guarding the new final task using a new param (e.g.
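The param name got cut off above, but a minimal sketch of that kind of guard (assuming a hypothetical `file-pull-request` param and a `file-pr` task, neither taken from the actual pipelines in this repo) could look roughly like this:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-maybe-file-pr
spec:
  params:
    # Hypothetical guard param; defaulting to "true" keeps today's behaviour.
    - name: file-pull-request
      type: string
      default: "true"
  tasks:
    # ... existing build/test tasks ...
    - name: file-pr
      # The final PR-filing task only runs when the guard param is "true".
      when:
        - input: "$(params.file-pull-request)"
          operator: in
          values: ["true"]
      taskRef:
        name: file-pr
```

A task skipped by a `when` expression does not fail the PipelineRun, so the rest of the pipeline still shows green.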
I'm not sure what the benefits of this approach would be compared to the other two.
I think a single pipeline has the benefit of being visually pleasing for a demo showing the end-to-end workflow of how an intelligent application makes it into production. I do agree that I would not want to wait for a container build pipeline just to debug some issue in the test or rollout steps of the pipeline.
Moving discussion about this from Slack to here:
I'd like to point out that currently, the
We have to walk before we run. This is a PoC. It can be as limited in options / scope as reasonable, but it should be correct in what it is doing. Today the PR filed has nothing in common with all the previous steps that the
Then we are talking about a different pipeline, which should take the previous PipelineRun id (or something) as its input, to be able to pull information about the container image from it (I assume that can be done somehow). Which, frankly, might be the most practical quick-fix solution for now.
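As a rough sketch only, a follow-on pipeline like that could take the earlier PipelineRun's name as a param and read the pushed image reference out of its status. The task, param, and image names below are made up, and the jsonpath mirrors the command quoted further down in this thread:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: rollout-from-previous-run
spec:
  params:
    # Hypothetical param: name of the earlier test PipelineRun to pull data from.
    - name: previous-pipelinerun-name
      type: string
  tasks:
    - name: fetch-image-ref
      params:
        - name: pipelinerun
          value: "$(params.previous-pipelinerun-name)"
      taskSpec:
        params:
          - name: pipelinerun
            type: string
        results:
          - name: image-ref
        steps:
          - name: read-status
            # Placeholder image; anything carrying the oc/kubectl client would do.
            image: registry.redhat.io/openshift4/ose-cli:latest
            script: |
              #!/bin/sh
              # Read the destination image URL out of the earlier run's status.
              oc get pipelinerun "$(params.pipelinerun)" \
                -o jsonpath='{.status.pipelineSpec.tasks[?(@.name=="skopeo-copy")].params[?(@.name=="destImageURL")].value}' \
                | tr -d '\n' > "$(results.image-ref.path)"
    # ... rollout tasks would consume $(tasks.fetch-image-ref.results.image-ref) ...
```

The ServiceAccount running this would also need permission to get PipelineRuns in the namespace.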
Make the PR-filing Tasks optional, based on some parameters. But again, this is a PoC, we show what can be done in its entirety, not claiming that it is perfect for every use-case.
Unless everything is green, there is some problem there; something unexpected happened. If we are talking about GitOps, testing is just a gating step to CD. We can by all means have separate smaller Pipelines, with the manual inputs, for people who prefer them. But the PoC should show as much automation, and as much use of the results of previous steps, as possible.
Sorry, I know you linked the issue but could you be more specific on "requires so much manual input"? If you mean these params: Lines 8 to 20 in f148f63
I believe by combining those two pipelines into one we will make scaling/expansion in the future unnecessarily hard; no point in combining the pipelines now and separating them again later. If we're trying to be "correct" I think using one of the approaches I suggested will be faster, easier, and more scalable.
I totally agree; we can create an issue for this. The intention was for somebody to copy over whatever image ref they wanted to use into the PipelineRun ("change the parameter values in the PipelineRun definition to match."): Line 148 in f148f63
However I can see how that's confusing and having an easy copy/paste command would be nice (see below). Ideally this will be automated of course once we resolve this issue.
I agree, we could use the following:

```console
$ PIPELINERUN_NAME=$(oc get pipelinerun \
    --selector=tekton.dev/pipeline=test-mlflow-image \
    --sort-by=.status.startTime \
    -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[0].type}{" "}{.status.conditions[0].status}{"\n"}{end}' | \
    awk '$2=="Succeeded" && $3=="True" {print $1}' | \
    tail -n 1) && \
  oc get pipelinerun $PIPELINERUN_NAME \
    -o jsonpath='{.status.pipelineSpec.tasks[?(@.name=="skopeo-copy")].params[?(@.name=="destImageURL")].value}'
docker://quay.io/rhoai-edge/bike-rentals-auto-ml:1-3c1e170d-0144-452c-96de-f60aad045a39
```
We could, or we could just keep the pipelines separate ¯\_(ツ)_/¯. Those optional tasks can now mean your testing flow stops working because somebody accidentally introduced a bug in the pipeline while trying to change the git flow.
In that case I still like the use of Pipelines in Pipelines more than combining the steps from multiple pipeline files into one.
The world works in mysterious ways, especially when programming is involved.
I 100% agree, but I don't think combining all of the steps from every pipeline file into one is a good idea, for multiple reasons (listed above).
Can you elaborate on this please? I agree they should ideally match, but for example the user could have multiple ACM apps running that use the same quay model but with different digests, e.g. an old version and a new version. Even if we assume that's never going to be the case, what do you think we should do? Have the testing pipeline check the repo files to see if they match? Use
That may not be what the user always wants but it's fair as a default 👍.
The PoC currently does not show running two different apps with the same container image but different version of that image, so it's not like this would prevent demonstrating what the PoC demonstrates today. But anyway: the PR goes to the repo of the ACM app that typically consumes the new image first, and the user is welcome to later merge that change to the more conservative branch that drives the second ACM app. QE / stage / prod. Whatever.
If the testing pipeline pushed the container image somewhere, that's the value that the
Sorry, I'm still not understanding. Are you suggesting that the testing pipeline should change
Overall: any manual step we can remove from the process, we should try to remove. Doing something manually, be it copying SHA-256s or creating PipelineRuns, is error-prone; it relies on the human factor rather than supporting humans by automating what can be automated. The approaches that you suggested all seem to require technology or techniques that we currently don't have in the PoC. I'd prefer to fully use what we depend on today before adding something new to the mix. I see nothing wrong with having primarily a single Pipeline in the PoC, plus possibly instructions on what bits to remove if the user wants to avoid / skip some steps. Maybe they don't want to test. Maybe they don't want to file the PR. They can always edit it out.
Yes. Well, the rollout pipeline via the PR, just like the digest. But in the context of this issue, it's basically the same pipeline.
Pipelines in Pipelines is extremely simple though; it's like one small file...
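For reference, a rough sketch of what that parent pipeline could look like with the pipelines-in-pipelines custom task, referring to the existing pipelines by name (only `test-mlflow-image` is a real name from this thread; the other pipeline names are placeholders):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-test-rollout
spec:
  tasks:
    - name: build
      taskRef:
        # kind: Pipeline is resolved by the pipelines-in-pipelines controller.
        apiVersion: tekton.dev/v1beta1
        kind: Pipeline
        name: build-container-image
    - name: test
      runAfter: ["build"]
      taskRef:
        apiVersion: tekton.dev/v1beta1
        kind: Pipeline
        name: test-mlflow-image
    - name: rollout
      runAfter: ["test"]
      taskRef:
        apiVersion: tekton.dev/v1beta1
        kind: Pipeline
        name: rollout-to-edge
```

The catch, as noted below, is that the controller from the experimental repo has to be installed on the cluster first.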
The https://github.com/tektoncd/experimental/tree/main/pipelines-in-pipelines starts with
Not gonna happen.
???
Fair enough. But it is yet another thing that the user would have to ask their cluster-admin to do for them:
IIRC, the last thing I had to do as an admin for this pipelines part was to install the Red Hat OpenShift Pipelines operator, which says "provided by Red Hat". For some admins and some clusters, the confinement and separation of duties that the current PoC approach provides might be the dealbreaker.
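For comparison, installing that operator boils down to a single OLM Subscription that a cluster-admin applies once; the channel and package names below reflect my assumption of the usual defaults and may differ between OpenShift versions:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  # Package and channel names may vary between OpenShift versions.
  channel: latest
  name: openshift-pipelines-operator-rh
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```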
Sure, sounds fine to me. I'll close this one for now then 👍
Currently we have 3 pipelines that are roughly:
The normal flow of things is in that order:
Build -> Test -> Rollout
Right now, someone has to kick off each one manually after the success of the earlier ones. I think that we should consider either just having one pipeline that does all of the steps, having a separate outer pipeline that triggers and monitors the existing ones, or having triggers that kick off subsequent pipelines after the success of earlier ones.
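As a sketch of the third option (triggers that kick off subsequent pipelines), each pipeline could carry a `finally` task that starts the next PipelineRun with the tkn CLI once everything before it has succeeded. All names other than `test-mlflow-image` are placeholders, the `image-ref` param is hypothetical, and the task's ServiceAccount would need permission to create PipelineRuns:

```yaml
# Fragment of a Pipeline .spec, added alongside the existing tasks.
finally:
  - name: kick-off-test-pipeline
    # $(tasks.status) aggregates the status of all non-finally tasks in this run.
    when:
      - input: "$(tasks.status)"
        operator: in
        values: ["Succeeded", "Completed"]
    taskSpec:
      steps:
        - name: start-next
          # Placeholder image; any image containing the tkn CLI would do.
          image: quay.io/example/tkn-cli:latest
          script: |
            #!/bin/sh
            # Start the follow-on pipeline, forwarding the image built by this one.
            tkn pipeline start test-mlflow-image \
              --param image-ref="$(params.image-ref)" \
              --use-param-defaults
```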