Update pipeline-service prod #4028
gabemontero left a comment:
The pinning looks good @enarha, but I found a couple of anomalies ... ptal
```diff
  key: db.name
  name: tekton-results-database
- image: quay.io/redhat-appstudio/tekton-results-api:bae7851ff584423503af324200f52cd28ca99116
+ image: quay.io/redhat-appstudio/tekton-results-api:ed360eccc021ad5eedf8ea9c0732912ef602b15a
```
Yep, we are allowing a bump of the API server image while we still pin the watcher image at the level with the mem leak (and the performance cheats we need to live with until the WG-approved rewrite of logging to bypass the gRPC calls lands).
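As a sketch of what that looks like in the Results deployment manifests (the api image name and tag are from the hunk above; the watcher container name and tag are illustrative, not taken from this diff):

```yaml
# Hypothetical excerpt of the tekton-results deployment: the api image is
# allowed to move forward, while the watcher stays pinned at the last tag
# known to carry the mem leak we are knowingly living with for now.
containers:
  - name: api
    # bumped in this PR
    image: quay.io/redhat-appstudio/tekton-results-api:ed360eccc021ad5eedf8ea9c0732912ef602b15a
  - name: watcher
    # intentionally NOT bumped; tag placeholder, not from this diff
    image: quay.io/redhat-appstudio/tekton-results-watcher:<pinned-tag>
```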
The watcher image is not changed here.
Looks good.
```diff
  --dry-run=client \
  -o yaml | kubectl apply -f -
- image: quay.io/konflux-ci/appstudio-utils:ab6b0b8e40e440158e7288c73aff1cf83a2cc8a9@sha256:24179f0efd06c65d16868c2d7eb82573cce8e43533de6cea14fec3b7446e0b14
+ image: quay.io/redhat-appstudio/appstudio-utils:dbbdd82734232e6289e8fbae5b4c858481a7c057
```
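For context on the hunk above, the `--dry-run=client -o yaml | kubectl apply -f -` idiom renders a resource client-side and pipes the resulting YAML into `apply`, making the operation idempotent. A hedged sketch of the pattern (the ConfigMap name and literal are made up for illustration, not taken from this repo):

```shell
# Render the manifest locally (no object is created by the first command),
# then pipe the YAML into `kubectl apply` so re-runs update in place.
kubectl create configmap example-config \
  --from-literal=db.name=tekton-results-database \
  --dry-run=client \
  -o yaml | kubectl apply -f -
```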
Is this intentional? I think konflux-ci is the new approved way ... maybe this is getting picked up from the pipeline-service repo and should be reverted there?
Good catch. I compared that with staging and considered it good, but it isn't. The problem is #3848 updated the deploy.yaml files directly, instead of the refs in the pipeline-service repo, and now our update is overriding it. I'll create the fix in the service repo, but we can't consume it without the OSP bump (or at least without reverting it in both the service and infra repos and the staging clusters, which I'm not willing to do). So I'll create the change in the service repo and revert those specific changes in the deploy.yaml files. @rcerven FYI.
```diff
  enable-git-resolver: true
  enable-hub-resolver: true
  enable-tekton-oci-bundles: true
+ enable-step-actions: true
```
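If I read the hunk right, these flags are the Tekton feature flags exposed through the operator's TektonConfig custom resource. A rough sketch of where such keys live (the surrounding structure is assumed from the OpenShift Pipelines operator's TektonConfig layout, not taken from this diff; only the flag names come from the hunk above):

```yaml
# Assumed layout: feature flags nested under spec.pipeline of the
# operator-managed TektonConfig CR (conventionally named "config").
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    enable-git-resolver: true
    enable-hub-resolver: true
    enable-tekton-oci-bundles: true
    enable-step-actions: true
```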
The change includes the Tekton Results update to enable TLS verification (pipeline-service commit SHA 706387e06fde062c3c8fc23531ea9fe8a62a372f), but does not include the OSP version bump (pipeline-service commit SHA 8eada62e76fb135d2cba5b7929411181c0a0e9e0). The latter is planned for an update a week from now.
Force-pushed: 905bcda to 62673a8
```diff
- - args: []
-   image: quay.io/redhat-appstudio/pipeline-service-exporter:2782659983c94692497467cd1cf952b1bc1f0da4
+ - args:
+   - -pprof-address
+   - "6060"
+   image: quay.io/redhat-appstudio/pipeline-service-exporter:c93b6d93e8bbd4540a4d565a35bae2a8b081b000
```
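The new `-pprof-address` arg suggests the exporter now serves Go's pprof endpoints on that port, which is handy for chasing exactly the kind of mem leak discussed above. Assuming a standard `net/http/pprof` handler bound to 6060 (implied by the flag, not verified against the exporter's code; the deployment name is illustrative), a heap profile could be pulled like this:

```shell
# Forward the pprof port from the exporter pod, then grab a heap profile.
# Both the deployment name and the endpoint layout are assumptions here.
kubectl port-forward deploy/pipeline-service-exporter 6060:6060 &
go tool pprof http://localhost:6060/debug/pprof/heap
```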
It looks like this cluster was not up to date, thus the change.
```diff
- image: quay.io/openshift-pipeline/openshift-pipelines-pipelines-operator-bundle-container-index@sha256:c2c6587e059b0b5144f4b2cff79f31f1f6fee36f0927b301a17a3b608237134f
+ image: quay.io/openshift-pipeline/openshift-pipelines-pipelines-operator-bundle-container-index@sha256:beec206d5f9650173348c0d9db404faee6b791ec6d25a9ea3410a909a8e37187
```
Same as above. It is in line with the other clusters, though.
gabemontero left a comment:
/hold
(until you release the hold at the date you specified in your konflux announce email)
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. Approval requirements bypassed by manually added approval. This pull request has been approved by: enarha, gabemontero. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
I think your kube-linter error is a github.com hiccup.
This change does not include the OSP bump; see the commit message. I actually planned to merge it as soon as the CI passes and I get the respective approvals. I see the CI is already green, so unless you have any objections, I'll remove the hold.
Sorry, totally missed that in the description.
/hold cancel