(feat): (testing): Add extensible unpacking interface + controller tests #65
Conversation
Signed-off-by: Bryce Palmer <bpalmer@redhat.com>
@joelanford Follow up for exploring potential alternatives as we discussed in #66
Signed-off-by: Bryce Palmer <bpalmer@redhat.com>
Signed-off-by: Bryce Palmer <everettraven@gmail.com>
```yaml
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - create
  - get
  - list
  - watch
```
Is the switch to pods instead of jobs only because the rukpak provisioner uses an unpack pod instead of a job? This may have subtle but salient implications for catalogd, because there are product-level differences between bundle images and catalog images. For example, bundle images are generally publicly pullable (at least that has been the advice so far, though it's up for debate in v1), whereas catalog images may be private or public, and private images need pull secrets to be pulled. A pull secret is easily passed to a Job, versus a Pod that needs additional configuration in the code.
With that context, I expect more differences between bundle and catalog images to show up over time, and therefore the unpack pod to need more and more in-code configuration for each difference.
- I don't think there really is a difference between catalog and bundle images. It is just as possible for a bundle image to require a pull secret as a catalog image
- Jobs and Pods are (as far as I know) both easily capable of dealing with pull secrets.
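For reference, a Job carries pull secrets the same way a Pod does, via its pod template; a minimal sketch (the Job and container names are illustrative, and `regcred` matches the secret name used in the Pod example below):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: unpack-example        # illustrative name
spec:
  template:
    spec:
      restartPolicy: Never
      imagePullSecrets:
      - name: regcred         # assumed pull-secret name
      containers:
      - name: unpack
        image: <your-private-image>
```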
> Is the switch to pods instead of jobs only because rukpak provisioner uses an unpack pod instead of a job?
Since we are pretty much copying the logic that rukpak uses internally and rukpak uses a Pod instead of Job, yes.
> This may have subtle but salient implications for catalogd due to the fact that there's product level differences between bundle images and catalog images
I'm not entirely sure I'm following here. With FBC, aren't catalog images just going to be images that contain a specific directory/file structure? IIUC that is exactly what a bundle is, just on a smaller scale, with Kubernetes manifests as the files. I understand there are different views on what a catalog vs. a bundle is, hence the creation of a Catalog API, but at the end of the day their base architecture follows the same pattern: everything is laid out as files in a filesystem and read accordingly. I don't think we should embed any specific logic to differentiate between a catalog and a bundle, as IMO that just adds unnecessary complexity. IMO someone should actually be able to supply an image reference that points at a bundle image instead of a catalog image, and catalogd should still successfully "unpack" it. The place where it should fail is when attempting to render contents that would only exist in a catalog.
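To illustrate the "specific directory/file structure" point: an FBC catalog image is essentially just declarative config files on disk. A hypothetical layout (package and file names are illustrative):

```
configs/
└── example-operator/    # one directory per package
    └── catalog.json     # file-based catalog declarative config
```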
@joelanford also makes good points. For reference, a Pod using a pull secret looks something like this (see the Kubernetes docs on pulling images from a private registry):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred
```
We've been bitten too many times in v0 by using pods directly instead of existing Kubernetes primitives. With the pull secret example, I wasn't trying to suggest that you can't do that with pods, but that the Job/Deployment controllers already do exactly that: they pass the pull secret on to the pods they create. By not delegating to those controllers, we sign up for doing that ourselves, and then have to keep absorbing tasks like this into our controller that we could have delegated. For example, if tomorrow there's a policy change that requires a change in how pods are bootstrapped (think something similar to PSA), then three controllers have to incorporate it: the Job controller, the Deployment controller, and ours. And then we have to worry about OCP releases, backports, etc. Using Kubernetes primitives means we make only minimal changes in our controller to incorporate policy changes, the same change every other project will be making.
But, sounds like we need to have this discussion in rukpak instead.
Not resolving the conversation now so that we remember to capture this in an issue in rukpak by referencing this later.
```go
// TODO: None of the rukpak CRD validations (both static and from the rukpak
// webhooks) related to the source are present here. Which of them do we need?
```
I don't remember exactly what the rukpak CRD validations are, but this is probably highlighting a difference: catalogd doesn't need to be concerned with those CRD validations.
It does need to be concerned. These validations make sure that you can't create a catalog with a nonsensical spec.source field, e.g.:
- specify type, but not specify the corresponding struct
- specify type, but specify a different struct
- specify multiple structs
- not specify type
- etc...
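One way to enforce that kind of union shape directly on the CRD (a sketch, not the actual rukpak validations; the `type`/`image` field names are assumptions) is a CEL rule in the schema:

```yaml
# Hypothetical x-kubernetes-validations on spec.source, assuming a union of
# a `type` discriminator plus one per-type struct such as `image`:
x-kubernetes-validations:
- rule: 'self.type == "image" ? has(self.image) : !has(self.image)'
  message: 'source.image must be set if and only if source.type is "image"'
```

A webhook can express richer cross-field checks, but CEL rules cover the "wrong/missing/extra struct" cases listed above without any in-process code.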
```go
func (i *Image) ensureUnpackPod(ctx context.Context, catalog *catalogdv1beta1.Catalog, pod *corev1.Pod) (controllerutil.OperationResult, error) {
```
Don't think we have to solve this in this PR, but mentioning it for the record:
Essentially, there are concerns about the fidelity of pod logs (specifically, whether logs are complete, and therefore a reliable source of complete content). Instead, the unpack jobs/pods should send the data back to catalogd via a service endpoint:port that catalogd makes available for these uploads, and the data is then stored reliably in the filesystem (or in ConfigMaps) before it is consumed.
+1, and I agree with putting this out of scope.
FWIW, this sounds like it is likely also something that rukpak needs to consider
I'm concerned that this PR review is treading into "let's dissect and tweak how rukpak does things" territory, and I think: if we diverge here, it just means more work when we go back and converge.
Signed-off-by: Bryce Palmer <bpalmer@redhat.com>
+1. FWIW we could use #66 as the issue for the convergence of the implementations
Signed-off-by: Bryce Palmer <bpalmer@redhat.com>
/lgtm
Sounds like we have to resolve some of the issues I brought up in rukpak instead of here, so we can move these conversations to rukpak instead.
Signed-off-by: Bryce Palmer <bpalmer@redhat.com>
New changes are detected. LGTM label has been removed.
Description
- `image`
- `source.Catalog`
- controller
Motivation
- `OwnerReference` based watch on the unpack job #63
- `CatalogSource` controller error handling during reconciliation #6