enhancements/oc/mirroring-configuration-manifests: Propose a new enhancement #188
Conversation
> that can be applied to the local registry with the given `oc image mirror` command.
> With this enhancement, the output format would gain an additional directory `manifests` as a sibling to the current `v2` containing manifest files that can be applied to the local cluster with:
Hrm. I think we need to talk about what is output first, rather than jumping straight to a directory structure.
> I think we need to talk about what is output first...
Manifests! In the short term, this will be a ConfigMap with the release signature, but I don't think it matters from a CLI perspective, does it?
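For concreteness, a rough sketch of the flow being discussed, assuming the `manifests` sibling directory floated above (command, directory, and file names are illustrative, not final):

```console
$ oc adm release mirror \
    --from=quay.io/openshift-release-dev/ocp-release:4.3.0-x86_64 \
    --to-dir=mirror
$ ls mirror
manifests  v2
$ oc apply -f mirror/manifests/   # e.g. the release-signature ConfigMap
```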
> will mirror the 4.3.0 release image and other images referenced by the release image into the mirror repository at registry.example.com.
> This enhancement would also apply associated manifests to the cluster that `oc` connects to, with the manifest application beginning after the image mirror had completed.
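A sketch of that "apply directly to the connected cluster" variant; `--apply-manifests` is the opt-in flag proposed later in this thread, and the registry name is illustrative:

```console
$ oc adm release mirror \
    --from=quay.io/openshift-release-dev/ocp-release:4.3.0-x86_64 \
    --to=registry.example.com/ocp/release \
    --apply-manifests   # manifest application begins after the image mirror completes
```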
What are the odds that someone actually has access to the machine in a restricted network? This wouldn't help most disconnected customers as much. So it's possible that this just isn't the right first step (but thinking through the next step might be fine).
Might be good to break this up into three parts:
- mirroring and outputting a set of manifests that can be applied to the server
- mirroring and outputting and applying to a target server directly
- a description of the human flow for mirroring to a central registry in a fully air gapped environment
instead of the two sections below.
> What are the odds that someone actually has access to the machine in a restricted network?

Slim, but you may decide to mirror to clusters without having a restricted network, e.g. to reduce exposure to RH outages or whatever.

> mirroring and outputting a set of manifests that can be applied to the server

Is this "pushing images directly to a different registry, but dropping manifests to disk, because I need to sneakernet those in"? I guess we can do that, but it doesn't seem all that different from mirroring to disk locally, turning around and pushing the images immediately with `oc image mirror ...`, and then sneakernetting the manifests in to `oc apply` them.
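Roughly the flow I mean, assuming the mirror registry is reachable from the connected host but the cluster is not (the `file://` pattern follows the existing restricted-network docs; paths and names are illustrative):

```console
# On the internet-connected host:
$ oc adm release mirror \
    --from=quay.io/openshift-release-dev/ocp-release:4.3.0-x86_64 \
    --to-dir=mirror
$ oc image mirror --from-dir=mirror 'file://openshift/release:4.3.0*' registry.example.com/ocp/release

# Carry mirror/manifests/ across the gap, then on a host that can reach the cluster:
$ oc apply -f mirror/manifests/
```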
> ...mirroring to disk locally, turning around and pushing the images immediately with `oc image mirror ...`, and then sneakernetting the manifests in to `oc apply` them.
This is confusing. If we have to sneakernet the manifests then one has to sneakernet everything - including the images.
> If we have to sneakernet the manifests then one has to sneakernet everything - including the images.

No, the 4.2 docs (which predate `--to-dir`) say:

> In this procedure, you place the mirror registry on a bastion host that has access to both your network and the internet. If you do not have access to a bastion host, use the method that best fits your restrictions to bring the contents of the mirror registry into your restricted network.
So there are some folks with a local registry accessible from inside and outside, but a cluster that cannot reach outside. Or that was just the best story we had for 4.2, even if there were no users in that situation?
When I see the term sneakernet, and therefore the necessity to do so, we must be dealing with an "air-gapped", i.e. physically isolated, network: it can't be connected to anything that is connected to the internet. If it isn't air-gapped, and is connected to the outside in some way, shape or form, there would be no need to sneakernet, so why would we?
> When I see the term sneakernet...

Our official openshift-docs use "restricted network" instead of "disconnected", "sneakernet", etc. specifically because there are a whole range of possible setups in this space. You might want to use `oc adm release mirror ...` to mirror into your local space in any of those, and maybe even if you have unrestricted access to the wider internet but have, say, policies limiting a running cluster's reliance on assets you do not directly control.
> mirroring and outputting a set of manifests that can be applied to the server

> Is this "pushing images directly to a different registry, but dropping manifests to disk, because I need to sneakernet those in"? I guess we can do that...

Talking to @mazzystr about this enhancement today, I'm going to reverse my earlier "hrm..." to be "yeah, we want this without a separate command", because it is how install is going to work when the mirror registry is available for immediate container image pushing. Updated the enhancement to add a new `--manifests-to-dir` with 8d55831 -> 9517e1f.
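A sketch of how the new flag might be used, assuming it behaves as described here (images pushed straight to the reachable mirror registry, manifests written to disk for later application):

```console
$ oc adm release mirror \
    --from=quay.io/openshift-release-dev/ocp-release:4.3.0-x86_64 \
    --to=registry.example.com/ocp/release \
    --manifests-to-dir=manifests
# Later, on a host that can reach the target cluster:
$ oc apply -f manifests/
```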
> For disconnected clusters, `oc adm release mirror ...` already helps users copy the release image and its dependencies into their local mirror.
> This enhancement extends that command to also create and apply manifests to their local cluster, to bring in information that cannot be represented by container images.
> For example, `oc` might produce ConfigMap manifests containing the release image signature which the cluster-version operator could use to [verify the mirrored release][cvo-config-map-signatures].
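For example, such a signature ConfigMap might look roughly like the sketch below. The namespace, label, and file name are assumptions based on my reading of the CVO's ConfigMap signature store (openshift/cluster-version-operator#279) and this thread, not a final format; the binaryData key format is taken from the discussion further down.

```console
$ cat mirror/manifests/signature-sha256-3a516480.yaml   # file name illustrative
apiVersion: v1
kind: ConfigMap
metadata:
  name: signature-sha256-3a516480          # name illustrative
  namespace: openshift-config-managed      # assumed from the CVO ConfigMap signature store
  labels:
    release.openshift.io/verification-signatures: ""
binaryData:
  # key format from this thread: sha256-<release digest>-<index>
  sha256-3a516480dfd68e0f87f702b4d7bdd6f6a0acfdac5cd2e9767b838ceede34d70d-1: <base64-encoded signature>
```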
Do we also want, at least as an option, the ability for 'oc' to do the verification so only verified data is ever sneakernetted onto a possibly highly sensitive network?
> ... the ability for 'oc' to do the verification so only verified data is ever sneakernetted onto a possibly highly sensitive network?

Like "does this signature check out?"? That sounds good to me, but is an internal `oc` decision. It won't affect the user interface. Unless you also want flags to turn it on and off? But I don't know why you'd want flags to turn it off, once `oc` grows the ability to verify signatures. I guess there is a bit of wiggle room, because `oc`-side verification would be "do the sig stores configured in release B have a valid signature for release B by the keys trusted in release B?" and in a cluster currently running release A, the CVO would be checking "do the sig stores configured in release A have a valid signature for release B by the keys trusted in release A?". But the point of ConfigMap signatures is that we can populate the ConfigMap sig store configured in release A, so I'm not worried about that. And if we have concerns about drifting "by the keys trusted in release", we could just include ConfigMaps for all the signatures we can hunt down and let the CVO sort it out inside. I guess that would be a new flag (`--include-all-signatures`?), but we can make that proposal in follow-up work; it doesn't have to be in this PR.
+1 for leaving it to the CVO to verify the signatures at this point in time. I am still not sure what value `oc`-side verification will add to this.
My original question above came out of a lack of understanding of how this ultimately works. However, as it turns out, in order to "acquire" the signature(s) needed to populate the configmap, Verify has to be called. The verification process is what populates a structure containing those signatures that successfully verified the given image. If it doesn't return any signatures (which "should" never happen), something is wrong, in which case I just warn the user and proceed with the rest of the mirroring task.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: wking.
> The proposal for [applying directly to the target cluster](#applying-directly-to-the-target-cluster) adds a new dependency on the cluster to which `oc` connects for Kubernetes activity.
> The explicit `--apply-manifests` argument requires users who have been using the previous `oc adm release mirror` implementation (which did not push manifests) to explicitly opt in to the new functionality, so there is no chance that they accidentally push manifests into the wrong cluster.
> FIXME: check to see how this works if there are conflicts but neither `--force-conflicts` nor `--overwrite` was set.
I tested this with my implemented `--apply-manifests` option and the cluster generates an error if the ConfigMap, based on name and namespace, already exists.

I added an `--overwrite` option and, if provided, it first checks whether the ConfigMap exists and, if so, updates the existing ConfigMap. If it doesn't already exist it just attempts the create.
For reference, the way `oc apply` currently works WRT creation/update of a configmap is:
- If the given configmap, as defined by namespace and name, already exists, it will update it without the need for any additional option. The resulting message is different in that it says the configmap was "configured" vs "created".
- The configmap is truly updated as opposed to replaced. This has the following behavior WRT the binaryData field: if the key in binaryData is changed, e.g. from sha256-3a516480dfd68e0f87f702b4d7bdd6f6a0acfdac5cd2e9767b838ceede34d70d-1 to sha256-3a516480dfd68e0f87f702b4d7bdd6f6a0acfdac5cd2e9767b838ceede34d70d-4, the binaryData key is updated accordingly on re-apply.

Also, `--overwrite` would only apply when the `--apply-manifests` option is specified. Currently `oc adm release mirror`'s default behavior when writing images out to `--to-dir` is to simply overwrite what is currently there with no warning or indication that it is doing so. It makes sense to have the newly added configmap output work in the same manner - simply overwrite whatever is there by default with no warning.
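To make the current `oc apply` behavior concrete, a minimal example (the manifest and object names are illustrative; the created/configured messages are standard `oc apply` output):

```console
$ oc apply -f signature-configmap.yaml
configmap/release-signature created
$ oc apply -f signature-configmap.yaml   # re-applying updates the existing object in place
configmap/release-signature configured
```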
I want to suggest an alternative solution for mirroring data like release signatures, which is based on the idea I had picked up when I started working on Cincinnati: the cluster receives all information required for upgrades via the update graph. If for some reason the graph doesn't contain all that is required, we can always insert it there. This has at least two advantages:
1. The cluster only needs access to an image registry and a graph endpoint.
2. The mirroring scope remains small, including only the image registry and the graph endpoint.
For example, image signatures could be attached as metadata to each release in the graph. The cluster could still have the abstraction of ConfigMaps for image signatures, just that they would be created by the CVO on update graph fetch, rather than relying on some external tool to create them.
Further, with the advancement of processes around https://github.com/openshift/cincinnati-graph-data and its direct integration with Cincinnati, I suggest that we use that repository to store, or at least reference, all relevant release information which is not contained in the release images directly. To pick up on the example above, this would mean that we push the signatures to that repository or at least include a signature URL for every release.
In case of references, Cincinnati and the mirroring tool can insert the signatures in the graph or offline content respectively, and the (offline) cluster will receive it via the update graph.
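To illustrate the idea, here is a Cincinnati graph node roughly as served today, plus a hypothetical metadata key carrying a signature URL; the `signature-url` key does not exist and is only a sketch of the suggestion above:

```console
$ curl -s -H 'Accept: application/json' \
    'https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.3' | jq '.nodes[0]'
{
  "version": "4.3.0",
  "payload": "quay.io/openshift-release-dev/ocp-release@sha256:...",
  "metadata": {
    "io.openshift.upgrades.graph.release.channels": "stable-4.3",
    "io.openshift.upgrades.graph.release.signature-url": "https://mirror.example.com/signatures/sha256=.../signature-1"
  }
}
```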
/close
@sdodson: Closed this PR.
/reopen
@wking: Reopened this PR.
Here is the use case that @rthallisey and I are working on; many clusters will exist in a disconnected network, sharing one container registry. We need to define a clear workflow for how the customer will:
The workflow here seems oriented toward one cluster. We can probably make it work for multi-cluster, but I'll observe that there are now three different kinds of data that need to go to three different places:
If we can reduce the number of data types the customer has to work with and/or the number of places that data has to go, that would help. For example, including signatures with the graph data would simplify things substantially.
You'll have some choices here. One possible flow:
Or don't worry about specific releases, and just mirror every release in your favorite channel(s) into your network.
There are options here too, but I think this is out of scope for this enhancement. We have no public enhancement around local Cincinnati yet, but you can float one or discuss in OTA-97 (internal; sorry external folks).
Yeah, you'd have to push sig ConfigMaps to each cluster for 4.4 and 4.5 (this enhancement), as sketched below. There are a number of ways you could share sigs between clusters though, and we'll probably have a story around that for some future 4.y.
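A sketch of what that could look like for several clusters sharing one mirror (kubeconfig paths and the directory name are illustrative):

```console
$ for kubeconfig in cluster-a.kubeconfig cluster-b.kubeconfig; do
    oc --kubeconfig "${kubeconfig}" apply -f mirror/config/
  done
```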
We are not touching that in this enhancement. Just setting the stage, because the config manifest handling from this enhancement may be used (in some future implementation) to also collect whatever internet-available information about Red Hat's Cincinnati graph and package it up for shipping in to the restricted-network cluster.

But that also requires that we have a story for local Cincinnati (or at least local JSON serving), which we don't at the moment. And there may be other internet-available information that needs collecting for restricted-network use cases which cannot be conveniently slotted into a Cincinnati graph. Like... I dunno, dynamic Insights rules or something.
…ncement Describing how we're going to feed restricted-network clusters the release-signature ConfigMaps that the cluster-version operator reads since openshift/cluster-version-operator#279. Personally, I prefer 'manifests' naming here, so:
* manifests directory
* --apply-manifests and --manifests-to-dir options.
But there was some concern that the Kubernetes/OpenShift manifests would be confused with the container image manifests in v2/openshift/release/manifests [1]. kubernetes-manifests was suggested [2], which would lead to a kubernetes-manifests directory and --apply-kubernetes-manifests and --kubernetes-manifests-to-dir options. But there was some concern about using Kubernetes anywhere [3], so the commit goes with a config directory and --apply-config and --config-to-dir options. The truncated digest hash in the example signature filename avoids any concerns about filename length limitations on restrictive operating systems or filesystems [4].

[1]: openshift#238 (comment)
[2]: openshift#188 (comment)
[3]: openshift#188 (comment)
[4]: openshift#188 (comment)
I've pushed 884166a -> 7e24df7, adding a link to cincinnati-graph-data, mentioning …
> As discussed in [the *version skew strategy* section](#version-skew-strategy), new versions of `oc` may add additional manifests as `oc` learns about capturing new information from the wider internet (e.g. information for [in-cluster Cincinnati](#in-cluster-cincinnati)).
> Workflows that apply manifests individually with `oc apply -f "mirror/config/${FILEPATH}"` would need continual review to ensure they were not forgetting any manifests which the user wishes to apply.
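In other words, applying the whole directory keeps working as new manifest types appear, while per-file workflows need ongoing maintenance; a minimal sketch assuming the `config` directory name from the current proposal:

```console
# Picks up whatever manifests newer oc versions add to the directory:
$ oc apply -f mirror/config/
# Needs review whenever a new manifest type is added:
$ oc apply -f mirror/config/signature-sha256-3a516480.yaml   # file name illustrative
```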
This is unconvincing. Why would we allow someone to add subdirectories? If we need optional subdirectories, how would we set them up?
I would rather just say "for now we don't do recursive". If we don't need it concretely now, we don't require it.
We probably already need to tell users what command to run in mirror (as part of output), so having an explicit list there is better in the short term.
Floated #283 with a reroll to scope narrowly around a release image signature ConfigMap, since I haven't had any luck convincing you that we want this PR's broad, generic scoping.
This PR implements the current baseline of openshift/enhancements#238 (which is a continuation of the work originated here: openshift/enhancements#188). Local package pkg/verify was reused with minor changes from CVO, so there is PR openshift/library-go#725 to move package verify into library-go.
Describing how we're going to feed restricted-network clusters the release-signature ConfigMaps that the cluster-version operator reads since openshift/cluster-version-operator#279. Personally, I prefer 'manifests' naming here, so:
* manifests directory
* --apply-manifests and --manifests-to-dir options.
But there was some concern that the Kubernetes/OpenShift manifests would be confused with the container image manifests in v2/openshift/release/manifests [1]. kubernetes-manifests was suggested [2], which would lead to a kubernetes-manifests directory and --apply-kubernetes-manifests and --kubernetes-manifests-to-dir options. But there was some concern about using Kubernetes anywhere [3], so the commit goes with a config directory and options which leave the format implicit (--apply-release-image-signature and --release-image-signature-to-dir). The truncated digest hash in the example signature filename avoids any concerns about filename length limitations on restrictive operating systems or filesystems [4]. The narrow scoping vs openshift#188 accommodates pushback from Clayton in [5] and similar out-of-band discussion. The two releases of backwards compatibility are based on [6]. I'm extrapolating from Clayton's comment to assume that "releases" means 4.(y+2), not 4.y -> 6.0.

[1]: openshift#238 (comment)
[2]: openshift#188 (comment)
[3]: openshift#188 (comment)
[4]: openshift#188 (comment)
[5]: openshift#188 (comment)
[6]: openshift#283 (comment)
Obsoleted, for now, by #283.

/close
@wking: Closed this PR.
Stubbing out how we're going to feed restricted-network clusters the release-signature ConfigMaps that the cluster-version operator reads since openshift/cluster-version-operator#279.
CC @jottofar, @LalatenduMohanty, @smarterclayton