
CLI Manager #924

Merged · 23 commits · Feb 3, 2022
Conversation

rosspeoples

Provide a way to distribute CLI tools and plugins with a krew-compatible API for disconnected environments.

@openshift-ci openshift-ci bot requested review from hardys and sdodson October 6, 2021 14:32
@rosspeoples
Author

/assign @soltysh

@rosspeoples
Author

Updated based on feedback. Thanks!

The user will invoke the following commands:

* `oc tools list` will access the connected OpenShift cluster, if connected, and will retrieve information about available CLIs (ClusterCLI CRs)
* `oc tools install odo` will install `odo` to a user's home directory, as a standalone binary.
Contributor

Will this be multi-arch aware?

Author

Yes, oc is already aware of the os/arch it's running on, so the idea is to pass along that information to the CLI manager's API to filter results based on platform (os/arch).
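A minimal sketch of how a client could derive the os/arch key to pass along, assuming Go-style GOOS/GOARCH values; the `uname`-based mapping below is an illustrative assumption, not oc's actual implementation.

```shell
# Hypothetical sketch: derive a Go-style os/arch key like the one oc would
# pass to the CLI manager's API to filter results. The mapping is assumed.
os="$(uname -s | tr '[:upper:]' '[:lower:]')"    # e.g. linux, darwin
arch="$(uname -m)"                               # e.g. x86_64, aarch64
case "$arch" in
  x86_64)  arch=amd64 ;;
  aarch64) arch=arm64 ;;
esac
echo "${os}/${arch}"                             # e.g. linux/amd64
```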


this is a good point for krew, iirc it does that too already, so you should use that as an argument for krew, as mentioned above.

Author

Are you saying we should recommend using krew over oc plugin?

Author

Nevermind, I think the answer is yes and I'm updating the proposal to reflect that, krew just makes more sense than implementing a new client-side component.

* Caveats
* Homepage
* Version
* Platform/architecture
Contributor

This field implies that perhaps I could have 2 CRs for the same command for different architectures? Is the name of the CR significant, or does tool install look only at the path field?

Author

I was going to put a note there for clarification, but thought it better to be in a design details section: A single CR supports multiple versions, and each version supports multiple platforms (os/arch). The name of the CR is used for tool install.
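To make that shape concrete, here is a hypothetical manifest: one CR per tool, multiple versions, multiple platforms per version. The API group, field names, image, and path are all illustrative assumptions — the proposal does not publish a schema in this thread.

```shell
# Hypothetical ClusterCLI CR: one CR per tool, multiple versions, each
# version listing per-platform binaries. All names below are illustrative.
cat <<'EOF' > clustercli-odo.yaml
apiVersion: example.openshift.io/v1alpha1   # placeholder API group/version
kind: ClusterCLI
metadata:
  name: odo              # the name used by `oc tools install odo`
spec:
  description: OpenShift Developer CLI
  versions:              # ordered oldest -> newest; last entry is "latest"
    - version: "1.0"
      platforms:
        - os: linux
          arch: amd64
          image: registry.example.com/odo-cli:1.0   # placeholder image
          path: /usr/bin/odo                        # path inside the image
EOF
wc -l clustercli-odo.yaml
```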

```
NAME     DESC            LATEST  INSTALLED
-----    -----           -----   -----
kubectl  Kubernetes CLI  1.15    1.13
oc       OpenShift CLI   4.4     4.3
```
Contributor

How will oc know which version of the tool is installed locally? Will it ask each of them to report their version? Where does the info from the latest column come from?

Author

Great questions... keeping track of the locally installed version was an interesting problem. Since a SHA256 hash is kept for every version/os/arch binary, oc can simply hash the binary locally and query the API with that digest to get the current version. The digests are stored in the CR's status field, so as long as no one removes those, oc should always know what version is installed.

In the CR, the list of versions is expected to be in order from oldest to newest, so the latest is always at the end of the list. Attempting to parse version numbers seemed too problematic given how differently projects handle versioning.
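The lookup described above can be sketched as follows. The digest table is a local stand-in for the digests kept in the CR's status field, and the "binary" is a stand-in file:

```shell
# Sketch of the installed-version check: hash the local binary, then look
# the digest up in a digest->version table (mocked here; in the proposal
# this data would come from the CR's status field via the API).
printf 'fake odo binary v1.0\n' > odo
digest="$(sha256sum odo | awk '{print $1}')"

# Stand-in for the digests stored per version in the CR status.
cat > digests.txt <<EOF
$digest 1.0
EOF

installed="$(awk -v d="$digest" '$1 == d {print $2}' digests.txt)"
echo "installed version: ${installed:-unknown}"
```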


Either explain the questions popping up in the proposal section, or add a questions section towards the end of the proposal. It's fine, and even good, to leave some questions unanswered.


Also, doesn't krew track that information for you?

on the cluster.

Currently, users can access `consoleclidownloads` custom resources for helm, oc, and odo either from oc with `oc get consoleclidownloads` or through
the `downloads -n openshift-console` route. This access will still be available, through the updated ClusterCLI custom resources. The ClusterCLI custom
Contributor

Does this mean that newer versions of oc will use the new CRD instead of the old mechanism? Is there a version skew concern with that change, or will the old mechanism work for old clients?

Author

The old mechanism isn't going away yet (maybe never). The current oc command to extract binaries from those CRs isn't changing either. This would add a new subcommand to oc that will be used exclusively for the new CRD.


oc isn't actively using the current mechanism; afaict it's only used by the web console, since `oc adm release extract` retrieves the binaries directly from a release image.


Each CLI will provide an image.
Each CLI is responsible for creating a CR to hold metadata. The CR will serve to deliver the metadata and description
of its deliverable binary. Initially, CRs will be accessed using `oc tools` command, and `oc tools install <cli>` will extract the binary to a user's home directory.
Contributor

Will oc tools install oc work?

Author

Perhaps...if only to update/replace the currently installed version. On unix-like systems, this is pretty straightforward, but for Windows it's more complicated, as you cannot replace the executable of a running process.

### Risks and Mitigations

Distributing undesirable binaries is always a risk. Some mitigations include requiring cluster-admins to maintain the index, and the verification of downloaded
binaries using SHA256 hashes.
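The SHA256 verification step can be sketched with standard tooling. The "published" digest is mocked locally here; in the proposal it would come from the cluster-admin-maintained index:

```shell
# Sketch of verifying a downloaded binary against a published SHA256 digest.
# The published digest is mocked; in practice it would come from the index.
printf 'downloaded plugin binary\n' > kubectl-odo
published="$(sha256sum kubectl-odo | awk '{print $1}')"

# sha256sum -c reads "<digest>  <file>" lines and verifies each file.
echo "${published}  kubectl-odo" | sha256sum -c -
```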


Verifying SHA256 hashes is only good if these hashes can be trusted (for example because they are signed with a RH-owned key). Might be useful to clarify this if this can be done in a few words.

Author

I've updated to hopefully clarify that it's the responsibility of the cluster-admins to publish trusted binaries.

CLI images would serve the artifacts to enable disconnected downloads, and OpenShift CLIs will be available as
[krew plugins](https://github.com/kubernetes-sigs/krew-index/tree/master/plugins).

* Krew and Krew plugins are upstream projects that Kubernetes users are already familiar with

I'd suggest the proposal section be shaped in the following way:

  1. introduce what krew is and how it fits the picture of delivering plugins based on the index metadata, and most importantly why we picked krew instead of creating our own mechanism
  2. describe how we'll serve the index metadata from step 1 inside the cluster
  3. describe how a binary author is meant to have their binary served by the mechanism in step 2

First describe the process from a high level, then create a detailed section for each of them drilling down into implementation details.

* `oc krew install odo`

I could then interact with `odo`:
* `oc odo --help`


IMHO this is ugly. odo is normally a standalone tool, likewise for tools like tkn or helm. If someone is using these tools in any other context, they're used to just typing 'odo --help' or 'helm install'. In the worst case, they're following samples or have scripts that call the tool directly and wouldn't work in this scenario.

If there are tools that shouldn't exist on their own or don't have much direct usage then I agree a plugin mechanism makes a lot of sense. I don't like wrapping well-known binaries and providing a different way to access them, especially when it'll primarily only be used for disconnected environments.


I don't see this proposal as only dealing with the 'disconnected' use case. IMO the value of this proposal is really in 'standardizing' how we ship/distribute CLI tools to customers that interact with OCP.

alias='...' is a small price to pay for re-establishing foo as a command-line tool over oc foo. Today, without this, the way[s] that we ship CLI tools lack a lot of the standardization and 'verification' that an 'enterprise'-focused company should be providing.

Member

I am interested in both points from @deboer-tim and @sferich888

  • I need a supported mechanism to distribute a CLI tool from an optional operator; this includes disconnected environments, but it's more than just that... I need to be sure the officially built CLI binary is distributed to the end user
  • For one of our projects (OADP, Velero), the current usage is to invoke the CLI binary just as 'velero' and not as 'oc velero'; needing to put 'oc' in front is a little strange. I don't foresee this as a blocker for our usage, but it would be nice to see if we can improve it in future.


The initial phase will be covering oc plugins, only. I'm envisioning addressing concerns from @deboer-tim and @jwmatthews at later stages.

Contributor

in order for "oc odo" to work, doesn't that mean "odo" is already on the user's path? so "odo" would just work, without any aliasing, right?


No — because of how plug-in naming works, the binary name is 'kube_odo' or 'oc_odo', so some alias/symlink would be needed to get the name you want, but it's a small price to pay for the delivery mechanism (IMO).

Contributor

@soltysh it seems like this thread was never really resolved satisfactorily (or even unsatisfactorily, i.e. saying "yes, the UX is degraded, but the unification of CLI mechanisms is worth it and is the direction we're going").

Is there an alternative here? Or are the concerns raised on this thread being rejected as insufficient to change implementation direction?

Contributor

looks like this was more or less answered in the "alternatives" section below: we'll move forward with this approach, and users can use an alias if they want easy direct invocation


@deejross make sure to add followups mentioning that we will want to address the problem of all CLIs as well. This initial proposal is only meant for plugins.

Author

Done


#### Story 2

As owner of a CLI or plugin, I want to publish it to users of the cluster. I need to create a Plugin CR for my tool, or provide the required information about my tool


FWIW the GitHub actions team has shown that finding the versions and associated binaries for most RH/OpenShift CLIs can be completely automated: https://github.com/redhat-actions/openshift-tools-installer


You're right in your previous comment that shipping non-oc binaries through this mechanism isn't the best possible way, but the undeniable benefit is that it will work just fine in a disconnected environment, whereas the above GitHub Actions approach would require the additional step of mirroring the binaries. We're aiming to simplify that bit by shipping the binaries through images, which is a supported way of providing binaries to customers.


Definitely, providing standalone binaries through this mechanism is a good step forward.


@deejross add a followup/further development section and mention it there

RPMs or images are the only options, and images must be deployed by the RH pipeline via operators. This proposal is for delivering
plugins via images, because this will enable offering plugins offline through an existing image registry.
* Plugin owners must be able to easily distribute their binaries
* The version of a plugin that a user is offered is appropriate for the version of the cluster

That's not entirely true, maybe this makes sense for the plugins which are shipped in the default payload, but for optional operators/components it does not. I'd not make that a goal of the proposal.


Each component wishing to provide customers with their plugins will build and publish images via a trusted image registry
and create a Plugin CR to provide an image name and the file path within that image for the binaries.
Clients (i.e. `krew`) will read the index and download binaries from the Controller. The Controller is responsible building the index from CRs, for pulling the images,
Suggested change: "The Controller is responsible building the index" → "The Controller is responsible for building the index".

* A `krew`-compatible custom index can provide available plugins for a cluster in disconnected environments
* The index will be served by a Controller with its contents managed by cluster-admins via CRs
* The binaries will also be served by the Controller that will pull images from a trusted image registry and extract the binaries
* With `krew index add https://someother-third-party-index` we won't limit cluster-admins from adding their own index with whatever plugins they want

That's a client-side action, not a cluster-admin one. In other words, a cluster client can modify the list of indexes they work with. We will ensure that the cluster index is first on that list.
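Client-side index management uses krew's real `index add`/`index list` subcommands (available since krew v0.4); the index name and URL below are placeholders, and the commands are guarded so the sketch is safe to run where krew isn't installed.

```shell
# Adding a custom plugin index with krew. The index name "mycluster" and
# the URL are placeholders. Guarded so this is safe without krew installed.
if command -v kubectl-krew >/dev/null 2>&1; then
  kubectl krew index add mycluster https://example.com/cluster-index.git
  kubectl krew index list
  kubectl krew install mycluster/odo   # install from the named index
else
  echo "krew not installed; commands shown for illustration"
fi
```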



## Drawbacks

Suggested change: `non-kubeclt` → `non-kubectl`.

Being that `krew` is for distributing `kubectl` plugins rather than generic CLIs, one drawback is how a non-`kubectl` plugin
is executed after being installed. The binary will always be prefixed with `kubectl-`, so for example, `odo` would be `kubectl-odo`.
This is how `kubectl` plugins work. You can either execute it with that name, or through `kubectl` or `oc` as though it were a
`kubectl` plugin. For example: `kubectl odo` or `oc odo`.
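The prefix convention is kubectl's real plugin mechanism: any executable named `kubectl-<name>` on PATH becomes invocable as `kubectl <name>` (and thus `oc <name>`). A self-contained demo with a stub plugin (the plugin name and install path are illustrative):

```shell
# Demo of kubectl's plugin naming: an executable kubectl-<name> on PATH.
# Here we create a stub plugin and invoke it by its prefixed name directly;
# with a real kubectl/oc on PATH, `kubectl odo` would dispatch to it.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/kubectl-odo" <<'EOF'
#!/bin/sh
echo "odo plugin invoked"
EOF
chmod +x "$HOME/bin/kubectl-odo"
export PATH="$HOME/bin:$PATH"
kubectl-odo   # prints: odo plugin invoked
```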

As commented elsewhere by @deboer-tim this should be called out as a followup/extension of this mechanism.


We should also call out that every modern shell (bash, fish, powershell, etc) provides an 'alias' feature. Thus there are simple workarounds to re-establishing existing behaviors (for user continuity).
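The workaround described above can be shown in two forms — an alias and a symlink; the paths and names are illustrative (bash/zsh syntax).

```shell
# Two workarounds for invoking a krew-installed plugin directly as `odo`
# instead of `oc odo`. Paths are illustrative.

# 1. A shell alias (add to ~/.bashrc or equivalent to persist):
alias odo='oc odo'

# 2. Or a symlink on PATH pointing at the prefixed plugin binary:
mkdir -p "$HOME/bin"
ln -sf "$HOME/.krew/bin/kubectl-odo" "$HOME/bin/odo"
ls -l "$HOME/bin/odo"
```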

@soltysh left a comment

A few minor nits, but I'm mostly OK with this as is.

* The Controller will not build or package plugins
* No recommending/limiting which plugins are served by the API
* `krew` will not create or update the Plugin CRs, those will be managed by cluster-admins
* No modifications to `oc` to support this functionality


How? Isn't the oc config (for krew) going to need to know what 'index' to search or pull from (as it's going to be cluster-specific, and/or an oc client can be set up to talk to multiple clusters)?

I like this 'statement' for the purpose of the proposal; however, from a UX perspective, I think 'minimal' changes to oc (or the built-in krew plugin we ship) will be needed so it knows how to talk to the cluster index (without forcing users (possibly 100's of them) to get data from a cluster-admin and configure the CLI on how to talk to the cluster).

* A `krew`-compatible custom index can provide available plugins for a cluster in disconnected environments
* The index will be served by a Controller with its contents managed by cluster-admins via CRs
* The binaries will also be served by the Controller that will pull images from a trusted image registry and extract the binaries
* With `krew index add https://someother-third-party-index` we won't limit users from adding their own index with whatever plugins they want


I would like for us (Red Hat), as part of the oc 'configuration', to set up ~3 indexes for customers by default.

  • Red Hat supported plugins -> on-cluster approved plugins as defined by the cluster-admin
  • Third Party ISV (Certified) plugins -> on-cluster approved plugins as defined by the cluster-admin
  • Community (upstream krew) plugins

IMO, asking users (again, potentially 100's per cluster) to configure what index they want to pull from is a step we can simplify.


Yes, we are considering having a set of built-in indexes that will be present on all clusters. The only real question is how to deal with those in connected vs. disconnected envs, but if we can make krew fail silently and ignore an unavailable index w/o too much time delay, that should work. I'll update this doc with additional findings that will be part of the first POC.


@deejross while adding final updates, maybe worth mentioning that we'll embed those 3 indexes.

* The index will be served by a Controller with its contents managed by cluster-admins via CRs
* The binaries will also be served by the Controller that will pull images from a trusted image registry and extract the binaries
* With `krew index add https://someother-third-party-index` we won't limit users from adding their own index with whatever plugins they want
* As `krew` itself is a `kubectl` plugin, it can be invoked using either `kubectl krew` or `oc krew`


Will this proposal facilitate / require us to ship krew as part of oc (plugin) to simplify the user experience?

Author

I talked with @soltysh about this yesterday. We are going to embed krew into oc for a simplified experience. I'm going to update the goals section to reflect this.


@deejross i guess this is not addressed, but yeah we want to embed krew in oc by default.

```
odo      OpenShift Developer CLI  1.0     Not Installed
```

#### Story 2


How do we make this user story look more like that of the 'samples operator', wherein curated lists can be presented to admins who can then choose to 'distribute or not' (for their cluster)?

In my mind, asking admins to 'manually curate' a list of our (or other developers') plugins is like asking them to curate the internet. If this is all we can do 'day 1', that's fine; but how can we expand on this to simplify what we deliver to admins and users alike?


A good option I considered is that operators would inject their binaries' information into that CR during installation. This would allow admins to have a pre-populated list that they can later modify.

### Risks and Mitigations

Distributing undesirable binaries is always a risk. Some mitigations include requiring cluster-admins to maintain the index, and the verification of downloaded
binaries using SHA256 hashes. Cluster-admins are responsible for publishing only trusted binaries.


Is there a way to work in signature verification? And let admins state what 'signatures' they 'trust'? This would then be used by clients talking to the server to only pull things that a cluster admin defines?

Author

It might be possible from within the controller when submitting a CR, but krew doesn't support it from the client side.


Probably something we could help add in krew.


@deejross please add in followup section.

@rosspeoples
Author

Any additional thoughts, concerns, or anything outstanding that hasn't been addressed yet?

* A `krew`-compatible custom index can provide available plugins for a cluster in disconnected environments
* The index will be served by a Controller with its contents managed by cluster-admins via CRs
* The binaries will also be served by the Controller that will pull images from a trusted image registry and extract the binaries
* With `krew index add https://someother-third-party-index` we won't limit users from adding their own index with whatever plugins they want
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Yes, we are considering having a set built-in indexes that will be present on all clusters. The only real question is how to deal with those in connected vs disconnected envs, but if we can make krew to fail silently and ignore not available index w/o too much time delay that should work. I'd update this doc with additional finding that will be part of the first POC.

* The index will be served by a Controller with its contents managed by cluster-admins via CRs
* The binaries will also be served by the Controller that will pull images from a trusted image registry and extract the binaries
* With `krew index add https://someother-third-party-index` we won't limit users from adding their own index with whatever plugins they want
* As `krew` itself is a `kubectl` plugin, it can be invoked using either using `kubectl krew` or `oc krew`
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

@deejross i guess this is not addressed, but yeah we want to embed krew in oc by default.

* A `krew`-compatible custom index can provide available plugins for a cluster in disconnected environments
* The index will be served by a Controller with its contents managed by cluster-admins via CRs
* The binaries will also be served by the Controller that will pull images from a trusted image registry and extract the binaries
* With `krew index add https://someother-third-party-index` we won't limit users from adding their own index with whatever plugins they want
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

@deejross while adding final updates, maybe worth mentioning that we'll embed those 3 indexes.

* `oc krew install odo`

I could then interact with `odo`:
* `oc odo --help`
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The initial phase will be covering oc plugins, only. I'm envisioning addressing concerns from @deboer-tim and @jwmatthews at later stages.

odo OpenShift Developer CLI 1.0 Not Installed
```

#### Story 2
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

A good option I considered is that operators would inject their binaries information into that CR during installation. This would allow admins to have a pre-populated list that he can later modify.

### Risks and Mitigations

Distributing undesirable binaries is always a risk. Some mitigations include requiring cluster-admins to maintain the index, and the verification of downloaded
binaries using SHA256 hashes. Cluster-admins are responsible for publishing only trusted binaries.

Probably something we could help add in krew.

### Risks and Mitigations

Distributing undesirable binaries is always a risk. Some mitigations include requiring cluster-admins to maintain the index, and the verification of downloaded
binaries using SHA256 hashes. Cluster-admins are responsible for publishing only trusted binaries.

@deejross please add in followup section.


#### Story 2

As owner of a CLI or plugin, I want to publish it to users of the cluster. I need to create a Plugin CR for my tool, or provide the required information about my tool

@deejross add a followup/further development section and mention it there

* The index will be served by a Controller with its contents managed by cluster-admins via CRs
* The binaries will also be served by the Controller that will pull images from a trusted image registry and extract the binaries
* With `krew index add https://someother-third-party-index` we won't limit users from adding their own index with whatever plugins they want
* The controller could be optional, since connected environments can use the default `krew` index out-of-the box

@deejross the biggest missing element in this proposal is the discussion whether we want that controller to be part of every default OCP installation or rather an optional element. I think it's crucial before we make final decision which direction we want to go.

Pros for built-in:

  • available from day 1 for all clusters
  • all binaries, including the built-ins, can be served using a single unified mechanism

Cons for built-in:

  • additional controller running by default

Pros for opt-in:

  • less resource usage by default

Cons for opt-in:

  • yet another operator to manage and mirror, especially in disconnected envs
  • separate mechanism for built-in and 3rd-party binaries
  • missing plugin CRs for optional elements installed before this controller is installed

I'm personally leaning towards built-in, and if I don't see any objections I'd like to pursue that one.

Contributor

The bar for adding an "always present" component is very very high. Close enough to insurmountable that we are considering making existing always-present components optional #922. I don't think this is going to meet that bar. For example, consider all of the disconnected deployments, workload-only clusters, and clusters consumed by kcp. Users won't be interacting with those to get their clients.

Contributor

other discussion from the slack thread(paraphrased):

maciej: this is replacing an existing component (console cli downloader server) so it's not a net-add to the core

bparees: once we have #922 we could make this component optional, so putting it in the core in 4.11 (when we'll have that capability to disable, hopefully) might be an acceptable middle-ground, with a plan to move it to OLM in the future when OLM is capable of the full "cluster-lifecycling/day0" story.


@deejross I'm missing the section I've mentioned above describing options and which one was picked any why.


@deejross can you replace this sentence and below with something like:

* There are two possible approaches for this new controller:
1. Controller should be installed by default...
pros... cons...
2. Controller being optionally installed...
pros... cons...
Given above arguments it is reasonable to include the controller by default.

* The controller could be optional, since connected environments can use the default `krew` index out-of-the box
* As `krew` itself is a `kubectl` plugin, it can be invoked using either `kubectl krew` or `oc krew`

Existing methods of downloading binaries (i.e. the console) will not be affected by this proposal. For the initial implementation, supported plugins will create Plugin CRs. The plugins will be downloaded and installed using `krew`.
Contributor

will the console be enabled to serve/show this content? or will you have to use the CLI?


If we go the built-in route, ideally I'd like for console to be able to re-use that. In other words, we'd replace the current container, which serves those binaries from cli-artifacts, with this mechanism.

* `oc krew install odo`

I could then interact with `odo`:
* `oc odo --help`
Contributor

in order for "oc odo" to work, doesn't that mean "odo" is already on the user's path? so "odo" would just work, without any aliasing, right?


A new CRD will be created:
* API: `config.openshift.io/v1`
* Kind: `Plugin`
Contributor

can you flesh out the rest of this CRD spec?

* Plugin Custom Resource Definition compatible with `krew`
* Must work with image registries that require image pull secrets
* Use `krew` manage plugins made available via CRs
* A Controller to manage plugins that will serve binaries from images
Contributor

this implies the controller will be pulling images, this introduces the possibility of a delta in image pulling behavior between what happens if you create a pod that refs the image, and what this controller does, with respect to:

  • use of the cluster scoped image pull secret
  • respecting ICSP rules for pulling from mirrors
  • use of CAs to talk to the registry
  • use of proxies to talk to the registry

this has been a long struggle for builds (one of the main things that does its own image pulling on the cluster). So if you're going to do this, it needs to be done right and then kept in sync as crio/pod image pulling behavior may continue to change in the future (as it did when ICSP was introduced)

Contributor

@soltysh i don't really see how this was addressed in the design..... are you going to use a sidecar approach to allow k8s to handle the image pulling for you and avoid the problem, or is there another plan?

any sort of details about how the images will be pulled/extracted seems like a good thing to have in the design details.


@deejross ^

Author

I have added these points to the Design Details section, as they are important.
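For context on the sidecar/init-container idea raised above: one way to avoid re-implementing pull behavior (cluster pull secrets, ICSP mirrors, CAs, proxies) is to let the kubelet pull the plugin image and copy the binary out through a shared volume. This YAML is only a sketch of that pattern under assumed names and pullspecs, not the design the proposal settled on.

```yaml
# Sketch: the kubelet pulls the plugin image with normal cluster pull
# semantics; an init container copies the binary into a shared emptyDir
# that the serving container then exposes.
apiVersion: v1
kind: Pod
metadata:
  name: extract-odo            # hypothetical
spec:
  initContainers:
  - name: extract
    image: registry.example.com/odo:latest   # hypothetical pullspec
    command: ["cp", "/usr/bin/odo", "/artifacts/odo"]
    volumeMounts:
    - name: artifacts
      mountPath: /artifacts
  containers:
  - name: serve
    image: registry.example.com/file-server:latest   # hypothetical
    volumeMounts:
    - name: artifacts
      mountPath: /artifacts
  volumes:
  - name: artifacts
    emptyDir: {}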

@soltysh soltysh left a comment

/lgtm
/approve
/hold
leaving some time for @bparees @dhellmann to comment, if not I'll drop the hold by end of week

@openshift-ci openshift-ci bot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. lgtm Indicates that a PR is ready to be merged. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Dec 20, 2021
* Another operator to manage and mirror for disconnected environments
* Separate mechanism for built-in and third-party binaries
* Missing plugin CRs for optional elements installed before controller is installed
* Won't be able to replace the existing, temporary system used by the console to serve some binaries
Contributor

not clear why it couldn't replace this also.

we don't necessarily need to solve this now, but long term i would still expect this component to be a candidate to move to olm, so we'll need to discuss it then

Author

If the controller is optional, then the console wouldn't be able to rely on it being present, so the existing mechanism would need to remain.

Contributor

If the controller is optional, then the console wouldn't be able to rely on it being present, so the existing mechanism would need to remain.

no, if it's optional the console would have to be able to handle it not being present and tell the admin "hey if you want cli download functionality, please install/enable this optional feature/capability"

@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Dec 21, 2021
@bparees
Contributor

bparees commented Dec 23, 2021

@deejross @soltysh can we get a bit of a roadmap here, as there seem to be a few phases of implementation.

e.g.:

phase 1: add new CRs+component that serves plugin binaries, existing cli download server continues running, serves RH native binaries. Console is unaware of the new CR/plugin server

phase 2: console is aware of plugin server, serves both the native binaries and the plugin binaries

phase 3: cli manager takes over serving native binaries(as plugins?), existing cli download server goes away

phase 4: cli manager becomes an optional olm component that is installed by default on day 0

that list may not be accurate, please write your own, but i'm interested in understanding mostly:

  1. when the console is going to become aware of this thing and use it (basically ensuring the console team is on board w/ this plan/design)
  2. when the existing cli server will go away, since part of why we are saying it's ok to add this new component to the core is because we're getting rid of an existing one

@afflom

afflom commented Jan 25, 2022

I'm a bit late to the party here, but can we consider not using OCI images for this? I agree that OCI formatted objects are the best way to mirror content, but this can be accomplished without the overhead of handling OCI images. I suggest storing arbitrary content in its original format within an OCI registry as we are seeing with Helm (https://helm.sh/docs/topics/registries/). cc @sabre1041

Ross Peoples added 2 commits February 1, 2022 09:25
@rosspeoples
Author

@bparees I've added a roadmap section to the proposal. Any other concerns or anything I missed?

@bparees
Contributor

bparees commented Feb 1, 2022

This comment doesn't seem fully addressed (making this thing optional in the future):

#924 (comment)

assuming you agree, please put that on the roadmap and otherwise my concerns are addressed

@bparees
Contributor

bparees commented Feb 1, 2022

Controller should be optional, but enabled by default as it will be replacing the existing mechanism which was intended to be temporary

maybe that was intended to address optionality? but it stops short of addressing what the console would have to do if this thing is optional (being optional means the CRDs also don't get registered)

again i think making it optional can be a future goal (especially since it needs coordination w/ the console), i just want to see that it's in your plans

@rosspeoples
Author

@bparees fair enough, I have added that to the Followup section.

@bparees
Contributor

bparees commented Feb 1, 2022

thanks, lgtm. not sure if i'm the only person you're still waiting on sign-off from

@soltysh soltysh left a comment

/lgtm
/hold cancel

@openshift-ci openshift-ci bot added lgtm Indicates that a PR is ready to be merged. and removed do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. labels Feb 3, 2022
@openshift-ci
Contributor

openshift-ci bot commented Feb 3, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sferich888, soltysh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci
Contributor

openshift-ci bot commented Feb 3, 2022

@deejross: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-merge-robot openshift-merge-robot merged commit 6ae7884 into openshift:master Feb 3, 2022
@jupierce
Contributor

jupierce commented Feb 7, 2022

After reading the proposal and the comments, production distribution is still unclear to me. I see things described as "bring your own" and also references to default/payload based plugins. If any plugins are supposed to be part of the release payload and the release payload only contains images for a single architecture, how would we have multi-arch support as implied by the CR? What is the nature of:

Image string `json:"image"`

Is it an imagestreamtag? A pullspec with a floating tag? A pullspec with a sha256 digest?

  • If imagestreamtag, it cannot presently support multiarch.
  • If pullspec with floating tag, what is the registry, what is the tag, how is it updated in the CR at build time, how is it QE'd before the artifact-carrying image is GA'd / released to a known registry?
  • If pullspec with sha256, what is the registry, how is it updated in the CR at build time, how is it QE'd before the artifact-carrying image is GA'd / released to a known registry?

Optional operators have solved these problems. If all RH plugins are to be delivered via OLM, then I have fewer concerns.

@rosspeoples
Author

@jupierce: Since images are just compressed archives, you can put as many arch/os-specific binaries in a single image as you like. One of the other fields is Files where you specify which files/binaries you would like to extract for the given Platform (i.e. arch/os combination).

This allows image creation to be as flexible as needed. You can create one image for all platforms, or one image per platform. For your question regarding the Image field, that's a standard image identifier. This means it could be as generic as busybox, or as specific as registry/image:tag.
