- Release Signoff Checklist
- Summary
- Motivation
- Proposal
- Design Details
- Implementation History
- Drawbacks
- Alternatives
- Infrastructure Needed (optional)
- Enhancement issue in release milestone, which links to KEP dir in kubernetes/enhancements (not the initial KEP PR)
- KEP approvers have approved the KEP status as `implementable`
- Design details are appropriately documented
- Test plan is in place, giving consideration to SIG Architecture and SIG Testing input
- Graduation criteria is in place
- "Implementation History" section is up-to-date for milestone
- User-facing documentation has been created in kubernetes/website, for publication to kubernetes.io
- Supporting documentation e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
Local clusters like Kind, K3d, Minikube, and Microk8s let users iterate on Kubernetes quickly in a hermetic environment. To avoid network round-trip latency, these clusters can be configured to pull from a local, insecure registry.
This KEP proposes a standard for how these clusters should expose their support for this feature, so that tooling can interoperate with them without redundant configuration.
Many local clusters currently support local registries, but they put the onus of configuration on the user. First, the user has to follow cluster-specific instructions to set up the registry:
- https://kind.sigs.k8s.io/docs/user/local-registry/
- https://microk8s.io/docs/registry-built-in
- https://github.com/rancher/k3d/blob/master/docs/registries.md#using-a-local-registry
- https://minikube.sigs.k8s.io/docs/handbook/registry/#enabling-insecure-registries
Once the registry is set up, the user has to manually go through each individual tool that pushes images to the cluster and configure it with the hostname and port of the new registry.
The motivation of this KEP is to remove this configuration redundancy and reduce the user's burden of configuring every image tool separately.
- Agree on a standard way for cluster configuration tools to record how developer tools should interact with the local registry.
- Agree on a standard way for developer tools to read that information when pushing images to the cluster.
- Modifying how local registries are currently implemented on these clusters.
- How this might be extended to support multiple registries. This proposal assumes clusters have at most one registry, because that's how all existing implementations work.
- Any API for configuring a local registry in a cluster. If there were a standard CRD that configured a local registry, and all implementations agreed to support that CRD, this KEP would become moot. That approach would have substantial technical challenges. OpenShift supports a CRD for this, which it uses to configure registries.conf inside the cluster.
- A general-purpose mechanism for publicly exposing cluster configuration.
- A general-purpose mechanism for cluster capability detection for developer tooling. Best practices around cluster capability detection are still evolving; local registries are the only capability mature enough for a standard like this to make sense.
- The creation of Git repositories that can host the API proposed in this KEP.
- Many local clusters expose other methods for loading container images into the cluster, independent of the local registry. This proposal doesn't address how developer tools should detect them.
Tools that configure a local registry should also apply a ConfigMap that communicates "LocalRegistryHosting".
The ConfigMap specifies everything a tool might need to know about how to interact with the local registry.
Any tool that pushes an image to a registry should be able to read this ConfigMap, and decide to push to the local registry instead.
Alice is setting up a development environment on Kubernetes.
Alice creates a local Kind cluster. She follows the Kind-specific instructions for setting up a local registry during cluster creation.
Alice will have some tool she interacts with that builds images and pushes them to a registry the cluster can access. That tool might be an IDE like VSCode, an infra-as-code toolchain like Pulumi, or a multi-service dev environment like Tilt.
On startup, the tool should connect to the cluster and read a ConfigMap.
If the config specifies a registry location, the tool may automatically adjust image pushes to push to the specified registry.
If the config specifies a `help` URL, the tool may prompt the user to set up a registry for faster development.
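To make this flow concrete, here is a minimal sketch of the reader side, assuming client-go and gopkg.in/yaml.v3 (neither is mandated by this KEP). The struct mirrors the `LocalRegistryHostingV1` definition under Design Details below:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// LocalRegistryHostingV1 mirrors the struct defined later in this KEP.
type LocalRegistryHostingV1 struct {
	Host                     string `yaml:"host,omitempty"`
	HostFromClusterNetwork   string `yaml:"hostFromClusterNetwork,omitempty"`
	HostFromContainerRuntime string `yaml:"hostFromContainerRuntime,omitempty"`
	Help                     string `yaml:"help,omitempty"`
}

func main() {
	// Connect with the user's default kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Read the well-known ConfigMap specified by this KEP.
	cm, err := client.CoreV1().ConfigMaps("kube-public").
		Get(context.Background(), "local-registry-hosting", metav1.GetOptions{})
	if err != nil {
		// No ConfigMap: the cluster doesn't advertise a local registry.
		fmt.Println("no local registry advertised; pushing to the default registry")
		return
	}

	var reg LocalRegistryHostingV1
	if err := yaml.Unmarshal([]byte(cm.Data["localRegistryHosting.v1"]), &reg); err != nil {
		log.Fatal(err)
	}
	switch {
	case reg.Host != "":
		fmt.Printf("pushing images to the local registry at %s\n", reg.Host)
	case reg.Help != "":
		fmt.Printf("no local registry configured; see %s to set one up\n", reg.Help)
	}
}
```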
- Users may see the ConfigMap and draw mistaken conclusions about how to interact with it. For example, they might assume that deleting the ConfigMap deletes the local registry (it does not).
- This KEP does not specify any automatic mechanism for keeping LocalRegistryHosting up-to-date with how the cluster is configured. For example, the user might delete the registry, and the cluster might have no way of knowing that the registry has died.
This KEP only defines the specification for a ConfigMap that includes versioned structures. There are potential, minimal risks around the usage of this ConfigMap, but mitigation is delegated to the cluster configuration tools and their documentation.
Risk: Tool X reads a local registry host from the ConfigMap and tries to push images to it, but the URL is out of date.
Mitigation: It is the responsibility of the cluster admin / configuration tooling to keep the ConfigMap up-to-date either by manual adjustment or via a controller.
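As an illustration of what client-side resilience might look like (a hypothetical helper, not part of this specification), a tool could probe the registry's `/v2/` endpoint from the Docker Registry HTTP API before pushing, and surface the `help` URL on failure:

```go
// registryReachable is a hypothetical helper: any HTTP response from the
// /v2/ endpoint (200, 401, ...) means a registry-like server is listening;
// a connection error suggests the advertised host is stale.
func registryReachable(host string) bool {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://" + host + "/v2/")
	if err != nil {
		return false
	}
	resp.Body.Close()
	return true
}
```

(Imports `net/http` and `time`; plain HTTP is assumed because this KEP targets insecure local registries.)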
Risk: By definition, the ConfigMap can include multiple versions of the structures defined in this KEP. `localRegistryHosting.v1` and `localRegistryHosting.v2` can be present at the same time. Readers might get confused about which version they are supposed to use.
Mitigation: Cluster configuration tools should document that they are consuming the specification defined in this KEP and briefly outline the best practices - i.e., using the latest `vX` when possible.
Tools that configure a local registry should apply a ConfigMap to the cluster.
Documentation that educates people on how to set up a registry manually should include instructions on what the ConfigMap should be.
The name of the ConfigMap must be `local-registry-hosting`. The namespace must be `kube-public`.
Under `data`, the ConfigMap should contain structures in the format `localRegistryHosting.vX`. `vX` is the major version of the structure. The contents of each field are YAML.
Example:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
```
The `localRegistryHosting.vX` field of the ConfigMap describes how tools should communicate with a local registry. Tools should use this registry to load images into the cluster.
The contents of a `localRegistryHosting.vX` specification are frozen and will not change. When adding, removing, or renaming fields, future proposals should increment the MAJOR version of the API hosted in the ConfigMap - e.g., `localRegistryHosting.vX` where X is incremented.
Writers of this ConfigMap (i.e., cluster configuration tooling) should write as many top-level fields as the cluster supports. The most recent version is the source of truth.
Readers of this ConfigMap should start with the most recent version they support and work backwards. Readers are responsible for doing any defaulting of fields themselves without the assistance of any common API machinery.
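A sketch of that reader-side negotiation, reusing the `LocalRegistryHostingV1` struct and `yaml` import from the earlier example (the v2 key mentioned in the comment is hypothetical; only v1 is defined by this KEP):

```go
// parseLocalRegistryHosting returns the registry config, or nil if the
// cluster doesn't advertise one.
func parseLocalRegistryHosting(data map[string]string) (*LocalRegistryHostingV1, error) {
	// A future reader would check "localRegistryHosting.v2" first here,
	// working backwards to v1 only if v2 is absent.
	raw, ok := data["localRegistryHosting.v1"]
	if !ok {
		return nil, nil
	}
	reg := &LocalRegistryHostingV1{}
	if err := yaml.Unmarshal([]byte(raw), reg); err != nil {
		return nil, err
	}
	// Defaulting is the reader's job: per the field docs below, an unset
	// hostFromContainerRuntime falls back to host.
	if reg.HostFromContainerRuntime == "" {
		reg.HostFromContainerRuntime = reg.Host
	}
	return reg, nil
}
```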
Example ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:5000"
    hostFromContainerRuntime: "registry:5000"
    hostFromClusterNetwork: "kind-registry:5000"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
```
Golang implementation:
```go
// LocalRegistryHostingV1 describes a local registry that developer tools can
// connect to. A local registry allows clients to load images into the local
// cluster by pushing to this registry.
type LocalRegistryHostingV1 struct {
	// Host documents the host (hostname and port) of the registry, as seen from
	// outside the cluster.
	//
	// This is the registry host that tools outside the cluster should push images
	// to.
	Host string `yaml:"host,omitempty"`

	// HostFromClusterNetwork documents the host (hostname and port) of the
	// registry, as seen from networking inside the container pods.
	//
	// This is the registry host that tools running on pods inside the cluster
	// should push images to. If not set, then tools inside the cluster should
	// assume the local registry is not available to them.
	HostFromClusterNetwork string `yaml:"hostFromClusterNetwork,omitempty"`

	// HostFromContainerRuntime documents the host (hostname and port) of the
	// registry, as seen from the cluster's container runtime.
	//
	// When tools apply Kubernetes objects to the cluster, this host should be
	// used for image name fields. If not set, users of this field should use the
	// value of Host instead.
	//
	// Note that it doesn't make sense semantically to define this field, but not
	// define Host or HostFromClusterNetwork. That would imply a way to pull
	// images without a way to push images.
	HostFromContainerRuntime string `yaml:"hostFromContainerRuntime,omitempty"`

	// Help contains a URL pointing to documentation for users on how to set
	// up and configure a local registry.
	//
	// Tools can use this to nudge users to enable the registry. When possible,
	// the writer should use as permanent a URL as possible to prevent drift
	// (e.g., a version control SHA).
	//
	// When image pushes to a registry host specified in one of the other fields
	// fail, the tool should display this help URL to the user. The help URL
	// should contain instructions on how to diagnose broken or misconfigured
	// registries.
	Help string `yaml:"help,omitempty"`
}
```
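To make the relationship between `host` and `hostFromContainerRuntime` concrete, here is a hedged sketch (the helper name is hypothetical) of how a tool might compute the two image references it needs:

```go
// imageRefs is a hypothetical helper. Given a bare image name like "myapp",
// it returns the reference to push to from outside the cluster and the
// reference to write into pod specs.
func imageRefs(reg *LocalRegistryHostingV1, imageName string) (pushRef, podSpecRef string) {
	// Pushes from the developer's machine go to Host.
	pushRef = reg.Host + "/" + imageName

	// Image fields in applied Kubernetes objects use HostFromContainerRuntime,
	// falling back to Host when it's unset.
	runtimeHost := reg.HostFromContainerRuntime
	if runtimeHost == "" {
		runtimeHost = reg.Host
	}
	podSpecRef = runtimeHost + "/" + imageName
	return pushRef, podSpecRef
}
```

With the example ConfigMap above, a tool would push `localhost:5000/myapp` and set `image: registry:5000/myapp` in the objects it applies.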
Writers of the ConfigMap are responsible for validating the config against the specifications in this proposal.
It is out of scope for this proposal to create shared API machinery that can be used for this purpose.
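As one illustration of writer-side validation (a hypothetical helper; this KEP ships no shared machinery), a writer could enforce the semantic constraint called out in the `HostFromContainerRuntime` documentation:

```go
// validateLocalRegistryHostingV1 rejects a config that defines a pull host
// without any push host, which the field docs call out as semantically
// meaningless. Requires the standard library "errors" package.
func validateLocalRegistryHostingV1(reg *LocalRegistryHostingV1) error {
	if reg.HostFromContainerRuntime != "" && reg.Host == "" && reg.HostFromClusterNetwork == "" {
		return errors.New("hostFromContainerRuntime set, but neither host nor hostFromClusterNetwork is set")
	}
	return nil
}
```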
Not directly applicable.
The proposed API in this KEP will iterate in MAJOR increments - e.g. v1, v2, v3.
Cluster configuration tools are free to decide if they wish to also upgrade/downgrade the proposed ConfigMap structures as part of their upgrade/downgrade process. It is out of scope for this proposal to define or host the API machinery for that.
N/A
- 2020-05-08: initial KEP draft
- 2020-05-20: addressed initial reviewer comments
- 2020-06-10: KEP marked as implementable
This KEP is contingent on the maintainers of local clusters agreeing to it (specifically: Kind, K3d, Minikube, Microk8s, and others), since this is really about better documenting what they're already doing.
This KEP is very narrow in scope and lightweight in implementation. It wouldn't make sense if there were a more ambitious local registry proposal, or a more ambitious proposal for cluster feature discovery.
SIG Cluster Lifecycle has discussed many alternative proposals for where this data could be stored, including:
- Using annotations on a Namespace
- Using a plain ConfigMap
- Using a ConfigMap with an embedded, versioned component config
- Using a Custom Resource
This proposal attempts to strike the right balance between a format that follows existing conventions, and a format that doesn't require too much Kubernetes API machinery to implement.
This proposal also doesn't use alpha/beta versioning, to avoid the common graduation expectations that come with Kubernetes-core hosted features.
The group had a longer discussion about whether this should be "inert" data, or whether there should be mechanisms for keeping it up to date. In some cases, there is a one-to-one mapping between the cluster config and the LocalRegistryHosting. For example, on Kind, the containerd config has the registry host in it. The group also discussed whether there should be a solution for reflecting parts of the container configuration outside the cluster, and let tools outside the cluster read that to infer the existence of a local registry.
But ultimately, the semantics of how these values align vary widely across implementations, and relying on them might be unwise.
In the future, this config might apply to remote clusters. For example, the user might have a remote managed development cluster with an in-cluster registry. Or each developer might have their own namespace, with an in-cluster registry per-namespace. The current proposal could potentially expand to include more registry topologies. But to avoid over-engineering, this proposal doesn't explore deeply what that might look like.
Remote clusters sometimes support a registry secured by a self-signed CA. (e.g., OpenShift's image controller). A future version of this config might contain a specification for how to share the appropriate certificates and auto-load them into image push tooling. But there's currently no common standard for how to configure image push tools with these certificates.
This proposal is focused on local, insecure registries. If a cluster offers a secure registry, it can use the `help` field to instruct the user how to configure their tools to use it.
Many local clusters support multiple methods for loading images, with different trade-offs (e.g., `kind load`, exposing the Docker socket directly, etc.). Local cluster maintainers have expressed interest in advertising these other loaders, and have often written lengthy documentation on how to use them.
An earlier version of this proposal considered a single ConfigMap with a field for each image loading method. Tools would read them all at once and pick between them.
But it seemed too early to standardize other image loaders. The big concern is interoperability: if two clusters expose the same image loading mechanism, tools should be able to interact with both in the same way.
Local registries are the one mechanism that has critical mass right now and whose interoperability we can guarantee, because registry behavior is well-specified.
In the future, there might be other mechanisms to gather all the image loading methods (e.g., creating a ConfigMap for each method and applying a common label.)
None