
change kubeadm image url #2805

Open
willzhang opened this issue Jan 19, 2023 · 14 comments
Labels
  • kind/design: Categorizes issue or PR as related to design.
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • needs-kep: https://git.k8s.io/enhancements/keps/README.md
  • priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.
Milestone

Comments

@willzhang

willzhang commented Jan 19, 2023

Is this a BUG REPORT or FEATURE REQUEST?

FEATURE REQUEST

Versions

kubeadm version (use kubeadm version):
v1.26.0
Environment:

  • Kubernetes version (use kubectl version): v1.26.0
  • Cloud provider or hardware configuration: baremetal
  • OS (e.g. from /etc/os-release): ubuntu 22.04
  • Kernel (e.g. uname -a): 5.x
  • Container runtime (CRI) (e.g. containerd, cri-o): containerd and cri-o
  • Container networking plugin (CNI) (e.g. Calico, Cilium): calico
  • Others: Harbor and a Docker registry in a local offline environment

What happened?

As discussed in kubernetes/sig-release#2146, the long image URL registry.k8s.io/kubernetes/kube-apiserver:v1.26.0 is now available; kubeadm init should use this URL format as the default when pulling images.

What you expected to happen?

kubeadm init should pull images from registry.k8s.io/kubernetes/kube-xx by default.

How to reproduce it (as minimally and precisely as possible)?

Here is an image list for a Kubernetes cluster installation, generated by the kubespray offline scripts.

root@ubuntu:~# cat images.list
docker.io/mirantis/k8s-netchecker-server:v1.2.2
docker.io/mirantis/k8s-netchecker-agent:v1.2.2
quay.io/coreos/etcd:v3.5.6
quay.io/cilium/cilium:v1.12.1
quay.io/cilium/operator:v1.12.1
quay.io/cilium/hubble-relay:v1.12.1
quay.io/cilium/certgen:v0.1.8
quay.io/cilium/hubble-ui:v0.9.2
quay.io/cilium/hubble-ui-backend:v0.9.2
docker.io/envoyproxy/envoy:v1.22.5
ghcr.io/k8snetworkplumbingwg/multus-cni:v3.8-amd64
docker.io/flannelcni/flannel:v0.20.1
docker.io/flannelcni/flannel-cni-plugin:v1.2.0
quay.io/calico/node:v3.24.5
quay.io/calico/cni:v3.24.5
quay.io/calico/pod2daemon-flexvol:v3.24.5
quay.io/calico/kube-controllers:v3.24.5
quay.io/calico/typha:v3.24.5
quay.io/calico/apiserver:v3.24.5
docker.io/weaveworks/weave-kube:2.8.1
docker.io/weaveworks/weave-npc:2.8.1
docker.io/kubeovn/kube-ovn:v1.10.7
docker.io/cloudnativelabs/kube-router:v1.5.1
registry.k8s.io/pause:3.7
ghcr.io/kube-vip/kube-vip:v0.5.5
docker.io/library/nginx:1.23.2-alpine
docker.io/library/haproxy:2.6.6-alpine
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/dns/k8s-dns-node-cache:1.21.1
registry.k8s.io/cpa/cluster-proportional-autoscaler-amd64:1.8.5
docker.io/library/registry:2.8.1
registry.k8s.io/metrics-server/metrics-server:v0.6.2
registry.k8s.io/sig-storage/local-volume-provisioner:v2.5.0
quay.io/external_storage/cephfs-provisioner:v2.1.0-k8s1.11
quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11
docker.io/rancher/local-path-provisioner:v0.0.22
registry.k8s.io/ingress-nginx/controller:v1.5.1
docker.io/amazon/aws-alb-ingress-controller:v1.1.9
quay.io/jetstack/cert-manager-controller:v1.10.1
quay.io/jetstack/cert-manager-cainjector:v1.10.1
quay.io/jetstack/cert-manager-webhook:v1.10.1
registry.k8s.io/sig-storage/csi-attacher:v3.3.0
registry.k8s.io/sig-storage/csi-provisioner:v3.0.0
registry.k8s.io/sig-storage/csi-snapshotter:v5.0.0
registry.k8s.io/sig-storage/snapshot-controller:v4.2.1
registry.k8s.io/sig-storage/csi-resizer:v1.3.0
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.4.0
docker.io/k8scloudprovider/cinder-csi-plugin:v1.22.0
docker.io/amazon/aws-ebs-csi-driver:v0.5.0
docker.io/kubernetesui/dashboard:v2.7.0
docker.io/kubernetesui/metrics-scraper:v1.0.8
quay.io/metallb/speaker:v0.12.1
quay.io/metallb/controller:v0.12.1
registry.k8s.io/kube-apiserver:v1.25.5
registry.k8s.io/kube-controller-manager:v1.25.5
registry.k8s.io/kube-scheduler:v1.25.5
registry.k8s.io/kube-proxy:v1.25.5

The problems I face:

1. I cannot simply replace registry.k8s.io, quay.io, and docker.io with 192.168.72.10 (the local offline Harbor registry), because Harbor does not support a short image path such as registry.k8s.io/kube-apiserver:v1.25.5 (it requires a project segment in the path).
2. I cannot simply change ${kube_image_repo} to ${kube_image_repo}/kubernetes either, because some applications already use the long format, e.g. registry.k8s.io/sig-storage/csi-resizer:v1.3.0.
3. I can pull registry.k8s.io/kubernetes/kube-apiserver:v1.26.0, but by default kubeadm only pulls registry.k8s.io/kube-apiserver:v1.26.0, so I have to change the kubeadm config as well.

So these inconsistent image URL formats lead to many dilemmas; see the mirroring sketch below.
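A minimal sketch of the offline mirroring step (assuming a hypothetical local Harbor at 192.168.72.10 and an invented "library" project for flat paths) illustrates the problem: nested paths map cleanly onto Harbor projects, while flat paths such as registry.k8s.io/kube-apiserver:v1.25.5 force a special case.

LOCAL_REGISTRY=192.168.72.10

while read -r image; do
  docker pull "$image"
  # Strip the source registry host; what remains is the repository path.
  path="${image#*/}"                                 # e.g. kube-apiserver:v1.25.5 or sig-storage/csi-resizer:v1.3.0
  case "$path" in
    */*) target="$LOCAL_REGISTRY/$path" ;;           # nested path: maps 1:1 onto a Harbor project
    *)   target="$LOCAL_REGISTRY/library/$path" ;;   # flat path: a project must be invented, breaking the 1:1 rewrite
  esac
  docker tag "$image" "$target"
  docker push "$target"
done < images.list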

Anything else we need to know?

1. This would resolve the confusion around image formats and make the images of the Kubernetes core components consistent with the images of all other applications.

2. It would make it easier to move images between registries with variables and scripts, without any special configuration; see the sketch below.
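For comparison, a sketch of the relocation if every image followed a uniform <registry>/<project>/<name>:<tag> layout (192.168.72.10 is again a hypothetical local registry): replacing only the registry host would be enough.

LOCAL_REGISTRY=192.168.72.10
while read -r image; do
  target="$LOCAL_REGISTRY/${image#*/}"   # keep <project>/<name>:<tag>, swap only the registry host
  docker pull "$image" && docker tag "$image" "$target" && docker push "$target"
done < images.list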

@neolit123
Member

as mentioned on the k/release ticket, it's a complicated change and needs a KEP (proposal doc/plan). not clear if we want to do it. we could survey kubeadm users about it.

it will break kubeadm users and needs a smooth transition period.

@neolit123 neolit123 added priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. kind/feature Categorizes issue or PR as related to a new feature. kind/design Categorizes issue or PR as related to design. and removed priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Jan 19, 2023
@neolit123 neolit123 added this to the Next milestone Jan 19, 2023
@willzhang
Author

willzhang commented Jan 29, 2023

The default could be changed to imageRepository: registry.k8s.io/kubernetes, for example:

root@node1:~# kubeadm config print init-defaults 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io/kubernetes
kind: ClusterConfiguration
kubernetesVersion: 1.25.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

or the corresponding image list:

root@node1:~# kubeadm config images list
registry.k8s.io/kubernetes/kube-apiserver:v1.25.6
registry.k8s.io/kubernetes/kube-controller-manager:v1.25.6
registry.k8s.io/kubernetes/kube-scheduler:v1.25.6
registry.k8s.io/kubernetes/kube-proxy:v1.25.6
registry.k8s.io/kubernetes/pause:3.8
registry.k8s.io/kubernetes/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3

@javiroman

javiroman commented Jan 30, 2023

I have the same problem with 1.26.1 @willzhang. Where can I change the "imageRepository" value?

$ kubeadm config images list
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3

@willzhang
Author

I have the same problem with 1.26.1 @willzhang. Where can I change the "imageRepository" value?

$ kubeadm config images list
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3

We need to wait for this feature to be supported officially.

@SataQiu
Member

SataQiu commented Jan 31, 2023

Currently, you can specify the base image repository with the --image-repository flag (maybe a workaround?)
For example:

kubeadm config images list --image-repository registry.k8s.io/kubernetes
kubeadm init --image-repository=registry.k8s.io/kubernetes
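The same workaround can also be expressed through a kubeadm config file instead of flags (a sketch; kubeadm-config.yaml is just an illustrative file name):

cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.k8s.io/kubernetes
EOF
kubeadm config images list --config kubeadm-config.yaml
kubeadm init --config kubeadm-config.yaml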

@willzhang
Author

willzhang commented Jan 31, 2023

Yes, but many cluster installation tools such as kubespray wrap kubeadm, so it is hard to change how kubeadm init is invoked.

And it is not only kubeadm init: an offline environment needs more operations, such as image pull, image tag, and image push.

@majorinche

Currently, you can specify the base image repository with the --image-repository flag (maybe a workaround?) For example:

kubeadm config images list --image-repository registry.k8s.io/kubernetes
kubeadm init --image-repository=registry.k8s.io/kubernetes

But it doesn't work for pause: the sandbox image still comes from registry.k8s.io and does not change to the repository specified by --image-repository:
failed to get sandbox image "registry.k8s.io/pause:3.6": failed to pull image "registry.k8s.io/pause:3.6"

@pacoxu
Member

pacoxu commented Mar 7, 2023

If you are using containerd, the pause (sandbox) image is configured in /etc/containerd/config.toml.
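A minimal sketch of that change, assuming the default containerd config path and a hypothetical local mirror at 192.168.72.10:

# Show the currently configured sandbox (pause) image:
grep sandbox_image /etc/containerd/config.toml
#   sandbox_image = "registry.k8s.io/pause:3.9"
# Point it at the local mirror (hypothetical host and path), then restart containerd:
sudo sed -i 's|sandbox_image = ".*"|sandbox_image = "192.168.72.10/kubernetes/pause:3.9"|' /etc/containerd/config.toml
sudo systemctl restart containerd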

@myysophia

I have the same problem with 1.22.10. Where can I change the "imageRepository" value?

My question: kubeadm config images pull fails even though the images already exist locally. Why does kubeadm config images pull still try to pull them?

1. Confirm that the images exist:

[root@k8s-master-node1 ~]# docker images
REPOSITORY                                                            TAG        IMAGE ID       CREATED         SIZE
registry.myrepo.com/kainstall/kube-apiserver            v1.22.10   4f5d508856b7   11 months ago   128MB
registry.myrepo.com/kainstall/kube-scheduler            v1.22.10   d1249c1cae8c   11 months ago   52.7MB
registry.myrepo.com/kainstall/kube-controller-manager   v1.22.10   5454a57b8516   11 months ago   122MB
registry.myrepo.com/kainstall/kube-proxy                v1.22.10   cb930b7f07c8   11 months ago   104MB
registry.myrepo.com/kainstall/etcd                      3.5.0-0    004811815584   22 months ago   295MB
registry.myrepo.com/kainstall/coredns                   v1.8.4     8d147537fb7d   23 months ago   47.6MB
registry.myrepo.com/kainstall/pause                     3.5        ed210e3e4a5b   2 years ago     683kB

2. List the images using kubeadmcfg.yaml:

[root@k8s-master-node1 ~]# kubeadm config images list --config=/etc/kubernetes/kubeadmcfg.yaml 
registry.myrepo.com/kainstall/kube-apiserver:v1.22.10
registry.myrepo.com/kainstall/kube-controller-manager:v1.22.10
registry.myrepo.com/kainstall/kube-scheduler:v1.22.10
registry.myrepo.com/kainstall/kube-proxy:v1.22.10
registry.myrepo.com/kainstall/pause:3.5
registry.myrepo.com/kainstall/etcd:3.5.0-0
registry.myrepo.com/kainstall/coredns:v1.8.4

3. kubeadm config images pull fails, even though the images already exist locally. Why does it still try to pull them?


[root@k8s-master-node1 ~]# kubeadm config images pull --config=/etc/kubernetes/kubeadmcfg.yaml --v=6
I0505 21:33:57.869432   10018 initconfiguration.go:247] loading configuration from "/etc/kubernetes/kubeadmcfg.yaml"
I0505 21:33:57.876371   10018 interface.go:431] Looking for default routes with IPv4 addresses
I0505 21:33:57.876396   10018 interface.go:436] Default route transits interface "eth1"
I0505 21:33:57.876615   10018 interface.go:208] Interface eth1 is up
I0505 21:33:57.876693   10018 interface.go:256] Interface "eth1" has 1 addresses :[10.50.10.21/24].
I0505 21:33:57.876744   10018 interface.go:223] Checking addr  10.50.10.21/24.
I0505 21:33:57.876759   10018 interface.go:230] IP found 10.50.10.21
I0505 21:33:57.876772   10018 interface.go:262] Found valid IPv4 address 10.50.10.21 for interface "eth1".
I0505 21:33:57.876785   10018 interface.go:442] Found active IP 10.50.10.21 
exit status 1
output: E0505 21:34:49.021343   10116 remote_image.go:238] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.myrepo.com/kainstall/kube-apiserver:v1.22.10\": failed to resolve reference \"registry.myrepo.com/kainstall/kube-apiserver:v1.22.10\": failed to do request: Head \"https://registry.myrepo.com/v2/kainstall/kube-apiserver/manifests/v1.22.10\": dial tcp: lookup registry.myrepo.com on 10.0.2.3:53: read udp 10.0.2.15:47034->10.0.2.3:53: i/o timeout" image="registry.myrepo.com/kainstall/kube-apiserver:v1.22.10"
time="2023-05-05T21:34:49+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"registry.myrepo.com/kainstall/kube-apiserver:v1.22.10\": failed to resolve reference \"registry.myrepo.com/kainstall/kube-apiserver:v1.22.10\": failed to do request: Head \"https://registry.myrepo.com/v2/kainstall/kube-apiserver/manifests/v1.22.10\": dial tcp: lookup registry.myrepo.com on 10.0.2.3:53: read udp 10.0.2.15:47034->10.0.2.3:53: i/o timeout"

4. According to this check, the image should not be pulled again. I'm not sure whether the code I looked at is the relevant one :)

// PullImage will pull an image if it is not present locally
// retrying up to retries times
// it returns true if it attempted to pull, and any errors from pulling
func PullImage(image string, retries int) (bool, error) {
	// TODO: switch logging to debug level
	// once we have configurable log levels
	// if this did not return an error, then the image exists locally
	if err := exec.NewHostCmd("docker", "inspect", "--type=image", image).Run(); err == nil {
		return false, nil
	}

	// otherwise try to pull it
	var err error
	if err = exec.NewHostCmd("docker", "pull", image).Run(); err != nil {
		for i := 0; i < retries; i++ {
			time.Sleep(time.Second * time.Duration(i+1))
			if err = exec.NewHostCmd("docker", "pull", image).Run(); err == nil {
				break
			}
		}
	}
	return true, err
}

5. Here is my InitConfiguration and ClusterConfiguration:

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    runtime-cgroups: /system.slice/containerd.service
    pod-infra-container-image: registry.myrepo.com/kainstall/pause:3.5

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.22.10
controlPlaneEndpoint: apiserver.cluster.local:6443
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
imageRepository: registry.myrepo.com/kainstall
apiServer:
  certSANs:
  - 127.0.0.1
  - apiserver.cluster.local
  - 10.50.10.21
  extraArgs:
    event-ttl: '720h'
    service-node-port-range: '30000-50000'
    audit-log-maxage: '20'
    audit-log-maxbackup: '10'
    audit-log-maxsize: '100'
    audit-log-path: /var/log/kube-audit/audit.log
    audit-policy-file: /etc/kubernetes/audit-policy.yaml
  extraVolumes:
  - name: audit-config
    hostPath: /etc/kubernetes/audit-policy.yaml
    mountPath: /etc/kubernetes/audit-policy.yaml
    readOnly: true
    pathType: File
  - name: audit-log
    hostPath: /var/log/kube-audit
    mountPath: /var/log/kube-audit
    pathType: DirectoryOrCreate
controllerManager:
  extraArgs:
    bind-address: 0.0.0.0
    node-cidr-mask-size: '24'
    node-monitor-grace-period: '20s'
    pod-eviction-timeout: '2m'
    terminated-pod-gc-threshold: '30'
    cluster-signing-duration: 87600h
    feature-gates: RotateKubeletServerCertificate=true
  extraVolumes:
scheduler:
  extraArgs:
    bind-address: 0.0.0.0

@pacoxu
Copy link
Member

pacoxu commented May 6, 2023

@myysophia your problem seems to be different.

I see that you are using criSocket: unix:///run/containerd/containerd.sock in the configuration, but you are checking local images with docker images. Those are two separate image stores: the Docker daemon and containerd each keep their own images.

You can check containerd's image status with nerdctl or crictl pointed at the correct socket.
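For example (a sketch, assuming crictl/nerdctl are installed and the containerd socket path from the configuration above):

# List the images that containerd (and therefore kubeadm via CRI) can actually see:
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
# or, with nerdctl, using the k8s.io namespace that the CRI uses:
nerdctl --namespace k8s.io images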

@myysophia

@pacoxu Thank you for the reminder :)

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 18, 2024
@neolit123 neolit123 added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Feb 19, 2024
@neolit123
Member

neolit123 commented Feb 19, 2024

adding frozen label.
there is no plan for this and it's not an easy change.

if someone wants to write a KEP please go ahead.
https://github.com/kubernetes/enhancements/blob/master/keps/README.md

@neolit123 neolit123 added the needs-kep https://git.k8s.io/enhancements/keps/README.md label Feb 19, 2024