kubeadm init with KUBE_REPO_PREFIX set, but kubeadm is still using "gcr.io/google_containers/pause-amd64" #257
Comments
This is because the kubelet operates independently of kubeadm. You have to edit the kubelet's startup configuration separately on all nodes in order to set --pod-infra-container-image; kubeadm never touches this part of the system:
cat > /etc/systemd/system/kubelet.service.d/20-pod-infra-image.conf <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=<your-image>"
EOF
systemctl daemon-reload
systemctl restart kubelet
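For example, a sketch assuming the private registry named in this issue mirrors the pause image under the same name and tag:
cat > /etc/systemd/system/kubelet.service.d/20-pod-infra-image.conf <<EOF
[Service]
# Hypothetical value: point the kubelet's sandbox ("pause") image at the
# private registry from this issue instead of gcr.io.
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=reg-i.testpay.com/google_containers/pause-amd64:3.0"
EOF
systemctl daemon-reload
systemctl restart kubelet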
How do I fix this on minikube? There is no kubelet service on minikube. Thanks
@luxas Thank you very much, it works. [Service] [Install]
ping @maojiawei, from the k8s source (v1.5.x\kubeadm/env.go, v1.6.x\kubeadm/env.go, v1.7.x\kubeadm/env.go) you can use
@anjia0532 How do I set the registry in 1.8+?
@RainingNight use
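A minimal sketch for 1.8+, assuming the kubeadm.k8s.io/v1alpha1 MasterConfiguration API and that your kubeadm version supports the imageRepository field:
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
# Pull control-plane images from the private registry instead of gcr.io
# (registry name taken from this issue's repro steps; adjust as needed).
imageRepository: reg-i.testpay.com/google_containers
EOF
kubeadm init --config kubeadm-config.yaml
The kubelet's pause image still has to be configured separately, as described above, since the kubelet runs independently of kubeadm.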
@avnish30jn thanks : )
What keywords did you search in kubeadm issues before filing this one?
KUBE_REPO_PREFIX
Is this a BUG REPORT or FEATURE REQUEST?
Choose one: BUG REPORT
Versions
kubeadm version (use kubeadm version): 1.6.1
kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:33:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Kubernetes version (use kubectl version): 1.6.1
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:24:30Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider or hardware configuration: openstack
OS (e.g. from /etc/os-release): centos7.2
Kernel (e.g. uname -a): Linux cloud4ourself-kubetest.novalocal 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Others:
What happened?
kubeadm init was run with the KUBE_REPO_PREFIX parameter set, but kubeadm is still using "gcr.io/google_containers/pause-amd64".
What you expected to happen?
No gcr.io Docker images should be pulled; everything should come from the registry set via KUBE_REPO_PREFIX.
How to reproduce it (as minimally and precisely as possible)?
KUBE_ETCD_IMAGE=reg-i.testpay.com/google_containers/etcd-amd64:3.0.17
KUBE_REPO_PREFIX=reg-i.testpay.com/google_containers
kubeadm init
Anything else we need to know?
Some kubelet log entries follow:
Apr 28 16:57:10 cloud4ourself-kubetest kubelet: E0428 16:57:10.835906 8646 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "etcd-cloud4ourself-kubetest.novalocal_kube-system(d0de60f648c76b86f28f555b8c14e25d)" failed: rpc error: code = 2 desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: {"message":"Get https://gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout"}
Apr 28 16:57:10 cloud4ourself-kubetest kubelet: E0428 16:57:10.835934 8646 kuberuntime_manager.go:619] createPodSandbox for pod "etcd-cloud4ourself-kubetest.novalocal_kube-system(d0de60f648c76b86f28f555b8c14e25d)" failed: rpc error: code = 2 desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: {"message":"Get https://gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout"}
Apr 28 16:57:10 cloud4ourself-kubetest kubelet: E0428 16:57:10.835974 8646 pod_workers.go:182] Error syncing pod d0de60f648c76b86f28f555b8c14e25d ("etcd-cloud4ourself-kubetest.novalocal_kube-system(d0de60f648c76b86f28f555b8c14e25d)"), skipping: failed to "CreatePodSandbox" for "etcd-cloud4ourself-kubetest.novalocal_kube-system(d0de60f648c76b86f28f555b8c14e25d)" with CreatePodSandboxError: "CreatePodSandbox for pod "etcd-cloud4ourself-kubetest.novalocal_kube-system(d0de60f648c76b86f28f555b8c14e25d)" failed: rpc error: code = 2 desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: {"message":"Get https://gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout"}"
Apr 28 16:57:11 cloud4ourself-kubetest kubelet: E0428 16:57:11.342406 8646 eviction_manager.go:214] eviction manager: unexpected err: failed GetNode: node 'cloud4ourself-kubetest.novalocal' not found
Apr 28 16:57:14 cloud4ourself-kubetest kubelet: E0428 16:57:14.836318 8646 remote_runtime.go:86] RunPodSandbox from runtime service failed: rpc error: code = 2 desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: {"message":"Get https://gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout"}
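One possible workaround, a sketch that is not part of the original report and assumes the private registry hosts a copy of pause-amd64:3.0: pre-pull the image on every node and retag it locally under the gcr.io name the kubelet expects, so no gcr.io access is needed.
docker pull reg-i.testpay.com/google_containers/pause-amd64:3.0
# Retag so the kubelet's default reference "gcr.io/google_containers/pause-amd64:3.0"
# resolves to the image already present on the node.
docker tag reg-i.testpay.com/google_containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0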