
Kubernetes v1.7.0: The Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "10.96.0.10": provided IP is not in the valid range. #2280

Closed
grantnicholas opened this issue Dec 7, 2017 · 8 comments
Labels
kind/documentation: Categorizes issue or PR as related to documentation.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

grantnicholas commented Dec 7, 2017

Please provide the following details:

Environment:

  • Minikube version (use minikube version): v0.24.1
  • OS (e.g. from /etc/os-release):
ProductName:	Mac OS X
ProductVersion:	10.12.6
BuildVersion:	16G29
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): "Boot2DockerURL": "file:///Users/gnicholas/.minikube/cache/iso/minikube-v0.23.6.iso"
  • Others: minikube start --kubernetes-version="v1.7.0"

What happened: The exact same issue as #2240.

What you expected to happen:
Upgrading minikube to 0.24.1 should have resolved the invalid IP issue for kube-dns, but it does not.

How to reproduce it (as minimally and precisely as possible):
minikube start --kubernetes-version="v1.7.0"

(I have not tested all Kubernetes versions, but v1.7.0 breaks while v1.8.0 works as intended.)

Output of minikube logs (if applicable):

C02V887LHTD5:airflow gnicholas$ kubectl logs kube-addon-manager-minikube -n=kube-system
INFO: == Kubernetes addon manager started at 2017-12-07T16:51:53+0000 with ADDON_CHECK_INTERVAL_SEC=60 ==
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace "kube-system" configured
INFO: == Successfully started /opt/namespace.yaml in namespace  at 2017-12-07T16:51:54+0000
INFO: == Default service account in the kube-system namespace has token default-token-5l9jk ==
find: /etc/kubernetes/admission-controls: No such file or directory
INFO: == Annotating the old addon resources at 2017-12-07T16:51:54+0000 ==
INFO: == Annotate resources completed successfully at 2017-12-07T16:51:54+0000 ==
INFO: == Annotate resources completed successfully at 2017-12-07T16:51:54+0000 ==
INFO: == Executing apply to spin up new addon resources at 2017-12-07T16:51:54+0000 ==
configmap "kube-dns" created
pod "storage-provisioner" created
INFO: == Kubernetes addon ensure completed at 2017-12-07T16:51:55+0000 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
replicationcontroller "kubernetes-dashboard" created
**The Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "10.96.0.10": provided IP is not in the valid range**

Anything else we need to know:
@r2d4 Adding you since you resolved the initial issue. Is there a compatibility matrix between minikube versions and Kubernetes versions that should be documented, or is this just a bug?

EDIT: As you suggested previously, a workaround is using minikube config set bootstrapper kubeadm, roughly as sketched below.
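For reference, a rough sketch of that workaround (the minikube delete step is my assumption, to start from a clean VM; not verified against 0.24.1):

# switch the bootstrapper to kubeadm and start the requested Kubernetes version
minikube delete
minikube config set bootstrapper kubeadm
minikube start --kubernetes-version="v1.7.0"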

thrawn01 commented Dec 7, 2017

@grantnicholas I've been working through this issue for most of the day. The root of the problem is this change: fc105d3 (see the comments on that commit), which breaks the current 0.24.1 release when using localkube (the default).

From my investigation, a few things break from this change. localkube 1.7.2 (which is installed by minikube 0.24.1) sets the /etc/resolv.conf nameserver in containers to 10.0.0.10, which will never resolve because DNS is exposed on 10.96.0.10. In addition, the addon manager can't load kube-dns-svc.yaml to expose the service on that IP because of the Invalid value: "10.96.0.10": provided IP is not in the valid range error.
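A quick way to see the mismatch (a sketch; the throwaway busybox pod and the jsonpath query are my additions, not from this thread):

# nameserver the pods are actually handed
kubectl run -it --rm dns-check --image=busybox --restart=Never -- cat /etc/resolv.conf
# clusterIP the kube-dns Service is (supposed to be) exposed on
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'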

The workaround is to edit the /etc/kubernetes/addons/kube-dns-svc.yaml file on the minikube VM and change the clusterIP value back to clusterIP: 10.0.0.10; the addon manager will pick up the change and DNS will begin working again, roughly as sketched below.
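A sketch of those steps, assuming the file currently contains clusterIP: 10.96.0.10 (commands not verified against this exact release):

minikube ssh
# inside the VM shell:
sudo sed -i 's/clusterIP: 10.96.0.10/clusterIP: 10.0.0.10/' /etc/kubernetes/addons/kube-dns-svc.yaml
exit
# the addon manager should re-apply the file on its next check (ADDON_CHECK_INTERVAL_SEC=60)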

The permanent fix would be to roll back the fc105d3 change and cut a new release until 1.8 is ready. I've tested compiling and running master, and it doesn't suffer from this because it uses the 1.8 localkube.

@ambition-consulting

1.7.1 - 1.7.5 also broken.

amnk commented Dec 13, 2017

@thrawn01 Thanks very much for the detailed investigation, but after your proposed fix I got the following:

The Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "10.0.0.10": field is immutable

thrawn01 commented Dec 14, 2017

@amnk @r2d4 mentioned a few other workarounds that might work in #2240 (comment). I have not tried any of these myself, as the workaround above worked for me.

amnk commented Dec 15, 2017

@thrawn01 thanks! One of them (using kubeadm) worked for me.

r2d4 added the kind/documentation label on Mar 5, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jun 3, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jul 3, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
