Ingress controller addon #611

Closed
bprashanth opened this issue Sep 20, 2016 · 21 comments
Comments

@bprashanth

Can we deploy the nginx controller (https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx) as an addon so it works out of the box, and document how to swap it with one of the several other implementations out there?

@r2d4 added the kind/feature label Sep 20, 2016
@r2d4
Contributor

r2d4 commented Sep 21, 2016

Sounds like a good idea. I think we should enable it by default, but also allow it to be disabled or swapped on startup through our minikube config command. I can take this one.
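Roughly the CLI surface I have in mind (exact subcommand names are TBD until the addons work in #639 lands, so treat this as a sketch):

```sh
# Sketch of the intended addon workflow -- names are illustrative:
minikube addons list              # show available addons and whether they're enabled
minikube addons enable ingress    # turn the bundled nginx controller on
minikube addons disable ingress   # turn it off, e.g. to swap in another controller
```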

@r2d4 self-assigned this Sep 21, 2016
@bprashanth
Author

Great, let me know if you run into roadblocks. I believe people are already running it on local-up-cluster so it should just work, in theory: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#local-cluster

@webwurst
Contributor

The nginx ingress controller does work with minikube; I already tested that successfully :)

@r2d4
Contributor

r2d4 commented Sep 29, 2016

I actually used it in a demo on Tuesday 👍. I'm waiting for #639 to enable it by default, though.

@wstrange

wstrange commented Nov 3, 2016

Now that #639 is merged, can we get the ingress addon out of the box?

I have a bunch of developers starting to use minikube. Setting up an ingress is a bit of a pain - it would be really nice if it worked out of the box.

@r2d4
Contributor

r2d4 commented Nov 3, 2016

We're waiting on kubernetes-retired/contrib#1879 (comment)

I've left another friendly ping. Once that gets merged and released, I have a working branch I can merge into minikube.

k8s-github-robot pushed a commit to kubernetes-retired/contrib that referenced this issue Nov 3, 2016
Automatic merge from submit-queue

Make map_hash_bucket_size configurable

I was getting an error while trying to run the nginx controller in minikube. This allows the nginx configuration option to be passed in through a ConfigMap.

The default value depends on the processor's cache line size (32 | 64 | 128); however, ServerNameHashBucketSize is determined similarly, so I've set it to the same default (64).

Fixes #1817

ref kubernetes/minikube#611

cc @bprashanth
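For anyone hitting the same error, setting the option should then look roughly like this. The ConfigMap name is whatever the controller is pointed at via --nginx-configmap, and the dashed key name is an assumption based on the controller's other options:

```sh
# Sketch: override map_hash_bucket_size through the controller's ConfigMap.
# ConfigMap name and key are assumptions -- check your controller's
# --nginx-configmap flag and docs for the exact names.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
  namespace: kube-system
data:
  map-hash-bucket-size: "128"
EOF
```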
@danielepolencic

Is it possible to enable this by default now that kubernetes-retired/contrib#1879 was merged?

@bprashanth
Author

Not sure if it's in an image yet. If you need a new image pushed, please open a PR with the version bump to the Makefile.

aledbf pushed a commit to aledbf/contrib that referenced this issue Nov 10, 2016
Make map_hash_bucket_size configurable
@danielepolencic

I'm not sure which version I should bump in the Makefile.

@r2d4
Contributor

r2d4 commented Dec 6, 2016

@danielepolencic take a look at kubernetes-retired/contrib#2015 (comment)

I am contemplating building our own image, as it seems easy enough. I reopened that PR for now. Would love to get this into minikube.

@danielepolencic

danielepolencic commented Dec 7, 2016

@r2d4 I'd love to get this in minikube too.

I'm not an expert by any means, but I'm happy to help. Is there anything I can do?

@r2d4
Contributor

r2d4 commented Dec 8, 2016

This should be in the next version of minikube with

minikube addons enable ingress
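Once it's enabled, a quick smoke test looks something like this (the echoserver bits are just an example workload, not part of the addon):

```sh
# Sketch: smoke-test the ingress addon against a throwaway service.
kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment echoserver --port=8080

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  rules:
  - host: echo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoserver
          servicePort: 8080
EOF

# The controller binds ports 80/443 on the minikube VM:
curl -H "Host: echo.example.com" http://$(minikube ip)/
```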

@r2d4 closed this as completed Dec 8, 2016
@danielepolencic

Thanks!

@r2d4
Contributor

r2d4 commented Dec 8, 2016

For the implementation, you can see https://github.com/kubernetes/minikube/tree/master/deploy/addons/ingress

The controller image is gcr.io/k8s-minikube/nginx-ingress-controller:0.8.4 and it's built from this PR: kubernetes-retired/contrib#2015
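To sanity-check that everything came up, something like the following should do (a sketch; the resource names follow the addon YAMLs linked above):

```sh
# Sketch: verify the addon's resources exist and the controller pod is healthy.
kubectl get rc,svc,configmap --namespace=kube-system
kubectl get pods --namespace=kube-system
# ...then, for a failing pod:
kubectl logs --namespace=kube-system <nginx-ingress-controller-pod>
```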

@donaldguy

I'm not having much luck with this. When I enabled the addon initially, the service was created but the two RCs and the ConfigMap weren't (the namespace events in the dashboard show there was a failure at ImagePull, possibly a transient network issue, maybe IPv6 routing), nor did they appear at any time while I waited thereafter.

Failed to pull image "gcr.io/google_containers/defaultbackend:1.0": image pull failed for gcr.io/google_containers/defaultbackend:1.0, this may be because there are no credentials on this request. details: (Get https://gcr.io/v1/repositories/google_containers/defaultbackend/tags/1.0: dial tcp [2607:f8b0:400c:c13::52]:443: connect: network is unreachable)

I tried kubectl create-ing them myself, and the RCs would appear healthy for a few seconds at a time but promptly disappear before I could use them for anything. I would guess this might be due to some special handling of the kubernetes.io/cluster-service label, but it could also be something simpler.

I'm gonna try recreating my VM with the addon enabled

Any other debugging suggestions? What logs etc. might be helpful?

@r2d4
Contributor

r2d4 commented Dec 12, 2016

I would try running the YAMLs themselves, slightly modified: you will need to remove the kubernetes.io/cluster-service label (it is used by the addon-manager), and you may want to change the namespace from kube-system to default for easier debugging.

The YAML files used in minikube are here; there are no major differences from the example ones in contrib/ingress:

https://github.com/kubernetes/minikube/tree/master/deploy/addons/ingress

Which version of minikube are you running? You may want to check that your addon-manager is at least version 5.1; older ones might not support creating the ConfigMap. It might also help to get the output of kubectl logs on the pods that are failing.
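Concretely, something like this (a sketch; the file names match the addon directory above, and the sed expressions assume the label and namespace appear in the usual places):

```sh
# Sketch: run the addon manifests by hand for debugging. Strip the
# kubernetes.io/cluster-service label (the addon-manager garbage-collects
# labeled resources it didn't create) and run in the default namespace.
base=https://raw.githubusercontent.com/kubernetes/minikube/master/deploy/addons/ingress
for f in ingress-configmap.yaml ingress-rc.yaml ingress-svc.yaml; do
  curl -sL "$base/$f" \
    | sed '/kubernetes.io\/cluster-service/d' \
    | sed 's/namespace: kube-system/namespace: default/' \
    | kubectl create -f -
done
```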

@donaldguy

I just rebuilt minikube from master and blew away ~/.minikube. idk if there's a different ISO I should point at as well, or if that's managed.

Addon manager was enabled (by default) and image is gcr.io/google-containers/kube-addon-manager:v5.1

Running minikube addons enable ingress didn't result in any output from docker events (after eval $(minikube docker-env)) or any clearly relevant output in tail -F /var/lib/localkube/localkube.* inside the VM.

There is no sign that the addon-manager is aware of either the configmap or the rc.yaml

kubectl --namespace kube-system logs kube-addon-manager-minikube
== Kubernetes addon manager started at 2016-12-12T21:23:46+0000 with ADDON_CHECK_INTERVAL_SEC=60 ==
namespace "kube-system" configured
== Successfully started /opt/namespace.yaml in namespace  at 2016-12-12T21:23:46+0000
== default service account in the kube-system namespace has token default-token-b13r0 ==
find: `/etc/kubernetes/admission-controls': No such file or directory
INFO: Creating new ReplicationController from file /etc/kubernetes/addons/kube-dns-rc.yaml in namespace kube-system, name: kube-dns-v20
INFO: Creating new ReplicationController from file /etc/kubernetes/addons/dashboard-rc.yaml in namespace kube-system, name: kubernetes-dashboard
INFO: Creating new Service from file /etc/kubernetes/addons/dashboard-svc.yaml in namespace kube-system, name: kubernetes-dashboard
INFO: Creating new Service from file /etc/kubernetes/addons/kube-dns-svc.yaml in namespace kube-system, name: kube-dns
You have exposed your service on an external port on all nodes in your
cluster.  If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:30000) to serve traffic.

See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "kubernetes-dashboard" created
replicationcontroller "kube-dns-v20" created
service "kube-dns" created
replicationcontroller "kubernetes-dashboard" created
INFO: == Kubernetes addon update completed successfully at 2016-12-12T21:23:47+0000 ==
INFO: == Kubernetes addon update completed successfully at 2016-12-12T21:24:47+0000 ==
INFO: == Kubernetes addon update completed successfully at 2016-12-12T21:25:47+0000 ==
INFO: Creating new Service from file /etc/kubernetes/addons/ingress-svc.yaml in namespace kube-system, name: default-http-backend
You have exposed your service on an external port on all nodes in your
cluster.  If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:30001) to serve traffic.

See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "default-http-backend" created
INFO: == Kubernetes addon update completed successfully at 2016-12-12T21:26:48+0000 ==
INFO: == Kubernetes addon update completed successfully at 2016-12-12T21:27:47+0000 ==
INFO: == Kubernetes addon update completed successfully at 2016-12-12T21:28:47+0000 ==
INFO: == Kubernetes addon update completed successfully at 2016-12-12T21:29:47+0000 ==
INFO: == Kubernetes addon update completed successfully at 2016-12-12T21:30:48+0000 ==
INFO: == Kubernetes addon update completed successfully at 2016-12-12T21:31:47+0000 ==
INFO: == Kubernetes addon update completed successfully at 2016-12-12T21:32:47+0000 ==

but they do seem to be in its dir and have the right content

docker@minikube:/var/lib$ ls -la /etc/kubernetes/addons
total 32
drwxr-xr-x    2 root     root           180 Dec 12 21:26 ./
drwxr-xr-x    4 root     root            80 Dec 12 21:22 ../
-rw-r-----    1 root     root          1522 Dec 12 21:22 dashboard-rc.yaml
-rw-r-----    1 root     root          1011 Dec 12 21:22 dashboard-svc.yaml
-rw-r-----    1 root     root           795 Dec 12 21:26 ingress-configmap.yaml
-rw-r-----    1 root     root          3479 Dec 12 21:26 ingress-rc.yaml
-rw-r-----    1 root     root           911 Dec 12 21:26 ingress-svc.yaml
-rw-r-----    1 root     root          4106 Dec 12 21:22 kube-dns-rc.yaml
-rw-r-----    1 root     root           942 Dec 12 21:22 kube-dns-svc.yaml
docker@minikube:/var/lib$ sudo md5sum /etc/kubernetes/addons/ingress*
086dfbb264a6a8033e8309235314adcf  /etc/kubernetes/addons/ingress-configmap.yaml
0c29ef726ec784c7a8fedd18436ac030  /etc/kubernetes/addons/ingress-rc.yaml
4a7a39a336ca3ac4cdc077e6c4c9e8d8  /etc/kubernetes/addons/ingress-svc.yaml

(I also cat'ed them, but that seems unneeded here.)

@donaldguy

Looks indeed like ConfigMap support didn't come in until 5.2: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/CHANGELOG.md#version-52-wed-october-26-2016-zihong-zheng-zihongzgooglecom

I can try updating the addon-manager in place. What version were you running to test against?
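Concretely, I'm thinking something like this, since kube-addon-manager runs as a static pod (a sketch; the manifest path is a guess and may live elsewhere on the minikube ISO):

```sh
# Sketch: bump the addon-manager image tag in place; kubelet should restart
# the static pod with the new tag. The manifest path is an assumption.
minikube ssh
# ...then, inside the VM:
sudo sed -i 's/kube-addon-manager:v5.1/kube-addon-manager:v6.1/' \
  /etc/kubernetes/manifests/addon-manager.yaml
```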

@r2d4
Contributor

r2d4 commented Dec 12, 2016

Ah, that's probably the issue. I ran some tests against 6.1, but thought I had also run them against 5.1.

6.1 looks like it accepts all resources.

@sslavic

sslavic commented Dec 7, 2017

The initial feature request mentioned:

> and document how to swap it with one of the several other implementations out there

Was this documented? I couldn't find it.
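If not, I assume the swap boils down to something like this (a sketch, with a placeholder for whichever controller you prefer):

```sh
# Sketch: swap the bundled controller for another implementation.
minikube addons disable ingress
# ...then deploy the replacement controller from its own manifests, e.g.:
kubectl create -f <your-preferred-controller-manifests.yaml>
```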

@Jokero
Contributor

Jokero commented Apr 18, 2018

Looks like the helm nginx-ingress chart does not work with minikube when the ingress addon is disabled. Does anyone else have this problem?
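For context, this is roughly how I'm installing it (controller.service.type is a value from the stable/nginx-ingress chart; NodePort because minikube has no LoadBalancer implementation by default):

```sh
# Sketch: install the chart with a NodePort service on minikube.
helm install stable/nginx-ingress --set controller.service.type=NodePort
# ...then reach the controller via the VM IP and the assigned node port:
curl http://$(minikube ip):<assigned-nodeport>/
```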
