
Dashboard not working after re-deployment in GCE #2415

Closed
vongohren opened this issue Sep 27, 2017 · 27 comments

@vongohren

vongohren commented Sep 27, 2017

Environment
Dashboard version: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.0
Kubernetes version: 1.7.4 on the node pool and 1.7.6 on the master cluster
Running on GCE
Steps to reproduce

Have the default GCE cluster running with 1.7.5. Verify the dashboard works on http://localhost:8001/ui
Then try to deploy the recommended version:
https://github.com/kubernetes/dashboard/blob/master/src/deploy/recommended/kubernetes-dashboard.yaml
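
For reference, a minimal CLI reproduction (assuming kubectl is already pointed at the cluster; kubectl proxy is what serves the UI at http://localhost:8001/ui):

$ kubectl proxy &   # serve the dashboard locally
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml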

Observed result

The recommended version fails with this error:

secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" configured
service "kubernetes-dashboard" configured
Error from server (Forbidden): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["services"], ResourceNames:["heapster"], APIGroups:[""], Verbs:["proxy"]}] user=&{snorre.edwin@bekk.no  [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]
Expected result

To see the dashboard

Comments

A colleague of mine deployed this kubernetes-dashboard after a mistake, and now I can't get it back. I've tried the alternative version and other things, but I can't seem to get it working again.

@vongohren vongohren changed the title Dashboard alternative not working after deployment Dashboard not working after re-deployment in GCE Sep 27, 2017
@m3co-code

This looks to me like privilege escalation protection. Are you sure that the account you use to apply the dashboard.yaml has the necessary rights to create secrets etc.? You can't grant more permissions than your own account has in Kubernetes.
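
A quick way to check that is kubectl auth can-i (a sketch; the recommended manifest creates its resources in kube-system):

$ kubectl auth can-i create secrets --namespace kube-system
$ kubectl auth can-i create roles --namespace kube-system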

@floreks
Member

floreks commented Sep 28, 2017

Exactly, as @marco-jantke said. Just look at the error message. It says Forbidden, which means you do not have the privileges to create all the resources. Only a cluster admin can deploy Dashboard.

@floreks floreks closed this as completed Sep 28, 2017
@vongohren
Author

@marco-jantke @floreks I did look at them, but I created a fresh new cluster. Shouldn't my account be the administrator then?

@floreks
Member

floreks commented Sep 28, 2017

I don't know the GCE cluster setup, so I can't tell whether it should or not. I see, however, that the server responds with a Forbidden error, which means you do not have access to create some resources.

@m3co-code

Make sure to grant yourself the Container Engine Admin/Cluster Admin rights in GCP IAM. Hope this helps, but further support for that is not part of the kubernetes/dashboard project.
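
A sketch of that grant with the gcloud CLI (project ID and email are placeholders; roles/container.admin is the Container Engine Admin role):

$ gcloud projects add-iam-policy-binding my-project \
    --member=user:me@example.com --role=roles/container.admin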

@bartoszhernas

Hi, I am a cluster admin and I am still getting the same error. Any ideas?

@bartoszhernas

Also, after my masters were updated to 1.7.6-gke.1, the dashboard stopped working.

@jshapiro26

The dashboard has stopped working for me on 1.7.6-gke.1 as well across 5 clusters. I can see that my nodes are still at 1.7.5.

@bartoszhernas

bartoszhernas commented Oct 11, 2017 via email

@floreks
Member

floreks commented Oct 11, 2017

"Stopped working" does not really help us diagnose the problem. We need much more details together with logs from Dashboard to be able to help or point you in the right direction.

@pgayvallet

pgayvallet commented Oct 23, 2017

Same issue here: kube 1.7.6-gke.1 on GKE, cluster admin, still getting the error:

Error from server (Forbidden): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["services"], ResourceNames:["heapster"], APIGroups:[""], Verbs:["proxy"]}] user=&{******* [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]

It was deploying correctly before upgrading the master from 1.7.5 to 1.7.6.

@derluke

derluke commented Nov 23, 2017

same here

@KptnKMan

KptnKMan commented Nov 29, 2017

Experiencing the same issue here.

Kubernetes 1.8.4
Latest Dashboard 1.8.0
Fresh installation on AWS
Rebuilding the cluster shows no change.

Error:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
Error from server (Forbidden): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["configmaps"], ResourceNames:["kubernetes-dashboard-settings"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["configmaps"], ResourceNames:["kubernetes-dashboard-settings"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["services"], ResourceNames:["heapster"], APIGroups:[""], Verbs:["proxy"]} PolicyRule{Resources:["services/proxy"], ResourceNames:["heapster"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services/proxy"], ResourceNames:["http:heapster:"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services/proxy"], ResourceNames:["https:heapster:"], APIGroups:[""], Verbs:["get"]}] user=&{worker  [system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[]

@floreks
Member

floreks commented Nov 29, 2017

The user you are trying to create Dashboard with has no permission to create a Role with some of those verbs. You need to use an admin account that has all the privileges to create objects in the cluster.
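
To confirm which credentials and context kubectl is actually using, and whether they can create the Role, one can run (standard kubectl):

$ kubectl config view --minify
$ kubectl auth can-i create roles --namespace kube-system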

@KptnKMan

@floreks I'm not using any specific user, other than kubernetes-admin in my kubeconfig.

Can you explain a little further what I'm doing wrong?

Here is a copy of my kubeconfig:

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: ssl/ca.pem
    server: https://address.to-elb.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
users:
  - name: kubernetes-admin
    user:
      client-certificate: ssl/worker.pem
      client-key: ssl/worker-key.pem
current-context: kubernetes-admin@kubernetes
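
Worth noting: the error above reports user=&{worker ...}. With client-certificate authentication the API server takes the username from the certificate's CN and the groups from its O fields, so the user name written in the kubeconfig is only a local label. The cert's subject can be inspected with standard openssl:

$ openssl x509 -in ssl/worker.pem -noout -subject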

@danyx23

danyx23 commented Nov 30, 2017

I ran into the same problem on a fresh GKE 1.8.3 cluster. I have the cluster-admin role binding active for my user (kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<myusername>), but when I run kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml to update the dashboard, it fails when trying to create the new role with

Error from server (Forbidden): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-key-holder"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["secrets"], ResourceNames:["kubernetes-dashboard-certs"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["configmaps"], ResourceNames:["kubernetes-dashboard-settings"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["configmaps"], ResourceNames:["kubernetes-dashboard-settings"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["services"], ResourceNames:["heapster"], APIGroups:[""], Verbs:["proxy"]} PolicyRule{Resources:["services/proxy"], ResourceNames:["heapster"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services/proxy"], ResourceNames:["http:heapster:"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services/proxy"], ResourceNames:["https:heapster:"], APIGroups:[""], Verbs:["get"]}] user=&{daniel@douglasconnect.com  [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]

I am a bit at a loss, because I would think that having cluster-wide admin RBAC set up for my user should make this kind of error impossible. What can I do to debug this problem further? Thanks!

@floreks
Member

floreks commented Nov 30, 2017

GKE's kubernetes setup is more restrictive AFAIK. You need to use their API to grant yourself the necessary privileges.
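
On GKE, the fix that later comments (@mofelee, @kirkre) land on is to bind your Google account to cluster-admin in RBAC, since the GCP IAM role alone is not enough:

$ kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin --user $(gcloud config get-value account)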

@KptnKMan

Hi @floreks, I think we're still unsure how to resolve this issue.
Could you clarify whether you mean that we need to run this within the cluster-admin context? That is what I'm attempting right now, and I am still getting the exact same error.

I am attempting this on AWS, but it is basically the same as a bare metal install.

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: ssl/ca.pem
    server: https://address.to-elb.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: cluster-admin
  name: cluster-admin@kubernetes
users:
  - name: cluster-admin
    user:
      client-certificate: ssl/worker.pem
      client-key: ssl/worker-key.pem
current-context: cluster-admin@kubernetes

Otherwise, is there some other way in which I'm supposed to run this "as the admin user"?

@floreks
Member

floreks commented Nov 30, 2017

I am pretty sure that this setup is not the same as "bare metal" or kubeadm, because I have used both and there was no problem deploying Dashboard. It has to be an environment-specific issue, and your "admin" is not an actual admin with all privileges. I can't solve this for you, as I don't have access to GKE or AWS to test their deployments of kubernetes.

This might help with GKE setup: https://cloud.google.com/kubernetes-engine/docs/how-to/iam-integration

I believe you need to use gcloud and their API to grant yourself more privileges. As for AWS, it might be a similar case.

@KptnKMan

The AWS deployment I have is an environment that simply runs on top of AWS, hence why it's like a bare-metal deployment. It seems beside the point, but just clarifying that it's not on GKE/GC.

Is there a setting or api-server flag that should be enabled for this? I never had this problem with previous versions of the dashboard. Also, the dashboard actually works, but I get this annoying error, and I understand that it's a security concern. I'm trying to do this correctly with RBAC.

@floreks
Member

floreks commented Nov 30, 2017

Can you paste the api-server parameters you are using to start it?

@KptnKMan

KptnKMan commented Dec 1, 2017

@floreks, and everyone, I think I managed to fix the issue (at least for my setup).
I got some help from @liggitt on the Kubernetes Slack, who was super awesome.

THESE ARE ALL THE STEPS I USED:

First I determined that I did not have the correct roles installed, which should be set up by the api-server by default:

$ kubectl get roles --all-namespaces
No resources found.

I needed to run the api-server with the flag --authorization-mode=RBAC,AlwaysAllow, which I learned enables RBAC but falls back to AlwaysAllow (allow everything) when RBAC does not authorize a request.

This is verified in the api-server logs, which show a bunch of lines like:

Nov 30 23:48:08 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:08.955830    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/cluster-admin
Nov 30 23:48:08 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:08.970721    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/system:discovery
Nov 30 23:48:08 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:08.985079    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/system:basic-user
Nov 30 23:48:09 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:09.005096    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/admin
Nov 30 23:48:09 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:09.032102    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/edit
Nov 30 23:48:09 ip-10-1-11-197 kube-apiserver[8216]: I1130 23:48:09.048804    8216 storage_rbac.go:198] created clusterrole.rbac.authorization.k8s.io/view

This is not a production-recommended setup, so I still needed to bind my user to a proper role.
However, it worked:

$ kubectl get roles --all-namespaces
NAMESPACE     NAME                                             AGE
kube-public   system:controller:bootstrap-signer               23m
kube-system   extension-apiserver-authentication-reader        23m
kube-system   system::leader-locking-kube-controller-manager   23m
kube-system   system::leader-locking-kube-scheduler            23m
kube-system   system:controller:bootstrap-signer               23m
kube-system   system:controller:cloud-provider                 23m
kube-system   system:controller:token-cleaner                  23m

Next I discovered that the only subject granted SuperUser access by default is the system:masters group, not any particular username.
So my admin cert creation process needed to include O=system:masters as the organization name:

$ openssl genrsa -out config/ssl/admin-key.pem 2048
$ openssl req -new -key config/ssl/admin-key.pem -out config/ssl/admin.csr -subj '/C=AU/ST=Some-State/O=system:masters/CN=cluster-admin'
$ openssl x509 -req -in config/ssl/admin.csr -CA config/ssl/ca.pem -CAkey config/ssl/ca-key.pem -CAcreateserial -out config/ssl/admin.pem -days 365
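
The subject of the resulting cert can be double-checked with standard openssl (the O field is what RBAC sees as the group):

$ openssl x509 -in config/ssl/admin.pem -noout -subject   # should show O=system:masters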

I changed my api-server flag back to only --authorization-mode=RBAC and restarted services.
Using my new cert in my kubeconfig:

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: ssl/ca.pem
    server: https://address.to-elb.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: cluster-admin
  name: cluster-admin@kubernetes
users:
  - name: cluster-admin
    user:
      client-certificate: ssl/admin.pem
      client-key: ssl/admin-key.pem
current-context: cluster-admin@kubernetes

I was able to successfully query:

$ kube-deploy get roles --all-namespaces
NAMESPACE     NAME                                             AGE
kube-public   system:controller:bootstrap-signer               42m
kube-system   extension-apiserver-authentication-reader        42m
kube-system   system::leader-locking-kube-controller-manager   42m
kube-system   system::leader-locking-kube-scheduler            42m
kube-system   system:controller:bootstrap-signer               42m
kube-system   system:controller:cloud-provider                 42m
kube-system   system:controller:token-cleaner                  42m

Lastly, with the correct permissions and roles bound, I could create Dashboard using only RBAC:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created

This is what worked for me; I hope anyone who finds this finds it helpful. 👍

@floreks
Member

floreks commented Dec 1, 2017

Great feedback :) This is probably the same issue as with GKE. There are some additional steps required to enable RBAC. We'll link this solution from our FAQ so everyone can benefit from it.

@mofelee

mofelee commented May 24, 2018

If you enabled RBAC, just type

kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user $(gcloud config get-value account)

and

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

secret "kubernetes-dashboard-certs" unchanged
serviceaccount "kubernetes-dashboard" unchanged
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" unchanged
deployment "kubernetes-dashboard" unchanged
service "kubernetes-dashboard" unchanged

@fatih

fatih commented May 31, 2018

I also had a test cluster with the same issue. Adding --authorization-mode=RBAC fixed it. Not sure if this is the only reason, but I wanted to add it in case someone else has this problem.

@kirkre

kirkre commented Jan 18, 2019

I found that even with the owner role ("Full access to all resources"), the suggestion from mofelee is still needed on GKE:

kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user $(gcloud config get-value account)

@Maxiaoyu0

Quoting @mofelee's fix above:

kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user $(gcloud config get-value account)

This is the true resolution.
