
Devworkspace native auth on Kubernetes - Unauthorized! Configured service account doesn't have access #21049

Closed
nils-mosbach opened this issue Jan 21, 2022 · 16 comments
Labels
area/che-server kind/bug Outline of a bug - must adhere to the bug report template. severity/P1 Has a major impact to usage or development of the system.

Comments

nils-mosbach commented Jan 21, 2022

Describe the bug

We have a Keycloak instance that authenticates against GitLab's OIDC provider (information on our setup can be found here: #20962 (comment)). That setup served us well on Eclipse Che 7.40.

Upgrading to 7.42 seems to require native auth, so we tried to set up OAuth on Kubernetes with Keycloak based on the information provided by @sparkoo in #21036 (comment). Are we missing something?

The dashboard just shows a blank page along with the following errors.

(screenshot: dashboard console errors)

It seems roles, bindings, and service accounts are not created. Should this be done by Eclipse Che in the provision API call?


Since chectl fails if OAuth wasn't properly set up: is the "non-native-auth" way of handling users deprecated with the release of Che 7.42?

I get the idea of using OpenShift's built-in authentication and understand that some might find Keycloak quite heavy compared to Dex. But I think setting up OAuth on the Kubernetes API server isn't really something a lot of users do on a regular basis, and I'm not even sure all managed Kubernetes services allow changing these settings. Troubleshooting this tends to be quite difficult as well. (#21036 (comment))

Is setting nativeUserMode: false still a supported option?

Che version

7.42@latest

Steps to reproduce

I've created a new client named kubernetes in Keycloak and set the following settings on our kube-api-server:

oidc-client-id: kubernetes
oidc-issuer-url: 'https://auth.company.dev/auth/realms/git-dev'
oidc-username-claim: email

chectl does not consider this setting valid and throws a warning. I'm not sure what the issue is, so I tried bypassing the check with --skip-oidc-provider-check. Unfortunately the chectl log file does not contain additional information.

Check if OIDC Provider installed...NOT INSTALLED
    → OIDC Provider is not installed in order to deploy Eclipse Che. To bypass OIDC Provider check use '--skip-oidc-provider-check' flag

The Che configuration contains the following:

  auth:
      externalIdentityProvider: true
      identityProviderURL: 'https://auth.company.dev/auth/realms/git-dev'
      openShiftoAuth: false
      oAuthClientName: 'dev-studio'
      oAuthSecret: 'db82*******'

Additional Info:

  • We're running all services on a proper wildcard certificate.

Expected behavior

Native user mode on Kubernetes should work.

Runtime

Kubernetes (vanilla)

Screenshots

No response

Installation method

chectl/latest

Environment

Linux

Eclipse Che Logs

2022-01-21 12:20:49,422[nio-8080-exec-6]  [ERROR] [c.a.c.r.RuntimeExceptionMapper 47]   - Internal Server Error occurred, error time: 2022-01-21 12:20:49
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://10.43.0.1/api/v1/namespaces/workspace-nm-33j87e/secrets/workspace-credentials-secret. Message: Unauthorized! Configured service account doesn't have access. Service account may have been revoked. Unauthorized.
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:686)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:623)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:565)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:526)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:493)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:475)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:807)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:188)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:155)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:88)
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.configurator.CredentialsSecretConfigurator.configure(CredentialsSecretConfigurator.java:43)
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.KubernetesNamespaceFactory.configureNamespace(KubernetesNamespaceFactory.java:570)
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.KubernetesNamespaceFactory.getOrCreate(KubernetesNamespaceFactory.java:334)
	at org.eclipse.che.workspace.infrastructure.kubernetes.provision.NamespaceProvisioner.provision(NamespaceProvisioner.java:42)
	at org.eclipse.che.workspace.infrastructure.kubernetes.api.server.KubernetesNamespaceService.provision(KubernetesNamespaceService.java:95)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.base/java.lang.reflect.Method.invoke(Unknown Source)
	at org.everrest.core.impl.method.DefaultMethodInvoker.invokeMethod(DefaultMethodInvoker.java:174)
	at org.everrest.core.impl.method.DefaultMethodInvoker.invokeMethod(DefaultMethodInvoker.java:61)
	at org.everrest.core.impl.RequestDispatcher.doInvokeResource(RequestDispatcher.java:329)
	at org.everrest.core.impl.RequestDispatcher.invokeSubResourceMethod(RequestDispatcher.java:319)
	at org.everrest.core.impl.RequestDispatcher.dispatch(RequestDispatcher.java:257)
	at org.everrest.core.impl.RequestDispatcher.dispatch(RequestDispatcher.java:131)
	at org.everrest.core.impl.RequestHandlerImpl.handleRequest(RequestHandlerImpl.java:61)
	at org.everrest.core.impl.EverrestProcessor.process(EverrestProcessor.java:130)
	at org.everrest.core.servlet.EverrestServlet.service(EverrestServlet.java:62)
	at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:777)
	at com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:290)
	at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:280)
	at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:184)
	at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:89)
	at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85)
	at org.eclipse.che.core.metrics.ApiResponseMetricFilter.doFilter(ApiResponseMetricFilter.java:46)
	at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
	at org.eclipse.che.multiuser.api.authentication.commons.filter.MultiUserEnvironmentInitializationFilter.doFilter(MultiUserEnvironmentInitializationFilter.java:161)
	at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
	at org.eclipse.che.commons.logback.filter.RequestIdLoggerFilter.doFilter(RequestIdLoggerFilter.java:50)
	at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
	at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:121)
	at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:133)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:185)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:119)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
	at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:769)
	at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:690)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:353)
	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:382)
	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:872)
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1705)
	at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
	at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
	at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
	at java.base/java.lang.Thread.run(Unknown Source)

Additional context

Kubernetes: v1.20.4
Provisioned by Rancher

No errors in the Kubernetes API server logs.

OAuth2 Proxy Logs

[2022/01/21 12:17:06] [oauthproxy.go:862] No valid authentication in request. Initiating login.
10.42.3.0:51718 - 5906d68c8edb947f7082975011bdf5e5 - - [2022/01/21 12:17:06] che.company.dev GET - "/dashboard/assets/branding/manifest.json" HTTP/1.1 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36 Edg/97.0.1072.62" 302 363 0.000
10.42.3.0:52770 - 7c70b5c9fb6e52e5d6d9faebfb06e6c1 - n.m@company.com [2022/01/21 12:18:08] che.company.dev GET / "/api/keycloak/settings" HTTP/1.1 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36 Edg/97.0.1072.62" 404 88 0.004
10.42.3.0:52770 - b8a9a9c83a52fafd29537b4c0c40c021 - n.m@company.com [2022/01/21 12:18:11] che.company.dev GET / "/api/dex/settings" HTTP/1.1 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36 Edg/97.0.1072.62" 404 85 0.002
10.42.3.0:52770 - a029efa58feb4e2eadc4d8d8e9ebd9ef - n.m@company.com [2022/01/21 12:18:19] che.company.dev GET / "/api/oidc/settings" HTTP/1.1 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36 Edg/97.0.1072.62" 404 84 0.002
10.42.3.0:52770 - 52caf250ee8999ce9c0ffdf955b9be2c - n.m@company.com [2022/01/21 12:18:23] che.company.dev GET / "/api/" HTTP/1.1 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36 Edg/97.0.1072.62" 200 726 0.015
10.42.3.0:52770 - db0ee253c8ccfe6226eb73950e1194c4 - n.m@company.com [2022/01/21 12:18:42] che.company.dev GET / "/api/oauth/settings" HTTP/1.1 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36 Edg/97.0.1072.62" 404 81 0.003
10.42.3.0:52770 - 5d02cc30fd50867f136bc7eb9bb346ca - n.m@company.com [2022/01/21 12:18:44] che.company.dev GET / "/api/oauth/" HTTP/1.1 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36 Edg/97.0.1072.62" 200 28 0.003
10.42.3.0:52770 - 53d36dc1c20c58e961478856d84ebcb4 - n.m@company.com [2022/01/21 12:18:46] che.company.dev GET / "/api/" HTTP/1.1 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36 Edg/97.0.1072.62" 200 726 0.006
...

Since the Che log shows it trying to find a secret in the workspace namespace, here is what that namespace contains:

$ kubectl get secrets --namespace workspace-nm-33j87e
NAME                  TYPE                                  DATA   AGE
default-token-ppmwz   kubernetes.io/service-account-token   3      110m

nils-mosbach added the kind/bug label Jan 21, 2022
che-bot added the status/need-triage label Jan 21, 2022
ibuziuk added the severity/P1 and area/che-server labels and removed status/need-triage Jan 21, 2022
ibuziuk (Member) commented Jan 21, 2022

@sparkoo could you please take a look?

sparkoo (Member) commented Jan 24, 2022

@nils-mosbach It's not only about Dex being more lightweight than Keycloak. With devworkspaces being Kubernetes objects, we need to somehow control access to them. So we need to set Kubernetes RBAC rules so that only the devworkspace owner has access. For that we need Kubernetes to know the user, thus we require Kubernetes to be configured with OIDC.

So for the devworkspace engine (the default since 7.42, I think), Kubernetes OIDC is mandatory. For the che-workspace engine, you can still use the built-in Keycloak as before. However, I can't guarantee how long this will be supported.

nils-mosbach (Author) commented:

@sparkoo Thanks! Makes sense. As soon as I get this working, I'll drop a summary here of what was necessary for Rancher-deployed clusters. Maybe that helps others as well.

Do you have an idea what could be the issue?

sparkoo (Member) commented Jan 24, 2022

It looks like Kubernetes is not properly configured with OIDC. If you see the login page, can actually log in, and are redirected to the dashboard, the OIDC server and Che configuration should be fine.

The failure io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://10.43.0.1/api/v1/namespaces/workspace-nm-33j87e/secrets/workspace-credentials-secret. Message: Unauthorized! Configured service account doesn't have access. Service account may have been revoked. Unauthorized. makes me think that Kubernetes is not properly configured with OIDC. Under the hood, we use the user's token in the Authorization header for requests to the Kubernetes API (here, to get a secret). The token is obtained during login. We set the RBAC rules so that only the user has access to the resources in the namespace, so if Kubernetes doesn't know the token, or the token belongs to a different user than the one set in the RoleBinding, it rejects the request.
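
One quick way to check whether the apiserver actually accepts the OIDC token is to use it directly with kubectl (a diagnostic sketch; $OIDC_ID_TOKEN stands for the ID token obtained from Keycloak during login, and the namespace is the one from the error above):

# Ask the apiserver, as the OIDC user, whether reading secrets in the workspace namespace is allowed
kubectl --token="$OIDC_ID_TOKEN" auth can-i get secrets -n workspace-nm-33j87e
# A "yes"/"no" answer means the token itself was accepted; an "Unauthorized" error means the
# apiserver does not recognize the token at all (OIDC flags misconfigured or claim mismatch).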

Please note that you have to use the same client (id and secret) for both Che and the Kubernetes apiserver.

Here's the doc on how to configure the Kubernetes apiserver with OIDC: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#configuring-the-api-server
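
On a cluster where you run the apiserver yourself, the flags from that doc corresponding to the values in this issue would look roughly like this (illustrative only):

kube-apiserver \
  --oidc-issuer-url=https://auth.company.dev/auth/realms/git-dev \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email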

Here's the chectl code that checks the OIDC configuration of Kubernetes (I'm not sure it's 100% bulletproof): https://github.com/che-incubator/chectl/blob/main/src/commands/server/deploy.ts#L443

guydog28 commented:

Looking at the code, would this check fail when the OIDC args are listed under args and not command? I'm not sure exactly how the Go API works for that. In our deployment the command is just the command and everything else, including the OIDC settings, is under args, and we fail the check.

nils-mosbach (Author) commented Jan 24, 2022

Getting closer... :)
Thanks, using the same client ID actually helped. I was a little confused since the kube-apiserver doesn't allow setting an OAuth secret while Che/OAuth2 Proxy enforces one. But it seems the kube-apiserver doesn't need one, even for confidential clients.

We had tested kubectl login with a cluster-admin account and tried to use "regular" user accounts for Che. Digging a little deeper: after we created a ClusterRoleBinding for the user that should access Che, the dashboard worked as expected.

$ kubectl create clusterrolebinding oidc-cluster-mn-admin --clusterrole=cluster-admin --user=nm@company.com
clusterrolebinding.rbac.authorization.k8s.io/oidc-cluster-mn-admin created

The question is: should this be done manually for every user, or is this a process that should be handled by Che? For now, granting access rights before users log in is fine, but later on it would be nice if Che granted all the rights required for accessing the user's namespace. After we granted cluster-admin rights, there are at least roles and bindings created for the workspace.

$ kubectl get rolebindings
NAME                                                ROLE                                                            AGE
clusterrolebinding-22cf4                            ClusterRole/admin                                               3d5h
clusterrolebinding-588wx                            ClusterRole/edit                                                3d5h
clusterrolebinding-9qrjk                            ClusterRole/project-member                                      3d5h
clusterrolebinding-kcqb2                            ClusterRole/project-owner                                       3d5h
dev-studio-cheworkspaces-clusterrole                ClusterRole/dev-studio-cheworkspaces-clusterrole                3d5h
dev-studio-cheworkspaces-devworkspace-clusterrole   ClusterRole/dev-studio-cheworkspaces-devworkspace-clusterrole   3d5h
devworkspace-dw                                     Role/workspace                                                  41m

$ kubectl get roles
NAME        CREATED AT
workspace   2022-01-24T18:53:04Z

$ kubectl get serviceaccounts 
NAME                           SECRETS   AGE
default                        1         3d5h
workspace0f9e060375c541c0-sa   1         40m
workspacec80c964c91c44afa-sa   1         41m

Removing the cluster-admin binding causes the dashboard to crash again. We're still struggling with tokens that expire after ~5 minutes, but that might be due to the token lifespan (Keycloak/GitLab). I'll have a look at this tomorrow.

In our case chectl still deploys devworkspace operator 0.9. Is this causing the issue?

chectl check
The problem is that Rancher does not deploy the kube-apiserver in the kube-system namespace; I think that service runs as a Docker container somewhere on each of the control-plane nodes (https://forums.rancher.com/t/where-does-the-kubernetes-api-server-run-in-rancher/16317). So the chectl check as implemented in deploy.ts will always fail. Not a big deal, since you can skip it.

sparkoo (Member) commented Jan 25, 2022

Looking at the code, would this check fail when the OIDC args are listed under args and not command? I'm not sure exactly how the Go API works for that. In our deployment the command is just the command and everything else, including the OIDC settings, is under args, and we fail the check.

That's great information for us. Can you please create an issue so we can improve the check? Or you can propose a PR yourself.

cc: @tolusha

sparkoo (Member) commented Jan 25, 2022

@nils-mosbach looks promising. It seems like you're logged in with a user that Kubernetes knows. I would not recommend setting cluster-admin permissions for the user. In general, Che should set all needed permissions for the user during namespace provisioning (the che-dashboard request to che-server). Do you create the namespace yourself, or is it created by Che?

The permissions are created in https://github.com/eclipse-che/che-server/blob/main/infrastructures/kubernetes/src/main/java/org/eclipse/che/workspace/infrastructure/kubernetes/namespace/configurator/UserPermissionConfigurator.java. As you can see there, Che does not know any particular cluster roles; it gets them from the che.infra.kubernetes.user_cluster_roles configuration. This is set from the env var CHE_INFRA_KUBERNETES_USER__CLUSTER__ROLES, so please check the che ConfigMap for this value. che-operator creates this ConfigMap: https://github.com/eclipse-che/che-operator/blob/main/pkg/deploy/server/server_configmap.go#L180.
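
A quick way to check that value on a running installation (assuming the default ConfigMap name che in the eclipse-che namespace):

# Show the cluster roles Che will bind to each user's namespace
kubectl get configmap che -n eclipse-che -o yaml | grep CHE_INFRA_KUBERNETES_USER__CLUSTER__ROLES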

In the end, in the user's namespace you should have something like eclipse-che-cheworkspaces-clusterrole and eclipse-che-cheworkspaces-devworkspace-clusterrole RoleBindings referring to ClusterRoles of the same name and to your User. We can probably work from there to figure out what exactly is wrong. Do you see any errors in the che-server or che-operator logs?
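
For illustration, a correctly provisioned RoleBinding in the user's namespace would look roughly like this (a sketch only; the exact names come from the configuration described above):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: eclipse-che-cheworkspaces-clusterrole
  namespace: workspace-nm-33j87e
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: eclipse-che-cheworkspaces-clusterrole
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: nm@company.com   # must match the username the apiserver derives from the OIDC token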

nils-mosbach (Author) commented Jan 26, 2022

I think we finally found the cause of the issue.

Kubernetes uses sub as the default claim for mapping user names from Keycloak; we changed that to email. The Che server uses name by default.
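
If in doubt about what the token actually carries, you can decode its payload (a rough sketch; the ID token is a standard JWT, so the second dot-separated segment is base64url-encoded JSON and may need padding added depending on your base64 tool):

# Print the sub, email and name claims from the ID token obtained from Keycloak
echo "$OIDC_ID_TOKEN" | cut -d '.' -f2 | tr '_-' '/+' | base64 -d 2>/dev/null | jq '{sub, email, name}'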

It seems that in the users table there is a column that contains the name Che uses for creating RoleBindings. Even though I tried setting CHE_OIDC_USERNAME__CLAIM, it had no effect, since that mapping is fetched from the database for existing users.

After we deleted the database and started fresh, the name column in the database contains the user's email, which solves the authentication issue since the proper user name (email) is set in the RoleBinding. Downside: Che doesn't show the user's full name in the dashboard's user dropdown.
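
If you want to inspect what Che stored without dropping the database, the user table can be queried directly (a sketch assuming the default PostgreSQL deployment created by che-operator, with the default pgche user and dbche database; the table is called usr):

# Show the identifiers Che uses when creating RoleBindings
kubectl exec -n eclipse-che deploy/postgres -- psql -U pgche -d dbche -c 'SELECT id, email, name FROM usr;'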

@sparkoo: Thanks a lot! If you want to improve the chectl checks, one thing that would be nice is verifying that CHE_OIDC_USERNAME__CLAIM matches oidc-username-claim. Anyway: great work! :)

nils-mosbach (Author) commented:

For others reading this, setting everything up on Rancher was quite simple.

Deploy Keycloak

  • Deploy Keycloak using an SSL certificate (e.g. https://auth.company.dev)
  • Create a realm (e.g. git-dev)
  • Create a client for Kubernetes (e.g. kubernetes)

Note that this client must be used by both Che and Kubernetes and must be a confidential (protected) client. Otherwise OAuth2 Proxy will throw an error that the secret must not be empty.
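
For reference, a minimal confidential client configuration in Keycloak would look roughly like this (illustrative values only; the redirect URI has to match your Che host):

Client ID:            kubernetes
Access Type:          confidential            # provides the client secret that Che/OAuth2 Proxy needs
Standard Flow:        enabled
Valid Redirect URIs:  https://che.company.dev/*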

Set up OIDC for the kube-api-server

On Rancher-provisioned clusters, the API server can be configured using extra_args.

  • In Rancher, navigate to the cluster's edit view.
  • Click "Edit as YAML" in the upper right; that displays all configuration settings.
  • Set oidc-client-id, oidc-issuer-url, and oidc-username-claim:
kube-api:
  always_pull_images: false
  extra_args:
    oidc-client-id: kubernetes
    oidc-issuer-url: 'https://auth.company.dev/auth/realms/git-dev'
    oidc-username-claim: email

After saving the changes it takes a couple of minutes until all control-plane nodes have restarted. If you use a self-signed certificate (which we don't), it gets a little trickier, since the kube-apiserver must know the CA's certificate.
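
If you did need a private CA, the extra arguments would presumably grow to something like this (an untested sketch; the CA file has to exist on the control-plane nodes, e.g. mounted into the kube-apiserver container via extra_binds on RKE):

kube-api:
  extra_args:
    oidc-client-id: kubernetes
    oidc-issuer-url: 'https://auth.company.dev/auth/realms/git-dev'
    oidc-username-claim: email
    oidc-ca-file: /etc/kubernetes/oidc/ca.crt
  extra_binds:
    - '/etc/kubernetes/oidc:/etc/kubernetes/oidc'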

Configure Che

Set the external identity provider and OIDC username claim in the CR:

spec:
  server:
    customCheProperties:
      CHE_OIDC_USERNAME__CLAIM: "email"

  auth:
      externalIdentityProvider: true
      identityProviderURL: 'https://auth.company.dev/auth/realms/git-dev'
      openShiftoAuth: false
      oAuthClientName: 'kubernetes'
      oAuthSecret: '0...2'

gidduhome commented:

@nils-mosbach May I know how you configured Kubernetes with the OIDC provider? I mean, which YAML file needs to be modified? I'm not able to narrow down which piece to change through Rancher.

nils-mosbach (Author) commented:

Rancher does not deploy the api server as a Kubernetes resource. That’s why the official Kubernetes documentation doesn’t work.

For Rancher: when you edit your cluster there's an option to switch to YAML mode. That of course only works with Rancher-provisioned clusters.

In other cases the flags need to be set on the kube-apiserver itself (the API server components running in the kube-system namespace on the control-plane nodes), as described in the K8s docs.
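
For example, on a kubeadm-based cluster the flags would go into the kube-apiserver static pod manifest on each control-plane node (a sketch; the kubelet restarts the apiserver automatically when the file changes):

# /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
    - command:
        - kube-apiserver
        - --oidc-issuer-url=https://auth.company.dev/auth/realms/git-dev
        - --oidc-client-id=kubernetes
        - --oidc-username-claim=email
        # ...existing flags unchanged...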

gidduhome commented:

@nils-mosbach Thank you!! Let me play around and see.

cristian-radu commented:

I ran into this same issue with Che 7.43 deployed via the che-operator 7.43 onto a GKE cluster with a separate Keycloak instance that I manage myself.

In case someone finds this helpful, here are the steps I took to finally get a working installation.

On GKE (and probably EKS and AKS too) you cannot just change the Kubernetes API server settings, since they are managed by the cloud provider.

  1. I had to enable the GKE Identity Service so that I could authenticate users from Keycloak against Che. I chose to use the email claim to identify my users, so I have userClaim = email in its config. It is important to note that this service seems to be implemented as an Envoy proxy in front of the Kubernetes API server, so you get a new Kubernetes API endpoint that accepts requests authenticated via an OIDC provider.
  2. As in the comments above, I also changed the Che username claim to email. The contents of the Che database were not important to me, so I dropped it; after restarting the che server, it started to populate the Name field of the usr table with the email value from the Keycloak claim, and it also started to correctly create the Kubernetes RoleBindings in the user's namespace with the user's email as the Subject.
    customCheProperties:
      CHE_OIDC_USERNAME__CLAIM: "email"
  3. I had to get the che server to connect to the GKE Identity Service endpoint, instead of the standard in-cluster Kubernetes endpoint, by setting CHE_INFRA_KUBERNETES_MASTER__URL=https://gke-oidc-envoy.anthos-identity-service
  4. The che-dashboard also makes some requests directly to Kubernetes, which seem to be related to managing devworkspace objects. Here it was not possible to configure the master URL, so as a quick and dirty hack I forked it and changed the entrypoint script. I modified the standard Kubernetes environment variables so that the dashboard's client would pick up the Envoy proxy endpoint instead. This sucks, I know, but I was fed up at this point:
set -a

KUBERNETES_PORT=tcp://<gke-oidc-envoy LB IP>:443
KUBERNETES_PORT_443_TCP_ADDR=<gke-oidc-envoy LB IP>
KUBERNETES_PORT_443_TCP=tcp://<gke-oidc-envoy LB IP>:443
KUBERNETES_SERVICE_HOST=<gke-oidc-envoy LB IP>

set +a

Lots of effort was required before I was "CodeReady" :P

Bonne chance!

debkantap commented:

set -a

KUBERNETES_PORT=tcp://<gke-oidc-envoy LB IP>:443
KUBERNETES_PORT_443_TCP_ADDR=<gke-oidc-envoy LB IP>
KUBERNETES_PORT_443_TCP=tcp://<gke-oidc-envoy LB IP>:443
KUBERNETES_SERVICE_HOST=<gke-oidc-envoy LB IP>

set +a

Hi...
Can you please elaborate on your steps? We were able to log in to the che-dashboard with Keycloak as the OIDC provider, but authorization is not happening. I think your solution will work for us, but we need more detailed steps please.

Thanks & regards
DK

huonguyenlt commented:

Hi @nils-mosbach, could you please share how to set up the Keycloak client? Are there any options I have to change besides the defaults when creating the client? Thanks a lot.
