
Failure executing: POST sshconfigmap or gitconfig configmaps already exists #15904

Closed
skabashnyuk opened this issue Feb 3, 2020 · 8 comments
Labels
area/che-server kind/bug Outline of a bug - must adhere to the bug report template. severity/P1 Has a major impact to usage or development of the system.

Comments

@skabashnyuk
Contributor

Describe the bug

Noticed while inspecting the che-server logs.

org.eclipse.che.workspace.infrastructure.kubernetes.KubernetesInfrastructureException: Failure executing: POST at: https://f8osoproxy-test-dsaas-production.09b5.dsaas.openshiftapps.com/api/v1/namespaces/user-namespace/configmaps. Message: configmaps "workspaceID-gitconfig" already exists. Received status: Status(apiVersion=v1, code=409, details=StatusDetails(causes=[], group=null, kind=configmaps, name=workspaceID-gitconfig, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=configmaps "workspaceID-gitconfig" already exists, metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=AlreadyExists, status=Failure, additionalProperties={}).
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.KubernetesConfigsMaps.create(KubernetesConfigsMaps.java:52)
	at org.eclipse.che.workspace.infrastructure.kubernetes.wsplugins.brokerphases.DeployBroker.execute(DeployBroker.java:92)
	at org.eclipse.che.workspace.infrastructure.kubernetes.wsplugins.brokerphases.PrepareStorage.execute(PrepareStorage.java:72)
	at org.eclipse.che.workspace.infrastructure.kubernetes.wsplugins.brokerphases.ListenBrokerEvents.execute(ListenBrokerEvents.java:63)
	at org.eclipse.che.workspace.infrastructure.kubernetes.wsplugins.PluginBrokerManager.getTooling(PluginBrokerManager.java:124)
	at org.eclipse.che.core.tracing.TracingInterceptor.invoke(TracingInterceptor.java:61)
	at org.eclipse.che.workspace.infrastructure.kubernetes.wsplugins.SidecarToolingProvisioner.provision(SidecarToolingProvisioner.java:80)
	at org.eclipse.che.core.tracing.TracingInterceptor.invoke(TracingInterceptor.java:61)
	at org.eclipse.che.workspace.infrastructure.kubernetes.KubernetesInternalRuntime.internalStart(KubernetesInternalRuntime.java:182)
	at org.eclipse.che.api.workspace.server.spi.InternalRuntime.start(InternalRuntime.java:141)
	at org.eclipse.che.core.tracing.TracingInterceptor.invoke(TracingInterceptor.java:61)
	at org.eclipse.che.api.workspace.server.WorkspaceRuntimes$StartRuntimeTask.run(WorkspaceRuntimes.java:873)
	at org.eclipse.che.commons.lang.concurrent.CopyThreadLocalRunnable.run(CopyThreadLocalRunnable.java:38)
	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
	at io.opentracing.contrib.concurrent.TracedRunnable.run(TracedRunnable.java:30)
	at io.micrometer.core.instrument.internal.TimedRunnable.run(TimedRunnable.java:44)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at org.eclipse.che.commons.observability.CountedThreadFactory.lambda$newThread$0(CountedThreadFactory.java:74)
	at java.lang.Thread.run(Thread.java:748)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://f8osoproxy-test-dsaas-production.09b5.dsaas.openshiftapps.com/api/v1/namespaces/usernamespace/configmaps. Message: configmaps "workspaceID-gitconfig" already exists. Received status: Status(apiVersion=v1, code=409, details=StatusDetails(causes=[], group=null, kind=configmaps, name=workspaceID-gitconfig, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=configmaps "workspaceID-gitconfig" already exists, metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=AlreadyExists, status=Failure, additionalProperties={}).
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:476)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:415)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:780)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:349)
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.KubernetesConfigsMaps.create(KubernetesConfigsMaps.java:50)
	... 19 common frames omitted
org.eclipse.che.workspace.infrastructure.kubernetes.KubernetesInfrastructureException: Failure executing: POST at: https://f8osoproxy-test-dsaas-production.09b5.dsaas.openshiftapps.com/api/v1/namespaces/usernamespace/configmaps. Message: configmaps "workspaceID-sshconfigmap" already exists. Received status: Status(apiVersion=v1, code=409, details=StatusDetails(causes=[], group=null, kind=configmaps, name=workspaceID-sshconfigmap, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=configmaps "workspaceID-sshconfigmap" already exists, metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=AlreadyExists, status=Failure, additionalProperties={}).
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.KubernetesConfigsMaps.create(KubernetesConfigsMaps.java:52)
	at org.eclipse.che.workspace.infrastructure.kubernetes.wsplugins.brokerphases.DeployBroker.execute(DeployBroker.java:92)
	at org.eclipse.che.workspace.infrastructure.kubernetes.wsplugins.brokerphases.PrepareStorage.execute(PrepareStorage.java:72)
	at org.eclipse.che.workspace.infrastructure.kubernetes.wsplugins.brokerphases.ListenBrokerEvents.execute(ListenBrokerEvents.java:63)
	at org.eclipse.che.workspace.infrastructure.kubernetes.wsplugins.PluginBrokerManager.getTooling(PluginBrokerManager.java:124)
	at org.eclipse.che.core.tracing.TracingInterceptor.invoke(TracingInterceptor.java:61)
	at org.eclipse.che.workspace.infrastructure.kubernetes.wsplugins.SidecarToolingProvisioner.provision(SidecarToolingProvisioner.java:80)
	at org.eclipse.che.core.tracing.TracingInterceptor.invoke(TracingInterceptor.java:61)
	at org.eclipse.che.workspace.infrastructure.kubernetes.KubernetesInternalRuntime.internalStart(KubernetesInternalRuntime.java:182)
	at org.eclipse.che.api.workspace.server.spi.InternalRuntime.start(InternalRuntime.java:141)
	at org.eclipse.che.core.tracing.TracingInterceptor.invoke(TracingInterceptor.java:61)
	at org.eclipse.che.api.workspace.server.WorkspaceRuntimes$StartRuntimeTask.run(WorkspaceRuntimes.java:873)
	at org.eclipse.che.commons.lang.concurrent.CopyThreadLocalRunnable.run(CopyThreadLocalRunnable.java:38)
	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
	at io.opentracing.contrib.concurrent.TracedRunnable.run(TracedRunnable.java:30)
	at io.micrometer.core.instrument.internal.TimedRunnable.run(TimedRunnable.java:44)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at org.eclipse.che.commons.observability.CountedThreadFactory.lambda$newThread$0(CountedThreadFactory.java:74)
	at java.lang.Thread.run(Thread.java:748)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://f8osoproxy-test-dsaas-production.09b5.dsaas.openshiftapps.com/api/v1/namespaces/usernamespace/configmaps. Message: configmaps "workspaceID-sshconfigmap" already exists. Received status: Status(apiVersion=v1, code=409, details=StatusDetails(causes=[], group=null, kind=configmaps, name=workspaceID-sshconfigmap, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=configmaps "workspaceID-sshconfigmap" already exists, metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=AlreadyExists, status=Failure, additionalProperties={}).
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:476)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:415)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:780)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:349)
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.KubernetesConfigsMaps.create(KubernetesConfigsMaps.java:50)
	... 19 common frames omitted

Che version

  • latest
  • nightly
  • other: 7.7.1

Steps to reproduce

Unknown

Expected behavior

This error should not happen.

Runtime

  • kubernetes (include output of kubectl version)
  • Openshift (include output of oc version)
  • minikube (include output of minikube version and kubectl version)
  • minishift (include output of minishift version and oc version)
  • docker-desktop + K8S (include output of docker version and kubectl version)
  • other: (please specify)

Screenshots

Installation method

  • chectl
  • che-operator
  • minishift-addon
  • I don't know

Environment

  • my computer
    • Windows
    • Linux
    • macOS
  • Cloud
    • Amazon
    • Azure
    • GCE
    • other (please specify)
  • other: please specify

Additional context

n/a

@skabashnyuk skabashnyuk added kind/bug Outline of a bug - must adhere to the bug report template. area/che-server severity/P1 Has a major impact to usage or development of the system. labels Feb 3, 2020
@skabashnyuk
Contributor Author

The same error is seen in #15900.

@skabashnyuk
Contributor Author

From my observation, configmap deletion takes some time to complete. I would suggest trying randomly generated config map names, like https://github.com/eclipse/che/blob/master/infrastructures/kubernetes/src/main/java/org/eclipse/che/workspace/infrastructure/kubernetes/wsplugins/brokerphases/BrokerEnvironmentFactory.java#L138-L139
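For illustration only, a minimal sketch of what random-suffix naming could look like (the helper class and its usage below are hypothetical, not the actual Che code):

```java
import java.security.SecureRandom;

/** Hypothetical helper: append a short random suffix to an object name so that a
 *  restart never collides with an old configmap that is still being deleted. */
final class RandomSuffixNames {
  private static final SecureRandom RANDOM = new SecureRandom();
  private static final char[] ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789".toCharArray();

  static String generate(String prefix, int suffixLength) {
    StringBuilder sb = new StringBuilder(prefix).append('-');
    for (int i = 0; i < suffixLength; i++) {
      sb.append(ALPHABET[RANDOM.nextInt(ALPHABET.length)]);
    }
    return sb.toString();
  }
}

// e.g. RandomSuffixNames.generate(workspaceId + "-gitconfig", 6)
//      -> "workspaceID-gitconfig-x3k9qa" instead of the fixed "workspaceID-gitconfig"
```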

CC @vinokurig

@ericwill mentioned this issue Feb 10, 2020
@sleshchenko
Member

@skabashnyuk I believe (but am not 100% sure) it may be caused by incorrect clean-up of the workspace's k8s resources, i.e. a workspace is idled but its k8s resources are not cleaned up. The next workspace start then fails with an "object already exists" error, and the config map may simply be the first object we try to create.
Have you reproduced it even without the idling issue I've described?

@skabashnyuk
Contributor Author

skabashnyuk commented Feb 12, 2020

I mean when workspace is idled but k8s resources are not clean up.

I thought about that as well, but I didn't find any confirmation of it.

Have you reproduced it even without the idling issue I've described?

I managed to get this error by continuously starting/stopping two workspaces in parallel.
My impression is that object deletion in k8s is not instant: objects still exist for a while even after the k8s client has reported that it deleted them successfully.
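To illustrate what I mean, a minimal sketch with the fabric8 client (the namespace and configmap names are the placeholders from the logs above; `client` is an assumed KubernetesClient instance, so this is not a claim about the actual Che code path):

```java
// delete() returns as soon as the API server accepts the request...
client.configMaps()
    .inNamespace("user-namespace")
    .withName("workspaceID-gitconfig")
    .delete();

// ...but a read issued right afterwards can still see the object while it is being
// finalized, which is why the next start's POST can get a 409 AlreadyExists.
io.fabric8.kubernetes.api.model.ConfigMap maybeStillThere =
    client.configMaps()
        .inNamespace("user-namespace")
        .withName("workspaceID-gitconfig")
        .get(); // may be non-null here even though delete() already returned
```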

@sleshchenko
Member

There are objects for which we are not able to generate names, like some services...
So, instead of generating names, it would be more reliable to find a way to make sure that objects are terminated and removed before we mark the workspace as stopped.
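A minimal sketch of that idea, assuming the fabric8 client and a hypothetical helper (not existing Che code): after issuing the deletes, poll until the objects are actually gone, with a timeout, before marking the workspace as stopped.

```java
import io.fabric8.kubernetes.client.KubernetesClient;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

/** Hypothetical sketch: block until a configmap is really removed (or time out). */
final class AwaitRemoval {
  static void awaitConfigMapRemoval(
      KubernetesClient client, String namespace, String name, long timeoutSec)
      throws InterruptedException, TimeoutException {
    long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(timeoutSec);
    while (System.nanoTime() < deadline) {
      // get() returns null once the object no longer exists in the namespace
      if (client.configMaps().inNamespace(namespace).withName(name).get() == null) {
        return;
      }
      TimeUnit.MILLISECONDS.sleep(500);
    }
    throw new TimeoutException(
        "ConfigMap '" + name + "' was not removed from '" + namespace + "' within " + timeoutSec + "s");
  }
}
```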

@skabashnyuk
Contributor Author

@sleshchenko you are probably right. @metlos shares the view that naming should not be the cause of the problem. It might be related to https://github.com/eclipse/che/blob/master/infrastructures/kubernetes/src/main/java/org/eclipse/che/workspace/infrastructure/kubernetes/provision/UniqueNamesProvisioner.java. We will take care of it.

@skabashnyuk
Contributor Author

I was able to reproduce the state where the config maps were not deleted in the following case:

2020-02-12 13:26:30,318[aceSharedPool-2]  [ERROR] [.IdentityProviderConfigFactory 190]  - Cannot retrieve User OpenShift token from the 'openshift-v4' identity provider
org.eclipse.che.api.core.BadRequestException: Invalid token.
	at org.eclipse.che.multiuser.keycloak.server.KeycloakServiceClient.doRequest(KeycloakServiceClient.java:197)
	at org.eclipse.che.multiuser.keycloak.server.KeycloakServiceClient.getIdentityProviderToken(KeycloakServiceClient.java:135)
	at org.eclipse.che.workspace.infrastructure.openshift.multiuser.oauth.IdentityProviderConfigFactory.personalizeConfig(IdentityProviderConfigFactory.java:173)
	at org.eclipse.che.workspace.infrastructure.openshift.multiuser.oauth.IdentityProviderConfigFactory.buildConfig(IdentityProviderConfigFactory.java:167)
	at org.eclipse.che.workspace.infrastructure.openshift.OpenShiftClientFactory.buildConfig(OpenShiftClientFactory.java:142)
	at org.eclipse.che.workspace.infrastructure.kubernetes.KubernetesClientFactory.create(KubernetesClientFactory.java:89)
	at org.eclipse.che.workspace.infrastructure.openshift.project.OpenShiftProject.prepare(OpenShiftProject.java:99)
	at org.eclipse.che.workspace.infrastructure.openshift.project.OpenShiftProjectFactory.getOrCreate(OpenShiftProjectFactory.java:86)
	at org.eclipse.che.workspace.infrastructure.openshift.project.OpenShiftProjectFactory.getOrCreate(OpenShiftProjectFactory.java:49)
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.pvc.CommonPVCStrategy.prepare(CommonPVCStrategy.java:200)
	at org.eclipse.che.workspace.infrastructure.kubernetes.KubernetesInternalRuntime.internalStart(KubernetesInternalRuntime.java:203)
	at org.eclipse.che.api.workspace.server.spi.InternalRuntime.start(InternalRuntime.java:141)
	at org.eclipse.che.api.workspace.server.WorkspaceRuntimes$StartRuntimeTask.run(WorkspaceRuntimes.java:891)
	at org.eclipse.che.commons.lang.concurrent.CopyThreadLocalRunnable.run(CopyThreadLocalRunnable.java:38)
	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2020-02-12 13:26:30,319[aceSharedPool-2]  [WARN ] [.i.k.KubernetesInternalRuntime 252]  - Failed to start Kubernetes runtime of workspace workspaceoi6n81du3ed85my2. Cause: Your session has expired. 
Please <a href='javascript:location.reload();' target='_top'>login</a> to Che again to get access to your OpenShift account
2020-02-12 13:26:30,328[aceSharedPool-2]  [ERROR] [.IdentityProviderConfigFactory 190]  - Cannot retrieve User OpenShift token from the 'openshift-v4' identity provider
org.eclipse.che.api.core.BadRequestException: Invalid token.
	at org.eclipse.che.multiuser.keycloak.server.KeycloakServiceClient.doRequest(KeycloakServiceClient.java:197)
	at org.eclipse.che.multiuser.keycloak.server.KeycloakServiceClient.getIdentityProviderToken(KeycloakServiceClient.java:135)
	at org.eclipse.che.workspace.infrastructure.openshift.multiuser.oauth.IdentityProviderConfigFactory.personalizeConfig(IdentityProviderConfigFactory.java:173)
	at org.eclipse.che.workspace.infrastructure.openshift.multiuser.oauth.IdentityProviderConfigFactory.buildConfig(IdentityProviderConfigFactory.java:167)
	at org.eclipse.che.workspace.infrastructure.openshift.OpenShiftClientFactory.buildConfig(OpenShiftClientFactory.java:142)
	at org.eclipse.che.workspace.infrastructure.openshift.OpenShiftClientFactory.createOC(OpenShiftClientFactory.java:96)
	at org.eclipse.che.workspace.infrastructure.openshift.project.OpenShiftRoutes.delete(OpenShiftRoutes.java:84)
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.KubernetesNamespace.doRemove(KubernetesNamespace.java:256)
	at org.eclipse.che.workspace.infrastructure.openshift.project.OpenShiftProject.cleanUp(OpenShiftProject.java:155)
	at org.eclipse.che.workspace.infrastructure.kubernetes.KubernetesInternalRuntime.internalStart(KubernetesInternalRuntime.java:263)
	at org.eclipse.che.api.workspace.server.spi.InternalRuntime.start(InternalRuntime.java:141)
	at org.eclipse.che.api.workspace.server.WorkspaceRuntimes$StartRuntimeTask.run(WorkspaceRuntimes.java:891)
	at org.eclipse.che.commons.lang.concurrent.CopyThreadLocalRunnable.run(CopyThreadLocalRunnable.java:38)
	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2020-02-12 13:26:30,336[aceSharedPool-2]  [ERROR] [.IdentityProviderConfigFactory 190]  - Cannot retrieve User OpenShift token from the 'openshift-v4' identity provider
org.eclipse.che.api.core.BadRequestException: Invalid token.
	at org.eclipse.che.multiuser.keycloak.server.KeycloakServiceClient.doRequest(KeycloakServiceClient.java:197)
	at org.eclipse.che.multiuser.keycloak.server.KeycloakServiceClient.getIdentityProviderToken(KeycloakServiceClient.java:135)
	at org.eclipse.che.workspace.infrastructure.openshift.multiuser.oauth.IdentityProviderConfigFactory.personalizeConfig(IdentityProviderConfigFactory.java:173)
	at org.eclipse.che.workspace.infrastructure.openshift.multiuser.oauth.IdentityProviderConfigFactory.buildConfig(IdentityProviderConfigFactory.java:167)
	at org.eclipse.che.workspace.infrastructure.openshift.OpenShiftClientFactory.buildConfig(OpenShiftClientFactory.java:142)
	at org.eclipse.che.workspace.infrastructure.kubernetes.KubernetesClientFactory.create(KubernetesClientFactory.java:89)
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.KubernetesServices.delete(KubernetesServices.java:86)
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.KubernetesNamespace.doRemove(KubernetesNamespace.java:256)
	at org.eclipse.che.workspace.infrastructure.openshift.project.OpenShiftProject.cleanUp(OpenShiftProject.java:155)
	at org.eclipse.che.workspace.infrastructure.kubernetes.KubernetesInternalRuntime.internalStart(KubernetesInternalRuntime.java:263)
	at org.eclipse.che.api.workspace.server.spi.InternalRuntime.start(InternalRuntime.java:141)
	at org.eclipse.che.api.workspace.server.WorkspaceRuntimes$StartRuntimeTask.run(WorkspaceRuntimes.java:891)
	at org.eclipse.che.commons.lang.concurrent.CopyThreadLocalRunnable.run(CopyThreadLocalRunnable.java:38)
	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2020-02-12 13:26:30,349[aceSharedPool-2]  [ERROR] [.IdentityProviderConfigFactory 190]  - Cannot retrieve User OpenShift token from the 'openshift-v4' identity provider
org.eclipse.che.api.core.BadRequestException: Invalid token.
	at org.eclipse.che.multiuser.keycloak.server.KeycloakServiceClient.doRequest(KeycloakServiceClient.java:197)
	at org.eclipse.che.multiuser.keycloak.server.KeycloakServiceClient.getIdentityProviderToken(KeycloakServiceClient.java:135)
	at org.eclipse.che.workspace.infrastructure.openshift.multiuser.oauth.IdentityProviderConfigFactory.personalizeConfig(IdentityProviderConfigFactory.java:173)
	at org.eclipse.che.workspace.infrastructure.openshift.multiuser.oauth.IdentityProviderConfigFactory.buildConfig(IdentityProviderConfigFactory.java:167)
	at org.eclipse.che.workspace.infrastructure.openshift.OpenShiftClientFactory.buildConfig(OpenShiftClientFactory.java:142)
	at org.eclipse.che.workspace.infrastructure.kubernetes.KubernetesClientFactory.create(KubernetesClientFactory.java:89)
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.KubernetesDeployments.delete(KubernetesDeployments.java:772)
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.KubernetesNamespace.doRemove(KubernetesNamespace.java:256)
	at org.eclipse.che.workspace.infrastructure.openshift.project.OpenShiftProject.cleanUp(OpenShiftProject.java:155)
	at org.eclipse.che.workspace.infrastructure.kubernetes.KubernetesInternalRuntime.internalStart(KubernetesInternalRuntime.java:263)
	at org.eclipse.che.api.workspace.server.spi.InternalRuntime.start(InternalRuntime.java:141)
	at org.eclipse.che.api.workspace.server.WorkspaceRuntimes$StartRuntimeTask.run(WorkspaceRuntimes.java:891)
	at org.eclipse.che.commons.lang.concurrent.CopyThreadLocalRunnable.run(CopyThreadLocalRunnable.java:38)
	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2020-02-12 13:26:30,362[aceSharedPool-2]  [ERROR] [.IdentityProviderConfigFactory 190]  - Cannot retrieve User OpenShift token from the 'openshift-v4' identity provider
org.eclipse.che.api.core.BadRequestException: Invalid token.
	at org.eclipse.che.multiuser.keycloak.server.KeycloakServiceClient.doRequest(KeycloakServiceClient.java:197)
	at org.eclipse.che.multiuser.keycloak.server.KeycloakServiceClient.getIdentityProviderToken(KeycloakServiceClient.java:135)
	at org.eclipse.che.workspace.infrastructure.openshift.multiuser.oauth.IdentityProviderConfigFactory.personalizeConfig(IdentityProviderConfigFactory.java:173)
	at org.eclipse.che.workspace.infrastructure.openshift.multiuser.oauth.IdentityProviderConfigFactory.buildConfig(IdentityProviderConfigFactory.java:167)
	at org.eclipse.che.workspace.infrastructure.openshift.OpenShiftClientFactory.buildConfig(OpenShiftClientFactory.java:142)
	at org.eclipse.che.workspace.infrastructure.kubernetes.KubernetesClientFactory.create(KubernetesClientFactory.java:89)
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.KubernetesSecrets.delete(KubernetesSecrets.java:63)
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.KubernetesNamespace.doRemove(KubernetesNamespace.java:256)
	at org.eclipse.che.workspace.infrastructure.openshift.project.OpenShiftProject.cleanUp(OpenShiftProject.java:155)
	at org.eclipse.che.workspace.infrastructure.kubernetes.KubernetesInternalRuntime.internalStart(KubernetesInternalRuntime.java:263)
	at org.eclipse.che.api.workspace.server.spi.InternalRuntime.start(InternalRuntime.java:141)
	at org.eclipse.che.api.workspace.server.WorkspaceRuntimes$StartRuntimeTask.run(WorkspaceRuntimes.java:891)
	at org.eclipse.che.commons.lang.concurrent.CopyThreadLocalRunnable.run(CopyThreadLocalRunnable.java:38)
	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2020-02-12 13:26:30,369[aceSharedPool-2]  [ERROR] [.IdentityProviderConfigFactory 190]  - Cannot retrieve User OpenShift token from the 'openshift-v4' identity provider
org.eclipse.che.api.core.BadRequestException: Invalid token.
	at org.eclipse.che.multiuser.keycloak.server.KeycloakServiceClient.doRequest(KeycloakServiceClient.java:197)
	at org.eclipse.che.multiuser.keycloak.server.KeycloakServiceClient.getIdentityProviderToken(KeycloakServiceClient.java:135)
	at org.eclipse.che.workspace.infrastructure.openshift.multiuser.oauth.IdentityProviderConfigFactory.personalizeConfig(IdentityProviderConfigFactory.java:173)
	at org.eclipse.che.workspace.infrastructure.openshift.multiuser.oauth.IdentityProviderConfigFactory.buildConfig(IdentityProviderConfigFactory.java:167)
	at org.eclipse.che.workspace.infrastructure.openshift.OpenShiftClientFactory.buildConfig(OpenShiftClientFactory.java:142)
	at org.eclipse.che.workspace.infrastructure.kubernetes.KubernetesClientFactory.create(KubernetesClientFactory.java:89)
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.KubernetesConfigsMaps.delete(KubernetesConfigsMaps.java:64)
	at org.eclipse.che.workspace.infrastructure.kubernetes.namespace.KubernetesNamespace.doRemove(KubernetesNamespace.java:256)
	at org.eclipse.che.workspace.infrastructure.openshift.project.OpenShiftProject.cleanUp(OpenShiftProject.java:155)
	at org.eclipse.che.workspace.infrastructure.kubernetes.KubernetesInternalRuntime.internalStart(KubernetesInternalRuntime.java:263)
	at org.eclipse.che.api.workspace.server.spi.InternalRuntime.start(InternalRuntime.java:141)
	at org.eclipse.che.api.workspace.server.WorkspaceRuntimes$StartRuntimeTask.run(WorkspaceRuntimes.java:891)
	at org.eclipse.che.commons.lang.concurrent.CopyThreadLocalRunnable.run(CopyThreadLocalRunnable.java:38)
	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2020-02-12 13:26:30,377[aceSharedPool-2]  [WARN ] [.i.k.KubernetesInternalRuntime 265]  - Failed to clean up namespace after workspace 'workspaceoi6n81du3ed85my2' start failing. Cause: Error(s) occurs while cleaning up the namespace. Your session has expired. 
Please <a href='javascript:location.reload();' target='_top'>login</a> to Che again to get access to your OpenShift account Your session has expired. 
Please <a href='javascript:location.reload();' target='_top'>login</a> to Che again to get access to your OpenShift account Your session has expired. 
Please <a href='javascript:location.reload();' target='_top'>login</a> to Che again to get access to your OpenShift account Your session has expired. 
Please <a href='javascript:location.reload();' target='_top'>login</a> to Che again to get access to your OpenShift account Your session has expired. 
Please <a href='javascript:location.reload();' target='_top'>login</a> to Che again to get access to your OpenShift account
2020-02-12 13:26:30,398[aceSharedPool-2]  [INFO ] [o.e.c.a.w.s.WorkspaceRuntimes 915]   - Workspace 'skabashn:golang-b6noe' with id 'workspaceoi6n81du3ed85my2' start failed

@skabashnyuk
Contributor Author

Duplicate of #14859.
