
Rare error on start workspace. #14859

Closed
AndrienkoAleksandr opened this issue Oct 11, 2019 · 7 comments

Labels
  • kind/bug: Outline of a bug; must adhere to the bug report template.
  • lifecycle/stale: Denotes an issue or PR that has remained open with no activity and has become stale.
  • severity/P2: Has a minor but important impact to the usage or development of the system.
  • status/info-needed: More information is needed before the issue can move into the “analyzing” state for engineering.

Comments

@AndrienkoAleksandr (Contributor) commented Oct 11, 2019

Describe the bug

Sometimes a workspace stops uncleanly and does not clean up some of its resources. On the next workspace start you can then hit an error like this:

Error: Failed to run the workspace: "Failure executing: POST at: https://172.30.0.1/api/v1/namespaces/che/configmaps. Message: configmaps "workspace47n4ojnzugm1jgd3-sshconfigmap" already exists. Received status: Status(apiVersion=v1, code=409, details=StatusDetails(causes=[], group=null, kind=configmaps, name=workspace47n4ojnzugm1jgd3-sshconfigmap, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=configmaps "workspace47n4ojnzugm1jgd3-sshconfigmap" already exists, metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=AlreadyExists, status=Failure, additionalProperties={}).

After one more restart the error is gone and the workspace starts fine.
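For context: the 409 is the API server rejecting a plain create (POST) of an object that a previous unclean stop left behind. Below is a minimal sketch of the failure mode and a manual cleanup, using the fabric8 Kubernetes client that Che's Kubernetes infrastructure is built on. The class name is made up, and the namespace and configmap name are copied from the error above:

```java
import io.fabric8.kubernetes.api.model.ConfigMap;
import io.fabric8.kubernetes.api.model.ConfigMapBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientException;

public class ConfigMapConflictDemo {
  public static void main(String[] args) {
    try (KubernetesClient client = new DefaultKubernetesClient()) {
      ConfigMap cm = new ConfigMapBuilder()
          .withNewMetadata()
          .withName("workspace47n4ojnzugm1jgd3-sshconfigmap")
          .endMetadata()
          .build();
      try {
        // A plain create() POSTs the object; if a previous start left it
        // behind, the API server answers 409 AlreadyExists (the error above).
        client.configMaps().inNamespace("che").create(cm);
      } catch (KubernetesClientException e) {
        if (e.getCode() == 409) {
          // Leftover from an unclean stop: deleting it lets the next start succeed.
          client.configMaps()
              .inNamespace("che")
              .withName(cm.getMetadata().getName())
              .delete();
        } else {
          throw e;
        }
      }
    }
  }
}
```

fabric8 also offers createOrReplace(...), which would make the start idempotent instead of failing with 409, at the cost of silently adopting whatever the previous run left behind.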

Che version

  • latest
  • nightly
  • other: please specify

Runtime

  • kubernetes (include output of kubectl version)
  • OpenShift (include output of oc version)
  • minikube (include output of minikube version and kubectl version)
  • minishift (include output of minishift version and oc version)
  • docker-desktop + K8S (include output of docker version and kubectl version)
  • other: (please specify)

Screenshots

Installation method

  • chectl
  • che-operator
  • minishift-addon
  • I don't know

Environment

  • my computer
    • Windows
    • Linux
    • macOS
  • Cloud
    • Amazon
    • Azure
    • GCE
    • other (please specify)
  • other: please specify

Additional context

@AndrienkoAleksandr added the kind/bug and team/platform labels Oct 11, 2019
@tolusha added the severity/P2 label Oct 11, 2019
@skabashnyuk (Contributor)

  • What is your chectl version?
  • Did you use che-operator to install che?
  • Do you know steps to reproduce this bug?
  • What stack did you use?

@tsmaeder changed the title from "Seldom error on start workspace." to "Rare error on start workspace." Oct 14, 2019
@skabashnyuk added the status/info-needed label Oct 29, 2019
@skabashnyuk (Contributor)

Any messages in the che-server logs?

@skabashnyuk added this to the Backlog - Platform milestone Oct 30, 2019
@sleshchenko (Member)

@skabashnyuk

> What is your chectl version?
> Did you use che-operator to install che?
> What stack did you use?

I believe it does not matter.

> Do you know steps to reproduce this bug?

It's a good question. It's quite easy to simulate such a situation, but not easy to reproduce it in a normal way. I can imagine a situation where the Che Server was killed during a workspace start/stop, or where an error occurred during error processing but before the namespace was cleaned up: https://github.com/eclipse/che/blob/5d38d7a715bee75b90b1de8fc4bc1a930023f9e4/infrastructures/kubernetes/src/main/java/org/eclipse/che/workspace/infrastructure/kubernetes/KubernetesInternalRuntime.java#L259

The solution here would be to try to clean up leftover resources before the workspace starts; I have no idea how much time that would cost. A sketch of the idea is below.
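This is not the actual Che code path, just a sketch of that idea with the fabric8 client. The label key che.workspace_id, the namespace, and the method name are assumptions for illustration; the real implementation would have to select exactly the objects Che created for the workspace:

```java
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class PreStartCleanup {

  // Best-effort removal of objects a previous unclean stop may have left
  // behind. The "che.workspace_id" label key is assumed for illustration.
  static void cleanUpLeftovers(KubernetesClient client, String namespace, String workspaceId) {
    client.configMaps().inNamespace(namespace).withLabel("che.workspace_id", workspaceId).delete();
    client.services().inNamespace(namespace).withLabel("che.workspace_id", workspaceId).delete();
    client.pods().inNamespace(namespace).withLabel("che.workspace_id", workspaceId).delete();
  }

  public static void main(String[] args) {
    try (KubernetesClient client = new DefaultKubernetesClient()) {
      // Run this before creating any new workspace objects, so a stale
      // "...-sshconfigmap" can no longer cause a 409 AlreadyExists.
      cleanUpLeftovers(client, "che", "workspace47n4ojnzugm1jgd3");
    }
  }
}
```

Deleting by label selector is a no-op on a clean start, so the extra cost should mostly be a few additional API calls per start.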

@lautou commented Nov 25, 2019

I don't know if it is related or not. I get this issue in 7.4.0 (it was already the case in 7.3.0):
My workspace appears as not running in the Eclipse Che console, although the workspace pods are running.
[screenshot]
[screenshot]

When I click on the workspace to open it, it spawns a new mkdir workspace pod.
[screenshot]
Of course this pod is stuck, because the persistent volume is still bound by the previous workspace pods.
[screenshot]
Any clue why there is a discrepancy between the pods that are running and the fact that they are marked as stopped in Eclipse Che?

@lautou commented Nov 25, 2019

Please find the logs attached.
Eclipse_Che_Logs.zip

@lautou commented Nov 26, 2019

Finally I found the root cause: #15312

@che-bot (Contributor) commented May 27, 2020

Issues go stale after 180 days of inactivity. lifecycle/stale issues rot after an additional 7 days of inactivity and eventually close.

Mark the issue as fresh with /remove-lifecycle stale in a new comment.

If this issue is safe to close now please do so.

Moderators: Add lifecycle/frozen label to avoid stale mode.

@che-bot added the lifecycle/stale label May 27, 2020
@che-bot closed this as completed Jun 3, 2020