failure to start a devWorkspace with low memoryLimit #462

Closed
benoitf opened this issue Jun 18, 2021 · 1 comment · Fixed by #464

@benoitf (Collaborator) commented Jun 18, 2021

Using the following DevWorkspace, the workspace does not start; it stays stuck waiting for the workspace deployment:

NAMESPACE   NAME      DEVWORKSPACE ID             PHASE      INFO
default     failure   workspace622225a967194e5b   Starting   Waiting for workspace deployment

DevWorkspace used:

kind: DevWorkspace
apiVersion: workspace.devfile.io/v1alpha2
metadata:
  name: failure
spec:
  started: true
  template:
    components:
      - name: tools
        container:
          image: quay.io/eclipse/che-quarkus:next
          memoryLimit: 1G
      - name: ubi-minimal
        container:
          image: registry.access.redhat.com/ubi8/ubi-minimal
          command: ['tail']
          args: ['-f', '/dev/null']
          memoryLimit: 32M

The following error is shown in the devworkspace-controller pod logs:

spec.template.spec.containers[1].resources.requests: Invalid value: "64M": must be less than or equal to memory limit
{"level":"info","ts":1624016878.2157364,"logger":"controllers.DevWorkspace","msg":"Waiting on deployment to be ready","Request.Namespace":"default","Request.Name":"failure","devworkspace_id":"workspace622225a967194e5b"}
{"level":"error","ts":1624016878.2306747,"logger":"controller","msg":"Reconciler error","reconcilerGroup":"workspace.devfile.io","reconcilerKind":"DevWorkspace","controller":"devworkspace","name":"failure","namespace":"default","error":"Deployment.apps \"workspace622225a967194e5b\" is invalid: spec.template.spec.containers[1].resources.requests: Invalid value: \"64M\": must be less than or equal to memory limit","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.3/pkg/internal/controller/controller.go:246\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.3/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.3/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.18.8/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.18.8/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.18.8/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/apimachinery@v0.18.8/pkg/util/wait/wait.go:90"}

So it seems that if we specify a memoryLimit of 32M, the DevWorkspace controller still applies its default memoryRequest of 64M (I suspect it should use the limit value in that case).

And in any case, the failure is not reported to the end user.
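
For the first point, here is a minimal sketch, in Go, of how a default memory request could be capped at the container's memoryLimit. The helper name and the 64Mi default are assumptions for illustration, not the operator's actual code.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// Assumed default memory request (64Mi, per the discussion above).
var defaultMemoryRequest = resource.MustParse("64Mi")

// memoryRequestFor is a hypothetical helper: it returns the default request
// unless the container declares a lower memory limit, in which case the limit
// itself is used so that request <= limit always holds.
func memoryRequestFor(limits corev1.ResourceList) resource.Quantity {
	limit, ok := limits[corev1.ResourceMemory]
	if ok && limit.Cmp(defaultMemoryRequest) < 0 {
		return limit
	}
	return defaultMemoryRequest
}

func main() {
	limits := corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32M")}
	fmt.Println(memoryRequestFor(limits).String()) // prints "32M" instead of defaulting to 64Mi
}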

@amisevsk (Collaborator) commented:

We should add some error handling for request > limit.

As background, I believe the reason we chose 64Mi as a default was that we were seeing errors on container creation in OpenShift when the memory limit was too low (see e.g. #304).
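
A minimal sketch of such a check, assuming the resolved per-container requests and limits are available as a corev1.ResourceRequirements; the function name and the way the error would be surfaced on the DevWorkspace status are assumptions, not existing operator code.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// validateMemory is a hypothetical check: it returns an error when a container's
// memory request exceeds its memory limit, so the controller could fail the
// DevWorkspace with a readable message instead of letting the API server reject
// the Deployment.
func validateMemory(containerName string, res corev1.ResourceRequirements) error {
	request, hasRequest := res.Requests[corev1.ResourceMemory]
	limit, hasLimit := res.Limits[corev1.ResourceMemory]
	if hasRequest && hasLimit && request.Cmp(limit) > 0 {
		return fmt.Errorf("container %s: memory request %s is greater than memory limit %s",
			containerName, request.String(), limit.String())
	}
	return nil
}

func main() {
	// Reproduces the failing case from this issue: default 64Mi request vs. 32M limit.
	res := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
		Limits:   corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32M")},
	}
	if err := validateMemory("ubi-minimal", res); err != nil {
		fmt.Println(err) // this message could be reported on the DevWorkspace status
	}
}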

@amisevsk amisevsk self-assigned this Jun 18, 2021
@sleshchenko sleshchenko added this to the v0.7.0 milestone Jun 22, 2021
@sleshchenko sleshchenko added the sprint/current label Jun 22, 2021