
driver/docker: enable setting hard/soft memory limits #8087

Merged · shoenig merged 7 commits into master from f-docker-mem-config on Jun 1, 2020

Conversation

@shoenig (Member) commented Jun 1, 2020

Fixes #2093

Enable configuring `memory_hard_limit` in the docker config stanza for tasks.
If set, this field will be passed to the container runtime as `--memory`, and
the `memory` value from the task resource configuration will be passed as
`--memory-reservation`, creating hard and soft memory limits for tasks using
the docker task driver.
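
For reference, the resulting limits correspond roughly to the following docker run flags (illustrative only; the driver applies them through the Docker API rather than by shelling out to the CLI, and the placeholders stand in for the configured values):

$ docker run --memory <memory_hard_limit>m --memory-reservation <resources.memory>m <image>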
@shoenig (Member Author) commented Jun 1, 2020

Example:

$ cat example.nomad 
job "example" {
  datacenters = ["dc1"]

  group "cache" {
    task "redis" {
      driver = "docker"

      config {
        image = "redis:3.2"

        memory_hard_limit = 512 # new!

        port_map {
          db = 6379
        }
      }

      resources {
        cpu    = 500
        memory = 256

        network {
          mbits = 10
          port "db" {}
        }
      }
    }
  }
}
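
Run the job with the standard CLI (shown for completeness; any equivalent submission works):

$ nomad job run example.nomad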

The container's Memory and MemoryReservation settings now carry the hard / soft values:

$ docker inspect 1f6 | jq .[].HostConfig.Memory
536870912
$ docker inspect 1f6 | jq .[].HostConfig.MemoryReservation
268435456
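
(512 MiB = 536870912 bytes and 256 MiB = 268435456 bytes, matching the configured hard and soft limits.)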

The environment variable still reflects the soft limit, since staying under that limit is the intent:

$ docker inspect 1f6 | jq .[].Config.Env | grep NOMAD_MEMORY_LIMIT
  "NOMAD_MEMORY_LIMIT=256",

@schmichael (Member) left a comment

Code and approach look great, but I think there are two open questions:

  1. If a host runs out of memory and the oomkiller goes on a rampage, do things over their soft limit get killed first?
  2. Do we need to provide an option to disable this feature on a per-client basis (like Docker volumes from the host)?

I believe the answer to 1 is yes, in which case the answer to 2 may be no. It might be nice to loosely document the answer to 1, as I expect it's many people's first question.

Review threads on CHANGELOG.md and website/pages/docs/drivers/docker.mdx (resolved)
shoenig and others added 2 commits June 1, 2020 10:52
Co-authored-by: Michael Schurter <mschurter@hashicorp.com>
Co-authored-by: Michael Schurter <mschurter@hashicorp.com>
@shoenig (Member Author) commented Jun 1, 2020

From the docker and cgroup docs, it seems like the answer to question 1 is a weak "yes".

Docker docs (--memory-reservation):

    Allows you to specify a soft limit smaller than --memory which is activated when Docker detects contention or low memory on the host machine.

cgroup docs:

    When the system detects memory contention or low memory, control groups are pushed back to their soft limits. If the soft limit of each control group is very high, they are pushed back as much as possible to make sure that one control group does not starve the others of memory.
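
On a cgroup v1 host using Docker's cgroupfs driver, the two limits can be inspected directly (a sketch; the paths depend on the cgroup driver and layout, and the values shown match the example above):

$ cat /sys/fs/cgroup/memory/docker/<container-id>/memory.limit_in_bytes
536870912
$ cat /sys/fs/cgroup/memory/docker/<container-id>/memory.soft_limit_in_bytes
268435456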

// hard limit (memory_hard_limit) and soft limit (resources.memory), in bytes
require.Equal(t, int64(512*1024*1024), memory)
require.Equal(t, int64(256*1024*1024), memoryReservation)
})
}
A reviewer (Member) commented on the snippet above:

I wonder if there's an integration test we could piggyback on to ensure that Docker is interpreting our limits as we expect. Similar to this: https://github.com/hashicorp/nomad/blob/v0.11.2/drivers/docker/driver_test.go#L1406-L1454

Not a big deal or blocker.

@shoenig (Member Author) replied:
Ahh, I was looking for something like these and missed them. Added!

shoenig merged commit 81355d9 into master on Jun 1, 2020
shoenig deleted the f-docker-mem-config branch on June 1, 2020 17:16
shoenig added a commit that referenced this pull request Jun 1, 2020
driver/docker: enable setting hard/soft memory limits
shoenig added this to the 0.11.3 milestone on Jun 1, 2020
@analytically commented:
failed to create container: API error (400): Minimum memoryswap limit should be larger than memory limit, see usage

@analytically commented:
does this set memory.limit_in_bytes?

@shoenig (Member Author) commented Jun 10, 2020

@analytically do you mind opening a fresh ticket and including some contextual information, like the Docker version and what the task & driver configs look like? Thanks!

@github-actions bot commented Jan 3, 2023

I'm going to lock this pull request because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active contributions.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators on Jan 3, 2023
Successfully merging this pull request may close these issues: [question] Launch docker container with hard memory limit higher than specified in task resources

4 participants