driver/docker: enable setting hard/soft memory limits #8087
Conversation
Fixes #2093

Enables configuring `memory_hard_limit` in the docker config stanza for tasks. If set, this field will be passed to the container runtime as `--memory`, and the `memory` value from the task's resource stanza will be passed as `--memory-reservation`, creating hard and soft memory limits for tasks using the docker task driver.
Example (`$ cat example.nomad`):

```hcl
job "example" {
  datacenters = ["dc1"]

  group "cache" {
    task "redis" {
      driver = "docker"

      config {
        image             = "redis:3.2"
        memory_hard_limit = 512 # new!

        port_map {
          db = 6379
        }
      }

      resources {
        cpu    = 500
        memory = 256

        network {
          mbits = 10

          port "db" {}
        }
      }
    }
  }
}
```
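For context, here is a minimal sketch of how those two values could be translated into the hard and soft limits Docker expects, in bytes. The struct below only mirrors the relevant Docker `HostConfig` fields (`Memory` for `--memory`, `MemoryReservation` for `--memory-reservation`); it is illustrative, not the driver's actual code.

```go
package main

import "fmt"

// hostMemoryConfig mirrors the two Docker HostConfig fields involved:
// Memory is the hard limit (--memory) and MemoryReservation is the
// soft limit (--memory-reservation), both in bytes.
type hostMemoryConfig struct {
	Memory            int64
	MemoryReservation int64
}

// memoryLimits converts the task's memory settings (in MB) into bytes.
// If memory_hard_limit is set, it becomes the hard limit and the
// resources.memory value becomes the soft limit; otherwise the
// resources.memory value remains the hard limit, as before.
func memoryLimits(resourcesMemoryMB, hardLimitMB int64) hostMemoryConfig {
	const mb = 1024 * 1024
	if hardLimitMB > 0 {
		return hostMemoryConfig{
			Memory:            hardLimitMB * mb,
			MemoryReservation: resourcesMemoryMB * mb,
		}
	}
	return hostMemoryConfig{Memory: resourcesMemoryMB * mb}
}

func main() {
	// For the example job: 512 MiB hard, 256 MiB soft.
	fmt.Printf("%+v\n", memoryLimits(256, 512))
	// {Memory:536870912 MemoryReservation:268435456}
}
```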
The environment variable stays on the soft limit, since staying under that limit is the intent:

```
$ docker inspect 1f6 | jq .[].Config.Env | grep NOMAD_MEMORY_LIMIT
"NOMAD_MEMORY_LIMIT=256",
```
Code and approach look great, but I think there are two open questions:
- If a host runs out of memory and the oomkiller goes on a rampage, do things over their soft limit get killed first?
- Do we need to provide an option to disable this feature on a per-client basis (like Docker volumes from the host)?
I believe the answer to 1 is yes, in which case the answer to 2 may be no. It might be nice to loosely document the answer to 1, as I expect it's many people's first question.
From the Docker and cgroup docs, it seems like the answer to question 1 is a weak "yes".
```go
		require.Equal(t, int64(512*1024*1024), memory)
		require.Equal(t, int64(256*1024*1024), memoryReservation)
	})
}
```
I wonder if there's an integration test we could piggyback on to ensure that Docker is interpreting our limits as we expect. Similar to this: https://github.com/hashicorp/nomad/blob/v0.11.2/drivers/docker/driver_test.go#L1406-L1454
Not a big deal or blocker.
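For anyone who wants to double-check outside the driver's own test suite, here is a standalone sketch using the Docker Go SDK to confirm the limits a running container actually received. The program and its argument handling are illustrative; it simply inspects a container by ID (for the example job above, expect 512 MiB hard and 256 MiB soft).

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/docker/docker/client"
)

// Inspect a running container and print the memory limits Docker actually
// applied, to confirm they match what Nomad passed in. The container ID is
// supplied on the command line; this is a standalone check, not the
// driver's own test harness.
func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: checklimits <container-id>")
	}

	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}

	inspect, err := cli.ContainerInspect(context.Background(), os.Args[1])
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("hard limit (Memory):            %d bytes\n", inspect.HostConfig.Memory)
	fmt.Printf("soft limit (MemoryReservation): %d bytes\n", inspect.HostConfig.MemoryReservation)
}
```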
Ahh, I was looking for something like these and missed them. Added!
Force-pushed from 114a08d to 5a0dc99.
driver/docker: enable setting hard/soft memory limits
failed to create container: API error (400): Minimum memoryswap limit should be larger than memory limit, see usage
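For readers who hit this 400: the Docker daemon rejects a container whose memory-swap limit is set (and not -1 for unlimited) but smaller than its hard memory limit, which is exactly what the message above reports. The guard below only illustrates that constraint; it is not a change made in this PR.

```go
package main

import "fmt"

// clampSwap illustrates Docker's validation: a memory-swap limit that is
// set (> 0) but smaller than the hard memory limit triggers the
// "Minimum memoryswap limit should be larger than memory limit" error,
// so any driver raising Memory must raise MemorySwap along with it.
func clampSwap(memoryBytes, swapBytes int64) int64 {
	if swapBytes > 0 && swapBytes < memoryBytes {
		return memoryBytes
	}
	return swapBytes
}

func main() {
	const mb = 1024 * 1024
	// Swap raised to match the 512 MiB hard limit: prints 536870912.
	fmt.Println(clampSwap(512*mb, 256*mb))
}
```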
Does this set `memory.limit_in_bytes`?
@analytically do you mind opening a fresh ticket and including some contextual information, like the Docker version and what the task & driver configs look like? Thanks!
References:
- https://www.nomadproject.io/docs/drivers/docker#memory_hard_limit
- hashicorp/nomad#8087
- femiwiki/femiwiki#116 (comment)

(cherry picked from commit cc93e51)
I'm going to lock this pull request because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active contributions.