Support parallel QEMU kola tests based on memory amount #3118

Open · jlebon opened this issue Oct 12, 2022 · 1 comment
Labels
enhancement New feature or request

Comments

jlebon commented Oct 12, 2022

Feature Request

Desired Feature

In addition to per-process parallelism, kola should support capping the total amount of memory used by running tests. This ensures it meshes well with Kubernetes pods, where memory limits may be enforced.

One strawman is to add a new --qemu-max-memory MEM_IN_MIB option. With it, kola would treat --parallel as a maximum while never exceeding the --qemu-max-memory budget.

Example Usage

```
kola run --parallel auto --qemu-max-memory 6144
```

Other Information

Split out of #1438.

If there's a way for kola to auto-discover how much memory is available to the cgroup it's running in, we could have a --qemu-max-memory auto like we do for --parallel.
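One possible way to auto-discover the limit (a sketch, not anything kola currently implements): on a cgroup v2 system the effective memory cap is exposed at /sys/fs/cgroup/memory.max (either a byte count or the literal string "max" for unlimited), with /sys/fs/cgroup/memory/memory.limit_in_bytes as the cgroup v1 equivalent. The helper names below (`detectCgroupMemLimitMiB`, `parseMemMax`) are hypothetical:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// detectCgroupMemLimitMiB returns the memory limit (in MiB) of the cgroup
// this process runs in, or 0 if no limit is set. Hypothetical helper;
// kola does not currently ship this.
func detectCgroupMemLimitMiB() (uint64, error) {
	// cgroup v2 unified hierarchy
	if b, err := os.ReadFile("/sys/fs/cgroup/memory.max"); err == nil {
		return parseMemMax(string(b))
	}
	// cgroup v1 fallback
	if b, err := os.ReadFile("/sys/fs/cgroup/memory/memory.limit_in_bytes"); err == nil {
		return parseMemMax(string(b))
	}
	return 0, fmt.Errorf("could not read cgroup memory limit")
}

// parseMemMax parses the contents of memory.max: either "max" (no limit)
// or a byte count. Returns MiB, with 0 meaning "no limit".
func parseMemMax(s string) (uint64, error) {
	s = strings.TrimSpace(s)
	if s == "max" {
		return 0, nil
	}
	bytes, err := strconv.ParseUint(s, 10, 64)
	if err != nil {
		return 0, err
	}
	return bytes / (1024 * 1024), nil
}

func main() {
	mib, _ := parseMemMax("6442450944\n") // 6 GiB expressed in bytes
	fmt.Println(mib)                      // 6144
	mib, _ = parseMemMax("max")
	fmt.Println(mib) // 0, i.e. unlimited
}
```

Note that cgroup v1 reports a very large number rather than "max" when no limit is set, so a real implementation would also want to treat absurdly large values as "unlimited".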

@jlebon jlebon added the enhancement New feature or request label Oct 12, 2022
jlebon commented Oct 12, 2022

Most tests require only 1G of RAM; only recently were some bumped to 1.5G due to https://bugzilla.redhat.com/show_bug.cgi?id=1907030 (hopefully temporarily). So I don't think we should spend a lot of time on the scheduling algorithm here. Even a basic one like "wait until enough memory has been freed to schedule the next test" should suffice.
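That "wait until enough memory has been freed" policy amounts to a weighted semaphore over a MiB budget. A minimal sketch of the idea, assuming each test declares its memory need up front (the `memBudget` type and its methods are hypothetical, not kola's actual code; production Go code might instead use golang.org/x/sync/semaphore.Weighted):

```go
package main

import (
	"fmt"
	"sync"
)

// memBudget is a counting budget over MiB: acquire blocks until the
// requested amount fits under the total limit. Hypothetical sketch.
type memBudget struct {
	mu    sync.Mutex
	cond  *sync.Cond
	total uint64 // e.g. the value of --qemu-max-memory
	used  uint64
}

func newMemBudget(totalMiB uint64) *memBudget {
	b := &memBudget{total: totalMiB}
	b.cond = sync.NewCond(&b.mu)
	return b
}

// acquire blocks until mib can be reserved without exceeding the budget.
func (b *memBudget) acquire(mib uint64) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for b.used+mib > b.total {
		b.cond.Wait()
	}
	b.used += mib
}

// release returns mib to the budget and wakes any waiting tests.
func (b *memBudget) release(mib uint64) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.used -= mib
	b.cond.Broadcast()
}

func main() {
	budget := newMemBudget(6144) // kola run ... --qemu-max-memory 6144
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			need := uint64(1536) // a 1.5G test; at most 4 run concurrently
			budget.acquire(need)
			defer budget.release(need)
			fmt.Printf("test %d running\n", id)
		}(i)
	}
	wg.Wait()
	fmt.Println("all tests done")
}
```

A real implementation would also need to combine this with the --parallel count cap, and reject (rather than deadlock on) any single test whose declared need exceeds the total budget.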

jlebon added a commit to jlebon/fedora-coreos-pipeline that referenced this issue Oct 12, 2022
Some of the tests require 1.5G of RAM now due to
https://bugzilla.redhat.com/show_bug.cgi?id=1907030. We need to bump our
calculation of parallelism accordingly in the absence of
coreos/coreos-assembler#3118. Otherwise
depending on how kola schedules tests, we might go over the limit.
dustymabe pushed a commit to coreos/fedora-coreos-pipeline that referenced this issue Oct 12, 2022
Some of the tests require 1.5G of RAM now due to
https://bugzilla.redhat.com/show_bug.cgi?id=1907030. We need to bump our
calculation of parallelism accordingly in the absence of
coreos/coreos-assembler#3118. Otherwise
depending on how kola schedules tests, we might go over the limit.
jlebon added a commit to jlebon/coreos-assembler that referenced this issue Oct 12, 2022
This roughly matches what we do now in e.g. the FCOS pipeline and a few
other places. Otherwise, we may hit against our memory limit and get
evicted.

See coreos#3118 for more
background info.
jlebon added a commit that referenced this issue Oct 12, 2022
This roughly matches what we do now in e.g. the FCOS pipeline and a few
other places. Otherwise, we may hit against our memory limit and get
evicted.

See #3118 for more
background info.
dustymabe pushed a commit to dustymabe/coreos-assembler that referenced this issue Nov 3, 2022
This roughly matches what we do now in e.g. the FCOS pipeline and a few
other places. Otherwise, we may hit against our memory limit and get
evicted.

See coreos#3118 for more
background info.

(cherry picked from commit 5d12d39)