CA-416516: vm.slice/cgroup.procs write operation gets EBUSY #6650
Conversation
See the cgroup v2 "no internal processes" rule: if cgroup.subtree_control is not empty and we attach a pid to cgroup.procs, the kernel returns EBUSY. Signed-off-by: Chunjie Zhu <chunjie.zhu@cloud.com>
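The fix described by the commit message can be sketched as follows. The `vm.slice` name and the `qemu-dm` leaf come from this PR; the helper function names are mine, and actually writing to cgroup.procs requires the appropriate privileges:

```python
import os

def qemu_dm_cgroup_dir(cgroup_slice):
    """Build the leaf cgroup path for qemu-dm under the given slice.

    Under cgroup v2's "no internal processes" rule, a cgroup whose
    cgroup.subtree_control is non-empty may not hold processes itself,
    so pids must be attached to a leaf child such as "qemu-dm".
    """
    return os.path.join("/sys/fs/cgroup", cgroup_slice, "qemu-dm")

def attach_pid(cgroup_slice, pid):
    """Create the leaf cgroup if needed and move pid into it.

    Writing the pid into the leaf's cgroup.procs (rather than the
    slice's own cgroup.procs) is what avoids the EBUSY.
    """
    leaf = qemu_dm_cgroup_dir(cgroup_slice)
    os.makedirs(leaf, exist_ok=True)
    with open(os.path.join(leaf, "cgroup.procs"), "w") as f:
        f.write(str(pid))
```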
Adding @rosslagerwall @MarkSymsCtx @DeliZhangX @stephenchengCloud @liulinC to review.
It looks like there is something wrong with my account. I cannot add reviewers.
Thanks @BengangY.
    # into cgroup.procs, kernel would return EBUSY
    cgroup_slice_dir = os.path.join("/sys/fs/cgroup", cgroup_slice)
    qemu_dm_dir = os.path.join(cgroup_slice_dir, "qemu-dm")
    if not os.path.exists(qemu_dm_dir):
os.makedirs(qemu_dm_dir, exist_ok=True) ? https://www.w3schools.com/python/ref_os_makedirs.asp
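A minimal illustration of this suggestion: `os.makedirs(..., exist_ok=True)` replaces the explicit existence check and is safe to call repeatedly (shown here against a temporary directory rather than the real cgroup filesystem):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    qemu_dm_dir = os.path.join(tmp, "vm.slice", "qemu-dm")

    # Creates intermediate directories; no error if they already exist.
    os.makedirs(qemu_dm_dir, exist_ok=True)
    os.makedirs(qemu_dm_dir, exist_ok=True)  # second call is a no-op

    created = os.path.isdir(qemu_dm_dir)

print(created)
```

Without `exist_ok=True`, the second call would raise `FileExistsError`.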
    # if cgroup.subtree_control is not empty, and we attach a pid
    # into cgroup.procs, kernel would return EBUSY
    cgroup_slice_dir = os.path.join("/sys/fs/cgroup", cgroup_slice)
    qemu_dm_dir = os.path.join(cgroup_slice_dir, "qemu-dm")
can be one line: os.path.join("/sys/fs/cgroup", cgroup_slice, "qemu-dm")
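`os.path.join` accepts any number of path components, so the two calls from the diff collapse into one with identical output:

```python
import os

cgroup_slice = "vm.slice"

# Two-step version from the diff:
cgroup_slice_dir = os.path.join("/sys/fs/cgroup", cgroup_slice)
two_step = os.path.join(cgroup_slice_dir, "qemu-dm")

# One-line version suggested in the review comment:
one_line = os.path.join("/sys/fs/cgroup", cgroup_slice, "qemu-dm")

assert two_step == one_line  # → "/sys/fs/cgroup/vm.slice/qemu-dm"
```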
Consider creating a common sub-slice, e.g. "vm.slice/vm_common.slice", for attaching any existing processes, including https://github.com/xapi-project/sm/blob/master/libs/sm/blktap2.py#L765
For XS9 we now have a new enough systemd that we can create per domain slices.
@MarkSymsCtx I tried your idea but could not make it work.
Option 1: let systemd manage the qemu-dm process automatically. This does not work. qemu-dm is not the same as qemu-dp: its startup arguments are complicated, and qemu-wrapper performs some xenstore operations, so we cannot write a qemu-dm@.service unit that replaces qemu-wrapper.
Option 2: in qemu-wrapper, create /sys/fs/cgroup/vm.slice/qemu-dm-$pid for each qemu-dm process. That part works, but we then need xenopsd hook scripts to remove the directory when the VM is destroyed. In the vm-pre-shutdown script, cgdelete cannot delete the qemu-dm-$pid directory; in vm-post-destroy there is no domid parameter, so we cannot find the qemu-dm process id and therefore cannot delete the directory.
I will merge my commit.
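Option 2's per-pid naming scheme, and why its cleanup fails, can be sketched as follows. The helper name is mine; the paths follow the /sys/fs/cgroup/vm.slice/qemu-dm-$pid convention described above:

```python
import os

def per_pid_cgroup_dir(pid):
    """Hypothetical helper: per-process leaf cgroup as in Option 2."""
    return os.path.join("/sys/fs/cgroup", "vm.slice", "qemu-dm-%d" % pid)

# Creation (in qemu-wrapper, which knows its own pid) is straightforward:
#     os.makedirs(per_pid_cgroup_dir(os.getpid()), exist_ok=True)
#
# Cleanup is the problem: the vm-post-destroy hook receives no domid,
# so the qemu-dm pid -- and therefore this directory name -- cannot be
# reconstructed, which is why the per-pid scheme was abandoned.
print(per_pid_cgroup_dir(1234))  # → /sys/fs/cgroup/vm.slice/qemu-dm-1234
```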
see cgroup v2 "no internal processes" rule:
if cgroup.subtree_control is not empty and we attach a pid
to cgroup.procs, the kernel returns EBUSY