agent: write OCI spec to config.json in bundle path #346
Conversation
Force-pushed the branch from d92a539 to c692c40
Codecov Report

```diff
@@            Coverage Diff             @@
##           master     #346      +/-   ##
==========================================
- Coverage   44.78%   44.19%   -0.59%
==========================================
  Files          15       15
  Lines        2414     2446      +32
==========================================
  Hits         1081     1081
- Misses       1188     1220      +32
  Partials      145      145
```
Hi @eguzman3, thanks for the contribution :-)
@eguzman3 The OCI spec is already in the bundle path when it is passed to the OCI runtime implementation. I'm not sure what you are trying to fix here.
Hi @eguzman3 - thanks for raising! This is looking good. Out of interest, how did you notice the issue?
Please could you add some unit tests for the new functions though? You might be able to reuse bits from https://github.com/kata-containers/runtime/blob/master/cli/utils_test.go#L270, possibly.
Also, feel free to prod us directly (https://github.com/kata-containers/community#join-us) 😄
@grahamwhaley @bergwolf
libcontainer sets the bundle path to the current working directory, which in this case is /, so this is what the OCI hooks end up getting instead of /run/kata-containers/shared/containers/<hash>/
Force-pushed the branch from d8e7053 to 29a0fd6
In order to comply with the OCI specification, the spec must be written to a file named 'config.json' located in the bundle directory. To properly set the bundle path at the time of libcontainer config creation, the cwd also needs to be changed to the bundle path. CreateContainer was refactored to reduce cyclomatic complexity.

Fixes: kata-containers#349

Signed-off-by: Edward Guzman <eguzman@nvidia.com>
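For illustration, here is a minimal sketch of what "writing the spec to config.json in the bundle directory" can look like in Go. `writeSpecToFile` is a hypothetical helper name rather than the function used in this PR, and the runtime-spec Go types stand in for the agent's own gRPC spec type:

```go
package agent

import (
	"encoding/json"
	"os"
	"path/filepath"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

// writeSpecToFile serializes the OCI spec to <bundlePath>/config.json so that
// the bundle directory looks like a regular OCI bundle to hooks and libcontainer.
func writeSpecToFile(spec *specs.Spec, bundlePath string) error {
	if err := os.MkdirAll(bundlePath, 0o750); err != nil {
		return err
	}
	f, err := os.Create(filepath.Join(bundlePath, "config.json"))
	if err != nil {
		return err
	}
	defer f.Close()
	return json.NewEncoder(f).Encode(spec)
}
```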
Force-pushed the branch from 29a0fd6 to c7efd9a
@eguzman3 I think there are a few things to clarify:
Another thing to consider is whether we really want to run the hooks inside the container at all. The hooks are mostly used by docker to set up network configuration for a container, and in that sense they are only useful when run on the host side. What use case do you have that would require running hooks inside the guest?
There is no reason to reproduce the whole OCI config.json thing on the guest OS and therefore the agent. The OCI spec only needs to be passed as a structure, and it's already done, which allows the agent to provide libcontainer with the appropriate data.
@sboeuf @amshinde
@eguzman3 what is the hook doing exactly? I'm trying to understand the real need for this.
This hook configures GPU access for a container using
As mentioned in #347, the guest hooks will not be part of the Kata API; we want them to be part of the guest rootfs and auto-discovered by the agent.
@eguzman3 @flx42, ok now I see.
Yes, it would make sense to configure the path for guest rootfs hooks in the runtime.
@sboeuf Can I get some clarification on the need for this configuration option? Is this a performance concern? Also, would this runtime configuration option be disabled when the option is commented out?
@eguzman3 Yes, it's about keeping the default path as simple and efficient as possible. And by default the agent would not try to load any custom hook.
Ok, let's try to sort this out. As I understand it, you have a very specific use case: getting the agent to run hooks from inside the VM directly, those hooks being provided by the VM rootfs so that the agent can find them. This is the purpose of PR #347. Now, regarding this PR, I don't get why we actually need to create a config.json on the guest. @eguzman3 Am I missing something?
I think I can see why this might be required... As specified in https://github.com/opencontainers/runtime-spec/blob/master/config.md#posix-platform-hooks, the state of the container is passed to the hooks over stdin.
And that state must include the absolute path to the bundle directory.
So the arbitrary hook binaries that need to be run inside the container must be passed the container state and must therefore have access to the bundle directory. And a bundle is "rootfs + config.json".
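To make that dependency concrete, here is a small sketch, assuming the runtime-spec Go types, of how a prestart hook typically recovers the rootfs from the state it receives on stdin. The function name and checks are illustrative and not taken from this PR:

```go
package hookexample

import (
	"encoding/json"
	"errors"
	"os"
	"path/filepath"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

// readRootfsFromState decodes the container state passed on stdin and uses the
// bundle path it contains to locate config.json and, from there, the rootfs.
func readRootfsFromState() (string, error) {
	var state specs.State
	if err := json.NewDecoder(os.Stdin).Decode(&state); err != nil {
		return "", err
	}

	data, err := os.ReadFile(filepath.Join(state.Bundle, "config.json"))
	if err != nil {
		return "", err
	}
	var spec specs.Spec
	if err := json.Unmarshal(data, &spec); err != nil {
		return "", err
	}
	if spec.Root == nil {
		return "", errors.New("spec has no root section")
	}

	// A relative root path is interpreted relative to the bundle directory.
	rootfs := spec.Root.Path
	if !filepath.IsAbs(rootfs) {
		rootfs = filepath.Join(state.Bundle, rootfs)
	}
	return rootfs, nil
}
```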
@jodh-intel thanks for looking into this. It's still not clear if this is needed from a libcontainer perspective though. I mean that libcontainer gets provided with the container status through a structure, and not through a CLI, so I don't know how far we need to stick to the OCI definition here.
Doing some research in the libcontainer codebase, here's my understanding: libcontainer treats the current working directory as the bundle path when the container configuration is created, so the caller is expected to chdir into the bundle directory first.
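A minimal sketch of the cwd handling this implies; the helper name below is hypothetical and the actual PR structures this differently:

```go
package agent

import "os"

// changeToBundlePath makes the bundle directory the current working directory,
// since libcontainer derives the bundle path from cwd, and returns the previous
// cwd so the caller can restore it afterwards.
func changeToBundlePath(bundlePath string) (string, error) {
	oldCwd, err := os.Getwd()
	if err != nil {
		return "", err
	}
	if err := os.Chdir(bundlePath); err != nil {
		return "", err
	}
	return oldCwd, nil
}
```

The caller would typically `defer os.Chdir(oldCwd)` right after a successful call, so the agent's working directory is restored once the container config has been created.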
> Why do you need the rootfs path here?
> Environment variables should not be an issue since they will be passed from the initial spec.
Our prestart hook needs to know the mount point of the rootfs, to check the content of the filesystem and add mounts.
Sure, but for a hook binary, that's the only way we can get the rootfs path today.
Yeah, the environment variables are correctly preserved for the executed process. But again, for a prestart hook, the container state received on stdin is all we get.
@flx42 this is an interesting problem that we have here :) Ok, I think here is a solution that would be a good tradeoff on how to handle those customized OCI hooks:
WDYT?
@sboeuf We are flexible; we can definitely have some kind of glue layer that translates the environment variables to something else and then call into our existing hook.
With that said, I'm not sure I understand why you don't want to provide support for standard OCI runtime hooks; what is the concern with this approach? We were hoping to leverage our existing OCI hook since we were missing only a small piece to make it happen.
The main concern that I have here is that we would be turning the agent into an implementation of the OCI specification, but that's not what the agent is.
@sboeuf I see, I guess it would be fine to draw the line at the config.json and hooks support, but obviously not support the lifecycle and operations of a fully-compliant OCI runtime, as described in the runtime spec.
Here is my take on this: we should write the config.json inside the guest only if hooks need to be run. @bergwolf @WeiZhang555 @sboeuf What do you think?
I agree with @sboeuf that kata-agent is not an OCI compatible implementation on its own.
@eguzman3 @flx42 Can you do it with a sidecar container? AFAIU, with OCI hooks, you need to add whatever is needed in the guest rootfs image. That means if a user wants to use the hooks, they need a customized guest image.
And you need to pay attention that whatever you put in the OCI hooks, they are executed on the host as well. Do you really need the SAME hooks to be run both on the host and in the guest?
Driver installation is possible with a container, but at the container runtime level, not really. And isn't that a Kubernetes construct? That won't work with Docker (or another OCI frontend).
Yes, we are fine with that, since the rootfs will also bake in our user-space driver libraries and precompiled objects for the device driver.
We want to only execute OCI hooks on the "translated" OCI spec, by discovering them automatically. There is no way to inject hooks into the spec docker generates today.
I mean whatever initialization you do with the current hooks, do it in a sidecar container instead of running the same hooks there. And docker does support sidecars; it is just about constructing proper namespace configurations for different containers. Maybe it's too much complexity for your users though.
In that case, how about having a builtin container inside the rootfs image? Then we can make Kata Containers create default sidecars when configured to do so. And such a design would solve the socat sidecar image issue @gnawux is trying to solve for k8s port forwarding.
Sorry, I kinda miss the whole picture here. If not docker, who will translate the OCI spec and how? Do you expect kata-runtime to do the translation and only put translated hooks in the spec when passing them to kata-agent? Please explain a bit more about the architecture you are expecting to work with.
It's not possible; it would require changing our whole implementation. Our hook performs bind mounts from the runtime's namespace to the container. For instance, it looks for
I agree. And the implementation is possible but would be hacky.
This would be overkill for our use case, and would not solve the problem as well as what we have today, IMO.
Docker (or something else) gives you the OCI spec, and my understanding is that once in the guest you amend this spec before passing it to libcontainer. @sboeuf mentioned that hooks from the original OCI spec are removed: #346 (comment). Here, we want to add hooks from the guest rootfs; a possible way of doing that is presented in #347.
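For context, a sketch of the auto-discovery direction discussed in #347: scan a directory baked into the guest rootfs and append anything executable as a prestart hook. The directory path and function name below are assumptions for illustration only; the real mechanism is defined by #347.

```go
package agent

import (
	"os"
	"path/filepath"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

// guestHookDir is a hypothetical location inside the guest rootfs.
const guestHookDir = "/usr/share/oci/hooks/prestart"

// addGuestPrestartHooks appends every executable found in guestHookDir to the
// spec's prestart hooks; it is a no-op when the directory does not exist.
func addGuestPrestartHooks(spec *specs.Spec) error {
	entries, err := os.ReadDir(guestHookDir)
	if err != nil {
		if os.IsNotExist(err) {
			return nil
		}
		return err
	}
	if spec.Hooks == nil {
		spec.Hooks = &specs.Hooks{}
	}
	for _, e := range entries {
		info, err := e.Info()
		if err != nil || info.IsDir() || info.Mode()&0o111 == 0 {
			continue // only plain executable files are treated as hooks
		}
		path := filepath.Join(guestHookDir, e.Name())
		spec.Hooks.Prestart = append(spec.Hooks.Prestart, specs.Hook{
			Path: path,
			Args: []string{path},
		})
	}
	return nil
}
```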
As long as devices share the same major/minor pair, they are shared among different containers in the same guest, which means you do not need to bind-mount them. Just pass the same major/minor pair in the OCI spec, and they will be treated as the same device by the kernel.
OTOH, I agree that in your use case the arbitrary rootfs hooks are much simpler than adding implicit sidecars. Thanks for the explanation. Please go ahead with the modification; I'm fine with saving the spec somewhere in the rootfs.
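A short sketch of what "just pass the same major/minor pair in the OCI spec" looks like with the runtime-spec Go types; the helper name and the 0666 mode are illustrative assumptions:

```go
package agent

import (
	"os"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

// addCharDevice exposes a character device to the container by major/minor
// pair and allows it in the device cgroup; containers in the same guest that
// reference the same pair end up talking to the same kernel device.
func addCharDevice(spec *specs.Spec, path string, major, minor int64) {
	if spec.Linux == nil {
		spec.Linux = &specs.Linux{}
	}
	if spec.Linux.Resources == nil {
		spec.Linux.Resources = &specs.LinuxResources{}
	}

	mode := os.FileMode(0o666) // illustrative permissions
	spec.Linux.Devices = append(spec.Linux.Devices, specs.LinuxDevice{
		Path:     path,
		Type:     "c",
		Major:    major,
		Minor:    minor,
		FileMode: &mode,
	})
	spec.Linux.Resources.Devices = append(spec.Linux.Resources.Devices,
		specs.LinuxDeviceCgroup{
			Allow:  true,
			Type:   "c",
			Major:  &major,
			Minor:  &minor,
			Access: "rwm",
		})
}
```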
```diff
@@ -579,6 +608,13 @@ func (a *agentGRPC) CreateContainer(ctx context.Context, req *pb.CreateContainer
 		return emptyResp, err
 	}

+	// Change cwd because libcontainer sets the bundle path to cwd
+	oldcwd, err := pb.ChangeToBundlePath(ociSpec)
```
These are only useful when there are hooks in the rootfs. So please make it optional by:
- scanning for rootfs hooks when the agent starts up
- changing cwd and saving the spec in the container bundle only when necessary (see the sketch after this list)
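A sketch of that conditional flow, assuming a flag set when the agent scanned the rootfs at startup and reusing the hypothetical `writeSpecToFile` helper sketched earlier; none of these names come from the PR itself:

```go
package agent

import (
	"os"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

// guestHooksPresent would be set once, when the agent scans the guest rootfs
// for hooks at startup.
var guestHooksPresent bool

// maybePrepareBundle only pays for the chdir and the config.json serialization
// when guest hooks were actually found; it returns a function that restores
// the previous working directory.
func maybePrepareBundle(spec *specs.Spec, bundlePath string) (func(), error) {
	if !guestHooksPresent {
		return func() {}, nil // nothing to do in the default case
	}
	oldCwd, err := os.Getwd()
	if err != nil {
		return nil, err
	}
	if err := os.Chdir(bundlePath); err != nil {
		return nil, err
	}
	// writeSpecToFile is the helper sketched earlier in this thread.
	if err := writeSpecToFile(spec, bundlePath); err != nil {
		_ = os.Chdir(oldCwd)
		return nil, err
	}
	return func() { _ = os.Chdir(oldCwd) }, nil
}
```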
Consolidating this PR with #347
In order to comply with the OCI specification, the spec must be written to a file named 'config.json' located in the bundle directory.
Signed-off-by: Edward Guzman eguzman@nvidia.com