[question]: Support of pod spec fields #5
Comments
@alexandrst88 Hey! I believe this is a valid feature request. Probably adding... cc/ @summerwind |
Thank you for the feedback! However, managing each field of a pod individually can be complicated. How about adding a podTemplate field to Runner?
|
I'm currently working on StatefulSet-based runner management in #4. After that, it might be a good time to implement |
I might be wrong, but in runner_controller.go there is only one way to create the pod, and the pod spec is essentially hardcoded: fields like ImagePullPolicy or RestartPolicy are set directly. If you define a pod template, you would also have to specify all of the fields from the previous code snippet here. Wouldn't it be better to deep-copy or merge the fields from the pod template into the pod creation above, something like podTemplate.Metadata.DeepCopy(pod.Metadata)? |
Yeah, that would work! We'd ideally want "strategic-merge-patch" behavior on containers, volumes, and volumeMounts, but that seems like YAGNI and we can defer it to another GitHub issue/feature request. So, in the meantime, I believe you can just deep-copy the pod template into a new pod spec and set default values for the missing fields. For instance, when the pod template in the runner spec misses |
Funny thing: DeepCopy actually removed the existing fields; for example, it wiped out the containers fields when I set it via annotations. Second, Kubernetes also validates the embedded field specs: if you define podTemplate,
the Kubernetes API validation forces you to also specify the container fields :) lol. It seems that in elastic/cloud-on-k8s#1822 they removed that validation from the CRD. What do you think about just moving pod creation into the CRD within the runner?
Or just skipping validation like https://github.com/elastic/cloud-on-k8s does, since pod creation will be controlled by API validation anyway. It will be hard to maintain the PodTemplate fields ourselves, because Kubernetes will keep updating its API and adding or removing fields. |
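(For context, embedding the upstream pod template type directly would look roughly like the sketch below. The package and field names are assumptions for illustration, not the project's actual types. The catch discussed above is that the CRD schema generated from corev1.PodTemplateSpec marks the containers list as required, so a partial template would be rejected unless that validation is relaxed or patched out of the generated CRD, which is what elastic/cloud-on-k8s ended up doing.)

package v1alpha1 // hypothetical API package name, for illustration only

import corev1 "k8s.io/api/core/v1"

// RunnerSpec here is a sketch, not the project's real type.
type RunnerSpec struct {
    Repository string `json:"repository"`

    // Embedding the upstream type pulls its full OpenAPI schema into the CRD,
    // and that schema requires template.spec.containers. A partial template
    // (say, only nodeSelector or annotations) would then fail API validation
    // unless the generated CRD schema is relaxed or patched.
    // +optional
    PodTemplate corev1.PodTemplateSpec `json:"podTemplate,omitempty"`
}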
Wow, looking forward to this feature as well. I'd like to set an IAM role on the runner pod with podTemplate. What's the status of PR #7? I can contribute as well. Thanks again for the nice operator. |
Re: #5 (comment), I think it depends on which principle we want to follow: 1) minimize the custom podTemplate for the user, or 2) always require the user to explicitly specify the full runner podTemplate. Thanks, |
BTW @alexandrst88, regarding the pod spec merge logic: yes, if you copy the custom podTemplate in the controller, something like this works (there is still room to improve the code below):
// copy custom podTemplate
pod := corev1.Pod{}
if runner.Spec.PodTemplate.ObjectMeta.Size() > 0 {
    runner.Spec.PodTemplate.ObjectMeta.DeepCopyInto(&pod.ObjectMeta)
}
if runner.Spec.PodTemplate.Template.Spec.Size() > 0 {
    runner.Spec.PodTemplate.Template.Spec.DeepCopyInto(&pod.Spec)
}

// merge runner specific pod spec
pod.ObjectMeta.Name = runner.Name
if pod.ObjectMeta.Namespace == "" {
    pod.ObjectMeta.Namespace = runner.Namespace
}
if len(pod.Spec.Containers) == 0 {
    pod.Spec.Containers = []corev1.Container{
        {
            Name:            containerName,
            Image:           runnerImage,
            ImagePullPolicy: "Always",
            Env:             env,
            VolumeMounts: []corev1.VolumeMount{
                {
                    Name:      "docker",
                    MountPath: "/var/run",
                },
            },
            SecurityContext: &corev1.SecurityContext{
                RunAsGroup: &group,
            },
        },
        {
            Name:  "docker",
            Image: r.DockerImage,
            VolumeMounts: []corev1.VolumeMount{
                {
                    Name:      "docker",
                    MountPath: "/var/run",
                },
            },
            SecurityContext: &corev1.SecurityContext{
                Privileged: &privileged,
            },
        },
    }
    pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{
        Name: "docker",
        VolumeSource: corev1.VolumeSource{
            EmptyDir: &corev1.EmptyDirVolumeSource{},
        },
    })
    pod.Spec.RestartPolicy = "OnFailure"
} else {
    // append runner specific env vars
    for i := 0; i < len(pod.Spec.Containers); i++ {
        pod.Spec.Containers[i].Env = append(pod.Spec.Containers[i].Env, env...)
    }
} |
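(For reference, the "strategic-merge-patch" behavior mentioned earlier, and deferred, could look roughly like the sketch below: serialize both pod specs and let apimachinery's strategic merge patch combine them, so lists such as containers and volumes are merged by name instead of replaced wholesale. The function and variable names here are illustrative, not part of the controller.)

package controllers // illustrative placement only

import (
    "encoding/json"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/strategicpatch"
)

// mergePodSpec overlays a user-supplied template spec onto the controller's
// default spec using a strategic merge patch, so list entries keyed by name
// (containers, volumes) are merged rather than replaced as whole lists.
func mergePodSpec(defaultSpec, templateSpec corev1.PodSpec) (corev1.PodSpec, error) {
    base, err := json.Marshal(defaultSpec)
    if err != nil {
        return corev1.PodSpec{}, err
    }
    overlay, err := json.Marshal(templateSpec)
    if err != nil {
        return corev1.PodSpec{}, err
    }
    // corev1.PodSpec carries the patchMergeKey metadata that tells the
    // strategic merge how to combine its lists.
    merged, err := strategicpatch.StrategicMergePatch(base, overlay, corev1.PodSpec{})
    if err != nil {
        return corev1.PodSpec{}, err
    }
    var out corev1.PodSpec
    if err := json.Unmarshal(merged, &out); err != nil {
        return corev1.PodSpec{}, err
    }
    return out, nil
}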
Yeap, @missedone, thanks, that makes sense. About the containers field: I think kustomize could remove that field from the required list in the CRD. |
Does anyone have a real use case that requires adding a sidecar to the runner pod or replacing the runner container? If not, I'd like to choose a way to just ignore the |
Yes. For example, I'd like to build a Go app, but it turns out the default runner image doesn't have make installed.
Thank you for sharing your use case! Currently the controller watches the runner's pod state and restarts the pod after the Actions job is completed. This depends on the container name. If we introduce a pod template, the restart process could become difficult because the container name would be a dynamic value. To solve this issue, I propose the following solution:
The manifest based on the proposal would be:
|
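(The proposed manifest itself is not shown above. As a side note on the container-name concern from that comment: the restart logic has to locate the runner container in the pod, which is trivial while the pod spec is hardcoded. One hedge for a user-supplied template is to look the container up by a well-known or defaulted name, roughly as in this sketch; all names here are illustrative.)

package controllers // illustrative placement only

import corev1 "k8s.io/api/core/v1"

// runnerContainer looks up the runner container by name. With today's hardcoded
// pod spec the name is fixed; with a user-supplied template, returning nil could
// signal that the controller should inject a default runner container instead.
func runnerContainer(pod *corev1.Pod, name string) *corev1.Container {
    for i := range pod.Spec.Containers {
        if pod.Spec.Containers[i].Name == name {
            return &pod.Spec.Containers[i]
        }
    }
    return nil
}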
Yeap, in this case we need to maintain the pod spec fields under the runner spec :) Like requests/limits, etc. |
I generally agree with @summerwind's proposal in #5 (comment) :) The good part of it is that we can gradually extend it to add more features (dedicated issues would be great), while making it clear that it doesn't necessarily support the full pod spec. |
I believe this is a valid feature request. Can we assume that we're going to add the pod- and main-container-related fields at the top level of the runner spec?
|
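(To make that concrete, "pod- and main-container-related fields at the top level of the runner spec" might look roughly like the sketch below. Every field name here is an assumption for illustration, not the project's actual API.)

package v1alpha1 // hypothetical API package name, for illustration only

import corev1 "k8s.io/api/core/v1"

// RunnerSpec sketch: commonly requested pod and main-container fields surfaced
// directly on the runner spec, instead of a free-form pod template.
type RunnerSpec struct {
    Repository string `json:"repository"`

    // Main (runner) container related fields.
    Image     string                      `json:"image,omitempty"`
    Env       []corev1.EnvVar             `json:"env,omitempty"`
    Resources corev1.ResourceRequirements `json:"resources,omitempty"`

    // Pod related fields.
    Annotations        map[string]string           `json:"annotations,omitempty"`
    NodeSelector       map[string]string           `json:"nodeSelector,omitempty"`
    ServiceAccountName string                      `json:"serviceAccountName,omitempty"`
    SecurityContext    *corev1.PodSecurityContext  `json:"securityContext,omitempty"`
}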
This can still be evolved to support additional containers, like this (again, a dedicated issue should be created for this; I'm writing it here just to sync up):
|
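(The example itself is not shown above. One possible shape for "additional containers", purely a guess for illustration and not the actual proposal, would be an explicit sidecar list that the controller appends after its own managed runner/docker containers:)

package v1alpha1 // hypothetical, continuing the illustrative sketch above

import corev1 "k8s.io/api/core/v1"

// RunnerPodExtras is an invented name for illustration only.
type RunnerPodExtras struct {
    // Sidecars are extra containers added to the runner pod as-is.
    Sidecars []corev1.Container `json:"sidecars,omitempty"`
}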
Yeah, again, that can be handled as a feature request. |
OK, so the final design would be addressed in my PR,
and if you have a valid use case for adding an additional field, please open a separate PR. |
Hi guys, the main reason is that how the user will customize the runner image/container is unknown: people may want to add an IAM role, service account, security context, resource requests/limits, mount a ConfigMap, pod affinity, node selector, etc. By contrast, what is needed to let runner-controller manage the runner pod is clear, for example managing the runner lifecycle, injecting the Env for the registration token, etc.
|
@missedone I think we can agree that we're going to (eventually) support the full pod spec as feature requests come and go. The point of not using the pod spec "now" is that it would give users the misconception that the full pod spec is supported. Please also see my comment for the context. Once we get to the point where our "replicated" pod template spec in the Runner API becomes feature-complete, I believe it's fine for us to port it over to full pod spec support, as you proposed. |
Well, here are some concrete cases from my side:
So, given the list above, it already covers most of the pod spec, so why not just have full pod spec support now? Also, from the user's view, if we support the above, it will easily make people think the rest of the pod spec is supported as well, but they might be frustrated when they figure out that isn't true for the extra fields, and then have to raise a feature request and wait until the feature is available. |
@missedone Thanks! I thought we had no conflict at all, but maybe that was my misconception? Just to be clear - are you asking to add full support for the pod template spec from the beginning? |
Yes, to me, since I can already see this coming. |
Yeah, I don't like doing that myself either. I agree with you from "the user's perspective". But... who has the bandwidth to complete the implementation in one shot? Are you willing to contribute the full implementation, or keep submitting multiple pull requests yourself until it completes? Then I think that's fine. I don't think anyone can "commit" to spend their spare time on it until it completes. That's why I've suggested the gradual-enhancement plan. |
Yes, I'm willing to contribute, but I can't commit all my spare time. |
Yeap, as I said before about just using PodTemplateSpec: because we would need to delete the containers field from the CRD, I have to go through the most popular fields and reimplement them instead :) I can start from #5 (comment), @missedone's proposal, and add some more. Will try to finish by EOW, thanks |
So the full list is here: https://godoc.org/k8s.io/api/core/v1#PodSpec. As the metadata will be inherited from the Runner's, @missedone, I've added the most-used pod spec fields in 0cfbffa for now. So I just need to handle the merge logic. |
I believe this is now mostly done via #7. Please open separate issues if there're any missing fields or any kind of improvements, so that we can better track each of them. Thanks again to everyone involved, especially @alexandrst88 for the pull request! |
Thank you guys.
Hi! Thanks for this cool tool. I have a few questions: do you have a plan to add pod annotations and a serviceAccount to runners?
As use cases, I'd like to build Docker images and push them to ECR, or have the ability to access the Kubernetes cluster API, for example.
If not, I could work on this. Thanks!