
[question]: Support of pod spec fields #5

Closed
alexandrst88 opened this issue Mar 9, 2020 · 30 comments

@alexandrst88
Contributor

Hi! Thanks for this cool tool. I have a few questions: do you have a plan to add pod annotations and a service account to runners?

spec:
  replicas: 3
  template:
    metadata:
      annotations:
        iam.amazonaws.com/role: role-arn

As a use case, I'd like to build Docker images and push them to ECR, or have the ability to access the k8s cluster API.

If not, I could work on this. Thanks!

@alexandrst88 changed the title from "[question]: Support of pod spec fieldsr" to "[question]: Support of pod spec fields" on Mar 9, 2020
@mumoshu
Collaborator

mumoshu commented Mar 9, 2020

@alexandrst88 Hey! I believe this is a valid feature request. Probably adding annotations next to https://github.com/summerwind/actions-runner-controller/blob/0edf0d59f74bcbbc7c9bd899c6547c2a49a9b1b5/api/v1alpha1/runner_types.go#L34 would make sense.
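
For illustration, a rough sketch of what that could look like in runner_types.go (the Annotations/Labels field names here are assumptions, not the actual API):

	// Sketch only: expose pod metadata on the runner spec so the controller
	// can copy it onto the runner pod it creates.
	type RunnerSpec struct {
		Repository string `json:"repository"`

		// Hypothetical fields for pod metadata propagation.
		Annotations map[string]string `json:"annotations,omitempty"`
		Labels      map[string]string `json:"labels,omitempty"`
	}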

cc/ @summerwind

@summerwind
Contributor

Thank you for the feedback!
That's a nice idea! I'm also considering a similar feature. In my use case, I'd like to set resource limits.

However, managing each field of a pod individually can be complicated. How about adding a podTemplate field to Runner?

apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: example
spec:
  repository: summerwind/actions-runner-controller
  podTemplate:
    metadata:
      annotations:
        iam.amazonaws.com/role: role-arn
    spec:
      serviceAccountName: example
      ...
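
On the Go side, that podTemplate field would presumably just reuse the upstream type, roughly like this (a sketch under that assumption, not a final design):

	import corev1 "k8s.io/api/core/v1"

	type RunnerSpec struct {
		Repository string `json:"repository"`

		// PodTemplate carries the metadata and pod spec overrides shown in the
		// manifest above. Assumed shape; CRD validation caveats are discussed below.
		PodTemplate corev1.PodTemplateSpec `json:"podTemplate,omitempty"`
	}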

@summerwind
Contributor

I'm currently working on StatefulSet based runner management in #4. After that, it might be a good time to implement podTemplate.

@alexandrst88
Contributor Author

alexandrst88 commented Mar 11, 2020

I might be wrong, but in runner_controller.go

pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      runner.Name,
			Namespace: runner.Namespace,
		},
		Spec: corev1.PodSpec{
			RestartPolicy: "OnFailure",
			Containers: []corev1.Container{
				{
					Name:            containerName,
					Image:           runnerImage,
					ImagePullPolicy: "Always",
					Env:             env,
					VolumeMounts: []corev1.VolumeMount{
						{
							Name:      "docker",
							MountPath: "/var/run",
						},
					},
					SecurityContext: &corev1.SecurityContext{
						RunAsGroup: &group,
					},
				},
				{
					Name:  "docker",
					Image: r.DockerImage,
					VolumeMounts: []corev1.VolumeMount{
						{
							Name:      "docker",
							MountPath: "/var/run",
						},
					},
					SecurityContext: &corev1.SecurityContext{
						Privileged: &privileged,
					},
				},
			},
			Volumes: []corev1.Volume{
				corev1.Volume{
					Name: "docker",
					VolumeSource: corev1.VolumeSource{
						EmptyDir: &corev1.EmptyDirVolumeSource{},
					},
				},
			},
		},
	}

there is only one way to create the pod, and some pod spec fields such as ImagePullPolicy or RestartPolicy are effectively hardcoded. If you defined

  repository: summerwind/actions-runner-controller
  podTemplate:
    metadata:
      annotations:
        iam.amazonaws.com/role: role-arn
    spec:
      serviceAccountName: example
      ...

you would also need to specify here all the fields from the previous code snippet. Instead, wouldn't it be better to deep-copy or merge the fields from the pod template into the pod creation above?

like podTemplate.Metadata.DeepCopy(pod.Metadata) and
podTemplate.Spec.DeepCopy(pod.Spec)

@mumoshu
Collaborator

mumoshu commented Mar 12, 2020

you would also need to specify here all the fields from the previous code snippet. Instead, wouldn't it be better to deep-copy or merge the fields from the pod template into the pod creation above?

like podTemplate.Metadata.DeepCopy(pod.Metadata) and
podTemplate.Spec.DeepCopy(pod.Spec)

Yeah, that would work!

We'd ideally want "strategic-merge-patch" on containers, volumes, and volumeMounts. But that seems like a YAGNI and we can defer it to another github issue/feature request.

So, in the meantime, I believe you can just deepcopy the pod template into a new pod spec, and set default values to the missing fields.

For instance, when the pod template in the runner spec misses RestartPolicy, you'd set it to OnFailure as we already do. Similarly, if the pod template misses Containers[], you'd set it to the two containers as we already do.
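
A minimal sketch of that defaulting step, assuming the pod object was already deep-copied from the runner's pod template (applyRunnerDefaults and defaultContainers are placeholder names):

	// applyRunnerDefaults fills in anything the user's pod template left out.
	func applyRunnerDefaults(pod *corev1.Pod, defaultContainers []corev1.Container) {
		if pod.Spec.RestartPolicy == "" {
			pod.Spec.RestartPolicy = corev1.RestartPolicyOnFailure
		}
		if len(pod.Spec.Containers) == 0 {
			// Fall back to the two containers (runner + docker) the controller builds today.
			pod.Spec.Containers = defaultContainers
		}
	}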

@alexandrst88
Contributor Author

alexandrst88 commented Mar 12, 2020

Funny thing: DeepCopy actually removes the existing fields; for example, it wiped out the containers field when I only set annotations.

Second, Kubernetes also validates the embedded spec fields, e.g. if you define podTemplate:

  podTemplate:
    metadata:
      annotations:
        iam.amazonaws.com/role: role-arn
    template:
      spec:
        serviceAccountName: testing

Kubernetes API validation forces you to also specify the containers field :)

lol, it seems that in elastic/cloud-on-k8s#1822 they removed the validation from the CRD.

What do you think about just embedding the pod creation spec into the Runner CRD?

apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:

or just skip the validation like in https://github.com/elastic/cloud-on-k8s, because pod creation will be controlled by API validation anyway.

It will be hard to maintain the PodTemplate fields ourselves, because k8s will update the API and add or remove fields.

@missedone

Wow, looking forward to this feature as well; I'd like to set an IAM role on the runner pod with podTemplate.

What's the status of PR #7? I can contribute as well.

Thanks again for the nice operator.

@missedone

missedone commented Mar 15, 2020

re-comment #5 (comment)

I think it depends on which principle we want to follow: 1) minimize the custom podTemplate for the user, or 2) always require the user to explicitly specify the full runner podTemplate.
I'd prefer option 2, since it makes it very clear to people what container spec they are using, e.g. which version of the runner Docker image, whether an extra docker container is needed, what the runner container's resource spec is, etc.
Just my 2 cents.

Thanks,
Nick

@missedone

missedone commented Mar 16, 2020

BTW, @alexandrst88, regarding the pod spec merge logic: yes, if you copy the custom podTemplate into the controller-managed pod instance, the containers spec gets lost.
So I think we can implement the merge logic the other way around, i.e. merge the controller-managed pod fields into the custom pod.

Something like this (there is still room to improve the code below):

	// copy custom podTemplate
	pod := corev1.Pod{}
	if runner.Spec.PodTemplate.ObjectMeta.Size() > 0 {
		runner.Spec.PodTemplate.ObjectMeta.DeepCopyInto(&pod.ObjectMeta)
	}
	if runner.Spec.PodTemplate.Template.Spec.Size() > 0 {
		runner.Spec.PodTemplate.Template.Spec.DeepCopyInto(&pod.Spec)
	}

	// merge runner specific pod spec
	pod.ObjectMeta.Name = runner.Name
	if pod.ObjectMeta.Namespace == "" {
		pod.ObjectMeta.Namespace = runner.Namespace
	}
	if len(pod.Spec.Containers) == 0 {
		pod.Spec.Containers = []corev1.Container{
			{
				Name:            containerName,
				Image:           runnerImage,
				ImagePullPolicy: "Always",
				Env:             env,
				VolumeMounts: []corev1.VolumeMount{
					{
						Name:      "docker",
						MountPath: "/var/run",
					},
				},
				SecurityContext: &corev1.SecurityContext{
					RunAsGroup: &group,
				},
			},
			{
				Name:  "docker",
				Image: r.DockerImage,
				VolumeMounts: []corev1.VolumeMount{
					{
						Name:      "docker",
						MountPath: "/var/run",
					},
				},
				SecurityContext: &corev1.SecurityContext{
					Privileged: &privileged,
				},
			},
		}
		pod.Spec.Volumes = append(pod.Spec.Volumes, []corev1.Volume{
			corev1.Volume{
				Name: "docker",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			},
		}...)
		pod.Spec.RestartPolicy = "OnFailure"
	} else {
		// append runner specific env vars
		for i := 0; i < len(pod.Spec.Containers); i++ {
			pod.Spec.Containers[i].Env = append(pod.Spec.Containers[i].Env, env...)
		}
	}

@alexandrst88
Contributor Author

Yeap, @missedone, thanks, it makes sense. About the containers field: I think kustomize could remove this field from the required list in the CRD.

@summerwind
Contributor

summerwind commented Mar 16, 2020

Does anyone have a real use case that needs adding a sidecar to the runner pod or replacing the runner container? If not, I'd like to choose a way to just ignore the containers field in the pod template to keep controller simple.

@missedone

Yes. For example, I'd like to build a Go app, but it turns out the default runner image doesn't have make installed.
Also, we have several teams building different kinds of tech stacks, with different tool dependencies. That is to say, I'd like to give teams the flexibility to customize the runner Docker image.

@summerwind
Contributor

Thank you for sharing your use case!
If you want to use a custom runner image, you can use the image field ;-)

Currently the controller watches the runner pod's state and restarts the pod after the Actions job is completed. This depends on the container name. If we introduce a pod template, the restart process could become difficult because the container name would be a dynamic value.

To solve this issue, I propose the following solution:

  • Make the 'Runner' resource's annotations and labels be inherited by its 'Pod' resource.
  • Add a serviceAccountName field to allow specifying the pod's service account.

The manifest based on the proposal would be:

apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: example
  annotations:
    iam.amazonaws.com/role: role-arn
spec:
  repository: summerwind/actions-runner-controller
  serviceAccountName: example
  image: your/runner:latest
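
On the controller side, that would roughly amount to something like this when building the pod (sketch only; it assumes the pod construction shown earlier in this thread, with ServiceAccountName as a new field on RunnerSpec):

	// Inherit the Runner's metadata on its pod and pass through the proposed
	// serviceAccountName field before creating the pod.
	pod.ObjectMeta.Labels = runner.ObjectMeta.Labels
	pod.ObjectMeta.Annotations = runner.ObjectMeta.Annotations
	pod.Spec.ServiceAccountName = runner.Spec.ServiceAccountName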

@alexandrst88
Contributor Author

Yeap, in this case we need to maintain the pod spec fields under the runner spec :) Like requests/limits, etc.

@mumoshu
Collaborator

mumoshu commented Mar 17, 2020

I generally agree with @summerwind's proposal #5 (comment) :)

The good part of it is that we can gradually extend it to add more features (dedicated issues would be great), while making it clear that it doesn't necessarily support the whole pod spec.

@mumoshu
Collaborator

mumoshu commented Mar 17, 2020

Yeap, in this case we need to maintain the pod spec fields under the runner spec :) Like requests/limits, etc.

I believe this is a valid feature request.

Can we assume that we're going to add the pod- and main-container-related fields at the top level of the runner spec?

apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: example
  annotations:
    iam.amazonaws.com/role: role-arn
spec:
  repository: summerwind/actions-runner-controller
  serviceAccountName: example
  image: your/runner:latest
  resources:
    limits:
      cpu: "1"
    requests:
      cpu: "0.5"

@mumoshu
Collaborator

mumoshu commented Mar 17, 2020

This can still be evolved to support additional containers like this (again, a dedicated issue should be created for this; I'm writing it here just to sync up):

apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: example
  annotations:
    iam.amazonaws.com/role: role-arn
spec:
  repository: summerwind/actions-runner-controller
  serviceAccountName: example
  image: your/runner:latest
  resources:
    limits:
      cpu: "1"
    requests:
      cpu: "0.5"
  sidecarContainers:
  - name: "fluentd"
    image: ...
  - name: istio-proxy
    image: ...
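
If that direction is taken, the reconcile side could probably just append the user's sidecars after the controller-managed containers, roughly like this (SidecarContainers is an assumed field name):

	// Keep the controller-managed containers (runner + docker) first so the
	// lifecycle handling keyed on the container name keeps working, then
	// append any user-provided sidecars.
	pod.Spec.Containers = append(pod.Spec.Containers, runner.Spec.SidecarContainers...)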

@alexandrst88
Contributor Author

Yeah, again, that's for a separate feature request.

@alexandrst88
Contributor Author

OK, so the final design to be addressed in my PR would be:

apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
## this metadata will be inherited by the pod, so the pod will have the same labels/annotations as the Runner metadata
metadata:
  name: example
  annotations:
    iam.amazonaws.com/role: role-arn
### the pod will have the same spec as the Runner spec, except the containers field, because of
### @summerwind's comment above: "Currently the controller watches the runner pod's state and restarts the pod after the Actions job is completed. This depends on the container name. If we introduce a pod template, the restart process could become difficult because the container name would be a dynamic value."

spec:
  repository: summerwind/actions-runner-controller
  serviceAccountName: example
  image: your/runner:latest
  resources:
    limits:
      cpu: "1"
    requests:
      cpu: "0.5"
  sidecarContainers:
  - name: "fluentd"
    image: ...
  - name: istio-proxy
    image: ...

And if you have a valid use case for adding an additional field, please open a separate PR.
I need your approval, @summerwind @mumoshu, as maintainers of this controller.

@missedone

Hi guys,
I would prefer supporting the full pod spec from the beginning.

The main reason is that how the user will customize the runner image/container is unknown. People may want to add an IAM role, service account, security context, resource requests/limits, mount a ConfigMap, pod affinity, node selector, etc.
I don't think you should have to care about any of that.

Instead, you are clear about what is needed for the runner-controller to manage the runner pod: for example, managing the runner lifecycle, injecting env vars for the registration token, etc.
So I'd suggest we define that contract, e.g. via annotations, which is quite a standard way in k8s.
In the implementation, we need proper merge logic, something like below:

  1. Take the pod spec defined by the user via the CR manifest (RunnerDeployment, etc.).
  2. Validate it, e.g. check whether the annotation or other required fields are missing.
  3. Merge runner-controller-specific fields, like env vars, into the pod spec (see the sketch below).
  4. Done.
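
A rough sketch of steps 1-3 (the helper name and error handling are placeholders, and it assumes a PodTemplateSpec-shaped field on the CR):

	// buildRunnerPod is a placeholder name for the flow above.
	func buildRunnerPod(tmpl corev1.PodTemplateSpec, controllerEnv []corev1.EnvVar) (*corev1.Pod, error) {
		// 1. Start from the pod spec the user provided in the CR manifest.
		pod := &corev1.Pod{}
		tmpl.ObjectMeta.DeepCopyInto(&pod.ObjectMeta)
		tmpl.Spec.DeepCopyInto(&pod.Spec)

		// 2. Validate the parts the controller depends on.
		if len(pod.Spec.Containers) == 0 {
			return nil, fmt.Errorf("pod template must define at least one container")
		}

		// 3. Merge controller-specific fields, e.g. the registration-token env vars.
		for i := range pod.Spec.Containers {
			pod.Spec.Containers[i].Env = append(pod.Spec.Containers[i].Env, controllerEnv...)
		}

		return pod, nil
	}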

@mumoshu
Collaborator

mumoshu commented Mar 18, 2020

@missedone I think we can agree that we're going to (eventually) support the whole pod spec as feature requests come in.

The point of not using the pod spec "now" is that it would give users the misconception that the full pod spec is already supported. Please also see my comment for the context.

Once we get to the point where our "replicated" pod template spec in the Runner API becomes feature-complete, I believe it's fine for us to port it over to full pod spec support, as you proposed.

@missedone

missedone commented Mar 18, 2020

Well, here are some concrete cases from my side:

  • IAM role: to give the runner the proper AWS access for each project
  • security context: for internal security compliance across teams
  • resource requests/limits: for proper resource allocation and management
  • mount ConfigMap: for the app build process, we want to mount build-specific configs, for example Maven settings.xml, npm config, etc., that point to internal artifact repos
  • node selector: we want to make sure the runner runs on dedicated k8s nodes

So given that the list above already covers most of the pod spec, why not just have full pod spec support now?

Also, from the user's view, if we support the above, it easily makes people think the rest of the pod spec is supported as well, but they might be frustrated when they figure out that isn't true for the extra fields, and then have to raise a feature request and wait until the feature is available.

@mumoshu
Collaborator

mumoshu commented Mar 18, 2020

@missedone Thanks! I thought we had no conflict at all, but maybe that was my misconception?

Just to be clear - are you asking to add full support for the pod template spec from the beginning?

@missedone

Yes for me, since I can already see this coming.

@mumoshu
Collaborator

mumoshu commented Mar 18, 2020

then have to raise a feature request and wait until the feature is available.

Yeah, I don't like doing that myself either. I agree with you from "the user's perspective".

But... who has the bandwidth to complete the implementation in one shot? Are you willing to contribute the full implementation yourself, or to keep submitting pull requests until it's complete? Then I think that's fine.

I don't think anyone can "commit" their spare time until it's complete. That's why I've suggested the gradual-enhancement plan.

@missedone

missedone commented Mar 18, 2020

Yes, I'm willing to contribute, but I can't commit all my spare time.
Could you list the potential effort needed for full pod spec support by embedding corev1.PodTemplateSpec in RunnerSpec, just as @alexandrst88 did? I may be overlooking something, which made me think it was a simple task.
I think this is a good opportunity for me to learn Kubernetes and GitHub Actions more deeply.
Thanks

@alexandrst88
Contributor Author

Yeap, as I said before, we can't just use PodTemplateSpec as-is, because we would need to delete the containers field from the CRD validation.

Kubernetes API validation forces you to also specify the containers field :)

lol, it seems that in elastic/cloud-on-k8s#1822 they removed the validation from the CRD.

So I have to go through the most popular fields and reimplement them :)

I can start from #5 (comment), what @missedone proposed, and add some more. Will try to finish by EOW, thanks.

@alexandrst88
Contributor Author

alexandrst88 commented Mar 18, 2020

So the full list is here: https://godoc.org/k8s.io/api/core/v1#PodSpec, as the metadata will be inherited from the Runner's.

@missedone I've added the most-used pod spec fields in 0cfbffa for now. So I just need to handle the merge logic.

@mumoshu
Collaborator

mumoshu commented Mar 20, 2020

I believe this is now mostly done via #7.

Please open separate issues if there are any missing fields or any kind of improvements, so that we can better track each of them.

Thanks again to everyone involved, especially @alexandrst88 for the pull request!

@missedone

Thank you, guys.
