CSPL-633 Support extraENV in CR spec #325
Conversation
I believe you would need to run `make generate` so that the CRDs themselves are updated with extraEnv in this PR as well, right?
Yeah, I need to run `make generate`.
Squashed commits.
Fixed by using yq v3.2.1.
```go
// Add extraEnv from the CommonSplunkSpec config to the extraEnv variable list
for _, envVar := range spec.ExtraEnv {
	extraEnv = append(extraEnv, envVar)
}
```
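The merge above can be sketched in isolation. A minimal, runnable example, using a local `EnvVar` struct as a simplified stand-in for `corev1.EnvVar` (assumption: only name/value fields shown, `valueFrom` sources omitted):

```go
package main

import "fmt"

// EnvVar is a simplified stand-in for corev1.EnvVar; the real type also
// carries a ValueFrom field for ConfigMap/Secret sources.
type EnvVar struct {
	Name  string
	Value string
}

// mergeExtraEnv appends the CR's extraEnv entries onto the env vars the
// operator has already built, preserving their order. append(env, extra...)
// is equivalent to the per-element loop in the PR.
func mergeExtraEnv(env, extra []EnvVar) []EnvVar {
	return append(env, extra...)
}

func main() {
	base := []EnvVar{{Name: "SPLUNK_ROLE", Value: "splunk_standalone"}}
	extra := []EnvVar{{Name: "ANSIBLE_EXTRA_FLAGS", Value: "-vvv"}}
	merged := mergeExtraEnv(base, extra)
	fmt.Println(len(merged), merged[1].Name) // 2 ANSIBLE_EXTRA_FLAGS
}
```

The extras are appended after the operator-generated vars, so the CR's entries come last in the container's `env` list.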
Looking at the CRD changes, it seems like `[]corev1.EnvVar` supports multiple ways of specifying the env variables to source from (ConfigMap, Secret object, etc.). Wondering if we need to handle those the same way, e.g. reading those resource objects and populating them as part of the extraEnv here?
Also, what happens if somebody specifies an environment source instead of a value, and the source is not created/available? Do we need to do a sanity check for those cases?
Since the extraEnv is the same `[]corev1.EnvVar` object, it will behave the same as all the other `EnvVar` objects. Here we are just appending the `ExtraEnv` section to the env vars already created. If `EnvVarSource` is used and the source doesn't exist, this should throw an error when creating the pod.
Applied CR:

```yaml
apiVersion: enterprise.splunk.com/v1beta1
kind: Standalone
metadata:
  name: sa
  finalizers:
  - enterprise.splunk.com/delete-pvc
spec:
  replicas: 1
  extraEnv:
  - name: ANSIBLE_EXTRA_FLAGS
    value: "-vvv"
  - name: FROM_SOURCE
    valueFrom:
      secretKeyRef:
        name: doesntexistsecret
        key: awsSecretAccessKey
```
Error:

```
splunk-sa-standalone-0   0/1   CreateContainerConfigError   0   101s   10.244.0.22   kind-control-plane   <none>   <none>
...
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  13s                default-scheduler  Successfully assigned default/splunk-sa-standalone-0 to kind-control-plane
  Normal   Pulled     1s (x3 over 13s)   kubelet            Container image "docker.io/splunk/splunk:8.1.3" already present on machine
  Warning  Failed     1s (x3 over 13s)   kubelet            Error: secret "doesntexistsecret" not found
```
I'm not sure we really need to do this prior to apply, since it's already handled by Kubernetes when the pod is created.
Thanks Jeff.
Doing it prior may help in the case of an incremental config change, where we destroy an existing Pod before starting a pod that references the missing resource, which may be a rare case :-)
```diff
@@ -943,7 +1047,6 @@ spec:
             x-kubernetes-int-or-string: true
           required:
           - port
-          - protocol
```
Is this removed because of the newer SDK?
Yes, I assume so. I didn't touch this config, just ran `make generate` and it threw out this diff.
Having to re-add the `protocol` field is a limitation of the workaround to kubernetes/kubernetes#91395 that we currently have to apply any time we generate CRDs.
I have been looking into a better solution to automate this process or work around the issue altogether; I can open a CSPL to formally track this.
Other projects that have hit the issue (kubernetes-sigs/controller-tools#444) mention that upgrading their controller-gen version works around it, but I don't believe we use controller-gen to generate the CRDs. There are some other ideas mentioned in kubernetes/kubernetes#91395, but the automated ones seem to require modifying k8s API code.
Manually re-added https://github.com/splunk/splunk-operator/pull/331/commits back, updated to latest, and squashed.
Problem
Splunk operator developers need a method to add additional env variables to the Splunk instance pods for debugging and Splunk app installation. The operator needs to provide a method of defining these environment variables in the CRD so that they are propagated to all created Splunk instances when deployed.
Solution
Allow users to specify additional env variables to be passed down to the Splunk instance pods via the CRD with the following config:

```yaml
kind: Standalone
metadata:
  name: sa
spec:
  extraEnv:
  - name: SOME_ENV_VAR
    value: "value1"
  - name: MORE_ENV_VARS
    value: "value2"
```

These environment variables will be passed and set in the env of each Splunk instance created by applying this CRD.
Added additional unit tests to validate that the `extraEnv` parameters are passed to the `env` of the created Splunk instance containers.
Tests
Along with the unit tests, this functionality can be verified manually as well. Validate that when adding `ANSIBLE_EXTRA_FLAGS` to the `extraEnv:` section, Ansible prints out verbose debug output.
Check `kubectl describe`:
Validate with the Ansible logs:
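A sketch of those manual checks, assuming the Standalone CR above was applied in the default namespace and the pod follows the operator's `splunk-<name>-standalone-0` naming convention:

```shell
# Confirm the extra env vars landed in the pod's environment section.
kubectl describe pod splunk-sa-standalone-0 | grep -A 2 'ANSIBLE_EXTRA_FLAGS'

# With -vvv set, the Ansible provisioning output in the container
# logs becomes verbose; spot-check the log tail for the extra detail.
kubectl logs splunk-sa-standalone-0 | tail -n 50
```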