Splunk Operator: something breaking local config files on pod restart #1212
Comments
The same thing happened in etc/system/local/server.conf and in etc/system/local/web.conf.
So every file that was defined in default.yml was affected.
An unmasked diag was uploaded in case #3285863.
I found how to replicate the issue: delete/stop/kill the splunk process in the pod; after some time the liveness probe will trigger a restart of the pod, and after that you'll see the broken config.
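A minimal sketch of that repro, assuming a Standalone deployment with hypothetical pod and namespace names (`splunk-stdln-standalone-0` in namespace `splunk`) and the operator's default liveness probe:

```sh
# Stop splunkd inside the pod; the liveness probe should eventually fail
# and the kubelet will restart the container.
kubectl -n splunk exec splunk-stdln-standalone-0 -- /opt/splunk/bin/splunk stop

# Watch for the restart (the RESTARTS counter increments).
kubectl -n splunk get pod splunk-stdln-standalone-0 -w

# After the restart, inspect the regenerated local config for duplicated keys.
kubectl -n splunk exec splunk-stdln-standalone-0 -- \
  cat /opt/splunk/etc/system/local/server.conf
```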
Reported: splunk/splunk-ansible#751
@iaroslav-nakonechnikov we are looking into this issue now; we will update you with our findings.
The issue still exists in 9.1.1.
@yaroslav-nakonechnikov, we are working with the splunk-ansible team to fix this issue. We will update you once that is done.
Was it fixed?
Hi @yaroslav-nakonechnikov, this fix didn't go into 9.1.1; it's planned for 9.1.2. We will update you once the release is complete.
@vivekr-splunk 9.1.2 has been released, but there is still no news here.
Hello @yaroslav-nakonechnikov, this is fixed in the 9.1.2 build.
I managed to test it, and yes, it looks like this is fixed.
Please select the type of request
Bug
Tell us more
Describe the request
From time to time we see strange behavior where config files that were pushed through default.yml are broken after a pod restart.
Keys were duplicated without any values.
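For illustration only, a hypothetical excerpt of what such a broken file could look like (placeholder stanza and key; the real files are in the attached diag): a key written from default.yml shows up a second time with no value assigned.

```ini
[settings]
enableSplunkWebSSL = true
enableSplunkWebSSL
```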
Here is a configmap:
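The original ConfigMap contents were not captured in this report text; below is a minimal, hypothetical sketch of the usual wiring, assuming the defaults are supplied through a ConfigMap (placeholder name `splunk-defaults`) that a Standalone CR mounts via `spec.volumes` and references via `spec.defaultsUrl`, with a single web.conf setting as example content.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: splunk-defaults        # placeholder name
  namespace: splunk
data:
  default.yml: |
    splunk:
      conf:
        - key: web             # rendered as etc/system/local/web.conf
          value:
            directory: /opt/splunk/etc/system/local
            content:
              settings:
                enableSplunkWebSSL: true
---
apiVersion: enterprise.splunk.com/v4   # use the API version matching your operator release
kind: Standalone
metadata:
  name: stdln
  namespace: splunk
spec:
  volumes:
    - name: defaults
      configMap:
        name: splunk-defaults
  # operator-managed volumes are mounted under /mnt/<volume-name>
  defaultsUrl: /mnt/defaults/default.yml
```

On each container (re)start, splunk-ansible re-renders the .conf files listed under `splunk.conf` from this default.yml; per the linked splunk/splunk-ansible#751, that rendering step is where the duplicated, value-less keys were showing up.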
Expected behavior
default.yml is rendered the same way on every run, without issues.
Splunk setup on K8S
EKS 1.27
Splunk Operator 2.3.0
Splunk 9.1.0.2
Reproduction/Testing steps
After some unpredicted restart of the pod, the new pod started with a broken config.