Changes to work with Pod Security Restricted and OpenShift #223
Conversation
This is looking good; I'm just looking into a tangentially-related issue.
Since our Linux Dockerfile does not have a USER directive, combined with the fact that we're mounting /var/lib/questdb/db, .checkpoint, and snapshot by subPaths, we get mixed ownership in the db root when we add the security context (run as user 10001):
drwxrwxrwx 2 root root 4.0K Oct 6 18:04 .checkpoint
drwxr-xr-x 2 questdb questdb 4.0K Oct 6 18:04 conf
drwxrwxrwx 9 root root 4.0K Oct 6 18:04 db
-rw-rw-r-- 1 questdb questdb 503 Oct 6 18:04 hello.txt
drwxr-xr-x 2 questdb questdb 4.0K Oct 6 18:04 import
drwxr-xr-x 3 questdb questdb 4.0K Oct 6 18:04 public
drwxrwxrwx 2 root root 4.0K Oct 6 18:04 snapshot
I was reading about this, and while it's OK in this case because the dirs have 777 permissions, this might not be guaranteed across all Kubernetes distributions and volume drivers.
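For context, the security context in question is along these lines; this is a minimal sketch of a Pod Security "restricted"-compatible pod spec fragment, and the exact values the chart ends up using are an assumption based on the discussion:
# Sketch only: a "restricted"-profile security context (UID/GID values assumed).
securityContext:
  runAsNonRoot: true
  runAsUser: 10001
  runAsGroup: 10001
  fsGroup: 10001
  seccompProfile:
    type: RuntimeDefault
containers:
  - name: questdb
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]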
Fixing this is relatively straightforward. We should remove the following subPath mounts (in statefulset.yaml):
- name: {{ include "questdb.fullname" . }}
  mountPath: {{ .Values.questdb.dataDir }}/db
  subPath: db/
- name: {{ include "questdb.fullname" . }}
  mountPath: {{ .Values.questdb.dataDir }}/.checkpoint
  subPath: .checkpoint/
- name: {{ include "questdb.fullname" . }}
  mountPath: {{ .Values.questdb.dataDir }}/snapshot
  subPath: snapshot/
and replace them with
- name: {{ include "questdb.fullname" . }}
  mountPath: {{ .Values.questdb.dataDir }}
I've only tested this in a kind cluster, but I believe this should be safe to change, since we're mounting the entire directory instead of just the subPaths. I still need to confirm that the config subPaths will mount correctly (assuming that we're now mounting their parent /var/lib/questdb dir). That's probably why I used subPaths in the first place...
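A minimal sketch of how the resulting volumeMounts could look, assuming the ConfigMap-backed config file keeps its per-file subPath mount alongside the new full-directory mount (the "server-config" volume name and "server.conf" file name are assumptions, not taken from the chart):
# Sketch only: full-directory data mount plus a per-file config subPath mount.
volumeMounts:
  - name: {{ include "questdb.fullname" . }}
    mountPath: {{ .Values.questdb.dataDir }}
  - name: server-config
    mountPath: {{ .Values.questdb.dataDir }}/conf/server.conf
    subPath: server.conf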
FYI, I am working with the 9.0.3-rhel image at the moment. The container does appear to set 'USER questdb'. For compliance with OpenShift, the container should be authored with 'USER 10001'. I just created questdb/questdb#6238, which should address this within the QuestDB Dockerfile (though I do not have a way to test/confirm this at this time). Are you sure that the container is running as root? I would expect it to be running as questdb (10001), and therefore the files would be written correctly with the Helm chart as authored. It appears to be working in the places where I have been testing.
Perfect, that's what I was going to recommend. We have a multistage build with 2 separate outputs (one for
Edit: After reviewing the PR, I've decided to keep the
Removing the paths will not work in conjunction with overriding the config files. I think the subPaths need to stay. Is there a reason we need to change that? I'd prefer to keep that separate from this PR.
That's what I initially thought as well, but after a bit of testing it appears to work. Take a look at this values.yaml:
questdb:
  serverConfig:
    enabled: true
    options:
      shared.worker.count: "3"
and the resulting file inside the container: the subPath volumeMounts correctly mount the ConfigMap values to their respective files!
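For reference, a rough sketch of how those values could render, assuming the chart writes each serverConfig option as a key=value line into a ConfigMap entry that the subPath mount targets (the ConfigMap name and exact template output are assumptions):
# Hypothetical rendering of the serverConfig options above; name and format assumed.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "questdb.fullname" . }}-server-config
data:
  server.conf: |
    shared.worker.count=3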
I'm trying to avoid potential permissions issues based on changes to the Helm chart. If we remove the snapshot/.checkpoint/db subPaths, we get a nice clean directory listing, with consistent ownership and group in the db root. Compare this to my previous example, where .checkpoint, db, and snapshot were owned by root. Can you confirm this behavior on OpenShift?
@sklarsa: I can confirm that the change you requested is working. However, for OpenShift to fully work, I am testing with a Helm chart that has further changes. This PR does not address temporary locations that need to be mounted as emptyDirs, since the running UID within OpenShift (a random UID) is not able to write to those places. I plan to recommend that in a later PR. With these changes incorporated, do you have any further feedback on this PR?
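As a rough illustration of that follow-up idea, the writable temporary paths could be backed by emptyDir volumes so an arbitrary OpenShift UID can write to them; this PR does not include the change, and the /tmp path and "tmp" volume name below are assumptions:
# Sketch for a possible future PR: back scratch locations with emptyDir.
volumes:
  - name: tmp
    emptyDir: {}
containers:
  - name: questdb
    volumeMounts:
      - name: tmp
        mountPath: /tmp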
@wkbrd lgtm, thanks! Let's open up a separate PR for the temp dir handling. Perhaps we can roll that into the openshift flag you added in this PR? Also, can you please sign the CLA so we can merge these? Thanks!
This includes changes to add support for running under Pod Security Admission with the "Restricted" profile (the most restrictive policy).
It also includes OpenShift support for running with a flexible UID: runAsUser, runAsGroup, and fsGroup are removed on OpenShift, since these fields are managed by OpenShift for security reasons.
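A minimal sketch of how that OpenShift behavior could be expressed in the statefulset template, assuming a values flag named openshift (the flag name comes from the discussion above; the template details and UID/GID values are illustrative, not the PR's actual diff):
# Illustrative only: omit explicit UID/GID fields on OpenShift, which assigns
# them itself; keep them for the plain Pod Security "restricted" case.
securityContext:
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault
  {{- if not .Values.openshift }}
  runAsUser: 10001
  runAsGroup: 10001
  fsGroup: 10001
  {{- end }}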