Add ability in KafkaConnect to use secrets provided by Secrets Store CSI Driver #5277
Triaged on 28.4.2022: This makes sense on the functional level. I wonder whether the code implementation really differs from #3693, which is about generic volumes. I guess it can be implemented in two ways:
|
Is the answer to this on this reddit post? |
This feature request is still valid. For anyone looking for a solution: the existing functionality, as of version 0.43.0, does not solve the problem of the ephemeral volumes required by the Secrets Store CSI Driver. With #10099 "Support of additional volumes in pod", Strimzi can mount a Persistent Volume into a KafkaConnect pod via a claim:

```yaml
spec:
  template:
    pod:
      volumes:
        - name: creds-volume
          persistentVolumeClaim:
            # PersistentVolumeClaimVolumeSource v1 core: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#persistentvolumeclaimvolumesource-v1-core
            claimName: creds
            readOnly: true
```

This will NOT work for the Secrets Store CSI Driver. In more detail, the CSI driver expects its volume to be defined inline in the pod spec:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: secrets-store-inline
spec:
  containers:
    - name: busybox
      image: registry.k8s.io/e2e-test-images/busybox:1.29
      command:
        - "/bin/sleep"
        - "10000"
      volumeMounts:
        - name: creds-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
  volumes:
    - name: creds-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "secrets-sync"
```

after defining a `SecretProviderClass`:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: secrets-sync
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-service/database_username"
        objectAlias: "my-service-db-username"
      - objectName: "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-service/database_password"
        objectAlias: "my-service-db-password"
  secretObjects:
    - secretName: my-service-creds
      type: Opaque
      data:
        - objectName: "my-service-db-username"
          key: username
        - objectName: "my-service-db-password"
          key: password
```

Does anyone else have a better hack? 🤔 🙏 |
I don't think the additional volumes currently support the CSIVolumeSource. So I do not think this issue is currently covered by the additional volumes at all. Or maybe I missed the point you were trying to make? |
Thanks for the reply @scholzj. No, defining additional volumes does not solve this issue. I think CSIPersistentVolumes are needed. But Secrets Store CSI Driver needs them to be mounted for the secrets to appear. |
I opened proposal strimzi/proposals#138 to address this. |
Is your feature request related to a problem? Please describe.
At the moment there is no good way for Kafka Connect to use secrets fetched from cloud key vaults (in our case Azure) via the Secrets Store CSI Driver project. This CSI driver integrates with cloud key vaults and automatically creates corresponding Kubernetes secrets, but it only does so while it is mounted as a volume. The detailed behaviour is described in the CSI driver documentation. And at the moment there is no option to specify custom volume mounts in the KafkaConnect Strimzi resource, neither via the KafkaConnectTemplate spec nor via the ExternalConfiguration spec.
Describe the solution you'd like
a) Extend the externalConfiguration spec with the ability to specify a CSI secret provider
b) Add the ability to manually specify volume mounts in the KafkaConnect deployment
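To illustrate option (b), a hypothetical shape the KafkaConnect resource could take if its pod template accepted inline CSI volumes. This API does not exist in Strimzi; the `csi` volume source, the resource name, and the mount path are all assumptions sketching the request, not a supported configuration:

```yaml
# HYPOTHETICAL sketch -- Strimzi's pod template does not currently accept a
# csi volume source; this only illustrates what the feature request asks for.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect            # assumed name
spec:
  template:
    pod:
      volumes:
        - name: creds-inline
          csi:                # hypothetical: CSIVolumeSource support
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "secrets-sync"
    connectContainer:
      volumeMounts:
        - name: creds-inline
          mountPath: /mnt/secrets-store   # assumed path
          readOnly: true
```

Mounting the volume in the Connect container is what would trigger the CSI driver to fetch the objects and, with `secretObjects` configured, sync them into a Kubernetes Secret.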
Describe alternatives you've considered
At the moment the only option to use secrets provided by the Secrets Store CSI Driver is to run a separate dummy pod, which must be started before KafkaConnect and which initiates the secret synchronization.
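The dummy-pod workaround above can be sketched as follows. The pod does nothing except mount the CSI volume; that mount triggers the driver to sync the Kubernetes Secret, which KafkaConnect can then consume once it exists. The pod name and the `secrets-sync` SecretProviderClass name are illustrative assumptions:

```yaml
# Sketch of the dummy-pod workaround (names are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: secrets-sync-dummy
spec:
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9   # does nothing, just keeps the pod running
      volumeMounts:
        - name: creds
          mountPath: /mnt/secrets-store
          readOnly: true
  volumes:
    - name: creds
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "secrets-sync"
```

Note that the synced Secret is tied to the lifetime of the mounting pod, so in practice a Deployment with one replica is more robust than a bare Pod.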