Allow to override service account name for resources #4716
Comments
@scholzj does the above make sense from the Strimzi design standpoint? It shouldn't be awfully hard to implement, but first I'd like to make sure that the change would be welcomed.
TBH, I'm personally not that eager to have this done. If we let everyone customize everything, it will be a road to hell and make the project unmaintainable. I would prefer to keep driving the customisations through the template sections. I'm also not sure I understand the idea of this. Do you run many Connect clusters which are resource heavy and consume a lot of Kubernetes resources? Or why does the service account matter?
As long as it allows customizing the service account, it's fine. I don't have a specific API preference.
Yes. Logically, it's a single large cluster (hence the need for the same service account). But physically, each connector runs in its own single-worker cluster with tweaked resource requests/limits.
Well, that depends on what customizations you need. It would let you set labels and/or annotations on the service account. Not change its name.
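For context, this is roughly what the existing template mechanism covers. A minimal sketch, assuming a recent Strimzi API version; the labels and annotations are made-up examples:

```yaml
# Sketch only: the template section can decorate the generated ServiceAccount
# with labels/annotations, but it cannot rename it.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092   # assumed bootstrap address
  template:
    serviceAccount:
      metadata:
        labels:
          team: data-platform          # illustrative label
        annotations:
          example.com/owner: cdc-team  # illustrative annotation
```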
Ok, but what are the connectors doing? Why do they need some special RBAC rights?
I see. I'm specifically interested in specifying the name.
Those are Debezium MySQL connectors. Each connects to its own MySQL cluster, and they all use a Kafka Connect Vault config provider to get the MySQL credentials. Vault integrates with Kubernetes, so access to a given path is granted on a per-service-account basis. So each time we add a new cluster/connector, we need to update the Vault configuration to grant the new connector's service account access. This is what I'm trying to avoid by using the same SA for all.
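To make the setup concrete, here is a rough sketch of the pattern described above. The Vault config provider class and placeholder syntax stand in for whichever provider is actually in use, and the cluster/connector names are invented:

```yaml
# Sketch of the setup described above. Each KafkaConnect cluster runs a single
# worker, and connectors read MySQL credentials through a Vault config provider
# using the worker's service account token. Provider class and placeholder
# syntax are provider-specific and shown here only as placeholders.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: mysql-cluster-a-connect
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  config:
    config.providers: vault
    config.providers.vault.class: <vault-config-provider-class>  # depends on the provider used
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: mysql-cluster-a
  labels:
    strimzi.io/cluster: mysql-cluster-a-connect
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.hostname: mysql-cluster-a
    database.user: debezium
    database.password: ${vault:secret/data/mysql-cluster-a:password}  # provider-specific placeholder
```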
That is a bit of a weird hack around the service account. So how does it get the Vault credentials out of the SA? Does it use the token to authenticate against Vault? Can you simply use a shared secret for this instead of a shared SA?
Yes (documentation).
In this case, probably yes. But we also have less busy environments where we run multiple connectors in one Connect cluster. The process of provisioning new connectors is automated. If secrets were implemented as regular Kubernetes secrets and mounted to the Connect workers via the …
Why would you need multiple secrets? If a single service account is good enough to cover all connectors, a single secret should be good enough to cover all connectors as well, or? In the end, maybe you can just mount the token secret of the single service account you have and use it, or?
Each connector captures data from its own MySQL cluster which has a user for the connector. There's one secret per MySQL cluster.
That would require synchronization of the connector user credentials between the MySQL clusters. Currently there's no way or intent to do that.
As far as I understand, there's currently no way to mount a volume to the worker (#3693). How would one do that? If it were possible, we could look at using something other than a Kubernetes service account for authorization at Vault (e.g. something generic like an OAuth client ID and secret).
Sorry if it wasn't clear ... but that was my idea from the beginning: mount the secret which gets you connected to your Vault and pull the configs from there. If you want to use one service account for that, then one secret with the Vault credentials should be enough as well. #3693 talks about mounting persistent volumes, which is indeed not supported. But you would want to keep this in a Secret (which in the end is the same thing the service account uses) - and for that you can use this: https://strimzi.io/docs/operators/latest/full/using.html#type-ExternalConfiguration-reference
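A minimal sketch of this suggestion, following the linked ExternalConfiguration reference; the Secret name is invented:

```yaml
# Sketch: mount one shared Secret with the Vault credentials into the Connect
# pods via externalConfiguration (per the linked reference). Strimzi exposes
# the volume under /opt/kafka/external-configuration/<volume-name>/ inside
# the worker pods.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: mysql-cluster-a-connect
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  externalConfiguration:
    volumes:
      - name: vault-credentials
        secret:
          secretName: vault-credentials   # the same Secret shared by all Connect clusters
```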
Yeah, I rephrased your idea in my own words. Thanks for the suggestion!
Yeah, I was thinking that, since the service account token is mounted to the pod as a volume, …
I think that should work. We chose the approach of using Kubernetes service accounts for authentication at Vault because it was right there on the surface (supported by Vault). But it doesn't have to be this way. Now I have some homework to do.
With the introduction of strimzi/kafka-kubernetes-config-provider, I believe the issue is no longer relevant. It should be easier for us to migrate from Vault to Kubernetes secrets and grant each Kafka Connect cluster's service account access to a given secret once it's deployed. Thank you for the discussion and the new configuration provider, @scholzj!
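For anyone landing here later, a rough sketch of that migration path, assuming the provider class and placeholder syntax from the kafka-kubernetes-config-provider README; resource names and the namespace are invented:

```yaml
# Sketch: read connector credentials directly from Kubernetes Secrets via the
# Strimzi config provider, and grant each Connect cluster's service account
# read access to the Secret it needs. Check the kafka-kubernetes-config-provider
# README for the exact class name and placeholder format.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: mysql-cluster-a-connect
  namespace: cdc
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  config:
    config.providers: secrets
    config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mysql-cluster-a-credentials-reader
  namespace: cdc
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["mysql-cluster-a-credentials"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mysql-cluster-a-connect-credentials-reader
  namespace: cdc
subjects:
  - kind: ServiceAccount
    name: mysql-cluster-a-connect-connect   # service account created by Strimzi (<cluster-name>-connect)
    namespace: cdc
roleRef:
  kind: Role
  name: mysql-cluster-a-credentials-reader
  apiGroup: rbac.authorization.k8s.io
# A connector config can then reference values such as
#   ${secrets:cdc/mysql-cluster-a-credentials:password}
```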
Is your feature request related to a problem? Please describe.
There are several use cases where I'd like to use the same service account name for multiple Kafka Connect clusters:
Describe the solution you'd like
Provide a KafkaConnect resource property that would allow the service account name to be defined explicitly. The default would remain the deployment name.
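Purely as an illustration of the request, a hypothetical shape the property could take; this field does not exist in the Strimzi API:

```yaml
# Hypothetical sketch of the requested property; this field does NOT exist in
# the current Strimzi API and is shown only to illustrate the request.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: connector-a
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  template:
    serviceAccount:
      metadata:
        name: shared-connect-sa   # hypothetical name override shared by several Connect clusters
```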
Describe alternatives you've considered
Duplicating service accounts.
Additional context
The additional burden comes from the fact that the service accounts duplicated in Kubernetes need to be reflected in other subsystems, e.g. HashiCorp Vault.