# Common Auth Config option for Kafka #157
Going with the "global" config for all the components would have the benefit that the same API would work for the Broker as well:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: default
  annotations:
    eventing.knative.dev/broker.class: Kafka
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: my-cfg
```

But I'm not sure if mixing different configuration concerns is really a good idea 😄
Maybe I didn't see it here, but another option is to create a proper CRD (I see you using ConfigMap in all the examples) that holds the deets about the required connection parameters. If we do it as a CRD, say KafkaConnection, then we can do proper validation, defaulting, etc. over a ConfigMap. Then we could embed that in other constructs; for example, you could create a KafkaBrokerConfig CRD that has the KafkaConnection embedded in it, and KafkaChannel could also embed this field, etc.
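For illustration, a hypothetical KafkaConnection instance might look like the following sketch; the group/version, kind, and field names here are assumptions, not an agreed API:

```yaml
apiVersion: messaging.knative.dev/v1alpha1  # hypothetical group/version
kind: KafkaConnection
metadata:
  name: my-kafka
  namespace: default
spec:
  # hypothetical fields; a real CRD would define and validate these
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
  auth:
    # reference to a Secret holding TLS/SASL material
    secret:
      name: my-kafka-auth
```

The advantage over a bare ConfigMap is that the API server can reject malformed connection specs at admission time instead of failing at runtime.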
One thing to consider is that not all Kafka-backed implementations have the same set of configuration parameters. For instance, while there is a lot of commonality between the Sarama library and the Java Kafka library, there are also differences. A better name for …
I wonder if we should recommend the use of the …
Not always using just ConfigMaps - I think a very focused type would be the …
We don't have to do the "Kafka impl-lib" CRD right now, or even ever. It all depends on the tooling. I personally prefer Proposal 2, except instead of …
@vaikas @lionelvillard for Serving there is also interest in doing some "config CRDs" |
@travis-minke-sap @eric-sap, would you be ok to extend the KafkaChannel API as follows:
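A sketch of what that extension might look like, mirroring the concrete example given later in this thread (the names here are placeholders):

```yaml
apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
metadata:
  name: my-channel
  namespace: default
spec:
  # proposed new field: a reference to an implementation-defined config object
  config:
    apiVersion: v1
    kind: ConfigMap
    name: my-kafka-config
    namespace: default
```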
The referenced object could be anything and implementation dependent (i.e. not part of the API). Explicitly referencing the Kafka configuration in the channel allows for multiple Kafkas per k8s cluster. I believe the distributed impl relies on labels to provide this capability. Correct me if I'm wrong. Eventually I would also like to change the KafkaSource API to use …
The distributed KafkaChannel implementation doesn't currently support connecting to multiple Kafkas. For EventHub usage it does aggregate/pool multiple SASL auths to work around Azure limitations, but the user cannot control which Topics go to which Kafka/EventHub Namespace as they are just round-robin'd. Other usages (standard Kafka and custom sidecar) only allow for a single Kafka/auth. Moving the config from installation level to per-channel level is an interesting approach. This would be a significant change to the implementation to support such fine-grained brokers/auth. I suppose you could add it to the API and we could ignore it for the short term while we work on refactoring to support it. Let me look at the implementation a bit, get my thoughts together, and get back to you here (soon - I promise.)
While the idea of per-channel brokers/authentication is interesting, and I generally don't have anything against the approach, it seems like a big feature/capability shift that merits a design doc and discussion. I have the following questions just to start with…

Supporting this feature would require a large-scale refactoring of the distributed KafkaChannel's controller / receiver / dispatcher implementations. I'm open to doing that work but feel like we shouldn't just jam it in as a means of fixing an issue/bug. Thoughts on how to proceed?
Here are my current thoughts.

Yes, that's the main goal: being able to connect to multiple Kafka clusters from a single k8s cluster.

That's a great idea! Definitely something we should support.

All Knative channels and beyond: sources, bindings, brokers, etc... More on this topic soon.

When the annotation is set to …

Absolutely.

Let's come up with a design first and then we'll see.
### Proposal 3: Leverage KafkaBinding

TL;DR: Use KafkaBinding to represent Kafka authorization and config parameters, and their binding to resources. Kafka resource implementations (Channel dispatcher, Source adapter, Sink receiver, Broker) look for the matching binding and apply it.

Examples:

Binds all KafkaChannels to the `config-kafka` configuration:

```yaml
apiVersion: bindings.knative.dev/v1alpha1
kind: KafkaBinding
metadata:
  name: all-channels
spec:
  subject:
    kind: KafkaChannel
  config:
    apiVersion: v1
    kind: ConfigMap
    name: config-kafka
```

Binds all channels labeled `team: a`:

```yaml
apiVersion: bindings.knative.dev/v1alpha1
kind: KafkaBinding
metadata:
  name: team-a-channels
spec:
  subject:
    selector:
      matchLabels:
        team: 'a'
  config:
    apiVersion: v1
    kind: ConfigMap
    name: config-kafka-team-a
```

Binds all channels labeled `team: a`, with separate producer and consumer configurations:

```yaml
apiVersion: bindings.knative.dev/v1alpha1
kind: KafkaBinding
metadata:
  name: team-a-channels
spec:
  subject:
    selector:
      matchLabels:
        team: 'a'
  producer-config:
    apiVersion: v1
    kind: ConfigMap
    name: config-kafka-team-a-producer
  consumer-configs:
    - subscriptions:
        - '*'
      config:
        apiVersion: v1
        kind: ConfigMap
        name: config-kafka-team-a-consumer
```

This is where I'm currently heading. There are lots of details to flesh out, in particular around config aggregation rules (i.e. determining which config applies to a given resource). In this proposal I'm reusing the concept of KafkaBinding, but the implementation is quite different since in this case the subject is not a PodSpecable. How it's actually done is up to us to decide (maybe the binding controller can add an annotation on the CRs, e.g. …).

I think this proposal offers a lot of flexibility and nicely decouples the configuration from the API. It covers a wide spectrum of configurations, from one config for the entire k8s cluster down to each subscription with its own config. Vendors can decide to fully manage KafkaBindings, or not. And the current API does not have to change too much (only KafkaBinding and KafkaSource).

@matzew @travis-minke-sap WDYT? @n3wscott do you think that's an acceptable use of the binding concept?
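For the annotation idea above, a hypothetical sketch of what the binding controller could write on a matched resource (the annotation key is invented for illustration, not a defined name):

```yaml
apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
metadata:
  name: my-channel
  annotations:
    # hypothetical annotation recording which KafkaBinding matched this resource
    bindings.knative.dev/kafka-binding: team-a-channels
```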
fwiw (not much ;)... I haven't looked at the "bindings" implementations much, but the above seems really cool - I like the flexibility for the user to control the granularity of the configuration. Probably you're already aware, and it likely doesn't impact this, but the ConfigMap will reference a Secret with the Kafka auth - I wasn't sure if there was any reason to de-couple them and specify both in the binding or not? (Meaning, would you want to reuse a ConfigMap with two different sets of credentials?)

Also, since I last posted in this issue... Matthias and Ali have made good progress on consolidating some of the configuration logic from the "distributed" KafkaChannel for re-use by both implementations. I am also in the process of removing the distributed channel's "odd" approach of watching Secrets, to align with the consolidated approach of specifying the single Secret in the ConfigMap. This will remove the ability to pool EventHub Namespaces, but the tradeoff is worth it for the moment. Only mentioning this since it means we'll be in a better place to adopt something like this binding approach than we were before.
More thinking on this. KafkaBinding is cute but bad UX IMO (in this context, not KafkaBinding in general). To get started, are we ok with:

```yaml
apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
metadata:
  name: kafka-channel-oney
  namespace: default
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: my-configmap
    namespace: default
```

When … It is up to the implementation to define which configuration parameters are supported, and even the format. For instance, some implementations might decide to support …

For the consolidated implementation, the ConfigMap follows this format: https://github.com/knative-sandbox/eventing-kafka/tree/master/pkg/channel/consolidated#configuring-kafka-client-sarama

Later on we may decide to promote some configuration parameters to the API to improve the UX. IMO …
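For reference, a minimal sketch of that ConfigMap format, assuming the `sarama` and `eventing-kafka` data keys described in the linked README (all values here are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-kafka
  namespace: knative-eventing
data:
  # Sarama client settings, embedded as a YAML string (illustrative values)
  sarama: |
    Net:
      TLS:
        Enable: true
      SASL:
        Enable: true
  # implementation-specific settings (illustrative values)
  eventing-kafka: |
    kafka:
      brokers: my-cluster-kafka-bootstrap.kafka:9092
```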
## Motivation
Provide a common AUTH/Config approach for the different Knative Kafka components.
## Current Kafka auth config story
The different Knative Kafka components have different AUTH configuration options.
The overall idea should be that there is one unified configuration for all components.
## Other configuration options in the Knative ecosystem

Looking at how some other Knative components implement their AUTH configuration: …
### Proposal 1: Ducktype (Auth) Configuration

Introduce an `AuthConfig` field that holds a reference to a Secret, which contains a list of well-defined keys for TLS/SASL configuration. The referenced Secret would have a structure like:
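A minimal sketch of what such a Secret could look like; the key names below are illustrative assumptions, since the well-defined key list is exactly what this proposal would have to specify:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-kafka-auth
type: Opaque
stringData:
  # illustrative SASL/TLS keys; the actual well-known names are TBD
  sasl.enable: "true"
  sasl.user: my-user
  sasl.password: my-password
  tls.enable: "true"
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```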
### Proposal 2: Ducktype global config

Introduce a (global) `config` that would be able to configure all sorts of aspects. The referenced `ConfigMap` would contain all of the aspects that are possible to configure for a Kafka "application", like:
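A sketch of what such a ConfigMap might hold, assuming broker addresses, an auth Secret reference, and client tuning are among the configurable aspects (all keys here are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-cfg
data:
  # illustrative settings; the real key set would be defined by the proposal
  bootstrapServers: my-cluster-kafka-bootstrap.kafka:9092
  authSecretName: my-kafka-auth
  # client tuning, e.g. Sarama settings embedded as a YAML string
  sarama: |
    Producer:
      RequiredAcks: -1
```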
### Discussion: (Auth)Config? Or global config?

Looking at the Knative Broker implementations (e.g. MTChannelBased or RabbitMQ), the CRD for `brokers.eventing.knative.dev` uses `Config` for this. However, it's up to the implementation what GVK it supports. The name `Config` is a pretty generic construct. In the case of the "default" channel-based broker, it configures which channel is used behind the scenes. In the case of Rabbit, it actually deals with auth concerns around the RabbitMQ broker.

Question: do we want a more global configuration, or more specific configuration options like `authConfig`, etc.?
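To make the question concrete, a hedged sketch of the two shapes side by side (field names are assumptions):

```yaml
# Option A: one global config reference covering all concerns
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: my-cfg
---
# Option B: a focused, auth-only reference (field name is an assumption)
spec:
  authConfig:
    secret:
      name: my-kafka-auth
```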
etc ?The text was updated successfully, but these errors were encountered: