Is your feature request related to a problem? Please describe.
We use KafkaConnect with the build/plugins/artifacts mechanism, which adds connectors to the Kafka Connect cluster. It's a really great feature.
We started with Debezium, which writes data into Kafka, and everything works fine.
Our next attempt was the kafka-backup plugin/connector, which takes topic content and exports it to a mounted volume. Unfortunately, we cannot mount a volume for the output of our connector with this use of Kafka Connect (the build with artifacts).
So what we need is a way to mount a PVC for our kafka-backup connector when using the build with artifacts. It could be via the pod template or any other way.
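For context, the build-with-artifacts setup we are using today can be sketched roughly like this (the cluster name, registry, and artifact URL are placeholders, not our real values):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
  annotations:
    # lets connectors be managed as KafkaConnector custom resources
    strimzi.io/use-connector-resources: "true"
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  build:
    output:
      # the operator builds an image with the plugins and pushes it here
      type: docker
      image: my-registry.example.com/my-connect:latest
    plugins:
      - name: kafka-backup
        artifacts:
          - type: tgz
            url: https://example.com/kafka-backup.tgz  # placeholder URL
```

What is missing for our use case is a supported place in this resource to declare a PVC-backed volume mount for the connector's output directory.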
Describe the solution you'd like
A way to mount a volume into a KafkaConnect cluster so a connector can write to it.
Describe alternatives you've considered
For the moment we use a Deployment based on the Kafka Connect image, configured with environment variables. With a Deployment we can mount a PVC. But we think the build with artifacts is much simpler to use and to maintain: the connectors are configurable with plain YAML and started automatically.
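The Deployment workaround looks roughly like the following sketch (image tag, env variable, and PVC name are placeholders for our actual configuration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-connect-backup
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-connect-backup
  template:
    metadata:
      labels:
        app: kafka-connect-backup
    spec:
      containers:
        - name: connect
          image: my-registry.example.com/kafka-connect:latest  # placeholder image
          env:
            # placeholder: our real setup passes the Connect worker
            # configuration through environment variables like this
            - name: BOOTSTRAP_SERVERS
              value: my-cluster-kafka-bootstrap:9092
          volumeMounts:
            # the part we cannot express with the build-with-artifacts approach
            - name: backup-storage
              mountPath: /backup
      volumes:
        - name: backup-storage
          persistentVolumeClaim:
            claimName: kafka-backup-pvc
```

This works, but we lose the operator-managed plugin build and the declarative KafkaConnector resources.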
Additional context
I saw some issues with a similar need, but in the end they moved to issue 5227, and I don't think that matches my need. We want storage for the Kafka connector's output.
This is a duplicate of #3693. That issue deals with the ability to add custom volume mounts and all the troubles which come with it. So I'm going to close this.
I'm not sure you are trying to use the right pattern here. Running a distributed Connect cluster with local storage will bring many possible issues: for example, you have no guarantee about which pod your connector will be scheduled on and what disk will be available there (and no, we cannot rely on everyone having RWX storage). Pushing the backups to remote object storage would normally be more elegant and very often also a lot cheaper. Running your own standalone Connect cluster would be another option, as you can manage the storage exactly the way you want and you also have better control over where the connector runs.