
Support for custom truststore at Connector level #546

Open
pjuarezd opened this issue Aug 9, 2022 · 6 comments


pjuarezd commented Aug 9, 2022

It would be great to have support at the S3 Sink connector level for an additional truststore, so that private CA and self-signed certificates can be trusted.

For instance, when using the property store.url to write to an S3-compatible server over HTTPS: if the S3-compatible server is set up with a private CA or self-signed certificates in an air-gapped environment, the S3 Sink connector fails SSL verification.

Some users have been working around this by adding the private CA or self-signed certificates to the default Java cacerts, but that requires root access to the container/machine running the Java process, which is not always available.
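To illustrate what the requested feature amounts to, here is a minimal JDK-only sketch of trusting a private CA from a separate truststore file, without touching the JVM-wide cacerts (and therefore without root access). This is not the connector's implementation; the truststore path and password are placeholders, and for a self-contained demo the "private CA" certificate is borrowed from the JVM's default trust store.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;

public class CustomTruststore {

    // Builds an SSLContext that trusts exactly the certificates in the given
    // truststore file, without modifying the JVM-wide cacerts. This mirrors
    // what a connector-level truststore property could do internally.
    static SSLContext sslContextFrom(String truststorePath, char[] password) throws Exception {
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream(truststorePath)) {
            trustStore.load(in, password);
        }
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        // Self-contained demo: borrow one CA certificate from the JVM's
        // default trust store to stand in for a private CA certificate.
        TrustManagerFactory defaults =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        defaults.init((KeyStore) null); // null = use the JVM default cacerts
        X509Certificate ca =
                ((X509TrustManager) defaults.getTrustManagers()[0]).getAcceptedIssuers()[0];

        // Write a one-entry truststore to a temp file, as an admin would
        // normally do with `keytool -importcert`.
        File f = File.createTempFile("private-ca-trust", ".p12");
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, null); // initialize an empty keystore
        ks.setCertificateEntry("private-ca", ca);
        try (FileOutputStream out = new FileOutputStream(f)) {
            ks.store(out, "changeit".toCharArray());
        }

        SSLContext ctx = sslContextFrom(f.getPath(), "changeit".toCharArray());
        System.out.println("protocol=" + ctx.getProtocol());
    }
}
```

The key point is that the resulting SSLContext is scoped to whatever client uses it (here, it would need to be handed to the S3 HTTP client), so nothing system-wide has to change.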

pjuarezd mentioned this issue Aug 9, 2022

pjuarezd commented Aug 9, 2022

Created a PR to support the custom keystore #547

@OneCricketeer

Why not add this to the parent project, in StorageSinkConnectorConfig? Or in the Kafka source itself, so that all connectors may use this property?

Also, why can't you use consumer.override prefix to set these values already?
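For reference, a consumer.override usage looks like the following connector-config fragment (paths and password are placeholders). Note that these keys are forwarded to the sink task's embedded Kafka consumer, which, as discussed below, is a different connection than the one to S3:

```properties
# Connector configuration fragment (placeholder paths/passwords).
# consumer.override.* keys are forwarded to the sink task's Kafka consumer,
# provided the worker's client-config override policy allows it.
consumer.override.ssl.truststore.location=/etc/secrets/kafka-truststore.jks
consumer.override.ssl.truststore.password=changeit
```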


joshuagrisham-karolinska commented Jul 10, 2023

Also, why can't you use consumer.override prefix to set these values already?

Hi! Sorry to drag an older issue back up, but we ran into this problem today, and it seems worth picking up again, since I am sure I am not the only one currently having it.

I can report that, at least in 7.3.0 (edit: and kafka-connect-s3 version 10.5.1), using the consumer.override prefix does not work with the S3 Sink Connector against a custom S3 endpoint with a private server certificate.

My working assumption is that this property only applies to consuming messages from Kafka, which already works and is configured at the server level, and that the underlying AWS libraries for writing to S3 don't read from this truststore property anyway.

For kicks I also tried the producer.override prefix, but I was pretty sure that would not do anything either, since we are not writing to Kafka with a producer client here (and yes, it did nothing ;) )


OneCricketeer commented Jul 10, 2023

The override prefix is not specific to any connector. It's a base feature of the Connect API and has been available since around 2.3, I want to say - https://kafka.apache.org/documentation/#connect_running

You do need to set the override policy on the worker first; otherwise, no, the override will not apply.
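Concretely, the worker-level setting referred to here goes in the worker properties file (e.g. connect-distributed.properties):

```properties
# Kafka Connect worker setting; without it, connector configs containing
# consumer.override.* / producer.override.* keys are rejected.
# Valid policies: None (the default), Principal, All.
connector.client.config.override.policy=All
```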


pjuarezd commented Jan 2, 2024

Hi @OneCricketeer, circling back to this issue:

The overall goal is to set store.url: https://custom-endpoint.url, i.e. an endpoint other than the default AWS S3 endpoint.

I understand your suggestion that support for a custom keystore could make more sense added to StorageSinkConnectorConfig in the parent project https://github.com/confluentinc/kafka-connect-storage-common; I could do that.

Can you share documentation on how to target a custom S3 endpoint and trust a custom TLS certificate using consumer.override?
Found this example in https://github.com/confluentinc/confluent-kubernetes-examples repo, is this what you meant?

https://github.com/confluentinc/confluent-kubernetes-examples/blob/45331dfab5d08e8513b5532016a9c9b7b7e1553e/blueprints/cp-rbac-mtls-lb/cp-apps/connectors/connector_ss.yaml#L27-L28

Why not add to the parent project in StorageSinkConnectorConfig? Or in Kafka source itself or that all connectors may use this property?

Also, why can't you use consumer.override prefix to set these values already?

@OneCricketeer

The mentioned override settings have no connection to S3 or to the connector itself; they are only Kafka client settings, e.g. for the TLS / SASL connection to the brokers.
