
[Question] Configuring Kafka client authentication settings when using Apache Kafka storage option #2073

Closed
jakedern-msft opened this issue Dec 1, 2021 · 13 comments

@jakedern-msft

Is there a way to configure Kafka client authentication settings when using the Apache Kafka storage option? I'm currently running into issues standing up an Apicurio instance with this storage option, apparently because my Kafka cluster requires SSL connections. The Kafka server logs show repeated, failing attempts to perform an SSL handshake with the Apicurio instance.

From the Apicurio logs I also see the following:

2021-12-01 21:46:45 INFO <> [null] (main) JsonConverterConfig values: 
        converter.type = key
        decimal.format = BASE64
        schemas.cache.size = 0
        schemas.enable = true

2021-12-01 21:46:45 INFO <> [io.apicurio.registry.storage.impl.kafkasql.KafkaSqlRegistryStorage] (main) Using Kafka-SQL artifactStore.
2021-12-01 21:46:45 INFO <> [org.apache.kafka.common.config.AbstractConfig] (main) AdminClientConfig values: 
        bootstrap.servers = [kafka-svc.test.svc.cluster.local:9092]
        client.dns.lookup = use_all_dns_ips
        client.id = 
        connections.max.idle.ms = 300000
        default.api.timeout.ms = 60000
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        receive.buffer.bytes = 65536
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 30000
        retries = 2147483647
        retry.backoff.ms = 100
        sasl.client.callback.handler.class = null
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.mechanism = GSSAPI
        security.protocol = PLAINTEXT
        security.providers = null
        send.buffer.bytes = 131072
        socket.connection.setup.timeout.max.ms = 30000
        socket.connection.setup.timeout.ms = 10000
        ssl.cipher.suites = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
        ssl.endpoint.identification.algorithm = https
        ssl.engine.factory.class = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.certificate.chain = null
        ssl.keystore.key = null
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLSv1.3
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.certificates = null
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS

2021-12-01 21:45:43 INFO <> [org.apache.kafka.common.utils.AppInfoParser$AppInfo] (main) Kafka version: 2.8.0
2021-12-01 21:45:43 INFO <> [org.apache.kafka.common.utils.AppInfoParser$AppInfo] (main) Kafka commitId: ebb1d6e21cc92130
2021-12-01 21:45:43 INFO <> [org.apache.kafka.common.utils.AppInfoParser$AppInfo] (main) Kafka startTimeMs: 1638395143241
2021-12-01 21:45:44 INFO <> [io.apicurio.registry.storage.RegistryStorageProducer] (executor-thread-1) Using RegistryStore: io.apicurio.registry.storage.impl.kafkasql.KafkaSqlRegistryStorage_ClientProxy
2021-12-01 21:45:44 INFO <> [io.apicurio.registry.storage.impl.sql.AbstractSqlRegistryStorage] (executor-thread-1) SqlRegistryStorage constructed successfully.  JDBC URL: jdbc:h2:mem:registry_db
2021-12-01 21:45:44 INFO <> [io.apicurio.registry.storage.impl.sql.AbstractSqlRegistryStorage] (executor-thread-1) Checking to see if the DB is initialized.
2021-12-01 21:45:44 INFO <> [io.apicurio.registry.storage.impl.sql.AbstractSqlRegistryStorage] (executor-thread-1) Database not initialized.
2021-12-01 21:45:44 INFO <> [io.apicurio.registry.storage.impl.sql.AbstractSqlRegistryStorage] (executor-thread-1) Initializing the Apicurio Registry database.
2021-12-01 21:45:44 INFO <> [io.apicurio.registry.storage.impl.sql.AbstractSqlRegistryStorage] (executor-thread-1)        Database type: h2
2021-12-01 21:45:44 INFO <> [io.apicurio.registry.storage.impl.sql.AbstractSqlRegistryStorage] (executor-thread-1) Checking to see if the DB is up-to-date.
2021-12-01 21:46:13 INFO <> [org.apache.kafka.clients.admin.internals.AdminMetadataManager] (kafka-admin-client-thread | adminclient-1) [AdminClient clientId=adminclient-1] Metadata update failed: org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1638395173267, tries=1, nextAllowedTryMs=1638395173416) timed out at 1638395173316 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call. Call: fetchMetadata

2021-12-01 21:46:43 INFO <> [org.apache.kafka.common.utils.AppInfoParser] (kafka-admin-client-thread | adminclient-1) App info kafka.admin.client for adminclient-1 unregistered
2021-12-01 21:46:43 INFO <> [org.apache.kafka.clients.admin.internals.AdminMetadataManager] (kafka-admin-client-thread | adminclient-1) [AdminClient clientId=adminclient-1] Metadata update failed: org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1638395203316, tries=1, nextAllowedTryMs=-9223372036854775709) timed out at 9223372036854775807 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited. Call: fetchMetadata

2021-12-01 21:46:43 INFO <> [org.apache.kafka.common.metrics.Metrics] (kafka-admin-client-thread | adminclient-1) Metrics scheduler closed
2021-12-01 21:46:43 INFO <> [org.apache.kafka.common.metrics.Metrics] (kafka-admin-client-thread | adminclient-1) Closing reporter org.apache.kafka.common.metrics.JmxReporter
2021-12-01 21:46:43 INFO <> [org.apache.kafka.common.metrics.Metrics] (kafka-admin-client-thread | adminclient-1) Metrics reporters closed
2021-12-01 21:46:43 WARN <> [io.apicurio.registry.metrics.health.liveness.PersistenceExceptionLivenessCheck] (main) Liveness problem suspected in PersistenceExceptionLivenessCheck because of an exception: : org.apache.kafka.common.errors.TimeoutException: Call(callName=listTopics, deadlineMs=1638395203268, tries=1, nextAllowedTryMs=1638395203369) timed out at 1638395203269 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: listTopics

2021-12-01 21:46:43 INFO <> [io.apicurio.registry.metrics.health.liveness.PersistenceExceptionLivenessCheck] (main) After this event, the error counter is 1 (out of the maximum 5 allowed).
2021-12-01 21:46:43 ERROR <> [io.quarkus.runtime.ApplicationLifecycleManager] (main) Failed to start application (with profile prod): org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: listTopics

It seems as though the security settings are there in the AdminClientConfig, but I can't find a way to configure them via environment variables. I've been referencing this file and this one for the exposed config.

Is there anything I'm missing regarding configuring the client used with the Kafka storage option?
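For reference, here is a minimal sketch of the client-side configuration I'd expect to need, using the plain Kafka AdminClient API. The property names come from the AdminClientConfig dump above; the file paths and passwords are placeholders for our environment:

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class SslAdminClientSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // The brokers only accept TLS connections, so the PLAINTEXT default seen above fails.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-svc.test.svc.cluster.local:9092");
        props.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "SSL");
        // Truststore containing our internal CA bundle (placeholder path/password).
        props.put("ssl.truststore.location", "/etc/kafka/certs/ca-truststore.jks");
        props.put("ssl.truststore.password", "<truststore-password>");
        // Client certificate, only needed if the brokers also require mutual TLS.
        props.put("ssl.keystore.location", "/etc/kafka/certs/client-keystore.jks");
        props.put("ssl.keystore.password", "<keystore-password>");

        // listTopics() is the same call that times out in the Registry logs above.
        try (AdminClient admin = AdminClient.create(props)) {
            admin.listTopics().names().get().forEach(System.out::println);
        }
    }
}
```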

Thanks much for any help!

@EricWittmann added the type/question (Further information is requested) label on Dec 2, 2021
@EricWittmann
Member

@jsenko and/or @carlesarnal can probably answer this one?

@carlesarnal
Member

Hi @jakedern-msft,

Unfortunately, this is not possible right now when using Registry. That said, I have opened #2101, and I'll try to get it fixed ASAP and then get a Registry release out that makes this configuration available.

@jakedern-msft
Author

@carlesarnal - Thanks for the update, looking forward to this feature!

@carlesarnal
Member

Hi @jakedern-msft, here is the PR adding this capability: #2104.

@jakedern-msft
Author

@carlesarnal Thank you for the quick turnaround time, this looks like a great start! At the risk of betraying my lack of knowledge in this area and/or asking too much, I'm wondering if it's also possible to expose the admin client SSL settings in addition to the SASL settings.

We're using our own certificate authority for all certificates in our Kafka cluster, so ideally we would like to have control over the certificate and CA bundle that this client will use for the SSL connection.

@carlesarnal
Member

Ahh, ok, I just added the most basic support for securely connecting to Kafka. Let me see what I can do to also expose that configuration.

@jakedern-msft
Author

@carlesarnal Thank you! I really appreciate it.

@carlesarnal
Member

Hi @jakedern-msft, sorry, I've been on PTO. Here's what I think you'll need. Let me know if you would like us to expose any other configuration.

@jakedern-msft
Author

No worries at all @carlesarnal, I've also been out of the office for most of the last month. The only other two things I can think of that might be useful to configure are the keystore/truststore type and the SSL protocol.

For the former, it's really not a big deal to convert between p12 and jks; it's just more convenient, and in general we prefer p12. And as long as the SSL protocol is locked to 1.3, that's fine too.
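For what it's worth, the conversion is only a few lines with the standard java.security.KeyStore API. Here's a rough sketch; file names and passwords are placeholders, and it assumes the p12 entries use the store password:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.util.Enumeration;

public class P12ToJks {
    public static void main(String[] args) throws Exception {
        char[] password = "<store-password>".toCharArray(); // placeholder

        // Load the existing PKCS12 store.
        KeyStore p12 = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("client.p12")) {
            p12.load(in, password);
        }

        // Create an empty JKS store and copy every entry across.
        KeyStore jks = KeyStore.getInstance("JKS");
        jks.load(null, password);

        Enumeration<String> aliases = p12.aliases();
        while (aliases.hasMoreElements()) {
            String alias = aliases.nextElement();
            if (p12.isKeyEntry(alias)) {
                jks.setKeyEntry(alias, p12.getKey(alias, password), password, p12.getCertificateChain(alias));
            } else {
                jks.setCertificateEntry(alias, p12.getCertificate(alias));
            }
        }

        try (FileOutputStream out = new FileOutputStream("client.jks")) {
            jks.store(out, password);
        }
    }
}
```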

Thanks so much again for getting this through, it will be a big help to us!

@carlesarnal
Member

Ah, I used JKS since, AFAIK, it is the default in the Kafka client. I'll write up something ASAP to add the type and protocol.

@EricWittmann
Member

@carlesarnal status on this one?

@carlesarnal
Member

Sorry for the delay. The protocol is already there and can be changed on demand. This will add support for specifying the type.

@EricWittmann
Member

Merged. Closing this issue - @jakedern-msft please re-open if we're missing something. :)
