We have a need to expose Shotover as a SASL SCRAM connection to the client.
It's impossible to pass SCRAM directly through to Kafka, so instead I am proposing we follow the topology: `client <-scram-> shotover <-mTLS->+<-scram-> kafka`
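The client-facing hop of this topology is ordinary SASL SCRAM over TLS, so from the client's perspective Shotover looks like a SCRAM-enabled broker. As a hedged sketch (using the kafka-python client; the username, password, and CA path are placeholders, not values from this proposal):

```python
# Sketch: a client-side configuration for the "client <-scram-> shotover" hop,
# using the kafka-python client. Username, password, and CA path are placeholders.
client_config = {
    "bootstrap_servers": ["127.0.0.1:9192"],    # shotover's SCRAM + TLS listener
    "security_protocol": "SASL_SSL",
    "sasl_mechanism": "SCRAM-SHA-256",
    "sasl_plain_username": "example-user",      # placeholder
    "sasl_plain_password": "example-password",  # placeholder
    "ssl_cafile": "tests/test-configs/kafka/tls/certs/localhost_CA.crt",
}
# With shotover running, this would be passed straight to the client:
#   from kafka import KafkaProducer
#   producer = KafkaProducer(**client_config)
```

The point is that no client-side changes are needed; the delegation-token machinery is entirely internal to Shotover.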
And provide the following config to enable this topology:
```yaml
sources:
  - Kafka:
      name: "kafka"
      listen_addr: "127.0.0.1:9192"
      tls:
        certificate_path: "tests/test-configs/kafka/tls/certs/localhost.crt"
        private_key_path: "tests/test-configs/kafka/tls/certs/localhost.key"
      chain:
        - KafkaSinkCluster:
            shotover_nodes:
              - address: "127.0.0.1:9192"
                rack: "rack0"
                broker_id: 0
            # The contact points point at the kafka SCRAM connection.
            first_contact_points: ["172.16.1.2:9092"]
            connect_timeout_ms: 3000
            # When enabled Shotover will:
            # * Use the client's SCRAM requests to create the first connection
            # * To create following connections a delegation token is created over a
            #   control connection opened to `mtls_port_contact_points`.
            #   The token is created for the username appearing in the SCRAM requests.
            # * New connections are created against the first_contact_points port using
            #   SCRAM messages generated from the username used in the initial SCRAM
            #   login + the delegation token.
            authorize_scram_over_mtls:
              # The addresses to send mTLS delegation requests to.
              # I assume kafka's clustering means that metadata requests will return all
              # the SCRAM connections and we can choose a random member of the cluster
              # each time we perform a scram test
              mtls_port_contact_points: ["172.16.1.2:9093"]
              tls:
                certificate_authority_path: "tests/test-configs/kafka/tls/certs/localhost_CA.crt"
                certificate_path: "tests/test-configs/kafka/tls/certs/localhost.crt"
                private_key_path: "tests/test-configs/kafka/tls/certs/localhost.key"
                verify_hostname: true
```
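For context on the follow-up connections described in the comments above: Kafka's delegation-token support (KIP-48) authenticates over regular SCRAM by using the token ID as the SCRAM username, the token's HMAC as the password, and adding a `tokenauth=true` extension to the client-first message so the broker consults its token cache. A hedged sketch of the client-first message Shotover would need to generate (function name and values are illustrative, not Shotover code):

```python
def delegation_token_client_first(token_id: str, client_nonce: str) -> str:
    """Build a SCRAM client-first message for delegation-token auth.

    Per Kafka's delegation-token support (KIP-48), the token ID is used as
    the SCRAM username, the token HMAC as the password, and the client-first
    message carries a tokenauth=true extension. This is only a sketch of the
    message shape, not Shotover's actual implementation.
    """
    # gs2 header "n,," (no channel binding), then username, nonce, extension.
    return f"n,,n={token_id},r={client_nonce},tokenauth=true"

msg = delegation_token_client_first("token-id-123", "rOprNGfwEbeRWgbNEkqO")
```

The rest of the SCRAM exchange (salted-password derivation and proofs) proceeds as usual, just with the token HMAC in place of the user's password.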
Alternatively, we could entirely isolate this as a separate transform, but that would require duplicating cluster metadata and the first connection.
Example config:
```yaml
sources:
  - Kafka:
      name: "kafka"
      listen_addr: "127.0.0.1:9192"
      tls:
        certificate_path: "tests/test-configs/kafka/tls/certs/localhost.crt"
        private_key_path: "tests/test-configs/kafka/tls/certs/localhost.key"
      chain:
        # This transform will:
        # * Use the client's SCRAM requests to create a connection to verify that the
        #   client is authorized as that user.
        # * A delegation token is created over a control connection opened to
        #   `mtls_port_contact_points`.
        #   The token is created for the username appearing in the SCRAM requests.
        # * The SCRAM requests are modified to contain the delegation token instead of
        #   the password salt
        - ReplaceScramWithDelegationToken:
            # The addresses to send mTLS delegation requests to.
            # I assume kafka's clustering means that metadata requests will return all
            # the SCRAM connections and we can choose a random member of the cluster
            # each time we perform a scram test
            mtls_port_contact_points: ["172.16.1.2:9093"]
            # The addresses to send the initial SCRAM requests to
            scram_port_contact_points: ["172.16.1.2:9092"]
            # TLS config must be duplicated here as this transform creates its own
            # direct connections to kafka
            tls:
              certificate_authority_path: "tests/test-configs/kafka/tls/certs/localhost_CA.crt"
              certificate_path: "tests/test-configs/kafka/tls/certs/localhost.crt"
              private_key_path: "tests/test-configs/kafka/tls/certs/localhost.key"
              verify_hostname: true
        - KafkaSinkCluster:
            shotover_nodes:
              - address: "127.0.0.1:9192"
                rack: "rack0"
                broker_id: 0
            # The contact points point at the kafka SCRAM connection.
            first_contact_points: ["172.16.1.2:9092"]
            connect_timeout_ms: 3000
            tls:
              certificate_authority_path: "tests/test-configs/kafka/tls/certs/localhost_CA.crt"
              certificate_path: "tests/test-configs/kafka/tls/certs/localhost.crt"
              private_key_path: "tests/test-configs/kafka/tls/certs/localhost.key"
              verify_hostname: true
```
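Both configs' comments assume that cluster metadata returns every mTLS endpoint and that a random member can be chosen for each delegation-token control connection. That selection step can be sketched minimally (the helper name is hypothetical):

```python
import random

def pick_mtls_endpoint(mtls_port_contact_points: list[str]) -> str:
    # Hypothetical helper: choose a random cluster member for each
    # delegation-token control connection, per the config comments above.
    return random.choice(mtls_port_contact_points)

endpoint = pick_mtls_endpoint(["172.16.1.2:9093", "172.16.1.3:9093"])
```

Whether random choice is sufficient (versus, say, preferring the local rack) depends on how the delegation-token requests load the cluster, which this proposal leaves open.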