#1605 introduced basic functionality for authorize_scram_over_mtls.
However, it is not as efficient or as robust as we would like:
We recreate a delegation token for every new incoming connection.
We rely on a hardcoded 4 second wait to ensure the delegation token has propagated across the cluster.
To solve both of these, I propose a background task to manage tokens, similar to the topology task shared between all CassandraSinkCluster instances:
KafkaSinkClusterConfig::get_builder spins up a background task that receives requests for new tokens and then sends the tokens back across a oneshot.
The task will be responsible for requesting tokens over mTLS, waiting until the token is usable, and then finally sending the token details along the oneshot.
From the perspective of the transform the API would look like:
Remaining work on the task:
Broker discovery
Not sure whether we want the task to do its own discovery, or to just reuse the existing nodes_shared field the transforms use.
I think that solving #1588 first will remove some unknowns.
Reusing the nodes_shared field is no good, since we need the SCRAM task to be running before nodes_shared can be populated.
This leaves a few options:
scram task performs its own discovery
scram task performs discovery for itself and the transform