KafkaSinkCluster: authorize_scram_over_mtls is robust #1618

Open
5 of 8 tasks
rukai opened this issue May 9, 2024 · 0 comments
Labels
bug Something isn't working

rukai commented May 9, 2024

#1605 introduced basic functionality for authorize_scram_over_mtls.
However, it is not as efficient or as robust as we would like.

  • We recreate a delegation token for every new incoming connection.
  • We rely on a hardcoded 4 second wait to ensure the delegation token has propagated across the cluster.

To solve both of these, I propose a background task to manage tokens, similar to the topology task shared between all CassandraSinkCluster instances:

KafkaSinkClusterConfig::get_builder spins up a background task that receives requests for new tokens and then sends the tokens back across a oneshot channel.
The task will be responsible for requesting tokens over mTLS, waiting until the token is usable, and finally sending the token details along the oneshot.
From the perspective of the transform, the API would look like:

// channel types from tokio::sync
struct KafkaSinkCluster {
    token_request_tx: mpsc::Sender<TokenRequest>,
    // ..
}

struct TokenRequest {
    username: String,
    // the task sends the finished token back over this oneshot
    response: oneshot::Sender<DelegationToken>,
}

pub struct DelegationToken {
    pub token_id: String,
    pub hmac: Vec<u8>,
}
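
For illustration, here is a minimal sketch of how the transform side and the background task could fit together, assuming tokio mpsc/oneshot channels and the structs above. The names request_token, token_task, create_token_over_mtls and wait_until_token_is_usable are hypothetical placeholders, not the existing API:

use std::collections::HashMap;

use tokio::sync::{mpsc, oneshot};

// Transform side: send a request to the background task and wait for the token
// to come back over the oneshot.
async fn request_token(
    token_request_tx: &mpsc::Sender<TokenRequest>,
    username: String,
) -> anyhow::Result<DelegationToken> {
    let (response, response_rx) = oneshot::channel();
    token_request_tx
        .send(TokenRequest { username, response })
        .await
        .map_err(|_| anyhow::anyhow!("token task shut down"))?;
    Ok(response_rx.await?)
}

// Background task: create tokens over mTLS, wait until they have propagated and
// cache them so we do not recreate a token for every new incoming connection.
async fn token_task(mut request_rx: mpsc::Receiver<TokenRequest>) {
    let mut cache: HashMap<String, DelegationToken> = HashMap::new();
    while let Some(request) = request_rx.recv().await {
        if !cache.contains_key(&request.username) {
            let token = create_token_over_mtls(&request.username).await;
            wait_until_token_is_usable(&token).await;
            cache.insert(request.username.clone(), token);
        }
        let token = &cache[&request.username];
        let token = DelegationToken {
            token_id: token.token_id.clone(),
            hmac: token.hmac.clone(),
        };
        // The requesting connection may have already closed, so ignore send errors.
        let _ = request.response.send(token);
    }
}

// Placeholder: issue a CreateDelegationToken request over the mTLS connection.
async fn create_token_over_mtls(_username: &str) -> DelegationToken {
    todo!()
}

// Placeholder: poll DescribeDelegationToken on each broker until the token is
// visible everywhere, instead of a hardcoded 4 second sleep.
async fn wait_until_token_is_usable(_token: &DelegationToken) {
    todo!()
}

Caching the token per username would also address the first point above, since a token is created once per user rather than for every new incoming connection.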

Remaining work on the task:

Broker discovery

I'm not sure whether we want the task to do its own discovery, or to just reuse the existing nodes_shared field the transforms use.
I think that solving #1588 first will remove some unknowns.

Reusing the nodes_shared field is no good since we need the SCRAM task to be running before nodes_shared can be populated.
This leaves a few options (sketched below):

  • scram task performs its own discovery
  • scram task performs discovery for itself and the transform
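
As a rough sketch of the second option: the SCRAM task could own discovery and publish the broker list through a tokio watch channel that both it and the transform instances read from. KafkaNode, discovery_loop and fetch_brokers_via_metadata below are hypothetical stand-ins, not existing Shotover types:

use std::time::Duration;

use tokio::sync::watch;

// Hypothetical broker description; the real transform has its own node type.
#[derive(Clone, Debug)]
struct KafkaNode {
    broker_id: i32,
    address: String,
}

// The SCRAM task performs discovery and publishes the results so the transform
// no longer depends on nodes_shared being populated first.
async fn discovery_loop(nodes_tx: watch::Sender<Vec<KafkaNode>>) {
    loop {
        // Placeholder: send a Metadata request over the task's own mTLS connection.
        let nodes = fetch_brokers_via_metadata().await;
        // Ignore the error case: it only fires when every receiver has been dropped.
        let _ = nodes_tx.send(nodes);
        tokio::time::sleep(Duration::from_secs(60)).await;
    }
}

async fn fetch_brokers_via_metadata() -> Vec<KafkaNode> {
    todo!()
}

A transform would then hold a watch::Receiver<Vec<KafkaNode>> and call borrow() whenever it needs the current node list.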
rukai changed the title from KafkaSinkCluster: authorize_scram_over_mtls phase 2 to KafkaSinkCluster: authorize_scram_over_mtls is robust on May 14, 2024
rukai added the bug label on May 30, 2024