NPE when creating kmm2 with Oauth in Strimzi 0.26.0 #5897

Closed
pmuir opened this issue Nov 16, 2021 · 1 comment · Fixed by #5907

pmuir commented Nov 16, 2021


Describe the bug
I fat-fingered the secret name in the clientSecret reference for both the source and target clusters, e.g. for the target cluster:

      authentication:
        clientSecret:
          key: client-secret
          secretName: target-client-secret

This caused an NPE in Strimzi:

java.lang.NullPointerException: null
	at io.strimzi.operator.common.Util.lambda$addSecretHash$20(Util.java:588) ~[io.strimzi.operator-common-0.26.0.jar:0.26.0]
	at io.vertx.core.impl.future.Composition.onSuccess(Composition.java:38) ~[io.vertx.vertx-core-4.1.5.jar:4.1.5]
	at io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:60) ~[io.vertx.vertx-core-4.1.5.jar:4.1.5]
	at io.vertx.core.impl.future.FutureImpl.tryComplete(FutureImpl.java:211) ~[io.vertx.vertx-core-4.1.5.jar:4.1.5]
	at io.vertx.core.impl.future.PromiseImpl.tryComplete(PromiseImpl.java:23) ~[io.vertx.vertx-core-4.1.5.jar:4.1.5]
	at io.vertx.core.impl.future.PromiseImpl.onSuccess(PromiseImpl.java:49) ~[io.vertx.vertx-core-4.1.5.jar:4.1.5]
	at io.vertx.core.impl.future.FutureBase.lambda$emitSuccess$0(FutureBase.java:54) ~[io.vertx.vertx-core-4.1.5.jar:4.1.5]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) [io.netty.netty-common-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469) [io.netty.netty-common-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) [io.netty.netty-transport-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [io.netty.netty-common-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [io.netty.netty-common-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty.netty-common-4.1.68.Final.jar:4.1.68.Final]
	at java.lang.Thread.run(Thread.java:829) [?:?]
2021-11-16 18:52:46 WARN  AbstractOperator:481 - Reconciliation #184(timer) KafkaMirrorMaker2(openshift-operators/my-mirror-maker2): Failed to reconcile
java.lang.NullPointerException: null
	at io.strimzi.operator.common.Util.lambda$addSecretHash$20(Util.java:588) ~[io.strimzi.operator-common-0.26.0.jar:0.26.0]
	at io.vertx.core.impl.future.Composition.onSuccess(Composition.java:38) ~[io.vertx.vertx-core-4.1.5.jar:4.1.5]
	at io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:60) ~[io.vertx.vertx-core-4.1.5.jar:4.1.5]
	at io.vertx.core.impl.future.FutureImpl.tryComplete(FutureImpl.java:211) ~[io.vertx.vertx-core-4.1.5.jar:4.1.5]
	at io.vertx.core.impl.future.PromiseImpl.tryComplete(PromiseImpl.java:23) ~[io.vertx.vertx-core-4.1.5.jar:4.1.5]
	at io.vertx.core.impl.future.PromiseImpl.onSuccess(PromiseImpl.java:49) ~[io.vertx.vertx-core-4.1.5.jar:4.1.5]
	at io.vertx.core.impl.future.FutureBase.lambda$emitSuccess$0(FutureBase.java:54) ~[io.vertx.vertx-core-4.1.5.jar:4.1.5]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) [io.netty.netty-common-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469) [io.netty.netty-common-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) [io.netty.netty-transport-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [io.netty.netty-common-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [io.netty.netty-common-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty.netty-common-4.1.68.Final.jar:4.1.68.Final]
	at java.lang.Thread.run(Thread.java:829) [?:?]
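
For context, this is roughly how a lookup for a missing Secret can surface as an NPE inside a Vert.x Future composition. The method bodies below are illustrative guesses only (getSecretAsync and the hashing step are made up), not the actual Strimzi implementation:

import io.fabric8.kubernetes.api.model.Secret;
import io.vertx.core.Future;

public class SecretHashSketch {

    // Hypothetical async lookup: a missing Secret resolves to null instead of failing.
    static Future<Secret> getSecretAsync(String namespace, String name) {
        return Future.succeededFuture((Secret) null); // e.g. "target-client-secret" does not exist
    }

    // Sketch of an addSecretHash-style step: without a null check, secret.getData()
    // throws the NullPointerException, which Vert.x then records as the failed reconciliation.
    static Future<String> addSecretHash(String namespace, String name) {
        return getSecretAsync(namespace, name)
                .compose(secret -> Future.succeededFuture(
                        Integer.toString(secret.getData().hashCode())));
    }

    public static void main(String[] args) {
        addSecretHash("openshift-operators", "target-client-secret")
                .onFailure(t -> System.out.println("Failed to reconcile: " + t));
    }
}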

To Reproduce
Steps to reproduce the behavior:

  1. Create a new KafkaMirrorMaker2 (kmm2) resource with OAuth enabled, using a client ID and a reference to a client secret
  2. Put the wrong name for the secret into the clientSecret.secretName field, so the referenced Secret doesn't exist (a quick existence check is sketched after these steps)
  3. See the NPE
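
For step 2, this is a quick way to confirm whether the referenced Secret actually exists; kubectl get secret works just as well. A minimal sketch using the fabric8 Kubernetes client (KubernetesClientBuilder is the fabric8 6.x entry point; the namespace and secret name are taken from this report):

import io.fabric8.kubernetes.api.model.Secret;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class CheckSecretExists {
    public static void main(String[] args) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // get() returns null when the Secret is not found
            Secret secret = client.secrets()
                    .inNamespace("openshift-operators")
                    .withName("target-client-secret")
                    .get();
            System.out.println(secret == null
                    ? "Secret is missing - this is the misconfiguration that triggers the NPE"
                    : "Secret exists");
        }
    }
}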

Expected behavior
An error telling me the referenced secret doesn't exist, rather than a NullPointerException.
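
One way the operator could turn the null lookup into a readable failure is a guard before the hashing step, sketched here under the assumption that the lookup is a Vert.x Future of a fabric8 Secret. This is only an illustration of the expected behavior, not the actual fix merged in #5907:

import io.fabric8.kubernetes.api.model.Secret;
import io.vertx.core.Future;

public class RequireSecret {

    // Hypothetical guard: fail with a descriptive message when the lookup resolves
    // to null, so later steps never dereference a missing Secret.
    static Future<Secret> requireSecret(Future<Secret> lookup, String namespace, String name) {
        return lookup.compose(secret -> secret == null
                ? Future.<Secret>failedFuture(new IllegalStateException(
                        "Secret " + name + " not found in namespace " + namespace))
                : Future.succeededFuture(secret));
    }
}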

Environment (please complete the following information):

  • Strimzi version: 0.26.0
  • Installation method: OperatorHub on OpenShift
  • Kubernetes cluster: OpenShift 4.9.5
  • Infrastructure: OpenShift Dedicated

YAML files and logs

kind: KafkaMirrorMaker2
metadata: 
  name: my-mirror-maker2
spec: 
  clusters: 
    - 
      alias: my-cluster-source
      authentication: 
        clientId: "srvc-acct-412733fe-efc8-42f4-bf93-a3496c276714" # The Client ID for the service account for the source Kafka cluster
        clientSecret: # A reference to a Kubernetes Secret that contains the Client Secret for the service account for the source Kafka cluster
          key: client-secret
          secretName: source-client-secret
        tokenEndpointUri: "https://identity.api.openshift.com/auth/realms/rhoas/protocol/openid-connect/token"
        type: oauth # Red Hat OpenShift Streams for Apache Kafka prefers OAuth for connections
      bootstrapServers: "source-c-----na--efhhf-oada.bf2.kafka.rhcloud.com:443" # The bootstrap server host for the source cluster
      tls: # Red Hat OpenShift Streams for Apache Kafka requires the use of TLS with the built in trusted certificates
        trustedCertificates: []
    - 
      alias: my-cluster-target
      authentication: 
        clientId: "srvc-acct-953201ba-52ce-4a24-a861-782bf68af72e" # The Client ID for the service account for the target Kafka cluster
        clientSecret: # A reference to a Kubernetes Secret that contains the Client Secret for the service account for the target Kafka cluster
          key: client-secret
          secretName: target-client-secret
        tokenEndpointUri: "https://identity.api.openshift.com/auth/realms/rhoas/protocol/openid-connect/token"
        type: oauth # Red Hat OpenShift Streams for Apache Kafka prefers OAuth for connections
      bootstrapServers: "target-c--vlnmvh---a-nioiaa.bf2.kafka.rhcloud.com:443" # The bootstrap server host for the target cluster
      config: # Red Hat OpenShift Streams for Apache Kafka requires a replication factor of 3 for all topics
        config.storage.replication.factor: 3
        offset.storage.replication.factor: 3
        status.storage.replication.factor: 3
      tls: # Red Hat OpenShift Streams for Apache Kafka requires the use of TLS with the built in trusted certificates
        trustedCertificates: []
  connectCluster: my-cluster-target
  mirrors: 
    - 
      checkpointConnector: 
        config: 
          checkpoints.topic.replication.factor: 3 # Red Hat OpenShift Streams for Apache Kafka requires a replication factor of 3 for all topics
          emit.checkpoints.interval.seconds: 60 # Setting sync interval to 60 seconds is useful for debugging
          refresh.groups.interval.seconds: 60 # Setting sync interval to 60 seconds is useful for debugging
          sync.group.offsets.enabled: true # Enable syncing of consumer group offsets
          sync.group.offsets.interval.seconds: 60 # Setting sync interval to 60 seconds is useful for debugging
      sourceCluster: my-cluster-source
      sourceConnector: 
        config: 
          refresh.topics.interval.seconds: 60 # Setting the refresh interval to 60 seconds is useful for debugging
          replication.factor: 3  # Red Hat OpenShift Streams for Apache Kafka requires a replication factor of 3 for all topics
          sync.topic.acls.enabled: true # Enable syncing of topic ACLs
      targetCluster: my-cluster-target
      topicsPattern: .* # Sync all topics
  replicas: 1 # Running a single replica of MirrorMaker makes debugging the logs easier
pmuir added the bug label Nov 16, 2021

scholzj commented Nov 16, 2021

@sknot-rh Is this related to #5549? Could you have a look at it?
