How to enable JMX feature in docker-compose.yml and test it #738

Open
shaojun opened this issue Apr 20, 2023 · 1 comment

shaojun commented Apr 20, 2023

I want to monitor Kafka's running status via Prometheus and Grafana, basically following https://ibm-cloud-architecture.github.io/refarch-eda/technology/kafka-monitoring/. This is my docker-compose.yml with JMX enabled for Kafka:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181"
    restart: unless-stopped

  kafka:
    build: .
    ports:
      - "9098:9092"
      - "11991:11991"
    environment:
      DOCKER_API_VERSION: 1.22
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      JMX_PORT: 11991
      KAFKA_JMX_OPTS: >-
        -Djava.rmi.server.hostname=**replace.internet.ip.address**
        -Dcom.sun.management.jmxremote.port=11991
        -Dcom.sun.management.jmxremote.rmi.port=11991
        -Dcom.sun.management.jmxremote=true
        -Dcom.sun.management.jmxremote.authenticate=false
        -Dcom.sun.management.jmxremote.ssl=false
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
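Once the stack is up, the JMX endpoint can be checked from outside the container. A minimal sketch, assuming the placeholder above was replaced with an address that is reachable from the client, and that port 11991 is published as in the compose file (the host address here is illustrative):

```shell
# 1. Confirm the published JMX/RMI port accepts TCP connections
nc -vz 192.168.99.100 11991

# 2. Attach an interactive JMX console (jconsole ships with the JDK)
jconsole 192.168.99.100:11991

# 3. Or poll a metric with Kafka's bundled JmxTool from inside the
#    container; JMX_PORT is cleared for this command so the tool's own
#    JVM does not try to bind the same port as the broker
docker-compose exec -e JMX_PORT= kafka \
  /opt/kafka/bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:11991/jmxrmi \
  --object-name kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec
```

If step 1 fails, the usual suspects are a host firewall or a java.rmi.server.hostname value that does not match the address clients actually connect to.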

This is the log from running docker-compose up:

root@ecs-01796520-002:/data/shao/kafka-docker# docker-compose up
Creating network "kafka-docker_default" with the default driver
WARNING: Found orphan containers (kafka-docker_jmxexporter_1, kafka-docker_test_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Creating kafka-docker_zookeeper_1 ... done
Creating kafka-docker_kafka_1 ... done
Attaching to kafka-docker_kafka_1, kafka-docker_zookeeper_1
kafka_1 | Excluding KAFKA_JMX_OPTS from broker config
kafka_1 | [Configuring] 'port' in '/opt/kafka/config/server.properties'
kafka_1 | [Configuring] 'advertised.host.name' in '/opt/kafka/config/server.properties'
kafka_1 | Excluding KAFKA_HOME from broker config
kafka_1 | [Configuring] 'log.dirs' in '/opt/kafka/config/server.properties'
kafka_1 | Excluding KAFKA_VERSION from broker config
zookeeper_1 | ZooKeeper JMX enabled by default
zookeeper_1 | Using config: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
kafka_1 | [Configuring] 'advertised.port' in '/opt/kafka/config/server.properties'
kafka_1 | [Configuring] 'zookeeper.connect' in '/opt/kafka/config/server.properties'
kafka_1 | [Configuring] 'broker.id' in '/opt/kafka/config/server.properties'
zookeeper_1 | 2023-04-20 13:25:30,504 [myid:] - INFO [main:QuorumPeerConfig@136] - Reading configuration from: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
zookeeper_1 | 2023-04-20 13:25:30,508 [myid:] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
zookeeper_1 | 2023-04-20 13:25:30,509 [myid:] - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
zookeeper_1 | 2023-04-20 13:25:30,509 [myid:] - WARN [main:QuorumPeerMain@116] - Either no config or no quorum defined in config, running in standalone mode
zookeeper_1 | 2023-04-20 13:25:30,509 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
zookeeper_1 | 2023-04-20 13:25:30,522 [myid:] - INFO [main:QuorumPeerConfig@136] - Reading configuration from: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
zookeeper_1 | 2023-04-20 13:25:30,522 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
zookeeper_1 | 2023-04-20 13:25:30,523 [myid:] - INFO [main:ZooKeeperServerMain@98] - Starting server
zookeeper_1 | 2023-04-20 13:25:30,531 [myid:] - INFO [main:Environment@100] - Server environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
zookeeper_1 | 2023-04-20 13:25:30,531 [myid:] - INFO [main:Environment@100] - Server environment:host.name=eb2695455f65
zookeeper_1 | 2023-04-20 13:25:30,531 [myid:] - INFO [main:Environment@100] - Server environment:java.version=1.7.0_65
zookeeper_1 | 2023-04-20 13:25:30,531 [myid:] - INFO [main:Environment@100] - Server environment:java.vendor=Oracle Corporation
zookeeper_1 | 2023-04-20 13:25:30,531 [myid:] - INFO [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
zookeeper_1 | 2023-04-20 13:25:30,531 [myid:] - INFO [main:Environment@100] - Server environment:java.class.path=/opt/zookeeper-3.4.13/bin/../build/classes:/opt/zookeeper-3.4.13/bin/../build/lib/*.jar:/opt/zookeeper-3.4.13/bin/../lib/slf4j-log4j12-1.7.25.jar:/opt/zookeeper-3.4.13/bin/../lib/slf4j-api-1.7.25.jar:/opt/zookeeper-3.4.13/bin/../lib/netty-3.10.6.Final.jar:/opt/zookeeper-3.4.13/bin/../lib/log4j-1.2.17.jar:/opt/zookeeper-3.4.13/bin/../lib/jline-0.9.94.jar:/opt/zookeeper-3.4.13/bin/../lib/audience-annotations-0.5.0.jar:/opt/zookeeper-3.4.13/bin/../zookeeper-3.4.13.jar:/opt/zookeeper-3.4.13/bin/../src/java/lib/*.jar:/opt/zookeeper-3.4.13/bin/../conf:
zookeeper_1 | 2023-04-20 13:25:30,531 [myid:] - INFO [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
zookeeper_1 | 2023-04-20 13:25:30,531 [myid:] - INFO [main:Environment@100] - Server environment:java.io.tmpdir=/tmp
zookeeper_1 | 2023-04-20 13:25:30,533 [myid:] - INFO [main:Environment@100] - Server environment:java.compiler=
zookeeper_1 | 2023-04-20 13:25:30,533 [myid:] - INFO [main:Environment@100] - Server environment:os.name=Linux
zookeeper_1 | 2023-04-20 13:25:30,533 [myid:] - INFO [main:Environment@100] - Server environment:os.arch=amd64
zookeeper_1 | 2023-04-20 13:25:30,533 [myid:] - INFO [main:Environment@100] - Server environment:os.version=5.4.0-107-generic
zookeeper_1 | 2023-04-20 13:25:30,533 [myid:] - INFO [main:Environment@100] - Server environment:user.name=root
zookeeper_1 | 2023-04-20 13:25:30,533 [myid:] - INFO [main:Environment@100] - Server environment:user.home=/root
zookeeper_1 | 2023-04-20 13:25:30,533 [myid:] - INFO [main:Environment@100] - Server environment:user.dir=/opt/zookeeper-3.4.13
zookeeper_1 | 2023-04-20 13:25:30,540 [myid:] - INFO [main:ZooKeeperServer@836] - tickTime set to 2000
zookeeper_1 | 2023-04-20 13:25:30,540 [myid:] - INFO [main:ZooKeeperServer@845] - minSessionTimeout set to -1
zookeeper_1 | 2023-04-20 13:25:30,540 [myid:] - INFO [main:ZooKeeperServer@854] - maxSessionTimeout set to -1
zookeeper_1 | 2023-04-20 13:25:30,549 [myid:] - INFO [main:ServerCnxnFactory@117] - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
zookeeper_1 | 2023-04-20 13:25:30,554 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
kafka_1 | [2023-04-20 13:25:31,201] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka_1 | [2023-04-20 13:25:31,475] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka_1 | [2023-04-20 13:25:31,546] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
kafka_1 | [2023-04-20 13:25:31,549] INFO starting (kafka.server.KafkaServer)
kafka_1 | [2023-04-20 13:25:31,550] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka_1 | [2023-04-20 13:25:31,568] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
kafka_1 | [2023-04-20 13:25:31,573] INFO Client environment:zookeeper.version=3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01/06/2021 20:03 GMT (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,573] INFO Client environment:host.name=d2c548d77348 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,573] INFO Client environment:java.version=11.0.16 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,573] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,573] INFO Client environment:java.home=/usr/local/openjdk-11 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,573] INFO Client environment:java.class.path=/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.6.1.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/commons-cli-1.4.jar:/opt/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/kafka/bin/../libs/connect-api-2.8.1.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-2.8.1.jar:/opt/kafka/bin/../libs/connect-file-2.8.1.jar:/opt/kafka/bin/../libs/connect-json-2.8.1.jar:/opt/kafka/bin/../libs/connect-mirror-2.8.1.jar:/opt/kafka/bin/../libs/connect-mirror-client-2.8.1.jar:/opt/kafka/bin/../libs/connect-runtime-2.8.1.jar:/opt/kafka/bin/../libs/connect-transforms-2.8.1.jar:/opt/kafka/bin/../libs/hk2-api-2.6.1.jar:/opt/kafka/bin/../libs/hk2-locator-2.6.1.jar:/opt/kafka/bin/../libs/hk2-utils-2.6.1.jar:/opt/kafka/bin/../libs/jackson-annotations-2.10.5.jar:/opt/kafka/bin/../libs/jackson-core-2.10.5.jar:/opt/kafka/bin/../libs/jackson-databind-2.10.5.1.jar:/opt/kafka/bin/../libs/jackson-dataformat-csv-2.10.5.jar:/opt/kafka/bin/../libs/jackson-datatype-jdk8-2.10.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.10.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.5.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.5.jar:/opt/kafka/bin/../libs/jackson-module-paranamer-2.10.5.jar:/opt/kafka/bin/../libs/jackson-module-scala_2.13-2.10.5.jar:/opt/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/kafka/bin/../libs/jakarta.annotation-api-1.3.5.jar:/opt/kafka/bin/../libs/jakarta.inject-2.6.1.jar:/opt/kafka/bin/../libs/jakarta.validation-api-2.0.2.jar:/opt/kafka/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/opt/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/kafka/bin/../libs/javassist-3.27.0-GA.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/kafka/bin/../libs/jers
ey-client-2.34.jar:/opt/kafka/bin/../libs/jersey-common-2.34.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.34.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.34.jar:/opt/kafka/bin/../libs/jersey-hk2-2.34.jar:/opt/kafka/bin/../libs/jersey-server-2.34.jar:/opt/kafka/bin/../libs/jetty-client-9.4.43.v20210629.jar:/opt/kafka/bin/../libs/jetty-continuation-9.4.43.v20210629.jar:/opt/kafka/bin/../libs/jetty-http-9.4.43.v20210629.jar:/opt/kafka/bin/../libs/jetty-io-9.4.43.v20210629.jar:/opt/kafka/bin/../libs/jetty-security-9.4.43.v20210629.jar:/opt/kafka/bin/../libs/jetty-server-9.4.43.v20210629.jar:/opt/kafka/bin/../libs/jetty-servlet-9.4.43.v20210629.jar:/opt/kafka/bin/../libs/jetty-servlets-9.4.43.v20210629.jar:/opt/kafka/bin/../libs/jetty-util-9.4.43.v20210629.jar:/opt/kafka/bin/../libs/jetty-util-ajax-9.4.43.v20210629.jar:/opt/kafka/bin/../libs/jline-3.12.1.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-2.8.1.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-2.8.1.jar:/opt/kafka/bin/../libs/kafka-metadata-2.8.1.jar:/opt/kafka/bin/../libs/kafka-raft-2.8.1.jar:/opt/kafka/bin/../libs/kafka-shell-2.8.1.jar:/opt/kafka/bin/../libs/kafka-streams-2.8.1.jar:/opt/kafka/bin/../libs/kafka-streams-examples-2.8.1.jar:/opt/kafka/bin/../libs/kafka-streams-scala_2.13-2.8.1.jar:/opt/kafka/bin/../libs/kafka-streams-test-utils-2.8.1.jar:/opt/kafka/bin/../libs/kafka-tools-2.8.1.jar:/opt/kafka/bin/../libs/kafka_2.13-2.8.1-sources.jar:/opt/kafka/bin/../libs/kafka_2.13-2.8.1.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.7.1.jar:/opt/kafka/bin/../libs/maven-artifact-3.8.1.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/netty-buffer-4.1.62.Final.jar:/opt/kafka/bin/../libs/netty-codec-4.1.62.Final.jar:/opt/kafka/bin/../libs/netty-common-4.1.62.Final.jar:/opt/kafka/bin/../libs/netty-handler-4.1.62.Final.jar:/opt/kafka/bin/../libs/netty-resolver-4.1.62.Final.jar:/opt/kafka/bin/../libs/ne
tty-transport-4.1.62.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-epoll-4.1.62.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-unix-common-4.1.62.Final.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.3.jar:/opt/kafka/bin/../libs/paranamer-2.8.jar:/opt/kafka/bin/../libs/plexus-utils-3.2.1.jar:/opt/kafka/bin/../libs/reflections-0.9.12.jar:/opt/kafka/bin/../libs/rocksdbjni-5.18.4.jar:/opt/kafka/bin/../libs/scala-collection-compat_2.13-2.3.0.jar:/opt/kafka/bin/../libs/scala-java8-compat_2.13-0.9.1.jar:/opt/kafka/bin/../libs/scala-library-2.13.5.jar:/opt/kafka/bin/../libs/scala-logging_2.13-3.9.2.jar:/opt/kafka/bin/../libs/scala-reflect-2.13.5.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.30.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.30.jar:/opt/kafka/bin/../libs/snappy-java-1.1.8.1.jar:/opt/kafka/bin/../libs/zookeeper-3.5.9.jar:/opt/kafka/bin/../libs/zookeeper-jute-3.5.9.jar:/opt/kafka/bin/../libs/zstd-jni-1.4.9-1.jar (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,574] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,574] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,574] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,574] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,574] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,574] INFO Client environment:os.version=5.4.0-107-generic (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,574] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,574] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,574] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,574] INFO Client environment:os.memory.free=976MB (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,574] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,574] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,577] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@4bd31064 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2023-04-20 13:25:31,582] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka_1 | [2023-04-20 13:25:31,587] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
kafka_1 | [2023-04-20 13:25:31,596] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka_1 | [2023-04-20 13:25:31,597] INFO Opening socket connection to server zookeeper/192.168.128.3:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka_1 | [2023-04-20 13:25:31,601] INFO Socket connection established, initiating session, client: /192.168.128.2:41496, server: zookeeper/192.168.128.3:2181 (org.apache.zookeeper.ClientCnxn)
zookeeper_1 | 2023-04-20 13:25:31,602 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /192.168.128.2:41496
zookeeper_1 | 2023-04-20 13:25:31,608 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /192.168.128.2:41496
zookeeper_1 | 2023-04-20 13:25:31,609 [myid:] - INFO [SyncThread:0:FileTxnLog@213] - Creating new log file: log.1
zookeeper_1 | 2023-04-20 13:25:31,631 [myid:] - INFO [SyncThread:0:ZooKeeperServer@694] - Established session 0x102294e27940000 with negotiated timeout 18000 for client /192.168.128.2:41496
kafka_1 | [2023-04-20 13:25:31,633] INFO Session establishment complete on server zookeeper/192.168.128.3:2181, sessionid = 0x102294e27940000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
kafka_1 | [2023-04-20 13:25:31,635] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
zookeeper_1 | 2023-04-20 13:25:31,690 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x102294e27940000 type:create cxid:0x2 zxid:0x3 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers
zookeeper_1 | 2023-04-20 13:25:31,702 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x102294e27940000 type:create cxid:0x6 zxid:0x7 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config
zookeeper_1 | 2023-04-20 13:25:31,710 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x102294e27940000 type:create cxid:0x9 zxid:0xa txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin
kafka_1 | [2023-04-20 13:25:31,745] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka_1 | [2023-04-20 13:25:31,753] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
kafka_1 | [2023-04-20 13:25:31,754] INFO Cleared cache (kafka.server.FinalizedFeatureCache)
zookeeper_1 | 2023-04-20 13:25:31,880 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x102294e27940000 type:create cxid:0x18 zxid:0x16 txntype:-1 reqpath:n/a Error Path:/cluster Error:KeeperErrorCode = NoNode for /cluster
kafka_1 | [2023-04-20 13:25:31,889] INFO Cluster ID = jRckvBdXQx-F4RbYgwH4AA (kafka.server.KafkaServer)
kafka_1 | [2023-04-20 13:25:31,891] WARN No meta.properties file under dir /kafka/kafka-logs-d2c548d77348/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka_1 | [2023-04-20 13:25:31,934] INFO KafkaConfig values:
kafka_1 | advertised.host.name = 192.168.99.100
kafka_1 | advertised.listeners = null
kafka_1 | advertised.port = 9098
kafka_1 | alter.config.policy.class.name = null
kafka_1 | alter.log.dirs.replication.quota.window.num = 11
kafka_1 | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka_1 | authorizer.class.name =
kafka_1 | auto.create.topics.enable = true
kafka_1 | auto.leader.rebalance.enable = true
kafka_1 | background.threads = 10
kafka_1 | broker.heartbeat.interval.ms = 2000
kafka_1 | broker.id = -1
kafka_1 | broker.id.generation.enable = true
kafka_1 | broker.rack = null
kafka_1 | broker.session.timeout.ms = 9000
kafka_1 | client.quota.callback.class = null
kafka_1 | compression.type = producer
kafka_1 | connection.failed.authentication.delay.ms = 100
kafka_1 | connections.max.idle.ms = 600000
kafka_1 | connections.max.reauth.ms = 0
kafka_1 | control.plane.listener.name = null
kafka_1 | controlled.shutdown.enable = true
kafka_1 | controlled.shutdown.max.retries = 3
kafka_1 | controlled.shutdown.retry.backoff.ms = 5000
kafka_1 | controller.listener.names = null
kafka_1 | controller.quorum.append.linger.ms = 25
kafka_1 | controller.quorum.election.backoff.max.ms = 1000
kafka_1 | controller.quorum.election.timeout.ms = 1000
kafka_1 | controller.quorum.fetch.timeout.ms = 2000
kafka_1 | controller.quorum.request.timeout.ms = 2000
kafka_1 | controller.quorum.retry.backoff.ms = 20
kafka_1 | controller.quorum.voters = []
kafka_1 | controller.quota.window.num = 11
kafka_1 | controller.quota.window.size.seconds = 1
kafka_1 | controller.socket.timeout.ms = 30000
kafka_1 | create.topic.policy.class.name = null
kafka_1 | default.replication.factor = 1
kafka_1 | delegation.token.expiry.check.interval.ms = 3600000
kafka_1 | delegation.token.expiry.time.ms = 86400000
kafka_1 | delegation.token.master.key = null
kafka_1 | delegation.token.max.lifetime.ms = 604800000
kafka_1 | delegation.token.secret.key = null
kafka_1 | delete.records.purgatory.purge.interval.requests = 1
kafka_1 | delete.topic.enable = true
kafka_1 | fetch.max.bytes = 57671680
kafka_1 | fetch.purgatory.purge.interval.requests = 1000
kafka_1 | group.initial.rebalance.delay.ms = 0
kafka_1 | group.max.session.timeout.ms = 1800000
kafka_1 | group.max.size = 2147483647
kafka_1 | group.min.session.timeout.ms = 6000
kafka_1 | host.name =
kafka_1 | initial.broker.registration.timeout.ms = 60000
kafka_1 | inter.broker.listener.name = null
kafka_1 | inter.broker.protocol.version = 2.8-IV1
kafka_1 | kafka.metrics.polling.interval.secs = 10
kafka_1 | kafka.metrics.reporters = []
kafka_1 | leader.imbalance.check.interval.seconds = 300
kafka_1 | leader.imbalance.per.broker.percentage = 10
kafka_1 | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
kafka_1 | listeners = null
kafka_1 | log.cleaner.backoff.ms = 15000
kafka_1 | log.cleaner.dedupe.buffer.size = 134217728
kafka_1 | log.cleaner.delete.retention.ms = 86400000
kafka_1 | log.cleaner.enable = true
kafka_1 | log.cleaner.io.buffer.load.factor = 0.9
kafka_1 | log.cleaner.io.buffer.size = 524288
kafka_1 | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka_1 | log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka_1 | log.cleaner.min.cleanable.ratio = 0.5
kafka_1 | log.cleaner.min.compaction.lag.ms = 0
kafka_1 | log.cleaner.threads = 1
kafka_1 | log.cleanup.policy = [delete]
kafka_1 | log.dir = /tmp/kafka-logs
kafka_1 | log.dirs = /kafka/kafka-logs-d2c548d77348
kafka_1 | log.flush.interval.messages = 9223372036854775807
kafka_1 | log.flush.interval.ms = null
kafka_1 | log.flush.offset.checkpoint.interval.ms = 60000
kafka_1 | log.flush.scheduler.interval.ms = 9223372036854775807
kafka_1 | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka_1 | log.index.interval.bytes = 4096
kafka_1 | log.index.size.max.bytes = 10485760
kafka_1 | log.message.downconversion.enable = true
kafka_1 | log.message.format.version = 2.8-IV1
kafka_1 | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka_1 | log.message.timestamp.type = CreateTime
kafka_1 | log.preallocate = false
kafka_1 | log.retention.bytes = -1
kafka_1 | log.retention.check.interval.ms = 300000
kafka_1 | log.retention.hours = 168
kafka_1 | log.retention.minutes = null
kafka_1 | log.retention.ms = null
kafka_1 | log.roll.hours = 168
kafka_1 | log.roll.jitter.hours = 0
kafka_1 | log.roll.jitter.ms = null
kafka_1 | log.roll.ms = null
kafka_1 | log.segment.bytes = 1073741824
kafka_1 | log.segment.delete.delay.ms = 60000
kafka_1 | max.connection.creation.rate = 2147483647
kafka_1 | max.connections = 2147483647
kafka_1 | max.connections.per.ip = 2147483647
kafka_1 | max.connections.per.ip.overrides =
kafka_1 | max.incremental.fetch.session.cache.slots = 1000
kafka_1 | message.max.bytes = 1048588
kafka_1 | metadata.log.dir = null
kafka_1 | metric.reporters = []
kafka_1 | metrics.num.samples = 2
kafka_1 | metrics.recording.level = INFO
kafka_1 | metrics.sample.window.ms = 30000
kafka_1 | min.insync.replicas = 1
kafka_1 | node.id = -1
kafka_1 | num.io.threads = 8
kafka_1 | num.network.threads = 3
kafka_1 | num.partitions = 1
kafka_1 | num.recovery.threads.per.data.dir = 1
kafka_1 | num.replica.alter.log.dirs.threads = null
kafka_1 | num.replica.fetchers = 1
kafka_1 | offset.metadata.max.bytes = 4096
kafka_1 | offsets.commit.required.acks = -1
kafka_1 | offsets.commit.timeout.ms = 5000
kafka_1 | offsets.load.buffer.size = 5242880
kafka_1 | offsets.retention.check.interval.ms = 600000
kafka_1 | offsets.retention.minutes = 10080
kafka_1 | offsets.topic.compression.codec = 0
kafka_1 | offsets.topic.num.partitions = 50
kafka_1 | offsets.topic.replication.factor = 1
kafka_1 | offsets.topic.segment.bytes = 104857600
kafka_1 | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka_1 | password.encoder.iterations = 4096
kafka_1 | password.encoder.key.length = 128
kafka_1 | password.encoder.keyfactory.algorithm = null
kafka_1 | password.encoder.old.secret = null
kafka_1 | password.encoder.secret = null
kafka_1 | port = 9092
kafka_1 | principal.builder.class = null
kafka_1 | process.roles = []
kafka_1 | producer.purgatory.purge.interval.requests = 1000
kafka_1 | queued.max.request.bytes = -1
kafka_1 | queued.max.requests = 500
kafka_1 | quota.consumer.default = 9223372036854775807
kafka_1 | quota.producer.default = 9223372036854775807
kafka_1 | quota.window.num = 11
kafka_1 | quota.window.size.seconds = 1
kafka_1 | replica.fetch.backoff.ms = 1000
kafka_1 | replica.fetch.max.bytes = 1048576
kafka_1 | replica.fetch.min.bytes = 1
kafka_1 | replica.fetch.response.max.bytes = 10485760
kafka_1 | replica.fetch.wait.max.ms = 500
kafka_1 | replica.high.watermark.checkpoint.interval.ms = 5000
kafka_1 | replica.lag.time.max.ms = 30000
kafka_1 | replica.selector.class = null
kafka_1 | replica.socket.receive.buffer.bytes = 65536
kafka_1 | replica.socket.timeout.ms = 30000
kafka_1 | replication.quota.window.num = 11
kafka_1 | replication.quota.window.size.seconds = 1
kafka_1 | request.timeout.ms = 30000
kafka_1 | reserved.broker.max.id = 1000
kafka_1 | sasl.client.callback.handler.class = null
kafka_1 | sasl.enabled.mechanisms = [GSSAPI]
kafka_1 | sasl.jaas.config = null
kafka_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka_1 | sasl.kerberos.min.time.before.relogin = 60000
kafka_1 | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka_1 | sasl.kerberos.service.name = null
kafka_1 | sasl.kerberos.ticket.renew.jitter = 0.05
kafka_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka_1 | sasl.login.callback.handler.class = null
kafka_1 | sasl.login.class = null
kafka_1 | sasl.login.refresh.buffer.seconds = 300
kafka_1 | sasl.login.refresh.min.period.seconds = 60
kafka_1 | sasl.login.refresh.window.factor = 0.8
kafka_1 | sasl.login.refresh.window.jitter = 0.05
kafka_1 | sasl.mechanism.controller.protocol = GSSAPI
kafka_1 | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka_1 | sasl.server.callback.handler.class = null
kafka_1 | security.inter.broker.protocol = PLAINTEXT
kafka_1 | security.providers = null
kafka_1 | socket.connection.setup.timeout.max.ms = 30000
kafka_1 | socket.connection.setup.timeout.ms = 10000
kafka_1 | socket.receive.buffer.bytes = 102400
kafka_1 | socket.request.max.bytes = 104857600
kafka_1 | socket.send.buffer.bytes = 102400
kafka_1 | ssl.cipher.suites = []
kafka_1 | ssl.client.auth = none
kafka_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka_1 | ssl.endpoint.identification.algorithm = https
kafka_1 | ssl.engine.factory.class = null
kafka_1 | ssl.key.password = null
kafka_1 | ssl.keymanager.algorithm = SunX509
kafka_1 | ssl.keystore.certificate.chain = null
kafka_1 | ssl.keystore.key = null
kafka_1 | ssl.keystore.location = null
kafka_1 | ssl.keystore.password = null
kafka_1 | ssl.keystore.type = JKS
kafka_1 | ssl.principal.mapping.rules = DEFAULT
kafka_1 | ssl.protocol = TLSv1.3
kafka_1 | ssl.provider = null
kafka_1 | ssl.secure.random.implementation = null
kafka_1 | ssl.trustmanager.algorithm = PKIX
kafka_1 | ssl.truststore.certificates = null
kafka_1 | ssl.truststore.location = null
kafka_1 | ssl.truststore.password = null
kafka_1 | ssl.truststore.type = JKS
kafka_1 | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka_1 | transaction.max.timeout.ms = 900000
kafka_1 | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka_1 | transaction.state.log.load.buffer.size = 5242880
kafka_1 | transaction.state.log.min.isr = 1
kafka_1 | transaction.state.log.num.partitions = 50
kafka_1 | transaction.state.log.replication.factor = 1
kafka_1 | transaction.state.log.segment.bytes = 104857600
kafka_1 | transactional.id.expiration.ms = 604800000
kafka_1 | unclean.leader.election.enable = false
kafka_1 | zookeeper.clientCnxnSocket = null
kafka_1 | zookeeper.connect = zookeeper:2181
kafka_1 | zookeeper.connection.timeout.ms = 18000
kafka_1 | zookeeper.max.in.flight.requests = 10
kafka_1 | zookeeper.session.timeout.ms = 18000
kafka_1 | zookeeper.set.acl = false
kafka_1 | zookeeper.ssl.cipher.suites = null
kafka_1 | zookeeper.ssl.client.enable = false
kafka_1 | zookeeper.ssl.crl.enable = false
kafka_1 | zookeeper.ssl.enabled.protocols = null
kafka_1 | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka_1 | zookeeper.ssl.keystore.location = null
kafka_1 | zookeeper.ssl.keystore.password = null
kafka_1 | zookeeper.ssl.keystore.type = null
kafka_1 | zookeeper.ssl.ocsp.enable = false
kafka_1 | zookeeper.ssl.protocol = TLSv1.2
kafka_1 | zookeeper.ssl.truststore.location = null
kafka_1 | zookeeper.ssl.truststore.password = null
kafka_1 | zookeeper.ssl.truststore.type = null
kafka_1 | zookeeper.sync.time.ms = 2000
kafka_1 | (kafka.server.KafkaConfig)
kafka_1 | [2023-04-20 13:25:31,942] INFO KafkaConfig values:
kafka_1 | advertised.host.name = 192.168.99.100
kafka_1 | advertised.listeners = null
kafka_1 | advertised.port = 9098
kafka_1 | alter.config.policy.class.name = null
kafka_1 | alter.log.dirs.replication.quota.window.num = 11
kafka_1 | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka_1 | authorizer.class.name =
kafka_1 | auto.create.topics.enable = true
kafka_1 | auto.leader.rebalance.enable = true
kafka_1 | background.threads = 10
kafka_1 | broker.heartbeat.interval.ms = 2000
kafka_1 | broker.id = -1
kafka_1 | broker.id.generation.enable = true
kafka_1 | broker.rack = null
kafka_1 | broker.session.timeout.ms = 9000
kafka_1 | client.quota.callback.class = null
kafka_1 | compression.type = producer
kafka_1 | connection.failed.authentication.delay.ms = 100
kafka_1 | connections.max.idle.ms = 600000
kafka_1 | connections.max.reauth.ms = 0
kafka_1 | control.plane.listener.name = null
kafka_1 | controlled.shutdown.enable = true
kafka_1 | controlled.shutdown.max.retries = 3
kafka_1 | controlled.shutdown.retry.backoff.ms = 5000
kafka_1 | controller.listener.names = null
kafka_1 | controller.quorum.append.linger.ms = 25
kafka_1 | controller.quorum.election.backoff.max.ms = 1000
kafka_1 | controller.quorum.election.timeout.ms = 1000
kafka_1 | controller.quorum.fetch.timeout.ms = 2000
kafka_1 | controller.quorum.request.timeout.ms = 2000
kafka_1 | controller.quorum.retry.backoff.ms = 20
kafka_1 | controller.quorum.voters = []
kafka_1 | controller.quota.window.num = 11
kafka_1 | controller.quota.window.size.seconds = 1
kafka_1 | controller.socket.timeout.ms = 30000
kafka_1 | create.topic.policy.class.name = null
kafka_1 | default.replication.factor = 1
kafka_1 | delegation.token.expiry.check.interval.ms = 3600000
kafka_1 | delegation.token.expiry.time.ms = 86400000
kafka_1 | delegation.token.master.key = null
kafka_1 | delegation.token.max.lifetime.ms = 604800000
kafka_1 | delegation.token.secret.key = null
kafka_1 | delete.records.purgatory.purge.interval.requests = 1
kafka_1 | delete.topic.enable = true
kafka_1 | fetch.max.bytes = 57671680
kafka_1 | fetch.purgatory.purge.interval.requests = 1000
kafka_1 | group.initial.rebalance.delay.ms = 0
kafka_1 | group.max.session.timeout.ms = 1800000
kafka_1 | group.max.size = 2147483647
kafka_1 | group.min.session.timeout.ms = 6000
kafka_1 | host.name =
kafka_1 | initial.broker.registration.timeout.ms = 60000
kafka_1 | inter.broker.listener.name = null
kafka_1 | inter.broker.protocol.version = 2.8-IV1
kafka_1 | kafka.metrics.polling.interval.secs = 10
kafka_1 | kafka.metrics.reporters = []
kafka_1 | leader.imbalance.check.interval.seconds = 300
kafka_1 | leader.imbalance.per.broker.percentage = 10
kafka_1 | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
kafka_1 | listeners = null
kafka_1 | log.cleaner.backoff.ms = 15000
kafka_1 | log.cleaner.dedupe.buffer.size = 134217728
kafka_1 | log.cleaner.delete.retention.ms = 86400000
kafka_1 | log.cleaner.enable = true
kafka_1 | log.cleaner.io.buffer.load.factor = 0.9
kafka_1 | log.cleaner.io.buffer.size = 524288
kafka_1 | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka_1 | log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka_1 | log.cleaner.min.cleanable.ratio = 0.5
kafka_1 | log.cleaner.min.compaction.lag.ms = 0
kafka_1 | log.cleaner.threads = 1
kafka_1 | log.cleanup.policy = [delete]
kafka_1 | log.dir = /tmp/kafka-logs
kafka_1 | log.dirs = /kafka/kafka-logs-d2c548d77348
kafka_1 | log.flush.interval.messages = 9223372036854775807
kafka_1 | log.flush.interval.ms = null
kafka_1 | log.flush.offset.checkpoint.interval.ms = 60000
kafka_1 | log.flush.scheduler.interval.ms = 9223372036854775807
kafka_1 | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka_1 | log.index.interval.bytes = 4096
kafka_1 | log.index.size.max.bytes = 10485760
kafka_1 | log.message.downconversion.enable = true
kafka_1 | log.message.format.version = 2.8-IV1
kafka_1 | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka_1 | log.message.timestamp.type = CreateTime
kafka_1 | log.preallocate = false
kafka_1 | log.retention.bytes = -1
kafka_1 | log.retention.check.interval.ms = 300000
kafka_1 | log.retention.hours = 168
kafka_1 | log.retention.minutes = null
kafka_1 | log.retention.ms = null
kafka_1 | log.roll.hours = 168
kafka_1 | log.roll.jitter.hours = 0
kafka_1 | log.roll.jitter.ms = null
kafka_1 | log.roll.ms = null
kafka_1 | log.segment.bytes = 1073741824
kafka_1 | log.segment.delete.delay.ms = 60000
kafka_1 | max.connection.creation.rate = 2147483647
kafka_1 | max.connections = 2147483647
kafka_1 | max.connections.per.ip = 2147483647
kafka_1 | max.connections.per.ip.overrides =
kafka_1 | max.incremental.fetch.session.cache.slots = 1000
kafka_1 | message.max.bytes = 1048588
kafka_1 | metadata.log.dir = null
kafka_1 | metric.reporters = []
kafka_1 | metrics.num.samples = 2
kafka_1 | metrics.recording.level = INFO
kafka_1 | metrics.sample.window.ms = 30000
kafka_1 | min.insync.replicas = 1
kafka_1 | node.id = -1
kafka_1 | num.io.threads = 8
kafka_1 | num.network.threads = 3
kafka_1 | num.partitions = 1
kafka_1 | num.recovery.threads.per.data.dir = 1
kafka_1 | num.replica.alter.log.dirs.threads = null
kafka_1 | num.replica.fetchers = 1
kafka_1 | offset.metadata.max.bytes = 4096
kafka_1 | offsets.commit.required.acks = -1
kafka_1 | offsets.commit.timeout.ms = 5000
kafka_1 | offsets.load.buffer.size = 5242880
kafka_1 | offsets.retention.check.interval.ms = 600000
kafka_1 | offsets.retention.minutes = 10080
kafka_1 | offsets.topic.compression.codec = 0
kafka_1 | offsets.topic.num.partitions = 50
kafka_1 | offsets.topic.replication.factor = 1
kafka_1 | offsets.topic.segment.bytes = 104857600
kafka_1 | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka_1 | password.encoder.iterations = 4096
kafka_1 | password.encoder.key.length = 128
kafka_1 | password.encoder.keyfactory.algorithm = null
kafka_1 | password.encoder.old.secret = null
kafka_1 | password.encoder.secret = null
kafka_1 | port = 9092
kafka_1 | principal.builder.class = null
kafka_1 | process.roles = []
kafka_1 | producer.purgatory.purge.interval.requests = 1000
kafka_1 | queued.max.request.bytes = -1
kafka_1 | queued.max.requests = 500
kafka_1 | quota.consumer.default = 9223372036854775807
kafka_1 | quota.producer.default = 9223372036854775807
kafka_1 | quota.window.num = 11
kafka_1 | quota.window.size.seconds = 1
kafka_1 | replica.fetch.backoff.ms = 1000
kafka_1 | replica.fetch.max.bytes = 1048576
kafka_1 | replica.fetch.min.bytes = 1
kafka_1 | replica.fetch.response.max.bytes = 10485760
kafka_1 | replica.fetch.wait.max.ms = 500
kafka_1 | replica.high.watermark.checkpoint.interval.ms = 5000
kafka_1 | replica.lag.time.max.ms = 30000
kafka_1 | replica.selector.class = null
kafka_1 | replica.socket.receive.buffer.bytes = 65536
kafka_1 | replica.socket.timeout.ms = 30000
kafka_1 | replication.quota.window.num = 11
kafka_1 | replication.quota.window.size.seconds = 1
kafka_1 | request.timeout.ms = 30000
kafka_1 | reserved.broker.max.id = 1000
kafka_1 | sasl.client.callback.handler.class = null
kafka_1 | sasl.enabled.mechanisms = [GSSAPI]
kafka_1 | sasl.jaas.config = null
kafka_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka_1 | sasl.kerberos.min.time.before.relogin = 60000
kafka_1 | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka_1 | sasl.kerberos.service.name = null
kafka_1 | sasl.kerberos.ticket.renew.jitter = 0.05
kafka_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka_1 | sasl.login.callback.handler.class = null
kafka_1 | sasl.login.class = null
kafka_1 | sasl.login.refresh.buffer.seconds = 300
kafka_1 | sasl.login.refresh.min.period.seconds = 60
kafka_1 | sasl.login.refresh.window.factor = 0.8
kafka_1 | sasl.login.refresh.window.jitter = 0.05
kafka_1 | sasl.mechanism.controller.protocol = GSSAPI
kafka_1 | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka_1 | sasl.server.callback.handler.class = null
kafka_1 | security.inter.broker.protocol = PLAINTEXT
kafka_1 | security.providers = null
kafka_1 | socket.connection.setup.timeout.max.ms = 30000
kafka_1 | socket.connection.setup.timeout.ms = 10000
kafka_1 | socket.receive.buffer.bytes = 102400
kafka_1 | socket.request.max.bytes = 104857600
kafka_1 | socket.send.buffer.bytes = 102400
kafka_1 | ssl.cipher.suites = []
kafka_1 | ssl.client.auth = none
kafka_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka_1 | ssl.endpoint.identification.algorithm = https
kafka_1 | ssl.engine.factory.class = null
kafka_1 | ssl.key.password = null
kafka_1 | ssl.keymanager.algorithm = SunX509
kafka_1 | ssl.keystore.certificate.chain = null
kafka_1 | ssl.keystore.key = null
kafka_1 | ssl.keystore.location = null
kafka_1 | ssl.keystore.password = null
kafka_1 | ssl.keystore.type = JKS
kafka_1 | ssl.principal.mapping.rules = DEFAULT
kafka_1 | ssl.protocol = TLSv1.3
kafka_1 | ssl.provider = null
kafka_1 | ssl.secure.random.implementation = null
kafka_1 | ssl.trustmanager.algorithm = PKIX
kafka_1 | ssl.truststore.certificates = null
kafka_1 | ssl.truststore.location = null
kafka_1 | ssl.truststore.password = null
kafka_1 | ssl.truststore.type = JKS
kafka_1 | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka_1 | transaction.max.timeout.ms = 900000
kafka_1 | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka_1 | transaction.state.log.load.buffer.size = 5242880
kafka_1 | transaction.state.log.min.isr = 1
kafka_1 | transaction.state.log.num.partitions = 50
kafka_1 | transaction.state.log.replication.factor = 1
kafka_1 | transaction.state.log.segment.bytes = 104857600
kafka_1 | transactional.id.expiration.ms = 604800000
kafka_1 | unclean.leader.election.enable = false
kafka_1 | zookeeper.clientCnxnSocket = null
kafka_1 | zookeeper.connect = zookeeper:2181
kafka_1 | zookeeper.connection.timeout.ms = 18000
kafka_1 | zookeeper.max.in.flight.requests = 10
kafka_1 | zookeeper.session.timeout.ms = 18000
kafka_1 | zookeeper.set.acl = false
kafka_1 | zookeeper.ssl.cipher.suites = null
kafka_1 | zookeeper.ssl.client.enable = false
kafka_1 | zookeeper.ssl.crl.enable = false
kafka_1 | zookeeper.ssl.enabled.protocols = null
kafka_1 | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka_1 | zookeeper.ssl.keystore.location = null
kafka_1 | zookeeper.ssl.keystore.password = null
kafka_1 | zookeeper.ssl.keystore.type = null
kafka_1 | zookeeper.ssl.ocsp.enable = false
kafka_1 | zookeeper.ssl.protocol = TLSv1.2
kafka_1 | zookeeper.ssl.truststore.location = null
kafka_1 | zookeeper.ssl.truststore.password = null
kafka_1 | zookeeper.ssl.truststore.type = null
kafka_1 | zookeeper.sync.time.ms = 2000
kafka_1 | (kafka.server.KafkaConfig)
kafka_1 | [2023-04-20 13:25:31,979] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1 | [2023-04-20 13:25:31,980] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1 | [2023-04-20 13:25:31,983] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1 | [2023-04-20 13:25:31,992] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1 | [2023-04-20 13:25:32,007] INFO Log directory /kafka/kafka-logs-d2c548d77348 not found, creating it. (kafka.log.LogManager)
kafka_1 | [2023-04-20 13:25:32,048] INFO Loading logs from log dirs ArraySeq(/kafka/kafka-logs-d2c548d77348) (kafka.log.LogManager)
kafka_1 | [2023-04-20 13:25:32,054] INFO Attempting recovery for all logs in /kafka/kafka-logs-d2c548d77348 since no clean shutdown file was found (kafka.log.LogManager)
kafka_1 | [2023-04-20 13:25:32,064] INFO Loaded 0 logs in 15ms. (kafka.log.LogManager)
kafka_1 | [2023-04-20 13:25:32,065] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka_1 | [2023-04-20 13:25:32,067] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka_1 | [2023-04-20 13:25:32,509] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka_1 | [2023-04-20 13:25:32,512] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
kafka_1 | [2023-04-20 13:25:32,545] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1001] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka_1 | [2023-04-20 13:25:32,570] INFO [broker-1001-to-controller-send-thread]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka_1 | [2023-04-20 13:25:32,588] INFO [ExpirationReaper-1001-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2023-04-20 13:25:32,589] INFO [ExpirationReaper-1001-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2023-04-20 13:25:32,589] INFO [ExpirationReaper-1001-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2023-04-20 13:25:32,590] INFO [ExpirationReaper-1001-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2023-04-20 13:25:32,602] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka_1 | [2023-04-20 13:25:32,633] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka_1 | [2023-04-20 13:25:32,649] INFO Stat of the created znode at /brokers/ids/1001 is: 26,26,1681997132643,1681997132643,1,0,0,72665959639547904,215,0,26
kafka_1 | (kafka.zk.KafkaZkClient)
kafka_1 | [2023-04-20 13:25:32,650] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: PLAINTEXT://192.168.99.100:9098, czxid (broker epoch): 26 (kafka.zk.KafkaZkClient)
kafka_1 | [2023-04-20 13:25:32,713] INFO [ExpirationReaper-1001-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2023-04-20 13:25:32,720] INFO [ExpirationReaper-1001-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2023-04-20 13:25:32,724] INFO [ExpirationReaper-1001-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2023-04-20 13:25:32,733] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka_1 | [2023-04-20 13:25:32,742] INFO [GroupCoordinator 1001]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2023-04-20 13:25:32,747] INFO [GroupCoordinator 1001]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2023-04-20 13:25:32,757] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka_1 | [2023-04-20 13:25:32,772] INFO [ProducerId Manager 1001]: Acquired new producerId block (brokerId:1001,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
kafka_1 | [2023-04-20 13:25:32,772] INFO [TransactionCoordinator id=1001] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_1 | [2023-04-20 13:25:32,776] INFO [TransactionCoordinator id=1001] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_1 | [2023-04-20 13:25:32,777] INFO [Transaction Marker Channel Manager 1001]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka_1 | [2023-04-20 13:25:32,790] INFO Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Features{}, epoch=0). (kafka.server.FinalizedFeatureCache)
kafka_1 | [2023-04-20 13:25:32,830] INFO [ExpirationReaper-1001-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2023-04-20 13:25:32,892] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka_1 | [2023-04-20 13:25:32,901] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1001] Starting socket server acceptors and processors (kafka.network.SocketServer)
zookeeper_1 | 2023-04-20 13:25:32,906 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@596] - Got user-level KeeperException when processing sessionid:0x102294e27940000 type:multi cxid:0x40 zxid:0x1f txntype:-1 reqpath:n/a aborting remaining multi ops. Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
kafka_1 | [2023-04-20 13:25:32,909] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1001] Started data-plane acceptor and processor(s) for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka_1 | [2023-04-20 13:25:32,909] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1001] Started socket server acceptors and processors (kafka.network.SocketServer)
kafka_1 | [2023-04-20 13:25:32,912] INFO Kafka version: 2.8.1 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1 | [2023-04-20 13:25:32,913] INFO Kafka commitId: 839b886f9b732b15 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1 | [2023-04-20 13:25:32,913] INFO Kafka startTimeMs: 1681997132909 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1 | [2023-04-20 13:25:32,915] INFO [KafkaServer id=1001] started (kafka.server.KafkaServer)

this is the `docker ps` output:

CONTAINER ID   IMAGE                                      COMMAND                  CREATED         STATUS         PORTS                                                                                                                                                       NAMES
d2c548d77348   kafka-docker_kafka                         "start-kafka.sh"         3 minutes ago   Up 3 minutes   0.0.0.0:11991->11991/tcp, :::11991->11991/tcp, 0.0.0.0:9098->9092/tcp, :::9098->9092/tcp                                                                    kafka-docker_kafka_1
eb2695455f65   wurstmeister/zookeeper                     "/bin/sh -c '/usr/sb…"   3 minutes ago   Up 3 minutes   22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:49164->2181/tcp, :::49164->2181/tcp                                                                                     kafka-docker_zookeeper_1

I can connect with `telnet replace.internet.ip.address 11991` from my Windows PC, but when I open the URL http://replace.internet.ip.address:11991/metrics it always fails with a 502. Does this mean JMX didn't get enabled?
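As a side note on the 502: JMX speaks the RMI protocol, not HTTP, so a browser request to the JMX port can fail even when JMX itself is enabled. One way to sanity-check the JMX endpoint is Kafka's bundled JmxTool; this is a sketch, run from inside a Kafka distribution, and the host and MBean name are placeholders:

```shell
# Query a single MBean over JMX/RMI (not HTTP); replace the host
# with the broker's address. JmxTool ships with the Kafka distribution.
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://replace.internet.ip.address:11991/jmxrmi \
  --object-name kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec \
  --one-time true
```

If this prints metric values, JMX is up and the 502 is purely an HTTP-vs-RMI mismatch.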

@OneCricketeer

OneCricketeer commented Dec 3, 2023

You aren't running a JMX exporter container that actually runs the HTTP server you're looking for.

More specifically, if you're trying to use Prometheus, you need to download its JMX exporter JAR and set that up.

That's not an issue related to this container (which no longer exists in Dockerhub, btw), and you don't need `build: .` to get JMX working.
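For the Prometheus route, the missing piece is an exporter service that scrapes the broker over JMX and serves HTTP `/metrics` (your compose log already mentions an orphaned `kafka-docker_jmxexporter_1` container, which suggests such a service existed before). A minimal sketch of a sidecar in the same docker-compose.yml; the image name, port, and config path are assumptions based on the commonly used `sscaling/jmx-prometheus-exporter` image, so adapt them to whichever exporter build you run:

```yaml
  jmxexporter:
    # Scrapes the broker's JMX/RMI port and exposes Prometheus-format metrics over HTTP.
    image: sscaling/jmx-prometheus-exporter   # assumed image; any JMX exporter build works
    ports:
      - "5556:5556"                           # then: curl http://host:5556/metrics
    environment:
      SERVICE_PORT: 5556
    volumes:
      # config.yml points the exporter at kafka:11991 and lists the metric rules
      - ./jmx-exporter-config.yml:/opt/jmx_exporter/config.yml
    depends_on:
      - kafka
```

The alternative is attaching the exporter as a `-javaagent` inside the broker JVM via `KAFKA_OPTS`, which avoids a second container but requires the JAR inside the Kafka image.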
