This project demonstrates how to run Apache Kafka inside Docker using KRaft mode
(which replaces ZooKeeper), and how to build a Producer and Consumer in Python using
the confluent-kafka library.
Kafka is designed to run as a distributed cluster consisting of multiple brokers.
A Kafka broker:
- Stores and manages data
- Organizes messages into topics and partitions
- Handles incoming writes from producers
- Serves read requests to consumers
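For example, you can create a topic with the kafka-topics CLI that ships with the broker image (the topic name matches the one used later in this project; the partition count here is illustrative):

```bash
docker exec -it kafka kafka-topics --create --topic orders \
  --partitions 3 --replication-factor 1 \
  --bootstrap-server localhost:9092
```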
Kafka now uses KRaft (Kafka Raft Metadata mode) to maintain cluster metadata, replacing ZooKeeper.
In KRaft mode, one broker becomes the Controller, which:
- Manages cluster metadata
- Tracks the leader of each partition
- Reassigns partitions when a broker fails
- Ensures cluster stability
If the active Controller fails, another broker automatically takes over.
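If your broker image includes the kafka-metadata-quorum tool (available since Kafka 3.3), you can check which node is currently the active controller; the output includes a LeaderId field:

```bash
docker exec -it kafka kafka-metadata-quorum --bootstrap-server localhost:9092 describe --status
```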
Kafka runs from a Docker Compose setup included in this project.
To verify that Kafka is running:
```bash
docker exec -it kafka kafka-topics --list --bootstrap-server localhost:9092
```

To inspect a specific topic, such as orders:

```bash
docker exec -it kafka kafka-topics --describe --topic orders --bootstrap-server localhost:9092
```

To see all available topic management options:

```bash
kafka-topics --help
```

Install the Kafka Python client:
```bash
pip3 install confluent-kafka
```

The key client setting is the bootstrap server list:

```
bootstrap.servers = "localhost:9092"
```

This tells your Kafka client which broker(s) to connect to initially.
Kafka uses this info to:
- Discover the cluster
- Find all alive brokers
- Fetch topic + partition metadata
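You can watch this discovery step happen, sketched here with confluent-kafka's AdminClient (the timeout value is arbitrary):

```python
from confluent_kafka.admin import AdminClient

# Only the initial bootstrap address is supplied; the client discovers the rest
admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# Fetch cluster metadata: brokers, topics, and their partitions
metadata = admin.list_topics(timeout=10)

print("Brokers:", metadata.brokers)
for name, topic in metadata.topics.items():
    print(f"Topic {name}: {len(topic.partitions)} partition(s)")
```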
The producer (producer.py) creates a random order, serializes it to JSON, and sends it to Kafka.
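A minimal sketch of what producer.py can look like, assuming the orders topic from earlier and illustrative order fields:

```python
import json
import random
import time

from confluent_kafka import Producer

# Broker address from the Docker Compose setup
producer = Producer({"bootstrap.servers": "localhost:9092"})

def delivery_report(err, msg):
    # Called once per message to confirm delivery or report failure
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [{msg.partition()}] @ offset {msg.offset()}")

# Illustrative random order payload
order = {
    "order_id": random.randint(1000, 9999),
    "item": random.choice(["book", "laptop", "phone"]),
    "ts": time.time(),
}

# Serialize to JSON and send to the 'orders' topic
producer.produce("orders", value=json.dumps(order).encode("utf-8"), callback=delivery_report)

# Block until all queued messages are delivered
producer.flush()
```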
The consumer (consumer.py) uses polling to fetch messages and display them:

```python
msg = consumer.poll(timeout)
```

It runs as part of a consumer group (a full sketch follows the list below). A consumer group ensures:
- Load balancing
- Fault tolerance
- Each message is processed once per group
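A minimal consumer sketch along these lines, assuming the orders topic and an illustrative group id:

```python
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "order-processors",   # illustrative group id
    "auto.offset.reset": "earliest",  # start from the beginning if no committed offset
})
consumer.subscribe(["orders"])

try:
    while True:
        msg = consumer.poll(1.0)  # wait up to 1 second for a message
        if msg is None:
            continue              # no message within the timeout
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        order = json.loads(msg.value().decode("utf-8"))
        print(f"Received order: {order}")
except KeyboardInterrupt:
    pass
finally:
    consumer.close()  # commit final offsets and leave the group cleanly
```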
Start Kafka:
```bash
docker compose up -d
```

Run Consumer:

```bash
python3 consumer.py
```

Run Producer:

```bash
python3 producer.py
```

Shutdown:

```bash
docker compose down
```

You now have a complete Kafka setup using:
- Docker (KRaft mode)
- Python Producer/Consumer
- Topic management commands