A Kafka Producer–Consumer project using Python and Docker (KRaft mode). Includes a fully containerized Kafka setup with Python clients for sending and consuming real-time messages.


VaishnaviSh14/Kafka---Project


Kafka Project — Python Producer & Consumer (Docker + KRaft Mode)

This project demonstrates how to run Apache Kafka inside Docker using KRaft mode
(which replaces ZooKeeper), and how to build a producer and a consumer in Python
using the confluent-kafka library.


🏗️ Architecture Overview

Kafka is designed to run as a distributed cluster consisting of multiple brokers.

✔️ Broker

A Kafka broker:

  • Stores and manages data
  • Organizes messages into topics and partitions
  • Handles incoming writes from producers
  • Serves read requests to consumers

✔️ Controller (KRaft Mode)

Kafka now uses KRaft (Kafka Raft metadata mode) instead of ZooKeeper to maintain cluster metadata.

In KRaft mode, one broker acts as the active Controller. The Controller:

  • Manages cluster metadata
  • Tracks the leader of each partition
  • Reassigns partitions when a broker fails
  • Ensures cluster stability

If the active Controller fails, another broker automatically takes over.


🐳 Running Kafka with Docker

Kafka runs from a Docker Compose setup included in this project.

To verify that Kafka is running:

docker exec -it kafka kafka-topics --list --bootstrap-server localhost:9092

📦 Kafka Topic Management Commands

✔️ List all topics

docker exec -it kafka kafka-topics --list --bootstrap-server localhost:9092

✔️ Describe a topic

docker exec -it kafka kafka-topics --describe --topic orders --bootstrap-server localhost:9092

✔️ See more options

kafka-topics --help
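Topics can also be managed programmatically. A hedged sketch using confluent-kafka's AdminClient (the partition count is illustrative, and replication factor 1 assumes the single-broker setup from this project):

```python
TOPIC = "orders"     # topic name used in the commands above
NUM_PARTITIONS = 3   # illustrative; choose based on expected parallelism
REPLICATION = 1      # a single-broker cluster can only keep one replica

def create_topic(bootstrap="localhost:9092"):
    # Deferred import so the constants above are usable even without
    # confluent-kafka installed.
    from confluent_kafka.admin import AdminClient, NewTopic
    admin = AdminClient({"bootstrap.servers": bootstrap})
    futures = admin.create_topics(
        [NewTopic(TOPIC, num_partitions=NUM_PARTITIONS,
                  replication_factor=REPLICATION)]
    )
    futures[TOPIC].result()  # .result() re-raises any broker-side error
```

Call create_topic() with Kafka running to create the topic without the CLI.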

🛠️ Installing Dependencies

Install the Kafka Python client:

pip3 install confluent-kafka

🔌 Understanding bootstrap.servers

bootstrap.servers = "localhost:9092"

This tells your Kafka client which broker(s) to connect to initially.

Kafka uses this info to:

  • Discover the cluster
  • Find all live brokers
  • Fetch topic + partition metadata
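For example, a client needs nothing more than this one setting to map the whole cluster. A sketch, assuming confluent-kafka is installed:

```python
conf = {"bootstrap.servers": "localhost:9092"}  # initial contact point(s), comma-separated

def discover(conf):
    # Deferred import so the config dict works without the client installed.
    # One metadata request returns every broker and topic the cluster knows
    # about, not just the broker listed in conf.
    from confluent_kafka.admin import AdminClient
    md = AdminClient(conf).list_topics(timeout=5)
    return list(md.brokers), list(md.topics)
```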

📝 Kafka Producer (Python)

The producer creates a random order, serializes it to JSON, and sends it to a Kafka topic.
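The repository's producer.py is not reproduced here; a minimal sketch of that logic, assuming an `orders` topic and illustrative field names, could look like this:

```python
import json
import random
import uuid

def make_order():
    """Build a random order record (field names are illustrative)."""
    return {
        "order_id": str(uuid.uuid4()),
        "item": random.choice(["laptop", "phone", "book"]),
        "quantity": random.randint(1, 5),
    }

def serialize(order):
    # Kafka transports raw bytes, so the dict is JSON-encoded first.
    return json.dumps(order).encode("utf-8")

def send_order(bootstrap="localhost:9092", topic="orders"):
    # Deferred import so the helpers above work without the client installed.
    from confluent_kafka import Producer
    producer = Producer({"bootstrap.servers": bootstrap})
    order = make_order()
    # Keying by order_id keeps retries of the same order in one partition.
    producer.produce(topic, key=order["order_id"], value=serialize(order))
    producer.flush()  # block until the broker acknowledges delivery
```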


🧵 Kafka Consumer (Python)

The consumer polls Kafka for new messages in a loop and displays each one as it arrives.


🔄 Kafka Consumer Model — Polling

Each call to poll() waits up to timeout seconds and returns the next available message, or None if nothing arrived in time:

msg = consumer.poll(timeout)
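A minimal consumer loop around this call, assuming the same `orders` topic and broker address used elsewhere in this README, might look like:

```python
import json

def handle(payload: bytes):
    """Process one message (here: just decode and print it)."""
    order = json.loads(payload)
    print(f"received order {order.get('order_id')}: {order}")

def consume(bootstrap="localhost:9092", topic="orders", group="order-processors"):
    # Deferred import so handle() stays testable without the client installed.
    from confluent_kafka import Consumer
    consumer = Consumer({
        "bootstrap.servers": bootstrap,
        "group.id": group,
        "auto.offset.reset": "earliest",  # start from the beginning on first run
    })
    consumer.subscribe([topic])
    try:
        while True:
            msg = consumer.poll(1.0)   # wait up to 1 s for a message
            if msg is None:
                continue               # timeout: nothing arrived this round
            if msg.error():
                print(f"consumer error: {msg.error()}")
                continue
            handle(msg.value())
    finally:
        consumer.close()               # commit offsets and leave the group
```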

👥 Understanding group.id

A consumer group ensures:

  • Load balancing
  • Fault tolerance
  • Each message is processed by only one consumer within the group
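As a sketch, this split is visible purely in configuration (group names here are illustrative):

```python
BROKER = "localhost:9092"

# Two workers sharing a group.id divide the topic's partitions between
# them, so each message reaches exactly one of the two.
worker_a = {"bootstrap.servers": BROKER, "group.id": "order-processors"}
worker_b = {"bootstrap.servers": BROKER, "group.id": "order-processors"}

# A consumer with a different group.id gets its own copy of every message.
auditor = {"bootstrap.servers": BROKER, "group.id": "audit-log"}
```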

🚀 How to Run the System

Start Kafka:

docker compose up -d

Run Consumer:

python3 consumer.py

Run Producer:

python3 producer.py

Shutdown:

docker compose down

📌 Summary

You now have a complete Kafka setup using:

  • Docker (KRaft mode)
  • Python Producer/Consumer
  • Topic management commands
