doc(plc4j): Integrating Kafka and PLC4x using Docker. #892
base: develop
@@ -234,3 +234,11 @@ between the base schemas.
The schemas for the sink and source connectors are the same. This allows us to produce data from one PLC and send it to a sink.
### Start with Docker
If you want to use PLC4x with Kafka on Docker, simply download the docker-compose.yml file, configure the necessary port and IP settings, and start the containers. The available docker-compose.yml file includes four containers: zookeeper, kafka, kafka-connect, and control-center. The control-center container provides a web interface to facilitate the configuration of kafka-connect. If you don't want to use it, you can remove it from the docker-compose.yml file.
Reviewer comment: Minor formatting: PLC4x should be PLC4X (suggested change).
After downloading the docker-compose.yml file, start the containers with the following command:

docker-compose up -d
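Once the stack is up, a quick sanity check (standard Docker Compose commands; the kafka-connect container name comes from the compose file below) is to list the containers and follow the Kafka Connect logs until the PLC4X connector has been installed:

docker-compose ps
docker-compose logs -f kafka-connect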
@@ -0,0 +1,101 @@
version: '3'
Reviewer comment: It looks like you've used a mix of Confluent versions: latest, 6.0.0 and 6.0.1, and 6.0.1 is also far from the latest (7.3.3). It also might be nicer to have the PLC4X version as an environment variable so it is easier to find when changing it for a release. I'm guessing you've grabbed a base docker-compose file from Confluent; can you confirm where you got it from so that we can check what license it was released under?

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    networks:
      - kafka_network
    ports:
      - 22181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    networks:
      - kafka_network
    depends_on:
      - zookeeper
    ports:
      - 29093:29093
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: EXTERNAL_DIFFERENT_HOST://:29093,INTERNAL://:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,EXTERNAL_DIFFERENT_HOST://YOUR_IP:29093 # YOUR_IP = necessary to enable external access; please insert your machine's IP.
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL_DIFFERENT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  control-center:
    image: confluentinc/cp-enterprise-control-center:6.0.1
    hostname: control-center
    depends_on:
      - zookeeper
      - kafka
      - kafka-connect
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'kafka:9092'
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_CONNECT_CLUSTER: http://kafka-connect:8083
      PORT: 9021
    networks:
      - kafka_network

  kafka-connect:
    image: confluentinc/cp-kafka-connect-base:6.0.0
    container_name: kafka-connect
    depends_on:
      - zookeeper
      - kafka
    ports:
      - 8083:8083
    environment:
      CONNECT_BOOTSTRAP_SERVERS: "kafka:9092"
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: kafka-connect
      CONNECT_CONFIG_STORAGE_TOPIC: _connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: _connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: _connect-status
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
      CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
      CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
      CONNECT_LOG4J_APPENDER_STDOUT_LAYOUT_CONVERSIONPATTERN: "[%d] %p %X{connector.context}%m (%c:%L)%n"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
      # Optional settings to include to support Confluent Control Center
      # CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      # CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      # ---------------
      CONNECT_PLUGIN_PATH: /usr/share/java,/usr/share/confluent-hub-components,/data/connect-jars
Reviewer comment: Please update this to not reference Confluent.
    # If you want to use the Confluent Hub installer to d/l component, but make them available
    # when running this offline, spin up the stack once and then run:
    #   docker cp kafka-connect:/usr/share/confluent-hub-components ./data/connect-jars
    volumes:
      - $PWD/data:/data
    # In the command section, $ are replaced with $$ to avoid the error 'Invalid interpolation format for "command" option'
    command:
      - bash
      - -c
      - |
        echo "Installing Connector"
        confluent-hub install --no-prompt apache/kafka-connect-plc4x-plc4j:0.10.0
        #
        echo "Launching Kafka Connect worker"
        /etc/confluent/docker/run &
Reviewer comment: Please update this to not reference Confluent.
        #
        sleep infinity
    networks:
      - kafka_network

networks:
  kafka_network:
    name: kafka_docker_net
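Once the worker has finished the confluent-hub install step above, the standard Kafka Connect REST API exposed on port 8083 can be used to confirm that the PLC4X connector plugin is available (localhost assumes the command is run on the Docker host):

curl -s http://localhost:8083/connector-plugins

The PLC4X source and sink connector classes should appear in the returned list once the installation has completed.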
Reviewer comment: I don't have any major objections to including a docker-compose file based off the Confluent images.
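One possible way to address the version-mixing comment above is Docker Compose variable substitution. The following is only a sketch: CONFLUENT_VERSION and PLC4X_VERSION are hypothetical variable names that would live in an .env file next to docker-compose.yml, and Compose interpolates them before the file is handed to the engine (including inside the command block):

# .env (hypothetical file, read automatically by docker-compose)
CONFLUENT_VERSION=6.0.1
PLC4X_VERSION=0.10.0

# docker-compose.yml excerpt using the variables; the rest of each
# service definition stays as in the file above
  kafka:
    image: confluentinc/cp-kafka:${CONFLUENT_VERSION}
  kafka-connect:
    image: confluentinc/cp-kafka-connect-base:${CONFLUENT_VERSION}
    command:
      - bash
      - -c
      - |
        confluent-hub install --no-prompt apache/kafka-connect-plc4x-plc4j:${PLC4X_VERSION}
        /etc/confluent/docker/run

With this in place, bumping the PLC4X or Confluent release means editing a single line in .env rather than searching the compose file.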