Facing issue in creating reporting kafka connection object #62
Comments
I think the issue might be the Kafka connectivity settings in your compose file.
Hello @naorlivne, thanks for the response. I changed it as suggested, but I am still facing the same issue.
Note: <my_vps_url> is the IP address of my server.
Still think that the issue is Kafka connectivity. Can you confirm that your Kafka container starts & stays up (e.g. by watching the container status and logs)?

Assuming Kafka stays online, can you try connecting to it from the worker using the Kafka CLI to test? My feeling is that when you enter PLAINTEXT://<my_vps_url>:9092 the Kafka container doesn't know what that IP is, so it can't bind to it (as the IP belongs to the host and not the container). Another option you can test is going back to the original compose file (before my suggested change).

Reading the provided reporter logs I can see it tried connecting a few times and failed until it finally succeeded. This makes sense when you consider the time it takes Kafka to boot up, and it also means that the connection to Kafka from inside the compose network works and that the issue seems to be external access to it.
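For reference, the usual way to handle this in the Kafka service of a compose file is a dual-listener setup: the broker binds to 0.0.0.0 inside the container and only advertises the host IP to external clients, while keeping a separate listener for containers on the compose network (such as the reporter). The snippet below is a minimal sketch assuming the wurstmeister/kafka image and its KAFKA_* environment variables; the image choice and listener names are illustrative assumptions, not taken from this issue's compose file.

```yaml
# Hedged sketch of a Kafka service (compose fragment) with one listener for the
# compose network and one advertised to external clients such as the Pi worker.
# Image and listener names are illustrative assumptions.
kafka:
  image: wurstmeister/kafka
  ports:
    - "9092:9092"                     # external clients reach the broker here
  depends_on:
    - zookeeper
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    # bind to all interfaces inside the container...
    KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
    # ...but advertise the compose hostname internally and the VPS IP externally
    KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://<my_vps_url>:9092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
```

With this pattern the container never tries to bind to the host's IP; it only advertises it in the metadata returned to clients, which avoids the bind problem described above.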
Thanks for the response. It was my silly mistake: I forgot to open the Kafka port on the VPS machine, and that's why the issue occurred.
Hello
I have configured the Nebula worker on a Raspberry Pi.
I am using an Ubuntu 18.04 VPS on which I run the following containers: Manager, Reporter, Mongo, Kafka, and Zookeeper.
Expected/Wanted Behavior
The worker sends its current state to the Kafka cluster after every sync with the manager. The reporter component pulls from Kafka and populates the state data into the backend DB. The manager can then query the new state data from the backend DB to let the admin know the state of the managed devices.
Actual Behavior
When the Nebula worker downloads and updates the application while reporting the state using Kafka, I get the following error =>
1. Logs of Nebula worker =>
2. Logs of Nebula Reporter =>
Note: As the Kafka logs are too big I haven't added them, but if you need them for debugging I can attach the log file.
Steps to Reproduce the Problem
1. Configured the worker on the Raspberry Pi using the docker-compose.yml and the custom Docker build mentioned in the Specifications section.
2. Configured the Manager, Reporter, Mongo, Kafka, and Zookeeper on Ubuntu 18.04 using the docker-compose.yml mentioned in the Specifications section.
3. Configured a private Docker registry for maintaining the update releases and images.
Specifications
1. Nebula worker =>
On the worker side, as I am using a Raspberry Pi, I had to build the image on the Pi and start the container. To achieve this I did the following steps =>
Dir Structure =>
Dockerfile =>
docker-compose.yml for worker =>
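Purely as an illustration (not the file used in this setup), a worker-side compose file of this general shape could look like the sketch below. The environment variable names (NEBULA_MANAGER_ADDRESS, KAFKA_BOOTSTRAP_SERVERS, DEVICE_GROUP) are placeholders for whatever the Nebula worker documentation actually specifies.

```yaml
# Illustrative worker compose sketch only; the variable names are placeholders,
# not the Nebula worker's documented configuration keys.
version: '3'
services:
  worker:
    build: .                                        # image built locally on the Pi
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # the worker manages containers on the device
    environment:
      NEBULA_MANAGER_ADDRESS: "http://<my_vps_url>"  # placeholder name
      KAFKA_BOOTSTRAP_SERVERS: "<my_vps_url>:9092"   # placeholder name
      DEVICE_GROUP: "raspberry_pi"                   # placeholder name
```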
2. Nebula Manager, Mongo, Kafka, Reporter and Zookeeper =>
docker-compose.yml =>
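For completeness, the overall shape of the server-side stack would be along these lines. Again a hedged sketch, not the compose file from this setup: the image names are assumptions, the Kafka listener settings are the ones sketched earlier in the thread, and the manager/reporter configuration is omitted rather than guessed.

```yaml
# Hedged sketch of the server-side stack; images are assumptions and the
# manager/reporter configuration is intentionally left out.
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper

  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"   # must also be allowed through the VPS firewall (the actual fix in this issue)
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # listener settings as in the earlier dual-listener sketch

  mongo:
    image: mongo:4.0

  # The manager and reporter services are omitted here; their Mongo/Kafka
  # connection settings and credentials should follow the nebula-orchestrator
  # documentation, and the manager would also publish its API port for the workers.
```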