Error occurring in the event catcher pod when messaging is enabled #607
It appears that you have kafka env vars set up so that the event catcher will publish events to the
Thanks @agrare for the pointers on running the manageiq-messaging client in the pod; that allowed me to debug further. While trying to test #604 on the Kafka side, I was seeing the same errors mentioned here when deploying our own Kafka as part of the project. With an external Kafka server configured, I was able to publish messages to it from my notebook, but doing the same from a pod produced the failure mentioned here, even though the Kafka server is reachable by IP address. The problem occurs when exercising ruby-kafka directly, so it sits below our manageiq-messaging gem. Testing in a pod as follows:
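The original test commands were not preserved in this thread. As a rough sketch of exercising ruby-kafka directly from the worker pod (the pod name is taken from the logs above; the broker address, client id, and topic are placeholders):

```shell
# Open a shell in the event catcher pod (pod name is an example from this issue).
oc rsh 1-vmware-infra-event-catcher-2-57cddffd5b-tkqdm

# Inside the pod, drive ruby-kafka directly, bypassing manageiq-messaging,
# to show the failure is reproducible at the ruby-kafka layer.
ruby -e '
  require "kafka"
  kafka = Kafka.new(["<kafka_server_ip>:9092"], client_id: "debug-client")
  kafka.deliver_message("test event", topic: "debug")
'
```

If the broker's advertised host name is not resolvable from the pod, the `deliver_message` call fails even though the seed broker IP itself is reachable.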
You'll notice in the Rails log that the failed connection is not to <kafka_server_ip> but to the FQDN of the server. The issue is that what is specified in MESSAGING_HOSTNAME is not what the ruby-kafka client ultimately connects to: after bootstrapping, it connects to the KAFKA_ADVERTISED_HOST_NAME configured on the Kafka server. My notebook had the FQDN of the Kafka server in its hosts file and the pod did not, which explains why it worked in one and not the other. The deployment recommendation would be to start the Kafka server with a reachable FQDN (or just the IP address for dev) as the advertised host name, i.e.
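A sketch of what that server-side setting might look like, assuming an image that follows the `KAFKA_ADVERTISED_HOST_NAME` env-var convention (the image name, ports, and addresses here are examples, not taken from this issue):

```shell
# Advertise an address that is reachable from the client pods.
# <kafka_server_ip> and <zookeeper_host> are placeholders.
docker run -d --name kafka -p 9092:9092 \
  -e KAFKA_ADVERTISED_HOST_NAME=<kafka_server_ip> \
  -e KAFKA_ADVERTISED_PORT=9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=<zookeeper_host>:2181 \
  wurstmeister/kafka
```

The key point is that whatever the broker advertises is what ruby-kafka will dial after the initial metadata exchange, regardless of which address the client used to bootstrap.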
Then setting MESSAGING_HOSTNAME to that same advertised host name (in this case <kafka_server_ip>) did the trick.
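On the client side, that could be applied to the worker deployment along these lines (the deployment name and address are examples; MESSAGING_HOSTNAME is the variable named in this issue):

```shell
# Point the event catcher at the broker's advertised host name.
oc set env deployment/1-vmware-infra-event-catcher \
  MESSAGING_HOSTNAME=<kafka_server_ip>
```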
Of course there is no local Kafka pod here, since I was testing against an external Kafka server. While this resolved connectivity to an external Kafka service, it might be the same issue for the local Kafka pod. We probably need to override KAFKA_ADVERTISED_HOST_NAME on the server as "kafka", since that is the default hostname we connect to.
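For the in-cluster case, a hypothetical sketch of that override, assuming the project-local Kafka runs under a deployment named "kafka" (name is an assumption, not confirmed in this issue):

```shell
# Advertise the in-cluster service name that clients already use by default.
oc set env deployment/kafka KAFKA_ADVERTISED_HOST_NAME=kafka
```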
This issue has been automatically marked as stale because it has not been updated for at least 3 months. If you can still reproduce this issue on the current release, please comment to keep it open. Thank you for all your contributions! More information about the ManageIQ triage process can be found in the triage process documentation.
This issue has been automatically closed because it has not been updated for at least 3 months. Feel free to reopen this issue if it is still valid. Thank you for all your contributions! More information about the ManageIQ triage process can be found in the triage process documentation.
Errors occur in the 1-vmware-infra-event-catcher-* pod when messaging is enabled for the pods.
Steps to reproduce:
An `oc logs` of 1-vmware-infra-event-catcher-2-57cddffd5b-tkqdm shows the following error when failing to send an event: