rabbitmq:3.9.0 docker, client keeps complaining about being disconnected from server #509
That would be due to #467. Reasoning behind the change: #506 (comment)
Do you mean it's a configuration issue? BTW, my test uses the default config; the command line is shown below.
The environment variable change would be the notable change that we implemented; if it's not related to that, then it could be changes having to do with the 3.9 release itself.

$ docker run -d --name=rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3.9.0
Unable to find image 'rabbitmq:3.9.0' locally
3.9.0: Pulling from library/rabbitmq
16ec32c2132b: Pull complete
3adbc39b91c4: Pull complete
2611ca544d44: Pull complete
d28525f31fbf: Pull complete
a505e26510db: Pull complete
bb9850617192: Pull complete
30c794338f65: Pull complete
0ea3c2c93893: Pull complete
b108568dff1d: Pull complete
Digest: sha256:6c75795de210cd5efd63a3014bd91c350c8d581d55834ed6217d259f3c14c77a
Status: Downloaded newer image for rabbitmq:3.9.0
19dc39f100bb8083173c9d87665117bcf0b0d852c1e0867137f0dcce9ba6815a

$ docker logs rabbitmq 2>&1 | tail -n 7
2021-07-28 21:38:17.160553+00:00 [info] <0.671.0> Ready to start client connection listeners
2021-07-28 21:38:17.164306+00:00 [info] <0.748.0> started TCP listener on [::]:5672
completed with 3 plugins.
2021-07-28 21:38:17.458345+00:00 [info] <0.671.0> Server startup complete; 3 plugins started.
2021-07-28 21:38:17.458345+00:00 [info] <0.671.0> * rabbitmq_prometheus
2021-07-28 21:38:17.458345+00:00 [info] <0.671.0> * rabbitmq_web_dispatch
2021-07-28 21:38:17.458345+00:00 [info] <0.671.0> * rabbitmq_management_agent
$ dmesg | tail -n 40
[1762768.075546] vethafda9e2: renamed from eth0
[1762768.181036] docker0: port 1(veth870c6c0) entered disabled state
[1762768.185918] device veth870c6c0 left promiscuous mode
[1762768.185925] docker0: port 1(veth870c6c0) entered disabled state
[1762771.085338] device veth6bedd7e entered promiscuous mode
[1762771.186829] IPVS: Creating netns size=2104 id=267523
[1762771.385393] eth0: renamed from veth99505e1
[1762771.408687] docker0: port 1(veth6bedd7e) entered forwarding state
[1762771.408697] docker0: port 1(veth6bedd7e) entered forwarding state
[1762771.676983] veth99505e1: renamed from eth0
[1762771.725400] docker0: port 1(veth6bedd7e) entered disabled state
[1762771.755758] docker0: port 1(veth6bedd7e) entered disabled state
[1762771.761515] device veth6bedd7e left promiscuous mode
[1762771.761524] docker0: port 1(veth6bedd7e) entered disabled state
[1762780.751245] IPv6: ADDRCONF(NETDEV_UP): pwdbr-7b78f1a7: link is not ready
[1762786.845599] device veth50ec4ab entered promiscuous mode
[1762786.845860] IPv6: ADDRCONF(NETDEV_UP): veth50ec4ab: link is not ready
[1762786.863959] device vethph776c62229 entered promiscuous mode
[1762786.864182] IPv6: ADDRCONF(NETDEV_UP): vethph776c62229: link is not ready
[1762786.864209] pwdbr-7b78f1a7: port 1(vethph776c62229) entered forwarding state
[1762786.864217] pwdbr-7b78f1a7: port 1(vethph776c62229) entered forwarding state
[1762786.864579] pwdbr-7b78f1a7: port 1(vethph776c62229) entered disabled state
[1762786.944071] IPVS: Creating netns size=2104 id=267524
[1762787.345174] eth0: renamed from vethpp776c62229
[1762787.389305] IPv6: ADDRCONF(NETDEV_CHANGE): vethph776c62229: link becomes ready
[1762787.389488] pwdbr-7b78f1a7: port 1(vethph776c62229) entered forwarding state
[1762787.389495] pwdbr-7b78f1a7: port 1(vethph776c62229) entered forwarding state
[1762787.389541] IPv6: ADDRCONF(NETDEV_CHANGE): pwdbr-7b78f1a7: link becomes ready
[1762787.413566] eth1: renamed from veth020c604
[1762787.437034] IPv6: ADDRCONF(NETDEV_CHANGE): veth50ec4ab: link becomes ready
[1762787.437268] docker_gwbridge: port 35(veth50ec4ab) entered forwarding state
[1762787.437324] docker_gwbridge: port 35(veth50ec4ab) entered forwarding state
[1762802.396018] pwdbr-7b78f1a7: port 1(vethph776c62229) entered forwarding state
[1762802.460046] docker_gwbridge: port 35(veth50ec4ab) entered forwarding state
[1762810.011981] device veth3f8aa51 entered promiscuous mode
[1762810.285015] IPVS: Creating netns size=2104 id=267525
[1762810.696385] eth0: renamed from veth7d9c013
[1762810.761363] docker0: port 1(veth3f8aa51) entered forwarding state
[1762810.761385] docker0: port 1(veth3f8aa51) entered forwarding state
[1762825.788024] docker0: port 1(veth3f8aa51) entered forwarding state
Just tested with Python pika 1.2.0; the producer and consumer work fine with RabbitMQ 3.9.0, so my guess is that it may be related to Celery (as the RabbitMQ client)? A sketch of this kind of test is included below.
The error message:
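For reference, a minimal pika 1.2.0 producer/consumer round trip of the kind described above might look like the following sketch; the host, queue name, and message body are illustrative assumptions, not taken from the report.

```python
# Hypothetical smoke test against the rabbitmq:3.9.0 container using pika 1.2.0.
# Host, queue name, and message body are assumptions for illustration.
import pika

params = pika.ConnectionParameters(host="localhost", port=5672)
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="smoke-test")

# Producer: publish one message to the default exchange.
channel.basic_publish(exchange="", routing_key="smoke-test", body=b"hello")

# Consumer: fetch the message back and print it.
method, properties, body = channel.basic_get(queue="smoke-test", auto_ack=True)
print(body)

connection.close()
```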
@PingHao you don't have to guess. See the node logs and the connectivity troubleshooting guide for clues. There are no Python library incompatibilities in RabbitMQ 3.9.0, as there are no AMQP 0-9-1 protocol changes of any kind.
@michaelklishin Thanks for your feedback. I was using the 'guest' user; however, I just tried creating a new user, giving it permissions, and running with that, and the problem is still the same. Here are the docker logs showing what's happening with user "testuser"; the reason given is "reached_max_restart_intensity", and I'm not sure how to overcome that. BTW, I also tested with the pika Python client: a simple producer and consumer pair that keeps pushing messages works fine, without any error produced in the RabbitMQ log.
@PingHao this is likely rabbitmq/rabbitmq-server#3230. Avoiding setting a global QoS prefetch and using "regular" (per-channel) prefetch instead should sidestep this exception.
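As an illustration of the suggested workaround, here is a hedged pika sketch: with RabbitMQ, `global_qos=True` makes the prefetch limit shared across all consumers on the channel (the code path affected by the linked 3.9.0 issue), while the default `global_qos=False` applies the limit to each consumer individually. The prefetch count of 10 and the host are arbitrary example values.

```python
# Sketch: per-consumer prefetch vs. global QoS in pika 1.x.
# Prefetch count and host are illustrative assumptions.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Global QoS: the limit is shared by all consumers on the channel.
# This is the setting that triggers the exception described in the linked issue.
# channel.basic_qos(prefetch_count=10, global_qos=True)

# "Regular" (per-consumer) prefetch, the default, which sidesteps the exception.
channel.basic_qos(prefetch_count=10, global_qos=False)

connection.close()
```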
It seems like the issue was fixed in 3.9.1.
@mathieu-lemay Thanks, that's correct; the problem is gone on 3.9.1.
I just pulled and ran the 3.9.0 container today and let Celery use it for the task queue. For every Celery task sent out, the Celery worker always reports an error message that it has been disconnected from the RabbitMQ server, and then it recovers from it. I tried running 3.9.0 and 3.9.0-alpine, same result, and I tried on two different servers, one CentOS 7 and one CentOS 8, same error.
However, by switching to the 3.8.9 image, everything is fine.
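For context, a minimal Celery setup of the kind described might look like the sketch below; the broker URL, module name, and task body are illustrative assumptions rather than the reporter's actual code.

```python
# tasks.py -- hypothetical minimal Celery app pointed at the RabbitMQ container.
# Broker URL and task body are assumptions for illustration.
from celery import Celery

app = Celery("tasks", broker="amqp://guest:guest@localhost:5672//")

@app.task
def add(x, y):
    return x + y
```

Starting a worker with `celery -A tasks worker --loglevel=info` and calling `add.delay(2, 3)` from a Python shell would exercise the same publish/consume round trip that the worker reports the disconnect on.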
Here is the Linux dmesg output; my guess is that somehow the 3.9.0 version causes docker's veth interface to go down temporarily.
[Wed Jul 28 15:27:00 2021] docker0: port 1(vetha2ca9f3) entered blocking state
[Wed Jul 28 15:27:00 2021] docker0: port 1(vetha2ca9f3) entered disabled state
[Wed Jul 28 15:27:00 2021] device vetha2ca9f3 entered promiscuous mode
[Wed Jul 28 15:27:00 2021] IPv6: ADDRCONF(NETDEV_UP): vetha2ca9f3: link is not ready
[Wed Jul 28 15:27:00 2021] eth0: renamed from veth5dd6688
[Wed Jul 28 15:27:00 2021] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[Wed Jul 28 15:27:00 2021] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[Wed Jul 28 15:27:00 2021] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[Wed Jul 28 15:27:00 2021] IPv6: ADDRCONF(NETDEV_CHANGE): vetha2ca9f3: link becomes ready
[Wed Jul 28 15:27:00 2021] docker0: port 1(vetha2ca9f3) entered blocking state
[Wed Jul 28 15:27:00 2021] docker0: port 1(vetha2ca9f3) entered forwarding state
[Wed Jul 28 15:27:00 2021] userif-3: sent link down event.
[Wed Jul 28 15:27:00 2021] userif-3: sent link up event.