Error creating channel and connection: connection is already closed due to connection error; cause: com.rabbitmq.client.impl.UnknownChannelException: Unknown channel number 1 #101
Comments
+1, any updates on this?
Any updates?
We had this problem when RabbitMQ was overloaded (10,000 msg/s on a single queue). We also made a lot of changes in the receiver (not PR'd yet, but we will) to better handle the timeouts that were causing Spark Streaming scheduling delays.
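The kind of receiver-side timeout handling described above can be sketched as a retry-with-backoff wrapper around channel creation. This is a hypothetical illustration, not the library's actual code: `withRetry`, the attempt count, and the delays are all made up, and the fake operation in `main` merely stands in for a real `newConnection().createChannel()` call.

```java
import java.util.concurrent.Callable;

// Hypothetical sketch: retry connection/channel setup with exponential
// backoff instead of failing the receiver on the first timeout.
public class RetrySketch {
    static <T> T withRetry(Callable<T> op, int maxAttempts, long baseDelayMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) { // e.g. timeouts, "connection is already closed"
                last = e;
                Thread.sleep(baseDelayMs << (attempt - 1)); // 100ms, 200ms, 400ms...
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Fake operation that fails twice before succeeding, standing in
        // for connectionFactory.newConnection().createChannel().
        final int[] calls = {0};
        String result = withRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("connection is already closed");
            return "channel-open";
        }, 5, 100);
        System.out.println(result + " after " + calls[0] + " attempts");
        // prints: channel-open after 3 attempts
    }
}
```

A wrapper like this only papers over transient failures; if the broker is persistently overloaded, the retries just delay the same error.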
I'm having the same issue. When the queue grows beyond a certain threshold, it looks like RabbitMQ discards messages from memory, and consuming them then requires reading from disk. Our streaming app tends to get 5-6k ack rates, but when RabbitMQ is reading from disk that falls to 80-120 per second, which is horrible. At those low ack rates, you start to see failures like the above or other timeout exceptions. For me the workaround was to run multiple streaming jobs during extremely heavy load periods, so that when the app slows to disk-read speeds (our current maxReceiveTime setting is 0.8× the streaming window), the queue doesn't go over the threshold.
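The collapse described above can be detected programmatically before timeouts start firing. A minimal sketch, assuming the caller samples ack rates itself; `AckRateMonitor`, the 5,000 acks/s baseline, and the 10% collapse ratio are all hypothetical, loosely based on the numbers reported in this thread:

```java
// Hypothetical helper: flag when the consumer's ack rate has collapsed,
// which in this thread correlates with RabbitMQ paging the queue to disk.
public class AckRateMonitor {
    private final double baselineRate;  // e.g. ~5000 acks/s when in memory
    private final double collapseRatio; // e.g. 0.1 -> below 10% of baseline

    public AckRateMonitor(double baselineRate, double collapseRatio) {
        this.baselineRate = baselineRate;
        this.collapseRatio = collapseRatio;
    }

    /** True when the observed rate suggests disk-bound reads (time to scale out). */
    public boolean diskBound(double observedRate) {
        return observedRate < baselineRate * collapseRatio;
    }

    public static void main(String[] args) {
        AckRateMonitor m = new AckRateMonitor(5000, 0.1);
        System.out.println(m.diskBound(120));  // 120 acks/s, like the thread reports
        System.out.println(m.diskBound(4800)); // healthy in-memory rate
    }
}
```

Such a flag could feed an alert or trigger launching the extra streaming jobs mentioned above.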
Is there any way we can increase the timeout settings?
Yes and no. Some timeouts are hard-coded (10 s) in the AMQP client.
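For the timeouts that are configurable, the RabbitMQ Java client exposes setters on `ConnectionFactory`. A sketch follows; the host and values are illustrative, and whether each setter is available depends on the amqp-client version your build pulls in:

```java
import com.rabbitmq.client.ConnectionFactory;

// Sketch: raise the client-side timeouts that *are* configurable.
// Values are illustrative placeholders, not recommendations.
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("rabbitmq-host");          // placeholder host
factory.setConnectionTimeout(30_000);      // TCP connect timeout, ms
factory.setHandshakeTimeout(30_000);       // AMQP handshake, ms (default is 10 s)
factory.setRequestedHeartbeat(60);         // heartbeat interval, seconds
factory.setAutomaticRecoveryEnabled(true); // recover connections after failures
```

Note the 10 s default handshake timeout is likely the hard-coded value referred to above; raising it helps only if the broker eventually responds.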
spark-rabbitmq version - 0.5.1
spark version - 2.1.0 (scala version - 2.11.8)
rabbitmq version - 3.5.6
I'm using the Distributed approach for streaming -
I keep getting
Has anyone had this issue before? Any suggestions on how to solve it?
Thanks
Akhila.