Failover requests would not work for partially used streams #106
Comments
This would explain scality/MetaData#940 then.
I guess. I just can't think of why the source stream would end up with an error; I would have assumed there were TCP errors when connecting to any of the nodes from the start.
On the MD issue, @msegura traced some connection closes to the way keepalive is configured on our sproxyd's Tengine. Does it look to you like this could be the explanation?
I think it fits the picture. The log line {"name":"SproxydClient","error":{"code":"ECONNRESET"},"time":1484643601662,"req_id":"a7ea4c3151407e76b10e:7a6691a0746cb6785797","level":"error","message":"PUT chunk to sproxyd","hostname":"asvppdxobjs301.gecis.io","pid":59} could only happen if a connection was established to the remote host, a socket was assigned, and the remote host later destroyed it for some reason. In that case, the stream may have been partially consumed.
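A minimal sketch of that scenario, assuming a plain Node `http` PUT (illustrative only, not the actual SproxydClient code; the path and function name are made up):

```js
// Minimal sketch: a PUT whose socket is assigned and later destroyed by the
// remote host, surfacing as ECONNRESET on the client request.
const http = require('http');

function putChunk(sourceStream, host, port, key, callback) {
    const req = http.request({
        method: 'PUT',
        host,
        port,
        path: `/proxy/arc/${key}`, // hypothetical sproxyd path
    }, res => callback(null, res));

    req.on('error', err => {
        // ECONNRESET here means the connection was established and the remote
        // host later destroyed the socket. Whatever sourceStream has already
        // pushed into the request is consumed and cannot be replayed.
        callback(err);
    });

    sourceStream.pipe(req);
}
```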
That can explain the retry error, but not the first one, right?
Yeah, I can't explain the first one.
Does this really happen, though, given that production deployments only have one sproxyd endpoint configured and thus cannot fail over?
From the situation today, I realized that we have only one sproxyd endpoint, so my speculation goes out the window. It remains a potential problem if there are multiple nodes in the bootstrap list.
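For reference, the problem is only reachable with more than one node in the bootstrap list, along these lines; the option names and hosts below are an assumption for illustration, not copied from a real deployment:

```js
// Hypothetical multi-node bootstrap list; hosts and option names are
// illustrative only. With a single entry, there is nothing to fail over to.
const SproxydClient = require('sproxydclient');

const client = new SproxydClient({
    bootstrap: ['10.0.0.1:8181', '10.0.0.2:8181', '10.0.0.3:8181'],
    path: '/proxy/arc/',
});
```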
Identified by @alexandre-merle: streams that have been partially consumed and are then failed over to another node will not work, because the client can only pipe the remaining partial data to the new request.
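A minimal sketch of the failure mode, assuming a naive retry loop over the bootstrap list (illustrative only; the function, path, and retry logic are made up, not the client's actual code): once the source stream has been piped into the first request, a retry against another node can only forward the bytes that have not yet been consumed, so the object would be stored truncated.

```js
const http = require('http');

function putWithFailover(sourceStream, hosts, key, callback) {
    const tryHost = index => {
        const req = http.request({
            method: 'PUT',
            host: hosts[index],
            path: `/proxy/arc/${key}`, // hypothetical sproxyd path
        }, res => callback(null, res));

        req.on('error', err => {
            if (index + 1 < hosts.length) {
                // Problem described in this issue: sourceStream was already
                // partially consumed by the failed request, so this retry
                // only pipes the remaining data to the next node.
                return tryHost(index + 1);
            }
            return callback(err);
        });

        sourceStream.pipe(req);
    };
    tryHost(0);
}
```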