[CI] 400 Bad request creating snapshot in GoogleCloudStorageThirdPartyTests #49429
Pinging @elastic/es-distributed (:Distributed/Snapshot/Restore)
This is running against a GCS mock as well (not a real third-party test run). Likely this has a similar origin to #49400 and all the associated test failures.
This commit ensures that even for requests that are known to have an empty body we at least attempt to read one byte from the request body input stream. This works around the behavior in `sun.net.httpserver.ServerImpl.Dispatcher#handleEvent` that closes a TCP/HTTP connection whose input stream does not have the `eof` flag set (see `sun.net.httpserver.LeftOverInputStream#isEOF`). As far as I can tell, the only way to set this flag is to do a read when there are no more bytes buffered. This fixes the numerous connection-closing issues because `ServerImpl` stops closing connections that it thinks weren't fully drained. Also, I removed a now-redundant drain loop in the Azure handler and removed the connection closing in the error handler's drain action (this shouldn't have an effect, but it makes things more predictable and easier to reason about IMO). I would suggest merging this and closing the related issues after verifying that this fixes things on CI. The way to reproduce the issues locally is to make the retry timings more aggressive, e.g. in the Azure tests, moving them to single-digit values. This makes the retries happen quickly enough that they run into the asynchronous closing of allegedly non-eof connections by `ServerImpl` and produces the exact kinds of failures we're seeing currently. Relates #49401, #49429
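As an illustration of the workaround described above, here is a minimal, hypothetical handler sketch (not the actual Elasticsearch test fixture code) showing how reading the request body to EOF, even for requests known to carry no body, keeps `sun.net.httpserver` from asynchronously closing the connection:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch, not the actual fixture code.
public class DrainingHandler implements HttpHandler {
    @Override
    public void handle(HttpExchange exchange) throws IOException {
        try (InputStream body = exchange.getRequestBody()) {
            // Reading until end-of-stream (at least one read, even on an empty body)
            // sets the eof flag checked by LeftOverInputStream#isEOF, so the
            // dispatcher does not later close the connection as "not fully drained".
            final byte[] buffer = new byte[8192];
            while (body.read(buffer) >= 0) {
                // discard the bytes; we only care about reaching EOF
            }
        }
        exchange.sendResponseHeaders(200, -1); // -1 means no response body
        exchange.close();
    }
}
```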
Thanks to @original-brownbear, #49518 should fix this issue. I'm closing it for now; we'll reopen or reference this if the issue happens again.
New build failure: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-unix-compatibility/os=debian-10&&immutable/443/console
Reproduce line:
Two things: 1. Since we can't reproduce these 400s locally, we should throw a descriptive assertion error and figure out why we're not reading a multi-part request, instead of returning a `400` and failing the tests that way. 2. We were not logging the exception on a cleanup delete failure that coincides with the `400` issue in tests. Relates elastic#49429
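A hedged sketch of what those two changes could look like in the test-only mock (the class and method names here are illustrative, not the real mock's):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Illustrative names only; this is not the real GCS blobstore mock code.
class GcsMockLoggingSketch {
    private static final Logger logger = LogManager.getLogger(GcsMockLoggingSketch.class);

    // 1. Fail loudly with a descriptive assertion error instead of answering 400,
    //    so CI logs show what the unparseable multipart request looked like.
    static byte[] requireMultipartContent(byte[] parsedContent, String requestSummary) {
        if (parsedContent == null) {
            throw new AssertionError("could not parse multipart upload: " + requestSummary);
        }
        return parsedContent;
    }

    // 2. Log the exception when a cleanup delete fails instead of swallowing it,
    //    so the failure can be correlated with the 400 seen in the test run.
    static void onCleanupDeleteFailure(String blobName, Exception e) {
        logger.warn("failed to delete blob [" + blobName + "] during cleanup", e);
    }
}
```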
@original-brownbear is this still an issue?
@ywelsch this particular exception is impossible now, so let's close this one. There are still some test failures in these tests, but I'll open a new issue or PR for those.
Never mind ... my bad, this is still a possible issue; sorry for the noise. Leaving this open.
We were incorrectly handling blobs that start with `\r\n`, which randomly broke tests whenever a blob happened to begin with those bytes. Relates elastic#49429
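To illustrate the pitfall, here is a hypothetical sketch (not the mock's actual parser) of extracting a multipart part's payload by locating the header terminator and the closing boundary, rather than trimming leading `\r\n` bytes, which would corrupt a blob whose content legitimately starts with `\r\n`:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Hypothetical helper, not the actual GCS mock parser.
final class MultipartSketch {

    // The payload runs from just after the blank line that ends the part headers
    // up to (but not including) the "\r\n--boundary" that closes the part.
    // Nothing is stripped from the payload itself, so a blob that begins with
    // "\r\n" is preserved byte for byte.
    static byte[] extractPartContent(byte[] body, String boundary) {
        final byte[] headerEnd = "\r\n\r\n".getBytes(StandardCharsets.UTF_8);
        final byte[] closing = ("\r\n--" + boundary).getBytes(StandardCharsets.UTF_8);
        final int headersEndAt = indexOf(body, headerEnd, 0);
        if (headersEndAt < 0) {
            throw new AssertionError("malformed part: missing header terminator");
        }
        final int contentStart = headersEndAt + headerEnd.length; // may point at '\r'
        final int contentEnd = indexOf(body, closing, contentStart);
        if (contentEnd < 0) {
            throw new AssertionError("malformed part: missing closing boundary");
        }
        return Arrays.copyOfRange(body, contentStart, contentEnd);
    }

    private static int indexOf(byte[] haystack, byte[] needle, int from) {
        outer:
        for (int i = from; i <= haystack.length - needle.length; i++) {
            for (int j = 0; j < needle.length; j++) {
                if (haystack[i + j] != needle[j]) {
                    continue outer;
                }
            }
            return i;
        }
        return -1;
    }
}
```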
Closing this after all.
org.elasticsearch.repositories.gcs.GoogleCloudStorageThirdPartyTests.testCreateSnapshot failed with a "400 bad request".
Stacktrace:
Repro line (did not reproduce for me):
Not muting for now given I can't reproduce.