http2: tests for CVE-2019-9512/9517 #612

Open
5 of 6 tasks
RomanBelozerov opened this issue Apr 16, 2024 · 5 comments · Fixed by #616 · May be fixed by #615 or #618

@RomanBelozerov
Contributor

RomanBelozerov commented Apr 16, 2024

See discussion

  • CVE-2019-9511 “Data Dribble” - you may try using DeproxyClientH2 for this attack, but it has automatic window control, so you need to modify the handle_read method.
  • CVE-2019-9512 “Ping Flood” and CVE-2019-9515 “Settings Flood” - just send many frames using the send_bytes method of DeproxyClientH2 (see the first sketch after this list). We need a new directive in frang.
  • CVE-2019-9513 “Resource Loop” - just create many streams and change their priority using the send_bytes method of DeproxyClientH2. I think we need two tests: the first without end_stream in the requests, the second with a large response body.
  • CVE-2019-9514 “Reset Flood” - just open many streams and send an invalid HEADERS frame via send_bytes from DeproxyClientH2.
  • CVE-2019-9516 “0-Length Headers Leak” - please check the discussion. We need to check for "0-Length Headers Leak" and

Send a few fairly large headers (Huffman-encoded), then cause RST_STREAM (by the client or by an error), and repeat this in other streams. HPACK is per connection, and if we allocate memory for HPACK processing, it may not be freed until the keep-alive connection is closed (note that RST_STREAM does not cause the connection to close).

  • CVE-2019-9517 “Internal Data Buffering” - you may create a connection using the socket, ssl and h2 libraries and check how Tempesta uses memory and CPU (you must not read data from the socket); see the second sketch after this list.
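
As a rough illustration of the “Ping Flood” item (and, with SETTINGS frames, the “Settings Flood”), here is a minimal sketch. It assumes an already-connected DeproxyClientH2 instance `client` whose `send_bytes` accepts raw bytes; the exact signature and the frame counts are assumptions, not the framework's documented API.

```python
# Sketch of a PING/SETTINGS flood: serialize raw frames with hyperframe
# and push them through the deproxy client without reading the ACKs.
# `client` is assumed to be an already-connected DeproxyClientH2.
from hyperframe.frame import PingFrame, SettingsFrame


def ping_flood(client, count=10000):
    # PING frames live on stream 0 and carry 8 bytes of opaque data.
    payload = b"".join(
        PingFrame(0, opaque_data=b"\x00" * 8).serialize() for _ in range(count)
    )
    client.send_bytes(payload)


def settings_flood(client, count=10000):
    # The "Settings Flood" variant is the same idea with empty SETTINGS frames.
    payload = b"".join(SettingsFrame(0).serialize() for _ in range(count))
    client.send_bytes(payload)
```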
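
For the “Internal Data Buffering” item, a minimal sketch using the socket, ssl and h2 libraries as suggested above; the host, port, path and sleep duration are placeholders for the test environment, not values taken from the test suite.

```python
# Sketch of CVE-2019-9517 "Internal Data Buffering": request a large
# response over HTTP/2 and then stop reading from the socket, so the
# proxy has to buffer response data it cannot deliver, while the test
# observes Tempesta's memory and CPU usage.
import socket
import ssl
import time

import h2.config
import h2.connection


def slow_read(host="127.0.0.1", port=443, path="/large"):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    ctx.set_alpn_protocols(["h2"])

    tls = ctx.wrap_socket(socket.create_connection((host, port)),
                          server_hostname=host)

    conn = h2.connection.H2Connection(
        config=h2.config.H2Configuration(client_side=True))
    conn.initiate_connection()
    stream_id = conn.get_next_available_stream_id()
    conn.send_headers(
        stream_id,
        [(":method", "GET"), (":path", path),
         (":scheme", "https"), (":authority", host)],
        end_stream=True,
    )
    tls.sendall(conn.data_to_send())

    # Deliberately never read from the socket: the backend keeps
    # producing data that piles up inside the proxy.
    time.sleep(300)
```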

It depends on 1196

for tempesta-tech/tempesta#1346

@kingluo
Contributor

kingluo commented Apr 19, 2024

CVE-2019-9514 “Reset Flood” - just open many streams and send RST frames via send_bytes from DeproxyClientH2

This CVE is not about the client sending RST_STREAM frames, but about making the server send a lot of RST_STREAM frames while the client never reads them.

Moreover, IMO, CVE-2019-9512, CVE-2019-9515 and CVE-2019-9514 can be combined into one task to solve and test, just like the Go bugfix does.

@kingluo
Contributor

kingluo commented Apr 22, 2024

About the reset flood, as the CVE description says:

The attacker opens a number of streams and sends an invalid request over each stream that should solicit a stream of RST_STREAM frames from the peer. Depending on how the peer queues the RST_STREAM frames, this can consume excess memory, CPU, or both.

However, if we send an invalid request (a HEADERS frame with the END_STREAM flag, followed by a DATA frame), the request is still forwarded to the backend, and when the response comes in, Tempesta closes the whole connection:

https://github.com/tempesta-tech/tempesta/blob/d66569260281d0a03fbf60b950908f4f649b0ee9/fw/sock_clnt.c#L235

https://github.com/tempesta-tech/tempesta/blob/d66569260281d0a03fbf60b950908f4f649b0ee9/fw/sock_clnt.c#L198C1-L209C3

If we send an invalid header, Tempesta will also close the connection due to a parsing error.

So it seems difficult to exploit the reset flood: so far I cannot find a condition that triggers RST_STREAM without closing the connection.

UPDATE:

Thanks to @EvgeniiMekhanik for the advice.

HEADERS + PRIORITY causes RST_STREAM.
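
Below is a hedged sketch of one way “HEADERS + PRIORITY” could be put on the wire: a HEADERS frame whose priority fields make the stream depend on itself, which RFC 7540 §5.3.1 requires to be treated as a stream error, i.e. answered with RST_STREAM. Whether this is exactly the condition meant above is an assumption, as are the `client` object and its `send_bytes` signature.

```python
# Sketch of a possible reset flood: HEADERS frames with the PRIORITY
# flag set and a self-dependency in the priority fields, sent on many
# client-initiated streams. `client` is an assumed DeproxyClientH2
# using the send_bytes method mentioned in the issue text.
from hpack import Encoder
from hyperframe.frame import HeadersFrame


def reset_flood(client, streams=1000, authority="example.test"):
    encoder = Encoder()
    for i in range(streams):
        stream_id = 1 + 2 * i  # client-initiated streams use odd IDs
        frame = HeadersFrame(stream_id)
        frame.data = encoder.encode([
            (":method", "GET"), (":path", "/"),
            (":scheme", "https"), (":authority", authority),
        ])
        # A stream that depends on itself is a protocol error that the
        # peer answers with RST_STREAM for that stream only.
        frame.depends_on = stream_id
        frame.stream_weight = 255
        frame.flags.add("PRIORITY")
        frame.flags.add("END_HEADERS")
        frame.flags.add("END_STREAM")
        client.send_bytes(frame.serialize())
```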

@kingluo
Contributor

kingluo commented Apr 23, 2024

@krizhanovsky @const-t @EvgeniiMekhanik
CVE-2019-9516 “0-Length Headers Leak”

  1. For a 0-length header name, Tempesta returns 400 and GOAWAY and closes the connection, so nothing further happens after that.
  2. If we send a request followed by RST_STREAM, Tempesta will close the connection when the response arrives from the backend, so there is no further effect either.
  3. If we open new streams one by one and send a request with multiple big random headers, e.g. pairs of a 100-random-byte name and a 100-random-byte value, and wait for each response to finish, Tempesta's memory grows constantly, and even after the connection is closed some memory is leaked. In theory, neither the growth nor the leak should happen, because in this test only one stream is active at a time, a new stream is started only after the old one has finished, and the HPACK table should be limited by a maximum value as the RFC says. See this test and the sketch after this list.
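
A standalone sketch of the loop described in item 3, using the socket, ssl and h2 libraries directly rather than the deproxy classes. This is not the referenced test; the host, port, request count and header sizes are assumptions.

```python
# Sketch of the header-leak loop: one stream at a time, each request
# carrying many large random headers, waiting for the stream to finish
# before opening the next one, so HPACK memory growth on the proxy side
# cannot be explained by concurrent streams.
import os
import socket
import ssl

import h2.config
import h2.connection
import h2.events


def header_leak_loop(host="127.0.0.1", port=443, requests=10000, pairs=32):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    ctx.set_alpn_protocols(["h2"])
    tls = ctx.wrap_socket(socket.create_connection((host, port)),
                          server_hostname=host)

    conn = h2.connection.H2Connection(
        config=h2.config.H2Configuration(client_side=True))
    conn.initiate_connection()
    tls.sendall(conn.data_to_send())

    for _ in range(requests):
        stream_id = conn.get_next_available_stream_id()
        headers = [(":method", "GET"), (":path", "/"),
                   (":scheme", "https"), (":authority", host)]
        # ~100-byte random names and values that never repeat, so they
        # cannot be reused cheaply from the HPACK dynamic table.
        headers += [(os.urandom(50).hex(), os.urandom(50).hex())
                    for _ in range(pairs)]
        conn.send_headers(stream_id, headers, end_stream=True)
        tls.sendall(conn.data_to_send())

        finished = False
        while not finished:
            # Wait for this stream to end before starting the next one.
            for event in conn.receive_data(tls.recv(65536)):
                if (isinstance(event, h2.events.StreamEnded)
                        and event.stream_id == stream_id):
                    finished = True
            tls.sendall(conn.data_to_send())
```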

@RomanBelozerov
Contributor Author

Tempesta returns RST_STREAM if a request contains an invalid x-forwarded-for header, for example ("x-forwarded-for", "1.1.1.1.1.1").
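
For reference, a header list matching that example; the authority value is a placeholder:

```python
# A request whose x-forwarded-for value is not a valid address; per the
# comment above, Tempesta answers it with RST_STREAM instead of closing
# the connection, which makes it usable for the reset-flood test.
headers = [
    (":method", "GET"),
    (":path", "/"),
    (":scheme", "https"),
    (":authority", "example.test"),      # placeholder authority
    ("x-forwarded-for", "1.1.1.1.1.1"),  # malformed address from the comment
]
```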

@kingluo
Contributor

kingluo commented May 24, 2024

@RomanBelozerov @krizhanovsky @const-t

StaticDeproxyServer and DeproxyClient never release their request and response buffer lists, which can cause OOM when running a request loop. So even if you use sockets directly to build the test logic, as long as you use StaticDeproxyServer as the backend, you will still run into client-side OOM.

Locate the reason using memray:

sudo memray run --live ./run_tests.py -nE -v1 t_stress/test_header_leak.py

Switch to the polling thread and check the changes in the "own memory" column. You can see the source of the leak:

self._response_buffer.append(response)


For the control-frame-attack and slow-read-attack tests, the solution (I'd rather say it's a stopgap measure) is to set max_queued_control_frames or http_body_len, respectively, small enough to avoid client OOM before Tempesta OOM.

However, the header leak test did not go well: we have to differentiate between client OOM and Tempesta OOM in a localhost test environment. That's why we saw test_header_leak.py trigger OOM itself as well.

I think we should clear the buffer lists at some appropriate point in time, e.g. on the client side when a request (or batch of requests) completes and gets the corresponding response, and the same goes for the backend. This is reasonable, and I think it is a legacy bug in the code base.
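
A sketch of the kind of cleanup proposed here. `_response_buffer` is the list shown in the memray output above; the mixin and the clear_buffers() name are purely illustrative, not the actual deproxy API.

```python
# Illustrative only: let tests drop accumulated response buffers once
# they have been checked, instead of keeping every message for the
# lifetime of the client object; the same approach would apply to the
# request list kept by StaticDeproxyServer.
class ResponseBufferCleanupMixin:
    def clear_buffers(self):
        # Called by a test after it has asserted on the responses of a
        # completed request (or batch of requests).
        self._response_buffer.clear()
```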
