I think it would be a good idea to implement a check similar to the one in Python's WebSocket library, since this is a very easy attack to mount. Mainly: check that the decompressed size does not exceed some limit when executing HTTP.decode.
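For reference, here is a minimal sketch of what such a guard could look like, assuming CodecZlib is available (HTTP.jl already uses it for decompression). The function name bounded_gunzip, the chunk size, and the default limit are all hypothetical, not existing HTTP.jl API:

using CodecZlib

# Hypothetical guard: decompress at most `limit` bytes, then abort.
# Neither the name nor the default limit is part of HTTP.jl.
function bounded_gunzip(data::Vector{UInt8}; limit::Integer = 100 * 1024 * 1024)
    stream = GzipDecompressorStream(IOBuffer(data))
    out = IOBuffer()
    buf = Vector{UInt8}(undef, 64 * 1024)   # fixed-size read buffer
    total = 0
    while !eof(stream)
        n = readbytes!(stream, buf)
        total += n
        # Fail fast once the output grows past the limit, instead of
        # buffering the entire (potentially huge) payload in memory.
        total > limit && error("decompressed size exceeds limit of $limit bytes")
        write(out, view(buf, 1:n))
    end
    return take!(out)
end

Streaming the output in fixed-size chunks means the check fires after at most limit plus one chunk has been materialized, rather than after the whole payload has been inflated.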
A simple example.
First, generate a gzip bomb (20 GiB of zeros compresses down to roughly 20 MB). I lifted the code from this repo:
time dd if=/dev/zero bs=1M count=$((20*1024)) | gzip > ./cake.gzip
When I execute the following, I observe a jump in resource usage that eventually crashes the Julia process.
using HTTP

data = read("cake.gzip")

# Serve the compressed payload back to any request
# (HTTP.serve! listens on 127.0.0.1:8081 by default).
server = HTTP.serve!() do request::HTTP.Request
    @show request
    @show request.method
    @show HTTP.header(request, "Content-Type")
    @show request.body
    try
        return HTTP.Response(data)
    catch e
        return HTTP.Response(400, "Error: $e")
    end
end

# Fetch without automatic decompression, then decode manually.
r = HTTP.get("http://127.0.0.1:8081/"; decompress=false)
HTTP.decode(r, "gzip")
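With a guard like the sketch above, the last line could be replaced by something along these lines (bounded_gunzip is the hypothetical helper from earlier, and the 10 MB limit is arbitrary):

body = bounded_gunzip(r.body; limit = 10 * 1024 * 1024)  # errors out instead of exhausting memory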
Happy to provide further details. I can also try to implement a solution if that's going to be easier :)