
proxy: Unbuffered request optimization #1314

Merged 2 commits into master on Jan 11, 2017

Conversation

@lhecker commented Dec 29, 2016

If only one upstream is defined we don't need to buffer the body.
Instead we directly stream the body to the upstream host, which reduces memory usage as well as latency.
Furthermore this enables different kinds of HTTP streaming applications like gRPC for instance.

This probably fixes #1310.

Again NOTE: This only works if your proxy directive specifies only a single upstream host.
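
For readers skimming the diff, here is a minimal, hypothetical sketch (not the PR's actual code; all names are illustrative) of the difference between the two strategies: buffering the whole body so it could be replayed, versus handing the client's body straight to the upstream request.

package proxysketch

import (
	"bytes"
	"io"
	"net/http"
)

// forwardBody contrasts the two strategies discussed in this PR.
func forwardBody(upstreamURL string, r *http.Request, singleUpstream bool) (*http.Response, error) {
	out, err := http.NewRequest(r.Method, upstreamURL, nil)
	if err != nil {
		return nil, err
	}
	if singleUpstream {
		// Unbuffered: stream the client's body straight to the upstream.
		// Memory use stays flat and the upstream receives data immediately,
		// which also makes streaming protocols like gRPC possible.
		out.Body = r.Body
		out.ContentLength = r.ContentLength
	} else {
		// Buffered: read the whole body into memory first so it could be
		// replayed against another upstream if the first attempt fails.
		data, err := io.ReadAll(r.Body)
		if err != nil {
			return nil, err
		}
		out.Body = io.NopCloser(bytes.NewReader(data))
		out.ContentLength = int64(len(data))
	}
	return http.DefaultTransport.RoundTrip(out)
}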

@mholt (Member) commented Dec 29, 2016

Cool, thanks for giving this a go @lhecker!

For the test, how about using testing.Short() as described at the top of the overview here: https://golang.org/pkg/testing/

Or maybe the test could be sized down just a notch?

@lhecker (Author) commented Dec 29, 2016

@mholt That would still not lead to a reproducible, reliable test (since only OOM errors would prove that it isn't working). I just found debug.SetGCPercent() though, with which we could disable the GC. If we then proxy a relatively large request (e.g. 100 MB) and check whether the memory usage after running it is still below the request size, we can confirm that the request was streamed in chunks with a small buffer (currently 32 KB) instead of being fully loaded into memory first.

I'll add such a test, okay? 🙂

@mholt (Member) commented Dec 29, 2016 via email

@lhecker (Author) commented Dec 29, 2016

@mholt I found an even easier solution: simply calculating the difference between runtime.MemStats.TotalAlloc before and after the request. I added it to this PR just now.

I wrote the new test case based on TestReverseProxyRetry using a request body of 100MB. If I disable the optimizations in this PR my test reliably fails with:

proxy allocated too much memory: 268847840 bytes

and if I enable the optimizations it reliably succeeds with 582320 allocated bytes.
(The test fails if more than 100 MB are allocated, because that is very likely a sign that the request body was buffered. And even if it wasn't, it's still a sign that the proxy module allocates too much memory.)
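
A minimal sketch of that measurement idea (not the actual test in this PR; proxyLargeRequest is a hypothetical placeholder for pushing a 100 MB request through the proxy under test):

package proxysketch

import (
	"runtime"
	"testing"
)

// proxyLargeRequest is a hypothetical placeholder for sending a request with
// a body of the given size through the proxy under test.
func proxyLargeRequest(t *testing.T, size int) { /* ... */ }

func TestProxyDoesNotBufferBody(t *testing.T) {
	const bodySize = 100 << 20 // 100 MB

	var before, after runtime.MemStats
	runtime.GC()
	runtime.ReadMemStats(&before)

	proxyLargeRequest(t, bodySize)

	// TotalAlloc is cumulative, so the difference is roughly what the
	// request handling allocated.
	runtime.ReadMemStats(&after)
	if allocated := after.TotalAlloc - before.TotalAlloc; allocated > bodySize {
		t.Errorf("proxy allocated too much memory: %d bytes", allocated)
	}
}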


For some strange reason the test runs a lot faster if I run it with go test -cpu 1 (2.2s vs 0.2s). Any idea why that might be the case?

Edit: This only happens on macOS. Linux is fine. If someone has any idea why that's the case please tell me. 🙂

@mholt (Member) commented Jan 1, 2017

That's interesting -- I don't know why that is! I just tried running the test on my Macbook Pro and I see similar results.

@mholt (Member) left a comment

I have to say, this is a brilliant test you've written. It's simple and seems to be technically effective.

I was just thinking about this more. And I asked the folks on Gophers Slack how other LBs deal with this. One commented that they were not familiar with the idea of load balancers retrying requests automatically. In other words, this is kind of a novel feature in Caddy, probably because of the buffering problem and the fact that retrying POSTs can be sloppy, as POSTs aren't idempotent. (I have mixed feelings on this -- since ideally an upstream shouldn't apply changes when a POST fails!)

Caddy does NOT retry another backend by default, which is the very smart, safe choice. To enable this retry (in other words, to iterate this big for loop a second time), multiple upstreams need to be provided and try_duration has to be set to a value > 0. So that's good.

Given that try_duration is 0 and no retries are performed by default, maybe the condition on whether we buffer the whole body should be more specific: multiple upstreams and try_duration > 0. (See the keepRetrying func.)

What do you think? If you agree, feel free to update the PR and I think I'm almost ready to merge it.
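
A hedged sketch of the rule being proposed here (identifiers are assumptions, not the PR's actual code): buffer only when more than one upstream is configured and try_duration is greater than zero, i.e. only when a retry can actually happen.

package proxysketch

import (
	"bytes"
	"io"
	"time"
)

// maybeBuffer illustrates the condition discussed above. The caller would use
// the returned body for the outgoing request; only the buffered variant can be
// rewound and replayed against another upstream.
func maybeBuffer(body io.ReadCloser, numUpstreams int, tryDuration time.Duration) (io.ReadCloser, bool, error) {
	if numUpstreams > 1 && tryDuration > 0 {
		data, err := io.ReadAll(body)
		if err != nil {
			return nil, false, err
		}
		body.Close()
		return io.NopCloser(bytes.NewReader(data)), true, nil
	}
	// Single upstream, or retries disabled: pass the body through unbuffered.
	return body, false, nil
}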

var _ rewindableReader = &unbufferedBody{}

func (b *unbufferedBody) rewind() error {
panic("cannot rewind unbuffered body")
mholt (Member):

Y u no return error? 😓

lhecker (Author):

Ah yes... When I wrote this method it wasn't hidden behind a common interface. Calling rewind() on it was thus basically a precondition failure, akin to how 501 Not Implemented is a server error code instead of being a client error like 404. I still kind of think that it's a precondition failure, but since this struct is only ever used via the rewindableReader interface, it's surely not the right choice anymore. I'll change it.

@mholt (Member) commented Jan 1, 2017

More thinking out loud here...

The issue that motivated the buffering of proxy bodies is here: #1229 - and the subsequent PR merged the change.

From what I can tell, other load balancers like nginx and haproxy will try other upstreams only if they couldn't establish a connection. This makes sense, since it means the request body wouldn't be drained: no connection was ever established to drain it.

Currently, Caddy may try the next upstream for any error returned by the reverseproxy's ServeHTTP method, which returns its first possible error at the RoundTrip() call. It's unclear to me whether RoundTrip() returns an error only on connection problems or potentially after establishing the connection. The net/http docs only talk about whether a "response" was obtained, saying:

RoundTrip must return err == nil if it obtained a response, regardless of the response's HTTP status code. A non-nil err should be reserved for failure to obtain a response.

I'm not sure whether this is interpreted as "complete response body" or "part of a response"... well anyway, I think the way we do it now is alright, but I'm not sure it's ideal.

What I'm getting at is:

The server operator should be able to control whether the proxy retries with another upstream. Right now, it is off by default and, if enabled, only happens if RoundTrip() returns an error (presumably a connection error?). This is good.

It would be really neat if a "proxy-mode error level" could be defined. Right now, that definition is something like this: "failure to obtain a response" (according to the Go docs), which I guess includes "failure to establish a connection". This is a good default and is what other LBs do. But what if the operator could define "a timeout of 10s" or "a 500 response" as proxy-mode error levels? That way, if an upstream server was accepting connections but unable to handle them, the operator could tell Caddy to treat a 500 like a connection error: retry the request on a different upstream. (The operator would have to understand this has the potential to retry non-idempotent POST requests and buffer request bodies, but... that's how it goes.)

So -- what does this have to do with your PR? It touches on this in a very significant way. Ideally, the only time we should have to buffer the body is if it started being drained, then was stopped, then we had to retry. All other errors -- connection errors, whatever -- shouldn't need it, even if we have multiple upstreams.

What do you think?

@lhecker (Author) commented Jan 1, 2017

I have mixed feelings on this -- since ideally an upstream shouldn't apply changes when a POST fails!

The issues begin when it's not the request to the upstream that fails but its response, and you apply a POST request multiple times because of this. E.g. instead of ordering one expensive product you order a hundred and won't notice, because the response from the backend never reaches Caddy.

To enable this retry [...], multiple upstreams need to be provided and try_duration has to be set to a value > 0.

Oh from the docs [1] I thought that a try_duration of 0 just makes the proxy try all upstreams in a loop forever? I guess it should be updated to say that "0s" disables the retrying instead? If that's correct I'll change the code to check for try_duration == 0.

It's unclear to me whether RoundTrip() returns an error only on connection problems or potentially after establishing the connection.

It returns errors on connection failures, timeouts (as configured on the transport), and on protocol errors (TLS & HTTP). This does not include the response body though, because it isn't parsed until you actually read from res.Body.
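
A small illustration of that split using plain net/http (not Caddy code): RoundTrip reports connection, TLS, and protocol failures, while problems with the body only surface once it is read.

package proxysketch

import (
	"fmt"
	"io"
	"net/http"
)

func fetch(url string) error {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return err
	}

	resp, err := http.DefaultTransport.RoundTrip(req)
	if err != nil {
		// Connection refused, timeout, TLS failure, malformed status line, ...
		return fmt.Errorf("round trip failed: %w", err)
	}
	defer resp.Body.Close()

	// A truncated or badly encoded body fails here, even though RoundTrip
	// already returned err == nil.
	if _, err := io.Copy(io.Discard, resp.Body); err != nil {
		return fmt.Errorf("reading body failed: %w", err)
	}
	return nil
}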

It would be really neat if a "proxy-mode error level" could be defined. [...]

Yeah. Here are the nginx docs for this. As you can see it provides a flexibility similar to what you wrote.

Ideally, the only time we should have to buffer the body is if it started being drained, then was stopped, then we had to retry.

I'm not sure if I understand. At the point where you try to "retry" without having already buffered the body, you can't retry anymore, because you failed to buffer it in the first place. So either you buffer right from the start and are able to retry upstream requests, or you don't buffer and can't retry. But I believe there is no other way apart from deciding this before you even try to connect to the first upstream.

Even if you say "Well, but what if we first see whether we can connect to the first upstream?", we still have to buffer it. The reason is simple: calls to res.Body.Read() can still fail, because the HTTP body might be in a malformed encoding, or because the TCP connection disconnects while reading the body. That's why you have to buffer it anyway.

I don't think we should spend too much time optimizing the buffering logic for now, apart from the two obvious cases (only one upstream, or try_duration is 0). Most of the time you'll want to use the retry logic only for GET requests anyway (which don't have a large body!), due to the major issue with e.g. POST requests I mentioned above.

[1]:

By default, this retry is disabled ("0s"). Clients may hang for this long while the proxy tries to find an available upstream host.

@mholt (Member) commented Jan 1, 2017

@lhecker

The issues begin when it's not the request to the upstream that fails but its response, and you apply a POST request multiple times because of this. E.g. instead of ordering one expensive product you order a hundred and won't notice, because the response from the backend never reaches Caddy.

IMO a "request" should fail or be rejected if any part of the RoundTrip fails, but apparently most backend services don't think about that. (I can see why, it's just not ideal.) Like rolling back a DB transaction if there's an error.

Oh from the docs [1] I thought that a try_duration of 0 just makes the proxy try all upstreams in a loop forever? I guess it should be updated to say that "0s" disables the retrying instead?

Huh? It doesn't say anything about retrying forever. This is what it says: "try_duration is how long to try selecting available upstream hosts for each request. By default, this retry is disabled (0s)."

If that's correct I'll change the code to check for try_duration == 0.

That'd be a good change in the meantime, until we figure out exactly how we want to configure retries. I'm not sure yet if it will affect your PR any more than this, so at least make the change and I'll take another look.

It returns errors on connection failures, timeouts (as configured on the transport) and on protocol errors (TLS & HTTP).

By HTTP protocol errors, do you mean if the backend returns a 4xx or 5xx response, an error value is returned? My understanding is that it only happens on connection failures (and maybe timeouts).

Yeah. Here are the nginx docs for this. As you can see it provides a flexibility similar to what you wrote.

Oh, I didn't realize nginx did this. Thanks. Those docs say that nginx can't retry if the body has been read, so I guess they aren't buffering it. I still think buffering isn't all bad in some use cases, as long as the user configures it. So I like where this PR is going.

I'm not sure if I understand. At the point where you try to "retry" without having already buffered the body, you can't retry anymore, because you failed to buffer it in the first place. So either you buffer right from the start and are able to retry upstream requests, or you don't buffer and can't retry.

Right, that's what I mean -- buffer from the start. But we should know ahead of time whether we need to buffer: 1) retries are enabled and 2) there are multiple backends -- and, if we make this configurable, 3) if retries are allowed after a connection has been established. Ideally, only meeting all 3 of those conditions should induce buffering.

I don't think we should spend too much time optimizing the buffering logic for now, apart from the two obvious cases (only one upstream, or try_duration is 0). Most of the time you'll want to use the retry logic only for GET requests anyway (which don't have a large body!), due to the major issue with e.g. POST requests I mentioned above.

Fair enough -- one thing at a time. :)

@lhecker (Author) commented Jan 1, 2017

but apparently most backend services don't think about that.

No, most really don't, especially as soon as you start layering abstractions on top, where e.g. your DB transaction does not extend until you've finished replying to the client.
In fact it's actually quite hard to do, since you need to e.g. commit first before you know whether the transaction succeeded. And then you have to reply to the client, which can still fail if the connection breaks down in the meantime, but you can't roll back anymore because you already committed and... aaaaaaah... You just realized you'd need to implement something like the two-phase commit protocol over HTTP, and you start wondering whether growing potatoes in your basement isn't the better option after all this trouble. 😂

Anyways:

Huh? It doesn't say anything about retrying forever.

Ah I misread. I thought it said that the client might hang for a long time if retry is disabled. (Take another look at the docs and you'll notice how close my initial understanding and the correct version are. 😅)

That'd be a good change in the meantime, until we figure out exactly how we want to configure retries.

Done & Pushed.

By HTTP protocol errors, do you mean if the backend returns a 4xx or 5xx response, an error value is returned?

No, actual protocol errors, like e.g. replying with thisISnotHTTP/1.1 200 OK in the first line. That would make RoundTrip return an error. Status codes, headers (e.g. cookies) and the body, on the other hand, aren't interpreted. It's basically just one single level above a raw TCP connection.

[...] only meeting all 3 of those conditions should induce buffering.

Yeah. To make it easier to extend, I already moved the check to a free-standing assignment. If it gets too complicated we can surely move it to a separate method in the future.

@oliverpool commented Jan 2, 2017

I want to have a tus uploading server behind caddy (via a reverse proxy) and this PR might fix my issue:

Scenario:
A large file upload is interrupted (due to bad network for instance).
The file upload must be retried.

Expected behavior (when the request is directly sent to tus listening port)
The tus server manages to store the data that has been sent so far.
On client retry, only the missing part of the file is sent.

Observed behavior (when caddy acts as a reverse proxy in front of tus):
The tus server does not see any request and the client has to send the whole file again


If you think that this might fix my issue, I can try to compile this branch on my system and let you know if it works!

In this case, it could be used as a test case for this PR:
send an incomplete request and see if it is passed to one of the backends.

@oliverpool commented:

(The fact that Caddy does not forward incomplete requests could also be seen as unrelated to this issue.)

@oliverpool commented:

I just tried this branch with the tus uploader and resuming works perfectly!

I'm looking forward to this PR being merged 😃

@lhecker (Author) commented Jan 4, 2017

Glad to hear it's working for you @oliverpool! I assume incomplete requests work as well then? Because I'm not entirely sure if that's the case…

@oliverpool commented:

I assume incomplete requests work as well then? Because I'm not entirely sure if that's the case…

I think so.
Without your patch it didn't work (the tus server didn't receive any request, probably because Caddy filters out incomplete requests).
With your patch it works perfectly (the tus server receives the incomplete request and is capable of handling it).

@lhecker (Author) commented Jan 6, 2017

@mholt Is everything fine with this PR? And if so: When do you plan on merging it?

var _ rewindableReader = &unbufferedBody{}

func (b *unbufferedBody) rewind() error {
return errors.New("cannot rewind unbuffered body")
tw4452852 (Collaborator):

Could this just return nil instead of an error, so we don't need the check below?

mholt (Member):

Oh, you're suggesting to just make this a no-op?

If it's a no-op, maybe the bigger design question is, Why does unbufferedBody implement rewindableReader if it is not a rewindable reader?

tw4452852 (Collaborator):

Hmm, it makes sense.

return http.StatusInternalServerError, errors.New("unable to rewind downstream request body")
// NOTE: We check for requiresBuffering, because
// only a bufferedBody supports rewinding.
if requiresBuffering {
tw4452852 (Collaborator):

I think we could remove this check if unbufferedBody's rewind() just returned nil.

tw4452852 (Collaborator):

Actually we only need to rewind the body when the previous attempt failed.


@lhecker (Author) commented Jan 7, 2017

@mholt @tw4452852 I removed the dreaded rewindableReader interface. You were absolutely correct that bufferedBody should only implement interfaces it actually fulfills, so I'm using a type assertion now. It even simplifies the code a bit.

@mholt (Member) left a comment

This PR is looking BEAUTIFUL. Just one more thing related to robustness -- either a test or a code change will do -- and I'm so ready to merge this. Thanks!!

// NOTE:
// Reaching this point implies that keepRetrying() above returned true,
// which in turn implies that body must be a *bufferedBody.
if err := body.(*bufferedBody).rewind(); err != nil {
mholt (Member):

Juuuuust to be safe, maybe this code should have one extra check. If the type assertion fails right now, that will lead to a panic (I know, it shouldn't fail, but I suspect we lack tests to enforce this right now). Either a test should be written -- this particular case might be tedious to test for, I'm not sure -- or try this, which won't panic:

if bb, ok := body.(*bufferedBody); ok {
    err := bb.rewind()
    if err != nil {
        // ...
    }
}

lhecker (Author):

@mholt Fixed.

Can I squash the last 3 commits into a single one before you merge it to clean up the history? It'd result in a force push though of course.

atomic.AddInt64(&host.Conns, 1)
defer atomic.AddInt64(&host.Conns, -1)
backendErr = proxy.ServeHTTP(w, outreq, downHeaderUpdateFn)
}()
mholt (Member):

This won't stop merging, but I just noticed this wrapped in a func(). Why is that? Just curious.

oliverpool commented Jan 8, 2017:

Probably for the defer?

mholt (Member):

Just move line 203 to right after line 204... I don't think there's a need to use defer here.

Reply:

I think it's probably for it to look cleaner (but I'm not the author).
The increment and decrement are now spatially near one another; the rest of the function (only one line here) is then in between, even in the case of a panic or an early return.

lhecker (Author) commented Jan 9, 2017:

It's because proxy.ServeHTTP can panic under certain conditions, but previously the host.Conns counter was not decremented properly if it did. That way, in my understanding, the counter got stuck and could sooner or later lead to a broken proxy state.

P.S.: For instance, I fixed one possible panic in my previous PR, which could be triggered by opening a WebSocket connection to Caddy and causing the upstream to disconnect right after Caddy connected to it. That way a 0-length buffer was inserted into the buffer pool, and every new WebSocket connection could have randomly caused panics, because io.CopyBuffer does not accept 0-length buffers. This in turn would have incremented host.Conns forever without ever decrementing it.

And if there is one bug causing panics there must be more.
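
A minimal sketch of that pattern (UpstreamHost and serve are hypothetical stand-ins): the deferred decrement runs even while a panic unwinds through the anonymous func, so the counter cannot get stuck.

package proxysketch

import "sync/atomic"

// UpstreamHost is a hypothetical stand-in; only the counter matters here.
type UpstreamHost struct {
	Conns int64
}

func proxyOnce(host *UpstreamHost, serve func() error) (err error) {
	func() {
		atomic.AddInt64(&host.Conns, 1)
		// Runs on normal return and during a panic, keeping Conns accurate.
		defer atomic.AddInt64(&host.Conns, -1)
		err = serve()
	}()
	return err
}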

mholt (Member):

Hm, makes sense. It's a little awkward but we'll roll with it for now. Maybe place a comment explaining why it's in an anonymous func.

time.Sleep(timeout)
atomic.AddInt32(&host.Fails, -1)
}(host, timeout)
}()
mholt (Member):

Wait a sec -- why are these parameters removed? It is a bug to start a goroutine in a for loop and not pass in the values as they are on this iteration of the loop; they might change before the goroutine starts.

Reply:

For reference: https://github.com/golang/go/wiki/CommonMistakes#using-goroutines-on-loop-iterator-variables

It is also important to note that variables declared within the body of a loop are not shared between iterations, and thus can be used separately in a closure.

So this should actually work fine, but it is a dangerous portion of code (for instance if the for loop is changed to for host := range upstream.Selector(r) {)
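
For illustration, a sketch of the pattern being asked for (Host, hosts, and markFailures are hypothetical names): passing the loop variables as arguments pins their values for that iteration, so later changes to the loop cannot silently break the goroutine.

package proxysketch

import (
	"sync/atomic"
	"time"
)

// Host is a hypothetical stand-in with just the field relevant here.
type Host struct {
	Fails int32
}

func markFailures(hosts []*Host, timeout time.Duration) {
	for _, host := range hosts {
		atomic.AddInt32(&host.Fails, 1)
		// Pass host and timeout explicitly so the goroutine uses this
		// iteration's values, not whatever the variables hold later.
		go func(host *Host, timeout time.Duration) {
			time.Sleep(timeout)
			atomic.AddInt32(&host.Fails, -1)
		}(host, timeout)
	}
}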

mholt (Member):

Sure, I still think it should be reverted.

lhecker (Author):

I'll revert it. I changed it because it was not written in the same style as other parts of the code, which all use implicit binding of closure parameters.

mholt (Member):

Okay; will await that, I'll feel better if they get passed into the func ;)

@mholt (Member) commented Jan 8, 2017 via email

@tw4452852 (Collaborator) commented:

I think the root cause of this issue is that we have to read the full body beforehand in order to rewind it later when retrying. Here, we just work around it by checking requiresBuffering; however, the issue still exists if that check fails. So I think the proper fix is to buffer the body only when retrying the request. Anyway, we can keep this patch for now and I will try to implement it later.

@oliverpool commented:

@tw4452852: something like a TeeReader wrapped in a NopCloser (to get a ReadCloser)? That way you can copy the forwarded request body into a local buffer and use that buffer to replay it if needed.
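
A minimal sketch of that idea (names are illustrative): everything the upstream reads from the body is also copied into a side buffer, so the captured prefix could be replayed later; whether a retry is actually safe still depends on how much of the body was consumed, as discussed below.

package proxysketch

import (
	"bytes"
	"io"
)

// teeBody wraps a request body so that every byte read from it is also
// written into buf. NopCloser turns the plain Reader back into the
// ReadCloser that http.Request expects.
func teeBody(body io.ReadCloser) (io.ReadCloser, *bytes.Buffer) {
	buf := new(bytes.Buffer)
	return io.NopCloser(io.TeeReader(body, buf)), buf
}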

@tw4452852 (Collaborator) commented Jan 11, 2017

@oliverpool Yes, I have actually adopted that approach here. 😄

@lhecker (Author) commented Jan 11, 2017

So I think the proper fix is to buffer the body only when retrying the request.

That's inherently impossible. You can't decide to buffer the body after you've already consumed it by proxying it to the first upstream host. One way or another you have to decide upfront whether you'd like to buffer the body, especially since request payloads don't have an upper bound on their size.

Your use in replacer.go, for instance, is a case where you'd always buffer the request, and which could be handled in another way.

The only optimizations I see are to better abstract this logic away and to optimize bufferedBody to transparently back up the body into a buffer while it's being read from (which reduces latency).

@mholt (Member) commented Jan 11, 2017

@lhecker I'm ready to merge this, as soon as the goroutine gets passed the arguments again. 👍

@tw4452852 I appreciate your thinking on this. I'm not sure I understand fully what you have in mind but perhaps you could explain more? Or submit a PR... but yeah, you'll have to decide to buffer before consuming the body.

@lhecker Maybe he meant the body will not be buffered unless we know we might retry the request?

@lhecker (Author) commented Jan 11, 2017

@mholt I pushed the requested changes just now. Can/May I now squash the later commits into the first 3?

@mholt (Member) commented Jan 11, 2017

@lhecker Sure! Go ahead and clean up your commits as you please and when ready I'll merge this.

@tw4452852 It looks like your concerns have been addressed so I'll assume you're good with my merging this. We can always go back and make little changes if we find anything else.

Commit messages:

If only one upstream is defined we don't need to buffer the body. Instead we directly stream the body to the upstream host, which reduces memory usage as well as latency. Furthermore this enables different kinds of HTTP streaming applications like gRPC for instance.

This test ensures that the optimizations in 8048e9c are actually effective.
@lhecker (Author) commented Jan 11, 2017

@mholt I cleaned up the commits. As you can see it's a very compact PR now. 🙂

@mholt dismissed tw4452852’s stale review January 11, 2017 20:02

Changes were addressed :)

@mholt merged commit 6967927 into master on Jan 11, 2017
@mholt (Member) commented Jan 11, 2017

Excellent!! Thanks so much for your patience and persistence @lhecker, and for your help reviewing, @tw4452852 and others.

At some point I intend to revisit this (unless somebody else wants to first! I'm busy for a while) to make retries more configurable; i.e. to only buffer if explicitly configured to do so, even with multiple backends. Otherwise, to only retry on connection failures where the request body wasn't drained at all.

@lhecker deleted the unbuffered_proxy branch January 17, 2017 14:32

Successfully merging this pull request may close these issues.

High RAM usage when proxying WebDAV (& possibly normal traffic as well)
4 participants