Commit
Fixes reduced request rate problem for ssl/tls connections.
8 comments on commit 2afa95c
But that won't be a 10-requests-per-second DOS.
Or the process hits the message inbox limit?
Well, that depends on how big the Reason is. To avoid introducing a new attack vector, the request errors shouldn't be logged here.
I guess I'm more worried about not being warned about possible bugs because nothing is logged than about DOS attacks via the error_logger. You can easily drop "normal" error messages by whitelisting them, btw.
Something similar happened to me before with mochiweb + webmachine, and I found out the hard way. Sometimes connections were mysteriously dropped without leaving any trace of what happened. Only the client side noticed it; there was nothing in the server-side logs.
The thing was that our request process sometimes received a message (a timeout) which was picked up while receiving the next HTTP request. Mochiweb just silently dropped the connection without leaving a message that it had received something out of the ordinary.
Those kinds of bugs are hard to catch, but usually very easy to prevent.
This kind of unexpected behaviour, like sleeping for 100 ms on any kind of accept error, leads to all kinds of weird and hard-to-debug software.
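To illustrate the alternative being argued for here, a minimal sketch of an acceptor loop that handles known transient accept results explicitly and leaves a trace for anything unexpected, instead of sleeping on every error (the module structure and `handle_connection/1` are hypothetical, not code from this repo):

```erlang
%% Sketch: explicit handling of accept results instead of a blanket sleep.
accept_loop(ListenSocket) ->
    case gen_tcp:accept(ListenSocket) of
        {ok, Socket} ->
            handle_connection(Socket),        %% hypothetical per-connection handler
            accept_loop(ListenSocket);
        {error, econnaborted} ->
            %% Client gave up before we accepted; perfectly normal, just retry.
            accept_loop(ListenSocket);
        {error, closed} ->
            %% The listen socket was closed; stop the loop cleanly.
            ok;
        {error, Reason} ->
            %% Unexpected: leave a trace instead of failing silently.
            error_logger:warning_msg("accept failed: ~p~n", [Reason]),
            accept_loop(ListenSocket)
    end.
```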
Introducing error logging by default for requests is up for discussion, but it should be done separately from this issue.
The way I see it is that you're fixing a problem in SSL, but potentially introducing a regression somewhere else. A change in semantics for non-SSL requests shouldn't be hidden away in a fix for SSL.
We handled logging of errors during the request with our own try/catch in the handler, which let us quickly decide what to do about an error without escalating to any other processes unless it was necessary to do so.
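A sketch of that approach, assuming OTP 21+ catch syntax (the `dispatch/1` and `send_500/1` names are hypothetical stand-ins for the real handler):

```erlang
%% Sketch: keep the log-or-not decision local to the request process.
handle_request(Req) ->
    try
        dispatch(Req)                         %% hypothetical request dispatcher
    catch
        Class:Reason:Stacktrace ->
            %% Decide here: log, whitelist-and-drop, or re-raise.
            error_logger:error_msg("request failed: ~p:~p~n~p~n",
                                   [Class, Reason, Stacktrace]),
            send_500(Req)                     %% hypothetical error response
    end.
```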
FYI: looking in my logs I see that there are two return values from accept which should not be handled as errors the way they are right now: `econnaborted` and `{tls_alert, _}`. Both are normal and don't require error reporting and logging; you can just accept again. And you can always get `eagain` back from accept, right?
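Treating those return values as normal could look roughly like this in an SSL acceptor (a sketch assuming the OTP 21+ `ssl:handshake/1` API; `handle_connection/1` is hypothetical):

```erlang
%% Sketch: accept again on the benign results mentioned above.
ssl_accept_loop(ListenSocket) ->
    case ssl:transport_accept(ListenSocket) of
        {ok, TlsSocket} ->
            case ssl:handshake(TlsSocket) of
                {ok, Socket}             -> handle_connection(Socket);
                {error, {tls_alert, _}}  -> ok;   %% broken/hostile handshake; normal
                {error, _Other}          -> ok    %% give up on this connection only
            end,
            ssl_accept_loop(ListenSocket);
        {error, econnaborted} ->
            ssl_accept_loop(ListenSocket);        %% client aborted; just retry
        {error, eagain} ->
            ssl_accept_loop(ListenSocket);        %% transient; just retry
        {error, Reason} ->
            error_logger:warning_msg("accept failed: ~p~n", [Reason]),
            ssl_accept_loop(ListenSocket)
    end.
```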
I'm not sure about the exact semantics of `eagain`, but that sounds correct to me.
About the error logging: we could either use a configurable error logging module or add gen_event event handling?
This looks like it may open up a different DOS attack if an inefficient error_logger is used and the attacker finds a way to quickly create errors.
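The gen_event idea could be sketched like this (module and event-manager names are hypothetical, not part of this codebase): the accept loop publishes an event, and whoever configures the server installs a handler deciding how, or whether, to log, keeping the hot accept path cheap even under a flood of errors.

```erlang
%% Sketch: pluggable handler for accept errors via gen_event.
-module(accept_error_handler).
-behaviour(gen_event).
-export([init/1, handle_event/2, handle_call/2, terminate/2]).

init([]) -> {ok, #{}}.

%% Whitelist "normal" errors: drop them silently.
handle_event({accept_error, econnaborted}, State) ->
    {ok, State};
handle_event({accept_error, Reason}, State) ->
    error_logger:warning_msg("accept failed: ~p~n", [Reason]),
    {ok, State}.

handle_call(_Request, State) -> {ok, ok, State}.
terminate(_Arg, _State) -> ok.
```

Installed with something like `gen_event:add_handler(EventMgr, accept_error_handler, [])`, with the acceptor calling `gen_event:notify(EventMgr, {accept_error, Reason})` instead of logging directly.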