We have seen some Denial of Service (DoS) style behavior from one of our clients, who appears to have written something in Erlang (a highly concurrent language) with a bug that sends hundreds of requests per second (probably in parallel) in short bursts every few weeks. The user-agent we're seeing in this specific case is hackney/1.23.0, which is an Erlang HTTP client. Script/bot gone wild.
We have a very basic rate-limit rule, but it only covers common UI actions, not all API actions, and it's quite low at 100 requests per 5 minutes (which should be adequate for the UI, since anyone browsing that fast is probably a bot).
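For what it's worth, a fixed-window limit like that is conceptually just a per-client counter that resets every window. A minimal sketch in Python (not our actual middleware; the names and numbers are only illustrative):

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 300   # 5-minute window
MAX_REQUESTS = 100     # current UI limit discussed above

# (client, window_start) -> requests seen in that window
counts = defaultdict(int)

def allow_request(client_id, now=None):
    """Return True if this client is still under the limit for the current window."""
    now = time.time() if now is None else now
    window_start = int(now // WINDOW_SECONDS) * WINDOW_SECONDS
    counts[(client_id, window_start)] += 1
    return counts[(client_id, window_start)] <= MAX_REQUESTS
```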
If their rate is 100 requests per second (roughly what I saw in the past), they'd send 30,000 requests if it went on for 5 minutes.
Adam mentions "ROR and DataCite use 2K every 5 mins."
They could still DoS us for a moment at that rate, but a limit like this would keep it from going on too long.
Adam is also contacting the client and notifying them of the bad behavior.
ROR's approach: "we don't have any additional automatic rate limiting, but i do apply "special" rate limiting to people who are being jerks, which involves manually adding the (many) offending IPs to a terraform config. if i can spot a particular CIDR block, i go ahead and add the whole block, but they tend to be AWS serverless apps with rando IPs."
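The CIDR-block idea translates to a simple membership check if we ever wanted it application-side rather than in infrastructure config. A rough sketch using the Python standard library (the block list below is made up from documentation ranges, not real offenders):

```python
from ipaddress import ip_address, ip_network

# Hypothetical manually maintained block list, analogous to ROR's terraform config.
BLOCKED_NETWORKS = [
    ip_network("203.0.113.0/24"),    # documentation range, stand-in for an offending block
    ip_network("198.51.100.0/24"),
]

def is_blocked(remote_ip):
    """True if the requesting IP falls inside any manually blocked CIDR range."""
    addr = ip_address(remote_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)
```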
This is a bit tricky: we don't want to push out legitimate, reasonably heavy users who are being polite, but we also don't want people DoSing our service.
The 2k per 5 minutes translates to 400 requests per minute or about 7 requests per second. Perhaps this or even lower is ok.
I'd suspect that even half this rate (1,000 per 5 minutes) would cover most people, and when doing batch operations against other APIs I've certainly put delays in my own code rather than trying to slam requests through as fast as possible.
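As an example of that polite client behavior, even a simple sleep between batch calls keeps a client comfortably under a 1,000-per-5-minutes limit. A sketch (the endpoint URL is a placeholder, not our real API):

```python
import time
import requests

API_URL = "https://example.org/api/records"   # placeholder, not our real endpoint

def fetch_batch(ids, delay_seconds=0.5):
    """Fetch records one at a time, pausing between requests (~2 requests/second)."""
    results = []
    for record_id in ids:
        resp = requests.get(f"{API_URL}/{record_id}", timeout=30)
        resp.raise_for_status()
        results.append(resp.json())
        time.sleep(delay_seconds)   # 0.5 s pause keeps the rate at or below ~600 per 5 minutes
    return results
```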
We can consider and discuss. We could also ask CDL or other users of our API what they would find reasonable.
A new limit might surprise some people, and it may force changes in their code if they don't respond through other channels.
I've also seen that CrossRef only gives higher public limits when you supply an email address, and presumably that gives them a way to contact someone who is causing problems and ask them to cool it. We do have the account name in this case, though I suppose it may be shared by many people at that campus.
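If we went the CrossRef-style route of tying higher limits to contact info, clients could identify themselves in the User-Agent. A hedged sketch of what we might ask clients to do (the header convention here is only a suggestion, not an existing policy of ours):

```python
import requests

# Hypothetical convention: requests that include a mailto get the higher limit
# and give us someone to email if the client misbehaves.
headers = {
    "User-Agent": "my-batch-loader/1.0 (mailto:ops@example.edu)",
}

resp = requests.get("https://example.org/api/records/12345", headers=headers, timeout=30)
resp.raise_for_status()
print(resp.json())
```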