Reducing origin server requests #500
The implementation must handle both cases: an absent cache entry and an invalid one. We need generic protection against thundering herd, when many identical requests effectively pass through the cache to a backend server. Only one such request must go to the backend server, while the others must be postponed. Technically it could be special. Since we need to modify cache entries, this issue depends on #788.
Addition to the comment above:
Recently we observed an out-of-memory crash on the TDB side caused by this issue.
Currently the thundering herd problem is possible when many clients request the same resource: Tempesta generates as many requests to the upstream as it receives from the clients. Instead, it should accurately track requests on the fly and make only one active request to the upstream.
All identical concurrent requests must be backlogged and sent to a server only if the currently active request fails. This differs from Nginx's proxy_cache_lock_age, but it seems we can implement Nginx's proxy_cache_use_stale update strategy. A similar feature in Varnish coalesces requests; also see the nginx-limit-upstream module. The cache should provide automatic refreshing by using a TDB trigger callback for DELETE (see #515).