txHandler: applications rate limiter #5734
Force-pushed from 37de57d to 0ba30f7
Codecov Report
@@ Coverage Diff @@
## master #5734 +/- ##
==========================================
+ Coverage 55.64% 55.69% +0.04%
==========================================
Files 475 476 +1
Lines 66869 67043 +174
==========================================
+ Hits 37209 37339 +130
- Misses 27151 27185 +34
- Partials 2509 2519 +10
... and 8 files with indirect coverage changes
Force-pushed from 10582ba to ac6037f
Force-pushed from ac6037f to e05d839
Merged master, moved new config vals to v32.
Force-pushed from 894188c to a66abbf
LGTM, thanks for incorporating the app ID changes
Summary
Rate-limit incoming app calls based on app ID + sender IP address. The check runs before enqueueing into the backlog in `TxHandler.processIncomingTxn`, and it takes effect only if enabled and the backlog is more than 1/2 full. The implementation uses a sharded map with a sliding-window limiter; the sliding-window data is also used for list-based least-recently-used eviction.
Two hashes are used: 1) memhash64 for app-to-bucket mapping, 2) blake2b for app ID + sender caching.
Importantly, the implementation compromises in favor of lower memory usage and truncates the 32-byte blake2b digest to 8 bytes. This appears acceptable since blake2b output is uniform, and the input data is salted, making it impossible to deliberately censor app + relay pairs.
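A rough sketch of that key derivation, using `golang.org/x/crypto/blake2b`; the salt handling, the bucket count, and the multiplicative hash standing in for the runtime's memhash64 are assumptions for illustration only.

```go
package ratelimit

import (
	"encoding/binary"

	"golang.org/x/crypto/blake2b"
)

const numBuckets = 128 // illustrative shard count

// cacheKey derives the truncated 8-byte key for an (app ID, sender) pair.
// A per-process random salt is mixed in so remote parties cannot precompute
// keys and force collisions against a particular app + relay pair.
func cacheKey(salt []byte, appID uint64, sender []byte) uint64 {
	var appBuf [8]byte
	binary.LittleEndian.PutUint64(appBuf[:], appID)

	h, _ := blake2b.New256(nil)
	h.Write(salt)
	h.Write(appBuf[:])
	h.Write(sender)
	sum := h.Sum(nil)

	// Keep only the first 8 of the 32 digest bytes: blake2b output is uniform,
	// so truncation keeps collisions rare while cutting per-key memory 4x.
	return binary.LittleEndian.Uint64(sum[:8])
}

// bucketIndex maps an app ID to a shard. The description above mentions the
// runtime's memhash64; a cheap multiplicative hash stands in here.
func bucketIndex(appID uint64) int {
	return int((appID * 0x9e3779b97f4a7c15) % numBuckets)
}
```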
As the benchmark below shows, there is almost no penalty (about 5%) from eviction even when 94% of operations cause an eviction.
Test Plan
Used real recorded transactions and approximated traffic from the reported connected-peers metric during the last high-traffic event, Sep 1st 12:01 pm – 3:28 pm. This yielded about 6M transactions and about 10k unique key pairs (no truncated-hash collisions).
Ran a few benchmarks using tx messages from the generated data set and got the following data:
Visualized acceptance rate:
Memory overhead:
- `sync.Pool` for keys/buckets gathering: 0.5% alloc_space, 25% alloc_objects under heavy load.
- `go-deadlock.lock()` (fixed in "fix extra heap allocations when detector is disabled" go-deadlock#2):
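For context, this is the general shape of the `sync.Pool` usage referred to above; the pool contents and helper name are illustrative, not taken from the PR.

```go
package ratelimit

import "sync"

// keyBufPool recycles the scratch buffers used to assemble the hash input,
// keeping per-message key derivation allocation-free on the hot path.
var keyBufPool = sync.Pool{
	New: func() any { return new([64]byte) },
}

// pooledKeyBuf hands a zero-length slice over a pooled 64-byte array to fn
// and returns the array to the pool afterwards.
func pooledKeyBuf(fn func(buf []byte)) {
	buf := keyBufPool.Get().(*[64]byte)
	defer keyBufPool.Put(buf)
	fn(buf[:0])
}
```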