
Adapt Limiter for TcpStream usage, lower sleep time, add parametric tests #13

Merged
merged 8 commits into main on Jun 13, 2023

Conversation

litchipi (Contributor)

Adapts the Limiter for usage with TcpStream

Lowers the sleep Duration while keeping the same rate

When passed options of 10 bytes per 1 sec with a bucket of 20 bytes, it
will instead apply a rule of 1 byte every 0.1 sec with a bucket of 2 bytes.

This allows for smoother read / write execution while still not polling the underlying stream, and keeps exactly the same rate limit.

I don't really know how the change to the bucket size might affect some special cases, but it behaves the way it should.
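
As a rough sketch of the reduction idea (the LimiterOptions fields and the reduce helper below are illustrative assumptions, not the crate's actual API):

use std::time::Duration;

// Hypothetical options struct: window_length bytes are allowed per window_time.
struct LimiterOptions {
    window_length: u64,
    window_time: Duration,
    bucket_size: u64,
}

// Dividing the byte budget, the window and the bucket by the same factor keeps
// the rate identical: 10 bytes / 1 s with bucket 20 becomes 1 byte / 0.1 s with bucket 2.
fn reduce(opts: LimiterOptions, factor: u32) -> LimiterOptions {
    LimiterOptions {
        window_length: opts.window_length / factor as u64,
        window_time: opts.window_time / factor,
        bucket_size: opts.bucket_size / factor as u64,
    }
}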

Allows unused tokens to be stacked

When we CAN read N tokens but only read x of them, this is no longer treated as "nothing else to read"; instead, the remaining N-x tokens are stored to be used for the next iteration.
The loop only breaks if 0 bytes are read, or if the buffer is filled.

This is useful for TCP packets, as their size is limited to 64 KB.
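
A minimal sketch of that carry-over logic (limited_read and the tokens_for_window callback are illustrative names, not the crate's API):

use std::io::Read;

fn limited_read<R: Read>(
    stream: &mut R,
    buf: &mut [u8],
    mut tokens_for_window: impl FnMut() -> u64, // hypothetical token source
) -> std::io::Result<usize> {
    let mut leftover: u64 = 0;
    let mut total = 0;
    while total < buf.len() {
        // Tokens granted for this window, plus whatever was not consumed last time.
        let tokens = tokens_for_window() + leftover;
        let end = (total + tokens as usize).min(buf.len());
        let n = stream.read(&mut buf[total..end])?;
        // Stack the unused tokens instead of treating a short read as "done".
        leftover = tokens - n as u64;
        total += n;
        // Only stop when the stream yields nothing or the buffer is full.
        if n == 0 {
            break;
        }
    }
    Ok(total)
}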

Use u64 instead of a mix of u128 and usize

It is within the bounds of everything we need: u64::MAX nanoseconds is about 584 years, so I assume it's safe to cap / panic if we ever reach this bound on our Durations.

I still used expect so that, in case of an unexpected panic on a try_from conversion, we know what went wrong.
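
A small sketch of the kind of conversion this refers to (variable names are illustrative):

use std::time::Duration;

fn main() {
    let elapsed = Duration::from_millis(150);
    // as_nanos() returns u128; convert down to u64 and panic with a clear message
    // in the (practically impossible) case it doesn't fit.
    let nanos: u64 = u64::try_from(elapsed.as_nanos())
        .expect("duration in nanoseconds exceeds u64::MAX (~584 years)");

    let buf_len: usize = 4096;
    let bytes: u64 = u64::try_from(buf_len).expect("buffer length exceeds u64::MAX");

    println!("{nanos} ns, {bytes} bytes");
}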

Network testing and parametric testing

In order to test TCP stream limiting, I used parametric tests that generate LimiterOptions from an Rng and pass random data of random length through the limited write / read. I ran it for a few hours and didn't encounter any problem.

A heavy_testing feature is used to perform 100x more tests than usual (this way cargo test runs fine, but we can push it further if we want to release).
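
A sketch of how such a parametric test and the feature gate could look (the rand calls are standard; building the LimiterOptions and asserting the measured rate are left out because they depend on the crate's API):

use rand::Rng;

#[cfg(not(feature = "heavy_testing"))]
const NB_TESTS: usize = 100;
#[cfg(feature = "heavy_testing")]
const NB_TESTS: usize = 100 * 100; // 100x more iterations when preparing a release

#[test]
fn parametric_tcp_limit() {
    let mut rng = rand::thread_rng();
    for _ in 0..NB_TESTS {
        // Random rate, window and payload for each iteration.
        let rate: u64 = rng.gen_range(1..1_000_000);
        let window_ms: u64 = rng.gen_range(1..1_000);
        let data: Vec<u8> = (0..rng.gen_range(1..65_536)).map(|_| rng.gen()).collect();
        // Here: build LimiterOptions from (rate, window_ms), wrap a TcpStream pair,
        // write `data` through the limiter and check the observed rate.
        assert!(rate > 0 && window_ms > 0 && !data.is_empty());
    }
}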

@litchipi (Contributor, Author)

Quick note: a lot of the trouble in the stream limiter comes from the fact that if the window time is too big, we wait too long between two batches of read / write.
Because of this, an operation that should be done in <= 4ms can take ~500ms: we have a high bytes / sec rate over a big window time, but once we run out of tokens we have to wait the whole window time.
I implemented a reduction so we keep the same speed rate, but with a small bytes / sec budget over a small window time.

Because of rounding errors, this can reduce the final output rate by a few bytes / sec, which is why I introduced a constant ACCEPTABLE_SPEED_DIFF (currently set at 0.1% of the rate) that defines how far we perform the reduction.

The higher this constant, the more granularity we get, and the less we'll be stuck waiting 500ms for the next batch of 100 MB of data.
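
A hedged sketch of what such a reduction loop could look like, consistent with the snippet quoted below (the function name and the halving strategy are assumptions, not necessarily what the PR actually does):

const ACCEPTABLE_SPEED_DIFF: f64 = 0.001; // 0.1% of the rate

// Keep shrinking the window (and the byte budget and bucket with it) as long as
// the resulting rate stays within ACCEPTABLE_SPEED_DIFF of the original rate.
fn reduce_window(mut wlen: u64, mut wtime_ns: u64, mut bsize: u64) -> (u64, u64, u64) {
    let init_speed = wlen as f64 / wtime_ns as f64;
    loop {
        let (next_wlen, next_wtime, next_bsize) = (wlen / 2, wtime_ns / 2, bsize / 2);
        if next_wlen == 0 || next_wtime == 0 || next_bsize == 0 {
            break;
        }
        let new_speed = next_wlen as f64 / next_wtime as f64;
        // Integer division introduces rounding error; stop once the rate drifts too far.
        if ((new_speed / init_speed) - 1.0).abs() > ACCEPTABLE_SPEED_DIFF {
            break;
        }
        (wlen, wtime_ns, bsize) = (next_wlen, next_wtime, next_bsize);
    }
    (wlen, wtime_ns, bsize)
}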

@AurelienFT (Contributor) left a comment
Very nice additions, especially the tests. Can you fix the warnings? Very nice job.

let mut new_wlen = window_length;
let mut new_wtime = window_time;
let mut new_bsize = bucket_size;
while ((new_speed / init_speed) - 1.0).abs() < ACCEPTABLE_SPEED_DIFF {
@AurelienFT (Contributor)

Can you place a comment at the top of this block of code? It's not trivial to understand and I'm having a hard time figuring out what's going on.

@litchipi (Contributor, Author)

Added comments, hope it's better now 😅

litchipi added 4 commits June 12, 2023 15:48
@AurelienFT (Contributor) left a comment

I'm OK with this PR, and you tested it in massa, so we can merge. Just need to remove the commented-out debug print.

litchipi added 2 commits June 13, 2023 14:21
@litchipi litchipi merged commit 5de6822 into main Jun 13, 2023