
Do not reserve QUIC stream capacity for unstaked client on forward port #34779

Merged: 2 commits into solana-labs:master from ema-follow-up, Jan 17, 2024

Conversation

@pgarg66 (Contributor) commented Jan 14, 2024

Problem

The TPU-Forward QUIC streamer does not accept connections from unstaked peers, but it still reserves stream capacity for such peers. That capacity could instead be used for connections from staked peers.

Summary of Changes

Update the stream capacity calculations to factor in whether unstaked clients are supported.

Fixes #


codecov bot commented Jan 14, 2024

Codecov Report

Attention: 7 lines in your changes are missing coverage. Please review.

Comparison is base (f92275b) 81.8% compared to head (ff8950d) 81.8%.
Report is 7 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff            @@
##           master   #34779     +/-   ##
=========================================
- Coverage    81.8%    81.8%   -0.1%     
=========================================
  Files         825      825             
  Lines      223124   223150     +26     
=========================================
+ Hits       182527   182540     +13     
- Misses      40597    40610     +13     

@pgarg66 pgarg66 marked this pull request as ready for review January 14, 2024 15:55
@pgarg66 pgarg66 requested a review from lijunwangs January 14, 2024 17:55
    .stream_load_capacity_overflow
    .fetch_add(1, Ordering::Relaxed);
self.max_unstaked_load_in_throttling_window
    .saturating_add(1)
Contributor:

What is the reasoning for this "+1"?

Contributor Author:

This is to ensure a staked connection gets at least 1 more stream than an unstaked connection.
The previous value (8) was lower than the stream count for unstaked connections. Deriving the floor computationally ensures the number stays correct if we change the other parameters that control stream counts for unstaked connections.

Contributor:

Can we add a few comments there?

Contributor Author:

Good idea, will do

@pgarg66 pgarg66 requested a review from lijunwangs January 16, 2024 21:55

pub(crate) struct StakedStreamLoadEMA {
    current_load_ema: AtomicU64,
    load_in_recent_interval: AtomicU64,
    last_update: RwLock<Instant>,
    stats: Arc<StreamStats>,
    max_staked_load_in_ema_window: u64,
    max_unstaked_load_in_throttling_window: u64,
}
@lijunwangs (Contributor) commented Jan 16, 2024:

I think we can use some comments to explain these two fields -- especially because the first one is the aggregate across all staked connections and the second is the per-connection value for unstaked connections.

@pgarg66 pgarg66 requested a review from lijunwangs January 16, 2024 23:14
@lijunwangs (Contributor) left a comment:

lgtm

@pgarg66 pgarg66 merged commit 9edf65b into solana-labs:master Jan 17, 2024
35 checks passed
@pgarg66 pgarg66 deleted the ema-follow-up branch January 17, 2024 00:35