
reduce allocations of ackhandler.Packet #3525

Merged
2 commits merged from ackhandler-linked-list-allocs into master on Aug 29, 2022

Conversation

marten-seemann (Member)

During a 1 GB transfer, quic-go allocates 500 MB on the sending side, as determined using the allocs profile in pprof. Of those 500 MB, 140 MB are consumed by ackhandler.Packet structs. With this change, we can bring this down to 25 MB.
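For context, one common way to capture such a measurement (not necessarily exactly how it was done for this PR) is to dump Go's cumulative allocation profile after the transfer and inspect it with go tool pprof. The file name and placement of the dump below are placeholders:

```go
package main

import (
	"log"
	"os"
	"runtime/pprof"
)

func main() {
	// ... run the 1 GB transfer under test here ...

	f, err := os.Create("allocs.pprof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	// The "allocs" profile records all allocations since program start.
	// Inspect total allocated bytes per function with:
	//   go tool pprof -sample_index=alloc_space allocs.pprof
	if err := pprof.Lookup("allocs").WriteTo(f, 0); err != nil {
		log.Fatal(err)
	}
}
```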

We can easily solve this by:

  1. storing the Packet by pointer instead of by value in the linked lists used in the ackhandler
  2. using a sync.Pool to reuse Packet structs

(A sketch of both ideas follows after the before/after profiles below.)

Before: [pprof allocation profile screenshot]

After: [pprof allocation profile screenshot]
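For illustration, here is a minimal sketch of both ideas using a stripped-down, hypothetical Packet type (the real ackhandler.Packet carries more fields, and the ackhandler uses its own linked list implementation). The sent-packet history stores *Packet pointers so inserts no longer copy the struct, and a sync.Pool recycles the structs themselves:

```go
package ackhandler

import "sync"

// Packet is a stripped-down stand-in for ackhandler.Packet.
type Packet struct {
	PacketNumber uint64
	Frames       []any
	Length       int
}

// Idea 2: recycle Packet structs instead of allocating a new one per sent packet.
var packetPool = sync.Pool{New: func() any { return &Packet{} }}

// GetPacket hands out a (possibly recycled) *Packet. Storing this pointer in
// the history's linked list (idea 1) avoids copying the struct on insert.
func GetPacket() *Packet {
	return packetPool.Get().(*Packet)
}

// putPacket returns a Packet to the pool once the ackhandler is done with it.
func putPacket(p *Packet) {
	*p = Packet{} // zero the struct so no data leaks into the next user
	packetPool.Put(p)
}
```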

codecov bot commented Aug 27, 2022

Codecov Report

Base: 85.61% // Head: 85.60% // Decreases project coverage by 0.01% ⚠️

Coverage data is based on head (a3b91cf) compared to base (07412be).
Patch coverage: 67.50% of modified lines in this pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #3525      +/-   ##
==========================================
- Coverage   85.61%   85.60%   -0.01%     
==========================================
  Files         137      137              
  Lines       10016    10085      +69     
==========================================
+ Hits         8575     8633      +58     
- Misses       1065     1077      +12     
+ Partials      376      375       -1     
| Impacted Files | Coverage Δ |
| --- | --- |
| internal/ackhandler/packet.go | 13.33% <13.33%> (ø) |
| internal/ackhandler/sent_packet_handler.go | 78.12% <100.00%> (+0.10%) ⬆️ |
| internal/ackhandler/sent_packet_history.go | 94.81% <100.00%> (ø) |
| packet_packer.go | 85.22% <100.00%> (+0.76%) ⬆️ |
| internal/wire/ping_frame.go | 100.00% <0.00%> (ø) |
| internal/wire/data_blocked_frame.go | 100.00% <0.00%> (ø) |
| server.go | 81.25% <0.00%> (+0.15%) ⬆️ |
| quicvarint/varint.go | 83.51% <0.00%> (+5.13%) ⬆️ |
| internal/wire/handshake_done_frame.go | 85.71% <0.00%> (+35.71%) ⬆️ |


```go
}
// After this point, we must not use ackedPackets any longer!
```

Member suggested:

```go
ackedPackets = nil
```

marten-seemann (Member, Author):
Done.

```go
var packetPool = sync.Pool{New: func() any { return &Packet{} }}

func GetPacket() *Packet {
	p := packetPool.Get().(*Packet)
	return p
}
```
Member:

Assuming that callers will sanitize data seems dangerous to me, and could lead to cross-connection data leaks. Is there a downside to zeroing out the other struct members here?

marten-seemann (Member, Author):

Makes sense.
At some point it might be nice to reuse the slice in Packet.Frames, but we're not there yet.
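For illustration, a hedged sketch of what reusing the Frames slice could look like (this is not what the PR does; it is just one way to keep the backing array while still clearing references), building on the putPacket-style helper and the stripped-down Packet type sketched earlier:

```go
// Hypothetical variant of putPacket that keeps the Frames backing array
// for the next packet instead of letting it be garbage-collected.
func putPacket(p *Packet) {
	for i := range p.Frames {
		p.Frames[i] = nil // drop references so old frames can be collected
	}
	frames := p.Frames[:0] // keep capacity, drop length
	*p = Packet{}          // zero everything else to avoid cross-connection leaks
	p.Frames = frames
	packetPool.Put(p)
}
```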

marten-seemann merged commit 15945e3 into master on Aug 29, 2022.
marten-seemann deleted the ackhandler-linked-list-allocs branch on August 29, 2022 at 09:07.