Implement Report#merge for efficiently combining many reports #110
base: master
Conversation
This looks neat, and probably will help speed things up in the cases you are mentioning (most likely saving a hefty amount of excess object allocations).
I have one idea that you might find useful, but probably is in the land of "over optimization", and definitely a bit "ugly" from a code readability standpoint. Just something to consider.
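To illustrate the allocation point with a toy (this `Counter` class is hypothetical, not stackprof code): a pairwise `+` builds a fresh object and hash at every step of the fold, so `inject(:+)` over n items allocates n - 1 intermediates, while a variadic merge mutates one accumulator and allocates a single result.

```ruby
class Counter
  attr_reader :data

  def initialize(data)
    @data = data
  end

  # Pairwise: each + allocates a brand-new Counter and a new Hash.
  def +(other)
    Counter.new(data.merge(other.data) { |_key, a, b| a + b })
  end

  # Single pass: one result Hash mutated in place, one Counter at the end.
  def merge(*others)
    merged = data.dup
    others.each do |other|
      other.data.each { |key, value| merged[key] = (merged[key] || 0) + value }
    end
    Counter.new(merged)
  end
end

counters = [Counter.new("a" => 1), Counter.new("a" => 2, "b" => 3), Counter.new("b" => 4)]
counters.inject(:+).data                       # => { "a" => 3, "b" => 7 }
counters.first.merge(*counters.drop(1)).data   # => { "a" => 3, "b" => 7 }
```

Both spellings produce the same totals; the difference is purely in how many temporaries the fold creates along the way.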
```diff
 end
-d1, d2 = data, other.data
-data = {
+all_data = all.map(&:data)
```
I feel like you could save a few iterations of this data by just calculating the `samples`, `gc_samples`, and `missed_samples` in a single loop:
```ruby
# untested code... you have been warned!
new_samples, new_gc_samples, new_missed_samples =
  all_data.inject([0, 0, 0]) do |result, data|
    result[0] += data[:samples]
    result[1] += data[:gc_samples]
    result[2] += data[:missed_samples]
    result
  end

new_data = {
  # ...
  samples: new_samples,
  gc_samples: new_gc_samples,
  missed_samples: new_missed_samples
}
```
However, I realize it looks a bit uglier this way...
I'm not sure it's worth optimizing this much since it's O(number of reports) whereas the main loop is O(frames) which should usually be many orders of magnitude larger.
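As a back-of-envelope check on that claim (the per-report frame count is an illustrative assumption; only the report count comes from the PR description):

```ruby
# Illustrative numbers only: 40 reports (per the PR description) and an
# assumed 100k frame entries per report.
report_count = 40
frames_per_report = 100_000

# Summing samples/gc_samples/missed_samples touches each report once:
counter_ops = report_count * 3
# The main frame-merging loop touches every frame of every report:
frame_ops = report_count * frames_per_report

frame_ops / counter_ops # => 33_333
```

Under these assumptions the frame loop does tens of thousands of times more work than the counter sums, so optimizing the latter changes little.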
Yeah, also that. Just something I noticed when looking through the code that stuck out to me, but definitely agree that it probably isn't the big focal point of the optimization.
I think it can be adjusted at a second pass if it turns out to be beneficial, but I think what you have is more readable and fine to keep as is.
```ruby
report.normalized_frames.each do |id, frame|
  if !merged[id]
    merged[id] = frame
    next
```
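For context, a hedged sketch of how the rest of such a loop typically proceeds: when a frame id has been seen before, its counters are summed instead of inserted. The `:samples`/`:total_samples` field names follow stackprof's frame format, but the handling of `:lines` and `:edges` is elided, so this is an illustration rather than the PR's actual code.

```ruby
# Two reports' worth of normalized frames, keyed by frame id (toy data).
reports_frames = [
  { 1 => { name: "main", samples: 2, total_samples: 5 } },
  { 1 => { name: "main", samples: 3, total_samples: 4 },
    2 => { name: "helper", samples: 1, total_samples: 1 } }
]

merged = {}
reports_frames.each do |frames|
  frames.each do |id, frame|
    if !merged[id]
      merged[id] = frame.dup # first time we see this frame: keep it as-is
      next
    end
    # Frame already present: accumulate its counters.
    merged[id][:samples]       += frame[:samples]
    merged[id][:total_samples] += frame[:total_samples]
  end
end

merged[1] # => { name: "main", samples: 5, total_samples: 9 }
```

The early `next` keeps the common "first sighting" case out of the accumulation branch, which is what the comment above is praising.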
I like the early eject here. 👍
@NickLaMuro thanks for the review!
As far as I can tell the test failures on recent Ruby versions are unrelated to this change.
I need to combine 40 large profiles collected from a CI run. `Report#+` gets very slow as I merge more reports (e.g. with `reports.inject(:+)`). This PR implements `Report#merge`, which allows merging an arbitrary number of reports in a single pass. It also adds a test for the merge behavior.
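A sketch of the call pattern this enables, using a minimal stand-in class since the PR's exact `Report#merge` signature isn't shown in this excerpt:

```ruby
# Minimal stand-in for StackProf::Report, just enough to show the call
# pattern (real reports hold frames, sample counts, edges, etc.).
class Report
  attr_reader :data

  def initialize(data)
    @data = data
  end

  # Assumed signature: merge any number of other reports in one pass.
  def merge(*others)
    totals = Hash.new(0)
    ([self] + others).each do |report|
      report.data.each { |key, count| totals[key] += count }
    end
    Report.new(totals)
  end
end

reports = [Report.new(samples: 10), Report.new(samples: 4), Report.new(samples: 1)]

# Instead of reports.inject(:+), which re-merges the accumulated result
# at every step, merge walks every report exactly once:
combined = reports.first.merge(*reports.drop(1))
combined.data # => { samples: 15 }
```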