Utility to analyze the size of various key types in a library #1291
Conversation
```diff
@@ -1768,20 +1768,61 @@ timestamp LocalVersionedEngine::latest_timestamp(const std::string& symbol) {
 }

 std::unordered_map<KeyType, std::pair<size_t, size_t>> LocalVersionedEngine::scan_object_sizes() {
-    std::unordered_map<KeyType, std::pair<size_t, size_t>> sizes;
+    std::unordered_map<KeyType, std::pair<size_t, std::atomic<size_t>>> sizes;
```
I might be lacking some context. What should the values in the `std::pair` above represent?
```cpp
        });
        std::vector<std::pair<VariantKey, stream::StreamSource::ReadContinuation>> keys;
        store->iterate_type(key_type, [&keys, &sizes, &key_type](const VariantKey&& k) {
            auto& pair = sizes[key_type];
```
Could you assign `first` and `second` to named things via a structured binding?
```cpp
            auto& pair = sizes[key_type];
            ++pair.first;
            auto& size_counter = pair.second;
            keys.emplace_back(std::forward<const VariantKey>(k), [&size_counter] (auto&& ks) {
```
I found it a bit confusing that the collection holding the functors is called `keys`; it took me a while to work out where the work was actually getting done.
```cpp
            auto& size_counter = pair.second;
            keys.emplace_back(std::forward<const VariantKey>(k), [&size_counter] (auto&& ks) {
                auto key_seg = std::move(ks);
                size_counter += key_seg.segment().total_segment_size();
```
There's an element of 'no good deed goes unpunished' here, but since we're changing this (and adding a test, which should have been there from the outset; my bad): I believe we're scanning the encoded fields, and can work out both the uncompressed and the compressed sizes. Do you think it would be possible to record both? I'm very interested in the compression ratios we're getting, as I think we may want to move away from block encoders like LZ4.
This is all recording compressed sizes. Is there a way to get the uncompressed size without actually doing the decompression of the segments into `SegmentInMemory`?

Update: We discussed that `in_bytes` stores the uncompressed size.
```cpp
        {
            std::lock_guard lock{mutex};
            sizes[stream_id][key_type] += size;
```
Does it make sense to use atomic here as well? Or maybe switch to a spinlock instead of a mutex.
This case is a bit different to the function above because we also need some synchronization guarantees over the map itself, so I don't think an atomic on its own is safe. We could try a spinlock, but really I'm not too worried: the locked operation should be lightning fast relative to the IO.
> need some synchronization guarantees over the map itself

Is this because there can be a race on creating `sizes[stream_id]`?
The existing `scan_object_sizes` was not working, as it never actually submitted any work: it always reported that the library contained zero bytes. I've fixed it up here. I've also introduced `scan_object_sizes_by_stream` so that users can figure out which symbols are using up space in their library. I considered a map-reduce style approach in `scan_object_sizes_by_stream` rather than the mutex, but decided against it as I was concerned about memory use (or the probably needless complexity of batching the reduce steps).