Utility to analyze the size of various key types in a library #1291
Changes from 3 commits
```diff
@@ -1768,20 +1768,61 @@ timestamp LocalVersionedEngine::latest_timestamp(const std::string& symbol) {
 }
 
 std::unordered_map<KeyType, std::pair<size_t, size_t>> LocalVersionedEngine::scan_object_sizes() {
-    std::unordered_map<KeyType, std::pair<size_t, size_t>> sizes;
+    std::unordered_map<KeyType, std::pair<size_t, std::atomic<size_t>>> sizes;
     foreach_key_type([&store=store(), &sizes=sizes](KeyType key_type) {
-        std::vector<std::pair<VariantKey, stream::StreamSource::ReadContinuation>> keys_and_continuations;
-        auto& pair = sizes[key_type];
-        store->iterate_type(key_type, [&keys_and_continuations, &pair](const VariantKey &&k) {
-            keys_and_continuations.emplace_back(
-                std::forward<const VariantKey>(k),
-                [&pair](auto &&ks) {
-                    auto key_seg = std::move(ks);
-                    ++pair.first;
-                    pair.second += key_seg.segment().total_segment_size();
-                    return key_seg.variant_key();
-                });
-        });
+        std::vector<std::pair<VariantKey, stream::StreamSource::ReadContinuation>> keys;
+        store->iterate_type(key_type, [&keys, &sizes, &key_type](const VariantKey&& k) {
+            auto& pair = sizes[key_type];
+            ++pair.first;
+            auto& size_counter = pair.second;
+            keys.emplace_back(std::forward<const VariantKey>(k), [&size_counter] (auto&& ks) {
+                auto key_seg = std::move(ks);
+                size_counter += key_seg.segment().total_segment_size();
+                return key_seg.variant_key();
+            });
+        });
+
+        if(!keys.empty()) {
+            store->batch_read_compressed(std::move(keys), BatchReadArgs{}).get();
+        }
     });
+
+    std::unordered_map<KeyType, std::pair<size_t, size_t>> result;
+    for (const auto& [k, v] : sizes) {
+        result[k] = v;
+    }
+    return result;
 }
```

Review comments:

> On `auto& pair = sizes[key_type];` — Could you assign `first` and `second` to named things via a structured binding?

> On the `keys` vector — I found the collection that had the functors in it being called `keys` a bit confusing. It took me a while to work out where the work was getting done.

> On `size_counter += key_seg.segment().total_segment_size();` — There's an element of "no good deed goes unpunished", but since we're changing this (and adding a test, which should have been there from the outset; my bad), I believe we're scanning the encoded fields and working out both the uncompressed and the compressed sizes. Do you think it would be possible to record both? I'm very interested in the compression ratios we're getting, as I think we may want to move away from block encoders like LZ4.
>
> Reply: This is all recording compressed sizes. Is there a way to get the uncompressed size without actually doing the decompression of the segments in to […]
>
> Update: We discussed that […]
New function added in the same file:

```cpp
std::unordered_map<StreamId, std::unordered_map<KeyType, size_t>> LocalVersionedEngine::scan_object_sizes_by_stream() {
    std::mutex mutex;
    std::unordered_map<StreamId, std::unordered_map<KeyType, size_t>> sizes;
    auto streams = symbol_list().get_symbols(store());

    foreach_key_type([&store=store(), &sizes, &mutex](KeyType key_type) {
        std::vector<std::pair<VariantKey, stream::StreamSource::ReadContinuation>> keys;

        store->iterate_type(key_type, [&keys, &mutex, &sizes, key_type](const VariantKey&& k){
            keys.emplace_back(std::forward<const VariantKey>(k), [key_type, &sizes, &mutex] (auto&& ks) {
                auto key_seg = std::move(ks);
                auto variant_key = key_seg.variant_key();
                auto stream_id = variant_key_id(variant_key);
                auto size = key_seg.segment().total_segment_size();

                {
                    std::lock_guard lock{mutex};
                    sizes[stream_id][key_type] += size;
                }

                return variant_key;
            });
        });

        if (!keys.empty()) {
            store->batch_read_compressed(std::move(keys), BatchReadArgs{}).get();
        }
    });

    return sizes;
}
```

Review comments:

> On `sizes[stream_id][key_type] += size;` — Does it make sense to use atomic here as well? Or maybe switch to a spinlock instead of a mutex.
>
> Reply: This case is a bit different to the function above because we also need some synchronization guarantees over the map itself, so I don't think atomic on its own is safe. We could try a spinlock, but really I'm not too worried: the locked operation should be lightning fast relative to the IO.
>
> Follow-up: Is this because there can be a race on creating […]
New test file:

```python
from arcticdb.util.test import sample_dataframe
from arcticdb_ext.storage import KeyType


def test_symbol_sizes(basic_store):
    sizes = basic_store.version_store.scan_object_sizes_by_stream()
    assert len(sizes) == 1
    assert "__symbols__" in sizes

    sym_names = []
    for i in range(5):
        df = sample_dataframe(100, i)
        sym = "sym_{}".format(i)
        sym_names.append(sym)
        basic_store.write(sym, df)

    sizes = basic_store.version_store.scan_object_sizes_by_stream()

    for s in sym_names:
        assert s in sizes

    assert sizes["sym_0"][KeyType.VERSION] < 1000
    assert sizes["sym_0"][KeyType.TABLE_INDEX] < 5000
    assert sizes["sym_0"][KeyType.TABLE_DATA] < 15000


def test_symbol_sizes_big(basic_store):
    df = sample_dataframe(1000)
    basic_store.write("sym", df)

    sizes = basic_store.version_store.scan_object_sizes_by_stream()

    assert sizes["sym"][KeyType.VERSION] < 1000
    assert sizes["sym"][KeyType.TABLE_INDEX] < 5000
    assert 15_000 < sizes["sym"][KeyType.TABLE_DATA] < 100_000


"""
Manual testing lines up well:

In [11]: lib._nvs.version_store.scan_object_sizes_by_stream()
Out[11]:
{'sym': {<KeyType.VERSION: 4>: 1160,
  <KeyType.TABLE_INDEX: 3>: 2506,
  <KeyType.TABLE_DATA: 2>: 5553859}}

In [12]: lib
Out[12]: Library(Arctic(config=LMDB(path=/home/alex/source/ArcticDB/python/blah)), path=tst3, storage=lmdb_storage)

(310) ➜ tst3 git:(size-by-symbol) ✗ du -h .
5.5M .
(310) ➜ tst3 git:(size-by-symbol) ✗ pwd
/home/alex/source/ArcticDB/python/blah/tst3
"""


def test_scan_object_sizes(basic_store):
    df = sample_dataframe(1000)
    basic_store.write("sym", df)

    sizes = basic_store.version_store.scan_object_sizes()

    assert len(sizes) == 5
    assert sizes[KeyType.VERSION][1] < 1000
    assert sizes[KeyType.TABLE_INDEX][1] < 5000
    assert 15_000 < sizes[KeyType.TABLE_DATA][1] < 100_000

    assert sizes[KeyType.VERSION][0] == 1
    assert sizes[KeyType.TABLE_INDEX][0] == 1
    assert sizes[KeyType.TABLE_DATA][0] == 1
```

Review comment:

> On the `sizes[...][0]` / `sizes[...][1]` assertions — I might be lacking some context. What should the values in the `std::pair` above represent?