Utility to analyze the size of various key types in a library #1291

Merged: 6 commits, Feb 8, 2024
Changes from 3 commits
65 changes: 53 additions & 12 deletions cpp/arcticdb/version/local_versioned_engine.cpp
@@ -1768,20 +1768,61 @@ timestamp LocalVersionedEngine::latest_timestamp(const std::string& symbol) {
}

std::unordered_map<KeyType, std::pair<size_t, size_t>> LocalVersionedEngine::scan_object_sizes() {
std::unordered_map<KeyType, std::pair<size_t, size_t>> sizes;
std::unordered_map<KeyType, std::pair<size_t, std::atomic<size_t>>> sizes;
Review comment (Collaborator): I might be lacking some context. What should the values in the std::pair above represent?

foreach_key_type([&store=store(), &sizes=sizes](KeyType key_type) {
std::vector<std::pair<VariantKey, stream::StreamSource::ReadContinuation>> keys_and_continuations;
auto& pair = sizes[key_type];
store->iterate_type(key_type, [&keys_and_continuations, &pair](const VariantKey &&k) {
keys_and_continuations.emplace_back(
std::forward<const VariantKey>(k),
[&pair](auto &&ks) {
auto key_seg = std::move(ks);
++pair.first;
pair.second += key_seg.segment().total_segment_size();
return key_seg.variant_key();
});
std::vector<std::pair<VariantKey, stream::StreamSource::ReadContinuation>> keys;
store->iterate_type(key_type, [&keys, &sizes, &key_type](const VariantKey&& k) {
auto& pair = sizes[key_type];
Review comment (Collaborator): Could you assign first and second to named things via a structured binding?
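A structured binding would give the pair's members readable names; a minimal standalone sketch of the suggestion (here `int` stands in for KeyType, and the function name is illustrative, not part of the PR):

```cpp
#include <cstddef>
#include <unordered_map>
#include <utility>

// Illustrative only: 'int' stands in for KeyType. The structured binding
// names .first/.second as 'count' and 'bytes', per the reviewer's suggestion.
std::pair<std::size_t, std::size_t> record_segment(
        std::unordered_map<int, std::pair<std::size_t, std::size_t>>& sizes,
        int key_type,
        std::size_t segment_bytes) {
    auto& [count, bytes] = sizes[key_type];  // instead of pair.first / pair.second
    ++count;
    bytes += segment_bytes;
    return {count, bytes};
}
```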

++pair.first;
auto& size_counter = pair.second;
keys.emplace_back(std::forward<const VariantKey>(k), [&size_counter] (auto&& ks) {
Review comment (Collaborator): I found the collection that had the functors in it being called 'keys' a bit confusing. Took me a while to work out where the work was getting done.

auto key_seg = std::move(ks);
size_counter += key_seg.segment().total_segment_size();
Review comment (Collaborator): There's an element of 'no good deed goes unpunished', but since we're changing this (and adding a test, which should have been there from the outset - my bad), I believe we're scanning the encoded fields and working out both the uncompressed and the compressed sizes. Do you think it would be possible to record both? I'm very interested in the compression ratios we're getting, as I think we may want to move away from block encoders like LZ4.

Review comment (Author, @poodlewars, Feb 5, 2024): This is all recording compressed sizes. Is there a way to get the uncompressed size without actually doing the decompression of the segments into SegmentInMemory?

Update: We discussed that in_bytes stores the uncompressed size.
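If both sizes were recorded, the compression ratio the reviewer asks about falls out directly; a trivial sketch, assuming uncompressed and compressed byte counts are both available (the function name is hypothetical, not ArcticDB's API):

```cpp
#include <cstddef>

// Sketch only: given an uncompressed size (e.g. from in_bytes) and a
// compressed size (e.g. from total_segment_size), the ratio discussed in
// the review is simply their quotient.
double compression_ratio(std::size_t uncompressed_bytes,
                         std::size_t compressed_bytes) {
    if (compressed_bytes == 0)
        return 0.0;  // avoid dividing by zero for empty segments
    return static_cast<double>(uncompressed_bytes) /
           static_cast<double>(compressed_bytes);
}
```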

return key_seg.variant_key();
});
});

if(!keys.empty()) {
store->batch_read_compressed(std::move(keys), BatchReadArgs{}).get();
}

});

std::unordered_map<KeyType, std::pair<size_t, size_t>> result;
for (const auto& [k, v] : sizes) {
result[k] = v;
}
return result;
}
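The per-key-type byte total above is a std::atomic<size_t> because the read continuations handed to batch_read_compressed may run concurrently (which is also why the function copies into a plain map before returning). A minimal sketch of that lock-free accumulation pattern, under the assumption that continuations execute on multiple threads (names here are illustrative, not ArcticDB's API):

```cpp
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

// Sketch of the synchronization pattern: the byte total is atomic so that
// workers on multiple threads can accumulate into it without a mutex.
std::size_t sum_segment_sizes(const std::vector<std::size_t>& segment_sizes) {
    std::atomic<std::size_t> total{0};
    std::vector<std::thread> workers;
    workers.reserve(segment_sizes.size());
    for (std::size_t s : segment_sizes)
        workers.emplace_back([&total, s] {
            // relaxed ordering suffices: we only read the sum after join()
            total.fetch_add(s, std::memory_order_relaxed);
        });
    for (auto& t : workers)
        t.join();
    return total.load();
}
```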

std::unordered_map<StreamId, std::unordered_map<KeyType, size_t>> LocalVersionedEngine::scan_object_sizes_by_stream() {
std::mutex mutex;
std::unordered_map<StreamId, std::unordered_map<KeyType, size_t>> sizes;
auto streams = symbol_list().get_symbols(store());

foreach_key_type([&store=store(), &sizes, &mutex](KeyType key_type) {
std::vector<std::pair<VariantKey, stream::StreamSource::ReadContinuation>> keys;

store->iterate_type(key_type, [&keys, &mutex, &sizes, key_type](const VariantKey&& k){
keys.emplace_back(std::forward<const VariantKey>(k), [key_type, &sizes, &mutex] (auto&& ks) {
auto key_seg = std::move(ks);
auto variant_key = key_seg.variant_key();
auto stream_id = variant_key_id(variant_key);
auto size = key_seg.segment().total_segment_size();

{
std::lock_guard lock{mutex};
sizes[stream_id][key_type] += size;
Review comment (Collaborator): Does it make sense to use atomic here as well? Or maybe switch to a spinlock instead of a mutex.

Review comment (Author): This case is a bit different to the function above because we also need some synchronization guarantees over the map itself, so I don't think atomic on its own is safe. We could try spinlock, but really I'm not too worried - the locked operation should be lightning fast relative to the IO.

Review comment (Collaborator): > need some synchronization guarantees over the map itself
Is this because there can be a race on creating sizes[stream_id]?

}

return variant_key;
});

});

if (!keys.empty()) {
store->batch_read_compressed(std::move(keys), BatchReadArgs{}).get();
}
});

return sizes;
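As the thread above discusses, sizes[stream_id][key_type] can insert nodes into both maps, so an atomic counter alone would not make the access safe; the mutex guards the map structure itself. A minimal sketch of that point (std::string and int stand in for StreamId and KeyType; names are illustrative):

```cpp
#include <cstddef>
#include <mutex>
#include <string>
#include <thread>
#include <unordered_map>
#include <vector>

// The lock_guard protects the unordered_map *structure*: operator[] may
// insert and rehash, which is a data race if done concurrently, no matter
// how the counter itself is updated.
using SizesByStream =
    std::unordered_map<std::string, std::unordered_map<int, std::size_t>>;

void add_size(SizesByStream& sizes, std::mutex& m,
              const std::string& stream_id, int key_type, std::size_t bytes) {
    std::lock_guard lock{m};
    sizes[stream_id][key_type] += bytes;
}
```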
3 changes: 3 additions & 0 deletions cpp/arcticdb/version/local_versioned_engine.hpp
@@ -390,6 +390,9 @@ class LocalVersionedEngine : public VersionedEngine {
const WriteOptions& write_options);

std::unordered_map<KeyType, std::pair<size_t, size_t>> scan_object_sizes();

std::unordered_map<StreamId, std::unordered_map<KeyType, size_t>> scan_object_sizes_by_stream();

std::shared_ptr<Store>& _test_get_store() { return store_; }
void _test_set_validate_version_map() {
version_map()->set_validate(true);
6 changes: 5 additions & 1 deletion cpp/arcticdb/version/python_bindings.cpp
@@ -543,7 +543,11 @@ void register_bindings(py::module &version, py::exception<arcticdb::ArcticExcept
py::call_guard<SingleThreadMutexHolder>(), "Get the most recent update time for a list of stream ids")
.def("scan_object_sizes",
&PythonVersionStore::scan_object_sizes,
py::call_guard<SingleThreadMutexHolder>(), "Scan the sizes of object")
py::call_guard<SingleThreadMutexHolder>(), "Scan the sizes of all objects in the library. Sizes are in bytes.")
.def("scan_object_sizes_by_stream",
&PythonVersionStore::scan_object_sizes_by_stream,
py::call_guard<SingleThreadMutexHolder>(),
"Scan the sizes of all objects in the library, grouped by stream ID. Sizes are in bytes.")
.def("find_version",
&PythonVersionStore::get_version_to_read,
py::call_guard<SingleThreadMutexHolder>(), "Check if a specific stream has been written to previously")
@@ -0,0 +1,70 @@
from arcticdb.util.test import sample_dataframe
from arcticdb_ext.storage import KeyType


def test_symbol_sizes(basic_store):
sizes = basic_store.version_store.scan_object_sizes_by_stream()
assert len(sizes) == 1
assert "__symbols__" in sizes

sym_names = []
for i in range(5):
df = sample_dataframe(100, i)
sym = "sym_{}".format(i)
sym_names.append(sym)
basic_store.write(sym, df)

sizes = basic_store.version_store.scan_object_sizes_by_stream()

for s in sym_names:
assert s in sizes

assert sizes["sym_0"][KeyType.VERSION] < 1000
assert sizes["sym_0"][KeyType.TABLE_INDEX] < 5000
assert sizes["sym_0"][KeyType.TABLE_DATA] < 15000


def test_symbol_sizes_big(basic_store):
df = sample_dataframe(1000)
basic_store.write("sym", df)

sizes = basic_store.version_store.scan_object_sizes_by_stream()

assert sizes["sym"][KeyType.VERSION] < 1000
assert sizes["sym"][KeyType.TABLE_INDEX] < 5000
assert 15_000 < sizes["sym"][KeyType.TABLE_DATA] < 100_000


"""
Manual testing lines up well:

In [11]: lib._nvs.version_store.scan_object_sizes_by_stream()
Out[11]:
{'sym': {<KeyType.VERSION: 4>: 1160,
<KeyType.TABLE_INDEX: 3>: 2506,
<KeyType.TABLE_DATA: 2>: 5553859}}

In [12]: lib
Out[12]: Library(Arctic(config=LMDB(path=/home/alex/source/ArcticDB/python/blah)), path=tst3, storage=lmdb_storage)

(310) ➜ tst3 git:(size-by-symbol) ✗ du -h .
5.5M .
(310) ➜ tst3 git:(size-by-symbol) ✗ pwd
/home/alex/source/ArcticDB/python/blah/tst3
"""


def test_scan_object_sizes(basic_store):
df = sample_dataframe(1000)
basic_store.write("sym", df)

sizes = basic_store.version_store.scan_object_sizes()

assert len(sizes) == 5
assert sizes[KeyType.VERSION][1] < 1000
assert sizes[KeyType.TABLE_INDEX][1] < 5000
assert 15_000 < sizes[KeyType.TABLE_DATA][1] < 100_000

assert sizes[KeyType.VERSION][0] == 1
assert sizes[KeyType.TABLE_INDEX][0] == 1
assert sizes[KeyType.TABLE_DATA][0] == 1