Refactor to storages to support async reads #2012
Conversation
    blob_name,
    static_cast<int>(e.StatusCode),
    e.ReasonPhrase));
} catch(const std::exception&) {
Why are we now catching std::exception here? Is this intentional? If so, why only for read?
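The question above is about widening the catch from a storage-specific exception to std::exception. A minimal sketch of the usual ordering, with the specific exception handled first and std::exception as a last-resort fallback (names here are hypothetical stand-ins, not ArcticDB's actual types):

```cpp
#include <functional>
#include <stdexcept>
#include <string>

// Hypothetical stand-in for a typed SDK/storage exception.
struct StorageException : std::runtime_error {
    using std::runtime_error::runtime_error;
};

// Catch the specific exception first; fall back to std::exception so
// unexpected failures still surface instead of escaping the read path.
std::string classify_read_failure(const std::function<void()>& read) {
    try {
        read();
        return "ok";
    } catch (const StorageException& e) {
        return std::string("storage: ") + e.what();
    } catch (const std::exception& e) {
        return std::string("unexpected: ") + e.what();
    }
}
```

The order matters: because StorageException derives from std::exception, reversing the two catch clauses would make the specific handler unreachable.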
-if(!failed_deletes.empty())
-    throw KeyNotFoundException(Composite<VariantKey>(std::move(failed_deletes)));
+if (!failed_deletes.empty())
+    throw KeyNotFoundException(failed_deletes);
request.WithBucket(bucket_name.c_str()).WithKey(s3_object_name.c_str());
request.SetResponseStreamFactory(S3StreamFactory());
ARCTICDB_RUNTIME_DEBUG(log::version(), "Scheduling async read of {}", s3_object_name);
s3_client.GetObjectAsync(request, GetObjectAsyncHandler{std::move(promise)});
So the callback is executed in a thread pool managed by AWS SDK? Do we need to set its size somewhere or is it sensibly sized by default?
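For context on the question: the GetObjectAsync call above hands a promise to a completion handler that the SDK invokes on its own executor (in the AWS C++ SDK this is configurable via ClientConfiguration::executor; if left unset, the default executor spawns a thread per task rather than a bounded pool). The promise/future handoff itself can be sketched stand-alone, using a plain std::thread in place of the SDK's executor:

```cpp
#include <future>
#include <string>
#include <thread>

// Stand-in for an async read entry point: the "download" runs on a
// separate thread (the real SDK would use its configured executor),
// and the completion handler fulfils the promise with the payload.
std::future<std::string> get_object_async(std::string object_name) {
    std::promise<std::string> promise;
    auto future = promise.get_future();
    std::thread([p = std::move(promise), name = std::move(object_name)]() mutable {
        // Completion handler: deliver the result (or set an exception).
        p.set_value("payload-for-" + name);
    }).detach();
    return future;
}
```

The caller blocks (or composes) on the returned future; the thread that fulfils the promise is whichever one the executor chose, which is exactly why the pool's sizing is worth pinning down.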
thread_local std::ostringstream oss;
for (int i = 0; i < num_frames; ++i) {
    auto filtered = removePrefix(symbols[i], "/opt/arcticdb/arcticdb_link/python/arcticdb_ext.cpython-38-x86_64-linux-gnu.so");
Can you drop a more general prefix? This stripping will only work for certain dev setups
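One way to address this review comment: rather than matching the full hard-coded .so path (which is specific to one dev setup's Python version and architecture), strip everything up to and including a stable marker substring. A hypothetical helper along those lines:

```cpp
#include <string>
#include <string_view>

// Drop everything up to and including `marker` if it occurs in the
// frame; otherwise return the frame untouched. Using a short marker
// such as the extension module's base name works across checkouts,
// Python versions, and architectures. (Illustrative helper, not the
// PR's actual removePrefix.)
std::string strip_through_marker(std::string_view frame, std::string_view marker) {
    auto pos = frame.find(marker);
    if (pos == std::string_view::npos)
        return std::string(frame);  // no marker: leave the frame as-is
    return std::string(frame.substr(pos + marker.size()));
}
```

For example, matching on "arcticdb_ext" alone would strip the install-path prefix regardless of whether the module was built for cpython-38 or cpython-311.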
@@ -1905,7 +1943,7 @@ void set_row_id_if_index_only(
 if (read_query.columns &&
     read_query.columns->empty() &&
     pipeline_context.descriptor().index().type() == IndexDescriptor::Type::ROWCOUNT) {
-    frame.set_row_id(pipeline_context.rows_ - 1);
+    frame.set_row_id(static_cast<ssize_t>(pipeline_context.rows_ - 1));
Not your problem, but I'm confused why the row_id is an ssize_t while pipeline_context.rows_ is a size_t.
The only important thing I spotted was this one: #2012 (comment). Otherwise looks good.
Wrap the storage methods in an async interface, add methods that return KeySegmentPairs directly, and get rid of Composite from the storage interface.
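The PR description above can be sketched as an interface shape: each read returns a future of a KeySegmentPair directly, instead of taking a Composite batch. All names below are illustrative stand-ins, not ArcticDB's actual types:

```cpp
#include <future>
#include <string>
#include <utility>
#include <vector>

// Illustrative stand-ins for ArcticDB's VariantKey / Segment types.
using VariantKey = std::string;
using Segment = std::vector<char>;
using KeySegmentPair = std::pair<VariantKey, Segment>;

// Sketch of an async storage interface: one key in, one future of a
// KeySegmentPair out, with no Composite wrapper in the signature.
struct AsyncStorage {
    virtual ~AsyncStorage() = default;
    virtual std::future<KeySegmentPair> async_read(VariantKey key) = 0;
};

// Trivial in-memory implementation, purely for illustration.
struct InMemoryStorage : AsyncStorage {
    std::future<KeySegmentPair> async_read(VariantKey key) override {
        return std::async(std::launch::async, [key]() {
            return KeySegmentPair{key, Segment{'d', 'a', 't', 'a'}};
        });
    }
};
```

Returning the pair directly keeps batching as a caller-side concern (a vector of futures) rather than baking Composite into every storage backend's signature.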