Mow master 0922 tmp 4 rebase #4

Closed
wants to merge 64 commits
Changes from all commits (64 commits)
4b22fc1
[Feature](update) Support `update on current_timestamp` (#25884)
bobhan1 Nov 23, 2023
ca7dbc3
[refactor](pipelineX) refine union dependency (#27348)
Mryange Nov 23, 2023
772f181
[fix](stats) Fix thread leaks when doing checkpoint (#27334)
Kikyou1997 Nov 23, 2023
6fdaf2d
[fix](ci) 1. if skip compile then skip p0 p1 external pipelinex_p0 al…
hello-stephen Nov 23, 2023
ab739a6
[Chore](workflow)Fix Pr comment not worker (#27400)
CalvinKirs Nov 23, 2023
2ec3395
[fix](planner)the data type should be the same between input slot and…
starocean999 Nov 23, 2023
1555b11
[fix](nereids)remove literal partition by and order by expression in …
starocean999 Nov 23, 2023
d9f6e51
[fix](planner)output slot should be materialized as intermediate slot…
starocean999 Nov 23, 2023
511eedb
[fix](nereids)select base index if mv's data type is different from b…
starocean999 Nov 23, 2023
2ea3351
[Opt](load) use batching to optimize auto partition (#26915)
zclllyybb Nov 23, 2023
dd65cc1
[opt](MergedIO) no need to merge large columns (#27315)
AshinGau Nov 23, 2023
78203c8
[chore](docker cases): support for specifying output from the command…
feifeifeimoon Nov 23, 2023
d04a2de
[fix](hms) fix compatibility issue of hive metastore client (#27327)
morningman Nov 23, 2023
d73b945
[chore](Nereids): rename pushdown to push_down (#27473)
jackwener Nov 23, 2023
aa766a7
[bugfix](pipeline core) lock fragment context during task close to av…
yiguolei Nov 23, 2023
540bce4
[typo](log) Let env lock msg more distinct (#27493)
SWJTU-ZhangLei Nov 23, 2023
5d31bc9
[Fix](Group_commit) Fix group commit regression test failure (#27475)
Yukang-Lian Nov 23, 2023
5adbe47
[test](regression) add stream load tvf properties regression test (#…
sollhui Nov 23, 2023
eb878ad
[fix](Export) add feut for `Cancel Export` (#27178)
BePPPower Nov 23, 2023
8e74470
[fix](statistics)Fix auto analyze remove finished job bug (#27486)
Jibing-Li Nov 23, 2023
b580ee9
[fix](compile) fix macOS compile and format code (#27494)
zclllyybb Nov 23, 2023
39a5229
[fix](regression)Fix hive p2 case (#27466)
Jibing-Li Nov 23, 2023
df628e1
[chore](merge-on-write) disable rowid conversion check for mow table …
liaoxin01 Nov 23, 2023
5a82a57
[pipelineX](bug) Fix core dump if cancelled (#27449)
Gabriel39 Nov 24, 2023
17ca75f
[chore](Nereids): add eager aggregate into rules (#27505)
jackwener Nov 24, 2023
75c9f00
[Bug](bitmap) Fix heap-use-after-free in the bitmap functions (#27411)
xy720 Nov 24, 2023
d74004e
[feature-wip](merge-on-write) MOW table split primary key and sort key
mymeiyi Sep 22, 2023
9326c7f
fix code format
mymeiyi Sep 22, 2023
4379e27
fix regression
mymeiyi Sep 22, 2023
dc80873
add some regression
mymeiyi Sep 25, 2023
8ede803
fix write
mymeiyi Sep 26, 2023
3f96b2c
fix write
mymeiyi Sep 28, 2023
8691d42
Check status when get rowid
mymeiyi Oct 7, 2023
288cd3b
fix read
mymeiyi Oct 7, 2023
aacb645
improve
mymeiyi Oct 7, 2023
87bfda9
fix be format
mymeiyi Oct 7, 2023
6d9db37
improve get pk row range
mymeiyi Oct 8, 2023
373eedb
Add p2 regression
mymeiyi Oct 8, 2023
6251ba5
Support point query
mymeiyi Oct 8, 2023
384bf64
Fix read bug
mymeiyi Oct 8, 2023
6edd267
Modify point query regression
mymeiyi Oct 12, 2023
08b65dc
Fix rebase compile error
mymeiyi Oct 25, 2023
e13bd2a
Fix point query out
mymeiyi Oct 25, 2023
d5c82d3
Support schema change
mymeiyi Oct 26, 2023
ebac9bb
fix read
mymeiyi Oct 30, 2023
35746c0
fix be ut
mymeiyi Oct 30, 2023
cc8d2da
support row compaction
mymeiyi Oct 31, 2023
ad57b40
add compaction regression case
mymeiyi Oct 31, 2023
5ecf38a
support vertical compaction
mymeiyi Oct 31, 2023
cc3b92d
Fix vertical compaction
mymeiyi Nov 2, 2023
052c927
rebase master
mymeiyi Nov 13, 2023
a68ecd0
fix comments
mymeiyi Nov 13, 2023
de54393
disable test_delete_sign_delete_bitmap
mymeiyi Nov 13, 2023
ebf35e0
fix
mymeiyi Nov 14, 2023
bb75b95
fix compile
mymeiyi Nov 16, 2023
f29ce66
skip vertical segment writer
mymeiyi Nov 16, 2023
a3618f4
some fix
mymeiyi Nov 18, 2023
db1b557
modify by comments
mymeiyi Nov 21, 2023
1d1f6c5
fix DeleteBitmapCalculatorTest
mymeiyi Nov 21, 2023
0233921
Fix segment compaction
mymeiyi Nov 22, 2023
80bbd69
Fix test_point_query_cluster_key
mymeiyi Nov 22, 2023
fbb6aa8
Fix test_point_query_cluster_key
mymeiyi Nov 22, 2023
520a15c
Fix test_point_query_cluster_key
mymeiyi Nov 22, 2023
378c20a
Fix test_point_query_cluster_key
mymeiyi Nov 23, 2023
38 changes: 30 additions & 8 deletions .github/workflows/auto-pr-reply.yml
@@ -1,3 +1,20 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

name: Auto Reply to PR

on:
@@ -8,11 +25,16 @@ jobs:
comment:
runs-on: ubuntu-latest
steps:
- name: Comment on PR
uses: ./.github/actions/create-or-update-comment
with:
issue-number: ${{ github.event.pull_request.number }}
body: |
Thank you for your contribution to Apache Doris.

Don't know what should be done next? See [How to process your PR](https://cwiki.apache.org/confluence/display/DORIS/How+to+process+your+PR)
- name: Checkout
uses: actions/checkout@v3
with:
persist-credentials: false
submodules: recursive
- name: Comment on PR
uses: ./.github/actions/create-or-update-comment
with:
issue-number: ${{ github.event.pull_request.number }}
body: |
Thank you for your contribution to Apache Doris.

Don't know what should be done next? See [How to process your PR](https://cwiki.apache.org/confluence/display/DORIS/How+to+process+your+PR)
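Note: the new Checkout step is what makes the local action resolvable. `./.github/actions/create-or-update-comment` is a path inside the repository (a submodule, per the .gitmodules change below), and GitHub Actions can only run a `uses: ./...` reference after the repository has been checked out on the runner, hence the checkout being ordered first and `submodules: recursive` so the submodule-backed action is present on disk.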
2 changes: 1 addition & 1 deletion .gitmodules
@@ -26,4 +26,4 @@
branch = clucene
[submodule ".github/actions/create-or-update-comment"]
path = .github/actions/create-or-update-comment
url = https://github.com/peter-evans/create-or-update-comment
url = https://github.com/peter-evans/create-or-update-comment.git
5 changes: 4 additions & 1 deletion be/src/common/config.cpp
@@ -505,7 +505,8 @@ DEFINE_Int64(stream_tvf_buffer_size, "1048576"); // 1MB

// OlapTableSink sender's send interval, should be less than the real response time of a tablet writer rpc.
// You may need to lower the speed when the sink receiver bes are too busy.
DEFINE_mInt32(olap_table_sink_send_interval_ms, "1");
DEFINE_mInt32(olap_table_sink_send_interval_microseconds, "1000");
DEFINE_mDouble(olap_table_sink_send_interval_auto_partition_factor, "0.001");

// Fragment thread pool
DEFINE_Int32(fragment_pool_thread_num_min, "64");
@@ -1067,6 +1068,8 @@ DEFINE_mInt64(lookup_connection_cache_bytes_limit, "4294967296");
DEFINE_mInt64(LZ4_HC_compression_level, "9");

DEFINE_mBool(enable_merge_on_write_correctness_check, "true");
// Row-id conversion correctness check during compaction of merge-on-write tables
DEFINE_mBool(enable_rowid_conversion_correctness_check, "false");

// The secure path with user files, used in the `local` table function.
DEFINE_mString(user_files_secure_path, "${DORIS_HOME}");
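The rename from olap_table_sink_send_interval_ms to olap_table_sink_send_interval_microseconds moves the sink's pacing from millisecond to microsecond granularity, and the new factor lets auto-partition loads poll far more often. A minimal sketch of how the effective interval could be derived; the helper below is hypothetical, only the two config knobs come from this diff:

#include <algorithm>
#include <cstdint>

// Hypothetical helper, assuming Doris's config:: namespace is in scope.
// With the defaults above: 1000 us per round for a normal load, and
// 1000 * 0.001 = 1 us for an auto-partition load, whose per-partition
// batches fill slowly because rows fan out across many partitions.
int64_t send_interval_us(bool is_auto_partition) {
    auto interval = static_cast<int64_t>(config::olap_table_sink_send_interval_microseconds);
    if (is_auto_partition) {
        interval = static_cast<int64_t>(
                interval * config::olap_table_sink_send_interval_auto_partition_factor);
    }
    return std::max<int64_t>(interval, 1); // never spin with a zero interval
}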
6 changes: 5 additions & 1 deletion be/src/common/config.h
@@ -559,7 +559,9 @@ DECLARE_Int64(stream_tvf_buffer_size);

// OlapTableSink sender's send interval, should be less than the real response time of a tablet writer rpc.
// You may need to lower the speed when the sink receiver bes are too busy.
DECLARE_mInt32(olap_table_sink_send_interval_ms);
DECLARE_mInt32(olap_table_sink_send_interval_microseconds);
// For auto partition, the send interval is multiplied by this factor
DECLARE_mDouble(olap_table_sink_send_interval_auto_partition_factor);

// Fragment thread pool
DECLARE_Int32(fragment_pool_thread_num_min);
@@ -1128,6 +1130,8 @@ DECLARE_mBool(enable_flatten_nested_for_variant);
DECLARE_mDouble(ratio_of_defaults_as_sparse_column);

DECLARE_mBool(enable_merge_on_write_correctness_check);
// Row-id conversion correctness check during compaction of merge-on-write tables
DECLARE_mBool(enable_rowid_conversion_correctness_check);

// The secure path with user files, used in the `local` table function.
DECLARE_mString(user_files_secure_path);
10 changes: 5 additions & 5 deletions be/src/io/fs/buffered_reader.cpp
@@ -54,7 +54,7 @@ Status MergeRangeFileReader::read_at_impl(size_t offset, Slice result, size_t* b
Status st = _reader->read_at(offset, result, bytes_read, io_ctx);
_statistics.merged_io++;
_statistics.request_bytes += *bytes_read;
_statistics.read_bytes += *bytes_read;
_statistics.merged_bytes += *bytes_read;
return st;
}
if (offset + result.size > _random_access_ranges[range_index].end_offset) {
@@ -68,10 +68,10 @@ Status MergeRangeFileReader::read_at_impl(size_t offset, Slice result, size_t* b
if (cached_data.contains(offset)) {
// has cached data in box
_read_in_box(cached_data, offset, result, &has_read);
_statistics.request_bytes += has_read;
if (has_read == result.size) {
// all data is read in cache
*bytes_read = has_read;
_statistics.request_bytes += has_read;
return Status::OK();
}
} else if (!cached_data.empty()) {
@@ -91,7 +91,7 @@ Status MergeRangeFileReader::read_at_impl(size_t offset, Slice result, size_t* b
*bytes_read = has_read + read_size;
_statistics.merged_io++;
_statistics.request_bytes += read_size;
_statistics.read_bytes += read_size;
_statistics.merged_bytes += read_size;
return Status::OK();
}

@@ -186,7 +186,7 @@ Status MergeRangeFileReader::read_at_impl(size_t offset, Slice result, size_t* b
*bytes_read = has_read + read_size;
_statistics.merged_io++;
_statistics.request_bytes += read_size;
_statistics.read_bytes += read_size;
_statistics.merged_bytes += read_size;
return Status::OK();
}

@@ -314,7 +314,7 @@ Status MergeRangeFileReader::_fill_box(int range_index, size_t start_offset, siz
RETURN_IF_ERROR(
_reader->read_at(start_offset, Slice(_read_slice, to_read), bytes_read, io_ctx));
_statistics.merged_io++;
_statistics.read_bytes += *bytes_read;
_statistics.merged_bytes += *bytes_read;
}

SCOPED_RAW_TIMER(&_statistics.copy_time);
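Aside from the rename, note the moved counter update in the cached-data branch: _statistics.request_bytes += has_read now runs before the full-hit check, so reads that are only partially served from the cache box are counted in RequestBytes too; previously the counter was bumped only when the entire read was satisfied from cache.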
18 changes: 13 additions & 5 deletions be/src/io/fs/buffered_reader.h
@@ -80,7 +80,8 @@ class MergeRangeFileReader : public io::FileReader {
int64_t request_io = 0;
int64_t merged_io = 0;
int64_t request_bytes = 0;
int64_t read_bytes = 0;
int64_t merged_bytes = 0;
int64_t apply_bytes = 0;
};

struct RangeCachedData {
@@ -147,6 +148,9 @@
// Equivalent min size of each IO that can reach the maximum storage speed limit:
// 512KB for oss, 4KB for hdfs
_equivalent_io_size = _is_oss ? OSS_MIN_IO_SIZE : HDFS_MIN_IO_SIZE;
for (const PrefetchRange& range : _random_access_ranges) {
_statistics.apply_bytes += range.end_offset - range.start_offset;
}
if (_profile != nullptr) {
const char* random_profile = "MergedSmallIO";
ADD_TIMER_WITH_LEVEL(_profile, random_profile, 1);
@@ -158,8 +162,10 @@
random_profile, 1);
_request_bytes = ADD_CHILD_COUNTER_WITH_LEVEL(_profile, "RequestBytes", TUnit::BYTES,
random_profile, 1);
_read_bytes = ADD_CHILD_COUNTER_WITH_LEVEL(_profile, "MergedBytes", TUnit::BYTES,
random_profile, 1);
_merged_bytes = ADD_CHILD_COUNTER_WITH_LEVEL(_profile, "MergedBytes", TUnit::BYTES,
random_profile, 1);
_apply_bytes = ADD_CHILD_COUNTER_WITH_LEVEL(_profile, "ApplyBytes", TUnit::BYTES,
random_profile, 1);
}
}

@@ -184,7 +190,8 @@
COUNTER_UPDATE(_request_io, _statistics.request_io);
COUNTER_UPDATE(_merged_io, _statistics.merged_io);
COUNTER_UPDATE(_request_bytes, _statistics.request_bytes);
COUNTER_UPDATE(_read_bytes, _statistics.read_bytes);
COUNTER_UPDATE(_merged_bytes, _statistics.merged_bytes);
COUNTER_UPDATE(_apply_bytes, _statistics.apply_bytes);
}
}
return Status::OK();
@@ -220,7 +227,8 @@ class MergeRangeFileReader : public io::FileReader {
RuntimeProfile::Counter* _request_io;
RuntimeProfile::Counter* _merged_io;
RuntimeProfile::Counter* _request_bytes;
RuntimeProfile::Counter* _read_bytes;
RuntimeProfile::Counter* _merged_bytes;
RuntimeProfile::Counter* _apply_bytes;

int _search_read_range(size_t start_offset, size_t end_offset);
void _clean_cached_data(RangeCachedData& cached_data);
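After this change the three byte counters measure distinct things: ApplyBytes is the total size of the random-access ranges registered at construction (summed in the new loop above), RequestBytes is what callers actually pulled through read_at, and MergedBytes is what the underlying reader physically read after neighboring small ranges were merged. A hypothetical example under these assumptions: two 100 KB ranges separated by a 4 KB gap yield ApplyBytes = 204,800 up front; if the reader merges them into one 204 KB IO, MergedBytes grows by 208,896 (the gap is read as well) while RequestBytes grows only as callers consume data, so MergedBytes exceeding RequestBytes quantifies the read amplification that merging trades for fewer IOs.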
11 changes: 8 additions & 3 deletions be/src/olap/compaction.cpp
@@ -660,7 +660,8 @@ Status Compaction::modify_rowsets(const Merger::Statistics* stats) {
output_rowsets.push_back(_output_rowset);

if (_tablet->keys_type() == KeysType::UNIQUE_KEYS &&
_tablet->enable_unique_key_merge_on_write()) {
_tablet->enable_unique_key_merge_on_write() &&
_tablet->tablet_schema()->cluster_key_idxes().empty()) {
Version version = _tablet->max_version();
DeleteBitmap output_rowset_delete_bitmap(_tablet->tablet_id());
std::set<RowLocation> missed_rows;
@@ -689,7 +690,9 @@ Status Compaction::modify_rowsets(const Merger::Statistics* stats) {
}
}

RETURN_IF_ERROR(_tablet->check_rowid_conversion(_output_rowset, location_map));
if (config::enable_rowid_conversion_correctness_check) {
RETURN_IF_ERROR(_tablet->check_rowid_conversion(_output_rowset, location_map));
}
location_map.clear();

{
@@ -750,7 +753,9 @@ Status Compaction::modify_rowsets(const Merger::Statistics* stats) {
}
}

RETURN_IF_ERROR(_tablet->check_rowid_conversion(_output_rowset, location_map));
if (config::enable_rowid_conversion_correctness_check) {
RETURN_IF_ERROR(_tablet->check_rowid_conversion(_output_rowset, location_map));
}

_tablet->merge_delete_bitmap(output_rowset_delete_bitmap);
RETURN_IF_ERROR(_tablet->modify_rowsets(output_rowsets, _input_rowsets, true));
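Because the rowid-conversion check is now gated and defaults to false (per the config.cpp change above), exercising it during compaction of a merge-on-write table requires enabling it explicitly, e.g. in be.conf:

enable_rowid_conversion_correctness_check = true

Since it is declared with DEFINE_mBool it is a mutable config, so it can presumably also be flipped at runtime through the BE's update_config HTTP endpoint instead of a restart.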
59 changes: 40 additions & 19 deletions be/src/olap/delete_bitmap_calculator.cpp
@@ -92,47 +92,54 @@ bool MergeIndexDeleteBitmapCalculatorContext::Comparator::operator()(
Slice key1, key2;
RETURN_IF_ERROR(lhs->get_current_key(key1));
RETURN_IF_ERROR(rhs->get_current_key(key2));
if (_sequence_length == 0) {
if (_sequence_length == 0 && _rowid_length == 0) {
auto cmp_result = key1.compare(key2);
// when key1 is the same as key2,
// we want the one with greater segment id to be popped first
return cmp_result ? (cmp_result > 0) : (lhs->segment_id() < rhs->segment_id());
}
// smaller key popped first
auto key1_without_seq = Slice(key1.get_data(), key1.get_size() - _sequence_length);
auto key2_without_seq = Slice(key2.get_data(), key2.get_size() - _sequence_length);
auto key1_without_seq =
Slice(key1.get_data(), key1.get_size() - _sequence_length - _rowid_length);
auto key2_without_seq =
Slice(key2.get_data(), key2.get_size() - _sequence_length - _rowid_length);
auto cmp_result = key1_without_seq.compare(key2_without_seq);
if (cmp_result != 0) {
return cmp_result > 0;
}
// greater sequence value popped first
auto key1_sequence_val =
Slice(key1.get_data() + key1.get_size() - _sequence_length, _sequence_length);
auto key2_sequence_val =
Slice(key2.get_data() + key2.get_size() - _sequence_length, _sequence_length);
cmp_result = key1_sequence_val.compare(key2_sequence_val);
if (cmp_result != 0) {
return cmp_result < 0;
if (_sequence_length > 0) {
// greater sequence value popped first
auto key1_sequence_val =
Slice(key1.get_data() + key1_without_seq.get_size() + 1, _sequence_length - 1);
auto key2_sequence_val =
Slice(key2.get_data() + key2_without_seq.get_size() + 1, _sequence_length - 1);
cmp_result = key1_sequence_val.compare(key2_sequence_val);
if (cmp_result != 0) {
return cmp_result < 0;
}
}
// greater segment id popped first
return lhs->segment_id() < rhs->segment_id();
}

bool MergeIndexDeleteBitmapCalculatorContext::Comparator::is_key_same(Slice const& lhs,
Slice const& rhs) const {
DCHECK(lhs.get_size() >= _sequence_length);
DCHECK(rhs.get_size() >= _sequence_length);
auto lhs_without_seq = Slice(lhs.get_data(), lhs.get_size() - _sequence_length);
auto rhs_without_seq = Slice(rhs.get_data(), rhs.get_size() - _sequence_length);
DCHECK(lhs.get_size() >= _sequence_length + _rowid_length);
DCHECK(rhs.get_size() >= _sequence_length + _rowid_length);
auto lhs_without_seq = Slice(lhs.get_data(), lhs.get_size() - _sequence_length - _rowid_length);
auto rhs_without_seq = Slice(rhs.get_data(), rhs.get_size() - _sequence_length - _rowid_length);
return lhs_without_seq.compare(rhs_without_seq) == 0;
}

Status MergeIndexDeleteBitmapCalculator::init(RowsetId rowset_id,
std::vector<SegmentSharedPtr> const& segments,
size_t seq_col_length, size_t max_batch_size) {
size_t seq_col_length, size_t rowid_length,
size_t max_batch_size) {
_rowset_id = rowset_id;
_seq_col_length = seq_col_length;
_comparator = MergeIndexDeleteBitmapCalculatorContext::Comparator(seq_col_length);
_rowid_length = rowid_length;
_comparator =
MergeIndexDeleteBitmapCalculatorContext::Comparator(seq_col_length, _rowid_length);
_contexts.reserve(segments.size());
_heap = std::make_unique<Heap>(_comparator);

@@ -146,6 +153,10 @@ Status MergeIndexDeleteBitmapCalculator::init(RowsetId rowset_id,
_contexts.emplace_back(std::move(index), index_type, segment->id(), pk_idx->num_rows());
_heap->push(&_contexts.back());
}
if (_rowid_length > 0) {
_rowid_coder = get_key_coder(
get_scalar_type_info<FieldType::OLAP_FIELD_TYPE_UNSIGNED_INT>()->type());
}
return Status::OK();
}

@@ -159,6 +170,15 @@ Status MergeIndexDeleteBitmapCalculator::calculate_one(RowLocation& loc) {
if (!_last_key.empty() && _comparator.is_key_same(cur_key, _last_key)) {
loc.segment_id = cur_ctx->segment_id();
loc.row_id = cur_ctx->row_id();
if (_rowid_length > 0) {
Slice key_without_seq = Slice(cur_key.get_data(),
cur_key.get_size() - _seq_col_length - _rowid_length);
Slice rowid_slice =
Slice(cur_key.get_data() + key_without_seq.get_size() + _seq_col_length + 1,
_rowid_length - 1);
RETURN_IF_ERROR(_rowid_coder->decode_ascending(&rowid_slice, _rowid_length,
(uint8_t*)&loc.row_id));
}
auto st = cur_ctx->advance();
if (st.ok()) {
_heap->push(cur_ctx);
@@ -176,8 +196,9 @@ Status MergeIndexDeleteBitmapCalculator::calculate_one(RowLocation& loc) {
RETURN_IF_ERROR(nxt_ctx->get_current_key(nxt_key));
Status st = _comparator.is_key_same(cur_key, nxt_key)
? cur_ctx->advance()
: cur_ctx->seek_at_or_after(Slice(
nxt_key.get_data(), nxt_key.get_size() - _seq_col_length));
: cur_ctx->seek_at_or_after(
Slice(nxt_key.get_data(),
nxt_key.get_size() - _seq_col_length - _rowid_length));
if (st.is<ErrorCode::END_OF_FILE>()) {
continue;
}
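The slicing arithmetic above implies an index-entry layout for cluster-key tables in which an optional sequence column and an encoded row id are appended after the key itself, each preceded by a one-byte marker; that is why the code steps past key_without_seq.get_size() + 1 and decodes only length - 1 value bytes. A sketch of the assumed layout, with the row-id decode mirroring calculate_one above:

// Assumed layout of one primary-key index entry (sizes as the comparator uses them):
//   [ key bytes ........................ ]  total - _sequence_length - _rowid_length
//   [ 1-byte marker | sequence value ... ]  _sequence_length bytes, present if > 0
//   [ 1-byte marker | encoded row id ... ]  _rowid_length bytes, present if > 0
//
// The row id is written with the OLAP_FIELD_TYPE_UNSIGNED_INT key coder, so a
// uint32_t comes back out; `key` is a full entry and rowid_length > 0 here.
Slice rowid_slice(key.get_data() + key.get_size() - rowid_length + 1, rowid_length - 1);
uint32_t row_id = 0;
RETURN_IF_ERROR(rowid_coder->decode_ascending(&rowid_slice, rowid_length, (uint8_t*)&row_id));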
11 changes: 8 additions & 3 deletions be/src/olap/delete_bitmap_calculator.h
@@ -29,6 +29,7 @@
#include "olap/base_tablet.h"
#include "olap/binlog_config.h"
#include "olap/data_dir.h"
#include "olap/key_coder.h"
#include "olap/olap_common.h"
#include "olap/rowset/rowset.h"
#include "olap/rowset/rowset_meta.h"
@@ -47,13 +48,15 @@ class MergeIndexDeleteBitmapCalculatorContext {
public:
class Comparator {
public:
Comparator(size_t sequence_length) : _sequence_length(sequence_length) {}
Comparator(size_t sequence_length, size_t rowid_length)
: _sequence_length(sequence_length), _rowid_length(rowid_length) {}
bool operator()(MergeIndexDeleteBitmapCalculatorContext* lhs,
MergeIndexDeleteBitmapCalculatorContext* rhs) const;
bool is_key_same(Slice const& lhs, Slice const& rhs) const;

private:
size_t _sequence_length;
size_t _rowid_length;
};

MergeIndexDeleteBitmapCalculatorContext(std::unique_ptr<segment_v2::IndexedColumnIterator> iter,
Expand Down Expand Up @@ -90,7 +93,7 @@ class MergeIndexDeleteBitmapCalculator {
MergeIndexDeleteBitmapCalculator() = default;

Status init(RowsetId rowset_id, std::vector<SegmentSharedPtr> const& segments,
size_t seq_col_length = 0, size_t max_batch_size = 1024);
size_t seq_col_length = 0, size_t rowid_length = 0, size_t max_batch_size = 1024);

Status calculate_one(RowLocation& loc);

@@ -101,11 +104,13 @@
std::vector<MergeIndexDeleteBitmapCalculatorContext*>,
MergeIndexDeleteBitmapCalculatorContext::Comparator>;
std::vector<MergeIndexDeleteBitmapCalculatorContext> _contexts;
MergeIndexDeleteBitmapCalculatorContext::Comparator _comparator {0};
MergeIndexDeleteBitmapCalculatorContext::Comparator _comparator {0, 0};
RowsetId _rowset_id;
std::unique_ptr<Heap> _heap;
std::string _last_key;
size_t _seq_col_length;
size_t _rowid_length;
const KeyCoder* _rowid_coder = nullptr;
};

} // namespace doris
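A hypothetical call site under the new signature; the only new obligation is passing the encoded row-id width, zero for ordinary merge-on-write tables (preserving the old behavior) and non-zero for cluster-key tables:

// The width is an assumption: 1 marker byte + 4 bytes for the
// OLAP_FIELD_TYPE_UNSIGNED_INT encoding used in init().
size_t rowid_length = tablet_schema->cluster_key_idxes().empty() ? 0 : 5;
MergeIndexDeleteBitmapCalculator calculator;
RETURN_IF_ERROR(calculator.init(rowset_id, segments, seq_col_length, rowid_length));
RowLocation loc;
while (calculator.calculate_one(loc).ok()) {
    // Each returned loc is an older duplicate of some key; mark it deleted.
    delete_bitmap->add({loc.rowset_id, loc.segment_id, version}, loc.row_id);
}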