
GH-40431: [C++] Move key_hash/key_map/light_array related files to internal for prevent using by users #40484

Merged · pitrou merged 4 commits into apache:main on Apr 2, 2024

Conversation

@ZhangHuiGui (Collaborator) commented Mar 12, 2024

Rationale for this change

These files expose implementation details and APIs that are not meant for third-party use. This PR explicitly marks them internal, which also avoids having them installed.

Are these changes tested?

By existing builds and tests.

Are there any user-facing changes?

No, except hiding some header files that were not supposed to be included externally.

@ZhangHuiGui (Collaborator Author)

cc @kou, this is a temporary fix and I haven't added unit tests yet. PTAL?

Comment on lines 480 to 481
const int64_t alloc_size1 =
2 * (alloc_entry_length * sizeof(uint32_t) + util::TempVectorStack::meta_size());
Member:

Is this for hash_temp_buf and null_hash_temp_buf?

Collaborator Author:

Yes

Comment on lines 482 to 483
const int64_t alloc_size2 =
alloc_entry_length * sizeof(uint16_t) + util::TempVectorStack::meta_size();
Member:

Is this for null_indices_buf?

Collaborator Author:

Yes

Comment on lines 477 to 485
const int64_t alloc_entry_length = column_arrays[0].length();
auto estimate_size = [&] {
// An estimate TempVectorStack usage size for Hashing32::HashMultiColumm.
const int64_t alloc_size1 =
2 * (alloc_entry_length * sizeof(uint32_t) + util::TempVectorStack::meta_size());
const int64_t alloc_size2 =
alloc_entry_length * sizeof(uint16_t) + util::TempVectorStack::meta_size();
return alloc_size1 + alloc_size2;
};
Member:

Why do you want to do this in HashBatch() not HashMultiColumn() (that has a real allocation logic)?

Collaborator Author:

Yes, the code here was unreasonable. It was temporary and has since been refactored.

if (!temp_stack) {
util::TempVectorStack stack;
RETURN_NOT_OK(stack.Init(default_memory_pool(), estimate_size()));
ctx.stack = std::move(&stack);
Member:

Is this safe?
I think that stack is invalid outside of this block.

Collaborator Author:

Oh... I lost my mind!
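
For reference, a hedged annotation of why the snippet above is unsafe (simplified; the later revision in this PR declares the fallback stack outside the if so it outlives its users):

if (!temp_stack) {
  util::TempVectorStack stack;   // local object: destroyed at the closing brace
  RETURN_NOT_OK(stack.Init(default_memory_pool(), estimate_size()));
  ctx.stack = &stack;            // std::move on a raw pointer is a no-op anyway
}                                // `stack` dies here...
// ...so any later use of ctx.stack dereferences a dangling pointer (undefined behavior).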

return alloc_size1 + alloc_size2;
};

if (!temp_stack) {
Member:

You want to pass nullptr for temp_stack for your use case, right?

Collaborator Author:

Yes, I think if users want to use HashBatch as an independent API, they shouldn't need to care about the stack size.
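
For illustration, a hedged sketch of that use case, based on the HashBatch call in the test added later in this PR (argument names such as temp_stack and start_row are assumptions): passing nullptr lets HashBatch size and allocate its own scratch stack.

std::vector<uint32_t> hashes(batch_size);
std::vector<arrow::compute::KeyColumnArray> temp_column_arrays;
ARROW_RETURN_NOT_OK(arrow::compute::Hashing32::HashBatch(
    exec_batch, hashes.data(), temp_column_arrays,
    ctx->cpu_info()->hardware_flags(),
    /*temp_stack=*/nullptr,  // HashBatch allocates an internal stack
    /*start_row=*/0, batch_size));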

<< temp_stack->buffer_size() << "Bytes, expect " << estimate_alloc_size
<< "Bytes)";
ctx.stack = temp_stack;
}
Member:

We have similar code in Hashing64::HashBatch(). Can we avoid the duplication?

@@ -97,6 +97,9 @@ class ARROW_EXPORT TempVectorStack {
return Status::OK();
}

const int64_t buffer_size() const { return buffer_size_; }
Member:

Suggested change
const int64_t buffer_size() const { return buffer_size_; }
int64_t buffer_size() const { return buffer_size_; }

@@ -97,6 +97,9 @@ class ARROW_EXPORT TempVectorStack {
return Status::OK();
}

const int64_t buffer_size() const { return buffer_size_; }
static int64_t meta_size() { return kPadding + 2 * sizeof(uint64_t); }
Member:

Can we provide RequiredSize(), EstimateSize() or something instead of providing this?
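
A minimal sketch of such a helper, consistent with the EstimateAllocSize() used in later revisions of this PR (the body mirrors the Init() arithmetic quoted further down):

// One public entry point for "how many stack bytes does an allocation of
// `size` bytes really consume", instead of exposing meta_size().
static int64_t EstimateAllocSize(int64_t size) {
  return PaddedAllocationSize(size) + 2 * sizeof(uint64_t);
}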

@@ -89,16 +89,23 @@ class ARROW_EXPORT TempVectorStack {
Status Init(MemoryPool* pool, int64_t size) {
num_vectors_ = 0;
top_ = 0;
buffer_size_ = PaddedAllocationSize(size) + kPadding + 2 * sizeof(uint64_t);
Collaborator Author:

We have already added kPadding in PaddedAllocationSize(), so it's unnecessary to add it again.

@ZhangHuiGui (Collaborator Author)

@kou how about this refactor?

Comment on lines 386 to 387
const uint32_t alloc_batch_size = std::min(num_rows, max_batch_size);
const int64_t estimate_alloc_size = EstimateBatchStackSize<uint32_t>(alloc_batch_size);
Member:

Can we use auto here?

Suggested change
const uint32_t alloc_batch_size = std::min(num_rows, max_batch_size);
const int64_t estimate_alloc_size = EstimateBatchStackSize<uint32_t>(alloc_batch_size);
const auto alloc_batch_size = std::min(num_rows, max_batch_size);
const auto estimate_alloc_size = EstimateBatchStackSize<uint32_t>(alloc_batch_size);

Collaborator Author:

done

util::TempVectorStack temp_stack;
if (!ctx->stack) {
ARROW_CHECK_OK(temp_stack.Init(default_memory_pool(), estimate_alloc_size));
ctx->stack = &temp_stack;
Member:

Could you set ctx->stack back to nullptr before this function exits?

@@ -472,6 +483,7 @@ Status Hashing32::HashBatch(const ExecBatch& key_batch, uint32_t* hashes,
LightContext ctx;
ctx.hardware_flags = hardware_flags;
ctx.stack = temp_stack;

Member:

Could you revert a needless change?

@@ -35,7 +35,7 @@ void TempVectorStack::alloc(uint32_t num_bytes, uint8_t** data, int* id) {
int64_t new_top = top_ + PaddedAllocationSize(num_bytes) + 2 * sizeof(uint64_t);
// Stack overflow check (see GH-39582).
Member:

Could you move this comment to CheckAllocSizeValid()?

Collaborator Author:

done

@@ -58,6 +58,13 @@ void TempVectorStack::release(int id, uint32_t num_bytes) {
--num_vectors_;
}

void TempVectorStack::CheckAllocSizeValid(int64_t estimate_alloc_size) {
Member:

Could you return arrow::Status instead of void here?

TempVectorStack::alloc() will not be able to use it for now but Hashing32::HashBatch() can use it.

Collaborator Author:

Yes, reasonable. Refactored!
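
A hedged sketch of the Status-returning variant (the member name buffer_size_ and the message wording follow snippets quoted later in this review; the function was later renamed CheckAllocationOverflow in this PR):

// Callers such as Hashing32::HashBatch() can propagate this Status, while
// TempVectorStack::alloc() keeps its existing hard check for now.
Status TempVectorStack::CheckAllocSizeValid(int64_t alloc_size) {
  if (alloc_size > buffer_size_) {
    return Status::Invalid("TempVectorStack allocation overflow: capacity ",
                           buffer_size_, ", attempted allocation ", alloc_size);
  }
  return Status::OK();
}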

@@ -58,6 +58,13 @@ void TempVectorStack::release(int id, uint32_t num_bytes) {
--num_vectors_;
}

void TempVectorStack::CheckAllocSizeValid(int64_t estimate_alloc_size) {
ARROW_DCHECK_LE(estimate_alloc_size, buffer_size_)
Member:

I think we should receive the additional allocation size instead of the total new allocation size here:

Suggested change
ARROW_DCHECK_LE(estimate_alloc_size, buffer_size_)
ARROW_DCHECK_LE(top_ + alloc_size, buffer_size_)

@@ -89,16 +89,23 @@ class ARROW_EXPORT TempVectorStack {
Status Init(MemoryPool* pool, int64_t size) {
num_vectors_ = 0;
top_ = 0;
buffer_size_ = PaddedAllocationSize(size) + kPadding + 2 * sizeof(uint64_t);
buffer_size_ = PaddedAllocationSize(size) + 2 * sizeof(uint64_t);
Member:

Can we use EstimateAllocSize() here?

return PaddedAllocationSize(size) + 2 * sizeof(uint64_t);
}

int64_t StackBufferSize() const { return buffer_size_; }
Member:

Do we need this?

@@ -311,5 +311,32 @@ TEST(VectorHash, FixedLengthTailByteSafety) {
HashFixedLengthFrom(/*key_length=*/19, /*num_rows=*/64, /*start_row=*/63);
}

TEST(HashBatch, AllocTempStackAsNeeded) {
auto arr = arrow::ArrayFromJSON(arrow::int32(), "[9,2,6]");
const int32_t batch_size = static_cast<int32_t>(arr->length());
Member:

Suggested change
const int32_t batch_size = static_cast<int32_t>(arr->length());
const auto batch_size = static_cast<int32_t>(arr->length());

@@ -219,5 +219,24 @@ class ARROW_EXPORT Hashing64 {
const uint8_t* keys, uint64_t* hashes);
};

template <typename T = uint32_t>
static int64_t EstimateBatchStackSize(int32_t batch_size) {
Member:

Do we need to export this?

Collaborator Author:

Actually, I wanted to unify the logic in Hashing32 and Hashing64, but it seems unnecessary.
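
For reference, a hedged sketch of what that unified estimator could have looked like (the per-buffer arithmetic mirrors the Hashing32 path quoted later in this thread; T would be uint32_t for Hashing32 and uint64_t for Hashing64):

// Sketch only: the PR ultimately keeps the estimate local instead of
// exporting a shared helper.
template <typename T = uint32_t>
static int64_t EstimateBatchStackSize(int32_t batch_size) {
  const auto hash_buf_size =
      util::TempVectorStack::EstimateAllocSize(batch_size * sizeof(T));
  const auto null_indices_buf_size =
      util::TempVectorStack::EstimateAllocSize(batch_size * sizeof(uint16_t));
  // Two hash buffers (hashes + null hashes) plus the null-indices buffer.
  return 2 * hash_buf_size + null_indices_buf_size;
}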

@ZhangHuiGui (Collaborator Author)

@kou Thank you for your review!
@westonpace PTAL? This is a refactoring job to let users who need it make better use of the HashBatch-related API.

@mapleFU (Member) commented Mar 26, 2024

cc @zanmato1984 if you're interested in this

@zanmato1984 (Collaborator) left a comment

The idea of generalizing the hashing APIs is nice. Some suggestions.

// XXX cannot return a regular Status because most consumers do not either.
ARROW_CHECK_LE(new_top, buffer_size_) << "TempVectorStack::alloc overflow";
ARROW_DCHECK_OK(CheckAllocOverflow(estimate_size));
Collaborator:

We probably should use ARROW_CHECK_OK here?

Collaborator Author:

Yes, you're right. ARROW_DCHECK_OK doesn't seem to work in NDEBUG mode.
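
For illustration (hedged; behavior as described in the reply above): in release (NDEBUG) builds the DCHECK variant is not checked, so the overflow Status would be silently dropped, whereas the CHECK variant is always enforced.

ARROW_DCHECK_OK(CheckAllocOverflow(estimate_size));  // not checked in NDEBUG builds
ARROW_CHECK_OK(CheckAllocOverflow(estimate_size));   // always evaluated and enforced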

private:
int64_t PaddedAllocationSize(int64_t num_bytes) {
static int64_t PaddedAllocationSize(int64_t num_bytes) {
Collaborator:

Maybe we can align all the function names to either XxxAllocSize or XxxAllocationSize?

Comment on lines 385 to 386
constexpr uint32_t max_batch_size = util::MiniBatch::kMiniBatchLength;
const auto alloc_batch_size = std::min(num_rows, max_batch_size);
Collaborator:

Maybe it would be clearer if we combined these two lines into const uint32_t max_batch_size = std::min(num_rows, util::MiniBatch::kMiniBatchLength);.

Comment on lines 390 to 394
const auto alloc_hash_temp_buf =
util::TempVectorStack::EstimateAllocSize(alloc_batch_size * sizeof(uint32_t));
const auto alloc_for_null_indices_buf =
util::TempVectorStack::EstimateAllocSize(alloc_batch_size * sizeof(uint16_t));
const auto alloc_size = alloc_hash_temp_buf * 2 + alloc_for_null_indices_buf;
Collaborator:

The scope of these three variables doesn't have to be this function, right? We can put them into the if statement below?

Collaborator Author:

Actually, alloc_size is used in both the if and else branches, so it's not suitable to move these three variables into the if statement.

Collaborator:

You are right, thanks.

util::TempVectorStack::EstimateAllocSize(alloc_batch_size * sizeof(uint16_t));
const auto alloc_size = alloc_hash_temp_buf * 2 + alloc_for_null_indices_buf;

std::shared_ptr<util::TempVectorStack> temp_stack(nullptr);
@zanmato1984 (Collaborator) commented Mar 26, 2024

Suggested change
std::shared_ptr<util::TempVectorStack> temp_stack(nullptr);
auto stack = ctx->stack;
std::unique_ptr<util::TempVectorStack> temp_stack(nullptr);

Collaborator:

The point is that you don't really have to set the temp_stack pointer on the ctx. A regular local variable will do, so you don't have to clear ctx->stack at the end.

Collaborator Author:

Yes, totally agree!
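
A hedged sketch of the suggested pattern (matching the revised code quoted later in this thread): resolve the stack to use into a plain local, so ctx->stack is never modified and nothing has to be reset on exit.

util::TempVectorStack* stack = ctx->stack;
std::unique_ptr<util::TempVectorStack> temp_stack;  // owns the fallback, if any
if (stack == nullptr) {
  temp_stack = std::make_unique<util::TempVectorStack>();
  RETURN_NOT_OK(temp_stack->Init(default_memory_pool(), alloc_size));
  stack = temp_stack.get();
} else {
  RETURN_NOT_OK(stack->CheckAllocOverflow(alloc_size));
}
// ...pass `stack` to all TempVectorHolder allocations below...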

RETURN_NOT_OK(temp_stack->Init(default_memory_pool(), alloc_size));
ctx->stack = temp_stack.get();
} else {
RETURN_NOT_OK(ctx->stack->CheckAllocOverflow(alloc_size));
Collaborator:

Suggested change
RETURN_NOT_OK(ctx->stack->CheckAllocOverflow(alloc_size));
RETURN_NOT_OK(stack->CheckAllocOverflow(alloc_size));


auto hash_temp_buf = util::TempVectorHolder<uint32_t>(ctx->stack, max_batch_size);
auto hash_temp_buf = util::TempVectorHolder<uint32_t>(ctx->stack, alloc_batch_size);
Collaborator:

Suggested change
auto hash_temp_buf = util::TempVectorHolder<uint32_t>(ctx->stack, alloc_batch_size);
auto hash_temp_buf = util::TempVectorHolder<uint32_t>(stack, alloc_batch_size);

uint32_t* hash_temp = hash_temp_buf.mutable_data();

auto null_indices_buf = util::TempVectorHolder<uint16_t>(ctx->stack, max_batch_size);
auto null_indices_buf = util::TempVectorHolder<uint16_t>(ctx->stack, alloc_batch_size);
Collaborator:

Suggested change
auto null_indices_buf = util::TempVectorHolder<uint16_t>(ctx->stack, alloc_batch_size);
auto null_indices_buf = util::TempVectorHolder<uint16_t>(stack, alloc_batch_size);

uint16_t* null_indices = null_indices_buf.mutable_data();
int num_null_indices;

auto null_hash_temp_buf = util::TempVectorHolder<uint32_t>(ctx->stack, max_batch_size);
auto null_hash_temp_buf =
util::TempVectorHolder<uint32_t>(ctx->stack, alloc_batch_size);
Collaborator:

Suggested change
util::TempVectorHolder<uint32_t>(ctx->stack, alloc_batch_size);
util::TempVectorHolder<uint32_t>(stack, alloc_batch_size);

Comment on lines 483 to 485
if (temp_stack) {
ctx->stack = nullptr;
}
Collaborator:

Suggested change
if (temp_stack) {
ctx->stack = nullptr;
}


std::shared_ptr<util::TempVectorStack> temp_stack(nullptr);
if (!ctx->stack) {
temp_stack = std::make_shared<util::TempVectorStack>();
Collaborator:

Suggested change
temp_stack = std::make_shared<util::TempVectorStack>();
temp_stack = std::make_unique<util::TempVectorStack>();

@ZhangHuiGui (Collaborator Author)

@zanmato1984 Thank you very much for your suggestions; the code looks clearer!

Status Hashing32::HashMultiColumn(const std::vector<KeyColumnArray>& cols,
LightContext* ctx, uint32_t* hashes) {
auto num_rows = static_cast<uint32_t>(cols[0].length());
const auto alloc_batch_size =
Collaborator:

I would suggest keeping the name max_batch_size. It carries the meaning of how many rows to process in each iteration. In addition, this name is used everywhere in hash-join-related code, so keeping it is more consistent with the existing code base. Last, it doesn't seem to be in the same category as the following three alloc-family variables: we can think of any alloc variable as existing solely to make sure the stack is large enough.


// alloc stack overflow in HashBatch
ASSERT_OK(stack.Init(default_memory_pool(), batch_size));
ASSERT_NOT_OK(arrow::compute::Hashing32::HashBatch(
Collaborator:

Maybe we can use ASSERT_RAISES_WITH_MESSAGE to check the detailed error message.

Collaborator Author:

Yes, I've considered this. But the message contains specific numbers tied to the internal alloc size, which is not convenient for future maintenance (for example, if some variables that require stack allocation are removed from HashMultiColumn, this test would need to be modified).

Collaborator:

Yeah, you are right. Thanks.

@zanmato1984 (Collaborator) left a comment

Some minor suggestions.

auto ctx = arrow::compute::default_exec_context();
std::vector<arrow::compute::KeyColumnArray> temp_column_arrays;

// alloc stack by HashBatch internal
Collaborator:

Suggested change
// alloc stack by HashBatch internal
// HashBatch using internally allocated buffer.

util::TempVectorStack stack;
std::vector<uint32_t> h2(batch_size);

// alloc stack overflow in HashBatch
Collaborator:

Suggested change
// alloc stack overflow in HashBatch
// HashBatch using pre-allocated buffer of insufficient size raises stack overflow.

exec_batch, h2.data(), temp_column_arrays, ctx->cpu_info()->hardware_flags(),
&stack, 0, batch_size));

// alloc stack normally in HashBatch
Collaborator:

Suggested change
// alloc stack normally in HashBatch
// HashBatch using big enough pre-allocated buffer.

@zanmato1984 (Collaborator) left a comment

My one last suggestion :)

Comment on lines 64 to 65
return Status::Invalid("TempVectorStack alloc overflow. (Actual ", buffer_size_,
"Bytes, expect ", alloc_size, "Bytes)");
Collaborator:

Suggested change
return Status::Invalid("TempVectorStack alloc overflow. (Actual ", buffer_size_,
"Bytes, expect ", alloc_size, "Bytes)");
return Status::Invalid("TempVectorStack allocation overflow: capacity ", buffer_size_, ", current size ", top, ", attempt allocating ", alloc_size);


@ZhangHuiGui (Collaborator Author)

> My one last suggestion :)

Thanks!

Comment on lines 400 to 405
temp_stack = std::make_unique<util::TempVectorStack>();
RETURN_NOT_OK(temp_stack->Init(default_memory_pool(), alloc_size));
stack = temp_stack.get();
} else {
RETURN_NOT_OK(stack->CheckAllocationOverflow(alloc_size));
}
Contributor:

If there is a possibility that ctx->stack is nullptr, then it's better to declare a TempVectorStack* parameter explicitly so the caller can allocate a stack with the right memory pool, instead of this function internally relying on the global default_memory_pool(). Most calls would pass ctx, ctx->stack, except for the ones that for some reason don't have a stack in the context.

Collaborator Author:

Thanks, you're right. HashBatch is used internally the way you said!
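
A hedged sketch of the calling convention pitrou describes (the pool, sizes, and argument order are assumptions based on the test shown earlier): the caller initializes the stack from its own memory pool and passes it in explicitly.

arrow::MemoryPool* pool = query_memory_pool;  // hypothetical: the pool this operator uses
util::TempVectorStack stack;
ARROW_RETURN_NOT_OK(stack.Init(pool, estimated_stack_size));
ARROW_RETURN_NOT_OK(arrow::compute::Hashing32::HashBatch(
    exec_batch, hashes.data(), temp_column_arrays, hardware_flags,
    &stack, /*start_row=*/0, num_rows));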

@ZhangHuiGui ZhangHuiGui changed the title GH-40431: [C++] Try to check/alloc the TempVectorStack size as HashBatch needed GH-40431: [C++] Move key_hash/key_map/light_array related files to internal for prevent using by users Mar 30, 2024
@ZhangHuiGui (Collaborator Author)

@pitrou PTAL!
The new commit includes two things:

  1. Move these files (key_hash/key_map/light_array) to internal. Besides, it seems unnecessary to put them in an internal namespace; the purpose is just to prevent users from calling them.
  2. A simple refactor to simplify some code in TempVectorStack.

@westonpace (Member) left a comment

I agree these files were meant to be internal. Thanks for cleaning this up :)

@pitrou (Member) commented Apr 2, 2024

@github-actions crossbow submit -g cpp

cpp/src/arrow/acero/schema_util.h (outdated review thread, resolved)

This comment was marked as outdated.

@pitrou (Member) commented Apr 2, 2024

@github-actions crossbow submit -g cpp

This comment was marked as outdated.

@pitrou (Member) commented Apr 2, 2024

@github-actions crossbow submit -g cpp

github-actions bot commented Apr 2, 2024

Revision: 0722f88

Submitted crossbow builds: ursacomputing/crossbow @ actions-5141a1fc14

Task Status
test-alpine-linux-cpp GitHub Actions
test-build-cpp-fuzz GitHub Actions
test-conda-cpp GitHub Actions
test-conda-cpp-valgrind Azure
test-cuda-cpp GitHub Actions
test-debian-12-cpp-amd64 GitHub Actions
test-debian-12-cpp-i386 GitHub Actions
test-fedora-39-cpp GitHub Actions
test-ubuntu-20.04-cpp GitHub Actions
test-ubuntu-20.04-cpp-bundled GitHub Actions
test-ubuntu-20.04-cpp-minimal-with-formats GitHub Actions
test-ubuntu-20.04-cpp-thread-sanitizer GitHub Actions
test-ubuntu-22.04-cpp GitHub Actions
test-ubuntu-22.04-cpp-20 GitHub Actions
test-ubuntu-22.04-cpp-no-threading GitHub Actions
test-ubuntu-24.04-cpp GitHub Actions
test-ubuntu-24.04-cpp-gcc-14 GitHub Actions

@pitrou (Member) commented Apr 2, 2024

CI failures are unrelated, I'll merge.

@pitrou pitrou merged commit 8163d02 into apache:main Apr 2, 2024
34 of 35 checks passed

After merging your PR, Conbench analyzed the 6 benchmarking runs that have been run so far on merge-commit 8163d02.

There were no benchmark performance regressions. 🎉

The full Conbench report has more details. It also includes information about 14 possible false positives for unstable benchmarks that are known to sometimes produce them.

tolleybot pushed a commit to tmct/arrow that referenced this pull request May 2, 2024
… to internal for prevent using by users (apache#40484)


* GitHub Issue: apache#40431

Lead-authored-by: ZhangHuiGui <hugo.zhang@openpie.com>
Co-authored-by: Antoine Pitrou <antoine@python.org>
Signed-off-by: Antoine Pitrou <antoine@python.org>
vibhatha pushed a commit to vibhatha/arrow that referenced this pull request May 25, 2024