Conversation

@bernhardmgruber (Contributor) commented Dec 8, 2025

Fixes: #6919

  • CUB transform tests pass
  • CCCL.C transform tests pass
  • Rebase on top of "Implement cuda::__all_arch_ids and cuda::__is_specific_arch" (#6916)
  • Update benchmarks
  • No SASS difference for cub.bench.transform.babelstream.base
  • No SASS difference for cub.test.device.transform.lid_0
  • Compile-time comparison between before and after this PR (including clang as CUDA compiler, because it does not define __CUDA_ARCH_LIST__)

Compile time of cub.test.device.transform.lid_0 for sm86 and sm120, using nvcc 13.1 and clang 20.

TODO: outdated

Using nvcc 13.1:

branch:
2m8.741s
2m7.726s
2m7.949s

main:
2m7.661s
2m6.072s
2m9.804s

Using clang 20 in CUDA mode:

branch:
real 1m40.627s
real 1m40.675s
real 1m40.912s

main:
real 1m39.273s
real 1m39.669s
real 1m39.835s

@copy-pr-bot (bot) commented Dec 8, 2025

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.


@cccl-authenticator-app cccl-authenticator-app bot moved this from Todo to In Progress in CCCL Dec 8, 2025
Comment on lines 200 to 205
auto make_iterator_info(cccl_iterator_t input_it) -> cdt::iterator_info
{
return {static_cast<int>(input_it.value_type.size),
static_cast<int>(input_it.value_type.alignment),
/* trivially_relocatable */ true, // TODO(bgruber): how to check this properly?
input_it.type == CCCL_POINTER}; // TODO(bgruber): how to check this properly?
Contributor (Author):

I would appreciate some cccl.c maintainer input here. How do I know whether the iterator's value type is trivially relocatable and whether the iterator is contiguous?
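
For reference, a minimal C++-side sketch of the two properties in question (illustrative only; it does not answer how to derive them from a runtime cccl_iterator_t description):

    #include <thrust/type_traits/is_contiguous_iterator.h>
    #include <thrust/type_traits/is_trivially_relocatable.h>

    // For a plain pointer both properties hold, which is what the
    // `input_it.type == CCCL_POINTER` check above approximates.
    static_assert(thrust::is_trivially_relocatable<float>::value, "");
    static_assert(thrust::is_contiguous_iterator<float*>::value, "");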

Comment on lines +635 to +637
std::unique_ptr<arch_policies<1>> rtp(static_cast<arch_policies<1>*>(build_ptr->runtime_policy)); // FIXME(bgruber):
// handle <2> as
// well
Contributor (Author):

Is there any way in this function to distinguish whether we build the unary or binary transform?

@bernhardmgruber bernhardmgruber marked this pull request as ready for review December 9, 2025 07:44
@bernhardmgruber bernhardmgruber requested review from a team as code owners December 9, 2025 07:44
@cccl-authenticator-app cccl-authenticator-app bot moved this from In Progress to In Review in CCCL Dec 9, 2025

_CCCL_API constexpr int get_block_threads_helper()
{
if constexpr (ActivePolicy::algorithm == Algorithm::prefetch)
constexpr transform_arch_policy policy = ArchPolicies{}(::cuda::arch_id{CUB_PTX_ARCH / 10});
Contributor:

I hate the arcane / 10 here with a passion

Contributor (Author):

I would love to call ::cuda::current_arch_id() but it's not constexpr on NVHPC by design.
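
A minimal sketch of how the conversion could be given a name (hypothetical helper, not part of this PR), assuming CUB_PTX_ARCH encodes sm_86 as 860 while cuda::arch_id expects 86:

    // Hypothetical helper: keeps the magic "/ 10" in one documented place.
    _CCCL_API constexpr ::cuda::arch_id arch_id_from_ptx_arch(int ptx_arch)
    {
      return ::cuda::arch_id{ptx_arch / 10};
    }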

Comment on lines +986 to +1010
#if _CCCL_HAS_CONCEPTS()
requires transform_policy_hub<ArchPolicies>
#endif // _CCCL_HAS_CONCEPTS()
Contributor:

Nitpick: I believe we should either use the concept emulation or plain SFINAE in C++17 too

Contributor (Author):

Hmm. We could also static_assert, but ArchPolicies is already used in the kernel attributes before we reach the body. And using a static_assert would only be evaluated in the device path.

How would I write that using concept emulation and have the concept check before the __launch_bounds__?
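
One possible C++17 shape (illustrative only; is_transform_policy_hub_v is a hypothetical stand-in for the concept): the check sits in the template parameter list, so it is part of substitution and precedes the __launch_bounds__ on the declaration:

    #include <cuda/std/type_traits>

    // Hypothetical trait standing in for the transform_policy_hub concept.
    template <typename T>
    inline constexpr bool is_transform_policy_hub_v = /* concept emulation check */ true;

    // The enable_if lives in the template head, so it is evaluated during
    // substitution, before __launch_bounds__ is instantiated for the kernel.
    template <typename ArchPolicies, int BlockThreads,
              ::cuda::std::enable_if_t<is_transform_policy_hub_v<ArchPolicies>, int> = 0>
    __global__ void __launch_bounds__(BlockThreads) transform_kernel(/* kernel parameters */);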

Comment on lines 358 to 386
bool all_inputs_contiguous = true;
bool all_input_values_trivially_reloc = true;
bool can_memcpy_contiguous_inputs = true;
bool all_value_types_have_power_of_two_size = ::cuda::is_power_of_two(output.value_type_size);
for (const auto& input : inputs)
{
all_inputs_contiguous &= input.is_contiguous;
all_input_values_trivially_reloc &= input.value_type_is_trivially_relocatable;
// the vectorized kernel supports mixing contiguous and non-contiguous iterators
can_memcpy_contiguous_inputs &= !input.is_contiguous || input.value_type_is_trivially_relocatable;
all_value_types_have_power_of_two_size &= ::cuda::is_power_of_two(input.value_type_size);
}
Contributor:

Nitpick: While it is technically more efficient, I believe it would improve readability if we did

    const bool all_inputs_contiguous = ::cuda::std::all_of(inputs.begin(), inputs.end(), [](const auto& input) { return input.is_contiguous; });
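
Spelled out for all four flags (same idea; assumes the inputs range and field names from the snippet above):

    const bool all_inputs_contiguous = ::cuda::std::all_of(
      inputs.begin(), inputs.end(), [](const auto& in) { return in.is_contiguous; });
    const bool all_input_values_trivially_reloc = ::cuda::std::all_of(
      inputs.begin(), inputs.end(), [](const auto& in) { return in.value_type_is_trivially_relocatable; });
    // the vectorized kernel supports mixing contiguous and non-contiguous iterators
    const bool can_memcpy_contiguous_inputs = ::cuda::std::all_of(
      inputs.begin(), inputs.end(), [](const auto& in) { return !in.is_contiguous || in.value_type_is_trivially_relocatable; });
    const bool all_value_types_have_power_of_two_size =
      ::cuda::is_power_of_two(output.value_type_size)
      && ::cuda::std::all_of(inputs.begin(), inputs.end(),
                             [](const auto& in) { return ::cuda::is_power_of_two(in.value_type_size); });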


@bernhardmgruber (Contributor, Author) commented Dec 11, 2025

I see tiny changes in the generated SASS for cub.bench.transform.babelstream.base, notably in the filling kernels (no inputs) for complex<float>. The compiler now generates STG.E.ENL2.256, which it didn't do before.

The fill kernel for int128 seems to have degraded from generating STG.E.128 to many more STG.E instructions.

All kernels with a functor marked as __callable_permitting_copied_arguments show no changes. That's good.

It feels a bit like the items per thread changed for the fill kernels.

@bernhardmgruber (Contributor, Author):

It feels a bit like the items per thread changed for the fill kernels.

They did. Before, there was a tuning policy for sm_120 that was not taken into account :D This PR now uses it.

@bernhardmgruber (Contributor, Author):

I disabled the sm120 fill policy and now the only SASS diff for filling is on:

void cub::_V_300300_SM_1200::detail::transform::transform_kernel<cub::_V_300300_SM_1200::detail::transform::policy_hub<false, true, cuda::std::__4::tuple<cuda::__4::counting_iterator<long, 0, 0>>, unsigned long*>::policy1000, long, cub::_V_300300_SM_1200::detail::transform::always_true_predicate, cuda::__4::__callable_permitting_copied_arguments<(anonymous namespace)::lognormal_adjust_t<unsigned long>>, unsigned long*, cuda::__4::counting_iterator<long, 0, 0>>(long, int, bool, cub::_V_300300_SM_1200::detail::transform::always_true_predicate, cuda::__4::__callable_permitting_copied_arguments<(anonymous namespace)::lognormal_adjust_t<unsigned long>>, unsigned long*, cub::_V_300300_SM_1200::detail::transform::kernel_arg<cuda::__4::counting_iterator<long, 0, 0>>)

which is a thrust::tabulate of a counting_iterator<long> and an unsigned long*.

@gonidelis gonidelis self-requested a review December 11, 2025 16:44
@bernhardmgruber (Contributor, Author):

Found the final issue with the fill kernels. Disabled the vectorized tunings when we have input streams (they were tuned for output only use cases). SASS of cub.bench.transform.fill.base now matches baseline on sm120.

@bernhardmgruber bernhardmgruber requested a review from a team as a code owner December 11, 2025 21:48
@bernhardmgruber bernhardmgruber requested review from jrhemstad and removed request for jrhemstad December 11, 2025 21:48

// for function: found previous definition of same function!"` when we pass a const& as template parameter (and the
// function template body contains a lambda). As a workaround, we pass the parts of the policy by value.
// TODO(bgruber): In C++20, we should just pass transform_arch_policy by value.
template < // const transform_arch_policy& Policy,
Contributor:

Is there a reason to pass transform_arch_policy by const& in the template arguments?

@bernhardmgruber (Contributor, Author) commented Jan 6, 2026

transform_arch_policy is not a literal type in C++17, so we can only pass a pointer or reference to a static constexpr instance of it. However, nvcc dies if I do this, so I had to pass the relevant members of transform_arch_policy instead.
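
Roughly the shape of the workaround described above (parameter names are illustrative, not the exact signature in this PR):

    // Instead of `template <const transform_arch_policy& Policy>` (which nvcc
    // rejects here once the body contains a lambda), the members the kernel
    // needs are passed as individual non-type template parameters.
    template <int BlockThreads, int ItemsPerThread, Algorithm Alg, typename F, typename... Args>
    __global__ void __launch_bounds__(BlockThreads) transform_kernel(F f, Args... args);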

Comment on lines +123 to +126
// if we have to fall back to prefetching, use these values:
int prefetch_items_per_thread_no_input = 2;
int prefetch_min_items_per_thread = 1;
int prefetch_max_items_per_thread = 32;
Contributor:

Should this rather hold a prefetch_policy instead of the individual members?

Contributor (Author):

I think I initially inherited from a prefetch_policy, but then designated initializers no longer work. Then I tried a prefetch_policy member, which was awkward again, because you had to write policy.prefetch_policy.block_threads to get the block threads for the vectorized policy when the fallback is not needed. The current state avoids both issues, but I agree, it's not nice.
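
For illustration, the two rejected variants (simplified, hypothetical field layout):

    // Variant 1: inheriting breaks designated initializers (aggregate with a base).
    struct transform_arch_policy_v1 : prefetch_policy
    {
      int block_threads;
    };

    // Variant 2: a nested member keeps designated initializers but makes access
    // verbose, e.g. policy.prefetch_policy.block_threads.
    struct transform_arch_policy_v2
    {
      int block_threads;
      prefetch_policy prefetch_policy;
    };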

store_size > 4 ? 128 : 256, 16, ::cuda::std::max(8 / store_size, 1) /* 64-bit instructions */};
}
// manually tuned fill on A100
if (arch >= ::cuda::arch_id::sm_90) // TODO(bgruber): this should be sm_80
Contributor:

Why is this not changed, then? This code path cannot currently be taken.

Contributor (Author):

Because enabling it caused sudden compilation errors that I didn't understand and I had to make progress elsewhere. But this needs to be fixed before merging.

@bernhardmgruber (Contributor, Author):

I pulled the arch dispatching logic out into #7093.


@github-actions (bot) commented Jan 7, 2026

😬 CI Workflow Results

🟥 Finished in 6h 00m: Pass: 94%/143 | Total: 8d 07h | Max: 6h 00m | Hits: 72%/178221



Labels: none yet

Projects: CCCL (Status: In Review)

Successfully merging this pull request may close: Implement the new tuning API for DeviceTransform

2 participants