
Relocation algorithms Clean #6314

Closed

Conversation

isidorostsa
Contributor

This is a reiteration of isidorostsa#2 which unfortunately got tangled up due to some mishandling of versioning on my end. Feedback from the original PR, specifically Arthur's comments, has been considered and implemented in this iteration.

Also, tests that are expected to fail to compile have been implemented.

The original PR's description:

This is a starting point for the implementation of the P1144 relocate_at and uninitialized_relocate algorithms for HPX. They will later be used as base primitives for parallelized versions.
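To illustrate the semantics involved, here is a minimal sketch of a P1144-style `relocate_at` (the function name `relocate_at_sketch` and the fallback logic are assumptions for illustration, not HPX's actual implementation): relocation is move-construction at the destination followed by destruction of the source, which for trivially relocatable types can collapse into a single `memcpy` with no constructor or destructor calls.

```cpp
#include <cassert>
#include <cstring>
#include <memory>
#include <new>
#include <string>
#include <type_traits>
#include <utility>

// Sketch of P1144-style relocate_at semantics (not HPX's implementation):
// move-construct *src at dst, then destroy *src. Here std::is_trivially_copyable
// stands in for the proposed trivially-relocatable trait, since every trivially
// copyable type is trivially relocatable.
template <typename T>
T* relocate_at_sketch(T* src, T* dst)
{
    if constexpr (std::is_trivially_copyable_v<T>)
    {
        // Trivially relocatable fast path: a plain byte copy, with no
        // constructor or destructor calls on either side.
        std::memcpy(static_cast<void*>(dst), static_cast<void const*>(src),
            sizeof(T));
        return dst;
    }
    else
    {
        // General path: move-construct at the destination, destroy the source.
        T* result = ::new (static_cast<void*>(dst)) T(std::move(*src));
        src->~T();
        return result;
    }
}
```

After the call the source storage is raw memory again and only the destination object must eventually be destroyed, which is exactly the invariant the parallelized versions need to maintain per element.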

@StellarBot

Can one of the admins verify this patch?

@hkaiser
Member

hkaiser commented Jul 28, 2023

add to whitelist

Member

@hkaiser hkaiser left a comment


We do have a facility that decides whether copy/move should use memcpy/memmove. Not sure if that's of use for you in this context (see: https://github.com/STEllAR-GROUP/hpx/blob/master/libs/core/algorithms/include/hpx/parallel/util/transfer.hpp).

@isidorostsa
Contributor Author

We do have a facility that decides whether copy/move should use memcpy/memmove. Not sure if that's of use for you in this context (see: https://github.com/STEllAR-GROUP/hpx/blob/master/libs/core/algorithms/include/hpx/parallel/util/transfer.hpp).

Thanks for reviewing. I think the end goal is indeed to integrate relocation into transfer. In this PR I'm laying the groundwork by incorporating as many HPX facilities as possible into the conventional version.

@isidorostsa isidorostsa requested a review from hkaiser July 31, 2023 15:05
@gonidelis
Contributor

@Pansysk75 Is the jenkins/lsu failure a known issue? I know that boost has not been "loadable" on Rostam for a while now and we rather have to access it directly from /usr/lib (or /opt, I cannot recollect exactly)?

Lmod has detected the following error: These module(s) or extension(s) exist
but cannot be loaded as requested: "boost"
   Try: "module spider boost" to see how to load the module(s).

@hkaiser
Member

hkaiser commented Aug 1, 2023

Is jenkins/lsu failing a known issue? I know that boost has not been "loadable" on Rostam for a while now and we rather have to access it directly from /usr/lib (or /opt cannot recollect exactly)?

jenkins/lsu is failing if one of the LSU builders fails. The hipcc failures are a known issue (@G-071 is working on it). However, the issues reported by inspect are genuine.

@isidorostsa
Contributor Author

isidorostsa commented Aug 1, 2023

However, the issues reported by inspect are genuine.

I am not sure how I should interpret inspect failing, can I look beyond https://app.circleci.com/pipelines/github/STEllAR-GROUP/hpx/15462/workflows/452015fb-b56b-48c3-a18d-1eccf5e2c72f/jobs/357090 ?

@Pansysk75
Member

Pansysk75 commented Aug 1, 2023

However, the issues reported by inspect are genuine.

I am not sure how I should interpret inspect failing, can I look beyond https://app.circleci.com/pipelines/github/STEllAR-GROUP/hpx/15462/workflows/452015fb-b56b-48c3-a18d-1eccf5e2c72f/jobs/357090 ?

@isidorostsa You can go to the Artifacts tab and download the HTML report file.

@gonidelis

Lmod has detected the following error: These module(s) or extension(s) exist
but cannot be loaded as requested: "boost"
   Try: "module spider boost" to see how to load the module(s).

FYI boost can be loaded on Rostam, you just have to load a compiler first (that's how it behaves for me at least, check out module spider boost).

@isidorostsa
Contributor Author

isidorostsa commented Aug 1, 2023

Ohh thanks, looks like a line had too many characters. I may have found the culprit:

        hpx::relocate_at(ptr1, ptr2);

        // count = 1 + 0 (relocation on trivially relocatable objects does not trigger move constructors
        // or destructors); no object is destroyed or created
        HPX_TEST(trivially_relocatable_struct::count == 1);
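For context, a minimal sketch of the kind of live-object counter this test relies on (the name `counted` and its layout are assumptions for illustration, not HPX's actual `trivially_relocatable_struct` fixture): the counter changes only through constructor and destructor calls, so a byte-wise relocation leaves it untouched.

```cpp
#include <cassert>
#include <cstring>
#include <new>

// Hypothetical counting type in the spirit of the test's
// trivially_relocatable_struct: `count` tracks live objects and is updated
// only by constructors and the destructor.
struct counted
{
    static inline int count = 0;
    int value;
    explicit counted(int v) noexcept : value(v) { ++count; }
    counted(counted const& other) noexcept : value(other.value) { ++count; }
    ~counted() { --count; }
};
```

Relocating such an object as raw bytes runs neither a move constructor nor a destructor, so the count stays at 1 until the destination object is explicitly destroyed, which is exactly what the `HPX_TEST` above asserts. (Note that pre-P1144 a `memcpy` of a non-trivially-copyable object is formally outside the standard; the sketch only illustrates the intended semantics.)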

@StellarBot

Performance test report

HPX Performance

Comparison

| Benchmark | FORK_JOIN_EXECUTOR | PARALLEL_EXECUTOR | SCHEDULER_EXECUTOR |
|---|---|---|---|
| For Each | (=) | ?? | (=) |

Info

| Property | Before | After |
|---|---|---|
| HPX Datetime | 2023-05-10T12:07:53+00:00 | 2023-08-23T18:13:20+00:00 |
| HPX Commit | dcb5415 | 95a70a0 |
| Clustername | rostam | rostam |
| Datetime | 2023-05-10T14:50:18.616050-05:00 | 2023-08-23T13:21:15.098880-05:00 |
| Hostname | medusa08.rostam.cct.lsu.edu | medusa08.rostam.cct.lsu.edu |
| Envfile | | |
| Compiler | /opt/apps/llvm/13.0.1/bin/clang++ 13.0.1 | /opt/apps/llvm/13.0.1/bin/clang++ 13.0.1 |

Comparison

| Benchmark | NO-EXECUTOR |
|---|---|
| Future Overhead - Create Thread Hierarchical - Latch | = |

Info

| Property | Before | After |
|---|---|---|
| HPX Datetime | 2023-05-10T12:07:53+00:00 | 2023-08-23T18:13:20+00:00 |
| HPX Commit | dcb5415 | 95a70a0 |
| Clustername | rostam | rostam |
| Datetime | 2023-05-10T14:52:35.047119-05:00 | 2023-08-23T13:23:29.622921-05:00 |
| Hostname | medusa08.rostam.cct.lsu.edu | medusa08.rostam.cct.lsu.edu |
| Envfile | | |
| Compiler | /opt/apps/llvm/13.0.1/bin/clang++ 13.0.1 | /opt/apps/llvm/13.0.1/bin/clang++ 13.0.1 |

Comparison

| Benchmark | FORK_JOIN_EXECUTOR_DEFAULT_FORK_JOIN_POLICY_ALLOCATOR | PARALLEL_EXECUTOR_DEFAULT_PARALLEL_POLICY_ALLOCATOR | SCHEDULER_EXECUTOR_DEFAULT_SCHEDULER_EXECUTOR_ALLOCATOR |
|---|---|---|---|
| Stream Benchmark - Add | (=) | (=) | = |
| Stream Benchmark - Scale | (=) | (=) | = |
| Stream Benchmark - Triad | = | = | (=) |
| Stream Benchmark - Copy | = | - | (=) |

Info

| Property | Before | After |
|---|---|---|
| HPX Datetime | 2023-05-10T12:07:53+00:00 | 2023-08-23T18:13:20+00:00 |
| HPX Commit | dcb5415 | 95a70a0 |
| Clustername | rostam | rostam |
| Datetime | 2023-05-10T14:52:52.237641-05:00 | 2023-08-23T13:23:47.000915-05:00 |
| Hostname | medusa08.rostam.cct.lsu.edu | medusa08.rostam.cct.lsu.edu |
| Envfile | | |
| Compiler | /opt/apps/llvm/13.0.1/bin/clang++ 13.0.1 | /opt/apps/llvm/13.0.1/bin/clang++ 13.0.1 |

Explanation of Symbols

| Symbol | Meaning |
|---|---|
| = | No performance change (confidence interval within ±1%) |
| (=) | Probably no performance change (confidence interval within ±2%) |
| (+)/(-) | Very small performance improvement/degradation (≤1%) |
| +/- | Small performance improvement/degradation (≤5%) |
| ++/-- | Large performance improvement/degradation (≤10%) |
| +++/--- | Very large performance improvement/degradation (>10%) |
| ? | Probably no change, but quite large uncertainty (confidence interval within ±5%) |
| ?? | Unclear result, very large uncertainty (±10%) |
| ??? | Something unexpected… |

@isidorostsa
Contributor Author

Closed in favor of #6344

@isidorostsa isidorostsa closed this Sep 9, 2023
bors bot pushed a commit that referenced this pull request Sep 25, 2023
6344: uninitialized_relocate w/ type_support primitive r=hkaiser a=isidorostsa

This PR replaces the `type_support::uninitialized_relocate` with a primitive version for uninitialized relocation, named `uninitialized_relocate_n_primitive`. This is used by the parallel and sequenced versions of `algorithms::uninitialized_relocate`.

In contrast to the previous attempt of implementing the relocation functions, #6314, this does not place the primitive in `transfer.hpp`, which is in the `algorithms` module, nor does it modify `loop.hpp` to add the specific exception handling method required by `uninitialized_relocate`. 

The goal of keeping a primitive in type support, which is relatively low level in terms of dependencies, is to use it in the higher level `small_vector` and other data structures. 
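A minimal sketch of what such a relocation primitive amounts to (the name `uninitialized_relocate_n_sketch` and its signature are assumptions for illustration, not HPX's actual `uninitialized_relocate_n_primitive`): each element is move-constructed into uninitialized destination storage and the source element destroyed, which is the per-element step a growing `small_vector` buffer would use.

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <new>
#include <string>
#include <utility>

// Sketch of an uninitialized_relocate_n-style primitive: move-construct each
// element into the uninitialized destination, then destroy the source, leaving
// the source range as raw storage again. An exception-safe version would also
// have to clean up the already-relocated prefix and the remaining source
// elements if a move constructor throws; that handling is omitted here.
template <typename T>
T* uninitialized_relocate_n_sketch(T* first, std::size_t n, T* dst)
{
    for (std::size_t i = 0; i != n; ++i)
    {
        ::new (static_cast<void*>(dst + i)) T(std::move(first[i]));
        first[i].~T();
    }
    return dst + n;    // one past the last relocated element
}
```

Keeping this loop in `type_support` rather than `algorithms` means a container grow path can call it without pulling in the whole parallel-algorithms dependency chain.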

6348: Adding --exclusive to launching tests on rostam r=hkaiser a=hkaiser



6349: Moving hpx::threads::run_as_xxx to namespace hpx r=hkaiser a=hkaiser

- run_as_os_thread
- run_as_hpx_thread

Co-authored-by: isidorostsa <tsa.isidoros@gmail.com>
Co-authored-by: Hartmut Kaiser <hartmut.kaiser@gmail.com>