Add option to allocate mpi buffers in device memory #979

Conversation

AlexanderSinn (Member)

Necessary for testing GPU-aware MPI.
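
For context: with GPU-aware MPI, the MPI library can read and write device pointers directly, so communication buffers can live in device memory instead of being staged through the host. The sketch below illustrates that general idea only; it is not the code from this PR. The function `exchange` and the `buffers_on_device` flag are invented for illustration, and in an AMReX-based code such as HiPACE++ the switch would more likely amount to choosing which arena allocates the buffers (e.g. amrex::The_Arena() for device memory versus amrex::The_Pinned_Arena() for pinned host memory).

```cpp
// Minimal sketch (not the code from this PR): toggling whether an MPI
// exchange buffer lives in device or host memory.
#include <mpi.h>
#include <cuda_runtime.h>
#include <vector>

// Rotate `count` doubles around a ring of ranks. `buffers_on_device`
// is an invented flag standing in for the option this PR adds.
void exchange (double* device_data, int count, int rank, int nranks,
               bool buffers_on_device)
{
    const int next = (rank + 1) % nranks;
    const int prev = (rank + nranks - 1) % nranks;

    if (buffers_on_device) {
        // GPU-aware MPI: pass the device pointer directly to MPI.
        MPI_Sendrecv_replace(device_data, count, MPI_DOUBLE,
                             next, 0, prev, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else {
        // Fallback: stage the data through a host buffer.
        std::vector<double> host(count);
        cudaMemcpy(host.data(), device_data, count*sizeof(double),
                   cudaMemcpyDeviceToHost);
        MPI_Sendrecv_replace(host.data(), count, MPI_DOUBLE,
                             next, 0, prev, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        cudaMemcpy(device_data, host.data(), count*sizeof(double),
                   cudaMemcpyHostToDevice);
    }
}

int main (int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, nranks = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int count = 1024;
    double* data = nullptr;
    cudaMalloc(reinterpret_cast<void**>(&data), count*sizeof(double));

    exchange(data, count, rank, nranks, /*buffers_on_device=*/true);

    cudaFree(data);
    MPI_Finalize();
    return 0;
}
```

Passing device pointers straight to MPI avoids the two host-device copies of the fallback path, which is exactly what makes the option useful for testing GPU-aware MPI installations.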

  • Small enough (< a few hundred lines), otherwise it should probably be split into smaller PRs
  • Tested (describe the tests in the PR description)
  • Runs on GPU (basic: the code compiles and runs well with the new module)
  • Contains an automated test (checksum and/or comparison with theory)
  • Documented: all elements (classes and their members, functions, namespaces, etc.) are documented
  • Constified (all that can be const is const)
  • Code is clean (no unwanted comments)
  • Style and code conventions (see the bottom of https://github.com/Hi-PACE/hipace) are respected
  • Proper label and GitHub project, if applicable

AlexanderSinn added the GPU (Related to GPU acceleration) and Parallelization (Longitudinal and transverse MPI decomposition) labels on Jun 14, 2023

MaxThevenet (Member) left a comment:

Thanks, see minor comments below.

Review comments were left on the following files (all now outdated and resolved):
  • docs/source/run/parameters.rst (two threads)
  • src/Hipace.H
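
The runtime option itself is documented in docs/source/run/parameters.rst (touched above); that file is authoritative for the parameter name. As an illustration only, with the name "comms_buffer_on_gpu" assumed rather than taken from the diff, reading such a flag with AMReX's ParmParse might look like:

```cpp
#include <AMReX_ParmParse.H>

// Hypothetical: reading a device-buffer flag at startup. The actual
// parameter name added by this PR is documented in
// docs/source/run/parameters.rst; "comms_buffer_on_gpu" is assumed here.
bool ReadCommsBufferOption ()
{
    amrex::ParmParse pp("hipace");
    bool comms_buffer_on_gpu = false;  // assumed default: host buffers
    pp.query("comms_buffer_on_gpu", comms_buffer_on_gpu);
    return comms_buffer_on_gpu;
}
```
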
MaxThevenet merged commit 3a9e04f into Hi-PACE:development on Jun 19, 2023