
Add vector mean computation #1275

Merged: 16 commits from vec-mean into develop, Oct 20, 2023
Conversation

@greole (Collaborator) commented Feb 12, 2023

This PR adds the computation of the column-wise arithmetic mean to the dense and distributed vector classes.
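
For readers skimming the diff, a minimal usage sketch. It assumes the new member is called `compute_mean`, follows the existing `compute_norm2` convention, and writes one mean per column into a 1-by-n row vector; the exact call is illustrative, not a quote of the merged API:

```cpp
#include <ginkgo/ginkgo.hpp>

int main()
{
    auto exec = gko::ReferenceExecutor::create();
    // A 3x2 dense "multi-vector"; the mean is taken column-wise.
    auto x = gko::initialize<gko::matrix::Dense<double>>(
        {{1.0, 2.0}, {3.0, 4.0}, {5.0, 6.0}}, exec);
    // One mean per column, stored in a 1x2 row vector.
    auto mean = gko::matrix::Dense<double>::create(exec, gko::dim<2>{1, 2});
    x->compute_mean(mean);  // expected result: [3.0, 4.0]
    return 0;
}
```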

@greole greole added the 1:ST:WIP label (This PR is a work in progress. Not ready for review.) Feb 12, 2023
@greole greole marked this pull request as draft February 12, 2023 19:27
@ginkgo-bot ginkgo-bot added the labels reg:testing (This is related to testing.), mod:core (This is related to the core module.), mod:reference (This is related to the reference module.), and type:matrix-format (This is related to the Matrix formats) Feb 12, 2023
@greole (Collaborator, Author) commented Feb 13, 2023

format!

@greole greole changed the title from "Add vec mean tests" to "Add vector mean computation" Feb 13, 2023
@greole greole mentioned this pull request Feb 16, 2023
@greole (Collaborator, Author) commented Oct 13, 2023

format!

Review comment on lines 414 to 416:

* @param tmp the temporary storage to use for partial sums during the
*        reduction computation. It may be resized and/or reset to the
*        correct executor.

Member: The function does not take this parameter here.

Collaborator (Author): Also, I guess ptr_param<LinOp> result is the way to go here?
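
For context, a sketch of how the declaration pair could look if it mirrors the existing `compute_norm2` convention that the `tmp` doc comment belongs to; the types come from Ginkgo's headers, and the exact signatures here are an assumption, not the merged API:

```cpp
// Sketch only, modeled on Dense<ValueType>::compute_norm2's two overloads;
// ptr_param, LinOp, and array are the usual Ginkgo types.
template <typename ValueType>
class Dense {
public:
    // Allocates its own temporary storage internally.
    void compute_mean(ptr_param<LinOp> result) const;

    // Cached variant: tmp holds partial sums during the reduction and may
    // be resized and/or reset to the correct executor (this is the overload
    // the doc comment quoted above describes).
    void compute_mean(ptr_param<LinOp> result, array<char>& tmp) const;
};
```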

@greole greole removed the 1:ST:WIP label (This PR is a work in progress. Not ready for review.) Oct 13, 2023
@greole greole marked this pull request as ready for review October 13, 2023 19:04
@greole (Collaborator, Author) commented Oct 13, 2023

format!

@greole greole added the labels 1:ST:ready-for-review (This PR is ready for review) and 1:ST:run-full-test Oct 13, 2023
codecov bot commented Oct 15, 2023

Codecov Report

Attention: 7 lines in your changes are missing coverage. Please review.

| File | Coverage <patch> | Δ |
|------|------------------|---|
| common/unified/matrix/dense_kernels.template.cpp | 100.00% <100.00%> | (ø) |
| include/ginkgo/core/distributed/vector.hpp | 33.33% <ø> | (ø) |
| include/ginkgo/core/matrix/dense.hpp | 93.13% <ø> | (ø) |
| reference/matrix/dense_kernels.cpp | 89.55% <100.00%> | (+0.18%) ⬆️ |
| reference/test/matrix/dense_kernels.cpp | 99.82% <100.00%> | (+<0.01%) ⬆️ |
| test/mpi/vector.cpp | 100.00% <100.00%> | (ø) |
| core/device_hooks/common_kernels.inc.cpp | 0.00% <0.00%> | (ø) |
| core/matrix/dense.cpp | 93.34% <87.50%> | (-0.15%) ⬇️ |
| core/distributed/vector.cpp | 86.00% <80.95%> | (-0.48%) ⬇️ |

... and 5 files with indirect coverage changes


@MarcelKoch MarcelKoch self-requested a review October 16, 2023 10:42
@MarcelKoch (Member) reviewed:

LGTM, some minor comments.

Review threads (resolved): include/ginkgo/core/matrix/dense.hpp · include/ginkgo/core/distributed/vector.hpp · core/distributed/vector.cpp (×2) · core/matrix/dense.cpp (×2) · common/unified/matrix/dense_kernels.template.cpp · reference/matrix/dense_kernels.cpp · reference/test/matrix/dense_kernels.cpp
@tcojean (Member) reviewed:

LGTM. No other comment.

@yhmtsai (Member) reviewed:

Only some nits.

run_kernel_col_reduction_cached(
exec,
[] GKO_KERNEL(auto i, auto j, auto x, auto total_size) {
return x(i, j) / static_cast<ValueType_nc>(total_size);
@yhmtsai (Member) commented Oct 16, 2023:

should we use sum(x/size) or sum(x)/size?

Collaborator (Author):

Same as my other comment: I would leave it as it is for now and run more experiments later if it turns out to matter.
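
To make the trade-off concrete, a small self-contained C++ experiment (plain code, not a Ginkgo kernel): sum(x)/size performs one division but lets the running sum grow large, while sum(x/size) performs one division per element and keeps the partial results small. For well-scaled data the two agree to rounding, which supports leaving the kernel as is until an experiment shows otherwise:

```cpp
#include <cstdio>
#include <vector>

int main()
{
    const std::size_t n = 1000000;
    std::vector<float> x(n, 0.1f);

    // Variant 1: sum(x) / size -- one division, large running sum.
    float sum = 0.0f;
    for (float v : x) sum += v;
    const float mean1 = sum / static_cast<float>(n);

    // Variant 2: sum(x / size) -- n divisions, small partial results.
    float mean2 = 0.0f;
    for (float v : x) mean2 += v / static_cast<float>(n);

    // Both approximate 0.1; the difference shows the accumulated
    // single-precision rounding error of each variant.
    std::printf("sum(x)/n = %.8f\nsum(x/n) = %.8f\n", mean1, mean2);
    return 0;
}
```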

Review comment on lines 621 to 632:
}


template <typename ValueType>
ValueType Vector<ValueType>::at_local(size_type row,
size_type col) const noexcept
{
return local_.at(row, col);
}


template <typename ValueType>
Member:

I think these are treated as overloads, so they only need one blank line between them.

Member:

I think it's not fixed?

Review threads (resolved): core/matrix/dense.cpp · include/ginkgo/core/distributed/vector.hpp · reference/matrix/dense_kernels.cpp · reference/test/matrix/dense_kernels.cpp · test/mpi/vector.cpp (×3)
Review comment on lines 599 to 601:
auto weight = initialize<matrix::Dense<remove_complex<ValueType>>>(
1, {static_cast<remove_complex<ValueType>>(local_size) / global_size},
this->get_executor());
Member:
Not sure whether it is performance critical. Another way to do it is to pass the total size into compute_mean; then we can use the reduction directly without an additional kernel.

Collaborator (Author):

I like that idea, but I would assume that the MPI all_reduce and compute_mean outweigh the dense scale. I'll leave it as is in this PR and run some experiments to see whether it has an impact in the application cases. If it matters, I'll open another PR with a scaling version of compute_mean.
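
For context, the scheme under discussion reduced to a standalone MPI sketch (a hypothetical helper, not Ginkgo code): each rank folds the local mean and the local_size/global_size weight into a single division, and an all-reduce then sums the weighted partial means into the global column-wise mean on every rank.

```cpp
#include <mpi.h>
#include <vector>

// Hypothetical standalone helper, not the Ginkgo implementation.
// local is a row-major local_rows x num_cols block of the distributed vector.
std::vector<double> distributed_column_mean(const std::vector<double>& local,
                                            int local_rows, int num_cols,
                                            long long global_rows,
                                            MPI_Comm comm)
{
    // Weighted local mean: dividing each entry by global_rows is algebraically
    // the same as computing the local mean and scaling by local_rows/global_rows.
    std::vector<double> partial(num_cols, 0.0);
    for (int i = 0; i < local_rows; ++i) {
        for (int j = 0; j < num_cols; ++j) {
            partial[j] +=
                local[i * num_cols + j] / static_cast<double>(global_rows);
        }
    }
    // Summing the weighted partial means across ranks yields the global mean.
    std::vector<double> mean(num_cols, 0.0);
    MPI_Allreduce(partial.data(), mean.data(), num_cols, MPI_DOUBLE, MPI_SUM,
                  comm);
    return mean;
}
```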

Review threads (resolved): core/distributed/vector.cpp · core/matrix/dense.cpp
@greole (Collaborator, Author) commented Oct 18, 2023

rebase!

@yhmtsai yhmtsai added the 1:ST:ready-to-merge label (This PR is ready to merge.) and removed the 1:ST:ready-for-review label (This PR is ready for review) Oct 18, 2023
@yhmtsai (Member) commented Oct 18, 2023

Before merging it, could you rephrase the commit history?

greole and others added 16 commits October 19, 2023 06:44
Add vec mean tests
Co-authored-by: Gregor Olenik <greole@users.noreply.github.com>
Co-authored-by: Marcel Koch <marcel.koch@kit.edu>
Co-authored-by: Yu-Hsiang M. Tsai <19565938+yhmtsai@users.noreply.github.com>
Co-authored-by: Yu-Hsiang M. Tsai <19565938+yhmtsai@users.noreply.github.com>
Co-authored-by: Marcel Koch <marcel.koch@kit.edu>
sonarcloud bot commented Oct 19, 2023

SonarCloud Quality Gate passed.

Bugs: 0 (A) · Vulnerabilities: 0 (A) · Security Hotspots: 0 (A) · Code Smells: 9 (A)
Coverage: 87.9% · Duplication: 0.0%

Warning: the version of Java (11.0.3) used to run this analysis is deprecated, and support for it will end soon. Please update to at least Java 17.

@greole greole merged commit 612a732 into develop Oct 20, 2023
17 of 20 checks passed
@greole greole deleted the vec-mean branch October 20, 2023 07:40
@tcojean tcojean mentioned this pull request Nov 6, 2023
tcojean added a commit that referenced this pull request Nov 10, 2023
Release 1.7.0 to master

The Ginkgo team is proud to announce the new Ginkgo minor release 1.7.0. This release brings new features such as:
- Complete GPU-resident sparse direct solvers feature set and interfaces,
- Improved Cholesky factorization performance,
- A new MC64 reordering,
- Batched iterative solver support through a BiCGSTAB solver with batched Dense and ELL matrix types,
- MPI support for the SYCL backend,
- Improved ParILU(T)/ParIC(T) preconditioner convergence,
and more!

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.16+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2019+
  + Apple Clang: 14.0 is tested. Earlier versions might also work.
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CMake 3.18+, and CUDA 10.1+ or NVHPC 22.7+
  + HIP module: ROCm 4.5+
  + DPC++ module: Intel oneAPI 2022.1+ with oneMKL and oneDPL. Set the CXX compiler to `dpcpp` or `icpx`.
  + MPI: standard version 3.1+, ideally GPU Aware, for best performance
+ Windows
  + MinGW: GCC 5.5+
  + Microsoft Visual Studio: VS 2019+
  + CUDA module: CUDA 10.1+, Microsoft Visual Studio
  + OpenMP module: MinGW.

### Version support changes

+ CUDA 9.2 is no longer supported and 10.0 is untested [#1382](#1382)
+ Ginkgo now requires CMake version 3.16 (and 3.18 for CUDA) [#1368](#1368)

### Interface changes

+ `const` Factory parameters can no longer be modified through `with_*` functions, as this breaks const-correctness [#1336](#1336) [#1439](#1439)

### New Deprecations

+ The `device_reset` parameter of CUDA and HIP executors no longer has an effect, and its `allocation_mode` parameters have been deprecated in favor of the `Allocator` interface. [#1315](#1315)
+ The CMake parameter `GINKGO_BUILD_DPCPP` has been deprecated in favor of `GINKGO_BUILD_SYCL`. [#1350](#1350)
+ The `gko::reorder::Rcm` interface has been deprecated in favor of `gko::experimental::reorder::Rcm` based on `Permutation`. [#1418](#1418)
+ The Permutation class' `permute_mask` functionality. [#1415](#1415)
+ Multiple functions with typos (`set_complex_subpsace()`, range functions such as `conj_operaton` etc). [#1348](#1348)

### Summary of previous deprecations
+ `gko::lend()` is not necessary anymore.
+ The classes `RelativeResidualNorm` and `AbsoluteResidualNorm` are deprecated in favor of `ResidualNorm`.
+ The class `AmgxPgm` is deprecated in favor of `Pgm`.
+ Default constructors for the CSR `load_balance` and `automatical` strategies
+ The PolymorphicObject's move-semantic `copy_from` variant
+ The templated `SolverBase` class.
+ The class `MachineTopology` is deprecated in favor of `machine_topology`.
+ Logger constructors and create functions with the `executor` parameter.
+ The virtual, protected, Dense functions `compute_norm1_impl`, `add_scaled_impl`, etc.
+ Logger events for solvers and criterion without the additional `implicit_tau_sq` parameter.
+ The global `gko::solver::default_krylov_dim`, use instead `gko::solver::gmres_default_krylov_dim`.

### Added features

+ Adds a batch::BatchLinOp class that forms a base class for batched linear operators such as batched matrix formats, solvers, and preconditioners [#1379](#1379)
+ Adds a batch::MultiVector class that enables operations such as dot, norm, scale on batched vectors [#1371](#1371)
+ Adds a batch::Dense matrix format that stores batched dense matrices and provides gemv operations for these dense matrices. [#1413](#1413)
+ Adds a batch::Ell matrix format that stores batched Ell matrices and provides spmv operations for these batched Ell matrices. [#1416](#1416) [#1437](#1437)
+ Add a batch::Bicgstab solver (class, core, and reference kernels) that enables iterative solution of batched linear systems [#1438](#1438).
+ Add device kernels (CUDA, HIP, and DPCPP) for batch::Bicgstab solver. [#1443](#1443).
+ New MC64 reordering algorithm which optimizes the diagonal product or sum of a matrix by permuting the rows, and computes additional scaling factors for equilibration [#1120](#1120)
+ New interface for (non-symmetric) permutation and scaled permutation of Dense and Csr matrices [#1415](#1415)
+ LU and Cholesky Factorizations can now be separated into their factors [#1432](#1432)
+ New symbolic LU factorization algorithm that is optimized for matrices with an almost-symmetric sparsity pattern [#1445](#1445)
+ Sorting kernels for SparsityCsr on all backends [#1343](#1343)
+ Allow passing pre-generated local solver as factory parameter for the distributed Schwarz preconditioner [#1426](#1426)
+ Add DPCPP kernels for Partition [#1034](#1034), and CSR's `check_diagonal_entries` and `add_scaled_identity` functionality [#1436](#1436)
+ Adds a helper function to create a partition based on either local sizes, or local ranges [#1227](#1227)
+ Add function to compute arithmetic mean of dense and distributed vectors [#1275](#1275)
+ Adds `icpx` compiler support [#1350](#1350)
+ All backends can be built simultaneously [#1333](#1333)
+ Emits a CMake warning in downstream projects that use different compilers than the installed Ginkgo [#1372](#1372)
+ Reordering algorithms in sparse_blas benchmark [#1354](#1354)
+ Benchmarks gained an `-allocator` parameter to specify device allocators [#1385](#1385)
+ Benchmarks gained an `-input_matrix` parameter that initializes the input JSON based on the filename [#1387](#1387)
+ Benchmark inputs can now be reordered as a preprocessing step [#1408](#1408)


### Improvements

+ Significantly improve Cholesky factorization performance [#1366](#1366)
+ Improve parallel build performance [#1378](#1378)
+ Allow constrained parallel test execution using CTest resources [#1373](#1373)
+ Use arithmetic type more inside mixed precision ELL [#1414](#1414)
+ Most factory parameters of factory type no longer need to be constructed explicitly via `.on(exec)` [#1336](#1336) [#1439](#1439)
+ Improve ParILU(T)/ParIC(T) convergence by using more appropriate atomic operations [#1434](#1434)

### Fixes

+ Fix an over-allocation for OpenMP reductions [#1369](#1369)
+ Fix DPCPP's common-kernel reduction for empty input sizes [#1362](#1362)
+ Fix several typos in the API and documentation [#1348](#1348)
+ Fix inconsistent `Threads` between generations [#1388](#1388)
+ Fix benchmark median condition [#1398](#1398)
+ Fix HIP 5.6.0 compilation [#1411](#1411)
+ Fix missing destruction of rand_generator from cuda/hip [#1417](#1417)
+ Fix PAPI logger destruction order [#1419](#1419)
+ Fix TAU logger compilation [#1422](#1422)
+ Fix relative criterion to not iterate if the residual is already zero [#1079](#1079)
+ Fix memory_order invocations with C++20 changes [#1402](#1402)
+ Fix `check_diagonal_entries_exist` to report correctly when diagonal values are missing only in the last rows. [#1440](#1440)
+ Fix checking OpenMPI version in cross-compilation settings [#1446](#1446)
+ Fix false-positive deprecation warnings in Ginkgo, especially for the old Rcm (it doesn't emit deprecation warnings anymore as a result but is still considered deprecated) [#1444](#1444)


### Related PR: #1451
tcojean added a commit that referenced this pull request Nov 10, 2023
Release 1.7.0 to develop

(Release notes identical to the "Release 1.7.0 to master" commit above.)
### Related PR: #1454
Labels
1:ST:ready-to-merge (This PR is ready to merge.) · 1:ST:run-full-test · mod:core (This is related to the core module.) · mod:reference (This is related to the reference module.) · reg:testing (This is related to testing.) · type:matrix-format (This is related to the Matrix formats)

6 participants