
Add csr mixed spmv and accessor storage addr #1319

Merged: 5 commits from mixed_csr into develop, May 8, 2023
Conversation

@yhmtsai (Member) commented Apr 4, 2023

This adds the csr mixed-precision spmv without casting the vectors to the matrix value type.

Additionally, I add get_storage_address(index) to the accessor, because I need the original pointer to perform the atomic operation.
Note: it might still be possible to perform the atomic operation in the arithmetic type by using atomicCAS on the storage.
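
To make the note concrete, here is a minimal sketch (not part of this PR) of how atomicCAS could accumulate in the arithmetic type while the data stays in a 32-bit storage type; the function name and the float/double type choice are assumptions made for illustration:

    __device__ void atomic_add_via_cas(float* storage_addr, double arithmetic_value)
    {
        // reinterpret the 32-bit storage location so atomicCAS can operate on it
        auto address = reinterpret_cast<unsigned int*>(storage_addr);
        unsigned int old_bits = *address;
        unsigned int assumed_bits;
        do {
            assumed_bits = old_bits;
            // do the addition in the wider arithmetic type, cast back for storage
            const auto updated = static_cast<float>(
                static_cast<double>(__uint_as_float(assumed_bits)) + arithmetic_value);
            old_bits = atomicCAS(address, assumed_bits, __float_as_uint(updated));
        } while (assumed_bits != old_bits);
    }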

@yhmtsai added the 1:ST:ready-for-review label Apr 4, 2023
@yhmtsai self-assigned this Apr 4, 2023
@ginkgo-bot added the reg:testing, type:matrix-format, and mod:all labels Apr 4, 2023

@thoasm (Member) left a comment

I would prefer if you changed the approach to the atomic_add with accessors.

Comment on lines +235 to +243

template <typename... Indices>
constexpr GKO_ACC_ATTRIBUTES std::enable_if_t<
    are_all_integral<Indices...>::value,
    std::conditional_t<is_const, const storage_type * GKO_ACC_RESTRICT,
                       storage_type * GKO_ACC_RESTRICT>>
get_storage_address(Indices&&... indices) const
{
    return storage_ + compute_index(std::forward<Indices>(indices)...);
}

Member

I am not a big fan of adding this operation and using it inside other kernels.
There is already an accessor (not pushed to develop yet) that operates on the POSIT format, which would not work with this at all.
Preferably, the accessor either gets an atomic_add member function, or we have a free-standing function that takes the accessor and the index to perform the atomic operation (which could leverage this get_storage_address for reduced_row_major).
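
For concreteness, a free-standing helper along those lines might look roughly like the following sketch; the helper name is hypothetical, and it assumes a non-const reduced_row_major accessor plus a backend atomic_add overload on raw pointers:

    #include <type_traits>
    #include <utility>

    // hypothetical free-standing overload for reduced_row_major; other accessor
    // types would get their own overloads (e.g. a CAS loop on their storage)
    template <typename Accessor, typename ValueType, typename... Indices>
    GKO_ACC_ATTRIBUTES void accessor_atomic_add(const Accessor& acc, ValueType value,
                                                Indices&&... indices)
    {
        // reduced_row_major exposes the raw storage address, so the atomic can
        // operate directly on storage after casting the value to the storage type
        auto storage_addr = acc.get_storage_address(std::forward<Indices>(indices)...);
        using storage_type =
            std::remove_cv_t<std::remove_pointer_t<decltype(storage_addr)>>;
        atomic_add(storage_addr, static_cast<storage_type>(value));
    }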

Member Author

Could we put compute_index into the public interface? It is currently protected. Then I could simply compute the storage address outside the accessor without touching atomic_add.
Will POSIT have its own math operators (like half), or can it only be used as accessor storage?
If it is like half and we only use specific widths such as 32 or 64 bits, atomic_add still works for it by going through atomicCAS.
If it is only a storage type of the accessor, it will not work with the current atomic_add; we would need some overload that calls atomicCAS.

Member

POSIT is simply a storage format, so it can only be converted to and from IEEE types, and all arithmetic operations happen in IEEE.
Making compute_index public would be fine with me; however, relying on it inside the CSR kernel defeats the purpose of having an accessor interface, since that only works with reduced_row_major.
That is the reason why I would prefer to have another function that handles support for more than one accessor. Inside that function, it would be fine to rely on get_storage_address or compute_index.

I need to redesign the whole accessor structure to make this clearer and easier to use, which I will do after finishing the block-compressor accessor.

Member

This discussion only seems relevant if we plan to add POSIT support (or other, more advanced storage types) as a ValueType in Ginkgo in the near future, and with non-existent hardware support, I don't really see that happening yet.

Honestly, this kind of function to me is a hint that the accessor is not a suitable abstraction in this case. Considering that it is mainly a wrapper around a static_cast on load/store, I'm not convinced the complexity is worth it.

Member

Not as a ValueType, but as a storage format. You get the same problem when you want to add bfloat16 and other storage formats to it, which I would like to have in the future.

Member

Not sure what scaling issues you mean, @upsj. The two implementations I was referring to were the accessor and the CSR SpMV.

Member

I meant the comment "but that doesn't scale well"

Member

Oh, that "scaling" was referring to having multiple implementations that do the same thing, which leads to bad maintainability.
I didn't mean performance scaling, sorry for the confusion.

Member

I was also talking about programming complexity. The kernel either uses accessors or it doesn't; IMO there is no need to maintain two implementations.

Member

Since this is likely not the only place where we need atomic access, I would prefer solving it with the accessor once and using that in more places.
But if you think it is better to implement it without the accessor, feel free to do so.

@upsj requested a review from yanjen April 6, 2023 15:55
@pratikvn added this to the Release 1.6.0 milestone Apr 17, 2023

@thoasm (Member) left a comment

LGTM!
Most of my comments encourage creating helper functions that build an accessor from a dense matrix.

Comment on lines +203 to +208

// using atomic add to accumulate data, so it needs to be
// the output storage type
// TODO: Does it make sense to use atomicCAS when the
// arithmetic_type and output_type are different? It may
// allow the non floating point storage or more precise
// result.

Member

Can this comment be removed / moved as you don't have the atomic add in this place anymore?

Member Author

I would prefer keeping it here, because the cast to the output storage type happens here.

@yhmtsai force-pushed the mixed_csr branch 2 times, most recently from c729320 to 8108f0f on April 20, 2023 12:37

@thoasm (Member) left a comment

Notes about the introduced helper functions.

Member

I would prefer if you moved this file to core/matrix/csr_helper or similar, as it is a helper for transitioning from the accessor to Ginkgo (and depends on Ginkgo).

Comment on lines 51 to 62

template <typename ArthType, typename ValueType>
auto build_accessor(matrix::Dense<ValueType>* input)
{
    using accessor = gko::acc::reduced_row_major<2, ArthType, ValueType>;
    return range<accessor>(
        std::array<acc::size_type, 2>{
            {static_cast<acc::size_type>(input->get_size()[0]),
             static_cast<acc::size_type>(input->get_size()[1])}},
        input->get_values(),
        std::array<acc::size_type, 1>{
            {static_cast<acc::size_type>(input->get_stride())}});
}

Member

As a helper, this should indicate one way or the other what accessor is used underneath.
This could be accomplished by putting this into a struct with the concrete accessor as the template parameter (allows for specialization for more accessors), a name change (to indicate the accessor like build_rrm_accessor or build_reduced_row_major_accessor) or by putting this into the CSR namespace and only using it there.
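
A rough sketch of the struct variant mentioned above, so further accessors can be supported via specialization; the struct and member names are hypothetical, and the body simply mirrors the build_accessor helper quoted above:

    // hypothetical generic interface, specialized per concrete accessor
    template <typename Accessor>
    struct accessor_builder;

    // specialization for a 2D reduced_row_major accessor over a dense matrix
    template <typename ArthType, typename ValueType>
    struct accessor_builder<gko::acc::reduced_row_major<2, ArthType, ValueType>> {
        using accessor = gko::acc::reduced_row_major<2, ArthType, ValueType>;

        static auto build(matrix::Dense<ValueType>* input)
        {
            return range<accessor>(
                std::array<acc::size_type, 2>{
                    {static_cast<acc::size_type>(input->get_size()[0]),
                     static_cast<acc::size_type>(input->get_size()[1])}},
                input->get_values(),
                std::array<acc::size_type, 1>{
                    {static_cast<acc::size_type>(input->get_stride())}});
        }
    };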

@upsj (Member) left a comment

Considering how many surprising auto deductions happen in the code - are we taking the right approach with the accessors here? A more straightforward implementation would need to consist of only two things: 1. additional template parameters and 2. a static_cast around every load from/store to vals/input/output vector.

With the decaying of reference-like types happening only with the first operation with another object, this reminds me a lot of the issues that std::vector<bool> and Eigen's expression templates have. The C++ standard library maintainers need to take great care that all algorithms work correctly on iterators returning reference-like types, but I'm not sure we should be forced to do the same? Unfortunately, there is no operator auto() that automatically decays these transient objects on assignment.
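
To illustrate the kind of surprise meant here with a standard-library analogue (not code from this PR):

    #include <vector>

    void proxy_reference_pitfall()
    {
        std::vector<bool> flags{true, false};
        // `ref` is a proxy reference into the vector, not a plain bool,
        // so it keeps aliasing the container instead of holding a copy
        auto ref = flags[0];
        flags[0] = false;
        bool copy = ref;  // reads false, not the value seen at assignment time
        (void)copy;
        // an accessor returning reference-like objects behaves the same way:
        // `auto val = acc(row, col);` may capture a transient proxy, not a value
    }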

@yhmtsai (Member Author) commented Apr 24, 2023

are we taking the right approach with the accessors here? A more straightforward implementation would need to consist of only two things: 1. additional template parameters and 2. a static_cast around every load from/store to vals/input/output vector.

I thought that is the target of the accessor: performing operations in a different format than the storage.
Without the accessor, I would also use static_cast for these operations; although that is lengthy, it is simple and clear.
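
For comparison, a minimal sketch of what the static_cast-only variant of a CSR row accumulation could look like (simplified and sequential; this is not the PR's actual kernel, and the names are assumptions):

    template <typename MatrixValue, typename InputValue, typename OutputValue,
              typename IndexType, typename ArithType = OutputValue>
    void csr_spmv_row(const MatrixValue* vals, const IndexType* col_idxs,
                      IndexType row_begin, IndexType row_end, const InputValue* b,
                      OutputValue* x_row)
    {
        // accumulate in the arithmetic type, casting on every load
        ArithType sum{};
        for (auto k = row_begin; k < row_end; ++k) {
            sum += static_cast<ArithType>(vals[k]) *
                   static_cast<ArithType>(b[col_idxs[k]]);
        }
        // cast once when storing to the output type
        *x_row = static_cast<OutputValue>(sum);
    }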

@upsj (Member) commented Apr 24, 2023

I thought that is the target of the accessor: performing operations in a different format than the storage.

The accessor is a tool for abstracting away loads and stores, which is well suited for regular BLAS1/2 kernels. But with the Csr SpMV being our most performance-critical component, and memory accesses being the bottleneck of the operation, I think it is important to have fine control over the memory operations. Things we might want to try out (if the compiler doesn't already figure them out for us) include nontemporal or L1-bypassing stores, noncoherent loads, prefetching, etc., and for all of that I think it is easiest if we keep the memory ops behind at most a thin abstraction.
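
For illustration, the kind of thin wrapper meant here could be as small as the following CUDA sketch; the wrapper names are assumptions, not existing Ginkgo API:

    // noncoherent load through the read-only data cache
    __device__ __forceinline__ double load_noncoherent(const double* ptr)
    {
        return __ldg(ptr);
    }

    // streaming store hint: mark the data as evict-first so it does not
    // displace more useful cache lines
    __device__ __forceinline__ void store_streaming(double* ptr, double value)
    {
        __stcs(ptr, value);
    }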

@yhmtsai (Member Author) commented May 8, 2023

rebase!

yhmtsai and others added 5 commits May 8, 2023 08:27
Co-authored-by: Thomas Grützmacher <thomas.gruetzmacher@kit.edu>
Co-authored-by: Tobias Ribizel <ribizel@kit.edu>
Co-authored-by: Tobias Ribizel <ribizel@kit.edu>
Co-authored-by: Thomas Grützmacher <thomas.gruetzmacher@kit.edu>
@yhmtsai added the 1:ST:ready-to-merge label and removed the 1:ST:ready-for-review label May 8, 2023
@yhmtsai merged commit 9281045 into develop May 8, 2023
@yhmtsai deleted the mixed_csr branch May 8, 2023 21:19
tcojean added a commit that referenced this pull request Jun 16, 2023
Release 1.6.0 of Ginkgo.

The Ginkgo team is proud to announce the new Ginkgo minor release 1.6.0. This release brings new features such as:
- Several building blocks for GPU-resident sparse direct solvers like symbolic
  and numerical LU and Cholesky factorization, ...,
- A distributed Schwarz preconditioner,
- New FGMRES and GCR solvers,
- Distributed benchmarks for the SpMV operation, solvers, ...
- Support for non-default streams in the CUDA and HIP backends,
- Mixed precision support for the CSR SpMV,
- A new profiling logger which integrates with NVTX, ROCTX, TAU and VTune to
  provide internal Ginkgo knowledge to most HPC profilers!

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple Clang: 14.0 is tested. Earlier versions might also work.
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.5+
  + DPC++ module: Intel OneAPI 2021.3+ with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW: GCC 5.5+
  + Microsoft Visual Studio: VS 2019+
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW.

### Version Support Changes
+ ROCm 4.0+ -> 4.5+ after [#1303](#1303)
+ Removed Cygwin pipeline and support [#1283](#1283)

### Interface Changes
+ Due to internal changes, `ConcreteExecutor::run` will now always throw if the corresponding module for the `ConcreteExecutor` is not built [#1234](#1234)
+ The constructor of `experimental::distributed::Vector` was changed to only accept local vectors as `std::unique_ptr` [#1284](#1284)
+ The default parameters for the `solver::MultiGrid` were improved. In particular, the smoother defaults to one iteration of `Ir` with `Jacobi` preconditioner, and the coarse grid solver uses the new direct solver with LU factorization. [#1291](#1291) [#1327](#1327)
+ The `iteration_complete` event gained a more expressive overload with additional parameters, the old overloads were deprecated. [#1288](#1288) [#1327](#1327)

### Deprecations
+ Deprecated less expressive `iteration_complete` event. Users are advised to now implement the function `void iteration_complete(const LinOp* solver, const LinOp* b, const LinOp* x, const size_type& it, const LinOp* r, const LinOp* tau, const LinOp* implicit_tau_sq, const array<stopping_status>* status, bool stopped)` [#1288](#1288)

### Added Features
+ A distributed Schwarz preconditioner. [#1248](#1248)
+ A GCR solver [#1239](#1239)
+ Flexible Gmres solver [#1244](#1244)
+ Enable Gmres solver for distributed matrices and vectors [#1201](#1201)
+ An example that uses Kokkos to assemble the system matrix [#1216](#1216)
+ A symbolic LU factorization allowing the `gko::experimental::factorization::Lu` and `gko::experimental::solver::Direct` classes to be used for matrices with non-symmetric sparsity pattern [#1210](#1210)
+ A numerical Cholesky factorization [#1215](#1215)
+ Symbolic factorizations in host-side operations are now wrapped in a host-side `Operation` to make their execution visible to loggers. This means that profiling loggers and benchmarks are no longer missing a separate entry for their runtime [#1232](#1232)
+ Symbolic factorization benchmark [#1302](#1302)
+ The `ProfilerHook` logger allows annotating the Ginkgo execution (apply, operations, ...) for profiling frameworks like NVTX, ROCTX and TAU. [#1055](#1055)
+ `ProfilerHook::created_(nested_)summary` allows the generation of a lightweight runtime profile over all Ginkgo functions written to a user-defined stream [#1270](#1270) for both host and device timing functionality [#1313](#1313)
+ It is now possible to enable host buffers for MPI communications at runtime even if the compile option `GINKGO_FORCE_GPU_AWARE_MPI` is set. [#1228](#1228)
+ A stencil matrices generator (5-pt, 7-pt, 9-pt, and 27-pt) for benchmarks [#1204](#1204)
+ Distributed benchmarks (multi-vector blas, SpMV, solver) [#1204](#1204)
+ Benchmarks for CSR sorting and lookup [#1219](#1219)
+ A timer for MPI benchmarks that reports the longest time [#1217](#1217)
+ A `timer_method=min|max|average|median` flag for benchmark timing summary [#1294](#1294)
+ Support for non-default streams in CUDA and HIP executors [#1236](#1236)
+ METIS integration for nested dissection reordering [#1296](#1296)
+ SuiteSparse AMD integration for fillin-reducing reordering [#1328](#1328)
+ Csr mixed-precision SpMV support [#1319](#1319)
+ A `with_loggers` function for all `Factory` parameters [#1337](#1337)

### Improvements
+ Improve naming of kernel operations for loggers [#1277](#1277)
+ Annotate solver iterations in `ProfilerHook` [#1290](#1290)
+ Allow using the profiler hooks and inline input strings in benchmarks [#1342](#1342)
+ Allow passing smart pointers in place of raw pointers to most matrix functions. This means that things like `vec->compute_norm2(x.get())` or `vec->compute_norm2(lend(x))` can be simplified to `vec->compute_norm2(x)` [#1279](#1279) [#1261](#1261)
+ Catch overflows in prefix sum operations, which makes Ginkgo's operations much less likely to crash. This also improves the performance of the prefix sum kernel [#1303](#1303)
+ Make the installed GinkgoConfig.cmake file relocatable and follow more best practices [#1325](#1325)

### Fixes
+ Fix OpenMPI version check [#1200](#1200)
+ Fix the MPI C++ type binding by using the C binding [#1306](#1306)
+ Fix runtime failures for one-sided MPI wrapper functions observed on some OpenMPI versions [#1249](#1249)
+ Disable thread pinning with GPU executors due to poor performance [#1230](#1230)
+ Fix hwloc version detection [#1266](#1266)
+ Fix PAPI detection in non-implicit include directories [#1268](#1268)
+ Fix PAPI support for newer PAPI versions: [#1321](#1321)
+ Fix pkg-config file generation for library paths outside prefix [#1271](#1271)
+ Fix various build failures with ROCm 5.4, CUDA 12, and OneAPI 6 [#1214](#1214), [#1235](#1235), [#1251](#1251)
+ Fix incorrect read for skew-symmetric MatrixMarket files with explicit diagonal entries [#1272](#1272)
+ Fix handling of missing diagonal entries in symbolic factorizations [#1263](#1263)
+ Fix segmentation fault in benchmark matrix construction [#1299](#1299)
+ Fix the stencil matrix creation for benchmarking [#1305](#1305)
+ Fix the additional residual check in IR [#1307](#1307)
+ Fix the cuSPARSE CSR SpMM issue on single strided vector when cuda >= 11.6 [#1322](#1322) [#1331](#1331)
+ Fix Isai generation for large sparsity powers [#1327](#1327)
+ Fix Ginkgo compilation and test with NVHPC >= 22.7 [#1331](#1331)
+ Fix Ginkgo compilation of 32 bit binaries with MSVC [#1349](#1349)