
Add gpu timer and use seconds in benchmark #669

Merged — 5 commits from gpu_timer into develop, Dec 17, 2020

Conversation

@yhmtsai (Member) commented Nov 24, 2020

This PR adds a GPU timer (based on CUDA/HIP events) to the benchmarks.
--gpu_timer=true/false selects which timer to use.
The GPU timer requires that the first and the last operation run on the GPU.

UPDATED: This PR also changes the reported timing results from nanoseconds to seconds.

I only added the GPU timer for the main time measurement.
For the component times (detail mode), I kept the original code because those sections usually contain some CPU operations.
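For readers not familiar with event-based GPU timing, here is a minimal, illustrative sketch of the underlying idea using the plain CUDA runtime API. It is not the timer class added by this PR; the class name and members are made up for the example.

```cpp
#include <cuda_runtime.h>

// Illustrative event-based GPU timer (not the PR's implementation).
// cudaEventElapsedTime reports milliseconds, so the result is converted to seconds.
class ExampleCudaTimer {
public:
    ExampleCudaTimer()
    {
        cudaEventCreate(&start_);
        cudaEventCreate(&stop_);
    }

    ~ExampleCudaTimer()
    {
        cudaEventDestroy(start_);
        cudaEventDestroy(stop_);
    }

    void tic() { cudaEventRecord(start_); }

    // Returns the elapsed time between tic() and toc() in seconds.
    double toc()
    {
        cudaEventRecord(stop_);
        cudaEventSynchronize(stop_);  // wait until the recorded work has finished
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start_, stop_);
        return static_cast<double>(ms) * 1e-3;
    }

private:
    cudaEvent_t start_;
    cudaEvent_t stop_;
};
```

Because both events are recorded on a GPU stream, only device work submitted between them is measured, which is why the first and the last operation have to run on the GPU; a HIP version looks the same with the corresponding hipEvent_t calls.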

@yhmtsai added the reg:benchmarking and 1:ST:ready-for-review labels Nov 24, 2020
@yhmtsai self-assigned this Nov 24, 2020
tic_called_ = true;
}

std::size_t toc()
Contributor

Why is it nice to have the return type as size_t? I see that Cuda and Hip timers actually return float which are then cast to size_t. Maybe the return type can just be kept as double?

Member Author

It is based on the chrono library.
In chrono, we can count the number of nanoseconds, so I use size_t to represent it.

Contributor

Okay. But it does not seem like good practice to cast a float to size_t... though the other way round (or better yet to double) is not too bad. [Side note: cppreference says nanosecond is a signed 64 bit type, not size_t].
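As a standalone illustration of that side note (not code from this PR), the representation type behind std::chrono::nanoseconds can be checked directly:

```cpp
#include <chrono>
#include <cstdint>
#include <type_traits>

// std::chrono::nanoseconds is defined over a signed integer type of at least
// 64 bits, so std::int64_t matches count() better than std::size_t does.
static_assert(std::is_signed<std::chrono::nanoseconds::rep>::value,
              "nanoseconds uses a signed representation");
static_assert(sizeof(std::chrono::nanoseconds::rep) >= sizeof(std::int64_t),
              "the representation is at least 64 bits wide");

int main() { return 0; }
```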

Member

I don't think it is a good idea to have toc() return something because it is not obvious what it returns and in any case, the return value is not really useful. It could be any of these return values:

  • The number of times the time has already been recorded
  • the average runtime up until this point
  • the sum of the runtime until this point
  • the runtime of the latest measurement (you have chosen this one)

Additionally, it is not clear that the returned value is in nanoseconds. All in all, I would recommend having this function not return anything.
If you want to know the latest result, I would prefer an additional function (std::chrono does the same).
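To make the suggestion concrete, here is a minimal sketch of the proposed interface shape; the class and accessor names are illustrative, although the PR later adds accessors along these lines (see the commit summary further down):

```cpp
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of a tic/toc interface where toc() returns nothing and results are
// queried through separate accessors, mirroring how std::chrono separates
// taking a time point from inspecting a duration.
class ExampleTimer {
public:
    void tic() { start_ = std::chrono::steady_clock::now(); }

    void toc()
    {
        auto stop = std::chrono::steady_clock::now();
        durations_ns_.push_back(
            std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start_)
                .count());
    }

    // Explicit query functions instead of an ambiguous toc() return value.
    // Precondition for get_latest_time: toc() was called at least once.
    std::int64_t get_latest_time() const { return durations_ns_.back(); }
    std::int64_t get_total_time() const
    {
        std::int64_t sum = 0;
        for (auto d : durations_ns_) sum += d;
        return sum;
    }
    std::size_t get_num_repetitions() const { return durations_ns_.size(); }

private:
    std::chrono::steady_clock::time_point start_;
    std::vector<std::int64_t> durations_ns_;
};
```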

benchmark/utils/timer.hpp — 3 outdated review threads (resolved)
@pratikvn (Member) left a comment

LGTM! Cleans up a lot of the benchmarking code!

std::chrono::duration_cast<std::chrono::nanoseconds>(g_tac -
g_tic) /
FLAGS_repetitions;
generate_timer->get_total_time() / FLAGS_repetitions;
Member

I guess you can use get_average_time here as well?

Member Author

The timer is outside the repetition loop, so total_time = average_time here.
To get the correct per-repetition time, we need total_time / FLAGS_repetitions (or, equivalently, average_time / FLAGS_repetitions).
Isn't total_time less confusing than average_time here?

#include "hip/base/device_guard.hip.hpp"


#endif // HAS_CUDA
Member

Suggested change
#endif // HAS_CUDA
#endif // HAS_HIP

public:
void tic()
{
assert(tic_called_ == false);
Member

Not sure if this is useful, since the assert is compiled out whenever NDEBUG is defined (as it is in Release builds). Maybe you can define your own simple GKO_ASSERT macro?

Contributor

I guess the point would be to avoid this check in release, so as to disturb the timing as little as possible? In case someone suspects issues, they can run in debug and check. Admittedly, the extra assert would not usually matter for the timing, but still, you never know how it might be used later.
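For reference, a hypothetical always-on assertion macro of the kind mentioned here (this is not Ginkgo's actual GKO_ASSERT definition) could look like:

```cpp
#include <cstdio>
#include <cstdlib>

// Hypothetical always-on assert: unlike <cassert>, it is not compiled out when
// NDEBUG is defined, so misuse of tic()/toc() is caught even in Release builds.
#define EXAMPLE_ALWAYS_ASSERT(cond)                                          \
    do {                                                                     \
        if (!(cond)) {                                                       \
            std::fprintf(stderr, "Assertion failed: %s (%s:%d)\n", #cond,    \
                         __FILE__, __LINE__);                                \
            std::abort();                                                    \
        }                                                                    \
    } while (false)
```

Whether the extra branch is acceptable inside the timed region is exactly the trade-off discussed above.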


std::size_t get_total_time() { return total_duration_ns_; }

std::size_t get_tictoc_num() { return duration_ns_.size(); }
Member

Maybe you can rename this to num_repetitions or something like that?

Member

I agree, that would be more self-explanatory.

@thoasm previously requested changes Nov 27, 2020

@thoasm (Member) left a comment

When using the script, you currently can't measure with GPU time.

FLAGS_repetitions;
add_or_set_member(this_precond_data["apply"], "time",
apply_time.count(), allocator);
auto apply_time = apply_timer->get_total_time() / FLAGS_repetitions;
Member

Suggested change
auto apply_time = apply_timer->get_total_time() / FLAGS_repetitions;
auto apply_time = apply_timer->get_average_time();

Member

Since this is outside the loop and the timer doesn't know about FLAGS_repetitions, wouldn't this give a wrong result? (equivalent to get_total_time()).

Member

The apply_timer does know how often it was started and stopped, therefore, it is independent of the FLAGS_repetitions (and in my opinion safer) when using get_average_time().

@tcojean (Member) commented Dec 3, 2020

It's independent only if you put it inside the for loop; AFAIK that is not the case here. See the context of the highlighted line:
https://github.com/ginkgo-project/ginkgo/pull/669/files#diff-d573d9686e53b5ce4b41de8061f46d1a30693d2dfa59795bec0680f6bdd4e4dcR178-R183

You also get less timing overhead if you put it outside the for loops when you can, as you will synchronize only once instead of at every iteration.

Member Author

We put the timer outside the for-loop, so the apply_timer cannot know the number of loop iterations. As Terry said, I keep the current timer placement for less overhead. If we need to refill/reset x and b, as in the spmv or solver benchmarks, we can put the timer inside the for-loop.
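To make the placement trade-off concrete, here is a generic sketch (not the benchmark code itself): `Timer` stands in for the PR's timer, `run_apply` for the benchmarked operation, and `num_repetitions` for FLAGS_repetitions; `get_total_time` appears in the diff above, while `compute_average_time` is the accessor name the PR ends up using.

```cpp
// Timer outside the loop: only one synchronization, but the timer sees a
// single tic/toc pair, so the caller has to divide by the repetition count.
template <typename Timer, typename Operation>
auto time_outside_loop(Timer& timer, Operation run_apply,
                       unsigned num_repetitions)
{
    timer.tic();
    for (unsigned i = 0; i < num_repetitions; ++i) {
        run_apply();
    }
    timer.toc();
    return timer.get_total_time() / num_repetitions;
}

// Timer inside the loop: the timer records every repetition itself, so an
// average accessor is meaningful, at the cost of synchronizing each iteration.
template <typename Timer, typename Operation>
auto time_inside_loop(Timer& timer, Operation run_apply,
                      unsigned num_repetitions)
{
    for (unsigned i = 0; i < num_repetitions; ++i) {
        timer.tic();
        run_apply();
        timer.toc();
    }
    return timer.compute_average_time();
}
```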

benchmark/run_all_benchmarks.sh — 4 outdated review threads (resolved)
benchmark/utils/timer.hpp — 1 review thread (resolved)
codecov bot commented Nov 28, 2020

Codecov Report

Merging #669 (a149032) into develop (2a951ac) will increase coverage by 0.02%.
The diff coverage is n/a.

@@             Coverage Diff             @@
##           develop     #669      +/-   ##
===========================================
+ Coverage    92.87%   92.89%   +0.02%     
===========================================
  Files          333      333              
  Lines        24266    24265       -1     
===========================================
+ Hits         22537    22541       +4     
+ Misses        1729     1724       -5     
Impacted Files Coverage Δ
omp/reorder/rcm_kernels.cpp 98.13% <0.00%> (+3.07%) ⬆️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2a951ac...a149032.

@thoasm dismissed their stale review November 30, 2020

Suggested changes have been made (but not yet fully reviewed by me).

@thoasm self-requested a review November 30, 2020
@Slaedr (Contributor) left a comment

Looks great! I would still prefer that toc etc. return double, but int64_t is also reasonable so I'll leave it to you.

@tcojean (Member) left a comment

LGTM. Please add documentation for the different classes and the public API, though. Also, please add some documentation to BENCHMARKING.md.

"executor is cuda or hip");


class Timer {
Member

Should we use the gko namespace?

Member Author

Should I put it in the gko:: namespace? The timer is only used in the benchmarks, so I am not sure whether the gko namespace is appropriate.

@thoasm (Member) left a comment

I agree with @tcojean: I would like to have more documentation.

benchmark/utils/timer.hpp — 1 outdated review thread (resolved)
}
#endif // HAS_HIP
}
// Not use gpu_timer or not cuda/hip executor
Member

nit:

Suggested change
// Not use gpu_timer or not cuda/hip executor
// No cuda/hip executor available or no gpu_timer used

auto duration_time =
std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start_)
.count();
return static_cast<std::int64_t>(duration_time);
Contributor

I guess a cast is not required here anymore.

Suggested change
return static_cast<std::int64_t>(duration_time);
return duration_time;
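As a standalone illustration (not the PR code): count() on a nanoseconds duration already returns std::chrono::nanoseconds::rep, a signed 64-bit integer, so it can be stored in or returned as std::int64_t without an explicit cast.

```cpp
#include <chrono>
#include <cstdint>

// The duration_cast already yields an integer tick count; no static_cast needed.
std::int64_t elapsed_ns(std::chrono::steady_clock::time_point start,
                        std::chrono::steady_clock::time_point stop)
{
    return std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start)
        .count();
}
```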

@yhmtsai force-pushed the gpu_timer branch 2 times, most recently from 01581b1 to 3ecb031 on December 7, 2020
@yhmtsai (Member Author) commented Dec 7, 2020

format!

@tcojean (Member) left a comment

LGTM.

benchmark/utils/timer.hpp — 8 outdated review threads (resolved)
- fix gpu_timer in script
- use int64_t
- add const to function
- update documentation
- add get_latest_time
- rename get_average_time -> compute_average_time

Co-authored-by: Aditya Kashi <aditya.kashi@kit.edu>
Co-authored-by: Pratik Nayak <pratikvn@protonmail.com>
Co-authored-by: Terry Cojean <terry.cojean@kit.edu>
Co-authored-by: Thomas Grützmacher <thomas.gruetzmacher@kit.edu>
{
exec_->synchronize();
auto stop = std::chrono::steady_clock::now();
auto duration_time = get_duration_in_seconds(stop - start_);
Contributor

We could just use the standard way of doing this as shown here: https://en.cppreference.com/w/cpp/chrono/steady_clock/now
We don't really need an entire function and file to do this.

Suggested change
auto duration_time = get_duration_in_seconds(stop - start_);
std::chrono::duration<double> duration_time = stop - start_;
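For reference, the pattern from the linked cppreference page as a small standalone example (illustrative, not the PR code):

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

int main()
{
    auto start = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(std::chrono::milliseconds(20));  // stand-in workload
    auto stop = std::chrono::steady_clock::now();

    // Assigning to duration<double> converts the clock's native ticks to
    // seconds directly; no duration_cast or helper function is required.
    std::chrono::duration<double> elapsed = stop - start;
    std::printf("elapsed: %f s\n", elapsed.count());
    return 0;
}
```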

Member Author

Thanks for the example, I did not know that before.

exec_->synchronize();
auto stop = std::chrono::steady_clock::now();
auto duration_time = get_duration_in_seconds(stop - start_);
return duration_time;
Contributor

Suggested change
return duration_time;
return duration_time.count();

@yhmtsai changed the title from "Add gpu timer for benchmark" to "Add gpu timer and use seconds in benchmark" Dec 15, 2020
@yhmtsai added the 1:ST:ready-to-merge label and removed the 1:ST:ready-for-review label Dec 17, 2020
Co-authored-by: Aditya Kashi <aditya.kashi@kit.edu>
sonarcloud bot commented Dec 17, 2020

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rated A)
Vulnerabilities: 0 (rated A)
Security Hotspots: 0 (rated A)
Code Smells: 10 (rated A)

Coverage: 0.0%
Duplication: 0.0%

@yhmtsai merged commit 297d732 into develop Dec 17, 2020
@yhmtsai deleted the gpu_timer branch December 17, 2020 22:51
@tcojean mentioned this pull request Jun 23, 2021
tcojean added a commit that referenced this pull request Aug 20, 2021
Ginkgo release 1.4.0

The Ginkgo team is proud to announce the new Ginkgo minor release 1.4.0. This
release brings most of the Ginkgo functionality to the Intel DPC++ ecosystem
which enables Intel-GPU and CPU execution. The only Ginkgo features which have
not been ported yet are some preconditioners.

Ginkgo's mixed-precision support is greatly enhanced thanks to:
1. The new Accessor concept, which allows writing kernels featuring on-the-fly
memory compression, among other features. The accessor can be used as
header-only, see the [accessor BLAS benchmarks repository](https://github.com/ginkgo-project/accessor-BLAS/tree/develop) as a usage example.
2. All LinOps now transparently support mixed-precision execution. By default,
this is done through a temporary copy which may have a performance impact but
already allows mixed-precision research.

Native mixed-precision ELL kernels are implemented which do not see this cost.
The accessor is also leveraged in a new CB-GMRES solver which allows for
performance improvements by compressing the Krylov basis vectors. Many other
features have been added to Ginkgo, such as reordering support, a new IDR
solver, Incomplete Cholesky preconditioner, matrix assembly support (only CPU
for now), machine topology information, and more!

Supported systems and requirements:
+ For all platforms, cmake 3.13+
+ C++14 compliant compiler
+ Linux and MacOS
  + gcc: 5.3+, 6.3+, 7.3+, all versions after 8.1+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + CUDA module: CUDA 9.0+
  + HIP module: ROCm 3.5+
  + DPC++ module: Intel OneAPI 2021.3. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: gcc 5.3+, 6.3+, 7.3+, all versions after 8.1+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.0+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add a new DPC++ Executor for SYCL execution and other base utilities
  [#648](#648), [#661](#661), [#757](#757), [#832](#832)
+ Port matrix formats, solvers and related kernels to DPC++. For some kernels,
  also make use of a shared kernel implementation for all executors (except
  Reference). [#710](#710), [#799](#799), [#779](#779), [#733](#733), [#844](#844), [#843](#843), [#789](#789), [#845](#845), [#849](#849), [#855](#855), [#856](#856)
+ Add accessors which allow multi-precision kernels, among other things.
  [#643](#643), [#708](#708)
+ Add support for mixed precision operations through apply in all LinOps. [#677](#677)
+ Add incomplete Cholesky factorizations and preconditioners as well as some
  improvements to ILU. [#672](#672), [#837](#837), [#846](#846)
+ Add an AMGX implementation and kernels on all devices but DPC++.
  [#528](#528), [#695](#695), [#860](#860)
+ Add a new mixed-precision capability solver, Compressed Basis GMRES
  (CB-GMRES). [#693](#693), [#763](#763)
+ Add the IDR(s) solver. [#620](#620)
+ Add a new fixed-size block CSR matrix format (for the Reference executor).
  [#671](#671), [#730](#730)
+ Add native mixed-precision support to the ELL format. [#717](#717), [#780](#780)
+ Add Reverse Cuthill-McKee reordering [#500](#500), [#649](#649)
+ Add matrix assembly support on CPUs. [#644](#644)
+ Extends ISAI from triangular to general and spd matrices. [#690](#690)

Other additions:
+ Add the possibility to apply real matrices to complex vectors.
  [#655](#655), [#658](#658)
+ Add functions to compute the absolute of a matrix format. [#636](#636)
+ Add symmetric permutation and improve existing permutations.
  [#684](#684), [#657](#657), [#663](#663)
+ Add a MachineTopology class with HWLOC support [#554](#554), [#697](#697)
+ Add an implicit residual norm criterion. [#702](#702), [#818](#818), [#850](#850)
+ Row-major accessor is generalized to more than 2 dimensions and a new
  "block column-major" accessor has been added. [#707](#707)
+ Add a heat equation example. [#698](#698), [#706](#706)
+ Add ccache support in CMake and CI. [#725](#725), [#739](#739)
+ Allow tuning and benchmarking variables non intrusively. [#692](#692)
+ Add triangular solver benchmark [#664](#664)
+ Add benchmarks for BLAS operations [#772](#772), [#829](#829)
+ Add support for different precisions and consistent index types in benchmarks.
  [#675](#675), [#828](#828)
+ Add a Github bot system to facilitate development and PR management.
  [#667](#667), [#674](#674), [#689](#689), [#853](#853)
+ Add Intel (DPC++) CI support and enable CI on HPC systems. [#736](#736), [#751](#751), [#781](#781)
+ Add ssh debugging for Github Actions CI. [#749](#749)
+ Add pipeline segmentation for better CI speed. [#737](#737)


Changes:
+ Add a Scalar Jacobi specialization and kernels. [#808](#808), [#834](#834), [#854](#854)
+ Add implicit residual log for solvers and benchmarks. [#714](#714)
+ Change handling of the conjugate in the dense dot product. [#755](#755)
+ Improved Dense stride handling. [#774](#774)
+ Multiple improvements to the OpenMP kernels performance, including COO,
an exclusive prefix sum, and more. [#703](#703), [#765](#765), [#740](#740)
+ Allow specialization of submatrix and other dense creation functions in solvers. [#718](#718)
+ Improved Identity constructor and treatment of rectangular matrices. [#646](#646)
+ Allow CUDA/HIP executors to select allocation mode. [#758](#758)
+ Check if executors share the same memory. [#670](#670)
+ Improve test install and smoke testing support. [#721](#721)
+ Update the JOSS paper citation and add publications in the documentation.
  [#629](#629), [#724](#724)
+ Improve the version output. [#806](#806)
+ Add some utilities for dim and span. [#821](#821)
+ Improved solver and preconditioner benchmarks. [#660](#660)
+ Improve benchmark timing and output. [#669](#669), [#791](#791), [#801](#801), [#812](#812)


Fixes:
+ Sorting fix for the Jacobi preconditioner. [#659](#659)
+ Also log the first residual norm in CGS [#735](#735)
+ Fix BiCG and HIP CSR to work with complex matrices. [#651](#651)
+ Fix Coo SpMV on strided vectors. [#807](#807)
+ Fix segfault of extract_diagonal, add short-and-fat test. [#769](#769)
+ Fix device_reset issue by moving counter/mutex to device. [#810](#810)
+ Fix `EnableLogging` superclass. [#841](#841)
+ Support ROCm 4.1.x and breaking HIP_PLATFORM changes. [#726](#726)
+ Decreased test size for a few device tests. [#742](#742)
+ Fix multiple issues with our CMake HIP and RPATH setup.
  [#712](#712), [#745](#745), [#709](#709)
+ Cleanup our CMake installation step. [#713](#713)
+ Various simplification and fixes to the Windows CMake setup. [#720](#720), [#785](#785)
+ Simplify third-party integration. [#786](#786)
+ Improve Ginkgo device arch flags management. [#696](#696)
+ Other fixes and improvements to the CMake setup.
  [#685](#685), [#792](#792), [#705](#705), [#836](#836)
+ Clarification of dense norm documentation [#784](#784)
+ Various development tools fixes and improvements [#738](#738), [#830](#830), [#840](#840)
+ Make multiple operators/constructors explicit. [#650](#650), [#761](#761)
+ Fix some issues, memory leaks and warnings found by MSVC.
  [#666](#666), [#731](#731)
+ Improved solver memory estimates and consistent iteration counts [#691](#691)
+ Various logger improvements and fixes [#728](#728), [#743](#743), [#754](#754)
+ Fix for ForwardIterator requirements in iterator_factory. [#665](#665)
+ Various benchmark fixes. [#647](#647), [#673](#673), [#722](#722)
+ Various CI fixes and improvements. [#642](#642), [#641](#641), [#795](#795), [#783](#783), [#793](#793), [#852](#852)


Related PR: #857
tcojean added a commit that referenced this pull request Aug 23, 2021
Release 1.4.0 to master

Related PR: #866