Commit
Merge branch 'flatironinstitute:master' into interp-vectorization
DiamonDinoia authored Jul 8, 2024
2 parents b2d7e3e + 98637bc commit c1e8ae5
Showing 10 changed files with 95 additions and 40 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/python_build_win.ps1
Original file line number Diff line number Diff line change
@@ -15,7 +15,7 @@ Add-Content -Path make.inc -Value "CXXFLAGS+= -march=x86-64"
Set-Variable libvcruntime140_a -Value ([IO.Path]::Combine((Split-Path -Path $PYTHON), "libs", 'libvcruntime140.a'))
Copy-Item -Path .\.github\workflows\libvcruntime140.a -Destination $libvcruntime140_a

python -m pip install --upgrade setuptools wheel numpy pip
python -m pip install --upgrade setuptools==70.1.1 wheel numpy pip
if (-not $?) {throw "Failed pip install"}

# mingw gcc compiler patch to work with python
64 changes: 58 additions & 6 deletions docs/c_gpu.rst
Expand Up @@ -5,7 +5,7 @@ There are four steps needed to call cuFINUFFT from C: 1) making a plan, 2) setti
The simplest case is to call them in order, i.e., 1234.
However, it is possible to repeat 3 with new strength or coefficient data, or to repeat 2 to choose new nonuniform points in order to do one or more step 3's again, before destroying.
For instance, 123334 and 1232334 are allowed.
If non-standard algorithm options are desired, an extra function is needed before making the plan (see bottom of this page).
If non-standard algorithm options are desired, an extra function is needed before making the plan; see bottom of this page for options.
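The allowed call orderings above (1=makeplan, 2=setpts, 3=execute, 4=destroy) can be sketched as a toy grammar check. This is purely illustrative and not part of the cuFINUFFT API: a plan is made once, then one or more rounds of setpts each followed by one or more executes, then destroyed.

```python
import re

# Hypothetical validator for the call-order rule: 1=makeplan, 2=setpts,
# 3=execute, 4=destroy.  One makeplan, then one or more setpts each
# followed by one or more executes, then one destroy.
LIFECYCLE = re.compile(r"1(23+)+4")

def is_valid_sequence(seq: str) -> bool:
    """Return True if seq is an allowed cuFINUFFT call order."""
    return LIFECYCLE.fullmatch(seq) is not None

assert is_valid_sequence("1234")       # the simplest case
assert is_valid_sequence("123334")     # repeat 3 with new strengths
assert is_valid_sequence("1232334")    # repeat 2 to choose new points
assert not is_valid_sequence("1324")   # execute before setpts: not allowed
```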

This API matches very closely that of the plan interface to FINUFFT (in turn modeled on those of FFTW and NFFT).
Here is the full documentation for these functions.
@@ -289,8 +289,8 @@ This deallocates all arrays inside the ``plan`` struct, freeing all internal mem
Note: the plan (being just a pointer to the plan struct) is not actually "destroyed"; rather, its internal struct is destroyed.
There is no need for further deallocation of the plan.

Non-standard options
~~~~~~~~~~~~~~~~~~~~
Options for GPU code
--------------------

The last argument in the above plan stage accepts a pointer to an options structure, which is the same in both single and double precision.
To create such a structure, use:
@@ -300,7 +300,59 @@ To create such a structure, use:
cufinufft_opts opts;
cufinufft_default_opts(&opts);
Then you may change fields of ``opts`` by hand, finally pass ``&opts`` in as the last argument to ``cufinufft_makeplan`` or ``cufinufftf_makeplan``.
The options fields are currently only documented in the ``include/cufinufft_opts.h``.
Then you may change fields of ``opts`` by hand, finally pass ``&opts`` in as the last argument to ``cufinufft_makeplan`` or ``cufinufftf_makeplan``. Here are the options, with the important user-controllable ones documented. For their default values, see below.

For examples of this advanced usage, see ``test/cuda/cufinufft*.cu``
Data handling options
~~~~~~~~~~~~~~~~~~~~~

**modeord**: Fourier coefficient frequency index ordering; see the CPU option of the same name :ref:`modeord<modeord>`.
As a reminder, ``modeord=0`` selects increasing frequencies (negative through positive) in each dimension,
while ``modeord=1`` selects FFT-style ordering starting at zero and wrapping over to negative frequencies half way through.
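As an illustrative NumPy sketch (not part of cuFINUFFT), the two orderings for an even number of modes N=8 are related by an FFT shift of the coefficient index array:

```python
import numpy as np

N = 8
# modeord=0: increasing frequencies, -N/2 ... N/2-1
freqs_modeord0 = np.arange(-N // 2, N // 2)
# modeord=1: FFT-style ordering, 0 ... N/2-1 then wrapping to -N/2 ... -1
freqs_modeord1 = (np.fft.fftfreq(N) * N).astype(int)

# The two layouts are related by (inverse) FFT shifts:
assert np.array_equal(np.fft.fftshift(freqs_modeord1), freqs_modeord0)
assert np.array_equal(np.fft.ifftshift(freqs_modeord0), freqs_modeord1)
```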

**gpu_device_id**: Sets the GPU device ID. Leave at default unless you know what you're doing. [To be documented]

Diagnostic options
~~~~~~~~~~~~~~~~~~

**gpu_spreadinterponly**: if ``0``, do the NUFFT as intended. If ``1``, omit the FFT and kernel-FT deconvolution steps, so that the returned values are not a valid NUFFT result.
A nonzero value is *only* to be used to aid timing tests (although currently there are no timing codes that exploit this option), and will give wrong or undefined answers for the NUFFT transforms!


Algorithm performance options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**gpu_method**: Spreader/interpolator algorithm.

* ``gpu_method=0`` : makes an automatic choice of one of the below methods, based on our heuristics.

* ``gpu_method=1`` : uses a nonuniform-point-driven method, either unsorted (referred to as GM in our paper) or sorted (called GM-sort in our paper), depending on the option ``gpu_sort`` below

* ``gpu_method=2`` : for spreading only, i.e., type 1 transforms, uses a shared-memory output-block-driven method, referred to as SM in our paper. Has no effect for interpolation (type 2 transforms)

* ``gpu_method>2`` : (various unsupported experimental methods due to Melody Shih, not for regular users. E.g., ``3`` tests an idea of Paul Springer's to group NU points when spreading, ``4`` is a block gather method of possible interest.)

**gpu_sort**: ``0`` do not sort nonuniform points, ``1`` do sort nonuniform points. Only has an effect when ``gpu_method=1`` (or if this method has been internally chosen when ``gpu_method=0``). Unlike the CPU code, there is no auto-choice, since in our experience sorting is fast and always helps. For highly structured NU-point inputs it is possible that ``gpu_sort=0`` is faster.

**gpu_kerevalmeth**: ``0`` use direct (reference) kernel evaluation, which is not recommended for speed (however, it allows a nonstandard ``opts.upsampfac`` to be used). ``1`` use Horner piecewise polynomial evaluation (recommended; enforces ``upsampfac=2.0``).
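Horner's rule itself (the evaluation scheme that ``gpu_kerevalmeth=1`` applies to the piecewise polynomial kernel) can be sketched as follows; the coefficients here are arbitrary stand-ins, not the actual kernel's:

```python
import numpy as np

def horner(coeffs, x):
    """Evaluate c[0]*x^(n-1) + ... + c[n-1] at x by Horner's rule
    (highest-degree coefficient first, matching np.polyval)."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c   # one fused multiply-add per coefficient
    return acc

coeffs = [2.0, -1.0, 0.5, 3.0]   # stand-in: 2x^3 - x^2 + 0.5x + 3
assert np.isclose(horner(coeffs, 1.5), np.polyval(coeffs, 1.5))
```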

**upsampfac**: set upsampling factor. For the recommended ``kerevalmeth=1`` you must choose the standard ``upsampfac=2.0``. If you are willing to risk a slower kernel evaluation, you may set any ``upsampfac>1.0``, but this is experimental and unsupported.

**gpu_maxsubprobsize**: maximum number of NU points to be handled in a single subproblem in the spreading SM method (``gpu_method=2`` only)

**gpu_{o}binsize{x,y,z}**: various bin sizes for the sorting (GM-sort) or SM subproblem methods. Values of ``-1`` trigger the heuristically set default values. Leave at default unless you know what you're doing. [To be documented]

**gpu_maxbatchsize**: ``0`` use a heuristically chosen batch size for the vectorized (many transforms with the same NU points) interface; otherwise this value sets the batch size.

**gpu_stream**: CUDA stream to use. Leave at default unless you know what you're doing. [To be documented]


For all GPU option default values we refer to the source code in
``src/cuda/cufinufft.cu:cufinufft_default_opts``:

.. literalinclude:: ../src/cuda/cufinufft.cu
:start-after: @gpu_defopts_start
:end-before: @gpu_defopts_end

For examples of advanced options-switching usage, see ``test/cuda/cufinufft*.cu`` and ``perftest/cuda/cuperftest.cu``.

You may notice a lack of debugging/timing options in the GPU code. This is to avoid CUDA writing to stdout. Please help us out by adding some of these.
9 changes: 5 additions & 4 deletions docs/julia.rst
@@ -1,9 +1,10 @@
Julia interfaces
================
Julia interfaces (CPU and GPU)
==============================

Ludvig af Klinteberg, Libin Lu, and others, have built `FINUFFT.jl <https://github.com/ludvigak/FINUFFT.jl>`_, an interface from the `Julia <https://julialang.org/>`_ language. This package supports 32-bit and 64-bit precision, and automatically downloads and runs pre-built binaries of the FINUFFT library for Linux, macOS, Windows and FreeBSD (for a full list see `finufft_jll <https://github.com/JuliaBinaryWrappers/finufft_jll.jl>`_).
Principal author Ludvig af Klinteberg and others have built and maintain `FINUFFT.jl <https://github.com/ludvigak/FINUFFT.jl>`_, an interface from the `Julia <https://julialang.org/>`_ language. This official Julia package supports 32-bit and 64-bit precision, now on both CPU and GPU (via `CUDA.jl`), via a common interface.
The Julia package installation automatically downloads pre-built CPU binaries of the FINUFFT library for Linux, macOS, Windows and FreeBSD (for a full list see `finufft_jll <https://github.com/JuliaBinaryWrappers/finufft_jll.jl>`_), and the GPU binary for Linux (see `cufinufft_jll <https://github.com/JuliaBinaryWrappers/cufinufft_jll.jl>`_).

`FINUFFT.jl` has now (in 2022) itself been wrapped as part of `NFFT.jl <https://juliamath.github.io/NFFT.jl/dev/performance/>`_, which contains an "abstract" interface
`FINUFFT.jl` has itself been wrapped as part of `NFFT.jl <https://juliamath.github.io/NFFT.jl/dev/performance/>`_, which contains an "abstract" interface
to any NUFFT in Julia, with FINUFFT as an example.
Their
`performance comparison page <https://juliamath.github.io/NFFT.jl/dev/performance/>`_
6 changes: 3 additions & 3 deletions docs/opts.rst
@@ -1,7 +1,7 @@
.. _opts:

Options parameters
==================
Options parameters (CPU)
========================

Aside from the mandatory inputs (dimension, type,
nonuniform points, strengths or coefficients, and, in C++/C/Fortran/MATLAB,
@@ -140,7 +140,7 @@ automatically from call to call in the same executable (incidentally, also in th
* ``spread_sort=2`` : uses a heuristic to decide whether to sort or not.

The heuristic bakes in empirical findings such as: generally it is not worth sorting in 1D type 2 transforms, or when the number of nonuniform points is small.
Do not change this from its default unless you obsever.
Feel free to try experimenting here; if you have highly-structured nonuniform point ordering (such as coming from polar-grid or propeller-type MRI k-points) it may be advantageous not to sort.

**spread_kerevalmeth**: Kernel evaluation method in spreader/interpolator.
This should not be changed from its default value, unless you are an
4 changes: 2 additions & 2 deletions makefile
@@ -471,7 +471,7 @@ ifneq ($(MINGW),ON)
rm -f $(STATICLIB) $(DYNLIB)
rm -f matlab/*.mex*
rm -f $(TESTS) test/results/*.out perftest/results/*.out
rm -f $(EXAMPLES) $(FE) $(ST) $(STF) $(GTT) $(GTTF)
rm -f $(EXAMPLES) $(FE) $(ST) $(STF) $(STA) $(STAF) $(GTT) $(GTTF)
rm -f perftest/manysmallprobs
rm -f examples/core test/core perftest/core $(FE_DIR)/core
else
@@ -480,7 +480,7 @@ else
del matlab\*.mex*
for %%f in ($(subst /,\, $(TESTS))) do ((if exist %%f del %%f) & (if exist %%f.exe del %%f.exe))
del test\results\*.out perftest\results\*.out
for %%f in ($(subst /,\, $(EXAMPLES)), $(subst /,\,$(FE)), $(subst /,\,$(ST)), $(subst /,\,$(STF)), $(subst /,\,$(GTT)), $(subst /,\,$(GTTF))) do ((if exist %%f del %%f) & (if exist %%f.exe del %%f.exe))
for %%f in ($(subst /,\, $(EXAMPLES)), $(subst /,\,$(FE)), $(subst /,\,$(ST)), $(subst /,\,$(STF)), $(subst /,\,$(STA)), $(subst /,\,$(STAF)), $(subst /,\,$(GTT)), $(subst /,\,$(GTTF))) do ((if exist %%f del %%f) & (if exist %%f.exe del %%f.exe))
del perftest\manysmallprobs
del examples\core, test\core, perftest\core, $(subst /,\, $(FE_DIR))\core
endif
2 changes: 1 addition & 1 deletion perftest/compare_spreads.jl
@@ -4,7 +4,7 @@ using CairoMakie
using JLD2 # for load/save arrays to file
using UnPack

fnam = "results/master-vs-svec2_gcc114_5700U_nthr1" # outfile head
fnam = "/home/alex/numerics/finufft/perftest/results/7-1-24-vs-ivec_gcc114_5700U_nthr1" # outfile head
# locations of pair of FINUFFT repos to compare...
repo1 = "/home/alex/numerics/finufft"
repo2 = "/home/alex/numerics/nufft/finufft-svec2"
39 changes: 18 additions & 21 deletions src/cuda/cufinufft.cu
@@ -92,39 +92,36 @@ void cufinufft_default_opts(cufinufft_opts *opts)
Options with prefix "gpu_" are used for gpu code.
Notes:
Values set in this function for different type and dimensions are preferable
based on experiments. User can experiement with different settings by
replacing them after calling this function.
1) Values set in this function for different type and dimensions are preferable
based on experiments. User can experiment with different settings by
changing them after calling this function.
2) Sphinx sucks the below code block into the web docs, hence keep it clean.
Melody Shih 07/25/19; Barnett 2/5/21.
Melody Shih 07/25/19; Barnett 2/5/21, tidied for sphinx 7/2/24.
*/
{
opts->upsampfac = 2.0;
// sphinx tag (don't remove): @gpu_defopts_start
// data handling opts...
opts->modeord = 0;
opts->gpu_device_id = 0;

/* following options are for gpu */
opts->gpu_sort = 1; // access nupts in an ordered way for nupts driven method
// diagnostic opts...
opts->gpu_spreadinterponly = 0;

// algorithm performance opts...
opts->gpu_method = 0;
opts->gpu_sort = 1;
opts->gpu_kerevalmeth = 1;
opts->upsampfac = 2.0;
opts->gpu_maxsubprobsize = 1024;
opts->gpu_obinsizex = -1;
opts->gpu_obinsizey = -1;
opts->gpu_obinsizez = -1;

opts->gpu_binsizex = -1;
opts->gpu_binsizey = -1;
opts->gpu_binsizez = -1;

opts->gpu_spreadinterponly = 0; // default to do the whole nufft

opts->gpu_maxbatchsize = 0; // Heuristically set
opts->gpu_maxbatchsize = 0;
opts->gpu_stream = cudaStreamDefault;

opts->gpu_kerevalmeth = 1; // Horner

opts->gpu_method = 0; // Auto method (2 for type 1, 2 for type 2).

// By default, only use device 0
opts->gpu_device_id = 0;

opts->modeord = 0;
// sphinx tag (don't remove): @gpu_defopts_end
}
}
2 changes: 1 addition & 1 deletion src/finufft.cpp
@@ -550,7 +550,7 @@ void FINUFFT_DEFAULT_OPTS(finufft_opts *o)
o->showwarn = 1;

o->nthreads = 0;
o->fftw = FFTW_ESTIMATE; //
o->fftw = FFTW_ESTIMATE;
o->spread_sort = 2;
o->spread_kerevalmeth = 1;
o->spread_kerpad = 1;
6 changes: 6 additions & 0 deletions src/spreadinterp.cpp
@@ -750,6 +750,12 @@ void evaluate_kernel_vector(FLT *ker, FLT *args, const finufft_spread_opts &opts
if (!(opts.flags & TF_OMIT_EVALUATE_EXPONENTIAL))
for (int i = 0; i < Npad; i++) // Loop 2: Compute exponentials
ker[i] = exp(ker[i]);
if (opts.kerpad) {
// The padded part should be zero: in spread_subproblem_nd_kernels there
// are out-of-bound writes to trg arrays using these padded entries
for (int i = N; i < Npad; ++i)
ker[i] = 0.0;
}
} else {
for (int i = 0; i < N; i++) // dummy for timing only
ker[i] = 1.0;
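A minimal NumPy sketch (with hypothetical sizes, not the library's actual ones) of what the zero-padding fix above does: kernel values fill the first N slots of a buffer padded to a multiple of 4 for vectorization, and the tail is cleared so vectorized reads past N see zeros rather than stale data:

```python
import numpy as np

N = 7                       # stand-in kernel width
Npad = 4 * ((N + 3) // 4)   # round up to a multiple of 4 (here 8)

ker = np.empty(Npad)
ker[:N] = np.exp(np.linspace(-1.0, 0.0, N))  # stand-in kernel values
ker[N:Npad] = 0.0           # the fix: zero the padded tail

assert np.all(ker[N:] == 0.0)
```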
1 change: 0 additions & 1 deletion tools/finufft/build-wheels-linux.sh
@@ -25,7 +25,6 @@ versions=("cp36-cp36m"
"cp310-cp310"
"cp311-cp311"
"cp312-cp312"
"pp38-pypy38_pp73"
"pp39-pypy39_pp73")

pys=()
