From b2fdfeb5fdef793638266ff7e6ca76671135bf3d Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Wed, 13 Nov 2024 17:27:07 +0000 Subject: [PATCH] build based on cb75d53 --- dev/.documenter-siteinfo.json | 2 +- dev/api/accumulate/index.html | 8 +- dev/api/binarysearch/index.html | 4 +- dev/api/custom_structs/index.html | 4 +- dev/api/foreachindex/index.html | 53 +++++- dev/api/map/index.html | 4 +- dev/api/mapreduce/index.html | 4 +- dev/api/predicates/index.html | 4 +- dev/api/reduce/index.html | 4 +- dev/api/sort/index.html | 16 +- dev/api/task_partition/index.html | 6 +- dev/api/using_backends/index.html | 4 +- dev/api/utilities/index.html | 11 ++ dev/assets/documenter.js | 302 ++++++++++++++++-------------- dev/benchmarks/index.html | 4 +- dev/debugging/index.html | 4 +- dev/index.html | 4 +- dev/objects.inv | Bin 1104 -> 1151 bytes dev/performance/index.html | 4 +- dev/references/index.html | 4 +- dev/roadmap/index.html | 4 +- dev/search_index.js | 2 +- dev/testing/index.html | 4 +- 23 files changed, 264 insertions(+), 192 deletions(-) create mode 100644 dev/api/utilities/index.html diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 4ec2921..f4cc931 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-12T18:42:29","documenter_version":"1.7.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-13T17:27:01","documenter_version":"1.8.0"}} \ No newline at end of file diff --git a/dev/api/accumulate/index.html b/dev/api/accumulate/index.html index 00181b7..c050290 100644 --- a/dev/api/accumulate/index.html +++ b/dev/api/accumulate/index.html @@ -1,5 +1,5 @@ -Accumulate · AcceleratedKernels.jl

Accumulate / Prefix Sum / Scan

Compute accumulated running totals along a sequence by applying a binary operator to all elements up to the current one; often used in GPU programming as a first step in finding / extracting subsets of data.

+Accumulate · AcceleratedKernels.jl

Accumulate / Prefix Sum / Scan

Compute accumulated running totals along a sequence by applying a binary operator to all elements up to the current one; often used in GPU programming as a first step in finding / extracting subsets of data.

  • accumulate! (in-place), accumulate (allocating); inclusive or exclusive.

  • @@ -8,11 +8,11 @@

Function signature:

accumulate!(op, v::AbstractGPUVector; init, inclusive::Bool=true,
-            block_size::Int=128,
+            block_size::Int=256,
             temp::Union{Nothing, AbstractGPUVector}=nothing,
             temp_flags::Union{Nothing, AbstractGPUVector}=nothing)
 accumulate(op, v::AbstractGPUVector; init, inclusive::Bool=true,
-           block_size::Int=128,
+           block_size::Int=256,
            temp::Union{Nothing, AbstractGPUVector}=nothing,
            temp_flags::Union{Nothing, AbstractGPUVector}=nothing)

Example computing an inclusive prefix sum (the typical GPU "scan"):

@@ -22,4 +22,4 @@ v = oneAPI.ones(Int32, 100_000) AK.accumulate!(+, v, init=0)

The temporaries temp and temp_flags should both have at least (length(v) + 2 * block_size - 1) ÷ (2 * block_size) elements; eltype(v) === eltype(temp); the elements in temp_flags can be any integers, but Int8 is used by default to reduce memory usage.
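For repeated scans, a minimal sketch (assuming the same oneAPI setup as the example above) that preallocates both temporaries per this formula, so no additional GPU storage is allocated per call:

import AcceleratedKernels as AK
using oneAPI

block_size = 256
v = oneAPI.ones(Int32, 100_000)

# Number of scan blocks, each covering 2 * block_size elements (formula above)
nblocks = (length(v) + 2 * block_size - 1) ÷ (2 * block_size)
temp = similar(v, nblocks)                   # eltype(temp) === eltype(v)
temp_flags = oneArray{Int8}(undef, nblocks)  # any integer eltype; Int8 saves memory

AK.accumulate!(+, v; init=zero(eltype(v)), block_size=block_size,
               temp=temp, temp_flags=temp_flags)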

-
+
diff --git a/dev/api/binarysearch/index.html b/dev/api/binarysearch/index.html index 96d81ae..c4b5c83 100644 --- a/dev/api/binarysearch/index.html +++ b/dev/api/binarysearch/index.html @@ -1,5 +1,5 @@ -Binary Search · AcceleratedKernels.jl

Find the indices where some elements x should be inserted into a sorted sequence v to maintain the sorted order. This effectively applies the Julia Base functions in parallel on a GPU using foreachindex.

+Binary Search · AcceleratedKernels.jl

Find the indices where some elements x should be inserted into a sorted sequence v to maintain the sorted order. This effectively applies the Julia Base functions in parallel on a GPU using foreachindex.

  • searchsortedfirst! (in-place), searchsortedfirst (allocating): index of first element in v >= x[j].

  • @@ -49,4 +49,4 @@ ix = MtlArray{Int}(undef, 10_000) AK.searchsortedfirst!(ix, v, x) -
+
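A fuller sketch of the call above (assuming Metal; the haystack v must already be sorted):

import AcceleratedKernels as AK
using Metal

# Sorted haystack and unsorted needles
v = MtlArray(sort(rand(Float32, 100_000)))
x = MtlArray(rand(Float32, 10_000))

# One insertion index per needle, computed in parallel with one GPU thread per element
ix = MtlArray{Int}(undef, 10_000)
AK.searchsortedfirst!(ix, v, x)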
diff --git a/dev/api/custom_structs/index.html b/dev/api/custom_structs/index.html index 0ef09ee..adc6100 100644 --- a/dev/api/custom_structs/index.html +++ b/dev/api/custom_structs/index.html @@ -1,5 +1,5 @@ -Custom Structs · AcceleratedKernels.jl

Custom Structs

As functions are compiled as/when used in Julia for the given argument types (for C++ people: kind of like everything being a template argument by default), we can use custom structs and functions defined outside AcceleratedKernels.jl, which will be inlined and optimised as if they were hardcoded within the library. Normal Julia functions and code can be used, without special annotations like __device__, KOKKOS_LAMBDA or wrapping them in classes with overloaded operator().

+Custom Structs · AcceleratedKernels.jl

Custom Structs

As functions are compiled as/when used in Julia for the given argument types (for C++ people: kind of like everything being a template argument by default), we can use custom structs and functions defined outside AcceleratedKernels.jl, which will be inlined and optimised as if they were hardcoded within the library. Normal Julia functions and code can be used, without special annotations like __device__, KOKKOS_LAMBDA or wrapping them in classes with overloaded operator().

As an example, let's compute the coordinate-wise minima of some points:

import AcceleratedKernels as AK
 using Metal
@@ -23,4 +23,4 @@
 points = MtlArray([Point(rand(), rand()) for _ in 1:100_000])
 @show minima = compute_minima(points)

Note that we did not have to explicitly type the function arguments in compute_minima - the types would be figured out when calling the function and compiled for the right backend automatically, e.g. CPU, oneAPI, ROCm, CUDA, Metal. Also, we used the standard Julia function min; it was not special-cased anywhere, it's just KernelAbstractions.jl inlining and compiling normal code, even from within the Julia Base standard library.

-
+
diff --git a/dev/api/foreachindex/index.html b/dev/api/foreachindex/index.html index 326006b..85b1217 100644 --- a/dev/api/foreachindex/index.html +++ b/dev/api/foreachindex/index.html @@ -1,5 +1,5 @@ -General Loops · AcceleratedKernels.jl

General Looping

AcceleratedKernels.foreachindexFunction
foreachindex(
+General Loops · AcceleratedKernels.jl

General Looping

AcceleratedKernels.foreachindexFunction
foreachindex(
     f, itr, backend::Backend=get_backend(itr);
 
     # CPU settings
@@ -14,26 +14,69 @@
 for i in eachindex(x)
     @inbounds y[i] = 2 * x[i] + 1
 end

Using this function you can have the same for loop body over a GPU array:

using CUDA
+import AcceleratedKernels as AK
 const x = CuArray(1:100)
 const y = similar(x)
-foreachindex(x) do i
+AK.foreachindex(x) do i
     @inbounds y[i] = 2 * x[i] + 1
 end

Note that the above code is pure arithmetic, which you can write directly (and on some platforms it may be faster) as:

using CUDA
 x = CuArray(1:100)
 y = 2 .* x .+ 1

Important note: to use this function on a GPU, the objects referenced inside the loop body must have known types - i.e. be inside a function, or const global objects; but you shouldn't use global objects anyways. For example:

using oneAPI
+import AcceleratedKernels as AK
 
 x = oneArray(1:100)
 
 # CRASHES - typical error message: "Reason: unsupported dynamic function invocation"
-# foreachindex(x) do i
+# AK.foreachindex(x) do i
 #     x[i] = i
 # end
 
 function somecopy!(v)
     # Because it is inside a function, the type of `v` will be known
-    foreachindex(v) do i
+    AK.foreachindex(v) do i
         v[i] = i
     end
 end
 
-somecopy!(x)    # This works
source
+somecopy!(x) # This works
source
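A CPU-side sketch of the settings documented above (values illustrative, not tuned): a cheap loop body amortises thread-launch overhead by requiring a minimum number of elements per task.

import AcceleratedKernels as AK

x = rand(Float32, 1_000_000)
y = similar(x)

# At most nthreads() tasks, and none spawned for fewer than 10_000 elements each
AK.foreachindex(x; scheduler=:threads, max_tasks=Threads.nthreads(), min_elems=10_000) do i
    @inbounds y[i] = 2 * x[i] + 1
end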
AcceleratedKernels.foraxesFunction
foraxes(
+    f, itr, dims::Union{Nothing, <:Integer}=nothing, backend::Backend=get_backend(itr);
+
+    # CPU settings
+    scheduler=:threads,
+    max_tasks=Threads.nthreads(),
+    min_elems=1,
+
+    # GPU settings
+    block_size=256,
+)

Parallelised for loop over the indices along axis dims of an iterable.

It allows you to run normal Julia code on a GPU over multiple arrays - e.g. CuArray, ROCArray, MtlArray, oneArray - with one GPU thread per index.

On CPUs at most max_tasks threads are launched, or fewer such that each thread processes at least min_elems indices; if a single task ends up being needed, f is inlined and no thread is launched. Tune it to your function - the more expensive it is, the fewer elements are needed to amortise the cost of launching a thread (which is a few μs). The scheduler can be :polyester to use Polyester.jl cheap threads or :threads to use normal Julia threads; either can be faster depending on the function, but in general the latter is more composable.

Examples

Normally you would write a for loop like this:

x = Array(reshape(1:30, 3, 10))
+y = similar(x)
+for i in axes(x, 2)
+    for j in axes(x, 1)
+        @inbounds y[j, i] = 2 * x[j, i] + 1
+    end
+end

Using this function you can have the same for loop body over a GPU array:

using CUDA
+import AcceleratedKernels as AK
+const x = CuArray(reshape(1:3000, 3, 1000))
+const y = similar(x)
+AK.foraxes(x, 2) do i
+    for j in axes(x, 1)
+        @inbounds y[j, i] = 2 * x[j, i] + 1
+    end
+end

Important note: to use this function on a GPU, the objects referenced inside the loop body must have known types - i.e. be inside a function, or const global objects; but you shouldn't use global objects anyways. For example:

using oneAPI
+import AcceleratedKernels as AK
+
+x = oneArray(reshape(1:3000, 3, 1000))
+
+# CRASHES - typical error message: "Reason: unsupported dynamic function invocation"
+# AK.foraxes(x) do i
+#     x[i] = i
+# end
+
+function somecopy!(v)
+    # Because it is inside a function, the type of `v` will be known
+    AK.foraxes(v) do i
+        v[i] = i
+    end
+end
+
+somecopy!(x)    # This works
source
diff --git a/dev/api/map/index.html b/dev/api/map/index.html index aabdaaa..6f27b86 100644 --- a/dev/api/map/index.html +++ b/dev/api/map/index.html @@ -1,5 +1,5 @@ -Map · AcceleratedKernels.jl

Map

Parallel mapping of a function over each element of an iterable via foreachindex:

+Map · AcceleratedKernels.jl

Map

Parallel mapping of a function over each element of an iterable via foreachindex:

  • map! (in-place), map (out-of-place)

  • @@ -36,4 +36,4 @@ # GPU settings block_size=256, -)

    Apply the function f to each element of src and store the result in dst. The CPU and GPU settings are the same as for foreachindex.

source
+)

Apply the function f to each element of src and store the result in dst. The CPU and GPU settings are the same as for foreachindex.

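A minimal sketch of both forms (assuming CUDA):

import AcceleratedKernels as AK
using CUDA

v = CuArray(rand(Float32, 100_000))
w = AK.map(x -> 2 * x + 1f0, v)            # out-of-place
AK.map!(x -> x * x, w, v; block_size=256)  # in-place: w[i] = v[i]^2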
source diff --git a/dev/api/mapreduce/index.html b/dev/api/mapreduce/index.html index e9046e2..1cb022a 100644 --- a/dev/api/mapreduce/index.html +++ b/dev/api/mapreduce/index.html @@ -1,5 +1,5 @@ -MapReduce · AcceleratedKernels.jl

MapReduce

Equivalent to reduce(op, map(f, iterable)), without saving the intermediate mapped collection; can be used to e.g. split documents into words (map) and count the frequency thereof (reduce).

+MapReduce · AcceleratedKernels.jl

MapReduce

Equivalent to reduce(op, map(f, iterable)), without saving the intermediate mapped collection; can be used to e.g. split documents into words (map) and count the frequency thereof (reduce).

  • Other names: transform_reduce, some fold implementations include the mapping function too.

  • @@ -52,4 +52,4 @@ f(x) = x * x m = MtlArray(rand(Int32(1):Int32(100), 10, 100_000)) mrowsumsq = AK.mapreduce(f, +, m; init=zero(eltype(m)), dims=1) -mcolsumsq = AK.mapreduce(f, +, m; init=zero(eltype(m)), dims=2)
source
+mcolsumsq = AK.mapreduce(f, +, m; init=zero(eltype(m)), dims=2)source diff --git a/dev/api/predicates/index.html b/dev/api/predicates/index.html index fe43144..449194a 100644 --- a/dev/api/predicates/index.html +++ b/dev/api/predicates/index.html @@ -1,5 +1,5 @@ -Predicates · AcceleratedKernels.jl

Predicates

Apply a predicate to check if all / any elements in a collection return true. This could be implemented as a reduction, but is better optimised by stopping the search early, as soon as a false (for all) or a true (for any) is found.

+Predicates · AcceleratedKernels.jl

Predicates

Apply a predicate to check if all / any elements in a collection return true. This could be implemented as a reduction, but is better optimised by stopping the search early, as soon as a false (for all) or a true (for any) is found.

  • Other names: not often implemented standalone on GPUs, typically included as part of a reduction.

  • @@ -16,4 +16,4 @@ v = CuArray(rand(Float32, 100_000)) AK.any(x -> x < 1, v) AK.all(x -> x > 0, v) -

Note on the cooperative keyword: some older platforms crash when multiple threads write to the same memory location in a global array (e.g. old Intel Graphics); if all threads were to write the same value, it is well-defined on others (e.g. CUDA F4.2 says "If a non-atomic instruction executed by a warp writes to the same location in global memory for more than one of the threads of the warp, only one thread performs a write and which thread does it is undefined."). This "cooperative" thread behaviour allows for a faster implementation; if you have a platform - the only one I know is Intel UHD Graphics - that crashes, set cooperative=false to use a safer mapreduce-based implementation.

+

Note on the cooperative keyword: some older platforms crash when multiple threads write to the same memory location in a global array (e.g. old Intel Graphics); if all threads were to write the same value, it is well-defined on others (e.g. CUDA F4.2 says "If a non-atomic instruction executed by a warp writes to the same location in global memory for more than one of the threads of the warp, only one thread performs a write and which thread does it is undefined."). This "cooperative" thread behaviour allows for a faster implementation; if you have a platform - the only one I know is Intel UHD Graphics - that crashes, set cooperative=false to use a safer mapreduce-based implementation.

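A minimal sketch of the safe fallback (assuming oneAPI on older Intel Graphics, where this matters):

import AcceleratedKernels as AK
using oneAPI

v = oneArray(rand(Float32, 100_000))

# Force the mapreduce-based implementation; avoids cooperative global writes
AK.any(x -> x < 0.5f0, v, cooperative=false)
AK.all(x -> x >= 0f0, v, cooperative=false)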
diff --git a/dev/api/reduce/index.html b/dev/api/reduce/index.html index cd6f93d..e559982 100644 --- a/dev/api/reduce/index.html +++ b/dev/api/reduce/index.html @@ -1,5 +1,5 @@ -Reduce · AcceleratedKernels.jl

Reductions

Apply a custom binary operator reduction on all elements in an iterable; can be used to compute minima, sums, counts, etc.

+Reduce · AcceleratedKernels.jl

Reductions

Apply a custom binary operator reduction on all elements in an iterable; can be used to compute minima, sums, counts, etc.

  • Other names: Kokkos:parallel_reduce, fold, aggregate.

  • @@ -55,4 +55,4 @@ m = MtlArray(rand(Int32(1):Int32(100), 10, 100_000)) mrowsum = AK.reduce(+, m; init=zero(eltype(m)), dims=1) -mcolsum = AK.reduce(+, m; init=zero(eltype(m)), dims=2)
source
+mcolsum = AK.reduce(+, m; init=zero(eltype(m)), dims=2)source diff --git a/dev/api/sort/index.html b/dev/api/sort/index.html index bffea67..ddc87e0 100644 --- a/dev/api/sort/index.html +++ b/dev/api/sort/index.html @@ -1,5 +1,5 @@ -Sorting · AcceleratedKernels.jl

sort and friends

Sorting algorithms with similar interface and default settings as the Julia Base ones, on GPUs:

+Sorting · AcceleratedKernels.jl

sort and friends

Sorting algorithms with similar interface and default settings as the Julia Base ones, on GPUs:

  • sort! (in-place), sort (out-of-place)

  • @@ -11,11 +11,11 @@

    Function signature:

    sort!(v::AbstractGPUVector;
           lt=isless, by=identity, rev::Bool=false, order::Base.Order.Ordering=Base.Order.Forward,
    -      block_size::Int=128, temp::Union{Nothing, AbstractGPUVector}=nothing)
    +      block_size::Int=256, temp::Union{Nothing, AbstractGPUVector}=nothing)
     
     sortperm!(ix::AbstractGPUVector, v::AbstractGPUVector;
               lt=isless, by=identity, rev::Bool=false, order::Base.Order.Ordering=Base.Order.Forward,
    -          block_size::Int=128, temp::Union{Nothing, AbstractGPUVector}=nothing)
    + block_size::Int=256, temp::Union{Nothing, AbstractGPUVector}=nothing)

    Specific implementations that the interfaces above forward to:

    • merge_sort! (in-place), merge_sort (out-of-place) - sort arbitrary objects with custom comparisons.

      @@ -28,23 +28,23 @@

      Function signature:

      merge_sort!(v::AbstractGPUVector;
                   lt=(<), by=identity, rev::Bool=false, order::Ordering=Forward,
      -            block_size::Int=128, temp::Union{Nothing, AbstractGPUVector}=nothing)
      +            block_size::Int=256, temp::Union{Nothing, AbstractGPUVector}=nothing)
       
       merge_sort_by_key!(keys::AbstractGPUVector, values::AbstractGPUVector;
                          lt=(<), by=identity, rev::Bool=false, order::Ordering=Forward,
      -                   block_size::Int=128,
      +                   block_size::Int=256,
                          temp_keys::Union{Nothing, AbstractGPUVector}=nothing,
                          temp_values::Union{Nothing, AbstractGPUVector}=nothing)
       
       merge_sortperm!(ix::AbstractGPUVector, v::AbstractGPUVector;
                       lt=(<), by=identity, rev::Bool=false, order::Ordering=Forward,
      -                inplace::Bool=false, block_size::Int=128,
      +                inplace::Bool=false, block_size::Int=256,
                       temp_ix::Union{Nothing, AbstractGPUVector}=nothing,
                       temp_v::Union{Nothing, AbstractGPUVector}=nothing)
       
       merge_sortperm_lowmem!(ix::AbstractGPUVector, v::AbstractGPUVector;
                              lt=(<), by=identity, rev::Bool=false, order::Ordering=Forward,
      -                       block_size::Int=128,
      +                       block_size::Int=256,
                              temp::Union{Nothing, AbstractGPUVector}=nothing)

      Example:

      import AcceleratedKernels as AK
      @@ -56,4 +56,4 @@
       
      v = ROCArray(rand(Float32, 100_000))
       temp = similar(v)
       AK.sort!(v, temp=temp)
      -
+
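A sketch of the sortperm! interface above (assuming CUDA), reusing a preallocated index vector:

import AcceleratedKernels as AK
using CUDA

v = CuArray(rand(Float32, 100_000))
ix = CuArray{Int}(undef, length(v))

# Indices that sort v in descending order
AK.sortperm!(ix, v, rev=true, block_size=256)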
diff --git a/dev/api/task_partition/index.html b/dev/api/task_partition/index.html index cd9a63d..1160b3a 100644 --- a/dev/api/task_partition/index.html +++ b/dev/api/task_partition/index.html @@ -1,5 +1,5 @@ -Task Partitioning · AcceleratedKernels.jl

Multithreaded Task Partitioning

AcceleratedKernels.TaskPartitionerType
struct TaskPartitioner

Partitioning num_elems elements / jobs over maximum max_tasks tasks with minimum min_elems elements per task.

Methods

TaskPartitioner(num_elems, max_tasks=Threads.nthreads(), min_elems=1)

Fields

  • num_elems::Int64

  • max_tasks::Int64

  • min_elems::Int64

  • num_tasks::Int64

  • task_istarts::Vector{Int64}

Examples

using AcceleratedKernels: TaskPartitioner
+Task Partitioning · AcceleratedKernels.jl

Multithreaded Task Partitioning

AcceleratedKernels.TaskPartitionerType
struct TaskPartitioner

Partitioning num_elems elements / jobs over maximum max_tasks tasks with minimum min_elems elements per task.

Methods

TaskPartitioner(num_elems, max_tasks=Threads.nthreads(), min_elems=1)

Fields

  • num_elems::Int64

  • max_tasks::Int64

  • min_elems::Int64

  • num_tasks::Int64

  • task_istarts::Vector{Int64}

Examples

using AcceleratedKernels: TaskPartitioner
 
 # Divide 10 elements between 4 tasks
 tp = TaskPartitioner(10, 4)
@@ -24,7 +24,7 @@
 tp[i] = 1:5
 tp[i] = 6:10
 tp[i] = 11:15
-tp[i] = 16:20
source
AcceleratedKernels.task_partitionFunction
task_partition(f, num_elems, max_tasks=Threads.nthreads(), min_elems=1)
 task_partition(f, tp::TaskPartitioner)

Partition num_elems jobs across at most max_tasks parallel tasks with at least min_elems per task, calling f(start_index:end_index), where the indices are between 1 and num_elems.

Examples

A toy example showing outputs:

num_elems = 4
 task_partition(println, num_elems)
 
@@ -34,4 +34,4 @@
 2:2
 3:3

This function is probably most useful with a do-block, e.g.:

task_partition(4) do irange
     some_long_computation(param1, param2, irange)
-end
source
+end
source
diff --git a/dev/api/using_backends/index.html b/dev/api/using_backends/index.html index c5d901b..895884d 100644 --- a/dev/api/using_backends/index.html +++ b/dev/api/using_backends/index.html @@ -1,5 +1,5 @@ -Using Different Backends · AcceleratedKernels.jl

Using Different Backends

For any of the examples below, simply use a different GPU array and AcceleratedKernels.jl will pick the right backend:

+Using Different Backends · AcceleratedKernels.jl

Using Different Backends

For any of the examples below, simply use a different GPU array and AcceleratedKernels.jl will pick the right backend:

# Intel Graphics
 using oneAPI
 v = oneArray{Int32}(undef, 100_000)             # Empty array
@@ -24,4 +24,4 @@
 AK.reduce(+, v, max_tasks=Threads.nthreads())

Note the reduce and mapreduce CPU implementations forward arguments to OhMyThreads.jl, an excellent package for multithreading. The focus of AcceleratedKernels.jl is to provide a unified interface to high-performance implementations of common algorithmic kernels, for both CPUs and GPUs - if you need fine-grained control over threads, scheduling, communication for specialised algorithms (e.g. with highly unequal workloads), consider using OhMyThreads.jl or KernelAbstractions.jl directly.

There is ongoing work on multithreaded CPU sort and accumulate implementations - at the moment, they fall back to single-threaded algorithms; the rest of the library is fully parallelised for both CPUs and GPUs.

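As a sketch of the backend dispatch described in this section, the same reduction runs on a GPU array or a plain Array (Metal is just the assumed example device):

import AcceleratedKernels as AK
using Metal

v_gpu = MtlArray(rand(Float32, 100_000))
v_cpu = rand(Float32, 100_000)

AK.reduce(+, v_gpu, init=0f0)                                # Metal backend, picked from the array type
AK.reduce(+, v_cpu, init=0f0, max_tasks=Threads.nthreads())  # multithreaded CPU via OhMyThreads.jl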
-
+
diff --git a/dev/api/utilities/index.html b/dev/api/utilities/index.html new file mode 100644 index 0000000..e93e3dc --- /dev/null +++ b/dev/api/utilities/index.html @@ -0,0 +1,11 @@ + +Utilities · AcceleratedKernels.jl

Utilities

AcceleratedKernels.TypeWrapType
struct TypeWrap{T} end
+TypeWrap(T) = TypeWrap{T}()
+Base.:*(x::Number, ::TypeWrap{T}) where T = T(x)

Allow type conversion via multiplication, like 5i32 for 5 * i32 where i32 is a TypeWrap.

Examples

import AcceleratedKernels as AK
+u32 = AK.TypeWrap(UInt32)
+println(typeof(5u32))
+
+# output
+UInt32

This is used e.g. to set integer literals inside kernels as u16 to ensure no indices are promoted beyond the index base type.

For example, Metal uses UInt32 indices, but if it is mixed with a Julia integer literal (Int64 by default) like in src[ithread + 1], we incur a type cast to Int64. Instead, we can use src[ithread + 1u16] or src[ithread + 0x1] to ensure the index is UInt32 and avoid the cast; as the integer literal 1u16 has a narrower type than ithread, it is automatically promoted (at compile time) to the ithread type, whether ithread is signed or unsigned as per the backend.

# Defaults defined
+1u8, 2u16, 3u32, 4u64
+5i8, 6i16, 7i32, 8i64
source
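A minimal sketch of the pattern described above; u16 is constructed explicitly here via the same mechanism as the defaults listed:

import AcceleratedKernels as AK
const u16 = AK.TypeWrap(UInt16)

function shift_left!(dst, src)
    # dst is assumed one element shorter than src
    AK.foreachindex(dst) do ithread
        # 1u16 promotes to ithread's index type at compile time; no Int64 cast
        @inbounds dst[ithread] = src[ithread + 1u16]
    end
end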
diff --git a/dev/assets/documenter.js b/dev/assets/documenter.js index 82252a1..7d68cd8 100644 --- a/dev/assets/documenter.js +++ b/dev/assets/documenter.js @@ -612,176 +612,194 @@ function worker_function(documenterSearchIndex, documenterBaseURL, filters) { }; } -// `worker = Threads.@spawn worker_function(documenterSearchIndex)`, but in JavaScript! -const filters = [ - ...new Set(documenterSearchIndex["docs"].map((x) => x.category)), -]; -const worker_str = - "(" + - worker_function.toString() + - ")(" + - JSON.stringify(documenterSearchIndex["docs"]) + - "," + - JSON.stringify(documenterBaseURL) + - "," + - JSON.stringify(filters) + - ")"; -const worker_blob = new Blob([worker_str], { type: "text/javascript" }); -const worker = new Worker(URL.createObjectURL(worker_blob)); - /////// SEARCH MAIN /////// -// Whether the worker is currently handling a search. This is a boolean -// as the worker only ever handles 1 or 0 searches at a time. -var worker_is_running = false; - -// The last search text that was sent to the worker. This is used to determine -// if the worker should be launched again when it reports back results. -var last_search_text = ""; - -// The results of the last search. This, in combination with the state of the filters -// in the DOM, is used compute the results to display on calls to update_search. -var unfiltered_results = []; - -// Which filter is currently selected -var selected_filter = ""; - -$(document).on("input", ".documenter-search-input", function (event) { - if (!worker_is_running) { - launch_search(); - } -}); - -function launch_search() { - worker_is_running = true; - last_search_text = $(".documenter-search-input").val(); - worker.postMessage(last_search_text); -} - -worker.onmessage = function (e) { - if (last_search_text !== $(".documenter-search-input").val()) { - launch_search(); - } else { - worker_is_running = false; - } - - unfiltered_results = e.data; - update_search(); -}; +function runSearchMainCode() { + // `worker = Threads.@spawn worker_function(documenterSearchIndex)`, but in JavaScript! + const filters = [ + ...new Set(documenterSearchIndex["docs"].map((x) => x.category)), + ]; + const worker_str = + "(" + + worker_function.toString() + + ")(" + + JSON.stringify(documenterSearchIndex["docs"]) + + "," + + JSON.stringify(documenterBaseURL) + + "," + + JSON.stringify(filters) + + ")"; + const worker_blob = new Blob([worker_str], { type: "text/javascript" }); + const worker = new Worker(URL.createObjectURL(worker_blob)); + + // Whether the worker is currently handling a search. This is a boolean + // as the worker only ever handles 1 or 0 searches at a time. + var worker_is_running = false; + + // The last search text that was sent to the worker. This is used to determine + // if the worker should be launched again when it reports back results. + var last_search_text = ""; + + // The results of the last search. This, in combination with the state of the filters + // in the DOM, is used compute the results to display on calls to update_search. 
+ var unfiltered_results = []; + + // Which filter is currently selected + var selected_filter = ""; + + $(document).on("input", ".documenter-search-input", function (event) { + if (!worker_is_running) { + launch_search(); + } + }); -$(document).on("click", ".search-filter", function () { - if ($(this).hasClass("search-filter-selected")) { - selected_filter = ""; - } else { - selected_filter = $(this).text().toLowerCase(); + function launch_search() { + worker_is_running = true; + last_search_text = $(".documenter-search-input").val(); + worker.postMessage(last_search_text); } - // This updates search results and toggles classes for UI: - update_search(); -}); + worker.onmessage = function (e) { + if (last_search_text !== $(".documenter-search-input").val()) { + launch_search(); + } else { + worker_is_running = false; + } -/** - * Make/Update the search component - */ -function update_search() { - let querystring = $(".documenter-search-input").val(); + unfiltered_results = e.data; + update_search(); + }; - if (querystring.trim()) { - if (selected_filter == "") { - results = unfiltered_results; + $(document).on("click", ".search-filter", function () { + if ($(this).hasClass("search-filter-selected")) { + selected_filter = ""; } else { - results = unfiltered_results.filter((result) => { - return selected_filter == result.category.toLowerCase(); - }); + selected_filter = $(this).text().toLowerCase(); } - let search_result_container = ``; - let modal_filters = make_modal_body_filters(); - let search_divider = `
`; + // This updates search results and toggles classes for UI: + update_search(); + }); - if (results.length) { - let links = []; - let count = 0; - let search_results = ""; - - for (var i = 0, n = results.length; i < n && count < 200; ++i) { - let result = results[i]; - if (result.location && !links.includes(result.location)) { - search_results += result.div; - count++; - links.push(result.location); - } - } + /** + * Make/Update the search component + */ + function update_search() { + let querystring = $(".documenter-search-input").val(); - if (count == 1) { - count_str = "1 result"; - } else if (count == 200) { - count_str = "200+ results"; + if (querystring.trim()) { + if (selected_filter == "") { + results = unfiltered_results; } else { - count_str = count + " results"; + results = unfiltered_results.filter((result) => { + return selected_filter == result.category.toLowerCase(); + }); } - let result_count = `
${count_str}
`; - search_result_container = ` + let search_result_container = ``; + let modal_filters = make_modal_body_filters(); + let search_divider = `
`; + + if (results.length) { + let links = []; + let count = 0; + let search_results = ""; + + for (var i = 0, n = results.length; i < n && count < 200; ++i) { + let result = results[i]; + if (result.location && !links.includes(result.location)) { + search_results += result.div; + count++; + links.push(result.location); + } + } + + if (count == 1) { + count_str = "1 result"; + } else if (count == 200) { + count_str = "200+ results"; + } else { + count_str = count + " results"; + } + let result_count = `
${count_str}
`; + + search_result_container = ` +
+ ${modal_filters} + ${search_divider} + ${result_count} +
+ ${search_results} +
+
+ `; + } else { + search_result_container = `
${modal_filters} ${search_divider} - ${result_count} -
- ${search_results} -
-
+
0 result(s)
+ +
No result found!
`; - } else { - search_result_container = ` -
- ${modal_filters} - ${search_divider} -
0 result(s)
-
-
No result found!
- `; - } + } - if ($(".search-modal-card-body").hasClass("is-justify-content-center")) { - $(".search-modal-card-body").removeClass("is-justify-content-center"); - } + if ($(".search-modal-card-body").hasClass("is-justify-content-center")) { + $(".search-modal-card-body").removeClass("is-justify-content-center"); + } - $(".search-modal-card-body").html(search_result_container); - } else { - if (!$(".search-modal-card-body").hasClass("is-justify-content-center")) { - $(".search-modal-card-body").addClass("is-justify-content-center"); + $(".search-modal-card-body").html(search_result_container); + } else { + if (!$(".search-modal-card-body").hasClass("is-justify-content-center")) { + $(".search-modal-card-body").addClass("is-justify-content-center"); + } + + $(".search-modal-card-body").html(` +
Type something to get started!
+ `); } + } - $(".search-modal-card-body").html(` -
Type something to get started!
- `); + /** + * Make the modal filter html + * + * @returns string + */ + function make_modal_body_filters() { + let str = filters + .map((val) => { + if (selected_filter == val.toLowerCase()) { + return `${val}`; + } else { + return `${val}`; + } + }) + .join(""); + + return ` +
+ Filters: + ${str} +
`; } } -/** - * Make the modal filter html - * - * @returns string - */ -function make_modal_body_filters() { - let str = filters - .map((val) => { - if (selected_filter == val.toLowerCase()) { - return `${val}`; - } else { - return `${val}`; - } - }) - .join(""); - - return ` -
- Filters: - ${str} -
`; +function waitUntilSearchIndexAvailable() { + // It is possible that the documenter.js script runs before the page + // has finished loading and documenterSearchIndex gets defined. + // So we need to wait until the search index actually loads before setting + // up all the search-related stuff. + if (typeof documenterSearchIndex !== "undefined") { + runSearchMainCode(); + } else { + console.warn("Search Index not available, waiting"); + setTimeout(waitUntilSearchIndexAvailable, 1000); + } } +// The actual entry point to the search code +waitUntilSearchIndexAvailable(); + }) //////////////////////////////////////////////////////////////////////////////// require(['jquery'], function($) { diff --git a/dev/benchmarks/index.html b/dev/benchmarks/index.html index 1fce5ca..033e0f8 100644 --- a/dev/benchmarks/index.html +++ b/dev/benchmarks/index.html @@ -1,5 +1,5 @@ -Benchmarks · AcceleratedKernels.jl

Benchmarks

Some arithmetic-heavy benchmarks are given below - see this repository for the code; our paper will be linked here upon publishing with a full analysis.

+Benchmarks · AcceleratedKernels.jl

Benchmarks

Some arithmetic-heavy benchmarks are given below - see this repository for the code; our paper will be linked here upon publishing with a full analysis.

Arithmetic benchmark

See prototype/sort_benchmark.jl for a small-scale sorting benchmark and prototype/thrust_sort for the Nvidia Thrust wrapper. The results below are from a system with Linux 6.6.30-2-MANJARO, Intel Core i9-10885H CPU, Nvidia Quadro RTX 4000 with Max-Q Design GPU, Thrust 1.17.1-1, Julia Version 1.10.4.

Sorting benchmark

@@ -7,4 +7,4 @@

The sorting algorithms can also be combined with MPISort.jl for multi-device sorting - indeed, you can co-operatively sort using both your CPU and GPU! Or use 200 GPUs on the 52 nodes of Baskerville HPC to sort 538-855 GB of data per second (comparable with the highest figure reported in literature of 900 GB/s on 262,144 CPU cores):

Sorting throughput

Hardware stats for nerds available here. Full analysis will be linked here once our paper is published.

-
+
diff --git a/dev/debugging/index.html b/dev/debugging/index.html index c2db23b..ecdaa1e 100644 --- a/dev/debugging/index.html +++ b/dev/debugging/index.html @@ -1,5 +1,5 @@ -Debugging Kernels · AcceleratedKernels.jl

Debugging Kernels

As the compilation pipeline of GPU kernels is different to that of base Julia, error messages also look different - for example, where Julia would insert an exception when a variable name was not defined (e.g. we had a typo), a GPU kernel throwing exceptions cannot be compiled and instead you'll see some cascading errors like "[...] compiling [...] resulted in invalid LLVM IR" caused by "Reason: unsupported use of an undefined name" resulting in "Reason: unsupported dynamic function invocation", etc.

Thankfully, there are only about 3 types of such error messages and they're not that scary when you look into them.

Undefined Variables / Typos

If you misspell a variable name, Julia would insert an exception:

function set_color(v, color)
+Debugging Kernels · AcceleratedKernels.jl

Debugging Kernels

As the compilation pipeline of GPU kernels is different to that of base Julia, error messages also look different - for example, where Julia would insert an exception when a variable name was not defined (e.g. we had a typo), a GPU kernel throwing exceptions cannot be compiled and instead you'll see some cascading errors like "[...] compiling [...] resulted in invalid LLVM IR" caused by "Reason: unsupported use of an undefined name" resulting in "Reason: unsupported dynamic function invocation", etc.

Thankfully, there are only about 3 types of such error messages and they're not that scary when you look into them.

Undefined Variables / Typos

If you misspell a variable name, Julia would insert an exception:

function set_color(v, color)
     AK.foreachindex(v) do i
         v[i] = colour           # Grab your porridge
     end
@@ -50,4 +50,4 @@
 mymul!(v, 2.0)

Note that we try to multiply Float32 values by 2.0, which is a Float64 - in which case we get:

ERROR: LoadError: Compilation to native code failed; see below for details.
 [...]
 caused by: NSError: Compiler encountered an internal error (AGXMetalG15X_M1, code 3)
-[...]

Change the 2.0 to 2.0f0 or Float32(2); in kernels with generic types (that are supposed to work on multiple possible input types), do use the same types as your inputs, using e.g. T = eltype(v) then zero(T), T(42), etc.


For other library-related problems, feel free to post a GitHub issue. For help implementing new code, or just advice, you can also use the Julia Discourse forum, the community is incredibly helpful.

+[...]

Change the 2.0 to 2.0f0 or Float32(2); in kernels with generic types (that are supposed to work on multiple possible input types), do use the same types as your inputs, using e.g. T = eltype(v) then zero(T), T(42), etc.

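A sketch of that advice, mirroring the earlier mymul! example: derive constants from the element type so Float32 inputs never meet Float64 literals.

import AcceleratedKernels as AK

function mymul!(v, factor)
    T = eltype(v)
    c = T(factor)              # convert once, outside the kernel body
    AK.foreachindex(v) do i
        @inbounds v[i] *= c
    end
end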

For other library-related problems, feel free to post a GitHub issue. For help implementing new code, or just advice, you can also use the Julia Discourse forum, the community is incredibly helpful.

diff --git a/dev/index.html b/dev/index.html index 83cb239..d88e1fe 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,9 +1,9 @@ -Overview · AcceleratedKernels.jl

Logo

Parallel algorithm building blocks for the Julia ecosystem, targeting multithreaded CPUs, and GPUs via Intel oneAPI, AMD ROCm, Apple Metal and Nvidia CUDA (and any future backends added to the JuliaGPU organisation).


What's Different?

As far as I am aware, this is the first cross-architecture parallel standard library from a unified codebase - that is, the code is written as KernelAbstractions.jl backend-agnostic kernels, which are then transpiled to a GPU backend; that means we benefit from all the optimisations available on the native platform and official compiler stacks. For example, unlike open standards like OpenCL that require GPU vendors to implement that API for their hardware, we target the existing official compilers. And while performance-portability libraries like Kokkos and RAJA are powerful for large C++ codebases, they require US National Lab-level development and maintenance efforts to effectively forward calls from a single API to other OpenMP, CUDA Thrust, ROCm rocThrust, oneAPI DPC++ libraries developed separately. In comparison, this library was developed effectively in a week by a single person because developing packages in Julia is just a joy.

+Overview · AcceleratedKernels.jl

Logo

Parallel algorithm building blocks for the Julia ecosystem, targeting multithreaded CPUs, and GPUs via Intel oneAPI, AMD ROCm, Apple Metal and Nvidia CUDA (and any future backends added to the JuliaGPU organisation).


What's Different?

As far as I am aware, this is the first cross-architecture parallel standard library from a unified codebase - that is, the code is written as KernelAbstractions.jl backend-agnostic kernels, which are then transpiled to a GPU backend; that means we benefit from all the optimisations available on the native platform and official compiler stacks. For example, unlike open standards like OpenCL that require GPU vendors to implement that API for their hardware, we target the existing official compilers. And while performance-portability libraries like Kokkos and RAJA are powerful for large C++ codebases, they require US National Lab-level development and maintenance efforts to effectively forward calls from a single API to other OpenMP, CUDA Thrust, ROCm rocThrust, oneAPI DPC++ libraries developed separately. In comparison, this library was developed effectively in a week by a single person because developing packages in Julia is just a joy.

Again, this is only possible because of the unique Julia compilation model, the JuliaGPU organisation work for reusable GPU backend infrastructure, and especially the KernelAbstractions.jl backend-agnostic kernel language. Thank you.


Status

The AcceleratedKernels.jl sorters were adopted as the official AMDGPU algorithms! The API is starting to stabilise; it follows the Julia standard library fairly closely - and additionally exposes all temporary arrays for memory reuse. For any new ideas / requests, please join the conversation on Julia Discourse or post an issue.

We have an extensive randomised test suite that we run on the CPU (single- and multi-threaded) backend on Windows, Ubuntu and macOS for Julia LTS, Stable, and Pre-Release, plus the CUDA, AMDGPU, oneAPI and Metal backends on the JuliaGPU buildkite.

AcceleratedKernels.jl is also a fundamental building block of applications developed at EvoPhase, so it will see continuous heavy use with industry backing. Long-term stability, performance improvements and support are priorities for us.


Acknowledgements

Designed and built by Andrei-Leonard Nicusan, maintained with contributors.

Much of this work was possible because of the fantastic HPC resources at the University of Birmingham and the Birmingham Environment for Academic Research, which gave us free on-demand access to thousands of CPUs and GPUs that we experimented on, and the support teams we nagged. In particular, thank you to Kit Windows-Yule and Andrew Morris on the BlueBEAR and Baskerville T2 supercomputers' leadership, and Simon Branford, Simon Hartley, James Allsopp and James Carpenter for computing support.

-

License

AcceleratedKernels.jl is MIT-licensed. Enjoy.

+

License

AcceleratedKernels.jl is MIT-licensed. Enjoy.

diff --git a/dev/objects.inv b/dev/objects.inv index 297934f7b759b681ecf917fd0a7065d9f657ef4a..ea14ee0214a251793db8cebcdc9f38184824fddf 100644 GIT binary patch delta 1029 zcmV+g1p52X2>%F>h<}|{O>f&c5WVwP%px`R#0pIpTOgMrZku4S4Qf!tC47 zKF(&a^xv$JW?P$s(1i@q!E^?z0JBz2%S5BHu2uT+VIzvY)PH5P($t#4y&JJiejhIBc-+sZl#t+RPed3hix20lTvUhgFDfXy{(N+0d6jX9Di|Bgfg=1 z;tLpOGx$pXr{(6c-atzS3{e!_p{sNLgwo;g4m(oHThLT2*(~!7s zAF4o4lc0(oJP&0vjV zSe?eFD0Eg;I5FM0H((;2J{fM^#V^DTQ|Q0{V5Qu2kBv6Y0DsRoMrW&@dEOmxkQYM7+$4K0 z0`B7SCSx3+$J9@;^D!*wbEE=Np{IhQ0#VuR>YgvFy4dqv{VRL_IJMqkxk$Gi7BlpXR3Iui zYVRMX27fAB9~;U4(ybL6mug3hz-?7;M8&@eqr|34BN=Rb!gcpQVLKEAC=5c60gqvK zU4`PHEyEn+6L!9{gOaQWuN2 z*XNxO7zhls9t==WEg%#;lnp(KyB;+gsi>v7d<>eJi;M3cQ|j1dC;+~E@u7UZpr~o7 zTL%Tyb513c{C`Ki{oamc;eC=U&c3*O7a$gIZ=h$51>rsWtO}`HIg@IuxLHMq4cZl9 zU4Ppqp>K_vuT9k05lc3Tr2d%Wwd(x@b_FqAdwus(lfa~vnLmu!&1K2 z^;0FwM|r^W&q-`8511}aJ=eyFg8`4%PXkS}`>0#TJ;L{E+J45}&`;A;?ydb}g6F~0BLXIm59B1z4{U9+ zLIt7thSYxTik+`@(!f8}m32k;T&d)9PA~-G8IOnLZKtUq!n}fK>B);|>g7Lf!*z@A z2CpzaL~_UT`yZdx|ML*zdw1#}DMmBwv(#Vk(DQHe3({L>2EMp_gRXu8k<5?Ti@gc- delta 982 zcmV;{11bFf2+#f&c5WVYHOp!HuVuhx~E|5zRx0_(m1~n>Ydkkobj=82O zfufpte|?7|St2PpMK89pUrgK;Yn6;{#jMpk}8)Y6JHeBqbDSxAtrq&$p{TM!}?N(Z8 z94vbaxkBfXi=9f3l-hE*l}1{w;Y(HZ+c@w#rQlQ!cf2NhQ(2b+{9FzN{^F(xWo6mM z7ckD|@QwaY%gtj411%jexDf53)!a=eqa}yy;hRVsWw<>yl3TG$X`u~6Nh2s|NL;@U zCD79(D5C{qI)8itik1Kc;bf zqqK5IjDNtbt~Ok=f4NnBqti$R8=r9f{ZH5)1px|!&|zS}WOuE`RHrS;lKJdsVfIhs zk_ZOR8M&AdP~o_1le17NQZ4E(f3i$pvIy5z5>|{wCxS+02MKZE30eg6E#k$HP5Q}o zU2Dniq{EB-YV`4n-RO$DKi@OeFjF~9{<_9;jeiF}7pv68;@$OmCjLHT?PT9l8=&<&>4v1d>KhVsSw^7VqErloEj6wrZlDxu{6JL>Iq zGnR!9NwRqR;_|Nmv3Pd_9cwHIAF?k>NYlufR9nTZCpv7P_ofE7?l(RMe;Lc5wpxGe+>H;OzC+r(v!iEYaE&QjioG+#43Lu+v zk~}-m$F?<8?S9Ws^aP!HNku;EV|*J<9VAkpVV|Y`f`^`8%`ZqVi5d9f@-4dh3Bc1u EJV3zX4gdfE diff --git a/dev/performance/index.html b/dev/performance/index.html index 613ef5d..9705419 100644 --- a/dev/performance/index.html +++ b/dev/performance/index.html @@ -1,4 +1,4 @@ -Performance Tips · AcceleratedKernels.jl

Performance Tips

If you just started using AcceleratedKernels.jl, see the Manual first for some examples.

GPU Block Size and CPU Threads

All GPU functions allow you to specify a block size - this is often a power of two (mostly 64, 128, 256, 512); the optimum depends on the algorithm, input data and hardware - you can try the different values and @time or @benchmark them:

@time AK.foreachindex(f, itr_gpu, block_size=512)

Similarly, for performance on the CPU the overhead of spawning threads should be masked by processing more elements per thread (but there is no reason here to launch more threads than Threads.nthreads(), the number of threads Julia was started with); the optimum depends on how expensive f is - again, benchmarking is your friend:

@time AK.foreachindex(f, itr_cpu, max_tasks=16, min_elems=1000)

Temporary Arrays

As GPU memory is more expensive, all functions in AcceleratedKernels.jl expose any temporary arrays they will use (the temp argument); you can supply your own buffers to make the algorithms not allocate additional GPU storage, e.g.:

v = ROCArray(rand(Float32, 100_000))
+Performance Tips · AcceleratedKernels.jl

Performance Tips

If you just started using AcceleratedKernels.jl, see the Manual first for some examples.

GPU Block Size and CPU Threads

All GPU functions allow you to specify a block size - this is often a power of two (mostly 64, 128, 256, 512); the optimum depends on the algorithm, input data and hardware - you can try the different values and @time or @benchmark them:

@time AK.foreachindex(f, itr_gpu, block_size=512)
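For example, a quick tuning loop over candidate block sizes (assuming CUDA; BenchmarkTools gives more stable numbers than a single @time):

import AcceleratedKernels as AK
using CUDA

x = CuArray(rand(Float32, 10_000_000))
y = similar(x)
for bs in (64, 128, 256, 512)
    CUDA.@time AK.foreachindex(x; block_size=bs) do i
        @inbounds y[i] = 2 * x[i] + 1
    end
end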

Similarly, for performance on the CPU the overhead of spawning threads should be masked by processing more elements per thread (but there is no reason here to launch more threads than Threads.nthreads(), the number of threads Julia was started with); the optimum depends on how expensive f is - again, benchmarking is your friend:

@time AK.foreachindex(f, itr_cpu, max_tasks=16, min_elems=1000)

Temporary Arrays

As GPU memory is more expensive, all functions in AcceleratedKernels.jl expose any temporary arrays they will use (the temp argument); you can supply your own buffers to make the algorithms not allocate additional GPU storage, e.g.:

v = ROCArray(rand(Float32, 100_000))
 temp = similar(v)
-AK.sort!(v, temp=temp)
+AK.sort!(v, temp=temp)
diff --git a/dev/references/index.html b/dev/references/index.html index 0b0c659..e45ebd8 100644 --- a/dev/references/index.html +++ b/dev/references/index.html @@ -1,5 +1,5 @@ -References · AcceleratedKernels.jl

References

This library is built on the unique Julia infrastructure for transpiling code to GPU backends, and years spent developing the JuliaGPU ecosystem that make it a joy to use. In particular, credit should go to the following people and work:

+References · AcceleratedKernels.jl

References

This library is built on the unique Julia infrastructure for transpiling code to GPU backends, and years spent developing the JuliaGPU ecosystem that make it a joy to use. In particular, credit should go to the following people and work:

  • The Julia language design, which made code manipulation and generation a first class citizen: Bezanson J, Edelman A, Karpinski S, Shah VB. Julia: A fresh approach to numerical computing. SIAM review. 2017.

  • @@ -38,4 +38,4 @@

Designed and built by Andrei-Leonard Nicusan, maintained with contributors.

Much of this work was possible because of the fantastic HPC resources at the University of Birmingham and the Birmingham Environment for Academic Research, which gave us free on-demand access to thousands of CPUs and GPUs that we experimented on, and the support teams we nagged. In particular, thank you to Kit Windows-Yule and Andrew Morris on the BlueBEAR and Baskerville T2 supercomputers' leadership, and Simon Branford, Simon Hartley, James Allsopp and James Carpenter for computing support.

-
+
diff --git a/dev/roadmap/index.html b/dev/roadmap/index.html index 827d405..0715c24 100644 --- a/dev/roadmap/index.html +++ b/dev/roadmap/index.html @@ -1,5 +1,5 @@ -Roadmap · AcceleratedKernels.jl

Roadmap / Future Plans

Help is very welcome for any of the below:

+Roadmap · AcceleratedKernels.jl

Roadmap / Future Plans

Help is very welcome for any of the below:

  • Automated optimisation / tuning of e.g. block_size for a given input; can be made algorithm-agnostic.

      @@ -27,4 +27,4 @@
    • Other ideas? Post an issue, or open a discussion on the Julia Discourse.

    -
+
diff --git a/dev/search_index.js b/dev/search_index.js index 8719741..b2b7e53 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"references/#References","page":"References","title":"References","text":"","category":"section"},{"location":"references/","page":"References","title":"References","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 10. References\") # hide","category":"page"},{"location":"references/","page":"References","title":"References","text":"","category":"page"},{"location":"references/","page":"References","title":"References","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 11. Acknowledgements\") # hide","category":"page"},{"location":"api/sort/#sort-and-friends","page":"Sorting","title":"sort and friends","text":"","category":"section"},{"location":"api/sort/","page":"Sorting","title":"Sorting","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"### 5.4. `sort` and friends\") # hide","category":"page"},{"location":"api/accumulate/#Accumulate-/-Prefix-Sum-/-Scan","page":"Accumulate","title":"Accumulate / Prefix Sum / Scan","text":"","category":"section"},{"location":"api/accumulate/","page":"Accumulate","title":"Accumulate","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"### 5.7. `accumulate`\") # hide","category":"page"},{"location":"api/task_partition/#Multithreaded-Task-Partitioning","page":"Task Partitioning","title":"Multithreaded Task Partitioning","text":"","category":"section"},{"location":"api/task_partition/","page":"Task Partitioning","title":"Task Partitioning","text":"AcceleratedKernels.TaskPartitioner\nAcceleratedKernels.task_partition","category":"page"},{"location":"api/task_partition/#AcceleratedKernels.TaskPartitioner","page":"Task Partitioning","title":"AcceleratedKernels.TaskPartitioner","text":"struct TaskPartitioner\n\nPartitioning num_elems elements / jobs over maximum max_tasks tasks with minimum min_elems elements per task.\n\nMethods\n\nTaskPartitioner(num_elems, max_tasks=Threads.nthreads(), min_elems=1)\n\nFields\n\nnum_elems::Int64\nmax_tasks::Int64\nmin_elems::Int64\nnum_tasks::Int64\ntask_istarts::Vector{Int64}\n\nExamples\n\nusing AcceleratedKernels: TaskPartitioner\n\n# Divide 10 elements between 4 tasks\ntp = TaskPartitioner(10, 4)\nfor i in 1:tp.num_tasks\n @show tp[i]\nend\n\n# output\ntp[i] = 1:3\ntp[i] = 4:6\ntp[i] = 7:8\ntp[i] = 9:10\n\nusing AcceleratedKernels: TaskPartitioner\n\n# Divide 20 elements between 6 tasks with minimum 5 elements per task.\n# Not all tasks will be required\ntp = TaskPartitioner(20, 6, 5)\nfor i in 1:tp.num_tasks\n @show tp[i]\nend\n\n# output\ntp[i] = 1:5\ntp[i] = 6:10\ntp[i] = 11:15\ntp[i] = 16:20\n\n\n\n\n\n","category":"type"},{"location":"api/task_partition/#AcceleratedKernels.task_partition","page":"Task Partitioning","title":"AcceleratedKernels.task_partition","text":"task_partition(f, num_elems, max_tasks=Threads.nthreads(), min_elems=1)\ntask_partition(f, tp::TaskPartitioner)\n\nPartition num_elems jobs across at most num_tasks parallel tasks with at least min_elems per task, calling f(start_index:end_index), where the indices are between 1 and num_elems.\n\nExamples\n\nA toy example showing outputs:\n\nnum_elems = 4\ntask_partition(println, num_elems)\n\n# Output, possibly in a different order due to threading order\n1:1\n4:4\n2:2\n3:3\n\nThis function is probably most useful with a do-block, 
e.g.:\n\ntask_partition(4) do irange\n some_long_computation(param1, param2, irange)\nend\n\n\n\n\n\n","category":"function"},{"location":"api/foreachindex/#General-Looping","page":"General Loops","title":"General Looping","text":"","category":"section"},{"location":"api/foreachindex/","page":"General Loops","title":"General Loops","text":"AcceleratedKernels.foreachindex","category":"page"},{"location":"api/foreachindex/#AcceleratedKernels.foreachindex","page":"General Loops","title":"AcceleratedKernels.foreachindex","text":"foreachindex(\n f, itr, backend::Backend=get_backend(itr);\n\n # CPU settings\n scheduler=:threads,\n max_tasks=Threads.nthreads(),\n min_elems=1,\n\n # GPU settings\n block_size=256,\n)\n\nParallelised for loop over the indices of an iterable.\n\nIt allows you to run normal Julia code on a GPU over multiple arrays - e.g. CuArray, ROCArray, MtlArray, oneArray - with one GPU thread per index.\n\nOn CPUs at most max_tasks threads are launched, or fewer such that each thread processes at least min_elems indices; if a single task ends up being needed, f is inlined and no thread is launched. Tune it to your function - the more expensive it is, the fewer elements are needed to amortise the cost of launching a thread (which is a few μs). The scheduler can be :polyester to use Polyester.jl cheap threads or :threads to use normal Julia threads; either can be faster depending on the function, but in general the latter is more composable.\n\nExamples\n\nNormally you would write a for loop like this:\n\nx = Array(1:100)\ny = similar(x)\nfor i in eachindex(x)\n @inbounds y[i] = 2 * x[i] + 1\nend\n\nUsing this function you can have the same for loop body over a GPU array:\n\nusing CUDA\nconst x = CuArray(1:100)\nconst y = similar(x)\nforeachindex(x) do i\n @inbounds y[i] = 2 * x[i] + 1\nend\n\nNote that the above code is pure arithmetic, which you can write directly (and on some platforms it may be faster) as:\n\nusing CUDA\nx = CuArray(1:100)\ny = 2 .* x .+ 1\n\nImportant note: to use this function on a GPU, the objects referenced inside the loop body must have known types - i.e. be inside a function, or const global objects; but you shouldn't use global objects anyways. For example:\n\nusing oneAPI\n\nx = oneArray(1:100)\n\n# CRASHES - typical error message: \"Reason: unsupported dynamic function invocation\"\n# foreachindex(x) do i\n# x[i] = i\n# end\n\nfunction somecopy!(v)\n # Because it is inside a function, the type of `v` will be known\n foreachindex(v) do i\n v[i] = i\n end\nend\n\nsomecopy!(x) # This works\n\n\n\n\n\n","category":"function"},{"location":"api/map/#Map","page":"Map","title":"Map","text":"","category":"section"},{"location":"api/map/","page":"Map","title":"Map","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"### 5.3. `map`\") # hide","category":"page"},{"location":"api/map/","page":"Map","title":"Map","text":"","category":"page"},{"location":"api/map/","page":"Map","title":"Map","text":"AcceleratedKernels.map!","category":"page"},{"location":"api/map/#AcceleratedKernels.map!","page":"Map","title":"AcceleratedKernels.map!","text":"map!(\n f, dst::AbstractArray, src::AbstractArray;\n\n # CPU settings\n scheduler=:threads,\n max_tasks=Threads.nthreads(),\n min_elems=1,\n\n # GPU settings\n block_size=256, \n)\n\nApply the function f to each element of src and store the result in dst. 
The CPU and GPU settings are the same as for foreachindex.\n\n\n\n\n\n","category":"function"},{"location":"api/binarysearch/#Binary-Search","page":"Binary Search","title":"Binary Search","text":"","category":"section"},{"location":"api/binarysearch/","page":"Binary Search","title":"Binary Search","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"### 5.8. `searchsorted` and friends\") # hide","category":"page"},{"location":"benchmarks/#Benchmarks","page":"Benchmarks","title":"Benchmarks","text":"","category":"section"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 3. Benchmarks\") # hide","category":"page"},{"location":"performance/#Performance-Tips","page":"Performance Tips","title":"Performance Tips","text":"","category":"section"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"If you just started using AcceleratedKernels.jl, see the Manual first for some examples.","category":"page"},{"location":"performance/#GPU-Block-Size-and-CPU-Threads","page":"Performance Tips","title":"GPU Block Size and CPU Threads","text":"","category":"section"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"All GPU functions allow you to specify a block size - this is often a power of two (mostly 64, 128, 256, 512); the optimum depends on the algorithm, input data and hardware - you can try the different values and @time or @benchmark them:","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"@time AK.foreachindex(f, itr_gpu, block_size=512)","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"Similarly, for performance on the CPU the overhead of spawning threads should be masked by processing more elements per thread (but there is no reason here to launch more threads than Threads.nthreads(), the number of threads Julia was started with); the optimum depends on how expensive f is - again, benchmarking is your friend:","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"@time AK.foreachindex(f, itr_cpu, max_tasks=16, min_elems=1000)","category":"page"},{"location":"performance/#Temporary-Arrays","page":"Performance Tips","title":"Temporary Arrays","text":"","category":"section"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"As GPU memory is more expensive, all functions in AcceleratedKernels.jl expose any temporary arrays they will use (the temp argument); you can supply your own buffers to make the algorithms not allocate additional GPU storage, e.g.:","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"v = ROCArray(rand(Float32, 100_000))\ntemp = similar(v)\nAK.sort!(v, temp=temp)","category":"page"},{"location":"api/custom_structs/#Custom-Structs","page":"Custom Structs","title":"Custom Structs","text":"","category":"section"},{"location":"api/custom_structs/","page":"Custom Structs","title":"Custom Structs","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 6. 
Custom Structs\") # hide","category":"page"},{"location":"roadmap/#Roadmap-/-Future-Plans","page":"Roadmap","title":"Roadmap / Future Plans","text":"","category":"section"},{"location":"roadmap/","page":"Roadmap","title":"Roadmap","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 9. Roadmap / Future Plans\") # hide","category":"page"},{"location":"debugging/#Debugging-Kernels","page":"Debugging Kernels","title":"Debugging Kernels","text":"","category":"section"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"As the compilation pipeline of GPU kernels is different to that of base Julia, error messages also look different - for example, where Julia would insert an exception when a variable name was not defined (e.g. we had a typo), a GPU kernel throwing exceptions cannot be compiled and instead you'll see some cascading errors like \"[...] compiling [...] resulted in invalid LLVM IR\" caused by \"Reason: unsupported use of an undefined name\" resulting in \"Reason: unsupported dynamic function invocation\", etc.","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"Thankfully, there are only about 3 types of such error messages and they're not that scary when you look into them.","category":"page"},{"location":"debugging/#Undefined-Variables-/-Typos","page":"Debugging Kernels","title":"Undefined Variables / Typos","text":"","category":"section"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"If you misspell a variable name, Julia would insert an exception:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"function set_color(v, color)\n AK.foreachindex(v) do i\n v[i] = colour # Grab your porridge\n end\nend","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"However, exceptions cannot be compiled on GPUs and you will see cascading errors like below:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"(Image: Undefined Name Error)","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"The key thing to look for is undefined name, then search for it in your code.","category":"page"},{"location":"debugging/#Exceptions-and-Checks-that-throw","page":"Debugging Kernels","title":"Exceptions and Checks that throw","text":"","category":"section"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"As mentioned above, exceptions cannot be compiled in GPU kernels; however, many normal-looking functions that we reference in kernels may contain argument-checking. If it cannot be proved that a check branch would not throw an exception, you will see a similar cascade of errors. For example, casting a Float32 to an Int32 includes an InexactError exception check - see this tame-looking code:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"function mymul!(v)\n AK.foreachindex(v) do i\n v[i] *= 2f0\n end\nend\n\nv = MtlArray(1:1000)\nmymul!(v)","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"See any problem with it? The MtlArray(1:1000) creates a GPU vector filled with Int64 values, but within foreachindex we do v[i] *= 2.0. 
We are multiplying an Int64 by a Float32, resulting in a Float32 value that we try to write back into v - this may throw an exception, like in normal Julia code:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"julia> x = [1, 2, 3];\njulia> x[1] = 42.5\nERROR: InexactError: Int64(42.5)","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"On GPUs you will see an error like this:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"(Image: Check Exception Error)","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"Note the error stack: setindex!, convert, Int64, box_float32 - because of the exception check, we have a type instability, which in turn results in boxing values behind pointers, in turn resulting in dynamic memory allocation and finally the error we see at the top, unsupported call to gpu_malloc.","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"You may need to do your correctness checks manually, without exceptions; in this specific case, if we did want to cast a Float32 to an Int, we could use unsafe_trunc(T, x) - though be careful when using unsafe functions that you understand their behaviour and assumptions (e.g. log has a DomainError check for negative values):","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"function mymul!(v)\n AK.foreachindex(v) do i\n v[i] = unsafe_trunc(eltype(v), v[i] * 2.5f0)\n end\nend\n\nv = MtlArray(1:1000)\nmymul!(v)","category":"page"},{"location":"debugging/#Type-Instability-/-Global-Variables","page":"Debugging Kernels","title":"Type Instability / Global Variables","text":"","category":"section"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"Types must be known to be captured and compiled within GPU kernels. Global variables without const are not type-stable, as you could associate a different value later on in a script:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"v = MtlArray(1:1000)\n\nAK.foreachindex(v) do i\n v[i] *= 2\nend\n\nv = \"potato\"","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"The error stack is a bit more difficult here:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"(Image: Type Unstable Error)","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"You see a few dynamic function invocation, an unsupported call to gpu_malloc, and a bit further down a box. The more operations you do on the type-unstable object, the more dynamic function invocation errors you'll see. These would also be the steps Base Julia would take to allow dynamically-changing objects: they'd be put in a Box behind pointers, and allocated on the heap. 
In a way, it is better that we cannot do that on a GPU, as it hurts performance massively.","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"There are two ways to solve this - if you really want to use global variables in a script, put them behind a const:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"const v = MtlArray(1:1000)\n\nAK.foreachindex(v) do i\n v[i] *= 2\nend\n\n# This would give you an error now\n# v = \"potato\"","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"Or better, use functions:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"function mymul!(v, x)\n AK.foreachindex(v) do i\n v[i] *= x\n end\nend\n\nv = MtlArray(1:1000)\nmymul!(v, 2)","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"Note that Julia's lambda capture is very powerful - inside AK.foreachindex you can references other objects from within the function (like x), without explicitly passing them to the GPU.","category":"page"},{"location":"debugging/#Apple-Metal-Only:-Float64-is-not-Supported","page":"Debugging Kernels","title":"Apple Metal Only: Float64 is not Supported","text":"","category":"section"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"Mac GPUs do not natively support Float64 values; there is a high-level check when trying to create an array:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"julia> x = MtlArray([1.0, 2.0, 3.0])\nERROR: Metal does not support Float64 values, try using Float32 instead","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"However, if we tried to use / convert values in a kernel to a Float64:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"function mymul!(v, x)\n AK.foreachindex(v) do i\n v[i] *= x\n end\nend\n\nv = MtlArray{Float32}(1:1000)\nmymul!(v, 2.0)","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"Note that we try to multiply Float32 values by 2.0, which is a Float64 - in which case we get:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"ERROR: LoadError: Compilation to native code failed; see below for details.\n[...]\ncaused by: NSError: Compiler encountered an internal error (AGXMetalG15X_M1, code 3)\n[...]","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"Change the 2.0 to 2.0f0 or Float32(2); in kernels with generic types (that are supposed to work on multiple possible input types), do use the same types as your inputs, using e.g. T = eltype(v) then zero(T), T(42), etc.","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"For other library-related problems, feel free to post a GitHub issue. 
For help implementing new code, or just advice, you can also use the Julia Discourse forum, the community is incredibly helpful.","category":"page"},{"location":"api/predicates/#Predicates","page":"Predicates","title":"Predicates","text":"","category":"section"},{"location":"api/predicates/","page":"Predicates","title":"Predicates","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"### 5.9. `all` / `any`\") # hide","category":"page"},{"location":"api/predicates/","page":"Predicates","title":"Predicates","text":"Note on the cooperative keyword: some older platforms crash when multiple threads write to the same memory location in a global array (e.g. old Intel Graphics); if all threads were to write the same value, it is well-defined on others (e.g. CUDA F4.2 says \"If a non-atomic instruction executed by a warp writes to the same location in global memory for more than one of the threads of the warp, only one thread performs a write and which thread does it is undefined.\"). This \"cooperative\" thread behaviour allows for a faster implementation; if you have a platform - the only one I know is Intel UHD Graphics - that crashes, set cooperative=false to use a safer mapreduce-based implementation.","category":"page"},{"location":"testing/#Testing","page":"Testing","title":"Testing","text":"","category":"section"},{"location":"testing/","page":"Testing","title":"Testing","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 7. Testing\") # hide","category":"page"},{"location":"api/mapreduce/#MapReduce","page":"MapReduce","title":"MapReduce","text":"","category":"section"},{"location":"api/mapreduce/","page":"MapReduce","title":"MapReduce","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"### 5.6. `mapreduce`\") # hide","category":"page"},{"location":"api/mapreduce/","page":"MapReduce","title":"MapReduce","text":"","category":"page"},{"location":"api/mapreduce/","page":"MapReduce","title":"MapReduce","text":"AcceleratedKernels.mapreduce","category":"page"},{"location":"api/mapreduce/#AcceleratedKernels.mapreduce","page":"MapReduce","title":"AcceleratedKernels.mapreduce","text":"mapreduce(\n f, op, src::AbstractArray;\n init,\n dims::Union{Nothing, Int}=nothing,\n\n # CPU settings\n scheduler=:static,\n max_tasks=Threads.nthreads(),\n min_elems=1,\n\n # GPU settings\n block_size::Int=256,\n temp::Union{Nothing, AbstractArray}=nothing,\n switch_below::Int=0,\n)\n\nReduce src along dimensions dims using the binary operator op after applying f elementwise. If dims is nothing, reduce src to a scalar. If dims is an integer, reduce src along that dimension. The init value is used as the initial value for the reduction (i.e. after mapping).\n\nCPU settings\n\nThe scheduler can be one of the OhMyThreads.jl schedulers, i.e. :static, :dynamic, :greedy or :serial. Assuming the workload is uniform (as the GPU algorithm prefers), :static is used by default; if you need fine-grained control over your threads, consider using OhMyThreads.jl directly.\n\nUse at most max_tasks threads with at least min_elems elements per task.\n\nGPU settings\n\nThe block_size parameter controls the number of threads per block.\n\nThe temp parameter can be used to pass a pre-allocated temporary array. For reduction to a scalar (dims=nothing), length(temp) >= 2 * (length(src) + 2 * block_size - 1) ÷ (2 * block_size) is required. 
For reduction along a dimension (dims is an integer), temp is used as the destination array, and thus must have the exact dimensions required - i.e. same dimensionwise sizes as src, except for the reduced dimension which becomes 1; there are some corner cases when one dimension is zero, check against Base.reduce for CPU arrays for exact behavior.\n\nThe switch_below parameter controls the threshold below which the reduction is performed on the CPU and is only used for 1D reductions (i.e. dims=nothing).\n\nExample\n\nComputing a sum of squares, reducing down to a scalar that is copied to host:\n\nimport AcceleratedKernels as AK\nusing CUDA\n\nv = CuArray{Int16}(rand(1:1000, 100_000))\nvsumsq = AK.mapreduce(x -> x * x, (x, y) -> x + y, v; init=zero(eltype(v)))\n\nComputing dimensionwise sums of squares in a 2D matrix:\n\nimport AcceleratedKernels as AK\nusing Metal\n\nf(x) = x * x\nm = MtlArray(rand(Int32(1):Int32(100), 10, 100_000))\nmrowsumsq = AK.mapreduce(f, +, m; init=zero(eltype(m)), dims=1)\nmcolsumsq = AK.mapreduce(f, +, m; init=zero(eltype(m)), dims=2)\n\n\n\n\n\n","category":"function"},{"location":"api/using_backends/#Using-Different-Backends","page":"Using Different Backends","title":"Using Different Backends","text":"","category":"section"},{"location":"api/using_backends/","page":"Using Different Backends","title":"Using Different Backends","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"### 5.1. Using Different Backends\") # hide","category":"page"},{"location":"","page":"Overview","title":"Overview","text":"(Image: Logo)","category":"page"},{"location":"","page":"Overview","title":"Overview","text":"Parallel algorithm building blocks for the Julia ecosystem, targeting multithreaded CPUs, and GPUs via Intel oneAPI, AMD ROCm, Apple Metal and Nvidia CUDA (and any future backends added to the JuliaGPU organisation).","category":"page"},{"location":"","page":"Overview","title":"Overview","text":"","category":"page"},{"location":"#What's-Different?","page":"Overview","title":"What's Different?","text":"","category":"section"},{"location":"","page":"Overview","title":"Overview","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 1. What's Different?\") # hide","category":"page"},{"location":"","page":"Overview","title":"Overview","text":"","category":"page"},{"location":"#Status","page":"Overview","title":"Status","text":"","category":"section"},{"location":"","page":"Overview","title":"Overview","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 2. Status\") # hide","category":"page"},{"location":"","page":"Overview","title":"Overview","text":"","category":"page"},{"location":"#Acknowledgements","page":"Overview","title":"Acknowledgements","text":"","category":"section"},{"location":"","page":"Overview","title":"Overview","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 11. Acknowledgements\") # hide","category":"page"},{"location":"","page":"Overview","title":"Overview","text":"","category":"page"},{"location":"#License","page":"Overview","title":"License","text":"","category":"section"},{"location":"","page":"Overview","title":"Overview","text":"AcceleratedKernels.jl is MIT-licensed. 
Enjoy.","category":"page"},{"location":"api/reduce/#Reductions","page":"Reduce","title":"Reductions","text":"","category":"section"},{"location":"api/reduce/","page":"Reduce","title":"Reduce","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"### 5.5. `reduce`\") # hide","category":"page"},{"location":"api/reduce/","page":"Reduce","title":"Reduce","text":"","category":"page"},{"location":"api/reduce/","page":"Reduce","title":"Reduce","text":"AcceleratedKernels.reduce","category":"page"},{"location":"api/reduce/#AcceleratedKernels.reduce","page":"Reduce","title":"AcceleratedKernels.reduce","text":"reduce(\n op, src::AbstractArray;\n init,\n dims::Union{Nothing, Int}=nothing,\n\n # CPU settings\n scheduler=:static,\n max_tasks=Threads.nthreads(),\n min_elems=1,\n\n # GPU settings\n block_size::Int=256,\n temp::Union{Nothing, AbstractGPUArray}=nothing,\n switch_below::Int=0,\n)\n\nReduce src along dimensions dims using the binary operator op. If dims is nothing, reduce src to a scalar. If dims is an integer, reduce src along that dimension. The init value is used as the initial value for the reduction.\n\nCPU settings\n\nThe scheduler can be one of the OhMyThreads.jl schedulers, i.e. :static, :dynamic, :greedy or :serial. Assuming the workload is uniform (as the GPU algorithm prefers), :static is used by default; if you need fine-grained control over your threads, consider using OhMyThreads.jl directly.\n\nUse at most max_tasks threads with at least min_elems elements per task.\n\nGPU settings\n\nThe block_size parameter controls the number of threads per block.\n\nThe temp parameter can be used to pass a pre-allocated temporary array. For reduction to a scalar (dims=nothing), length(temp) >= 2 * (length(src) + 2 * block_size - 1) ÷ (2 * block_size) is required. For reduction along a dimension (dims is an integer), temp is used as the destination array, and thus must have the exact dimensions required - i.e. same dimensionwise sizes as src, except for the reduced dimension which becomes 1; there are some corner cases when one dimension is zero, check against Base.reduce for CPU arrays for exact behavior.\n\nThe switch_below parameter controls the threshold below which the reduction is performed on the CPU and is only used for 1D reductions (i.e. dims=nothing).\n\nExample\n\nComputing a sum, reducing down to a scalar that is copied to host:\n\nimport AcceleratedKernels as AK\nusing CUDA\n\nv = CuArray{Int16}(rand(1:1000, 100_000))\nvsum = AK.reduce((x, y) -> x + y, v; init=zero(eltype(v)))\n\nComputing dimensionwise sums in a 2D matrix:\n\nimport AcceleratedKernels as AK\nusing Metal\n\nm = MtlArray(rand(Int32(1):Int32(100), 10, 100_000))\nmrowsum = AK.reduce(+, m; init=zero(eltype(m)), dims=1)\nmcolsum = AK.reduce(+, m; init=zero(eltype(m)), dims=2)\n\n\n\n\n\n","category":"function"}] +[{"location":"references/#References","page":"References","title":"References","text":"","category":"section"},{"location":"references/","page":"References","title":"References","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 10. References\") # hide","category":"page"},{"location":"references/","page":"References","title":"References","text":"","category":"page"},{"location":"references/","page":"References","title":"References","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 11. 
Acknowledgements\") # hide","category":"page"},{"location":"api/sort/#sort-and-friends","page":"Sorting","title":"sort and friends","text":"","category":"section"},{"location":"api/sort/","page":"Sorting","title":"Sorting","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"### 5.4. `sort` and friends\") # hide","category":"page"},{"location":"api/accumulate/#Accumulate-/-Prefix-Sum-/-Scan","page":"Accumulate","title":"Accumulate / Prefix Sum / Scan","text":"","category":"section"},{"location":"api/accumulate/","page":"Accumulate","title":"Accumulate","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"### 5.7. `accumulate`\") # hide","category":"page"},{"location":"api/task_partition/#Multithreaded-Task-Partitioning","page":"Task Partitioning","title":"Multithreaded Task Partitioning","text":"","category":"section"},{"location":"api/task_partition/","page":"Task Partitioning","title":"Task Partitioning","text":"AcceleratedKernels.TaskPartitioner\nAcceleratedKernels.task_partition","category":"page"},{"location":"api/task_partition/#AcceleratedKernels.TaskPartitioner","page":"Task Partitioning","title":"AcceleratedKernels.TaskPartitioner","text":"struct TaskPartitioner\n\nPartitioning num_elems elements / jobs over maximum max_tasks tasks with minimum min_elems elements per task.\n\nMethods\n\nTaskPartitioner(num_elems, max_tasks=Threads.nthreads(), min_elems=1)\n\nFields\n\nnum_elems::Int64\nmax_tasks::Int64\nmin_elems::Int64\nnum_tasks::Int64\ntask_istarts::Vector{Int64}\n\nExamples\n\nusing AcceleratedKernels: TaskPartitioner\n\n# Divide 10 elements between 4 tasks\ntp = TaskPartitioner(10, 4)\nfor i in 1:tp.num_tasks\n @show tp[i]\nend\n\n# output\ntp[i] = 1:3\ntp[i] = 4:6\ntp[i] = 7:8\ntp[i] = 9:10\n\nusing AcceleratedKernels: TaskPartitioner\n\n# Divide 20 elements between 6 tasks with minimum 5 elements per task.\n# Not all tasks will be required\ntp = TaskPartitioner(20, 6, 5)\nfor i in 1:tp.num_tasks\n @show tp[i]\nend\n\n# output\ntp[i] = 1:5\ntp[i] = 6:10\ntp[i] = 11:15\ntp[i] = 16:20\n\n\n\n\n\n","category":"type"},{"location":"api/task_partition/#AcceleratedKernels.task_partition","page":"Task Partitioning","title":"AcceleratedKernels.task_partition","text":"task_partition(f, num_elems, max_tasks=Threads.nthreads(), min_elems=1)\ntask_partition(f, tp::TaskPartitioner)\n\nPartition num_elems jobs across at most num_tasks parallel tasks with at least min_elems per task, calling f(start_index:end_index), where the indices are between 1 and num_elems.\n\nExamples\n\nA toy example showing outputs:\n\nnum_elems = 4\ntask_partition(println, num_elems)\n\n# Output, possibly in a different order due to threading order\n1:1\n4:4\n2:2\n3:3\n\nThis function is probably most useful with a do-block, e.g.:\n\ntask_partition(4) do irange\n some_long_computation(param1, param2, irange)\nend\n\n\n\n\n\n","category":"function"},{"location":"api/foreachindex/#General-Looping","page":"General Loops","title":"General Looping","text":"","category":"section"},{"location":"api/foreachindex/","page":"General Loops","title":"General Loops","text":"AcceleratedKernels.foreachindex\nAcceleratedKernels.foraxes","category":"page"},{"location":"api/foreachindex/#AcceleratedKernels.foreachindex","page":"General Loops","title":"AcceleratedKernels.foreachindex","text":"foreachindex(\n f, itr, backend::Backend=get_backend(itr);\n\n # CPU settings\n scheduler=:threads,\n max_tasks=Threads.nthreads(),\n min_elems=1,\n\n # GPU settings\n 
block_size=256,\n)\n\nParallelised for loop over the indices of an iterable.\n\nIt allows you to run normal Julia code on a GPU over multiple arrays - e.g. CuArray, ROCArray, MtlArray, oneArray - with one GPU thread per index.\n\nOn CPUs at most max_tasks threads are launched, or fewer such that each thread processes at least min_elems indices; if a single task ends up being needed, f is inlined and no thread is launched. Tune it to your function - the more expensive it is, the fewer elements are needed to amortise the cost of launching a thread (which is a few μs). The scheduler can be :polyester to use Polyester.jl cheap threads or :threads to use normal Julia threads; either can be faster depending on the function, but in general the latter is more composable.\n\nExamples\n\nNormally you would write a for loop like this:\n\nx = Array(1:100)\ny = similar(x)\nfor i in eachindex(x)\n @inbounds y[i] = 2 * x[i] + 1\nend\n\nUsing this function you can have the same for loop body over a GPU array:\n\nusing CUDA\nimport AcceleratedKernels as AK\nconst x = CuArray(1:100)\nconst y = similar(x)\nAK.foreachindex(x) do i\n @inbounds y[i] = 2 * x[i] + 1\nend\n\nNote that the above code is pure arithmetic, which you can write directly (and on some platforms it may be faster) as:\n\nusing CUDA\nx = CuArray(1:100)\ny = 2 .* x .+ 1\n\nImportant note: to use this function on a GPU, the objects referenced inside the loop body must have known types - i.e. be inside a function, or const global objects; but you shouldn't use global objects anyways. For example:\n\nusing oneAPI\nimport AcceleratedKernels as AK\n\nx = oneArray(1:100)\n\n# CRASHES - typical error message: \"Reason: unsupported dynamic function invocation\"\n# AK.foreachindex(x) do i\n# x[i] = i\n# end\n\nfunction somecopy!(v)\n # Because it is inside a function, the type of `v` will be known\n AK.foreachindex(v) do i\n v[i] = i\n end\nend\n\nsomecopy!(x) # This works\n\n\n\n\n\n","category":"function"},{"location":"api/foreachindex/#AcceleratedKernels.foraxes","page":"General Loops","title":"AcceleratedKernels.foraxes","text":"foraxes(\n f, itr, dims::Union{Nothing, <:Integer}=nothing, backend::Backend=get_backend(itr);\n\n # CPU settings\n scheduler=:threads,\n max_tasks=Threads.nthreads(),\n min_elems=1,\n\n # GPU settings\n block_size=256,\n)\n\nParallelised for loop over the indices along axis dims of an iterable.\n\nIt allows you to run normal Julia code on a GPU over multiple arrays - e.g. CuArray, ROCArray, MtlArray, oneArray - with one GPU thread per index.\n\nOn CPUs at most max_tasks threads are launched, or fewer such that each thread processes at least min_elems indices; if a single task ends up being needed, f is inlined and no thread is launched. Tune it to your function - the more expensive it is, the fewer elements are needed to amortise the cost of launching a thread (which is a few μs). 
The scheduler can be :polyester to use Polyester.jl cheap threads or :threads to use normal Julia threads; either can be faster depending on the function, but in general the latter is more composable.\n\nExamples\n\nNormally you would write a for loop like this:\n\nx = Array(reshape(1:30, 3, 10))\ny = similar(x)\nfor i in axes(x, 2)\n for j in axes(x, 1)\n @inbounds y[j, i] = 2 * x[j, i] + 1\n end\nend\n\nUsing this function you can have the same for loop body over a GPU array:\n\nusing CUDA\nimport AcceleratedKernels as AK\nconst x = CuArray(reshape(1:3000, 3, 1000))\nconst y = similar(x)\nAK.foraxes(x, 2) do i\n for j in axes(x, 1)\n @inbounds y[j, i] = 2 * x[j, i] + 1\n end\nend\n\nImportant note: to use this function on a GPU, the objects referenced inside the loop body must have known types - i.e. be inside a function, or const global objects; but you shouldn't use global objects anyways. For example:\n\nusing oneAPI\nimport AcceleratedKernels as AK\n\nx = oneArray(reshape(1:3000, 3, 1000))\n\n# CRASHES - typical error message: \"Reason: unsupported dynamic function invocation\"\n# AK.foraxes(x) do i\n# x[i] = i\n# end\n\nfunction somecopy!(v)\n # Because it is inside a function, the type of `v` will be known\n AK.foraxes(v) do i\n v[i] = i\n end\nend\n\nsomecopy!(x) # This works\n\n\n\n\n\n","category":"function"},{"location":"api/map/#Map","page":"Map","title":"Map","text":"","category":"section"},{"location":"api/map/","page":"Map","title":"Map","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"### 5.3. `map`\") # hide","category":"page"},{"location":"api/map/","page":"Map","title":"Map","text":"","category":"page"},{"location":"api/map/","page":"Map","title":"Map","text":"AcceleratedKernels.map!","category":"page"},{"location":"api/map/#AcceleratedKernels.map!","page":"Map","title":"AcceleratedKernels.map!","text":"map!(\n f, dst::AbstractArray, src::AbstractArray;\n\n # CPU settings\n scheduler=:threads,\n max_tasks=Threads.nthreads(),\n min_elems=1,\n\n # GPU settings\n block_size=256, \n)\n\nApply the function f to each element of src and store the result in dst. The CPU and GPU settings are the same as for foreachindex.\n\n\n\n\n\n","category":"function"},{"location":"api/binarysearch/#Binary-Search","page":"Binary Search","title":"Binary Search","text":"","category":"section"},{"location":"api/binarysearch/","page":"Binary Search","title":"Binary Search","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"### 5.8. `searchsorted` and friends\") # hide","category":"page"},{"location":"benchmarks/#Benchmarks","page":"Benchmarks","title":"Benchmarks","text":"","category":"section"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 3. 
Benchmarks\") # hide","category":"page"},{"location":"performance/#Performance-Tips","page":"Performance Tips","title":"Performance Tips","text":"","category":"section"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"If you just started using AcceleratedKernels.jl, see the Manual first for some examples.","category":"page"},{"location":"performance/#GPU-Block-Size-and-CPU-Threads","page":"Performance Tips","title":"GPU Block Size and CPU Threads","text":"","category":"section"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"All GPU functions allow you to specify a block size - this is often a power of two (mostly 64, 128, 256, 512); the optimum depends on the algorithm, input data and hardware - you can try the different values and @time or @benchmark them:","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"@time AK.foreachindex(f, itr_gpu, block_size=512)","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"Similarly, for performance on the CPU the overhead of spawning threads should be masked by processing more elements per thread (but there is no reason here to launch more threads than Threads.nthreads(), the number of threads Julia was started with); the optimum depends on how expensive f is - again, benchmarking is your friend:","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"@time AK.foreachindex(f, itr_cpu, max_tasks=16, min_elems=1000)","category":"page"},{"location":"performance/#Temporary-Arrays","page":"Performance Tips","title":"Temporary Arrays","text":"","category":"section"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"As GPU memory is more expensive, all functions in AcceleratedKernels.jl expose any temporary arrays they will use (the temp argument); you can supply your own buffers to make the algorithms not allocate additional GPU storage, e.g.:","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"v = ROCArray(rand(Float32, 100_000))\ntemp = similar(v)\nAK.sort!(v, temp=temp)","category":"page"},{"location":"api/utilities/#Utilities","page":"Utilities","title":"Utilities","text":"","category":"section"},{"location":"api/utilities/","page":"Utilities","title":"Utilities","text":"AcceleratedKernels.TypeWrap","category":"page"},{"location":"api/utilities/#AcceleratedKernels.TypeWrap","page":"Utilities","title":"AcceleratedKernels.TypeWrap","text":"struct TypeWrap{T} end\nTypeWrap(T) = TypeWrap{T}()\nBase.:*(x::Number, ::TypeWrap{T}) where T = T(x)\n\nAllow type conversion via multiplication, like 5i32 for 5 * i32 where i32 is a TypeWrap.\n\nExamples\n\nimport AcceleratedKernels as AK\nu32 = AK.TypeWrap{UInt32}\nprintln(typeof(5u32))\n\n# output\nUInt32\n\nThis is used e.g. to set integer literals inside kernels as u16 to ensure no indices are promoted beyond the index base type.\n\nFor example, Metal uses UInt32 indices, but if it is mixed with a Julia integer literal (Int64 by default) like in src[ithread + 1], we incur a type cast to Int64. 
Instead, we can use src[ithread + 1u16] or src[ithread + 0x1] to ensure the index is UInt32 and avoid the cast; as the integer literal 1u16 has a shorter type than ithread, it is automatically promoted (at compile time) to the ithread type, whether ithread is signed or unsigned as per the backend.\n\n# Defaults defined\n1u8, 2u16, 3u32, 4u64\n5i8, 6i16, 7i32, 8i64\n\n\n\n\n\n","category":"type"},{"location":"api/custom_structs/#Custom-Structs","page":"Custom Structs","title":"Custom Structs","text":"","category":"section"},{"location":"api/custom_structs/","page":"Custom Structs","title":"Custom Structs","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 6. Custom Structs\") # hide","category":"page"},{"location":"roadmap/#Roadmap-/-Future-Plans","page":"Roadmap","title":"Roadmap / Future Plans","text":"","category":"section"},{"location":"roadmap/","page":"Roadmap","title":"Roadmap","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 9. Roadmap / Future Plans\") # hide","category":"page"},{"location":"debugging/#Debugging-Kernels","page":"Debugging Kernels","title":"Debugging Kernels","text":"","category":"section"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"As the compilation pipeline of GPU kernels is different to that of base Julia, error messages also look different - for example, where Julia would insert an exception when a variable name was not defined (e.g. we had a typo), a GPU kernel throwing exceptions cannot be compiled and instead you'll see some cascading errors like \"[...] compiling [...] resulted in invalid LLVM IR\" caused by \"Reason: unsupported use of an undefined name\" resulting in \"Reason: unsupported dynamic function invocation\", etc.","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"Thankfully, there are only about 3 types of such error messages and they're not that scary when you look into them.","category":"page"},{"location":"debugging/#Undefined-Variables-/-Typos","page":"Debugging Kernels","title":"Undefined Variables / Typos","text":"","category":"section"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"If you misspell a variable name, Julia would insert an exception:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"function set_color(v, color)\n AK.foreachindex(v) do i\n v[i] = colour # Grab your porridge\n end\nend","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"However, exceptions cannot be compiled on GPUs and you will see cascading errors like below:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"(Image: Undefined Name Error)","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"The key thing to look for is undefined name, then search for it in your code.","category":"page"},{"location":"debugging/#Exceptions-and-Checks-that-throw","page":"Debugging Kernels","title":"Exceptions and Checks that throw","text":"","category":"section"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"As mentioned above, exceptions cannot be compiled in GPU kernels; however, many normal-looking functions that we reference in kernels may contain argument-checking. 
If it cannot be proved that a check branch would not throw an exception, you will see a similar cascade of errors. For example, casting a Float32 to an Int32 includes an InexactError exception check - see this tame-looking code:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"function mymul!(v)\n AK.foreachindex(v) do i\n v[i] *= 2f0\n end\nend\n\nv = MtlArray(1:1000)\nmymul!(v)","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"See any problem with it? The MtlArray(1:1000) creates a GPU vector filled with Int64 values, but within foreachindex we do v[i] *= 2f0. We are multiplying an Int64 by a Float32, resulting in a Float32 value that we try to write back into v - this may throw an exception, like in normal Julia code:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"julia> x = [1, 2, 3];\njulia> x[1] = 42.5\nERROR: InexactError: Int64(42.5)","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"On GPUs you will see an error like this:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"(Image: Check Exception Error)","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"Note the error stack: setindex!, convert, Int64, box_float32 - because of the exception check, we have a type instability, which in turn results in boxing values behind pointers, in turn resulting in dynamic memory allocation and finally the error we see at the top, unsupported call to gpu_malloc.","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"You may need to do your correctness checks manually, without exceptions; in this specific case, if we did want to cast a Float32 to an Int, we could use unsafe_trunc(T, x) - though be careful when using unsafe functions that you understand their behaviour and assumptions (e.g. log has a DomainError check for negative values):","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"function mymul!(v)\n AK.foreachindex(v) do i\n v[i] = unsafe_trunc(eltype(v), v[i] * 2.5f0)\n end\nend\n\nv = MtlArray(1:1000)\nmymul!(v)","category":"page"},{"location":"debugging/#Type-Instability-/-Global-Variables","page":"Debugging Kernels","title":"Type Instability / Global Variables","text":"","category":"section"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"Types must be known to be captured and compiled within GPU kernels. 
Global variables without const are not type-stable, as you could associate a different value later on in a script:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"v = MtlArray(1:1000)\n\nAK.foreachindex(v) do i\n v[i] *= 2\nend\n\nv = \"potato\"","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"The error stack is a bit more difficult here:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"(Image: Type Unstable Error)","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"You see a few dynamic function invocation, an unsupported call to gpu_malloc, and a bit further down a box. The more operations you do on the type-unstable object, the more dynamic function invocation errors you'll see. These would also be the steps Base Julia would take to allow dynamically-changing objects: they'd be put in a Box behind pointers, and allocated on the heap. In a way, it is better that we cannot do that on a GPU, as it hurts performance massively.","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"There are two ways to solve this - if you really want to use global variables in a script, put them behind a const:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"const v = MtlArray(1:1000)\n\nAK.foreachindex(v) do i\n v[i] *= 2\nend\n\n# This would give you an error now\n# v = \"potato\"","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"Or better, use functions:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"function mymul!(v, x)\n AK.foreachindex(v) do i\n v[i] *= x\n end\nend\n\nv = MtlArray(1:1000)\nmymul!(v, 2)","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"Note that Julia's lambda capture is very powerful - inside AK.foreachindex you can reference other objects from within the function (like x), without explicitly passing them to the GPU.","category":"page"},{"location":"debugging/#Apple-Metal-Only:-Float64-is-not-Supported","page":"Debugging Kernels","title":"Apple Metal Only: Float64 is not Supported","text":"","category":"section"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"Mac GPUs do not natively support Float64 values; there is a high-level check when trying to create an array:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"julia> x = MtlArray([1.0, 2.0, 3.0])\nERROR: Metal does not support Float64 values, try using Float32 instead","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"However, if we tried to use / convert values in a kernel to a Float64:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"function mymul!(v, x)\n AK.foreachindex(v) do i\n v[i] *= x\n end\nend\n\nv = MtlArray{Float32}(1:1000)\nmymul!(v, 2.0)","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"Note that we try to multiply Float32 values by 2.0, which is a Float64 - in which case we 
get:","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"ERROR: LoadError: Compilation to native code failed; see below for details.\n[...]\ncaused by: NSError: Compiler encountered an internal error (AGXMetalG15X_M1, code 3)\n[...]","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"Change the 2.0 to 2.0f0 or Float32(2); in kernels with generic types (that are supposed to work on multiple possible input types), do use the same types as your inputs, using e.g. T = eltype(v) then zero(T), T(42), etc.","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"","category":"page"},{"location":"debugging/","page":"Debugging Kernels","title":"Debugging Kernels","text":"For other library-related problems, feel free to post a GitHub issue. For help implementing new code, or just advice, you can also use the Julia Discourse forum, the community is incredibly helpful.","category":"page"},{"location":"api/predicates/#Predicates","page":"Predicates","title":"Predicates","text":"","category":"section"},{"location":"api/predicates/","page":"Predicates","title":"Predicates","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"### 5.9. `all` / `any`\") # hide","category":"page"},{"location":"api/predicates/","page":"Predicates","title":"Predicates","text":"Note on the cooperative keyword: some older platforms crash when multiple threads write to the same memory location in a global array (e.g. old Intel Graphics); if all threads were to write the same value, it is well-defined on others (e.g. CUDA F4.2 says \"If a non-atomic instruction executed by a warp writes to the same location in global memory for more than one of the threads of the warp, only one thread performs a write and which thread does it is undefined.\"). This \"cooperative\" thread behaviour allows for a faster implementation; if you have a platform - the only one I know is Intel UHD Graphics - that crashes, set cooperative=false to use a safer mapreduce-based implementation.","category":"page"},{"location":"testing/#Testing","page":"Testing","title":"Testing","text":"","category":"section"},{"location":"testing/","page":"Testing","title":"Testing","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 7. Testing\") # hide","category":"page"},{"location":"api/mapreduce/#MapReduce","page":"MapReduce","title":"MapReduce","text":"","category":"section"},{"location":"api/mapreduce/","page":"MapReduce","title":"MapReduce","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"### 5.6. `mapreduce`\") # hide","category":"page"},{"location":"api/mapreduce/","page":"MapReduce","title":"MapReduce","text":"","category":"page"},{"location":"api/mapreduce/","page":"MapReduce","title":"MapReduce","text":"AcceleratedKernels.mapreduce","category":"page"},{"location":"api/mapreduce/#AcceleratedKernels.mapreduce","page":"MapReduce","title":"AcceleratedKernels.mapreduce","text":"mapreduce(\n f, op, src::AbstractArray;\n init,\n dims::Union{Nothing, Int}=nothing,\n\n # CPU settings\n scheduler=:static,\n max_tasks=Threads.nthreads(),\n min_elems=1,\n\n # GPU settings\n block_size::Int=256,\n temp::Union{Nothing, AbstractArray}=nothing,\n switch_below::Int=0,\n)\n\nReduce src along dimensions dims using the binary operator op after applying f elementwise. If dims is nothing, reduce src to a scalar. 
If dims is an integer, reduce src along that dimension. The init value is used as the initial value for the reduction (i.e. after mapping).\n\nCPU settings\n\nThe scheduler can be one of the OhMyThreads.jl schedulers, i.e. :static, :dynamic, :greedy or :serial. Assuming the workload is uniform (as the GPU algorithm prefers), :static is used by default; if you need fine-grained control over your threads, consider using OhMyThreads.jl directly.\n\nUse at most max_tasks threads with at least min_elems elements per task.\n\nGPU settings\n\nThe block_size parameter controls the number of threads per block.\n\nThe temp parameter can be used to pass a pre-allocated temporary array. For reduction to a scalar (dims=nothing), length(temp) >= 2 * (length(src) + 2 * block_size - 1) ÷ (2 * block_size) is required. For reduction along a dimension (dims is an integer), temp is used as the destination array, and thus must have the exact dimensions required - i.e. same dimensionwise sizes as src, except for the reduced dimension which becomes 1; there are some corner cases when one dimension is zero, check against Base.reduce for CPU arrays for exact behavior.\n\nThe switch_below parameter controls the threshold below which the reduction is performed on the CPU and is only used for 1D reductions (i.e. dims=nothing).\n\nExample\n\nComputing a sum of squares, reducing down to a scalar that is copied to host:\n\nimport AcceleratedKernels as AK\nusing CUDA\n\nv = CuArray{Int16}(rand(1:1000, 100_000))\nvsumsq = AK.mapreduce(x -> x * x, (x, y) -> x + y, v; init=zero(eltype(v)))\n\nComputing dimensionwise sums of squares in a 2D matrix:\n\nimport AcceleratedKernels as AK\nusing Metal\n\nf(x) = x * x\nm = MtlArray(rand(Int32(1):Int32(100), 10, 100_000))\nmrowsumsq = AK.mapreduce(f, +, m; init=zero(eltype(m)), dims=1)\nmcolsumsq = AK.mapreduce(f, +, m; init=zero(eltype(m)), dims=2)\n\n\n\n\n\n","category":"function"},{"location":"api/using_backends/#Using-Different-Backends","page":"Using Different Backends","title":"Using Different Backends","text":"","category":"section"},{"location":"api/using_backends/","page":"Using Different Backends","title":"Using Different Backends","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"### 5.1. Using Different Backends\") # hide","category":"page"},{"location":"","page":"Overview","title":"Overview","text":"(Image: Logo)","category":"page"},{"location":"","page":"Overview","title":"Overview","text":"Parallel algorithm building blocks for the Julia ecosystem, targeting multithreaded CPUs, and GPUs via Intel oneAPI, AMD ROCm, Apple Metal and Nvidia CUDA (and any future backends added to the JuliaGPU organisation).","category":"page"},{"location":"","page":"Overview","title":"Overview","text":"","category":"page"},{"location":"#What's-Different?","page":"Overview","title":"What's Different?","text":"","category":"section"},{"location":"","page":"Overview","title":"Overview","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 1. What's Different?\") # hide","category":"page"},{"location":"","page":"Overview","title":"Overview","text":"","category":"page"},{"location":"#Status","page":"Overview","title":"Status","text":"","category":"section"},{"location":"","page":"Overview","title":"Overview","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 2. 
Status\") # hide","category":"page"},{"location":"","page":"Overview","title":"Overview","text":"","category":"page"},{"location":"#Acknowledgements","page":"Overview","title":"Acknowledgements","text":"","category":"section"},{"location":"","page":"Overview","title":"Overview","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"## 11. Acknowledgements\") # hide","category":"page"},{"location":"","page":"Overview","title":"Overview","text":"","category":"page"},{"location":"#License","page":"Overview","title":"License","text":"","category":"section"},{"location":"","page":"Overview","title":"Overview","text":"AcceleratedKernels.jl is MIT-licensed. Enjoy.","category":"page"},{"location":"api/reduce/#Reductions","page":"Reduce","title":"Reductions","text":"","category":"section"},{"location":"api/reduce/","page":"Reduce","title":"Reduce","text":"import AcceleratedKernels as AK # hide\nAK.DocHelpers.readme_section(\"### 5.5. `reduce`\") # hide","category":"page"},{"location":"api/reduce/","page":"Reduce","title":"Reduce","text":"","category":"page"},{"location":"api/reduce/","page":"Reduce","title":"Reduce","text":"AcceleratedKernels.reduce","category":"page"},{"location":"api/reduce/#AcceleratedKernels.reduce","page":"Reduce","title":"AcceleratedKernels.reduce","text":"reduce(\n op, src::AbstractArray;\n init,\n dims::Union{Nothing, Int}=nothing,\n\n # CPU settings\n scheduler=:static,\n max_tasks=Threads.nthreads(),\n min_elems=1,\n\n # GPU settings\n block_size::Int=256,\n temp::Union{Nothing, AbstractGPUArray}=nothing,\n switch_below::Int=0,\n)\n\nReduce src along dimensions dims using the binary operator op. If dims is nothing, reduce src to a scalar. If dims is an integer, reduce src along that dimension. The init value is used as the initial value for the reduction.\n\nCPU settings\n\nThe scheduler can be one of the OhMyThreads.jl schedulers, i.e. :static, :dynamic, :greedy or :serial. Assuming the workload is uniform (as the GPU algorithm prefers), :static is used by default; if you need fine-grained control over your threads, consider using OhMyThreads.jl directly.\n\nUse at most max_tasks threads with at least min_elems elements per task.\n\nGPU settings\n\nThe block_size parameter controls the number of threads per block.\n\nThe temp parameter can be used to pass a pre-allocated temporary array. For reduction to a scalar (dims=nothing), length(temp) >= 2 * (length(src) + 2 * block_size - 1) ÷ (2 * block_size) is required. For reduction along a dimension (dims is an integer), temp is used as the destination array, and thus must have the exact dimensions required - i.e. same dimensionwise sizes as src, except for the reduced dimension which becomes 1; there are some corner cases when one dimension is zero, check against Base.reduce for CPU arrays for exact behavior.\n\nThe switch_below parameter controls the threshold below which the reduction is performed on the CPU and is only used for 1D reductions (i.e. 
dims=nothing).\n\nExample\n\nComputing a sum, reducing down to a scalar that is copied to host:\n\nimport AcceleratedKernels as AK\nusing CUDA\n\nv = CuArray{Int16}(rand(1:1000, 100_000))\nvsum = AK.reduce((x, y) -> x + y, v; init=zero(eltype(v)))\n\nComputing dimensionwise sums in a 2D matrix:\n\nimport AcceleratedKernels as AK\nusing Metal\n\nm = MtlArray(rand(Int32(1):Int32(100), 10, 100_000))\nmrowsum = AK.reduce(+, m; init=zero(eltype(m)), dims=1)\nmcolsum = AK.reduce(+, m; init=zero(eltype(m)), dims=2)\n\n\n\n\n\n","category":"function"}] } diff --git a/dev/testing/index.html b/dev/testing/index.html index 36f7f9e..9eae06a 100644 --- a/dev/testing/index.html +++ b/dev/testing/index.html @@ -1,8 +1,8 @@ -Testing · AcceleratedKernels.jl

Testing

If it ain't tested, it's broken. The test/runtests.jl suite does randomised correctness testing on all algorithms in the library. To test locally, execute:

+Testing · AcceleratedKernels.jl

Testing

If it ain't tested, it's broken. The test/runtests.jl suite does randomised correctness testing on all algorithms in the library. To test locally, execute:

$> julia -e 'import Pkg; Pkg.develop(path="path/to/AcceleratedKernels.jl"); Pkg.add("oneAPI")'
$> julia -e 'import Pkg; Pkg.test("AcceleratedKernels", test_args=["--oneAPI"])'

Replace the "--oneAPI" with "--CUDA", "--AMDGPU" or "--Metal" to test different backends, as available on your machine.

Leave out the test_args to test the CPU backend:

$> julia -e 'import Pkg; Pkg.test("AcceleratedKernels")'
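Beyond the official suite, a quick manual smoke test from the REPL is to run one algorithm and check it against Base - a hypothetical sketch on the CPU backend, with illustrative array contents:

julia> import AcceleratedKernels as AK
julia> v = collect(1:1000);
julia> AK.reduce(+, v; init=0) == sum(v)   # compare against Base's sum
true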
-
+