I have been trying to get around the very slow execution of reducedim operations on views into large sparse matrices (see https://discourse.julialang.org/t/slow-arithmetic-on-views-of-sparse-matrices/3644, and also https://github.com/JuliaLang/julia/issues/21515 and https://github.com/JuliaLang/julia/issues/21796). To do this, I use a custom rowsum function to do sum(x, 2) on views into sparse matrices, like this:

rowsum(x::SubArray{T,2,P}) where {T,P<:SparseMatrixCSC} =
    (x.parent * sparse(x.indexes[2],
                       ones(Int, length(x.indexes[2])),
                       ones(Int, length(x.indexes[2])),
                       size(x.parent, 2), 1))[x.indexes[1]]
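For context: rowsum builds a sparse indicator column vector with a 1 for every selected column, so x.parent * indicator gives the row sums over those columns, and the final getindex restricts the result to the viewed rows. A quick sanity check (my own sketch, assuming 0.6 syntax; A and cols are just illustrative names):

using Base.Test     # only for the illustrative check below
A = sprand(100, 100, 0.1)
cols = rand(1:100, 50)
x = view(A, :, cols)
@test vec(rowsum(x)) ≈ vec(sum(A[:, cols], 2))   # rowsum should match sum along dims 2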
On 0.6 this function is >100 times slower for matrices of the size I work with (roughly 10000 x 10000 with 0.01 sparsity) when x is a SubArray that has only been sliced along the columns (e.g. x = view(parentmatrix, :, some_index)). The regression is due to the final getindex ([x.indexes[1]]), because x.indexes[1] has a different type on 0.5 (Colon) and 0.6 (Base.Slice{Base.OneTo}), and indexing with the Slice is much slower than with Colon even though they refer to the same elements.
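The type difference is easy to see directly; a small illustration of my own, using the same .indexes field as above:

v = view(sprand(10, 10, 0.1), :, [1, 3])
typeof(v.indexes[1])  # 0.5: Colon
                      # 0.6: Base.Slice{Base.OneTo{Int64}}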
Here is an MWE (not sensible in itself, but the use case is given above):
parentmatrix = sprand(1000, 1000, 0.1)
rowview = view(parentmatrix, :, rand(1:1000, 500))
sparsevec = sparse(rand(1000, 1))
using BenchmarkTools

# julia 0.5
@benchmark $(sparsevec)[$(rowview.indexes[1])]
BenchmarkTools.Trial:
  memory estimate:  15.91 KiB
  allocs estimate:  3
  --------------
  minimum time:     4.837 μs (0.00% GC)
  median time:      8.594 μs (0.00% GC)
  mean time:        29.137 μs (19.64% GC)
  maximum time:     5.003 ms (99.44% GC)
  --------------
  samples:          10000
  evals/sample:     7

versioninfo()
Julia Version 0.5.2
Commit f4c6c9d4bb (2017-05-06 16:34 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin13.4.0)
  CPU: Intel(R) Core(TM) i7-5557U CPU @ 3.10GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.7.1 (ORCJIT, broadwell)
# julia 0.6
@benchmark $(sparsevec)[$(rowview.indexes[1])]

BenchmarkTools.Trial:
  memory estimate:  15.92 KiB
  allocs estimate:  4
  --------------
  minimum time:     247.825 μs (0.00% GC)
  median time:      472.646 μs (0.00% GC)
  mean time:        1.549 ms (1.57% GC)
  maximum time:     38.920 ms (98.90% GC)
  --------------
  samples:          3159
  evals/sample:     1

versioninfo()
Julia Version 0.6.0-rc2.0
Commit 68e911be53 (2017-05-18 02:31 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin13.4.0)
  CPU: Intel(R) Core(TM) i7-5557U CPU @ 3.10GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.9.1 (ORCJIT, broadwell)
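Until this is fixed, a possible workaround (a sketch of my own, not a fix for the underlying regression; tocolon and rowsum2 are hypothetical names) is to translate the Slice back to Colon before the final getindex, since both select every row of the view:

# Map a whole-dimension Slice back to Colon; pass other index types through unchanged.
tocolon(i) = i
tocolon(::Base.Slice) = Colon()

rowsum2(x::SubArray{T,2,P}) where {T,P<:SparseMatrixCSC} =
    (x.parent * sparse(x.indexes[2],
                       ones(Int, length(x.indexes[2])),
                       ones(Int, length(x.indexes[2])),
                       size(x.parent, 2), 1))[tocolon(x.indexes[1])]

With this, the final indexing should take the same fast Colon code path on both 0.5 and 0.6.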