
Map for sparse matrices #60

Closed
jumutc opened this issue May 28, 2014 · 37 comments
@jumutc

jumutc commented May 28, 2014

Please consider the code below, where Julia converts the sparse matrix into a dense one and then runs map on top of it. Should we instead have map defined only for the non-zero elements of a sparse matrix? Otherwise the slowdown is an issue and one cannot effectively apply λ-calculus in this case!

julia> test = sprand(10^3,10^3,.01)
1000x1000 sparse matrix with 10144 Float64 entries:
    [80  ,    1]  =  0.993039
    [128 ,    1]  =  0.117601
    [152 ,    1]  =  0.974119
    [259 ,    1]  =  0.0362442
    [289 ,    1]  =  0.621536
    [371 ,    1]  =  0.653076
    [631 ,    1]  =  0.131718
    
    [439 , 1000]  =  0.0621062
    [538 , 1000]  =  0.109039
    [613 , 1000]  =  0.212955
    [620 , 1000]  =  0.147798
    [640 , 1000]  =  0.479203
    [702 , 1000]  =  0.88309
    [884 , 1000]  =  0.780324
    [892 , 1000]  =  0.0164652

julia> f = v -> v+1
(anonymous function)

julia> @time map(f,test)
elapsed time: 14.162436633 seconds (89388704 bytes allocated)
1000x1000 sparse matrix with 1000000 Float64 entries:
    [1   ,    1]  =  1.0
    [2   ,    1]  =  1.0
    [3   ,    1]  =  1.0
    [4   ,    1]  =  1.0
    [5   ,    1]  =  1.0
    [6   ,    1]  =  1.0
    [7   ,    1]  =  1.0
    
    [993 , 1000]  =  1.0
    [994 , 1000]  =  1.0
    [995 , 1000]  =  1.0
    [996 , 1000]  =  1.0
    [997 , 1000]  =  1.0
    [998 , 1000]  =  1.0
    [999 , 1000]  =  1.0
    [1000, 1000]  =  1.0

julia> @time (I,J,V)=findnz(test); sparse(I,J,map(f,V))
elapsed time: 0.000144994 seconds (243784 bytes allocated)
1000x1000 sparse matrix with 10144 Float64 entries:
    [80  ,    1]  =  1.99304
    [128 ,    1]  =  1.1176
    [152 ,    1]  =  1.97412
    [259 ,    1]  =  1.03624
    [289 ,    1]  =  1.62154
    [371 ,    1]  =  1.65308
    [631 ,    1]  =  1.13172
    
    [439 , 1000]  =  1.06211
    [538 , 1000]  =  1.10904
    [613 , 1000]  =  1.21296
    [620 , 1000]  =  1.1478
    [640 , 1000]  =  1.4792
    [702 , 1000]  =  1.88309
    [884 , 1000]  =  1.78032
    [892 , 1000]  =  1.01647
@mlubin

mlubin commented May 28, 2014

Defining map to only work on the stored elements (note: this may include zeros) of a sparse matrix seems a bit too magical and liable to lead to surprising behavior. I think it's better to be explicit in this case. You could do map!(f,nonzeros(M)) to update the stored elements in place.
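A sketch of this idiom, assuming a recent Julia where SparseArrays is a standard library (in the 2014-era Julia of this thread, the one-argument form map!(f, nonzeros(M)) worked directly):

```julia
using SparseArrays

M = sprand(10^3, 10^3, 0.01)
f = v -> v + 1

# Update only the stored entries of M in place; entries that are not
# stored remain (implicit) zero. Modern map! takes an explicit
# destination first, which here aliases the source.
map!(f, nonzeros(M), nonzeros(M))
```

This mutates M directly and never materializes a dense matrix, which is the point of the suggestion.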

@jumutc
Author

jumutc commented May 28, 2014

Maybe one could imagine something like mapnz to make it work? Also I think this applies to other methods like filter, fold!

@JeffBezanson
Contributor

Fully agree with @mlubin.
filter is easier since it doesn't change any values, only selects from them. However it doesn't make much sense for 2-d arrays.

@StefanKarpinski
Contributor

If we had a matrix type with a non-zero default, this would be straightforward. But we don't, and no one seems that interested in it. As it stands, we could make it an error to map a function that doesn't map zero to zero over a sparse array. In that case, we could require specifying a dense result array type.

@ViralBShah
Member

I agree with @mlubin 's solution too. Otherwise, it is too magical for anyone else reading the code.

@JeffBezanson
Contributor

@StefanKarpinski my vague sense is that zero is special, in that common operations like matrix multiply and scaling generally (though not always) preserve it. The next generalization is to different semirings, which I believe we support via the sparse code's use of zero(T).

@jumutc
Author

jumutc commented May 28, 2014

For sure there is no place for magical transformations and behaviour. @StefanKarpinski might be right in throwing an error if one maps zeros to something else on SparseMatrixCSC object. From my side it would be nice to have some straightforward functionality like mapnz or foldnz which would work solely with nonzeros.

@JeffBezanson
Contributor

Ah, I see we also use x == 0. I guess this is kind of ambiguous if x is from an alternate semiring on the reals. I'd hate to have to use x == zero(x) everywhere though.

@nalimilan
Contributor

Raising an error would be painful, as it would mean functions that work with any AbstractArray would not work with sparse arrays. If you're OK with paying the (memory) price of getting a dense result, why prevent you from doing that? Otherwise, if you're forced to convert the sparse array to a dense one first, you'll need twice as much memory (original + result).

@jumutc
Author

jumutc commented May 28, 2014

Maybe redefining the behaviour of the default map is not a good idea, but extending Base with some additional functionality to help work with sparse matrices seems reasonable to me.

@mlubin

mlubin commented May 28, 2014

I'm not sure if a map function applied to only stored elements is even meaningful in general given that zero elements themselves could be stored. The result could change depending on stored zeros, which isn't supposed to happen. I think something like this should first live in a package before we consider adopting it in Base.

@jumutc
Author

jumutc commented May 28, 2014

But storing zeros in the sparse matrix format is yet another issue. A[i,j] = 0 for sparse matrices shouldn't lead to any modification of the nzval field or other fields of the SparseMatrixCSC object.

@StefanKarpinski
Contributor

@StefanKarpinski my vague sense is that zero is special, in that common operations like matrix multiply and scaling generally (though not always) preserve it. The next generalization is to different semirings, which I believe we support via the sparse code's use of zero(T).

I've outlined before how one can pretty easily have non-zero default sparse matrices decomposed as S = Z + c where Z is a zero-default sparse matrix and c is a constant. Then you can do many computations pretty easily, e.g. (Z1 + c1) + (Z2 + c2) = (Z1 + Z2) + (c1 + c2) and (Z1 + c1)*(Z2 + c2) = Z1*Z2 + c1*Z2 + c2*Z1 + c1 + c2. It's mildly complicated, but really not all that bad. The advantage is that this kind of sparse matrix is closed under all kinds of operations, including map, whereas zero-default sparse matrices are not.
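A minimal sketch of this decomposition, with hypothetical names OffsetSparse and mapoffset (neither is in Base); it assumes f is pure, per the caveat raised later in the thread:

```julia
using SparseArrays

# Hypothetical non-zero-default sparse matrix: S[i,j] == Z[i,j] + c elementwise.
struct OffsetSparse{T}
    Z::SparseMatrixCSC{T,Int}  # zero-default sparse part
    c::T                       # constant default added everywhere
end

# Addition stays closed: (Z1 + c1) + (Z2 + c2) = (Z1 + Z2) + (c1 + c2).
Base.:+(A::OffsetSparse, B::OffsetSparse) = OffsetSparse(A.Z + B.Z, A.c + B.c)

# map stays closed too: the new default is f(c), and each stored entry x
# becomes f(x + c) - f(c), so that stored-plus-default equals f(old value).
# A stored zero maps to f(c) - f(c) == 0, so it stays a stored zero.
function mapoffset(f, A::OffsetSparse)
    newc = f(A.c)
    Znew = copy(A.Z)
    nz = nonzeros(Znew)
    map!(x -> f(x + A.c) - newc, nz, nz)
    OffsetSparse(Znew, newc)
end
```

With this representation the result of map is always another OffsetSparse, regardless of whether f preserves zero.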

@JeffBezanson
Contributor

Ah yes, I remember that now.

@kmsquire

On Wed, May 28, 2014 at 9:10 AM, Stefan Karpinski notifications@github.com wrote:

@StefanKarpinski my vague sense is that zero is special, in that common operations like matrix multiply and scaling generally (though not always) preserve it. The next generalization is to different semirings, which I believe we support via the sparse code's use of zero(T).

I've outlined before how one can pretty easily have non-zero default sparse matrices decomposed as S = Z + c where Z is a zero-default sparse matrix and c is a constant. Then you can do many computations pretty easily, e.g. (Z1 + c1) + (Z2 + c2) = (Z1 + Z2) + (c1 + c2) and (Z1 + c1)*(Z2 + c2) = Z1*Z2 + c1*Z2 + c2*Z1 + c1 + c2. It's mildly complicated, but really not all that bad. The advantage is that this kind of sparse matrix is closed under all kinds of operations, including map, whereas zero-default sparse matrices are not.

I don't remember this, but I like it!

Kevin

I don't remember this, but I like it!

Kevin

@tkelman

tkelman commented May 29, 2014

@StefanKarpinski the only time I've ever wanted sparse with a non-zero default is for sparse vectors of bounds, with default -Inf for a lower-bound vector or Inf for an upper-bound vector. While neat, I can't think of a place where I'd use your proposal. Also, Z + c is deprecated; presumably you mean Z .+ c.

@JeffBezanson
Contributor

OK, I think we've decided not to have map work on only non-zeros by default.

@ViralBShah
Member

There are a few things to do though. Perhaps we should make map not work on sparse matrices, even though it does now, and instead print a message suggesting that the user use it on nonzeros(S). Also, we could document this in the sparse matrix portion of the manual.

@ViralBShah
Member

Reopening this to make sure that some of the above mentioned things get done.

@ViralBShah ViralBShah reopened this May 29, 2014
@StefanKarpinski
Contributor

@tkelman: The real benefit of sparse arrays with non-zero defaults is so that things like map can be made to work generically. Sparse matrices aren't closed under almost any operations unless you can have non-zero defaults. Anyway, I'm clearly failing to convince anyone that sparse arrays with non-zero defaults are a good idea so I'm just going to shut up about it.

@JeffBezanson
Contributor

I think @johnmyleswhite had a good point that such a map implementation only works for totally pure functions. Non-zero defaults might be good for many other reasons, but they're not a slam dunk for map.

@tkelman

tkelman commented Jun 20, 2014

Putting more generality into the way sparse matrices are structured is a good goal. These features don't exist in any classical sparse matrix tools due to their many limitations, so quantifying how much demand exists for that kind of functionality is hard. Better extensibility in terms of more coordinate formats, being able to specify block structure (say a sparse matrix where each "nonzero element" is itself a dense matrix), etc and still having linear algebra operations work on them in a generic way is a little higher up on my wish list than non-zero defaults. But also considerably harder to accomplish.

@ViralBShah
Member

See JuliaLang/julia#10536.

@toivoh

toivoh commented Mar 19, 2015

I for one agree that sparse matrices with a nonzero default are a neat idea, and it might also be a nice way to generalize them to non-number element types (I seem to remember we had a discussion about that at some point, but can't remember where). I'm not sure when you would want to map a non-pure function anyway, but maybe others have cases for this?

I don't know much about actual use cases either, though. One thing that would become trickier is concatenation of matrices with different defaults.

@Sacha0
Member

Sacha0 commented Jun 28, 2016

Concerning the original topic (map over SparseMatrixCSCs), consensus appears to favor the status quo. Have I missed something actionable? If not, close? Thanks!

@tkelman

tkelman commented Jun 29, 2016

We could add a special-purpose mapnz function, but that probably doesn't need to be in base unless it falls out of a more general mechanism like an eachstored sparse iterator protocol.

@jumutc
Author

jumutc commented Jun 29, 2016

mapnz would be a great API enhancement!

@toivoh

toivoh commented Jul 1, 2016

I guess that it should be mapnz and not mapstored, right? I.e., it should explicitly not map stored zeros (but probably keep them as stored zeros).

@ExpandingMan
Contributor

Hello all, I came across this issue after experiencing unexpectedly slow map behavior for sparse matrices. The thing is, if map had returned a dense matrix I wouldn't have been surprised at all that it is slow. However, since it returned a sparse matrix I was confused. Given the conversation above I expect that nothing about this will change, but I did want to point out that broadcasting f.(X) over a sparse matrix returns a dense matrix. I can appreciate why that might be, but shouldn't the broadcasting behavior be consistent with the map behavior? If they are different, what defines the difference?

Also, currently nonzeros flattens tensors. That seems fine, but there doesn't seem to be any really clean way of doing map or broadcast on sparse matrices, only on non-zero elements, returning a sparse matrix. Is there one? If not there probably should be a mapnz.

Also, I have one possible application for "sparse" matrices with non-zero defaults. In very large scale optimization with binary variables, sometimes it is convenient to store a tensor of 1's. There are certainly other ways around that, so some may argue that it's not the best way of doing what it achieves, but I thought I'd mention it as one application that I can think of for my own use.

@Sacha0
Member

Sacha0 commented Nov 18, 2016

Given the conversation above I expect that nothing about this will change, but I did want to point out that broadcasting f.(X) over a sparse matrix returns a dense matrix. I can appreciate why that might be, but shouldn't the broadcasting behavior be consistent with the map behavior? If they are different, what defines the difference?

The next evolution of broadcast over sparse matrices should land soon. See e.g. JuliaLang/julia#19239, the product of extensive discussion in JuliaLang/julia#18590. I have not looked at map over sparse matrices recently; chances are map is due an overhaul as well.

Also, currently nonzeros flattens tensors. That seems fine, but there doesn't seem to be any really clean way of doing map or broadcast on sparse matrices, only on non-zero elements, returning a sparse matrix. Is there one? If not there probably should be a mapnz.

julia> foo = sprand(4, 4, 0.5)
4×4 sparse matrix with 8 Float64 nonzero entries:
    [3, 1]  =  0.433251
    [4, 1]  =  0.269651
    [1, 2]  =  0.783236
    [2, 2]  =  0.693917
    [1, 3]  =  0.272095
    [3, 3]  =  0.270313
    [1, 4]  =  0.755662
    [2, 4]  =  0.303817

julia> map!(cos, foo.nzval); # or map!(cos, nonzeros(foo)) if you prefer

julia> foo
4×4 sparse matrix with 8 Float64 nonzero entries:
    [3, 1]  =  0.907606
    [4, 1]  =  0.963864
    [1, 2]  =  0.708634
    [2, 2]  =  0.768747
    [1, 3]  =  0.96321
    [3, 3]  =  0.963687
    [1, 4]  =  0.727818
    [2, 4]  =  0.954202

?

I came across this issue after experiencing unexpectedly slow map behavior for sparse matrices. The thing is, if map had returned a dense matrix I wouldn't have been surprised at all that it is slow. However, since it returned a sparse matrix I was confused.

Could you point to an issue describing/illustrating the behavior you mention?

Thanks and best!

@ExpandingMan
Contributor

Ah, so am I to understand that map! iterates over only non-zeros while map iterates over all elements? If that's the case, then map!(f, nonzeros(A)) certainly provides a solution, but it definitely seems confusing to the user that map and map! behave differently (in fact I'm not really sure what I'd have expected map! to do on a sparse matrix until you mentioned it).

This issue came up in JuliaOpt/JuMP.jl#889. We wanted a function that performed an operation only on the non-zero elements of a sparse tensor and returned a tensor of identical shape, and weren't sure what the "proper" way of doing that was. As it is, the best available solution doesn't seem particularly elegant.

@Sacha0
Member

Sacha0 commented Nov 18, 2016

Ah, so am I to understand that map! iterates over only non-zeros while map iterates over all elements? If that's the case, then map!(f, nonzeros(A)) certainly provides a solution, but it definitely seems confusing to the user that map and map! behave differently (in fact I'm not really sure what I'd have expected map! to do on a sparse matrix until you mentioned it).

No, map!(f, A) operates over all entries in A, not only stored entries. Similarly, map!(f, A.nzval) (or equivalently map!(f, nonzeros(A)), as nonzeros(A) = A.nzval) operates over all entries in A.nzval, which is the field of the sparse matrix/vector A (a Vector{eltype(A)}) that contains the nonzero values of A. Hence for sparse A, map!(f, A.nzval) mutates A such that A's nonzero values are transformed by f. Does that clarify? Having a look at the definitions of SparseMatrixCSC, SparseVector, and nonzeros for each case (documentation for nonzeros included) might help. Best!

@ExpandingMan
Contributor

Oh, I see... it was doing exactly what it does for map in that it was iterating over the flattened nonzeros(A), but since it stored it in a sparse matrix you get the right shape. The thing which is confusing about that is that it seems strange that it is even possible to "assign" a flat Vector to a SparseMatrixCSC because they have completely different shapes.

It appears that this topic is quite a minefield. Maybe this isn't the place to bring it up, but one other thing that would probably be extremely helpful for most users because it would have (as far as I can see) completely unambiguous behavior is to have something like

for (i, j) in spiterate(X)
    f(X[i, j])
end

iterating first through columns and then rows, skipping all zero elements. Of course, it is already possible to achieve this behavior using nzrange, rowvals and nonzeros but it isn't nearly as nice and clean.
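For reference, a sketch of the existing idiom the comment alludes to: visiting only stored entries in column-major (CSC) order, with no X[i, j] lookups; the helper name foreachstored is hypothetical.

```julia
using SparseArrays

# Visit every stored entry (i, j, v) of X in CSC order, calling f on each.
function foreachstored(f, X::SparseMatrixCSC)
    rows = rowvals(X)           # row index of each stored entry
    vals = nonzeros(X)          # value of each stored entry
    for j in 1:size(X, 2)       # columns first
        for k in nzrange(X, j)  # range of stored entries in column j
            f(rows[k], j, vals[k])
        end
    end
end

X = sprand(10, 10, 0.3)
foreachstored((i, j, v) -> println((i, j, v)), X)
```

Direct indexing X[i, j] inside such a loop would be much slower, since each getindex on a CSC matrix requires a search within the column.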

@Sacha0
Member

Sacha0 commented Nov 18, 2016

Oh, I see... it was doing exactly what it does for map in that it was iterating over the flattened nonzeros(A), but since it stored it in a sparse matrix you get the right shape. The thing which is confusing about that is that it seems strange that it is even possible to "assign" a flat Vector to a SparseMatrixCSC because they have completely different shapes.

Please have a look at the definitions and documentation I linked above, you might find them helpful :). To be clear, nonzeros(A) is not a flattened version of A --- nonzeros(A) returns the field of A which contains A's stored values. The snippet above involves no "assignment" of a Vector to a SparseMatrixCSC.

iterating first through columns and then rows, skipping all zero elements. Of course, it is already possible to achieve this behavior using nzrange, rowvals and nonzeros but it isn't nearly as nice and clean.

Iterating efficiently over sparse matrices is tricky. For example, the suggestion above would be lamentably inefficient (as accessing a sparse matrix/vector via getindex on a row-column pair (i,j) is fundamentally inefficient). But something similar would be possible with a more generic "index".

Potential higher level interfaces for iteration over sparse matrices/vectors have received extensive discussion elsewhere, in case you feel like diving down a deep rabbit hole :). (I don't have the issue numbers immediately available unfortunately).

Best!

@ExpandingMan
Contributor

Yeah, this issue is a rabbit hole indeed.

Just to be clear, I realize that nonzeros(A) isn't just a flattened version of A; of course it is only the non-zero values. My point was just that it is flat while A is not, so from a mathematical perspective assignment is a confusing concept, even if A is internally just a wrapped version of nonzeros(A).

I'm beginning to see why Julia is the only language I know of with native sparse matrices (I guess Matlab has them but it doesn't count because it isn't really a general-purpose programming language). Still, it's very worth all the effort everyone has put into it!

Thanks for all the help.

@jleugeri
Copy link

Despite the fact that the semantics of this are "a rabbit hole", as @ExpandingMan pointed out, wouldn't it still be useful to include some simple function such as the following for convenience, maybe under a better thought out name?

function sparsemap(f, M::SparseMatrixCSC)
    SparseMatrixCSC(M.m, M.n, M.colptr, M.rowval, map(f, M.nzval))
end
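A usage sketch of the proposal above. Note that as written it reuses M.colptr and M.rowval, so the result shares structure with M; the variant below (same hypothetical name) copies them to avoid that aliasing, at the cost of two extra array copies:

```julia
using SparseArrays

# Map f over only the stored entries of M, returning a new sparse matrix
# with its own copies of the structural arrays.
function sparsemap(f, M::SparseMatrixCSC)
    SparseMatrixCSC(M.m, M.n, copy(M.colptr), copy(M.rowval), map(f, M.nzval))
end

A = sprand(4, 4, 0.5)
B = sparsemap(x -> x + 1, A)  # stored entries shifted by 1; unstored stay zero
```

Like mlubin's map!(f, nonzeros(M)) suggestion, this silently gives a different answer than dense map whenever f does not map zero to zero, which is the semantic concern discussed throughout the thread.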

@KristofferC KristofferC transferred this issue from JuliaLang/julia Jan 14, 2022
@ViralBShah
Member

Dup of #4
