Symmetric cannot be vectorized #9175

Closed
joachimdahl opened this issue Nov 27, 2014 · 9 comments
Labels: linear algebra

Comments

@joachimdahl (Contributor)

It should be possible to convert symmetric matrices into vector form.

```julia
julia> vec(Symmetric(ones(3,3),:L))
ERROR: `similar` has no method matching similar(::Symmetric{Float64,Array{Float64,2}}, ::Type{Float64}, ::(Int64,))
 in reshape at abstractarray.jl:138
 in vec at abstractarray.jl:142
```

Otherwise it's not clear how to compute inner products with symmetric matrices without manually converting them to full matrices (that workaround is sketched after the examples below).

Other related examples that raise errors include:

```julia
julia> Symmetric(ones(3,3),:L) - ones(3,3)
julia> trace(Symmetric(ones(3,3),:L))
julia> norm(Symmetric(ones(3,3),:L))
julia> chol(Symmetric(ones(3,3),:L))
```
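For reference, the manual workaround reads roughly like this (a sketch; it materializes full copies, so it is wasteful, but it runs on 0.3-era Julia):

```julia
# Workaround sketch: convert both operands to full matrices first.
# Wasteful, but it works; `full` is the 0.3-era densifying function.
A = Symmetric(ones(3,3), :L)
B = Symmetric(2*ones(3,3), :L)
ip = dot(vec(full(A)), vec(full(B)))  # Frobenius inner product <A,B>
```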
@ivarne added the linear algebra label Nov 27, 2014
@stevengj (Member)

It's not clear to me what vec should mean for sparse or structured matrix formats. If you convert to a vector containing only the (structurally) nonzero/nonredundant values in some arbitrary order, then you can only use it to take inner products with matrices of the same sparsity/redundancy pattern; inner products with other matrices will be wrong or undefined. And converting to a vector containing the zeros or redundant values is hideously inefficient (and anyway you could do vec(full(A))).

My feeling is that you are better off writing your own inner product (dot) method (using A.data in the case of A::Symmetric).

On the other hand, trace etcetera should certainly be defined.
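A minimal sketch of that suggested overload, assuming both arguments use :L storage (a real method would dispatch on A.uplo as well; defining this for matrix arguments is an application-level choice, per the comment above):

```julia
# Sketch of the suggested overload: a Frobenius inner product that only
# touches the stored :L triangle of each matrix.
function Base.dot(A::Symmetric, B::Symmetric)
    n = size(A, 1)
    size(B, 1) == n || throw(DimensionMismatch("matrix sizes must match"))
    s = zero(eltype(A))
    for j = 1:n
        s += A.data[j,j] * B.data[j,j]          # each diagonal entry once
        for i = j+1:n
            s += 2 * A.data[i,j] * B.data[i,j]  # each off-diagonal entry twice
        end
    end
    return s
end
```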

@RauliRuohonen

There's no issue with vec for sparse matrices. A sparse vector takes O(nonzeros) bytes of memory, just like a sparse matrix does. Sparsity is a particularly simple kind of redundancy: you get the same memory usage regardless of the shape of the structure; only the total count of nonzeros matters. Typical operations' CPU usage is also a function of the nonzero count. So in principle you can reshape and swizzle the nonzeros around as you please, and the result is just as efficient as the original. It's even implemented: try e.g. vec(speye(5)); it works perfectly fine.

The same is not true for arbitrary types of redundancy. I suppose you could implement Vec{Symmetric} etc. types that just reference the original matrix and specialize the dot product for pairs of those, but it's a bit Rube Goldberg-esque when you can directly define a suitably optimized frobdot(A::Symmetric, B::Symmetric) etc. for your Frobenius inner product. You can express frobdot(A, B) as vec(A)'*vec(B) or trace(A*B') or sum(A .* B) or whatever, but reducing any of the latter back to frobdot is something one would expect from a symbolic math package rather than from the Matlab-like Julia. If you don't care about performance, then sum(A .* B) works right now.

If vec(A::Symmetric) works at all, I'd expect it to return vec(full(A)): it's the only efficient representation that works for everything you might want to do with the vector, and it's only roughly twice as big as the original anyway. That's not too bad compared to, e.g., the inefficiency of matrix operations that produce copies; it's right at home among those. In a similar vein, vec for e.g. a SymTridiagonal or a Tridiagonal would make sense as a plain sparse vector. Whether it's a good idea to implement these, or better to force the user to be more explicit about it, is debatable.
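In code, that proposal would read roughly as follows (a design sketch, not current behavior; a real SymTridiagonal method would build the sparse vector directly rather than densifying first):

```julia
# Design sketch of the proposal above, not current behavior.
Base.vec(A::Symmetric) = vec(full(A))               # dense, ~2x the stored data
Base.vec(A::SymTridiagonal) = vec(sparse(full(A)))  # only O(n) nonzeros survive
```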

@tkelman (Contributor) commented Nov 29, 2014

> Sparsity is a particularly simple kind of redundancy: you get the same memory usage regardless of the shape of the structure; only the total count of nonzeros matters.

No, there is a constant memory cost per column regardless of the number of nonzeros in it when you use compressed sparse column format. A tall skinny SparseMatrixCSC will use less memory than its transpose - until we add SparseMatrixCSR as a separate type, in which case the transposed matrix represented in the transposed format will use the same amount of memory as the original.
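A rough memory model makes this concrete (`csc_bytes` is a hypothetical helper, not a Base function):

```julia
# Approximate memory use of a SparseMatrixCSC with n columns, nnz stored
# entries, value type Tv, and index type Ti. The colptr array always
# holds n+1 indices, hence the per-column cost independent of nnz.
csc_bytes(n, nnz, Tv, Ti) = (n + 1)*sizeof(Ti) + nnz*(sizeof(Ti) + sizeof(Tv))

csc_bytes(1, 10^6, Float64, Int64)      # 10^6×1 column: ~16 MB
csc_bytes(10^6, 10^6, Float64, Int64)   # its 1×10^6 transpose: ~24 MB
```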

@stevengj (Member)

If vec is supposed to map an object into a column vector it is isomorphic to, then that column vector should have the correct dimension. So an m×m Symmetric matrix should map to a vector with m(m+1)/2 components, not to m² components.
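For m = 3 that means 6 components rather than 9. A sketch of such a packed vectorization (`packedvec` is a hypothetical name, assuming :L storage):

```julia
# Hypothetical packed vectorization: collect the m(m+1)/2 independent
# entries of the :L triangle, column by column.
function packedvec(A::Symmetric)
    m = size(A, 1)
    v = zeros(eltype(A), div(m*(m+1), 2))
    k = 0
    for j = 1:m, i = j:m
        v[k += 1] = A.data[i,j]
    end
    return v
end
```

Note that a plain dot product of two such packed vectors does not reproduce the Frobenius inner product: the off-diagonal entries would need a weight of 2, which is part of what makes this representation awkward, as the next comment argues.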

@joachimdahl (Contributor, Author)

Perhaps implementing vec is a naive idea; it was prompted by the fact that I didn't know how to compute inner products with symmetric matrices in Julia.

As I noted at the bottom of my original post, many standard operations seem unsupported for symmetric matrices. Converting to a full matrix, or storing the symmetric matrix as a full matrix (as BLAS/LAPACK does), seems fine, but if you have to do a manual conversion to a full matrix in order to do standard computations, then the usefulness of a symmetric matrix type is probably reduced.

@tkelman (Contributor) commented Dec 1, 2014

Having vec return the packed contents of the relevant triangle of a symmetric matrix seems like a bad idea to me, unless we were actually using the packed storage format.

See #8001 and #8240 etc.: most special matrix types are incompletely supported at the moment, and I don't think we've determined the right abstractions yet to make all generic operations truly generic in a way that extends and composes automatically over matrix structure. Right now, when there are missing methods for a type, we have to write more code; that's all. So the burden of making a new matrix type completely supported by all operations is pretty high right now.

@stevengj (Member) commented Dec 1, 2014

It depends on what you want to define vec to do, and why. If you think of vec(::T) as giving a column vector isomorphic to the vector space defined by T, then giving that column vector the wrong dimension seems incorrect to me, whether or not it is ideal from a performance perspective. But ultimately this is a question of how we expect vec to be used for T.

If you mainly want a dot product, the right thing may be to overload dot in your application (since it's not 100% clear to me that Base should define a particular choice of matrix dot product, especially given the plethora of norms we already support). But I agree that adding more functionality for Symmetric in general is subsumed by #8001.

@jiahao (Member) commented Dec 1, 2014

> Converting to a full matrix, or storing the symmetric matrix as a full matrix (as BLAS/LAPACK does), seems fine, but if you have to do a manual conversion to a full matrix in order to do standard computations, then the usefulness of a symmetric matrix type is probably reduced.

To clarify, a Symmetric matrix does the latter, not the former, and is used with the LAPACK xSYxxx routines. The rectangular full packed format is represented by SymmetricRFP (not exported from LinAlg) and is used with LAPACK's RFP routines (xSFxxx, xPFxxx, etc.); the traditional packed format is what the xSPxxx routines operate on.

As @tkelman wrote, the question in #8001 is how to support all this functionality without a combinatorial explosion of methods that have to be written and maintained. The thought I had in #8240 is that perhaps we can get away with factoring out symmetry labels and storage formats as orthogonal concerns: the storage format determines how one should traverse the matrix elements, and the symmetry label dictates what to do with the intermediate results.
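A very rough sketch of what that factoring might look like (all names here are made up for illustration):

```julia
# Hypothetical: a symmetry label wrapped around an arbitrary storage
# type, so the two concerns can vary independently.
immutable SymLabel{S<:AbstractMatrix}   # `immutable` is 0.3-era syntax for `struct`
    data::S      # dense, tridiagonal, banded, ... any AbstractMatrix storage
    uplo::Char   # which triangle is authoritative when storage is redundant
end

# The storage type determines traversal; the label determines what to do
# with the results. The trace, for instance, only ever needs the
# diagonal, which every storage format can produce:
Base.trace(A::SymLabel) = sum(diag(A.data))
```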

@joachimdahl (Contributor, Author)

I am closing this issue. After your explanations I realize it should not have been opened in the first place.

I like the idea of lightweight labeling for the LAPACK matrix formats.
