rank for sparse matrices #401

Closed
cossio opened this issue Feb 2, 2017 · 20 comments
Labels
sparse Sparse arrays

Comments

@cossio
Contributor

cossio commented Feb 2, 2017

mat = sparse([1 0 0; 0 1 0; 0 0 1])
rank(mat)

throws an error:

ERROR: MethodError: no method matching svdvals!(::SparseMatrixCSC{Float64,Int64})

Can we have rank for sparse matrices?

@andreasnoack
Member

This is not so simple. Do you know a way to do this without converting the sparse matrix to a dense matrix? The way we compute the numerical rank in the dense case is to compute all singular values and count how many are above some threshold. Computing all the singular values is not really feasible for sparse matrices. (Our main competitor gives a similar error message).
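
For context, here is a minimal sketch of what the dense path does (written against the modern LinearAlgebra/SparseArrays stdlibs; the function name and the exact tolerance formula are just illustrative, not the ones Base uses):

using LinearAlgebra, SparseArrays

# Illustrative dense numerical-rank computation: compute all singular values
# and count how many exceed a threshold relative to the largest one.
function dense_rank_sketch(A::AbstractMatrix)
    s = svdvals(float(Matrix(A)))   # densifying here is exactly what is infeasible for large sparse A
    isempty(s) && return 0
    tol = minimum(size(A)) * eps(eltype(s)) * s[1]   # illustrative tolerance
    count(x -> x > tol, s)
end

dense_rank_sketch(sparse([1 0 0; 0 1 0; 0 0 1]))   # returns 3, but only by converting to dense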

@StephenVavasis
Contributor

StephenVavasis commented Feb 2, 2017

The SPQR library (part of SuiteSparse, which ships with Julia) provides a factorization called "squeezed QR" that can estimate the rank of a sparse matrix. According to documentation by its authors (Davis et al.), it is not as reliable as a traditional rank-revealing QR factorization or the SVD, but they found that it gives correct results in many cases. The squeezed QR factorization is accessible from Julia but is not conveniently wrapped. Here is some code I wrote to access it. In particular, call the routine qrfact_get_r_colperm; the estimated rank is the number of rows of the resulting R.

# By Steve Vavasis 2016
# MIT License:
#> Permission is hereby granted, free of charge, to any person obtaining
#> a copy of this software and associated documentation files (the
#> "Software"), to deal in the Software without restriction, including
#> without limitation the rights to use, copy, modify, merge, publish,
#> distribute, sublicense, and/or sell copies of the Software, and to
#> permit persons to whom the Software is furnished to do so, subject to
#> the following conditions:
#>
#> The above copyright notice and this permission notice shall be
#> included in all copies or substantial portions of the Software.
#>
#> THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
#> EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
#> MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
#> NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
#> LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
#> OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
#> WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


module MySparseExtensions


import Base.sparse


if VERSION < v"0.5"
    import Base.SparseMatrix.CHOLMOD.FactorComponent
    import Base.SparseMatrix.CHOLMOD.Factor
    import Base.SparseMatrix.CHOLMOD.CHOLMODException
    import Base.SparseMatrix.CHOLMOD.common
    import Base.SparseMatrix.CHOLMOD.C_Sparse
    import Base.SparseMatrix.CHOLMOD.Sparse
    import Base.SparseMatrix.CHOLMOD.free_sparse!
    import Base.SparseMatrix.CHOLMOD.increment
    import Base.SparseMatrix.CHOLMOD.SuiteSparse_long
    import Base.SparseMatrix.CHOLMOD.defaults
    import Base.SparseMatrix.CHOLMOD.fact_
    import Base.SparseMatrix.CHOLMOD.set_print_level
    import Base.SparseMatrix.CHOLMOD.common_final_ll
    import Base.SparseMatrix.SPQR.ORDERING_DEFAULT
    import Base.SparseMatrix.CHOLMOD.get_perm
else
    import Base.SparseArrays.CHOLMOD.FactorComponent
    import Base.SparseArrays.CHOLMOD.Factor
    import Base.SparseArrays.CHOLMOD.CHOLMODException
    import Base.SparseArrays.CHOLMOD.common
    import Base.SparseArrays.CHOLMOD.C_Sparse
    import Base.SparseArrays.CHOLMOD.Sparse
    import Base.SparseArrays.CHOLMOD.free_sparse!
    import Base.SparseArrays.CHOLMOD.increment
    import Base.SparseArrays.CHOLMOD.SuiteSparse_long
    import Base.SparseArrays.CHOLMOD.defaults
    import Base.SparseArrays.CHOLMOD.fact_
    import Base.SparseArrays.CHOLMOD.set_print_level
    import Base.SparseArrays.CHOLMOD.common_final_ll
    import Base.SparseArrays.SPQR.ORDERING_DEFAULT
    import Base.SparseArrays.CHOLMOD.get_perm
end


## Retrieve PtL factor from sparse Cholesky factorization
## See example below for usage

function sparse{Tv}(FC::FactorComponent{Tv,:PtL})
    F = Factor(FC)
    s = unsafe_load(F.p)
    s.is_ll != 0 || throw(CHOLMODException("sparse: supported for :PtL only on LLt factorizations"))
    s.is_super == 0 || throw(CHOLMODException("sparse: cannot invoke sparse() on supernodal factor; use change_factor! first"))
    s.n == s.minor || throw(CHOLMODException("sparse: cannot invoke sparse() on incomplete factor"))
    nnz = s.nzmax
    is = zeros(Int, nnz)
    js = zeros(Int, nnz)
    for oldcolnum = 1 : s.n
        newcolnum = unsafe_load(s.Perm, oldcolnum) + 1
        estart = unsafe_load(s.p, oldcolnum) + 1
        eend = unsafe_load(s.p, oldcolnum + 1)
        for pos = estart : eend
            js[pos] = newcolnum
            oldrownum = unsafe_load(s.i, pos) + 1
            newrownum = unsafe_load(s.Perm, oldrownum) + 1
            is[pos] = newrownum
        end
    end
    sparse(is, js, pointer_to_array(s.x, nnz, false), s.n, s.n)
end

## Retrieve the squeezed R factor and column permutation from sparse QR.
## See below for usage.

function qrfact_get_r_colperm(A::SparseMatrixCSC{Float64, Int}, 
                              tol::Float64, 
                              ordering = ORDERING_DEFAULT)
    Aw = Sparse(A,0)
    s = unsafe_load(Aw.p)
    if s.stype != 0
        throw(ArgumentError("stype must be zero"))
    end
    ptr_R = Ref{Ptr{C_Sparse{Float64}}}()
    ptr_E = Ref{Ptr{SuiteSparse_long}}()
    cc = common()
    rk = ccall((:SuiteSparseQR_C, :libspqr), Clong,
               (Cint, #ordering
                Cdouble, #tol
                Clong, #econ
                Cint, #getCTX
                Ptr{C_Sparse{Float64}},  # A
                Ptr{Void}, #Bsparse
                Ptr{Void}, #Bdense
                Ptr{Void}, #Zsparse
                Ptr{Void}, #Zdense
                Ptr{Void}, #R
                Ptr{Void}, #E
                Ptr{Void}, #H
                Ptr{Void}, #HPInv
                Ptr{Void}, #HTau
                Ptr{Void}), #cc
               ordering, #ordering
               tol, #tol
               1000000000, #econ
               0, #getCTX
               get(Aw.p),  # A
               C_NULL, #Bsparse
               C_NULL, #Bdense
               C_NULL, #Zsparse
               C_NULL, #Zdense
               ptr_R, #R
               ptr_E, #E
               C_NULL, #H
               C_NULL, #HPInv
               C_NULL, #HTau
               cc) #cc
    R = ptr_R[]
    E = ptr_E[]
    if E != C_NULL
        colprm = pointer_to_array(E, size(A,2), false) + 1
    else
        colprm = collect(1:size(A,2))
    end
    R1 = unsafe_load(R)
    if R1.stype != 0
        throw(ArgumentError("matrix has stype != 0. Convert to matrix with stype == 0 before converting to SparseMatrixCSC"))
    end
    maxrownum = 0
    for entryind = 1 : R1.nzmax
        maxrownum = max(maxrownum, unsafe_load(R1.i, entryind) + 1)
    end
    R_cvt = SparseMatrixCSC(maxrownum,
                            R1.ncol, 
                            increment(pointer_to_array(R1.p, (R1.ncol + 1,), false)), 
                            increment(pointer_to_array(R1.i, (R1.nzmax,), false)), 
                            copy(pointer_to_array(R1.x, (R1.nzmax,), false)))
    free_sparse!(R)
    ccall((:cholmod_l_free, :libcholmod), Ptr{Void}, (Csize_t, Csize_t, Ptr{Void}, Ptr{Void}),
          size(A,2), sizeof(SuiteSparse_long), E, cc)
    R_cvt, colprm
end

## Obtain right null vector from squeezed R factor

function rightnullvec{Tv}(R::SparseMatrixCSC{Tv,Int}, 
                          colperm::Array{Int,1})
    m = size(R,1)
    n = size(R,2)
    if m >= n
        error("Right null vector cannot be computed if m>=n")
    end
    # find the first squeezed row
    squeezepos = m + 1
    for i = 1 : m
        if R[i,i] == 0.0
            squeezepos = i
            break
        end
    end
    nullvec = zeros(n)
    nullvec[squeezepos] = 1.0
    b = full(R[:,squeezepos])
    if norm(b[squeezepos:end]) > 0.0
        error("R must be upper triangular")
    end
    # solve for the column of the squeezed row
    # in terms of preceding columns using back
    # substitution.  For efficiency, work directly
    # on the fields of the sparse matrix R.
    for j = squeezepos - 1 : -1 : 1
        startp = R.colptr[j]
        endp = R.colptr[j+1] - 1
        if R.rowval[endp] != j
            error("R must be upper triangular")
        end
        coef = b[j] / R.nzval[endp]
        for pos = startp : endp
            i = R.rowval[pos]
            b[i] -= coef * R.nzval[pos]
        end
        nullvec[j] = -coef
    end
    nullvec_permute = zeros(n)
    nullvec_permute[colperm] = nullvec
    nullvec_permute
end




function cholfactLPs(A::SparseMatrixCSC{Float64, Int}; kws...)
    cm = defaults(common()) # setting the common struct to default values. Should only be done when creating new factorization.
    set_print_level(cm, 0) # no printing from CHOLMOD by default

    # Makes it an LLt
    unsafe_store!(common_final_ll, 1)
    F = fact_(Sparse(A), cm; kws...)
    s = unsafe_load(get(F.p))
    s.minor < size(A, 1) && return spzeros(0,0), Int[], false
    return sparse(F[:L]), get_perm(F), true
end

function forwsub_!(Lis, Ljs, Les, rhs)
    n = length(rhs)
    nnz = length(Lis)
    @assert minimum(Lis - Ljs) == 0
    pos = 1
    for j = 1 : n
        @assert Lis[pos] == j && Ljs[pos] == j
        rhs[j] /= Les[pos]
        pos += 1
        while pos <= nnz && Ljs[pos] == j
            rhs[Lis[pos]] -= rhs[j] * Les[pos]
            pos += 1
        end
    end
    nothing
end


function forwsub(L, rhs)
    x = rhs[:]
    is, js, es = findnz(L)
    forwsub_!(is, js, es, x)
    x
end
    

function backwsub_!(Lis, Ljs, Les, rhs)
    n = length(rhs)
    nnz = length(Lis)
    @assert minimum(Lis - Ljs) == 0
    pos = nnz
    for j = n : -1 : 1
        t = rhs[j]
        while pos > 0 && Ljs[pos] == j && Lis[pos] > j
            t -= Les[pos] * rhs[Lis[pos]]
            pos -= 1
        end
        @assert Lis[pos] == j && Ljs[pos] == j
        rhs[j] = t / Les[pos]
        pos -= 1
    end
    nothing
end
        

function cholsolve(L, prm, rhs)
    n = size(L,1)
    is, js, es = findnz(L)
    r2 = rhs[prm]
    forwsub_!(is, js, es, r2)
    backwsub_!(is, js, es, r2)
    sol = zeros(n)
    sol[prm] = r2
    sol
end
        
            
        




function cholfactPtL(A)
    n = size(A,1)
    F = cholfact((A + A') / 2)
    L0 = sparse(F[:L])
    is, js, es = findnz(L0)
    p = get_perm(F)
    sparse(p[is], p[js], es, n, n)
end



    

function test()
    A = [2.0  1.0  1.0 0.0
         1.0  1.0  0.5 0.0
         1.0  0.5  1.0 0.2
         0.0  0.0  0.2 1.5]
    As = sparse(A)
    F = cholfact(As)
    PtL = sparse(F[:PtL])
    println("norm of difference A - PtL * PtL' = ", norm(As - PtL * PtL',1), " (should be close to 0)")

    L, prm, success = cholfactLPs(As)
    @assert success
    println("prm = ", prm)
    println("L = ", L)
    b = [4.3,-2.2,8.8,7.1]
    x = cholsolve(L, prm, b)
    println("x = ", x, " norm(A*x - b) = ", norm(A * x - b), " (should be close to 0)")
    Ainit = [0.0   6.0  -3.1
             6.0   0.0   2.2
             6.3   0.0   0.0
             2.2   0.0   0.0]
    A = hcat(Ainit, Ainit * [2.0,2.1,2.2], Ainit * [3.0,3.1,3.2])
    As = sparse(A)
    R, colprm = qrfact_get_r_colperm(As, 1.0e-14)
    @assert size(R,1) == 3 && size(R,2) == 5
    v = rightnullvec(R,colprm)
    println("residual norm of right null vec = ", norm(As * v) / norm(v), " (should be close to 0)")
    nothing
end       

end
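
To get the rank estimate itself, usage would look something like this (a sketch in the same Julia 0.5/0.6-era syntax as the module above; the matrix here is just an arbitrary example):

# Usage sketch: the estimated rank is the number of rows of the squeezed R.
A = sprand(1000, 800, 0.01)    # any SparseMatrixCSC{Float64,Int}
R, colperm = MySparseExtensions.qrfact_get_r_colperm(A, 1.0e-14)
est_rank = size(R, 1)          # number of rows of R = estimated rank of A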

@andreasnoack
Member

Thanks. Looks like a reasonable solution in the sparse case. Is the discussion of the reliability of the procedure in the SPQR docs?

@StephenVavasis
Contributor

StephenVavasis commented Feb 3, 2017

Here is a quote from the paper spqr.pdf (Davis), which is included in the SuiteSparse download (tar.gz file). Copy-and-paste from the PDF garbled the exponent; it is written as 10^3 below.

[Foster 2009] reports that SuiteSparseQR correctly finds the rank of A for about
two-thirds of the rank-deficient matrices in the Univ. of Florida Sparse Matrix
Collection. However, for many of those matrices, the numerical rank is ill-defined.
If the numerical rank is very well-defined, then SuiteSparseQR frequently finds the
correct rank. In particular, if the rank of A is k, and σ_k/σ_{k+1} > 10^3 where σ_k is
the kth largest singular value of A, the correct rank is found 95% of the time.
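
For small problems, that gap criterion is easy to check directly; here is a tiny sketch (modern LinearAlgebra syntax, the helper name is made up, and it uses svdvals on a densified copy, so it is only for sanity checks, not for the large sparse case):

using LinearAlgebra

# Check whether the numerical rank k is "well-defined" in the sense above,
# i.e. whether sigma_k / sigma_{k+1} exceeds a given gap (10^3 in the quote).
function rank_gap_is_large(A::AbstractMatrix, k::Integer; gap = 1e3)
    s = svdvals(float(Matrix(A)))
    k < length(s) || return true    # nothing below sigma_k to compare against
    return s[k] / s[k+1] > gap
end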

@protogeezer

Stephen's code seems to work quite nicely. Scouting about to see if there was an alternative that was more deterministic, I found that Foster and Davis published a 2013 paper titled "Algorithm 933: Reliable Calculation of Numerical Rank, Null Space Bases, Pseudoinverse Solutions, and Basic Solutions using SuiteSparseQR" (ACM Transactions). The algorithms described in the paper are implemented in Matlab and included in the Matlab_Tools directory of the SuiteSparse source distribution. Specifically, there is a subdirectory called spqr_rank, which is described as follows:

"The routines in SPQR_RANK estimate upper and lower bounds for singular values of A and use these estimates to warn when the calculated numerical rank may be incorrect. Section 3.5 demonstrates that either the rank is accurately determined or an accurate warning is returned (with one exception)."

I hope this is helpful.

@StefanKarpinski
Member

The usual caveats about copyright apply: don't look at the Foster & Davis Matlab code if you plan on writing an implementation of any of these algorithms for Julia unless it has an MIT-compatible license (this is unlikely given that it's part of SuiteSparse, which is one of the GPL libraries we use).

@protogeezer

Fair enough. The sample code Stephen Vavasis supplied is probably sufficient for my needs, so I have little motivation to spend more time on a sparse matrix rank algorithm. The Matlab code seems not to be GPL'd. Regarding rank, the paper focuses on one of the (short) routines in the directory. Keeping in mind that I'm not a numerical expert in the least, there may be an implementation of a similar scheme in the IterativeSolvers package (svdl). For the 30K x 30K matrices I am processing it seemed about as fast as Stephen's scheme, so if that routine were better debugged and gave correct answers it would solve the issue at hand, assuming Algorithm 933 isn't the current state of the art.

@StefanKarpinski
Member

StefanKarpinski commented Feb 16, 2017

Writing an implementation based on their algorithm description (including pseudo-code) is ok.

@protogeezer

Ok - I'll give it a shot when I have time. Thanks for the advice. I'm hoping to get more involved in Julia as time permits. I have a ways to go to understand open source projects.

@StefanKarpinski
Member

Looking forward to it! The legal stuff is a nuisance, but unfortunately one that we have to live with.

@protogeezer

@StefanKarpinski - I'll be heading to JuliaCon in a few days. I know you'll be busy, but if the occasion arises, it would be great to at least introduce myself.

As stated above, I want to become active in the Julia effort. The project I want to pursue is to bring VTK-based 3D plots to Gadfly, along with various other enhancements - incorporating native OTF fonts, equation labels and the like. Another way to think of the proposal is to undertake an exploratory project which would examine whether VTK could be the visualization infrastructure for Julia.

I'm pretty convinced this needs to be approached on a platform-by-platform basis to achieve first-class performance and rendering quality. I'm also pretty sure it can be implemented in a more robust and complete way if changes can be made to the runtime. For example, where and how should I seek guidance on building consensus for making the REPL aware of window-manager events?

I won't burden you with a long explanation of the scope and sequence of steps I believe are involved...at least for now.

I hope to see you at the conference.

@StefanKarpinski
Member

@protogeezer, it'll be great to meet in person. I probably won't be much use on this particular issue, but @andreasnoack will be there and he may be of more use :)

@chriscoey

@StephenVavasis I would like to use the squeezed QR factorization for sparse matrices. Can your code from this issue (#401) go into Base?

@StephenVavasis
Contributor

Sparse matrix computations are no longer in Base; they are in the stdlib, and lesser-used sparse routines like eigs have been moved all the way out to packages. So most likely the squeezed QR factorization would go into a package rather than the stdlib.

The internal Julia structures for both ccall and sparse-matrix wrappers have changed quite a bit between 0.6 and 0.7/1.0, so there is a fair amount of work to recode the squeezed-QR wrapper for 1.0. I will try to get to it at some point, but if you are in a big hurry, you might find it faster to recode this yourself.

@chriscoey

So most likely squeezed QR factorization would go into a package rather than stdlib.

Do you know of a package where it would make sense to put this?

@StephenVavasis
Contributor

Not really... but it would be nice to have all the capabilities of SuiteSparse wrapped in a user-friendly package.

@andreasnoack
Member

We now have rank for sparse QR, so we could define rank(A::SparseMatrixCSC) = rank(qr(A))
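
A minimal sketch of what that would look like on Julia 1.x (the method definition shown here would belong in the stdlib rather than user code, and SPQR's rank is still only an estimate based on its drop tolerance):

using LinearAlgebra, SparseArrays

# Sketch of the proposed definition: qr on a SparseMatrixCSC dispatches to
# SPQR, and rank of the resulting factorization reads off its rank estimate.
LinearAlgebra.rank(A::SparseMatrixCSC) = rank(qr(A))

rank(sparse([1.0 0 0; 0 1 0; 0 0 1]))   # == 3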

@aashidham

@StephenVavasis what is the runtime of this code, say for an n x n sparse matrix with k nonzero entries? Just a ballpark figure.

@StephenVavasis
Contributor

The number of ops is highly data-dependent. For example, this code would require O(k) operations for a sparse rank-one matrix (where k is the number of nonzero entries). On the other hand, it would require O(n^3) operations for a rank-n matrix that fills in during QR factorization.

@stevengj
Member

stevengj commented Nov 4, 2019

The usual caveats about copyright apply: don't look at the Foster & Davis Matlab code if you plan on writing an implementation of any of these algorithms for Julia unless it has an MIT-compatible license (this is unlikely given that it's part of SuiteSparse, which is one of the GPL libraries we use).

I just came across this old issue via a discussion on Discourse, and I wanted to note that the spqr_rank part of SuiteSparse is, in fact, 3-clause BSD according to its LICENSE.txt.

So it seems like it would be okay to look at that Matlab code, in addition to the pseudocode in Davis's paper, when porting these algorithms to Julia.

@KristofferC KristofferC transferred this issue from JuliaLang/julia Nov 26, 2024