
Dense Schur complement? #126

Closed
mipals opened this issue Feb 4, 2024 · 9 comments

@mipals

mipals commented Feb 4, 2024

Hi there,

I am trying to calculate the product $B^{-1}A$ using the Schur complement (both matrices are sparse). I have not been able to find any example, but I got something to work after copying some code directly from `mumps_schur_complement(A::AbstractArray, x)` (I could not use it directly, as the line `mumps = Mumps(A)` seems to throw errors).

```julia
N = size(B, 1)
A_mumps = [sparse(B) A; -I spzeros(N, N)]
schur_inds = sparse([zeros(Int64, N); ones(Int64, N)])  # indices of the A_{2,2} block?
MPI.Init()
mumps = Mumps{Float64}(mumps_unsymmetric, default_icntl, default_cntl64)
associate_matrix!(mumps, A_mumps)
# BEGIN COPIED LINES
MUMPS.suppress_display!(mumps)
MUMPS.set_icntl!(mumps, 8, 0)  # turn scaling off; not used with Schur anyway (suppresses a warning message)
MUMPS.mumps_schur_complement!(mumps, schur_inds)
S = MUMPS.get_schur_complement(mumps)
MUMPS.finalize!(mumps)
# END COPIED LINES
MPI.Finalize()
```

My issue is that the computed Schur complement `S` seems to be returned as a dense matrix, even though it is extremely sparse. Is there some input variable that I am missing, or does it simply always return a dense matrix?

Cheers,
Mikkel

@guiburon
Contributor

guiburon commented Feb 4, 2024

Hi,

The sparsity of the Schur complement is very dependent on the structure of the blocks used to compute it, even when the blocks themselves are sparse. Some time ago I looked into sparse Schur complement computation, and all the solvers I checked allocate a dense matrix for the result, because in general you cannot guarantee that the result will be sparse. From the MUMPS documentation, I seem to remember that a dense Schur complement is always returned, but it is worth verifying there.

Guillaume
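A tiny illustration of this fill-in (hypothetical sizes): eliminating even a single fully coupled unknown turns a sparse diagonal block into a completely dense Schur complement.

```julia
using LinearAlgebra, SparseArrays

n = 5
a = 2.0                          # 1×1 pivot block A₁₁
D = spdiagm(0 => fill(3.0, n))   # sparse diagonal block A₂₂
c = ones(n)                      # dense coupling column, A₂₁ = A₁₂'
S = Matrix(D) - c * c' / a       # Schur complement S = A₂₂ - A₂₁ A₁₁⁻¹ A₁₂
count(!iszero, S) == n^2         # true: every entry of S is nonzero
```

Here `D` has only `n` stored entries, yet `S` is a rank-one update of a diagonal and has no zero entries at all.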

@amontoison
Member

I checked the documentation tonight (https://mumps-solver.org/doc/userguide_5.6.2.pdf) and it seems that the returned Schur complement is always dense (section 5.15).

@mipals
Author

mipals commented Feb 5, 2024

Ahh, fair enough. For some reason I thought the whole point of the Schur complement trick was to get a sparse result. But I guess the unfortunate thing with sparse solves is that one cannot guarantee sparsity 😢 (what is the application of the Schur trick then? Small sparse matrices?)

In my application $B$ is block-diagonal, so it is possible to form $B^{-1}$ explicitly by inverting each block and then simply computing the product of the two sparse matrices, i.e. $B^{-1}A$. However, I am not sure about the numerical stability of the explicit inversions. Do you have any other ideas?

@dpo
Member

dpo commented Feb 5, 2024

Instead of inverting each block of $B$, simply compute a factorization of each one. You don't say whether the blocks are dense or sparse; use the appropriate factorization in each case. That is what would happen anyway if you computed the inverse, but with the added cost of performing a solve for each column of the inverse (which you don't need). Once you have the factorizations, it's a simple matter of combining them to obtain $B^{-1} A$.
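A sketch of this suggestion, assuming (hypothetically) that $B$ has `nblocks` dense diagonal blocks of size `n × n`: factorize each block once, then apply the factorizations block-row by block-row. The result is stored dense, since its sparsity cannot be guaranteed anyway.

```julia
using LinearAlgebra, SparseArrays

n, nblocks = 4, 3
blocks = [rand(n, n) + n * I for _ in 1:nblocks]  # well-conditioned dense blocks
A = sprand(n * nblocks, 5, 0.3)

facts = [lu(B_i) for B_i in blocks]               # one factorization per block
X = Matrix{Float64}(undef, n * nblocks, 5)        # will hold B⁻¹A (dense in general)
for (i, F) in enumerate(facts)
    rows = (i - 1) * n + 1 : i * n
    X[rows, :] = F \ Matrix(A[rows, :])           # solve, never invert
end
```

For diagonal blocks one would of course skip `lu` and divide elementwise; the point is that each block is factorized once and reused for every column of its row block of $A$.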

@mipals
Author

mipals commented Feb 5, 2024

For the time being, the blocks are either diagonal or dense (but with structure such that the inverse is diagonal plus low-rank).

That being said, I have tried the factorization approach for general blocks, which simply reduces to performing lots of `B_i \ A_i` solves ($B_i$ is the square block, and $A_i$ is rectangular and sparse). However, here I again have the issue that even when $A_i$ is sparse, Julia returns the solution `B_i \ A_i` as a dense matrix. In hindsight this makes sense, as one cannot, in general, guarantee that `B_i \ A_i` is sparse.

@amontoison
Member

BasicLU.jl handles sparse right-hand sides.
You could use it to factorize each `B_i` and call `solve!` on each column of `A_i`.

@mipals
Author

mipals commented Feb 6, 2024

I only briefly looked into BasicLU.jl. It seemed to me that the sparse right-hand sides could only be vectors, not matrices. I will take another look. Thanks for the input!

@amontoison
Member

Yes, it only supports sparse vectors, but you can call it on each column of `A_i`, which is easy when the matrix is stored in CSC format.
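A sketch of the column-by-column idea: the stored entries of each column of a `SparseMatrixCSC` are directly addressable via `nzrange`, so a solver that only accepts sparse vectors can be fed one column of `A_i` at a time. The triplet collection below stands in for the per-column solve.

```julia
using SparseArrays

A = sprand(6, 4, 0.4)
rv, nz = rowvals(A), nonzeros(A)
is, js, vs = Int[], Int[], Float64[]
for j in 1:size(A, 2)
    for k in nzrange(A, j)       # stored entries of column j, in order
        push!(is, rv[k]); push!(js, j); push!(vs, nz[k])
        # here: hand column j's (row, value) pairs to the sparse-rhs solver
    end
end
A2 = sparse(is, js, vs, 6, 4)    # the collected triplets round-trip A exactly
```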

@mipals
Author

mipals commented Feb 6, 2024

On Slack I was told about the SparseSparse.jl package. It seems to do the job if I ignore the block structure, but I was already ignoring it when trying to use MUMPS.

Anyhow, thanks for the many great inputs!

@mipals mipals closed this as completed Feb 6, 2024