CHOLMOD default ordering options: METIS vs AMD #548

Closed
@gerw

Description

Consider the following snippet:

using SparseArrays
using LinearAlgebra

function setup(n)
	e = ones(n)
	e2 = -ones(n-1)
	K = spdiagm(n, n, 0 => 2.0 * e, 1 => e2, -1 => e2)
	Id = sparse(1.0*I, n, n)
	return kron(K, Id) + kron(Id, K)
end

function solve(n)
	A = setup(n)
	F = ones(n^2)
	@time A \ F
end

It just builds a discretization of a PDE (the 2D Laplacian) and solves it. Now let us execute:

solve(1156);
solve(1157);

Using Julia 1.9.3 I get:

  2.095628 seconds (66 allocations: 1.339 GiB, 0.08% gc time)
  2.134111 seconds (66 allocations: 1.339 GiB, 0.08% gc time)

and all is fine. With Julia 1.10.4 instead I get:

  1.966967 seconds (69 allocations: 1.339 GiB, 0.17% gc time)
  7.775528 seconds (1.99 M allocations: 9.856 GiB, 2.97% gc time)

Thus, the second solve, on an only slightly larger system, is suddenly much slower and allocates far more memory. Profiling shows that cholmod_l_analyze runs much longer. The same happens with the release candidate of 1.11.

Digging a little deeper with

A = setup(1157);
B = SparseArrays.CHOLMOD.Sparse(A);
@time F = SparseArrays.CHOLMOD.symbolic(B, perm=nothing);
@time cholesky!(F, A; shift = 1., check = true);

and watching the output of top, one can see that symbolic runs on only a single core and allocates memory; cholesky! seems fine (and appears to run in parallel).
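One way to confirm that the time is spent choosing a fill-reducing ordering (rather than in the symbolic analysis proper) is to bypass CHOLMOD's ordering entirely by passing an explicit permutation to cholesky, which is a documented keyword of SparseArrays. A minimal sketch, using the identity permutation (this disables fill reduction, so the factor has more fill-in, but the analyze phase no longer picks between AMD and METIS):

```julia
using SparseArrays
using LinearAlgebra

# Small model problem, same construction as in the snippet above.
n = 100
e = ones(n)
e2 = -ones(n - 1)
K = spdiagm(n, n, 0 => 2.0 * e, 1 => e2, -1 => e2)
Id = sparse(1.0 * I, n, n)
A = kron(K, Id) + kron(Id, K)

# Supplying `perm` makes CHOLMOD use the given ordering and skip
# its own AMD/METIS selection during the analyze phase.
F = cholesky(A; perm = collect(1:size(A, 1)))
b = ones(n^2)
x = F \ b
println(norm(A * x - b))
```

If the slowdown disappears with an explicit perm, the regression is isolated to the ordering step.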

I also tried to fiddle around with

SparseArrays.CHOLMOD.getcommon()[].nthreads_max

(which is 1 after startup), but changing it does not seem to have any effect.
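As a possible workaround until the default ordering heuristic is sorted out, one could compute an AMD ordering outside CHOLMOD and hand it to cholesky via the perm keyword. This sketch assumes the AMD.jl package and its amd(A) function, which returns a fill-reducing permutation for a sparse matrix; that package and API are an assumption on my part, not something verified in this issue:

```julia
using SparseArrays
using LinearAlgebra
using AMD  # assumed: AMD.jl provides `amd(A)` returning a permutation vector

n = 1157
e = ones(n)
e2 = -ones(n - 1)
K = spdiagm(n, n, 0 => 2.0 * e, 1 => e2, -1 => e2)
Id = sparse(1.0 * I, n, n)
A = kron(K, Id) + kron(Id, K)

p = amd(A)                   # AMD ordering computed up front
F = cholesky(A; perm = p)    # analyze phase should no longer call METIS
x = F \ ones(n^2)
```

This keeps the factorization quality of an AMD ordering while sidestepping whatever makes the default analyze phase slow here.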
