eigs takes too long to converge #29
Setting
(It would also probably converge faster if there were a way to exploit the fact that my matrix is symmetric positive definite. Related to #30.)
We should certainly give a better message; that is easy to fix. I don't think there is a way to exploit the SPD property in ARPACK, only the fact that the matrix is symmetric.
If you are doing inverse iteration, then you can use a Cholesky factorization for SPD matrices. A basic problem here is that
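To make the Cholesky suggestion concrete, here is a minimal sketch (not code from this thread) of inverse iteration for an SPD matrix that reuses a single Cholesky factorization for every solve:

```julia
using LinearAlgebra

# Inverse iteration for the smallest eigenpair of a symmetric positive definite
# matrix A: factor once, then solve against the factorization at every step.
function smallest_eigenpair(A; iters = 50)
    F = cholesky(A)                  # for sparse A this dispatches to CHOLMOD
    x = normalize(randn(size(A, 1)))
    for _ in 1:iters
        x = normalize(F \ x)         # one inverse-iteration step: solve A y = x
    end
    λ = dot(x, A * x)                # Rayleigh quotient estimate
    return λ, x
end
```

In exact arithmetic this converges to the eigenvector of the smallest eigenvalue, at a rate governed by the ratio of the two smallest eigenvalues.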
It is tempting to think of
By the way, not directly related to this issue, but as a consequence of its example: eigs seems pretty slow. With the laplacian example, eigs takes considerably longer than a pure Julia implementation.
I wonder what is happening. Can you post your implementation? What happens in Matlab or Octave? @vtjnash fixed some Fortran calling issues which could potentially explain this.
@stevengj We should be able to exploit the SPD property for free due to the use of the solve polyalgorithm, which may need some tweaking.
The memory usage is pretty insane too... 17GiB!
The implementation is here: https://gist.github.com/andreasnoackjensen/6963157. I only did it to learn how the Lanczos method works, so it is very simple. I have defined
so much of the reallocation is avoided, but the timing is still not good. I think I have made
so the difference is huge.
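For readers following along, here is a minimal sketch of the plain symmetric Lanczos recursion that such an implementation is built around. This is not the gist's code, and it has no reorthogonalization or restarting, so it is for illustration only:

```julia
using LinearAlgebra

# k steps of symmetric Lanczos: build a tridiagonal matrix T whose eigenvalues
# (Ritz values) approximate the extremal eigenvalues of the symmetric matrix A.
function lanczos_ritz(A, k)
    n = size(A, 1)
    αs = Float64[]; βs = Float64[]
    qprev = zeros(n)
    q = normalize(randn(n))
    β = 0.0
    for _ in 1:k
        w = A * q - β * qprev            # three-term recurrence
        α = dot(q, w); push!(αs, α)
        w -= α * q
        β = norm(w); push!(βs, β)        # assumes β stays away from zero
        qprev, q = q, w / β
    end
    return eigvals(SymTridiagonal(αs, βs[1:end-1]))
end
```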
Commit 579a005c995df57cd92f0dc90e6bda2b25158de6 gives us a better error message.
@JeffBezanson I have a suspicion that type inference is not working as well as it ought to here, which might be slowing things down.
Can you give me an example call that you think runs too slowly?
@stevengj's example above is one. There
To narrow it down a bit, should I focus on the sparse matrix-vector product?
OK, there aren't any type inference problems in the sparse matvec. I tried hoisting the accesses to
It would be good to check the number of iterations that eigs is taking to converge versus the version in Matlab or the Julia version by @andreasnoackjensen. If the increase in time is proportional to that, then the problem is probably not codegen but some screwup of the Arnoldi algorithm.
I wrote up this example in Matlab, and it seems that Matlab's eigs finishes in a handful of iterations (<10). There doesn't seem to be a way to get the number of iterations performed by eigs in Matlab.
@andreasnoackjensen Your code does not always seem to work with @stevengj's problem, but when it does work, it seems to come back in 4 iterations for Nx=Ny=100 and 11 iterations for Nx=Ny=400. So, clearly something's up with our ARPACK implementation.
Can you tell Matlab to run just one iteration at a time?
In Matlab, you can pass a function (that takes a vector and returns A*vector) instead of a matrix. Then just include a print statement in your function.
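The same trick works in Julia by wrapping the matrix in an operator whose action increments a counter. A sketch assuming the current Arpack.jl and LinearMaps.jl interfaces (both postdate this thread; the 1-D laplacian is just a stand-in for the example above):

```julia
using SparseArrays, LinearAlgebra, LinearMaps, Arpack

n = 100
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(2.0, n), 1 => fill(-1.0, n - 1))

nmatvec = Ref(0)    # counts matrix-vector products performed by eigs
op = LinearMap{Float64}(x -> (nmatvec[] += 1; A * x), n; issymmetric = true)

vals, _ = eigs(op; nev = 1, which = :SM, maxiter = 10_000)
println("smallest eigenvalue ≈ $(vals[1]) after $(nmatvec[]) matrix-vector products")
```

Arpack's eigs also reports a multiplication count (nmult) in its full return tuple, which should agree with the counter.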
While @alanedelman and I were exploring this, we took a look at the Golub-Kahan-Lanczos algorithm for the svd, which is about as simple as @andreasnoackjensen's Arnoldi implementation. It may be best for us to have our own implementations of these methods. This is @alanedelman's implementation of Golub-Kahan-Lanczos:

```julia
using LinearAlgebra

# A is the (m x n) matrix whose largest singular values we want to estimate.
let
    (m, n) = size(A)
    αs = Float64[]                      # diagonal of the bidiagonal factor
    βs = Float64[]                      # superdiagonal of the bidiagonal factor
    v = randn(n); v /= norm(v)          # random unit starting vector
    u = A * v
    for k in 1:100
        α = norm(u); push!(αs, α); u /= α
        v = A' * u - α * v
        β = norm(v); push!(βs, β); v /= β
        u = A * v - β * u
    end
    # Singular values of the bidiagonal matrix approximate those of A.
    println(round.(svdvals(Bidiagonal(αs, βs[1:end-1], :U))[1:5]', digits = 3))
end
```
Some notes on the above:

- This is the basic, no bells and whistles Golub-Kahan-Lanczos. We certainly should remove the
- The algorithm is mathematically the same as Householder reduction when the starting vector
- I think the key point here is that ARPACK has served well for many years, but if we have a
- The same would apply for Lanczos tridiagonalization, of course.
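For reference, the recurrence the sketch above implements, starting from a random unit vector $v_1$ and with $\beta_0 u_0 = 0$:

$$\alpha_k u_k = A v_k - \beta_{k-1} u_{k-1}, \qquad \beta_k v_{k+1} = A^{\mathsf T} u_k - \alpha_k v_k,$$

where $\alpha_k, \beta_k \ge 0$ are chosen so that $u_k$ and $v_{k+1}$ have unit norm. After $k$ steps this gives $A V_k = U_k B_k$ with $B_k$ upper bidiagonal (diagonal $\alpha_1,\dots,\alpha_k$, superdiagonal $\beta_1,\dots,\beta_{k-1}$), and the singular values of $B_k$ approximate the extremal singular values of $A$; that is what the final svdvals(Bidiagonal(...)) line computes.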
Perhaps we should start an Arnoldi.jl package, which can move into base when it becomes ready. Seems like we have enough momentum.
Maybe it should be called Lanczos.jl :-)
I would like to contribute to such a package.
+1
I just created this and you guys should all have commit access to it.
I think a more generic name would be better. Unless anyone has better references, I would recommend van der Vorst's Iterative Krylov Methods for Large Linear Systems, and the last couple of chapters in Trefethen and Bau.
That does seem like a much better generic name.
Done: https://github.com/JuliaLang/IterativeSolvers.jl. We could start off with @andreasnoackjensen's gist above, @alanedelman's code for GKL, and other iterative solvers that people have started putting together. Perhaps future discussion can happen over in the issues for IterativeSolvers.
Well, the Templates books (Linear Systems and Eigenproblems) are also decent references, although they don't include Sleijpen's BiCGSTAB(L) algorithm, which is one of the better ones for large sparse nonsymmetric systems (don't waste your time with any of the other BiCG variants; this subsumes them).
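For what it's worth, IterativeSolvers.jl did eventually grow a BiCGStab(l) solver. A minimal usage sketch, assuming its current bicgstabl function (which postdates this discussion; the test system here is just a stand-in):

```julia
using SparseArrays, LinearAlgebra, IterativeSolvers

# Nonsymmetric, diagonally dominant tridiagonal system as a stand-in example.
n = 1000
A = spdiagm(-1 => fill(-1.1, n - 1), 0 => fill(3.0, n), 1 => fill(-0.9, n - 1))
b = rand(n)

x = bicgstabl(A, b, 2)              # BiCGStab(l) with l = 2
println(norm(A * x - b) / norm(b))  # relative residual
```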
Apparently MATLAB's eigs uses a shift-and-invert transformation for this case, which is much better. Without the patch the matrix is always converted to a dense matrix in that call.
It seems like we should use shift-and-invert for this case too.
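For reference, a sketch of the difference being discussed, using the current Arpack.jl keyword interface (which postdates this thread) and a small 1-D laplacian as a stand-in for the 2-D example:

```julia
using SparseArrays, LinearAlgebra, Arpack

n = 100
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(2.0, n), 1 => fill(-1.0, n - 1))

# Plain smallest-magnitude mode: ARPACK iterates with A itself, which needs
# many more iterations as the grid is refined (the slowness reported here).
λ_sm, _ = eigs(A; nev = 3, which = :SM, maxiter = 10_000)

# Shift-and-invert with sigma = 0: each step solves a sparse system against a
# factorization of A, and the smallest eigenvalues converge quickly.
λ_si, _ = eigs(A; nev = 3, sigma = 0.0)

println(sort(λ_sm) ≈ sort(λ_si))
```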
@acroy Thanks for tracking this down. I kept thinking that we had some bug in our ARPACK interface, but this makes sense now.
Can someone submit a PR? Unfortunately I am quite taken up for a few weeks and don't want to do this in haste.
Do we imagine merging the PR for this for 0.3?
Yes. There is already an open PR, and it should be possible to merge it for 0.3.
Closed by JuliaLang/julia#6053, with continuing work in IterativeSolvers.jl.
The code below, which constructs a sparse discrete laplacian (or −laplacian, actually) inside a cylinder and evaluates the smallest eigenvalue using `eigs`, dies with an `ARPACKException` for me. Changing to `Nx=Ny=100` works fine. However, `Nx=Ny=400` is "only" a 100000x100000 (positive-definite real-symmetric) sparse matrix from a 2d grid (hence the sparse-direct solver should be efficient), and similar code works fine in Matlab. What does this error mean? cc: @ViralBShah
[ViralBShah: The title was "ARPACKException needs better error message", which is fixed.]