Merging with IterativeSolvers.jl #1

Open
jpfairbanks opened this issue Mar 18, 2015 · 10 comments

@jpfairbanks

Where do these implementations win over the IterativeSolvers versions, and can we merge them into a single common interface? Right now IterativeSolvers.jl also supports eigs and svds via the Lanczos method. If the only problem is allocation cost, then we should be able to take the best from each implementation. If there is a cost to abstraction or interface design, then we should figure out why that is.

@lruthotto
Collaborator

Yes, merging the two packages would be very nice. This code is MIT-licensed, so it can be freely used. Currently, the cg code here is about a factor of 1.5 faster than the one in the IterativeSolvers package. There is a benchmark provided here if you want to give it a try.

When merging the versions, I would advise you to keep the level of abstraction rather low. My impression is that many in the numerical linear algebra community (the people who invent great iterative methods) are used to MATLAB. I think most of them will be unlikely to look at the code or contribute their own methods if they have to spend too much time understanding the type structure in IterativeSolvers. Here, we don't use types at all but focus on fast and reliable code. I have seen a very nice implementation by @dpo that is in between the abstraction here (none) and the one in IterativeSolvers.

@jpfairbanks
Author

So you think the cost is mostly in the human domain: we need code that people trained in MATLAB will be able to understand and contribute to. Yes, that is a real issue in the Julia community right now: striking a balance between being familiar to MATLAB/R/NumPy users and doing things the "right" way.

Which repo of @dpo are you referring to? It looks like his LinearOperators package would be generally useful for people writing all kinds of linear algebra algorithms. A good LinearOperator abstraction that worked well with deferred evaluation and had good performance could really give us an unparalleled combination of ease of use, speed, and memory usage. A similar phenomenon is happening with the JuMP project.

@lruthotto
Collaborator

I don't think @dpo's package is publicly available at this time. He showed me his package during SIAM CSE, and I saw that his CG using the LinearOperators package performs much(!) better in terms of memory consumption and runtime than the one in the IterativeSolvers package, and similarly to mine. On top of that, his code looked very much like a textbook description of the CG method. To me, the right way to code a CG (which is simple, after all) is to have a reliable and fast implementation.
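For concreteness, here is a minimal sketch of what such a textbook-style CG can look like in plain Julia: no custom types, just preallocated vectors. This is an illustration written for this discussion, not the actual code from any of the packages mentioned:

```julia
using LinearAlgebra, SparseArrays

# Conjugate gradient in "textbook" form. A is any object supporting
# mul!(y, A, x) (sparse matrix, dense matrix, or operator); b is the
# right-hand side. All work vectors are allocated once, up front.
function cg(A, b; tol=1e-8, maxiter=length(b))
    x  = zeros(length(b))
    r  = copy(b)          # residual r = b - A*x with x = 0
    p  = copy(r)          # search direction
    Ap = similar(b)       # preallocated buffer for A*p
    rs = dot(r, r)
    for k in 1:maxiter
        mul!(Ap, A, p)                # Ap = A*p, no allocation
        alpha = rs / dot(p, Ap)
        @. x += alpha * p
        @. r -= alpha * Ap
        rs_new = dot(r, r)
        sqrt(rs_new) < tol && return x
        @. p = r + (rs_new / rs) * p
        rs = rs_new
    end
    return x
end
```

The point of the sketch is that the loop body reads line-for-line like the pseudocode in a numerical linear algebra text, while `mul!` and the broadcast updates keep it allocation-free inside the loop.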

@jpfairbanks
Author

I am happy when the Julia code looks like the textbook code and yet gets the performance expected of a highly optimized code. It would be nice to have textbook implementations of many methods in a repo; it would show off a great benefit of Julia. If an enterprising math department taught their Numerical Linear Algebra class with Julia instead of MATLAB, we would get many implementations right away. Having the code examples in Golub and Van Loan in MATLAB is really helpful for understanding the book, but it further ingrains the MATLAB way into people learning the methods.


@dpo

dpo commented Mar 23, 2015

I'll try to make my package (Krylov.jl) public later this week, after cleaning it up a bit. Here is a current benchmark comparing the three CG implementations on a sparse Laplacian problem on an N×N×N grid without a preconditioner (the initial guess and stopping test are the same in all cases):

N = 32
Krylov:           elapsed time: 0.060390877 seconds (792832 bytes allocated)
KrylovMethods:    elapsed time: 0.069162332 seconds (1331880 bytes allocated)
IterativeSolvers: elapsed time: 0.190692704 seconds (150268488 bytes allocated, 28.31% gc time)
N = 64
Krylov:           elapsed time: 1.078000347 seconds (6303552 bytes allocated)
KrylovMethods:    elapsed time: 1.129268325 seconds (10526072 bytes allocated)
IterativeSolvers: elapsed time: 2.547279369 seconds (2321661992 bytes allocated, 35.41% gc time)
N = 128
Krylov:           elapsed time: 19.292867227 seconds (50354432 bytes allocated)
KrylovMethods:    elapsed time: 19.03193401 seconds (83962472 bytes allocated)
IterativeSolvers: elapsed time: 35.431064895 seconds (35383366152 bytes allocated, 35.96% gc time)

The factor of 2 with IterativeSolvers is probably (at least partly) explained by better use of BLAS methods, but note also the lower memory consumption and gc time.

In all fairness, my package currently doesn't perform any preconditioning, so KrylovMethods and IterativeSolvers perform a few additional operations at each iteration.
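For reference, a test problem of the kind described above can be assembled along these lines. This is a sketch under my own assumptions (Dirichlet boundaries, standard 7-point stencil via Kronecker products); the actual benchmark script may build the operator differently:

```julia
using SparseArrays, LinearAlgebra

# 1D Laplacian with Dirichlet boundaries.
laplace1d(N) = spdiagm(-1 => fill(-1.0, N - 1),
                        0 => fill( 2.0, N),
                        1 => fill(-1.0, N - 1))

# 3D Laplacian on an N×N×N grid via the usual Kronecker-sum construction.
function laplace3d(N)
    L  = laplace1d(N)
    Id = sparse(1.0I, N, N)
    kron(L, kron(Id, Id)) + kron(Id, kron(L, Id)) + kron(Id, kron(Id, L))
end

A = laplace3d(8)          # use N = 32, 64, 128 to match the sizes above
b = ones(size(A, 1))
# @time x = cg(A, b)      # with whichever package's cg is in scope
```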

I'm also in favor of keeping the abstraction to a minimum, and I believe LinearOperators is the flexible way to go: it provides users with a transparent interface that allows them to supply either matrices or functions as arguments.

@dpo

dpo commented Mar 31, 2015

Have a look at https://github.com/optimizers/Krylov.jl/tree/develop for a first draft.

@jpfairbanks
Author

This is really great. I like the LinearOperators framework and how easy it is to wrap a linear operator together with the storage it needs to compute. It's a nice tradeoff between ease of use and memory efficiency.
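To illustrate the idea being praised here, this is a hypothetical, hand-rolled sketch of a function-backed operator (the type `FunOp` and its fields are my own invention for this example, not LinearOperators.jl's actual API): a matrix-free function plus a reusable output buffer behaves like a matrix under `mul!`.

```julia
using LinearAlgebra

# Minimal operator wrapper: a size and a function apply!(y, x) that
# writes the matrix-vector product into y. No matrix is ever stored.
struct FunOp
    n::Int
    apply!::Function
end

Base.size(op::FunOp) = (op.n, op.n)

function LinearAlgebra.mul!(y, op::FunOp, x)
    op.apply!(y, x)   # reuse the caller's buffer on every call
    return y
end

# Matrix-free 1D Laplacian stencil: y = tridiag(-1, 2, -1) * x.
function lap1d!(y, x)
    n = length(x)
    for i in 1:n
        y[i] = 2x[i] - (i > 1 ? x[i-1] : 0.0) - (i < n ? x[i+1] : 0.0)
    end
end

op = FunOp(5, lap1d!)
y = zeros(5)
mul!(y, op, ones(5))   # boundary entries are 1.0, interior entries 0.0
```

Any solver written against `mul!` (like a preallocated CG) then accepts matrices and function-backed operators interchangeably, which is exactly the transparency discussed above.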

@lruthotto
Collaborator

I agree, LinearOperators.jl is a great piece of work! Thanks, @dpo! I'm using it now in this package as well.

@dpo

dpo commented Apr 4, 2015

Thanks guys! I just pushed a few updates to the develop branch. It's now safe to use CRLS with preallocation, and I just added CGNE. Memory requirements are at an all-time low!

@jiahao

jiahao commented Apr 4, 2015

👍
