WIP:New design of linalg (factorizations) #2308
Conversation
For a QR decomposition, is there a way of multiplying by the full Q or Q' without forming the matrix Q? For example, one of the most stable ways of forming residuals is …. Of course these operations can be done effectively by direct calls to functions in ….
Right now I have defined the multiplication methods like this:

```julia
julia> A = randn(5,2);

julia> Aqr = QRDense(A);

julia> Aqr[:Q]*randn(2,2)
5x2 Float64 Array:
  0.476723   0.727353
 -0.320566   0.429844
 -0.761412  -0.148256
  0.144157   0.538303
  0.781046  -0.176017

julia> Aqr[:Q]*randn(5,2)
5x2 Float64 Array:
  0.727572   1.40746
 -0.260413   0.016806
  0.743548   0.0498523
  0.697383  -0.635056
 -0.439073  -1.394

julia> Aqr[:Q]*randn(4,2)
ERROR: DimensionMismatch("")
 in * at linalg_dense.jl:690
```
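The question about applying Q or Q' without materializing it maps onto LAPACK's `dormqr`. As a checkable illustration (not this branch's Julia API), here is a hypothetical sketch using SciPy's `qr_multiply` wrapper; the names `A` and `b` are made up, and the implicit product is verified against one computed from an explicitly formed thin Q:

```python
import numpy as np
from scipy.linalg import qr, qr_multiply

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))
b = rng.standard_normal(5)

# Q'*b without materializing Q: qr_multiply with mode='right' returns
# dot(c, Q), so for real A, (b^T Q)^T equals Q^T b.
qtb_implicit, R = qr_multiply(A, b.reshape(1, -1), mode='right')
qtb_implicit = qtb_implicit.ravel()

# Reference result: form the thin Q explicitly and multiply.
Q, _ = qr(A, mode='economic')
qtb_explicit = Q.T @ b

assert np.allclose(qtb_implicit, qtb_explicit)
```

Both paths call the same underlying `geqrf` factorization, so the Householder reflectors (and hence Q) agree.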
Thanks for the info and good job on this.

Nice, I like it. A few comments: …

Looks pretty good. Though why should we have to type …?

+1

@nolta I have changed ….

Thanks @andreasnoackjensen.

I really like the new ….
Please try this out and give some feedback about how it feels. It follows @ViralBShah's proposal of removing the MATLAB way of returning factorizations, but I hope it is more acceptable now that you can write ….

I have changed the QR to use a new and faster subroutine from LAPACK, but the Julia user should see no changes except for the speed gain. I have also added types and methods for the rectangular full packed format, which is a fast way of packing triangular and Hermitian matrices. So if you have a least squares problem with a very large number of variables, you can try ….
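The large-least-squares remark above suggests the normal-equations route that packed Hermitian/triangular storage targets: factor the symmetric matrix A'A once with Cholesky and reuse it for solves. A rough sketch under that assumption, written with SciPy analogues rather than this branch's Julia API:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 8))
b = rng.standard_normal(100)

c, low = cho_factor(A.T @ A)          # factor the Gram matrix once
x = cho_solve((c, low), A.T @ b)      # solve A'A x = A'b

# Agrees with the direct least-squares solution.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ref)
```

The payoff is that only the (small) triangular factor of A'A needs to be stored, which is exactly what a packed triangular format makes cheap.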
Should we compute the full Q in a QR factorization? May not be necessary, and it could be computed only when the Q is extracted. Also, I get undefs when trying to extract the Q.
Cc: @alanedelman
@ViralBShah After the last update:

```julia
julia> A = randn(5,2)
5x2 Float64 Array:
 -1.04258      0.254472
  0.201797    -0.808045
 -0.889902     0.978295
  0.428517    -1.46212
 -0.00136611  -0.264258

julia> Aqr = qr(A);

julia> Aqr[:Q]*Aqr[:R]
5x2 Float64 Array:
 -1.04258      0.254472
  0.201797    -0.808045
 -0.889902     0.978295
  0.428517    -1.46212
 -0.00136611  -0.264258
```
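The same sanity check can be stated in any QR implementation: the extracted factors multiply back to the original matrix. A tiny NumPy analogue (illustrative data only):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 2))
Q, R = np.linalg.qr(A)          # reduced ("economic") factorization
assert np.allclose(Q @ R, A)    # factors reproduce A
```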
For now, I just realized that I haven't written a ….
The printing is prettier now:

```julia
julia> A = randn(5,3)
5x3 Float64 Array:
  0.928881  -0.476071   0.982468
  0.420802   0.552705  -0.329329
 -1.45427    1.61268   -1.82751
 -1.28442   -1.03301   -0.816368
 -0.435444  -0.336586   0.0642692

julia> qr(A)[:Q]
5x3 Float64 QRDenseQ:
 -0.415653   0.136119  -0.0579582
 -0.188299  -0.318782   0.750931
  0.65075   -0.642555  -0.0810155
  0.574747   0.649197   0.454656
  0.194851   0.213344  -0.468475

julia> full(qr(A)[:Q])
5x3 Float64 Array:
 -0.415653   0.136119  -0.0579582
 -0.188299  -0.318782   0.750931
  0.65075   -0.642555  -0.0810155
  0.574747   0.649197   0.454656
  0.194851   0.213344  -0.468475

julia> Hermitian(A'A)
3x3 Float64 Hermitian:
  4.99413  -1.08152   4.45226
 -1.08152   4.31327  -2.77524
  4.45226  -2.77524   5.08406

julia> full(Hermitian(A'A))
3x3 Float64 Array:
  4.99413  -1.08152   4.45226
 -1.08152   4.31327  -2.77524
  4.45226  -2.77524   5.08406
```
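One payoff of a dedicated Hermitian type is that solvers can dispatch to a symmetric eigenroutine, which is faster and returns guaranteed-real eigenvalues. A small NumPy/SciPy analogue of that dispatch (not this branch's code; the matrix is symmetric by construction, like `A'A` above):

```python
import numpy as np
from scipy.linalg import eigh, eig

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
H = A.T @ A                             # symmetric by construction

w_sym = eigh(H, eigvals_only=True)      # symmetric solver: real, ascending
w_gen = np.sort(eig(H)[0].real)         # general solver: complex dtype
assert np.allclose(w_sym, w_gen)
```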
+1
We need ….

I did ….

Thanks. I was just thinking of doing the same.
I am soon adding methods for triangular multiplication, and I think it would be useful to cover multiplication and division for the new dispatch types, but it is still very likely that someone will encounter missing methods. If there was a ….
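For the triangular methods mentioned above, the point of a dispatch type is that a triangular solve is plain O(n^2) back-substitution rather than a general O(n^3) factorization. A hedged SciPy sketch of the structured solve (names and data illustrative):

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(4)
# Upper-triangular, made well-conditioned by boosting the diagonal.
T = np.triu(rng.standard_normal((4, 4))) + 4 * np.eye(4)
b = rng.standard_normal(4)

x = solve_triangular(T, b, lower=False)   # exploits the structure
assert np.allclose(T @ x, b)
```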
Is there any chance of adding a …?
I think we should also split linalg_dense.jl into a few files to make it manageable. I was thinking of just moving all the special matrix types into their own files. In fact, it may be useful to have a linalg/ directory and move all the linear algebra stuff in there.

Also see #2345. Perhaps we can experiment with a different type hierarchy in here.

@ViralBShah I agree on splitting linalg_dense.jl into smaller files in a directory. It would be easier to get an overview. I also think it would be a good idea to try out a ….
Please keep the extras/suitesparse.jl from the db/suitesparse branch in mind when considering the new design of the linalg factorization functions. I am somewhat in stasis at present while waiting to see what the convention for factorizations will be. The sparse LU and Cholesky factorizations are available once the names are straightened out. I can also add a sparse QR at that time.

The functions that are being removed need to be included in deprecated.jl.

There seems to be some confusion about what deprecated means: a function is only deprecated if it still works but shouldn't be used; if you just delete something, it isn't deprecated, it's just gone.
@ViralBShah Can we lift the methods for the decrement, decrement!, increment and increment! functions to a higher level than SuiteSparse? I am trying to separate UMFPACK from CHOLMOD, and those functions are the only overlaps.

Things were getting too unwieldy trying to keep both together.

For the time being I have kept all the const definitions in suitesparse_h.jl.
@dmbates The suitesparse tests need to be updated, as they are failing on this branch. The Base.LinAlg.SuiteSparse name no longer exists. I tried adding `using LinAlg.CHOLMOD` and changing the names, but the tests continue to fail.
This is for @StefanKarpinski and others following this from a distance. Essentially, we are retaining the interface from 0.1, and there is no interface change: …

New capabilities here are: …

When we merge this, we will do a detailed writeup of all the changes.
@ViralBShah Sorry about the failure of the suitesparse tests. I am working on a fix now. Let me know if you want one immediately and I will comment out the tests that are failing.

I haven't looked at the code, but your bullet list sounds good to me.
Commented out a couple of tests. The cholmod_sdmult calls throw exceptions but I haven't been able to determine why.
Also added a show method for the CholmodFactor type.
CholmodSparse/CholmodDense multiplication and \ now working. Added tests but still need a lot more.
@andreasnoackjensen When do you think we can be ready to merge this into master? If the dense stuff is ready, we can comment out the failing suitesparse tests and merge. We can keep this branch around until @dmbates finishes his suitesparse work, and do another merge into master then.
@ViralBShah @andreasnoackjensen The current versions of umfpack.jl and cholmod.jl do pass the suitesparse tests. For my use I had the tests print some information to STDOUT. I can push a version of test/suitesparse.jl now to suppress that. Do you plan to merge this branch (which could be very messy) or just pull in the modified files? I ask because there are many commits of the SuiteSparse interface code that are interim versions. If you are going to merge, those commits should be squashed.
A general design issue for the CHOLMOD module: in the past I had defined the most basic method with the c_Cholmod* arguments and then defined the methods for Cholmod* arguments by calling the basic method on the foo.c field. Now I don't think that is a good idea. Most of the C functions return a Ptr{c_Cholmod*}, but it is not a good idea to have either the pointer or the c_Cholmod* object created with unsafe_ref running around loose, because the ownership of the memory addressed by such objects is unclear. Constructing a Cholmod* object from a Ptr{c_Cholmod*} transfers responsibility for freeing the memory to the finalizer of the Julia object. A programmer who wants fine-grained control of the memory can call the C functions directly.
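The ownership convention described here — wrap a raw handle so that a finalizer, not the caller, is responsible for freeing it — can be sketched generically. A hypothetical Python illustration with a simulated C library, using `weakref.finalize` in place of a Julia finalizer (all names invented for the demo):

```python
import weakref

FREED = []

def lib_free(handle):
    """Stand-in for the C library's free routine."""
    FREED.append(handle)

class Factor:
    """Owns a raw handle; releasing it is delegated to a finalizer."""
    def __init__(self, handle):
        self.handle = handle
        # Once wrapped, nothing else should free the handle.
        self._finalizer = weakref.finalize(self, lib_free, handle)

f = Factor(42)
f._finalizer()              # deterministic release (GC would also trigger it)
assert FREED == [42]        # freed exactly once, by the owner
```

The finalizer fires at most once, which is the property that makes ownership unambiguous: loose pointers into the same memory would invite double frees.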
My main question/concern about this change of direction is that we previously decided that it was too much to keep both the Matlab-compatible versions of these functions and the new factorization-returning versions. So what caused the change in thinking here? Why was that deemed to be no longer too much? Is this a transitional change, or is this the ultimate design we're thinking to end up with? In particular, how does all of this relate to the realization that emerged from the other thread that some "methods" of the Matlab functions compute a factorization, whereas other methods compute a factorization-related artifact that is in-and-of-itself useful but does not behave equivalently to the original matrix as a linear operator? I'm thinking of ….

My final concern is that "retaining compatibility with Matlab" isn't even possible for us, since we can't dispatch on the number of output arguments. That leaves us with the problem of which method of each Matlab factorization function to retain compatibility with. This was one of the arguments for using only an indexing interface: we simply cannot express the output polymorphism that Matlab can, due to the lack of overloading based on the number of return values.
… and definitions of constants. The biggest change in cholmod.jl is defining and using a macro `chm_nm` instead of writing out all those tedious names explicitly.
I had originally proposed removing the matlab-like versions, but there was no clear consensus on that, with many wanting that interface. So, in …. I'd much rather focus at this point on making the factorization interface much better.
After an unsafe_ref copy operation, the object returned is owned by Julia. The data at the original ptr is not referenced again and can be immediately freed or mutated without effect. Any pointers inside of these objects, though, are still up to the programmer to manage.

On Mar 15, 2013, at 10:41 AM, dmbates notifications@github.com wrote: …
@ViralBShah and @StefanKarpinski: people have given their opinions and there is not full agreement, so it is really up to you now to decide on the design, or a design path. @ViralBShah I only have the two issues mentioned earlier. Maybe they disappear when this branch is merged. Also, I haven't removed ….
@andreasnoackjensen Do let me know when you are ready to merge. The ….

As for the API, let's not have this pull request depend on it. The API can easily be changed with little effort once this is merged.
@dmbates There are no more suitesparse test failures. Is this generally ok to merge?

@ViralBShah Yes, okay to merge.
@ViralBShah We should merge my latest change on this branch into master as well.

Will do.
Based on the discussion in #2212 I have rewritten some of the linalg functionality. Please share your thoughts. The main points of the design are:

- Matlab style methods are retained for now.
- Removal of `factors` methods and introduction of `ref` methods for factorization objects. Symbols are used for referencing, e.g. ….
- Factorizations work similarly to the matrix they were constructed from. No magic.
- New `QRDenseQ` and `HessenbergQ` types, which are subtypes of `AbstractMatrix`, can be used for multiplication.
- For types with pivoting you can get either a permutation vector or matrix.
- Removal of all fact methods. Factorizations are constructed from constructors. The fact methods were kind of constructors, so I think it is clearer just to use e.g. `QRDense` instead of `qrfact`.
- New `Triangular` and `Hermitian` types for dispatch. This eases the choice of algorithm and simplifies some of the functions for general matrices. E.g. ….
- New symmetric tridiagonal eigensolvers and methods for subsets of eigenvalues.
- Removal of the Bool argument in `eig`. Just use `eigvals`.
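For the symmetric tridiagonal solvers with eigenvalue subsets, SciPy exposes the same family of LAPACK routines, which makes the feature easy to illustrate. A sketch (data and names illustrative only), asking for just the two smallest eigenvalues by index and checking them against the full spectrum:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(5)
d = rng.standard_normal(6)        # diagonal
e = rng.standard_normal(5)        # off-diagonal

# Only the eigenvalues with indices 0 and 1 (the two smallest).
w_small = eigh_tridiagonal(d, e, eigvals_only=True,
                           select='i', select_range=(0, 1))

# Check against the full spectrum of the dense tridiagonal matrix.
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
w_full = np.linalg.eigvalsh(T)
assert np.allclose(w_small, w_full[:2])
```

Computing only a subset avoids the full O(n^2)-per-eigenvector cost when just a few extreme eigenvalues are needed.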