Proposal for linalg #10
Let's discuss the API for `eig`. That reasoning leads to the following API:

```fortran
interface eig
   module procedure seig
   module procedure seig_generalized
   module procedure deig
   module procedure deig_generalized
   module procedure zeig
   module procedure zeig_generalized
end interface eig
```

Here

```fortran
subroutine deig(A, lam, c)
! Solves a standard eigenproblem A*c = lam*c
real(dp), intent(in) :: A(:, :)      ! matrix A
complex(dp), intent(out) :: lam(:)   ! eigenvalues: A c = lam c
complex(dp), intent(out) :: c(:, :)  ! eigenvectors: A c = lam c; c(i,j) = ith component of jth vec.
...
end subroutine

subroutine deig_generalized(A, B, lam, c)
! Solves a generalized eigenproblem A*c = lam*B*c
real(dp), intent(in) :: A(:, :)      ! matrix A
real(dp), intent(in) :: B(:, :)      ! matrix B
complex(dp), intent(out) :: lam(:)   ! eigenvalues: A c = lam B c
complex(dp), intent(out) :: c(:, :)  ! eigenvectors: A c = lam B c; c(i,j) = ith component of jth vec.
...
end subroutine
```
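For comparison only (not part of the proposal), the semantics proposed for `deig` and `deig_generalized` match what NumPy and SciPy already do: a real non-symmetric input yields complex eigenvalues and eigenvectors, and the eigenvectors are the columns of the output array. A minimal sketch:

```python
import numpy as np
from scipy.linalg import eig  # scipy's eig also handles A c = lam B c

# Real non-symmetric matrix whose eigenvalues are the complex pair +/- i
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

lam, c = np.linalg.eig(A)        # standard problem, analogous to deig
# Outputs are complex even though A is real, matching complex(dp) lam, c
assert lam.dtype == np.complex128

B = np.eye(2)
lam_g, c_g = eig(A, B)           # generalized problem, analogous to deig_generalized
# c[:, j] is the j-th eigenvector: c(i, j) = i-th component of j-th vector
assert np.allclose(A @ c[:, 0], lam[0] * c[:, 0])
```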
Here are some common points that should be discussed:
@cmacmackin, @milancurcic, @septcolor what do you think of this proposal? I chose it so that it's something we might be able to agree on quickly, and then we can figure out the proper workflow.
I'm not experienced with linalg stuff, but it all looks reasonable. Procedure names are short and intuitive. I agree that I don't like APIs that modify variables in place, so I suggest we make an extra copy in the default implementation, but if desired by the community we can also have an in-place variant. Otherwise, full steam ahead as far as I'm concerned.
From my experience using LAPACK routines in some fluid solvers (immersed boundary method, lattice Boltzmann equation, etc.) and PDE routines (orthogonal collocation), the right-hand side is rarely required after the solution of the system. If it is needed, the user can always make a copy beforehand. Would the A matrix be returned factorized, or would there be another copy?

```fortran
real, allocatable :: x(:), b(:)
real, allocatable :: A(:, :)
A = reshape([1, 3, 2, 4], [2, 2])
b = [5, 6]
x = b            ! user creates the copy
call solve(A, x) ! solve A*x = b
```

If you follow a more functional style you could write `x = solve(A, b)`. The annoying thing is that a subroutine and a function cannot be in the same generic interface block (perhaps something to discuss in the J3 Fortran proposal), meaning we would need two routines with different names.

While I like the idea of having convenience routines, I think it is also worth considering a more object-oriented interface, similar to the one in the Eigen library: http://eigen.tuxfamily.org/dox/group__TutorialLinearAlgebra.html. A major difference compared to Eigen is that in Fortran we would like to use native real arrays and not some abstract matrix data types, meaning the solvers would likely become the abstract data type, e.g.:

```fortran
type(linalg_solver) :: solver
! ... create A and b ...
call solver%init(A, factorization="LU")
x = solver%solve(b)
```
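For comparison, the copy-semantics call discussed above can be mirrored in NumPy (the Fortran `solve` in the comment is the proposal, not an existing routine). Note that Fortran's `reshape([1,3,2,4],[2,2])` fills the matrix column by column:

```python
import numpy as np

# Fortran's reshape([1, 3, 2, 4], [2, 2]) is column-major:
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 6.0])

x = np.linalg.solve(A, b)  # functional style: x = solve(A, b)
# A and b are untouched; only an in-place variant would require
# the user to copy b first, as in the Fortran snippet above.
assert np.allclose(A @ x, b)
```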
@milancurcic, @ivan-pi I think you just demonstrated that we need both a "copy" and an "in-place" API. Julia does that: they append `!` to the names of the in-place variants. Finally, @ivan-pi also has a good point that besides the direct "copy" and "in-place" APIs, some people might also prefer an object-oriented API. So I think we might need 3 APIs. @milancurcic, so I can submit a PR for `eig`?
@jacobwilliams, @marshallward do you have any thoughts on this workflow? Let's just concentrate on `eig` for now.
Personally, my preference would be for using derived types to store the matrix and/or its factorisation. With LAPACK, the factorisation requires more information than can be stored in the original matrix (e.g., pivot information), requiring additional arrays to be passed in. There are use-cases that reuse the factorisation, so we don't want that information to be lost. An example of an OOP approach can be found here: https://github.com/cmacmackin/OOP-Fortran-Examples/blob/master/04-lapack-wrapper.f90. In terms of returning multiple results, another approach, aside from using a subroutine, would be to define a transparent derived type with these results as components and just return that.
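The derived-type idea above — keep the factors and the pivot array together so the factorisation can be reused — can be sketched in Python with SciPy's LU routines. `LUSolver` is a hypothetical name for illustration, not an existing type:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

class LUSolver:
    """Hypothetical analogue of a Fortran derived type that stores the LU
    factors together with the pivot indices (the extra state that LAPACK's
    getrf produces and that a plain matrix argument cannot hold)."""
    def __init__(self, A):
        self._lu, self._piv = lu_factor(A)   # factor once, keep pivots

    def solve(self, b):
        return lu_solve((self._lu, self._piv), b)  # reuse the factorization

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
solver = LUSolver(A)
x1 = solver.solve(np.array([10.0, 12.0]))   # many right-hand sides,
x2 = solver.solve(np.array([1.0, 0.0]))     # one factorization
assert np.allclose(A @ x1, [10.0, 12.0])
```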
At best, I'd say we may need both. Let's focus on one, and consider the other later. I'd caution against involving derived types or OOP here from the get-go. Obviously there are implementations with procedures only. I suggest taking an incremental approach here.
In summary, let's make a banana first, and jungle later. Gorilla will come for the banana sooner or later. Let's worry about it then.
In case of
I agree with your plan to start with `eig`.
@certik Mostly just agreement on my part. I am not a big user of linear algebra so I'm reluctant to weigh in here too heavily. But since you asked...
Mostly, though, I agree with @milancurcic: make the 🍌!
Regarding argument ordering: NumPy, Matlab and Julia just return the result, so we do not have a prior implementation to follow. To me the natural thing is to put output arguments at the end of the argument list. The API is then very similar to NumPy: one simply writes `call eig(A, lam, c)`. Which ordering do people prefer?
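To make the ordering question concrete, here is the NumPy side of the comparison; the Fortran call appears only as a comment since it is the proposed API, not an existing routine:

```python
import numpy as np

A = np.diag([1.0, 3.0, 18.0])

# NumPy/Matlab/Julia style: results are *returned*
lam, c = np.linalg.eig(A)

# Proposed Fortran style: inputs first, outputs at the end of the list:
#   call eig(A, lam, c)
assert np.allclose(sorted(lam), [1.0, 3.0, 18.0])
```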
Example of using a generic interface to specific implementations:

```fortran
interface eig
   module procedure deig_copy
   module procedure deig_inplace
   ...
end interface eig

contains

subroutine deig_copy(A, lam, c)
real(dp), intent(in) :: A(:, :)      ! matrix A
complex(dp), intent(out) :: lam(:)   ! eigenvalues: A c = lam c
complex(dp), intent(out) :: c(:, :)  ! eigenvectors
...
end subroutine

subroutine deig_inplace(A, lam)
real(dp), intent(in out) :: A(:, :)  ! matrix A on input, result c on output
complex(dp), intent(out) :: lam(:)   ! eigenvalues: A c = lam c
...
end subroutine
```
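As a data point for the copy/in-place split above, SciPy resolves the same tension with a flag rather than two names; `overwrite_a` is SciPy's actual parameter:

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[2.0, 0.0],
              [0.0, 5.0]])

# "copy" behaviour (the default): the input matrix is preserved
lam_copy, _ = eig(A)
assert np.allclose(A, [[2.0, 0.0], [0.0, 5.0]])   # input untouched

# "in-place"-style behaviour: SciPy exposes it as a flag instead of a
# separate deig_inplace name; the routine may destroy A_work's storage
A_work = A.copy()
lam_inplace, _ = eig(A_work, overwrite_a=True)    # A_work now unspecified
assert np.allclose(sorted(lam_copy.real), sorted(lam_inplace.real))
```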
Since sparse arrays and sparse matrix solvers use quite different data structures than dense arrays and factorizations, I would prefer to see them in a separate module. That is also how it is done in SciPy (https://docs.scipy.org/doc/scipy/reference/sparse.html).
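The SciPy precedent mentioned above looks like this in practice: sparse matrices live in a separate subpackage with their own storage formats (CSR, CSC, COO, ...) and their own solvers, distinct from the dense `numpy.linalg` routines:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# 3x3 sparse matrix with 2.0 on the main diagonal, stored in CSR format
A = diags([2.0, 2.0, 2.0], format="csr")
b = np.array([2.0, 4.0, 6.0])

# sparse direct solve, a separate routine from numpy.linalg.solve
x = spsolve(A, b)
assert np.allclose(x, [1.0, 2.0, 3.0])
```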
@milancurcic I like your idea. My only worry is whether this can work for all our routines or not. If not, then for consistency reasons I would prefer to be explicit. Some ideas to use for the in-place variant:
From these I like the
I agree with @ivan-pi: I would also prefer to see sparse matrices in a separate module, especially since there are multiple ways to store sparse matrices.
Why not define only one
It could be done with
I like
We can do
One thing we should try to do is to keep the serial API consistent with the possible parallel API in #67.
I split the in-place discussion into its own issue #177.
Let's discuss the API for `eig`, `eigh`, `inv`, `solve`, `det`, `svd`, ... A possible implementation is here: https://github.com/certik/fortran-utils/blob/b43bd24cd421509a5bc6d3b9c3eeae8ce856ed88/src/linalg.f90.

All these functions will be implemented in the `stdlib_linalg` module, and they would probably just call LAPACK. The general idea is for these to be general routines that just work, with a simple intuitive interface and the highest performance achievable given the simple API. One can always achieve higher performance with more specialized routines for a particular problem (and a more complicated API), but that is not the point here. Rather, we would like Matlab / NumPy style routines to do linear algebra.

In particular, let's start with `eig`, for the eigenvalues and eigenvectors of a general (non-symmetric) matrix. Both NumPy and Matlab have a very similar interface called `eig`, which I propose we use. Julia seems to have more of an "object oriented" interface called `eigen`: https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/index.html#LinearAlgebra.eigen, which uses some Julia language features to emulate the Matlab style: `vals, vecs = eigen([1.0 0.0 0.0; 0.0 3.0 0.0; 0.0 0.0 18.0])`.