deprecate lu/eig/schur/svd/lq in favor of *fact counterparts and factorization destructuring #26997

Closed
wants to merge 5 commits into from
92 changes: 92 additions & 0 deletions NEWS.md
@@ -997,6 +997,98 @@ Deprecated or removed

* `gradient` is deprecated and will be removed in the next release ([#23816]).

* `lu(A::AbstractMatrix[, pivot])` has been deprecated in favor of `lufact(A[, pivot])`.
Whereas `lu(A[, pivot])` returns a tuple of arrays, `lufact(A[, pivot])` returns an `LU` object.
So for a direct replacement, use `(lufact(A[, pivot])...,)`. But going forward, consider using
the direct result of `lufact(A[, pivot])` instead, either destructured into its components
(`l, u, p = lufact(A[, pivot])`) or as an `LU` object (`lup = lufact(A)`) ([#26997]).
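As a sketch of the migration (assuming the Julia 0.7-era API this PR targets, where `lufact` returns an iterable `LU` object):

```julia
using LinearAlgebra

A = [4.0 3.0; 6.0 3.0]

# Old (deprecated): l, u, p = lu(A)
F = lufact(A)        # LU factorization object

# Destructure into the familiar components, as before:
l, u, p = F

# Or work with the object's fields directly:
@assert F.L * F.U ≈ A[F.p, :]
```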

* `lu(x::Number)` has been deprecated in favor of `lufact(x::Number)`.
  Whereas `lu(x::Number)` returns a tuple of numbers, `lufact(x::Number)` returns an `LU`
  object whose components are matrices, for consistency with other `lufact` methods. So for
  a direct replacement, use `first.((lufact(x)...,))`. But going forward, consider using the
  direct result of `lufact(x)` instead, either destructured into its components
  (`l, u, p = lufact(x)`) or as an `LU` object (`lup = lufact(x)`) ([#26997]).

* `eig(A[, args...])` has been deprecated in favor of `eigfact(A[, args...])`.
Whereas the former returns a tuple of arrays, the latter returns an `Eigen` object.
So for a direct replacement, use `(eigfact(A[, args...])...,)`.
But going forward, consider using the direct result of
`eigfact(A[, args...])` instead, either destructured into its components
(`vals, vecs = eigfact(A[, args...])`) or as an `Eigen` object
(`eigf = eigfact(A[, args...])`) ([#26997]).
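A minimal sketch of the new pattern (assuming the Julia 0.7-era `eigfact`, whose `Eigen` result destructures into the old tuple pieces):

```julia
using LinearAlgebra

A = [2.0 0.0; 0.0 3.0]

F = eigfact(A)       # Eigen object replacing eig(A)
vals, vecs = F       # destructuring recovers the old (values, vectors) tuple
@assert A * vecs ≈ vecs * Diagonal(vals)
```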

* `eig(A::AbstractMatrix, B::AbstractMatrix)` and `eig(A::Number, B::Number)`
have been deprecated in favor of `eigfact(A, B)`. Whereas the former each return
a tuple of arrays, the latter returns a `GeneralizedEigen` object. So for a direct
replacement, use `(eigfact(A, B)...,)`. But going forward, consider using the
direct result of `eigfact(A, B)` instead, either destructured into its components
(`vals, vecs = eigfact(A, B)`), or as a
`GeneralizedEigen` object (`eigf = eigfact(A, B)`) ([#26997]).
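For the generalized problem `A*v = λ*B*v`, a hedged sketch of the same migration (again assuming the 0.7-era API):

```julia
using LinearAlgebra

A = [1.0 0.0; 0.0 2.0]
B = [2.0 0.0; 0.0 1.0]

F = eigfact(A, B)    # GeneralizedEigen object replacing eig(A, B)
vals, vecs = F
@assert A * vecs ≈ B * vecs * Diagonal(vals)
```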

* `schur(A::AbstractMatrix)` has been deprecated in favor of `schurfact(A)`.
Whereas the former returns a tuple of arrays, the latter returns a `Schur` object.
So for a direct replacement, use `(schurfact(A)...,)`. But going forward, consider
using the direct result of `schurfact(A)` instead, either destructured into
its components (`T, Z, λ = schurfact(A)`) or as a `Schur` object
(`schurf = schurfact(A)`) ([#26997]).
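A sketch of the Schur migration (assuming the 0.7-era `schurfact`; for a real input `T` is quasi-upper-triangular, but the reconstruction identity holds either way):

```julia
using LinearAlgebra

A = [3.0 1.0; 0.5 2.0]

F = schurfact(A)     # Schur object replacing schur(A)
T, Z, λ = F          # (quasi-)triangular factor, unitary factor, eigenvalues
@assert Z * T * Z' ≈ A
```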

* `schur(A::StridedMatrix, B::StridedMatrix)` has been deprecated in favor of
`schurfact(A, B)`. Whereas the former returns a tuple of arrays, the latter
returns a `GeneralizedSchur` object. So for a direct replacement, use
`(schurfact(A, B)...,)`. But going forward, consider using the direct result
of `schurfact(A, B)` instead, either destructured into its components
(`S, T, Q, Z, α, β = schurfact(A, B)`) or as a `GeneralizedSchur` object
(`schurf = schurfact(A, B)`) ([#26997]).

* `svd(A::AbstractArray; thin=true)` has been deprecated in favor of
`svdfact(A; full=false)`. Note that the `thin` keyword and its replacement
`full` have opposite meanings. Additionally, whereas `svd` returns a
tuple of arrays `(U, S, V)` such that `A ≈ U*Diagonal(S)*V'`, `svdfact` returns
an `SVD` object that nominally provides `U, S, Vt` such that `A ≈ U*Diagonal(S)*Vt`.
So for a direct replacement,
use `((U, S, Vt) = svdfact(A[; full=...]); (U, S, copy(Vt')))`. But going forward,
consider using the direct result of `svdfact(A[; full=...])` instead,
either destructured into its components (`U, S, Vt = svdfact(A[; full=...])`)
or as an `SVD` object (`svdf = svdfact(A[; full=...])`) ([#26997]).
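A sketch illustrating both the keyword flip and the `V`-versus-`Vt` difference (assuming the 0.7-era `svdfact` with its `full` keyword):

```julia
using LinearAlgebra

A = rand(4, 3)

# Old: U, S, V = svd(A; thin=true) with A ≈ U*Diagonal(S)*V'
F = svdfact(A; full=false)   # note: full=false corresponds to the old thin=true
U, S, Vt = F                 # the SVD object provides Vt rather than V
@assert U * Diagonal(S) * Vt ≈ A
```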

* `svd(x::Number; thin=true)` has been deprecated in favor of
`svdfact(x; full=false)`. Note that the `thin` keyword and its replacement
`full` have opposite meanings. Additionally, whereas `svd(x::Number[; thin=...])`
returns a tuple of numbers `(u, s, v)` such that `x ≈ u*s*conj(v)`,
`svdfact(x::Number[; full=...])` returns
an `SVD` object that nominally provides `u, s, vt` such that `x ≈ u*Diagonal(s)*conj(vt)`.
So for a direct replacement,
use `((u, s, vt) = first.((svdfact(x[; full=...])...,)); (u, s, conj(vt)))`.
But going forward,
consider using the direct result of `svdfact(x[; full=...])` instead,
either destructured into its components (`u, s, vt = svdfact(x[; full=...])`)
or as an `SVD` object (`svdf = svdfact(x[; full=...])`) ([#26997]).

* `svd(A::AbstractArray, B::AbstractArray)` has been deprecated in favor of
`svdfact(A, B)`. Whereas the former returns a tuple of arrays,
the latter returns a `GeneralizedSVD` object. So for a direct replacement,
use `(svdfact(A, B)...,)`. But going forward,
consider using the direct result of `svdfact(A, B)` instead,
either destructured into its components (`U, V, Q, D1, D2, R0 = svdfact(A, B)`)
or as a `GeneralizedSVD` object (`gsvdf = svdfact(A, B)`) ([#26997]).

* `svd(x::Number, y::Number)` has been deprecated in favor of
`svdfact(x, y)`. Whereas the former returns a tuple of numbers,
the latter returns a `GeneralizedSVD` object. So for a direct replacement,
use `first.((svdfact(x, y)...,))`. But going forward,
consider using the direct result of `svdfact(x, y)` instead,
either destructured into its components (`U, V, Q, D1, D2, R0 = svdfact(x, y)`)
or as a `GeneralizedSVD` object (`gsvdf = svdfact(x, y)`) ([#26997]).

* `lq(A; thin=true)` has been deprecated in favor of `lqfact(A)`.
Whereas `lq(A; thin=true)` returns a tuple of arrays, `lqfact` returns
an `LQ` object. So for a direct replacement of `lq(A; thin=true)`,
use `(F = lqfact(A); (F.L, Array(F.Q)))`, and for `lq(A; thin=false)`
use `(F = lqfact(A); k = size(F.Q.factors, 2); (F.L, lmul!(F.Q, Matrix{eltype(F.Q)}(I, k, k))))`.
But going forward, consider using the direct result of `lqfact(A)` instead,
either destructured into its components (`L, Q = lqfact(A)`)
or as an `LQ` object (`lqf = lqfact(A)`) ([#26997]).
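And a sketch of the forward-looking `lqfact` pattern (assuming the 0.7-era API, where the `LQ` object destructures into its two factors):

```julia
using LinearAlgebra

A = rand(3, 5)

F = lqfact(A)        # LQ object replacing lq(A)
L, Q = F             # destructure into the lower-triangular and orthogonal factors
@assert L * Q ≈ A
```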

* The timing functions `tic`, `toc`, and `toq` are deprecated in favor of `@time` and `@elapsed`
([#17046]).

2 changes: 1 addition & 1 deletion doc/src/manual/noteworthy-differences.md
@@ -66,7 +66,7 @@ may trip up Julia users accustomed to MATLAB:
parentheses may be required (e.g., to select elements of `A` equal to 1 or 2 use `(A .== 1) .| (A .== 2)`).
* In Julia, the elements of a collection can be passed as arguments to a function using the splat
operator `...`, as in `xs=[1,2]; f(xs...)`.
* Julia's [`svd`](@ref) returns singular values as a vector instead of as a dense diagonal matrix.
* Julia's [`svdfact`](@ref) returns singular values as a vector instead of as a dense diagonal matrix.
* In Julia, `...` is not used to continue lines of code. Instead, incomplete expressions automatically
continue onto the next line.
* In both Julia and MATLAB, the variable `ans` is set to the value of the last expression issued
4 changes: 2 additions & 2 deletions doc/src/manual/parallel-computing.md
@@ -457,7 +457,7 @@ we could compute the singular values of several large random matrices in parallel:
```julia-repl
julia> M = Matrix{Float64}[rand(1000,1000) for i = 1:10];

julia> pmap(svd, M);
julia> pmap(svdvals, M);
```

Julia's [`pmap`](@ref) is designed for the case where each function call does a large amount
@@ -486,7 +486,7 @@ As an example, consider computing the singular values of matrices of different sizes:
```julia-repl
julia> M = Matrix{Float64}[rand(800,800), rand(600,600), rand(800,800), rand(600,600)];

julia> pmap(svd, M);
julia> pmap(svdvals, M);
```

If one process handles both 800×800 matrices and another handles both 600×600 matrices, we will
4 changes: 2 additions & 2 deletions doc/src/manual/performance-tips.md
@@ -544,7 +544,7 @@ function norm(A)
if isa(A, Vector)
return sqrt(real(dot(A,A)))
elseif isa(A, Matrix)
return maximum(svd(A)[2])
return maximum(svdvals(A))
else
error("norm: invalid argument")
end
@@ -555,7 +555,7 @@ This can be written more concisely and efficiently as:

```julia
norm(x::Vector) = sqrt(real(dot(x,x)))
norm(A::Matrix) = maximum(svd(A)[2])
norm(A::Matrix) = maximum(svdvals(A))
```
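For illustration, here is the two-method version in use (a sketch; it is named `mynorm` here to avoid shadowing `LinearAlgebra.norm`):

```julia
using LinearAlgebra

mynorm(x::Vector) = sqrt(real(dot(x, x)))
mynorm(A::Matrix) = maximum(svdvals(A))

# Dispatch picks the right method from the argument type:
@assert mynorm([3.0, 4.0]) ≈ 5.0              # Euclidean norm of a vector
@assert mynorm([2.0 0.0; 0.0 1.0]) ≈ 2.0      # largest singular value of a matrix
```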

## Write "type-stable" functions
4 changes: 2 additions & 2 deletions stdlib/IterativeEigensolvers/test/runtests.jl
@@ -186,7 +186,7 @@ end
@testset "real svds" begin
A = sparse([1, 1, 2, 3, 4], [2, 1, 1, 3, 1], [2.0, -1.0, 6.1, 7.0, 1.5])
S1 = svds(A, nsv = 2)
S2 = svd(Array(A))
S2 = (F = svdfact(Array(A)); (F.U, F.S, F.V))

## singular values match:
@test S1[1].S ≈ S2[2][1:2]
@@ -248,7 +248,7 @@ end
@testset "complex svds" begin
A = sparse([1, 1, 2, 3, 4], [2, 1, 1, 3, 1], exp.(im*[2.0:2:10;]), 5, 4)
S1 = svds(A, nsv = 2)
S2 = svd(Array(A))
S2 = (F = svdfact(Array(A)); (F.U, F.S, F.V))

## singular values match:
@test S1[1].S ≈ S2[2][1:2]
25 changes: 10 additions & 15 deletions stdlib/LinearAlgebra/docs/src/index.md
@@ -198,16 +198,16 @@ Legend:

### Matrix factorizations

| Matrix type | LAPACK | [`eig`](@ref) | [`eigvals`](@ref) | [`eigvecs`](@ref) | [`svd`](@ref) | [`svdvals`](@ref) |
|:------------------------- |:------ |:------------- |:----------------- |:----------------- |:------------- |:----------------- |
| [`Symmetric`](@ref) | SY | | ARI | | | |
| [`Hermitian`](@ref) | HE | | ARI | | | |
| [`UpperTriangular`](@ref) | TR | A | A | A | | |
| [`LowerTriangular`](@ref) | TR | A | A | A | | |
| [`SymTridiagonal`](@ref) | ST | A | ARI | AV | | |
| [`Tridiagonal`](@ref) | GT | | | | | |
| [`Bidiagonal`](@ref) | BD | | | | A | A |
| [`Diagonal`](@ref) | DI | | A | | | |
| Matrix type | LAPACK | [`eigfact`](@ref) | [`eigvals`](@ref) | [`eigvecs`](@ref) | [`svdfact`](@ref) | [`svdvals`](@ref) |
|:------------------------- |:------ |:----------------- |:----------------- |:----------------- |:----------------- |:----------------- |
| [`Symmetric`](@ref) | SY | | ARI | | | |
| [`Hermitian`](@ref) | HE | | ARI | | | |
| [`UpperTriangular`](@ref) | TR | A | A | A | | |
| [`LowerTriangular`](@ref) | TR | A | A | A | | |
| [`SymTridiagonal`](@ref) | ST | A | ARI | AV | | |
| [`Tridiagonal`](@ref) | GT | | | | | |
| [`Bidiagonal`](@ref) | BD | | | | A | A |
| [`Diagonal`](@ref) | DI | | A | | | |

Legend:

@@ -311,7 +311,6 @@ LinearAlgebra.Hermitian
LinearAlgebra.LowerTriangular
LinearAlgebra.UpperTriangular
LinearAlgebra.UniformScaling
LinearAlgebra.lu
LinearAlgebra.lufact
LinearAlgebra.lufact!
LinearAlgebra.chol
@@ -332,10 +331,8 @@ LinearAlgebra.QRCompactWY
LinearAlgebra.QRPivoted
LinearAlgebra.lqfact!
LinearAlgebra.lqfact
LinearAlgebra.lq
LinearAlgebra.bkfact
LinearAlgebra.bkfact!
LinearAlgebra.eig
LinearAlgebra.eigvals
LinearAlgebra.eigvals!
LinearAlgebra.eigmax
@@ -347,12 +344,10 @@ LinearAlgebra.hessfact
LinearAlgebra.hessfact!
LinearAlgebra.schurfact
LinearAlgebra.schurfact!
LinearAlgebra.schur
LinearAlgebra.ordschur
LinearAlgebra.ordschur!
LinearAlgebra.svdfact
LinearAlgebra.svdfact!
LinearAlgebra.svd
LinearAlgebra.svdvals
LinearAlgebra.svdvals!
LinearAlgebra.Givens
5 changes: 0 additions & 5 deletions stdlib/LinearAlgebra/src/LinearAlgebra.jl
@@ -80,7 +80,6 @@ export
diagind,
diagm,
dot,
eig,
eigfact,
eigfact!,
eigmax,
@@ -111,7 +110,6 @@ export
lowrankdowndate!,
lowrankupdate,
lowrankupdate!,
lu,
lufact,
lufact!,
lyap,
@@ -128,15 +126,12 @@ export
qr,
qrfact!,
qrfact,
lq,
lqfact!,
lqfact,
rank,
rdiv!,
schur,
schurfact!,
schurfact,
svd,
svdfact!,
svdfact,
svdvals!,
2 changes: 1 addition & 1 deletion stdlib/LinearAlgebra/src/bitarray.jl
@@ -87,7 +87,7 @@ end

## norm and rank

svd(A::BitMatrix) = svd(float(A))
svdfact(A::BitMatrix) = svdfact(float(A))
qr(A::BitMatrix) = qr(float(A))

## kron
12 changes: 6 additions & 6 deletions stdlib/LinearAlgebra/src/dense.jl
@@ -418,7 +418,7 @@ function schurpow(A::AbstractMatrix, p)
retmat = retmat * powm!(UpperTriangular(float.(A)), real(p - floor(p)))
end
else
S,Q,d = schur(complex(A))
S,Q,d = schurfact(complex(A))
# Integer part
R = S ^ floor(p)
# Real part
@@ -605,7 +605,7 @@ matrix function is returned whenever possible.
If `A` is symmetric or Hermitian, its eigendecomposition ([`eigfact`](@ref)) is
used, if `A` is triangular an improved version of the inverse scaling and squaring method is
employed (see [^AH12] and [^AHR13]). For general matrices, the complex Schur form
([`schur`](@ref)) is computed and the triangular algorithm is used on the
([`schurfact`](@ref)) is computed and the triangular algorithm is used on the
triangular factor.

[^AH12]: Awad H. Al-Mohy and Nicholas J. Higham, "Improved inverse scaling and squaring algorithms for the matrix logarithm", SIAM Journal on Scientific Computing, 34(4), 2012, C153-C169. [doi:10.1137/110852553](https://doi.org/10.1137/110852553)
@@ -662,7 +662,7 @@ that is the unique matrix ``X`` with eigenvalues having positive real part such

If `A` is symmetric or Hermitian, its eigendecomposition ([`eigfact`](@ref)) is
used to compute the square root. Otherwise, the square root is determined by means of the
Björck-Hammarling method [^BH83], which computes the complex Schur form ([`schur`](@ref))
Björck-Hammarling method [^BH83], which computes the complex Schur form ([`schurfact`](@ref))
and then the complex square root of the triangular factor.

[^BH83]:
@@ -1409,8 +1409,8 @@ julia> A*X + X*B + C
```
"""
function sylvester(A::StridedMatrix{T},B::StridedMatrix{T},C::StridedMatrix{T}) where T<:BlasFloat
RA, QA = schur(A)
RB, QB = schur(B)
RA, QA = schurfact(A)
RB, QB = schurfact(B)

D = -(adjoint(QA) * (C*QB))
Y, scale = LAPACK.trsyl!('N','N', RA, RB, D)
@@ -1453,7 +1453,7 @@ julia> A*X + X*A' + B
```
"""
function lyap(A::StridedMatrix{T}, C::StridedMatrix{T}) where {T<:BlasFloat}
R, Q = schur(A)
R, Q = schurfact(A)

D = -(adjoint(Q) * (C*Q))
Y, scale = LAPACK.trsyl!('N', T <: Complex ? 'C' : 'T', R, R, D)