
Switch to AbstractArray to support e.g. StaticArrays.jl #163

Open
franckgaga opened this issue Feb 21, 2025 · 1 comment
No description provided.

franckgaga self-assigned this Feb 21, 2025
franckgaga changed the title from "Switch to AbstractVector and AbstractMatrix to support e.g. StaticArrays.jl" to "Switch to AbstractArray to support e.g. StaticArrays.jl" Feb 21, 2025
franckgaga commented Feb 22, 2025

FYI @baggepinnen and @1-Bart-1

I experimented a bit with StaticArrays.jl. Here are some findings:

  1. Converting everything to AbstractVector and AbstractMatrix significantly complicates the code and hurts its readability. It needs to be really well justified IMO.
  2. I would need to use mutable static arrays, since all the structs in this package are immutable (the default behavior of struct). Many methods in the package are currently allocation-free with built-in arrays because I rely heavily on mutation (e.g. updatestate! mutates the .x0 and .x̂0 vectors, setmodel! and linearize! mutate the .A, .Bu, .Bd, .C and .D matrices, etc.); see the sketch after this list.
  3. I ran the micro-benchmark script on my computer (Julia 1.11):
============================================
    Benchmarks for 3×3 Float64 matrices
============================================
Matrix multiplication               -> 2.7x speedup
Matrix multiplication (mutating)    -> 1.6x speedup
Matrix addition                     -> 16.6x speedup
Matrix addition (mutating)          -> 3.0x speedup
Matrix determinant                  -> 73.7x speedup
Matrix inverse                      -> 90.3x speedup
Matrix symmetric eigendecomposition -> 3.1x speedup
Matrix Cholesky decomposition       -> 13.8x speedup
Matrix LU decomposition             -> 4.2x speedup
Matrix QR decomposition             -> 31.6x speedup
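
To illustrate points 1 and 2: here is a minimal sketch (a simplified, hypothetical struct, not this package's actual layout) of how parametric AbstractMatrix/AbstractVector fields combined with mutable static arrays (MMatrix/MVector) could keep the in-place updates working:

```julia
using StaticArrays

# Hypothetical, simplified model struct (not the actual layout in this package).
# Fields are parameterized on AbstractMatrix/AbstractVector so that either
# built-in arrays or mutable static arrays can be stored and mutated in place.
struct ToyLinModel{NT<:Real, MT<:AbstractMatrix{NT}, VT<:AbstractVector{NT}}
    A  :: MT
    Bu :: MT
    x0 :: VT
end

# In-place update in the spirit of setmodel!/linearize! (illustration only):
function toysetmodel!(model::ToyLinModel, A, Bu)
    model.A  .= A   # broadcasting assignment mutates the stored array in place,
    model.Bu .= Bu  # so it works for Matrix and MMatrix, but errors for SMatrix
    return model
end

# With built-in arrays:
m1 = ToyLinModel(zeros(2, 2), zeros(2, 2), zeros(2))
toysetmodel!(m1, [1.0 0.1; 0.0 1.0], [0.0 0.0; 0.1 0.0])

# With mutable static arrays (an immutable SMatrix field would not support .=):
m2 = ToyLinModel(MMatrix{2,2}(zeros(2, 2)), MMatrix{2,2}(zeros(2, 2)), MVector{2}(zeros(2)))
toysetmodel!(m2, @SMatrix([1.0 0.1; 0.0 1.0]), @SMatrix([0.0 0.0; 0.1 0.0]))
```

The extra type parameters on every struct are exactly the kind of added complexity mentioned in point 1.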

The speedups of the mutating operations are modest. And these are micro-benchmarks: we cannot interpret the results as "the MPC controller would be 3.0x faster", since these operations are generally not the bottleneck for MPC.
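
For reference, one such comparison could be reproduced roughly as follows with BenchmarkTools.jl (a minimal sketch, not the exact script behind the numbers above):

```julia
using BenchmarkTools, StaticArrays, LinearAlgebra

A,  B  = rand(3, 3), rand(3, 3)             # built-in Matrix{Float64}
sA, sB = SMatrix{3,3}(A), SMatrix{3,3}(B)   # immutable static matrices

# Minimum elapsed times in seconds; interpolation ($) avoids global-variable overhead:
t_mul_builtin = @belapsed $A * $B
t_mul_static  = @belapsed $sA * $sB
println("Matrix multiplication -> ", round(t_mul_builtin / t_mul_static, digits=1), "x speedup")

t_inv_builtin = @belapsed inv($A)
t_inv_static  = @belapsed inv($sA)
println("Matrix inverse        -> ", round(t_inv_builtin / t_inv_static, digits=1), "x speedup")
```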

From these findings, I am not at all convinced that this feature is worth the implementation effort.
