Numerical mathematics, ordinary differential equations, special math functions, high-performance (sparse) linear algebra
- Introduction
- Installation
- Examples
- (lab) Numerical integration (quadrature)
- (lab) Solution of PDEs using spectral collocation
- (lab) Matrix visualization
- (lab) Singular value decomposition
- (lab) Cholesky factorization
- (lab) Solution of a (dense) linear system
- (lab) Reading table-formatted data files
- (sparse) Solution of a sparse linear system
- (ode) Solution of the Brusselator ODE
- (ode) Solution of the Brusselator PDE
- (stat) Generate the Frechet distribution
- (tensor) Allocate second-order tensors
- Roadmap
Russell (Rust Scientific Library) assists in developing high-performance computations involving linear algebra, sparse linear systems, differential equations, statistics, and continuum mechanics using the Rust programming language. The applications built with Russell revolve around the computational mechanics discipline; however, since Russell deals with fundamental mathematics and numerics, it is also helpful for other disciplines.
Russell aims to deliver efficient, reliable, and easy-to-maintain code. Thus, Russell implements several unit and integration tests and requires test coverage to be over 95%. For the sake of code maintenance, Russell avoids overcomplicated Rust constructions. Nonetheless, Russell considers a good range of Rust concepts, such as generics and traits, and convenient/powerful constructs, such as enums, options, and results. Another goal of Russell is to publish examples of all computations in the documentation to assist the user/developer.
Available libraries:
- russell_lab Scientific laboratory with special math functions, linear algebra, interpolation, quadrature, numerical derivation, and more
- russell_ode Solvers for ordinary differential equations (ODEs) and differential algebraic equations (DAEs)
- russell_sparse Solvers for large sparse linear systems (wraps MUMPS and UMFPACK)
- russell_stat Statistics calculations and (engineering) probability distributions
- russell_tensor Tensor analysis, calculus, and functions for continuum mechanics
Check the crate version and update your Cargo.toml accordingly. Examples:
[dependencies]
russell_lab = "*"
russell_sparse = "*"
russell_ode = "*"
russell_stat = "*"
russell_tensor = "*"
All crates have an option to use Intel MKL instead of the default OpenBLAS. For instance, the features keyword may be configured as follows:
[dependencies]
russell_lab = { version = "*", features = ["intel_mkl"] }
russell_sparse = { version = "*", features = ["intel_mkl"] }
russell_ode = { version = "*", features = ["intel_mkl"] }
russell_stat = { version = "*", features = ["intel_mkl"] }
russell_tensor = { version = "*", features = ["intel_mkl"] }
External associated and recommended crates:
- plotpy Plotting tools using Python3/Matplotlib as an engine (for quality graphics)
- tritet Triangle and tetrahedron mesh generators (with Triangle and Tetgen)
- gemlab Geometry, meshes, and numerical integration for finite element analyses
Russell requires some non-Rust libraries (e.g., OpenBLAS, Intel MKL, MUMPS, SuiteSparse) to achieve maximum performance. These libraries can be installed as explained in the subsections below.
After installing the dependencies, you may add each crate using:
cargo add russell_lab
cargo add russell_sparse # etc.
Required libraries (Debian/Ubuntu):
# install libraries for russell
sudo apt-get install -y --no-install-recommends \
liblapacke-dev \
libopenblas-dev \
libsuitesparse-dev
Required libraries (Rocky Linux):
# initialize
dnf update -y
dnf install epel-release -y
crb enable
# install libraries for russell
dnf install -y \
lapack-devel \
openblas-devel \
suitesparse-devel
Required libraries (Arch Linux):
# install libraries for russell
yay -Y --gendb --noconfirm && yay -Y --devel --save
yay -Syu blas-openblas --noconfirm
yay -Syu suitesparse --noconfirm
On macOS, first install Homebrew. Then, run:
# install libraries for russell
brew install lapack openblas suite-sparse
russell_sparse allows the use of a locally compiled SuiteSparse, installed in /usr/local/include/suitesparse and /usr/local/lib/suitesparse. This option is defined by the local_suitesparse feature. The compile-and-install-suitesparse script may be used in this case:
bash zscripts/compile-and-install-suitesparse.bash
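For reference, a sketch of enabling this feature in Cargo.toml (adjust the version as needed):
[dependencies]
russell_sparse = { version = "*", features = ["local_suitesparse"] }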
russell_sparse has an optional feature named with_mumps, which enables the MUMPS solver. To use this feature, MUMPS needs to be locally compiled first. The compile-and-install-mumps script may be used in this case:
bash zscripts/compile-and-install-mumps.bash
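Likewise, a sketch of enabling the MUMPS feature in Cargo.toml (whether it must be combined with other features depends on your setup):
[dependencies]
russell_sparse = { version = "*", features = ["with_mumps"] }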
To enable Intel MKL (and disable OpenBLAS), the optional intel_mkl feature may be used. In this case, SuiteSparse (and MUMPS) must be locally compiled (with Intel MKL). This step can be easily accomplished by the compile-and-install-suitesparse and compile-and-install-mumps scripts, called with the mkl argument. For example:
bash zscripts/compile-and-install-suitesparse.bash mkl
bash zscripts/compile-and-install-mumps.bash mkl
Warning: We need to further investigate why the nightly Rust version (1.83) fails to link with Intel MKL on Ubuntu 24.04.1 LTS. The stable version (1.81) works just fine.
By default, OpenBLAS will use all available threads, including hyper-threads, which may worsen performance. Thus, it is recommended to set the following environment variable:
export OPENBLAS_NUM_THREADS=<real-core-number>
Substitute <real-core-number> with the correct value from your system.
Furthermore, if working on a multi-threaded application where the solver should not be multi-threaded on its own (e.g., running parallel calculations in an optimization tool), you may set:
export OPENBLAS_NUM_THREADS=1
See also:
- russell_lab/examples
- russell_sparse/examples
- russell_ode/examples
- russell_stat/examples
- russell_tensor/examples
The code below approximates the area of a semicircle of unit radius.
use russell_lab::math::PI;
use russell_lab::{approx_eq, Quadrature, StrError};
fn main() -> Result<(), StrError> {
let mut quad = Quadrature::new();
let args = &mut 0;
let (a, b) = (-1.0, 1.0);
let (area, stats) = quad.integrate(a, b, args, |x, _| Ok(f64::sqrt(1.0 - x * x)))?;
println!("\narea = {}", area);
println!("\n{}", stats);
approx_eq(area, PI / 2.0, 1e-13);
Ok(())
}
This example illustrates the solution of a 1D PDE using the spectral collocation method. It employs the InterpLagrange struct.
d²u/dx² - 4 du/dx + 4 u = e^x + C,  with  C = -4e / (1 + e²),  for  x ∈ [-1, 1]
Boundary conditions:
u(-1) = 0 and u(1) = 0
Reference solution:
u(x) = e^x - (sinh(1) / sinh(2)) e^(2x) + C / 4
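As a quick sanity check (not part of the russell example, which uses InterpLagrange), the short sketch below relies only on the Rust standard library to verify that the reference solution satisfies the boundary conditions and the differential equation, approximating the derivatives with central finite differences:
fn main() {
    let e = std::f64::consts::E;
    let c = -4.0 * e / (1.0 + e * e);
    // reference solution: u(x) = e^x - (sinh(1)/sinh(2)) e^(2x) + C/4
    let u = |x: f64| x.exp() - (1.0_f64.sinh() / 2.0_f64.sinh()) * (2.0 * x).exp() + c / 4.0;
    // boundary conditions: u(-1) = 0 and u(1) = 0
    assert!(u(-1.0).abs() < 1e-14);
    assert!(u(1.0).abs() < 1e-14);
    // residual of d²u/dx² - 4 du/dx + 4 u - (e^x + C) at a few interior points
    let h = 1e-4;
    for x in [-0.5, 0.0, 0.5] {
        let d1 = (u(x + h) - u(x - h)) / (2.0 * h);
        let d2 = (u(x + h) - 2.0 * u(x) + u(x - h)) / (h * h);
        let res = d2 - 4.0 * d1 + 4.0 * u(x) - (x.exp() + c);
        assert!(res.abs() < 1e-5, "residual too large: {}", res);
    }
    println!("reference solution checks out (within finite-difference error)");
}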
Results:
We can use the fantastic tool named vismatrix to visualize the pattern of non-zero values of a matrix. With vismatrix, we can click on each circle and investigate the numeric values as well. The function mat_write_vismatrix writes the input data file for vismatrix.
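For illustration only, the sketch below writes a tiny sparsity pattern by hand. The file layout used here (a header line with nrow, ncol, and nnz, followed by zero-based "i j value" triples) is an assumption about the .smat format; in practice, prefer mat_write_vismatrix, which generates the file for you.
use std::fs::File;
use std::io::Write;
fn main() -> std::io::Result<()> {
    // a small 3x3 pattern stored as (row, col, value) triplets
    let triplets = [(0, 0, 2.0), (0, 1, 3.0), (1, 1, -1.0), (2, 2, 4.0)];
    let (nrow, ncol) = (3, 3);
    // NOTE: the header "nrow ncol nnz" plus zero-based "i j value" lines is
    // an assumed layout for the .smat file read by vismatrix
    let mut file = File::create("/tmp/pattern.smat")?;
    writeln!(file, "{} {} {}", nrow, ncol, triplets.len())?;
    for (i, j, value) in triplets {
        writeln!(file, "{} {} {}", i, j, value)?;
    }
    println!("wrote /tmp/pattern.smat");
    Ok(())
}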
After generating the "dot-smat" file, run the following command:
vismatrix /tmp/russell_lab/matrix_visualization.smat
Output:
use russell_lab::{mat_svd, Matrix, Vector, StrError};
fn main() -> Result<(), StrError> {
// set matrix
let mut a = Matrix::from(&[
[2.0, 4.0],
[1.0, 3.0],
[0.0, 0.0],
[0.0, 0.0],
]);
// allocate output structures
let (m, n) = a.dims();
let min_mn = if m < n { m } else { n };
let mut s = Vector::new(min_mn);
let mut u = Matrix::new(m, m);
let mut vt = Matrix::new(n, n);
// perform SVD
mat_svd(&mut s, &mut u, &mut vt, &mut a)?;
// check S
let s_correct = "β β\n\
β 5.46 β\n\
β 0.37 β\n\
β β";
assert_eq!(format!("{:.2}", s), s_correct);
// check SVD: a == u * s * vt
let mut usv = Matrix::new(m, n);
for i in 0..m {
for j in 0..n {
for k in 0..min_mn {
usv.add(i, j, u.get(i, k) * s[k] * vt.get(k, j));
}
}
}
let usv_correct = "β β\n\
β 2.000000 4.000000 β\n\
β 1.000000 3.000000 β\n\
β 0.000000 0.000000 β\n\
β 0.000000 0.000000 β\n\
β β";
assert_eq!(format!("{:.6}", usv), usv_correct);
Ok(())
}
use russell_lab::*;
fn main() -> Result<(), StrError> {
// set matrix (full)
#[rustfmt::skip]
let a_full = Matrix::from(&[
[ 3.0, 0.0,-3.0, 0.0],
[ 0.0, 3.0, 1.0, 2.0],
[-3.0, 1.0, 4.0, 1.0],
[ 0.0, 2.0, 1.0, 3.0],
]);
// set matrix (lower)
#[rustfmt::skip]
let mut a_lower = Matrix::from(&[
[ 3.0, 0.0, 0.0, 0.0],
[ 0.0, 3.0, 0.0, 0.0],
[-3.0, 1.0, 4.0, 0.0],
[ 0.0, 2.0, 1.0, 3.0],
]);
// set matrix (upper)
#[rustfmt::skip]
let mut a_upper = Matrix::from(&[
[3.0, 0.0,-3.0, 0.0],
[0.0, 3.0, 1.0, 2.0],
[0.0, 0.0, 4.0, 1.0],
[0.0, 0.0, 0.0, 3.0],
]);
// perform Cholesky factorization (lower)
mat_cholesky(&mut a_lower, false)?;
let l = &a_lower;
// perform Cholesky factorization (upper)
mat_cholesky(&mut a_upper, true)?;
let u = &a_upper;
// check: l ⋅ lᵀ = a
let m = a_full.nrow();
let mut l_lt = Matrix::new(m, m);
for i in 0..m {
for j in 0..m {
for k in 0..m {
l_lt.add(i, j, l.get(i, k) * l.get(j, k));
}
}
}
mat_approx_eq(&l_lt, &a_full, 1e-14);
// check: uᵀ ⋅ u = a
let mut ut_u = Matrix::new(m, m);
for i in 0..m {
for j in 0..m {
for k in 0..m {
ut_u.add(i, j, u.get(k, i) * u.get(k, j));
}
}
}
mat_approx_eq(&ut_u, &a_full, 1e-14);
Ok(())
}
use russell_lab::{solve_lin_sys, Matrix, Vector, StrError};
fn main() -> Result<(), StrError> {
// set matrix and right-hand side
let mut a = Matrix::from(&[
[1.0, 3.0, -2.0],
[3.0, 5.0, 6.0],
[2.0, 4.0, 3.0],
]);
let mut b = Vector::from(&[5.0, 7.0, 8.0]);
// solve linear system b := a⁻¹ ⋅ b
solve_lin_sys(&mut b, &mut a)?;
// check
let x_correct = "β β\n\
β -15.000 β\n\
β 8.000 β\n\
β 2.000 β\n\
β β";
assert_eq!(format!("{:.3}", b), x_correct);
Ok(())
}
The goal is to read the following file (clay-data.txt):
# Fujinomori clay test results
sr ea er # header
1.00000 -6.00000 0.10000
2.00000 7.00000 0.20000
3.00000 8.00000 0.20000 # << look at this line
# comments plus new lines are OK
4.00000 9.00000 0.40000
5.00000 10.00000 0.50000
# bye
The code below illustrates how to do it.
Each column (sr, ea, er) is accessible via the get method of the HashMap.
use russell_lab::{read_table, StrError};
use std::collections::HashMap;
use std::env;
use std::path::PathBuf;
fn main() -> Result<(), StrError> {
// get the asset's full path
let root = PathBuf::from(env::var("CARGO_MANIFEST_DIR").unwrap());
let full_path = root.join("data/tables/clay-data.txt");
// read the file
let labels = &["sr", "ea", "er"];
let table: HashMap<String, Vec<f64>> = read_table(&full_path, Some(labels))?;
// check the columns
assert_eq!(table.get("sr").unwrap(), &[1.0, 2.0, 3.0, 4.0, 5.0]);
assert_eq!(table.get("ea").unwrap(), &[-6.0, 7.0, 8.0, 9.0, 10.0]);
assert_eq!(table.get("er").unwrap(), &[0.1, 0.2, 0.2, 0.4, 0.5]);
Ok(())
}
use russell_lab::*;
use russell_sparse::prelude::*;
use russell_sparse::StrError;
fn main() -> Result<(), StrError> {
// constants
let ndim = 5; // number of rows = number of columns
let nnz = 13; // number of non-zero values, including duplicates
// allocate solver
let mut umfpack = SolverUMFPACK::new()?;
// allocate the coefficient matrix
// 2 3 . . .
// 3 . 4 . 6
// . -1 -3 2 .
// . . 1 . .
// . 4 2 . 1
let mut coo = SparseMatrix::new_coo(ndim, ndim, nnz, Sym::No)?;
coo.put(0, 0, 1.0)?; // << (0, 0, a00/2) duplicate
coo.put(0, 0, 1.0)?; // << (0, 0, a00/2) duplicate
coo.put(1, 0, 3.0)?;
coo.put(0, 1, 3.0)?;
coo.put(2, 1, -1.0)?;
coo.put(4, 1, 4.0)?;
coo.put(1, 2, 4.0)?;
coo.put(2, 2, -3.0)?;
coo.put(3, 2, 1.0)?;
coo.put(4, 2, 2.0)?;
coo.put(2, 3, 2.0)?;
coo.put(1, 4, 6.0)?;
coo.put(4, 4, 1.0)?;
// parameters
let mut params = LinSolParams::new();
params.verbose = false;
params.compute_determinant = true;
// call factorize
umfpack.factorize(&mut coo, Some(params))?;
// allocate x and rhs
let mut x = Vector::new(ndim);
let rhs = Vector::from(&[8.0, 45.0, -3.0, 3.0, 19.0]);
// calculate the solution
umfpack.solve(&mut x, &coo, &rhs, false)?;
println!("x =\n{}", x);
// check the results
let correct = vec![1.0, 2.0, 3.0, 4.0, 5.0];
vec_approx_eq(&x, &correct, 1e-14);
// analysis
let mut stats = StatsLinSol::new();
umfpack.update_stats(&mut stats);
let (mx, ex) = (stats.determinant.mantissa_real, stats.determinant.exponent);
println!("det(a) = {:?}", mx * f64::powf(10.0, ex));
println!("rcond = {:?}", stats.output.umfpack_rcond_estimate);
Ok(())
}
The system is:
y0' = 1 - 4 y0 + y0² y1
y1' = 3 y0 - y0² y1
with y0(x=0) = 3/2 and y1(x=0) = 3
Solving with DoPri8 -- 8(5,3):
use russell_lab::StrError;
use russell_ode::prelude::*;
fn main() -> Result<(), StrError> {
// get the ODE system
let (system, x0, mut y0, mut args, y_ref) = Samples::brusselator_ode();
// final x
let x1 = 20.0;
// solver
let params = Params::new(Method::DoPri8);
let mut solver = OdeSolver::new(params, system)?;
// enable dense output
let h_out = 0.01;
let selected_y_components = &[0, 1];
solver.enable_output().set_dense_recording(true, h_out, selected_y_components)?;
// solve the problem
solver.solve(&mut y0, x0, x1, None, &mut args)?;
// print the results and stats
println!("y_russell = {:?}", y0.as_data());
println!("y_mathematica = {:?}", y_ref.as_data());
println!("{}", solver.stats());
Ok(())
}
A plot of the (dense) solution is shown below:
This example solves the Brusselator PDE described in (Hairer E, Wanner G (2002) Solving Ordinary Differential Equations II Stiff and Differential-Algebraic Problems. Second Revised Edition. Corrected 2nd printing 2002. Springer Series in Computational Mathematics, 614p).
See the code brusselator_pde_radau5_2nd.rs.
The results are shown below for the U field:
And below for the V field:
The code brusselator_pde_2nd_comparison.rs compares russell results with Mathematica results.
The figure below shows the russell (black dashed lines) and Mathematica (red solid lines) results for the U field:
The figure below shows the russell (black dashed lines) and Mathematica (red solid lines) results for the V field:
Code:
use russell_stat::*;
fn main() -> Result<(), StrError> {
// generate samples
let mut rng = get_rng();
let dist = DistributionFrechet::new(0.0, 1.0, 1.0)?;
let nsamples = 10_000;
let mut data = vec![0.0; nsamples];
for i in 0..nsamples {
data[i] = dist.sample(&mut rng);
}
println!("{}", statistics(&data));
// text-plot
let stations = (0..20).map(|i| (i as f64) * 0.5).collect::<Vec<f64>>();
let mut hist = Histogram::new(&stations)?;
hist.count(&data);
println!("{}", hist);
Ok(())
}
Sample output:
min = 0.11845731988882305
max = 26248.036672205748
mean = 12.268212841918867
std_dev = 312.7131690782321
[ 0,0.5) | 1370 🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦
[0.5, 1) | 2313 🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦
[ 1,1.5) | 1451 🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦
[1.5, 2) |  971 🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦
[ 2,2.5) |  659 🟦🟦🟦🟦🟦🟦🟦🟦
[2.5, 3) |  460 🟦🟦🟦🟦🟦
[ 3,3.5) |  345 🟦🟦🟦🟦
[3.5, 4) |  244 🟦🟦🟦
[ 4,4.5) |  216 🟦🟦
[4.5, 5) |  184 🟦🟦
[ 5,5.5) |  133 🟦
[5.5, 6) |  130 🟦
[ 6,6.5) |  115 🟦
[6.5, 7) |  108 🟦
[ 7,7.5) | 70
[7.5, 8) | 75
[ 8,8.5) | 57
[8.5, 9) | 48
[ 9,9.5) | 59
sum = 9008
use russell_tensor::*;
fn main() -> Result<(), StrError> {
// general
let a = Tensor2::from_matrix(
&[
[1.0, SQRT_2 * 2.0, SQRT_2 * 3.0],
[SQRT_2 * 4.0, 5.0, SQRT_2 * 6.0],
[SQRT_2 * 7.0, SQRT_2 * 8.0, 9.0],
],
Mandel::General,
)?;
assert_eq!(
format!("{:.1}", a.vec),
"β β\n\
β 1.0 β\n\
β 5.0 β\n\
β 9.0 β\n\
β 6.0 β\n\
β 14.0 β\n\
β 10.0 β\n\
β -2.0 β\n\
β -2.0 β\n\
β -4.0 β\n\
β β"
);
// symmetric-3D
let b = Tensor2::from_matrix(
&[
[1.0, 4.0 / SQRT_2, 6.0 / SQRT_2],
[4.0 / SQRT_2, 2.0, 5.0 / SQRT_2],
[6.0 / SQRT_2, 5.0 / SQRT_2, 3.0],
],
Mandel::Symmetric,
)?;
assert_eq!(
format!("{:.1}", b.vec),
"β β\n\
β 1.0 β\n\
β 2.0 β\n\
β 3.0 β\n\
β 4.0 β\n\
β 5.0 β\n\
β 6.0 β\n\
β β"
);
// symmetric-2D
let c = Tensor2::from_matrix(
&[[1.0, 4.0 / SQRT_2, 0.0], [4.0 / SQRT_2, 2.0, 0.0], [0.0, 0.0, 3.0]],
Mandel::Symmetric2D,
)?;
assert_eq!(
format!("{:.1}", c.vec),
"β β\n\
β 1.0 β\n\
β 2.0 β\n\
β 3.0 β\n\
β 4.0 β\n\
β β"
);
Ok(())
}
- Improve russell_lab
- Implement more integration tests for linear algebra
- Implement more examples
- Implement more benchmarks
- Wrap more BLAS/LAPACK functions
- Implement dggev, zggev, zheev, and zgeev
- Wrap Intel MKL (option for OpenBLAS)
- Add more complex number functions
- Add fundamental functions to russell_lab
- Implement the Bessel functions
- Implement the modified Bessel functions
- Implement the elliptical integral functions
- Implement Beta, Gamma and Erf functions (and associated)
- Implement orthogonal polynomial functions
- Implement some numerical methods in russell_lab
- Implement Brent's solver
- Implement a solver for the cubic equation
- Implement numerical derivation
- Implement numerical Jacobian function
- Implement line search
- Implement Newton's method for nonlinear systems
- Implement numerical quadrature
- Implement multidimensional data interpolation
- Add interpolation and polynomials to russell_lab
- Implement Chebyshev polynomials
- Implement Chebyshev interpolation
- Implement Orthogonal polynomials
- Implement Lagrange interpolation
- Implement Fourier interpolation
- Improve russell_sparse
- Wrap the KLU solver (in addition to MUMPS and UMFPACK)
- Implement the Compressed Sparse Column format (CSC)
- Implement the Compressed Sparse Row format (CSR)
- Improve the C-interface to UMFPACK and MUMPS
- Write the conversion from COO to CSC in Rust
- Improve russell_ode
- Implement explicit Runge-Kutta solvers
- Implement Radau5 for DAEs
- Implement extrapolation methods
- Implement multi-step methods
- Implement general linear methods
- Improve russell_stat
- Add probability distribution functions
- Implement drawing of ASCII histograms
- Implement FORM (first-order reliability method)
- Add more examples
- Improve russell_tensor
- Implement functions to calculate invariants
- Implement first and second-order derivatives of invariants
- Implement some high-order derivatives
- Implement standard continuum mechanics tensors
- General improvements
- Compile on macOS
- Study the possibility of installing Russell on Windows