From 26322efb68823d855f7f2ec9c7194f73c7f6efac Mon Sep 17 00:00:00 2001
From: Tzanio Kolev
Date: Mon, 25 Nov 2024 11:25:35 -0800
Subject: [PATCH] Deployed 4942e1d with MkDocs version: 1.0.4

---
 index.html               | 8 ++++----
 news/index.html          | 4 ++++
 search/search_index.json | 2 +-
 sitemap.xml.gz           | Bin 516 -> 516 bytes
 4 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/index.html b/index.html
index a7935e86..4e93c408 100644
--- a/index.html
+++ b/index.html
@@ -418,12 +418,12 @@

News

-Oct 28, 2024
-Postdoc position on the MFEM team.    Apply
+Nov 25, 2024
+Recap of the 2024 MFEM Community Workshop.

-Oct 22, 2024
-2024 MFEM Community Workshop.
+Oct 28, 2024
+Postdoc position on the MFEM team.    Apply

May 7, 2024

diff --git a/news/index.html b/news/index.html
index 8862d557..46ad6938 100644
--- a/news/index.html
+++ b/news/index.html
@@ -352,6 +352,10 @@

MFEM News

+Nov 25, 2024 +Recap of the 2024 MFEM Community Workshop. + + Oct 28, 2024 Postdoc position on the MFEM team at LLNL. diff --git a/search/search_index.json b/search/search_index.json index b41b145f..9c36e930 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config": {"lang": ["en"], "prebuild_index": false, "separator": "[\\s\\-]+"}, "docs": [{"location": "", "text": "2024 Visualization Contest Winner Mathias Schmidt 2024 Visualization Contest Winner Jan Nikl Electromagnetic wave propagation in the NSTX-U tokamak High-order multi-material hydrodynamics in the BLAST code Topology optimization of a drone body using LLNL's LiDO code , based on MFEM Non-conforming adaptive mesh refinement with parallel load-balancing Previous Next MFEM is a free , lightweight , scalable C++ library for finite element methods. Features Arbitrary high-order finite element meshes and spaces . Wide variety of finite element discretization approaches. Conforming and nonconforming adaptive mesh refinement . Scalable from laptops to GPU-accelerated supercomputers. ... and many more . MFEM is used in many projects, including BLAST , Cardioid , Palace , VisIt , RF-SciDAC , FASTMath , xSDK , and CEED in the Exascale Computing Project . We host an annual workshop and FEM@LLNL seminar series series. See also our Gallery , Publications , Videos and News pages. News Date Message Oct 28, 2024 Postdoc position on the MFEM team. Apply Oct 22, 2024 2024 MFEM Community Workshop . May 7, 2024 Version 4.7 released . May 2, 2024 New MFEM paper in IJHPCA. Feb 22, 2023 AWS releases Palace based on MFEM. Latest Release New features \u250a Examples \u250a Code documentation \u250a Sources Download mfem-4.7.tgz Older releases \u250a Python wrapper \u250a Documentation Building MFEM \u250a Getting Started \u250a Finite Elements \u250a Performance New users should start by examining the example codes . We also recommend using GLVis for visualization. Contact Use the GitHub issue tracker to report bugs or post questions or comments . See the About page for citation information.", "title": "Home"}, {"location": "#features", "text": "Arbitrary high-order finite element meshes and spaces . Wide variety of finite element discretization approaches. Conforming and nonconforming adaptive mesh refinement . Scalable from laptops to GPU-accelerated supercomputers. ... and many more . MFEM is used in many projects, including BLAST , Cardioid , Palace , VisIt , RF-SciDAC , FASTMath , xSDK , and CEED in the Exascale Computing Project . We host an annual workshop and FEM@LLNL seminar series series. See also our Gallery , Publications , Videos and News pages.", "title": "Features"}, {"location": "#news", "text": "Date Message Oct 28, 2024 Postdoc position on the MFEM team. Apply Oct 22, 2024 2024 MFEM Community Workshop . May 7, 2024 Version 4.7 released . May 2, 2024 New MFEM paper in IJHPCA. Feb 22, 2023 AWS releases Palace based on MFEM.", "title": "News"}, {"location": "#latest-release", "text": "New features \u250a Examples \u250a Code documentation \u250a Sources Download mfem-4.7.tgz Older releases \u250a Python wrapper \u250a", "title": "Latest Release"}, {"location": "#documentation", "text": "Building MFEM \u250a Getting Started \u250a Finite Elements \u250a Performance New users should start by examining the example codes . We also recommend using GLVis for visualization.", "title": "Documentation"}, {"location": "#contact", "text": "Use the GitHub issue tracker to report bugs or post questions or comments . 
See the About page for citation information.", "title": "Contact"}, {"location": "about/", "text": "About MFEM MFEM originates from previous research effort in the (unreleased) AggieFEM/aFEM project. Please cite with: @article{mfem, title = {{MFEM}: A Modular Finite Element Methods Library}, author = {R. Anderson and J. Andrej and A. Barker and J. Bramwell and J.-S. Camier and J. Cerveny and V. Dobrev and Y. Dudouit and A. Fisher and Tz. Kolev and W. Pazner and M. Stowell and V. Tomov and I. Akkerman and J. Dahm and D. Medina and S. Zampini}, journal = {Computers \\& Mathematics with Applications}, doi = {10.1016/j.camwa.2020.06.009}, volume = {81}, pages = {42-74}, year = {2021} } @misc{mfem-web, key = {mfem}, title = {{MFEM}: Modular Finite Element Methods {[Software]}}, howpublished = {\\url{mfem.org}}, doi = {10.11578/dc.20171025.1248} } Contributors Ido Akkerman Robert Anderson Thomas Anderson Julian Andrej Mikhail Artemyev Nabil Atallah Tucker Babcock Jan-Phillip B\u00e4cker Cody Balos Andrew Barker Natalie Beams Thomas Benson Adrien Bernede Aaron Black Jamie Bramwell Thomas Brunner Jean-Sylvain Camier Hugh Carson Robert Carson Eric Chin Lenka \u010cerven\u00e1 Jakub \u010cerven\u00fd Dylan Copeland Johann Dahm William Dawn Victor DeCaria Veselin Dobrev Daniel Drzisga Yohann Dudouit Tobias Duswald Truman Ellis Josh Essman Aaron Fisher David Gardner Pieter Ghysels Andrew Gillette Sebastian Grimberg Hennes Hajduk Cyrus Harrison Stefan Henneking Milan Holec Delyan Kalchev Kazem Kamran Brendan Keith Dohyun Kim Patrick Knupp Tzanio Kolev \u2014 Project Leader Chris Laganella Ilya Lashuk Boyan Lazarov Chak Shing Lee Jacob Lotz Scott MacLachlan Peter Maginot Victor Magri David Medina Mark Miller Ketan Mittal William Moses Jan Nikl Dennis Ogiermann Geoffrey Oxberry Will Pazner Cosmin Petra Socratis Petrides Robert Rieben Amit Rotem Michael Schneier Joachim Sch\u00f6berl Jean Sexton Syun'ichi Shiraiwa Morteza Siboni Joseph Signorelli Cameron Smith Vanessa Sochat Gabriel Pinochet-Soto Ben Southworth Mike Stees Thomas Stitt Mark Stowell Jeremy Thompson Stanimire Tomov Vladimir Tomov Jean-\u00c9tienne Tremblay Arturo Vargas Umberto Villa Chris Vogl Seth Watts Kenneth Weiss Daniel White Brad Whitlock Christian Woltering Jonathan Wong Max Yang George Zagaris Stefano Zampini Patrick Zulian License BSD This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore Laboratory under Contract DE-AC52-07NA27344. Software release number: LLNL-CODE-806117. DOI: 10.11578/dc.20171025.1248 . Website built with MkDocs , Bootstrap and Bootswatch . Hosted on GitHub .", "title": "About"}, {"location": "about/#about-mfem", "text": "MFEM originates from previous research effort in the (unreleased) AggieFEM/aFEM project. Please cite with: @article{mfem, title = {{MFEM}: A Modular Finite Element Methods Library}, author = {R. Anderson and J. Andrej and A. Barker and J. Bramwell and J.-S. Camier and J. Cerveny and V. Dobrev and Y. Dudouit and A. Fisher and Tz. Kolev and W. Pazner and M. Stowell and V. Tomov and I. Akkerman and J. Dahm and D. Medina and S. 
Zampini}, journal = {Computers \\& Mathematics with Applications}, doi = {10.1016/j.camwa.2020.06.009}, volume = {81}, pages = {42-74}, year = {2021} } @misc{mfem-web, key = {mfem}, title = {{MFEM}: Modular Finite Element Methods {[Software]}}, howpublished = {\\url{mfem.org}}, doi = {10.11578/dc.20171025.1248} }", "title": "About MFEM"}, {"location": "about/#contributors", "text": "Ido Akkerman Robert Anderson Thomas Anderson Julian Andrej Mikhail Artemyev Nabil Atallah Tucker Babcock Jan-Phillip B\u00e4cker Cody Balos Andrew Barker Natalie Beams Thomas Benson Adrien Bernede Aaron Black Jamie Bramwell Thomas Brunner Jean-Sylvain Camier Hugh Carson Robert Carson Eric Chin Lenka \u010cerven\u00e1 Jakub \u010cerven\u00fd Dylan Copeland Johann Dahm William Dawn Victor DeCaria Veselin Dobrev Daniel Drzisga Yohann Dudouit Tobias Duswald Truman Ellis Josh Essman Aaron Fisher David Gardner Pieter Ghysels Andrew Gillette Sebastian Grimberg Hennes Hajduk Cyrus Harrison Stefan Henneking Milan Holec Delyan Kalchev Kazem Kamran Brendan Keith Dohyun Kim Patrick Knupp Tzanio Kolev \u2014 Project Leader Chris Laganella Ilya Lashuk Boyan Lazarov Chak Shing Lee Jacob Lotz Scott MacLachlan Peter Maginot Victor Magri David Medina Mark Miller Ketan Mittal William Moses Jan Nikl Dennis Ogiermann Geoffrey Oxberry Will Pazner Cosmin Petra Socratis Petrides Robert Rieben Amit Rotem Michael Schneier Joachim Sch\u00f6berl Jean Sexton Syun'ichi Shiraiwa Morteza Siboni Joseph Signorelli Cameron Smith Vanessa Sochat Gabriel Pinochet-Soto Ben Southworth Mike Stees Thomas Stitt Mark Stowell Jeremy Thompson Stanimire Tomov Vladimir Tomov Jean-\u00c9tienne Tremblay Arturo Vargas Umberto Villa Chris Vogl Seth Watts Kenneth Weiss Daniel White Brad Whitlock Christian Woltering Jonathan Wong Max Yang George Zagaris Stefano Zampini Patrick Zulian", "title": "Contributors"}, {"location": "about/#license", "text": "BSD This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore Laboratory under Contract DE-AC52-07NA27344. Software release number: LLNL-CODE-806117. DOI: 10.11578/dc.20171025.1248 . Website built with MkDocs , Bootstrap and Bootswatch . Hosted on GitHub .", "title": "License"}, {"location": "autodiff/", "text": "Automatic Differentiation Mini Applications The code in the miniapps/autodiff subdirectory of MFEM provides methods for automatic differentiation (AD) of arbitrary functions implemented in C++, either as lambda functions or functors. AD consists of a set of techniques to evaluate the derivative of a function implemented as a computer program. AD does not provide a symbolic form of the derivatives, and AD is not a numerical approximation technique. Instead, the derivatives obtained by AD are exact and exploit the fact that every function implemented on a computer can be represented by a sequence of arithmetic operations and basic functions, i.e., addition, multiplication, sin , cos , log , etc. The derivatives in AD with respect to the input arguments are obtained by applying the chain rule on the recorded sequence of operations. For more theoretical details, the users are referred to 1 . AD can be implemented on a compiler level by source code transformations or by using some of the features of modern object-oriented languages like operator overloading and templating. 
Even though several AD implementations on a compiler level exist, they are often utilized for simple functions written in languages like Fortran and C, and developments for general C++ applications are still in their infancy. The MFEM implementation relies on native and external C++ libraries like CoDiPack 2 . The users can choose the AD engine during the configuration phase. The choice does not affect the actual utilization of AD in the code, and it can impact only the performance and memory utilization. Two distinguished modes, forward and reverse, can be easily identified in software implementations of automatic differentiation. For a function $f(\\mathbf{x}):\\mathbb{R}^n \\rightarrow\\mathbb{R}^m$, forward mode implementations evaluate \\begin{align} \\dot{\\mathbf{y}}&=f'\\left(\\mathbf{x}\\right)\\dot{\\mathbf{x}}, \\quad \\dot{\\mathbf{x}}\\in\\mathbb{R}^{n\\times 1}, \\dot{\\mathbf{y}}\\in\\mathbb{R}^{m\\times1}, \\end{align} where the vector $\\dot{\\mathbf{x}}$ is specified by the user. Therefore, to extract the Jacobian $f'\\left(\\mathbf{x}\\right)\\dot{\\mathbf{x}}$ one has to call the AD procedure $n$ times with $n$ different vectors $\\dot{\\mathbf{x}}$, where the values of vector $j=1,\\ldots, n$ are defined as $\\dot{\\mathbf{x}}_i=\\delta_{i,j}$. The Jacobian is extracted column by column. For a function $f(\\mathbf{x}):\\mathbb{R}^n \\rightarrow\\mathbb{R}^m$, reverse mode evaluates \\begin{align} \\bar{\\mathbf{x}}^{\\sf{T}}&=\\bar{\\mathbf{y}}^{\\sf{T}} f'\\left(\\mathbf{x}\\right), &\\bar{\\mathbf{x}}\\in\\mathbb{R}^{n\\times 1}, \\bar{\\mathbf{y}}\\in\\mathbb{R}^{m\\times1}, \\end{align} where $\\bar{\\mathbf{y}}$ is a vector specified by the user. In contrast to forward mode, the Jacobian, in this case, can be extracted row by row. Thus, for a vector function with a number of arguments smaller than the size of the output, the forward mode will be the preferable one. For a vector function with a number of arguments larger than the size of the output, the reverse mode will be the preferable one. It should be mentioned that reverse mode introduces additional overhead for storing the computational graph in the memory, which might easily fill up the available memory. The interested users are referred to 3 for detailed comparisons. In MFEM, users can choose between a native implementation using AD in a forward mode and both forward and reverse mode implementation based on CoDiPack 2 . The native implementation is based on the so-called dual numbers briefly described below. Dual numbers In forward mode, the derivative information propagates from the input arguments to the output results. In MFEM, this is achieved with the help of the so-called dual number arithmetic. The native low-level implementation can be found in the header file fdual.hpp . The file implements a large number of basic functions, and if necessary additional basic and more complex functions can be easily added by following the examples. A dual number $x+\\varepsilon x'$ consists of a primal/real part and a dual part dragging the derivative information. Every real number can be represented as $x+\\varepsilon 0 $. The arithmetic is defined with the help of dummy symbol $\\varepsilon$ by specifying that $\\varepsilon^2=0$. Based on the above, the following set of rules can be easily derived. 
$\\left(x+\\varepsilon x'\\right)+\\left(y+\\varepsilon y'\\right)=\\left(x+y\\right)+\\varepsilon\\left(x'+y'\\right)$ $\\left(x+\\varepsilon x'\\right)*\\left(y+\\varepsilon y'\\right)=xy+\\varepsilon\\left(yx'+xy'\\right)$ $f\\left(x+\\varepsilon x'\\right)=f\\left(x\\right)+\\varepsilon f'\\left(x\\right)x'$ $f\\left(g \\left(x+\\varepsilon x'\\right) \\right)= f\\left(g \\left(x\\right)+\\varepsilon g'\\left(x\\right) x'\\right) = f\\left(g \\left(x \\right)\\right)+\\varepsilon f'\\left(g \\left(x \\right)\\right) g'\\left(x\\right) x'$ Example of AD differentiated function The following vector function, defined as lambda expression, has two parameters kappa and load . The input of the function input_vector is a vector $\\left[\\partial u/\\partial x, \\partial u/\\partial y,\\partial u/\\partial z,u \\right]^{\\sf{T}}$ with 4 components (the last one is not used in the output of the function), and the result is a vector $\\left[\\kappa \\partial u/\\partial x, \\kappa \\partial u/\\partial y, \\kappa \\partial u/\\partial z, -f \\right]$ output_vector of size 4. //using lambda expression auto func = [](mfem::Vector& vparam, mfem::ad::ADVectorType& input_vector, mfem::ad::ADVectorType& output_vector) { auto kappa = vparam[0]; //diffusion coefficient auto load = vparam[1]; //volumetric influx output_vector[0] = kappa * input_vector[0]; output_vector[1] = kappa * input_vector[1]; output_vector[2] = kappa * input_vector[2]; output_vector[3] = -load; }; The gradient of output_vector will be a matrix of size 4x4 and is computed with the help of the following object: constexpr int output_length = 4; constexpr int input_length = 4; constexpr int parameter_length = 2; mfem::VectorFuncAutoDiff function_derivative(func); The first parameter in the above template specifies the length of the result, the second parameter the length of the input vector input_vector , and the third template parameter specifies the length of vparam . Once function_derivative is defined, the following statement computes the gradients: function_derivative.Jacobian(param,state, grad_mat); The input consists of parameters and a state vector, and the output is 4x4 grad_mat matrix. The parameter vector consists of the coefficients $\\kappa$ and $f$ (referred to as load in the code). Example of AD differentiated function using functors The following vector function, defined as a functor, has zero parameters. The input of the function input_vector is a vector with 6 components, and the result is a vector output_vector of size 3. template class ExampleResidual { public: void operator ()(ParamVector& vparam, StateVector& input_vector, StateVector& output_vector) { output_vector[0]=sin(input_vector[0]+input_vector[1]+input_vector[2]); output_vector[1]=cos(input_vector[1]+input_vector[2]+input_vector[3]); output_vector[2]=tan(input_vector[2]+input_vector[3]+input_vector[4]+input_vector[5]); } }; The gradient of output_vector will be a matrix of size 3x6 and is computed with the help of the following object: constexpr int output_length = 3; constexpr int input_length = 6; constexpr int parameter_length = 0; mfem::VectorFuncAutoDiff erdf; The Jacobian for a vector input_vector is calculated using the following lines: mfem::DenseMatrix jac(3,6); mfem::Vector param; //dummy vector - we do not have parameters mfem::Vector input_vector(6); input_vector=1.0; // all values are set to one erdf.Jacobian(param,input_vector,jac); The elements of the state vector input_vector are set to one. 
In real application they should be set to the actual arguments of the function. The Jacobian is returned in the matrix jac(3,6) . The template parameters output_length , input_length ,and parameter_length should match the vector function signature. It is important to mention that the current AD interface is intended to be used at the integration point level. Thus, all vectors and matrices used as arguments in the functors and the lambda expressions should be serial objects. The provided set of examples, in the mini-app directory, for solving a $p$-Laplacian problem further exemplifies the intended use of the current implementation. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}}); Griewank, A. & Walther, A. Evaluating derivatives: principles and techniques of algorithmic differentiation SIAM, 2008 \u21a9 Sagebaum, M.; Albring, T. & Gauger, N. R. High-Performance Derivative Computations Using CoDiPack ACM Trans. Math. Softw., Association for Computing Machinery, 2019, 45 \u21a9 \u21a9 N\u00f8rgaard, S. A.; Sagebaum, M.; Gauger, N. R. & Lazarov, B. Applications of automatic differentiation in topology optimization Structural and Multidisciplinary Optimization, 2017, 56, 1135-1146 \u21a9", "title": "Automatic Differentiation"}, {"location": "autodiff/#automatic-differentiation-mini-applications", "text": "The code in the miniapps/autodiff subdirectory of MFEM provides methods for automatic differentiation (AD) of arbitrary functions implemented in C++, either as lambda functions or functors. AD consists of a set of techniques to evaluate the derivative of a function implemented as a computer program. AD does not provide a symbolic form of the derivatives, and AD is not a numerical approximation technique. Instead, the derivatives obtained by AD are exact and exploit the fact that every function implemented on a computer can be represented by a sequence of arithmetic operations and basic functions, i.e., addition, multiplication, sin , cos , log , etc. The derivatives in AD with respect to the input arguments are obtained by applying the chain rule on the recorded sequence of operations. For more theoretical details, the users are referred to 1 . AD can be implemented on a compiler level by source code transformations or by using some of the features of modern object-oriented languages like operator overloading and templating. Even though several AD implementations on a compiler level exist, they are often utilized for simple functions written in languages like Fortran and C, and developments for general C++ applications are still in their infancy. The MFEM implementation relies on native and external C++ libraries like CoDiPack 2 . The users can choose the AD engine during the configuration phase. The choice does not affect the actual utilization of AD in the code, and it can impact only the performance and memory utilization. Two distinguished modes, forward and reverse, can be easily identified in software implementations of automatic differentiation. For a function $f(\\mathbf{x}):\\mathbb{R}^n \\rightarrow\\mathbb{R}^m$, forward mode implementations evaluate \\begin{align} \\dot{\\mathbf{y}}&=f'\\left(\\mathbf{x}\\right)\\dot{\\mathbf{x}}, \\quad \\dot{\\mathbf{x}}\\in\\mathbb{R}^{n\\times 1}, \\dot{\\mathbf{y}}\\in\\mathbb{R}^{m\\times1}, \\end{align} where the vector $\\dot{\\mathbf{x}}$ is specified by the user. 
Therefore, to extract the Jacobian $f'\\left(\\mathbf{x}\\right)\\dot{\\mathbf{x}}$ one has to call the AD procedure $n$ times with $n$ different vectors $\\dot{\\mathbf{x}}$, where the values of vector $j=1,\\ldots, n$ are defined as $\\dot{\\mathbf{x}}_i=\\delta_{i,j}$. The Jacobian is extracted column by column. For a function $f(\\mathbf{x}):\\mathbb{R}^n \\rightarrow\\mathbb{R}^m$, reverse mode evaluates \\begin{align} \\bar{\\mathbf{x}}^{\\sf{T}}&=\\bar{\\mathbf{y}}^{\\sf{T}} f'\\left(\\mathbf{x}\\right), &\\bar{\\mathbf{x}}\\in\\mathbb{R}^{n\\times 1}, \\bar{\\mathbf{y}}\\in\\mathbb{R}^{m\\times1}, \\end{align} where $\\bar{\\mathbf{y}}$ is a vector specified by the user. In contrast to forward mode, the Jacobian, in this case, can be extracted row by row. Thus, for a vector function with a number of arguments smaller than the size of the output, the forward mode will be the preferable one. For a vector function with a number of arguments larger than the size of the output, the reverse mode will be the preferable one. It should be mentioned that reverse mode introduces additional overhead for storing the computational graph in the memory, which might easily fill up the available memory. The interested users are referred to 3 for detailed comparisons. In MFEM, users can choose between a native implementation using AD in a forward mode and both forward and reverse mode implementation based on CoDiPack 2 . The native implementation is based on the so-called dual numbers briefly described below.", "title": "Automatic Differentiation Mini Applications"}, {"location": "autodiff/#dual-numbers", "text": "In forward mode, the derivative information propagates from the input arguments to the output results. In MFEM, this is achieved with the help of the so-called dual number arithmetic. The native low-level implementation can be found in the header file fdual.hpp . The file implements a large number of basic functions, and if necessary additional basic and more complex functions can be easily added by following the examples. A dual number $x+\\varepsilon x'$ consists of a primal/real part and a dual part dragging the derivative information. Every real number can be represented as $x+\\varepsilon 0 $. The arithmetic is defined with the help of dummy symbol $\\varepsilon$ by specifying that $\\varepsilon^2=0$. Based on the above, the following set of rules can be easily derived. $\\left(x+\\varepsilon x'\\right)+\\left(y+\\varepsilon y'\\right)=\\left(x+y\\right)+\\varepsilon\\left(x'+y'\\right)$ $\\left(x+\\varepsilon x'\\right)*\\left(y+\\varepsilon y'\\right)=xy+\\varepsilon\\left(yx'+xy'\\right)$ $f\\left(x+\\varepsilon x'\\right)=f\\left(x\\right)+\\varepsilon f'\\left(x\\right)x'$ $f\\left(g \\left(x+\\varepsilon x'\\right) \\right)= f\\left(g \\left(x\\right)+\\varepsilon g'\\left(x\\right) x'\\right) = f\\left(g \\left(x \\right)\\right)+\\varepsilon f'\\left(g \\left(x \\right)\\right) g'\\left(x\\right) x'$", "title": "Dual numbers"}, {"location": "autodiff/#example-of-ad-differentiated-function", "text": "The following vector function, defined as lambda expression, has two parameters kappa and load . 
The input of the function input_vector is a vector $\\left[\\partial u/\\partial x, \\partial u/\\partial y,\\partial u/\\partial z,u \\right]^{\\sf{T}}$ with 4 components (the last one is not used in the output of the function), and the result is a vector $\\left[\\kappa \\partial u/\\partial x, \\kappa \\partial u/\\partial y, \\kappa \\partial u/\\partial z, -f \\right]$ output_vector of size 4. //using lambda expression auto func = [](mfem::Vector& vparam, mfem::ad::ADVectorType& input_vector, mfem::ad::ADVectorType& output_vector) { auto kappa = vparam[0]; //diffusion coefficient auto load = vparam[1]; //volumetric influx output_vector[0] = kappa * input_vector[0]; output_vector[1] = kappa * input_vector[1]; output_vector[2] = kappa * input_vector[2]; output_vector[3] = -load; }; The gradient of output_vector will be a matrix of size 4x4 and is computed with the help of the following object: constexpr int output_length = 4; constexpr int input_length = 4; constexpr int parameter_length = 2; mfem::VectorFuncAutoDiff function_derivative(func); The first parameter in the above template specifies the length of the result, the second parameter the length of the input vector input_vector , and the third template parameter specifies the length of vparam . Once function_derivative is defined, the following statement computes the gradients: function_derivative.Jacobian(param,state, grad_mat); The input consists of parameters and a state vector, and the output is 4x4 grad_mat matrix. The parameter vector consists of the coefficients $\\kappa$ and $f$ (referred to as load in the code).", "title": "Example of AD differentiated function"}, {"location": "autodiff/#example-of-ad-differentiated-function-using-functors", "text": "The following vector function, defined as a functor, has zero parameters. The input of the function input_vector is a vector with 6 components, and the result is a vector output_vector of size 3. template class ExampleResidual { public: void operator ()(ParamVector& vparam, StateVector& input_vector, StateVector& output_vector) { output_vector[0]=sin(input_vector[0]+input_vector[1]+input_vector[2]); output_vector[1]=cos(input_vector[1]+input_vector[2]+input_vector[3]); output_vector[2]=tan(input_vector[2]+input_vector[3]+input_vector[4]+input_vector[5]); } }; The gradient of output_vector will be a matrix of size 3x6 and is computed with the help of the following object: constexpr int output_length = 3; constexpr int input_length = 6; constexpr int parameter_length = 0; mfem::VectorFuncAutoDiff erdf; The Jacobian for a vector input_vector is calculated using the following lines: mfem::DenseMatrix jac(3,6); mfem::Vector param; //dummy vector - we do not have parameters mfem::Vector input_vector(6); input_vector=1.0; // all values are set to one erdf.Jacobian(param,input_vector,jac); The elements of the state vector input_vector are set to one. In real application they should be set to the actual arguments of the function. The Jacobian is returned in the matrix jac(3,6) . The template parameters output_length , input_length ,and parameter_length should match the vector function signature. It is important to mention that the current AD interface is intended to be used at the integration point level. Thus, all vectors and matrices used as arguments in the functors and the lambda expressions should be serial objects. The provided set of examples, in the mini-app directory, for solving a $p$-Laplacian problem further exemplifies the intended use of the current implementation. 
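The dual-number rules described above can be made concrete with a short, self-contained C++ sketch. The Dual struct here is hypothetical and is not the fdual.hpp implementation (which provides many more basic functions); it only illustrates how seeding the dual part with 1 propagates an exact derivative through ordinary arithmetic, here for f(x) = x*sin(x):

#include <cmath>
#include <cstdio>

// Hypothetical dual number: primal value plus derivative ("dual") part.
struct Dual { double val; double dot; };

// (x + eps x') + (y + eps y') = (x + y) + eps (x' + y')
Dual operator+(Dual a, Dual b) { return {a.val + b.val, a.dot + b.dot}; }
// (x + eps x') * (y + eps y') = x y + eps (y x' + x y')
Dual operator*(Dual a, Dual b) { return {a.val * b.val, b.val * a.dot + a.val * b.dot}; }
// f(x + eps x') = f(x) + eps f'(x) x', here for f = sin
Dual sin(Dual a) { return {std::sin(a.val), std::cos(a.val) * a.dot}; }

int main()
{
   Dual x{2.0, 1.0};     // dual part 1 selects d/dx
   Dual f = x * sin(x);  // f(x) = x sin(x)
   std::printf("f(2) = %g, f'(2) = %g\n", f.val, f.dot);  // exact: f'(x) = sin(x) + x cos(x)
   return 0;
}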
MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}}); Griewank, A. & Walther, A. Evaluating derivatives: principles and techniques of algorithmic differentiation SIAM, 2008 \u21a9 Sagebaum, M.; Albring, T. & Gauger, N. R. High-Performance Derivative Computations Using CoDiPack ACM Trans. Math. Softw., Association for Computing Machinery, 2019, 45 \u21a9 \u21a9 N\u00f8rgaard, S. A.; Sagebaum, M.; Gauger, N. R. & Lazarov, B. Applications of automatic differentiation in topology optimization Structural and Multidisciplinary Optimization, 2017, 56, 1135-1146 \u21a9", "title": "Example of AD differentiated function using functors"}, {"location": "bilininteg/", "text": "Bilinear Form Integrators $ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} \\newcommand{\\abs}[1]{|#1|} $ Bilinear form integrators are at the heart of any finite element method, they are used to compute the integrals of products of basis functions over individual mesh elements (or sometimes over edges or faces). Typically each element is contained in the support of several basis functions of both the domain and range spaces, therefore bilinear integrators simultaneously compute the integrals of all combinations of the relevant basis functions from the domain and range spaces. This produces a two dimensional array of results that are arranged into a small dense matrix of integral values called a local element (stiffness) matrix . To put this another way, the BilinearForm class builds a global, sparse, finite element matrix, glb_mat , by performing the outer loop in the following pseudocode snippet whereas the BilinearFormIntegrator class performs the nested inner loops to compute the dense local element matrix, loc_mat . for each elem in elements loc_mat = 0.0 for each pt in quadrature_points for each u_j in elem for each v_i in elem loc_mat(i,j) += w(pt) * u_j(pt) v_i(pt) end end end glb_mat += loc_mat end There are three types of integrals that typically arise although many other, more exotic, forms are possible: Integrals involving Scalar basis functions: $\\int_\\Omega \\lambda\\, u v$ Integrals involving Vector basis functions: $\\int_\\Omega \\lambda\\, \\vec{u}\\cdot\\vec{v}$ Integrals involving Scalar and Vector basis functions: $\\int_\\Omega u\\,\\vec{\\lambda}\\cdot\\vec{v}$ The BilinearFormIntegrator classes allow MFEM to produce a wide variety of local element matrices without modifying the BilinearForm class. Many of the possible operators are collected below into tables that briefly describe their action and requirements. For more information on integration and developing custom BilinearFormIntegrator classes see Integration . In the tables below the Space column refers to finite element spaces which implement the following methods: Space Operator Derivative Operator H1 CalcShape CalcDShape ND CalcVShape CalcCurlShape RT CalcVShape CalcDivShape L2 CalcShape None The Coef. column refers to the types of coefficients that are available. A boldface coefficient type is required whereas most coefficients are optional. Coef. 
Type of Function Argument Type S Scalar Valued Function Coefficient V Vector Valued Function VectorCoefficient D Diagonal Matrix Function VectorCoefficient M General Matrix Function MatrixCoefficient Notation: The integrals performed by the various integrators listed below are shown using inner product notation, $(\\cdot,\\cdot)$, defined as follows. $$(\\lambda u, v)\\equiv \\int_\\Omega \\lambda u v$$ $$(\\lambda\\vec{u}, \\vec{v})\\equiv \\int_\\Omega\\lambda\\vec{u}\\cdot\\vec{v}$$ Where $u$ or $\\vec{u}$ is a function in the domain (or trial) space and $v$ or $\\vec{v}$ is in the range (or test) space. For boundary integrators, the integrals are over $\\partial \\Omega$. Face integrators integrate over the interior and boundary faces of mesh elements and are denoted with $\\left<\\cdot,\\cdot\\right>$. Note that any operators involving a derivative of the range function $v$ or $\\vec{v}$ are computed using integration by parts. This leads to a boundary integral which can be used to apply Neumann boundary conditions. Some of these operators are listed along with their boundary terms in section Weak Operators . Scalar Field Operators These operators require scalar-valued trial spaces. Many of these operators will work with either H1 or L2 basis functions but some that require a gradient operator should be used with H1. Square Operators These integrators are designed to be used with the BilinearForm object to assemble square linear operators. Class Name Spaces Coef. Operator Continuous Op. Dimension MassIntegrator H1, L2 S $(\\lambda u, v)$ $\\lambda u$ 1D, 2D, 3D DiffusionIntegrator H1 S, M $(\\lambda\\grad u, \\grad v)$ $-\\div(\\lambda\\grad u)$ 1D, 2D, 3D Mixed Operators These integrators are designed to be used with the MixedBilinearForm object to assemble square or rectangular linear operators. Class Name Domain Range Coef. Operator Continuous Op. 
Dimension MixedScalarMassIntegrator H1, L2 H1, L2 S $(\\lambda u, v)$ $\\lambda u$ 1D, 2D, 3D MixedScalarWeakDivergenceIntegrator H1, L2 H1 V $(-\\vec{\\lambda}u,\\grad v)$ $\\div(\\vec{\\lambda}u)$ 2D, 3D MixedScalarWeakDerivativeIntegrator H1, L2 H1 S $(-\\lambda u, \\ddx{v})$ $\\ddx{}(\\lambda u)\\;$ 1D MixedScalarWeakCurlIntegrator H1, L2 ND S $(\\lambda u,\\curl\\vec{v})$ $\\curl(\\lambda\\,u\\,\\hat{z})\\;$ 2D MixedVectorProductIntegrator H1, L2 ND, RT V $(\\vec{\\lambda}u,\\vec{v})$ $\\vec{\\lambda}u$ 2D, 3D MixedScalarWeakCrossProductIntegrator H1, L2 ND, RT V $(\\vec{\\lambda} u\\,\\hat{z},\\vec{v})$ $\\vec{\\lambda}\\times\\,\\hat{z}\\,u$ 2D MixedScalarWeakGradientIntegrator H1, L2 RT S $(-\\lambda u, \\div\\vec{v})$ $\\grad(\\lambda u)$ 2D, 3D MixedDirectionalDerivativeIntegrator H1 H1, L2 V $(\\vec{\\lambda}\\cdot\\grad u, v)$ $\\vec{\\lambda}\\cdot\\grad u$ 2D, 3D MixedScalarCrossGradIntegrator H1 H1, L2 V $(\\vec{\\lambda}\\cross\\grad u, v)$ $\\vec{\\lambda}\\cross\\grad u$ 2D MixedScalarDerivativeIntegrator H1 H1, L2 S $(\\lambda \\ddx{u}, v)$ $\\lambda\\ddx{u}\\;$ 1D MixedGradGradIntegrator H1 H1 S, D, M $(\\lambda\\grad u,\\grad v)$ $-\\div(\\lambda\\grad u)$ 2D, 3D MixedCrossGradGradIntegrator H1 H1 V $(\\vec{\\lambda}\\cross\\grad u,\\grad v)$ $-\\div(\\vec{\\lambda}\\cross\\grad u)$ 2D, 3D MixedVectorGradientIntegrator H1 ND, RT S, D, M $(\\lambda\\grad u,\\vec{v})$ $\\lambda\\grad u$ 2D, 3D MixedCrossGradIntegrator H1 ND, RT V $(\\vec{\\lambda}\\cross\\grad u,\\vec{v})$ $\\vec{\\lambda}\\cross\\grad u$ 3D MixedCrossGradCurlIntegrator H1 ND V $(\\vec{\\lambda}\\times\\grad u, \\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\times\\grad u)$ 3D MixedGradDivIntegrator H1 RT V $(\\vec{\\lambda}\\cdot\\grad u, \\div\\vec{v})$ $-\\grad(\\vec{\\lambda}\\cdot\\grad u)$ 2D, 3D Other Scalar Operators Class Name Domain Range Coef. Dimension Operator Notes DerivativeIntegrator H1, L2 H1, L2 S 1D, 2D, 3D $(\\lambda\\frac{\\partial u}{\\partial x_i}, v)$ The direction index \"i\" is passed by the user. See MixedDirectionalDerivativeIntegrator for a more general alternative. ConvectionIntegrator H1 H1 V 1D, 2D, 3D $(\\vec{\\lambda}\\cdot\\grad u, v)$ This is designed to be used with BilinearForm to produce a square matrix. See MixedDirectionalDerivativeIntegrator for a rectangular version. GroupConvectionIntegrator H1 H1 V 1D, 2D, 3D $(\\alpha\\vec{\\lambda}\\cdot\\grad u, v)$ Uses the \"group\" finite element formulation for advection due to Fletcher . BoundaryMassIntegrator H1, L2 H1, L2 S 1D, 2D, 3D $(\\lambda\\,u,v)$ Computes a mass matrix on the exterior faces of a domain. See MassIntegrator above for a more general version. Vector Finite Element Operators These operators require vector-valued basis functions in the trial space. Many of these operators will work with either ND or RT basis functions but others require one or the other. Square Operators These integrators are designed to be used with the BilinearForm object to assemble square linear operators. Class Name Spaces Coef. Operator Continuous Op. Dimension VectorFEMassIntegrator ND, RT S, D, M $(\\lambda\\vec{u},\\vec{v})$ $\\lambda\\vec{u}$ 2D, 3D CurlCurlIntegrator ND S, D, M $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ 2D, 3D DivDivIntegrator RT S $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $-\\grad(\\lambda\\div\\vec{u})$ 2D, 3D Mixed Operators These integrators are designed to be used with the MixedBilinearForm object to assemble square or rectangular linear operators. 
Class Name Domain Range Coef. Operator Continuous Op. Dimension MixedDotProductIntegrator ND, RT H1, L2 V $(\\vec{\\lambda}\\cdot\\vec{u},v)$ $\\vec{\\lambda}\\cdot\\vec{u}$ 2D, 3D MixedScalarCrossProductIntegrator ND, RT H1, L2 V $(\\vec{\\lambda}\\cross\\vec{u},v)$ $\\vec{\\lambda}\\cross\\vec{u}$ 2D MixedVectorWeakDivergenceIntegrator ND, RT H1 S, D, M $(-\\lambda\\vec{u},\\grad v)$ $\\div(\\lambda\\vec{u})$ 2D, 3D MixedWeakDivCrossIntegrator ND, RT H1 V $(-\\vec{\\lambda}\\cross\\vec{u},\\grad v)$ $\\div(\\vec{\\lambda}\\cross\\vec{u})$ 3D MixedVectorMassIntegrator ND, RT ND, RT S, D, M $(\\lambda\\vec{u},\\vec{v})$ $\\lambda\\vec{u}$ 2D, 3D MixedCrossProductIntegrator ND, RT ND, RT V $(\\vec{\\lambda}\\cross\\vec{u},\\vec{v})$ $\\vec{\\lambda}\\cross\\vec{u}$ 3D MixedVectorWeakCurlIntegrator ND, RT ND S, D, M $(\\lambda\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\vec{u})$ 3D MixedWeakCurlCrossIntegrator ND, RT ND V $(\\vec{\\lambda}\\cross\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\vec{u})$ 3D MixedScalarWeakCurlCrossIntegrator ND, RT ND V $(\\vec{\\lambda}\\cross\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\vec{u})$ 2D MixedWeakGradDotIntegrator ND, RT RT V $(-\\vec{\\lambda}\\cdot\\vec{u},\\div\\vec{v})$ $\\grad(\\vec{\\lambda}\\cdot\\vec{u})$ 2D, 3D MixedScalarCurlIntegrator ND H1, L2 S $(\\lambda\\curl\\vec{u},v)$ $\\lambda\\curl\\vec{u}\\;$ 2D MixedCrossCurlGradIntegrator ND H1 V $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\grad v)$ $-\\div(\\vec{\\lambda}\\cross\\curl\\vec{u})$ 3D MixedVectorCurlIntegrator ND ND, RT S, D, M $(\\lambda\\curl\\vec{u},\\vec{v})$ $\\lambda\\curl\\vec{u}$ 3D MixedCrossCurlIntegrator ND ND, RT V $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\vec{v})$ $\\vec{\\lambda}\\cross\\curl\\vec{u}$ 3D MixedScalarCrossCurlIntegrator ND ND, RT V $(\\vec{\\lambda}\\cross\\hat{z}\\,\\curl\\vec{u},\\vec{v})$ $\\vec{\\lambda}\\cross\\hat{z}\\,\\curl\\vec{u}$ 2D MixedCurlCurlIntegrator ND ND S, D, M $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ 3D MixedCrossCurlCurlIntegrator ND ND V $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\curl\\vec{u})$ 3D MixedScalarDivergenceIntegrator RT H1, L2 S $(\\lambda\\div\\vec{u}, v)$ $\\lambda \\div\\vec{u}$ 2D, 3D MixedDivGradIntegrator RT H1 V $(\\vec{\\lambda}\\div\\vec{u}, \\grad v)$ $-\\div(\\vec{\\lambda}\\div\\vec{u})$ 2D, 3D MixedVectorDivergenceIntegrator RT ND, RT V $(\\vec{\\lambda}\\div\\vec{u}, \\vec{v})$ $\\vec{\\lambda}\\div\\vec{u}$ 2D, 3D Other Vector Finite Element Operators Class Name Domain Range Coef. Operator Dimension Notes VectorFEDivergenceIntegrator RT H1, L2 S $(\\lambda\\div\\vec{u}, v)$ 2D, 3D Alternate implementation of MixedScalarDivergenceIntegrator. VectorFEWeakDivergenceIntegrator ND H1 S $(-\\lambda\\vec{u},\\grad v)$ 2D, 3D See MixedVectorWeakDivergenceIntegrator for a more general implementation. VectorFECurlIntegrator ND, RT ND, RT S $(\\lambda\\curl\\vec{u},\\vec{v})$ or $(\\lambda\\vec{u},\\curl\\vec{v})$ 3D If the domain is ND then the Curl operator is returned, if the range is ND then the weak Curl is returned, otherwise a failure is encountered. See MixedVectorCurlIntegrator and MixedVectorWeakCurlIntegrator for more general implementations. Vector Field Operators These operators require vector-valued basis functions constructed by using multiple copies of scalar fields. In each of these integrators the scalar basis function index increments most quickly followed by the vector index. 
This leads to local element matrices that have a block structure. Square Operators Class Name Spaces Coef. Dimension Operator Notes VectorMassIntegrator H1$^d$, L2$^d$ S, D, M 1D, 2D, 3D $(\\lambda\\vec{u},\\vec{v})$ VectorCurlCurlIntegrator H1$^d$, L2$^d$ S 2D, 3D $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ VectorDiffusionIntegrator H1$^d$, L2$^d$ S 1D, 2D, 3D $(\\lambda\\grad u_i,\\grad v_i)$ Produces a block diagonal matrix where $i\\in[0,dim)$ indicates the index of the block ElasticityIntegrator H1$^d$, L2$^d$ $2\\times$S 1D, 2D, 3D $(c_{ikjl}\\grad u_j,\\grad v_i)$ Takes two scalar coefficients $\\lambda$ and $\\mu$ and produces a $dim\\times dim$ block structured matrix where $i$ and $j$ are indices in this matrix. The coefficient is defined by $c_{ikjl} = \\lambda\\delta_{ik}\\delta_{jl}+\\mu(\\delta_{ij}\\delta_{kl}+\\delta_{il}\\delta_{jk})$ Mixed Operators Class Name Domain Range Coef. Dimension Operator VectorDivergenceIntegrator H1$^d$, L2$^d$ H1, L2 S 1D, 2D, 3D $(\\lambda\\div\\vec{u},v)$ GradientIntegrator H1 H1$^d$, L2$^d$ S 1D, 2D, 3D $(\\lambda\\grad u, \\vec{v})$ Discontinuous Galerkin Operators Class Name Domain Range Operator Notes DGTraceIntegrator H1, L2 H1, L2 $\\alpha \\left<\\rho_u(\\vec{u}\\cdot\\hat{n}) \\{v\\},[w]\\right> \\\\ + \\beta \\left<\\rho_u \\abs{\\vec{u}\\cdot\\hat{n}}[v],[w]\\right>$ DGDiffusionIntegrator H1, L2 H1, L2 $-\\left<\\{Q\\grad u\\cdot\\hat{n}\\},[v]\\right> \\\\ + \\sigma \\left<[u],\\{Q\\grad v\\cdot\\hat{n}\\}\\right> \\\\ + \\kappa \\left<\\{h^{-1}Q\\}[u],[v]\\right> $ DGElasticityIntegrator H1, L2 H1, L2 see $(\\ref{dg-elast})$ TraceJumpIntegrator $\\left< v, [w] \\right>$ NormalTraceJumpIntegrator $\\left< v, \\left[\\vec{w}\\cdot \\hat{n}\\right] \\right>$ Integrator for the DG elasticity form, for the formulations see: PhD Thesis of Jonas De Basabe, High-Order Finite Element Methods for Seismic Wave Propagation, UT Austin, 2009, p. 23, and references therein Peter Hansbo and Mats G. Larson, Discontinuous Galerkin and the Crouzeix-Raviart Element: Application to Elasticity, PREPRINT 2000-09, p.3 $$ - \\left< \\{ \\tau(u) \\}, [v] \\right> + \\alpha \\left< \\{ \\tau(v) \\}, [u] \\right> + \\kappa \\left< h^{-1} \\{ \\lambda + 2 \\mu \\} [u], [v] \\right> $$ where $ \\left< u, v\\right> = \\int_{F} u \\cdot v $, and $ F $ is a face which is either a boundary face $ F_b $ of an element $ K $ or an interior face $ F_i $ separating elements $ K_1 $ and $ K_2 $. In the bilinear form above $ \\tau(u) $ is traction, and it's also $ \\tau(u) = \\sigma(u) \\cdot \\hat{n} $, where $ \\sigma(u) $ is stress, and $ \\hat{n} $ is the unit normal vector w.r.t. to $ F $. In other words, we have $$\\label{dg-elast} - \\left< \\{ \\sigma(u) \\cdot \\hat{n} \\}, [v] \\right> + \\alpha \\left< \\{ \\sigma(v) \\cdot \\hat{n} \\}, [u] \\right> + \\kappa \\left< h^{-1} \\{ \\lambda + 2 \\mu \\} [u], [v] \\right> $$ For isotropic media $$ \\begin{split} \\sigma(u) &= \\lambda \\nabla \\cdot u I + 2 \\mu \\varepsilon(u) \\\\ &= \\lambda \\nabla \\cdot u I + 2 \\mu \\frac{1}{2} \\left( \\nabla u + \\nabla u^T \\right) \\\\ &= \\lambda \\nabla \\cdot u I + \\mu \\left( \\nabla u + \\nabla u^T \\right) \\end{split} $$ where $ I $ is identity matrix, $ \\lambda $ and $ \\mu $ are Lame coefficients (see ElasticityIntegrator), $ u, v $ are the trial and test functions, respectively. The parameters $ \\alpha $ and $ \\kappa $ determine the DG method to use (when this integrator is added to the \"broken\" ElasticityIntegrator): IIPG , $\\alpha = 0$, C. Dawson, S. 
Sun, M. Wheeler, Compatible algorithms for coupled flow and transport, Comp. Meth. Appl. Mech. Eng., 193(23-26), 2565-2580, 2004. SIPG , $\\alpha = -1$, M. Grote, A. Schneebeli, D. Schotzau, Discontinuous Galerkin Finite Element Method for the Wave Equation, SINUM, 44(6), 2408-2431, 2006. NIPG , $\\alpha = 1$, B. Riviere, M. Wheeler, V. Girault, A Priori Error Estimates for Finite Element Methods Based on Discontinuous Approximation Spaces for Elliptic Problems, SINUM, 39(3), 902-931, 2001. This is a 'Vector' integrator, i.e. defined for FE spaces using multiple copies of a scalar FE space. Special Purpose Integrators These \"integrators\" do not actually perform integrations they merely alter the results of other integrators. As such they provide a convenient and easy way to reuse existing integrators in special situations rather than needing to reimplement their functionality. Class Name Description TransposeIntegrator Returns the transpose of the local matrix computed by another BilinearFormIntegrator LumpedIntegrator Returns a diagonal local matrix where each entry is the sum of the corresponding row of a local matrix computed by another BilinearFormIntegrator (only implemented for square matrices) InverseIntegrator Returns the inverse of the local matrix computed by another BilinearFormIntegrator which produces a square local matrix SumIntegrator Returns the sum of a series of integrators with compatible dimensions (only implemented for square matrices) Weak Operators and Their Boundary Integrals Weak operators use integration by parts to move a spatial derivative onto the test function. This results in an implied boundary integral that is often assumed to be zero but can be used to apply a non-homogeneous Neumann boundary condition given a known function $u_\\mathrm{bc}$ (or $\\vec{u}_\\mathrm{bc}$ for operators with a vector domain). Operator with Scalar Range The following weak operators require the range (or test) space to be $H_1$ i.e. a scalar basis function with a gradient operator. The implied natural boundary condition when using these operators is for the continuous boundary operator (shown in the last column) to be equal to zero. On the other hand an inhomogeneous Neumann boundary condition can be applied by using a linear form boundary integrator to compute this boundary term for a known function e.g. when using the DiffusionIntegrator one could provide a known function for $\\lambda\\,\\grad u_\\mathrm{bc}$ to the BoundaryNormalLFIntegrator which would then integrate the normal component of this function over the boundary of the domain. See Linear Form Integrators for more information. Class Name Operator Continuous Op. Continuous Boundary Op. 
DiffusionIntegrator $(\\lambda\\grad u, \\grad v)$ $-\\div(\\lambda\\grad u)$ $\\lambda\\,\\hat{n}\\cdot\\grad u_\\mathrm{bc}$ MixedGradGradIntegrator $(\\lambda\\grad u, \\grad v)$ $-\\div(\\lambda\\grad u)$ $\\lambda\\,\\hat{n}\\cdot\\grad u_\\mathrm{bc}$ MixedCrossGradGradIntegrator $(\\vec{\\lambda}\\cross\\grad u,\\grad v)$ $-\\div(\\vec{\\lambda}\\cross\\grad u)$ $\\hat{n}\\cdot(\\vec{\\lambda}\\times\\grad u_\\mathrm{bc})$ MixedScalarWeakDivergenceIntegrator $(-\\vec{\\lambda}u,\\grad v)$ $\\div(\\vec{\\lambda}u)$ $-\\hat{n}\\cdot\\vec{\\lambda}\\,u_\\mathrm{bc}$ MixedScalarWeakDerivativeIntegrator $(-\\lambda u, \\ddx{v})$ $\\ddx{}(\\lambda u)\\;$ $-\\hat{n}\\cdot\\hat{x}\\,\\lambda\\,u_\\mathrm{bc}$ MixedVectorWeakDivergenceIntegrator $(-\\lambda\\vec{u},\\grad v)$ $\\div(\\lambda\\vec{u})$ $-\\hat{n}\\cdot(\\lambda\\,\\vec{u}_\\mathrm{bc})$ MixedWeakDivCrossIntegrator $(-\\vec{\\lambda}\\cross\\vec{u},\\grad v)$ $\\div(\\vec{\\lambda}\\cross\\vec{u})$ $-\\hat{n}\\cdot(\\vec{\\lambda}\\times\\vec{u}_\\mathrm{bc})$ MixedCrossCurlGradIntegrator $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\grad v)$ $-\\div(\\vec{\\lambda}\\cross\\curl\\vec{u})$ $\\hat{n}\\cdot(\\vec{\\lambda}\\cross\\curl\\vec{u}_\\mathrm{bc})$ MixedDivGradIntegrator $(\\vec{\\lambda}\\div\\vec{u}, \\grad v)$ $-\\div(\\vec{\\lambda}\\div\\vec{u})$ $\\hat{n}\\cdot(\\vec{\\lambda}\\div\\vec{u}_\\mathrm{bc})$ Operator with Vector Range The following weak operators require the range (or test) space to be H(Curl) i.e. a vector basis function with a curl operator. The implied natural boundary condition when using these operators is for the continuous boundary operator (shown in the last column) to be equal to zero. On the other hand a non-homogeneous Neumann boundary condition can be applied by using a linear form boundary integrator to compute this boundary term for a known function e.g. when using the CurlCurlIntegrator one could provide a known function for $-\\lambda\\,\\curl\\vec{u}_\\mathrm{bc}$ to the VectorFEBoundaryTangentLFIntegrator which would then integrate the product of the tangential portion of this function with that of the ND basis function over the boundary of the domain. See Linear Form Integrators for more information. Class Name Operator Continuous Op. Continuous Boundary Op. 
CurlCurlIntegrator $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ $-\\lambda\\,\\hat{n}\\times\\curl\\vec{u}_\\mathrm{bc}$ MixedCurlCurlIntegrator $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ $-\\lambda\\,\\hat{n}\\times\\curl\\vec{u}_\\mathrm{bc}$ MixedCrossCurlCurlIntegrator $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\curl\\vec{u})$ $-\\hat{n}\\times(\\vec{\\lambda}\\cross\\curl\\vec{u}_\\mathrm{bc})$ MixedCrossGradCurlIntegrator $(\\vec{\\lambda}\\cross\\grad u,\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\grad u)$ $-\\hat{n}\\times(\\vec{\\lambda}\\cross\\grad u_\\mathrm{bc})$ MixedVectorWeakCurlIntegrator $(\\lambda\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\vec{u})$ $-\\lambda\\,\\hat{n}\\times\\vec{u}_\\mathrm{bc}$ MixedScalarWeakCurlIntegrator $(\\lambda u,\\curl\\vec{v})$ $\\curl(\\lambda\\,u\\,\\hat{z})\\;$ $-\\lambda\\,u_\\mathrm{bc}\\,\\hat{n}\\times\\hat{z}$ MixedWeakCurlCrossIntegrator $(\\vec{\\lambda}\\cross\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\vec{u})$ $-\\hat{n}\\times(\\vec{\\lambda}\\cross\\vec{u}_\\mathrm{bc})$ MixedScalarWeakCurlCrossIntegrator $(\\vec{\\lambda}\\cross\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\vec{u})$ $-\\hat{n}\\times(\\vec{\\lambda}\\cross\\vec{u}_\\mathrm{bc})$ The following weak operators require the range (or test) space to be H(Div) i.e. a vector basis function with a divergence operator. The implied natural boundary condition when using these operators is for the continuous boundary operator (shown in the last column) to be equal to zero. On the other hand a non-homogeneous Neumann boundary condition can be applied by using a linear form boundary integrator to compute this boundary term for a known function e.g. when using the DivDivIntegrator one could provide a known function for $\\lambda\\,\\div\\vec{u}_\\mathrm{bc}$ to the VectorFEBoundaryFluxLFIntegrator which would then integrate the product of this function with the normal component of the RT basis function over the boundary of the domain. See Linear Form Integrators for more information. Class Name Operator Continuous Op. Continuous Boundary Op. DivDivIntegrator $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $-\\grad(\\lambda\\div\\vec{u})$ $\\lambda\\div\\vec{u}_\\mathrm{bc}\\,\\hat{n}$ MixedGradDivIntegrator $(\\vec{\\lambda}\\cdot\\grad u, \\div\\vec{v})$ $-\\grad(\\vec{\\lambda}\\cdot\\grad u)$ $\\vec{\\lambda}\\cdot\\grad u_\\mathrm{bc}\\,\\hat{n}$ MixedScalarWeakGradientIntegrator $(-\\lambda u, \\div\\vec{v})$ $\\grad(\\lambda u)$ $-\\lambda u_\\mathrm{bc}\\,\\hat{n}$ MixedWeakGradDotIntegrator $(-\\vec{\\lambda}\\cdot\\vec{u},\\div\\vec{v})$ $\\grad(\\vec{\\lambda}\\cdot\\vec{u})$ $-\\vec{\\lambda}\\cdot\\vec{u}_\\mathrm{bc}\\,\\hat{n}$ Device support A list of the MFEM integrators that support device acceleration is available here . 
MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Bilinear Form Integrators"}, {"location": "bilininteg/#bilinear-form-integrators", "text": "$ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} \\newcommand{\\abs}[1]{|#1|} $ Bilinear form integrators are at the heart of any finite element method, they are used to compute the integrals of products of basis functions over individual mesh elements (or sometimes over edges or faces). Typically each element is contained in the support of several basis functions of both the domain and range spaces, therefore bilinear integrators simultaneously compute the integrals of all combinations of the relevant basis functions from the domain and range spaces. This produces a two dimensional array of results that are arranged into a small dense matrix of integral values called a local element (stiffness) matrix . To put this another way, the BilinearForm class builds a global, sparse, finite element matrix, glb_mat , by performing the outer loop in the following pseudocode snippet whereas the BilinearFormIntegrator class performs the nested inner loops to compute the dense local element matrix, loc_mat . for each elem in elements loc_mat = 0.0 for each pt in quadrature_points for each u_j in elem for each v_i in elem loc_mat(i,j) += w(pt) * u_j(pt) v_i(pt) end end end glb_mat += loc_mat end There are three types of integrals that typically arise although many other, more exotic, forms are possible: Integrals involving Scalar basis functions: $\\int_\\Omega \\lambda\\, u v$ Integrals involving Vector basis functions: $\\int_\\Omega \\lambda\\, \\vec{u}\\cdot\\vec{v}$ Integrals involving Scalar and Vector basis functions: $\\int_\\Omega u\\,\\vec{\\lambda}\\cdot\\vec{v}$ The BilinearFormIntegrator classes allow MFEM to produce a wide variety of local element matrices without modifying the BilinearForm class. Many of the possible operators are collected below into tables that briefly describe their action and requirements. For more information on integration and developing custom BilinearFormIntegrator classes see Integration . In the tables below the Space column refers to finite element spaces which implement the following methods: Space Operator Derivative Operator H1 CalcShape CalcDShape ND CalcVShape CalcCurlShape RT CalcVShape CalcDivShape L2 CalcShape None The Coef. column refers to the types of coefficients that are available. A boldface coefficient type is required whereas most coefficients are optional. Coef. Type of Function Argument Type S Scalar Valued Function Coefficient V Vector Valued Function VectorCoefficient D Diagonal Matrix Function VectorCoefficient M General Matrix Function MatrixCoefficient Notation: The integrals performed by the various integrators listed below are shown using inner product notation, $(\\cdot,\\cdot)$, defined as follows. $$(\\lambda u, v)\\equiv \\int_\\Omega \\lambda u v$$ $$(\\lambda\\vec{u}, \\vec{v})\\equiv \\int_\\Omega\\lambda\\vec{u}\\cdot\\vec{v}$$ Where $u$ or $\\vec{u}$ is a function in the domain (or trial) space and $v$ or $\\vec{v}$ is in the range (or test) space. For boundary integrators, the integrals are over $\\partial \\Omega$. Face integrators integrate over the interior and boundary faces of mesh elements and are denoted with $\\left<\\cdot,\\cdot\\right>$. 
Note that any operators involving a derivative of the range function $v$ or $\\vec{v}$ are computed using integration by parts. This leads to a boundary integral which can be used to apply Neumann boundary conditions. Some of these operators are listed along with their boundary terms in section Weak Operators .", "title": "Bilinear Form Integrators"}, {"location": "bilininteg/#scalar-field-operators", "text": "These operators require scalar-valued trial spaces. Many of these operators will work with either H1 or L2 basis functions but some that require a gradient operator should be used with H1.", "title": "Scalar Field Operators"}, {"location": "bilininteg/#square-operators", "text": "These integrators are designed to be used with the BilinearForm object to assemble square linear operators. Class Name Spaces Coef. Operator Continuous Op. Dimension MassIntegrator H1, L2 S $(\\lambda u, v)$ $\\lambda u$ 1D, 2D, 3D DiffusionIntegrator H1 S, M $(\\lambda\\grad u, \\grad v)$ $-\\div(\\lambda\\grad u)$ 1D, 2D, 3D", "title": "Square Operators"}, {"location": "bilininteg/#mixed-operators", "text": "These integrators are designed to be used with the MixedBilinearForm object to assemble square or rectangular linear operators. Class Name Domain Range Coef. Operator Continuous Op. Dimension MixedScalarMassIntegrator H1, L2 H1, L2 S $(\\lambda u, v)$ $\\lambda u$ 1D, 2D, 3D MixedScalarWeakDivergenceIntegrator H1, L2 H1 V $(-\\vec{\\lambda}u,\\grad v)$ $\\div(\\vec{\\lambda}u)$ 2D, 3D MixedScalarWeakDerivativeIntegrator H1, L2 H1 S $(-\\lambda u, \\ddx{v})$ $\\ddx{}(\\lambda u)\\;$ 1D MixedScalarWeakCurlIntegrator H1, L2 ND S $(\\lambda u,\\curl\\vec{v})$ $\\curl(\\lambda\\,u\\,\\hat{z})\\;$ 2D MixedVectorProductIntegrator H1, L2 ND, RT V $(\\vec{\\lambda}u,\\vec{v})$ $\\vec{\\lambda}u$ 2D, 3D MixedScalarWeakCrossProductIntegrator H1, L2 ND, RT V $(\\vec{\\lambda} u\\,\\hat{z},\\vec{v})$ $\\vec{\\lambda}\\times\\,\\hat{z}\\,u$ 2D MixedScalarWeakGradientIntegrator H1, L2 RT S $(-\\lambda u, \\div\\vec{v})$ $\\grad(\\lambda u)$ 2D, 3D MixedDirectionalDerivativeIntegrator H1 H1, L2 V $(\\vec{\\lambda}\\cdot\\grad u, v)$ $\\vec{\\lambda}\\cdot\\grad u$ 2D, 3D MixedScalarCrossGradIntegrator H1 H1, L2 V $(\\vec{\\lambda}\\cross\\grad u, v)$ $\\vec{\\lambda}\\cross\\grad u$ 2D MixedScalarDerivativeIntegrator H1 H1, L2 S $(\\lambda \\ddx{u}, v)$ $\\lambda\\ddx{u}\\;$ 1D MixedGradGradIntegrator H1 H1 S, D, M $(\\lambda\\grad u,\\grad v)$ $-\\div(\\lambda\\grad u)$ 2D, 3D MixedCrossGradGradIntegrator H1 H1 V $(\\vec{\\lambda}\\cross\\grad u,\\grad v)$ $-\\div(\\vec{\\lambda}\\cross\\grad u)$ 2D, 3D MixedVectorGradientIntegrator H1 ND, RT S, D, M $(\\lambda\\grad u,\\vec{v})$ $\\lambda\\grad u$ 2D, 3D MixedCrossGradIntegrator H1 ND, RT V $(\\vec{\\lambda}\\cross\\grad u,\\vec{v})$ $\\vec{\\lambda}\\cross\\grad u$ 3D MixedCrossGradCurlIntegrator H1 ND V $(\\vec{\\lambda}\\times\\grad u, \\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\times\\grad u)$ 3D MixedGradDivIntegrator H1 RT V $(\\vec{\\lambda}\\cdot\\grad u, \\div\\vec{v})$ $-\\grad(\\vec{\\lambda}\\cdot\\grad u)$ 2D, 3D", "title": "Mixed Operators"}, {"location": "bilininteg/#other-scalar-operators", "text": "Class Name Domain Range Coef. Dimension Operator Notes DerivativeIntegrator H1, L2 H1, L2 S 1D, 2D, 3D $(\\lambda\\frac{\\partial u}{\\partial x_i}, v)$ The direction index \"i\" is passed by the user. See MixedDirectionalDerivativeIntegrator for a more general alternative. 
ConvectionIntegrator H1 H1 V 1D, 2D, 3D $(\\vec{\\lambda}\\cdot\\grad u, v)$ This is designed to be used with BilinearForm to produce a square matrix. See MixedDirectionalDerivativeIntegrator for a rectangular version. GroupConvectionIntegrator H1 H1 V 1D, 2D, 3D $(\\alpha\\vec{\\lambda}\\cdot\\grad u, v)$ Uses the \"group\" finite element formulation for advection due to Fletcher . BoundaryMassIntegrator H1, L2 H1, L2 S 1D, 2D, 3D $(\\lambda\\,u,v)$ Computes a mass matrix on the exterior faces of a domain. See MassIntegrator above for a more general version.", "title": "Other Scalar Operators"}, {"location": "bilininteg/#vector-finite-element-operators", "text": "These operators require vector-valued basis functions in the trial space. Many of these operators will work with either ND or RT basis functions but others require one or the other.", "title": "Vector Finite Element Operators"}, {"location": "bilininteg/#square-operators_1", "text": "These integrators are designed to be used with the BilinearForm object to assemble square linear operators. Class Name Spaces Coef. Operator Continuous Op. Dimension VectorFEMassIntegrator ND, RT S, D, M $(\\lambda\\vec{u},\\vec{v})$ $\\lambda\\vec{u}$ 2D, 3D CurlCurlIntegrator ND S, D, M $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ 2D, 3D DivDivIntegrator RT S $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $-\\grad(\\lambda\\div\\vec{u})$ 2D, 3D", "title": "Square Operators"}, {"location": "bilininteg/#mixed-operators_1", "text": "These integrators are designed to be used with the MixedBilinearForm object to assemble square or rectangular linear operators. Class Name Domain Range Coef. Operator Continuous Op. Dimension MixedDotProductIntegrator ND, RT H1, L2 V $(\\vec{\\lambda}\\cdot\\vec{u},v)$ $\\vec{\\lambda}\\cdot\\vec{u}$ 2D, 3D MixedScalarCrossProductIntegrator ND, RT H1, L2 V $(\\vec{\\lambda}\\cross\\vec{u},v)$ $\\vec{\\lambda}\\cross\\vec{u}$ 2D MixedVectorWeakDivergenceIntegrator ND, RT H1 S, D, M $(-\\lambda\\vec{u},\\grad v)$ $\\div(\\lambda\\vec{u})$ 2D, 3D MixedWeakDivCrossIntegrator ND, RT H1 V $(-\\vec{\\lambda}\\cross\\vec{u},\\grad v)$ $\\div(\\vec{\\lambda}\\cross\\vec{u})$ 3D MixedVectorMassIntegrator ND, RT ND, RT S, D, M $(\\lambda\\vec{u},\\vec{v})$ $\\lambda\\vec{u}$ 2D, 3D MixedCrossProductIntegrator ND, RT ND, RT V $(\\vec{\\lambda}\\cross\\vec{u},\\vec{v})$ $\\vec{\\lambda}\\cross\\vec{u}$ 3D MixedVectorWeakCurlIntegrator ND, RT ND S, D, M $(\\lambda\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\vec{u})$ 3D MixedWeakCurlCrossIntegrator ND, RT ND V $(\\vec{\\lambda}\\cross\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\vec{u})$ 3D MixedScalarWeakCurlCrossIntegrator ND, RT ND V $(\\vec{\\lambda}\\cross\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\vec{u})$ 2D MixedWeakGradDotIntegrator ND, RT RT V $(-\\vec{\\lambda}\\cdot\\vec{u},\\div\\vec{v})$ $\\grad(\\vec{\\lambda}\\cdot\\vec{u})$ 2D, 3D MixedScalarCurlIntegrator ND H1, L2 S $(\\lambda\\curl\\vec{u},v)$ $\\lambda\\curl\\vec{u}\\;$ 2D MixedCrossCurlGradIntegrator ND H1 V $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\grad v)$ $-\\div(\\vec{\\lambda}\\cross\\curl\\vec{u})$ 3D MixedVectorCurlIntegrator ND ND, RT S, D, M $(\\lambda\\curl\\vec{u},\\vec{v})$ $\\lambda\\curl\\vec{u}$ 3D MixedCrossCurlIntegrator ND ND, RT V $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\vec{v})$ $\\vec{\\lambda}\\cross\\curl\\vec{u}$ 3D MixedScalarCrossCurlIntegrator ND ND, RT V $(\\vec{\\lambda}\\cross\\hat{z}\\,\\curl\\vec{u},\\vec{v})$ 
$\\vec{\\lambda}\\cross\\hat{z}\\,\\curl\\vec{u}$ 2D MixedCurlCurlIntegrator ND ND S, D, M $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ 3D MixedCrossCurlCurlIntegrator ND ND V $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\curl\\vec{u})$ 3D MixedScalarDivergenceIntegrator RT H1, L2 S $(\\lambda\\div\\vec{u}, v)$ $\\lambda \\div\\vec{u}$ 2D, 3D MixedDivGradIntegrator RT H1 V $(\\vec{\\lambda}\\div\\vec{u}, \\grad v)$ $-\\div(\\vec{\\lambda}\\div\\vec{u})$ 2D, 3D MixedVectorDivergenceIntegrator RT ND, RT V $(\\vec{\\lambda}\\div\\vec{u}, \\vec{v})$ $\\vec{\\lambda}\\div\\vec{u}$ 2D, 3D", "title": "Mixed Operators"}, {"location": "bilininteg/#other-vector-finite-element-operators", "text": "Class Name Domain Range Coef. Operator Dimension Notes VectorFEDivergenceIntegrator RT H1, L2 S $(\\lambda\\div\\vec{u}, v)$ 2D, 3D Alternate implementation of MixedScalarDivergenceIntegrator. VectorFEWeakDivergenceIntegrator ND H1 S $(-\\lambda\\vec{u},\\grad v)$ 2D, 3D See MixedVectorWeakDivergenceIntegrator for a more general implementation. VectorFECurlIntegrator ND, RT ND, RT S $(\\lambda\\curl\\vec{u},\\vec{v})$ or $(\\lambda\\vec{u},\\curl\\vec{v})$ 3D If the domain is ND then the Curl operator is returned, if the range is ND then the weak Curl is returned, otherwise a failure is encountered. See MixedVectorCurlIntegrator and MixedVectorWeakCurlIntegrator for more general implementations.", "title": "Other Vector Finite Element Operators"}, {"location": "bilininteg/#vector-field-operators", "text": "These operators require vector-valued basis functions constructed by using multiple copies of scalar fields. In each of these integrators the scalar basis function index increments most quickly followed by the vector index. This leads to local element matrices that have a block structure.", "title": "Vector Field Operators"}, {"location": "bilininteg/#square-operators_2", "text": "Class Name Spaces Coef. Dimension Operator Notes VectorMassIntegrator H1$^d$, L2$^d$ S, D, M 1D, 2D, 3D $(\\lambda\\vec{u},\\vec{v})$ VectorCurlCurlIntegrator H1$^d$, L2$^d$ S 2D, 3D $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ VectorDiffusionIntegrator H1$^d$, L2$^d$ S 1D, 2D, 3D $(\\lambda\\grad u_i,\\grad v_i)$ Produces a block diagonal matrix where $i\\in[0,dim)$ indicates the index of the block ElasticityIntegrator H1$^d$, L2$^d$ $2\\times$S 1D, 2D, 3D $(c_{ikjl}\\grad u_j,\\grad v_i)$ Takes two scalar coefficients $\\lambda$ and $\\mu$ and produces a $dim\\times dim$ block structured matrix where $i$ and $j$ are indices in this matrix. The coefficient is defined by $c_{ikjl} = \\lambda\\delta_{ik}\\delta_{jl}+\\mu(\\delta_{ij}\\delta_{kl}+\\delta_{il}\\delta_{jk})$", "title": "Square Operators"}, {"location": "bilininteg/#mixed-operators_2", "text": "Class Name Domain Range Coef. 
Dimension Operator VectorDivergenceIntegrator H1$^d$, L2$^d$ H1, L2 S 1D, 2D, 3D $(\\lambda\\div\\vec{u},v)$ GradientIntegrator H1 H1$^d$, L2$^d$ S 1D, 2D, 3D $(\\lambda\\grad u, \\vec{v})$", "title": "Mixed Operators"}, {"location": "bilininteg/#discontinuous-galerkin-operators", "text": "Class Name Domain Range Operator Notes DGTraceIntegrator H1, L2 H1, L2 $\\alpha \\left<\\rho_u(\\vec{u}\\cdot\\hat{n}) \\{v\\},[w]\\right> \\\\ + \\beta \\left<\\rho_u \\abs{\\vec{u}\\cdot\\hat{n}}[v],[w]\\right>$ DGDiffusionIntegrator H1, L2 H1, L2 $-\\left<\\{Q\\grad u\\cdot\\hat{n}\\},[v]\\right> \\\\ + \\sigma \\left<[u],\\{Q\\grad v\\cdot\\hat{n}\\}\\right> \\\\ + \\kappa \\left<\\{h^{-1}Q\\}[u],[v]\\right> $ DGElasticityIntegrator H1, L2 H1, L2 see $(\\ref{dg-elast})$ TraceJumpIntegrator $\\left< v, [w] \\right>$ NormalTraceJumpIntegrator $\\left< v, \\left[\\vec{w}\\cdot \\hat{n}\\right] \\right>$ Integrator for the DG elasticity form, for the formulations see: PhD Thesis of Jonas De Basabe, High-Order Finite Element Methods for Seismic Wave Propagation, UT Austin, 2009, p. 23, and references therein Peter Hansbo and Mats G. Larson, Discontinuous Galerkin and the Crouzeix-Raviart Element: Application to Elasticity, PREPRINT 2000-09, p.3 $$ - \\left< \\{ \\tau(u) \\}, [v] \\right> + \\alpha \\left< \\{ \\tau(v) \\}, [u] \\right> + \\kappa \\left< h^{-1} \\{ \\lambda + 2 \\mu \\} [u], [v] \\right> $$ where $ \\left< u, v\\right> = \\int_{F} u \\cdot v $, and $ F $ is a face which is either a boundary face $ F_b $ of an element $ K $ or an interior face $ F_i $ separating elements $ K_1 $ and $ K_2 $. In the bilinear form above $ \\tau(u) $ is traction, and it's also $ \\tau(u) = \\sigma(u) \\cdot \\hat{n} $, where $ \\sigma(u) $ is stress, and $ \\hat{n} $ is the unit normal vector w.r.t. to $ F $. In other words, we have $$\\label{dg-elast} - \\left< \\{ \\sigma(u) \\cdot \\hat{n} \\}, [v] \\right> + \\alpha \\left< \\{ \\sigma(v) \\cdot \\hat{n} \\}, [u] \\right> + \\kappa \\left< h^{-1} \\{ \\lambda + 2 \\mu \\} [u], [v] \\right> $$ For isotropic media $$ \\begin{split} \\sigma(u) &= \\lambda \\nabla \\cdot u I + 2 \\mu \\varepsilon(u) \\\\ &= \\lambda \\nabla \\cdot u I + 2 \\mu \\frac{1}{2} \\left( \\nabla u + \\nabla u^T \\right) \\\\ &= \\lambda \\nabla \\cdot u I + \\mu \\left( \\nabla u + \\nabla u^T \\right) \\end{split} $$ where $ I $ is identity matrix, $ \\lambda $ and $ \\mu $ are Lame coefficients (see ElasticityIntegrator), $ u, v $ are the trial and test functions, respectively. The parameters $ \\alpha $ and $ \\kappa $ determine the DG method to use (when this integrator is added to the \"broken\" ElasticityIntegrator): IIPG , $\\alpha = 0$, C. Dawson, S. Sun, M. Wheeler, Compatible algorithms for coupled flow and transport, Comp. Meth. Appl. Mech. Eng., 193(23-26), 2565-2580, 2004. SIPG , $\\alpha = -1$, M. Grote, A. Schneebeli, D. Schotzau, Discontinuous Galerkin Finite Element Method for the Wave Equation, SINUM, 44(6), 2408-2431, 2006. NIPG , $\\alpha = 1$, B. Riviere, M. Wheeler, V. Girault, A Priori Error Estimates for Finite Element Methods Based on Discontinuous Approximation Spaces for Elliptic Problems, SINUM, 39(3), 902-931, 2001. This is a 'Vector' integrator, i.e. defined for FE spaces using multiple copies of a scalar FE space.", "title": "Discontinuous Galerkin Operators"}, {"location": "bilininteg/#special-purpose-integrators", "text": "These \"integrators\" do not actually perform integrations they merely alter the results of other integrators. 
As such they provide a convenient and easy way to reuse existing integrators in special situations rather than needing to reimplement their functionality. Class Name Description TransposeIntegrator Returns the transpose of the local matrix computed by another BilinearFormIntegrator LumpedIntegrator Returns a diagonal local matrix where each entry is the sum of the corresponding row of a local matrix computed by another BilinearFormIntegrator (only implemented for square matrices) InverseIntegrator Returns the inverse of the local matrix computed by another BilinearFormIntegrator which produces a square local matrix SumIntegrator Returns the sum of a series of integrators with compatible dimensions (only implemented for square matrices)", "title": "Special Purpose Integrators"}, {"location": "bilininteg/#weak-operators-and-their-boundary-integrals", "text": "Weak operators use integration by parts to move a spatial derivative onto the test function. This results in an implied boundary integral that is often assumed to be zero but can be used to apply a non-homogeneous Neumann boundary condition given a known function $u_\\mathrm{bc}$ (or $\\vec{u}_\\mathrm{bc}$ for operators with a vector domain).", "title": "Weak Operators and Their Boundary Integrals"}, {"location": "bilininteg/#operator-with-scalar-range", "text": "The following weak operators require the range (or test) space to be $H_1$ i.e. a scalar basis function with a gradient operator. The implied natural boundary condition when using these operators is for the continuous boundary operator (shown in the last column) to be equal to zero. On the other hand an inhomogeneous Neumann boundary condition can be applied by using a linear form boundary integrator to compute this boundary term for a known function e.g. when using the DiffusionIntegrator one could provide a known function for $\\lambda\\,\\grad u_\\mathrm{bc}$ to the BoundaryNormalLFIntegrator which would then integrate the normal component of this function over the boundary of the domain. See Linear Form Integrators for more information. Class Name Operator Continuous Op. Continuous Boundary Op. 
DiffusionIntegrator $(\\lambda\\grad u, \\grad v)$ $-\\div(\\lambda\\grad u)$ $\\lambda\\,\\hat{n}\\cdot\\grad u_\\mathrm{bc}$ MixedGradGradIntegrator $(\\lambda\\grad u, \\grad v)$ $-\\div(\\lambda\\grad u)$ $\\lambda\\,\\hat{n}\\cdot\\grad u_\\mathrm{bc}$ MixedCrossGradGradIntegrator $(\\vec{\\lambda}\\cross\\grad u,\\grad v)$ $-\\div(\\vec{\\lambda}\\cross\\grad u)$ $\\hat{n}\\cdot(\\vec{\\lambda}\\times\\grad u_\\mathrm{bc})$ MixedScalarWeakDivergenceIntegrator $(-\\vec{\\lambda}u,\\grad v)$ $\\div(\\vec{\\lambda}u)$ $-\\hat{n}\\cdot\\vec{\\lambda}\\,u_\\mathrm{bc}$ MixedScalarWeakDerivativeIntegrator $(-\\lambda u, \\ddx{v})$ $\\ddx{}(\\lambda u)\\;$ $-\\hat{n}\\cdot\\hat{x}\\,\\lambda\\,u_\\mathrm{bc}$ MixedVectorWeakDivergenceIntegrator $(-\\lambda\\vec{u},\\grad v)$ $\\div(\\lambda\\vec{u})$ $-\\hat{n}\\cdot(\\lambda\\,\\vec{u}_\\mathrm{bc})$ MixedWeakDivCrossIntegrator $(-\\vec{\\lambda}\\cross\\vec{u},\\grad v)$ $\\div(\\vec{\\lambda}\\cross\\vec{u})$ $-\\hat{n}\\cdot(\\vec{\\lambda}\\times\\vec{u}_\\mathrm{bc})$ MixedCrossCurlGradIntegrator $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\grad v)$ $-\\div(\\vec{\\lambda}\\cross\\curl\\vec{u})$ $\\hat{n}\\cdot(\\vec{\\lambda}\\cross\\curl\\vec{u}_\\mathrm{bc})$ MixedDivGradIntegrator $(\\vec{\\lambda}\\div\\vec{u}, \\grad v)$ $-\\div(\\vec{\\lambda}\\div\\vec{u})$ $\\hat{n}\\cdot(\\vec{\\lambda}\\div\\vec{u}_\\mathrm{bc})$", "title": "Operator with Scalar Range"}, {"location": "bilininteg/#operator-with-vector-range", "text": "The following weak operators require the range (or test) space to be H(Curl) i.e. a vector basis function with a curl operator. The implied natural boundary condition when using these operators is for the continuous boundary operator (shown in the last column) to be equal to zero. On the other hand a non-homogeneous Neumann boundary condition can be applied by using a linear form boundary integrator to compute this boundary term for a known function e.g. when using the CurlCurlIntegrator one could provide a known function for $-\\lambda\\,\\curl\\vec{u}_\\mathrm{bc}$ to the VectorFEBoundaryTangentLFIntegrator which would then integrate the product of the tangential portion of this function with that of the ND basis function over the boundary of the domain. See Linear Form Integrators for more information. Class Name Operator Continuous Op. Continuous Boundary Op. 
CurlCurlIntegrator $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ $-\\lambda\\,\\hat{n}\\times\\curl\\vec{u}_\\mathrm{bc}$ MixedCurlCurlIntegrator $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ $-\\lambda\\,\\hat{n}\\times\\curl\\vec{u}_\\mathrm{bc}$ MixedCrossCurlCurlIntegrator $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\curl\\vec{u})$ $-\\hat{n}\\times(\\vec{\\lambda}\\cross\\curl\\vec{u}_\\mathrm{bc})$ MixedCrossGradCurlIntegrator $(\\vec{\\lambda}\\cross\\grad u,\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\grad u)$ $-\\hat{n}\\times(\\vec{\\lambda}\\cross\\grad u_\\mathrm{bc})$ MixedVectorWeakCurlIntegrator $(\\lambda\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\vec{u})$ $-\\lambda\\,\\hat{n}\\times\\vec{u}_\\mathrm{bc}$ MixedScalarWeakCurlIntegrator $(\\lambda u,\\curl\\vec{v})$ $\\curl(\\lambda\\,u\\,\\hat{z})\\;$ $-\\lambda\\,u_\\mathrm{bc}\\,\\hat{n}\\times\\hat{z}$ MixedWeakCurlCrossIntegrator $(\\vec{\\lambda}\\cross\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\vec{u})$ $-\\hat{n}\\times(\\vec{\\lambda}\\cross\\vec{u}_\\mathrm{bc})$ MixedScalarWeakCurlCrossIntegrator $(\\vec{\\lambda}\\cross\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\vec{u})$ $-\\hat{n}\\times(\\vec{\\lambda}\\cross\\vec{u}_\\mathrm{bc})$ The following weak operators require the range (or test) space to be H(Div) i.e. a vector basis function with a divergence operator. The implied natural boundary condition when using these operators is for the continuous boundary operator (shown in the last column) to be equal to zero. On the other hand a non-homogeneous Neumann boundary condition can be applied by using a linear form boundary integrator to compute this boundary term for a known function e.g. when using the DivDivIntegrator one could provide a known function for $\\lambda\\,\\div\\vec{u}_\\mathrm{bc}$ to the VectorFEBoundaryFluxLFIntegrator which would then integrate the product of this function with the normal component of the RT basis function over the boundary of the domain. See Linear Form Integrators for more information. Class Name Operator Continuous Op. Continuous Boundary Op. DivDivIntegrator $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $-\\grad(\\lambda\\div\\vec{u})$ $\\lambda\\div\\vec{u}_\\mathrm{bc}\\,\\hat{n}$ MixedGradDivIntegrator $(\\vec{\\lambda}\\cdot\\grad u, \\div\\vec{v})$ $-\\grad(\\vec{\\lambda}\\cdot\\grad u)$ $\\vec{\\lambda}\\cdot\\grad u_\\mathrm{bc}\\,\\hat{n}$ MixedScalarWeakGradientIntegrator $(-\\lambda u, \\div\\vec{v})$ $\\grad(\\lambda u)$ $-\\lambda u_\\mathrm{bc}\\,\\hat{n}$ MixedWeakGradDotIntegrator $(-\\vec{\\lambda}\\cdot\\vec{u},\\div\\vec{v})$ $\\grad(\\vec{\\lambda}\\cdot\\vec{u})$ $-\\vec{\\lambda}\\cdot\\vec{u}_\\mathrm{bc}\\,\\hat{n}$", "title": "Operator with Vector Range"}, {"location": "bilininteg/#device-support", "text": "A list of the MFEM integrators that support device acceleration is available here . MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Device support"}, {"location": "building/", "text": "Building MFEM A simple tutorial on how to build and run the serial and parallel versions of MFEM together with GLVis. For more details, see the INSTALL file and make help . 
In addition to the native build system described below, MFEM packages are also available in the following package managers: Homebrew Spack OpenHPC MFEM can also be installed as part of xSDK E4S FASTMath RADIUSS CEED A pre-built version of MFEM is also available in a container form, see our AWS tutorial and the mfem/containers repo. Instructions Download MFEM and GLVis https://mfem.org https://glvis.org Below we assume that we are working with versions mfem-4.5 and glvis-4.2 . Serial version of MFEM and GLVis Put everything in the same directory: ~> ls glvis-4.2.tgz mfem-4.5.tgz Build the serial version of MFEM: ~> tar -zxvf mfem-4.5.tgz ~> cd mfem-4.5 ~/mfem-4.5> make serial -j ~/mfem-4.5> cd .. Build GLVis: ~> tar -zxvf glvis-4.2.tgz ~> cd glvis-4.2 ~/glvis-4.2> make MFEM_DIR=../mfem-4.5 -j ~/glvis-4.2> cd .. That's it! The MFEM library can be found in mfem-4.5/libmfem.a , while the glvis executable will be in the glvis-4.2 directory. Note: as of version 4.0, GLVis has additional dependencies that need to be installed first, see its building documentation . To start a GLVis server, open a new terminal and type ~> cd glvis-4.2 ~/glvis-4.2> ./glvis The serial examples can be built with: ~> cd mfem-4.5/examples ~/mfem-4.5/examples> make -j All serial examples and miniapps can be built with: ~> cd mfem-4.5 ~/mfem-4.5> make all -j Parallel MPI version of MFEM Download hypre and METIS from https://github.com/hypre-space/hypre/tags https://github.com/mfem/tpls Note: We recommend MFEM's mirror of metis-4.0.3 and metis-5.1.0 above because the METIS webpage , is often down and we don't support yet the new GitHub repo . Below we assume that we are working with hypre-2.26.0 and metis-4.0.3 (see below for METIS version 5 and later). We also assume that the serial version of MFEM and GLVis have been built as described above. Put everything in the same directory: ~> ls glvis-4.2/ hypre-2.26.0.tar.gz metis-4.0.3.tar.gz mfem-4.5/ Build hypre: ~> tar -zxvf hypre-2.26.0.tar.gz ~> cd hypre-2.26.0/src/ ~/hypre-2.26.0/src> ./configure --disable-fortran ~/hypre-2.26.0/src> make -j ~/hypre-2.26.0/src> cd ../.. ~> ln -s hypre-2.26.0 hypre Build METIS: ~> tar -zxvf metis-4.0.3.tar.gz ~> cd metis-4.0.3 ~/metis-4.0.3> make OPTFLAGS=-Wno-error=implicit-function-declaration ~/metis-4.0.3> cd .. ~> ln -s metis-4.0.3 metis-4.0 (If you are using METIS 5, see the instructions below .) Build the parallel version of MFEM: ~> cd mfem-4.5 ~/mfem-4.5> make parallel -j ~/mfem-4.5> cd .. Note that if hypre or METIS are in different locations, or you have different versions of these libraries, you will need to update the corresponding paths in the config/defaults.mk file, or create you own config/user.mk , as described in the INSTALL file. The parallel examples can be built with: ~> cd mfem-4.5/examples ~/mfem-4.5/examples> make -j The serial examples can also be built with the parallel version of the library, e.g. ~/mfem-4.5/examples> make ex1 ex2 All parallel examples and miniapps can be built with: ~> cd mfem-4.5 ~/mfem-4.5> make all -j One can also use the parallel library to optionally (re-)build GLVis: ~> cd glvis-4.2 ~/glvis-4.2> make clean ~/glvis-4.2> make MFEM_DIR=../mfem-4.5 -j This, however, is generally not recommended , since the additional MPI thread can interfere with the other GLVis threads. 
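A quick way to confirm that the parallel library is usable is to compile and run a tiny MPI program against it, linking with libmfem.a as well as the hypre and METIS libraries built above (the exact compile line depends on your MPI C++ wrapper and install paths). The program below is only an illustrative sketch and is not part of the MFEM distribution:

#include "mfem.hpp"
#include <iostream>

int main(int argc, char *argv[])
{
   // Initialize MPI and hypre through MFEM's helper classes.
   mfem::Mpi::Init(argc, argv);
   mfem::Hypre::Init();

   // Build a tiny serial mesh and partition it across the MPI ranks.
   mfem::Mesh mesh = mfem::Mesh::MakeCartesian2D(4, 4, mfem::Element::QUADRILATERAL);
   mfem::ParMesh pmesh(MPI_COMM_WORLD, mesh);

   if (mfem::Mpi::Root())
   {
      std::cout << "Parallel MFEM build looks OK on " << mfem::Mpi::WorldSize()
                << " ranks." << std::endl;
   }
   return 0;
}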
Parallel build using METIS 5 Build METIS 5: ~> tar zvxf metis-5.1.0.tar.gz ~> cd metis-5.1.0 ~/metis-5.1.0> make BUILDDIR=lib config ~/metis-5.1.0> make BUILDDIR=lib ~/metis-5.1.0> cp lib/libmetis/libmetis.a lib Build the parallel version of MFEM, setting the options MFEM_USE_METIS_5 and METIS_DIR , e.g.: ~> cd mfem-4.5 ~/mfem-4.5> make parallel -j MFEM_USE_METIS_5=YES METIS_DIR=@MFEM_DIR@/../metis-5.1.0 CUDA version of MFEM To build the CUDA version of MFEM, one needs to specify the CUDA compute capability , with the CUDA_ARCH flag. In the examples below we use CUDA_ARCH=sm_70 to build the MFEM serial and parallel versions for compute capability 7.0 (V100). Build the serial CUDA version of MFEM: ~/mfem> make cuda CUDA_ARCH=sm_70 -j Build the parallel CUDA version of MFEM: ~/mfem> make pcuda CUDA_ARCH=sm_70 -j To use hypre with CUDA support in MFEM, follow the instructions above but configure it with the following command, specifying the CUDA compute capability: ~/hypre-2.26.0/src> ./configure --with-cuda --with-gpu-arch=\"70\" --disable-fortran HIP version of MFEM To build the HIP version of MFEM, one needs to specify the HIP architecture , with the HIP_ARCH flag. In the examples below we use HIP_ARCH=gfx908 to build the MFEM serial and parallel versions for gfx908 (MI100). Build the serial HIP version of MFEM: ~/mfem> make hip HIP_ARCH=gfx908 -j Build the parallel HIP version of MFEM: ~/mfem> make phip HIP_ARCH=gfx908 -j To use hypre with HIP support in MFEM, follow the instructions above but configure it with the following command, specifying the HIP architecture: ~/hypre-2.26.0/src> ./configure --with-hip --with-gpu-arch=\"gfx908\" --disable-fortran Installing MFEM with Spack If Spack is already available on your system and is visible in your PATH , you can install the MFEM software simply with: spack install mfem To enable package testing during the build process, use instead: spack install -v --test=all mfem If you don't have Spack, you can download it and install MFEM with the following commands: git clone https://github.com/spack/spack.git cd spack ./bin/spack install -v mfem", "title": "_Building MFEM"}, {"location": "building/#building-mfem", "text": "A simple tutorial on how to build and run the serial and parallel versions of MFEM together with GLVis. For more details, see the INSTALL file and make help . In addition to the native build system described below, MFEM packages are also available in the following package managers: Homebrew Spack OpenHPC MFEM can also be installed as part of xSDK E4S FASTMath RADIUSS CEED A pre-built version of MFEM is also available in a container form, see our AWS tutorial and the mfem/containers repo.", "title": "Building MFEM"}, {"location": "building/#instructions", "text": "Download MFEM and GLVis https://mfem.org https://glvis.org Below we assume that we are working with versions mfem-4.5 and glvis-4.2 .", "title": "Instructions"}, {"location": "building/#serial-version-of-mfem-and-glvis", "text": "Put everything in the same directory: ~> ls glvis-4.2.tgz mfem-4.5.tgz Build the serial version of MFEM: ~> tar -zxvf mfem-4.5.tgz ~> cd mfem-4.5 ~/mfem-4.5> make serial -j ~/mfem-4.5> cd .. Build GLVis: ~> tar -zxvf glvis-4.2.tgz ~> cd glvis-4.2 ~/glvis-4.2> make MFEM_DIR=../mfem-4.5 -j ~/glvis-4.2> cd .. That's it! The MFEM library can be found in mfem-4.5/libmfem.a , while the glvis executable will be in the glvis-4.2 directory. 
Note: as of version 4.0, GLVis has additional dependencies that need to be installed first, see its building documentation . To start a GLVis server, open a new terminal and type ~> cd glvis-4.2 ~/glvis-4.2> ./glvis The serial examples can be built with: ~> cd mfem-4.5/examples ~/mfem-4.5/examples> make -j All serial examples and miniapps can be built with: ~> cd mfem-4.5 ~/mfem-4.5> make all -j", "title": "Serial version of MFEM and GLVis"}, {"location": "building/#parallel-mpi-version-of-mfem", "text": "Download hypre and METIS from https://github.com/hypre-space/hypre/tags https://github.com/mfem/tpls Note: We recommend MFEM's mirror of metis-4.0.3 and metis-5.1.0 above because the METIS webpage , is often down and we don't support yet the new GitHub repo . Below we assume that we are working with hypre-2.26.0 and metis-4.0.3 (see below for METIS version 5 and later). We also assume that the serial version of MFEM and GLVis have been built as described above. Put everything in the same directory: ~> ls glvis-4.2/ hypre-2.26.0.tar.gz metis-4.0.3.tar.gz mfem-4.5/ Build hypre: ~> tar -zxvf hypre-2.26.0.tar.gz ~> cd hypre-2.26.0/src/ ~/hypre-2.26.0/src> ./configure --disable-fortran ~/hypre-2.26.0/src> make -j ~/hypre-2.26.0/src> cd ../.. ~> ln -s hypre-2.26.0 hypre Build METIS: ~> tar -zxvf metis-4.0.3.tar.gz ~> cd metis-4.0.3 ~/metis-4.0.3> make OPTFLAGS=-Wno-error=implicit-function-declaration ~/metis-4.0.3> cd .. ~> ln -s metis-4.0.3 metis-4.0 (If you are using METIS 5, see the instructions below .) Build the parallel version of MFEM: ~> cd mfem-4.5 ~/mfem-4.5> make parallel -j ~/mfem-4.5> cd .. Note that if hypre or METIS are in different locations, or you have different versions of these libraries, you will need to update the corresponding paths in the config/defaults.mk file, or create you own config/user.mk , as described in the INSTALL file. The parallel examples can be built with: ~> cd mfem-4.5/examples ~/mfem-4.5/examples> make -j The serial examples can also be built with the parallel version of the library, e.g. ~/mfem-4.5/examples> make ex1 ex2 All parallel examples and miniapps can be built with: ~> cd mfem-4.5 ~/mfem-4.5> make all -j One can also use the parallel library to optionally (re-)build GLVis: ~> cd glvis-4.2 ~/glvis-4.2> make clean ~/glvis-4.2> make MFEM_DIR=../mfem-4.5 -j This, however, is generally not recommended , since the additional MPI thread can interfere with the other GLVis threads.", "title": "Parallel MPI version of MFEM"}, {"location": "building/#parallel-build-using-metis-5", "text": "Build METIS 5: ~> tar zvxf metis-5.1.0.tar.gz ~> cd metis-5.1.0 ~/metis-5.1.0> make BUILDDIR=lib config ~/metis-5.1.0> make BUILDDIR=lib ~/metis-5.1.0> cp lib/libmetis/libmetis.a lib Build the parallel version of MFEM, setting the options MFEM_USE_METIS_5 and METIS_DIR , e.g.: ~> cd mfem-4.5 ~/mfem-4.5> make parallel -j MFEM_USE_METIS_5=YES METIS_DIR=@MFEM_DIR@/../metis-5.1.0", "title": "Parallel build using METIS 5"}, {"location": "building/#cuda-version-of-mfem", "text": "To build the CUDA version of MFEM, one needs to specify the CUDA compute capability , with the CUDA_ARCH flag. In the examples below we use CUDA_ARCH=sm_70 to build the MFEM serial and parallel versions for compute capability 7.0 (V100). 
Build the serial CUDA version of MFEM: ~/mfem> make cuda CUDA_ARCH=sm_70 -j Build the parallel CUDA version of MFEM: ~/mfem> make pcuda CUDA_ARCH=sm_70 -j To use hypre with CUDA support in MFEM, follow the instructions above but configure it with the following command, specifying the CUDA compute capability: ~/hypre-2.26.0/src> ./configure --with-cuda --with-gpu-arch=\"70\" --disable-fortran", "title": "CUDA version of MFEM"}, {"location": "building/#hip-version-of-mfem", "text": "To build the HIP version of MFEM, one needs to specify the HIP architecture , with the HIP_ARCH flag. In the examples below we use HIP_ARCH=gfx908 to build the MFEM serial and parallel versions for gfx908 (MI100). Build the serial HIP version of MFEM: ~/mfem> make hip HIP_ARCH=gfx908 -j Build the parallel HIP version of MFEM: ~/mfem> make phip HIP_ARCH=gfx908 -j To use hypre with HIP support in MFEM, follow the instructions above but configure it with the following command, specifying the HIP architecture: ~/hypre-2.26.0/src> ./configure --with-hip --with-gpu-arch=\"gfx908\" --disable-fortran", "title": "HIP version of MFEM"}, {"location": "building/#installing-mfem-with-spack", "text": "If Spack is already available on your system and is visible in your PATH , you can install the MFEM software simply with: spack install mfem To enable package testing during the build process, use instead: spack install -v --test=all mfem If you don't have Spack, you can download it and install MFEM with the following commands: git clone https://github.com/spack/spack.git cd spack ./bin/spack install -v mfem", "title": "Installing MFEM with Spack"}, {"location": "coefficient/", "text": "Coefficients Coefficient objects serve many purposes within MFEM. As the name suggests they often represent the material coefficients appearing in partial differential equations. However, Coefficients can also be used to specify initial conditions, boundary conditions, exact solutions, etc.. Coefficients come in three varieties; scalar-valued, vector-valued, and matrix-valued. The primary purpose of any Coefficient class is to define an Eval method which returns a scalar, vector, or matrix given an element and a location within that element expressed as a point in reference space i.e. an IntegrationPoint . Coefficients can also be time dependent. Time is treated as a parameter which changes infrequently by passing the current time though a SetTime(t) method. A Coefficient's Eval method depends on not only the position within an element but also on the element attribute number which allows the Coefficient to return different results from different regions of the domain or boundary. This can be a powerful feature but it can lead to unexpected results. As a rule domain integrals will have access to element attributes and boundary integrals will access the boundary attributes. This seems obvious but there may be cases where the outcome is not so clear cut and careful thought is required. It is important to know when a Coefficient will be accessed, particularly in the case of time-dependent or field-dependent coefficients. When used with GridFunction::Project , GridFunction::ComputeL2Error , and other GridFunction methods the Coefficient is used immediately. When used in BilinearForm and LinearForm objects the coefficients are only accessed during calls to the Assemble methods. An important side note is that GridFunction and LinearForm objects will overwrite their values during such calls but a BilinearForm will not. 
Consequently, when using a time-dependent coefficient with a BilinearForm object it is crucial that the user calls BilinearForm::Update to reset the internally stored matrix to zero before calling BilinearForm::Assemble . Otherwise the new matrix entries will be added to the previous values leading to odd behavior. Scalar Coefficients Basic Scalar Coefficients Class Name Description ConstantCoefficient Returns a constant value: $\\alpha$ FunctionCoefficient Computes a value from a standard function, $f(\\vec{x},t)$, or a lambda expression PWConstCoefficient Returns different constants based e.g. on element attribute GridFunctionCoefficient Returns values interpolated from a scalar-valued GridFunction : $u(\\vec{x})$ DivergenceGridFunctionCoefficient Returns the divergence of a vector-valued GridFunction : $\\nabla\\cdot\\vec{u}$ DeltaCoefficient A weighted Dirac delta function: $s\\,w(\\vec{x},t)\\,T(t)\\,\\delta(\\vec{x}-\\vec{x}_c)$ Derived Scalar Coefficients These classes provide a means of creating functions of existing coefficients. In performance critical situations it would clearly be preferable to write specialized Coefficient classes but these offer a quick and, hopefully, easy to use alternative. Class Name Formula TransformedCoefficient $T(Q_1(\\vec{x},t))\\mbox{ or }T(Q_1(\\vec{x},t),Q_2(\\vec{x},t))$ RestrictedCoefficient $Q(\\vec{x})\\,\\forall a\\in A, 0\\mbox{ otherwise}$ SumCoefficient $\\alpha\\,Q_1(\\vec{x}) + \\beta\\,Q_2(\\vec{x})$ ProductCoefficient $Q_1(\\vec{x})\\,Q_2(\\vec{x})$ PowerCoefficient $Q(\\vec{x})^p$ InnerProductCoefficient $\\vec{Q}_1\\cdot\\vec{Q}_2$ VectorRotProductCoefficient $\\vec{Q}_1\\times\\vec{Q}_2\\mbox{ in }\\mathbb{R}^2$ DeterminantCoefficient $|\\overleftrightarrow{Q}|$ Vector Coefficients Basic Vector Coefficients Class Name Description VectorConstantCoefficient Returns a constant vector value: $\\vec{\\alpha}$ VectorFunctionCoefficient Computes a value from a standard function, $\\vec{f}(\\vec{x})$, or a lambda expression VectorGridFunctionCoefficient Returns values interpolated from a vector-valued GridFunction : $\\vec{u}(\\vec{x})$ GradientGridFunctionCoefficient Returns the gradient of a scalar-valued GridFunction : $\\nabla u(\\vec{x})$ CurlGridFunctionCoefficient Returns the curl of a vector-valued GridFunction : $\\nabla\\times\\vec{u}(\\vec{x})$ VectorDeltaCoefficient $s\\,\\vec{\\alpha}\\,\\delta(\\vec{x}-\\vec{x}_c)$ Derived Vector Coefficients Again these classes provide a means of creating functions of existing coefficients. Class Name Formula VectorArrayCoefficient Construct a vector value from an array of scalar coefficients: $\\vec{Q}_a$ VectorRestrictedCoefficient $\\vec{Q}(\\vec{x})\\,\\forall a\\in A, 0\\mbox{ otherwise}$ VectorSumCoefficient $\\alpha\\,\\vec{Q}_1(\\vec{x}) + \\beta\\,\\vec{Q}_2(\\vec{x})$ ScalarVectorProductCoefficient $Q_1\\,\\vec{Q}_2$ VectorCrossProductCoefficient $\\vec{Q}_1\\times\\vec{Q}_2$ MatVecCoefficient $\\overleftrightarrow{Q}_1\\cdot\\vec{Q}_2$ Matrix Coefficients Basic Matrix Coefficients Class Name Description MatrixConstantCoefficient Returns a constant matrix value: $\\overleftrightarrow{\\alpha}$ MatrixFunctionCoefficient Computes a value from a standard function, $\\overleftrightarrow{f}$, or a lambda expression IdentityMatrixCoefficient Returns the identity matrix of the appropriate dimension: $\\overleftrightarrow{I}$ Derived Matrix Coefficients Again these classes provide a means of creating functions of existing coefficients. 
Class Name Formula MatrixArrayCoefficient Construct a matrix value from an array of scalar coefficients: $\\overleftrightarrow{Q}_a$ MatrixRestrictedCoefficient $\\overleftrightarrow{Q}(\\vec{x})\\,\\forall a\\in A, 0\\mbox{ otherwise}$ MatrixSumCoefficient $\\alpha\\,\\overleftrightarrow{Q}_1(\\vec{x}) + \\beta\\,\\overleftrightarrow{Q}_2(\\vec{x})$ ScalarMatrixProductCoefficient $Q_1\\,\\overleftrightarrow{Q}_2$ TransposeMatrixCoefficient $\\overleftrightarrow{Q}^T$ InverseMatrixCoefficient $\\overleftrightarrow{Q}^{-1}$ OuterProductCoefficient $\\vec{Q}_1\\otimes\\vec{Q}_2$ MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Coefficients"}, {"location": "coefficient/#coefficients", "text": "Coefficient objects serve many purposes within MFEM. As the name suggests they often represent the material coefficients appearing in partial differential equations. However, Coefficients can also be used to specify initial conditions, boundary conditions, exact solutions, etc.. Coefficients come in three varieties; scalar-valued, vector-valued, and matrix-valued. The primary purpose of any Coefficient class is to define an Eval method which returns a scalar, vector, or matrix given an element and a location within that element expressed as a point in reference space i.e. an IntegrationPoint . Coefficients can also be time dependent. Time is treated as a parameter which changes infrequently by passing the current time though a SetTime(t) method. A Coefficient's Eval method depends on not only the position within an element but also on the element attribute number which allows the Coefficient to return different results from different regions of the domain or boundary. This can be a powerful feature but it can lead to unexpected results. As a rule domain integrals will have access to element attributes and boundary integrals will access the boundary attributes. This seems obvious but there may be cases where the outcome is not so clear cut and careful thought is required. It is important to know when a Coefficient will be accessed, particularly in the case of time-dependent or field-dependent coefficients. When used with GridFunction::Project , GridFunction::ComputeL2Error , and other GridFunction methods the Coefficient is used immediately. When used in BilinearForm and LinearForm objects the coefficients are only accessed during calls to the Assemble methods. An important side note is that GridFunction and LinearForm objects will overwrite their values during such calls but a BilinearForm will not. Consequently, when using a time-dependent coefficient with a BilinearForm object it is crucial that the user calls BilinearForm::Update to reset the internally stored matrix to zero before calling BilinearForm::Assemble . Otherwise the new matrix entries will be added to the previous values leading to odd behavior.", "title": "Coefficients"}, {"location": "coefficient/#scalar-coefficients", "text": "", "title": "Scalar Coefficients"}, {"location": "coefficient/#basic-scalar-coefficients", "text": "Class Name Description ConstantCoefficient Returns a constant value: $\\alpha$ FunctionCoefficient Computes a value from a standard function, $f(\\vec{x},t)$, or a lambda expression PWConstCoefficient Returns different constants based e.g. 
on element attribute GridFunctionCoefficient Returns values interpolated from a scalar-valued GridFunction : $u(\\vec{x})$ DivergenceGridFunctionCoefficient Returns the divergence of a vector-valued GridFunction : $\\nabla\\cdot\\vec{u}$ DeltaCoefficient A weighted Dirac delta function: $s\\,w(\\vec{x},t)\\,T(t)\\,\\delta(\\vec{x}-\\vec{x}_c)$", "title": "Basic Scalar Coefficients"}, {"location": "coefficient/#derived-scalar-coefficients", "text": "These classes provide a means of creating functions of existing coefficients. In performance critical situations it would clearly be preferable to write specialized Coefficient classes but these offer a quick and, hopefully, easy to use alternative. Class Name Formula TransformedCoefficient $T(Q_1(\\vec{x},t))\\mbox{ or }T(Q_1(\\vec{x},t),Q_2(\\vec{x},t))$ RestrictedCoefficient $Q(\\vec{x})\\,\\forall a\\in A, 0\\mbox{ otherwise}$ SumCoefficient $\\alpha\\,Q_1(\\vec{x}) + \\beta\\,Q_2(\\vec{x})$ ProductCoefficient $Q_1(\\vec{x})\\,Q_2(\\vec{x})$ PowerCoefficient $Q(\\vec{x})^p$ InnerProductCoefficient $\\vec{Q}_1\\cdot\\vec{Q}_2$ VectorRotProductCoefficient $\\vec{Q}_1\\times\\vec{Q}_2\\mbox{ in }\\mathbb{R}^2$ DeterminantCoefficient $|\\overleftrightarrow{Q}|$", "title": "Derived Scalar Coefficients"}, {"location": "coefficient/#vector-coefficients", "text": "", "title": "Vector Coefficients"}, {"location": "coefficient/#basic-vector-coefficients", "text": "Class Name Description VectorConstantCoefficient Returns a constant vector value: $\\vec{\\alpha}$ VectorFunctionCoefficient Computes a value from a standard function, $\\vec{f}(\\vec{x})$, or a lambda expression VectorGridFunctionCoefficient Returns values interpolated from a vector-valued GridFunction : $\\vec{u}(\\vec{x})$ GradientGridFunctionCoefficient Returns the gradient of a scalar-valued GridFunction : $\\nabla u(\\vec{x})$ CurlGridFunctionCoefficient Returns the curl of a vector-valued GridFunction : $\\nabla\\times\\vec{u}(\\vec{x})$ VectorDeltaCoefficient $s\\,\\vec{\\alpha}\\,\\delta(\\vec{x}-\\vec{x}_c)$", "title": "Basic Vector Coefficients"}, {"location": "coefficient/#derived-vector-coefficients", "text": "Again these classes provide a means of creating functions of existing coefficients. Class Name Formula VectorArrayCoefficient Construct a vector value from an array of scalar coefficients: $\\vec{Q}_a$ VectorRestrictedCoefficient $\\vec{Q}(\\vec{x})\\,\\forall a\\in A, 0\\mbox{ otherwise}$ VectorSumCoefficient $\\alpha\\,\\vec{Q}_1(\\vec{x}) + \\beta\\,\\vec{Q}_2(\\vec{x})$ ScalarVectorProductCoefficient $Q_1\\,\\vec{Q}_2$ VectorCrossProductCoefficient $\\vec{Q}_1\\times\\vec{Q}_2$ MatVecCoefficient $\\overleftrightarrow{Q}_1\\cdot\\vec{Q}_2$", "title": "Derived Vector Coefficients"}, {"location": "coefficient/#matrix-coefficients", "text": "", "title": "Matrix Coefficients"}, {"location": "coefficient/#basic-matrix-coefficients", "text": "Class Name Description MatrixConstantCoefficient Returns a constant matrix value: $\\overleftrightarrow{\\alpha}$ MatrixFunctionCoefficient Computes a value from a standard function, $\\overleftrightarrow{f}$, or a lambda expression IdentityMatrixCoefficient Returns the identity matrix of the appropriate dimension: $\\overleftrightarrow{I}$", "title": "Basic Matrix Coefficients"}, {"location": "coefficient/#derived-matrix-coefficients", "text": "Again these classes provide a means of creating functions of existing coefficients. 
Class Name Formula MatrixArrayCoefficient Construct a matrix value from an array of scalar coefficients: $\\overleftrightarrow{Q}_a$ MatrixRestrictedCoefficient $\\overleftrightarrow{Q}(\\vec{x})\\,\\forall a\\in A, 0\\mbox{ otherwise}$ MatrixSumCoefficient $\\alpha\\,\\overleftrightarrow{Q}_1(\\vec{x}) + \\beta\\,\\overleftrightarrow{Q}_2(\\vec{x})$ ScalarMatrixProductCoefficient $Q_1\\,\\overleftrightarrow{Q}_2$ TransposeMatrixCoefficient $\\overleftrightarrow{Q}^T$ InverseMatrixCoefficient $\\overleftrightarrow{Q}^{-1}$ OuterProductCoefficient $\\vec{Q}_1\\otimes\\vec{Q}_2$ MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Derived Matrix Coefficients"}, {"location": "dox/", "text": "", "title": "Doxygen"}, {"location": "electromagnetics/", "text": "Electromagnetics Mini Applications $\\newcommand{\\A}{\\vec{A}}\\newcommand{\\B}{\\vec{B}} \\newcommand{\\D}{\\vec{D}}\\newcommand{\\E}{\\vec{E}} \\newcommand{\\H}{\\vec{H}}\\newcommand{\\J}{\\vec{J}} \\newcommand{\\M}{\\vec{M}}\\newcommand{\\P}{\\vec{P}} \\newcommand{\\F}{\\vec{F}} \\newcommand{\\dd}[2]{\\frac{\\partial #1}{\\partial #2}} \\newcommand{\\cross}{\\times}\\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot}\\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla}$ The miniapps/electromagnetics directory contains a collection of electromagnetic miniapps based on MFEM. Compared to the example codes , the miniapps are more complex, demonstrating more advanced usage of the library. They are intended to be more representative of MFEM-based application codes. We recommend that new users start with the example codes before moving to the miniapps. The current electromagnetic miniapps are described below. Electromagnetics The equations describing electromagnetic phenomena are known collectively as the Maxwell Equations. They are usually given as: $$\\begin{align} \\curl\\H - \\dd{\\D}{t} & = \\J \\label{ampere} \\\\ \\curl\\E + \\dd{\\B}{t} & = 0 \\label{faraday} \\\\ \\div\\D & = \\rho \\label{gauss} \\\\ \\div\\B & = 0 \\label{divb} \\end{align}$$ Where equation \\eqref{ampere} can be referred to as Amp\u00e8re's Law , equation \\eqref{faraday} is called Faraday's Law , equation \\eqref{gauss} is Gauss's Law , and equation \\eqref{divb} doesn't generally have a name but is related to the nonexistence of magnetic monopoles. The various fields in these equations are: Symbol Name SI Units $\\H$ magnetic field Ampere/meter $\\B$ magnetic flux density Tesla $\\E$ electric field Volt/meter $\\D$ electric displacement Coulomb/meter$^2$ $\\J$ current density Ampere/meter$^2$ $\\rho$ charge density Coulomb/meter$^3$ In the literature these names do vary, particularly those for $\\H$ and $\\B$, but in this document we will try to adhere to the convention laid out above. Generally we also need constitutive relations between $\\E$ and $\\D$ and/or between $\\H$ and $\\B$. These relations start with the definitions: $$\\begin{align} \\D & = \\epsilon_0\\E + \\P \\label{const_d} \\\\ \\B & = \\mu_0(\\H + \\M) \\label{const_b} \\end{align}$$ Where $\\P$ is the polarization density , and $\\M$ is the magnetization . Also, $\\epsilon_0$ is the permittivity of free space and $\\mu_0$ is the permeability of free space which are both constants of nature. In many common materials the polarization density can be approximated as a scalar multiple of the electric field, i.e., $\\P = \\epsilon_0\\chi\\E$, where $\\chi$ is called the electric susceptibility . 
In such cases we usually use the relation $\\D = \\epsilon\\E$ with $\\epsilon = \\epsilon_0(1 + \\chi)$ and call $\\epsilon$ the permittivity of the material. The nature of magnetization is more complicated but we will take a very simplified view which is valid in many situations. Specifically, we will assume that either $\\M$ is proportional to $\\H$ yielding the relation $\\B = \\mu\\H$ where $\\mu = \\mu_0(1 + \\chi_M)$ and $\\chi_M$ is the magnetic susceptibility or that $\\M$ is independent of the applied field. The former case pertains to both diamagnetic and paramagnetic materials and the latter to ferromagnetic materials. Finally we should note that equations \\eqref{ampere} and \\eqref{gauss} can be combined to yield the equation of charge continuity $\\dd{\\rho}{t} + \\div\\J = 0$ which can be important in plasma physics and magnetohydrodynamics (MHD). Electrostatics Electrostatic problems come in a variety of subtypes but they all derive from Gauss's Law and Faraday's Law (equations \\eqref{gauss} and \\eqref{faraday}). When we assume no time variation, Faraday's Law becomes simply $\\curl\\E = 0$. This suggests that the electric field can be expressed as the gradient of a scalar field which is traditionally taken to be $-\\varphi$, i.e. $$\\E = -\\grad\\varphi \\label{gradphi}$$ where $\\varphi$ is called the electric potential and has units of Volts in the SI system. Inserting this definition into equation \\eqref{gauss} gives: $$-\\div\\epsilon\\grad\\varphi = \\rho - \\div\\P \\label{poisson}$$ which is Poisson's equation for the electric potential, where we have assumed a linear constitutive relation between $\\D$ and $\\E$ of the form $\\D = \\epsilon\\E + \\P$. This allows a polarization which is proportional to $\\E$ as well as a polarization independent of $\\E$. If this relation happens to be nonlinear then Poisson's equation would need to be replaced with a more complicated nonlinear expression. The solutions to equation \\eqref{poisson} are non unique because they can be shifted by any additive constant. This means that we must apply a Dirichlet boundary condition at least at one point in the problem domain in order to obtain a solution. Typically this point will be on the boundary but it need not be so. Such a Dirichlet value is equivalent to fixing the voltage (a.k.a. potential) at one or more locations. Additionally, this equation admits a normal derivative boundary condition. This corresponds to setting $\\hat{n}\\cdot\\D$ to a prescribed value on some portion of the boundary. This is equivalent to defining a surface charge density on that portion of the boundary. Volta Mini Application The electrostatics mini application, named volta after the inventor of the voltaic pile , is intended to demonstrate how to solve standard electrostatics problems in MFEM. Its source terms and boundary conditions are simple but they should indicate how more specialized sources or boundary conditions could be implemented. Note that this application assumes the mesh coordinates are given in meters. Mini Application Features Permittivity: The permittivity, $\\epsilon$, is assumed to be that of free space except for an optional sphere of dielectric material which can be defined by the user. The command line option -ds can be used to set the parameters for this dielectric sphere. For example, to produce a sphere at the origin with a radius of 0.5 and a relative permittivity of 3 the user would specify: -ds '0 0 0 0.5 3' . 
Charge Density: The charge density, $\\rho$, is assumed to be zero except for an optional sphere of uniform charge density which can be defined by the user. The command line option for this is -cs which follows the same pattern as the dielectric sphere. Note that the last entry is the total charge of the sphere and not its charge density. Polarization: A polarization vector function, $\\P$, can be imposed as a source of the electric field. The command line option -vp creates a polarization due to a simple voltaic pile, i.e., a cylinder which is electrically polarized along its axis. The user should specify the two end points of the cylinder axis, its radius and the magnitude of the polarization vector. Dirichlet BC: Dirichlet boundary conditions can either specify piecewise constant voltages on a collection of surfaces or they can specify a gradient field which approximates a uniform applied electric field. In either case the user specifies the surfaces where the Dirichlet boundary condition should be applied using the -dbcs option followed by a list of boundary attributes. For example to select surfaces 2, 3, and 4 the user would use the following: -dbcs '2 3 4' . To apply a gradient field on these surfaces the user would also use the -dbcg option. This defaults to the uniform field $\\E = (0,0,1)$ in 3D or $\\E = (0,1)$ in 2D. An arbitrary vector can be specified with -uebc followed by the desired vector, e.g., to apply $\\E = (1,2,3)$ the user would supply: -uebc '1 2 3' . To specify piecewise constant potential values the user would list the desired values after -dbcv as follows: -dbcv '0.0 1.0 -1.0' . Neumann BC: Neumann boundary conditions set the normal component of the electric displacement on portions of the boundary. This normal component is equivalent to the surface charge density on the surface. This is rarely used because surface charge densities are rarely known unless they are known to be zero. However, if the surface charge density is zero then the Neumann BCs are not needed because this is the natural boundary condition. Only piecewise constant Neumann boundary conditions are supported. They can be set analogously to piecewise Dirichlet boundary conditions but using options -nbcs and -nbcv . Magnetostatics Magnetostatic problems arise when we assume no time variation in Amp\u00e8re's Law \\eqref{ampere} which leads to: $$\\curl\\H = \\J \\nonumber$$ We will again assume a somewhat more general constitutive relation between $\\H$ and $\\vec{B}$ than is normally seen: $$\\B = \\mu\\H + \\mu_0\\M = \\mu_0(1 + \\chi_M)\\H + \\mu_0\\M \\nonumber$$ Where the magnetization is split into two portions; one which is proportional to $\\H$ and given by $\\chi_M\\H$, and another which is independent of $\\H$ and is given by $\\M$. This allows for paramagnetic and/or diamagnetic materials defined through $\\mu$ as well as ferromagnetic materials represented by $\\M$. This choice yields: $$\\curl\\mu^{-1}\\B = \\J + \\curl\\mu^{-1}\\mu_0\\M \\nonumber$$ Which, when combined with equation \\eqref{divb}, becomes: $$\\curl\\mu^{-1}\\curl\\A = \\J + \\curl\\mu^{-1}\\mu_0\\M $$ If $\\J$ happens to be zero we have another option because we can assume that $\\H = -\\grad\\varphi_M$ for some scalar potential $\\varphi_M$. When combined with equation \\eqref{divb} this leads to: $$\\div\\mu\\grad\\varphi_M = \\div\\mu_0\\M $$ Currently only the vector potential equation is used so we will focus on that for the remainder of this document. 
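To connect the curl-curl equation above with the integrator tables earlier in this document, the sketch below assembles its weak form in an H(curl) (Nedelec) space using CurlCurlIntegrator together with a current-density source term. This is purely illustrative and is not the tesla miniapp's implementation; the constant material and source values are hypothetical placeholders:

#include "mfem.hpp"
using namespace mfem;

int main()
{
   // Small 3D mesh and a first-order Nedelec (H(curl)) space.
   Mesh mesh = Mesh::MakeCartesian3D(4, 4, 4, Element::HEXAHEDRON);
   ND_FECollection fec(1, mesh.Dimension());
   FiniteElementSpace fespace(&mesh, &fec);

   // Hypothetical data: constant 1/mu and a uniform current density along z.
   ConstantCoefficient muInvCoef(1.0);
   Vector jvec(3); jvec = 0.0; jvec(2) = 1.0;
   VectorConstantCoefficient JCoef(jvec);

   // Weak curl-curl operator (see CurlCurlIntegrator above).
   BilinearForm a(&fespace);
   a.AddDomainIntegrator(new CurlCurlIntegrator(muInvCoef));
   a.Assemble();
   a.Finalize();

   // Current-density source term on the right-hand side.
   LinearForm b(&fespace);
   b.AddDomainIntegrator(new VectorFEDomainLFIntegrator(JCoef));
   b.Assemble();

   // Tangential Dirichlet conditions on the outer boundary and a linear
   // solve (as in MFEM's example 3) would complete the problem setup.
   return 0;
}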
The vector potential is again non unique so we must apply additional constraints in order to arrive at a solution for $\\A$. When working analytically it is common to constrain the solution by restricting the divergence of $\\A$ but numerically this leads to other complications. For our problems of interest it will be necessary to require Dirichlet boundary conditions on the entire outer surface in order to sufficiently constrain the solution. Dirichlet boundary conditions for the vector potential on a surface provide a means to specify the component of $\\B$ normal to that surface. For example, setting the tangential components of $\\A$ to be zero on a particular surface results in a magnetic flux density which must be tangent to that surface. Tesla Mini Application The magnetostatics mini application, named tesla after the unit of magnetic field strength (and of course the man Nikola Tesla), is intended to demonstrate how to solve standard magnetostatics problems in MFEM. Its source terms and boundary conditions are simple but they should indicate how more specialized sources of boundary conditions could be implemented. A detailed overview of the equations being solved and their discretization can be found here: Tesla Theory Notes . Note that this application assumes the mesh coordinates are given in meters. Mini Application Features Permeability: The permeability, $\\mu$, is assumed to be that of free space except for an optional spherical shell of diamagnetic or paramagnetic material which can be defined by the user. The command line option -ms can be used to set the parameters for this shell. For example, to produce a shell at the origin with inner and outer radii of 0.4 and 0.5 respectively and a relative permeability of 3 the user would specify: -ms '0 0 0 0.4 0.5 3' . Current Density: The current density, $\\J$, is assumed to be zero except for an optional ring of constant current which can be defined by the user. The command line option for this is -cr which requires two points giving the end points of the ring's axis, inner and outer radii, and a constant total current. For example, to specify a ring centered at the origin and laying in the XY plane with a thickness of 0.2 and radii 0.4 and 0.5, and a current of 2 amps the user would give: -cr 0 0 -0.1 0 0 0.1 0.4 0.5 2 . Magnetization: A permanent magnetization, $\\M$, can be applied in the form of a cylindrical magnet with poles at its circular ends. The command line option is -bm which indicates a 'bar magnet'. The option requires the two end points of the cylinder's axis, its radius, and the magnitude of the magnetization. Surface Current Density: A surface current can be imposed indirectly by specifying separate surface patches with different voltages as well as a collection of surface patches connecting the voltages through which the current will flow. The voltage surfaces and their voltages can be specified using -vbcs followed by the indices of the surfaces and -vbcv followed by their voltages. The path for the surface current ($\\vec{K}$) is specified by using -kbcs followed by a set of surface indices. For example, applying voltages 1 and -1 to surfaces 2 and 3 with a current path along surfaces 4 and 6 would be specified as: -vbcs '2 3' -vbcv '1 -1' -kbcs '4 6' . Any surfaces not listed as voltage or current surfaces will be assigned as homogeneous Dirichlet boundaries. Note that when this option is selected an auxiliary electrostatic problem will be solved on the surface of the geometry to compute the surface current. 
Dirichlet BC: Dirichlet boundary conditions are required if a surface current density is not defined. For this reason the user need not specify boundary surfaces by number since the boundary condition must be applied on all of them. The default boundary condition is a homogeneous Dirichlet boundary condition on all outer surfaces. This means that the normal component of $\\B$ will be zero at the outer boundary. An alternative is to specify a desired uniform magnetic flux density on the entire outer surface. This is accomplished with the -ubbc command line option followed by the desired $\\B$ vector. Transient Full-Wave Electromagnetics Transient electromagnetics problems are governed by the time-dependent Maxwell equations \\eqref{ampere} and \\eqref{faraday} when combined using the constitutive relations \\eqref{const_d} and \\eqref{const_b}. When combined these equations can describe the evolution and propagation of electromagnetic waves. $$\\begin{align} \\dd{(\\epsilon\\E)}{t} & = \\curl(\\mu^{-1}\\B) - \\sigma \\E - \\J \\\\ \\dd{\\B}{t} & = - \\curl\\E \\end{align}$$ The term $\\sigma\\E$ arises in the presence of electrically conductive materials where the electric field induces a current which can be separated from $\\J$. In such cases the total current appearing in Amp\u00e8re's Law \\eqref{ampere} can be expressed as the sum of an applied current (also labeled as $\\J$) and an induced current $\\sigma\\E$. Solving these equations requires initial conditions for both the electric and magnetic fields $\\E$ and $\\B$ as well as boundary conditions related to the tangential components of $\\E$ or $\\H$. Other formulations are possible such as evolving $\\H$ and $\\D$ or the potentials $\\varphi$ and $\\A$. This system of equations can also be written as a single second order equation involving only $\\E$, $\\H$, $\\varphi$, or $\\A$. Each of these formulations has a different set of sources, initial and boundary conditions for which it is well-suited. The choice we make here is perhaps the most common but it may not be the most convenient choice for a given application. These equations can be used to evolve their initial conditions or they can be driven by either a current source or through time-varying boundary conditions. It is also possible to combine all three of these sources in a single simulation. Maxwell Mini Application The electrodynamics mini application, named maxwell after James Clerk Maxwell who first formulated the classical theory of electromagnetic radiation, is intended to demonstrate how to solve transient wave problems in MFEM. Its source terms and boundary conditions are simple but they should indicate how more specialized sources or boundary conditions could be implemented. A detailed overview of the equations being solved and their discretization can be found here: Maxwell Theory Notes . An example simulation is depicted below (click to animate the wave propagation). Time integration is handled by a variable order symplectic time integration algorithm. This algorithm is designed for systems of equations which are derived from a Hamiltonian and it helps to ensure energy conservation within some tolerance. The time step used during integration is automatically chosen based on the largest stable time step as computed from the largest eigenvalue of the update equations. This determination involves a user-adjustable factor which creates a safety margin. By default the actual time step is less than 95% of the estimate for the largest stable time step. 
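As a conceptual illustration of the staggered, symplectic update strategy described above (a simplified sketch only, not the maxwell miniapp's actual operator classes or variable order integrator; the operator names are hypothetical and the loss and source terms are omitted), a single first-order step could advance the magnetic flux density with the current electric field and then the electric field with the updated flux density:

#include "mfem.hpp"

// Sketch only: one first-order symplectic (staggered) step for the lossless,
// source-free semi-discrete system
//    dE/dt =  Meps^{-1} (Kcurl^T B),    dB/dt = -Kcurl E,
// where 'Kcurl' is an assumed discrete curl operator mapping the E space to
// the B space and 'MepsInv' an assumed solver for the E-space mass matrix.
void SymplecticStep(const mfem::Operator &Kcurl, mfem::Solver &MepsInv,
                    double dt, mfem::Vector &E, mfem::Vector &B)
{
   mfem::Vector curlE(B.Size()), rhs(E.Size()), dE(E.Size());

   // 1) Advance B using the current E:  B <- B - dt * Kcurl E.
   Kcurl.Mult(E, curlE);
   B.Add(-dt, curlE);

   // 2) Advance E using the *updated* B:  E <- E + dt * Meps^{-1} Kcurl^T B.
   Kcurl.MultTranspose(B, rhs);
   MepsInv.Mult(rhs, dE);
   E.Add(dt, dE);
}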
Note that this application assumes the mesh coordinates are given in meters. Internally the code assumes time is in seconds but the command line options use nanoseconds for convenience. Mini Application Features Time Evolution: The initial and final times for the simulation can be specified, in nanoseconds, with the -ti and -tf options. Visualization snapshots of data will be written out after time intervals specified by -ts which again given in nanoseconds. The order of the time integration can be specified, from 1 to 4, using the -to option. Permittivity: The permittivity, $\\epsilon$, is assumed to be that of free space except for an optional sphere of dielectric material which can be defined by the user. The command line option -ds can be used to set the parameters for this dielectric sphere. For example, to produce a sphere at the origin with a radius of 0.5 and a relative permittivity of 3 the user would specify: -ds '0 0 0 0.5 3' . Permeability: The permeability, $\\mu$, is assumed to be that of free space except for an optional spherical shell of diamagnetic or paramagnetic material which can be defined by the user. The command line option -ms can be used to set the parameters for this shell. For example, to produce a shell at the origin with inner and outer radii of 0.4 and 0.5 respectively and a relative permeability of 3 the user would specify: -ms '0 0 0 0.4 0.5 3' . Conductivity: The conductivity, $\\sigma$, is assumed to be zero except for an optional sphere of conductive material which can be defined by the user. The command line option -cs can be used to set the parameters for this conductive sphere. For example, to produce a sphere at the origin with a radius of 0.5 and a conductivity of 3,000,000 S/m the user would specify: -cs '0 0 0 0.5 3e6' . Current Density: The current density, $\\J$, is assumed to be zero except for an optional cylinder of pulsed current which can be defined by the user. The command line option for this is -dp , short for 'dipole pulse', which requires two points giving the end points of the cylinder's axis, radius, amplitude ($\\alpha$), pulse center ($\\beta$), and a pulse width ($\\gamma$). The time dependence of this pulse is given by: $$\\J(t) = \\hat{a} \\alpha e^{-(t-\\beta)^2/(2\\gamma^2)}$$ Where $\\hat{a}$ is the unit vector along the cylinder's axis and both $\\beta$ and $\\gamma$ are specified in nanoseconds. Dirichlet BC: Homogeneous Dirichlet boundary conditions, which constrain the tangential components of $\\frac{\\partial\\E}{\\partial t}$ to be zero, can be activated on a portion of the boundary by specifying a list of boundary attributes such as -dbcs '4 8' . For convenience a boundary attribute of '-1' can be used to specify all boundary surfaces. Non-Homogeneous, time-dependent Dirichlet boundary conditions are supported by the Maxwell solver so a user can edit maxwell.cpp and supply their own function if desired. Absorbing BC: A first order Sommerfeld absorbing boundary condition can be applied to a portion of the boundary using the -abcs option along with a list of boundary attributes such as -abcs '4 18' . Again, the special purpose boundary attribute '-1' can be used to specify all boundary surfaces. This boundary condition depends on a coefficient, $\\eta^{-1}=\\sqrt{\\epsilon/\\mu}$, which must be matched to the materials just inside the boundary. 
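Returning briefly to the dipole-pulse current density described above, the following standalone sketch evaluates its Gaussian time envelope; the function name and the values used are illustrative, not taken from the maxwell source.

```cpp
// Standalone sketch: Gaussian time envelope of the pulsed current density,
// J(t) = a_hat * alpha * exp(-(t - beta)^2 / (2 gamma^2)).
#include <cmath>
#include <cstdio>

// alpha: amplitude, beta: pulse center [s], gamma: pulse width [s].
double dipole_pulse(double t, double alpha, double beta, double gamma)
{
   const double s = (t - beta) / gamma;
   return alpha * std::exp(-0.5 * s * s);
}

int main()
{
   // Illustrative values: amplitude 1, centered at 10 ns, 2 ns wide. The
   // miniapp's command line takes nanoseconds; internally time is in seconds.
   for (int i = 0; i <= 4; i++)
   {
      const double t = i * 5.0e-9;
      std::printf("t = %g s  ->  pulse = %g\n",
                  t, dipole_pulse(t, 1.0, 10.0e-9, 2.0e-9));
   }
   return 0;
}
```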
The code assumes that the permittivity and permeability are those of the vacuum near the surface but, if this is not the case, an ambitious user can replace etaInvCoef_ with a more appropriate function. Transient Magnetics and Joule Heating Joule Mini Application The transient magnetics mini application, named joule after the SI unit of energy (and the scientist James Prescott Joule, who was also a brewer), is intended to demonstrate how to solve transient implicit diffusion problems. The equations of low-frequency electromagnetics are coupled with the equations of heat transfer. The coupling is one way, electromagnetics generates Joule heating, but the heating does not affect the electromagnetics. The thermal problem is solved using an $H(\\mathrm{div})$ method, i.e. temperature is discontinuous and the thermal flux $\\F$ is in $H(\\mathrm{div})$. There are three linear solves per time step: Poisson's equation for the scalar electric potential is solved using the AMG preconditioner, the electric diffusion equation is solved using the AMS preconditioner, and the thermal diffusion equation is solved using the ADS preconditioner. Two example meshes are provided, one is a straight circular metal rod in vacuum, the other is a helical coil in vacuum (the latter is 21MB and can be downloaded from here ). The idea is that a voltage is applied to the ends of the rod/coil, the electric field diffuses into the metal, the metal is heated by Joule heating, the heat diffuses out. The equations are: $$\\begin{align} \\div\\sigma\\grad\\Phi &= 0 \\\\ \\sigma \\E &= \\curl\\mu^{-1} \\B - \\sigma \\grad \\Phi \\\\ \\frac{d \\B}{d t} &= - \\curl \\E \\\\ \\F &= -k \\grad T \\\\ c \\frac{d T}{d t} &= - \\div \\F + \\sigma \\E \\cdot \\E \\end{align}$$ The equations are integrated in time using implicit time integration, either midpoint or higher order SDIRK. Since there are three solves, three sets of boundary conditions must be specified. The essential BC's are the scalar potential, the electric field, and the thermal flux. These are not set via command line arguments, you have to edit the code to change these. To change these, search the code for ess_bdr . There are conducting and non-conducting material regions, and the mesh must have integer attributes to specify these regions. To change these, search the code for std::map this maps the integer attribute to the floating-point material value. Note that this application assumes the mesh coordinates are given in meters. The above picture shows Joule heating of a cylinder using the mesh cylinder-hex.mesh . The cylinder is surrounded by vacuum. The black arrows show the magnetic field $\\B$, the magenta arrows show the heat flux $\\F$, and the pseudocolor in the center of the cylinder shows the temperature. Mini Application Features Boundary Conditions: Since there are three solves, three sets of boundary conditions must be specified. The essential BC's are the voltage for the scalar potential, the tangential electric field, and the normal thermal flux. These are not set via command line arguments, you have to edit the code to change these. To change these, search the code for ess_bdr . Note that the essential BC's can be time varying. Material Properties: There are conducting and non-conducting material regions, and the mesh must have integer attributes to specify these regions. To change these, search the code for std::map this maps the integer attribute to the floating-point material value. 
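Since the Joule miniapp looks material values up by mesh attribute through a std::map, the following standalone sketch shows one way such a map can be exposed to MFEM as a piecewise constant coefficient; the attribute numbers and conductivity values here are invented for illustration.

```cpp
// Standalone sketch: mapping mesh attributes to material values and exposing
// them to MFEM as a piecewise constant coefficient (one value per attribute).
#include "mfem.hpp"
#include <map>
using namespace mfem;

int main()
{
   // Invented attribute -> electrical conductivity map (S/m):
   // attribute 1 = conducting rod, attribute 2 = surrounding non-conductor.
   std::map<int, double> sigma_map = { {1, 5.8e7}, {2, 1.0} };

   // PWConstCoefficient stores one constant per attribute (attributes are 1-based).
   Vector sigma_vals(2);
   for (const auto &kv : sigma_map) { sigma_vals(kv.first - 1) = kv.second; }
   PWConstCoefficient sigma(sigma_vals);

   // sigma can now be passed to integrators, e.g. for the potential solve
   // div(sigma grad Phi) = 0 in the Joule miniapp's first linear solve.
   return 0;
}
```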
MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Electromagnetics"}, {"location": "electromagnetics/#electromagnetics-mini-applications", "text": "$\\newcommand{\\A}{\\vec{A}}\\newcommand{\\B}{\\vec{B}} \\newcommand{\\D}{\\vec{D}}\\newcommand{\\E}{\\vec{E}} \\newcommand{\\H}{\\vec{H}}\\newcommand{\\J}{\\vec{J}} \\newcommand{\\M}{\\vec{M}}\\newcommand{\\P}{\\vec{P}} \\newcommand{\\F}{\\vec{F}} \\newcommand{\\dd}[2]{\\frac{\\partial #1}{\\partial #2}} \\newcommand{\\cross}{\\times}\\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot}\\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla}$ The miniapps/electromagnetics directory contains a collection of electromagnetic miniapps based on MFEM. Compared to the example codes , the miniapps are more complex, demonstrating more advanced usage of the library. They are intended to be more representative of MFEM-based application codes. We recommend that new users start with the example codes before moving to the miniapps. The current electromagnetic miniapps are described below.", "title": "Electromagnetics Mini Applications"}, {"location": "electromagnetics/#electromagnetics", "text": "The equations describing electromagnetic phenomena are known collectively as the Maxwell Equations. They are usually given as: $$\\begin{align} \\curl\\H - \\dd{\\D}{t} & = \\J \\label{ampere} \\\\ \\curl\\E + \\dd{\\B}{t} & = 0 \\label{faraday} \\\\ \\div\\D & = \\rho \\label{gauss} \\\\ \\div\\B & = 0 \\label{divb} \\end{align}$$ Where equation \\eqref{ampere} can be referred to as Amp\u00e8re's Law , equation \\eqref{faraday} is called Faraday's Law , equation \\eqref{gauss} is Gauss's Law , and equation \\eqref{divb} doesn't generally have a name but is related to the nonexistence of magnetic monopoles. The various fields in these equations are: Symbol Name SI Units $\\H$ magnetic field Ampere/meter $\\B$ magnetic flux density Tesla $\\E$ electric field Volt/meter $\\D$ electric displacement Coulomb/meter$^2$ $\\J$ current density Ampere/meter$^2$ $\\rho$ charge density Coulomb/meter$^3$ In the literature these names do vary, particularly those for $\\H$ and $\\B$, but in this document we will try to adhere to the convention laid out above. Generally we also need constitutive relations between $\\E$ and $\\D$ and/or between $\\H$ and $\\B$. These relations start with the definitions: $$\\begin{align} \\D & = \\epsilon_0\\E + \\P \\label{const_d} \\\\ \\B & = \\mu_0(\\H + \\M) \\label{const_b} \\end{align}$$ Where $\\P$ is the polarization density , and $\\M$ is the magnetization . Also, $\\epsilon_0$ is the permittivity of free space and $\\mu_0$ is the permeability of free space which are both constants of nature. In many common materials the polarization density can be approximated as a scalar multiple of the electric field, i.e., $\\P = \\epsilon_0\\chi\\E$, where $\\chi$ is called the electric susceptibility . In such cases we usually use the relation $\\D = \\epsilon\\E$ with $\\epsilon = \\epsilon_0(1 + \\chi)$ and call $\\epsilon$ the permittivity of the material. The nature of magnetization is more complicated but we will take a very simplified view which is valid in many situations. Specifically, we will assume that either $\\M$ is proportional to $\\H$ yielding the relation $\\B = \\mu\\H$ where $\\mu = \\mu_0(1 + \\chi_M)$ and $\\chi_M$ is the magnetic susceptibility or that $\\M$ is independent of the applied field. 
The former case pertains to both diamagnetic and paramagnetic materials and the latter to ferromagnetic materials. Finally we should note that equations \\eqref{ampere} and \\eqref{gauss} can be combined to yield the equation of charge continuity $\\dd{\\rho}{t} + \\div\\J = 0$ which can be important in plasma physics and magnetohydrodynamics (MHD).", "title": "Electromagnetics"}, {"location": "electromagnetics/#electrostatics", "text": "Electrostatic problems come in a variety of subtypes but they all derive from Gauss's Law and Faraday's Law (equations \\eqref{gauss} and \\eqref{faraday}). When we assume no time variation, Faraday's Law becomes simply $\\curl\\E = 0$. This suggests that the electric field can be expressed as the gradient of a scalar field which is traditionally taken to be $-\\varphi$, i.e. $$\\E = -\\grad\\varphi \\label{gradphi}$$ where $\\varphi$ is called the electric potential and has units of Volts in the SI system. Inserting this definition into equation \\eqref{gauss} gives: $$-\\div\\epsilon\\grad\\varphi = \\rho - \\div\\P \\label{poisson}$$ which is Poisson's equation for the electric potential, where we have assumed a linear constitutive relation between $\\D$ and $\\E$ of the form $\\D = \\epsilon\\E + \\P$. This allows a polarization which is proportional to $\\E$ as well as a polarization independent of $\\E$. If this relation happens to be nonlinear then Poisson's equation would need to be replaced with a more complicated nonlinear expression. The solutions to equation \\eqref{poisson} are non unique because they can be shifted by any additive constant. This means that we must apply a Dirichlet boundary condition at least at one point in the problem domain in order to obtain a solution. Typically this point will be on the boundary but it need not be so. Such a Dirichlet value is equivalent to fixing the voltage (a.k.a. potential) at one or more locations. Additionally, this equation admits a normal derivative boundary condition. This corresponds to setting $\\hat{n}\\cdot\\D$ to a prescribed value on some portion of the boundary. This is equivalent to defining a surface charge density on that portion of the boundary.", "title": "Electrostatics"}, {"location": "electromagnetics/#volta-mini-application", "text": "The electrostatics mini application, named volta after the inventor of the voltaic pile , is intended to demonstrate how to solve standard electrostatics problems in MFEM. Its source terms and boundary conditions are simple but they should indicate how more specialized sources or boundary conditions could be implemented. Note that this application assumes the mesh coordinates are given in meters.", "title": "Volta Mini Application"}, {"location": "electromagnetics/#mini-application-features", "text": "Permittivity: The permittivity, $\\epsilon$, is assumed to be that of free space except for an optional sphere of dielectric material which can be defined by the user. The command line option -ds can be used to set the parameters for this dielectric sphere. For example, to produce a sphere at the origin with a radius of 0.5 and a relative permittivity of 3 the user would specify: -ds '0 0 0 0.5 3' . Charge Density: The charge density, $\\rho$, is assumed to be zero except for an optional sphere of uniform charge density which can be defined by the user. The command line option for this is -cs which follows the same pattern as the dielectric sphere. Note that the last entry is the total charge of the sphere and not its charge density. 
Polarization: A polarization vector function, $\\P$, can be imposed as a source of the electric field. The command line option -vp creates a polarization due to a simple voltaic pile, i.e., a cylinder which is electrically polarized along its axis. The user should specify the two end points of the cylinder axis, its radius and the magnitude of the polarization vector. Dirichlet BC: Dirichlet boundary conditions can either specify piecewise constant voltages on a collection of surfaces or they can specify a gradient field which approximates a uniform applied electric field. In either case the user specifies the surfaces where the Dirichlet boundary condition should be applied using the -dbcs option followed by a list of boundary attributes. For example to select surfaces 2, 3, and 4 the user would use the following: -dbcs '2 3 4' . To apply a gradient field on these surfaces the user would also use the -dbcg option. This defaults to the uniform field $\\E = (0,0,1)$ in 3D or $\\E = (0,1)$ in 2D. An arbitrary vector can be specified with -uebc followed by the desired vector, e.g., to apply $\\E = (1,2,3)$ the user would supply: -uebc '1 2 3' . To specify piecewise constant potential values the user would list the desired values after -dbcv as follows: -dbcv '0.0 1.0 -1.0' . Neumann BC: Neumann boundary conditions set the normal component of the electric displacement on portions of the boundary. This normal component is equivalent to the surface charge density on the surface. This is rarely used because surface charge densities are rarely known unless they are known to be zero. However, if the surface charge density is zero then the Neumann BCs are not needed because this is the natural boundary condition. Only piecewise constant Neumann boundary conditions are supported. They can be set analogously to piecewise Dirichlet boundary conditions but using options -nbcs and -nbcv .", "title": "Mini Application Features"}, {"location": "electromagnetics/#magnetostatics", "text": "Magnetostatic problems arise when we assume no time variation in Amp\u00e8re's Law \\eqref{ampere} which leads to: $$\\curl\\H = \\J \\nonumber$$ We will again assume a somewhat more general constitutive relation between $\\H$ and $\\vec{B}$ than is normally seen: $$\\B = \\mu\\H + \\mu_0\\M = \\mu_0(1 + \\chi_M)\\H + \\mu_0\\M \\nonumber$$ Where the magnetization is split into two portions; one which is proportional to $\\H$ and given by $\\chi_M\\H$, and another which is independent of $\\H$ and is given by $\\M$. This allows for paramagnetic and/or diamagnetic materials defined through $\\mu$ as well as ferromagnetic materials represented by $\\M$. This choice yields: $$\\curl\\mu^{-1}\\B = \\J + \\curl\\mu^{-1}\\mu_0\\M \\nonumber$$ Which, when combined with equation \\eqref{divb}, becomes: $$\\curl\\mu^{-1}\\curl\\A = \\J + \\curl\\mu^{-1}\\mu_0\\M $$ If $\\J$ happens to be zero we have another option because we can assume that $\\H = -\\grad\\varphi_M$ for some scalar potential $\\varphi_M$. When combined with equation \\eqref{divb} this leads to: $$\\div\\mu\\grad\\varphi_M = \\div\\mu_0\\M $$ Currently only the vector potential equation is used so we will focus on that for the remainder of this document. The vector potential is again non unique so we must apply additional constraints in order to arrive at a solution for $\\A$. When working analytically it is common to constrain the solution by restricting the divergence of $\\A$ but numerically this leads to other complications. 
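To make the vector potential equation above concrete, here is a standalone sketch of its assembly in MFEM with Nédélec elements, eliminating the tangential degrees of freedom of $\A$ on the entire outer boundary, which is the type of constraint discussed in the following sentences. This is not the tesla miniapp itself; the mesh file, source current, and solver treatment are placeholders.

```cpp
// Standalone sketch: assembling curl(mu^{-1} curl A) = J with H(curl)
// (Nedelec) elements and fixing the tangential trace of A on the whole
// outer boundary.
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh("beam-hex.mesh");                  // placeholder 3D mesh file
   int order = 2, dim = mesh.Dimension();

   ND_FECollection fec(order, dim);
   FiniteElementSpace fes(&mesh, &fec);

   // Dirichlet (tangential) condition on every outer boundary attribute.
   Array<int> ess_bdr(mesh.bdr_attributes.Max()), ess_tdof_list;
   ess_bdr = 1;
   fes.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

   // Placeholder source current; a physical J must be divergence free.
   Vector jvec(dim); jvec = 0.0; jvec(2) = 1.0;
   VectorConstantCoefficient J(jvec);
   ConstantCoefficient muinv(1.0 / (4.0e-7 * M_PI));   // vacuum 1/mu

   LinearForm b(&fes);
   b.AddDomainIntegrator(new VectorFEDomainLFIntegrator(J));
   b.Assemble();

   BilinearForm a(&fes);
   a.AddDomainIntegrator(new CurlCurlIntegrator(muinv));
   a.Assemble();

   GridFunction A(&fes); A = 0.0;
   OperatorPtr Amat; Vector B, X;
   a.FormLinearSystem(ess_tdof_list, A, b, Amat, X, B);

   // The discrete curl-curl system is only positive semi-definite (gradient
   // fields lie in its kernel); the actual miniapp relies on a divergence-free
   // source and specialized preconditioning, so the solve is omitted here.
   return 0;
}
```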
For our problems of interest it will be necessary to require Dirichlet boundary conditions on the entire outer surface in order to sufficiently constrain the solution. Dirichlet boundary conditions for the vector potential on a surface provide a means to specify the component of $\\B$ normal to that surface. For example, setting the tangential components of $\\A$ to be zero on a particular surface results in a magnetic flux density which must be tangent to that surface.", "title": "Magnetostatics"}, {"location": "electromagnetics/#tesla-mini-application", "text": "The magnetostatics mini application, named tesla after the unit of magnetic field strength (and of course the man Nikola Tesla), is intended to demonstrate how to solve standard magnetostatics problems in MFEM. Its source terms and boundary conditions are simple but they should indicate how more specialized sources of boundary conditions could be implemented. A detailed overview of the equations being solved and their discretization can be found here: Tesla Theory Notes . Note that this application assumes the mesh coordinates are given in meters.", "title": "Tesla Mini Application"}, {"location": "electromagnetics/#mini-application-features_1", "text": "Permeability: The permeability, $\\mu$, is assumed to be that of free space except for an optional spherical shell of diamagnetic or paramagnetic material which can be defined by the user. The command line option -ms can be used to set the parameters for this shell. For example, to produce a shell at the origin with inner and outer radii of 0.4 and 0.5 respectively and a relative permeability of 3 the user would specify: -ms '0 0 0 0.4 0.5 3' . Current Density: The current density, $\\J$, is assumed to be zero except for an optional ring of constant current which can be defined by the user. The command line option for this is -cr which requires two points giving the end points of the ring's axis, inner and outer radii, and a constant total current. For example, to specify a ring centered at the origin and laying in the XY plane with a thickness of 0.2 and radii 0.4 and 0.5, and a current of 2 amps the user would give: -cr 0 0 -0.1 0 0 0.1 0.4 0.5 2 . Magnetization: A permanent magnetization, $\\M$, can be applied in the form of a cylindrical magnet with poles at its circular ends. The command line option is -bm which indicates a 'bar magnet'. The option requires the two end points of the cylinder's axis, its radius, and the magnitude of the magnetization. Surface Current Density: A surface current can be imposed indirectly by specifying separate surface patches with different voltages as well as a collection of surface patches connecting the voltages through which the current will flow. The voltage surfaces and their voltages can be specified using -vbcs followed by the indices of the surfaces and -vbcv followed by their voltages. The path for the surface current ($\\vec{K}$) is specified by using -kbcs followed by a set of surface indices. For example, applying voltages 1 and -1 to surfaces 2 and 3 with a current path along surfaces 4 and 6 would be specified as: -vbcs '2 3' -vbcv '1 -1' -kbcs '4 6' . Any surfaces not listed as voltage or current surfaces will be assigned as homogeneous Dirichlet boundaries. Note that when this option is selected an auxiliary electrostatic problem will be solved on the surface of the geometry to compute the surface current. Dirichlet BC: Dirichlet boundary conditions are required if a surface current density is not defined. 
For this reason the user need not specify boundary surfaces by number since the boundary condition must be applied on all of them. The default boundary condition is a homogeneous Dirichlet boundary condition on all outer surfaces. This means that the normal component of $\\B$ will be zero at the outer boundary. An alternative is to specify a desired uniform magnetic flux density on the entire outer surface. This is accomplished with the -ubbc command line option followed by the desired $\\B$ vector.", "title": "Mini Application Features"}, {"location": "electromagnetics/#transient-full-wave-electromagnetics", "text": "Transient electromagnetics problems are governed by the time-dependent Maxwell equations \\eqref{ampere} and \\eqref{faraday} when combined using the constitutive relations \\eqref{const_d} and \\eqref{const_b}. When combined these equations can describe the evolution and propagation of electromagnetic waves. $$\\begin{align} \\dd{(\\epsilon\\E)}{t} & = \\curl(\\mu^{-1}\\B) - \\sigma \\E - \\J \\\\ \\dd{\\B}{t} & = - \\curl\\E \\end{align}$$ The term $\\sigma\\E$ arises in the presence of electrically conductive materials where the electric field induces a current which can be separated from $\\J$. In such cases the total current appearing in Amp\u00e8re's Law \\eqref{ampere} can be expressed as the sum of an applied current (also labeled as $\\J$) and an induced current $\\sigma\\E$. Solving these equations requires initial conditions for both the electric and magnetic fields $\\E$ and $\\B$ as well as boundary conditions related to the tangential components of $\\E$ or $\\H$. Other formulations are possible such as evolving $\\H$ and $\\D$ or the potentials $\\varphi$ and $\\A$. This system of equations can also be written as a single second order equation involving only $\\E$, $\\H$, $\\varphi$, or $\\A$. Each of these formulations has a different set of sources, initial and boundary conditions for which it is well-suited. The choice we make here is perhaps the most common but it may not be the most convenient choice for a given application. These equations can be used to evolve their initial conditions or they can be driven by either a current source or through time-varying boundary conditions. It is also possible to combine all three of these sources in a single simulation.", "title": "Transient Full-Wave Electromagnetics"}, {"location": "electromagnetics/#maxwell-mini-application", "text": "The electrodynamics mini application, named maxwell after James Clerk Maxwell who first formulated the classical theory of electromagnetic radiation, is intended to demonstrate how to solve transient wave problems in MFEM. Its source terms and boundary conditions are simple but they should indicate how more specialized sources or boundary conditions could be implemented. A detailed overview of the equations being solved and their discretization can be found here: Maxwell Theory Notes . An example simulation is depicted below (click to animate the wave propagation). Time integration is handled by a variable order symplectic time integration algorithm. This algorithm is designed for systems of equations which are derived from a Hamiltonian and it helps to ensure energy conservation within some tolerance. The time step used during integration is automatically chosen based on the largest stable time step as computed from the largest eigenvalue of the update equations. This determination involves a user-adjustable factor which creates a safety margin. 
By default the actual time step is less than 95% of the estimate for the largest stable time step. Note that this application assumes the mesh coordinates are given in meters. Internally the code assumes time is in seconds but the command line options use nanoseconds for convenience.", "title": "Maxwell Mini Application"}, {"location": "electromagnetics/#mini-application-features_2", "text": "Time Evolution: The initial and final times for the simulation can be specified, in nanoseconds, with the -ti and -tf options. Visualization snapshots of data will be written out after time intervals specified by -ts which again given in nanoseconds. The order of the time integration can be specified, from 1 to 4, using the -to option. Permittivity: The permittivity, $\\epsilon$, is assumed to be that of free space except for an optional sphere of dielectric material which can be defined by the user. The command line option -ds can be used to set the parameters for this dielectric sphere. For example, to produce a sphere at the origin with a radius of 0.5 and a relative permittivity of 3 the user would specify: -ds '0 0 0 0.5 3' . Permeability: The permeability, $\\mu$, is assumed to be that of free space except for an optional spherical shell of diamagnetic or paramagnetic material which can be defined by the user. The command line option -ms can be used to set the parameters for this shell. For example, to produce a shell at the origin with inner and outer radii of 0.4 and 0.5 respectively and a relative permeability of 3 the user would specify: -ms '0 0 0 0.4 0.5 3' . Conductivity: The conductivity, $\\sigma$, is assumed to be zero except for an optional sphere of conductive material which can be defined by the user. The command line option -cs can be used to set the parameters for this conductive sphere. For example, to produce a sphere at the origin with a radius of 0.5 and a conductivity of 3,000,000 S/m the user would specify: -cs '0 0 0 0.5 3e6' . Current Density: The current density, $\\J$, is assumed to be zero except for an optional cylinder of pulsed current which can be defined by the user. The command line option for this is -dp , short for 'dipole pulse', which requires two points giving the end points of the cylinder's axis, radius, amplitude ($\\alpha$), pulse center ($\\beta$), and a pulse width ($\\gamma$). The time dependence of this pulse is given by: $$\\J(t) = \\hat{a} \\alpha e^{-(t-\\beta)^2/(2\\gamma^2)}$$ Where $\\hat{a}$ is the unit vector along the cylinder's axis and both $\\beta$ and $\\gamma$ are specified in nanoseconds. Dirichlet BC: Homogeneous Dirichlet boundary conditions, which constrain the tangential components of $\\frac{\\partial\\E}{\\partial t}$ to be zero, can be activated on a portion of the boundary by specifying a list of boundary attributes such as -dbcs '4 8' . For convenience a boundary attribute of '-1' can be used to specify all boundary surfaces. Non-Homogeneous, time-dependent Dirichlet boundary conditions are supported by the Maxwell solver so a user can edit maxwell.cpp and supply their own function if desired. Absorbing BC: A first order Sommerfeld absorbing boundary condition can be applied to a portion of the boundary using the -abcs option along with a list of boundary attributes such as -abcs '4 18' . Again, the special purpose boundary attribute '-1' can be used to specify all boundary surfaces. 
This boundary condition depends on a coefficient, $\\eta^{-1}=\\sqrt{\\epsilon/\\mu}$, which must be matched to the materials just inside the boundary. The code assumes that the permittivity and permeability are those of the vacuum near the surface but, if this is not the case, an ambitious user can replace etaInvCoef_ with a more appropriate function.", "title": "Mini Application Features"}, {"location": "electromagnetics/#transient-magnetics-and-joule-heating", "text": "", "title": "Transient Magnetics and Joule Heating"}, {"location": "electromagnetics/#joule-mini-application", "text": "The transient magnetics mini application, named joule after the SI unit of energy (and the scientist James Prescott Joule, who was also a brewer), is intended to demonstrate how to solve transient implicit diffusion problems. The equations of low-frequency electromagnetics are coupled with the equations of heat transfer. The coupling is one way, electromagnetics generates Joule heating, but the heating does not affect the electromagnetics. The thermal problem is solved using an $H(\\mathrm{div})$ method, i.e. temperature is discontinuous and the thermal flux $\\F$ is in $H(\\mathrm{div})$. There are three linear solves per time step: Poisson's equation for the scalar electric potential is solved using the AMG preconditioner, the electric diffusion equation is solved using the AMS preconditioner, and the thermal diffusion equation is solved using the ADS preconditioner. Two example meshes are provided, one is a straight circular metal rod in vacuum, the other is a helical coil in vacuum (the latter is 21MB and can be downloaded from here ). The idea is that a voltage is applied to the ends of the rod/coil, the electric field diffuses into the metal, the metal is heated by Joule heating, the heat diffuses out. The equations are: $$\\begin{align} \\div\\sigma\\grad\\Phi &= 0 \\\\ \\sigma \\E &= \\curl\\mu^{-1} \\B - \\sigma \\grad \\Phi \\\\ \\frac{d \\B}{d t} &= - \\curl \\E \\\\ \\F &= -k \\grad T \\\\ c \\frac{d T}{d t} &= - \\div \\F + \\sigma \\E \\cdot \\E \\end{align}$$ The equations are integrated in time using implicit time integration, either midpoint or higher order SDIRK. Since there are three solves, three sets of boundary conditions must be specified. The essential BC's are the scalar potential, the electric field, and the thermal flux. These are not set via command line arguments, you have to edit the code to change these. To change these, search the code for ess_bdr . There are conducting and non-conducting material regions, and the mesh must have integer attributes to specify these regions. To change these, search the code for std::map this maps the integer attribute to the floating-point material value. Note that this application assumes the mesh coordinates are given in meters. The above picture shows Joule heating of a cylinder using the mesh cylinder-hex.mesh . The cylinder is surrounded by vacuum. The black arrows show the magnetic field $\\B$, the magenta arrows show the heat flux $\\F$, and the pseudocolor in the center of the cylinder shows the temperature.", "title": "Joule Mini Application"}, {"location": "electromagnetics/#mini-application-features_3", "text": "Boundary Conditions: Since there are three solves, three sets of boundary conditions must be specified. The essential BC's are the voltage for the scalar potential, the tangential electric field, and the normal thermal flux. These are not set via command line arguments, you have to edit the code to change these. 
To change these, search the code for ess_bdr . Note that the essential BC's can be time varying. Material Properties: There are conducting and non-conducting material regions, and the mesh must have integer attributes to specify these regions. To change these, search the code for std::map this maps the integer attribute to the floating-point material value. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Mini Application Features"}, {"location": "examples-orig/", "text": "MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$']]}}); Example Codes and Miniapps This page provides a brief overview of MFEM's example codes and miniapps. For detailed documentation of the MFEM sources, including the examples, see the online Doxygen documentation , or the doc directory in the distribution. The goal of the example codes is to provide a step-by-step introduction to MFEM in simple model settings. The miniapps are more complex, and are intended to be more representative of the advanced usage of the library in physics/application codes. We recommend that new users start with the example codes before moving to the miniapps. Select from the categories below to display examples and miniapps that contain the respective feature. All examples support (arbitrarily) high-order meshes and finite element spaces . The numerical results from the example codes can be visualized using the GLVis visualization tool (based on MFEM). See the GLVis website for more details. Users are encouraged to submit any example codes and miniapps that they have created and would like to share. Contact a member of the MFEM team to report bugs or post questions or comments . Application (PDE) All Diffusion Convection-diffusion Elasticity Electromagnetics Acoustics grad-div Darcy Advection Conduction Wave Compressible flow Incompressible flow Meshing Nonlocal Stochastic Free boundary Finite Elements All H1 nodal elements L2 discontinuous elements H(curl) Nedelec elements H(div) Raviart-Thomas elements H^{1/2} interfacial elements H^{-1/2} interfacial elements Discretization All Galerkin FEM Mixed FEM Discontinuous Galerkin (DG) Discont. Petrov-Galerkin (DPG) Hybridization Static condensation Isogeometric analysis (NURBS) Adaptive mesh refinement (AMR) Partial assembly Solver All Jacobi Gauss-Seidel PCG MINRES GMRES Algebraic Multigrid (BoomerAMG) Auxiliary-space Maxwell Solver (AMS) Auxiliary-space Divergence Solver (ADS) SuperLU/STRUMPACK (parallel direct) UMFPACK (serial direct) Newton method (nonlinear solver) Explicit Runge-Kutta (ODE integration) Implicit Runge-Kutta (ODE integration) Newmark (ODE Integration) Symplectic Algorithm (ODE Integration) LOBPCG, AME (eigensolvers) SUNDIALS solvers PETSc solvers SLEPc eigensolvers HiOp solvers None Example 0: Simplest Laplace Problem This is the simplest MFEM example and a good starting point for new users. The example demonstrates the use of MFEM to define and solve an $H^1$ finite element discretization of the Laplace problem $$-\\Delta u = 1 \\quad\\text{in } \\Omega$$ with homogeneous Dirichlet boundary conditions $$ u = 0 \\quad\\text{on } \\partial\\Omega$$ The example illustrates the use of the basic MFEM classes for defining the mesh, finite element space, as well as linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. The example has serial ( ex0.cpp ) and parallel ( ex0p.cpp ) versions. 
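For orientation, the following condensed sketch follows the same sequence of steps as ex0.cpp; the mesh size, element order, and output file names here are illustrative.

```cpp
// Condensed sketch in the spirit of ex0.cpp: -Delta u = 1 on a square mesh
// with homogeneous Dirichlet boundary conditions.
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(16, 16, Element::QUADRILATERAL);

   H1_FECollection fec(2, mesh.Dimension());     // order-2 H1 elements
   FiniteElementSpace fes(&mesh, &fec);

   // All boundary attributes are treated as essential (u = 0).
   Array<int> ess_bdr(mesh.bdr_attributes.Max()), ess_tdof_list;
   ess_bdr = 1;
   fes.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

   // Right-hand side (f = 1) and diffusion bilinear form.
   ConstantCoefficient one(1.0);
   LinearForm b(&fes);
   b.AddDomainIntegrator(new DomainLFIntegrator(one));
   b.Assemble();

   BilinearForm a(&fes);
   a.AddDomainIntegrator(new DiffusionIntegrator);
   a.Assemble();

   GridFunction u(&fes); u = 0.0;
   SparseMatrix A; Vector B, X;
   a.FormLinearSystem(ess_tdof_list, u, b, A, X, B);

   GSSmoother M(A);
   PCG(A, M, B, X, 1, 200, 1e-12, 0.0);
   a.RecoverFEMSolution(X, b, u);

   u.Save("sol.gf");
   mesh.Save("mesh.mesh");
   return 0;
}
```

The saved mesh and grid function can then be viewed with GLVis, as recommended above.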
Example 1: Laplace Problem This example code demonstrates the use of MFEM to define a simple isoparametric finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Specifically, we discretize with the finite element space coming from the mesh (linear by default, quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The problem solved in this example is the same as ex0 , but with more sophisticated options and features. The example highlights the use of mesh refinement, finite element grid functions, as well as linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. We also cover the explicit elimination of essential boundary conditions, static condensation, and the optional connection to the GLVis tool for visualization. The example has a serial ( ex1.cpp ), a parallel ( ex1p.cpp ), and HPC versions: performance/ex1.cpp , performance/ex1p.cpp . It also has a PETSc modification in examples/petsc , a PUMI modification in examples/pumi and a Ginkgo modification in examples/ginkgo . Partial assembly and GPU devices are supported. Example 2: Linear Elasticity This example code solves a simple linear elasticity problem describing a multi-material cantilever beam. Specifically, we approximate the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder with $f$ being a constant pull down vector on boundary elements with attribute 2, and zero otherwise. The geometry of the domain is assumed to be as follows: The example demonstrates the use of high-order and NURBS vector finite element spaces with the linear elasticity bilinear form, meshes with curved elements, and the definition of piece-wise constant and vector coefficient objects. Static condensation is also illustrated. The example has a serial ( ex2.cpp ) and a parallel ( ex2p.cpp ) version. It also has a PETSc modification in examples/petsc and a PUMI modification in examples/pumi . We recommend viewing Example 1 before viewing this example. Example 3: Definite Maxwell Problem This example code solves a simple 3D electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 2D or 3D. The example demonstrates the use of $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. Static condensation is also illustrated. The example has a serial ( ex3.cpp ) and a parallel ( ex3p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly and GPU devices are supported. We recommend viewing examples 1-2 before viewing this example. 
Example 4: Grad-div Problem This example code solves a simple 2D/3D $H(div)$ diffusion problem corresponding to the second order definite equation $$-{\\rm grad}(\\alpha\\,{\\rm div}(F)) + \\beta F = f$$ with boundary condition $F \\cdot n$ = \"given normal field\". Here we use a given exact solution $F$ and compute the corresponding right hand side $f$. We discretize with the Raviart-Thomas finite elements. The example demonstrates the use of $H(div)$ finite element spaces with the grad-div and $H(div)$ vector finite element mass bilinear form, as well as the computation of discretization error when the exact solution is known. Bilinear form hybridization and static condensation are also illustrated. The example has a serial ( ex4.cpp ) and a parallel ( ex4p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly and GPU devices are supported. We recommend viewing examples 1-3 before viewing this example. Example 5: Darcy Problem This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system $$ \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} $$ with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize with Raviart-Thomas finite elements (velocity $\\bf u$) and piecewise discontinuous polynomials (pressure $p$). The example demonstrates the use of the BlockMatrix and BlockOperator classes, as well as the collective saving of several grid functions in VisIt and ParaView formats. The example has a serial ( ex5.cpp ) and a parallel ( ex5p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly is supported. We recommend viewing examples 1-4 before viewing this example. Example 6: Laplace Problem with AMR This is a version of Example 1 with a simple adaptive mesh refinement loop. The problem being solved is again the Laplace equation $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. The problem is solved on a sequence of meshes which are locally refined in a conforming (triangles, tetrahedrons) or non-conforming (quadrilaterals, hexahedra) manner according to a simple ZZ error estimator. The example demonstrates MFEM's capability to work with both conforming and nonconforming refinements, in 2D and 3D, on linear, curved and surface meshes. Interpolation of functions from coarse to fine meshes, as well as persistent GLVis visualization are also illustrated. The example has a serial ( ex6.cpp ) and a parallel ( ex6p.cpp ) version. It also has a PETSc modification in examples/petsc and a PUMI modification in examples/pumi . Partial assembly and GPU devices are supported. We recommend viewing Example 1 before viewing this example. Example 7: Surface Meshes This example code demonstrates the use of MFEM to define a triangulation of a unit sphere and a simple isoparametric finite element discretization of the Laplace problem with mass term, $$-\\Delta u + u = f.$$ The example highlights mesh generation, the use of mesh refinement, high-order meshes and finite elements, as well as surface-based linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. Simple local mesh refinement is also demonstrated. The example has a serial ( ex7.cpp ) and a parallel ( ex7p.cpp ) version. We recommend viewing Example 1 before viewing this example. 
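As a sketch of the estimate-and-refine loop that Example 6 describes, the following standalone code assembles and solves the Laplace problem repeatedly, refining according to a ZZ error estimate. It assumes the ZienkiewiczZhuEstimator/ThresholdRefiner interface used in ex6.cpp; the mesh, tolerances, and iteration counts are arbitrary.

```cpp
// Standalone sketch of a ZZ-driven AMR loop in the spirit of ex6.cpp:
// solve, estimate the error, refine the marked elements, and repeat.
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(4, 4, Element::QUADRILATERAL);
   mesh.EnsureNCMesh();                          // allow nonconforming refinement

   H1_FECollection fec(1, mesh.Dimension());
   FiniteElementSpace fes(&mesh, &fec);
   FiniteElementSpace flux_fes(&mesh, &fec, mesh.SpaceDimension());
   GridFunction u(&fes); u = 0.0;

   ConstantCoefficient one(1.0);
   DiffusionIntegrator flux_integ(one);          // defines the ZZ flux
   ZienkiewiczZhuEstimator estimator(flux_integ, u, flux_fes);
   ThresholdRefiner refiner(estimator);
   refiner.SetTotalErrorFraction(0.7);           // refine the dominant errors

   for (int it = 0; it < 5; it++)
   {
      Array<int> ess_tdof_list, ess_bdr(mesh.bdr_attributes.Max());
      ess_bdr = 1;
      fes.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

      LinearForm b(&fes);
      b.AddDomainIntegrator(new DomainLFIntegrator(one));
      b.Assemble();
      BilinearForm a(&fes);
      a.AddDomainIntegrator(new DiffusionIntegrator(one));
      a.Assemble();

      u = 0.0;
      SparseMatrix A; Vector B, X;
      a.FormLinearSystem(ess_tdof_list, u, b, A, X, B);
      GSSmoother M(A);
      PCG(A, M, B, X, 0, 400, 1e-12, 0.0);
      a.RecoverFEMSolution(X, b, u);

      refiner.Apply(mesh);                       // estimate, mark, and refine
      if (refiner.Stop()) { break; }

      fes.Update(); u.Update();                  // follow the refined mesh
   }
   return 0;
}
```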
Example 8: DPG for the Laplace Problem This example code demonstrates the use of the Discontinuous Petrov-Galerkin (DPG) method in its primal 2x2 block form as a simple finite element discretization of the Laplace problem $$-\\Delta u = f$$ with homogeneous Dirichlet boundary conditions. We use high-order continuous trial space, a high-order interfacial (trace) space, and a high-order discontinuous test space defining a local dual ($H^{-1}$) norm. We use the primal form of DPG, see \"A primal DPG method without a first-order reformulation\" , Demkowicz and Gopalakrishnan, CAM 2013. The example highlights the use of interfacial (trace) finite elements and spaces, trace face integrators and the definition of block operators and preconditioners. The example has a serial ( ex8.cpp ) and a parallel ( ex8p.cpp ) version. We recommend viewing examples 1-5 before viewing this example. Example 9: DG Advection This example code solves the time-dependent advection equation $$\\frac{\\partial u}{\\partial t} + v \\cdot \\nabla u = 0,$$ where $v$ is a given fluid velocity, and $u_0(x)=u(0,x)$ is a given initial condition. The example demonstrates the use of Discontinuous Galerkin (DG) bilinear forms in MFEM (face integrators), the use of explicit and implicit (with block ILU preconditioning) ODE time integrators, the definition of periodic boundary conditions through periodic meshes, as well as the use of GLVis for persistent visualization of a time-evolving solution. The saving of time-dependent data files for external visualization with VisIt and ParaView is also illustrated. The example has a serial ( ex9.cpp ) and a parallel ( ex9p.cpp ) version. It also has a SUNDIALS modification in examples/sundials , a PETSc modification in examples/petsc , and a HiOp modification in examples/hiop . Example 10: Nonlinear Elasticity This example solves a time dependent nonlinear elasticity problem of the form $$ \\frac{dv}{dt} = H(x) + S v\\,,\\qquad \\frac{dx}{dt} = v\\,, $$ where $H$ is a hyperelastic model and $S$ is a viscosity operator of Laplacian type. The geometry of the domain is assumed to be as follows: The example demonstrates the use of nonlinear operators, as well as their implicit time integration using a Newton method for solving an associated reduced backward-Euler type nonlinear equation. Each Newton step requires the inversion of a Jacobian matrix, which is done through a (preconditioned) inner solver. The example has a serial ( ex10.cpp ) and a parallel ( ex10p.cpp ) version. It also has a SUNDIALS modification in examples/sundials and a PETSc modification in examples/petsc . We recommend viewing examples 2 and 9 before viewing this example. Example 11: Laplace Eigenproblem This example code demonstrates the use of MFEM to solve the eigenvalue problem $$-\\Delta u = \\lambda u$$ with homogeneous Dirichlet boundary conditions. We compute a number of the lowest eigenmodes by discretizing the Laplacian and Mass operators using a finite element space of the specified order, or an isoparametric/isogeometric space if order < 1 (quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE, as well as optionally the SuperLU or STRUMPACK parallel direct solvers. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex11p.cpp ) version. It also has a SLEPc modification in examples/petsc . 
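A compressed parallel sketch of the eigenvalue workflow described for Example 11 is given below. It assumes the HypreLOBPCG interface used in ex11p.cpp; the mesh, order, and tolerances are illustrative, and the BoomerAMG preconditioner that ex11p supplies is omitted for brevity.

```cpp
// Standalone parallel sketch in the spirit of ex11p.cpp: a few of the lowest
// Laplace eigenpairs with homogeneous Dirichlet conditions via LOBPCG.
#include "mfem.hpp"
#include <iostream>
using namespace mfem;

int main(int argc, char *argv[])
{
   Mpi::Init(argc, argv);
   Hypre::Init();

   Mesh serial_mesh = Mesh::MakeCartesian2D(16, 16, Element::QUADRILATERAL);
   ParMesh mesh(MPI_COMM_WORLD, serial_mesh);
   serial_mesh.Clear();

   H1_FECollection fec(1, mesh.Dimension());
   ParFiniteElementSpace fes(&mesh, &fec);

   Array<int> ess_bdr(mesh.bdr_attributes.Max());
   ess_bdr = 1;                                  // Dirichlet on the whole boundary

   ConstantCoefficient one(1.0);
   ParBilinearForm a(&fes), m(&fes);
   a.AddDomainIntegrator(new DiffusionIntegrator(one));
   m.AddDomainIntegrator(new MassIntegrator(one));
   a.Assemble(); a.EliminateEssentialBCDiag(ess_bdr, 1.0);    a.Finalize();
   // A tiny diagonal keeps the eliminated Dirichlet modes out of the spectrum.
   m.Assemble(); m.EliminateEssentialBCDiag(ess_bdr, 1e-200); m.Finalize();

   HypreParMatrix *A = a.ParallelAssemble();
   HypreParMatrix *M = m.ParallelAssemble();

   HypreLOBPCG lobpcg(MPI_COMM_WORLD);
   lobpcg.SetNumModes(5);                        // number of requested eigenpairs
   lobpcg.SetMaxIter(200);
   lobpcg.SetTol(1e-8);
   lobpcg.SetPrintLevel(1);
   lobpcg.SetMassMatrix(*M);
   lobpcg.SetOperator(*A);
   lobpcg.Solve();

   Array<double> eigenvalues;
   lobpcg.GetEigenvalues(eigenvalues);
   if (Mpi::Root()) { eigenvalues.Print(std::cout); }

   delete A; delete M;
   return 0;
}
```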
We recommend viewing Example 1 before viewing this example. Example 12: Linear Elasticity Eigenproblem This example code solves the linear elasticity eigenvalue problem for a multi-material cantilever beam. Specifically, we compute a number of the lowest eigenmodes by approximating the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = \\lambda {\\bf u} \\,,$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field $\\bf u$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder. The geometry of the domain is assumed to be as follows: The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex12p.cpp ) version. We recommend viewing examples 2 and 11 before viewing this example. Example 13: Maxwell Eigenproblem This example code solves the Maxwell (electromagnetic) eigenvalue problem $$\\nabla\\times\\nabla\\times\\, E = \\lambda\\, E $$ with homogeneous Dirichlet boundary conditions $E \\times n = 0$. We compute a number of the lowest nonzero eigenmodes by discretizing the curl curl operator using a Nedelec finite element space of the specified order in 2D or 3D. The example highlights the use of the AME subspace eigenvalue solver from HYPRE, which uses LOBPCG and AMS internally. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex13p.cpp ) version. We recommend viewing examples 3 and 11 before viewing this example. Example 14: DG Diffusion This example code demonstrates the use of MFEM to define a discontinuous Galerkin (DG) finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Finite element spaces of any order, including zero on regular grids, are supported. The example highlights the use of discontinuous spaces and DG-specific face integrators. The example has a serial ( ex14.cpp ) and a parallel ( ex14p.cpp ) version. We recommend viewing examples 1 and 9 before viewing this example. Example 15: Dynamic AMR Building on Example 6 , this example demonstrates dynamic adaptive mesh refinement. The mesh is adapted to a time-dependent solution by refinement as well as by derefinement. For simplicity, the solution is prescribed and no time integration is done. However, the error estimation and refinement/derefinement decisions are realistic. At each outer iteration the right hand side function is changed to mimic a time dependent problem. Within each inner iteration the problem is solved on a sequence of meshes which are locally refined according to a simple ZZ error estimator. At the end of the inner iteration the error estimates are also used to identify any elements which may be over-refined and a single derefinement step is performed. After each refinement or derefinement step a rebalance operation is performed to keep the mesh evenly distributed among the available processors. The example demonstrates MFEM's capability to refine, derefine and load balance nonconforming meshes, in 2D and 3D, and on linear, curved and surface meshes. 
Interpolation of functions between coarse and fine meshes, persistent GLVis visualization, and saving of time-dependent fields for external visualization with VisIt are also illustrated. The example has a serial ( ex15.cpp ) and a parallel ( ex15p.cpp ) version. We recommend viewing examples 1, 6 and 9 before viewing this example. Example 16: Time Dependent Heat Conduction This example code solves a simple 2D/3D time dependent nonlinear heat conduction problem $$\\frac{du}{dt} = \\nabla \\cdot \\left( \\kappa + \\alpha u \\right) \\nabla u$$ with a natural insulating boundary condition $\\frac{du}{dn} = 0$. We linearize the problem by using the temperature field $u$ from the previous time step to compute the conductivity coefficient. This example demonstrates both implicit and explicit time integration as well as a single Picard step method for linearization. The saving of time dependent data files for external visualization with VisIt is also illustrated. The example has a serial ( ex16.cpp ) and a parallel ( ex16p.cpp ) version. We recommend viewing examples 2, 9, and 10 before viewing this example. Example 17: DG Linear Elasticity This example code solves a simple linear elasticity problem describing a multi-material cantilever beam using symmetric or non-symmetric discontinuous Galerkin (DG) formulation. Specifically, we approximate the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are Dirichlet, $\\bf{u}=\\bf{u_D}$, on the fixed part of the boundary, namely boundary attributes 1 and 2; on the rest of the boundary we use ${\\sigma}({\\bf u})\\cdot n = {\\bf 0}$. The geometry of the domain is assumed to be as follows: The example demonstrates the use of high-order DG vector finite element spaces with the linear DG elasticity bilinear form, meshes with curved elements, and the definition of piece-wise constant and function vector-coefficient objects. The use of non-homogeneous Dirichlet b.c. imposed weakly, is also illustrated. The example has a serial ( ex17.cpp ) and a parallel ( ex17p.cpp ) version. We recommend viewing examples 2 and 14 before viewing this example. Example 18: DG Euler Equations This example code solves the compressible Euler system of equations, a model nonlinear hyperbolic PDE, with a discontinuous Galerkin (DG) formulation. The primary purpose is to show how a transient system of nonlinear equations can be formulated in MFEM. The equations are solved in conservative form $$\\frac{\\partial u}{\\partial t} + \\nabla \\cdot {\\bf F}(u) = 0$$ with a state vector $u = [ \\rho, \\rho v_0, \\rho v_1, \\rho E ]$, where $\\rho$ is the density, $v_i$ is the velocity in the $i^{\\rm th}$ direction, $E$ is the total specific energy, and $H = E + p / \\rho$ is the total specific enthalpy. The pressure, $p$ is computed through a simple equation of state (EOS) call. The conservative hydrodynamic flux ${\\bf F}$ in each direction $i$ is $${\\bf F_{\\it i}} = [ \\rho v_i, \\rho v_0 v_i + p \\delta_{i,0}, \\rho v_1 v_i + p \\delta_{i,1}, \\rho v_i H ]$$ Specifically, the example solves for an exact solution of the equations whereby a vortex is transported by a uniform flow. 
Since all boundaries are periodic here, the method's accuracy can be assessed by measuring the difference between the solution and the initial condition at a later time when the vortex returns to its initial location. Note that as the order of the spatial discretization increases, the timestep must become smaller. This example currently uses a simple estimate derived by Cockburn and Shu for the 1D RKDG method. An additional factor can be tuned by passing the --cfl (or -c shorter) flag. The example demonstrates user-defined nonlinear form with hyperbolic form integrator for systems of equations that are defined with block vectors, and how these are used with an operator for explicit time integrators. In this case the system also involves an external approximate Riemann solver for the DG interface flux. It also demonstrates how to use GLVis for in-situ visualization of vector grid functions. The example has a serial ( ex18.cpp ) and a parallel ( ex18p.cpp ) version. We recommend viewing examples 9, 14 and 17 before viewing this example. Example 19: Incompressible Nonlinear Elasticity This example code solves the quasi-static incompressible nonlinear hyperelasticity equations. Specifically, it solves the nonlinear equation $$ \\nabla \\cdot \\sigma(F) = 0 $$ subject to the constraint $$ \\text{det } F = 1 $$ where $\\sigma$ is the Cauchy stress and $F_{ij} = \\delta_{ij} + u_{i,j}$ is the deformation gradient. To handle the incompressibility constraint, pressure is included as an independent unknown $p$ and the stress response is modeled as an incompressible neo-Hookean hyperelastic solid . The geometry of the domain is assumed to be as follows: This formulation requires solving the saddle point system $$ \\left[ \\begin{array}{cc} K &B^T \\\\ B & 0 \\end{array} \\right] \\left[\\begin{array}{c} \\Delta u \\\\ \\Delta p \\end{array} \\right] = \\left[\\begin{array}{c} R_u \\\\ R_p \\end{array} \\right] $$ at each Newton step. To solve this linear system, we implement a specialized block preconditioner of the form $$ P^{-1} = \\left[\\begin{array}{cc} I & -\\tilde{K}^{-1}B^T \\\\ 0 & I \\end{array} \\right] \\left[\\begin{array}{cc} \\tilde{K}^{-1} & 0 \\\\ 0 & -\\gamma \\tilde{S}^{-1} \\end{array} \\right] $$ where $\\tilde{K}^{-1}$ is an approximation of the inverse of the stiffness matrix $K$ and $\\tilde{S}^{-1}$ is an approximation of the inverse of the Schur complement $S = BK^{-1}B^T$. To approximate the Schur complement, we use the mass matrix for the pressure variable $p$. The example demonstrates how to solve nonlinear systems of equations that are defined with block vectors as well as how to implement specialized block preconditioners for use in iterative solvers. The example has a serial ( ex19.cpp ) and a parallel ( ex19p.cpp ) version. We recommend viewing examples 2, 5 and 10 before viewing this example. Example 20: Symplectic Integration of Hamiltonian Systems This example demonstrates the use of the variable order, symplectic time integration algorithm. Symplectic integration algorithms are designed to conserve energy when integrating systems of ODEs which are derived from Hamiltonian systems. Hamiltonian systems define the energy of a system as a function of time (t), a set of generalized coordinates (q), and their corresponding generalized momenta (p). 
$$ H(q,p,t) = T(p) + V(q,t) $$ Hamilton's equations then specify how q and p evolve in time: $$ \\frac{dq}{dt} = \\frac{dH}{dp}\\,,\\qquad \\frac{dp}{dt} = -\\frac{dH}{dq} $$ To use the symplectic integration classes we need to define an mfem::Operator ${\\bf P}$ which evaluates the action of dH/dp, and an mfem::TimeDependentOperator ${\\bf F}$ which computes -dH/dq. This example visualizes its results as an evolution in phase space by defining the axes to be $q$, $p$, and $t$ rather than $x$, $y$, and $z$. In this space we build a ribbon-like mesh with nodes at $(0,0,t)$ and $(q,p,t)$. Finally we plot the energy as a function of time as a scalar field on this ribbon-like mesh. This scheme highlights any variations in the energy of the system. This example offers five simple 1D Hamiltonians: Simple Harmonic Oscillator (mass on a spring) $$H = \\frac{1}{2}\\left( \\frac{p^2}{m} + \\frac{q^2}{k} \\right)$$ Pendulum $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} - k \\left( 1 - cos(q) \\right) \\right]$$ Gaussian Potential Well $$H = \\frac{p^2}{2m} - k e^{-q^2 / 2}$$ Quartic Potential $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} + k \\left( 1 + q^2 \\right) q^2 \\right]$$ Negative Quartic Potential $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} + k \\left( 1 - \\frac{q^2}{8} \\right) q^2 \\right]$$ In all cases these Hamiltonians are shifted by constant values so that the energy will remain positive. The mean and standard deviation of the computed energies at each time step are displayed upon completion. When run in parallel, each processor integrates the same Hamiltonian system but starting from different initial conditions. The example has a serial ( ex20.cpp ) and a parallel ( ex20p.cpp ) version. See the Maxwell miniapp for another application of symplectic integration. Example 21: Adaptive mesh refinement for linear elasticity This is a version of Example 2 with a simple adaptive mesh refinement loop. The problem being solved is again linear elasticity describing a multi-material cantilever beam. The problem is solved on a sequence of meshes which are locally refined in a conforming (triangles, tetrahedrons) or non-conforming (quadrilaterals, hexahedra) manner according to a simple ZZ error estimator. The example demonstrates MFEM's capability to work with both conforming and nonconforming refinements, in 2D and 3D, on linear and curved meshes. Interpolation of functions from coarse to fine meshes, as well as persistent GLVis visualization are also illustrated. The example has a serial ( ex21.cpp ) and a parallel ( ex21p.cpp ) version. We recommend viewing Examples 2 and 6 before viewing this example. Example 22: Complex Linear Systems This example code demonstrates the use of MFEM to define and solve a complex-valued linear system. It implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ In each case the field is driven by a forced oscillation, with angular frequency $\\omega$, imposed at the boundary or a portion of the boundary. The example also demonstrates how to display a time-varying solution as a sequence of fields sent to a single GLVis socket. 
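Returning to Example 20 above, the following standalone sketch spells out the two operators it asks for, specialized to the simple harmonic oscillator Hamiltonian listed there. The SIAVSolver usage follows ex20.cpp, but the parameter values and time loop are illustrative.

```cpp
// Sketch of the operators P (dH/dp) and F (-dH/dq) for the simple harmonic
// oscillator H = (p^2/m + q^2/k)/2, stepped with the variable order
// symplectic integrator.
#include "mfem.hpp"
#include <iostream>
using namespace mfem;

static double m_ = 1.0, k_ = 1.0;   // illustrative oscillator parameters

// P: evaluates dH/dp = p/m.
class GradT : public Operator
{
public:
   GradT() : Operator(1) {}
   void Mult(const Vector &p, Vector &dq_dt) const override
   { dq_dt.Set(1.0 / m_, p); }
};

// F: evaluates -dH/dq = -q/k (time independent in this case).
class NegGradV : public TimeDependentOperator
{
public:
   NegGradV() : TimeDependentOperator(1) {}
   void Mult(const Vector &q, Vector &dp_dt) const override
   { dp_dt.Set(-1.0 / k_, q); }
};

int main()
{
   GradT P;
   NegGradV F;

   SIAVSolver siav(4);        // symplectic integrator of order 4 (orders 1-4)
   siav.Init(P, F);

   Vector q(1), p(1); q(0) = 1.0; p(0) = 0.0;
   double t = 0.0, dt = 0.01;
   for (int i = 0; i < 1000; i++) { siav.Step(q, p, t, dt); }

   std::cout << "q(" << t << ") = " << q(0) << ", p = " << p(0) << "\n";
   return 0;
}
```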
The example has a serial ( ex22.cpp ) and a parallel ( ex22p.cpp ) version. We recommend viewing examples 1, 3, and 4 before viewing this example. Example 23: Wave Problem This example code solves a simple 2D/3D wave equation with a second order time derivative: $$\\frac{\\partial^2 u}{\\partial t^2} - c^2\\Delta u = 0$$ The boundary conditions are either Dirichlet or Neumann. The example demonstrates the use of time dependent operators, implicit solvers and second order time integration. The example has only a serial ( ex23.cpp ) version. We recommend viewing examples 9 and 10 before viewing this example. Example 24: Mixed finite element spaces This example code illustrates usage of mixed finite element spaces, with three variants: $H^1 \\times H(curl)$ $H(curl) \\times H(div)$ $H(div) \\times L_2$ Using different approaches for demonstration purposes, we project or interpolate a gradient, curl, or divergence in the appropriate spaces, comparing the errors in each case. Partial assembly and GPU devices are supported. The example has a serial ( ex24.cpp ) and a parallel ( ex24p.cpp ) version. We recommend viewing examples 1 and 3 before viewing this example. Example 25: Perfectly Matched Layers The example illustrates the use of a Perfectly Matched Layer (PML) for the simulation of time-harmonic electromagnetic waves propagating in unbounded domains. PML was originally introduced by Berenger in \"A Perfectly Matched Layer for the Absorption of Electromagnetic Waves\" . It is a technique used to solve wave propagation problems posed in infinite domains. The implementation involves the introduction of an artificial absorbing layer that minimizes undesired reflections. Inside this layer a complex coordinate stretching map is used which forces the wave modes to decay exponentially. The example solves the indefinite Maxwell equations $$\\nabla \\times (a \\nabla \\times E) - \\omega^2 b E = f,$$ where $a = \\mu^{-1} |J|^{-1} J^T J$, $b = \\epsilon |J| J^{-1} J^{-T}$ and $J$ is the Jacobian matrix of the coordinate transformation. The example demonstrates discretization with Nedelec finite elements in 2D or 3D, as well as the use of complex-valued bilinear and linear forms. Several test problems are included, with known exact solutions. The example has a serial ( ex25.cpp ) and a parallel ( ex25p.cpp ) version. We recommend viewing Example 22 before viewing this example. Example 26: Multigrid Preconditioner This example code demonstrates the use of MFEM to define a simple isoparametric finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions and how to solve it efficiently using a matrix-free multigrid preconditioner. The example highlights the creation of a hierarchy of discretization spaces and diffusion bilinear forms using partial assembly. The levels in the hierarchy of finite element spaces may be constructed through geometric or order refinements. Moreover, the construction of a multigrid preconditioner for the PCG solver is shown. The multigrid uses a PCG solver on the coarsest level and second order Chebyshev accelerated smoothers on the other levels. The example has a serial ( ex26.cpp ) and a parallel ( ex26p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 27: Laplace Boundary Conditions This example code demonstrates the use of MFEM to define a simple finite element discretization of the Laplace problem: $$ -\\Delta u = 0 $$ with a variety of boundary conditions.
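Before the individual conditions are listed below, here is a minimal, hedged sketch of the usual MFEM pattern for marking which boundary attributes receive an essential (Dirichlet) condition; the mesh, polynomial order, attribute number, and boundary value are illustrative assumptions, not the ones used by ex27.

```cpp
#include "mfem.hpp"
using namespace mfem;

// Hedged sketch (not taken from ex27): apply a Dirichlet condition on one
// subset of boundary attributes; unmarked attributes are left to the natural
// (Neumann-type) treatment implied by the weak form.
int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(8, 8, Element::QUADRILATERAL);
   H1_FECollection fec(2, mesh.Dimension());
   FiniteElementSpace fes(&mesh, &fec);

   // Mark only boundary attribute 1 as "essential" (Dirichlet).
   Array<int> dbc_marker(mesh.bdr_attributes.Max());
   dbc_marker = 0;
   dbc_marker[0] = 1;                       // attribute 1 -> Dirichlet

   Array<int> ess_tdof_list;                // would be passed to FormLinearSystem
   fes.GetEssentialTrueDofs(dbc_marker, ess_tdof_list);

   GridFunction u(&fes);
   u = 0.0;
   ConstantCoefficient u_dbc(1.0);          // prescribed boundary value
   u.ProjectBdrCoefficient(u_dbc, dbc_marker);
   return 0;
}
```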
Specifically, we discretize using a continuous or discontinuous finite element space of the specified order. We then apply Dirichlet, Neumann (both homogeneous and inhomogeneous), Robin, and Periodic boundary conditions on different portions of a predefined mesh. Boundary conditions: $u = u_{dbc}$ on $\\Gamma_{dbc}$ $\\hat{n}\\cdot\\nabla u = g_{nbc}$ on $\\Gamma_{nbc}$ $\\hat{n}\\cdot\\nabla u = 0$ on $\\Gamma_{nbc_0}$ $\\hat{n}\\cdot\\nabla u + a u = b$ on $\\Gamma_{rbc}$ as well as periodic boundary conditions which are enforced topologically. The example has a serial ( ex27.cpp ) and a parallel ( ex27p.cpp ) version. We recommend viewing examples 1 and 14 before viewing this example. Example 28: Constraints and Sliding Boundary Conditions This example code illustrates the use of constraints in linear solvers by solving an elasticity problem where the normal component of the displacement is constrained to zero on two boundaries but tangential displacement is allowed. The constraints can be enforced in several different ways, including eliminating them from the linear system or solving a saddle-point system that explicitly includes constraint conditions. The example has a serial ( ex28.cpp ) and a parallel ( ex28p.cpp ) version. We recommend viewing example 2 before viewing this example. Example 29: Solving PDEs on embedded surfaces This example demonstrates setting up and solving an anisotropic Laplace problem $$-\\nabla\\cdot(\\sigma\\nabla u) = 1 \\quad\\text{in } \\Omega$$ with homogeneous Dirichlet boundary conditions $$ u = 0 \\quad\\text{on } \\partial\\Omega$$ where $\\Omega$ is a two dimensional curved surface embedded in three dimensions and $\\sigma$ is an anisotropic diffusion tensor. The example demonstrates and validates our DiffusionIntegrator 's ability to properly integrate three dimensional fluxes on a two dimensional domain. Not all of our integrators currently support such cases but the DiffusionIntegrator can be used as a simple example of how to extend other integrators when necessary. The example has a serial ( ex29.cpp ) and a parallel ( ex29p.cpp ) version. We recommend viewing examples 1 and 7 before viewing this example. Example 30: Resolving rough and fine-scale problem data Unresolved problem data will affect the accuracy of a discretized PDE solution as well as a posteriori estimates of the solution error. This example uses a CoefficientRefiner object to preprocess an input mesh until the resolution of the prescribed problem data $f \\in L^2$ is below a prescribed tolerance. In this example, the resolution is identified with a data oscillation function on the mesh $\\mathcal{T}$, defined as $$ \\mathrm{osc}(f) = \\Big( \\sum_{T\\in\\mathcal{T}} \\| h \\cdot (I - \\Pi)\\, f \\|^2_{L^2(T)} \\Big)^{1/2}, $$ where $h$ is the local element size function, $\\Pi$ is a finite element projection operator, and the sum is taken over all elements $T$ in the mesh. In this example, the coarse initial mesh is adaptively refined until $\\mathrm{osc}(f)$ is below a prescribed tolerance for various candidate functions $f \\in L^2$. When using rough problem data, it is recommended to perform this type of preprocessing before a posteriori error estimation. The example has a serial ( ex30.cpp ) and a parallel ( ex30p.cpp ) version. We recommend viewing examples 1 and 6 before viewing this example.
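As a rough illustration of the preprocessing idea (not the actual ex30 implementation, which uses MFEM's CoefficientRefiner, adaptive refinement, and the h-weighted oscillation above), the sketch below refines a mesh until the unweighted $L^2$ distance between a rough datum $f$ and its piecewise-polynomial approximation drops below a tolerance; the datum, tolerance, and use of uniform refinement are illustrative assumptions.

```cpp
#include "mfem.hpp"
#include <cmath>
#include <iostream>
using namespace mfem;

// Hypothetical sketch: refine until the rough datum f is "resolved" by a
// low-order L2 space. ex30 measures the h-weighted oscillation osc(f) and
// refines adaptively; here interpolation and uniform refinement stand in.
int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(4, 4, Element::QUADRILATERAL);
   FunctionCoefficient f([](const Vector &x)
   { return std::copysign(1.0, std::sin(20.0 * x(0)) * std::sin(20.0 * x(1))); });

   const double tol = 1e-2;
   for (int it = 0; it < 8; it++)
   {
      if (it > 0) { mesh.UniformRefinement(); }   // ex30 refines adaptively instead
      L2_FECollection fec(1, mesh.Dimension());
      FiniteElementSpace fes(&mesh, &fec);
      GridFunction fh(&fes);
      fh.ProjectCoefficient(f);                   // stand-in for the projection Pi f
      double err = fh.ComputeL2Error(f);          // || f - Pi f ||_{L2(Omega)}
      std::cout << "iteration " << it << ": error " << err << std::endl;
      if (err < tol) { break; }
   }
   return 0;
}
```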
Example 31: Anisotropic Definite Maxwell Problem This example code solves a simple electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + \\sigma E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". In this example $\\sigma$ is an anisotropic 3x3 tensor. Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 1D, 2D, or 3D. The example demonstrates the use of restricted $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. These restricted spaces allow the solution of 1D or 2D electromagnetic problems which involve 3D field vectors. Such problems arise in plasma physics and crystallography. The example has a serial ( ex31.cpp ) and a parallel ( ex31p.cpp ) version. We recommend viewing example 3 before viewing this example. Example 32: Anisotropic Maxwell Eigenproblem This example code solves the Maxwell (electromagnetic) eigenvalue problem with anisotropic permittivity, $\\epsilon$ $$\\nabla\\times\\nabla\\times\\, E = \\lambda\\, \\epsilon E $$ with homogeneous Dirichlet boundary conditions $E \\times n = 0$. We compute a number of the lowest nonzero eigenmodes by discretizing the curl curl operator using a Nedelec finite element space of the specified order in 1D, 2D, or 3D. The example demonstrates the use of restricted $H(curl)$ finite element spaces in an eigenmode context. These restricted spaces allow the solution of 1D or 2D electromagnetic problems which involve 3D field vectors. Such problems arise in plasma physics and crystallography. The example highlights the use of the AME subspace eigenvalue solver from HYPRE, which uses LOBPCG and AMS internally. Reusing multiple GLVis visualization windows for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex32p.cpp ) version. We recommend viewing examples 13 and 31 before viewing this example. Example 33: Spectral fractional Laplacian This example code demonstrates the use of MFEM to solve the fractional Laplacian problem $$ (-\\Delta)^\\alpha u = 1, \\quad \\alpha > 0, $$ with homogeneous Dirichlet boundary conditions. The problem solved in this example is similar to ex1 , but involves a fractional-order diffusion operator whose inverse can be approximated by a series of inverses of integer-order diffusion operators. Solving each of these independent integer-order PDEs with MFEM and summing their solutions results in a discrete solution to the fractional Laplacian problem above. The example has a serial ( ex33.cpp ) and a parallel ( ex33p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 34: Source Function using a SubMesh Transfer This example demonstrates the use of a SubMesh object to transfer solution data from a sub-domain and use this as a source function on the full domain. In this case we compute a volumetric current density $\\vec{J}$ as the gradient of a scalar potential $\\varphi$ on a portion of the domain. $$\\nabla\\cdot(\\sigma\\nabla\\varphi)=0$$ $$\\vec{J} = -\\sigma\\nabla\\varphi$$ Where a voltage difference is applied on surfaces of the sub-domain (shown on the left) to generate the current density restricted to this sub-domain. The current density is then transferred to the full domain (shown on the right) using a SubMesh object. 
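A heavily hedged sketch of this sub-domain-to-parent transfer pattern is shown below; it assumes MFEM's SubMesh interface roughly as used by ex34 and the multidomain miniapp, and the mesh file, domain attribute, $H^1$ space, and "potential" field are illustrative stand-ins (ex34 itself transfers Raviart-Thomas fields and solves an actual potential problem on the sub-domain).

```cpp
#include "mfem.hpp"
using namespace mfem;

// Hedged sketch of the SubMesh workflow: build a SubMesh from selected domain
// attributes, define a field there, then transfer it to the full mesh. The
// mesh file is assumed to be available from MFEM's data directory.
int main()
{
   Mesh mesh("beam-hex.mesh", 1, 1);           // any multi-attribute mesh
   Array<int> domain_attrs(1);
   domain_attrs[0] = 1;                        // sub-domain = attribute 1
   auto submesh = SubMesh::CreateFromDomain(mesh, domain_attrs);

   H1_FECollection fec(2, mesh.Dimension());
   FiniteElementSpace fes_full(&mesh, &fec), fes_sub(&submesh, &fec);

   GridFunction phi_sub(&fes_sub), phi_full(&fes_full);
   FunctionCoefficient voltage([](const Vector &x) { return x(0); });
   phi_sub.ProjectCoefficient(voltage);        // stand-in for the sub-domain solve
   phi_full = 0.0;

   SubMesh::Transfer(phi_sub, phi_full);       // map sub-domain values to the parent
   return 0;
}
```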
We then use this current density on the full domain as a source term in a magnetostatic solve for a vector potential $\\vec{A}$. $$\\nabla\\times(\\mu^{-1}\\nabla\\times\\vec{A})=\\vec{J}$$ $$\\vec{B} = \\nabla\\times\\vec{A}$$ This example verifies the recreation of boundary attributes on a sub-domain mesh as well as transfer of Raviart-Thomas vector fields between the SubMesh and the full Mesh. Note that the data transfer in this particular example involves arbitrary order Raviart-Thomas degrees of freedom on a mixture of tetrahedral and triangular prism elements. The example has a serial ( ex34.cpp ) and a parallel ( ex34p.cpp ) version. We recommend viewing Examples 1 and 3 before viewing this example. Example 35: Port Boundary Conditions using SubMesh Transfers This example demonstrates the use of a SubMesh object to transfer a port boundary condition from a portion of the boundary to the corresponding portion of the full domain. Just as in Example 22 this example implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0\\mbox{ with }u|_\\Gamma=v$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0\\mbox{ with }\\hat{n}\\times(\\vec{u}\\times\\hat{n})|_\\Gamma=\\vec{v}$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0\\mbox{ with }\\hat{n}\\cdot\\vec{u}|_\\Gamma=v$$ Where $\\Gamma$ is a portion of the boundary called the port . In each case the field is driven by a forced oscillation, with angular frequency $\\omega$, imposed at the boundary or a portion of the boundary. In Example 22 this boundary condition was simply a constant in space. In this example the boundary condition is an eigenmode of a lower dimensional eigenvalue problem defined on a portion of the boundary as follows: For the scalar $H^1$ field: $$-\\nabla\\cdot\\left(\\nabla v\\right) = \\lambda\\,v\\mbox{ with }v|_{\\partial\\Gamma}=0$$ For the vector $H(curl)$ field: $$\\nabla\\times\\left(\\nabla\\times\\vec{v}\\right) = \\lambda\\,\\vec{v}\\mbox{ with }\\hat{n}_{\\partial\\Gamma}\\times\\vec{v}|_{\\partial\\Gamma}=0$$ For the vector $H(div)$ field: $$-\\nabla\\cdot\\left(\\nabla v\\right) = \\lambda\\,v\\mbox{ with }\\hat{n}_{\\partial\\Gamma}\\cdot\\nabla v|_{\\partial\\Gamma}=0$$ The different cases implemented in this example can be used to verify the transfer of an $H^1$ scalar field, the tangential components of an $H(curl)$ vector field, and the normal component of an $H(div)$ vector field (as a scalar $L^2$ field in this case) between a SubMesh and its parent mesh. The example has only a parallel ( ex35p.cpp ) version because the eigenmode solver used to compute the field on the port is only implemented in parallel. We recommend viewing Examples 11, 13, and 22 before viewing this example. Example 36: Obstacle Problem This example code solves the pointwise bound-constrained energy minimization problem $$ \\text{minimize } \\frac{1}{2}\\|\\nabla u\\|^2 \\text{ in } H^1_0(\\Omega)\\, \\text{ subject to } u \\ge \\varphi\\,.$$ This is known as the obstacle problem, and it is a classical motivating example in the study of variational inequalities and free boundary problems. In this example, the obstacle $\\varphi$ is the graph of a half-sphere centered at the origin of a circular domain $\\Omega$. 
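One plausible way to encode such an obstacle as an MFEM Coefficient is sketched below; the half-sphere radius and the value assigned outside its support are illustrative assumptions rather than the choices made in ex36.

```cpp
#include "mfem.hpp"
#include <cmath>
using namespace mfem;

// Minimal sketch (not from ex36 itself): the graph of a half-sphere of radius
// r0 centered at the origin, extended by a large negative value outside its
// support so that it never binds there.
int main()
{
   const double r0 = 0.5;
   FunctionCoefficient obstacle([r0](const Vector &x)
   {
      double r2 = x(0) * x(0) + x(1) * x(1);
      return (r2 < r0 * r0) ? std::sqrt(r0 * r0 - r2) : -1.0;
   });
   (void) obstacle; // in the example this serves as the lower bound u >= phi
   return 0;
}
```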
After solving to a specified tolerance, the numerical solution is compared to a closed-form exact solution to assess its accuracy. The problem is solved using the Proximal Galerkin finite element method, which is a nonlinear, structure-preserving mixed method for pointwise bound constraints proposed by Keith and Surowiec . In turn, this example highlights MFEM's ability to deliver high-order solutions to variational inequalities and showcases how to set up and solve nonlinear mixed methods. The example has a serial ( ex36.cpp ) and a parallel ( ex36p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 37: Topology Optimization Density field $\\rho$ Problem set-up and domain $\\Omega$ This example code solves a classical cantilever beam topology optimization problem. The aim is to find an optimal material density field $\\rho$ in $L^1(\\Omega)$ to minimize the elastic compliance; i.e., $$\\begin{align} &\\text{minimize} \\int_\\Omega \\mathbf{f} \\cdot \\mathbf{u}(\\rho)\\, \\mathrm{d}x\\, \\text{ over }\\, \\rho \\in L^1(\\Omega) \\\\ &\\text{subject to }\\, 0 \\leq \\rho \\leq 1\\, \\text{ and } \\int_\\Omega \\rho\\, \\mathrm{d}x = \\theta\\, \\mathrm{vol}(\\Omega) \\,. \\end{align}$$ In this problem, $\\mathbf{f}$ is a localized force and the linearly elastic displacement field $\\mathbf{u} = \\mathbf{u}(\\rho)$ is determined by a material density field $\\rho$ with total volume fraction $0<\\theta<1$. The problem is solved using a mirror descent algorithm proposed by Keith and Surowiec . For further details, see the more elaborate description of this PDE-constrained optimization problem given in the example code and the aforementioned paper. The example has a serial ( ex37.cpp ) and a parallel ( ex37p.cpp ) version. We recommend viewing Example 2 before viewing this example. Example 38: Cut-Volume and Cut-Surface Integration This example code demonstrates construction of cut-surface and cut-volume IntegrationRules. The cut is specified by the zero level set of a given Coefficient $\\phi$. The resulting IntegrationRules are combined with standard LinearFormIntegrators to demonstrate integration of a function $u$ over an implicit interface, and a subdomain bounded by an implicit interface: $$ S = \\int_{\\phi = 0} u(x) ~ ds, \\quad V = \\int_{\\phi > 0} u(x) ~ dx. $$ The IntegrationRules are constructed by the moment-fitting algorithm introduced by M\u00fcller, Kummer and Oberlack . Through a set of basis functions, for each element the method defines and solves a local under-determined system for the vector of quadrature weights. All surface and volume integrals, which are required to form the system, are reduced to 1D integration over intersected segments. The example has only a serial ( ex38.cpp ) version, because the construction of the integration rules is an element-local procedure. It requires MFEM to be built with LAPACK, which is used to find the optimal solution of an under-determined system of equations. Example 39: Named Attribute Sets This example uses the Poisson equation to demonstrate the use of named attribute sets in MFEM to specify material regions, boundary regions, or source regions by name rather than attribute numbers. It also demonstrates how new named attribute sets may be created from arbitrary groupings of attribute numbers and used as a convenient shorthand to refer to those groupings in other portions of the application or through the command line. Named attribute sets also required changes to MFEM's mesh file formats. 
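Conceptually (this is a plain C++ illustration of the idea, not MFEM's attribute-set API), a named attribute set is just a labeled grouping of attribute numbers that can be combined with other sets and then converted into the familiar marker arrays:

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// Conceptual sketch only: named sets of attribute numbers, merged into a new
// named set and turned into a marker array of length max_attr.
int main()
{
   std::map<std::string, std::set<int>> named_sets = {
      {"Base", {1, 3}},
      {"Fins", {2, 4, 5}}
   };

   std::set<int> body = named_sets["Base"];
   body.insert(named_sets["Fins"].begin(), named_sets["Fins"].end());
   named_sets["Body"] = body;                      // new set from existing groupings

   const int max_attr = 5;
   std::vector<int> marker(max_attr, 0);
   for (int a : named_sets["Body"]) { marker[a - 1] = 1; }

   for (int m : marker) { std::cout << m << " "; } // 1 1 1 1 1
   std::cout << std::endl;
   return 0;
}
```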
This example makes use of a custom input mesh file ( compass.msh ) produced using Gmsh, which includes named regions and boundaries. A related mesh file ( compass.mesh ) illustrates MFEM's representation of the new named attribute sets. See file formats for details of the augmented mesh file format. The example has a serial ( ex39.cpp ) and a parallel ( ex39p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 40: Eikonal Equation This example highlights MFEM's ability to solve a fully-nonlinear, first-order PDE with high-order finite elements. In particular, this example uses the proximal Galerkin method to solve the eikonal equation, $$ |\\nabla u| = 1 \\text{ in } \\Omega, \\quad u = 0 \\text{ on } \\partial \\Omega. $$ At each point $x$ in the domain $\\Omega$, the solution of this PDE provides the Euclidean distance to the domain boundary, $u(x) = \\min \\{ | x - y| : y \\in \\partial \\Omega\\}$. The problem is solved by recasting $u$ as the solution of the nonlinear program $$ \\text{maximize } \\int_\\Omega u\\, \\mathrm{d} x\\, \\text{ in } W^{1,\\infty}_0(\\Omega)\\, \\text{ subject to } |\\nabla u | \\leq 1 \\text{ a.e. in } \\Omega.$$ A solution is then obtained by discretizing and solving a sequence of nonlinear saddle-point problems. See the example code for a more detailed description of the method. The example has a serial ( ex40.cpp ) and a parallel ( ex40p.cpp ) version. We recommend viewing Example 5 and Example 36 before viewing this example. NURBS Example 1: Laplace Problem This example code demonstrates the use of MFEM to define a simple isogeometric NURBS discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. The problem solved in this example is the same as Example 1 . The example has a serial ( nurbs_ex1.cpp ) and a parallel ( nurbs_ex1p.cpp ) version. There is also a version that demonstrates efficient patchwise quadrature ( nurbs_ex1 patch.cpp ). NURBS Example 3: Definite Maxwell Problem This example code solves a simple electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with NURBS-based $H(curl)$ elements in 2D or 3D. The problem solved in this example is the same as Example 3 . The example has only a serial ( nurbs_ex1.cpp ) version. NURBS Example 5: Darcy Problem This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system $$ \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} $$ with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize the velocity ($\\bf u$) with NURBS-based $H(div)$ elements and the pressure ($p$) with compatible NURBS-based $H^1$ elements. The problem solved in this example is the same as Example 5 . The example has only a serial ( nurbs_ex5.cpp ) version. NURBS Example 11: Laplace Eigenproblem This example code demonstrates the use of MFEM to solve the eigenvalue problem $$-\\Delta u = \\lambda u$$ with homogeneous Dirichlet boundary conditions.
We compute a number of the lowest eigenmodes by discretizing the Laplacian and Mass operators using a finite element space of the specified order, or an isoparametric/isogeometric space if order < 1 (quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE, as well as optionally the SuperLU or STRUMPACK parallel direct solvers. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The problem solved in this example is the same as Example 11 . The example has only a parallel ( nurbs_ex11p.cpp ) version. NURBS Example 24: Mixed finite element spaces The problem solved in this example is the same as Example 24 , but NURBS-based elements are also supported. This example code illustrates usage of mixed finite element spaces, with three variants: $H^1 \\times H(curl)$ $H(curl) \\times H(div)$ $H(div) \\times L_2$ Using different approaches for demonstration purposes, we project or interpolate a gradient, curl, or divergence in the appropriate spaces, comparing the errors in each case. The example has a serial ( nurbs_ex24.cpp ). Volta Miniapp: Electrostatics This miniapp demonstrates the use of MFEM to solve realistic problems in the field of linear electrostatics. Its features include: dielectric materials charge densities surface charge densities prescribed voltages applied polarizations high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( volta.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Tesla Miniapp: Magnetostatics This miniapp showcases many of MFEM's features while solving a variety of realistic magnetostatics problems. Its features include: diamagnetic and/or paramagnetic materials ferromagnetic materials volumetric current densities surface current densities external fields high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( tesla.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Maxwell Miniapp: Transient Full-Wave Electromagnetics This miniapp solves the equations of transient full-wave electromagnetics. Its features include: mixed formulation of the coupled first-order Maxwell equations $H(\\mathrm{curl})$ discretization of the electric field $H(\\mathrm{div})$ discretization of the magnetic flux energy conserving, variable order, implicit time integration dielectric materials diamagnetic and/or paramagnetic materials conductive materials volumetric current densities Sommerfeld absorbing boundary conditions high order meshes high order basis functions advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( maxwell.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Joule Miniapp: Transient Magnetics and Joule Heating This miniapp solves the equations of transient low-frequency (a.k.a. eddy current) electromagnetics, and simultaneously computes transient heat transfer with the heat source given by the electromagnetic Joule heating. 
Its features include: $H^1$ discretization of the electrostatic potential $H(\\mathrm{curl})$ discretization of the electric field $H(\\mathrm{div})$ discretization of the magnetic field $H(\\mathrm{div})$ discretization of the heat flux $L^2$ discretization of the temperature implicit transient time integration high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( joule.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Mobius Strip Miniapp This miniapp generates various Mobius strip-like surface meshes. It is a good way to generate complex surface meshes. Manipulating the mesh topology and performing mesh transformation are demonstrated. The mobius-strip mesh in the data directory was generated with this miniapp. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( mobius-strip.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Klein Bottle Miniapp This miniapp generates three types of Klein bottle surfaces. It is similar to the mobius-strip miniapp. Manipulating the mesh topology and performing mesh transformation are demonstrated. The klein-bottle and klein-donut meshes in the data directory were generated with this miniapp. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( klein-bottle.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Toroid Miniapp This miniapp generates two types of toroidal volume meshes; one with triangular cross sections and one with square cross sections. It works by defining a stack of individual elements and bending them so that the bottom and top of the stack can be joined to form a torus. It supports various options including: The element type: 0 - Wedge, 1 - Hexahedron The geometric order of the elements The major and minor radii The number of elements in the azimuthal direction The number of nodes to offset by before rejoining the stack The initial angle of the cross sectional shape The number of uniform refinement steps to apply Along with producing some visually interesting meshes, this miniapp demonstrates how simple 3D meshes can be constructed and transformed in MFEM. It also produces a family of meshes with simple but non-trivial topology for testing various features in MFEM. This miniapp has only a serial ( toroid.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Twist Miniapp This miniapp generates simple periodic meshes to demonstrate MFEM's handling of periodic domains. MFEM's strategy is to use a discontinuous vector field to define the mesh coordinates on a topologically periodic mesh. It works by defining a stack of individual elements and stitching together the top and bottom of the mesh. The stack can also be twisted so that the vertices of the bottom and top can be joined with any integer offset (for tetrahedral and wedge meshes only even offsets are supported). 
The Twist miniapp supports various options including: The element type: 4 - Tetrahedron, 6 - Wedge, 8 - Hexahedron The geometric order of the elements The dimensions of the initial brick-shaped stack of elements The number of elements in the z direction The number of nodes to offset by before rejoining the stack The number of uniform refinement steps to apply Along with producing some visually interesting meshes, this miniapp demonstrates how simple 3D meshes can be constructed and transformed in MFEM. It also produces a family of meshes with simple but non-trivial topology for testing various features in MFEM. This miniapp has only a serial ( twist.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Extruder Miniapp This miniapp creates higher dimensional meshes from lower dimensional meshes by extrusion. Simple coordinate transformations can also be applied if desired. The initial mesh can be 1D or 2D 1D meshes can be extruded in both the y and z directions 2D meshes can be triangular, quadrilateral, or contain both element types Meshes with high order geometry are supported User can specify the number of elements and the distance to extrude Geometric order of the transformed mesh can be user selected or automatic This miniapp provides another demonstration of how simple meshes can be constructed and transformed in MFEM. This miniapp has only a serial ( extruder.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Trimmer Miniapp This miniapp creates a new mesh file from an existing mesh by trimming away elements with selected attributes. Newly exposed boundary elements will be assigned new or user specified boundary attributes. The initial mesh can be 2D or 3D Meshes with high order geometry are supported Periodic meshes are supported NURBS meshes are not supported This miniapp provides another demonstration of how simple meshes can be constructed in MFEM. This miniapp has only a serial ( trimmer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Polar-NC Miniapp This miniapp generates a circular sector mesh that consist of quadrilaterals and triangles of similar sizes. The 3D version of the mesh is made of prisms and tetrahedra. The mesh is non-conforming by design, and can optionally be made curvilinear. The elements are ordered along a space-filling curve by default, which makes the mesh ready for parallel non-conforming AMR in MFEM. The implementation also demonstrates how to initialize a non-conforming mesh on the fly by marking hanging nodes with Mesh::AddVertexParents . For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( polar-nc.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Shaper Miniapp This miniapp performs multiple levels of adaptive mesh refinement to resolve the interfaces between different \"materials\" in the mesh, as specified by a given material function. It can be used as a simple initial mesh generator, for example in the case when the interface is too complex to describe without local refinement. Both conforming and non-conforming refinements are supported. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( shaper.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. 
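The "construct a simple stack of elements, then transform it" workflow shared by the meshing miniapps above can be imitated in a few lines; the brick dimensions and the bending map in this sketch are illustrative assumptions, not the transformations used by any particular miniapp.

```cpp
#include "mfem.hpp"
#include <cmath>
using namespace mfem;

// Hedged sketch of "construct then transform": build a thin Cartesian brick
// and bend it into a half-ring by mapping the x-coordinate to an angle.
void bend(const Vector &p, Vector &q)
{
   const double pi = 3.141592653589793;
   const double R = 1.0, theta = pi * p(0);   // x in [0,1] -> angle in [0,pi]
   q.SetSize(3);
   q(0) = (R + p(1)) * std::cos(theta);
   q(1) = (R + p(1)) * std::sin(theta);
   q(2) = p(2);
}

int main()
{
   Mesh mesh = Mesh::MakeCartesian3D(32, 2, 2, Element::HEXAHEDRON, 1.0, 0.2, 0.2);
   mesh.Transform(bend);        // apply the coordinate mapping to the mesh nodes
   mesh.Save("bent.mesh");      // write the transformed mesh to disk
   return 0;
}
```

The resulting file can be inspected with GLVis or with the mesh-explorer miniapp described next.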
Mesh Explorer Miniapp This miniapp is a handy tool to examine, visualize and manipulate a given mesh. Some of its features are: visualizing of mesh materials and individual mesh elements mesh scaling, randomization, and general transformation manipulation of the mesh curvature the ability to simulate parallel partitioning quantitative and visual reports of mesh quality For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( mesh-explorer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Mesh Optimizer Miniapp This miniapp performs mesh optimization using the Target-Matrix Optimization Paradigm (TMOP) by P. Knupp , and a global variational minimization approach ( Dobrev et al. ). It minimizes the quantity $$\\sum_T \\int_T \\mu(J(x)),$$ where $T$ are the target (ideal) elements, $J$ is the Jacobian of the transformation from the target to the physical element, and $\\mu$ is the mesh quality metric. This metric can measure shape, size or alignment of the region around each quadrature point. The combination of targets and quality metrics is used to optimize the physical node positions, i.e., they must be as close as possible to the shape / size / alignment of their targets. This code also demonstrates a possible use of nonlinear operators, as well as their coupling to Newton methods for solving minimization problems. Note that the utilized Newton methods are oriented towards avoiding invalid meshes with negative Jacobian determinants. Each Newton step requires the inversion of a Jacobian matrix, which is done through an inner linear solver. The miniapp has a serial ( mesh-optimizer.cpp ) and a parallel ( pmesh-optimizer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Mesh Fitting Miniapp This miniapp builds upon the mesh optimizer miniapp to enable mesh alignment with the zero isosurface of a discrete level-set. The approach is based on Dobrev et al. and Mittal et al. , where we minimize the quantity $$\\sum_T \\int_T \\mu(J(x)) + \\sum_{s \\in S} w \\,\\, \\sigma^2(x_s).$$ Here, the first term controls mesh quality and the second term enforces weak alignment of a selected subset of mesh-nodes ($s \\in S$) with the zero isosurface of the discrete level-set function ($\\sigma$). Click on the image on the right to see a demonstration of this method for generating body-fitted meshes for topology optimization in LiDO to maximize beam stiffness under a downward force on the right wall. The miniapp has a parallel ( pmesh-fitting.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Minimal Surface Miniapp This miniapp solves Plateau's problem: the Dirichlet problem for the minimal surface equation. Options to solve the minimal surface equations of both parametric surfaces as well as surfaces restricted to be graphs of the form $z=f(x,y)$ are supported, including a number of examples such as the Catenoid, Helicoid, Costa and Scherk surfaces. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has a serial ( minimal-surface.cpp ) and a parallel ( pminimal-surface.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. 
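To make the objective $\\sum_T \\int_T \\mu(J(x))$ used by the mesh optimization and mesh fitting miniapps above a bit more concrete, the sketch below evaluates one simple 2D shape metric, $\\mu(J) = |J|^2 / (2\\det J) - 1$, for a given Jacobian; the metric choice and the sample Jacobian are illustrative (the miniapps provide a large family of TMOP metrics).

```cpp
#include "mfem.hpp"
#include <iostream>
using namespace mfem;

// Illustrative 2D shape metric: zero for an ideal element, growing with
// distortion. Assumes det(J) > 0, i.e. a non-inverted element.
double shape_metric(const DenseMatrix &J)
{
   double frob2 = 0.0;                      // |J|^2 (squared Frobenius norm)
   for (int i = 0; i < 2; i++)
      for (int j = 0; j < 2; j++) { frob2 += J(i, j) * J(i, j); }
   return frob2 / (2.0 * J.Det()) - 1.0;
}

int main()
{
   DenseMatrix J(2);
   J(0, 0) = 1.0; J(0, 1) = 0.3;            // a slightly sheared element
   J(1, 0) = 0.0; J(1, 1) = 1.0;
   std::cout << "mu(J) = " << shape_metric(J) << std::endl;
   return 0;
}
```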
Low-Order Refined Transfer Miniapp The lor-transfer miniapp, found under miniapps/tools , demonstrates the capability to generate a low-order refined mesh from a high-order mesh, and to transfer solutions between these meshes. Grid functions can be transferred between the coarse, high-order mesh and the low-order refined mesh using either $L^2$ projection or pointwise evaluation. These transfer operators can be designed to discretely conserve mass and to recover the original high-order solution when transferring a low-order grid function that was obtained by restricting a high-order grid function to the low-order refined space. The miniapp has only a serial ( lor-transfer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Interpolation Miniapps The interpolation miniapps, found under miniapps/gslib , demonstrate the capability to interpolate high-order finite element functions at a given set of points in physical space. These miniapps utilize the gslib library's high-order interpolation utility for quad and hex meshes: Find Points miniapp has a serial ( findpts.cpp ) and a parallel ( pfindpts.cpp ) version, both of which demonstrate the basic procedures for point search and evaluation of grid functions. Field Interp miniapp ( field-interp.cpp ) demonstrates how grid functions can be transferred between meshes. Field Diff miniapp ( field-diff.cpp ) demonstrates how grid functions on two different meshes can be compared with each other. These miniapps require installation of the gslib library. We recommend that new users start with the example codes before moving to the miniapps. Extrapolation Miniapp The extrapolate miniapp, found in the miniapps/shifted directory, extrapolates a finite element function from a set of elements (known values) to the rest of the domain. The set of elements that contains the known values is specified by the positive values of a level set Coefficient. The known values are not modified. The miniapp supports two PDE-based approaches ( Aslam , Bochkov & Gibou ), both of which rely on solving a sequence of advection problems in the direction of the unknown parts of the domain. The extrapolation can be constant (1st order), linear (2nd order), or quadratic (3rd order). These formal orders hold for a limited band around the zero level set; see the above references for further information. The miniapp has only a parallel ( extrapolate.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Distance Solver Miniapp The distance miniapp, found in the miniapps/shifted directory, demonstrates the capability to compute the \"distance\" to a given point source or to the zero level set of a given function. Here \"distance\" refers to the length of the shortest path through the mesh. The input can be a DeltaCoefficient (representing a point source), or any Coefficient (for the case of a level set). The output is a ParGridFunction that can be scalar (representing the scalar distance), or a vector (its magnitude is the distance, and its direction is the starting direction of the shortest path). The miniapp has only a parallel ( distance.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Shifted Diffusion Miniapp The diffusion miniapp, found in the miniapps/shifted directory, demonstrates the capability to formulate a boundary value problem using a surrogate computational domain.
The method uses a distance function to the true boundary to enforce Dirichlet boundary conditions on the (non-aligned) mesh faces, therefore \"shifting\" the location where boundary conditions are imposed. The implementation in the miniapp is a high-order extension of the second-generation shifted boundary method . The miniapp has only a parallel ( diffusion.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Laghos Miniapp Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. The computational motives captured in Laghos include: Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements (triangular and tetrahedral elements can also be used, but with the less efficient full assembly option). Serial and parallel mesh refinement options can be set via a command-line flag. Explicit time-stepping loop with a variety of time integrator options. Laghos supports Runge-Kutta ODE solvers of orders 1, 2, 3, 4 and 6. Continuous and discontinuous high-order finite element discretization spaces of runtime-specified order. Moving (high-order) meshes. Separation between the assembly and the quadrature point-based computations. Point-wise definition of mesh size, time-step estimate and artificial viscosity coefficient. Constant-in-time velocity mass operator that is inverted iteratively on each time step. This is an example of an operator that is prepared once (fully or partially assembled), but is applied many times. The application cost is dominant for this operator. Time-dependent force matrix that is prepared every time step (fully or partially assembled) and is applied just twice per \"assembly\". Both the preparation and the application costs are important for this operator. Domain-decomposed MPI parallelism. Optional in-situ visualization with GLVis and data output for visualization / data analysis with VisIt . The Laghos miniapp is part of the CEED software suite , a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods. See https://github.com/ceed for more information and source code availability. This is an external miniapp, available at https://github.com/CEED/Laghos . Remhos Miniapp Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. The computational motives captured in Remhos include: Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements. Serial and parallel mesh refinement options can be set via a command-line flag. Explicit time-stepping loop with a variety of time integrator options. Remhos supports Runge-Kutta ODE solvers of orders 1, 2, 3, 4 and 6. Discontinuous high-order finite element discretization spaces of runtime-specified order. Moving (high-order) meshes. Mass operator that is local per each zone. It is inverted by iterative or exact methods at each time step. This operator is constant in time (transport mode) or changing in time (remap mode). Options for full or partial assembly. Advection operator that couples neighboring zones. 
It is applied once at each time step. This operator is constant in time (transport mode) or changing in time (remap mode). Options for full or partial assembly. Domain-decomposed MPI parallelism. Optional in-situ visualization with GLVis and data output for visualization and data analysis with VisIt . The Remhos miniapp is part of the CEED software suite , a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods. See https://github.com/ceed for more information and source code availability. This is an external miniapp, available at https://github.com/CEED/Remhos . Navier Miniapp Navier is a miniapp that solves the time-dependent Navier-Stokes equations of incompressible fluid dynamics \\begin{align} \\frac{\\partial u}{\\partial t} + (u \\cdot \\nabla) u - \\frac{1}{Re} \\nabla^2 u + \\nabla p &= f \\\\ \\nabla \\cdot u &= 0 \\end{align} using a spatially high-order finite element discretization. The time-dependent problem is solved using an (up to) third-order implicit-explicit method, which leverages an extrapolation scheme for the convective parts and a backward-difference formulation for the viscous parts of the equation. The miniapp supports: Arbitrary order H1 elements High order mesh elements IMEX (EXTk-BDFk) time-stepping up to third order Convenient interface for new users A variety of test cases and benchmarks This miniapp has only a parallel ( navier_solver.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Block Solvers Miniapp The Block Solvers miniapp, found under miniapps/solvers , compares various linear solvers for the saddle point system obtained from mixed finite element discretization of the Darcy flow problem $$ \\begin{array}{rcl} k{\\bf u} & + \\nabla p & = f \\\\ -\\nabla \\cdot {\\bf u} & & = g \\end{array} $$ The solvers being compared include: The divergence-free solver (coupled and decoupled modes), which is based on a multilevel decomposition of the Raviart-Thomas finite element space and its divergence-free subspace. MINRES preconditioned by the block diagonal preconditioner in ex5p.cpp . For more details, please see the documentation in the miniapps/solvers directory. The miniapp supports: Arbitrary order mixed finite element pair (Raviart-Thomas elements + piecewise discontinuous polynomials) Various combinations of essential and natural boundary conditions Homogeneous or heterogeneous scalar coefficient k This miniapp has only a parallel ( block-solvers.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Overlapping Grids Miniapps Overlapping grids-based frameworks can often make problems tractable that are otherwise inaccessible with a single conforming grid. The following gslib -based miniapps in MFEM demonstrate how to set up and use overlapping grids: The Schwarz Example 1 miniapp in miniapps/gslib has a serial ( schwarz_ex1.cpp ) and a parallel ( schwarz_ex1p.cpp ) version that solves the Poisson problem on overlapping grids. The serial version is restricted to use two overlapping grids, while the parallel version supports an arbitrary number of overlapping grids. The Navier Conjugate Heat Transfer miniapp in miniapps/navier ( navier_cht.cpp ) demonstrates how a conjugate heat transfer problem can be solved with the fluid dynamics (incompressible Navier-Stokes equations) and heat transfer (advection-diffusion equation) PDEs modeled on different meshes.
These miniapps require installation of the gslib library. We recommend that new users start with the example codes before moving to the miniapps. ParELAG AMGe for H(curl) and H(div) Miniapp This is a miniapp that exhibits the ParELAG library and part of its capabilities. The miniapp employs MFEM and ParELAG to solve $H(\\mathrm{curl})$- and $H(\\mathrm{div})$-elliptic forms by an element based algebraic multigrid (AMGe). ParELAG is a library mostly developed at the Center for Applied Scientific Computing of Lawrence Livermore National Laboratory, California, USA. The miniapp uses: A multilevel hierarchy of de Rham complexes of finite element spaces, built by ParELAG; Hiptmair-type (hybrid) smoothers, implemented in ParELAG; AMS (Auxiliary-space Maxwell Solver) or ADS (Auxiliary-space Divergence Solver), from HYPRE, for preconditioning or solving on the coarsest levels. Alternatively, it is possible to precondition or solve the $H(\\mathrm{div})$ form on the coarsest level via a hybridization approach. However, this is not yet implemented in ParELAG for the coarse levels. Only the hybridization solver that is directly applicable to an $H(\\mathrm{div})$-$L^2$ mixed (saddle-point) system is currently available in ParELAG. We recommend viewing ex3p.cpp and ex4p.cpp before viewing this miniapp. For more details, please see the documentation in the miniapps/parelag directory. This miniapp has only a parallel ( MultilevelHcurlHdivSolver.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Generating Gaussian Random Fields via the SPDE Method This miniapp generates Gaussian random fields on meshed domains $\\Omega \\subset \\mathbb{R}^n$ via the SPDE method. The method exploits a stochastic, fractional PDE whose full-space solutions yield Gaussian random fields with a Mat\u00e9rn covariance. The method was introduced and popularized by Lindgren et. al in 2010. In this miniapp, we use a slightly modified representation following Khristenko et. al . More specifically, we solve the equation \\begin{equation} \\left( -\\frac{1}{2\\nu} \\nabla \\cdot \\left( \\Theta \\nabla \\right) + \\mathbf{1} \\right)^{\\frac{2\\nu+n}{4}} u(x,w) = \\eta W(x,w) \\ \\ \\ \\text{in} \\ \\ \\Omega, \\end{equation} with various boundary conditions. Solving this equation on $\\Omega = \\mathbb{R}^n$ delivers a homogeneous Gaussian random field with zero mean and Mat\u00e9rn covariance, \\begin{align}\\label{eq:MaternCovariance} C(x,y) &= \\sigma^2M_\\nu \\left(\\sqrt{2\\nu}\\, \\| x-y \\|_{\\Theta} \\right) , \\end{align} where $M_\\nu(z) = \\frac{2^{1-\\nu}}{\\Gamma(\\nu)} z ^{\\nu} K_\\nu \\left( z \\right)$ and $\\| x-y \\|_{\\Theta}^2 = (x-y)^\\top\\Theta (x-y)$. The Mat\u00e9rn model provides the regularity parameter $\\nu > 0$ and the anisotropic diffusion tensor $\\Theta \\in \\mathbb{R}^{n\\times n}$, which determines the spatial structure (correlation lengths). However, applying boundary conditions to the SPDE above provides the ability to model a significantly larger class of inhomogeneous random fields on complex domains. For further details, see the miniapp README . We recommend viewing ex33p.cpp before viewing this miniapp. This miniapp ( generate_random_field.cpp ) has only a parallel implementation. It further requires MFEM to be built with LAPACK, otherwise you may only use predefined values for $\\nu$. We recommend that new users start with the example codes before moving to the miniapps. 
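As a quick sanity check of the covariance formula above (a standard Mat\u00e9rn fact, not something specific to this miniapp): for $\\nu = 1/2$ one has $K_{1/2}(z) = \\sqrt{\\pi/(2z)}\\, e^{-z}$, hence $M_{1/2}(z) = e^{-z}$ and, since $\\sqrt{2\\nu} = 1$, the covariance reduces to the exponential kernel $$ C(x,y) = \\sigma^2 e^{-\\| x-y \\|_{\\Theta}}. $$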
Multidomain and SubMesh demonstration Miniapp This miniapp aims to demonstrate how to solve two PDEs, that represent different physics, on the same domain. MFEM's SubMesh interface is used to compute on and transfer between the spaces of predefined parts of the domain. For the sake of simplicity, the spaces on each domain are using the same order H1 finite elements. This does not mean that the approach is limited to this configuration. A 3D domain comprised of an outer box with a cylinder shaped inside is used. A heat equation is described on the outer box domain \\begin{align} \\frac{\\partial T}{\\partial t} &= \\kappa \\Delta T &&\\mbox{in outer box}\\\\ T &= T_{wall} &&\\mbox{on outside wall}\\\\ \\nabla T \\cdot \\vec{n} &= 0 &&\\mbox{on inside (cylinder) wall} \\end{align} with temperature $T$ and coefficient $\\kappa$ (non-physical in this example). A convection-diffusion equation is described inside the cylinder domain \\begin{align} \\frac{\\partial T}{\\partial t} &= \\kappa \\Delta T - \\alpha \\nabla \\cdot (\\vec{b} T) & &\\mbox{in inner cylinder}\\\\ T &= T_{wall} & &\\mbox{on cylinder wall}\\\\ \\nabla T \\cdot \\vec{n} &= 0 & &\\mbox{else} \\end{align} with temperature $T$, coefficients $\\kappa$, $\\alpha$ and prescribed velocity profile $\\vec{b}$, and $T_{wall}$ obtained from the heat equation. To couple the solutions of both equations, a segregated solve with one way coupling approach is used. The heat equation of the outer box is solved from the timestep $T_{box}(t)$ to $T_{box}(t+dt)$. Then for the convection-diffusion equation $T_{wall}$ is set to $T_{box}(t+dt)$ and the equation is solved for $T(t+dt)$ which results in a first-order one way coupling. This miniapp has only a parallel ( multidomain.cpp ) implementation. We recommend that new users start with the example codes before moving to the miniapps. DPG miniapp This miniapp demonstrates how to discretize and solve various PDEs using the Discontinuous Petrov-Galerkin (DPG) method. It utilizes a new user-friendly interface to assemble the block DPG systems arising from the discretization of any DPG formulation (such as Ultraweak or Primal). In addition, the miniapp supports complex-valued systems, static condensation for block systems, and AMR using the built-in DPG residual-based error indicator. This capability is showcased in the following DPG examples in miniapps/dpg . Ultraweak DPG formulation for diffusion . This example solves the simple Poisson equation $$-\u0394 u = f$$ and computes rates of convergence under successive uniform h-refinements for a smooth manufactured solution. The parallel version also includes an AMR implementation for the L-shape benchmark problem. This example has a serial ( diffusion.cpp ) and a parallel ( pdiffusion.cpp ) version. Ultraweak DPG formulation for convection-diffusion . This example solves the convection-diffusion problem: \\begin{align} -\\epsilon \\Delta u + \\nabla \\cdot (\\beta u) &= f\\\\ \\end{align} using AMR. The example demonstrates the use of mesh-dependent test norms which are suitable for problems with solutions that exhibit large gradients present in internal or boundary layers . The example has a serial ( convection-diffusion.cpp ) and a parallel ( pconvection-diffusion.cpp ) version. Ultraweak DPG formulation for time-harmonic linear acoustics . 
This example solves the indefinite Helmholtz equation \\begin{align} -\\Delta u - \\omega^2 u &= f\\\\ \\end{align} The example includes formulations with manufactured plane-wave solutions as well as high-frequency scattering problems and the use of Perfectly Matched Layers (PML). It also demonstrates how to set up complex-valued systems and preconditioners for their solutions. The example has a serial ( acoustics.cpp ) and a parallel ( pacoustics.cpp ) version. Ultraweak DPG formulation for time-harmonic Maxwell . This example solves the indefinite Maxwell problem \\begin{align} \\nabla \\times (\\mu^{-1} \\nabla \\times E) - \\omega^2 \\epsilon E &= J\\\\ \\end{align} The example includes formulations with smooth manufactured solutions, AMR formulations for high-frequency scattering problems, as well as a problem with a singular solution. The example has a serial ( maxwell.cpp ) and a parallel ( pmaxwell.cpp ) version. Tribol miniapp This miniapp demonstrates how to use Tribol's mortar method to solve a contact patch test. A contact patch test places two aligned, linear elastic cubes in contact, then verifies that the exact elasticity solution for this problem is recovered. The exact solution requires transmission of a uniform pressure field across a (not necessarily conforming) interface (i.e. the contact surface). Mortar methods (including the one implemented in Tribol) are generally able to pass the contact patch test. The test assumes small deformations and no accelerations, so the relationship between forces/contact pressures and deformations/contact gaps is linear and, therefore, the problem can be solved exactly with a single linear solve. The mortar implementation is based on Puso and Laursen (2004) . A description of the Tribol implementation is available in Serac documentation . Lagrange multipliers are used to solve for the pressure required to prevent violation of the contact constraints. This miniapp has only a parallel ( contact-patch-test.cpp ) implementation. For more details, please see the documentation in miniapps/tribol/README.md . We recommend that new users start with the example codes before moving to the miniapps. ", "title": "Examples orig"}, {"location": "examples-orig/#example-codes-and-miniapps", "text": "This page provides a brief overview of MFEM's example codes and miniapps. For detailed documentation of the MFEM sources, including the examples, see the online Doxygen documentation , or the doc directory in the distribution. The goal of the example codes is to provide a step-by-step introduction to MFEM in simple model settings. The miniapps are more complex, and are intended to be more representative of the advanced usage of the library in physics/application codes. We recommend that new users start with the example codes before moving to the miniapps. Select from the categories below to display examples and miniapps that contain the respective feature. All examples support (arbitrarily) high-order meshes and finite element spaces . The numerical results from the example codes can be visualized using the GLVis visualization tool (based on MFEM). See the GLVis website for more details. Users are encouraged to submit any example codes and miniapps that they have created and would like to share.
Contact a member of the MFEM team to report bugs or post questions or comments .", "title": "Example Codes and Miniapps"}, {"location": "examples-orig/#example-0-simplest-laplace-problem", "text": "This is the simplest MFEM example and a good starting point for new users. The example demonstrates the use of MFEM to define and solve an $H^1$ finite element discretization of the Laplace problem $$-\\Delta u = 1 \\quad\\text{in } \\Omega$$ with homogeneous Dirichlet boundary conditions $$ u = 0 \\quad\\text{on } \\partial\\Omega$$ The example illustrates the use of the basic MFEM classes for defining the mesh, finite element space, as well as linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. The example has serial ( ex0.cpp ) and parallel ( ex0p.cpp ) versions.", "title": "Example 0: Simplest Laplace Problem"}, {"location": "examples-orig/#example-1-laplace-problem", "text": "This example code demonstrates the use of MFEM to define a simple isoparametric finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Specifically, we discretize with the finite element space coming from the mesh (linear by default, quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The problem solved in this example is the same as ex0 , but with more sophisticated options and features. The example highlights the use of mesh refinement, finite element grid functions, as well as linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. We also cover the explicit elimination of essential boundary conditions, static condensation, and the optional connection to the GLVis tool for visualization. The example has a serial ( ex1.cpp ), a parallel ( ex1p.cpp ), and HPC versions: performance/ex1.cpp , performance/ex1p.cpp . It also has a PETSc modification in examples/petsc , a PUMI modification in examples/pumi and a Ginkgo modification in examples/ginkgo . Partial assembly and GPU devices are supported.", "title": "Example 1: Laplace Problem"}, {"location": "examples-orig/#example-2-linear-elasticity", "text": "This example code solves a simple linear elasticity problem describing a multi-material cantilever beam. Specifically, we approximate the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder with $f$ being a constant pull down vector on boundary elements with attribute 2, and zero otherwise. The geometry of the domain is assumed to be as follows: The example demonstrates the use of high-order and NURBS vector finite element spaces with the linear elasticity bilinear form, meshes with curved elements, and the definition of piece-wise constant and vector coefficient objects. Static condensation is also illustrated. The example has a serial ( ex2.cpp ) and a parallel ( ex2p.cpp ) version. It also has a PETSc modification in examples/petsc and a PUMI modification in examples/pumi . 
We recommend viewing Example 1 before viewing this example.", "title": "Example 2: Linear Elasticity"}, {"location": "examples-orig/#example-3-definite-maxwell-problem", "text": "This example code solves a simple 3D electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 2D or 3D. The example demonstrates the use of $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. Static condensation is also illustrated. The example has a serial ( ex3.cpp ) and a parallel ( ex3p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly and GPU devices are supported. We recommend viewing examples 1-2 before viewing this example.", "title": "Example 3: Definite Maxwell Problem"}, {"location": "examples-orig/#example-4-grad-div-problem", "text": "This example code solves a simple 2D/3D $H(div)$ diffusion problem corresponding to the second order definite equation $$-{\\rm grad}(\\alpha\\,{\\rm div}(F)) + \\beta F = f$$ with boundary condition $F \\cdot n$ = \"given normal field\". Here we use a given exact solution $F$ and compute the corresponding right hand side $f$. We discretize with the Raviart-Thomas finite elements. The example demonstrates the use of $H(div)$ finite element spaces with the grad-div and $H(div)$ vector finite element mass bilinear form, as well as the computation of discretization error when the exact solution is known. Bilinear form hybridization and static condensation are also illustrated. The example has a serial ( ex4.cpp ) and a parallel ( ex4p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly and GPU devices are supported. We recommend viewing examples 1-3 before viewing this example.", "title": "Example 4: Grad-div Problem"}, {"location": "examples-orig/#example-5-darcy-problem", "text": "This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system $$ \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} $$ with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize with Raviart-Thomas finite elements (velocity $\\bf u$) and piecewise discontinuous polynomials (pressure $p$). The example demonstrates the use of the BlockMatrix and BlockOperator classes, as well as the collective saving of several grid functions in VisIt and ParaView formats. The example has a serial ( ex5.cpp ) and a parallel ( ex5p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly is supported. We recommend viewing examples 1-4 before viewing this example.", "title": "Example 5: Darcy Problem"}, {"location": "examples-orig/#example-6-laplace-problem-with-amr", "text": "This is a version of Example 1 with a simple adaptive mesh refinement loop. The problem being solved is again the Laplace equation $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. 
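For the vector-valued problems of Examples 3 and 4, the setup differs from the scalar case mainly in the finite element collection and the integrators. A minimal sketch for the Example 3 operator, assuming `mesh` and `order` are already defined and `f_exact` is a user-supplied right-hand-side function; this is not ex3.cpp itself.

```cpp
// Nedelec (H(curl)) discretization of curl curl E + E = f.
ND_FECollection fec(order, mesh.Dimension());
FiniteElementSpace fespace(&mesh, &fec);

VectorFunctionCoefficient f(mesh.SpaceDimension(), f_exact);
LinearForm b(&fespace);
b.AddDomainIntegrator(new VectorFEDomainLFIntegrator(f));
b.Assemble();

ConstantCoefficient one(1.0);
BilinearForm a(&fespace);
a.AddDomainIntegrator(new CurlCurlIntegrator(one));      // curl-curl term
a.AddDomainIntegrator(new VectorFEMassIntegrator(one));   // mass term
a.Assemble();

// The H(div) grad-div problem of Example 4 follows the same pattern with
// RT_FECollection, DivDivIntegrator and VectorFEMassIntegrator.
```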
The problem is solved on a sequence of meshes which are locally refined in a conforming (triangles, tetrahedrons) or non-conforming (quadrilaterals, hexahedra) manner according to a simple ZZ error estimator. The example demonstrates MFEM's capability to work with both conforming and nonconforming refinements, in 2D and 3D, on linear, curved and surface meshes. Interpolation of functions from coarse to fine meshes, as well as persistent GLVis visualization are also illustrated. The example has a serial ( ex6.cpp ) and a parallel ( ex6p.cpp ) version. It also has a PETSc modification in examples/petsc and a PUMI modification in examples/pumi . Partial assembly and GPU devices are supported. We recommend viewing Example 1 before viewing this example.", "title": "Example 6: Laplace Problem with AMR"}, {"location": "examples-orig/#example-7-surface-meshes", "text": "This example code demonstrates the use of MFEM to define a triangulation of a unit sphere and a simple isoparametric finite element discretization of the Laplace problem with mass term, $$-\\Delta u + u = f.$$ The example highlights mesh generation, the use of mesh refinement, high-order meshes and finite elements, as well as surface-based linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. Simple local mesh refinement is also demonstrated. The example has a serial ( ex7.cpp ) and a parallel ( ex7p.cpp ) version. We recommend viewing Example 1 before viewing this example.", "title": "Example 7: Surface Meshes"}, {"location": "examples-orig/#example-8-dpg-for-the-laplace-problem", "text": "This example code demonstrates the use of the Discontinuous Petrov-Galerkin (DPG) method in its primal 2x2 block form as a simple finite element discretization of the Laplace problem $$-\\Delta u = f$$ with homogeneous Dirichlet boundary conditions. We use high-order continuous trial space, a high-order interfacial (trace) space, and a high-order discontinuous test space defining a local dual ($H^{-1}$) norm. We use the primal form of DPG, see \"A primal DPG method without a first-order reformulation\" , Demkowicz and Gopalakrishnan, CAM 2013. The example highlights the use of interfacial (trace) finite elements and spaces, trace face integrators and the definition of block operators and preconditioners. The example has a serial ( ex8.cpp ) and a parallel ( ex8p.cpp ) version. We recommend viewing examples 1-5 before viewing this example.", "title": "Example 8: DPG for the Laplace Problem"}, {"location": "examples-orig/#example-9-dg-advection", "text": "This example code solves the time-dependent advection equation $$\\frac{\\partial u}{\\partial t} + v \\cdot \\nabla u = 0,$$ where $v$ is a given fluid velocity, and $u_0(x)=u(0,x)$ is a given initial condition. The example demonstrates the use of Discontinuous Galerkin (DG) bilinear forms in MFEM (face integrators), the use of explicit and implicit (with block ILU preconditioning) ODE time integrators, the definition of periodic boundary conditions through periodic meshes, as well as the use of GLVis for persistent visualization of a time-evolving solution. The saving of time-dependent data files for external visualization with VisIt and ParaView is also illustrated. The example has a serial ( ex9.cpp ) and a parallel ( ex9p.cpp ) version. 
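The time-dependent examples, Example 9 included, share a common driver pattern: the spatial discretization is wrapped in a TimeDependentOperator and advanced by one of MFEM's ODE solvers. A skeleton of that pattern with illustrative class and member names (u is the discrete solution vector; the operator body is a placeholder):

```cpp
// FE_Evolution is an illustrative name: it would wrap the mass and advection
// operators and evaluate du/dt = M^{-1} (-K u + b).
class FE_Evolution : public TimeDependentOperator
{
public:
   FE_Evolution(int size) : TimeDependentOperator(size) { }
   virtual void Mult(const Vector &u, Vector &dudt) const override
   {
      dudt = 0.0;   // placeholder: apply M^{-1}(-K u + b) here
   }
};

// Driver: advance the solution with a classical explicit RK4 scheme.
FE_Evolution adv(u.Size());
RK4Solver ode_solver;
ode_solver.Init(adv);
double t = 0.0, dt = 0.001, t_final = 1.0;
while (t < t_final)
{
   ode_solver.Step(u, t, dt);   // updates u and t in place
}
```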
It also has a SUNDIALS modification in examples/sundials , a PETSc modification in examples/petsc , and a HiOp modification in examples/hiop .", "title": "Example 9: DG Advection"}, {"location": "examples-orig/#example-10-nonlinear-elasticity", "text": "This example solves a time dependent nonlinear elasticity problem of the form $$ \\frac{dv}{dt} = H(x) + S v\\,,\\qquad \\frac{dx}{dt} = v\\,, $$ where $H$ is a hyperelastic model and $S$ is a viscosity operator of Laplacian type. The geometry of the domain is assumed to be as follows: The example demonstrates the use of nonlinear operators, as well as their implicit time integration using a Newton method for solving an associated reduced backward-Euler type nonlinear equation. Each Newton step requires the inversion of a Jacobian matrix, which is done through a (preconditioned) inner solver. The example has a serial ( ex10.cpp ) and a parallel ( ex10p.cpp ) version. It also has a SUNDIALS modification in examples/sundials and a PETSc modification in examples/petsc . We recommend viewing examples 2 and 9 before viewing this example.", "title": "Example 10: Nonlinear Elasticity"}, {"location": "examples-orig/#example-11-laplace-eigenproblem", "text": "This example code demonstrates the use of MFEM to solve the eigenvalue problem $$-\\Delta u = \\lambda u$$ with homogeneous Dirichlet boundary conditions. We compute a number of the lowest eigenmodes by discretizing the Laplacian and Mass operators using a finite element space of the specified order, or an isoparametric/isogeometric space if order < 1 (quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE, as well as optionally the SuperLU or STRUMPACK parallel direct solvers. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex11p.cpp ) version. It also has a SLEPc modification in examples/petsc . We recommend viewing Example 1 before viewing this example.", "title": "Example 11: Laplace Eigenproblem"}, {"location": "examples-orig/#example-12-linear-elasticity-eigenproblem", "text": "This example code solves the linear elasticity eigenvalue problem for a multi-material cantilever beam. Specifically, we compute a number of the lowest eigenmodes by approximating the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = \\lambda {\\bf u} \\,,$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field $\\bf u$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder. The geometry of the domain is assumed to be as follows: The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex12p.cpp ) version. 
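A sketch of the LOBPCG/BoomerAMG combination used by the parallel eigenvalue examples (11 and 12), assuming A (stiffness) and M (mass) are assembled HypreParMatrix objects and nev is the number of requested modes; this is not the examples' actual source.

```cpp
// Preconditioner for the stiffness matrix.
HypreBoomerAMG amg(*A);
amg.SetPrintLevel(0);

// LOBPCG eigensolver for the generalized problem A x = lambda M x.
HypreLOBPCG lobpcg(MPI_COMM_WORLD);
lobpcg.SetNumModes(nev);
lobpcg.SetPreconditioner(amg);
lobpcg.SetMaxIter(200);
lobpcg.SetTol(1e-8);
lobpcg.SetOperator(*A);
lobpcg.SetMassMatrix(*M);
lobpcg.Solve();

Array<double> eigenvalues;
lobpcg.GetEigenvalues(eigenvalues);
// The i-th eigenvector is available via lobpcg.GetEigenvector(i).
```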
We recommend viewing examples 2 and 11 before viewing this example.", "title": "Example 12: Linear Elasticity Eigenproblem"}, {"location": "examples-orig/#example-13-maxwell-eigenproblem", "text": "This example code solves the Maxwell (electromagnetic) eigenvalue problem $$\\nabla\\times\\nabla\\times\\, E = \\lambda\\, E $$ with homogeneous Dirichlet boundary conditions $E \\times n = 0$. We compute a number of the lowest nonzero eigenmodes by discretizing the curl curl operator using a Nedelec finite element space of the specified order in 2D or 3D. The example highlights the use of the AME subspace eigenvalue solver from HYPRE, which uses LOBPCG and AMS internally. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex13p.cpp ) version. We recommend viewing examples 3 and 11 before viewing this example.", "title": "Example 13: Maxwell Eigenproblem"}, {"location": "examples-orig/#example-14-dg-diffusion", "text": "This example code demonstrates the use of MFEM to define a discontinuous Galerkin (DG) finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Finite element spaces of any order, including zero on regular grids, are supported. The example highlights the use of discontinuous spaces and DG-specific face integrators. The example has a serial ( ex14.cpp ) and a parallel ( ex14p.cpp ) version. We recommend viewing examples 1 and 9 before viewing this example.", "title": "Example 14: DG Diffusion"}, {"location": "examples-orig/#example-15-dynamic-amr", "text": "Building on Example 6 , this example demonstrates dynamic adaptive mesh refinement. The mesh is adapted to a time-dependent solution by refinement as well as by derefinement. For simplicity, the solution is prescribed and no time integration is done. However, the error estimation and refinement/derefinement decisions are realistic. At each outer iteration the right hand side function is changed to mimic a time dependent problem. Within each inner iteration the problem is solved on a sequence of meshes which are locally refined according to a simple ZZ error estimator. At the end of the inner iteration the error estimates are also used to identify any elements which may be over-refined and a single derefinement step is performed. After each refinement or derefinement step a rebalance operation is performed to keep the mesh evenly distributed among the available processors. The example demonstrates MFEM's capability to refine, derefine and load balance nonconforming meshes, in 2D and 3D, and on linear, curved and surface meshes. Interpolation of functions between coarse and fine meshes, persistent GLVis visualization, and saving of time-dependent fields for external visualization with VisIt are also illustrated. The example has a serial ( ex15.cpp ) and a parallel ( ex15p.cpp ) version. We recommend viewing examples 1, 6 and 9 before viewing this example.", "title": "Example 15: Dynamic AMR"}, {"location": "examples-orig/#example-16-time-dependent-heat-conduction", "text": "This example code solves a simple 2D/3D time dependent nonlinear heat conduction problem $$\\frac{du}{dt} = \\nabla \\cdot \\left( \\kappa + \\alpha u \\right) \\nabla u$$ with a natural insulating boundary condition $\\frac{du}{dn} = 0$. We linearize the problem by using the temperature field $u$ from the previous time step to compute the conductivity coefficient. 
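The implicit/explicit treatment used by Example 16 rests on MFEM's TimeDependentOperator interface: explicit ODE solvers call Mult, while implicit solvers (e.g. BackwardEulerSolver) call ImplicitSolve each step. A skeleton with illustrative names; the method bodies are placeholders.

```cpp
// ConductionOperator and its members are illustrative; only the
// TimeDependentOperator interface (Mult / ImplicitSolve) is the MFEM API.
class ConductionOperator : public TimeDependentOperator
{
public:
   ConductionOperator(int size) : TimeDependentOperator(size) { }

   // Explicit interface: evaluate du/dt = M^{-1} (-K(u) u).
   virtual void Mult(const Vector &u, Vector &du_dt) const override
   {
      du_dt = 0.0;   // placeholder for the actual evaluation
   }

   // Implicit interface: solve for du/dt from M du/dt = -K(u + dt*du/dt)(u + dt*du/dt).
   virtual void ImplicitSolve(const double dt, const Vector &u, Vector &du_dt) override
   {
      du_dt = 0.0;   // placeholder for the linearized implicit solve
   }
};
```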
This example demonstrates both implicit and explicit time integration as well as a single Picard step method for linearization. The saving of time dependent data files for external visualization with VisIt is also illustrated. The example has a serial ( ex16.cpp ) and a parallel ( ex16p.cpp ) version. We recommend viewing examples 2, 9, and 10 before viewing this example.", "title": "Example 16: Time Dependent Heat Conduction"}, {"location": "examples-orig/#example-17-dg-linear-elasticity", "text": "This example code solves a simple linear elasticity problem describing a multi-material cantilever beam using symmetric or non-symmetric discontinuous Galerkin (DG) formulation. Specifically, we approximate the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are Dirichlet, $\\bf{u}=\\bf{u_D}$, on the fixed part of the boundary, namely boundary attributes 1 and 2; on the rest of the boundary we use ${\\sigma}({\\bf u})\\cdot n = {\\bf 0}$. The geometry of the domain is assumed to be as follows: The example demonstrates the use of high-order DG vector finite element spaces with the linear DG elasticity bilinear form, meshes with curved elements, and the definition of piece-wise constant and function vector-coefficient objects. The use of non-homogeneous Dirichlet b.c. imposed weakly, is also illustrated. The example has a serial ( ex17.cpp ) and a parallel ( ex17p.cpp ) version. We recommend viewing examples 2 and 14 before viewing this example.", "title": "Example 17: DG Linear Elasticity"}, {"location": "examples-orig/#example-18-dg-euler-equations", "text": "This example code solves the compressible Euler system of equations, a model nonlinear hyperbolic PDE, with a discontinuous Galerkin (DG) formulation. The primary purpose is to show how a transient system of nonlinear equations can be formulated in MFEM. The equations are solved in conservative form $$\\frac{\\partial u}{\\partial t} + \\nabla \\cdot {\\bf F}(u) = 0$$ with a state vector $u = [ \\rho, \\rho v_0, \\rho v_1, \\rho E ]$, where $\\rho$ is the density, $v_i$ is the velocity in the $i^{\\rm th}$ direction, $E$ is the total specific energy, and $H = E + p / \\rho$ is the total specific enthalpy. The pressure, $p$ is computed through a simple equation of state (EOS) call. The conservative hydrodynamic flux ${\\bf F}$ in each direction $i$ is $${\\bf F_{\\it i}} = [ \\rho v_i, \\rho v_0 v_i + p \\delta_{i,0}, \\rho v_1 v_i + p \\delta_{i,1}, \\rho v_i H ]$$ Specifically, the example solves for an exact solution of the equations whereby a vortex is transported by a uniform flow. Since all boundaries are periodic here, the method's accuracy can be assessed by measuring the difference between the solution and the initial condition at a later time when the vortex returns to its initial location. Note that as the order of the spatial discretization increases, the timestep must become smaller. This example currently uses a simple estimate derived by Cockburn and Shu for the 1D RKDG method. An additional factor can be tuned by passing the --cfl (or -c shorter) flag. 
The example demonstrates user-defined nonlinear form with hyperbolic form integrator for systems of equations that are defined with block vectors, and how these are used with an operator for explicit time integrators. In this case the system also involves an external approximate Riemann solver for the DG interface flux. It also demonstrates how to use GLVis for in-situ visualization of vector grid functions. The example has a serial ( ex18.cpp ) and a parallel ( ex18p.cpp ) version. We recommend viewing examples 9, 14 and 17 before viewing this example.", "title": "Example 18: DG Euler Equations"}, {"location": "examples-orig/#example-19-incompressible-nonlinear-elasticity", "text": "This example code solves the quasi-static incompressible nonlinear hyperelasticity equations. Specifically, it solves the nonlinear equation $$ \\nabla \\cdot \\sigma(F) = 0 $$ subject to the constraint $$ \\text{det } F = 1 $$ where $\\sigma$ is the Cauchy stress and $F_{ij} = \\delta_{ij} + u_{i,j}$ is the deformation gradient. To handle the incompressibility constraint, pressure is included as an independent unknown $p$ and the stress response is modeled as an incompressible neo-Hookean hyperelastic solid . The geometry of the domain is assumed to be as follows: This formulation requires solving the saddle point system $$ \\left[ \\begin{array}{cc} K &B^T \\\\ B & 0 \\end{array} \\right] \\left[\\begin{array}{c} \\Delta u \\\\ \\Delta p \\end{array} \\right] = \\left[\\begin{array}{c} R_u \\\\ R_p \\end{array} \\right] $$ at each Newton step. To solve this linear system, we implement a specialized block preconditioner of the form $$ P^{-1} = \\left[\\begin{array}{cc} I & -\\tilde{K}^{-1}B^T \\\\ 0 & I \\end{array} \\right] \\left[\\begin{array}{cc} \\tilde{K}^{-1} & 0 \\\\ 0 & -\\gamma \\tilde{S}^{-1} \\end{array} \\right] $$ where $\\tilde{K}^{-1}$ is an approximation of the inverse of the stiffness matrix $K$ and $\\tilde{S}^{-1}$ is an approximation of the inverse of the Schur complement $S = BK^{-1}B^T$. To approximate the Schur complement, we use the mass matrix for the pressure variable $p$. The example demonstrates how to solve nonlinear systems of equations that are defined with block vectors as well as how to implement specialized block preconditioners for use in iterative solvers. The example has a serial ( ex19.cpp ) and a parallel ( ex19p.cpp ) version. We recommend viewing examples 2, 5 and 10 before viewing this example.", "title": "Example 19: Incompressible Nonlinear Elasticity"}, {"location": "examples-orig/#example-20-symplectic-integration-of-hamiltonian-systems", "text": "This example demonstrates the use of the variable order, symplectic time integration algorithm. Symplectic integration algorithms are designed to conserve energy when integrating systems of ODEs which are derived from Hamiltonian systems. Hamiltonian systems define the energy of a system as a function of time (t), a set of generalized coordinates (q), and their corresponding generalized momenta (p). $$ H(q,p,t) = T(p) + V(q,t) $$ Hamilton's equations then specify how q and p evolve in time: $$ \\frac{dq}{dt} = \\frac{dH}{dp}\\,,\\qquad \\frac{dp}{dt} = -\\frac{dH}{dq} $$ To use the symplectic integration classes we need to define an mfem::Operator ${\\bf P}$ which evaluates the action of dH/dp, and an mfem::TimeDependentOperator ${\\bf F}$ which computes -dH/dq. This example visualizes its results as an evolution in phase space by defining the axes to be $q$, $p$, and $t$ rather than $x$, $y$, and $z$. 
In this space we build a ribbon-like mesh with nodes at $(0,0,t)$ and $(q,p,t)$. Finally we plot the energy as a function of time as a scalar field on this ribbon-like mesh. This scheme highlights any variations in the energy of the system. This example offers five simple 1D Hamiltonians: Simple Harmonic Oscillator (mass on a spring) $$H = \\frac{1}{2}\\left( \\frac{p^2}{m} + \\frac{q^2}{k} \\right)$$ Pendulum $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} - k \\left( 1 - cos(q) \\right) \\right]$$ Gaussian Potential Well $$H = \\frac{p^2}{2m} - k e^{-q^2 / 2}$$ Quartic Potential $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} + k \\left( 1 + q^2 \\right) q^2 \\right]$$ Negative Quartic Potential $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} + k \\left( 1 - \\frac{q^2}{8} \\right) q^2 \\right]$$ In all cases these Hamiltonians are shifted by constant values so that the energy will remain positive. The mean and standard deviation of the computed energies at each time step are displayed upon completion. When run in parallel, each processor integrates the same Hamiltonian system but starting from different initial conditions. The example has a serial ( ex20.cpp ) and a parallel ( ex20p.cpp ) version. See the Maxwell miniapp for another application of symplectic integration.", "title": "Example 20: Symplectic Integration of Hamiltonian Systems"}, {"location": "examples-orig/#example-21-adaptive-mesh-refinement-for-linear-elasticity", "text": "This is a version of Example 2 with a simple adaptive mesh refinement loop. The problem being solved is again linear elasticity describing a multi-material cantilever beam. The problem is solved on a sequence of meshes which are locally refined in a conforming (triangles, tetrahedrons) or non-conforming (quadrilaterals, hexahedra) manner according to a simple ZZ error estimator. The example demonstrates MFEM's capability to work with both conforming and nonconforming refinements, in 2D and 3D, on linear and curved meshes. Interpolation of functions from coarse to fine meshes, as well as persistent GLVis visualization are also illustrated. The example has a serial ( ex21.cpp ) and a parallel ( ex21p.cpp ) version. We recommend viewing Examples 2 and 6 before viewing this example.", "title": "Example 21: Adaptive mesh refinement for linear elasticity"}, {"location": "examples-orig/#example-22-complex-linear-systems", "text": "This example code demonstrates the use of MFEM to define and solve a complex-valued linear system. It implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ In each case the field is driven by a forced oscillation, with angular frequency $\\omega$, imposed at the boundary or a portion of the boundary. The example also demonstrates how to display a time-varying solution as a sequence of fields sent to a single GLVis socket. The example has a serial ( ex22.cpp ) and a parallel ( ex22p.cpp ) version. 
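A rough sketch of the complex-valued assembly pattern behind Example 22, using MFEM's complex FEM classes. The coefficient names, values, and sign conventions below are placeholders (fespace, ess_tdof_list, and omega are assumed to be defined), so check them against the example's source.

```cpp
// First (H^1) variant of the damped oscillator, with a = b = c = 1.
ConstantCoefficient stiffnessCoef(1.0);           // a
ConstantCoefficient massCoef(-omega * omega);     // -omega^2 b
ConstantCoefficient lossCoef(omega);              //  omega c

ComplexGridFunction u(&fespace);
u = 0.0;
ComplexLinearForm b(&fespace, ComplexOperator::HERMITIAN);
b.Assemble();                                     // zero RHS; the problem is BC-driven

SesquilinearForm a(&fespace, ComplexOperator::HERMITIAN);
a.AddDomainIntegrator(new DiffusionIntegrator(stiffnessCoef), NULL); // real part
a.AddDomainIntegrator(new MassIntegrator(massCoef), NULL);           // real part
a.AddDomainIntegrator(NULL, new MassIntegrator(lossCoef));           // imaginary part
a.Assemble();

OperatorHandle A;
Vector B, U;
a.FormLinearSystem(ess_tdof_list, u, b, A, U, B);
// ... solve A U = B, then a.RecoverFEMSolution(U, b, u);
```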
We recommend viewing examples 1, 3, and 4 before viewing this example.", "title": "Example 22: Complex Linear Systems"}, {"location": "examples-orig/#example-23-wave-problem", "text": "This example code solves a simple 2D/3D wave equation with a second order time derivative: $$\\frac{\\partial^2 u}{\\partial t^2} - c^2\\Delta u = 0$$ The boundary conditions are either Dirichlet or Neumann. The example demonstrates the use of time dependent operators, implicit solvers and second order time integration. The example has only a serial ( ex23.cpp ) version. We recommend viewing examples 9 and 10 before viewing this example.", "title": "Example 23: Wave Problem"}, {"location": "examples-orig/#example-24-mixed-finite-element-spaces", "text": "This example code illustrates usage of mixed finite element spaces, with three variants: $H^1 \\times H(curl)$ $H(curl) \\times H(div)$ $H(div) \\times L_2$ Using different approaches for demonstration purposes, we project or interpolate a gradient, curl, or divergence in the appropriate spaces, comparing the errors in each case. Partial assembly and GPU devices are supported. The example has a serial ( ex24.cpp ) and a parallel ( ex24p.cpp ) version. We recommend viewing examples 1 and 3 before viewing this example.", "title": "Example 24: Mixed finite element spaces"}, {"location": "examples-orig/#example-25-perfectly-matched-layers", "text": "The example illustrates the use of a Perfectly Matched Layer (PML) for the simulation of time-harmonic electromagnetic waves propagating in unbounded domains. PML was originally introduced by Berenger in \"A Perfectly Matched Layer for the Absorption of Electromagnetic Waves\" . It is a technique used to solve wave propagation problems posed in infinite domains. The implementation involves the introduction of an artificial absorbing layer that minimizes undesired reflections. Inside this layer a complex coordinate stretching map is used which forces the wave modes to decay exponentially. The example solves the indefinite Maxwell equations $$\\nabla \\times (a \\nabla \\times E) - \\omega^2 b E = f,$$ where $a = \\mu^{-1} |J|^{-1} J^T J$, $b = \\epsilon |J| J^{-1} J^{-T}$ and $J$ is the Jacobian matrix of the coordinate transformation. The example demonstrates discretization with Nedelec finite elements in 2D or 3D, as well as the use of complex-valued bilinear and linear forms. Several test problems are included, with known exact solutions. The example has a serial ( ex25.cpp ) and a parallel ( ex25p.cpp ) version. We recommend viewing Example 22 before viewing this example.", "title": "Example 25: Perfectly Matched Layers"}, {"location": "examples-orig/#example-26-multigrid-preconditioner", "text": "This example code demonstrates the use of MFEM to define a simple isoparametric finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions and how to solve it efficiently using a matrix-free multigrid preconditioner. The example highlights the creation of a hierarchy of discretization spaces and diffusion bilinear forms using partial assembly. The levels in the hierarchy of finite element spaces may be constructed through geometric or order refinements. Moreover, the construction of a multigrid preconditioner for the PCG solver is shown. The multigrid uses a PCG solver on the coarsest level and second order Chebyshev accelerated smoothers on the other levels. The example has a serial ( ex26.cpp ) and a parallel ( ex26p.cpp ) version. 
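The projections and interpolations in Example 24 above are typically expressed with MFEM's discrete interpolators. A minimal sketch for the gradient case, assuming mesh and order are defined; this is not the example's actual source.

```cpp
// Interpolate the gradient of an H^1 function into an H(curl) space.
H1_FECollection h1_fec(order, mesh.Dimension());
ND_FECollection nd_fec(order, mesh.Dimension());
FiniteElementSpace h1_fes(&mesh, &h1_fec);
FiniteElementSpace nd_fes(&mesh, &nd_fec);

GridFunction u_h1(&h1_fes), grad_u(&nd_fes);
// ... set u_h1, e.g. u_h1.ProjectCoefficient(some_coefficient);

DiscreteLinearOperator grad(&h1_fes, &nd_fes);
grad.AddDomainInterpolator(new GradientInterpolator);
grad.Assemble();
grad.Finalize();
grad.Mult(u_h1, grad_u);   // grad_u now holds the H(curl) interpolant of grad(u_h1)

// CurlInterpolator (H(curl) -> H(div)) and DivergenceInterpolator (H(div) -> L2)
// follow the same pattern.
```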
We recommend viewing Example 1 before viewing this example.", "title": "Example 26: Multigrid Preconditioner"}, {"location": "examples-orig/#example-27-laplace-boundary-conditions", "text": "This example code demonstrates the use of MFEM to define a simple finite element discretization of the Laplace problem: $$ -\\Delta u = 0 $$ with a variety of boundary conditions. Specifically, we discretize using a continuous or discontinuous FE space of the specified order. We then apply Dirichlet, Neumann (both homogeneous and inhomogeneous), Robin, and Periodic boundary conditions on different portions of a predefined mesh. Boundary conditions: $u = u_{dbc}$ on $\\Gamma_{dbc}$ $\\hat{n}\\cdot\\nabla u = g_{nbc}$ on $\\Gamma_{nbc}$ $\\hat{n}\\cdot\\nabla u = 0$ on $\\Gamma_{nbc_0}$ $\\hat{n}\\cdot\\nabla u + a u = b$ on $\\Gamma_{rbc}$ as well as periodic boundary conditions which are enforced topologically. The example has a serial ( ex27.cpp ) and a parallel ( ex27p.cpp ) version. We recommend viewing examples 1 and 14 before viewing this example.", "title": "Example 27: Laplace Boundary Conditions"}, {"location": "examples-orig/#example-28-constraints-and-sliding-boundary-conditions", "text": "This example code illustrates the use of constraints in linear solvers by solving an elasticity problem where the normal component of the displacement is constrained to zero on two boundaries but tangential displacement is allowed. The constraints can be enforced in several different ways, including eliminating them from the linear system or solving a saddle-point system that explicitly includes constraint conditions. The example has a serial ( ex28.cpp ) and a parallel ( ex28p.cpp ) version. We recommend viewing example 2 before viewing this example.", "title": "Example 28: Constraints and Sliding Boundary Conditions"}, {"location": "examples-orig/#example-29-solving-pdes-on-embedded-surfaces", "text": "This example demonstrates setting up and solving an anisotropic Laplace problem $$-\\nabla\\cdot(\\sigma\\nabla u) = 1 \\quad\\text{in } \\Omega$$ with homogeneous Dirichlet boundary conditions $$ u = 0 \\quad\\text{on } \\partial\\Omega$$ where $\\Omega$ is a two dimensional curved surface embedded in three dimensions and $\\sigma$ is an anisotropic diffusion tensor. The example demonstrates and validates our DiffusionIntegrator 's ability to properly integrate three dimensional fluxes on a two dimensional domain. Not all of our integrators currently support such cases but the DiffusionIntegrator can be used as a simple example of how to extend other integrators when necessary. The example has a serial ( ex29.cpp ) and a parallel ( ex29p.cpp ) version. We recommend viewing examples 1 and 7 before viewing this example.", "title": "Example 29: Solving PDEs on embedded surfaces"}, {"location": "examples-orig/#example-30-resolving-rough-and-fine-scale-problem-data", "text": "Unresolved problem data will affect the accuracy of a discretized PDE solution as well as a posteriori estimates of the solution error. This example uses a CoefficientRefiner object to preprocess an input mesh until the resolution of the prescribed problem data $f \\in L^2$ is below a prescribed tolerance. 
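The per-attribute boundary conditions of Example 27 above are typically imposed through marker arrays and boundary integrators. A minimal sketch, assuming fespace is defined and the mesh has at least three boundary attributes; the coefficient values are placeholders.

```cpp
// Marker arrays select boundary attributes (attributes are 1-based,
// marker entries 0-based).
Array<int> nbc_bdr(mesh.bdr_attributes.Max()), rbc_bdr(mesh.bdr_attributes.Max());
nbc_bdr = 0; nbc_bdr[1] = 1;   // Neumann on attribute 2
rbc_bdr = 0; rbc_bdr[2] = 1;   // Robin on attribute 3

ConstantCoefficient one(1.0), g_nbc(1.0), a_rbc(2.0), b_rbc(3.0);

BilinearForm a(&fespace);
a.AddDomainIntegrator(new DiffusionIntegrator(one));
a.AddBoundaryIntegrator(new MassIntegrator(a_rbc), rbc_bdr);       // Robin: + a u term
a.Assemble();

LinearForm b(&fespace);
b.AddBoundaryIntegrator(new BoundaryLFIntegrator(g_nbc), nbc_bdr); // Neumann data
b.AddBoundaryIntegrator(new BoundaryLFIntegrator(b_rbc), rbc_bdr); // Robin data
b.Assemble();
```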
In this example, the resolution is identified with a data oscillation function on the mesh $\\mathcal{T}$, defined $$ \\mathrm{osc}(f) = \\Big( \\sum_{T\\in\\mathcal{T}} \\| h \\cdot (I - \\Pi)\\, f \\|^2_{L^2(T)} \\Big)^{1/2}, $$ where $h$ is the local element size function and $\\Pi$ is a finite element projection operator, and the sum is taken over all elements $T$ in the mesh. In this example, the coarse initial mesh is adaptively refined until $\\mathrm{osc}(f)$ is below a prescribed tolerance for various candidate functions $f \\in L^2$. When using rough problem data, it is recommended to perform this type of preprocessing before a posteriori error estimation. The example has a serial ( ex30.cpp ) and a parallel ( ex30p.cpp ) version. We recommend viewing examples 1 and 6 before viewing this example.", "title": "Example 30: Resolving rough and fine-scale problem data"}, {"location": "examples-orig/#example-31-anisotropic-definite-maxwell-problem", "text": "This example code solves a simple electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + \\sigma E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". In this example $\\sigma$ is an anisotropic 3x3 tensor. Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 1D, 2D, or 3D. The example demonstrates the use of restricted $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. These restricted spaces allow the solution of 1D or 2D electromagnetic problems which involve 3D field vectors. Such problems arise in plasma physics and crystallography. The example has a serial ( ex31.cpp ) and a parallel ( ex31p.cpp ) version. We recommend viewing example 3 before viewing this example.", "title": "Example 31: Anisotropic Definite Maxwell Problem"}, {"location": "examples-orig/#example-32-anisotropic-maxwell-eigenproblem", "text": "This example code solves the Maxwell (electromagnetic) eigenvalue problem with anisotropic permittivity, $\\epsilon$ $$\\nabla\\times\\nabla\\times\\, E = \\lambda\\, \\epsilon E $$ with homogeneous Dirichlet boundary conditions $E \\times n = 0$. We compute a number of the lowest nonzero eigenmodes by discretizing the curl curl operator using a Nedelec finite element space of the specified order in 1D, 2D, or 3D. The example demonstrates the use of restricted $H(curl)$ finite element spaces in an eigenmode context. These restricted spaces allow the solution of 1D or 2D electromagnetic problems which involve 3D field vectors. Such problems arise in plasma physics and crystallography. The example highlights the use of the AME subspace eigenvalue solver from HYPRE, which uses LOBPCG and AMS internally. Reusing multiple GLVis visualization windows for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex32p.cpp ) version. We recommend viewing examples 13 and 31 before viewing this example.", "title": "Example 32: Anisotropic Maxwell Eigenproblem"}, {"location": "examples-orig/#example-33-spectral-fractional-laplacian", "text": "This example code demonstrates the use of MFEM to solve the fractional Laplacian problem $$ (-\\Delta)^\\alpha u = 1, \\quad \\alpha > 0, $$ with homogeneous Dirichlet boundary conditions. 
The problem solved in this example is similar to ex1 , but involves a fractional-order diffusion operator whose inverse can be approximated by a series of inverses of integer-order diffusion operators. Solving each of these independent integer-order PDEs with MFEM and summing their solutions results in a discrete solution to the fractional Laplacian problem above. The example has a serial ( ex33.cpp ) and a parallel ( ex33p.cpp ) version. We recommend viewing Example 1 before viewing this example.", "title": "Example 33: Spectral fractional Laplacian"}, {"location": "examples-orig/#example-34-source-function-using-a-submesh-transfer", "text": "This example demonstrates the use of a SubMesh object to transfer solution data from a sub-domain and use this as a source function on the full domain. In this case we compute a volumetric current density $\\vec{J}$ as the gradient of a scalar potential $\\varphi$ on a portion of the domain. $$\\nabla\\cdot(\\sigma\\nabla\\varphi)=0$$ $$\\vec{J} = -\\sigma\\nabla\\varphi$$ Where a voltage difference is applied on surfaces of the sub-domain (shown on the left) to generate the current density restricted to this sub-domain. The current density is then transferred to the full domain (shown on the right) using a SubMesh object. We then use this current density on the full domain as a source term in a magnetostatic solve for a vector potential $\\vec{A}$. $$\\nabla\\times(\\mu^{-1}\\nabla\\times\\vec{A})=\\vec{J}$$ $$\\vec{B} = \\nabla\\times\\vec{A}$$ This example verifies the recreation of boundary attributes on a sub-domain mesh as well as transfer of Raviart-Thomas vector fields between the SubMesh and the full Mesh. Note that the data transfer in this particular example involves arbitrary order Raviart-Thomas degrees of freedom on a mixture of tetrahedral and triangular prism elements. The example has a serial ( ex34.cpp ) and a parallel ( ex34p.cpp ) version. We recommend viewing Examples 1 and 3 before viewing this example.", "title": "Example 34: Source Function using a SubMesh Transfer"}, {"location": "examples-orig/#example-35-port-boundary-conditions-using-submesh-transfers", "text": "This example demonstrates the use of a SubMesh object to transfer a port boundary condition from a portion of the boundary to the corresponding portion of the full domain. Just as in Example 22 this example implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0\\mbox{ with }u|_\\Gamma=v$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0\\mbox{ with }\\hat{n}\\times(\\vec{u}\\times\\hat{n})|_\\Gamma=\\vec{v}$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0\\mbox{ with }\\hat{n}\\cdot\\vec{u}|_\\Gamma=v$$ Where $\\Gamma$ is a portion of the boundary called the port . In each case the field is driven by a forced oscillation, with angular frequency $\\omega$, imposed at the boundary or a portion of the boundary. In Example 22 this boundary condition was simply a constant in space. 
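A rough sketch of the SubMesh workflow behind Examples 34 and 35: extract a sub-domain, solve on it, then transfer the result back to the parent mesh. The variable names are illustrative and the exact SubMesh signatures should be treated as approximate.

```cpp
// Extract the sub-domain with attribute 1 from an existing parent mesh.
Array<int> domain_attrs(1);
domain_attrs[0] = 1;
SubMesh submesh = SubMesh::CreateFromDomain(parent_mesh, domain_attrs);

// Spaces and fields on the sub-mesh and on the parent mesh.
H1_FECollection fec(order, submesh.Dimension());
FiniteElementSpace sub_fes(&submesh, &fec);
FiniteElementSpace parent_fes(&parent_mesh, &fec);
GridFunction u_sub(&sub_fes), u_parent(&parent_fes);

// ... solve for u_sub on the sub-domain ...

u_parent = 0.0;
SubMesh::Transfer(u_sub, u_parent);  // map the sub-domain solution into the parent field
```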
In this example the boundary condition is an eigenmode of a lower dimensional eigenvalue problem defined on a portion of the boundary as follows: For the scalar $H^1$ field: $$-\\nabla\\cdot\\left(\\nabla v\\right) = \\lambda\\,v\\mbox{ with }v|_{\\partial\\Gamma}=0$$ For the vector $H(curl)$ field: $$\\nabla\\times\\left(\\nabla\\times\\vec{v}\\right) = \\lambda\\,\\vec{v}\\mbox{ with }\\hat{n}_{\\partial\\Gamma}\\times\\vec{v}|_{\\partial\\Gamma}=0$$ For the vector $H(div)$ field: $$-\\nabla\\cdot\\left(\\nabla v\\right) = \\lambda\\,v\\mbox{ with }\\hat{n}_{\\partial\\Gamma}\\cdot\\nabla v|_{\\partial\\Gamma}=0$$ The different cases implemented in this example can be used to verify the transfer of an $H^1$ scalar field, the tangential components of an $H(curl)$ vector field, and the normal component of an $H(div)$ vector field (as a scalar $L^2$ field in this case) between a SubMesh and its parent mesh. The example has only a parallel ( ex35p.cpp ) version because the eigenmode solver used to compute the field on the port is only implemented in parallel. We recommend viewing Examples 11, 13, and 22 before viewing this example.", "title": "Example 35: Port Boundary Conditions using SubMesh Transfers"}, {"location": "examples-orig/#example-36-obstacle-problem", "text": "This example code solves the pointwise bound-constrained energy minimization problem $$ \\text{minimize } \\frac{1}{2}\\|\\nabla u\\|^2 \\text{ in } H^1_0(\\Omega)\\, \\text{ subject to } u \\ge \\varphi\\,.$$ This is known as the obstacle problem, and it is a classical motivating example in the study of variational inequalities and free boundary problems. In this example, the obstacle $\\varphi$ is the graph of a half-sphere centered at the origin of a circular domain $\\Omega$. After solving to a specified tolerance, the numerical solution is compared to a closed-form exact solution to assess its accuracy. The problem is solved using the Proximal Galerkin finite element method, which is a nonlinear, structure-preserving mixed method for pointwise bound constraints proposed by Keith and Surowiec . In turn, this example highlights MFEM's ability to deliver high-order solutions to variational inequalities and showcases how to set up and solve nonlinear mixed methods. The example has a serial ( ex36.cpp ) and a parallel ( ex36p.cpp ) version. We recommend viewing Example 1 before viewing this example.", "title": "Example 36: Obstacle Problem"}, {"location": "examples-orig/#example-37-topology-optimization", "text": "Density field $\\rho$ Problem set-up and domain $\\Omega$ This example code solves a classical cantilever beam topology optimization problem. The aim is to find an optimal material density field $\\rho$ in $L^1(\\Omega)$ to minimize the elastic compliance; i.e., $$\\begin{align} &\\text{minimize} \\int_\\Omega \\mathbf{f} \\cdot \\mathbf{u}(\\rho)\\, \\mathrm{d}x\\, \\text{ over }\\, \\rho \\in L^1(\\Omega) \\\\ &\\text{subject to }\\, 0 \\leq \\rho \\leq 1\\, \\text{ and } \\int_\\Omega \\rho\\, \\mathrm{d}x = \\theta\\, \\mathrm{vol}(\\Omega) \\,. \\end{align}$$ In this problem, $\\mathbf{f}$ is a localized force and the linearly elastic displacement field $\\mathbf{u} = \\mathbf{u}(\\rho)$ is determined by a material density field $\\rho$ with total volume fraction $0<\\theta<1$. The problem is solved using a mirror descent algorithm proposed by Keith and Surowiec . 
For further details, see the more elaborate description of this PDE-constrained optimization problem given in the example code and the aforementioned paper. The example has a serial ( ex37.cpp ) and a parallel ( ex37p.cpp ) version. We recommend viewing Example 2 before viewing this example.", "title": "Example 37: Topology Optimization"}, {"location": "examples-orig/#example-38-cut-volume-and-cut-surface-integration", "text": "This example code demonstrates construction of cut-surface and cut-volume IntegrationRules. The cut is specified by the zero level set of a given Coefficient $\\phi$. The resulting IntegrationRules are combined with standard LinearFormIntegrators to demonstrate integration of a function $u$ over an implicit interface, and a subdomain bounded by an implicit interface: $$ S = \\int_{\\phi = 0} u(x) ~ ds, \\quad V = \\int_{\\phi > 0} u(x) ~ dx. $$ The IntegrationRules are constructed by the moment-fitting algorithm introduced by M\u00fcller, Kummer and Oberlack . Through a set of basis functions, for each element the method defines and solves a local under-determined system for the vector of quadrature weights. All surface and volume integrals, which are required to form the system, are reduced to 1D integration over intersected segments. The example has only a serial ( ex38.cpp ) version, because the construction of the integration rules is an element-local procedure. It requires MFEM to be built with LAPACK, which is used to find the optimal solution of an under-determined system of equations.", "title": "Example 38: Cut-Volume and Cut-Surface Integration"}, {"location": "examples-orig/#example-39-named-attribute-sets", "text": "This example uses the Poisson equation to demonstrate the use of named attribute sets in MFEM to specify material regions, boundary regions, or source regions by name rather than attribute numbers. It also demonstrates how new named attribute sets may be created from arbitrary groupings of attribute numbers and used as a convenient shorthand to refer to those groupings in other portions of the application or through the command line. Named attribute sets also required changes to MFEM's mesh file formats. This example makes use of a custom input mesh file ( compass.msh ) produced using Gmsh which includes named regions and boundaries. A related mesh file ( compass.mesh ) illustrates MFEM's representation of the new named attribute sets. See file formats for details of the augmented mesh file format. The example has a serial ( ex39.cpp ) and a parallel ( ex39p.cpp ) version. We recommend viewing Example 1 before viewing this example.", "title": "Example 39: Named Attribute Sets"}, {"location": "examples-orig/#example-40-eikonal-equation", "text": "This example highlights MFEM's ability to solve a fully-nonlinear, first-order PDE with high-order finite elements. In particular this example uses the proximal Galerkin method to solve the eikonal equation, $$ |\\nabla u| = 1 \\text{ in } \\Omega, \\quad u = 0 \\text{ on } \\partial \\Omega. $$ At each point $x$ in the domain $\\Omega$, the solution of this PDE provides the Euclidean distance to the domain boundary, $u(x) = \\min \\{ | x - y| : y \\in \\partial \\Omega\\}$. The problem is solved by recasting $u$ as the solution of the nonlinear program $$ \\text{maximize } \\int_\\Omega u\\, \\mathrm{d} x\\, \\text{ in } W^{1,\\infty}_0(\\Omega)\\, \\text{ subject to } |\\nabla u | \\leq 1 \\text{ a.e. 
in } \\Omega.$$ A solution is then obtained by discretizing and solving a sequence of nonlinear saddle-point problems. See the example code for a more detailed description of the method. The example has a serial ( ex40.cpp ) and a parallel ( ex40p.cpp ) version. We recommend viewing Example 5 and Example 36 before viewing this example.", "title": "Example 40: Eikonal Equation"}, {"location": "examples-orig/#nurbs-example-1-laplace-problem", "text": "This example code demonstrates the use of MFEM to define a simple isogeometric NURBS discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. The problem solved in this example is the same as Example 1 . The example has a serial ( nurbs_ex1.cpp ) and a parallel ( nurbs_ex1p.cpp ) version. There is also a version that demonstrates efficient patchwise quadrature ( nurbs_ex1 patch.cpp ).", "title": "NURBS Example 1: Laplace Problem"}, {"location": "examples-orig/#nurbs-example-3-definite-maxwell-problem", "text": "This example code solves a simple electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with NURBS-based $H(curl)$ elements in 2D or 3D. The problem solved in this example is the same as Example 3 . The example has only a serial ( nurbs_ex1.cpp ) version.", "title": "NURBS Example 3: Definite Maxwell Problem"}, {"location": "examples-orig/#nurbs-example-5-darcy-problem", "text": "This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system $$ \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} $$ with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize the velocity ($\\bf u$) with NURBS-based $H(div)$ elements and the pressure ($p$) with compatible NURBS-based $H^1$ elements. The problem solved in this example is the same as Example 5 . The example has only a serial ( nurbs_ex5.cpp ) version.", "title": "NURBS Example 5: Darcy Problem"}, {"location": "examples-orig/#nurbs-example-11-laplace-eigenproblem", "text": "This example code demonstrates the use of MFEM to solve the eigenvalue problem $$-\\Delta u = \\lambda u$$ with homogeneous Dirichlet boundary conditions. We compute a number of the lowest eigenmodes by discretizing the Laplacian and Mass operators using a finite element space of the specified order, or an isoparametric/isogeometric space if order < 1 (quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE, as well as optionally the SuperLU or STRUMPACK parallel direct solvers. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The problem solved in this example is the same as Example 11 . The example has only a parallel ( nurbs_ex11p.cpp ) version.", "title": "NURBS Example 11: Laplace Eigenproblem"}, {"location": "examples-orig/#nurbs-example-24-mixed-finite-element-spaces", "text": "The problem solved in this example is the same as Example 24 , but NURBS-based elements are also supported. 
This example code illustrates usage of mixed finite element spaces, with three variants: $H^1 \\times H(curl)$ $H(curl) \\times H(div)$ $H(div) \\times L_2$ Using different approaches for demonstration purposes, we project or interpolate a gradient, curl, or divergence in the appropriate spaces, comparing the errors in each case. The example has a serial ( nurbs_ex24.cpp ).", "title": "NURBS Example 24: Mixed finite element spaces"}, {"location": "examples-orig/#volta-miniapp-electrostatics", "text": "This miniapp demonstrates the use of MFEM to solve realistic problems in the field of linear electrostatics. Its features include: dielectric materials charge densities surface charge densities prescribed voltages applied polarizations high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( volta.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Volta Miniapp: Electrostatics"}, {"location": "examples-orig/#tesla-miniapp-magnetostatics", "text": "This miniapp showcases many of MFEM's features while solving a variety of realistic magnetostatics problems. Its features include: diamagnetic and/or paramagnetic materials ferromagnetic materials volumetric current densities surface current densities external fields high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( tesla.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Tesla Miniapp: Magnetostatics"}, {"location": "examples-orig/#maxwell-miniapp-transient-full-wave-electromagnetics", "text": "This miniapp solves the equations of transient full-wave electromagnetics. Its features include: mixed formulation of the coupled first-order Maxwell equations $H(\\mathrm{curl})$ discretization of the electric field $H(\\mathrm{div})$ discretization of the magnetic flux energy conserving, variable order, implicit time integration dielectric materials diamagnetic and/or paramagnetic materials conductive materials volumetric current densities Sommerfeld absorbing boundary conditions high order meshes high order basis functions advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( maxwell.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Maxwell Miniapp: Transient Full-Wave Electromagnetics"}, {"location": "examples-orig/#joule-miniapp-transient-magnetics-and-joule-heating", "text": "This miniapp solves the equations of transient low-frequency (a.k.a. eddy current) electromagnetics, and simultaneously computes transient heat transfer with the heat source given by the electromagnetic Joule heating. 
Its features include: $H^1$ discretization of the electrostatic potential $H(\\mathrm{curl})$ discretization of the electric field $H(\\mathrm{div})$ discretization of the magnetic field $H(\\mathrm{div})$ discretization of the heat flux $L^2$ discretization of the temperature implicit transient time integration high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( joule.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Joule Miniapp: Transient Magnetics and Joule Heating"}, {"location": "examples-orig/#mobius-strip-miniapp", "text": "This miniapp generates various Mobius strip-like surface meshes. It is a good way to generate complex surface meshes. Manipulating the mesh topology and performing mesh transformation are demonstrated. The mobius-strip mesh in the data directory was generated with this miniapp. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( mobius-strip.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Mobius Strip Miniapp"}, {"location": "examples-orig/#klein-bottle-miniapp", "text": "This miniapp generates three types of Klein bottle surfaces. It is similar to the mobius-strip miniapp. Manipulating the mesh topology and performing mesh transformation are demonstrated. The klein-bottle and klein-donut meshes in the data directory were generated with this miniapp. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( klein-bottle.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Klein Bottle Miniapp"}, {"location": "examples-orig/#toroid-miniapp", "text": "This miniapp generates two types of toroidal volume meshes; one with triangular cross sections and one with square cross sections. It works by defining a stack of individual elements and bending them so that the bottom and top of the stack can be joined to form a torus. It supports various options including: The element type: 0 - Wedge, 1 - Hexahedron The geometric order of the elements The major and minor radii The number of elements in the azimuthal direction The number of nodes to offset by before rejoining the stack The initial angle of the cross sectional shape The number of uniform refinement steps to apply Along with producing some visually interesting meshes, this miniapp demonstrates how simple 3D meshes can be constructed and transformed in MFEM. It also produces a family of meshes with simple but non-trivial topology for testing various features in MFEM. This miniapp has only a serial ( toroid.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Toroid Miniapp"}, {"location": "examples-orig/#twist-miniapp", "text": "This miniapp generates simple periodic meshes to demonstrate MFEM's handling of periodic domains. MFEM's strategy is to use a discontinuous vector field to define the mesh coordinates on a topologically periodic mesh. It works by defining a stack of individual elements and stitching together the top and bottom of the mesh. 
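These meshing miniapps share a simple recipe: build a structured stack of elements, then move its nodes with a coordinate transformation. A rough sketch of that recipe (the twist map is illustrative, not any miniapp's actual mapping; assumes "mfem.hpp" and <cmath> are included):

```cpp
// Illustrative coordinate map: rotate the cross section proportionally to height.
void twist(const mfem::Vector &x, mfem::Vector &p)
{
   const double angle = 0.5 * x(2);
   p.SetSize(3);
   p(0) = std::cos(angle) * x(0) - std::sin(angle) * x(1);
   p(1) = std::sin(angle) * x(0) + std::cos(angle) * x(1);
   p(2) = x(2);
}

// Build a 1x1x8 stack of hexahedra, give it quadratic geometry, and transform it.
mfem::Mesh mesh = mfem::Mesh::MakeCartesian3D(1, 1, 8, mfem::Element::HEXAHEDRON,
                                              1.0, 1.0, 4.0);
mesh.SetCurvature(2);
mesh.Transform(twist);
```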
The stack can also be twisted so that the vertices of the bottom and top can be joined with any integer offset (for tetrahedral and wedge meshes only even offsets are supported). The Twist miniapp supports various options including: The element type: 4 - Tetrahedron, 6 - Wedge, 8 - Hexahedron The geometric order of the elements The dimensions of the initial brick-shaped stack of elements The number of elements in the z direction The number of nodes to offset by before rejoining the stack The number of uniform refinement steps to apply Along with producing some visually interesting meshes, this miniapp demonstrates how simple 3D meshes can be constructed and transformed in MFEM. It also produces a family of meshes with simple but non-trivial topology for testing various features in MFEM. This miniapp has only a serial ( twist.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Twist Miniapp"}, {"location": "examples-orig/#extruder-miniapp", "text": "This miniapp creates higher dimensional meshes from lower dimensional meshes by extrusion. Simple coordinate transformations can also be applied if desired. The initial mesh can be 1D or 2D 1D meshes can be extruded in both the y and z directions 2D meshes can be triangular, quadrilateral, or contain both element types Meshes with high order geometry are supported User can specify the number of elements and the distance to extrude Geometric order of the transformed mesh can be user selected or automatic This miniapp provides another demonstration of how simple meshes can be constructed and transformed in MFEM. This miniapp has only a serial ( extruder.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Extruder Miniapp"}, {"location": "examples-orig/#trimmer-miniapp", "text": "This miniapp creates a new mesh file from an existing mesh by trimming away elements with selected attributes. Newly exposed boundary elements will be assigned new or user specified boundary attributes. The initial mesh can be 2D or 3D Meshes with high order geometry are supported Periodic meshes are supported NURBS meshes are not supported This miniapp provides another demonstration of how simple meshes can be constructed in MFEM. This miniapp has only a serial ( trimmer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Trimmer Miniapp"}, {"location": "examples-orig/#polar-nc-miniapp", "text": "This miniapp generates a circular sector mesh that consist of quadrilaterals and triangles of similar sizes. The 3D version of the mesh is made of prisms and tetrahedra. The mesh is non-conforming by design, and can optionally be made curvilinear. The elements are ordered along a space-filling curve by default, which makes the mesh ready for parallel non-conforming AMR in MFEM. The implementation also demonstrates how to initialize a non-conforming mesh on the fly by marking hanging nodes with Mesh::AddVertexParents . For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( polar-nc.cpp ) version. 
We recommend that new users start with the example codes before moving to the miniapps.", "title": "Polar-NC Miniapp"}, {"location": "examples-orig/#shaper-miniapp", "text": "This miniapp performs multiple levels of adaptive mesh refinement to resolve the interfaces between different \"materials\" in the mesh, as specified by a given material function. It can be used as a simple initial mesh generator, for example in the case when the interface is too complex to describe without local refinement. Both conforming and non-conforming refinements are supported. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( shaper.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Shaper Miniapp"}, {"location": "examples-orig/#mesh-explorer-miniapp", "text": "This miniapp is a handy tool to examine, visualize and manipulate a given mesh. Some of its features are: visualizing of mesh materials and individual mesh elements mesh scaling, randomization, and general transformation manipulation of the mesh curvature the ability to simulate parallel partitioning quantitative and visual reports of mesh quality For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( mesh-explorer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Mesh Explorer Miniapp"}, {"location": "examples-orig/#mesh-optimizer-miniapp", "text": "This miniapp performs mesh optimization using the Target-Matrix Optimization Paradigm (TMOP) by P. Knupp , and a global variational minimization approach ( Dobrev et al. ). It minimizes the quantity $$\\sum_T \\int_T \\mu(J(x)),$$ where $T$ are the target (ideal) elements, $J$ is the Jacobian of the transformation from the target to the physical element, and $\\mu$ is the mesh quality metric. This metric can measure shape, size or alignment of the region around each quadrature point. The combination of targets and quality metrics is used to optimize the physical node positions, i.e., they must be as close as possible to the shape / size / alignment of their targets. This code also demonstrates a possible use of nonlinear operators, as well as their coupling to Newton methods for solving minimization problems. Note that the utilized Newton methods are oriented towards avoiding invalid meshes with negative Jacobian determinants. Each Newton step requires the inversion of a Jacobian matrix, which is done through an inner linear solver. The miniapp has a serial ( mesh-optimizer.cpp ) and a parallel ( pmesh-optimizer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Mesh Optimizer Miniapp"}, {"location": "examples-orig/#mesh-fitting-miniapp", "text": "This miniapp builds upon the mesh optimizer miniapp to enable mesh alignment with the zero isosurface of a discrete level-set. The approach is based on Dobrev et al. and Mittal et al. , where we minimize the quantity $$\\sum_T \\int_T \\mu(J(x)) + \\sum_{s \\in S} w \\,\\, \\sigma^2(x_s).$$ Here, the first term controls mesh quality and the second term enforces weak alignment of a selected subset of mesh-nodes ($s \\in S$) with the zero isosurface of the discrete level-set function ($\\sigma$). 
Click on the image on the right to see a demonstration of this method for generating body-fitted meshes for topology optimization in LiDO to maximize beam stiffness under a downward force on the right wall. The miniapp has a parallel ( pmesh-fitting.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Mesh Fitting Miniapp"}, {"location": "examples-orig/#minimal-surface-miniapp", "text": "This miniapp solves Plateau's problem: the Dirichlet problem for the minimal surface equation. Options to solve the minimal surface equations of both parametric surfaces as well as surfaces restricted to be graphs of the form $z=f(x,y)$ are supported, including a number of examples such as the Catenoid, Helicoid, Costa and Scherk surfaces. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has a serial ( minimal-surface.cpp ) and a parallel ( pminimal-surface.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Minimal Surface Miniapp"}, {"location": "examples-orig/#low-order-refined-transfer-miniapp", "text": "The lor-transfer miniapp, found under miniapps/tools demonstrates the capability to generate a low-order refined mesh from a high-order mesh, and to transfer solutions between these meshes. Grid functions can be transferred between the coarse, high-order mesh and the low-order refined mesh using either $L^2$ projection or pointwise evaluation. These transfer operators can be designed to discretely conserve mass and to recover the original high-order solution when transferring a low-order grid function that was obtained by restricting a high-order grid function to the low-order refined space. The miniapp has only a serial ( lor-transfer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Low-Order Refined Transfer Miniapp"}, {"location": "examples-orig/#interpolation-miniapps", "text": "The interpolation miniapp, found under miniapps/gslib , demonstrate the capability to interpolate high-order finite element functions at given set of points in physical space. These miniapps utilize the gslib library's high-order interpolation utility for quad and hex meshes: Find Points miniapp has a serial ( findpts.cpp ) and a parallel ( pfindpts.cpp ) version that demonstrate the basic procedures for point search and evaluation of grid functions. Field Interp miniapp ( field-interp.cpp ) demonstrates how grid functions can be transferred between meshes. Field Diff miniapp ( field-diff.cpp ) demonstrates how grid functions on two different meshes can be compared with each other. These miniapps require installation of the gslib library. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Interpolation Miniapps"}, {"location": "examples-orig/#extrapolation-miniapp", "text": "The extrapolate miniapp, found in the miniapps/shifted directory, extrapolates a finite element function from a set of elements (known values) to the rest of the domain. The set of elements that contains the known values is specified by the positive values of a level set Coefficient. The known values are not modified. The miniapp supports two PDE-based approaches ( Aslam , Bochkov & Gibou ), both of which rely on solving a sequence of advection problems in the direction of the unknown parts of the domain. 
The extrapolation can be constant (1st order), linear (2nd order), or quadratic (3rd order). These formal orders hold for a limited band around the zero level set, see the above references for further information. The miniapp has only a parallel ( extrapolate.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Extrapolation Miniapp"}, {"location": "examples-orig/#distance-solver-miniapp", "text": "The distance miniapp, found in the miniapps/shifted directory demonstrates the capability to compute the \"distance\" to a given point source or to the zero level set of a given function. Here \"distance\" refers to the length of the shortest path through the mesh. The input can be a DeltaCoefficient (representing a point source), or any Coefficient (for the case of a level set). The output is a ParGridFunction that can be scalar (representing the scalar distance), or a vector (its magnitude is the distance, and its direction is the starting direction of the shortest path). The miniapp has only a parallel ( distance.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Distance Solver Miniapp"}, {"location": "examples-orig/#shifted-diffusion-miniapp", "text": "The diffusion miniapp, found in the miniapps/shifted directory, demonstrates the capability to formulate a boundary value problem using a surrogate computational domain. The method uses a distance function to the true boundary to enforce Dirichlet boundary conditions on the (non-aligned) mesh faces, therefore \"shifting\" the location where boundary conditions are imposed. The implementation in the miniapp is a high-order extension of the second-generation shifted boundary method . The miniapp has only a parallel ( diffusion.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Shifted Diffusion Miniapp"}, {"location": "examples-orig/#laghos-miniapp", "text": "Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. The computational motives captured in Laghos include: Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements (triangular and tetrahedral elements can also be used, but with the less efficient full assembly option). Serial and parallel mesh refinement options can be set via a command-line flag. Explicit time-stepping loop with a variety of time integrator options. Laghos supports Runge-Kutta ODE solvers of orders 1, 2, 3, 4 and 6. Continuous and discontinuous high-order finite element discretization spaces of runtime-specified order. Moving (high-order) meshes. Separation between the assembly and the quadrature point-based computations. Point-wise definition of mesh size, time-step estimate and artificial viscosity coefficient. Constant-in-time velocity mass operator that is inverted iteratively on each time step. This is an example of an operator that is prepared once (fully or partially assembled), but is applied many times. The application cost is dominant for this operator. Time-dependent force matrix that is prepared every time step (fully or partially assembled) and is applied just twice per \"assembly\". Both the preparation and the application costs are important for this operator. 
Domain-decomposed MPI parallelism. Optional in-situ visualization with GLVis and data output for visualization / data analysis with VisIt . The Laghos miniapp is part of the CEED software suite , a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods. See https://github.com/ceed for more information and source code availability. This is an external miniapp, available at https://github.com/CEED/Laghos .", "title": "Laghos Miniapp"}, {"location": "examples-orig/#remhos-miniapp", "text": "Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. The computational motives captured in Remhos include: Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements. Serial and parallel mesh refinement options can be set via a command-line flag. Explicit time-stepping loop with a variety of time integrator options. Remhos supports Runge-Kutta ODE solvers of orders 1, 2, 3, 4 and 6. Discontinuous high-order finite element discretization spaces of runtime-specified order. Moving (high-order) meshes. Mass operator that is local per each zone. It is inverted by iterative or exact methods at each time step. This operator is constant in time (transport mode) or changing in time (remap mode). Options for full or partial assembly. Advection operator that couples neighboring zones. It is applied once at each time step. This operator is constant in time (transport mode) or changing in time (remap mode). Options for full or partial assembly. Domain-decomposed MPI parallelism. Optional in-situ visualization with GLVis and data output for visualization and data analysis with VisIt . The Remhos miniapp is part of the CEED software suite , a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods. See https://github.com/ceed for more information and source code availability. This is an external miniapp, available at https://github.com/CEED/Remhos .", "title": "Remhos Miniapp"}, {"location": "examples-orig/#navier-miniapp", "text": "Navier is a miniapp that solves the time-dependent Navier-Stokes equations of incompressible fluid dynamics \\begin{align} \\frac{\\partial u}{\\partial t} + (u \\cdot \\nabla) u - \\frac{1}{Re} \\nabla^2 u - \\nabla p &= f \\\\ \\nabla \\cdot u &= 0 \\end{align} using a spatially high-order finite element discretization. The time-dependent problem is solved using a (up to) third order implicit-explicit method which leverages an extrapolation scheme for the convective parts and a backward-difference formulation for the viscous parts of the equation. The miniapp supports: Arbitrary order H1 elements High order mesh elements IMEX (EXTk-BDFk) time-stepping up to third order Convenient interface for new users A variety of test cases and benchmarks This miniapp has only a parallel ( navier_solver.cpp ) version. 
We recommend that new users start with the example codes before moving to the miniapps.", "title": "Navier Miniapp"}, {"location": "examples-orig/#block-solvers-miniapp", "text": "The Block Solvers miniapp, found under miniapps/solvers , compares various linear solvers for the saddle point system obtained from mixed finite element discretization of the Darcy's flow problem \\begin{array}{rcl} k{\\bf u} & + \\nabla p & = f \\\\ -\\nabla \\cdot {\\bf u} & & = g \\end{array} The solvers being compared include: The divergence-free solver (couple and decoupled modes), which is based on a multilevel decomposition of the Raviart-Thomas finite element space and its divergence-free subspace. MINRES preconditioned by the block diagonal preconditioner in ex5p.cpp . For more details, please see the documentation in the miniapps/solvers directory. The miniapp supports: Arbitrary order mixed finite element pair (Raviart-Thomas elements + piecewise discontinuous polynomials) Various combination of essential and natural boundary conditions Homogeneous or heterogeneous scalar coefficient k This miniapp has only a parallel ( block-solvers.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Block Solvers Miniapp"}, {"location": "examples-orig/#overlapping-grids-miniapps", "text": "Overlapping grids-based frameworks can often make problems tractable that are otherwise inaccessible with a single conforming grid. The following gslib -based miniapps in MFEM demonstrate how to set up and use overlapping grids: The Schwarz Example 1 miniapp in miniapps/gslib has a serial ( schwarz_ex1.cpp ) a parallel ( schwarz_ex1p.cpp ) version that solves the Poisson problem on overlapping grids. The serial version is restricted to use two overlapping grids, while the parallel version supports arbitrary number of overlapping grids. The Navier Conjugate Heat Transfer miniapp in miniapps/navier ( navier_cht.cpp ) demonstrates how a conjugate heat transfer problem can be solved with the fluid dynamics (incompressible Navier-Stokes equations) and heat transfer (advection-diffusion equation) PDEs modeled on different meshes. These miniapps require installation of the gslib library. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Overlapping Grids Miniapps"}, {"location": "examples-orig/#parelag-amge-for-hcurl-and-hdiv-miniapp", "text": "This is a miniapp that exhibits the ParELAG library and part of its capabilities. The miniapp employs MFEM and ParELAG to solve $H(\\mathrm{curl})$- and $H(\\mathrm{div})$-elliptic forms by an element based algebraic multigrid (AMGe). ParELAG is a library mostly developed at the Center for Applied Scientific Computing of Lawrence Livermore National Laboratory, California, USA. The miniapp uses: A multilevel hierarchy of de Rham complexes of finite element spaces, built by ParELAG; Hiptmair-type (hybrid) smoothers, implemented in ParELAG; AMS (Auxiliary-space Maxwell Solver) or ADS (Auxiliary-space Divergence Solver), from HYPRE, for preconditioning or solving on the coarsest levels. Alternatively, it is possible to precondition or solve the $H(\\mathrm{div})$ form on the coarsest level via a hybridization approach. However, this is not yet implemented in ParELAG for the coarse levels. Only the hybridization solver that is directly applicable to an $H(\\mathrm{div})$-$L^2$ mixed (saddle-point) system is currently available in ParELAG. 
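The Block Solvers miniapp above assembles the Darcy saddle-point system as a block operator and hands it to a Krylov solver. A compact serial sketch of that assembly, loosely following the ex5.cpp pattern the miniapp compares against (lowest-order Raviart-Thomas / piecewise-constant pair, unpreconditioned MINRES, preconditioners omitted for brevity):

```cpp
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(8, 8, Element::QUADRILATERAL, true);

   // RT0 velocity space and piecewise-constant pressure space.
   RT_FECollection hdiv_coll(0, mesh.Dimension());
   L2_FECollection l2_coll(0, mesh.Dimension());
   FiniteElementSpace R_space(&mesh, &hdiv_coll);
   FiniteElementSpace W_space(&mesh, &l2_coll);

   Array<int> offsets(3);
   offsets[0] = 0;
   offsets[1] = R_space.GetVSize();
   offsets[2] = W_space.GetVSize();
   offsets.PartialSum();

   BlockVector x(offsets), rhs(offsets);
   x = 0.0; rhs = 0.0;

   ConstantCoefficient one(1.0);

   // Velocity mass form (k u, v) with k = 1.
   BilinearForm mVarf(&R_space);
   mVarf.AddDomainIntegrator(new VectorFEMassIntegrator(one));
   mVarf.Assemble(); mVarf.Finalize();
   SparseMatrix &M = mVarf.SpMat();

   // Mixed form -(div u, q).
   MixedBilinearForm bVarf(&R_space, &W_space);
   bVarf.AddDomainIntegrator(new VectorFEDivergenceIntegrator);
   bVarf.Assemble(); bVarf.Finalize();
   SparseMatrix &B = bVarf.SpMat();
   B *= -1.0;
   TransposeOperator Bt(&B);

   // Right-hand side: f = 0 for the first equation, g = 1 for the second.
   LinearForm gform(&W_space);
   gform.AddDomainIntegrator(new DomainLFIntegrator(one));
   gform.Assemble();
   rhs.GetBlock(1) = gform;

   // 2x2 block operator [ M  B^T ; B  0 ] solved with MINRES.
   BlockOperator darcyOp(offsets);
   darcyOp.SetBlock(0, 0, &M);
   darcyOp.SetBlock(0, 1, &Bt);
   darcyOp.SetBlock(1, 0, &B);

   MINRESSolver solver;
   solver.SetRelTol(1e-10);
   solver.SetMaxIter(2000);
   solver.SetPrintLevel(1);
   solver.SetOperator(darcyOp);
   solver.Mult(rhs, x);
   return 0;
}
```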
We recommend viewing ex3p.cpp and ex4p.cpp before viewing this miniapp. For more details, please see the documentation in the miniapps/parelag directory. This miniapp has only a parallel ( MultilevelHcurlHdivSolver.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "ParELAG AMGe for H(curl) and H(div) Miniapp"}, {"location": "examples-orig/#generating-gaussian-random-fields-via-the-spde-method", "text": "This miniapp generates Gaussian random fields on meshed domains $\\Omega \\subset \\mathbb{R}^n$ via the SPDE method. The method exploits a stochastic, fractional PDE whose full-space solutions yield Gaussian random fields with a Mat\u00e9rn covariance. The method was introduced and popularized by Lindgren et. al in 2010. In this miniapp, we use a slightly modified representation following Khristenko et. al . More specifically, we solve the equation \\begin{equation} \\left( -\\frac{1}{2\\nu} \\nabla \\cdot \\left( \\Theta \\nabla \\right) + \\mathbf{1} \\right)^{\\frac{2\\nu+n}{4}} u(x,w) = \\eta W(x,w) \\ \\ \\ \\text{in} \\ \\ \\Omega, \\end{equation} with various boundary conditions. Solving this equation on $\\Omega = \\mathbb{R}^n$ delivers a homogeneous Gaussian random field with zero mean and Mat\u00e9rn covariance, \\begin{align}\\label{eq:MaternCovariance} C(x,y) &= \\sigma^2M_\\nu \\left(\\sqrt{2\\nu}\\, \\| x-y \\|_{\\Theta} \\right) , \\end{align} where $M_\\nu(z) = \\frac{2^{1-\\nu}}{\\Gamma(\\nu)} z ^{\\nu} K_\\nu \\left( z \\right)$ and $\\| x-y \\|_{\\Theta}^2 = (x-y)^\\top\\Theta (x-y)$. The Mat\u00e9rn model provides the regularity parameter $\\nu > 0$ and the anisotropic diffusion tensor $\\Theta \\in \\mathbb{R}^{n\\times n}$, which determines the spatial structure (correlation lengths). However, applying boundary conditions to the SPDE above provides the ability to model a significantly larger class of inhomogeneous random fields on complex domains. For further details, see the miniapp README . We recommend viewing ex33p.cpp before viewing this miniapp. This miniapp ( generate_random_field.cpp ) has only a parallel implementation. It further requires MFEM to be built with LAPACK, otherwise you may only use predefined values for $\\nu$. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Generating Gaussian Random Fields via the SPDE Method"}, {"location": "examples-orig/#multidomain-and-submesh-demonstration-miniapp", "text": "This miniapp aims to demonstrate how to solve two PDEs, that represent different physics, on the same domain. MFEM's SubMesh interface is used to compute on and transfer between the spaces of predefined parts of the domain. For the sake of simplicity, the spaces on each domain are using the same order H1 finite elements. This does not mean that the approach is limited to this configuration. A 3D domain comprised of an outer box with a cylinder shaped inside is used. A heat equation is described on the outer box domain \\begin{align} \\frac{\\partial T}{\\partial t} &= \\kappa \\Delta T &&\\mbox{in outer box}\\\\ T &= T_{wall} &&\\mbox{on outside wall}\\\\ \\nabla T \\cdot \\vec{n} &= 0 &&\\mbox{on inside (cylinder) wall} \\end{align} with temperature $T$ and coefficient $\\kappa$ (non-physical in this example). 
A convection-diffusion equation is described inside the cylinder domain \\begin{align} \\frac{\\partial T}{\\partial t} &= \\kappa \\Delta T - \\alpha \\nabla \\cdot (\\vec{b} T) & &\\mbox{in inner cylinder}\\\\ T &= T_{wall} & &\\mbox{on cylinder wall}\\\\ \\nabla T \\cdot \\vec{n} &= 0 & &\\mbox{else} \\end{align} with temperature $T$, coefficients $\\kappa$, $\\alpha$ and prescribed velocity profile $\\vec{b}$, and $T_{wall}$ obtained from the heat equation. To couple the solutions of both equations, a segregated solve with one way coupling approach is used. The heat equation of the outer box is solved from the timestep $T_{box}(t)$ to $T_{box}(t+dt)$. Then for the convection-diffusion equation $T_{wall}$ is set to $T_{box}(t+dt)$ and the equation is solved for $T(t+dt)$ which results in a first-order one way coupling. This miniapp has only a parallel ( multidomain.cpp ) implementation. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Multidomain and SubMesh demonstration Miniapp"}, {"location": "examples-orig/#dpg-miniapp", "text": "This miniapp demonstrates how to discretize and solve various PDEs using the Discontinuous Petrov-Galerkin (DPG) method. It utilizes a new user-friendly interface to assemble the block DPG systems arising from the discretization of any DPG formulation (such as Ultraweak or Primal). In addition, the miniapp supports complex-valued systems, static condensation for block systems, and AMR using the built-in DPG residual-based error indicator. This capability is showcased in the following DPG examples in miniapps/dpg . Ultraweak DPG formulation for diffusion . This example solves the simple Poisson equation $$-\u0394 u = f$$ and computes rates of convergence under successive uniform h-refinements for a smooth manufactured solution. The parallel version also includes an AMR implementation for the L-shape benchmark problem. This example has a serial ( diffusion.cpp ) and a parallel ( pdiffusion.cpp ) version. Ultraweak DPG formulation for convection-diffusion . This example solves the convection-diffusion problem: \\begin{align} -\\epsilon \\Delta u + \\nabla \\cdot (\\beta u) &= f\\\\ \\end{align} using AMR. The example demonstrates the use of mesh-dependent test norms which are suitable for problems with solutions that exhibit large gradients present in internal or boundary layers . The example has a serial ( convection-diffusion.cpp ) and a parallel ( pconvection-diffusion.cpp ) version. Ultraweak DPG formulation for time-harmonic linear acoustics . This example solves the indefinite Helmholtz equation \\begin{align} -\\Delta u - \\omega^2 u &= f\\\\ \\end{align} The example includes formulations with manufactured plane-wave solutions as well as high-frequency scattering problems and the use of Perfectly Match Layers (PML). It also demonstrates how to set up complex-valued systems and preconditioners for their solutions. The example has a serial ( acoustics.cpp ) and parallel ( pacoustics.cpp ) version. Ultraweak DPG formulation for time-harmonic Maxwell . This example solves the indefinite Maxwell problem \\begin{align} \\nabla \u00d7 (\\mu^{-1} \\nabla \\times E) - \\omega^2 \\epsilon E&= J\\\\ \\end{align} The example includes formulations with smooth manufactured solutions, AMR formulations for high-frequency scattering problems as well as a problem with a singular solution. 
The example has a serial ( maxwell.cpp ) and a parallel ( pmaxwell.cpp ) version.", "title": "DPG miniapp"}, {"location": "examples-orig/#tribol-miniapp", "text": "This miniapp demonstrates how to use Tribol's mortar method to solve a contact patch test. A contact patch test places two aligned, linear elastic cubes in contact, then verifies that the exact elasticity solution for this problem is recovered. The exact solution requires transmission of a uniform pressure field across a (not necessarily conforming) interface (i.e. the contact surface). Mortar methods (including the one implemented in Tribol) are generally able to pass the contact patch test. The test assumes small deformations and no accelerations, so the relationship between forces/contact pressures and deformations/contact gaps is linear and, therefore, the problem can be solved exactly with a single linear solve. The mortar implementation is based on Puso and Laursen (2004) . A description of the Tribol implementation is available in Serac documentation . Lagrange multipliers are used to solve for the pressure required to prevent violation of the contact constraints. This miniapp has only a parallel ( contact-patch-test.cpp ) implementation. For more details, please see the documentation in miniapps/tribol/README.md . We recommend that new users start with the example codes before moving to the miniapps. No examples or miniapps match your criteria. ", "title": "Tribol miniapp"}, {"location": "examples/", "text": "MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$']]}}); Example Codes and Miniapps This page provides a brief overview of MFEM's example codes and miniapps. For detailed documentation of the MFEM sources, including the examples, see the online Doxygen documentation , or the doc directory in the distribution. The goal of the example codes is to provide a step-by-step introduction to MFEM in simple model settings. The miniapps are more complex, and are intended to be more representative of the advanced usage of the library in physics/application codes. We recommend that new users start with the example codes before moving to the miniapps. Select from the categories below to display examples and miniapps that contain the respective feature. All examples support (arbitrarily) high-order meshes and finite element spaces . The numerical results from the example codes can be visualized using the GLVis visualization tool (based on MFEM). See the GLVis website for more details. Users are encouraged to submit any example codes and miniapps that they have created and would like to share. Contact a member of the MFEM team to report bugs or post questions or comments . Application (PDE) All Diffusion Convection-diffusion Elasticity Electromagnetics Acoustics grad-div Darcy Advection Conduction Wave Compressible flow Incompressible flow Meshing Nonlocal Stochastic Free boundary Finite Elements All H1 nodal elements L2 discontinuous elements H(curl) Nedelec elements H(div) Raviart-Thomas elements H^{1/2} interfacial elements H^{-1/2} interfacial elements Discretization All Galerkin FEM Mixed FEM Discontinuous Galerkin (DG) Discont. 
Petrov-Galerkin (DPG) Hybridization Static condensation Isogeometric analysis (NURBS) Adaptive mesh refinement (AMR) Partial assembly Solver All Jacobi Gauss-Seidel PCG MINRES GMRES Algebraic Multigrid (BoomerAMG) Auxiliary-space Maxwell Solver (AMS) Auxiliary-space Divergence Solver (ADS) SuperLU/STRUMPACK (parallel direct) UMFPACK (serial direct) Newton method (nonlinear solver) Explicit Runge-Kutta (ODE integration) Implicit Runge-Kutta (ODE integration) Newmark (ODE Integration) Symplectic Algorithm (ODE Integration) LOBPCG, AME (eigensolvers) SUNDIALS solvers PETSc solvers SLEPc eigensolvers HiOp solvers None Example 0: Simplest Laplace Problem This is the simplest MFEM example and a good starting point for new users. The example demonstrates the use of MFEM to define and solve an $H^1$ finite element discretization of the Laplace problem $$-\\Delta u = 1 \\quad\\text{in } \\Omega$$ with homogeneous Dirichlet boundary conditions $$ u = 0 \\quad\\text{on } \\partial\\Omega$$ The example illustrates the use of the basic MFEM classes for defining the mesh, finite element space, as well as linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. The example has serial ( ex0.cpp ) and parallel ( ex0p.cpp ) versions. Example 1: Laplace Problem This example code demonstrates the use of MFEM to define a simple isoparametric finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Specifically, we discretize with the finite element space coming from the mesh (linear by default, quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The problem solved in this example is the same as ex0 , but with more sophisticated options and features. The example highlights the use of mesh refinement, finite element grid functions, as well as linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. We also cover the explicit elimination of essential boundary conditions, static condensation, and the optional connection to the GLVis tool for visualization. The example has a serial ( ex1.cpp ), a parallel ( ex1p.cpp ), and HPC versions: performance/ex1.cpp , performance/ex1p.cpp . It also has a PETSc modification in examples/petsc , a PUMI modification in examples/pumi and a Ginkgo modification in examples/ginkgo . Partial assembly and GPU devices are supported. Example 2: Linear Elasticity This example code solves a simple linear elasticity problem describing a multi-material cantilever beam. Specifically, we approximate the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder with $f$ being a constant pull down vector on boundary elements with attribute 2, and zero otherwise. The geometry of the domain is assumed to be as follows: The example demonstrates the use of high-order and NURBS vector finite element spaces with the linear elasticity bilinear form, meshes with curved elements, and the definition of piece-wise constant and vector coefficient objects. Static condensation is also illustrated. 
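Before moving on, the basic workflow that Examples 0 and 1 introduce (and which Example 2 extends to elasticity) can be condensed into a short serial sketch: mesh, H1 space, essential boundary conditions, linear and bilinear forms, linear solve. This follows the ex0.cpp pattern but is trimmed for brevity and is not the example source itself.

```cpp
#include "mfem.hpp"
using namespace mfem;

int main()
{
   // Mesh and H1 finite element space of order 2.
   Mesh mesh = Mesh::MakeCartesian2D(16, 16, Element::QUADRILATERAL);
   H1_FECollection fec(2, mesh.Dimension());
   FiniteElementSpace fespace(&mesh, &fec);

   // Treat the whole boundary as essential (Dirichlet u = 0).
   Array<int> ess_tdof_list, ess_bdr(mesh.bdr_attributes.Max());
   ess_bdr = 1;
   fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

   // Right-hand side (1, v) and stiffness form (grad u, grad v).
   ConstantCoefficient one(1.0);
   LinearForm b(&fespace);
   b.AddDomainIntegrator(new DomainLFIntegrator(one));
   b.Assemble();

   BilinearForm a(&fespace);
   a.AddDomainIntegrator(new DiffusionIntegrator(one));
   a.Assemble();

   GridFunction x(&fespace);
   x = 0.0;

   OperatorPtr A;
   Vector B, X;
   a.FormLinearSystem(ess_tdof_list, x, b, A, X, B);

   // Solve with PCG + Gauss-Seidel smoother, then recover the FE solution.
   GSSmoother M((SparseMatrix &)(*A));
   PCG(*A, M, B, X, 1, 400, 1e-12, 0.0);
   a.RecoverFEMSolution(X, b, x);

   x.Save("sol.gf");
   mesh.Save("mesh.mesh");
   return 0;
}
```

The saved pair can then be inspected with GLVis, e.g. `glvis -m mesh.mesh -g sol.gf`.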
The example has a serial ( ex2.cpp ) and a parallel ( ex2p.cpp ) version. It also has a PETSc modification in examples/petsc and a PUMI modification in examples/pumi . We recommend viewing Example 1 before viewing this example. Example 3: Definite Maxwell Problem This example code solves a simple 3D electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 2D or 3D. The example demonstrates the use of $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. Static condensation is also illustrated. The example has a serial ( ex3.cpp ) and a parallel ( ex3p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly and GPU devices are supported. We recommend viewing examples 1-2 before viewing this example. Example 4: Grad-div Problem This example code solves a simple 2D/3D $H(div)$ diffusion problem corresponding to the second order definite equation $$-{\\rm grad}(\\alpha\\,{\\rm div}(F)) + \\beta F = f$$ with boundary condition $F \\cdot n$ = \"given normal field\". Here we use a given exact solution $F$ and compute the corresponding right hand side $f$. We discretize with the Raviart-Thomas finite elements. The example demonstrates the use of $H(div)$ finite element spaces with the grad-div and $H(div)$ vector finite element mass bilinear form, as well as the computation of discretization error when the exact solution is known. Bilinear form hybridization and static condensation are also illustrated. The example has a serial ( ex4.cpp ) and a parallel ( ex4p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly and GPU devices are supported. We recommend viewing examples 1-3 before viewing this example. Example 5: Darcy Problem This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system $$ \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} $$ with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize with Raviart-Thomas finite elements (velocity $\\bf u$) and piecewise discontinuous polynomials (pressure $p$). The example demonstrates the use of the BlockMatrix and BlockOperator classes, as well as the collective saving of several grid functions in VisIt and ParaView formats. The example has a serial ( ex5.cpp ) and a parallel ( ex5p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly is supported. We recommend viewing examples 1-4 before viewing this example. Example 6: Laplace Problem with AMR This is a version of Example 1 with a simple adaptive mesh refinement loop. The problem being solved is again the Laplace equation $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. The problem is solved on a sequence of meshes which are locally refined in a conforming (triangles, tetrahedrons) or non-conforming (quadrilaterals, hexahedra) manner according to a simple ZZ error estimator. 
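A minimal sketch of such a ZZ-driven AMR loop is shown below, using MFEM's ZienkiewiczZhuEstimator and ThresholdRefiner classes. It follows the spirit of ex6.cpp but is heavily condensed (no convergence monitoring or visualization), and the flux space handed to the estimator is owned by the estimator so it is updated automatically after each refinement.

```cpp
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(4, 4, Element::QUADRILATERAL);
   mesh.EnsureNCMesh();   // allow non-conforming refinement of quads

   H1_FECollection fec(2, mesh.Dimension());
   FiniteElementSpace fespace(&mesh, &fec);
   GridFunction x(&fespace);

   ConstantCoefficient one(1.0);
   BilinearForm a(&fespace);
   LinearForm b(&fespace);
   DiffusionIntegrator *integ = new DiffusionIntegrator(one);
   a.AddDomainIntegrator(integ);
   b.AddDomainIntegrator(new DomainLFIntegrator(one));

   // ZZ estimator based on the diffusion flux; the refiner marks elements
   // holding roughly 70% of the total estimated error.
   L2_FECollection flux_fec(2, mesh.Dimension());
   ZienkiewiczZhuEstimator estimator(*integ, x,
      new FiniteElementSpace(&mesh, &flux_fec, mesh.SpaceDimension()));
   ThresholdRefiner refiner(estimator);
   refiner.SetTotalErrorFraction(0.7);

   for (int it = 0; it < 8; it++)
   {
      Array<int> ess_tdof_list, ess_bdr(mesh.bdr_attributes.Max());
      ess_bdr = 1;
      fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

      b.Assemble();
      a.Assemble();
      x = 0.0;

      OperatorPtr A; Vector B, X;
      a.FormLinearSystem(ess_tdof_list, x, b, A, X, B);
      GSSmoother M((SparseMatrix &)(*A));
      PCG(*A, M, B, X, 0, 500, 1e-12, 0.0);
      a.RecoverFEMSolution(X, b, x);

      refiner.Apply(mesh);            // estimate, mark and refine
      if (refiner.Stop()) { break; }

      fespace.Update(); x.Update();   // move data to the refined mesh
      a.Update(); b.Update();
   }
   return 0;
}
```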
The example demonstrates MFEM's capability to work with both conforming and nonconforming refinements, in 2D and 3D, on linear, curved and surface meshes. Interpolation of functions from coarse to fine meshes, as well as persistent GLVis visualization are also illustrated. The example has a serial ( ex6.cpp ) and a parallel ( ex6p.cpp ) version. It also has a PETSc modification in examples/petsc and a PUMI modification in examples/pumi . Partial assembly and GPU devices are supported. We recommend viewing Example 1 before viewing this example. Example 7: Surface Meshes This example code demonstrates the use of MFEM to define a triangulation of a unit sphere and a simple isoparametric finite element discretization of the Laplace problem with mass term, $$-\\Delta u + u = f.$$ The example highlights mesh generation, the use of mesh refinement, high-order meshes and finite elements, as well as surface-based linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. Simple local mesh refinement is also demonstrated. The example has a serial ( ex7.cpp ) and a parallel ( ex7p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 8: DPG for the Laplace Problem This example code demonstrates the use of the Discontinuous Petrov-Galerkin (DPG) method in its primal 2x2 block form as a simple finite element discretization of the Laplace problem $$-\\Delta u = f$$ with homogeneous Dirichlet boundary conditions. We use high-order continuous trial space, a high-order interfacial (trace) space, and a high-order discontinuous test space defining a local dual ($H^{-1}$) norm. We use the primal form of DPG, see \"A primal DPG method without a first-order reformulation\" , Demkowicz and Gopalakrishnan, CAM 2013. The example highlights the use of interfacial (trace) finite elements and spaces, trace face integrators and the definition of block operators and preconditioners. The example has a serial ( ex8.cpp ) and a parallel ( ex8p.cpp ) version. We recommend viewing examples 1-5 before viewing this example. Example 9: DG Advection This example code solves the time-dependent advection equation $$\\frac{\\partial u}{\\partial t} + v \\cdot \\nabla u = 0,$$ where $v$ is a given fluid velocity, and $u_0(x)=u(0,x)$ is a given initial condition. The example demonstrates the use of Discontinuous Galerkin (DG) bilinear forms in MFEM (face integrators), the use of explicit and implicit (with block ILU preconditioning) ODE time integrators, the definition of periodic boundary conditions through periodic meshes, as well as the use of GLVis for persistent visualization of a time-evolving solution. The saving of time-dependent data files for external visualization with VisIt and ParaView is also illustrated. The example has a serial ( ex9.cpp ) and a parallel ( ex9p.cpp ) version. It also has a SUNDIALS modification in examples/sundials , a PETSc modification in examples/petsc , and a HiOp modification in examples/hiop . Example 10: Nonlinear Elasticity This example solves a time dependent nonlinear elasticity problem of the form $$ \\frac{dv}{dt} = H(x) + S v\\,,\\qquad \\frac{dx}{dt} = v\\,, $$ where $H$ is a hyperelastic model and $S$ is a viscosity operator of Laplacian type. The geometry of the domain is assumed to be as follows: The example demonstrates the use of nonlinear operators, as well as their implicit time integration using a Newton method for solving an associated reduced backward-Euler type nonlinear equation. 
Each Newton step requires the inversion of a Jacobian matrix, which is done through a (preconditioned) inner solver. The example has a serial ( ex10.cpp ) and a parallel ( ex10p.cpp ) version. It also has a SUNDIALS modification in examples/sundials and a PETSc modification in examples/petsc . We recommend viewing examples 2 and 9 before viewing this example. Example 11: Laplace Eigenproblem This example code demonstrates the use of MFEM to solve the eigenvalue problem $$-\\Delta u = \\lambda u$$ with homogeneous Dirichlet boundary conditions. We compute a number of the lowest eigenmodes by discretizing the Laplacian and Mass operators using a finite element space of the specified order, or an isoparametric/isogeometric space if order < 1 (quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE, as well as optionally the SuperLU or STRUMPACK parallel direct solvers. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex11p.cpp ) version. It also has a SLEPc modification in examples/petsc . We recommend viewing Example 1 before viewing this example. Example 12: Linear Elasticity Eigenproblem This example code solves the linear elasticity eigenvalue problem for a multi-material cantilever beam. Specifically, we compute a number of the lowest eigenmodes by approximating the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = \\lambda {\\bf u} \\,,$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field $\\bf u$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder. The geometry of the domain is assumed to be as follows: The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex12p.cpp ) version. We recommend viewing examples 2 and 11 before viewing this example. Example 13: Maxwell Eigenproblem This example code solves the Maxwell (electromagnetic) eigenvalue problem $$\\nabla\\times\\nabla\\times\\, E = \\lambda\\, E $$ with homogeneous Dirichlet boundary conditions $E \\times n = 0$. We compute a number of the lowest nonzero eigenmodes by discretizing the curl curl operator using a Nedelec finite element space of the specified order in 2D or 3D. The example highlights the use of the AME subspace eigenvalue solver from HYPRE, which uses LOBPCG and AMS internally. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex13p.cpp ) version. We recommend viewing examples 3 and 11 before viewing this example. Example 14: DG Diffusion This example code demonstrates the use of MFEM to define a discontinuous Galerkin (DG) finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Finite element spaces of any order, including zero on regular grids, are supported. The example highlights the use of discontinuous spaces and DG-specific face integrators. 
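Examples 11-13 above all pair HYPRE's LOBPCG eigensolver with a BoomerAMG (or AMS) preconditioner. A trimmed parallel sketch of that pairing for the Laplace eigenproblem is given below; it follows the ex11p.cpp pattern but drops the options and visualization, and it assumes an MFEM build with MPI and HYPRE.

```cpp
#include "mfem.hpp"
#include <iostream>
#include <limits>
using namespace mfem;

int main(int argc, char *argv[])
{
   Mpi::Init(argc, argv);
   Hypre::Init();

   Mesh serial_mesh = Mesh::MakeCartesian2D(16, 16, Element::QUADRILATERAL);
   ParMesh mesh(MPI_COMM_WORLD, serial_mesh);
   serial_mesh.Clear();

   H1_FECollection fec(1, mesh.Dimension());
   ParFiniteElementSpace fespace(&mesh, &fec);

   Array<int> ess_bdr(mesh.bdr_attributes.Max());
   ess_bdr = 1;   // homogeneous Dirichlet everywhere

   ConstantCoefficient one(1.0);

   // Stiffness (Laplacian) and mass forms; the boundary diagonal values
   // push spurious boundary modes out of the computed spectrum.
   ParBilinearForm a(&fespace);
   a.AddDomainIntegrator(new DiffusionIntegrator(one));
   a.Assemble();
   a.EliminateEssentialBCDiag(ess_bdr, 1.0);
   a.Finalize();

   ParBilinearForm m(&fespace);
   m.AddDomainIntegrator(new MassIntegrator(one));
   m.Assemble();
   m.EliminateEssentialBCDiag(ess_bdr, std::numeric_limits<double>::min());
   m.Finalize();

   HypreParMatrix *A = a.ParallelAssemble();
   HypreParMatrix *M = m.ParallelAssemble();

   HypreBoomerAMG amg(*A);
   amg.SetPrintLevel(0);

   HypreLOBPCG lobpcg(MPI_COMM_WORLD);
   lobpcg.SetNumModes(5);
   lobpcg.SetPreconditioner(amg);
   lobpcg.SetMaxIter(200);
   lobpcg.SetTol(1e-8);
   lobpcg.SetPrintLevel(1);
   lobpcg.SetMassMatrix(*M);
   lobpcg.SetOperator(*A);

   Array<double> eigenvalues;
   lobpcg.Solve();
   lobpcg.GetEigenvalues(eigenvalues);

   if (Mpi::Root())
   {
      for (int i = 0; i < eigenvalues.Size(); i++)
      {
         std::cout << "lambda_" << i << " = " << eigenvalues[i] << "\n";
      }
   }

   delete A;
   delete M;
   return 0;
}
```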
The example has a serial ( ex14.cpp ) and a parallel ( ex14p.cpp ) version. We recommend viewing examples 1 and 9 before viewing this example. Example 15: Dynamic AMR Building on Example 6 , this example demonstrates dynamic adaptive mesh refinement. The mesh is adapted to a time-dependent solution by refinement as well as by derefinement. For simplicity, the solution is prescribed and no time integration is done. However, the error estimation and refinement/derefinement decisions are realistic. At each outer iteration the right hand side function is changed to mimic a time dependent problem. Within each inner iteration the problem is solved on a sequence of meshes which are locally refined according to a simple ZZ error estimator. At the end of the inner iteration the error estimates are also used to identify any elements which may be over-refined and a single derefinement step is performed. After each refinement or derefinement step a rebalance operation is performed to keep the mesh evenly distributed among the available processors. The example demonstrates MFEM's capability to refine, derefine and load balance nonconforming meshes, in 2D and 3D, and on linear, curved and surface meshes. Interpolation of functions between coarse and fine meshes, persistent GLVis visualization, and saving of time-dependent fields for external visualization with VisIt are also illustrated. The example has a serial ( ex15.cpp ) and a parallel ( ex15p.cpp ) version. We recommend viewing examples 1, 6 and 9 before viewing this example. Example 16: Time Dependent Heat Conduction This example code solves a simple 2D/3D time dependent nonlinear heat conduction problem $$\\frac{du}{dt} = \\nabla \\cdot \\left( \\kappa + \\alpha u \\right) \\nabla u$$ with a natural insulating boundary condition $\\frac{du}{dn} = 0$. We linearize the problem by using the temperature field $u$ from the previous time step to compute the conductivity coefficient. This example demonstrates both implicit and explicit time integration as well as a single Picard step method for linearization. The saving of time dependent data files for external visualization with VisIt is also illustrated. The example has a serial ( ex16.cpp ) and a parallel ( ex16p.cpp ) version. We recommend viewing examples 2, 9, and 10 before viewing this example. Example 17: DG Linear Elasticity This example code solves a simple linear elasticity problem describing a multi-material cantilever beam using symmetric or non-symmetric discontinuous Galerkin (DG) formulation. Specifically, we approximate the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are Dirichlet, $\\bf{u}=\\bf{u_D}$, on the fixed part of the boundary, namely boundary attributes 1 and 2; on the rest of the boundary we use ${\\sigma}({\\bf u})\\cdot n = {\\bf 0}$. The geometry of the domain is assumed to be as follows: The example demonstrates the use of high-order DG vector finite element spaces with the linear DG elasticity bilinear form, meshes with curved elements, and the definition of piece-wise constant and function vector-coefficient objects. The use of non-homogeneous Dirichlet b.c. imposed weakly, is also illustrated. The example has a serial ( ex17.cpp ) and a parallel ( ex17p.cpp ) version. 
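Several of the time-dependent examples above (9, 10 and 16 in particular) share the same TimeDependentOperator / ODESolver pattern: an operator supplies du/dt, and an ODE solver advances the state. The stripped-down sketch below shows only that pattern; the DecayOperator is a hypothetical toy standing in for the assembled conduction or advection operators that the real examples implement (including ImplicitSolve for implicit schemes).

```cpp
#include "mfem.hpp"
#include <iostream>
using namespace mfem;

// Toy operator for du/dt = -u. Example 16 implements the same interface,
// but its Mult() (and ImplicitSolve()) apply assembled mass and
// conductivity operators instead of this scalar decay.
class DecayOperator : public TimeDependentOperator
{
public:
   DecayOperator(int n) : TimeDependentOperator(n) { }

   virtual void Mult(const Vector &u, Vector &dudt) const
   {
      dudt = u;
      dudt *= -1.0;
   }
};

int main()
{
   DecayOperator oper(1);

   Vector u(1);
   u = 1.0;                       // initial condition u(0) = 1

   RK4Solver ode_solver;          // explicit; implicit solvers follow the same API
   ode_solver.Init(oper);

   double t = 0.0, dt = 0.01, t_final = 1.0;
   while (t < t_final - 1e-12)
   {
      ode_solver.Step(u, t, dt);  // advances u and t by (up to) dt
   }
   std::cout << "u(1) = " << u(0) << " (exact exp(-1) = 0.367879...)\n";
   return 0;
}
```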
We recommend viewing examples 2 and 14 before viewing this example. Example 18: DG Euler Equations This example code solves the compressible Euler system of equations, a model nonlinear hyperbolic PDE, with a discontinuous Galerkin (DG) formulation. The primary purpose is to show how a transient system of nonlinear equations can be formulated in MFEM. The equations are solved in conservative form $$\\frac{\\partial u}{\\partial t} + \\nabla \\cdot {\\bf F}(u) = 0$$ with a state vector $u = [ \\rho, \\rho v_0, \\rho v_1, \\rho E ]$, where $\\rho$ is the density, $v_i$ is the velocity in the $i^{\\rm th}$ direction, $E$ is the total specific energy, and $H = E + p / \\rho$ is the total specific enthalpy. The pressure, $p$ is computed through a simple equation of state (EOS) call. The conservative hydrodynamic flux ${\\bf F}$ in each direction $i$ is $${\\bf F_{\\it i}} = [ \\rho v_i, \\rho v_0 v_i + p \\delta_{i,0}, \\rho v_1 v_i + p \\delta_{i,1}, \\rho v_i H ]$$ Specifically, the example solves for an exact solution of the equations whereby a vortex is transported by a uniform flow. Since all boundaries are periodic here, the method's accuracy can be assessed by measuring the difference between the solution and the initial condition at a later time when the vortex returns to its initial location. Note that as the order of the spatial discretization increases, the timestep must become smaller. This example currently uses a simple estimate derived by Cockburn and Shu for the 1D RKDG method. An additional factor can be tuned by passing the --cfl (or -c shorter) flag. The example demonstrates user-defined nonlinear form with hyperbolic form integrator for systems of equations that are defined with block vectors, and how these are used with an operator for explicit time integrators. In this case the system also involves an external approximate Riemann solver for the DG interface flux. It also demonstrates how to use GLVis for in-situ visualization of vector grid functions. The example has a serial ( ex18.cpp ) and a parallel ( ex18p.cpp ) version. We recommend viewing examples 9, 14 and 17 before viewing this example. Example 19: Incompressible Nonlinear Elasticity This example code solves the quasi-static incompressible nonlinear hyperelasticity equations. Specifically, it solves the nonlinear equation $$ \\nabla \\cdot \\sigma(F) = 0 $$ subject to the constraint $$ \\text{det } F = 1 $$ where $\\sigma$ is the Cauchy stress and $F_{ij} = \\delta_{ij} + u_{i,j}$ is the deformation gradient. To handle the incompressibility constraint, pressure is included as an independent unknown $p$ and the stress response is modeled as an incompressible neo-Hookean hyperelastic solid . The geometry of the domain is assumed to be as follows: This formulation requires solving the saddle point system $$ \\left[ \\begin{array}{cc} K &B^T \\\\ B & 0 \\end{array} \\right] \\left[\\begin{array}{c} \\Delta u \\\\ \\Delta p \\end{array} \\right] = \\left[\\begin{array}{c} R_u \\\\ R_p \\end{array} \\right] $$ at each Newton step. To solve this linear system, we implement a specialized block preconditioner of the form $$ P^{-1} = \\left[\\begin{array}{cc} I & -\\tilde{K}^{-1}B^T \\\\ 0 & I \\end{array} \\right] \\left[\\begin{array}{cc} \\tilde{K}^{-1} & 0 \\\\ 0 & -\\gamma \\tilde{S}^{-1} \\end{array} \\right] $$ where $\\tilde{K}^{-1}$ is an approximation of the inverse of the stiffness matrix $K$ and $\\tilde{S}^{-1}$ is an approximation of the inverse of the Schur complement $S = BK^{-1}B^T$. 
To approximate the Schur complement, we use the mass matrix for the pressure variable $p$. The example demonstrates how to solve nonlinear systems of equations that are defined with block vectors as well as how to implement specialized block preconditioners for use in iterative solvers. The example has a serial ( ex19.cpp ) and a parallel ( ex19p.cpp ) version. We recommend viewing examples 2, 5 and 10 before viewing this example. Example 20: Symplectic Integration of Hamiltonian Systems This example demonstrates the use of the variable order, symplectic time integration algorithm. Symplectic integration algorithms are designed to conserve energy when integrating systems of ODEs which are derived from Hamiltonian systems. Hamiltonian systems define the energy of a system as a function of time (t), a set of generalized coordinates (q), and their corresponding generalized momenta (p). $$ H(q,p,t) = T(p) + V(q,t) $$ Hamilton's equations then specify how q and p evolve in time: $$ \\frac{dq}{dt} = \\frac{dH}{dp}\\,,\\qquad \\frac{dp}{dt} = -\\frac{dH}{dq} $$ To use the symplectic integration classes we need to define an mfem::Operator ${\\bf P}$ which evaluates the action of dH/dp, and an mfem::TimeDependentOperator ${\\bf F}$ which computes -dH/dq. This example visualizes its results as an evolution in phase space by defining the axes to be $q$, $p$, and $t$ rather than $x$, $y$, and $z$. In this space we build a ribbon-like mesh with nodes at $(0,0,t)$ and $(q,p,t)$. Finally we plot the energy as a function of time as a scalar field on this ribbon-like mesh. This scheme highlights any variations in the energy of the system. This example offers five simple 1D Hamiltonians: Simple Harmonic Oscillator (mass on a spring) $$H = \\frac{1}{2}\\left( \\frac{p^2}{m} + \\frac{q^2}{k} \\right)$$ Pendulum $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} - k \\left( 1 - cos(q) \\right) \\right]$$ Gaussian Potential Well $$H = \\frac{p^2}{2m} - k e^{-q^2 / 2}$$ Quartic Potential $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} + k \\left( 1 + q^2 \\right) q^2 \\right]$$ Negative Quartic Potential $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} + k \\left( 1 - \\frac{q^2}{8} \\right) q^2 \\right]$$ In all cases these Hamiltonians are shifted by constant values so that the energy will remain positive. The mean and standard deviation of the computed energies at each time step are displayed upon completion. When run in parallel, each processor integrates the same Hamiltonian system but starting from different initial conditions. The example has a serial ( ex20.cpp ) and a parallel ( ex20p.cpp ) version. See the Maxwell miniapp for another application of symplectic integration. Example 21: Adaptive mesh refinement for linear elasticity This is a version of Example 2 with a simple adaptive mesh refinement loop. The problem being solved is again linear elasticity describing a multi-material cantilever beam. The problem is solved on a sequence of meshes which are locally refined in a conforming (triangles, tetrahedrons) or non-conforming (quadrilaterals, hexahedra) manner according to a simple ZZ error estimator. The example demonstrates MFEM's capability to work with both conforming and nonconforming refinements, in 2D and 3D, on linear and curved meshes. Interpolation of functions from coarse to fine meshes, as well as persistent GLVis visualization are also illustrated. The example has a serial ( ex21.cpp ) and a parallel ( ex21p.cpp ) version. We recommend viewing Examples 2 and 6 before viewing this example. 
Example 22: Complex Linear Systems This example code demonstrates the use of MFEM to define and solve a complex-valued linear system. It implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ In each case the field is driven by a forced oscillation, with angular frequency $\\omega$, imposed at the boundary or a portion of the boundary. The example also demonstrates how to display a time-varying solution as a sequence of fields sent to a single GLVis socket. The example has a serial ( ex22.cpp ) and a parallel ( ex22p.cpp ) version. We recommend viewing examples 1, 3, and 4 before viewing this example. Example 23: Wave Problem This example code solves a simple 2D/3D wave equation with a second order time derivative: $$\\frac{\\partial^2 u}{\\partial t^2} - c^2\\Delta u = 0$$ The boundary conditions are either Dirichlet or Neumann. The example demonstrates the use of time dependent operators, implicit solvers and second order time integration. The example has only a serial ( ex23.cpp ) version. We recommend viewing examples 9 and 10 before viewing this example. Example 24: Mixed finite element spaces This example code illustrates usage of mixed finite element spaces, with three variants: $H^1 \\times H(curl)$ $H(curl) \\times H(div)$ $H(div) \\times L_2$ Using different approaches for demonstration purposes, we project or interpolate a gradient, curl, or divergence in the appropriate spaces, comparing the errors in each case. Partial assembly and GPU devices are supported. The example has a serial ( ex24.cpp ) and a parallel ( ex24p.cpp ) version. We recommend viewing examples 1 and 3 before viewing this example. Example 25: Perfectly Matched Layers The example illustrates the use of a Perfectly Matched Layer (PML) for the simulation of time-harmonic electromagnetic waves propagating in unbounded domains. PML was originally introduced by Berenger in \"A Perfectly Matched Layer for the Absorption of Electromagnetic Waves\" . It is a technique used to solve wave propagation problems posed in infinite domains. The implementation involves the introduction of an artificial absorbing layer that minimizes undesired reflections. Inside this layer a complex coordinate stretching map is used which forces the wave modes to decay exponentially. The example solves the indefinite Maxwell equations $$\\nabla \\times (a \\nabla \\times E) - \\omega^2 b E = f.$$ where $a = \\mu^{-1} |J|^{-1} J^T J$, $b= \\epsilon |J| J^{-1} J^{-T}$ and $J$ is the Jacobian matrix of the coordinate transformation. The example demonstrates discretization with Nedelec finite elements in 2D or 3D, as well as the use of complex-valued bilinear and linear forms. Several test problems are included, with known exact solutions. The example has a serial ( ex25.cpp ) and a parallel ( ex25p.cpp ) version. We recommend viewing Example 22 before viewing this example. 
Example 26: Multigrid Preconditioner This example code demonstrates the use of MFEM to define a simple isoparametric finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions and how to solve it efficiently using a matrix-free multigrid preconditioner. The example highlights on the creation of a hierarchy of discretization spaces and diffusion bilinear forms using partial assembly. The levels in the hierarchy of finite element spaces maybe constructed through geometric or order refinements. Moreover, the construction of a multigrid preconditioner for the PCG solver is shown. The multigrid uses a PCG solver on the coarsest level and second order Chebyshev accelerated smoothers on the other levels. The example has a serial ( ex26.cpp ) and a parallel ( ex26p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 27: Laplace Boundary Conditions This example code demonstrates the use of MFEM to define a simple finite element discretization of the Laplace problem: $$ -\\Delta u = 0 $$ with a variety of boundary conditions. Specifically, we discretize using a FE space of the specified order using a continuous or discontinuous space. We then apply Dirichlet, Neumann (both homogeneous and inhomogeneous), Robin, and Periodic boundary conditions on different portions of a predefined mesh. Boundary conditions: $u = u_{dbc}$ on $\\Gamma_{dbc}$ $\\hat{n}\\cdot\\nabla u = g_{nbc}$ on $\\Gamma_{nbc}$ $\\hat{n}\\cdot\\nabla u = 0$ on $\\Gamma_{nbc_0}$ $\\hat{n}\\cdot\\nabla u + a u = b$ on $\\Gamma_{rbc}$ as well as periodic boundary conditions which are enforced topologically. The example has a serial ( ex27.cpp ) and a parallel ( ex27p.cpp ) version. We recommend viewing examples 1 and 14 before viewing this example. Example 28: Constraints and Sliding Boundary Conditions This example code illustrates the use of constraints in linear solvers by solving an elasticity problem where the normal component of the displacement is constrained to zero on two boundaries but tangential displacement is allowed. The constraints can be enforced in several different ways, including eliminating them from the linear system or solving a saddle-point system that explicitly includes constraint conditions. The example has a serial ( ex28.cpp ) and a parallel ( ex28p.cpp ) version. We recommend viewing example 2 before viewing this example. Example 29: Solving PDEs on embedded surfaces This example demonstrates setting up and solving an anisotropic Laplace problem $$-\\nabla\\cdot(\\sigma\\nabla u) = 1 \\quad\\text{in } \\Omega$$ with homogeneous Dirichlet boundary conditions $$ u = 0 \\quad\\text{on } \\partial\\Omega$$ where $\\Omega$ is a two dimensional curved surface embedded in three dimensions and $\\sigma$ is an anisotropic diffusion tensor. The example demonstrates and validates our DiffusionIntegrator 's ability to properly integrate three dimensional fluxes on a two dimensional domain. Not all of our integrators currently support such cases but the DiffusionIntegrator can be used as a simple example of how extend other integrators when necessary. The example has a serial ( ex29.cpp ) and a parallel ( ex29p.cpp ) version. We recommend viewing examples 1 and 7 before viewing this example. Example 30: Resolving rough and fine-scale problem data Unresolved problem data will affect the accuracy of a discretized PDE solution as well as a posteriori estimates of the solution error. 
This example uses a CoefficientRefiner object to preprocess an input mesh until the resolution of the prescribed problem data $f \\in L^2$ is below a prescribed tolerance. In this example, the resolution is identified with a data oscillation function on the mesh $\\mathcal{T}$, defined $$ \\mathrm{osc}(f) = \\Big( \\sum_{T\\in\\mathcal{T}} \\| h \\cdot (I - \\Pi)\\, f \\|^2_{L^2(T)} \\Big)^{1/2}, $$ where $h$ is the local element size function and $\\Pi$ is a finite element projection operator, and the sum is taken over all elements $T$ in the mesh. In this example, the coarse initial mesh is adaptively refined until $\\mathrm{osc}(f)$ is below a prescribed tolerance for various candidate functions $f \\in L^2$. When using rough problem data, it is recommended to perform this type of preprocessing before a posteriori error estimation. The example has a serial ( ex30.cpp ) and a parallel ( ex30p.cpp ) version. We recommend viewing examples 1 and 6 before viewing this example. Example 31: Anisotropic Definite Maxwell Problem This example code solves a simple electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + \\sigma E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". In this example $\\sigma$ is an anisotropic 3x3 tensor. Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 1D, 2D, or 3D. The example demonstrates the use of restricted $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. These restricted spaces allow the solution of 1D or 2D electromagnetic problems which involve 3D field vectors. Such problems arise in plasma physics and crystallography. The example has a serial ( ex31.cpp ) and a parallel ( ex31p.cpp ) version. We recommend viewing example 3 before viewing this example. Example 32: Anisotropic Maxwell Eigenproblem This example code solves the Maxwell (electromagnetic) eigenvalue problem with anisotropic permittivity, $\\epsilon$ $$\\nabla\\times\\nabla\\times\\, E = \\lambda\\, \\epsilon E $$ with homogeneous Dirichlet boundary conditions $E \\times n = 0$. We compute a number of the lowest nonzero eigenmodes by discretizing the curl curl operator using a Nedelec finite element space of the specified order in 1D, 2D, or 3D. The example demonstrates the use of restricted $H(curl)$ finite element spaces in an eigenmode context. These restricted spaces allow the solution of 1D or 2D electromagnetic problems which involve 3D field vectors. Such problems arise in plasma physics and crystallography. The example highlights the use of the AME subspace eigenvalue solver from HYPRE, which uses LOBPCG and AMS internally. Reusing multiple GLVis visualization windows for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex32p.cpp ) version. We recommend viewing examples 13 and 31 before viewing this example. Example 33: Spectral fractional Laplacian This example code demonstrates the use of MFEM to solve the fractional Laplacian problem $$ (-\\Delta)^\\alpha u = 1, \\quad \\alpha > 0, $$ with homogeneous Dirichlet boundary conditions. The problem solved in this example is similar to ex1 , but involves a fractional-order diffusion operator whose inverse can be approximated by a series of inverses of integer-order diffusion operators. 
Solving each of these independent integer-order PDEs with MFEM and summing their solutions results in a discrete solution to the fractional Laplacian problem above. The example has a serial ( ex33.cpp ) and a parallel ( ex33p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 34: Source Function using a SubMesh Transfer This example demonstrates the use of a SubMesh object to transfer solution data from a sub-domain and use this as a source function on the full domain. In this case we compute a volumetric current density $\\vec{J}$ as the gradient of a scalar potential $\\varphi$ on a portion of the domain. $$\\nabla\\cdot(\\sigma\\nabla\\varphi)=0$$ $$\\vec{J} = -\\sigma\\nabla\\varphi$$ Where a voltage difference is applied on surfaces of the sub-domain (shown on the left) to generate the current density restricted to this sub-domain. The current density is then transferred to the full domain (shown on the right) using a SubMesh object. We then use this current density on the full domain as a source term in a magnetostatic solve for a vector potential $\\vec{A}$. $$\\nabla\\times(\\mu^{-1}\\nabla\\times\\vec{A})=\\vec{J}$$ $$\\vec{B} = \\nabla\\times\\vec{A}$$ This example verifies the recreation of boundary attributes on a sub-domain mesh as well as transfer of Raviart-Thomas vector fields between the SubMesh and the full Mesh. Note that the data transfer in this particular example involves arbitrary order Raviart-Thomas degrees of freedom on a mixture of tetrahedral and triangular prism elements. The example has a serial ( ex34.cpp ) and a parallel ( ex34p.cpp ) version. We recommend viewing Examples 1 and 3 before viewing this example. Example 35: Port Boundary Conditions using SubMesh Transfers This example demonstrates the use of a SubMesh object to transfer a port boundary condition from a portion of the boundary to the corresponding portion of the full domain. Just as in Example 22 this example implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0\\mbox{ with }u|_\\Gamma=v$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0\\mbox{ with }\\hat{n}\\times(\\vec{u}\\times\\hat{n})|_\\Gamma=\\vec{v}$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0\\mbox{ with }\\hat{n}\\cdot\\vec{u}|_\\Gamma=v$$ Where $\\Gamma$ is a portion of the boundary called the port . In each case the field is driven by a forced oscillation, with angular frequency $\\omega$, imposed at the boundary or a portion of the boundary. In Example 22 this boundary condition was simply a constant in space. 
In this example the boundary condition is an eigenmode of a lower dimensional eigenvalue problem defined on a portion of the boundary as follows: For the scalar $H^1$ field: $$-\\nabla\\cdot\\left(\\nabla v\\right) = \\lambda\\,v\\mbox{ with }v|_{\\partial\\Gamma}=0$$ For the vector $H(curl)$ field: $$\\nabla\\times\\left(\\nabla\\times\\vec{v}\\right) = \\lambda\\,\\vec{v}\\mbox{ with }\\hat{n}_{\\partial\\Gamma}\\times\\vec{v}|_{\\partial\\Gamma}=0$$ For the vector $H(div)$ field: $$-\\nabla\\cdot\\left(\\nabla v\\right) = \\lambda\\,v\\mbox{ with }\\hat{n}_{\\partial\\Gamma}\\cdot\\nabla v|_{\\partial\\Gamma}=0$$ The different cases implemented in this example can be used to verify the transfer of an $H^1$ scalar field, the tangential components of an $H(curl)$ vector field, and the normal component of an $H(div)$ vector field (as a scalar $L^2$ field in this case) between a SubMesh and its parent mesh. The example has only a parallel ( ex35p.cpp ) version because the eigenmode solver used to compute the field on the port is only implemented in parallel. We recommend viewing Examples 11, 13, and 22 before viewing this example. Example 36: Obstacle Problem This example code solves the pointwise bound-constrained energy minimization problem $$ \\text{minimize } \\frac{1}{2}\\|\\nabla u\\|^2 \\text{ in } H^1_0(\\Omega)\\, \\text{ subject to } u \\ge \\varphi\\,.$$ This is known as the obstacle problem, and it is a classical motivating example in the study of variational inequalities and free boundary problems. In this example, the obstacle $\\varphi$ is the graph of a half-sphere centered at the origin of a circular domain $\\Omega$. After solving to a specified tolerance, the numerical solution is compared to a closed-form exact solution to assess its accuracy. The problem is solved using the Proximal Galerkin finite element method, which is a nonlinear, structure-preserving mixed method for pointwise bound constraints proposed by Keith and Surowiec . In turn, this example highlights MFEM's ability to deliver high-order solutions to variational inequalities and showcases how to set up and solve nonlinear mixed methods. The example has a serial ( ex36.cpp ) and a parallel ( ex36p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 37: Topology Optimization Density field $\\rho$ Problem set-up and domain $\\Omega$ This example code solves a classical cantilever beam topology optimization problem. The aim is to find an optimal material density field $\\rho$ in $L^1(\\Omega)$ to minimize the elastic compliance; i.e., $$\\begin{align} &\\text{minimize} \\int_\\Omega \\mathbf{f} \\cdot \\mathbf{u}(\\rho)\\, \\mathrm{d}x\\, \\text{ over }\\, \\rho \\in L^1(\\Omega) \\\\ &\\text{subject to }\\, 0 \\leq \\rho \\leq 1\\, \\text{ and } \\int_\\Omega \\rho\\, \\mathrm{d}x = \\theta\\, \\mathrm{vol}(\\Omega) \\,. \\end{align}$$ In this problem, $\\mathbf{f}$ is a localized force and the linearly elastic displacement field $\\mathbf{u} = \\mathbf{u}(\\rho)$ is determined by a material density field $\\rho$ with total volume fraction $0<\\theta<1$. The problem is solved using a mirror descent algorithm proposed by Keith and Surowiec . For further details, see the more elaborate description of this PDE-constrained optimization problem given in the example code and the aforementioned paper. The example has a serial ( ex37.cpp ) and a parallel ( ex37p.cpp ) version. We recommend viewing Example 2 before viewing this example. 
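Example 37's compliance objective is constrained by a linear elasticity state equation of the kind introduced in Example 2. The serial sketch below assembles and solves such a state problem on a hypothetical cantilever-like Cartesian mesh with constant Lame coefficients; the mesh, boundary attributes, and coefficient values are illustrative and are not taken from ex37.cpp, where the material additionally depends on the density field $\\rho$.

```cpp
#include "mfem.hpp"
using namespace mfem;

int main()
{
   // Cantilever-like setup: a 3x1 beam, clamped on the left edge (attribute 4),
   // pulled down on the right edge (attribute 2). The attribute numbering for
   // MFEM's generated Cartesian meshes is an assumption of this sketch.
   const int order = 1;
   Mesh mesh = Mesh::MakeCartesian2D(30, 10, Element::QUADRILATERAL, true, 3.0, 1.0);
   const int dim = mesh.Dimension();

   H1_FECollection fec(order, dim);
   FiniteElementSpace fespace(&mesh, &fec, dim);   // vector-valued displacement space

   Array<int> ess_bdr(mesh.bdr_attributes.Max()), ess_tdof_list;
   ess_bdr = 0; ess_bdr[3] = 1;                    // clamp the left edge
   fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

   // Constant pull-down traction on the right edge; in ex37.cpp the load is localized.
   VectorArrayCoefficient f(dim);
   f.Set(0, new ConstantCoefficient(0.0));
   Vector pull(mesh.bdr_attributes.Max());
   pull = 0.0; pull(1) = -1.0e-2;
   f.Set(dim - 1, new PWConstCoefficient(pull));

   LinearForm b(&fespace);
   b.AddBoundaryIntegrator(new VectorBoundaryLFIntegrator(f));
   b.Assemble();

   // Constant Lame coefficients; ex37.cpp modulates the material by the density rho.
   ConstantCoefficient lambda_c(1.0), mu_c(1.0);
   BilinearForm a(&fespace);
   a.AddDomainIntegrator(new ElasticityIntegrator(lambda_c, mu_c));
   a.Assemble();

   GridFunction x(&fespace);
   x = 0.0;

   SparseMatrix A;
   Vector B, X;
   a.FormLinearSystem(ess_tdof_list, x, b, A, X, B);

   GSSmoother M(A);
   PCG(A, M, B, X, 1, 500, 1e-8, 0.0);
   a.RecoverFEMSolution(X, b, x);
   return 0;
}
```

In the actual example this state solve sits inside the mirror descent iteration, with the material coefficients depending on the current density iterate.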
Example 38: Cut-Volume and Cut-Surface Integration This example code demonstrates construction of cut-surface and cut-volume IntegrationRules. The cut is specified by the zero level set of a given Coefficient $\\phi$. The resulting IntegrationRules are combined with standard LinearFormIntegrators to demonstrate integration of a function $u$ over an implicit interface, and a subdomain bounded by an implicit interface: $$ S = \\int_{\\phi = 0} u(x) ~ ds, \\quad V = \\int_{\\phi > 0} u(x) ~ dx. $$ The IntegrationRules are constructed by the moment-fitting algorithm introduced by M\u00fcller, Kummer and Oberlack . Through a set of basis functions, for each element the method defines and solves a local under-determined system for the vector of quadrature weights. All surface and volume integrals, which are required to form the system, are reduced to 1D integration over intersected segments. The example has only a serial ( ex38.cpp ) version, because the construction of the integration rules is an element-local procedure. It requires MFEM to be built with LAPACK, which is used to find the optimal solution of an under-determined system of equations. Example 39: Named Attribute Sets This example uses the Poisson equation to demonstrate the use of named attribute sets in MFEM to specify material regions, boundary regions, or source regions by name rather than attribute numbers. It also demonstrates how new named attribute sets may be created from arbitrary groupings of attribute numbers and used as a convenient shorthand to refer to those groupings in other portions of the application or through the command line. Named attribute sets also required changes to MFEM's mesh file formats. This example makes use of a custom input mesh file ( compass.msh ) produced using Gmsh which includes named regions and boundaries. A related mesh file ( compass.mesh ) illustrates MFEM's representation of the new named attribute sets. See file formats for details of the augmented mesh file format. The example has a serial ( ex39.cpp ) and a parallel ( ex39p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 40: Eikonal Equation This example highlights MFEM's ability to solve a fully-nonlinear, first-order PDE with high-order finite elements. In particular this example uses the proximal Galerkin method to solve the eikonal equation, $$ |\\nabla u| = 1 \\text{ in } \\Omega, \\quad u = 0 \\text{ on } \\partial \\Omega. $$ At each point $x$ in the domain $\\Omega$, the solution of this PDE provides the Euclidean distance to the domain boundary, $u(x) = \\min \\{ | x - y| : y \\in \\partial \\Omega\\}$. The problem is solved by recasting $u$ as the solution of the nonlinear program $$ \\text{maximize } \\int_\\Omega u\\, \\mathrm{d} x\\, \\text{ in } W^{1,\\infty}_0(\\Omega)\\, \\text{ subject to } |\\nabla u | \\leq 1 \\text{ a.e. in } \\Omega.$$ A solution is then obtained by discretizing and solving a sequence of nonlinear saddle-point problems. See the example code for a more detailed description of the method. The example has a serial ( ex40.cpp ) and a parallel ( ex40p.cpp ) version. We recommend viewing Example 5 and Example 36 before viewing this example. NURBS Example 1: Laplace Problem This example code demonstrates the use of MFEM to define a simple isogeometric NURBS discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. The problem solved in this example is the same as Example 1 . 
The example has a serial ( nurbs_ex1.cpp ) and a parallel ( nurbs_ex1p.cpp ) version. There is also a version that demonstrates efficient patchwise quadrature ( nurbs_ex1 patch.cpp ). NURBS Example 3: Definite Maxwell Problem This example code solves a simple electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with NURBS-based $H(curl)$ elements in 2D or 3D. The problem solved in this example is the same as Example 3 . The example has only a serial ( nurbs_ex1.cpp ) version. NURBS Example 5: Darcy Problem This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system $$ \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} $$ with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize the velocity ($\\bf u$) with NURBS-based $H(div)$ elements and the pressure ($p$) with compatible NURBS-based $H^1$ elements. The problem solved in this example is the same as Example 5 . The example has only a serial ( nurbs_ex5.cpp ) version. NURBS Example 11: Laplace Eigenproblem This example code demonstrates the use of MFEM to solve the eigenvalue problem $$-\\Delta u = \\lambda u$$ with homogeneous Dirichlet boundary conditions. We compute a number of the lowest eigenmodes by discretizing the Laplacian and Mass operators using a finite element space of the specified order, or an isoparametric/isogeometric space if order < 1 (quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE, as well as optionally the SuperLU or STRUMPACK parallel direct solvers. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The problem solved in this example is the same as Example 11 . The example has only a parallel ( nurbs_ex11p.cpp ) version. NURBS Example 24: Mixed finite element spaces The problem solved in this example is the same as Example 24 , but NURBS-based elements are also supported. This example code illustrates usage of mixed finite element spaces, with three variants: $H^1 \\times H(curl)$ $H(curl) \\times H(div)$ $H(div) \\times L_2$ Using different approaches for demonstration purposes, we project or interpolate a gradient, curl, or divergence in the appropriate spaces, comparing the errors in each case. The example has only a serial ( nurbs_ex24.cpp ) version. Volta Miniapp: Electrostatics This miniapp demonstrates the use of MFEM to solve realistic problems in the field of linear electrostatics. Its features include: dielectric materials charge densities surface charge densities prescribed voltages applied polarizations high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( volta.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Tesla Miniapp: Magnetostatics This miniapp showcases many of MFEM's features while solving a variety of realistic magnetostatics problems.
Its features include: diamagnetic and/or paramagnetic materials ferromagnetic materials volumetric current densities surface current densities external fields high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( tesla.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Maxwell Miniapp: Transient Full-Wave Electromagnetics This miniapp solves the equations of transient full-wave electromagnetics. Its features include: mixed formulation of the coupled first-order Maxwell equations $H(\\mathrm{curl})$ discretization of the electric field $H(\\mathrm{div})$ discretization of the magnetic flux energy conserving, variable order, implicit time integration dielectric materials diamagnetic and/or paramagnetic materials conductive materials volumetric current densities Sommerfeld absorbing boundary conditions high order meshes high order basis functions advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( maxwell.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Joule Miniapp: Transient Magnetics and Joule Heating This miniapp solves the equations of transient low-frequency (a.k.a. eddy current) electromagnetics, and simultaneously computes transient heat transfer with the heat source given by the electromagnetic Joule heating. Its features include: $H^1$ discretization of the electrostatic potential $H(\\mathrm{curl})$ discretization of the electric field $H(\\mathrm{div})$ discretization of the magnetic field $H(\\mathrm{div})$ discretization of the heat flux $L^2$ discretization of the temperature implicit transient time integration high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( joule.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Mobius Strip Miniapp This miniapp generates various Mobius strip-like surface meshes. It is a good way to generate complex surface meshes. Manipulating the mesh topology and performing mesh transformation are demonstrated. The mobius-strip mesh in the data directory was generated with this miniapp. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( mobius-strip.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Klein Bottle Miniapp This miniapp generates three types of Klein bottle surfaces. It is similar to the mobius-strip miniapp. Manipulating the mesh topology and performing mesh transformation are demonstrated. The klein-bottle and klein-donut meshes in the data directory were generated with this miniapp. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( klein-bottle.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Toroid Miniapp This miniapp generates two types of toroidal volume meshes; one with triangular cross sections and one with square cross sections. 
It works by defining a stack of individual elements and bending them so that the bottom and top of the stack can be joined to form a torus. It supports various options including: The element type: 0 - Wedge, 1 - Hexahedron The geometric order of the elements The major and minor radii The number of elements in the azimuthal direction The number of nodes to offset by before rejoining the stack The initial angle of the cross sectional shape The number of uniform refinement steps to apply Along with producing some visually interesting meshes, this miniapp demonstrates how simple 3D meshes can be constructed and transformed in MFEM. It also produces a family of meshes with simple but non-trivial topology for testing various features in MFEM. This miniapp has only a serial ( toroid.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Twist Miniapp This miniapp generates simple periodic meshes to demonstrate MFEM's handling of periodic domains. MFEM's strategy is to use a discontinuous vector field to define the mesh coordinates on a topologically periodic mesh. It works by defining a stack of individual elements and stitching together the top and bottom of the mesh. The stack can also be twisted so that the vertices of the bottom and top can be joined with any integer offset (for tetrahedral and wedge meshes only even offsets are supported). The Twist miniapp supports various options including: The element type: 4 - Tetrahedron, 6 - Wedge, 8 - Hexahedron The geometric order of the elements The dimensions of the initial brick-shaped stack of elements The number of elements in the z direction The number of nodes to offset by before rejoining the stack The number of uniform refinement steps to apply Along with producing some visually interesting meshes, this miniapp demonstrates how simple 3D meshes can be constructed and transformed in MFEM. It also produces a family of meshes with simple but non-trivial topology for testing various features in MFEM. This miniapp has only a serial ( twist.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Extruder Miniapp This miniapp creates higher dimensional meshes from lower dimensional meshes by extrusion. Simple coordinate transformations can also be applied if desired. The initial mesh can be 1D or 2D 1D meshes can be extruded in both the y and z directions 2D meshes can be triangular, quadrilateral, or contain both element types Meshes with high order geometry are supported User can specify the number of elements and the distance to extrude Geometric order of the transformed mesh can be user selected or automatic This miniapp provides another demonstration of how simple meshes can be constructed and transformed in MFEM. This miniapp has only a serial ( extruder.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Trimmer Miniapp This miniapp creates a new mesh file from an existing mesh by trimming away elements with selected attributes. Newly exposed boundary elements will be assigned new or user specified boundary attributes. The initial mesh can be 2D or 3D Meshes with high order geometry are supported Periodic meshes are supported NURBS meshes are not supported This miniapp provides another demonstration of how simple meshes can be constructed in MFEM. This miniapp has only a serial ( trimmer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. 
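The meshing miniapps above (Toroid, Twist, Extruder, Trimmer) all follow the same basic recipe: build a simple Cartesian mesh, move its nodes with a coordinate transformation, and save the result. A minimal sketch of that recipe, using a made-up twisting map rather than any of the miniapps' actual transformations, is shown below.

```cpp
#include <cmath>
#include "mfem.hpp"
using namespace mfem;

// Illustrative transformation: twist a unit column about the z-axis.
// This is not the map used by the twist miniapp, just an example of the pattern.
void twist_map(const Vector &x, Vector &p)
{
   const double angle = 2.0 * M_PI * x(2);      // one full turn over the height
   p.SetSize(3);
   p(0) = cos(angle) * (x(0) - 0.5) - sin(angle) * (x(1) - 0.5) + 0.5;
   p(1) = sin(angle) * (x(0) - 0.5) + cos(angle) * (x(1) - 0.5) + 0.5;
   p(2) = x(2);
}

int main()
{
   Mesh mesh = Mesh::MakeCartesian3D(4, 4, 24, Element::HEXAHEDRON, 1.0, 1.0, 1.0);
   mesh.SetCurvature(3);        // high-order (curved) nodes, as in the meshing miniapps
   mesh.Transform(twist_map);   // move the mesh nodes
   mesh.Save("twisted.mesh");
   return 0;
}
```

The real miniapps additionally join the top and bottom of the element stack and expose the various command-line options listed above.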
Polar-NC Miniapp This miniapp generates a circular sector mesh that consists of quadrilaterals and triangles of similar sizes. The 3D version of the mesh is made of prisms and tetrahedra. The mesh is non-conforming by design, and can optionally be made curvilinear. The elements are ordered along a space-filling curve by default, which makes the mesh ready for parallel non-conforming AMR in MFEM. The implementation also demonstrates how to initialize a non-conforming mesh on the fly by marking hanging nodes with Mesh::AddVertexParents . For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( polar-nc.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Shaper Miniapp This miniapp performs multiple levels of adaptive mesh refinement to resolve the interfaces between different \"materials\" in the mesh, as specified by a given material function. It can be used as a simple initial mesh generator, for example in the case when the interface is too complex to describe without local refinement. Both conforming and non-conforming refinements are supported. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( shaper.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Mesh Explorer Miniapp This miniapp is a handy tool to examine, visualize and manipulate a given mesh. Some of its features are: visualization of mesh materials and individual mesh elements mesh scaling, randomization, and general transformation manipulation of the mesh curvature the ability to simulate parallel partitioning quantitative and visual reports of mesh quality For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( mesh-explorer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Mesh Optimizer Miniapp This miniapp performs mesh optimization using the Target-Matrix Optimization Paradigm (TMOP) by P. Knupp , and a global variational minimization approach ( Dobrev et al. ). It minimizes the quantity $$\\sum_T \\int_T \\mu(J(x)),$$ where $T$ are the target (ideal) elements, $J$ is the Jacobian of the transformation from the target to the physical element, and $\\mu$ is the mesh quality metric. This metric can measure shape, size or alignment of the region around each quadrature point. The combination of targets and quality metrics is used to optimize the physical node positions, i.e., they must be as close as possible to the shape / size / alignment of their targets. This code also demonstrates a possible use of nonlinear operators, as well as their coupling to Newton methods for solving minimization problems. Note that the utilized Newton methods are oriented towards avoiding invalid meshes with negative Jacobian determinants. Each Newton step requires the inversion of a Jacobian matrix, which is done through an inner linear solver. The miniapp has a serial ( mesh-optimizer.cpp ) and a parallel ( pmesh-optimizer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Mesh Fitting Miniapp This miniapp builds upon the mesh optimizer miniapp to enable mesh alignment with the zero isosurface of a discrete level-set. The approach is based on Dobrev et al. and Mittal et al.
, where we minimize the quantity $$\\sum_T \\int_T \\mu(J(x)) + \\sum_{s \\in S} w \\,\\, \\sigma^2(x_s).$$ Here, the first term controls mesh quality and the second term enforces weak alignment of a selected subset of mesh-nodes ($s \\in S$) with the zero isosurface of the discrete level-set function ($\\sigma$). Click on the image on the right to see a demonstration of this method for generating body-fitted meshes for topology optimization in LiDO to maximize beam stiffness under a downward force on the right wall. The miniapp has a parallel ( pmesh-fitting.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Minimal Surface Miniapp This miniapp solves Plateau's problem: the Dirichlet problem for the minimal surface equation. Options to solve the minimal surface equations of both parametric surfaces as well as surfaces restricted to be graphs of the form $z=f(x,y)$ are supported, including a number of examples such as the Catenoid, Helicoid, Costa and Scherk surfaces. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has a serial ( minimal-surface.cpp ) and a parallel ( pminimal-surface.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Low-Order Refined Transfer Miniapp The lor-transfer miniapp, found under miniapps/tools , demonstrates the capability to generate a low-order refined mesh from a high-order mesh, and to transfer solutions between these meshes. Grid functions can be transferred between the coarse, high-order mesh and the low-order refined mesh using either $L^2$ projection or pointwise evaluation. These transfer operators can be designed to discretely conserve mass and to recover the original high-order solution when transferring a low-order grid function that was obtained by restricting a high-order grid function to the low-order refined space. The miniapp has only a serial ( lor-transfer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Interpolation Miniapps The interpolation miniapps, found under miniapps/gslib , demonstrate the capability to interpolate high-order finite element functions at a given set of points in physical space. These miniapps utilize the gslib library's high-order interpolation utility for quad and hex meshes: Find Points miniapp has a serial ( findpts.cpp ) and a parallel ( pfindpts.cpp ) version that demonstrate the basic procedures for point search and evaluation of grid functions. Field Interp miniapp ( field-interp.cpp ) demonstrates how grid functions can be transferred between meshes. Field Diff miniapp ( field-diff.cpp ) demonstrates how grid functions on two different meshes can be compared with each other. These miniapps require installation of the gslib library. We recommend that new users start with the example codes before moving to the miniapps. Extrapolation Miniapp The extrapolate miniapp, found in the miniapps/shifted directory, extrapolates a finite element function from a set of elements (known values) to the rest of the domain. The set of elements that contains the known values is specified by the positive values of a level set Coefficient. The known values are not modified. The miniapp supports two PDE-based approaches ( Aslam , Bochkov & Gibou ), both of which rely on solving a sequence of advection problems in the direction of the unknown parts of the domain.
The extrapolation can be constant (1st order), linear (2nd order), or quadratic (3rd order). These formal orders hold for a limited band around the zero level set, see the above references for further information. The miniapp has only a parallel ( extrapolate.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Distance Solver Miniapp The distance miniapp, found in the miniapps/shifted directory demonstrates the capability to compute the \"distance\" to a given point source or to the zero level set of a given function. Here \"distance\" refers to the length of the shortest path through the mesh. The input can be a DeltaCoefficient (representing a point source), or any Coefficient (for the case of a level set). The output is a ParGridFunction that can be scalar (representing the scalar distance), or a vector (its magnitude is the distance, and its direction is the starting direction of the shortest path). The miniapp has only a parallel ( distance.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Shifted Diffusion Miniapp The diffusion miniapp, found in the miniapps/shifted directory, demonstrates the capability to formulate a boundary value problem using a surrogate computational domain. The method uses a distance function to the true boundary to enforce Dirichlet boundary conditions on the (non-aligned) mesh faces, therefore \"shifting\" the location where boundary conditions are imposed. The implementation in the miniapp is a high-order extension of the second-generation shifted boundary method . The miniapp has only a parallel ( diffusion.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Laghos Miniapp Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. The computational motives captured in Laghos include: Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements (triangular and tetrahedral elements can also be used, but with the less efficient full assembly option). Serial and parallel mesh refinement options can be set via a command-line flag. Explicit time-stepping loop with a variety of time integrator options. Laghos supports Runge-Kutta ODE solvers of orders 1, 2, 3, 4 and 6. Continuous and discontinuous high-order finite element discretization spaces of runtime-specified order. Moving (high-order) meshes. Separation between the assembly and the quadrature point-based computations. Point-wise definition of mesh size, time-step estimate and artificial viscosity coefficient. Constant-in-time velocity mass operator that is inverted iteratively on each time step. This is an example of an operator that is prepared once (fully or partially assembled), but is applied many times. The application cost is dominant for this operator. Time-dependent force matrix that is prepared every time step (fully or partially assembled) and is applied just twice per \"assembly\". Both the preparation and the application costs are important for this operator. Domain-decomposed MPI parallelism. Optional in-situ visualization with GLVis and data output for visualization / data analysis with VisIt . 
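Both Laghos and Remhos are built around the explicit time-stepping loop mentioned in the feature list above: a TimeDependentOperator supplies the right-hand side and an ODESolver advances the state. The miniapps' operators are far more involved, but the driving pattern can be sketched with a toy scalar ODE as follows.

```cpp
#include "mfem.hpp"
using namespace mfem;

// Toy right-hand side du/dt = -rate * u, standing in for the much more involved
// hydrodynamics/advection operators used by Laghos and Remhos.
class DecayOperator : public TimeDependentOperator
{
   double rate;
public:
   DecayOperator(int n, double rate_) : TimeDependentOperator(n), rate(rate_) { }
   virtual void Mult(const Vector &u, Vector &dudt) const
   {
      dudt = u;
      dudt *= -rate;
   }
};

int main()
{
   DecayOperator oper(1, 2.0);
   RK4Solver ode_solver;              // other ODESolvers (e.g. RK2Solver, RK6Solver) can be swapped in
   ode_solver.Init(oper);

   Vector u(1);
   u = 1.0;
   double t = 0.0, dt = 0.01, t_final = 1.0;
   while (t < t_final - 1e-12)
   {
      ode_solver.Step(u, t, dt);      // advances u and t by (up to) dt
   }
   return 0;
}
```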
The Laghos miniapp is part of the CEED software suite , a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods. See https://github.com/ceed for more information and source code availability. This is an external miniapp, available at https://github.com/CEED/Laghos . Remhos Miniapp Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. The computational motives captured in Remhos include: Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements. Serial and parallel mesh refinement options can be set via a command-line flag. Explicit time-stepping loop with a variety of time integrator options. Remhos supports Runge-Kutta ODE solvers of orders 1, 2, 3, 4 and 6. Discontinuous high-order finite element discretization spaces of runtime-specified order. Moving (high-order) meshes. Mass operator that is local to each zone. It is inverted by iterative or exact methods at each time step. This operator is constant in time (transport mode) or changing in time (remap mode). Options for full or partial assembly. Advection operator that couples neighboring zones. It is applied once at each time step. This operator is constant in time (transport mode) or changing in time (remap mode). Options for full or partial assembly. Domain-decomposed MPI parallelism. Optional in-situ visualization with GLVis and data output for visualization and data analysis with VisIt . The Remhos miniapp is part of the CEED software suite , a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods. See https://github.com/ceed for more information and source code availability. This is an external miniapp, available at https://github.com/CEED/Remhos . Navier Miniapp Navier is a miniapp that solves the time-dependent Navier-Stokes equations of incompressible fluid dynamics \\begin{align} \\frac{\\partial u}{\\partial t} + (u \\cdot \\nabla) u - \\frac{1}{Re} \\nabla^2 u + \\nabla p &= f \\\\ \\nabla \\cdot u &= 0 \\end{align} using a spatially high-order finite element discretization. The time-dependent problem is solved using an (up to) third-order implicit-explicit method, which leverages an extrapolation scheme for the convective parts and a backward-difference formulation for the viscous parts of the equation. The miniapp supports: Arbitrary order H1 elements High order mesh elements IMEX (EXTk-BDFk) time-stepping up to third order Convenient interface for new users A variety of test cases and benchmarks This miniapp has only a parallel ( navier_solver.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.
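For orientation, the Navier miniapp is typically driven from a small main program that owns the solver object. The sketch below mirrors how the miniapp's own test drivers use it, with an illustrative resolution, order, and viscosity and no boundary conditions; the class and method names are assumed to follow the miniapp's solver header (navier_solver.hpp) and may differ between MFEM versions.

```cpp
// Requires an MPI/HYPRE build of MFEM; the solver class lives in miniapps/navier.
#include "navier_solver.hpp"
using namespace mfem;
using namespace mfem::navier;

int main(int argc, char *argv[])
{
   Mpi::Init(argc, argv);
   Hypre::Init();

   // Illustrative values; the miniapp's test cases set these per benchmark.
   const int order = 4;
   const double kin_vis = 1.0 / 100.0, dt = 1e-3, t_final = 1.0;

   Mesh serial_mesh = Mesh::MakeCartesian2D(16, 16, Element::QUADRILATERAL);
   ParMesh mesh(MPI_COMM_WORLD, serial_mesh);
   serial_mesh.Clear();

   NavierSolver flowsolver(&mesh, order, kin_vis);
   flowsolver.Setup(dt);

   double t = 0.0;
   for (int step = 0; t < t_final - 1e-12; step++)
   {
      flowsolver.Step(t, dt, step);   // one IMEX (EXTk-BDFk) step
   }

   ParGridFunction *u = flowsolver.GetCurrentVelocity();
   ParGridFunction *p = flowsolver.GetCurrentPressure();
   (void)u; (void)p;                  // visualize or post-process as needed
   return 0;
}
```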
Block Solvers Miniapp The Block Solvers miniapp, found under miniapps/solvers , compares various linear solvers for the saddle point system obtained from mixed finite element discretization of Darcy's flow problem \\begin{array}{rcl} k{\\bf u} & + \\nabla p & = f \\\\ -\\nabla \\cdot {\\bf u} & & = g \\end{array} The solvers being compared include: The divergence-free solver (coupled and decoupled modes), which is based on a multilevel decomposition of the Raviart-Thomas finite element space and its divergence-free subspace. MINRES preconditioned by the block diagonal preconditioner in ex5p.cpp . For more details, please see the documentation in the miniapps/solvers directory. The miniapp supports: Arbitrary order mixed finite element pair (Raviart-Thomas elements + piecewise discontinuous polynomials) Various combinations of essential and natural boundary conditions Homogeneous or heterogeneous scalar coefficient k This miniapp has only a parallel ( block-solvers.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Overlapping Grids Miniapps Overlapping grids-based frameworks can often make problems tractable that are otherwise inaccessible with a single conforming grid. The following gslib -based miniapps in MFEM demonstrate how to set up and use overlapping grids: The Schwarz Example 1 miniapp in miniapps/gslib has a serial ( schwarz_ex1.cpp ) and a parallel ( schwarz_ex1p.cpp ) version that solves the Poisson problem on overlapping grids. The serial version is restricted to use two overlapping grids, while the parallel version supports an arbitrary number of overlapping grids. The Navier Conjugate Heat Transfer miniapp in miniapps/navier ( navier_cht.cpp ) demonstrates how a conjugate heat transfer problem can be solved with the fluid dynamics (incompressible Navier-Stokes equations) and heat transfer (advection-diffusion equation) PDEs modeled on different meshes. These miniapps require installation of the gslib library. We recommend that new users start with the example codes before moving to the miniapps. ParELAG AMGe for H(curl) and H(div) Miniapp This is a miniapp that exhibits the ParELAG library and part of its capabilities. The miniapp employs MFEM and ParELAG to solve $H(\\mathrm{curl})$- and $H(\\mathrm{div})$-elliptic forms by an element based algebraic multigrid (AMGe). ParELAG is a library mostly developed at the Center for Applied Scientific Computing of Lawrence Livermore National Laboratory, California, USA. The miniapp uses: A multilevel hierarchy of de Rham complexes of finite element spaces, built by ParELAG; Hiptmair-type (hybrid) smoothers, implemented in ParELAG; AMS (Auxiliary-space Maxwell Solver) or ADS (Auxiliary-space Divergence Solver), from HYPRE, for preconditioning or solving on the coarsest levels. Alternatively, it is possible to precondition or solve the $H(\\mathrm{div})$ form on the coarsest level via a hybridization approach. However, this is not yet implemented in ParELAG for the coarse levels. Only the hybridization solver that is directly applicable to an $H(\\mathrm{div})$-$L^2$ mixed (saddle-point) system is currently available in ParELAG. We recommend viewing ex3p.cpp and ex4p.cpp before viewing this miniapp. For more details, please see the documentation in the miniapps/parelag directory. This miniapp has only a parallel ( MultilevelHcurlHdivSolver.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.
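The saddle-point structure that the Block Solvers miniapp (and Example 5) works with can be expressed directly with MFEM's block classes. A serial sketch of setting up the Darcy operator with Raviart-Thomas velocities and piecewise-discontinuous pressures, leaving out boundary conditions and right-hand-side data for brevity, might look as follows; the mesh and coefficient are illustrative.

```cpp
#include "mfem.hpp"
using namespace mfem;

int main()
{
   const int order = 1;
   Mesh mesh = Mesh::MakeCartesian2D(8, 8, Element::QUADRILATERAL);
   const int dim = mesh.Dimension();

   // Raviart-Thomas space for the velocity u and L2 space for the pressure p.
   RT_FECollection hdiv_coll(order, dim);
   L2_FECollection l2_coll(order, dim);
   FiniteElementSpace R_space(&mesh, &hdiv_coll);
   FiniteElementSpace W_space(&mesh, &l2_coll);

   Array<int> block_offsets(3);   // offsets of the [u, p] block vector
   block_offsets[0] = 0;
   block_offsets[1] = R_space.GetVSize();
   block_offsets[2] = W_space.GetVSize();
   block_offsets.PartialSum();

   ConstantCoefficient k(1.0);

   // M = (k u, v),  B = -(div u, q); the saddle-point operator is [M B^T; B 0].
   BilinearForm mVarf(&R_space);
   mVarf.AddDomainIntegrator(new VectorFEMassIntegrator(k));
   mVarf.Assemble();
   mVarf.Finalize();

   MixedBilinearForm bVarf(&R_space, &W_space);
   bVarf.AddDomainIntegrator(new VectorFEDivergenceIntegrator);
   bVarf.Assemble();
   bVarf.Finalize();

   SparseMatrix &M = mVarf.SpMat();
   SparseMatrix &B = bVarf.SpMat();
   B *= -1.0;
   TransposeOperator Bt(&B);

   BlockOperator darcyOp(block_offsets);
   darcyOp.SetBlock(0, 0, &M);
   darcyOp.SetBlock(0, 1, &Bt);
   darcyOp.SetBlock(1, 0, &B);

   // The miniapp compares solvers for this operator; the simplest option, as in
   // ex5.cpp, is MINRES with a block-diagonal preconditioner built from M and an
   // approximation of the Schur complement.
   return 0;
}
```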
Generating Gaussian Random Fields via the SPDE Method This miniapp generates Gaussian random fields on meshed domains $\\Omega \\subset \\mathbb{R}^n$ via the SPDE method. The method exploits a stochastic, fractional PDE whose full-space solutions yield Gaussian random fields with a Mat\u00e9rn covariance. The method was introduced and popularized by Lindgren et al. in 2010. In this miniapp, we use a slightly modified representation following Khristenko et al. More specifically, we solve the equation \\begin{equation} \\left( -\\frac{1}{2\\nu} \\nabla \\cdot \\left( \\Theta \\nabla \\right) + \\mathbf{1} \\right)^{\\frac{2\\nu+n}{4}} u(x,w) = \\eta W(x,w) \\ \\ \\ \\text{in} \\ \\ \\Omega, \\end{equation} with various boundary conditions. Solving this equation on $\\Omega = \\mathbb{R}^n$ delivers a homogeneous Gaussian random field with zero mean and Mat\u00e9rn covariance, \\begin{align}\\label{eq:MaternCovariance} C(x,y) &= \\sigma^2M_\\nu \\left(\\sqrt{2\\nu}\\, \\| x-y \\|_{\\Theta} \\right) , \\end{align} where $M_\\nu(z) = \\frac{2^{1-\\nu}}{\\Gamma(\\nu)} z ^{\\nu} K_\\nu \\left( z \\right)$ and $\\| x-y \\|_{\\Theta}^2 = (x-y)^\\top\\Theta (x-y)$. The Mat\u00e9rn model provides the regularity parameter $\\nu > 0$ and the anisotropic diffusion tensor $\\Theta \\in \\mathbb{R}^{n\\times n}$, which determines the spatial structure (correlation lengths). However, applying boundary conditions to the SPDE above provides the ability to model a significantly larger class of inhomogeneous random fields on complex domains. For further details, see the miniapp README . We recommend viewing ex33p.cpp before viewing this miniapp. This miniapp ( generate_random_field.cpp ) has only a parallel implementation. It further requires MFEM to be built with LAPACK; otherwise, you may only use predefined values for $\\nu$. We recommend that new users start with the example codes before moving to the miniapps. Multidomain and SubMesh demonstration Miniapp This miniapp aims to demonstrate how to solve two PDEs, representing different physics, on the same domain. MFEM's SubMesh interface is used to compute on and transfer between the spaces of predefined parts of the domain. For the sake of simplicity, the spaces on each domain use the same order H1 finite elements. This does not mean that the approach is limited to this configuration. A 3D domain comprising an outer box with a cylinder-shaped region inside is used. A heat equation is described on the outer box domain \\begin{align} \\frac{\\partial T}{\\partial t} &= \\kappa \\Delta T &&\\mbox{in outer box}\\\\ T &= T_{wall} &&\\mbox{on outside wall}\\\\ \\nabla T \\cdot \\hat{n} &= 0 &&\\mbox{on inside (cylinder) wall} \\end{align} with temperature $T$ and coefficient $\\kappa$ (non-physical in this example). A convection-diffusion equation is described inside the cylinder domain \\begin{align} \\frac{\\partial T}{\\partial t} &= \\kappa \\Delta T - \\alpha \\nabla \\cdot (\\vec{b} T) & &\\mbox{in inner cylinder}\\\\ T &= T_{wall} & &\\mbox{on cylinder wall}\\\\ \\nabla T \\cdot \\hat{n} &= 0 & &\\mbox{else} \\end{align} with temperature $T$, coefficients $\\kappa$ and $\\alpha$, a prescribed velocity profile $\\vec{b}$, and $T_{wall}$ obtained from the heat equation. To couple the solutions of both equations, a segregated solve with a one-way coupling approach is used. The heat equation of the outer box is solved from the timestep $T_{box}(t)$ to $T_{box}(t+dt)$.
Then, for the convection-diffusion equation, $T_{wall}$ is set to $T_{box}(t+dt)$ and the equation is solved for $T(t+dt)$, which results in a first-order, one-way coupling. This miniapp has only a parallel ( multidomain.cpp ) implementation. We recommend that new users start with the example codes before moving to the miniapps. DPG miniapp This miniapp demonstrates how to discretize and solve various PDEs using the Discontinuous Petrov-Galerkin (DPG) method. It utilizes a new user-friendly interface to assemble the block DPG systems arising from the discretization of any DPG formulation (such as Ultraweak or Primal). In addition, the miniapp supports complex-valued systems, static condensation for block systems, and AMR using the built-in DPG residual-based error indicator. This capability is showcased in the following DPG examples in miniapps/dpg . Ultraweak DPG formulation for diffusion . This example solves the simple Poisson equation $$-\\Delta u = f$$ and computes rates of convergence under successive uniform h-refinements for a smooth manufactured solution. The parallel version also includes an AMR implementation for the L-shape benchmark problem. This example has a serial ( diffusion.cpp ) and a parallel ( pdiffusion.cpp ) version. Ultraweak DPG formulation for convection-diffusion . This example solves the convection-diffusion problem: \\begin{align} -\\epsilon \\Delta u + \\nabla \\cdot (\\beta u) &= f\\\\ \\end{align} using AMR. The example demonstrates the use of mesh-dependent test norms which are suitable for problems with solutions that exhibit large gradients present in internal or boundary layers . The example has a serial ( convection-diffusion.cpp ) and a parallel ( pconvection-diffusion.cpp ) version. Ultraweak DPG formulation for time-harmonic linear acoustics . This example solves the indefinite Helmholtz equation \\begin{align} -\\Delta u - \\omega^2 u &= f\\\\ \\end{align} The example includes formulations with manufactured plane-wave solutions as well as high-frequency scattering problems and the use of Perfectly Matched Layers (PML). It also demonstrates how to set up complex-valued systems and preconditioners for their solution. The example has a serial ( acoustics.cpp ) and parallel ( pacoustics.cpp ) version. Ultraweak DPG formulation for time-harmonic Maxwell . This example solves the indefinite Maxwell problem \\begin{align} \\nabla \\times (\\mu^{-1} \\nabla \\times E) - \\omega^2 \\epsilon E &= J\\\\ \\end{align} The example includes formulations with smooth manufactured solutions, AMR formulations for high-frequency scattering problems as well as a problem with a singular solution. The example has a serial ( maxwell.cpp ) and a parallel ( pmaxwell.cpp ) version. Tribol miniapp This miniapp demonstrates how to use Tribol's mortar method to solve a contact patch test. A contact patch test places two aligned, linear elastic cubes in contact, then verifies that the exact elasticity solution for this problem is recovered. The exact solution requires transmission of a uniform pressure field across a (not necessarily conforming) interface (i.e. the contact surface). Mortar methods (including the one implemented in Tribol) are generally able to pass the contact patch test. The test assumes small deformations and no accelerations, so the relationship between forces/contact pressures and deformations/contact gaps is linear and, therefore, the problem can be solved exactly with a single linear solve. The mortar implementation is based on Puso and Laursen (2004) .
A description of the Tribol implementation is available in Serac documentation . Lagrange multipliers are used to solve for the pressure required to prevent violation of the contact constraints. This miniapp has only a parallel ( contact-patch-test.cpp ) implementation. For more details, please see the documentation in miniapps/tribol/README.md . We recommend that new users start with the example codes before moving to the miniapps. No examples or miniapps match your criteria. ", "title": "Example Codes"}, {"location": "examples/#example-codes-and-miniapps", "text": "This page provides a brief overview of MFEM's example codes and miniapps. For detailed documentation of the MFEM sources, including the examples, see the online Doxygen documentation , or the doc directory in the distribution. The goal of the example codes is to provide a step-by-step introduction to MFEM in simple model settings. The miniapps are more complex, and are intended to be more representative of the advanced usage of the library in physics/application codes. We recommend that new users start with the example codes before moving to the miniapps. Select from the categories below to display examples and miniapps that contain the respective feature. All examples support (arbitrarily) high-order meshes and finite element spaces . The numerical results from the example codes can be visualized using the GLVis visualization tool (based on MFEM). See the GLVis website for more details. Users are encouraged to submit any example codes and miniapps that they have created and would like to share. Contact a member of the MFEM team to report bugs or post questions or comments .", "title": "Example Codes and Miniapps"}, {"location": "examples/#example-0-simplest-laplace-problem", "text": "This is the simplest MFEM example and a good starting point for new users. The example demonstrates the use of MFEM to define and solve an $H^1$ finite element discretization of the Laplace problem $$-\\Delta u = 1 \\quad\\text{in } \\Omega$$ with homogeneous Dirichlet boundary conditions $$ u = 0 \\quad\\text{on } \\partial\\Omega$$ The example illustrates the use of the basic MFEM classes for defining the mesh, finite element space, as well as linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. The example has serial ( ex0.cpp ) and parallel ( ex0p.cpp ) versions.", "title": "Example 0: Simplest Laplace Problem"}, {"location": "examples/#example-1-laplace-problem", "text": "This example code demonstrates the use of MFEM to define a simple isoparametric finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Specifically, we discretize with the finite element space coming from the mesh (linear by default, quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The problem solved in this example is the same as ex0 , but with more sophisticated options and features. The example highlights the use of mesh refinement, finite element grid functions, as well as linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. We also cover the explicit elimination of essential boundary conditions, static condensation, and the optional connection to the GLVis tool for visualization. The example has a serial ( ex1.cpp ), a parallel ( ex1p.cpp ), and HPC versions: performance/ex1.cpp , performance/ex1p.cpp . 
It also has a PETSc modification in examples/petsc , a PUMI modification in examples/pumi and a Ginkgo modification in examples/ginkgo . Partial assembly and GPU devices are supported.", "title": "Example 1: Laplace Problem"}, {"location": "examples/#example-2-linear-elasticity", "text": "This example code solves a simple linear elasticity problem describing a multi-material cantilever beam. Specifically, we approximate the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder with $f$ being a constant pull down vector on boundary elements with attribute 2, and zero otherwise. The geometry of the domain is assumed to be as follows: The example demonstrates the use of high-order and NURBS vector finite element spaces with the linear elasticity bilinear form, meshes with curved elements, and the definition of piece-wise constant and vector coefficient objects. Static condensation is also illustrated. The example has a serial ( ex2.cpp ) and a parallel ( ex2p.cpp ) version. It also has a PETSc modification in examples/petsc and a PUMI modification in examples/pumi . We recommend viewing Example 1 before viewing this example.", "title": "Example 2: Linear Elasticity"}, {"location": "examples/#example-3-definite-maxwell-problem", "text": "This example code solves a simple 3D electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 2D or 3D. The example demonstrates the use of $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. Static condensation is also illustrated. The example has a serial ( ex3.cpp ) and a parallel ( ex3p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly and GPU devices are supported. We recommend viewing examples 1-2 before viewing this example.", "title": "Example 3: Definite Maxwell Problem"}, {"location": "examples/#example-4-grad-div-problem", "text": "This example code solves a simple 2D/3D $H(div)$ diffusion problem corresponding to the second order definite equation $$-{\\rm grad}(\\alpha\\,{\\rm div}(F)) + \\beta F = f$$ with boundary condition $F \\cdot n$ = \"given normal field\". Here we use a given exact solution $F$ and compute the corresponding right hand side $f$. We discretize with the Raviart-Thomas finite elements. The example demonstrates the use of $H(div)$ finite element spaces with the grad-div and $H(div)$ vector finite element mass bilinear form, as well as the computation of discretization error when the exact solution is known. Bilinear form hybridization and static condensation are also illustrated. The example has a serial ( ex4.cpp ) and a parallel ( ex4p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly and GPU devices are supported. 
We recommend viewing examples 1-3 before viewing this example.", "title": "Example 4: Grad-div Problem"}, {"location": "examples/#example-5-darcy-problem", "text": "This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system $$ \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} $$ with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize with Raviart-Thomas finite elements (velocity $\\bf u$) and piecewise discontinuous polynomials (pressure $p$). The example demonstrates the use of the BlockMatrix and BlockOperator classes, as well as the collective saving of several grid functions in VisIt and ParaView formats. The example has a serial ( ex5.cpp ) and a parallel ( ex5p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly is supported. We recommend viewing examples 1-4 before viewing this example.", "title": "Example 5: Darcy Problem"}, {"location": "examples/#example-6-laplace-problem-with-amr", "text": "This is a version of Example 1 with a simple adaptive mesh refinement loop. The problem being solved is again the Laplace equation $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. The problem is solved on a sequence of meshes which are locally refined in a conforming (triangles, tetrahedrons) or non-conforming (quadrilaterals, hexahedra) manner according to a simple ZZ error estimator. The example demonstrates MFEM's capability to work with both conforming and nonconforming refinements, in 2D and 3D, on linear, curved and surface meshes. Interpolation of functions from coarse to fine meshes, as well as persistent GLVis visualization are also illustrated. The example has a serial ( ex6.cpp ) and a parallel ( ex6p.cpp ) version. It also has a PETSc modification in examples/petsc and a PUMI modification in examples/pumi . Partial assembly and GPU devices are supported. We recommend viewing Example 1 before viewing this example.", "title": "Example 6: Laplace Problem with AMR"}, {"location": "examples/#example-7-surface-meshes", "text": "This example code demonstrates the use of MFEM to define a triangulation of a unit sphere and a simple isoparametric finite element discretization of the Laplace problem with mass term, $$-\\Delta u + u = f.$$ The example highlights mesh generation, the use of mesh refinement, high-order meshes and finite elements, as well as surface-based linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. Simple local mesh refinement is also demonstrated. The example has a serial ( ex7.cpp ) and a parallel ( ex7p.cpp ) version. We recommend viewing Example 1 before viewing this example.", "title": "Example 7: Surface Meshes"}, {"location": "examples/#example-8-dpg-for-the-laplace-problem", "text": "This example code demonstrates the use of the Discontinuous Petrov-Galerkin (DPG) method in its primal 2x2 block form as a simple finite element discretization of the Laplace problem $$-\\Delta u = f$$ with homogeneous Dirichlet boundary conditions. We use high-order continuous trial space, a high-order interfacial (trace) space, and a high-order discontinuous test space defining a local dual ($H^{-1}$) norm. We use the primal form of DPG, see \"A primal DPG method without a first-order reformulation\" , Demkowicz and Gopalakrishnan, CAM 2013. 
The example highlights the use of interfacial (trace) finite elements and spaces, trace face integrators and the definition of block operators and preconditioners. The example has a serial ( ex8.cpp ) and a parallel ( ex8p.cpp ) version. We recommend viewing examples 1-5 before viewing this example.", "title": "Example 8: DPG for the Laplace Problem"}, {"location": "examples/#example-9-dg-advection", "text": "This example code solves the time-dependent advection equation $$\\frac{\\partial u}{\\partial t} + v \\cdot \\nabla u = 0,$$ where $v$ is a given fluid velocity, and $u_0(x)=u(0,x)$ is a given initial condition. The example demonstrates the use of Discontinuous Galerkin (DG) bilinear forms in MFEM (face integrators), the use of explicit and implicit (with block ILU preconditioning) ODE time integrators, the definition of periodic boundary conditions through periodic meshes, as well as the use of GLVis for persistent visualization of a time-evolving solution. The saving of time-dependent data files for external visualization with VisIt and ParaView is also illustrated. The example has a serial ( ex9.cpp ) and a parallel ( ex9p.cpp ) version. It also has a SUNDIALS modification in examples/sundials , a PETSc modification in examples/petsc , and a HiOp modification in examples/hiop .", "title": "Example 9: DG Advection"}, {"location": "examples/#example-10-nonlinear-elasticity", "text": "This example solves a time dependent nonlinear elasticity problem of the form $$ \\frac{dv}{dt} = H(x) + S v\\,,\\qquad \\frac{dx}{dt} = v\\,, $$ where $H$ is a hyperelastic model and $S$ is a viscosity operator of Laplacian type. The geometry of the domain is assumed to be as follows: The example demonstrates the use of nonlinear operators, as well as their implicit time integration using a Newton method for solving an associated reduced backward-Euler type nonlinear equation. Each Newton step requires the inversion of a Jacobian matrix, which is done through a (preconditioned) inner solver. The example has a serial ( ex10.cpp ) and a parallel ( ex10p.cpp ) version. It also has a SUNDIALS modification in examples/sundials and a PETSc modification in examples/petsc . We recommend viewing examples 2 and 9 before viewing this example.", "title": "Example 10: Nonlinear Elasticity"}, {"location": "examples/#example-11-laplace-eigenproblem", "text": "This example code demonstrates the use of MFEM to solve the eigenvalue problem $$-\\Delta u = \\lambda u$$ with homogeneous Dirichlet boundary conditions. We compute a number of the lowest eigenmodes by discretizing the Laplacian and Mass operators using a finite element space of the specified order, or an isoparametric/isogeometric space if order < 1 (quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE, as well as optionally the SuperLU or STRUMPACK parallel direct solvers. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex11p.cpp ) version. It also has a SLEPc modification in examples/petsc . We recommend viewing Example 1 before viewing this example.", "title": "Example 11: Laplace Eigenproblem"}, {"location": "examples/#example-12-linear-elasticity-eigenproblem", "text": "This example code solves the linear elasticity eigenvalue problem for a multi-material cantilever beam. 
Specifically, we compute a number of the lowest eigenmodes by approximating the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = \\lambda {\\bf u} \\,,$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field $\\bf u$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder. The geometry of the domain is assumed to be as follows: The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex12p.cpp ) version. We recommend viewing examples 2 and 11 before viewing this example.", "title": "Example 12: Linear Elasticity Eigenproblem"}, {"location": "examples/#example-13-maxwell-eigenproblem", "text": "This example code solves the Maxwell (electromagnetic) eigenvalue problem $$\\nabla\\times\\nabla\\times\\, E = \\lambda\\, E $$ with homogeneous Dirichlet boundary conditions $E \\times n = 0$. We compute a number of the lowest nonzero eigenmodes by discretizing the curl curl operator using a Nedelec finite element space of the specified order in 2D or 3D. The example highlights the use of the AME subspace eigenvalue solver from HYPRE, which uses LOBPCG and AMS internally. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex13p.cpp ) version. We recommend viewing examples 3 and 11 before viewing this example.", "title": "Example 13: Maxwell Eigenproblem"}, {"location": "examples/#example-14-dg-diffusion", "text": "This example code demonstrates the use of MFEM to define a discontinuous Galerkin (DG) finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Finite element spaces of any order, including zero on regular grids, are supported. The example highlights the use of discontinuous spaces and DG-specific face integrators. The example has a serial ( ex14.cpp ) and a parallel ( ex14p.cpp ) version. We recommend viewing examples 1 and 9 before viewing this example.", "title": "Example 14: DG Diffusion"}, {"location": "examples/#example-15-dynamic-amr", "text": "Building on Example 6 , this example demonstrates dynamic adaptive mesh refinement. The mesh is adapted to a time-dependent solution by refinement as well as by derefinement. For simplicity, the solution is prescribed and no time integration is done. However, the error estimation and refinement/derefinement decisions are realistic. At each outer iteration the right hand side function is changed to mimic a time dependent problem. Within each inner iteration the problem is solved on a sequence of meshes which are locally refined according to a simple ZZ error estimator. At the end of the inner iteration the error estimates are also used to identify any elements which may be over-refined and a single derefinement step is performed. After each refinement or derefinement step a rebalance operation is performed to keep the mesh evenly distributed among the available processors. The example demonstrates MFEM's capability to refine, derefine and load balance nonconforming meshes, in 2D and 3D, and on linear, curved and surface meshes. 
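The refine/derefine/rebalance cycle described above follows a common MFEM pattern; the sketch below is loosely modeled on this example, with class and method names as found in MFEM's mesh operators and threshold values chosen arbitrarily, so treat it as an outline rather than the ex15 source:

```cpp
// Sketch of the ingredients of one dynamic AMR cycle; thresholds are arbitrary
// illustrative values. In practice the space and solution are updated (as at
// the end here) after every individual mesh modification.
#include "mfem.hpp"
using namespace mfem;

void AdaptOnce(ParMesh &pmesh, ParGridFunction &x, ErrorEstimator &estimator)
{
   ThresholdRefiner refiner(estimator);
   refiner.SetTotalErrorFraction(0.7);   // refine elements carrying most error
   refiner.Apply(pmesh);                 // conforming or nonconforming refinement

   ThresholdDerefiner derefiner(estimator);
   derefiner.SetThreshold(0.1);          // coarsen where the error is small
   derefiner.Apply(pmesh);               // undo refinement that is no longer needed

   pmesh.Rebalance();                    // redistribute elements across MPI ranks

   x.FESpace()->Update();                // propagate the mesh change to the space
   x.Update();                           // and interpolate the solution onto it
}
```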
Interpolation of functions between coarse and fine meshes, persistent GLVis visualization, and saving of time-dependent fields for external visualization with VisIt are also illustrated. The example has a serial ( ex15.cpp ) and a parallel ( ex15p.cpp ) version. We recommend viewing examples 1, 6 and 9 before viewing this example.", "title": "Example 15: Dynamic AMR"}, {"location": "examples/#example-16-time-dependent-heat-conduction", "text": "This example code solves a simple 2D/3D time dependent nonlinear heat conduction problem $$\\frac{du}{dt} = \\nabla \\cdot \\left( \\kappa + \\alpha u \\right) \\nabla u$$ with a natural insulating boundary condition $\\frac{du}{dn} = 0$. We linearize the problem by using the temperature field $u$ from the previous time step to compute the conductivity coefficient. This example demonstrates both implicit and explicit time integration as well as a single Picard step method for linearization. The saving of time dependent data files for external visualization with VisIt is also illustrated. The example has a serial ( ex16.cpp ) and a parallel ( ex16p.cpp ) version. We recommend viewing examples 2, 9, and 10 before viewing this example.", "title": "Example 16: Time Dependent Heat Conduction"}, {"location": "examples/#example-17-dg-linear-elasticity", "text": "This example code solves a simple linear elasticity problem describing a multi-material cantilever beam using symmetric or non-symmetric discontinuous Galerkin (DG) formulation. Specifically, we approximate the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are Dirichlet, $\\bf{u}=\\bf{u_D}$, on the fixed part of the boundary, namely boundary attributes 1 and 2; on the rest of the boundary we use ${\\sigma}({\\bf u})\\cdot n = {\\bf 0}$. The geometry of the domain is assumed to be as follows: The example demonstrates the use of high-order DG vector finite element spaces with the linear DG elasticity bilinear form, meshes with curved elements, and the definition of piece-wise constant and function vector-coefficient objects. The use of non-homogeneous Dirichlet b.c. imposed weakly, is also illustrated. The example has a serial ( ex17.cpp ) and a parallel ( ex17p.cpp ) version. We recommend viewing examples 2 and 14 before viewing this example.", "title": "Example 17: DG Linear Elasticity"}, {"location": "examples/#example-18-dg-euler-equations", "text": "This example code solves the compressible Euler system of equations, a model nonlinear hyperbolic PDE, with a discontinuous Galerkin (DG) formulation. The primary purpose is to show how a transient system of nonlinear equations can be formulated in MFEM. The equations are solved in conservative form $$\\frac{\\partial u}{\\partial t} + \\nabla \\cdot {\\bf F}(u) = 0$$ with a state vector $u = [ \\rho, \\rho v_0, \\rho v_1, \\rho E ]$, where $\\rho$ is the density, $v_i$ is the velocity in the $i^{\\rm th}$ direction, $E$ is the total specific energy, and $H = E + p / \\rho$ is the total specific enthalpy. The pressure, $p$ is computed through a simple equation of state (EOS) call. 
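For reference, a simple ideal-gas equation of state closes the system algebraically; the helper below is an illustration (not taken from ex18.cpp) assuming a 2D state vector and a specific heat ratio of 1.4:

```cpp
// Illustrative ideal-gas EOS: p = (gamma - 1) * (rho*E - 0.5*rho*|v|^2).
// Not the ex18 code; gamma = 1.4 and the 2D state layout are assumptions.
double ComputePressure(double rho, double rho_vx, double rho_vy, double rho_E)
{
   const double gamma = 1.4;
   const double kinetic = 0.5 * (rho_vx * rho_vx + rho_vy * rho_vy) / rho;
   return (gamma - 1.0) * (rho_E - kinetic);
}
```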
The conservative hydrodynamic flux ${\\bf F}$ in each direction $i$ is $${\\bf F_{\\it i}} = [ \\rho v_i, \\rho v_0 v_i + p \\delta_{i,0}, \\rho v_1 v_i + p \\delta_{i,1}, \\rho v_i H ]$$ Specifically, the example solves for an exact solution of the equations whereby a vortex is transported by a uniform flow. Since all boundaries are periodic here, the method's accuracy can be assessed by measuring the difference between the solution and the initial condition at a later time when the vortex returns to its initial location. Note that as the order of the spatial discretization increases, the timestep must become smaller. This example currently uses a simple estimate derived by Cockburn and Shu for the 1D RKDG method. An additional factor can be tuned by passing the --cfl (or -c shorter) flag. The example demonstrates user-defined nonlinear form with hyperbolic form integrator for systems of equations that are defined with block vectors, and how these are used with an operator for explicit time integrators. In this case the system also involves an external approximate Riemann solver for the DG interface flux. It also demonstrates how to use GLVis for in-situ visualization of vector grid functions. The example has a serial ( ex18.cpp ) and a parallel ( ex18p.cpp ) version. We recommend viewing examples 9, 14 and 17 before viewing this example.", "title": "Example 18: DG Euler Equations"}, {"location": "examples/#example-19-incompressible-nonlinear-elasticity", "text": "This example code solves the quasi-static incompressible nonlinear hyperelasticity equations. Specifically, it solves the nonlinear equation $$ \\nabla \\cdot \\sigma(F) = 0 $$ subject to the constraint $$ \\text{det } F = 1 $$ where $\\sigma$ is the Cauchy stress and $F_{ij} = \\delta_{ij} + u_{i,j}$ is the deformation gradient. To handle the incompressibility constraint, pressure is included as an independent unknown $p$ and the stress response is modeled as an incompressible neo-Hookean hyperelastic solid . The geometry of the domain is assumed to be as follows: This formulation requires solving the saddle point system $$ \\left[ \\begin{array}{cc} K &B^T \\\\ B & 0 \\end{array} \\right] \\left[\\begin{array}{c} \\Delta u \\\\ \\Delta p \\end{array} \\right] = \\left[\\begin{array}{c} R_u \\\\ R_p \\end{array} \\right] $$ at each Newton step. To solve this linear system, we implement a specialized block preconditioner of the form $$ P^{-1} = \\left[\\begin{array}{cc} I & -\\tilde{K}^{-1}B^T \\\\ 0 & I \\end{array} \\right] \\left[\\begin{array}{cc} \\tilde{K}^{-1} & 0 \\\\ 0 & -\\gamma \\tilde{S}^{-1} \\end{array} \\right] $$ where $\\tilde{K}^{-1}$ is an approximation of the inverse of the stiffness matrix $K$ and $\\tilde{S}^{-1}$ is an approximation of the inverse of the Schur complement $S = BK^{-1}B^T$. To approximate the Schur complement, we use the mass matrix for the pressure variable $p$. The example demonstrates how to solve nonlinear systems of equations that are defined with block vectors as well as how to implement specialized block preconditioners for use in iterative solvers. The example has a serial ( ex19.cpp ) and a parallel ( ex19p.cpp ) version. We recommend viewing examples 2, 5 and 10 before viewing this example.", "title": "Example 19: Incompressible Nonlinear Elasticity"}, {"location": "examples/#example-20-symplectic-integration-of-hamiltonian-systems", "text": "This example demonstrates the use of the variable order, symplectic time integration algorithm. 
Symplectic integration algorithms are designed to conserve energy when integrating systems of ODEs which are derived from Hamiltonian systems. Hamiltonian systems define the energy of a system as a function of time (t), a set of generalized coordinates (q), and their corresponding generalized momenta (p). $$ H(q,p,t) = T(p) + V(q,t) $$ Hamilton's equations then specify how q and p evolve in time: $$ \\frac{dq}{dt} = \\frac{dH}{dp}\\,,\\qquad \\frac{dp}{dt} = -\\frac{dH}{dq} $$ To use the symplectic integration classes we need to define an mfem::Operator ${\\bf P}$ which evaluates the action of dH/dp, and an mfem::TimeDependentOperator ${\\bf F}$ which computes -dH/dq. This example visualizes its results as an evolution in phase space by defining the axes to be $q$, $p$, and $t$ rather than $x$, $y$, and $z$. In this space we build a ribbon-like mesh with nodes at $(0,0,t)$ and $(q,p,t)$. Finally we plot the energy as a function of time as a scalar field on this ribbon-like mesh. This scheme highlights any variations in the energy of the system. This example offers five simple 1D Hamiltonians: Simple Harmonic Oscillator (mass on a spring) $$H = \\frac{1}{2}\\left( \\frac{p^2}{m} + \\frac{q^2}{k} \\right)$$ Pendulum $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} - k \\left( 1 - cos(q) \\right) \\right]$$ Gaussian Potential Well $$H = \\frac{p^2}{2m} - k e^{-q^2 / 2}$$ Quartic Potential $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} + k \\left( 1 + q^2 \\right) q^2 \\right]$$ Negative Quartic Potential $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} + k \\left( 1 - \\frac{q^2}{8} \\right) q^2 \\right]$$ In all cases these Hamiltonians are shifted by constant values so that the energy will remain positive. The mean and standard deviation of the computed energies at each time step are displayed upon completion. When run in parallel, each processor integrates the same Hamiltonian system but starting from different initial conditions. The example has a serial ( ex20.cpp ) and a parallel ( ex20p.cpp ) version. See the Maxwell miniapp for another application of symplectic integration.", "title": "Example 20: Symplectic Integration of Hamiltonian Systems"}, {"location": "examples/#example-21-adaptive-mesh-refinement-for-linear-elasticity", "text": "This is a version of Example 2 with a simple adaptive mesh refinement loop. The problem being solved is again linear elasticity describing a multi-material cantilever beam. The problem is solved on a sequence of meshes which are locally refined in a conforming (triangles, tetrahedrons) or non-conforming (quadrilaterals, hexahedra) manner according to a simple ZZ error estimator. The example demonstrates MFEM's capability to work with both conforming and nonconforming refinements, in 2D and 3D, on linear and curved meshes. Interpolation of functions from coarse to fine meshes, as well as persistent GLVis visualization are also illustrated. The example has a serial ( ex21.cpp ) and a parallel ( ex21p.cpp ) version. We recommend viewing Examples 2 and 6 before viewing this example.", "title": "Example 21: Adaptive mesh refinement for linear elasticity"}, {"location": "examples/#example-22-complex-linear-systems", "text": "This example code demonstrates the use of MFEM to define and solve a complex-valued linear system. 
It implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ In each case the field is driven by a forced oscillation, with angular frequency $\\omega$, imposed at the boundary or a portion of the boundary. The example also demonstrates how to display a time-varying solution as a sequence of fields sent to a single GLVis socket. The example has a serial ( ex22.cpp ) and a parallel ( ex22p.cpp ) version. We recommend viewing examples 1, 3, and 4 before viewing this example.", "title": "Example 22: Complex Linear Systems"}, {"location": "examples/#example-23-wave-problem", "text": "This example code solves a simple 2D/3D wave equation with a second order time derivative: $$\\frac{\\partial^2 u}{\\partial t^2} - c^2\\Delta u = 0$$ The boundary conditions are either Dirichlet or Neumann. The example demonstrates the use of time dependent operators, implicit solvers and second order time integration. The example has only a serial ( ex23.cpp ) version. We recommend viewing examples 9 and 10 before viewing this example.", "title": "Example 23: Wave Problem"}, {"location": "examples/#example-24-mixed-finite-element-spaces", "text": "This example code illustrates usage of mixed finite element spaces, with three variants: $H^1 \\times H(curl)$ $H(curl) \\times H(div)$ $H(div) \\times L_2$ Using different approaches for demonstration purposes, we project or interpolate a gradient, curl, or divergence in the appropriate spaces, comparing the errors in each case. Partial assembly and GPU devices are supported. The example has a serial ( ex24.cpp ) and a parallel ( ex24p.cpp ) version. We recommend viewing examples 1 and 3 before viewing this example.", "title": "Example 24: Mixed finite element spaces"}, {"location": "examples/#example-25-perfectly-matched-layers", "text": "The example illustrates the use of a Perfectly Matched Layer (PML) for the simulation of time-harmonic electromagnetic waves propagating in unbounded domains. PML was originally introduced by Berenger in \"A Perfectly Matched Layer for the Absorption of Electromagnetic Waves\" . It is a technique used to solve wave propagation problems posed in infinite domains. The implementation involves the introduction of an artificial absorbing layer that minimizes undesired reflections. Inside this layer a complex coordinate stretching map is used which forces the wave modes to decay exponentially. The example solves the indefinite Maxwell equations $$\\nabla \\times (a \\nabla \\times E) - \\omega^2 b E = f.$$ where $a = \\mu^{-1} |J|^{-1} J^T J$, $b= \\epsilon |J| J^{-1} J^{-T}$ and $J$ is the Jacobian matrix of the coordinate transformation. The example demonstrates discretization with Nedelec finite elements in 2D or 3D, as well as the use of complex-valued bilinear and linear forms. Several test problems are included, with known exact solutions. The example has a serial ( ex25.cpp ) and a parallel ( ex25p.cpp ) version. 
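Complex-valued weak forms like the ones in this example and in Example 22 are expressed in MFEM by pairing real and imaginary integrators; the following minimal sketch (placeholder coefficient names, not the ex22/ex25 source) shows the general SesquilinearForm pattern for the damped scalar problem:

```cpp
// Sketch of a complex-valued bilinear form: real and imaginary integrators
// are added in pairs. Coefficient names are placeholders, not ex22/ex25 code.
#include "mfem.hpp"
using namespace mfem;

void BuildDampedHelmholtz(FiniteElementSpace &fes, Coefficient &a,
                          Coefficient &neg_omega2_b, Coefficient &omega_c)
{
   SesquilinearForm form(&fes, ComplexOperator::HERMITIAN);
   // Real part: -div(a grad u) - omega^2 b u ; imaginary part: omega c u.
   form.AddDomainIntegrator(new DiffusionIntegrator(a), NULL);
   form.AddDomainIntegrator(new MassIntegrator(neg_omega2_b),
                            new MassIntegrator(omega_c));
   form.Assemble();
   // A ComplexLinearForm and a FormLinearSystem call would follow, mirroring
   // the real-valued examples.
}
```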
We recommend viewing Example 22 before viewing this example.", "title": "Example 25: Perfectly Matched Layers"}, {"location": "examples/#example-26-multigrid-preconditioner", "text": "This example code demonstrates the use of MFEM to define a simple isoparametric finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions and how to solve it efficiently using a matrix-free multigrid preconditioner. The example highlights the creation of a hierarchy of discretization spaces and diffusion bilinear forms using partial assembly. The levels in the hierarchy of finite element spaces may be constructed through geometric or order refinements. Moreover, the construction of a multigrid preconditioner for the PCG solver is shown. The multigrid uses a PCG solver on the coarsest level and second order Chebyshev accelerated smoothers on the other levels. The example has a serial ( ex26.cpp ) and a parallel ( ex26p.cpp ) version. We recommend viewing Example 1 before viewing this example.", "title": "Example 26: Multigrid Preconditioner"}, {"location": "examples/#example-27-laplace-boundary-conditions", "text": "This example code demonstrates the use of MFEM to define a simple finite element discretization of the Laplace problem: $$ -\\Delta u = 0 $$ with a variety of boundary conditions. Specifically, we discretize using a continuous or discontinuous FE space of the specified order. We then apply Dirichlet, Neumann (both homogeneous and inhomogeneous), Robin, and Periodic boundary conditions on different portions of a predefined mesh. Boundary conditions: $u = u_{dbc}$ on $\\Gamma_{dbc}$ $\\hat{n}\\cdot\\nabla u = g_{nbc}$ on $\\Gamma_{nbc}$ $\\hat{n}\\cdot\\nabla u = 0$ on $\\Gamma_{nbc_0}$ $\\hat{n}\\cdot\\nabla u + a u = b$ on $\\Gamma_{rbc}$ as well as periodic boundary conditions which are enforced topologically. The example has a serial ( ex27.cpp ) and a parallel ( ex27p.cpp ) version. We recommend viewing examples 1 and 14 before viewing this example.", "title": "Example 27: Laplace Boundary Conditions"}, {"location": "examples/#example-28-constraints-and-sliding-boundary-conditions", "text": "This example code illustrates the use of constraints in linear solvers by solving an elasticity problem where the normal component of the displacement is constrained to zero on two boundaries but tangential displacement is allowed. The constraints can be enforced in several different ways, including eliminating them from the linear system or solving a saddle-point system that explicitly includes constraint conditions. The example has a serial ( ex28.cpp ) and a parallel ( ex28p.cpp ) version. We recommend viewing example 2 before viewing this example.", "title": "Example 28: Constraints and Sliding Boundary Conditions"}, {"location": "examples/#example-29-solving-pdes-on-embedded-surfaces", "text": "This example demonstrates setting up and solving an anisotropic Laplace problem $$-\\nabla\\cdot(\\sigma\\nabla u) = 1 \\quad\\text{in } \\Omega$$ with homogeneous Dirichlet boundary conditions $$ u = 0 \\quad\\text{on } \\partial\\Omega$$ where $\\Omega$ is a two dimensional curved surface embedded in three dimensions and $\\sigma$ is an anisotropic diffusion tensor. The example demonstrates and validates our DiffusionIntegrator 's ability to properly integrate three dimensional fluxes on a two dimensional domain.
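A minimal sketch of that kind of setup, assuming a surface mesh is already available and using a made-up diagonal tensor (this is not the ex29.cpp source):

```cpp
// Sketch: anisotropic diffusion on a 2D surface mesh embedded in 3D.
// The tensor below is a hypothetical example; ex29 uses its own coefficient.
#include "mfem.hpp"
using namespace mfem;

void SigmaFunc(const Vector &x, DenseMatrix &sigma)
{
   sigma.SetSize(x.Size());              // 3x3 in the embedded-surface case
   sigma = 0.0;
   for (int i = 0; i < x.Size(); i++) { sigma(i, i) = 1.0; }
   sigma(0, 0) = 10.0;                   // stronger diffusion along x
}

void BuildSurfaceDiffusion(Mesh &surface_mesh)
{
   H1_FECollection fec(2, surface_mesh.Dimension());
   FiniteElementSpace fes(&surface_mesh, &fec);

   MatrixFunctionCoefficient sigma(surface_mesh.SpaceDimension(), SigmaFunc);
   BilinearForm a(&fes);
   a.AddDomainIntegrator(new DiffusionIntegrator(sigma)); // 3D fluxes, 2D domain
   a.Assemble();
}
```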
Not all of our integrators currently support such cases, but the DiffusionIntegrator can be used as a simple example of how to extend other integrators when necessary. The example has a serial ( ex29.cpp ) and a parallel ( ex29p.cpp ) version. We recommend viewing examples 1 and 7 before viewing this example.", "title": "Example 29: Solving PDEs on embedded surfaces"}, {"location": "examples/#example-30-resolving-rough-and-fine-scale-problem-data", "text": "Unresolved problem data will affect the accuracy of a discretized PDE solution as well as a posteriori estimates of the solution error. This example uses a CoefficientRefiner object to preprocess an input mesh until the resolution of the prescribed problem data $f \\in L^2$ is below a prescribed tolerance. In this example, the resolution is identified with a data oscillation function on the mesh $\\mathcal{T}$, defined as $$ \\mathrm{osc}(f) = \\Big( \\sum_{T\\in\\mathcal{T}} \\| h \\cdot (I - \\Pi)\\, f \\|^2_{L^2(T)} \\Big)^{1/2}, $$ where $h$ is the local element size function and $\\Pi$ is a finite element projection operator, and the sum is taken over all elements $T$ in the mesh. In this example, the coarse initial mesh is adaptively refined until $\\mathrm{osc}(f)$ is below a prescribed tolerance for various candidate functions $f \\in L^2$. When using rough problem data, it is recommended to perform this type of preprocessing before a posteriori error estimation. The example has a serial ( ex30.cpp ) and a parallel ( ex30p.cpp ) version. We recommend viewing examples 1 and 6 before viewing this example.", "title": "Example 30: Resolving rough and fine-scale problem data"}, {"location": "examples/#example-31-anisotropic-definite-maxwell-problem", "text": "This example code solves a simple electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + \\sigma E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". In this example $\\sigma$ is an anisotropic 3x3 tensor. Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 1D, 2D, or 3D. The example demonstrates the use of restricted $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. These restricted spaces allow the solution of 1D or 2D electromagnetic problems which involve 3D field vectors. Such problems arise in plasma physics and crystallography. The example has a serial ( ex31.cpp ) and a parallel ( ex31p.cpp ) version. We recommend viewing example 3 before viewing this example.", "title": "Example 31: Anisotropic Definite Maxwell Problem"}, {"location": "examples/#example-32-anisotropic-maxwell-eigenproblem", "text": "This example code solves the Maxwell (electromagnetic) eigenvalue problem with anisotropic permittivity, $\\epsilon$ $$\\nabla\\times\\nabla\\times\\, E = \\lambda\\, \\epsilon E $$ with homogeneous Dirichlet boundary conditions $E \\times n = 0$. We compute a number of the lowest nonzero eigenmodes by discretizing the curl curl operator using a Nedelec finite element space of the specified order in 1D, 2D, or 3D. The example demonstrates the use of restricted $H(curl)$ finite element spaces in an eigenmode context. These restricted spaces allow the solution of 1D or 2D electromagnetic problems which involve 3D field vectors.
Such problems arise in plasma physics and crystallography. The example highlights the use of the AME subspace eigenvalue solver from HYPRE, which uses LOBPCG and AMS internally. Reusing multiple GLVis visualization windows for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex32p.cpp ) version. We recommend viewing examples 13 and 31 before viewing this example.", "title": "Example 32: Anisotropic Maxwell Eigenproblem"}, {"location": "examples/#example-33-spectral-fractional-laplacian", "text": "This example code demonstrates the use of MFEM to solve the fractional Laplacian problem $$ (-\\Delta)^\\alpha u = 1, \\quad \\alpha > 0, $$ with homogeneous Dirichlet boundary conditions. The problem solved in this example is similar to ex1 , but involves a fractional-order diffusion operator whose inverse can be approximated by a series of inverses of integer-order diffusion operators. Solving each of these independent integer-order PDEs with MFEM and summing their solutions results in a discrete solution to the fractional Laplacian problem above. The example has a serial ( ex33.cpp ) and a parallel ( ex33p.cpp ) version. We recommend viewing Example 1 before viewing this example.", "title": "Example 33: Spectral fractional Laplacian"}, {"location": "examples/#example-34-source-function-using-a-submesh-transfer", "text": "This example demonstrates the use of a SubMesh object to transfer solution data from a sub-domain and use this as a source function on the full domain. In this case we compute a volumetric current density $\\vec{J}$ as the gradient of a scalar potential $\\varphi$ on a portion of the domain. $$\\nabla\\cdot(\\sigma\\nabla\\varphi)=0$$ $$\\vec{J} = -\\sigma\\nabla\\varphi$$ Where a voltage difference is applied on surfaces of the sub-domain (shown on the left) to generate the current density restricted to this sub-domain. The current density is then transferred to the full domain (shown on the right) using a SubMesh object. We then use this current density on the full domain as a source term in a magnetostatic solve for a vector potential $\\vec{A}$. $$\\nabla\\times(\\mu^{-1}\\nabla\\times\\vec{A})=\\vec{J}$$ $$\\vec{B} = \\nabla\\times\\vec{A}$$ This example verifies the recreation of boundary attributes on a sub-domain mesh as well as transfer of Raviart-Thomas vector fields between the SubMesh and the full Mesh. Note that the data transfer in this particular example involves arbitrary order Raviart-Thomas degrees of freedom on a mixture of tetrahedral and triangular prism elements. The example has a serial ( ex34.cpp ) and a parallel ( ex34p.cpp ) version. We recommend viewing Examples 1 and 3 before viewing this example.", "title": "Example 34: Source Function using a SubMesh Transfer"}, {"location": "examples/#example-35-port-boundary-conditions-using-submesh-transfers", "text": "This example demonstrates the use of a SubMesh object to transfer a port boundary condition from a portion of the boundary to the corresponding portion of the full domain. 
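The basic SubMesh workflow looks roughly like the sketch below; the class and method names (SubMesh::CreateFromBoundary, SubMesh::Transfer) are written from memory, and the constant port field is a placeholder, so treat this as an assumption-laden illustration rather than the ex35p source:

```cpp
// Rough sketch: extract a boundary SubMesh, define a field on it, and transfer
// that field back to the parent mesh. Names and signatures are my recollection
// of the SubMesh API; the constant port field stands in for a computed mode.
#include "mfem.hpp"
using namespace mfem;

void TransferPortField(Mesh &parent_mesh, GridFunction &parent_field,
                       Array<int> &port_bdr_attributes)
{
   // Build a (dim-1)-dimensional mesh from the selected boundary attributes.
   SubMesh port = SubMesh::CreateFromBoundary(parent_mesh, port_bdr_attributes);

   H1_FECollection fec(2, port.Dimension());
   FiniteElementSpace port_fes(&port, &fec);
   GridFunction port_field(&port_fes);
   port_field = 1.0;   // placeholder: ex35p computes an eigenmode here instead

   // Map the sub-mesh field onto the matching degrees of freedom of the parent.
   SubMesh::Transfer(port_field, parent_field);
}
```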
Just as in Example 22 this example implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0\\mbox{ with }u|_\\Gamma=v$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0\\mbox{ with }\\hat{n}\\times(\\vec{u}\\times\\hat{n})|_\\Gamma=\\vec{v}$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0\\mbox{ with }\\hat{n}\\cdot\\vec{u}|_\\Gamma=v$$ Where $\\Gamma$ is a portion of the boundary called the port . In each case the field is driven by a forced oscillation, with angular frequency $\\omega$, imposed at the boundary or a portion of the boundary. In Example 22 this boundary condition was simply a constant in space. In this example the boundary condition is an eigenmode of a lower dimensional eigenvalue problem defined on a portion of the boundary as follows: For the scalar $H^1$ field: $$-\\nabla\\cdot\\left(\\nabla v\\right) = \\lambda\\,v\\mbox{ with }v|_{\\partial\\Gamma}=0$$ For the vector $H(curl)$ field: $$\\nabla\\times\\left(\\nabla\\times\\vec{v}\\right) = \\lambda\\,\\vec{v}\\mbox{ with }\\hat{n}_{\\partial\\Gamma}\\times\\vec{v}|_{\\partial\\Gamma}=0$$ For the vector $H(div)$ field: $$-\\nabla\\cdot\\left(\\nabla v\\right) = \\lambda\\,v\\mbox{ with }\\hat{n}_{\\partial\\Gamma}\\cdot\\nabla v|_{\\partial\\Gamma}=0$$ The different cases implemented in this example can be used to verify the transfer of an $H^1$ scalar field, the tangential components of an $H(curl)$ vector field, and the normal component of an $H(div)$ vector field (as a scalar $L^2$ field in this case) between a SubMesh and its parent mesh. The example has only a parallel ( ex35p.cpp ) version because the eigenmode solver used to compute the field on the port is only implemented in parallel. We recommend viewing Examples 11, 13, and 22 before viewing this example.", "title": "Example 35: Port Boundary Conditions using SubMesh Transfers"}, {"location": "examples/#example-36-obstacle-problem", "text": "This example code solves the pointwise bound-constrained energy minimization problem $$ \\text{minimize } \\frac{1}{2}\\|\\nabla u\\|^2 \\text{ in } H^1_0(\\Omega)\\, \\text{ subject to } u \\ge \\varphi\\,.$$ This is known as the obstacle problem, and it is a classical motivating example in the study of variational inequalities and free boundary problems. In this example, the obstacle $\\varphi$ is the graph of a half-sphere centered at the origin of a circular domain $\\Omega$. After solving to a specified tolerance, the numerical solution is compared to a closed-form exact solution to assess its accuracy. The problem is solved using the Proximal Galerkin finite element method, which is a nonlinear, structure-preserving mixed method for pointwise bound constraints proposed by Keith and Surowiec . In turn, this example highlights MFEM's ability to deliver high-order solutions to variational inequalities and showcases how to set up and solve nonlinear mixed methods. The example has a serial ( ex36.cpp ) and a parallel ( ex36p.cpp ) version. 
We recommend viewing Example 1 before viewing this example.", "title": "Example 36: Obstacle Problem"}, {"location": "examples/#example-37-topology-optimization", "text": "Density field $\\rho$ Problem set-up and domain $\\Omega$ This example code solves a classical cantilever beam topology optimization problem. The aim is to find an optimal material density field $\\rho$ in $L^1(\\Omega)$ to minimize the elastic compliance; i.e., $$\\begin{align} &\\text{minimize} \\int_\\Omega \\mathbf{f} \\cdot \\mathbf{u}(\\rho)\\, \\mathrm{d}x\\, \\text{ over }\\, \\rho \\in L^1(\\Omega) \\\\ &\\text{subject to }\\, 0 \\leq \\rho \\leq 1\\, \\text{ and } \\int_\\Omega \\rho\\, \\mathrm{d}x = \\theta\\, \\mathrm{vol}(\\Omega) \\,. \\end{align}$$ In this problem, $\\mathbf{f}$ is a localized force and the linearly elastic displacement field $\\mathbf{u} = \\mathbf{u}(\\rho)$ is determined by a material density field $\\rho$ with total volume fraction $0<\\theta<1$. The problem is solved using a mirror descent algorithm proposed by Keith and Surowiec . For further details, see the more elaborate description of this PDE-constrained optimization problem given in the example code and the aforementioned paper. The example has a serial ( ex37.cpp ) and a parallel ( ex37p.cpp ) version. We recommend viewing Example 2 before viewing this example.", "title": "Example 37: Topology Optimization"}, {"location": "examples/#example-38-cut-volume-and-cut-surface-integration", "text": "This example code demonstrates construction of cut-surface and cut-volume IntegrationRules. The cut is specified by the zero level set of a given Coefficient $\\phi$. The resulting IntegrationRules are combined with standard LinearFormIntegrators to demonstrate integration of a function $u$ over an implicit interface, and a subdomain bounded by an implicit interface: $$ S = \\int_{\\phi = 0} u(x) ~ ds, \\quad V = \\int_{\\phi > 0} u(x) ~ dx. $$ The IntegrationRules are constructed by the moment-fitting algorithm introduced by M\u00fcller, Kummer and Oberlack . Through a set of basis functions, for each element the method defines and solves a local under-determined system for the vector of quadrature weights. All surface and volume integrals, which are required to form the system, are reduced to 1D integration over intersected segments. The example has only a serial ( ex38.cpp ) version, because the construction of the integration rules is an element-local procedure. It requires MFEM to be built with LAPACK, which is used to find the optimal solution of an under-determined system of equations.", "title": "Example 38: Cut-Volume and Cut-Surface Integration"}, {"location": "examples/#example-39-named-attribute-sets", "text": "This example uses the Poisson equation to demonstrate the use of named attribute sets in MFEM to specify material regions, boundary regions, or source regions by name rather than attribute numbers. It also demonstrates how new named attribute sets may be created from arbitrary groupings of attribute numbers and used as a convenient shorthand to refer to those groupings in other portions of the application or through the command line. Named attribute sets also required changes to MFEM's mesh file formats. This example makes use of a custom input mesh file ( compass.msh ) produced using Gmsh which includes named regions and boundaries. A related mesh file ( compass.mesh ) illustrates MFEM's representation of the new named attribute sets. See file formats for details of the augmented mesh file format. 
The example has a serial ( ex39.cpp ) and a parallel ( ex39p.cpp ) version. We recommend viewing Example 1 before viewing this example.", "title": "Example 39: Named Attribute Sets"}, {"location": "examples/#example-40-eikonal-equation", "text": "This example highlights MFEM's ability to solve a fully-nonlinear, first-order PDE with high-order finite elements. In particular, this example uses the proximal Galerkin method to solve the eikonal equation, $$ |\\nabla u| = 1 \\text{ in } \\Omega, \\quad u = 0 \\text{ on } \\partial \\Omega. $$ At each point $x$ in the domain $\\Omega$, the solution of this PDE provides the Euclidean distance to the domain boundary, $u(x) = \\min \\{ | x - y| : y \\in \\partial \\Omega\\}$. The problem is solved by recasting $u$ as the solution of the nonlinear program $$ \\text{maximize } \\int_\\Omega u\\, \\mathrm{d} x\\, \\text{ in } W^{1,\\infty}_0(\\Omega)\\, \\text{ subject to } |\\nabla u | \\leq 1 \\text{ a.e. in } \\Omega.$$ A solution is then obtained by discretizing and solving a sequence of nonlinear saddle-point problems. See the example code for a more detailed description of the method. The example has a serial ( ex40.cpp ) and a parallel ( ex40p.cpp ) version. We recommend viewing Example 5 and Example 36 before viewing this example.", "title": "Example 40: Eikonal Equation"}, {"location": "examples/#nurbs-example-1-laplace-problem", "text": "This example code demonstrates the use of MFEM to define a simple isogeometric NURBS discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. The problem solved in this example is the same as Example 1 . The example has a serial ( nurbs_ex1.cpp ) and a parallel ( nurbs_ex1p.cpp ) version. There is also a version that demonstrates efficient patchwise quadrature ( nurbs_ex1 patch.cpp ).", "title": "NURBS Example 1: Laplace Problem"}, {"location": "examples/#nurbs-example-3-definite-maxwell-problem", "text": "This example code solves a simple electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with NURBS-based $H(curl)$ elements in 2D or 3D. The problem solved in this example is the same as Example 3 . The example has only a serial ( nurbs_ex1.cpp ) version.", "title": "NURBS Example 3: Definite Maxwell Problem"}, {"location": "examples/#nurbs-example-5-darcy-problem", "text": "This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system $$ \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} $$ with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize the velocity ($\\bf u$) with NURBS-based $H(div)$ elements and the pressure ($p$) with compatible NURBS-based $H^1$ elements. The problem solved in this example is the same as Example 5 . The example has only a serial ( nurbs_ex5.cpp ) version.", "title": "NURBS Example 5: Darcy Problem"}, {"location": "examples/#nurbs-example-11-laplace-eigenproblem", "text": "This example code demonstrates the use of MFEM to solve the eigenvalue problem $$-\\Delta u = \\lambda u$$ with homogeneous Dirichlet boundary conditions.
We compute a number of the lowest eigenmodes by discretizing the Laplacian and Mass operators using a finite element space of the specified order, or an isoparametric/isogeometric space if order < 1 (quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE, as well as optionally the SuperLU or STRUMPACK parallel direct solvers. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The problem solved in this example is the same as Example 11 . The example has only a parallel ( nurbs_ex11p.cpp ) version.", "title": "NURBS Example 11: Laplace Eigenproblem"}, {"location": "examples/#nurbs-example-24-mixed-finite-element-spaces", "text": "The problem solved in this example is the same as Example 24 , but NURBS-based elements are also supported. This example code illustrates usage of mixed finite element spaces, with three variants: $H^1 \\times H(curl)$ $H(curl) \\times H(div)$ $H(div) \\times L_2$ Using different approaches for demonstration purposes, we project or interpolate a gradient, curl, or divergence in the appropriate spaces, comparing the errors in each case. The example has a serial ( nurbs_ex24.cpp ).", "title": "NURBS Example 24: Mixed finite element spaces"}, {"location": "examples/#volta-miniapp-electrostatics", "text": "This miniapp demonstrates the use of MFEM to solve realistic problems in the field of linear electrostatics. Its features include: dielectric materials charge densities surface charge densities prescribed voltages applied polarizations high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( volta.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Volta Miniapp: Electrostatics"}, {"location": "examples/#tesla-miniapp-magnetostatics", "text": "This miniapp showcases many of MFEM's features while solving a variety of realistic magnetostatics problems. Its features include: diamagnetic and/or paramagnetic materials ferromagnetic materials volumetric current densities surface current densities external fields high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( tesla.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Tesla Miniapp: Magnetostatics"}, {"location": "examples/#maxwell-miniapp-transient-full-wave-electromagnetics", "text": "This miniapp solves the equations of transient full-wave electromagnetics. Its features include: mixed formulation of the coupled first-order Maxwell equations $H(\\mathrm{curl})$ discretization of the electric field $H(\\mathrm{div})$ discretization of the magnetic flux energy conserving, variable order, implicit time integration dielectric materials diamagnetic and/or paramagnetic materials conductive materials volumetric current densities Sommerfeld absorbing boundary conditions high order meshes high order basis functions advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( maxwell.cpp ) version. 
We recommend that new users start with the example codes before moving to the miniapps.", "title": "Maxwell Miniapp: Transient Full-Wave Electromagnetics"}, {"location": "examples/#joule-miniapp-transient-magnetics-and-joule-heating", "text": "This miniapp solves the equations of transient low-frequency (a.k.a. eddy current) electromagnetics, and simultaneously computes transient heat transfer with the heat source given by the electromagnetic Joule heating. Its features include: $H^1$ discretization of the electrostatic potential $H(\\mathrm{curl})$ discretization of the electric field $H(\\mathrm{div})$ discretization of the magnetic field $H(\\mathrm{div})$ discretization of the heat flux $L^2$ discretization of the temperature implicit transient time integration high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( joule.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Joule Miniapp: Transient Magnetics and Joule Heating"}, {"location": "examples/#mobius-strip-miniapp", "text": "This miniapp generates various Mobius strip-like surface meshes. It is a good way to generate complex surface meshes. Manipulating the mesh topology and performing mesh transformation are demonstrated. The mobius-strip mesh in the data directory was generated with this miniapp. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( mobius-strip.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Mobius Strip Miniapp"}, {"location": "examples/#klein-bottle-miniapp", "text": "This miniapp generates three types of Klein bottle surfaces. It is similar to the mobius-strip miniapp. Manipulating the mesh topology and performing mesh transformation are demonstrated. The klein-bottle and klein-donut meshes in the data directory were generated with this miniapp. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( klein-bottle.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Klein Bottle Miniapp"}, {"location": "examples/#toroid-miniapp", "text": "This miniapp generates two types of toroidal volume meshes; one with triangular cross sections and one with square cross sections. It works by defining a stack of individual elements and bending them so that the bottom and top of the stack can be joined to form a torus. It supports various options including: The element type: 0 - Wedge, 1 - Hexahedron The geometric order of the elements The major and minor radii The number of elements in the azimuthal direction The number of nodes to offset by before rejoining the stack The initial angle of the cross sectional shape The number of uniform refinement steps to apply Along with producing some visually interesting meshes, this miniapp demonstrates how simple 3D meshes can be constructed and transformed in MFEM. It also produces a family of meshes with simple but non-trivial topology for testing various features in MFEM. This miniapp has only a serial ( toroid.cpp ) version. 
We recommend that new users start with the example codes before moving to the miniapps.", "title": "Toroid Miniapp"}, {"location": "examples/#twist-miniapp", "text": "This miniapp generates simple periodic meshes to demonstrate MFEM's handling of periodic domains. MFEM's strategy is to use a discontinuous vector field to define the mesh coordinates on a topologically periodic mesh. It works by defining a stack of individual elements and stitching together the top and bottom of the mesh. The stack can also be twisted so that the vertices of the bottom and top can be joined with any integer offset (for tetrahedral and wedge meshes only even offsets are supported). The Twist miniapp supports various options including: The element type: 4 - Tetrahedron, 6 - Wedge, 8 - Hexahedron The geometric order of the elements The dimensions of the initial brick-shaped stack of elements The number of elements in the z direction The number of nodes to offset by before rejoining the stack The number of uniform refinement steps to apply Along with producing some visually interesting meshes, this miniapp demonstrates how simple 3D meshes can be constructed and transformed in MFEM. It also produces a family of meshes with simple but non-trivial topology for testing various features in MFEM. This miniapp has only a serial ( twist.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Twist Miniapp"}, {"location": "examples/#extruder-miniapp", "text": "This miniapp creates higher dimensional meshes from lower dimensional meshes by extrusion. Simple coordinate transformations can also be applied if desired. The initial mesh can be 1D or 2D 1D meshes can be extruded in both the y and z directions 2D meshes can be triangular, quadrilateral, or contain both element types Meshes with high order geometry are supported User can specify the number of elements and the distance to extrude Geometric order of the transformed mesh can be user selected or automatic This miniapp provides another demonstration of how simple meshes can be constructed and transformed in MFEM. This miniapp has only a serial ( extruder.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Extruder Miniapp"}, {"location": "examples/#trimmer-miniapp", "text": "This miniapp creates a new mesh file from an existing mesh by trimming away elements with selected attributes. Newly exposed boundary elements will be assigned new or user specified boundary attributes. The initial mesh can be 2D or 3D Meshes with high order geometry are supported Periodic meshes are supported NURBS meshes are not supported This miniapp provides another demonstration of how simple meshes can be constructed in MFEM. This miniapp has only a serial ( trimmer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Trimmer Miniapp"}, {"location": "examples/#polar-nc-miniapp", "text": "This miniapp generates a circular sector mesh that consist of quadrilaterals and triangles of similar sizes. The 3D version of the mesh is made of prisms and tetrahedra. The mesh is non-conforming by design, and can optionally be made curvilinear. The elements are ordered along a space-filling curve by default, which makes the mesh ready for parallel non-conforming AMR in MFEM. The implementation also demonstrates how to initialize a non-conforming mesh on the fly by marking hanging nodes with Mesh::AddVertexParents . 
For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( polar-nc.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Polar-NC Miniapp"}, {"location": "examples/#shaper-miniapp", "text": "This miniapp performs multiple levels of adaptive mesh refinement to resolve the interfaces between different \"materials\" in the mesh, as specified by a given material function. It can be used as a simple initial mesh generator, for example in the case when the interface is too complex to describe without local refinement. Both conforming and non-conforming refinements are supported. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( shaper.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Shaper Miniapp"}, {"location": "examples/#mesh-explorer-miniapp", "text": "This miniapp is a handy tool to examine, visualize and manipulate a given mesh. Some of its features are: visualizing of mesh materials and individual mesh elements mesh scaling, randomization, and general transformation manipulation of the mesh curvature the ability to simulate parallel partitioning quantitative and visual reports of mesh quality For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( mesh-explorer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Mesh Explorer Miniapp"}, {"location": "examples/#mesh-optimizer-miniapp", "text": "This miniapp performs mesh optimization using the Target-Matrix Optimization Paradigm (TMOP) by P. Knupp , and a global variational minimization approach ( Dobrev et al. ). It minimizes the quantity $$\\sum_T \\int_T \\mu(J(x)),$$ where $T$ are the target (ideal) elements, $J$ is the Jacobian of the transformation from the target to the physical element, and $\\mu$ is the mesh quality metric. This metric can measure shape, size or alignment of the region around each quadrature point. The combination of targets and quality metrics is used to optimize the physical node positions, i.e., they must be as close as possible to the shape / size / alignment of their targets. This code also demonstrates a possible use of nonlinear operators, as well as their coupling to Newton methods for solving minimization problems. Note that the utilized Newton methods are oriented towards avoiding invalid meshes with negative Jacobian determinants. Each Newton step requires the inversion of a Jacobian matrix, which is done through an inner linear solver. The miniapp has a serial ( mesh-optimizer.cpp ) and a parallel ( pmesh-optimizer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Mesh Optimizer Miniapp"}, {"location": "examples/#mesh-fitting-miniapp", "text": "This miniapp builds upon the mesh optimizer miniapp to enable mesh alignment with the zero isosurface of a discrete level-set. The approach is based on Dobrev et al. and Mittal et al. , where we minimize the quantity $$\\sum_T \\int_T \\mu(J(x)) + \\sum_{s \\in S} w \\,\\, \\sigma^2(x_s).$$ Here, the first term controls mesh quality and the second term enforces weak alignment of a selected subset of mesh-nodes ($s \\in S$) with the zero isosurface of the discrete level-set function ($\\sigma$). 
Click on the image on the right to see a demonstration of this method for generating body-fitted meshes for topology optimization in LiDO to maximize beam stiffness under a downward force on the right wall. The miniapp has a parallel ( pmesh-fitting.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Mesh Fitting Miniapp"}, {"location": "examples/#minimal-surface-miniapp", "text": "This miniapp solves Plateau's problem: the Dirichlet problem for the minimal surface equation. Options to solve the minimal surface equations for both parametric surfaces and surfaces restricted to be graphs of the form $z=f(x,y)$ are supported, including a number of examples such as the Catenoid, Helicoid, Costa and Scherk surfaces. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has a serial ( minimal-surface.cpp ) and a parallel ( pminimal-surface.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Minimal Surface Miniapp"}, {"location": "examples/#low-order-refined-transfer-miniapp", "text": "The lor-transfer miniapp, found under miniapps/tools , demonstrates the capability to generate a low-order refined mesh from a high-order mesh, and to transfer solutions between these meshes. Grid functions can be transferred between the coarse, high-order mesh and the low-order refined mesh using either $L^2$ projection or pointwise evaluation. These transfer operators can be designed to discretely conserve mass and to recover the original high-order solution when transferring a low-order grid function that was obtained by restricting a high-order grid function to the low-order refined space. The miniapp has only a serial ( lor-transfer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Low-Order Refined Transfer Miniapp"}, {"location": "examples/#interpolation-miniapps", "text": "The interpolation miniapps, found under miniapps/gslib , demonstrate the capability to interpolate high-order finite element functions at a given set of points in physical space. These miniapps utilize the gslib library's high-order interpolation utility for quad and hex meshes: Find Points miniapp has a serial ( findpts.cpp ) and a parallel ( pfindpts.cpp ) version that demonstrate the basic procedures for point search and evaluation of grid functions. Field Interp miniapp ( field-interp.cpp ) demonstrates how grid functions can be transferred between meshes. Field Diff miniapp ( field-diff.cpp ) demonstrates how grid functions on two different meshes can be compared with each other. These miniapps require installation of the gslib library. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Interpolation Miniapps"}, {"location": "examples/#extrapolation-miniapp", "text": "The extrapolate miniapp, found in the miniapps/shifted directory, extrapolates a finite element function from a set of elements (known values) to the rest of the domain. The set of elements that contains the known values is specified by the positive values of a level set Coefficient. The known values are not modified. The miniapp supports two PDE-based approaches ( Aslam , Bochkov & Gibou ), both of which rely on solving a sequence of advection problems in the direction of the unknown parts of the domain.
The extrapolation can be constant (1st order), linear (2nd order), or quadratic (3rd order). These formal orders hold for a limited band around the zero level set, see the above references for further information. The miniapp has only a parallel ( extrapolate.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Extrapolation Miniapp"}, {"location": "examples/#distance-solver-miniapp", "text": "The distance miniapp, found in the miniapps/shifted directory demonstrates the capability to compute the \"distance\" to a given point source or to the zero level set of a given function. Here \"distance\" refers to the length of the shortest path through the mesh. The input can be a DeltaCoefficient (representing a point source), or any Coefficient (for the case of a level set). The output is a ParGridFunction that can be scalar (representing the scalar distance), or a vector (its magnitude is the distance, and its direction is the starting direction of the shortest path). The miniapp has only a parallel ( distance.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Distance Solver Miniapp"}, {"location": "examples/#shifted-diffusion-miniapp", "text": "The diffusion miniapp, found in the miniapps/shifted directory, demonstrates the capability to formulate a boundary value problem using a surrogate computational domain. The method uses a distance function to the true boundary to enforce Dirichlet boundary conditions on the (non-aligned) mesh faces, therefore \"shifting\" the location where boundary conditions are imposed. The implementation in the miniapp is a high-order extension of the second-generation shifted boundary method . The miniapp has only a parallel ( diffusion.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Shifted Diffusion Miniapp"}, {"location": "examples/#laghos-miniapp", "text": "Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. The computational motives captured in Laghos include: Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements (triangular and tetrahedral elements can also be used, but with the less efficient full assembly option). Serial and parallel mesh refinement options can be set via a command-line flag. Explicit time-stepping loop with a variety of time integrator options. Laghos supports Runge-Kutta ODE solvers of orders 1, 2, 3, 4 and 6. Continuous and discontinuous high-order finite element discretization spaces of runtime-specified order. Moving (high-order) meshes. Separation between the assembly and the quadrature point-based computations. Point-wise definition of mesh size, time-step estimate and artificial viscosity coefficient. Constant-in-time velocity mass operator that is inverted iteratively on each time step. This is an example of an operator that is prepared once (fully or partially assembled), but is applied many times. The application cost is dominant for this operator. Time-dependent force matrix that is prepared every time step (fully or partially assembled) and is applied just twice per \"assembly\". Both the preparation and the application costs are important for this operator. Domain-decomposed MPI parallelism. 
Optional in-situ visualization with GLVis and data output for visualization / data analysis with VisIt . The Laghos miniapp is part of the CEED software suite , a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods. See https://github.com/ceed for more information and source code availability. This is an external miniapp, available at https://github.com/CEED/Laghos .", "title": "Laghos Miniapp"}, {"location": "examples/#remhos-miniapp", "text": "Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. The computational motives captured in Remhos include: Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements. Serial and parallel mesh refinement options can be set via a command-line flag. Explicit time-stepping loop with a variety of time integrator options. Remhos supports Runge-Kutta ODE solvers of orders 1, 2, 3, 4 and 6. Discontinuous high-order finite element discretization spaces of runtime-specified order. Moving (high-order) meshes. Mass operator that is local to each zone. It is inverted by iterative or exact methods at each time step. This operator is constant in time (transport mode) or changing in time (remap mode). Options for full or partial assembly. Advection operator that couples neighboring zones. It is applied once at each time step. This operator is constant in time (transport mode) or changing in time (remap mode). Options for full or partial assembly. Domain-decomposed MPI parallelism. Optional in-situ visualization with GLVis and data output for visualization and data analysis with VisIt . The Remhos miniapp is part of the CEED software suite , a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods. See https://github.com/ceed for more information and source code availability. This is an external miniapp, available at https://github.com/CEED/Remhos .", "title": "Remhos Miniapp"}, {"location": "examples/#navier-miniapp", "text": "Navier is a miniapp that solves the time-dependent Navier-Stokes equations of incompressible fluid dynamics \\begin{align} \\frac{\\partial u}{\\partial t} + (u \\cdot \\nabla) u - \\frac{1}{Re} \\nabla^2 u + \\nabla p &= f \\\\ \\nabla \\cdot u &= 0 \\end{align} using a spatially high-order finite element discretization. The time-dependent problem is solved using an (up to) third-order implicit-explicit method which leverages an extrapolation scheme for the convective parts and a backward-difference formulation for the viscous parts of the equation. The miniapp supports: Arbitrary order H1 elements High order mesh elements IMEX (EXTk-BDFk) time-stepping up to third order Convenient interface for new users A variety of test cases and benchmarks This miniapp has only a parallel ( navier_solver.cpp ) version.
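To make the workflow above more concrete, here is a rough, hypothetical driver sketch that is not taken from the miniapp itself; it only illustrates how the miniapp's NavierSolver class is typically driven, and the constructor arguments and method names (Setup, Step, GetCurrentVelocity, AddVelDirichletBC) are assumptions based on miniapps/navier/navier_solver.hpp that may differ between MFEM versions.
// Hypothetical driver sketch for the navier miniapp solver class (assumed API).
// Assumes MPI and a ParMesh have been set up as in the other parallel examples.
#include "navier_solver.hpp"
using namespace mfem;

void RunFlow(ParMesh &pmesh)
{
   const int order = 4;          // polynomial order of the H1 velocity space
   const double kin_vis = 1e-3;  // kinematic viscosity, i.e. 1/Re
   navier::NavierSolver flowsolver(&pmesh, order, kin_vis);

   // Velocity Dirichlet BCs and forcing terms would be registered here,
   // e.g. through AddVelDirichletBC(...) (assumed name, see navier_solver.hpp).

   double t = 0.0, dt = 1e-4, t_final = 1.0;
   flowsolver.Setup(dt);                      // assemble the constant-in-time operators
   for (int step = 0; t < t_final; ++step)
   {
      flowsolver.Step(t, dt, step);           // one IMEX (EXTk-BDFk) step; updates t
   }

   ParGridFunction *u_gf = flowsolver.GetCurrentVelocity();  // velocity solution
   ParGridFunction *p_gf = flowsolver.GetCurrentPressure();  // pressure solution
   (void)u_gf; (void)p_gf;  // visualize with GLVis or save for VisIt as needed
}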
We recommend that new users start with the example codes before moving to the miniapps.", "title": "Navier Miniapp"}, {"location": "examples/#block-solvers-miniapp", "text": "The Block Solvers miniapp, found under miniapps/solvers , compares various linear solvers for the saddle point system obtained from mixed finite element discretization of Darcy's flow problem \\begin{array}{rcl} k{\\bf u} & + \\nabla p & = f \\\\ -\\nabla \\cdot {\\bf u} & & = g \\end{array} The solvers being compared include: The divergence-free solver (coupled and decoupled modes), which is based on a multilevel decomposition of the Raviart-Thomas finite element space and its divergence-free subspace. MINRES preconditioned by the block diagonal preconditioner in ex5p.cpp . For more details, please see the documentation in the miniapps/solvers directory. The miniapp supports: Arbitrary order mixed finite element pair (Raviart-Thomas elements + piecewise discontinuous polynomials) Various combinations of essential and natural boundary conditions Homogeneous or heterogeneous scalar coefficient k This miniapp has only a parallel ( block-solvers.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Block Solvers Miniapp"}, {"location": "examples/#overlapping-grids-miniapps", "text": "Overlapping grids-based frameworks can often make problems tractable that are otherwise inaccessible with a single conforming grid. The following gslib -based miniapps in MFEM demonstrate how to set up and use overlapping grids: The Schwarz Example 1 miniapp in miniapps/gslib has a serial ( schwarz_ex1.cpp ) and a parallel ( schwarz_ex1p.cpp ) version that solves the Poisson problem on overlapping grids. The serial version is restricted to two overlapping grids, while the parallel version supports an arbitrary number of overlapping grids. The Navier Conjugate Heat Transfer miniapp in miniapps/navier ( navier_cht.cpp ) demonstrates how a conjugate heat transfer problem can be solved with the fluid dynamics (incompressible Navier-Stokes equations) and heat transfer (advection-diffusion equation) PDEs modeled on different meshes. These miniapps require installation of the gslib library. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Overlapping Grids Miniapps"}, {"location": "examples/#parelag-amge-for-hcurl-and-hdiv-miniapp", "text": "This miniapp demonstrates the ParELAG library and part of its capabilities. The miniapp employs MFEM and ParELAG to solve $H(\\mathrm{curl})$- and $H(\\mathrm{div})$-elliptic forms by an element-based algebraic multigrid (AMGe). ParELAG is a library mostly developed at the Center for Applied Scientific Computing of Lawrence Livermore National Laboratory, California, USA. The miniapp uses: A multilevel hierarchy of de Rham complexes of finite element spaces, built by ParELAG; Hiptmair-type (hybrid) smoothers, implemented in ParELAG; AMS (Auxiliary-space Maxwell Solver) or ADS (Auxiliary-space Divergence Solver), from HYPRE, for preconditioning or solving on the coarsest levels. Alternatively, it is possible to precondition or solve the $H(\\mathrm{div})$ form on the coarsest level via a hybridization approach. However, this is not yet implemented in ParELAG for the coarse levels. Only the hybridization solver that is directly applicable to an $H(\\mathrm{div})$-$L^2$ mixed (saddle-point) system is currently available in ParELAG.
We recommend viewing ex3p.cpp and ex4p.cpp before viewing this miniapp. For more details, please see the documentation in the miniapps/parelag directory. This miniapp has only a parallel ( MultilevelHcurlHdivSolver.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "ParELAG AMGe for H(curl) and H(div) Miniapp"}, {"location": "examples/#generating-gaussian-random-fields-via-the-spde-method", "text": "This miniapp generates Gaussian random fields on meshed domains $\\Omega \\subset \\mathbb{R}^n$ via the SPDE method. The method exploits a stochastic, fractional PDE whose full-space solutions yield Gaussian random fields with a Mat\u00e9rn covariance. The method was introduced and popularized by Lindgren et al. in 2010. In this miniapp, we use a slightly modified representation following Khristenko et al. More specifically, we solve the equation \\begin{equation} \\left( -\\frac{1}{2\\nu} \\nabla \\cdot \\left( \\Theta \\nabla \\right) + \\mathbf{1} \\right)^{\\frac{2\\nu+n}{4}} u(x,w) = \\eta W(x,w) \\ \\ \\ \\text{in} \\ \\ \\Omega, \\end{equation} with various boundary conditions. Solving this equation on $\\Omega = \\mathbb{R}^n$ delivers a homogeneous Gaussian random field with zero mean and Mat\u00e9rn covariance, \\begin{align}\\label{eq:MaternCovariance} C(x,y) &= \\sigma^2M_\\nu \\left(\\sqrt{2\\nu}\\, \\| x-y \\|_{\\Theta} \\right) , \\end{align} where $M_\\nu(z) = \\frac{2^{1-\\nu}}{\\Gamma(\\nu)} z ^{\\nu} K_\\nu \\left( z \\right)$ and $\\| x-y \\|_{\\Theta}^2 = (x-y)^\\top\\Theta (x-y)$. The Mat\u00e9rn model provides the regularity parameter $\\nu > 0$ and the anisotropic diffusion tensor $\\Theta \\in \\mathbb{R}^{n\\times n}$, which determines the spatial structure (correlation lengths). However, applying boundary conditions to the SPDE above provides the ability to model a significantly larger class of inhomogeneous random fields on complex domains. For further details, see the miniapp README . We recommend viewing ex33p.cpp before viewing this miniapp. This miniapp ( generate_random_field.cpp ) has only a parallel implementation. It further requires MFEM to be built with LAPACK, otherwise you may only use predefined values for $\\nu$. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Generating Gaussian Random Fields via the SPDE Method"}, {"location": "examples/#multidomain-and-submesh-demonstration-miniapp", "text": "This miniapp aims to demonstrate how to solve two PDEs, representing different physics, on the same domain. MFEM's SubMesh interface is used to compute on and transfer between the spaces of predefined parts of the domain. For the sake of simplicity, the spaces on each domain use the same-order H1 finite elements. This does not mean that the approach is limited to this configuration. A 3D domain consisting of an outer box with a cylinder-shaped subdomain inside is used. A heat equation is described on the outer box domain \\begin{align} \\frac{\\partial T}{\\partial t} &= \\kappa \\Delta T &&\\mbox{in outer box}\\\\ T &= T_{wall} &&\\mbox{on outside wall}\\\\ \\nabla T \\cdot \\hat{n} &= 0 &&\\mbox{on inside (cylinder) wall} \\end{align} with temperature $T$ and coefficient $\\kappa$ (non-physical in this example).
A convection-diffusion equation is described inside the cylinder domain \\begin{align} \\frac{\\partial T}{\\partial t} &= \\kappa \\Delta T - \\alpha \\nabla \\cdot (\\vec{b} T) & &\\mbox{in inner cylinder}\\\\ T &= T_{wall} & &\\mbox{on cylinder wall}\\\\ \\nabla T \\cdot \\hat{n} &= 0 & &\\mbox{else} \\end{align} with temperature $T$, coefficients $\\kappa$, $\\alpha$ and prescribed velocity profile $\\vec{b}$, and $T_{wall}$ obtained from the heat equation. To couple the solutions of both equations, a segregated solve with a one-way coupling approach is used. The heat equation of the outer box is solved from the timestep $T_{box}(t)$ to $T_{box}(t+dt)$. Then for the convection-diffusion equation $T_{wall}$ is set to $T_{box}(t+dt)$ and the equation is solved for $T(t+dt)$, which results in a first-order one-way coupling. This miniapp has only a parallel ( multidomain.cpp ) implementation. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Multidomain and SubMesh demonstration Miniapp"}, {"location": "examples/#dpg-miniapp", "text": "This miniapp demonstrates how to discretize and solve various PDEs using the Discontinuous Petrov-Galerkin (DPG) method. It utilizes a new user-friendly interface to assemble the block DPG systems arising from the discretization of any DPG formulation (such as Ultraweak or Primal). In addition, the miniapp supports complex-valued systems, static condensation for block systems, and AMR using the built-in DPG residual-based error indicator. This capability is showcased in the following DPG examples in miniapps/dpg . Ultraweak DPG formulation for diffusion . This example solves the simple Poisson equation $$-\\Delta u = f$$ and computes rates of convergence under successive uniform h-refinements for a smooth manufactured solution. The parallel version also includes an AMR implementation for the L-shape benchmark problem. This example has a serial ( diffusion.cpp ) and a parallel ( pdiffusion.cpp ) version. Ultraweak DPG formulation for convection-diffusion . This example solves the convection-diffusion problem: \\begin{align} -\\epsilon \\Delta u + \\nabla \\cdot (\\beta u) &= f\\\\ \\end{align} using AMR. The example demonstrates the use of mesh-dependent test norms which are suitable for problems with solutions that exhibit large gradients in internal or boundary layers . The example has a serial ( convection-diffusion.cpp ) and a parallel ( pconvection-diffusion.cpp ) version. Ultraweak DPG formulation for time-harmonic linear acoustics . This example solves the indefinite Helmholtz equation \\begin{align} -\\Delta u - \\omega^2 u &= f\\\\ \\end{align} The example includes formulations with manufactured plane-wave solutions as well as high-frequency scattering problems and the use of Perfectly Matched Layers (PML). It also demonstrates how to set up complex-valued systems and preconditioners for their solutions. The example has a serial ( acoustics.cpp ) and parallel ( pacoustics.cpp ) version. Ultraweak DPG formulation for time-harmonic Maxwell . This example solves the indefinite Maxwell problem \\begin{align} \\nabla \\times (\\mu^{-1} \\nabla \\times E) - \\omega^2 \\epsilon E &= J\\\\ \\end{align} The example includes formulations with smooth manufactured solutions, AMR formulations for high-frequency scattering problems as well as a problem with a singular solution.
The example has a serial ( maxwell.cpp ) and a parallel ( pmaxwell.cpp ) version.", "title": "DPG miniapp"}, {"location": "examples/#tribol-miniapp", "text": "This miniapp demonstrates how to use Tribol's mortar method to solve a contact patch test. A contact patch test places two aligned, linear elastic cubes in contact, then verifies that the exact elasticity solution for this problem is recovered. The exact solution requires transmission of a uniform pressure field across a (not necessarily conforming) interface (i.e. the contact surface). Mortar methods (including the one implemented in Tribol) are generally able to pass the contact patch test. The test assumes small deformations and no accelerations, so the relationship between forces/contact pressures and deformations/contact gaps is linear and, therefore, the problem can be solved exactly with a single linear solve. The mortar implementation is based on Puso and Laursen (2004) . A description of the Tribol implementation is available in Serac documentation . Lagrange multipliers are used to solve for the pressure required to prevent violation of the contact constraints. This miniapp has only a parallel ( contact-patch-test.cpp ) implementation. For more details, please see the documentation in miniapps/tribol/README.md . We recommend that new users start with the example codes before moving to the miniapps.", "title": "Tribol miniapp"}, {"location": "fem/", "text": "Finite Element Method The finite element method is a general discretization technique that can utilize unstructured grids to approximate the solutions of many partial differential equations (PDEs). There is a large body of literature on finite elements, including the following excellent books: Numerical Solution of Partial Differential Equations by the Finite Element Method by Claes Johnson Theory and Practice of Finite Elements by Alexandre Ern and Jean-Luc Guermond Higher-Order Finite Element Methods by Pavel \u0160ol\u00edn , Karel Segeth and Ivo Dole\u017eel High-Order Methods for Incompressible Fluid Flow by Michel Deville , Paul Fischer and Ernest Mund Finite Elements: Theory, Fast Solvers, and Applications in Elasticity Theory by Dietrich Braess The Finite Element Method for Elliptic Problems by Philippe Ciarlet The Mathematical Theory of Finite Element Methods by Susanne Brenner and Ridgway Scott An Analysis of the Finite Element Method by Gilbert Strang and George Fix The Finite Element Method: Its Basis and Fundamentals by Olek Zienkiewicz , Robert Taylor and J.Z. Zhu The MFEM library is designed to be a lightweight, general and highly scalable finite element toolkit that provides the building blocks for developing finite element algorithms in a manner similar to that of MATLAB for linear algebra methods. Some of the C++ classes for the finite element realizations of these PDE-level concepts in MFEM are described below. Primal and Dual Vectors The finite element method uses vectors of data in a variety of ways and the differences can be subtle. MFEM defines GridFunction , LinearForm , and Vector classes which help to distinguish the different roles that vectors of data can play. Bilinear Form Integrators Bilinear form integrators are at the heart of any finite element method; they are used to compute the integrals of products of basis functions over individual mesh elements (or sometimes over edges or faces).
The BilinearForm class adds several BilinearFormIntegrator s together to build the global sparse finite element matrix. Linear Form Integrators Linear form integrators are used to compute the integrals of products of a basis function with a given source function over individual mesh elements (or sometimes over edges or faces). The LinearForm class adds several LinearFormIntegrator s together to build the global right-hand side for the finite element linear system. Integration This page offers guidance on writing custom Bilinear Form or Linear Form Integrators. Coefficients The Coefficient objects in MFEM are general functions on continuous level that are used to represent the PDE coefficients of linear and bilinear forms, as well as to specify initial conditions, boundary conditions, exact solutions, etc. Nonlinear Form Integrators Nonlinear form integrators are used to express the local action of a general nonlinear finite element operator. In addition, they may provide the capability to assemble the local gradient operator and to compute the local energy. Linear Interpolators Unlike Bilinear and Linear forms, Linear Interpolators do not perform integrations, but project one basis function (or a linear function of a basis function) onto another basis function. The DiscreteLinearOperator class adds one or more LinearInterpolators together to build a global sparse matrix representation of the linear operator. Weak Formulations Weak formulations are at the heart of the finite element method. Finite element approximations are almost always less smooth than the solutions we hope to approximate. Weak formulations provide a means of approximating derivatives of non-differentiable functions. Boundary Conditions The types of available boundary conditions and how to apply them depend on the discretizations being used. This page describes how to enforce various boundary conditions for certain classes of problems.", "title": "Finite Elements"}, {"location": "fem/#finite-element-method", "text": "The finite element method is a general discretization technique that can utilize unstructured grids to approximate the solutions of many partial differential equations (PDEs). There is a large body of literature on finite elements, including the following excellent books: Numerical Solution of Partial Differential Equations by the Finite Element Method by Claes Johnson Theory and Practice of Finite Elements by Alexandre Ern and Jean-Luc Guermond Higher-Order Finite Element Methods by Pavel \u0160ol\u00edn , Karel Segeth and Ivo Dole\u017eel High-Order Methods for Incompressible Fluid Flow by Michel Deville , Paul Fischer and Ernest Mund Finite Elements: Theory, Fast Solvers, and Applications in Elasticity Theory by Dietrich Braess The Finite Element Method for Elliptic Problems by Philippe Ciarlet The Mathematical Theory of Finite Element Methods by Susanne Brenner and Ridgway Scott An Analysis of the Finite Element Method by Gilbert Strang and George Fix The Finite Element Method: Its Basis and Fundamentals by Olek Zienkiewicz , Robert Taylor and J.Z. Zhu The MFEM library is designed to be lightweight, general and highly scalable finite element toolkit that provides the building blocks for developing finite element algorithms in a manner similar to that of MATLAB for linear algebra methods. 
Some of the C++ classes for the finite element realizations of these PDE-level concepts in MFEM are described below.", "title": "Finite Element Method"}, {"location": "fem/#primal-and-dual-vectors", "text": "The finite element method uses vectors of data in a variety of ways and the differences can be subtle. MFEM defines GridFunction , LinearForm , and Vector classes which help to distinguish the different roles that vectors of data can play.", "title": "Primal and Dual Vectors"}, {"location": "fem/#bilinear-form-integrators", "text": "Bilinear form integrators are at the heart of any finite element method, they are used to compute the integrals of products of basis functions over individual mesh elements (or sometimes over edges or faces). The BilinearForm class adds several BilinearFormIntegrator s together to build the global sparse finite element matrix.", "title": "Bilinear Form Integrators"}, {"location": "fem/#linear-form-integrators", "text": "Linear form integrators are used to compute the integrals of products of a basis function with a given source function over individual mesh elements (or sometimes over edges or faces). The LinearForm class adds several LinearFormIntegrator s together to build the global right-hand side for the finite element linear system.", "title": "Linear Form Integrators"}, {"location": "fem/#integration", "text": "This page offers guidance on writing custom Bilinear Form or Linear Form Integrators.", "title": "Integration"}, {"location": "fem/#coefficients", "text": "The Coefficient objects in MFEM are general functions on continuous level that are used to represent the PDE coefficients of linear and bilinear forms, as well as to specify initial conditions, boundary conditions, exact solutions, etc.", "title": "Coefficients"}, {"location": "fem/#nonlinear-form-integrators", "text": "Nonlinear form integrators are used to express the local action of a general nonlinear finite element operator. In addition, they may provide the capability to assemble the local gradient operator and to compute the local energy.", "title": "Nonlinear Form Integrators"}, {"location": "fem/#linear-interpolators", "text": "Unlike Bilinear and Linear forms, Linear Interpolators do not perform integrations, but project one basis function (or a linear function of a basis function) onto another basis function. The DiscreteLinearOperator class adds one or more LinearInterpolators together to build a global sparse matrix representation of the linear operator.", "title": "Linear Interpolators"}, {"location": "fem/#weak-formulations", "text": "Weak formulations are at the heart of the finite element method. Finite element approximations are almost always less smooth than the solutions we hope to approximate. Weak formulations provide a means of approximating derivatives of non-differentiable functions.", "title": "Weak Formulations"}, {"location": "fem/#boundary-conditions", "text": "The types of available boundary conditions and how to apply them depend on the discretizations being used. 
This page describes how to enforce various boundary conditions for certain classes of problems.", "title": "Boundary Conditions"}, {"location": "fem_bc/", "text": "Boundary Conditions $ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} \\newcommand{\\abs}[1]{|#1|} \\newcommand{\\dO}{{\\partial\\Omega}} $ MFEM supports boundary conditions of mixed type through the definition of boundary attributes on the mesh. A boundary attribute is a positive integer assigned to each boundary element of the mesh. Since each boundary element can have only one attribute number, the boundary attributes split the boundary into a group of disjoint sets. MFEM allows the user to define boundary conditions on a subset of boundary attributes. Typically mixed boundary conditions are imposed on disjoint portions of the boundary defined as: Symbol Description $\\Gamma\\equiv\\dO$ Boundary of the Domain ($\\Omega$) $\\Gamma_D$ Dirichlet Boundary $\\Gamma_N$ Neumann Boundary $\\Gamma_R$ Robin Boundary $\\Gamma_0$ Natural Boundary Where we assume $\\Gamma = \\Gamma_D\\cup\\Gamma_N\\cup\\Gamma_R\\cup\\Gamma_0$. In MFEM boundaries are usually described by \"marker arrays\". A marker array is an array of integers containing zeros and ones with a length equal to the largest boundary attribute index. // Assume we start with an array containing boundary attribute numbers // stored in bdr_attr. // // Prepare a marker array from a set of attributes Array<int> bdr_marker(pmesh.bdr_attributes.Max()); bdr_marker = 0; for (int i=0; i<bdr_attr.Size(); i++) { bdr_marker[bdr_attr[i]-1] = 1; } Dirichlet (Essential) Boundary Conditions // Create a list of essential (Dirichlet) true dofs from the dbc_marker array Array<int> ess_tdof_list(0); fespace.GetEssentialTrueDofs(dbc_marker, ess_tdof_list); // Prepare the linear system with enforcement of the essential boundary // conditions OperatorPtr A; Vector B, X; a.FormLinearSystem(ess_tdof_list, u, b, A, X, B); Natural Boundary Conditions The so-called \"Natural Boundary Conditions\" arise whenever weak derivatives occur in a PDE (see below for more on weak derivatives ). Weak derivatives must be handled using integration by parts, which introduces a boundary integral. If this boundary integral is ignored, its value is implicitly set to zero, which creates an implicit constraint on the solution called a \"natural boundary condition\". Continuous Operator Weak Operator Natural BC $-\\div(\\lambda\\grad u)$ $(\\lambda\\grad u,\\grad v)$ $\\hat{n}\\cdot(\\lambda\\grad u)=0$ on $\\Gamma_0$ $\\curl(\\lambda\\curl\\vec{u})$ $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\hat{n}\\cross(\\lambda\\curl\\vec{u})=0$ on $\\Gamma_0$ $-\\grad(\\lambda\\div\\vec{u})$ $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $\\lambda\\div\\vec{u}=0$ on $\\Gamma_0$ $\\div(\\vec{\\lambda}u)$ $(-\\vec{\\lambda}u,\\grad v)$ $\\hat{n}\\cdot\\vec{\\lambda}u = 0$ on $\\Gamma_0$ $\\curl(\\lambda\\vec{u})$ $(\\lambda\\vec{u},\\curl\\vec{v})$ $\\hat{n}\\cross(\\lambda\\vec{u})=0$ on $\\Gamma_0$ $-\\div(\\lambda\\grad u) + \\div(\\vec{\\beta}u)$ $(\\lambda\\grad u - \\vec{\\beta}u,\\grad v)$ $\\hat{n}\\cdot(\\lambda\\grad u-\\vec{\\beta}u)=0$ on $\\Gamma_0$ No additional implementation is necessary to impose natural boundary conditions. Any portion of the boundary where a Dirichlet, Neumann, or Robin boundary condition has not been applied will receive a natural boundary condition by default. Neumann Boundary Conditions Neumann boundary conditions are closely related to natural boundary conditions.
Rather than ignoring the boundary integral we integrate a known function on the boundary which approximates the desired value of the boundary condition (often a involving a derivative of the field). The following table shows a variety of common operators and their related Neumann boundary condition. Operator Continuous Operator Neumann BC $(\\lambda\\grad u,\\grad v)$ $-\\div(\\lambda\\grad u)$ $\\hat{n}\\cdot(\\lambda\\grad u)=f$ on $\\Gamma_N$ $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ $\\hat{n}\\cross(\\lambda\\curl\\vec{u})=\\hat{n}\\cross\\vec{f}$ on $\\Gamma_N$ $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $-\\grad(\\lambda\\div\\vec{u})$ $\\lambda\\div\\vec{u}=\\hat{n}\\cdot\\vec{f}$ on $\\Gamma_N$ $(-\\vec{\\lambda}u,\\grad v)$ $\\div(\\vec{\\lambda}u)$ $\\hat{n}\\cdot\\vec{\\lambda}u = f$ on $\\Gamma_N$ $(\\lambda\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\vec{u})$ $\\hat{n}\\cross(\\lambda\\vec{u})=\\hat{n}\\cross\\vec{f}$ on $\\Gamma_N$ $(\\lambda\\grad u - \\vec{\\beta}u,\\grad v)$ $-\\div(\\lambda\\grad u) + \\div(\\vec{\\beta}u)$ $\\hat{n}\\cdot(\\lambda\\grad u-\\vec{\\beta}u)=f$ on $\\Gamma_N$ To impose these boundary conditions in MFEM simply modify the right-hand side of your linear system by adding the appropriate boundary integral of either $f$ or $\\vec{f}$. For $H^1$ or $L^2$ fields this can be accomplished by adding the BoundaryLFIntegrator with an appropriate coefficient for $f$ to a [Par]LinearForm object. Neumann boundary conditions can be added to the above example code by adding the following line before the call to b.Assemble() . // Add Neumann BCs n.(matCoef Grad u) = nbcCoef on the boundary marked in // the nbc_marker array. b.AddBoundaryIntegrator(new BoundaryLFIntegrator(nbcCoef), nbc_marker); For H(Curl) fields this can be accomplished by adding the VectorFEBoundaryTangentLFIntegrator with an appropriate vector coefficient for $\\vec{f}$ to a [Par]LinearForm object. And finally, for H(Div) fields this can be accomplished by adding the VectorFEBoundaryFluxLFIntegrator with an appropriate scalar coefficient for $f = \\hat{n}\\cdot\\vec{f}$ to a [Par]LinearForm object. Other integrators may be appropriate if it is desirable to express the functions $\\,f$ or $\\vec{f}$ in other ways. Robin Boundary Conditions Robin boundary conditions typically involve a linear function of the field and its normal derivative. As such they also arise from weak derivatives and the boundary integrals they introduce to the system of equations. Typical forms of the Robin boundary condition include the following. Operator Continuous Operator Robin BC $(\\lambda\\grad u,\\grad v)$ $-\\div(\\lambda\\grad u)$ $\\hat{n}\\cdot(\\lambda\\grad u)+\\gamma\\,u=f$ on $\\Gamma_R$ $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ $\\hat{n}\\cross(\\lambda\\curl\\vec{u}+\\gamma\\,\\hat{n}\\cross\\vec{u})=\\hat{n}\\cross\\vec{f}$ on $\\Gamma_R$ $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $-\\grad(\\lambda\\div\\vec{u})$ $\\lambda\\div\\vec{u}+\\gamma\\,\\hat{n}\\cdot\\vec{u}=\\hat{n}\\cdot\\vec{f}$ on $\\Gamma_R$ $(\\lambda\\grad u - \\vec{\\beta}u,\\grad v)$ $-\\div(\\lambda\\grad u) + \\div(\\vec{\\beta}u)$ $\\hat{n}\\cdot(\\lambda\\grad u-\\vec{\\beta}u)+\\gamma\\,u=f$ on $\\Gamma_R$ Robin boundary conditions are applied in the same manner as Neumann boundary conditions except that one must also add a boundary integral to the [Par]BilinearForm object to account for the term involving $\\gamma$. 
For example, when solving for an $H^1$ field one should add a MassIntegrator with an appropriate scalar coefficient for $\\gamma$. The implementation of a Robin boundary condition requires precisely the same change to the right-hand-side as the Neumann boundary condition as well as a new term in the bilinear form before a.Assemble() : // Add Robin BCs n.(matCoef Grad u) + rbcACoef u = rbcBCoef on the boundary // marked in the rbc_marker array. b.AddBoundaryIntegrator(new BoundaryLFIntegrator(rbcBCoef), rbc_marker); ... // Add Robin BCs n.(matCoef grad u) + rbcACoef u = rbcBCoef on the boundary // marked in the rbc_marker array. a.AddBoundaryIntegrator(new MassIntegrator(rbcACoef), rbc_marker); Discontinuous Galerkin Formulations In the Discontinuous Galerkin (DG) formulation the Natural , Neumann , and Robin can be implemented in a similar the same manner as in the continuous case (adding the appropriate LinearFormIntegrator as a boundary face integrator instead of a boundary integrator ). However, since DG basis functions have no degrees of freedom associated with the boundary, Dirichlet boundary conditions must be handled differently. // Add the desired value for the Dirichlet constraint on the boundary // marked in the dbc_marker array. b.AddBdrFaceIntegrator(new DGDirichletLFIntegrator(dbcCoef, matCoef, sigma, kappa), dbc_marker); ... // Add the n.Grad(u) boundary integral on the Dirichlet portion of the // boundary marked in the dbc_marker array. a.AddBdrFaceIntegrator(new DGDiffusionIntegrator(matCoef, sigma, kappa), dbc_marker); Where sigma and kappa are parameters controlling the symmetry and interior penalty used in the DG diffusion formulation. These two integrators work together to balance the natural boundary condition associated with the DiffusionIntegrator and to penalize solutions which differ from the desired Dirichlet value near the boundary. Similar pairs of integrators can be implemented to accommodate other PDEs. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Boundary Conditions"}, {"location": "fem_bc/#boundary-conditions", "text": "$ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} \\newcommand{\\abs}[1]{|#1|} \\newcommand{\\dO}{{\\partial\\Omega}} $ MFEM supports boundary conditions of mixed type through the definition of boundary attributes on the mesh. A boundary attribute is a positive integer assigned to each boundary element of the mesh. Since each boundary element can have only one attribute number the boundary attributes split the boundary into a group of disjoint sets. MFEM allows the user to define boundary conditions on a subset of boundary attributes. Typically mixed boundary conditions are imposed on disjoint portions of the boundary defined as: Symbol Description $\\Gamma\\equiv\\dO$ Boundary of the Domain ($\\Omega$) $\\Gamma_D$ Dirichlet Boundary $\\Gamma_N$ Neumann Boundary $\\Gamma_R$ Robin Boundary $\\Gamma_0$ Natural Boundary Where we assume $\\Gamma = \\Gamma_D\\cup\\Gamma_N\\cup\\Gamma_R\\cup\\Gamma_0$. In MFEM boundaries are usually described by \"marker arrays\". A marker array is an array of integers containing zeros and ones with a length equal to the largest boundary attribute index. // Assume we start with an array containing boundary attribute numbers // stored in bdr_attr. 
// // Prepare a marker array from a set of attributes Array<int> bdr_marker(pmesh.bdr_attributes.Max()); bdr_marker = 0; for (int i=0; i<bdr_attr.Size(); i++) { bdr_marker[bdr_attr[i]-1] = 1; } // Create a list of essential (Dirichlet) true dofs from the dbc_marker array Array<int> ess_tdof_list(0); fespace.GetEssentialTrueDofs(dbc_marker, ess_tdof_list); // Prepare the linear system with enforcement of the essential boundary // conditions OperatorPtr A; Vector B, X; a.FormLinearSystem(ess_tdof_list, u, b, A, X, B);", "title": "Dirichlet (Essential) Boundary Conditions"}, {"location": "fem_bc/#natural-boundary-conditions", "text": "The so-called \"Natural Boundary Conditions\" arise whenever weak derivatives occur in a PDE (see below for more on weak derivatives ). Weak derivatives must be handled using integration by parts, which introduces a boundary integral. If this boundary integral is ignored, its value is implicitly set to zero, which creates an implicit constraint on the solution called a \"natural boundary condition\". Continuous Operator Weak Operator Natural BC $-\\div(\\lambda\\grad u)$ $(\\lambda\\grad u,\\grad v)$ $\\hat{n}\\cdot(\\lambda\\grad u)=0$ on $\\Gamma_0$ $\\curl(\\lambda\\curl\\vec{u})$ $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\hat{n}\\cross(\\lambda\\curl\\vec{u})=0$ on $\\Gamma_0$ $-\\grad(\\lambda\\div\\vec{u})$ $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $\\lambda\\div\\vec{u}=0$ on $\\Gamma_0$ $\\div(\\vec{\\lambda}u)$ $(-\\vec{\\lambda}u,\\grad v)$ $\\hat{n}\\cdot\\vec{\\lambda}u = 0$ on $\\Gamma_0$ $\\curl(\\lambda\\vec{u})$ $(\\lambda\\vec{u},\\curl\\vec{v})$ $\\hat{n}\\cross(\\lambda\\vec{u})=0$ on $\\Gamma_0$ $-\\div(\\lambda\\grad u) + \\div(\\vec{\\beta}u)$ $(\\lambda\\grad u - \\vec{\\beta}u,\\grad v)$ $\\hat{n}\\cdot(\\lambda\\grad u-\\vec{\\beta}u)=0$ on $\\Gamma_0$ No additional implementation is necessary to impose natural boundary conditions. Any portion of the boundary where a Dirichlet, Neumann, or Robin boundary condition has not been applied will receive a natural boundary condition by default.", "title": "Natural Boundary Conditions"}, {"location": "fem_bc/#neumann-boundary-conditions", "text": "Neumann boundary conditions are closely related to natural boundary conditions. Rather than ignoring the boundary integral we integrate a known function on the boundary which approximates the desired value of the boundary condition (often involving a derivative of the field). The following table shows a variety of common operators and their related Neumann boundary conditions. Operator Continuous Operator Neumann BC $(\\lambda\\grad u,\\grad v)$ $-\\div(\\lambda\\grad u)$ $\\hat{n}\\cdot(\\lambda\\grad u)=f$ on $\\Gamma_N$ $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ $\\hat{n}\\cross(\\lambda\\curl\\vec{u})=\\hat{n}\\cross\\vec{f}$ on $\\Gamma_N$ $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $-\\grad(\\lambda\\div\\vec{u})$ $\\lambda\\div\\vec{u}=\\hat{n}\\cdot\\vec{f}$ on $\\Gamma_N$ $(-\\vec{\\lambda}u,\\grad v)$ $\\div(\\vec{\\lambda}u)$ $\\hat{n}\\cdot\\vec{\\lambda}u = f$ on $\\Gamma_N$ $(\\lambda\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\vec{u})$ $\\hat{n}\\cross(\\lambda\\vec{u})=\\hat{n}\\cross\\vec{f}$ on $\\Gamma_N$ $(\\lambda\\grad u - \\vec{\\beta}u,\\grad v)$ $-\\div(\\lambda\\grad u) + \\div(\\vec{\\beta}u)$ $\\hat{n}\\cdot(\\lambda\\grad u-\\vec{\\beta}u)=f$ on $\\Gamma_N$ To impose these boundary conditions in MFEM simply modify the right-hand side of your linear system by adding the appropriate boundary integral of either $f$ or $\\vec{f}$.
For $H^1$ or $L^2$ fields this can be accomplished by adding the BoundaryLFIntegrator with an appropriate coefficient for $f$ to a [Par]LinearForm object. Neumann boundary conditions can be added to the above example code by adding the following line before the call to b.Assemble() . // Add Neumann BCs n.(matCoef Grad u) = nbcCoef on the boundary marked in // the nbc_marker array. b.AddBoundaryIntegrator(new BoundaryLFIntegrator(nbcCoef), nbc_marker); For H(Curl) fields this can be accomplished by adding the VectorFEBoundaryTangentLFIntegrator with an appropriate vector coefficient for $\\vec{f}$ to a [Par]LinearForm object. And finally, for H(Div) fields this can be accomplished by adding the VectorFEBoundaryFluxLFIntegrator with an appropriate scalar coefficient for $f = \\hat{n}\\cdot\\vec{f}$ to a [Par]LinearForm object. Other integrators may be appropriate if it is desirable to express the functions $\\,f$ or $\\vec{f}$ in other ways.", "title": "Neumann Boundary Conditions"}, {"location": "fem_bc/#robin-boundary-conditions", "text": "Robin boundary conditions typically involve a linear function of the field and its normal derivative. As such they also arise from weak derivatives and the boundary integrals they introduce to the system of equations. Typical forms of the Robin boundary condition include the following. Operator Continuous Operator Robin BC $(\\lambda\\grad u,\\grad v)$ $-\\div(\\lambda\\grad u)$ $\\hat{n}\\cdot(\\lambda\\grad u)+\\gamma\\,u=f$ on $\\Gamma_R$ $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ $\\hat{n}\\cross(\\lambda\\curl\\vec{u}+\\gamma\\,\\hat{n}\\cross\\vec{u})=\\hat{n}\\cross\\vec{f}$ on $\\Gamma_R$ $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $-\\grad(\\lambda\\div\\vec{u})$ $\\lambda\\div\\vec{u}+\\gamma\\,\\hat{n}\\cdot\\vec{u}=\\hat{n}\\cdot\\vec{f}$ on $\\Gamma_R$ $(\\lambda\\grad u - \\vec{\\beta}u,\\grad v)$ $-\\div(\\lambda\\grad u) + \\div(\\vec{\\beta}u)$ $\\hat{n}\\cdot(\\lambda\\grad u-\\vec{\\beta}u)+\\gamma\\,u=f$ on $\\Gamma_R$ Robin boundary conditions are applied in the same manner as Neumann boundary conditions except that one must also add a boundary integral to the [Par]BilinearForm object to account for the term involving $\\gamma$. For example, when solving for an $H^1$ field one should add a MassIntegrator with an appropriate scalar coefficient for $\\gamma$. The implementation of a Robin boundary condition requires precisely the same change to the right-hand-side as the Neumann boundary condition as well as a new term in the bilinear form before a.Assemble() : // Add Robin BCs n.(matCoef Grad u) + rbcACoef u = rbcBCoef on the boundary // marked in the rbc_marker array. b.AddBoundaryIntegrator(new BoundaryLFIntegrator(rbcBCoef), rbc_marker); ... // Add Robin BCs n.(matCoef grad u) + rbcACoef u = rbcBCoef on the boundary // marked in the rbc_marker array. a.AddBoundaryIntegrator(new MassIntegrator(rbcACoef), rbc_marker);", "title": "Robin Boundary Conditions"}, {"location": "fem_bc/#discontinuous-galerkin-formulations", "text": "In the Discontinuous Galerkin (DG) formulation the Natural , Neumann , and Robin can be implemented in a similar the same manner as in the continuous case (adding the appropriate LinearFormIntegrator as a boundary face integrator instead of a boundary integrator ). However, since DG basis functions have no degrees of freedom associated with the boundary, Dirichlet boundary conditions must be handled differently. 
// Add the desired value for the Dirichlet constraint on the boundary // marked in the dbc_marker array. b.AddBdrFaceIntegrator(new DGDirichletLFIntegrator(dbcCoef, matCoef, sigma, kappa), dbc_marker); ... // Add the n.Grad(u) boundary integral on the Dirichlet portion of the // boundary marked in the dbc_marker array. a.AddBdrFaceIntegrator(new DGDiffusionIntegrator(matCoef, sigma, kappa), dbc_marker); Where sigma and kappa are parameters controlling the symmetry and interior penalty used in the DG diffusion formulation. These two integrators work together to balance the natural boundary condition associated with the DiffusionIntegrator and to penalize solutions which differ from the desired Dirichlet value near the boundary. Similar pairs of integrators can be implemented to accommodate other PDEs.", "title": "Discontinuous Galerkin Formulations"}, {"location": "fem_weak_form/", "text": "Weak Formulations $ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} \\newcommand{\\abs}[1]{|#1|} \\newcommand{\\dO}{{\\partial\\Omega}} $ Spaces of finite element basis functions are rarely rich enough to contain exact solutions to partial differential equations (PDEs) of interest. This is particularly true when we consider the irregular domains that often arise in practical simulations. One consequence of this is that finite element solutions often don't precisely satisfy the continuous PDEs being modeled. The goal is to build a finite element solution which approximates the true solution and satisfies the PDE in a weaker sense. Consider a general linear differential operator $L(u)$ and the partial differential equation: $$L(u) = f\\mbox{ on }\\Omega$$ We approximate the solution using a linear combination of finite element basis functions which we'll call $\\varphi_i$. $$u\\approx u_h\\equiv\\sum_i\\alpha_i\\varphi_i(\\vec{x})$$ The basis functions $\\varphi_i$ are known but we need to find the degrees of freedom, $\\alpha_i$, which produce a reasonable approximation of $u$. In Galerkin finite element methods this is done by multiplying the PDE by each of the basis functions and integrating over the problem domain. If we have a total of $N$ finite element basis functions, this leads to a set of $N$ equations for the $N$ unknowns. The resulting system of equations for the $\\alpha_i$ is called the \"weak formulation\" of the PDE. The weak formulation of this problem can be written as: $$\\sum_j\\alpha_j\\int_\\Omega L(\\varphi_j)\\varphi_i\\,d\\Omega = \\int_\\Omega f\\varphi_i\\,d\\Omega$$ or, equivalently, as the matrix equation: $$M\\vec{\\alpha}=\\vec{f}$$ Where the matrix entries $M_{ij}\\equiv\\int_\\Omega L(\\varphi_j)\\varphi_i\\,d\\Omega$ and the entries of $\\,\\vec{f}$ are given by $\\,f_i\\equiv\\int_\\Omega f\\varphi_i\\,d\\Omega$. However, it is much more common to write these integrals using inner product notation: $$(L(u),v)_\\Omega=(f, v)_\\Omega\\,\\forall v\\in V$$ Where $V$ is the space spanned by the basis functions $\\varphi_i$. The next step is to examine the linear operator $L(u)$ and determine how to compute the integral $(L(u),v)_\\Omega$ in the most accurate manner possible, which leads us to \"weak derivatives\".
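To connect this notation with code, the following minimal sketch (modeled on MFEM's ex1.cpp example; the mesh, polynomial order and coefficient values are placeholder choices) shows how a weak form such as $(\\lambda\\grad u,\\grad v)_\\Omega = (f,v)_\\Omega$ is typically assembled and solved.
// Minimal sketch following the ex1.cpp pattern: assemble M alpha = f for
// the weak form (lambda grad u, grad v) = (f, v) and solve it with PCG.
#include "mfem.hpp"
using namespace mfem;

void AssembleAndSolve(Mesh &mesh, int order)
{
   H1_FECollection fec(order, mesh.Dimension());
   FiniteElementSpace fespace(&mesh, &fec);

   // Placeholder coefficients: lambda = 1 and f = 1.
   ConstantCoefficient lambda(1.0), f(1.0);

   // Treat the whole boundary as homogeneous Dirichlet for this sketch,
   // so the discrete system is nonsingular.
   Array<int> ess_tdof_list;
   if (mesh.bdr_attributes.Size())
   {
      Array<int> ess_bdr(mesh.bdr_attributes.Max());
      ess_bdr = 1;
      fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);
   }

   BilinearForm a(&fespace);              // M_ij = (lambda grad phi_j, grad phi_i)
   a.AddDomainIntegrator(new DiffusionIntegrator(lambda));
   a.Assemble();

   LinearForm b(&fespace);                // f_i = (f, phi_i)
   b.AddDomainIntegrator(new DomainLFIntegrator(f));
   b.Assemble();

   GridFunction u(&fespace);
   u = 0.0;

   OperatorPtr A;
   Vector B, X;
   a.FormLinearSystem(ess_tdof_list, u, b, A, X, B);

   GSSmoother M((SparseMatrix&)(*A));     // simple Gauss-Seidel preconditioner
   PCG(*A, M, B, X, 1, 400, 1e-12, 0.0);  // solve for the dofs alpha
   a.RecoverFEMSolution(X, b, u);         // u_h = sum_i alpha_i phi_i
}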
Weak Derivatives A \"weak derivative\" is a generalization of the notion of a derivative for integrable functions whose derivatives do not exist in the strong sense. When using the finite element method weak derivatives are required whenever terms in a PDE require derivatives of discontinuous or otherwise non-differentiable quantities. Finite element basis functions are typically not smooth functions. Even if they happen to be continuous their derivatives are often at least partially discontinuous. Also, coefficient functions can be discontinuous but, more importantly, their derivatives are often not known. For these reasons PDE terms similar to $\\grad(\\lambda u)$ or $\\div\\grad u$ cannot be accurately computed using finite element basis functions without employing weak derivatives. Consider the following discontinuous approximation to the function $\\cos(2\\pi x)e^{-2x}$. Piecewise linear, discontinuous basis functions can approximate this function rather well on this coarse 4 element mesh. If we simply ignore the discontinuities and compute the piecewise derivatives of the basis functions we obtain the following approximation of the continuous function's derivative. This is a reasonable, albeit quite crude, approximation of the derivative. Expending a little more effort to compute the weak derivative using continuous 2nd order basis functions produces a far superior approximation. Clearly we will benefit from using weak derivatives to handle derivatives of discontinuous functions which arise in our linear operators. Weak Divergence Consider a linear operator of the form $L(u)=-\\div\\vec{\\alpha}(u)$ with $\\vec{\\alpha}\\equiv\\vec{\\beta}u+\\gamma\\grad u$, where $\\vec{\\beta}$ is a vector-valued function and $\\gamma$ is a scalar or tensor-valued function. The function $\\vec{\\alpha}$ is a general linear function of $u$ and its gradient. The weak divergence of this quantity would be calculated by multiplying $\\div\\vec{\\alpha}$ by a test function, $v$, and integrating over the domain $\\Omega$. $$(-\\div\\vec{\\alpha},v)_\\Omega \\equiv-\\int_\\Omega(\\div\\vec{\\alpha})v\\,d\\Omega$$ The negative sign in this expression is only a matter of convention. Using the vector calculus identity, $\\div(\\vec{\\alpha}v) = (\\div\\vec{\\alpha})v + \\vec{\\alpha}\\cdot\\grad v$, we find: $$(-\\div\\vec{\\alpha}, v)_\\Omega = (\\vec{\\alpha}, \\grad v)_\\Omega - \\int_\\Omega\\div(\\vec{\\alpha}v)\\,d\\Omega$$ We then use the Divergence theorem to obtain: $$(-\\div\\vec{\\alpha}, v)_\\Omega = (\\vec{\\alpha}, \\grad v)_\\Omega - \\int_\\dO(\\hat{n}\\cdot\\vec{\\alpha})v\\,d\\Gamma = (\\vec{\\alpha}, \\grad v)_\\Omega - (\\hat{n}\\cdot\\vec{\\alpha},v)_\\dO $$ Where $d\\Gamma$ is the area element on the boundary of $\\Omega$. For linear operators of this type the bilinear form $\\,(\\vec{\\alpha}, \\grad v)_\\Omega$ can be much more accurately approximated than the original bilinear form $\\,(-\\div\\vec{\\alpha}, v)_\\Omega$ provided we can accurately manage the boundary integral $\\,(\\hat{n}\\cdot\\vec{\\alpha},v)_\\dO$. Boundary integrals such as this can be used to incorporate Neumann boundary conditions into a PDE. See the Boundary Conditions page for more information on this. Weak Curl For the next example consider the weak curl of a vector operator. Let $L(u)=\\curl\\vec{\\alpha}(u)$ with $\\vec{\\alpha} \\equiv \\beta\\vec{u}+\\gamma\\curl\\vec{u}$, where $\\beta$ and $\\gamma$ are either scalar or tensor-valued functions. 
The function $\\vec{\\alpha}$ is a general linear function of $\\vec{u}$ and its curl. The weak curl of this quantity would be calculated by multiplying $\\curl\\vec{\\alpha}$ by a test function, $\\vec{v}$, and integrating over the domain $\\Omega$. $$(\\curl\\vec{\\alpha},\\vec{v})_\\Omega \\equiv \\int_\\Omega(\\curl\\vec{\\alpha})\\cdot\\vec{v}\\,d\\Omega$$ Using the vector calculus identity, $\\div(\\vec{\\alpha}\\cross\\vec{v}) = (\\curl\\vec{\\alpha})\\cdot\\vec{v} - \\vec{\\alpha}\\cdot(\\curl\\vec{v})$, we find: $$(\\curl\\vec{\\alpha},\\vec{v})_\\Omega = (\\vec{\\alpha},\\curl\\vec{v})_\\Omega + \\int_\\Omega\\div(\\vec{\\alpha}\\times\\vec{v})\\,d\\Omega$$ We again use the Divergence theorem to obtain: $$(\\curl\\vec{\\alpha},\\vec{v})_\\Omega = (\\vec{\\alpha},\\curl\\vec{v})_\\Omega + \\int_\\dO\\hat{n}\\cdot(\\vec{\\alpha}\\times\\vec{v})\\,d\\Gamma = (\\vec{\\alpha},\\curl\\vec{v})_\\Omega + (\\hat{n}\\cross\\vec{\\alpha},\\vec{v})_\\dO$$ Where we also made use of the scalar triple product, $\\hat{n}\\cdot(\\vec{\\alpha}\\cross\\vec{v}) = \\vec{v}\\cdot(\\hat{n}\\cross\\vec{\\alpha})$, in the last equality. Again it will be more accurate to use the bilinear form $(\\vec{\\alpha},\\curl\\vec{v})_\\Omega$ and a Neumann boundary condition will arise from the boundary integral. Weak Gradient For the last example consider the weak gradient of a scalar operator. Let $L(u)=-\\grad\\alpha(u)$ with $\\alpha\\equiv\\vec{\\beta}\\cdot\\vec{u}+\\gamma\\div\\vec{u}$, where $\\vec{\\beta}$ is a vector-valued function and $\\gamma$ is a scalar-valued function. The function $\\alpha$ is a general linear function of $\\vec{u}$ and its divergence. The weak gradient of this quantity would be calculated by multiplying $\\grad\\alpha$ by a test function, $\\vec{v}$, and integrating over the domain $\\Omega$. $$(-\\grad\\alpha,\\vec{v})_\\Omega \\equiv -\\int_\\Omega(\\grad\\alpha)\\cdot\\vec{v}\\,d\\Omega$$ The negative sign in this expression is again only a matter of convention. Using the vector calculus identity, $\\div(\\alpha\\vec{v}) = (\\grad\\alpha)\\cdot\\vec{v} + \\alpha\\div\\vec{v}$, we find: $$(-\\grad\\alpha,\\vec{v})_\\Omega = (\\alpha,\\div\\vec{v})_\\Omega - \\int_\\Omega\\div(\\alpha\\vec{v})\\,d\\Omega$$ We again use the Divergence theorem to obtain: $$(-\\grad\\alpha,\\vec{v})_\\Omega = (\\alpha,\\div\\vec{v})_\\Omega - \\int_\\dO\\hat{n}\\cdot(\\alpha\\vec{v})\\,d\\Gamma = (\\alpha,\\div\\vec{v})_\\Omega - (\\alpha\\hat{n},\\vec{v})_\\dO$$ Once again we find a complimentary bilinear form in $(\\alpha,\\div\\vec{v})_\\Omega$ and a boundary integral leading to a Neumann boundary condition. Other Types of Terms Partial differential equations with other types of terms such as spatial derivatives of order three or higher (e.g. $\\nabla^4u$) or coefficients in inconvenient locations (e.g. $\\alpha\\div(\\beta\\grad u)$) will often require the introduction of auxiliary variables unless algebraic manipulations can remove the inconvenient factors. For example, $$\\nabla^4 u=f$$ can be split into a pair of coupled equations: $$ \\begin{align*} \\nabla^2u &= \\psi\\\\ \\nabla^2\\psi &= f \\end{align*} $$ and $$\\alpha\\div(\\beta\\grad u)=f$$ can be split into: $$ \\begin{align*} \\beta\\grad u &= \\psi\\\\ \\alpha\\div\\psi &= f \\end{align*} $$ Careful examination of the required derivatives will often suggest the most appropriate choice for the basis functions to be used for such auxiliary fields. 
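As a brief illustration of how such a splitting feeds back into the weak formulations above, assume for simplicity homogeneous Dirichlet data $u=\\psi=0$ on $\\dO$ so that all boundary terms vanish; multiplying each equation of the $\\nabla^4u=f$ splitting by a test function and integrating by parts then gives $$ \\begin{align*} -(\\grad u,\\grad v)_\\Omega &= (\\psi, v)_\\Omega \\\\ -(\\grad\\psi,\\grad w)_\\Omega &= (f, w)_\\Omega \\end{align*} $$ for all test functions $v$ and $w$ that vanish on $\\dO$, i.e. two standard diffusion-type bilinear forms coupled through the auxiliary field $\\psi$.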
MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Weak Formulations"}, {"location": "fem_weak_form/#weak-formulations", "text": "$ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} \\newcommand{\\abs}[1]{|#1|} \\newcommand{\\dO}{{\\partial\\Omega}} $ Spaces of finite element basis functions are rarely rich enough to contain exact solutions to partial differential equations (PDEs) of interest. This is particularly true when we consider the irregular domains that often arise in practical simulations. One consequence of this is that finite element solutions often don't precisely satisfy the continuous PDEs being modeled. The goal is to build a finite element solution which approximates the true solution and satisfies the PDE in a weaker sense. Consider a general linear differential operator $L(u)$ and the partial differential equation: $$L(u) = f\\mbox{ on }\\Omega$$ We approximate the solution using a linear combination of finite element basis functions which we'll call $\\varphi_i$. $$u\\approx u_h\\equiv\\sum_i\\alpha_i\\varphi_i(\\vec{x})$$ The basis functions $\\varphi_i$ are known but we need to find the degrees of freedom, $\\alpha_i$, which produce a reasonable approximation of $u$. In Galerkin finite element methods this is done by multiplying the PDE by each of the basis functions and integrating over the problem domain. If we have a total of $N$ finite element basis functions, this leads to a set of $N$ equations for the $N$ unknowns. The resulting system of equations for the $\\alpha_i$ is called the \"weak formulation\" of the PDE. The weak formulation of this problem can be written as: $$\\sum_j\\alpha_j\\int_\\Omega L(\\varphi_j)\\varphi_i\\dO = \\int_\\Omega f\\varphi_i\\dO$$ or by the matrix equation: $$M\\vec{\\alpha}=\\vec{f}$$ Where the matrix entries $M_{ij}\\equiv\\int_\\Omega L(\\varphi_j)\\varphi_i\\dO$ and the entries of $\\,\\vec{f}$ are given by $\\,f_i\\equiv\\int_\\Omega f\\varphi_i\\dO$. However, it is much more common to write these integrals using inner product notation: $$(L(u),v)_\\Omega=(f, v)_\\Omega\\,\\forall v\\in V$$ Where $V$ is space spanned by the basis functions $\\varphi_i$. The next step is to examine the linear operator $L(u)$ and determine how to compute the integral $(L(u),v)_\\Omega$ in the most accurate manner possible which leads us to \"weak derivatives\".", "title": "Weak Formulations"}, {"location": "fem_weak_form/#weak-derivatives", "text": "A \"weak derivative\" is a generalization of the notion of a derivative for integrable functions whose derivatives do not exist in the strong sense. When using the finite element method weak derivatives are required whenever terms in a PDE require derivatives of discontinuous or otherwise non-differentiable quantities. Finite element basis functions are typically not smooth functions. Even if they happen to be continuous their derivatives are often at least partially discontinuous. Also, coefficient functions can be discontinuous but, more importantly, their derivatives are often not known. For these reasons PDE terms similar to $\\grad(\\lambda u)$ or $\\div\\grad u$ cannot be accurately computed using finite element basis functions without employing weak derivatives. Consider the following discontinuous approximation to the function $\\cos(2\\pi x)e^{-2x}$. 
Piecewise linear, discontinuous basis functions can approximate this function rather well on this coarse 4 element mesh. If we simply ignore the discontinuities and compute the piecewise derivatives of the basis functions we obtain the following approximation of the continuous function's derivative. This is a reasonable, albeit quite crude, approximation of the derivative. Expending a little more effort to compute the weak derivative using continuous 2nd order basis functions produces a far superior approximation. Clearly we will benefit from using weak derivatives to handle derivatives of discontinuous functions which arise in our linear operators.", "title": "Weak Derivatives"}, {"location": "fem_weak_form/#weak-divergence", "text": "Consider a linear operator of the form $L(u)=-\\div\\vec{\\alpha}(u)$ with $\\vec{\\alpha}\\equiv\\vec{\\beta}u+\\gamma\\grad u$, where $\\vec{\\beta}$ is a vector-valued function and $\\gamma$ is a scalar or tensor-valued function. The function $\\vec{\\alpha}$ is a general linear function of $u$ and its gradient. The weak divergence of this quantity would be calculated by multiplying $\\div\\vec{\\alpha}$ by a test function, $v$, and integrating over the domain $\\Omega$. $$(-\\div\\vec{\\alpha},v)_\\Omega \\equiv-\\int_\\Omega(\\div\\vec{\\alpha})v\\,d\\Omega$$ The negative sign in this expression is only a matter of convention. Using the vector calculus identity, $\\div(\\vec{\\alpha}v) = (\\div\\vec{\\alpha})v + \\vec{\\alpha}\\cdot\\grad v$, we find: $$(-\\div\\vec{\\alpha}, v)_\\Omega = (\\vec{\\alpha}, \\grad v)_\\Omega - \\int_\\Omega\\div(\\vec{\\alpha}v)\\,d\\Omega$$ We then use the Divergence theorem to obtain: $$(-\\div\\vec{\\alpha}, v)_\\Omega = (\\vec{\\alpha}, \\grad v)_\\Omega - \\int_\\dO(\\hat{n}\\cdot\\vec{\\alpha})v\\,d\\Gamma = (\\vec{\\alpha}, \\grad v)_\\Omega - (\\hat{n}\\cdot\\vec{\\alpha},v)_\\dO $$ Where $d\\Gamma$ is the area element on the boundary of $\\Omega$. For linear operators of this type the bilinear form $\\,(\\vec{\\alpha}, \\grad v)_\\Omega$ can be much more accurately approximated than the original bilinear form $\\,(-\\div\\vec{\\alpha}, v)_\\Omega$ provided we can accurately manage the boundary integral $\\,(\\hat{n}\\cdot\\vec{\\alpha},v)_\\dO$. Boundary integrals such as this can be used to incorporate Neumann boundary conditions into a PDE. See the Boundary Conditions page for more information on this.", "title": "Weak Divergence"}, {"location": "fem_weak_form/#weak-curl", "text": "For the next example consider the weak curl of a vector operator. Let $L(u)=\\curl\\vec{\\alpha}(u)$ with $\\vec{\\alpha} \\equiv \\beta\\vec{u}+\\gamma\\curl\\vec{u}$, where $\\beta$ and $\\gamma$ are either scalar or tensor-valued functions. The function $\\vec{\\alpha}$ is a general linear function of $\\vec{u}$ and its curl. The weak curl of this quantity would be calculated by multiplying $\\curl\\vec{\\alpha}$ by a test function, $\\vec{v}$, and integrating over the domain $\\Omega$. 
$$(\\curl\\vec{\\alpha},\\vec{v})_\\Omega \\equiv \\int_\\Omega(\\curl\\vec{\\alpha})\\cdot\\vec{v}\\,d\\Omega$$ Using the vector calculus identity, $\\div(\\vec{\\alpha}\\cross\\vec{v}) = (\\curl\\vec{\\alpha})\\cdot\\vec{v} - \\vec{\\alpha}\\cdot(\\curl\\vec{v})$, we find: $$(\\curl\\vec{\\alpha},\\vec{v})_\\Omega = (\\vec{\\alpha},\\curl\\vec{v})_\\Omega + \\int_\\Omega\\div(\\vec{\\alpha}\\times\\vec{v})\\,d\\Omega$$ We again use the Divergence theorem to obtain: $$(\\curl\\vec{\\alpha},\\vec{v})_\\Omega = (\\vec{\\alpha},\\curl\\vec{v})_\\Omega + \\int_\\dO\\hat{n}\\cdot(\\vec{\\alpha}\\times\\vec{v})\\,d\\Gamma = (\\vec{\\alpha},\\curl\\vec{v})_\\Omega + (\\hat{n}\\cross\\vec{\\alpha},\\vec{v})_\\dO$$ Where we also made use of the scalar triple product, $\\hat{n}\\cdot(\\vec{\\alpha}\\cross\\vec{v}) = \\vec{v}\\cdot(\\hat{n}\\cross\\vec{\\alpha})$, in the last equality. Again it will be more accurate to use the bilinear form $(\\vec{\\alpha},\\curl\\vec{v})_\\Omega$ and a Neumann boundary condition will arise from the boundary integral.", "title": "Weak Curl"}, {"location": "fem_weak_form/#weak-gradient", "text": "For the last example consider the weak gradient of a scalar operator. Let $L(u)=-\\grad\\alpha(u)$ with $\\alpha\\equiv\\vec{\\beta}\\cdot\\vec{u}+\\gamma\\div\\vec{u}$, where $\\vec{\\beta}$ is a vector-valued function and $\\gamma$ is a scalar-valued function. The function $\\alpha$ is a general linear function of $\\vec{u}$ and its divergence. The weak gradient of this quantity would be calculated by multiplying $\\grad\\alpha$ by a test function, $\\vec{v}$, and integrating over the domain $\\Omega$. $$(-\\grad\\alpha,\\vec{v})_\\Omega \\equiv -\\int_\\Omega(\\grad\\alpha)\\cdot\\vec{v}\\,d\\Omega$$ The negative sign in this expression is again only a matter of convention. Using the vector calculus identity, $\\div(\\alpha\\vec{v}) = (\\grad\\alpha)\\cdot\\vec{v} + \\alpha\\div\\vec{v}$, we find: $$(-\\grad\\alpha,\\vec{v})_\\Omega = (\\alpha,\\div\\vec{v})_\\Omega - \\int_\\Omega\\div(\\alpha\\vec{v})\\,d\\Omega$$ We again use the Divergence theorem to obtain: $$(-\\grad\\alpha,\\vec{v})_\\Omega = (\\alpha,\\div\\vec{v})_\\Omega - \\int_\\dO\\hat{n}\\cdot(\\alpha\\vec{v})\\,d\\Gamma = (\\alpha,\\div\\vec{v})_\\Omega - (\\alpha\\hat{n},\\vec{v})_\\dO$$ Once again we find a complementary bilinear form in $(\\alpha,\\div\\vec{v})_\\Omega$ and a boundary integral leading to a Neumann boundary condition.", "title": "Weak Gradient"}, {"location": "fem_weak_form/#other-types-of-terms", "text": "Partial differential equations with other types of terms such as spatial derivatives of order three or higher (e.g. $\\nabla^4u$) or coefficients in inconvenient locations (e.g. $\\alpha\\div(\\beta\\grad u)$) will often require the introduction of auxiliary variables unless algebraic manipulations can remove the inconvenient factors. For example, $$\\nabla^4 u=f$$ can be split into a pair of coupled equations: $$ \\begin{align*} \\nabla^2u &= \\psi\\\\ \\nabla^2\\psi &= f \\end{align*} $$ and $$\\alpha\\div(\\beta\\grad u)=f$$ can be split into: $$ \\begin{align*} \\beta\\grad u &= \\psi\\\\ \\alpha\\div\\psi &= f \\end{align*} $$ Careful examination of the required derivatives will often suggest the most appropriate choice for the basis functions to be used for such auxiliary fields.
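For readers who want to see how a weak form like the ones above is assembled in MFEM code, the following is a minimal, self-contained sketch (illustrative only, not part of the original page; the mesh size, element type and polynomial order are arbitrary choices) that assembles the complementary bilinear form $(\\gamma\\grad u,\\grad v)_\\Omega$ from the weak divergence discussion with $\\vec{\\beta}=0$ and $\\gamma=1$: #include \"mfem.hpp\" using namespace mfem; int main() { Mesh mesh = Mesh::MakeCartesian2D(8, 8, Element::QUADRILATERAL); /* small structured mesh */ H1_FECollection fec(2, mesh.Dimension()); /* continuous, 2nd order basis functions */ FiniteElementSpace fes(&mesh, &fec); ConstantCoefficient gamma(1.0); BilinearForm a(&fes); a.AddDomainIntegrator(new DiffusionIntegrator(gamma)); /* assembles (gamma grad u, grad v) */ a.Assemble(); return 0; } Boundary integrals of the kind derived above would then enter through boundary integrators or the right-hand side, depending on the boundary conditions.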
MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Other Types of Terms"}, {"location": "fluids/", "text": "Navier-Stokes Mini Application The solver implemented in this miniapp solves the transient incompressible Navier-Stokes equations. Theory The equations are given in the non-dimensionalized form \\begin{align} \\frac{\\partial u}{\\partial t} + (u \\cdot \\nabla) u - \\frac{1}{Re} \\nabla^2 u + \\nabla p &= f & \\quad \\text{in } \\Omega\\\\ \\nabla \\cdot u &= 0 & \\quad \\text{in } \\Omega \\end{align} where $Re$ represents the Reynolds number. In order to solve these equations, the method presented in Tomboulides (1997) 1 is used, which is based on an equal order finite element discretization on quadrilateral or hexahedral elements of high polynomial order. The method describes an implicit-explicit time-integration scheme for the viscous and convective terms respectively. Introducing the following notation the nonlinear term $N(u) = -(u \\cdot \\nabla) u$ and the time-extrapolated form \\begin{align} \\label{eq:Next} N^*(u^{n+1}) = \\sum_{j=1}^k a_j N(u^{n+1-j}) \\end{align} where $a_j$ are coefficients from the corresponding explicit time integration method. Applying a BDF method with coefficients $b_j$ to the initial equation using the introduced forms yields \\begin{align} \\sum_{j=0}^k \\frac{b_j}{\\Delta t} u^{n+1-j} = -\\nabla p^{n+1} + L(u^{n+1}) + N^*(u^{n+1}) + f^{n+1}. \\end{align} Collecting all known quantities at a given time with \\begin{align} F^*(u^{n+1}) = -\\sum_{j=1}^k \\frac{b_j}{\\Delta t} u^{n+1-j} + N^*(u^{n+1}) + f^{n+1} \\end{align} the BDF expression reduces to \\begin{align} \\label{eq:bdf_short} \\frac{b_0}{\\Delta t} u^{n+1} = -\\nabla p^{n+1} + L(u^{n+1}) + F^*(u^{n+1}). \\end{align} To achieve a high order convergence in space, the linear term $L(u)$ is replaced by \\begin{align} L_{\\times}(u) = \\nu \\nabla(\\nabla \\cdot u) - \\nu \\nabla \\times \\nabla \\times u \\end{align} which is used to weakly enforce incompressibility by setting the first term to zero. Like in \\eqref{eq:Next} we introduce the time extrapolated term \\begin{align} L^*_{\\times}(u^{n+1}) = \\sum_{j=1}^k a_j L_{\\times}(u^{n+1-j}). \\end{align} To compute the pressure we rearrange \\eqref{eq:bdf_short} and take the divergence on both sides \\begin{align} \\label{eq:prespois} \\nabla^2 p^{n+1} = \\nabla \\cdot (L_{\\times}^*(u^{n+1}) + F^*(u^{n+1})), \\end{align} which is closed by the Neumann type boundary condition \\begin{align} \\nabla p^{n+1} \\cdot \\hat{n} = -\\frac{b_0}{\\Delta t} u^{n+1} \\cdot \\hat{n} + (L_{\\times}^*(u^{n+1}) + F^*(u^{n+1})) \\cdot \\hat{n}. \\end{align} We will refer to this as the pressure Poisson equation in the following. The last step is a Helmholtz type equation to solve for the implicit (viscous) velocity part which is also derived from \\eqref{eq:bdf_short}. Consider \\begin{align} \\label{eq:hlm} \\frac{b_0}{\\Delta t} u^{n+1} - L(u^{n+1}) = -\\nabla p^{n+1} + F^*(u^{n+1}) \\end{align} with the Dirichlet (essential type) boundary condition \\begin{align} u^{n+1} = g_D^{n+1}. \\end{align} A detailed walk through can also be found in Franco et al (2020) 2 . Note The notation is very similar to what is used in the code to make it easy to follow the theoretical explanation and understand what is done in the implementation. Boundary Conditions Inflow and no-slip walls For inflow or no-slip wall boundary conditions one should use the method NavierSolver::AddVelDirichletBC . 
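As a rough illustration (a sketch only, not from the miniapp documentation: the mesh pointer pmesh , the polynomial order 4 and the kinematic viscosity 0.001 are assumed placeholder values, and the exact constructor and overloads of AddVelDirichletBC should be checked against the navier miniapp sources), registering an inflow velocity on boundary attribute 1 could look like: void vel_inflow(const Vector &x, double t, Vector &u) { u = 0.0; u(0) = 1.0; /* illustrative: unit inflow in the x direction */ } NavierSolver flowsolver(pmesh, 4, 0.001); Array<int> attr(pmesh->bdr_attributes.Max()); attr = 0; attr[0] = 1; /* select boundary attribute 1 */ flowsolver.AddVelDirichletBC(vel_inflow, attr);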
This enforces the value on $u^{n+1}$ in \\eqref{eq:hlm}. It is valid to call this method multiple times on different boundary attributes of the mesh. The NavierSolver instance keeps track of the associated Coefficient and accompanying boundary attribute. The passed attribute array can be modified, deleted or reused, since a copy is created. Pressure outlet If an outlet of a domain is supposed to represent a pressure outlet (e.g. zero-pressure), one should use the method NavierSolver::AddPresDirichletBC . This enforces the pressure value $p^{n+1}$ in \\eqref{eq:prespois}. Zero-stress This boundary condition is used to represent an outflow attribute. Due to the nature of the $H^1$ finite-element discretization, the terms arise naturally in \\eqref{eq:prespois} and \\eqref{eq:hlm} resulting in \\begin{align} \\nu \\nabla u \\cdot \\hat{n} - p \\mathbb{I} \\cdot \\hat{n} = 0, \\end{align} where $\\mathbb{I}$ represents the identity tensor. If there is no other boundary condition applied to a certain attribute, this boundary condition is applied automatically (not through modification but rather through the formulation). Solvers and preconditioners The choice of solvers and preconditioners for \\eqref{eq:prespois} and \\eqref{eq:hlm} is essential for the performance and robustness of the simulation. The pressure Poisson equation \\eqref{eq:prespois} is solved using the CG Krylov method in combination with the low-order refined preconditioning technique coupled with AMG (cf. Franco et al (2020) 2 ). Due to the nature of the explicit time discretization of the nonlinear term, the method used is CFL (and therefore time step) bound. As a result the time derivative term in \\eqref{eq:hlm} is dominating and a CG Krylov method preconditioned with Jacobi is sufficient. Depending on the problem, this results in the majority of time per time step being spent in the pressure Poisson solve. At the moment there is no interface to change the default options for the solvers, but a user can easily modify them in the code itself. FAQ You are using the spectral element method, why is the mass matrix not a vector representing the condensed diagonal? This is a design choice. It is possible to use the \"numerical integration\" option, which produces a diagonal mass matrix with 1 non zero value per row. This leaves freedom to experiment. Do you support simulations using real parameters? No, right now you have to non-dimensionalize your problem. Not doing this impacts the performance a lot. I want to implement turbulence model X, how do I do that? This is another design choice to make and should be discussed, preferably in a GitHub issue. Why doesn't it have adaptive time stepping? While it is possible and there exists a branch that works with varying step sizes (variable order/variable step size IMEX), I have not found a reliable and robust method to determine the step size (CFL based error estimators are very squishy here or have to use a very conservative limit). How do I compute steady state solutions with this? There is no acceleration to steady state algorithm implemented right now. Your only option is to run the transient case until you reach a steady state criterion. (See adaptive time stepping FAQ above). MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}}); A. G. Tomboulides, J. C. Y. Lee & S. A.
Orszag (1997) Numerical Simulation of Low Mach Number Reactive Flows \u21a9 Michael Franco, Jean-Sylvain Camier, Julian Andrej, Will Pazner (2020) High-order matrix-free incompressible flow solvers with GPU acceleration and low-order refined preconditioners (https://arxiv.org/abs/1910.03032) \u21a9 \u21a9", "title": "Fluid Dynamics"}, {"location": "fluids/#navier-stokes-mini-application", "text": "The solver implemented in this miniapp solves the transient incompressible Navier-Stokes equations.", "title": "Navier-Stokes Mini Application"}, {"location": "fluids/#theory", "text": "The equations are given in the non-dimensionalized form \\begin{align} \\frac{\\partial u}{\\partial t} + (u \\cdot \\nabla) u - \\frac{1}{Re} \\nabla^2 u + \\nabla p &= f & \\quad \\text{in } \\Omega\\\\ \\nabla \\cdot u &= 0 & \\quad \\text{in } \\Omega \\end{align} where $Re$ represents the Reynolds number. In order to solve these equations, the method presented in Tomboulides (1997) 1 is used, which is based on an equal order finite element discretization on quadrilateral or hexahedral elements of high polynomial order. The method describes an implicit-explicit time-integration scheme for the viscous and convective terms respectively. Introducing the following notation the nonlinear term $N(u) = -(u \\cdot \\nabla) u$ and the time-extrapolated form \\begin{align} \\label{eq:Next} N^*(u^{n+1}) = \\sum_{j=1}^k a_j N(u^{n+1-j}) \\end{align} where $a_j$ are coefficients from the corresponding explicit time integration method. Applying a BDF method with coefficients $b_j$ to the initial equation using the introduced forms yields \\begin{align} \\sum_{j=0}^k \\frac{b_j}{\\Delta t} u^{n+1-j} = -\\nabla p^{n+1} + L(u^{n+1}) + N^*(u^{n+1}) + f^{n+1}. \\end{align} Collecting all known quantities at a given time with \\begin{align} F^*(u^{n+1}) = -\\sum_{j=1}^k \\frac{b_j}{\\Delta t} u^{n+1-j} + N^*(u^{n+1}) + f^{n+1} \\end{align} the BDF expression reduces to \\begin{align} \\label{eq:bdf_short} \\frac{b_0}{\\Delta t} u^{n+1} = -\\nabla p^{n+1} + L(u^{n+1}) + F^*(u^{n+1}). \\end{align} To achieve a high order convergence in space, the linear term $L(u)$ is replaced by \\begin{align} L_{\\times}(u) = \\nu \\nabla(\\nabla \\cdot u) - \\nu \\nabla \\times \\nabla \\times u \\end{align} which is used to weakly enforce incompressibility by setting the first term to zero. Like in \\eqref{eq:Next} we introduce the time extrapolated term \\begin{align} L^*_{\\times}(u^{n+1}) = \\sum_{j=1}^k a_j L_{\\times}(u^{n+1-j}). \\end{align} To compute the pressure we rearrange \\eqref{eq:bdf_short} and take the divergence on both sides \\begin{align} \\label{eq:prespois} \\nabla^2 p^{n+1} = \\nabla \\cdot (L_{\\times}^*(u^{n+1}) + F^*(u^{n+1})), \\end{align} which is closed by the Neumann type boundary condition \\begin{align} \\nabla p^{n+1} \\cdot \\hat{n} = -\\frac{b_0}{\\Delta t} u^{n+1} \\cdot \\hat{n} + (L_{\\times}^*(u^{n+1}) + F^*(u^{n+1})) \\cdot \\hat{n}. \\end{align} We will refer to this as the pressure Poisson equation in the following. The last step is a Helmholtz type equation to solve for the implicit (viscous) velocity part which is also derived from \\eqref{eq:bdf_short}. Consider \\begin{align} \\label{eq:hlm} \\frac{b_0}{\\Delta t} u^{n+1} - L(u^{n+1}) = -\\nabla p^{n+1} + F^*(u^{n+1}) \\end{align} with the Dirichlet (essential type) boundary condition \\begin{align} u^{n+1} = g_D^{n+1}. \\end{align} A detailed walk through can also be found in Franco et al (2020) 2 . 
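As a concrete illustration of the formulas above (these are the standard second-order coefficients; which order the miniapp actually uses should be checked in its source), for $k=2$ one has $b_0=3/2$, $b_1=-2$, $b_2=1/2$ and $a_1=2$, $a_2=-1$, so that \\begin{align*} \\sum_{j=0}^{2} \\frac{b_j}{\\Delta t} u^{n+1-j} &= \\frac{3u^{n+1}-4u^n+u^{n-1}}{2\\Delta t}, \\\\ N^*(u^{n+1}) &= 2N(u^n)-N(u^{n-1}). \\end{align*}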
Note The notation is very similar to what is used in the code to make it easy to follow the theoretical explanation and understand what is done in the implementation.", "title": "Theory"}, {"location": "fluids/#boundary-conditions", "text": "", "title": "Boundary Conditions"}, {"location": "fluids/#inflow-and-no-slip-walls", "text": "For inflow or no-slip wall boundary conditions one should use the method NavierSolver::AddVelDirichletBC . This enforces the value on $u^{n+1}$ in \\eqref{eq:hlm}. It is valid to call this method multiple times on different boundary attributes of the mesh. The NavierSolver instance keeps track of the associated Coefficient and accompanying boundary attribute. The passed attribute array can be modified, deleted or reused, since a copy is created.", "title": "Inflow and no-slip walls"}, {"location": "fluids/#pressure-outlet", "text": "If an outlet of a domain is supposed to represent a pressure outlet (e.g. zero-pressure), one should use the method NavierSolver::AddPresDirichletBC . This enforces the pressure value $p^{n+1}$ in \\eqref{eq:prespois}.", "title": "Pressure outlet"}, {"location": "fluids/#zero-stress", "text": "This boundary condition is used to represent an outflow attribute. Due to the nature of the $H^1$ finite-element discretization, the terms arise naturally in \\eqref{eq:prespois} and \\eqref{eq:hlm} resulting in \\begin{align} \\nu \\nabla u \\cdot \\hat{n} - p \\mathbb{I} \\cdot \\hat{n} = 0, \\end{align} where $\\mathbb{I}$ represents the identity tensor. If there is no other boundary condition applied to a certain attribute, this boundary condition is applied automatically (not through modification but rather through the formulation).", "title": "Zero-stress"}, {"location": "fluids/#solvers-and-preconditioners", "text": "The choice of solvers and preconditioners for \\eqref{eq:prespois} and \\eqref{eq:hlm} is essential for the performance and robustness of the simulation. The pressure Poisson equation \\eqref{eq:prespois} is solved using the CG Krylov method in combination with the low-order refined preconditioning technique coupled with AMG (cf. Franco et al (2020) 2 ). Due to the nature of the explicit time discretization of the nonlinear term, the method used is CFL (and therefore time step) bound. As a result the time derivative term in \\eqref{eq:hlm} is dominating and a CG Krylov method preconditioned with Jacobi is sufficient. Depending on the problem, this results in the majority of time per time step being spent in the pressure Poisson solve. At the moment there is no interface to change the default options for the solvers, but a user can easily modify them in the code itself.", "title": "Solvers and preconditioners"}, {"location": "fluids/#faq", "text": "You are using the spectral element method, why is the mass matrix not a vector representing the condensed diagonal? This is a design choice. It is possible to use the \"numerical integration\" option, which produces a diagonal mass matrix with 1 non zero value per row. This leaves freedom to experiment. Do you support simulations using real parameters? No, right now you have to non-dimensionalize your problem. Not doing this impacts the performance a lot. I want to implement turbulence model X, how do I do that? This is another design choice to make and should be discussed, preferably in a GitHub issue. Why doesn't it have adaptive time stepping?
While it is possible and there exists a branch that works with varying step sizes (variable order/variable step size IMEX), I have not found a reliable and robust method to determine the step size (CFL based error estimators are very squishy here or have to use a very conservative limit). How do I compute steady state solutions with this? There is no acceleration to steady state algorithm implemented right now. Your only option is to run the transient case until you reach a steady state criterion. (See adaptive time stepping FAQ above). MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}}); A. G. Tomboulides, J. C. Y. Lee & S. A. Orszag (1997) Numerical Simulation of Low Mach Number Reactive Flows \u21a9 Michael Franco, Jean-Sylvain Camier, Julian Andrej, Will Pazner (2020) High-order matrix-free incompressible flow solvers with GPU acceleration and low-order refined preconditioners (https://arxiv.org/abs/1910.03032) \u21a9 \u21a9", "title": "FAQ"}, {"location": "gallery/", "text": "Gallery This page collects screenshots from various simulations based on MFEM. Image captions with \ud83c\udfac link to simulation videos. Additional images can be found in the GLVis gallery . A version of the MFEM logo demonstrating curvilinear elements, adaptive mesh refinement and (idealized) parallel partitioning. Visualization with GLVis . Incompressible Taylor-Green vortex simulation with high-order finite elements. Visualization with ParaView . Fibers generated by LDRB approach based on 4 Laplacian solves in the Cardioid project. Solution of a Maxwell problem on a Klein bottle. Mesh generated with the klein-bottle miniapp. Solution with Example 3 . Comparisons of equipotential surfaces and force lines from Maxwell's Treatise on Electricity and Magnetism with results from MFEM's Volta miniapp . Level surfaces in the interior of the solution from Example 1 on escher.mesh . Visualization with GLVis . 3D Arbitrary Lagrangian-Eulerian (ALE) simulation of a shock-triple point interaction with Q2-Q1 elements in the MFEM-based BLAST shock hydrodynamics code. Volume visualization with VisIt . Modeling elastic-plastic flow in the 3D Taylor high-velocity impact problem using 4th order mixed elements in the MFEM-based BLAST shock hydrodynamics code. Visualization with VisIt . Poisson problem on a \"Breather\" surface. Mesh generated with the Mesh Explorer miniapp. Solution with Example 1 . Triple point shock interaction on 4 elements of order 12. Note the element curvature and the high variation of the field inside the lower right element. Visualization of the electric field generated by the electrical wave on rabbit heart ventricles during depolarization of the heart. Image courtesy of Dennis Ogiermann, winner of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Incompressible fluid flow around a rotating turbine using a space-time embedded-hybridized discontinuous Galerkin discretization. Image courtesy of Tamas Horvath, winner of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Magnetic diffusion problem solved to compute the magnetic field induced by current running through copper wire in air. Image courtesy of Will Pazner, winner of the 2022 MFEM Workshop Visualization Contest. \ud83c\udfac Shock-bubble-interaction using a Property-preserving discontinuous Galerkin scheme, see book . Image courtesy of Hennes Hajduk, as part of the 2023 MFEM Workshop Visualization Contest. 
\ud83c\udfac Re=50,000 incompressible Navier-Stokes wall-resolved LES of a NACA 0012 airfoil in stall regime using MFEM's Navier miniapp. Image courtesy of \u00c9tienne Spieser, as part of the 2023 MFEM Workshop Visualization Contest. \ud83c\udfac Plane wave scattering from a cube using a DPG Ultraweak formulation in MFEM to solve the time-harmonic linear acoustics equations. Image courtesy of Socratis Petrides, as part of the 2023 MFEM Workshop Visualization Contest. Density-based Topology Optimization for Cantilever beam with SiMPL method . Image courtesy of Dohyun Kim, as part of the 2024 MFEM Workshop Visualization Contest. \ud83c\udfac Shape interpolation between a torus and a bunny by computing their generalized Wasserstein barycenter. This barycenter is obtained by solving a mean-field optimal control problem. Image courtesy of Arjun Vijaywargiya, as part of the 2024 MFEM Workshop Visualization Contest. Streamlines of the magnetic field from a parallel computation of the magnetostatic interaction of two magnetic orbs. Visualization with VTK . Test of the propagation of a spherical shock wave through a random non-conforming mesh in the MFEM-based BLAST shock hydrodynamics code. Visualization with GLVis . Slice image of the high harmonic fast wave propagation in the NSTX-U magnetic fusion device. Computed using MFEM's 4th order H(curl) elements by the RF-SciDAC project . An electromagnetic eigenmode of a star-shaped domain computed with 3rd order finite elements computed with Example 13 . High-order multi-material inertial confinement fusion (ICF)-like implosion in the MFEM-based BLAST shock hydrodynamics code. Visualization with VisIt . Two-region AMR mesh generated by the Shaper miniapp from successive adaptation to the outlines of Australia. Radiating Kelvin-Helmholtz modeled with the MFEM-based BLAST shock hydrodynamics code. Volume visualization with VisIt . \ud83c\udfac Simulation-driven r-adaptivity using TMOP for a three-material high-velocity gas impact in BLAST . Visualization with VisIt . The Shaper miniapp applied to a multi-material input functions described by the iterates of the Mandelbrot set. Visualization with GLVis . Topology optimization of a drone body using LLNL's LiDO project , based on MFEM. Compressible Euler equations, Mach 3 flow around a cylinder in 2D, stabilized DG-P1 spacial discretization. Image courtesy of Hennes Hajduk, as part of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Axisymmetric computation of an air flow in a tube with continuous Galerkin discretization. Image courtesy of Raphael Zanella, as part of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Inviscid Kelvin-Helmholtz instability using high-order invariant domain preserving discontinuous Galerkin methods with convex limiting. Image courtesy of Will Pazner, as part of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Compressible Euler in Lagrangian frame using the Laghos miniapp. Image courtesy of Vladimir Tomov, as part of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Adaptive, implicit resistive MHD solver (from TDS-SciDAC ) resolves multi-scale features of plasmoid instability. \ud83c\udfac Topology-optimized heat sink obtained by minimizing the thermal energy in a domain with constant internal heating. Image courtesy of Tobias Duswald, winner of the 2022 MFEM Workshop Visualization Contest. \ud83c\udfac Leapfrogging vortex rings using an incompressible Schr\u00f6dinger fluid solver in MFEM. 
Image courtesy of John Camier, winner of the 2023 MFEM Workshop Visualization Contest. Displacement distribution of a loaded excavator arm under static equilibrium using MFEM's API in an external library. Image courtesy of Mehran Ebrahimi, winner of the 2023 MFEM Workshop Visualization Contest. \ud83c\udfac Multi-component topology optimization with conformal meshes. Image courtesy of Mathias Schmidt, winner of the 2024 MFEM Workshop Visualization Contest. Electric field induced by an MRI gradient coil in a human body. Simulation by the Magnetic Resonance Physics and Instrumentation Group at Harvard Medical School. Multi-mode Rayleigh-Taylor instability simulation using 4th order mixed elements in the MFEM-based BLAST shock hydrodynamics code. Visualization with VisIt . Purely Lagrangian Rayleigh-Taylor instability simulation using 8th order mixed elements in the MFEM-based BLAST shock hydrodynamics code. Visualization with GLVis . Anisotropic refinement in a 2D shock-like AMR test problem. Visualization with GLVis . Parallel version of Example 1 on 100 processors with a relatively coarse version of square-disc.mesh . Visualization with GLVis . Anisotropic refinement in a 3D version of the AMR test. Portion of the spherical domain is cut away in GLVis . Structural topology optimization with MFEM in LLNL's Center for Design and Optimization . Test of the anisotropic refinement feature on a random mesh. A slightly modified version of Example 1 . Visualization with GLVis . Level lines in a cutting plane of the solution from the parallel version of Example 1 on 64 processors with fichera.mesh . Visualization with GLVis . Cut image of the solution from Example 1 on a sharply twisted, high order toroidal mesh. The mesh was generated with the toroid miniapp. Cut image of an induction coil mesh and three sub-meshes created with the Trimmer miniapp. Visualization with VisIt . Viscoelastic flow of blood through an artery with aneurysm modeled by the Hookean dumbbell model discretized with BCF-method (Navier-Stokes+SUPG). Image courtesy of Andreas Meier, as part of the 2021 MFEM Workshop Visualization Contest. Visualization of time-averaged mean flow from a compressible, DG Navier-Stokes solver using MFEM modeling a plasma torch. Image courtesy of Karl W. Schulz, as part of the 2021 MFEM Workshop Visualization Contest. Streamlines of the electric field generated by a current dipole source located in the temporal lobe of an epilepsy patient. Image courtesy of Ben Zwick, winner of the 2022 MFEM Workshop Visualization Contest. Flow through periodic Gyroid micro-cell, MFEM Navier mini-app with additional Brinkman penalization. Image courtesy of Mathias Schmidt, as part of the 2022 MFEM Workshop Visualization Contest. \ud83c\udfac Turbulence effect of the Kelvin-Helmholtz instability in tokamak edge plasma using an MHDeX code developed at LLNL. Image courtesy of Milan Holec, as part of the 2023 MFEM Workshop Visualization Contest. \ud83c\udfac Topology optimization with conformal meshes to maximize beam stiffness under a downward force on the right wall. Image courtesy of Ketan Mittal and Mathias Schmidt, as part of the 2023 MFEM Workshop Visualization Contest. Penrose unilluminable room appears rather illuminable in 3D (at least when constructed as a solid of revolution). Image courtesy of Amit Rotem, as part of the 2023 MFEM Workshop Visualization Contest. Heat flux magnitude in a convection - (anisotropic) diffusion simulation with MFEM text as the initial temperature profile. 
A single implicit step of the HDG scheme was used. Image courtesy of Jan Nikl, winner of the 2024 MFEM Workshop Visualization Contest.", "title": "Gallery"}, {"location": "gallery/#gallery", "text": "This page collects screenshots from various simulations based on MFEM. Image captions with \ud83c\udfac link to simulation videos. Additional images can be found in the GLVis gallery . A version of the MFEM logo demonstrating curvilinear elements, adaptive mesh refinement and (idealized) parallel partitioning. Visualization with GLVis . Incompressible Taylor-Green vortex simulation with high-order finite elements. Visualization with ParaView . Fibers generated by LDRB approach based on 4 Laplacian solves in the Cardioid project. Solution of a Maxwell problem on a Klein bottle. Mesh generated with the klein-bottle miniapp. Solution with Example 3 . Comparisons of equipotential surfaces and force lines from Maxwell's Treatise on Electricity and Magnetism with results from MFEM's Volta miniapp . Level surfaces in the interior of the solution from Example 1 on escher.mesh . Visualization with GLVis . 3D Arbitrary Lagrangian-Eulerian (ALE) simulation of a shock-triple point interaction with Q2-Q1 elements in the MFEM-based BLAST shock hydrodynamics code. Volume visualization with VisIt . Modeling elastic-plastic flow in the 3D Taylor high-velocity impact problem using 4th order mixed elements in the MFEM-based BLAST shock hydrodynamics code. Visualization with VisIt . Poisson problem on a \"Breather\" surface. Mesh generated with the Mesh Explorer miniapp. Solution with Example 1 . Triple point shock interaction on 4 elements of order 12. Note the element curvature and the high variation of the field inside the lower right element. Visualization of the electric field generated by the electrical wave on rabbit heart ventricles during depolarization of the heart. Image courtesy of Dennis Ogiermann, winner of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Incompressible fluid flow around a rotating turbine using a space-time embedded-hybridized discontinuous Galerkin discretization. Image courtesy of Tamas Horvath, winner of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Magnetic diffusion problem solved to compute the magnetic field induced by current running through copper wire in air. Image courtesy of Will Pazner, winner of the 2022 MFEM Workshop Visualization Contest. \ud83c\udfac Shock-bubble-interaction using a Property-preserving discontinuous Galerkin scheme, see book . Image courtesy of Hennes Hajduk, as part of the 2023 MFEM Workshop Visualization Contest. \ud83c\udfac Re=50,000 incompressible Navier-Stokes wall-resolved LES of a NACA 0012 airfoil in stall regime using MFEM's Navier miniapp. Image courtesy of \u00c9tienne Spieser, as part of the 2023 MFEM Workshop Visualization Contest. \ud83c\udfac Plane wave scattering from a cube using a DPG Ultraweak formulation in MFEM to solve the time-harmonic linear acoustics equations. Image courtesy of Socratis Petrides, as part of the 2023 MFEM Workshop Visualization Contest. Density-based Topology Optimization for Cantilever beam with SiMPL method . Image courtesy of Dohyun Kim, as part of the 2024 MFEM Workshop Visualization Contest. \ud83c\udfac Shape interpolation between a torus and a bunny by computing their generalized Wasserstein barycenter. This barycenter is obtained by solving a mean-field optimal control problem. Image courtesy of Arjun Vijaywargiya, as part of the 2024 MFEM Workshop Visualization Contest. 
Streamlines of the magnetic field from a parallel computation of the magnetostatic interaction of two magnetic orbs. Visualization with VTK . Test of the propagation of a spherical shock wave through a random non-conforming mesh in the MFEM-based BLAST shock hydrodynamics code. Visualization with GLVis . Slice image of the high harmonic fast wave propagation in the NSTX-U magnetic fusion device. Computed using MFEM's 4th order H(curl) elements by the RF-SciDAC project . An electromagnetic eigenmode of a star-shaped domain computed with 3rd order finite elements computed with Example 13 . High-order multi-material inertial confinement fusion (ICF)-like implosion in the MFEM-based BLAST shock hydrodynamics code. Visualization with VisIt . Two-region AMR mesh generated by the Shaper miniapp from successive adaptation to the outlines of Australia. Radiating Kelvin-Helmholtz modeled with the MFEM-based BLAST shock hydrodynamics code. Volume visualization with VisIt . \ud83c\udfac Simulation-driven r-adaptivity using TMOP for a three-material high-velocity gas impact in BLAST . Visualization with VisIt . The Shaper miniapp applied to a multi-material input functions described by the iterates of the Mandelbrot set. Visualization with GLVis . Topology optimization of a drone body using LLNL's LiDO project , based on MFEM. Compressible Euler equations, Mach 3 flow around a cylinder in 2D, stabilized DG-P1 spacial discretization. Image courtesy of Hennes Hajduk, as part of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Axisymmetric computation of an air flow in a tube with continuous Galerkin discretization. Image courtesy of Raphael Zanella, as part of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Inviscid Kelvin-Helmholtz instability using high-order invariant domain preserving discontinuous Galerkin methods with convex limiting. Image courtesy of Will Pazner, as part of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Compressible Euler in Lagrangian frame using the Laghos miniapp. Image courtesy of Vladimir Tomov, as part of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Adaptive, implicit resistive MHD solver (from TDS-SciDAC ) resolves multi-scale features of plasmoid instability. \ud83c\udfac Topology-optimized heat sink obtained by minimizing the thermal energy in a domain with constant internal heating. Image courtesy of Tobias Duswald, winner of the 2022 MFEM Workshop Visualization Contest. \ud83c\udfac Leapfrogging vortex rings using an incompressible Schr\u00f6dinger fluid solver in MFEM. Image courtesy of John Camier, winner of the 2023 MFEM Workshop Visualization Contest. Displacement distribution of a loaded excavator arm under static equilibrium using MFEM's API in an external library. Image courtesy of Mehran Ebrahimi, winner of the 2023 MFEM Workshop Visualization Contest. \ud83c\udfac Multi-component topology optimization with conformal meshes. Image courtesy of Mathias Schmidt, winner of the 2024 MFEM Workshop Visualization Contest. Electric field induced by an MRI gradient coil in a human body. Simulation by the Magnetic Resonance Physics and Instrumentation Group at Harvard Medical School. Multi-mode Rayleigh-Taylor instability simulation using 4th order mixed elements in the MFEM-based BLAST shock hydrodynamics code. Visualization with VisIt . Purely Lagrangian Rayleigh-Taylor instability simulation using 8th order mixed elements in the MFEM-based BLAST shock hydrodynamics code. Visualization with GLVis . 
Anisotropic refinement in a 2D shock-like AMR test problem. Visualization with GLVis . Parallel version of Example 1 on 100 processors with a relatively coarse version of square-disc.mesh . Visualization with GLVis . Anisotropic refinement in a 3D version of the AMR test. Portion of the spherical domain is cut away in GLVis . Structural topology optimization with MFEM in LLNL's Center for Design and Optimization . Test of the anisotropic refinement feature on a random mesh. A slightly modified version of Example 1 . Visualization with GLVis . Level lines in a cutting plane of the solution from the parallel version of Example 1 on 64 processors with fichera.mesh . Visualization with GLVis . Cut image of the solution from Example 1 on a sharply twisted, high order toroidal mesh. The mesh was generated with the toroid miniapp. Cut image of an induction coil mesh and three sub-meshes created with the Trimmer miniapp. Visualization with VisIt . Viscoelastic flow of blood through an artery with aneurysm modeled by the Hookean dumbbell model discretized with BCF-method (Navier-Stokes+SUPG). Image courtesy of Andreas Meier, as part of the 2021 MFEM Workshop Visualization Contest. Visualization of time-averaged mean flow from a compressible, DG Navier-Stokes solver using MFEM modeling a plasma torch. Image courtesy of Karl W. Schulz, as part of the 2021 MFEM Workshop Visualization Contest. Streamlines of the electric field generated by a current dipole source located in the temporal lobe of an epilepsy patient. Image courtesy of Ben Zwick, winner of the 2022 MFEM Workshop Visualization Contest. Flow through periodic Gyroid micro-cell, MFEM Navier mini-app with additional Brinkman penalization. Image courtesy of Mathias Schmidt, as part of the 2022 MFEM Workshop Visualization Contest. \ud83c\udfac Turbulence effect of the Kelvin-Helmholtz instability in tokamak edge plasma using an MHDeX code developed at LLNL. Image courtesy of Milan Holec, as part of the 2023 MFEM Workshop Visualization Contest. \ud83c\udfac Topology optimization with conformal meshes to maximize beam stiffness under a downward force on the right wall. Image courtesy of Ketan Mittal and Mathias Schmidt, as part of the 2023 MFEM Workshop Visualization Contest. Penrose unilluminable room appears rather illuminable in 3D (at least when constructed as a solid of revolution). Image courtesy of Amit Rotem, as part of the 2023 MFEM Workshop Visualization Contest. Heat flux magnitude in a convection - (anisotropic) diffusion simulation with MFEM text as the initial temperature profile. A single implicit step of the HDG scheme was used. 
Image courtesy of Jan Nikl, winner of the 2024 MFEM Workshop Visualization Contest.", "title": "Gallery"}, {"location": "getting-started/", "text": "Getting Started We recommend that new users start with these articles: Building and Running Examples Building MFEM Serial Tutorial Parallel Tutorial Browse Example Codes Code Documentation Code Overview Finite Element Classes and Concepts Doxygen Documentation HowTo Articles More Advanced Topics GPU Support Performance and Partial Assembly Example Mini Applications Electromagnetics Miniapps Fluid Dynamics Miniapp Meshing Miniapps AD Miniapps Mini Application Theory Notes Tesla Miniapp Theory Maxwell Miniapp Theory", "title": "Getting Started"}, {"location": "getting-started/#getting-started", "text": "We recommend that new users start with these articles:", "title": "Getting Started"}, {"location": "getting-started/#building-and-running-examples", "text": "Building MFEM Serial Tutorial Parallel Tutorial Browse Example Codes", "title": "Building and Running Examples"}, {"location": "getting-started/#code-documentation", "text": "Code Overview Finite Element Classes and Concepts Doxygen Documentation HowTo Articles", "title": "Code Documentation"}, {"location": "getting-started/#more-advanced-topics", "text": "GPU Support Performance and Partial Assembly", "title": "More Advanced Topics"}, {"location": "getting-started/#example-mini-applications", "text": "Electromagnetics Miniapps Fluid Dynamics Miniapp Meshing Miniapps AD Miniapps", "title": "Example Mini Applications"}, {"location": "getting-started/#mini-application-theory-notes", "text": "Tesla Miniapp Theory Maxwell Miniapp Theory", "title": "Mini Application Theory Notes"}, {"location": "gpu-support/", "text": "GPU support in MFEM MFEM relies mainly on two features for running algorithms on devices such as GPUs: The memory manager, which transparently handles the movement of data between the host (CPU) and the device (e.g. GPU), The mfem::forall function, which abstracts for loops to parallelize the execution on an arbitrary device. Vector u; Vector v; // ... const auto u_data = u.Read(); // Express the intent to read u auto v_data = v.ReadWrite(); // Express the intent to read and write v // Abstract the loop: for(int i=0; i<N; i++) mfem::forall(N, [=] MFEM_HOST_DEVICE (int i) { /* e.g. */ v_data[i] += u_data[i]; }); Memory manager MFEM objects such as Vector and Array store their data in Memory<T> objects. The Memory objects handle host and device pointers, memory allocations, and data synchronizations between host and device. To get the pointer T* from a Memory object, one has to use the Read() , Write() , or ReadWrite() methods. Read() returns a const T* pointer, and should be used when the data will only be read, Write() returns a T* pointer, and should be used when writing data without using any previously contained data, ReadWrite() returns a T* pointer, and should be used when read and write access to the pointer are required. Read() , Write() , and ReadWrite() automatically handle data movement between the host and device. They can optimize data transfer, since e.g. data that is declared as Write() on the host/device need not be updated from the device/host. The method void UseDevice(bool) specifies if a Memory object is intended for computation on host or on device. The Read() , Write() , and ReadWrite() methods will return a device pointer if use of the device has been set to true with UseDevice ; by default it is false and they will return a host pointer. Sometimes, it is necessary to access the data specifically on the host. In this case the HostRead() , HostWrite() , and HostReadWrite() methods should be used.
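For instance, a typical pattern (an illustrative sketch only; the array size and the values written are arbitrary) is to mark a Vector for device use, fill it in a device kernel through a Write() pointer, and only pay for a device-to-host transfer when the result is actually needed on the host: const int N = 256; Vector x(N); x.UseDevice(true); double *d_x = x.Write(); /* device pointer; the previous contents are not needed */ mfem::forall(N, [=] MFEM_HOST_DEVICE (int i) { d_x[i] = 2.0 * i; }); const double *h_x = x.HostRead(); /* the device-to-host copy happens only here */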
In practice, developers rarely have to manipulate Memory objects; instead, object data can be stored using Vector and Array . Vector and Array data pointers can be accessed with the same methods as Memory . Vector v; v.UseDevice(true); const double *device_ptr = v.Read(); const double *host_ptr = v.HostRead(); mfem::forall The idea behind the mfem::forall function is to have the same behavior as a for loop and hide all device-specific code in order to enable performance portability. Example: for (int i = 0; i < N; i++) { ... } becomes mfem::forall(N, [=] MFEM_HOST_DEVICE (int i) { ... }); One class that is convenient to use in combination with the memory manager and mfem::forall is DeviceTensor : an N dimensional array containing elements of type T , which by default is double . The Reshape function reshapes its input into such an N dimensional array: Vector a; a.UseDevice(true); const int p = ...; const int q = ...; const int r = ...; const int N = ...; auto A = Reshape(a.Write(), p, q, r, N); // returns a DeviceTensor<4,double> mfem::forall(N, [=] MFEM_HOST_DEVICE (int n) { for (int k = 0; k < r; k++) for (int j = 0; j < q; j++) for (int i = 0; i < p; i++) A(i,j,k,n) = ...; }); Several variants of mfem::forall exist, such as mfem::forall_2D and mfem::forall_3D , to help map 2D or 3D blocks of threads to the hardware more efficiently. In the case of a GPU, mfem::forall_3D(N, X,Y,Z, [=] MFEM_HOST_DEVICE (int n){...}) will declare N blocks of threads, each of size X x Y x Z threads, whereas mfem::forall uses N/MFEM_CUDA_BLOCKS blocks of threads, each of size MFEM_CUDA_BLOCKS = 256 threads. Using mfem::forall_3D (and mfem::forall_2D ) over mfem::forall results in a higher level of parallelism, the former using N x X x Y x Z software threads and the latter only N software threads. In order to exploit 2D or 3D blocks of threads, it is convenient to use the macro MFEM_FOREACH_THREAD(i,x,p) to use threads as a for loop. The first variable i is the name of the \"loop\" variable, x is the threadId (it can take the values x , y , or z ), and p is the loop upper bound. If we rewrite the previous example using mfem::forall_3D and MFEM_FOREACH_THREAD , we get: Vector a; a.UseDevice(true); const int p = ...; const int q = ...; const int r = ...; const int N = ...; auto A = Reshape(a.Write(), p, q, r, N); // returns a DeviceTensor<4,double> mfem::forall_3D(N, p, q, r, [=] MFEM_HOST_DEVICE (int n) { MFEM_FOREACH_THREAD(k,z,r) MFEM_FOREACH_THREAD(j,y,q) MFEM_FOREACH_THREAD(i,x,p) A(i,j,k,n) = ...; }); The reason for this more complex syntax is to better utilize the hardware, GPUs in particular. Using mfem::forall_3D and MFEM_FOREACH_THREAD allows the use of more concurrency, N x X x Y x Z threads instead of only N threads with mfem::forall , but more importantly the memory accesses on A(i,j,k,n) are much better with mfem::forall_3D . With mfem::forall_3D , threads access consecutive memory (i.e. coalesced memory access). Because most applied math algorithms are memory bound, having coalesced memory accesses is critical to achieve high performance. Achieving high performance on GPUs Finite element algorithms are usually memory bound on GPUs, and therefore in order to achieve peak performance one has to maximize the utilization of the different memory bandwidths . In particular, the main memory, or device memory, is the memory that has to be maximally used (i.e. saturated ) in order to achieve peak performance.
It is important not to saturate memory bandwidth other than the main memory bandwidth; failing to do so will decrease the main memory throughput by creating memory bandwidth bottlenecks. Maximizing the main memory bandwidth is achieved by issuing enough memory transactions and using the transferred data efficiently. The more computationally light a kernel is, the more frequently memory transactions are issued, and if no memory bandwidth other than the main memory bandwidth is saturated, e.g. shared or L1 memory, then the first condition to achieve peak performance is fulfilled. Memory is transferred by contiguous blocks, called cache-line , which are typically the size of 32 float , or 16 double . Since each cache-line is a block of contiguous memory, it is common to over-fetch data when accessing non-contiguous memory addresses (because not all the data is used in each cache-line). In the worst case, only one float of each cache-line is used, resulting in only 1/32 of the data transferred being used; such a kernel is potentially 32 times slower than a kernel that fully utilizes the data in each cache line. When a kernel is carefully written to use all the data from each cache-line, the memory accesses are often referred to as coalesced memory accesses. Having coalesced memory access kernels is critical to achieving peak performance. In terms of parallelization, when seeing GPUs as having only one level of parallelism over threads, severe constraints are imposed on the kernels in order to achieve high performance. Each thread is limited to 255 float registers; using more registers results in what is known as register spilling , which significantly impacts performance. This is why this type of parallelization strategy should only be used for the simplest kernels. Therefore, it is usually a good strategy to see GPUs as having two levels of parallelism: the coarse parallelism level among blocks of threads, and the fine parallelism level among threads in a block of threads. Threads in different blocks of threads can only exchange data through the main memory; therefore, data exchange between blocks of threads should be kept to the absolute minimum. Threads inside a block of threads can exchange data efficiently by using the shared memory . Shared memory can also be used to store data common between threads, but stored data should be carefully managed due to the very limited storage capacity of the shared memory. Due to their low arithmetic intensity, finite element algorithms often require a significant amount of shared memory bandwidth to exchange information between threads in a block. High shared memory bandwidth usage is a common bottleneck to achieving high performance. In order to be used efficiently, shared memory also requires specific memory access patterns to prevent bank conflicts . When bank conflicts occur, memory accesses are serialized instead of being parallel. Each cache line in the shared memory is linearly spread over the shared memory banks; if the threads in a block of threads access different data in the same bank, then a bank conflict occurs. However, if the threads in a block access the same data in a bank, or different data in different banks, then the memory accesses can occur optimally in parallel. Profiling on NVIDIA GPUs When profiling to improve the performance of a memory bound kernel, we recommend the following steps: Measure the main memory bandwidth and efficiency: this tells us how far from peak throughput we are.
Ensure that no register spills are occurring: most kernels can be written without any register spilling. Measure the shared memory bandwidth and efficiency: try to prevent the shared memory from being the performance bottleneck. Optimizing the main memory usage The first thing we need to know is how far from peak throughput we are and how efficiently the main memory is accessed. For instance, with nvprof the following command nvprof --metrics gld_throughput,gst_throughput,gld_efficiency,gst_efficiency gives us the desired information. The sum of the load throughput ( gld_throughput ) and store throughput ( gst_throughput ) should be as close as possible to the main memory maximum bandwidth. gld_efficiency and gst_efficiency inform us of the ratio of requested global memory load/store throughput to required global memory load/store throughput, expressed as a percentage. As mentioned above, efficiency issues are critical to achieving peak performance and are solved by coalescing memory accesses. Once we know how far we are from peak throughput, one should look at the main stall reasons to get an idea of what might be slowing down the kernels: Instruction Fetch \u2014 The next assembly instruction has not yet been fetched. Memory Throttle \u2014 A large number of pending memory operations prevent further forward progress. These can be reduced by combining several memory transactions into one. Memory Dependency \u2014 A load/store cannot be made because the required resources are not available or are fully utilized, or too many requests of a given type are outstanding. Memory dependency stalls can potentially be reduced by optimizing memory alignment and access patterns. Synchronization \u2014 The warp is blocked at a __syncthreads() call. Execution Dependency \u2014 An input required by the instruction is not yet available. Execution dependency stalls can potentially be reduced by increasing instruction-level parallelism. You can use nvprof --metrics with: stall_inst_fetch for the percentage of stalls occurring because of instruction fetch, stall_exec_dependency for the percentage of stalls occurring because of execution dependency, stall_memory_dependency for the percentage of stalls occurring because of a memory dependency, stall_memory_throttle for the percentage of stalls occurring because of memory throttle, stall_sync for the percentage of stalls occurring because the warp is blocked at a __syncthreads() call. Optimizing the register usage Register spilling can be detected in two ways: Compile for CUDA with -Xptxas=\"-v\" , which reports at compilation the register usage and spills for each kernel. Measure the local memory transfers with a profiler to check if there are register spills. nvprof --metrics local_load_transactions,local_store_transactions --kernels myKernel should be 0 . Register spills happen for two main reasons: Each thread uses too many registers, Array indices are not known at compilation time. When each thread uses too many registers, it is often useful to redesign the kernel to use more threads per block to perform the computation; this lowers the number of registers used per thread but usually increases the shared memory usage due to more distributed data. Indices that are not known at compilation time can often be resolved by simply unrolling loops with MFEM_UNROLL and making sure that all the necessary information to compute the indices is known at compilation time. Roofline model A roofline model helps predict the peak performance achievable by a specific algorithm.
The arithmetic intensity is the ratio of the total number of operations to the amount of data movement from and to the main memory. By dividing the maximum FLOPs by the maximum bandwidth, we get an arithmetic intensity threshold value between the two main regimes of a GPU. A kernel with an arithmetic intensity below or above the threshold value will be memory bound or computation bound, respectively. For in-depth performance analysis, we recommend looking at efficiency issues. The list of all the possible metrics for nvprof is available here . Tips & Tricks Compile in debug mode when developing for devices The memory manager performs checks that catch most misuses of the memory on host or device. When using device debug, if your code fails you can run gdb or lldb , and set a breakpoint at b mfem::mfem_error . The code will break as soon as it reaches this point and then you can backtrace bt from here to see what went wrong and where. Forcing synchronization with the host or the device It is sometimes needed to force synchronization between host and device data. In order to make sure that the host data is synchronized one should use HostRead() ; similarly, to ensure synchronized data on the device one should use Read() . Do not use GetData Do not use GetData() to access a pointer for device work since this will always return the host pointer without synchronizing the data with the device. Tracking data movements and allocations Compiling MFEM with MFEM_TRACK_CUDA_MEM can help by printing when data is transferred, allocated, etc. Large amounts of data movement between host and device should be avoided at all costs. Pinpoint where this is occurring and see if you can refactor your code so the data stays mainly on the device. Avoid allocating GPU memory too frequently; CUDA malloc calls are slow and can hinder performance. If you really need to allocate GPU memory frequently, consider using a memory pool (e.g. Umpire ); that way the mallocs are much cheaper on the GPU. The UseDevice function It is a good practice to call UseDevice(true) on any Vector intended to go on device right after constructing it. Vector v; v.UseDevice(true); Be aware that UseDevice() is not the same as UseDevice(true) ; the first one just returns a boolean that tells you whether the object is intended for computation on the device or not. Using constexpr inside mfem::forall constexpr int P = ...; // Results in an error on MSVC mfem::forall(N, [=] MFEM_HOST_DEVICE (int n) { double my_data[P]; }); The mfem::forall macro relies on lambda capturing in C++. One issue that comes up with compilers such as MSVC is the capturing of constexpr variables inside mfem::forall . According to the C++ standard, constexpr variables do not need to be captured, and should not lose their const-ness in a lambda. However, on MSVC (e.g. in the MFEM AppVeyor CI checks), this can result in errors like: error C2131: expression did not evaluate to a constant A simple fix for this error is to declare the constexpr variable as static constexpr . static constexpr int P = ...; // Omitting the static results in an error on MSVC mfem::forall(N, [=] MFEM_HOST_DEVICE (int n) { double my_data[P]; }); Similar problems and workarounds are discussed here . Error: \"alias not found\" This error message indicates that you are trying to move an \"alias\" Vector to GPU while its \"base\" Vector did not have a GPU allocation (valid or not) when the alias was created (and may still not have a GPU allocation when the move of the \"alias\" was attempted).
This is another case where we cannot update the \"base\" Vector because we do not have access to it (and even if we did, there are complications). This can be avoided if one follows the following rule: if you are creating an \"alias\" that will be used on device, you need to ensure that the \"base\" is allocated on that device. Depending on the context, one can use different methods to do that. For example, if the \"base\" is initialized (on host, otherwise there will be no issue) in the same function that will create the alias, one can call base.Write() to create the device allocation followed by base.HostWrite() and then initialize \"base\" on host -- this sequence avoids any unnecessary host-device transfers. Another example: if the \"base\" was initialized outside of the function where the \"alias\" is created, then the most appropriate choice probably is to call base.Read() before creating the \"alias\". Since the alias will need the data on device, the incurred host-to-device transfer is (at least partially) necessary anyway. Ideally, \"base\" Vectors that will be modified/accessed on device through aliases should be allocated on device to begin with, e.g. using Vector::SetSize(int s, MemoryType mt) typically with mt = Device::GetDeviceMemoryType() . MakeRef vectors do not see the same valid host/device data as their base vector Consider the following code snippet where the vector w is defined from v using the MakeRef() method: const int vSize = 10; Vector v; v.UseDevice(true); v.SetSize(vSize); v = 0.0; cout << \"IsHost(v) = \" << IsHostMemory(v.GetMemory().GetMemoryType()) << endl; Vector w; w.MakeRef(v, 0, vSize); cout << \"IsHost(w) = \" << IsHostMemory(w.GetMemory().GetMemoryType()) << endl; auto hv = v.HostWrite(); for (int j = 0; j < vSize; j++) { hv[j] = 1.0; } cout << \"IsHost(v) = \" << IsHostMemory(v.GetMemory().GetMemoryType()) << endl; cout << \"IsHost(w) = \" << IsHostMemory(w.GetMemory().GetMemoryType()) << endl; Vector z; z.UseDevice(true); z.SetSize(vSize); auto dz = z.Write(); auto dw = w.Read(); mfem::forall(vSize, [=] MFEM_HOST_DEVICE (int i) { dz[i] = dw[i]; }); z.HostRead(); cout << \"norm(z) = \" << z.Norml2() << endl; dz = z.Write(); auto dv = v.Read(); mfem::forall(vSize, [=] MFEM_HOST_DEVICE (int i) { dz[i] = dv[i]; }); z.HostRead(); cout << \"norm(z) = \" << z.Norml2() << endl; The resulting output may be unexpected: IsHost(v) = 0 IsHost(w) = 0 IsHost(v) = 1 IsHost(w) = 0 norm(z) = 0 norm(z) = 3.16228 Basically the issue is that the Memory objects (inside the Vector s) do not know about the other version, so they cannot update the validity flags (the host and device validity flags indicate which of the pointers has valid data) of the other Vector . Also such update may not make sense if you just moved the subvector. There is no easy way to keep the big \"base\" Vector v and the \"alias\" subvector w synchronized when they are being moved/copied between host and device. Therefore such synchronizations need to be done \"manually\" using the methods Vector::SyncMemory and Vector::SyncAliasMemory . In the example above, after you move the \"base\" Vector v to host, you need to \"inform\" the \"alias\" w that the validity flags of its base have been changed. This is done by calling w.SyncMemory(v) which simply copies the validity flags from v to w , there are no host-device memory transfers involved. 
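To make the first case concrete, here is a minimal sketch (with hypothetical sizes, building on the snippet above) of using w.SyncMemory(v) after the base vector has been modified on the host; only the validity flags are copied, so no host-device transfer happens at that point.

```cpp
#include "mfem.hpp"
using namespace mfem;

void SyncAliasAfterHostWrite()
{
   Vector v, w;
   v.UseDevice(true);
   v.SetSize(10);
   v = 0.0;
   w.MakeRef(v, 0, 10); // w is an "alias" covering all of v

   // Modify the base vector on the host: v's host copy is now the valid one.
   double *hv = v.HostWrite();
   for (int i = 0; i < 10; i++) { hv[i] = 1.0; }

   // Inform the alias that the validity flags of its base have changed.
   // This only copies flags; no data is moved between host and device.
   w.SyncMemory(v);

   // Reading w on the device now triggers the required host-to-device copy.
   const double *dw = w.Read();
   (void) dw;
}
```

The mirror-image situation, where the alias itself is modified, is the SyncAliasMemory case discussed next.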
On the other hand, if in the example you moved w to host and modified it there, and then you want to access the data through the base Vector v (you can think of the more general case here, when w is smaller than v ) then you need to call w.SyncAliasMemory(v) . In this particular case, the call will move the subvector described by w from host to device and update the validity flags of w to be the same as the ones of v . This way the whole Vector v gets the real data in one location -- before the call part of it was on device and the part described by w was on host. Both w.SyncMemory(v) and w.SyncAliasMemory(v) ensure that w gets the validity flags of v , the difference is where the real data is before the call -- in the first case the real data is in v and in the second, it is in w .", "title": "GPU Support"}, {"location": "gpu-support/#gpu-support-in-mfem", "text": "MFEM relies mainly on two features for running algorithms on devices such as GPUs: The memory manager handles transparently the moving of data between the host (CPU) and the device (e.g. GPU), The mfem::forall function to abstract for loops to parallelize the execution on an arbitrary device. Vector u; Vector v; // ... const auto u_data = u.Read(); // Express the intent to read u auto v_data = v.ReadWrite(); // Express the intent to read and write v // Abstract the loop: for(int i=0; i objects. The Memory objects handle host and device pointers, memory allocations, and data synchronizations between host and device. To get the pointer T* from a Memory object, one has to use the Read() , Write() , or ReadWrite() methods. Read() returns a const T* pointer, and should be used when the data will only be read, Write() returns a T* pointer, and should be used when writing data without using any previously contained data, ReadWrite() returns T* pointer, and should be used when read and write access to the pointer are required. Read() , Write() , and ReadWrite() automatically handle data movement between the host and device. They can optimize data transfer, since e.g. data that is declared as Write() on the host/device need not be updated from the device/host. The method void UseDevice(bool) specifies if a Memory object is intended for computation on host or on device. The Read() , Write() , and ReadWrite() methods will return device pointer if using the device has been set to true with UseDevice , by default it is false and will return a host pointer. Sometimes, it is necessary to access the data specifically on the host. In this case the HostRead() , HostWrite() , and HostReadWrite() methods should be used. In practice, developers rarely have to manipulate Memory objects, instead objects data can be stored using Vector and Array . Vector and Array data pointers can be accessed with the same methods as Memory . Vector v; v.UseDevice(true); const double *device_ptr = v.Read(); const double *host_ptr = v.HostRead();", "title": "Memory manager"}, {"location": "gpu-support/#mfemforall", "text": "The idea behind the mfem::forall function is to have the same behavior as a for loop and hide all device-specific code in order to enable performance portability. Example: for (int i = 0; i < N; i++) { ... } becomes mfem::forall(N, [=] MFEM_HOST_DEVICE (int i) { ... }); One class that is convenient to use in combination with the memory manager and mfem::forall is DeviceTensor : an N dimensional array containing elements of type T , which by default is double . 
The Reshape function reshapes its input into such an N dimensional array: Vector a; a.UseDevice(true); const int p = ...; const int q = ...; const int r = ...; const int N = ...; auto A = Reshape(a.Write(), p, q, r, N); // returns a DeviceTensor<4,double> mfem::forall(N, [=] MFEM_HOST_DEVICE (int n) { for (int k = 0; k < r; k++) for (int j = 0; j < q; j++) for (int i = 0; i < p; i++) A(i,j,k,n) = ...; }); Several variants of mfem::forall exist, such as mfem::forall_2D and mfem::forall_3D , to help map 2D or 3D blocks of threads to the hardware more efficiently. In the case of a GPU, mfem::forall_3D(N, X,Y,Z, [=] MFEM_HOST_DEVICE (int n){...}) will declare N block of threads each of size X x Y x Z threads, whereas mfem::forall uses N/MFEM_CUDA_BLOCKS block of threads each of size MFEM_CUDA_BLOCKS = 256 threads. Using mfem::forall_3D (and mfem::forall_2D ) over mfem::forall results in a higher level of parallelism, the former using N x X x Y x Z software threads and the latter only N software threads. In order to exploit 2D or 3D blocks of threads, it is convenient to use the macro MFEM_FOREACH_THREAD(i,x,p) to use threads as a for loop. The first variable i is the name of the \"loop\" variable, x is the threadId (it can take the values x , y , or z ), and p is the loop upper bound. If we rewrite the previous example using mfem::forall_3D and MFEM_FOREACH_THREAD , we get: Vector a; a.UseDevice(true); const int p = ...; const int q = ...; const int r = ...; const int N = ...; auto A = Reshape(a.Write(), p, q, r, N); // returns a DeviceTensor<4,double> mfem::forall_3D(N, p, q, r, [=] MFEM_HOST_DEVICE (int n) { MFEM_FOREACH_THREAD(k,z,r) MFEM_FOREACH_THREAD(j,y,q) MFEM_FOREACH_THREAD(i,x,p) A(i,j,k,n) = ...; }); The reasons for this more complex syntax is to better utilize the hardware, GPUs in particular. Using mfem::forall_3D and MFEM_FOREACH_THREAD allows to use more concurrency N x X x Y x Z threads instead of only N threads with mfem::forall , but more importantly the memory accesses on A(i,j,k,n) are much better with mfem::forall_3D . With mfem::forall_3D , threads access consecutive memory (i.e. coalesced memory access). Because most applied math algorithms are memory bound, having coalesced memory accesses is critical to achieve high performance.", "title": "mfem::forall"}, {"location": "gpu-support/#achieving-high-performance-on-gpus", "text": "Finite element algorithms are usually memory bound on GPUs, and therefore in order to achieve peak performance one has to maximize the utilization of the different memory bandwidths . In particular, the main memory, or device memory, is the memory that has to be maximally used (i.e. saturated ) in order to achieve peak performance. It is important to not saturate memory bandwidth other than the main memory bandwidth, failing to do so will decrease the main memory throughput by creating memory bandwidth bottlenecks. Maximizing the main memory bandwidth is achieved by issuing enough memory transactions and using efficiently the transferred data. The more computationally light a kernel is the more frequently memory transactions are issued, and if there is no memory bandwidth saturated other than the main memory bandwidth, e.g: shared or L1 memory, then the first condition to achieve peak performance is fulfilled. Memory is transferred by contiguous blocks, called cache-line , which are typically the size of 32 float , or 16 double . 
Since each cache-line is a block of contiguous memory, it is common to over-fetch data when accessing non-contiguous memory addresses (because not all the data is used in each cache-line). In the worst case, only one float of each cache-line is used, resulting in only 1/32 of the transferred data being used; such a kernel is potentially 32 times slower than a kernel that fully utilizes the data in each cache line. When a kernel is carefully written to use all the data from each cache-line, the memory accesses are often referred to as coalesced memory accesses. Having coalesced memory access kernels is critical to achieving peak performance. In terms of parallelization, when seeing GPUs as having only one level of parallelism over threads, severe constraints are imposed on the kernels in order to achieve high performance. Each thread is limited to 255 float registers; using more registers results in what is known as register spilling, which significantly impacts performance. This is why this type of parallelization strategy should only be used for the simplest kernels. Therefore, it is usually a good strategy to see GPUs as having two levels of parallelism: the coarse parallelism level among blocks of threads, and the fine parallelism level among threads in a block of threads. Threads in different blocks of threads can only exchange data through the main memory; therefore, data exchange between blocks of threads should be kept to an absolute minimum. Threads inside a block of threads can exchange data efficiently by using the shared memory . Shared memory can also be used to store data common between threads, but stored data should be carefully managed due to the very limited storage capacity of the shared memory. Due to their low arithmetic intensity, finite element algorithms often require a significant amount of shared memory bandwidth to exchange information between threads in a block. High shared memory bandwidth usage is a common bottleneck to achieving high performance. In order to be used efficiently, shared memory also requires specific memory access patterns to prevent bank conflicts . When bank conflicts occur, memory accesses are serialized instead of occurring in parallel. Each cache line in the shared memory is linearly spread over the shared memory banks; if the threads in a block of threads access different data in the same bank, then a bank conflict occurs. However, if the threads in a block access the same data in a bank, or different data in different banks, then the memory access can occur optimally in parallel.\", \"title\": \"Achieving high performance on GPUs\"}, {\"location\": \"gpu-support/#profiling-on-nvidia-gpus\", \"text\": \"When profiling to improve the performance of a memory bound kernel, we recommend the following steps: Measure the main memory bandwidth and efficiency: this tells us how far from peak throughput we are. Ensure that no register spills are occurring: most kernels can be written without any register spilling. Measure the shared memory bandwidth and efficiency: try to prevent the shared memory from becoming the performance bottleneck.\", \"title\": \"Profiling on NVIDIA GPUs\"}, {\"location\": \"gpu-support/#optimizing-the-main-memory-usage\", \"text\": \"The first thing we need to know is how far from peak throughput we are and how efficiently the main memory is accessed. For instance, with nvprof the following command nvprof --metrics gld_throughput,gst_throughput,gld_efficiency,gst_efficiency gives us the desired information.
The sum of the load throughput ( gld_throughput ) and store throughput ( gst_throughput ) should be as close as possible to the main memory maximum bandwidth. gld_efficiency and gst_efficiency informs us on ratio of requested global memory load/store throughput to required global memory load/store throughput expressed as percentage. As mentioned above, efficiency issues are critical to achieve peak performance and are solved by coalescing memory access. Once we know how far we are from peak throughput, one should look at the main stall reasons to get an idea of what might be slowing down the kernels: Instruction Fetch \u2014 The next assembly instruction has not yet been fetched. Memory Throttle \u2014 A large number of pending memory operations prevent further forward progress. These can be reduced by combining several memory transactions into one. Memory Dependency \u2014 A load/store cannot be made because the required resources are not available or are fully utilized, or too many requests of a given type are outstanding. Memory dependency stalls can potentially be reduced by optimizing memory alignment and access patterns. Synchronization \u2014 The warp is blocked at a __syncthreads() call. Execution Dependency \u2014 An input required by the instruction is not yet available. Execution dependency stalls can potentially be reduced by increasing instruction-level parallelism. You can use nvprof --metrics with: stall_inst_fetch for the percentage of stalls occurring because of instruction fetch, stall_exec_dependency for the percentage of stalls occurring because of execution dependency, stall_memory_dependency for the percentage of stalls occurring because a memory dependency, stall_memory_throttle for the percentage of stalls occurring because of memory throttle, stall_sync for the percentage of stalls occurring because the warp is blocked at a __syncthreads() call.", "title": "Optimizing the main memory usage"}, {"location": "gpu-support/#optimizing-the-register-usage", "text": "Register spilling can be detected in two ways: Compile for CUDA with -Xptxas=\"-v\" which reports at compilation the register usage and spills for each kernel. Measure the local memory transfers with a profiler to check if there are register spills. nvprof --metrics local_load_transactions,local_store_transactions --kernels myKernel should be 0 . Register spills happen for two main reasons: Each thread uses too many registers, Array indices are not known at compilation time. When each thread uses too many registers it is often useful to redesign the kernel to use more threads per block to perform the computation, this lowers the amount of registers used per thread but usually increases the shared memory usage due to more distributed data. Computing indices at compilation can often be resolved by simply unrolling loops with MFEM_UNROLL and making sure that all the necessary information to compute the indices is known at compilation time.", "title": "Optimizing the register usage"}, {"location": "gpu-support/#roofline-model", "text": "A roofline model helps predicting the peak performance achievable by a specific algorithm. The arithmetic intensity is the ratio of the total number of operations divided by the amount of data movement from and to the main memory. By dividing the maximum FLOPs, by the maximum bandwidth we get an arithmetic intensity threshold value between the two main regime of a GPU. 
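Returning to the second cause of register spills for a moment, here is a minimal sketch (the kernel and sizes are hypothetical) where the local array has a fixed compile-time size and the loops over it are unrolled with MFEM_UNROLL, so every index into the array is known at compilation time and the array can stay in registers.

```cpp
#include "mfem.hpp"
using namespace mfem;

// Hypothetical kernel: x is assumed to have size 4*N and y size N.
void SumFixedColumns(const Vector &x, Vector &y, int N)
{
   static constexpr int Q = 4; // "static" avoids the MSVC capture issue
                               // discussed in the Tips & Tricks section
   const double *d_x = x.Read();
   double *d_y = y.Write();
   mfem::forall(N, [=] MFEM_HOST_DEVICE (int i)
   {
      double my_data[Q];
      MFEM_UNROLL(4)
      for (int q = 0; q < Q; q++) { my_data[q] = d_x[q*N + i]; }

      double sum = 0.0;
      MFEM_UNROLL(4)
      for (int q = 0; q < Q; q++) { sum += my_data[q]; }
      d_y[i] = sum;
   });
}
```

Checking the compiler report from -Xptxas="-v" before and after such a change is a quick way to confirm that the spills are gone.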
A kernel with an arithmetic intensity below or above the threshold value will be memory bound or computation bound respectively. For in depths performance analysis we recommend to look at efficiency issues The list of all the possible metrics for nvprof is available here .", "title": "Roofline model"}, {"location": "gpu-support/#tips-tricks", "text": "", "title": "Tips & Tricks"}, {"location": "gpu-support/#compile-in-debug-mode-when-developing-for-devices", "text": "The memory manager performs checks that catches most of the misuse of the memory on host or device. When using device debug, if your code fails you can run gdb or lldb , and set a breakpoint at b mfem::mfem_error . The code will break as soon as it reaches this point and then you can backtrace bt from here to see what went wrong and where.", "title": "Compile in debug mode when developing for devices"}, {"location": "gpu-support/#forcing-synchronization-with-the-host-or-the-device", "text": "It is sometimes needed to force synchronization between host and device data. In order to make sure that the host data is synchronized one should use HostRead() , similarly to ensure synchronized data on the device one should use Read() .", "title": "Forcing synchronization with the host or the device"}, {"location": "gpu-support/#do-not-use-getdata", "text": "Do not use GetData() to access a pointer for device work since this will always return the host pointer without synchronizing the data with the device.", "title": "Do not use GetData"}, {"location": "gpu-support/#tracking-data-movements-and-allocations", "text": "Compiling MFEM with MFEM_TRACK_CUDA_MEM can help by printing when data is transferred, allocated, etc. Large amount of data movement between host and device should be avoided at all costs. Pinpoint where this is occurring and see if you can refactor your code so the data stays mainly on the device. Avoid allocating GPU memory too frequently, CUDA malloc calls are slow and can hinder performance. If you really need to allocate frequently GPU memory, consider using a memory pool (e.g. Umpire ), that way the mallocs are much cheaper on the GPU.", "title": "Tracking data movements and allocations"}, {"location": "gpu-support/#the-usedevice-function", "text": "It is a good practice to call UseDevice(true) on any Vector intended to go on device right after constructing it. Vector v; v.UseDevice(true); Be aware that UseDevice() is not the same as UseDevice(true) , the first one just returns a boolean that tells you whether the object is intended for computation on the device or not.", "title": "The UseDevice function"}, {"location": "gpu-support/#using-constexpr-inside-mfemforall", "text": "constexpr P = ...; // Results in an error on MSVC mfem::forall(N, [=] MFEM_HOST_DEVICE (int n) { double my_data[P]; }); The mfem::forall macro relies on lambda capturing in C++. One issue comes up with compilers such as MSVC is the capturing of constexpr variables inside mfem::forall . According to the C++ standard, constexpr variables do not need to be captured, and should not lose their const-ness in a lambda. However, on MSVC (e.g. in the MFEM AppVeyor CI checks), this can result in errors like: error C2131: expression did not evaluate to a constant A simple fix for this error is to declare the constexpr variable as static constexpr . 
static constexpr P = ...; // Omitting the static results in an error on MSVC mfem::forall(N, [=] MFEM_HOST_DEVICE (int n) { double my_data[P]; }); Similar problems and workarounds are discussed here .", "title": "Using constexpr inside mfem::forall"}, {"location": "gpu-support/#error-alias-not-found", "text": "This error message indicates that you are trying to move an \"alias\" Vector to GPU while its \"base\" Vector did not have a GPU allocation (valid or not) when the alias was created (and may still not have GPU allocation when the move of the \"alias\" was attempted). This is another case where we cannot update the \"base\" Vector because we do not have access to it (and even if we did, there are complications). This can be avoided if one follows the following rule: if you are creating an \"alias\" that will be used on device, you need to ensure that the \"base\" is allocated on that device. Depending on the context, one can use different methods to do that. For example, if the \"base\" is initialized (on host, otherwise there will be no issue) in the same function that will create the alias, one can call base.Write() to create the device allocation followed by base.HostWrite() and then initialize \"base\" on host -- this sequence avoids any unnecessary host-device transfers. Another example: if the \"base\" was initialized outside of the function where the \"alias\" is created, then the most appropriate choice probably is to call base.Read() before creating the \"alias\". Since the alias will need the data on device, the incurred host-to-device transfer is (at least partially) necessary anyway. Ideally, \"base\" Vectors that will be modified/accessed on device through aliases should be allocated on device to begin with, e.g. using Vector::SetSize(int s, MemoryType mt) typically with mt = Device::GetDeviceMemoryType() .", "title": "Error: \"alias not found\""}, {"location": "gpu-support/#makeref-vectors-do-not-see-the-same-valid-hostdevice-data-as-their-base-vector", "text": "Consider the following code snippet where the vector w is defined from v using the MakeRef() method: const int vSize = 10; Vector v; v.UseDevice(true); v.SetSize(vSize); v = 0.0; cout << \"IsHost(v) = \" << IsHostMemory(v.GetMemory().GetMemoryType()) << endl; Vector w; w.MakeRef(v, 0, vSize); cout << \"IsHost(w) = \" << IsHostMemory(w.GetMemory().GetMemoryType()) << endl; auto hv = v.HostWrite(); for (int j = 0; j < vSize; j++) { hv[j] = 1.0; } cout << \"IsHost(v) = \" << IsHostMemory(v.GetMemory().GetMemoryType()) << endl; cout << \"IsHost(w) = \" << IsHostMemory(w.GetMemory().GetMemoryType()) << endl; Vector z; z.UseDevice(true); z.SetSize(vSize); auto dz = z.Write(); auto dw = w.Read(); mfem::forall(vSize, [=] MFEM_HOST_DEVICE (int i) { dz[i] = dw[i]; }); z.HostRead(); cout << \"norm(z) = \" << z.Norml2() << endl; dz = z.Write(); auto dv = v.Read(); mfem::forall(vSize, [=] MFEM_HOST_DEVICE (int i) { dz[i] = dv[i]; }); z.HostRead(); cout << \"norm(z) = \" << z.Norml2() << endl; The resulting output may be unexpected: IsHost(v) = 0 IsHost(w) = 0 IsHost(v) = 1 IsHost(w) = 0 norm(z) = 0 norm(z) = 3.16228 Basically the issue is that the Memory objects (inside the Vector s) do not know about the other version, so they cannot update the validity flags (the host and device validity flags indicate which of the pointers has valid data) of the other Vector . Also such update may not make sense if you just moved the subvector. 
There is no easy way to keep the big \"base\" Vector v and the \"alias\" subvector w synchronized when they are being moved/copied between host and device. Therefore such synchronizations need to be done \"manually\" using the methods Vector::SyncMemory and Vector::SyncAliasMemory . In the example above, after you move the \"base\" Vector v to host, you need to \"inform\" the \"alias\" w that the validity flags of its base have been changed. This is done by calling w.SyncMemory(v) which simply copies the validity flags from v to w , there are no host-device memory transfers involved. On the other hand, if in the example you moved w to host and modified it there, and then you want to access the data through the base Vector v (you can think of the more general case here, when w is smaller than v ) then you need to call w.SyncAliasMemory(v) . In this particular case, the call will move the subvector described by w from host to device and update the validity flags of w to be the same as the ones of v . This way the whole Vector v gets the real data in one location -- before the call part of it was on device and the part described by w was on host. Both w.SyncMemory(v) and w.SyncAliasMemory(v) ensure that w gets the validity flags of v , the difference is where the real data is before the call -- in the first case the real data is in v and in the second, it is in w .", "title": "MakeRef vectors do not see the same valid host/device data as their base vector"}, {"location": "integration/", "text": "Integration MFEM's spatial integrations are performed in the usual finite element manner by first splitting the spatial domain into a collection of non-overlapping \"elements\" which cover the domain. This is usually referred to as the \"mesh\". An integral can then be computed separately in each element and the results added together: $$ \\int_\\Omega f(x)\\,d\\Omega = \\sum_i\\int_{\\Omega_i}f(x)\\,d\\Omega $$ Where $\\Omega$ is the full domain and $\\Omega_i$ is the domain of the i-th element. In MFEM this sum over elements is performed in classes such as the BilinearForm or LinearForm and their parallel counterparts. Elements come in a variety of shapes and they may be flat-sided or curved. For this reason it is much simpler to perform the element-wise integrations on reference elements which have relatively simple shapes. For example in 2D we might integrate over a unit square rather than an arbitrary quadrilateral. Finite element methods typically make the assumption that the functions to be integrated are non-singular and at least reasonably smooth. This enables us to employ families of relatively simple quadrature rules which are designed for accurately integrating polynomials. This is in contrast to boundary element methods which require more specialized rules which can accurately integrate singularities. Our rules take the form: $$\\int_{\\Omega_i} f(x)\\,d\\Omega \\approx \\sum_j w_j\\,f(x(u_j))\\,|J_i(u_j)|\\label{eq:quad_rule}$$ Where $w_j$ are the quadrature weights, $u_j$ are the quadrature points within the reference element, and $|J_i(u_j)|$ is the Jacobian determinant for element $i$ at the location $u_j$. Integrals at this level are typically computed by classes derived from BilinearFormIntegrator or LinearFormIntegrator , see Bilinear Form Integrators or Linear Form Integrators for numerous examples. Integration Rules The basic building block of an integration rule is the IntegrationPoint . 
This is a minimal object with member data 'x', 'y', 'z', and 'weight' (and an integer 'index' which indicates the point's place in an integration rule). These store the coordinates of the integration point in the reference coordinate system, $u_j$ from equation $\\ref{eq:quad_rule}$ is defined as $u_j\\equiv(x,y,z)$ , along with the quadrature weight, $w$ also from equation $\\ref{eq:quad_rule}$. Integration points can be collected together into an IntegrationRule object. IntegrationRule is little more than a container for the set of IntegrationPoint objects associated with an integration rule for a given order of accuracy within the domain of a specific reference element. IntegrationRule objects are in turn collected together into the IntRules global object. This object constructs and caches all IntegrationRule objects requested by the calling program. On one hand the IntRules global object is a container class which categorizes IntegrationRule objects by element type and order of accuracy but more importantly it is responsible for allocating IntegrationRule objects and populating them with appropriate IntegrationPoint objects. It is also possible to sidestep the IntRules global object and setup custom IntegrationRule objects. These custom integration rules can then be passed to BilinearFormIntegrator or LinearFormIntegrator objects (using custom integration rules with mixed meshes currently requires specialized handling). Coordinate Transformations The coordinate transformation from the reference element to an individual mesh element is performed by the ElementTransformation class. Objects of this class are prepared by the Mesh object and retrieved in various ways depending on context. For standard mesh elements for (int e = 0; e < mesh->GetNE(); e++) { ElementTransformation *Trans = mesh->GetElementTransformation(e); ... } or for boundary elements for (int be = 0; be < mesh->GetNBE(); be++) { ElementTransformation *Trans = mesh->GetBdrElementTransformation(be); ... } or for faces (usually in a Discontinuous Galerkin (DG) context) for (int f = 0; f < mesh->GetNumFaces(); f++) { FaceElementTransformation *FETrans = mesh->GetFaceElementTransformation(f); ... } or, finally, for boundary faces in a DG context for (int bf = 0; bf < mesh->GetNBE(); bf++) { FaceElementTransformation *FETrans = mesh->GetBdrFaceElementTransformation(bf); ... } A FaceElementTransformation object is a convenience object for easily accessing the three ElementTransformation objects associated with a mesh face and its two neighboring elements. In the case of boundary faces one of the neighboring element transformation objects is not present. In addition to transforming coordinates between the reference and global coordinate systems an ElementTransformation object can be used to compute the following quantities related to the Jacobian matrix: Name C++ Expression Formula Jacobian Matrix const DenseMatrix &J = Trans.Jacobian() ${\\bf J}_{ij} = \\frac{\\partial x_i}{\\partial u_j}$ Jacobian Determinant double detJ = Trans.Weight() $\\det({\\bf J})$ Inverse Jacobian const DenseMatrix &InvJ = Trans.InverseJacobian() ${\\bf J}^{-1}$ Adjugate Jacobian const DenseMatrix &AdjJ = Trans.AdjugateJacobian() $\\det({\\bf J})\\,{\\bf J}^{-1}$ Since these quantities can be expensive to compute the ElementTransformation object will avoid recomputing values whenever possible. 
However, once a new quadrature point is set, using ElementTransformation::SetIntPoint() , any cached values will be overwritten by subsequent calls to the above functions. Writing Custom Integrators Element-wise integration arises in various places in the finite element method. A few of the most common occurrences are square and rectangular bilinear form operators, linear functionals, and the calculation of norms from field data. Type Primary Function Needing Implementation Square Operators BilinearFormIntegrator::AssembleElementMatrix Rectangular Operators BilinearFormIntegrator::AssembleElementMatrix2 Linear Functionals LinearFormIntegrator::AssembleRHSElementVect Development of a new norm or another custom integral might follow the code found in GridFunction::ComputeElementLpErrors . The pieces that are common to each of these include: Determination of the appropriate quadrature order Obtaining the quadrature rule for the appropriate element type Working with the ElementTransformation object Evaluating the function to be integrated An appropriate quadrature order depends on many variables. If we could restrict ourselves to integrating polynomials then a specific order would produce an exact result and a higher order would only incur additional effort. However, skewed or curved elements can introduce a rational polynomial factor through the inverse Jacobian of the element transformation. Furthermore, non-trivial material coefficients can introduce factors with arbitrary functional forms. Useful rules of thumb for linear and bilinear form integration orders are: (linear form order) = (basis function order) + (geometry order) (bilinear form order) = (domain basis function order) + (range basis function order) + (geometry order) It can be appropriate to lower the basis function order by one if a derivative of the basis function is being used. It might be appropriate to increase the order if the coefficient is expected to vary more rapidly but, in such a case, it would probably be more appropriate to further refine the mesh. Appropriate orders for computing norms should probably follow the guidance for bilinear forms since most common norms tend to be quadratic. For example a custom integrator for a rectangular operator might start with the following lines: void CustomIntegrator::AssembleElementMatrix2(const FiniteElement &trial_fe, const FiniteElement &test_fe, ElementTransformation &Trans, DenseMatrix &elmat) { // Determine an appropriate integration rule order int order = trial_fe.GetOrder() // Polynomial order of domain space + test_fe.GetOrder() // Polynomial order of range space + Trans.OrderW(); // Polynomial order of the geometry // Determine the element type: triangle, quadrilateral, tetrahedron, etc. Geometry::Type geom = Trans.GetGeometryType(); // Construct or retrieve an integration rule for the appropriate // reference element with the desired order of accuracy const IntegrationRule * ir = &IntRules.Get(Trans.GetGeometryType(), order); ... } This example uses the IntRules global object but custom integration rules could be provided through the use of a similar global object or by some other means. The next piece is to loop over the integration points and, in most cases, make use of the ElementTransformation object. ... 
// Loop over each quadrature point in the reference element for (int i = 0; i < ir->GetNPoints(); i++) { // Extract the current quadrature point from the integration rule const IntegrationPoint &ip = ir->IntPoint(i); // Prepare to evaluate the coordinate transformation at the current // quadrature point Trans.SetIntPoint(&ip); // Compute the Jacobian determinant at the current integration point double detJ = Trans.Weight(); ... } The final piece is to evaluate the function to be integrated. This often involves evaluation of a Coefficient object as well as one or two sets of basis functions or their derivatives. The coefficient should be straightforward, simply call its Eval method with the ElementTransformation and IntegrationPoint objects and perhaps a Vector or DenseMatrix to hold the resulting coefficient value when appropriate. Basis function evaluation can be a bit more complicated. Basis Function Evaluation Some basis functions, particularly vector-valued basis functions, partially depend upon the geometry of the physical element in addition to their dependence on the reference element. The scalar basis functions provided by the H1_FECollection are straightforward. Simply call FiniteElement::CalcShape with the current quadrature point to retrieve a vector containing the values of each basis function evaluated at the given point in reference space. ... // Retrieve the number of basis functions int tr_dof = trial_fe.GetDof(); // Allocate a vector to hold the values of each basis function Vector tr_shape(tr_dof); // Loop over each quadrature point in the reference element for (int i = 0; i < ir->GetNPoints(); i++) { // Extract the current quadrature point from the integration rule const IntegrationPoint &ip = ir->IntPoint(i); ... // Evaluate the basis functions at the point ip trial_fe.CalcShape(ip, tr_shape); ... } For other types of basis functions it can be simpler to call CalcPhysShape or CalcPhysVShape . These, and similar evaluation functions with \"Phys\" in the name, internally perform the geometric transformation of the basis functions when necessary. This is clearly a convenience feature but it can lead to unnecessary computations when certain optimizations are possible. In the following table subscripts on the derivative operators indicate which coordinate system is being used to compute the derivative; 'x' for the physical coordinates and 'u' for the reference coordinates. Quantities with a caret above them indicate functions computed in the reference coordinate system. Family Evaluation Transformation H1 Basis None H1 Gradient of Basis $\\nabla_x\\varphi_i = (J^{-1})^T\\nabla_u\\hat{\\varphi}_i$ ND Basis $\\vec{W}_i = (J^{-1})^T\\hat{W}_i$ ND Curl of Basis $\\nabla_x\\times\\vec{W}_i = \\frac{1}{\\det(J)}J\\,\\nabla_u\\times\\hat{W}_i$ RT Basis $\\vec{F}_i = \\frac{1}{\\det(J)}J\\,\\hat{F}_i$ RT Divergence of Basis $\\nabla_x\\cdot\\vec{F}_i = \\frac{1}{\\det(J)}\\nabla_u\\cdot\\hat{F}_i$ L2 (INTEGRAL) Basis $\\psi_i = \\frac{1}{\\det(J)}\\hat{\\psi}_i$ L2 (VALUE) Basis None Use of these \"CalcPhys\" functions enable integrators to be used with a wider variety of basis function families without the need to explicitly handle these transformations within the integrator. This leads to more general implementations but at the possible cost of added computational expense. For example, a LinearFormIntegrator involving an L2 basis function using the INTEGRAL map type would both multiply and divide by the Jacobian determinant at each integration point. 
Clearly this is unnecessary and could significantly increase the computational effort needed to compute the integrals. Working with the MixedScalarIntegrator The MixedScalarIntegrator is designed to help construct BilinearFormIntegrators which build an integrand from two sets of scalar-valued basis function evaluations. Such integrands will involve combinations of the following quantities: Scalar-valued basis functions obtained from CalcPhysShape Divergence of vector-valued basis functions obtained from CalcPhysDivShape Curl of vector-valued basis functions in 2D obtained from CalcPhysCurlShape Gradient of scalar-valued basis functions in 1D obtained from CalcPhysDShape An optional scalar coefficient To derive a custom integrator from MixedScalarIntegrator a developer need only define constructors for the custom integrator. Only one constructor is necessary but support of various coefficient types is often useful. class MixedScalarMassIntegrator : public MixedScalarIntegrator { public: MixedScalarMassIntegrator() { same_calc_shape = true; } MixedScalarMassIntegrator(Coefficient &q) : MixedScalarIntegrator(q) { same_calc_shape = true; } }; By default this integrator will compute the operator: $$a_{ij} = \\int_{\\Omega_e}q(x)\\,f_j(x)\\,g_i(x)\\,d\\Omega$$ Where $f_j$ and $g_i$ are two sets of scalar-valued basis functions which produces a \"mass\" matrix. The MixedScalarIntegrator has two public methods and five protected methods which can be overridden to customize the integrator. The public methods are AssembleElementMatrix for use with the BilinearForm class of square bilinear forms and AssembleElementMatrix2 for use with the MixedBilinearForm class of rectangular bilinear forms. Typically only one of these is necessary and the default implementations will often suffice. However, one or both of these methods may be overridden by a derived class if some customization is desired. For example, to implement optimizations related to coordinate transformations or custom integration rules, etc.. More commonly a derived class will need to override one or both of the CalcTestShape and CalcTrialShape methods which compute the necessary basis function values. For example the four types of scalar basis function evaluations supported by MixedScalarIntegrator could be obtained by these overrides of the trial (domain) finite element basis functions: /// Evaluate the scalar-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, Vector & shape) { trial_fe.CalcPhysShape(Trans, shape); } or /// Evaluate the divergence of the vector-valued basis functions virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, Vector & shape) { trial_fe.CalcPhysDivShape(Trans, shape); } or /// Evaluate the 2D curl of the vector-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, Vector & shape) { DenseMatrix dshape(shape.GetData(), shape.Size(), 1); trial_fe.CalcPhysCurlShape(Trans, dshape); } or /// Evaluate the 1D gradient of the scalar-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, Vector & shape) { DenseMatrix dshape(shape.GetData(), shape.Size(), 1); trial_fe.CalcPhysDShape(Trans, dshape); } Similar overrides could be implemented for the test (range) space. Of course other overrides are possible and may be quite useful for other custom integrators. 
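For instance, a test (range) space override mirroring the first trial-space example above might look like the following sketch:

```cpp
/// Evaluate the scalar-valued test basis functions
inline virtual void CalcTestShape(const FiniteElement & test_fe,
                                  ElementTransformation &Trans,
                                  Vector & shape)
{ test_fe.CalcPhysShape(Trans, shape); }
```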
The next override that is often advisable is VerifyFiniteElementTypes which provides a means of testing the FiniteElement objects passed by the BilinearForm class to make sure they support the evaluations needed by the CalcTestShape and CalcTrialShape methods. This override is optional but highly recommended. As an example the following override verifies that the geometry is one dimensional and that the trial (domain) space supports evaluation of the gradient of the basis functions. inline virtual bool VerifyFiniteElementTypes(const FiniteElement & trial_fe, const FiniteElement & test_fe ) const { return (trial_fe.GetDim() == 1 && test_fe.GetDim() == 1 && trial_fe.GetDerivType() == mfem::FiniteElement::GRAD && test_fe.GetRangeType() == mfem::FiniteElement::SCALAR ); } A related optional method can be used to output an appropriate error message in the event that unsuitable basis functions have been provided. For example the following error message might be appropriate in conjunction with the previous VerifyFiniteElementTypes implementation: inline virtual const char * FiniteElementTypeFailureMessage() const { return \"Trial and test spaces must both be scalar fields in 1D \" \"and the trial space must implement CalcDShape.\"; } The last optional protected method allows a certain flexibility in the choice of quadrature order. The default implementation is shown below but other choices may be suitable. inline virtual int GetIntegrationOrder(const FiniteElement & trial_fe, const FiniteElement & test_fe, ElementTransformation &Trans) { return trial_fe.GetOrder() + test_fe.GetOrder() + Trans.OrderW(); } A wide variety of bilinear forms can be easily implemented using the MixedScalarIntegrator . Most of these are probably already included in MFEM, see Bilinear Form Integrators for a listing, but other options may be useful. Working with the MixedVectorIntegrator The MixedVectorIntegrator is very similar in spirit to the MixedScalarIntegrator but the integrand in this case is computed as the inner product of two vectors. Such integrands will involve combinations of the following quantities: Vector-valued basis functions obtained from CalcVShape Gradient of scalar-valued basis functions obtained from CalcPhysDShape Curl of vector-valued basis functions in 3D obtained from CalcPhysCurlShape Optional scalar, vector, or matrix-valued coefficients By default this integrator will compute different operators based on coefficient type: Coefficient Type Default Integral Scalar $a_{ij} = \\int_{\\Omega_e}q(x)\\,\\vec{F}_j(x)\\cdot\\,\\vec{G}_i(x)\\,d\\Omega$ Matrix $a_{ij} = \\int_{\\Omega_e}\\left(Q(x)\\,\\vec{F}_j(x)\\right)\\cdot\\,\\vec{G}_i(x)\\,d\\Omega$ Vector $a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\times\\vec{F}_j(x)\\right)\\cdot\\,\\vec{G}_i(x)\\,d\\Omega$ Where $\\vec{F}_j$ and $\\vec{G}_i$ are two sets of vector-valued basis functions which produces a \"mass\" matrix. The MixedVectorIntegrator also has public and protected methods which may be overridden in an analogous manner to those in MixedScalarIntegrator to implement an even wider variety of custom integrators. Note that the default implementation of the assembly methods do assume a square matrix coefficient but this assumption could be removed if necessary. 
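A derived class follows the same pattern as the MixedScalarMassIntegrator example shown earlier; the sketch below uses a hypothetical class name and only the scalar-coefficient constructor.

```cpp
// Hypothetical vector "mass" integrator built on MixedVectorIntegrator.
class MyVectorMassIntegrator : public MixedVectorIntegrator
{
public:
   MyVectorMassIntegrator(Coefficient &q) : MixedVectorIntegrator(q) { }
};
```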
The CalcTestShape and CalcTrialShape methods which compute the necessary vector-valued basis function values might be overridden as follows: /// Evaluate the vector-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, DenseMatrix & shape) { trial_fe.CalcVShape(Trans, shape); } or /// Evaluate the gradient of the scalar-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, DenseMatrix & shape) { trial_fe.CalcPhysDShape(Trans, shape); } or /// Evaluate the 3D curl of the vector-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, DenseMatrix & shape) { trial_fe.CalcPhysCurlShape(Trans, shape); } Many of the possible MixedVectorIntegrator customizations are already included in MFEM. See Bilinear Form Integrators for a listing. Working with the MixedScalarVectorIntegrator The MixedScalarVectorIntegrator follows naturally from the MixedScalarIntegrator and the MixedVectorIntegrator . The integrand in this case is computed as the product of a scalar basis function with a vector basis function. However, since the integrand must be scalar valued, a vector-valued coefficient will always be required. The types of scalar-valued basis functions will include: Scalar-valued basis functions obtained from CalcPhysShape Divergence of vector-valued basis functions obtained from CalcPhysDivShape Curl of vector-valued basis functions in 2D obtained from CalcPhysCurlShape Gradient of scalar-valued basis functions in 1D obtained from CalcPhysDShape The types of vector-valued basis functions will include: Vector-valued basis functions obtained from CalcVShape Gradient of scalar-valued basis functions obtained from CalcPhysDShape Curl of vector-valued basis functions in 3D obtained from CalcPhysCurlShape By default this integrator will compute different operators based on the choice of the trial and test spaces and, in 2D, how the vector coefficient should be employed: $$a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\,f_j(x)\\right)\\cdot\\vec{G}_i(x)\\,d\\Omega\\label{msv_def}$$ or $$a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\cdot\\vec{F}_j(x)\\right)\\,g_i(x)\\,d\\Omega\\label{msv_trans}$$ or in 2D there is an option to compute $$a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\,f_j(x)\\right)\\times\\vec{G}_i(x)\\,d\\Omega\\label{msv_2d_def}$$ or (again optionally in 2D) $$a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\times\\vec{F}_j(x)\\right)\\,g_i(x)\\,d\\Omega\\label{msv_2d_trans}$$ The methods that a developer may choose to override are again quite similar to those in MixedScalarIntegrator and MixedVectorIntegrator . The main difference is the basis function overrides which have been renamed to CalcShape for the scalar-valued basis and CalcVShape for the vector-valued basis. By default it is assumed that the trial (domain) space is scalar-valued and the test (range) space is vector-valued as in equations \\ref{msv_def} and \\ref{msv_2d_def}. The choice of trial and test spaces is here controlled by a transpose option in the MixedScalarVectorIntegrator constructor. If transpose == true then equations \\ref{msv_trans} and \\ref{msv_2d_trans} are assumed. The choice between equations \\ref{msv_def} and \\ref{msv_trans} on the one hand and equations \\ref{msv_2d_def} and \\ref{msv_2d_trans} on the other is made with the cross_2d optional constructor argument. 
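As an illustration of the transpose option, a derived integrator selecting the transposed form described above might be declared as in the following sketch (the class name is hypothetical, and the constructor argument order follows the description above):

```cpp
// Hypothetical integrator computing (q(x) . F_j, g_i): the vector
// coefficient is dotted with the vector-valued trial basis and tested
// against a scalar-valued test basis (transpose = true).
class MyDotProductIntegrator : public MixedScalarVectorIntegrator
{
public:
   MyDotProductIntegrator(VectorCoefficient &vq)
      : MixedScalarVectorIntegrator(vq, true) { }
};
```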
There are several customizations of this integrator included in MFEM but others are possible. See Bilinear Form Integrators for a listing. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Integration"}, {"location": "integration/#integration", "text": "MFEM's spatial integrations are performed in the usual finite element manner by first splitting the spatial domain into a collection of non-overlapping \"elements\" which cover the domain. This is usually referred to as the \"mesh\". An integral can then be computed separately in each element and the results added together: $$ \\int_\\Omega f(x)\\,d\\Omega = \\sum_i\\int_{\\Omega_i}f(x)\\,d\\Omega $$ Where $\\Omega$ is the full domain and $\\Omega_i$ is the domain of the i-th element. In MFEM this sum over elements is performed in classes such as the BilinearForm or LinearForm and their parallel counterparts. Elements come in a variety of shapes and they may be flat-sided or curved. For this reason it is much simpler to perform the element-wise integrations on reference elements which have relatively simple shapes. For example in 2D we might integrate over a unit square rather than an arbitrary quadrilateral. Finite element methods typically make the assumption that the functions to be integrated are non-singular and at least reasonably smooth. This enables us to employ families of relatively simple quadrature rules which are designed for accurately integrating polynomials. This is in contrast to boundary element methods which require more specialized rules which can accurately integrate singularities. Our rules take the form: $$\\int_{\\Omega_i} f(x)\\,d\\Omega \\approx \\sum_j w_j\\,f(x(u_j))\\,|J_i(u_j)|\\label{eq:quad_rule}$$ Where $w_j$ are the quadrature weights, $u_j$ are the quadrature points within the reference element, and $|J_i(u_j)|$ is the Jacobian determinant for element $i$ at the location $u_j$. Integrals at this level are typically computed by classes derived from BilinearFormIntegrator or LinearFormIntegrator , see Bilinear Form Integrators or Linear Form Integrators for numerous examples.", "title": "Integration"}, {"location": "integration/#integration-rules", "text": "The basic building block of an integration rule is the IntegrationPoint . This is a minimal object with member data 'x', 'y', 'z', and 'weight' (and an integer 'index' which indicates the point's place in an integration rule). These store the coordinates of the integration point in the reference coordinate system, $u_j$ from equation $\\ref{eq:quad_rule}$ is defined as $u_j\\equiv(x,y,z)$ , along with the quadrature weight, $w$ also from equation $\\ref{eq:quad_rule}$. Integration points can be collected together into an IntegrationRule object. IntegrationRule is little more than a container for the set of IntegrationPoint objects associated with an integration rule for a given order of accuracy within the domain of a specific reference element. IntegrationRule objects are in turn collected together into the IntRules global object. This object constructs and caches all IntegrationRule objects requested by the calling program. On one hand the IntRules global object is a container class which categorizes IntegrationRule objects by element type and order of accuracy but more importantly it is responsible for allocating IntegrationRule objects and populating them with appropriate IntegrationPoint objects. 
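Pulling together the steps from the Writing Custom Integrators section above (quadrature order, integration rule, element transformation, coefficient, and basis evaluation), here is a hedged sketch of how a complete mass-type assembly might look; the class CustomIntegrator and its scalar Coefficient member Q are assumptions carried over from the earlier fragments.

```cpp
void CustomIntegrator::AssembleElementMatrix2(const FiniteElement &trial_fe,
                                              const FiniteElement &test_fe,
                                              ElementTransformation &Trans,
                                              DenseMatrix &elmat)
{
   const int tr_dof = trial_fe.GetDof();
   const int te_dof = test_fe.GetDof();
   Vector tr_shape(tr_dof), te_shape(te_dof);

   elmat.SetSize(te_dof, tr_dof);
   elmat = 0.0;

   // Quadrature order and rule, as in the earlier example
   const int order = trial_fe.GetOrder() + test_fe.GetOrder() + Trans.OrderW();
   const IntegrationRule *ir = &IntRules.Get(Trans.GetGeometryType(), order);

   for (int i = 0; i < ir->GetNPoints(); i++)
   {
      const IntegrationPoint &ip = ir->IntPoint(i);
      Trans.SetIntPoint(&ip);

      // Basis functions in physical space at this quadrature point
      trial_fe.CalcPhysShape(Trans, tr_shape);
      test_fe.CalcPhysShape(Trans, te_shape);

      // Scalar coefficient (Q is an assumed Coefficient* member; 1 if absent)
      const double qval = (Q != NULL) ? Q->Eval(Trans, ip) : 1.0;

      // Accumulate w_j * |J| * q(x) * g_i(x) * f_j(x)
      const double w = ip.weight * Trans.Weight() * qval;
      for (int k = 0; k < te_dof; k++)
      {
         for (int l = 0; l < tr_dof; l++)
         {
            elmat(k, l) += w * te_shape(k) * tr_shape(l);
         }
      }
   }
}
```

For production use it is worth comparing such a sketch against the existing MassIntegrator and MixedScalarMassIntegrator implementations in the MFEM sources, which include additional optimizations.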
It is also possible to sidestep the IntRules global object and setup custom IntegrationRule objects. These custom integration rules can then be passed to BilinearFormIntegrator or LinearFormIntegrator objects (using custom integration rules with mixed meshes currently requires specialized handling).", "title": "Integration Rules"}, {"location": "integration/#coordinate-transformations", "text": "The coordinate transformation from the reference element to an individual mesh element is performed by the ElementTransformation class. Objects of this class are prepared by the Mesh object and retrieved in various ways depending on context. For standard mesh elements for (int e = 0; e < mesh->GetNE(); e++) { ElementTransformation *Trans = mesh->GetElementTransformation(e); ... } or for boundary elements for (int be = 0; be < mesh->GetNBE(); be++) { ElementTransformation *Trans = mesh->GetBdrElementTransformation(be); ... } or for faces (usually in a Discontinuous Galerkin (DG) context) for (int f = 0; f < mesh->GetNumFaces(); f++) { FaceElementTransformation *FETrans = mesh->GetFaceElementTransformation(f); ... } or, finally, for boundary faces in a DG context for (int bf = 0; bf < mesh->GetNBE(); bf++) { FaceElementTransformation *FETrans = mesh->GetBdrFaceElementTransformation(bf); ... } A FaceElementTransformation object is a convenience object for easily accessing the three ElementTransformation objects associated with a mesh face and its two neighboring elements. In the case of boundary faces one of the neighboring element transformation objects is not present. In addition to transforming coordinates between the reference and global coordinate systems an ElementTransformation object can be used to compute the following quantities related to the Jacobian matrix: Name C++ Expression Formula Jacobian Matrix const DenseMatrix &J = Trans.Jacobian() ${\\bf J}_{ij} = \\frac{\\partial x_i}{\\partial u_j}$ Jacobian Determinant double detJ = Trans.Weight() $\\det({\\bf J})$ Inverse Jacobian const DenseMatrix &InvJ = Trans.InverseJacobian() ${\\bf J}^{-1}$ Adjugate Jacobian const DenseMatrix &AdjJ = Trans.AdjugateJacobian() $\\det({\\bf J})\\,{\\bf J}^{-1}$ Since these quantities can be expensive to compute the ElementTransformation object will avoid recomputing values whenever possible. However, once a new quadrature point is set, using ElementTransformation::SetIntPoint() , any cached values will be overwritten by subsequent calls to the above functions.", "title": "Coordinate Transformations"}, {"location": "integration/#writing-custom-integrators", "text": "Element-wise integration arises in various places in the finite element method. A few of the most common occurrences are square and rectangular bilinear form operators, linear functionals, and the calculation of norms from field data. Type Primary Function Needing Implementation Square Operators BilinearFormIntegrator::AssembleElementMatrix Rectangular Operators BilinearFormIntegrator::AssembleElementMatrix2 Linear Functionals LinearFormIntegrator::AssembleRHSElementVect Development of a new norm or another custom integral might follow the code found in GridFunction::ComputeElementLpErrors . The pieces that are common to each of these include: Determination of the appropriate quadrature order Obtaining the quadrature rule for the appropriate element type Working with the ElementTransformation object Evaluating the function to be integrated An appropriate quadrature order depends on many variables. 
If we could restrict ourselves to integrating polynomials then a specific order would produce an exact result and a higher order would only incur additional effort. However, skewed or curved elements can introduce a rational polynomial factor through the inverse Jacobian of the element transformation. Furthermore, non-trivial material coefficients can introduce factors with arbitrary functional forms. Useful rules of thumb for linear and bilinear form integration orders are: (linear form order) = (basis function order) + (geometry order) (bilinear form order) = (domain basis function order) + (range basis function order) + (geometry order) It can be appropriate to lower the basis function order by one if a derivative of the basis function is being used. It might be appropriate to increase the order if the coefficient is expected to vary more rapidly but, in such a case, it would probably be more appropriate to further refine the mesh. Appropriate orders for computing norms should probably follow the guidance for bilinear forms since most common norms tend to be quadratic. For example a custom integrator for a rectangular operator might start with the following lines: void CustomIntegrator::AssembleElementMatrix2(const FiniteElement &trial_fe, const FiniteElement &test_fe, ElementTransformation &Trans, DenseMatrix &elmat) { // Determine an appropriate integration rule order int order = trial_fe.GetOrder() // Polynomial order of domain space + test_fe.GetOrder() // Polynomial order of range space + Trans.OrderW(); // Polynomial order of the geometry // Determine the element type: triangle, quadrilateral, tetrahedron, etc. Geometry::Type geom = Trans.GetGeometryType(); // Construct or retrieve an integration rule for the appropriate // reference element with the desired order of accuracy const IntegrationRule * ir = &IntRules.Get(Trans.GetGeometryType(), order); ... } This example uses the IntRules global object but custom integration rules could be provided through the use of a similar global object or by some other means. The next piece is to loop over the integration points and, in most cases, make use of the ElementTransformation object. ... // Loop over each quadrature point in the reference element for (int i = 0; i < ir->GetNPoints(); i++) { // Extract the current quadrature point from the integration rule const IntegrationPoint &ip = ir->IntPoint(i); // Prepare to evaluate the coordinate transformation at the current // quadrature point Trans.SetIntPoint(&ip); // Compute the Jacobian determinant at the current integration point double detJ = Trans.Weight(); ... } The final piece is to evaluate the function to be integrated. This often involves evaluation of a Coefficient object as well as one or two sets of basis functions or their derivatives. The coefficient should be straightforward, simply call its Eval method with the ElementTransformation and IntegrationPoint objects and perhaps a Vector or DenseMatrix to hold the resulting coefficient value when appropriate. Basis function evaluation can be a bit more complicated.", "title": "Writing Custom Integrators"}, {"location": "integration/#basis-function-evaluation", "text": "Some basis functions, particularly vector-valued basis functions, partially depend upon the geometry of the physical element in addition to their dependence on the reference element. The scalar basis functions provided by the H1_FECollection are straightforward. 
Simply call FiniteElement::CalcShape with the current quadrature point to retrieve a vector containing the values of each basis function evaluated at the given point in reference space. ... // Retrieve the number of basis functions int tr_dof = trial_fe.GetDof(); // Allocate a vector to hold the values of each basis function Vector tr_shape(tr_dof); // Loop over each quadrature point in the reference element for (int i = 0; i < ir->GetNPoints(); i++) { // Extract the current quadrature point from the integration rule const IntegrationPoint &ip = ir->IntPoint(i); ... // Evaluate the basis functions at the point ip trial_fe.CalcShape(ip, tr_shape); ... } For other types of basis functions it can be simpler to call CalcPhysShape or CalcPhysVShape . These, and similar evaluation functions with \"Phys\" in the name, internally perform the geometric transformation of the basis functions when necessary. This is clearly a convenience feature but it can lead to unnecessary computations when certain optimizations are possible. In the following table subscripts on the derivative operators indicate which coordinate system is being used to compute the derivative; 'x' for the physical coordinates and 'u' for the reference coordinates. Quantities with a caret above them indicate functions computed in the reference coordinate system. Family Evaluation Transformation H1 Basis None H1 Gradient of Basis $\\nabla_x\\varphi_i = (J^{-1})^T\\nabla_u\\hat{\\varphi}_i$ ND Basis $\\vec{W}_i = (J^{-1})^T\\hat{W}_i$ ND Curl of Basis $\\nabla_x\\times\\vec{W}_i = \\frac{1}{\\det(J)}J\\,\\nabla_u\\times\\hat{W}_i$ RT Basis $\\vec{F}_i = \\frac{1}{\\det(J)}J\\,\\hat{F}_i$ RT Divergence of Basis $\\nabla_x\\cdot\\vec{F}_i = \\frac{1}{\\det(J)}\\nabla_u\\cdot\\hat{F}_i$ L2 (INTEGRAL) Basis $\\psi_i = \\frac{1}{\\det(J)}\\hat{\\psi}_i$ L2 (VALUE) Basis None Use of these \"CalcPhys\" functions enable integrators to be used with a wider variety of basis function families without the need to explicitly handle these transformations within the integrator. This leads to more general implementations but at the possible cost of added computational expense. For example, a LinearFormIntegrator involving an L2 basis function using the INTEGRAL map type would both multiply and divide by the Jacobian determinant at each integration point. Clearly this is unnecessary and could significantly increase the computational effort needed to compute the integrals.", "title": "Basis Function Evaluation"}, {"location": "integration/#working-with-the-mixedscalarintegrator", "text": "The MixedScalarIntegrator is designed to help construct BilinearFormIntegrators which build an integrand from two sets of scalar-valued basis function evaluations. Such integrands will involve combinations of the following quantities: Scalar-valued basis functions obtained from CalcPhysShape Divergence of vector-valued basis functions obtained from CalcPhysDivShape Curl of vector-valued basis functions in 2D obtained from CalcPhysCurlShape Gradient of scalar-valued basis functions in 1D obtained from CalcPhysDShape An optional scalar coefficient To derive a custom integrator from MixedScalarIntegrator a developer need only define constructors for the custom integrator. Only one constructor is necessary but support of various coefficient types is often useful. 
class MixedScalarMassIntegrator : public MixedScalarIntegrator { public: MixedScalarMassIntegrator() { same_calc_shape = true; } MixedScalarMassIntegrator(Coefficient &q) : MixedScalarIntegrator(q) { same_calc_shape = true; } }; By default this integrator will compute the operator: $$a_{ij} = \\int_{\\Omega_e}q(x)\\,f_j(x)\\,g_i(x)\\,d\\Omega$$ Where $f_j$ and $g_i$ are two sets of scalar-valued basis functions which produces a \"mass\" matrix. The MixedScalarIntegrator has two public methods and five protected methods which can be overridden to customize the integrator. The public methods are AssembleElementMatrix for use with the BilinearForm class of square bilinear forms and AssembleElementMatrix2 for use with the MixedBilinearForm class of rectangular bilinear forms. Typically only one of these is necessary and the default implementations will often suffice. However, one or both of these methods may be overridden by a derived class if some customization is desired. For example, to implement optimizations related to coordinate transformations or custom integration rules, etc.. More commonly a derived class will need to override one or both of the CalcTestShape and CalcTrialShape methods which compute the necessary basis function values. For example the four types of scalar basis function evaluations supported by MixedScalarIntegrator could be obtained by these overrides of the trial (domain) finite element basis functions: /// Evaluate the scalar-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, Vector & shape) { trial_fe.CalcPhysShape(Trans, shape); } or /// Evaluate the divergence of the vector-valued basis functions virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, Vector & shape) { trial_fe.CalcPhysDivShape(Trans, shape); } or /// Evaluate the 2D curl of the vector-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, Vector & shape) { DenseMatrix dshape(shape.GetData(), shape.Size(), 1); trial_fe.CalcPhysCurlShape(Trans, dshape); } or /// Evaluate the 1D gradient of the scalar-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, Vector & shape) { DenseMatrix dshape(shape.GetData(), shape.Size(), 1); trial_fe.CalcPhysDShape(Trans, dshape); } Similar overrides could be implemented for the test (range) space. Of course other overrides are possible and may be quite useful for other custom integrators. The next override that is often advisable is VerifyFiniteElementTypes which provides a means of testing the FiniteElement objects passed by the BilinearForm class to make sure they support the evaluations needed by the CalcTestShape and CalcTrialShape methods. This override is optional but highly recommended. As an example the following override verifies that the geometry is one dimensional and that the trial (domain) space supports evaluation of the gradient of the basis functions. inline virtual bool VerifyFiniteElementTypes(const FiniteElement & trial_fe, const FiniteElement & test_fe ) const { return (trial_fe.GetDim() == 1 && test_fe.GetDim() == 1 && trial_fe.GetDerivType() == mfem::FiniteElement::GRAD && test_fe.GetRangeType() == mfem::FiniteElement::SCALAR ); } A related optional method can be used to output an appropriate error message in the event that unsuitable basis functions have been provided. 
For example the following error message might be appropriate in conjunction with the previous VerifyFiniteElementTypes implementation: inline virtual const char * FiniteElementTypeFailureMessage() const { return \"Trial and test spaces must both be scalar fields in 1D \" \"and the trial space must implement CalcDShape.\"; } The last optional protected method allows a certain flexibility in the choice of quadrature order. The default implementation is shown below but other choices may be suitable. inline virtual int GetIntegrationOrder(const FiniteElement & trial_fe, const FiniteElement & test_fe, ElementTransformation &Trans) { return trial_fe.GetOrder() + test_fe.GetOrder() + Trans.OrderW(); } A wide variety of bilinear forms can be easily implemented using the MixedScalarIntegrator . Most of these are probably already included in MFEM, see Bilinear Form Integrators for a listing, but other options may be useful.", "title": "Working with the MixedScalarIntegrator"}, {"location": "integration/#working-with-the-mixedvectorintegrator", "text": "The MixedVectorIntegrator is very similar in spirit to the MixedScalarIntegrator but the integrand in this case is computed as the inner product of two vectors. Such integrands will involve combinations of the following quantities: Vector-valued basis functions obtained from CalcVShape Gradient of scalar-valued basis functions obtained from CalcPhysDShape Curl of vector-valued basis functions in 3D obtained from CalcPhysCurlShape Optional scalar, vector, or matrix-valued coefficients By default this integrator will compute different operators based on coefficient type: Coefficient Type Default Integral Scalar $a_{ij} = \\int_{\\Omega_e}q(x)\\,\\vec{F}_j(x)\\cdot\\,\\vec{G}_i(x)\\,d\\Omega$ Matrix $a_{ij} = \\int_{\\Omega_e}\\left(Q(x)\\,\\vec{F}_j(x)\\right)\\cdot\\,\\vec{G}_i(x)\\,d\\Omega$ Vector $a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\times\\vec{F}_j(x)\\right)\\cdot\\,\\vec{G}_i(x)\\,d\\Omega$ Where $\\vec{F}_j$ and $\\vec{G}_i$ are two sets of vector-valued basis functions which produces a \"mass\" matrix. The MixedVectorIntegrator also has public and protected methods which may be overridden in an analogous manner to those in MixedScalarIntegrator to implement an even wider variety of custom integrators. Note that the default implementation of the assembly methods do assume a square matrix coefficient but this assumption could be removed if necessary. The CalcTestShape and CalcTrialShape methods which compute the necessary vector-valued basis function values might be overridden as follows: /// Evaluate the vector-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, DenseMatrix & shape) { trial_fe.CalcVShape(Trans, shape); } or /// Evaluate the gradient of the scalar-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, DenseMatrix & shape) { trial_fe.CalcPhysDShape(Trans, shape); } or /// Evaluate the 3D curl of the vector-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, DenseMatrix & shape) { trial_fe.CalcPhysCurlShape(Trans, shape); } Many of the possible MixedVectorIntegrator customizations are already included in MFEM. 
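As a point of reference for how such integrators are used once they exist, the following minimal sketch assembles MixedVectorGradientIntegrator, one of the MixedVectorIntegrator-based classes already provided by MFEM, into a rectangular operator. The names h1_fes and nd_fes are hypothetical placeholders for an H1 trial space and an ND test space, and the constant coefficient is used only to keep the sketch short:

// q(x) is a scalar coefficient; a constant value is used here purely for brevity.
ConstantCoefficient q(1.0);

// Assemble a_ij = integral of q * grad(u_j) . v_i, mapping h1_fes into nd_fes.
MixedBilinearForm grad_op(&h1_fes, &nd_fes);
grad_op.AddDomainIntegrator(new MixedVectorGradientIntegrator(q));
grad_op.Assemble();
grad_op.Finalize();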
See Bilinear Form Integrators for a listing.", "title": "Working with the MixedVectorIntegrator"}, {"location": "integration/#working-with-the-mixedscalarvectorintegrator", "text": "The MixedScalarVectorIntegrator follows naturally from the MixedScalarIntegrator and the MixedVectorIntegrator . The integrand in this case is computed as the product of a scalar basis function with a vector basis function. However, since the integrand must be scalar valued, a vector-valued coefficient will always be required. The types of scalar-valued basis functions will include: Scalar-valued basis functions obtained from CalcPhysShape Divergence of vector-valued basis functions obtained from CalcPhysDivShape Curl of vector-valued basis functions in 2D obtained from CalcPhysCurlShape Gradient of scalar-valued basis functions in 1D obtained from CalcPhysDShape The types of vector-valued basis functions will include: Vector-valued basis functions obtained from CalcVShape Gradient of scalar-valued basis functions obtained from CalcPhysDShape Curl of vector-valued basis functions in 3D obtained from CalcPhysCurlShape By default this integrator will compute different operators based on the choice of the trial and test spaces and, in 2D, how the vector coefficient should be employed: $$a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\,f_j(x)\\right)\\cdot\\vec{G}_i(x)\\,d\\Omega\\label{msv_def}$$ or $$a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\cdot\\vec{F}_j(x)\\right)\\,g_i(x)\\,d\\Omega\\label{msv_trans}$$ or in 2D there is an option to compute $$a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\,f_j(x)\\right)\\times\\vec{G}_i(x)\\,d\\Omega\\label{msv_2d_def}$$ or (again optionally in 2D) $$a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\times\\vec{F}_j(x)\\right)\\,g_i(x)\\,d\\Omega\\label{msv_2d_trans}$$ The methods that a developer may choose to override are again quite similar to those in MixedScalarIntegrator and MixedVectorIntegrator . The main difference is the basis function overrides which have been renamed to CalcShape for the scalar-valued basis and CalcVShape for the vector-valued basis. By default it is assumed that the trial (domain) space is scalar-valued and the test (range) space is vector-valued as in equations \\ref{msv_def} and \\ref{msv_2d_def}. The choice of trial and test spaces is here controlled by a transpose option in the MixedScalarVectorIntegrator constructor. If transpose == true then equations \\ref{msv_trans} and \\ref{msv_2d_trans} are assumed. The choice between equations \\ref{msv_def} and \\ref{msv_trans} on the one hand and equations \\ref{msv_2d_def} and \\ref{msv_2d_trans} on the other is made with the cross_2d optional constructor argument. There are several customizations of this integrator included in MFEM but others are possible. See Bilinear Form Integrators for a listing. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Working with the MixedScalarVectorIntegrator"}, {"location": "lininteg/", "text": "Linear Form Integrators $ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} $ Linear form integrators are the right-hand side companion to Bilinear Form Integrators that compute the integrals of products of a basis function and a given \"right-hand side\" function (coefficient) $\\,f$ over individual mesh elements (or sometimes over edges or faces). 
Typically each element is contained in the support of several basis functions, therefore linear integrators simultaneously compute the integrals of all combinations of the relevant basis functions with the given input function $\\,f$. This produces a one dimensional array of results that is arranged into a small vector of integral (dual) values called a local element (load) vector . To put this another way, the LinearForm class builds a global vector, glb_vec , by performing the outer loop in the following pseudocode snippet whereas the LinearFormIntegrator class performs the nested inner loops to compute the local vector, loc_vec . for each elem in elements loc_vec = 0.0 for each pt in quadrature_points for each v_i in elem loc_vec(i) += w(pt) * rhs(pt) v_i(pt) end end glb_vec += loc_vec end There are three types of integrals that typically arise although many other, more exotic, forms are possible: Integrals involving Scalar rhs $\\,f$ and basis functions: $\\int_\\Omega\\, f v$ Integrals involving Vector rhs $\\,\\vec{f}$ and basis functions: $\\int_\\Omega\\, \\vec{f}\\cdot\\vec{v}$ Integrals involving mix of Scalar and Vector rhs $\\,\\vec{f}$ and basis functions: $\\int_\\Omega f\\,\\vec{\\lambda}\\cdot\\vec{v}$ and $\\int_\\Omega v\\,\\vec{\\lambda}\\cdot\\vec{f}$ The LinearFormIntegrator classes allow MFEM to produce a wide variety of local element vectors without modifying the LinearForm class. Many of the possible operators are collected below into tables that briefly describe their action and requirements. In the tables below the Space column refers to finite element spaces which implement the following methods: Space Operator Derivative Operator H1 CalcShape CalcDShape ND CalcVShape CalcCurlShape RT CalcVShape CalcDivShape L2 CalcShape None Notation: $$\\{(f, v)\\}_i\\equiv \\int_\\Omega f v_i$$ $$\\{(\\vec{F}, \\vec{v})\\}_i\\equiv \\int_\\Omega \\lambda \\vec{F}\\cdot\\vec{v}_i$$ For boundary integrators, the integrals are over $\\partial \\Omega$. Face integrators integrate over the interior and boundary faces of mesh elements and are denoted with $\\left<\\cdot,\\cdot\\right>$. Scalar Field Operators Domain Integrators Class Name Space Operator Continuous Op. Dimension DomainLFIntegrator H1, L2 $(f, v)$ $f$ 1D, 2D, 3D DomainLFGradIntegrator H1 $(\\vec{f}, \\nabla v)$ $-\\nabla \\cdot \\vec{f}$ 1D, 2D, 3D Boundary Integrators Class Name Space Operator Continuous Op. Dimension BoundaryLFIntegrator H1, L2 $(f, v)$ $f$ 1D, 2D, 3D BoundaryNormalLFIntegrator H1, L2 $(\\vec{f} \\cdot \\hat{n}, v)$ $\\vec{f} \\cdot \\hat{n}$ 1D, 2D, 3D BoundaryTangentialLFIntegrator H1, L2 $(\\vec{f} \\cdot \\hat{\\tau}, v)$ $\\vec{f} \\cdot \\hat{\\tau}$ 2D BoundaryFlowIntegrator H1, L2 $\\frac{\\alpha}{2}\\, \\left< (\\vec{u} \\cdot \\hat{n})\\, f, v \\right> - \\beta\\, \\left<\\mid \\vec{u} \\cdot \\hat{n} \\mid f, v \\right>$ $\\frac{\\alpha}{2} (\\vec{u} \\cdot \\hat{n})\\, f - \\beta \\mid \\vec{u} \\cdot \\hat{n} \\mid f$ 1D, 2D, 3D Face Integrators Class Name Space Operator Continuous Op. Dimension DGDirichletLFIntegrator L2 $\\sigma \\left< u_D, Q \\nabla v \\cdot \\hat{n} \\right> + \\kappa \\left< \\{h^{-1} Q\\} u_D, v \\right>$ DG essential BCs for $u_D$ 1D, 2D, 3D Vector Field Operators Domain Integrators Class Name Space Operator Continuous Op. 
Dimension VectorDomainLFIntegrator H1, L2 $(\\vec{f}, \\vec{v})$ $\\vec{f}$ 1D, 2D, 3D VectorFEDomainLFIntegrator ND, RT $(\\vec{f}, \\vec{v})$ $\\vec{f}$ 2D, 3D VectorFEDomainLFCurlIntegrator ND $(\\vec{f}, \\nabla \\times \\vec{v})$ $\\nabla \\times \\vec{f}$ 2D, 3D VectorFEDomainLFDivIntegrator RT $(f, \\nabla \\cdot \\vec{v})$ $ - \\nabla f$ 2D, 3D Boundary Integrators Class Name Space Operator Continuous Op. Dimension VectorBoundaryLFIntegrator H1, L2 $( \\vec{f}, \\vec{v} )$ $\\vec{f}$ 1D, 2D, 3D VectorBoundaryFluxLFIntegrator H1, L2 $( f, \\vec{v} \\cdot \\hat{n} )$ $f \\hat{n}$ 1D, 2D, 3D VectorFEBoundaryFluxLFIntegrator RT $( f, \\vec{v} \\cdot \\hat{n} )$ $f \\hat{n}$ 2D, 3D VectorFEBoundaryTangentLFIntegrator ND $( \\hat{n} \\times \\vec{f}, \\vec{v} )$ $\\hat{n} \\times \\vec{f}$ 2D, 3D Face Integrators Class Name Space Operator Continuous Op. Dimension DGElasticityDirichletLFIntegrator L2 $\\alpha\\left<\\vec{u_D}, \\left(\\lambda \\left(\\div \\vec{v}\\right) I + \\mu \\left(\\nabla\\vec{v} + \\nabla\\vec{v}^T\\right)\\right) \\cdot \\hat{n}\\right> \\\\ + \\kappa\\left< h^{-1} (\\lambda + 2 \\mu) \\vec{u_D}, \\vec{v} \\right>$ DG essential BCs for $\\vec{u_D}$ 1D, 2D, 3D MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Linear Form Integrators"}, {"location": "lininteg/#linear-form-integrators", "text": "$ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} $ Linear form integrators are the right-hand side companion to Bilinear Form Integrators that compute the integrals of products of a basis function and a given \"right-hand side\" function (coefficient) $\\,f$ over individual mesh elements (or sometimes over edges or faces). Typically each element is contained in the support of several basis functions, therefore linear integrators simultaneously compute the integrals of all combinations of the relevant basis functions with the given input function $\\,f$. This produces a one dimensional array of results that is arranged into a small vector of integral (dual) values called a local element (load) vector . To put this another way, the LinearForm class builds a global vector, glb_vec , by performing the outer loop in the following pseudocode snippet whereas the LinearFormIntegrator class performs the nested inner loops to compute the local vector, loc_vec . for each elem in elements loc_vec = 0.0 for each pt in quadrature_points for each v_i in elem loc_vec(i) += w(pt) * rhs(pt) v_i(pt) end end glb_vec += loc_vec end There are three types of integrals that typically arise although many other, more exotic, forms are possible: Integrals involving Scalar rhs $\\,f$ and basis functions: $\\int_\\Omega\\, f v$ Integrals involving Vector rhs $\\,\\vec{f}$ and basis functions: $\\int_\\Omega\\, \\vec{f}\\cdot\\vec{v}$ Integrals involving mix of Scalar and Vector rhs $\\,\\vec{f}$ and basis functions: $\\int_\\Omega f\\,\\vec{\\lambda}\\cdot\\vec{v}$ and $\\int_\\Omega v\\,\\vec{\\lambda}\\cdot\\vec{f}$ The LinearFormIntegrator classes allow MFEM to produce a wide variety of local element vectors without modifying the LinearForm class. Many of the possible operators are collected below into tables that briefly describe their action and requirements. 
In the tables below the Space column refers to finite element spaces which implement the following methods: Space Operator Derivative Operator H1 CalcShape CalcDShape ND CalcVShape CalcCurlShape RT CalcVShape CalcDivShape L2 CalcShape None Notation: $$\\{(f, v)\\}_i\\equiv \\int_\\Omega f v_i$$ $$\\{(\\vec{F}, \\vec{v})\\}_i\\equiv \\int_\\Omega \\lambda \\vec{F}\\cdot\\vec{v}_i$$ For boundary integrators, the integrals are over $\\partial \\Omega$. Face integrators integrate over the interior and boundary faces of mesh elements and are denoted with $\\left<\\cdot,\\cdot\\right>$.", "title": "Linear Form Integrators"}, {"location": "lininteg/#scalar-field-operators", "text": "", "title": "Scalar Field Operators"}, {"location": "lininteg/#domain-integrators", "text": "Class Name Space Operator Continuous Op. Dimension DomainLFIntegrator H1, L2 $(f, v)$ $f$ 1D, 2D, 3D DomainLFGradIntegrator H1 $(\\vec{f}, \\nabla v)$ $-\\nabla \\cdot \\vec{f}$ 1D, 2D, 3D", "title": "Domain Integrators"}, {"location": "lininteg/#boundary-integrators", "text": "Class Name Space Operator Continuous Op. Dimension BoundaryLFIntegrator H1, L2 $(f, v)$ $f$ 1D, 2D, 3D BoundaryNormalLFIntegrator H1, L2 $(\\vec{f} \\cdot \\hat{n}, v)$ $\\vec{f} \\cdot \\hat{n}$ 1D, 2D, 3D BoundaryTangentialLFIntegrator H1, L2 $(\\vec{f} \\cdot \\hat{\\tau}, v)$ $\\vec{f} \\cdot \\hat{\\tau}$ 2D BoundaryFlowIntegrator H1, L2 $\\frac{\\alpha}{2}\\, \\left< (\\vec{u} \\cdot \\hat{n})\\, f, v \\right> - \\beta\\, \\left<\\mid \\vec{u} \\cdot \\hat{n} \\mid f, v \\right>$ $\\frac{\\alpha}{2} (\\vec{u} \\cdot \\hat{n})\\, f - \\beta \\mid \\vec{u} \\cdot \\hat{n} \\mid f$ 1D, 2D, 3D", "title": "Boundary Integrators"}, {"location": "lininteg/#face-integrators", "text": "Class Name Space Operator Continuous Op. Dimension DGDirichletLFIntegrator L2 $\\sigma \\left< u_D, Q \\nabla v \\cdot \\hat{n} \\right> + \\kappa \\left< \\{h^{-1} Q\\} u_D, v \\right>$ DG essential BCs for $u_D$ 1D, 2D, 3D", "title": "Face Integrators"}, {"location": "lininteg/#vector-field-operators", "text": "", "title": "Vector Field Operators"}, {"location": "lininteg/#domain-integrators_1", "text": "Class Name Space Operator Continuous Op. Dimension VectorDomainLFIntegrator H1, L2 $(\\vec{f}, \\vec{v})$ $\\vec{f}$ 1D, 2D, 3D VectorFEDomainLFIntegrator ND, RT $(\\vec{f}, \\vec{v})$ $\\vec{f}$ 2D, 3D VectorFEDomainLFCurlIntegrator ND $(\\vec{f}, \\nabla \\times \\vec{v})$ $\\nabla \\times \\vec{f}$ 2D, 3D VectorFEDomainLFDivIntegrator RT $(f, \\nabla \\cdot \\vec{v})$ $ - \\nabla f$ 2D, 3D", "title": "Domain Integrators"}, {"location": "lininteg/#boundary-integrators_1", "text": "Class Name Space Operator Continuous Op. Dimension VectorBoundaryLFIntegrator H1, L2 $( \\vec{f}, \\vec{v} )$ $\\vec{f}$ 1D, 2D, 3D VectorBoundaryFluxLFIntegrator H1, L2 $( f, \\vec{v} \\cdot \\hat{n} )$ $f \\hat{n}$ 1D, 2D, 3D VectorFEBoundaryFluxLFIntegrator RT $( f, \\vec{v} \\cdot \\hat{n} )$ $f \\hat{n}$ 2D, 3D VectorFEBoundaryTangentLFIntegrator ND $( \\hat{n} \\times \\vec{f}, \\vec{v} )$ $\\hat{n} \\times \\vec{f}$ 2D, 3D", "title": "Boundary Integrators"}, {"location": "lininteg/#face-integrators_1", "text": "Class Name Space Operator Continuous Op. 
Dimension DGElasticityDirichletLFIntegrator L2 $\\alpha\\left<\\vec{u_D}, \\left(\\lambda \\left(\\div \\vec{v}\\right) I + \\mu \\left(\\nabla\\vec{v} + \\nabla\\vec{v}^T\\right)\\right) \\cdot \\hat{n}\\right> \\\\ + \\kappa\\left< h^{-1} (\\lambda + 2 \\mu) \\vec{u_D}, \\vec{v} \\right>$ DG essential BCs for $\\vec{u_D}$ 1D, 2D, 3D MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Face Integrators"}, {"location": "lininterp/", "text": "Linear Interpolators $ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} \\newcommand{\\abs}[1]{|#1|} $ Linear interpolators can be very useful for interpolating one discrete representation of a field onto another set of basis functions to produce another representation. However, this must be done with care because different discrete representations are not completely interchangeable. As an example consider a scalar field projected onto either piece-wise linear ($H_1$) or piece-wise constant ($L_2$) basis functions. Interpolating from an $H_1$ representation to an $L_2$ representation should produce a reasonable result because the constant value needed in each element can be computed as a weighted sum of the $H_1$ basis functions in that element. On the other hand, if we try to interpolate from the $L_2$ representation to an $H_1$ representation we don't have enough information to determine reasonable values for the degrees of freedom which are shared between neighboring elements because linear interpolators can only access one element at a time. To accurately compute an $H_1$ representation from an $L_2$ representation requires the type of weighted average of values from neighboring elements that bilinear forms provide but this requires a linear solve and often suitable boundary conditions. The operators produced by the BilinearForm classes involve integrations and therefore they sum the various contributions from neighboring elements to compute a full integral. The DiscreteLinearOperator classes are not performing integrals but rather interpolations and as such they do not combine contributions from different elements in any way. Consequently if the LinearInterpolator s produce different results for entities that are shared between neighboring elements then the resulting representation will depend on the order in which the elements are processed. Such operators are not good candidates for DiscreteLinearOperator s. The sections below will offer some guidance on the appropriate use of these operators. In the tables below the Space column refers to finite element spaces which implement the following methods: Space Operator Derivative Operator H1 CalcShape CalcDShape ND CalcVShape CalcCurlShape RT CalcVShape CalcDivShape L2 CalcShape None The Coef. column refers to the types of coefficients that are available. A boldface coefficient type is required whereas most coefficients are optional. Coef. Type S Scalar Valued Function V Vector Valued Function D Diagonal Matrix Function M General Matrix Function Derivative Interpolators The $H(Curl)$ and $H(Div)$ spaces are specifically designed to support these derivative operators by having the necessary inter-element continuity. Other possible derivative operators would not possess the correct continuity and must therefore be implemented in a weak sense. 
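For instance, the GradientInterpolator listed in the table below can be assembled into a discrete gradient operator using a DiscreteLinearOperator. The following is a minimal serial sketch in which the mesh, order, and dim variables are assumed to already exist:

// Hypothetical setup: mesh, order, and dim are assumed to be defined elsewhere.
H1_FECollection h1_fec(order, dim);
ND_FECollection nd_fec(order, dim);
FiniteElementSpace h1_fes(&mesh, &h1_fec);
FiniteElementSpace nd_fes(&mesh, &nd_fec);

// Build the discrete gradient mapping H1 degrees of freedom to ND degrees of freedom.
DiscreteLinearOperator grad_op(&h1_fes, &nd_fes);
grad_op.AddDomainInterpolator(new GradientInterpolator);
grad_op.Assemble();
grad_op.Finalize();

// Interpolate: given a scalar field u in H1, compute e = grad u in the ND space.
FunctionCoefficient phi([](const Vector &x) { return x[0] * x[0]; });
GridFunction u(&h1_fes), e(&nd_fes);
u.ProjectCoefficient(phi);
grad_op.Mult(u, e);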
Class Name Domain Range Operator GradientInterpolator H1 ND $\\grad u$ CurlInterpolator ND in 3D RT $\\curl\\vec{u}$ CurlInterpolator ND in 2D L2 $\\hat{z}\\cdot(\\curl\\vec{u})$ DivergenceInterpolator RT L2 $\\div\\vec{u}$ Product Interpolators These operators require a bit more care than the previous set. In order for these operators to produce valid results the product of the coefficient with the domain space must be uniquely representable within the desired range space. Additionally, it may sometimes be desirable for the range space to have a higher order than the domain space if the coefficient is not constant. For example if the domain space and the coefficient are both linear it might be desirable, though not necessary, for the range space to be quadratic. Class Name Domain Range Coef. Operator ScalarProductInterpolator H1,L2 H1,L2 S $\\lambda u$ ScalarVectorProductInterpolator ND,RT ND,RT S $\\lambda\\vec{u}$ VectorScalarProductInterpolator H1,L2 ND,RT V $\\vec{\\lambda}u$ VectorCrossProductInterpolator ND,RT in 3D ND,RT V $\\vec{\\lambda}\\times\\vec{u}$ ScalarCrossProductInterpolator ND,RT in 2D H1,L2 V $\\hat{z}\\cdot(\\vec{\\lambda}\\times\\vec{u})$ VectorInnerProductInterpolator ND,RT H1,L2 V $\\vec{\\lambda}\\cdot\\vec{u}$ Special Purpose Interpolators Class Name Domain Range Operator IdentityInterpolator H1,L2 H1,L2 $u$ NormalInterpolator H1$^d$ RT_Trace $\\hat{n}\\cdot\\vec{u}$ MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Linear Interpolators"}, {"location": "lininterp/#linear-interpolators", "text": "$ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} \\newcommand{\\abs}[1]{|#1|} $ Linear interpolators can be very useful for interpolating one discrete representation of a field onto another set of basis functions to produce another representation. However, this must be done with care because different discrete representations are not completely interchangeable. As an example consider a scalar field projected onto either piece-wise linear ($H_1$) or piece-wise constant ($L_2$) basis functions. Interpolating from an $H_1$ representation to an $L_2$ representation should produce a reasonable result because the constant value needed in each element can be computed as a weighted sum of the $H_1$ basis functions in that element. On the other hand, if we try to interpolate from the $L_2$ representation to an $H_1$ representation we don't have enough information to determine reasonable values for the degrees of freedom which are shared between neighboring elements because linear interpolators can only access one element at a time. To accurately compute an $H_1$ representation from an $L_2$ representation requires the type of weighted average of values from neighboring elements that bilinear forms provide but this requires a linear solve and often suitable boundary conditions. The operators produced by the BilinearForm classes involve integrations and therefore they sum the various contributions from neighboring elements to compute a full integral. The DiscreteLinearOperator classes are not performing integrals but rather interpolations and as such they do not combine contributions from different elements in any way. 
Consequently if the LinearInterpolator s produce different results for entities that are shared between neighboring elements then the resulting representation will depend on the order in which the elements are processed. Such operators are not good candidates for DiscreteLinearOperator s. The sections below will offer some guidance on the appropriate use of these operators. In the tables below the Space column refers to finite element spaces which implement the following methods: Space Operator Derivative Operator H1 CalcShape CalcDShape ND CalcVShape CalcCurlShape RT CalcVShape CalcDivShape L2 CalcShape None The Coef. column refers to the types of coefficients that are available. A boldface coefficient type is required whereas most coefficients are optional. Coef. Type S Scalar Valued Function V Vector Valued Function D Diagonal Matrix Function M General Matrix Function", "title": "Linear Interpolators"}, {"location": "lininterp/#derivative-interpolators", "text": "The $H(Curl)$ and $H(Div)$ spaces are specifically designed to support these derivative operators by having the necessary inter-element continuity. Other possible derivative operators would not possess the correct continuity and must therefore be implemented in a weak sense. Class Name Domain Range Operator GradientInterpolator H1 ND $\\grad u$ CurlInterpolator ND in 3D RT $\\curl\\vec{u}$ CurlInterpolator ND in 2D L2 $\\hat{z}\\cdot(\\curl\\vec{u})$ DivergenceInterpolator RT L2 $\\div\\vec{u}$", "title": "Derivative Interpolators"}, {"location": "lininterp/#product-interpolators", "text": "These operators require a bit more care than the previous set. In order for these operators to produce valid results the product of the coefficient with the domain space must be uniquely representable within the desired range space. Additionally, it may sometimes be desirable for the range space to have a higher order than the domain space if the coefficient is not constant. For example if the domain space and the coefficient are both linear it might be desirable, though not necessary, for the range space to be quadratic. Class Name Domain Range Coef. 
Operator ScalarProductInterpolator H1,L2 H1,L2 S $\\lambda u$ ScalarVectorProductInterpolator ND,RT ND,RT S $\\lambda\\vec{u}$ VectorScalarProductInterpolator H1,L2 ND,RT V $\\vec{\\lambda}u$ VectorCrossProductInterpolator ND,RT in 3D ND,RT V $\\vec{\\lambda}\\times\\vec{u}$ ScalarCrossProductInterpolator ND,RT in 2D H1,L2 V $\\hat{z}\\cdot(\\vec{\\lambda}\\times\\vec{u})$ VectorInnerProductInterpolator ND,RT H1,L2 V $\\vec{\\lambda}\\cdot\\vec{u}$", "title": "Product Interpolators"}, {"location": "lininterp/#special-purpose-interpolators", "text": "Class Name Domain Range Operator IdentityInterpolator H1,L2 H1,L2 $u$ NormalInterpolator H1$^d$ RT_Trace $\\hat{n}\\cdot\\vec{u}$ MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Special Purpose Interpolators"}, {"location": "maxwell-notes/", "text": "Maxwell's Equations $$\\begin{align} \\nabla\\times{\\bf H}& = & \\frac{\\partial{\\bf D}}{\\partial t} + {\\bf J}+ \\overline{\\sigma}{\\bf E}\\label{ampere} \\\\ \\nabla\\times{\\bf E}& = & -\\frac{\\partial{\\bf B}}{\\partial t} - {\\bf M}- \\overline{\\sigma}_M{\\bf H}\\label{faraday} \\\\ \\nabla\\cdot{\\bf D}& = & \\rho\\label{gauss} \\\\ \\nabla\\cdot{\\bf B}& = & 0\\label{trans} \\end{align}$$ With electric current density, ${\\bf J}$, magnetic current density, ${\\bf M}$, electric conductivity, $\\overline{\\sigma}$, magnetic conductivity, $\\overline{\\sigma}_M$, and electric charge density, $\\rho$. We will sometimes refer to these equations by the names Amp\u00e8re's Law, Faraday's Law, Gauss's Law, and the Transversality Condition respectively. It is also necessary to define the constitutive relations ${\\bf D}\\equiv\\epsilon{\\bf E}$ and ${\\bf B}\\equiv\\mu{\\bf H}$. It is also common to combine equations \\eqref{ampere} and \\eqref{faraday} into a single second order PDE. $$\\begin{align} \\frac{\\partial^2\\left(\\epsilon{\\bf E}\\right)}{\\partial t^2} + \\frac{\\partial\\left(\\overline{\\sigma}{\\bf E}\\right)}{\\partial t} + \\nabla\\times\\left(\\mu^{-1}\\nabla\\times{\\bf E}\\right) & \\nonumber \\\\ + \\nabla\\times\\left(\\mu^{-1}\\overline{\\sigma}_M{\\bf H}\\right) & = -\\frac{\\partial{\\bf J}}{\\partial t} - \\nabla\\times\\left(\\mu^{-1}{\\bf M}\\right) \\label{curlcurle} %\\end{align} {or}&\\\\ %\\begin{equation} \\frac{\\partial^2\\left(\\mu{\\bf H}\\right)}{\\partial t^2} + \\frac{\\partial\\left(\\overline{\\sigma}_M{\\bf H}\\right)}{\\partial t} + \\nabla\\times\\left(\\epsilon^{-1}\\nabla\\times{\\bf H}\\right) & \\nonumber \\\\ - \\nabla\\times\\left(\\epsilon^{-1}\\overline{\\sigma}{\\bf E}\\right) & = -\\frac{\\partial{\\bf M}}{\\partial t} +\\nabla\\times\\left(\\epsilon^{-1}{\\bf J}\\right) \\label{curlcurlh} \\end{align}$$ One drawback of these formulations is the appearance of ${\\bf H}$ in equation \\eqref{curlcurle} or ${\\bf E}$ in equation \\eqref{curlcurlh}. The only way to formulate these equations entirely in terms of ${\\bf E}$ or ${\\bf H}$ is to make assumptions about the spatial variation of $\\epsilon^{-1}\\overline{\\sigma}$ or $\\mu^{-1}\\overline{\\sigma}_M$. For this reason these second order formulations should be avoided unless $\\overline{\\sigma}_M=0$ or $\\overline{\\sigma}=0$. Discretization Basis Functions There are two sets of basis functions particularly well suited for electromagnetics; Nedelec and Raviart-Thomas. The Nedelec basis functions guarantee tangential continuity of their approximations across element interfaces. 
This makes them well suited for the fields ${\\bf E}$ and ${\\bf H}$ which share this constraint on material interfaces. The Raviart-Thomas basis functions guarantee continuity of the normal component of their approximations across element interfaces. This makes them well suited for the fields ${\\bf B}$ and ${\\bf D}$ which share this constraint on material interfaces. The Nedelec basis functions which discretize the H(Curl) space are indispensable due to the presence of the Curl operators in equations \\eqref{ampere}, \\eqref{faraday}, \\eqref{curlcurle}, and \\eqref{curlcurlh}. The Raviart-Thomas basis functions which discretize the H(Div) space are convenient and reduce the computational cost but are optional, strictly speaking. Discretization of the primary fields There are three choices for discretizing the set of coupled first order partial differential equations: ${\\bf E}\\in$ H(Curl) and ${\\bf B},{\\bf J},{\\bf M}\\in$ H(Div) ${\\bf H}\\in$ H(Curl) and ${\\bf D},{\\bf J},{\\bf M}\\in$ H(Div) ${\\bf E}\\in$ H(Curl), ${\\bf H}\\in$ H(Curl), and ${\\bf J},{\\bf M}\\in$ H(Curl) (grudgingly) There is only one choice for discretizing the second order equations i.e. ${\\bf E}$ or ${\\bf H}$ in H(Curl). These basis function choices merely ensure that the approximate fields maintain the proper interface constraints at material boundaries. The choice of formulation can be made based on the required sources, boundary conditions, and/or post-processing requirements. Hence, different physical requirements can lead to different choices of formulation i.e. there is no single best choice for all problems. Discretization of ${\\bf J}$ and ${\\bf M}$ The electric and magnetic current source densities are both flux vectors and as such they are best represented using the H(Div) space. This is most apparent when modeling the eddy current equation but H(Div) can be important in wave equations as well. Imagine modeling a current carrying conductor surrounded by some insulating material. The current density ${\\bf J}$ may be non-zero inside the conductor but it should be identically zero outside of it. Assuming the computational mesh conforms to the surface of this conductor, an H(Div) field can accurately represent such a current flow as long as the current at the surface of the conductor remains parallel to that surface. In other words the current will not \"leak\" out of the conductor as long as the normal component of the current is zero at the surface. On the other hand, if H(Curl) basis functions were used for ${\\bf J}$ its tangential components would need to be continuous across the surface of the conductor. This produces a non-physical current within the first layer of elements surrounding the conductor. Non-physical currents leaking out of conductors when using H(Curl) basis functions for the current density ${\\bf J}$ can lead to inaccurate eddy current simulations either by producing a larger than expected magnetic field outside the conductor or a reduced thermal heat load within the conductor. Similarly, in wave simulations the total power emanating from an antenna can be either over- or under-estimated depending upon how ${\\bf J}$ is computed on the surface of the antenna. Such matters can be eliminated by simply representing ${\\bf J}$ as an H(Div) function. I'm sure similar arguments can be made for the magnetization ${\\bf M}$ although I have less experience with that. 
The maxwell Miniapp The maxwell Miniapp uses the EB formulation with $\\overline{\\sigma}_M$ and ${\\bf M}$ assumed to be zero. It evolves the first order coupled system of equations using a symplectic time integration algorithm by Candy and Rozmus described in \"A Symplectic Integration Algorithm for Separable Hamiltonian Functions\", Journal of Computational Physics, Vol. 92, pages 230-256 (1991). The main advantage of this algorithm is that it conserves energy. Another advantage is that the approximations of ${\\bf E}$ and ${\\bf B}$ correspond to the same simulation time rather than being staggered as in other methods. The variable order symplectic integration class in MFEM called SIAVSolver requires that we implement our coupled set of PDEs as a pair of operators. The first is an Operator which can be used to update the magnetic field, ${\\bf B}$, using Faraday's Law by computing $-\\nabla\\times{\\bf E}$. The second is a TimeDependentOperator which can be used to update the electric field, ${\\bf E}$, using Amp\u00e8re's Law by computing $\\nabla\\times\\left(\\mu^{-1}{\\bf B}\\right)-{\\bf J}-\\overline{\\sigma}{\\bf E}$. We choose to implement both of these operators in a single class which we call MaxwellSolver . The first operator, $-\\nabla\\times{\\bf E}$, acts on ${\\bf E}\\in$ H(Curl) to produce a result $\\frac{\\partial{\\bf B}}{\\partial t}\\in$ H(Div). By design our discrete representation of H(Div) contains the curl of any field in our discrete representation of H(curl). Consequently we can compute this operator by simply evaluating the curl of our H(Curl) basis functions in terms of our H(Div) basis functions. This evaluation is handled by a DiscreteInterpolator called CurlInterpolator . The process of looping over each element to compute these interpolations is conducted by the ParDiscreteLinearOperator . In the MaxwellSolver this curl operator is simply named Curl_ and its negative, needed by the SIAVSolver , is named NegCurl_ . These operators are setup between lines 227 and 236 of the file maxwell_solver.cpp . The second operator, $\\nabla\\times\\left(\\mu^{-1}{\\bf B}\\right)-{\\bf J}-\\overline{\\sigma}{\\bf E}$, requires a bit more effort. The first thing to notice is that we cannot compute the curl of $\\mu^{-1}{\\bf B}$ precisely. Primarily this is due to the fact that ${\\bf B}\\in$ H(Div) rather than H(Curl) but, in general, the presence of $\\mu^{-1}$ is also a problem since we don't know its derivatives at all. These complications require that we compute the curl operator in a weak sense. Setup of the TimeDependentOperator Weak curl of $\\mu^{-1}{\\bf B}$ Often in wave propagation $\\mu$ is assumed to be constant but we will not make this assumption. In principle $\\mu$ could be anisotropic and inhomogeneous although we do assume it is constant in time. The magnetic field ${\\bf B}$ will be written as a linear combination of basis functions in H(Div) which we will label as ${\\bf F}_i$ e.g. ${\\bf B}(\\vec{x})\\approx\\sum_i b_i(t){\\bf F}_i(\\vec{x})$. Our goal is to compute $\\frac{\\partial{\\bf E}}{\\partial t}$ where ${\\bf E}\\in$ H(Curl) so we need to represent $\\nabla\\times\\mu^{-1}{\\bf B}$ also in H(Curl). The basis functions of H(Curl) will be labeled as ${\\bf W}_i$. To compute the weak form of this term we multiply the operator of interest by each of our H(Curl) basis functions and integrate over the entire problem domain to obtain an equation corresponding to each basis function in H(Curl). 
For example $$\\begin{align} \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}{\\bf B})] d\\Omega &=& \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}\\sum_j b_j{\\bf F}_j(\\vec{x}))] d\\Omega \\\\ &=& \\sum_j b_j\\{\\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}{\\bf F}_j(\\vec{x}))] d\\Omega \\} \\end{align}$$ The expression in curly braces depends only on our material coefficient and our basis functions so we can precompute this if we assume $\\mu$ does not change in time. This particular integral requires a little more manipulation to move the curl operator onto the H(Curl) basis function. $$\\begin{align} \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot\\left[\\nabla\\times\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\right]\\,d\\Omega &=& \\int_\\Omega\\left(\\nabla\\times{\\bf W}_i(\\vec{x})\\right)\\cdot\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\,d\\Omega \\\\ &-& \\int_\\Omega\\nabla\\cdot\\left[{\\bf W}_i(\\vec{x})\\times\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\right]\\,d\\Omega \\\\ &=& \\int_\\Omega\\left(\\mu^{-T}\\nabla\\times{\\bf W}_i(\\vec{x})\\right)\\cdot{\\bf F}_j(\\vec{x})\\,d\\Omega \\\\ &-& \\int_\\Gamma\\hat{n}\\cdot\\left[{\\bf W}_i(\\vec{x})\\times\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\right]\\,d\\Gamma \\\\ &=& \\int_\\Omega\\left(\\mu^{-T}\\nabla\\times{\\bf W}_i(\\vec{x})\\right)\\cdot{\\bf F}_j(\\vec{x})\\,d\\Omega \\\\ &+& \\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot\\left[\\hat{n}\\times\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\right]\\,d\\Gamma \\end{align}$$ Where $\\mu^{-T}$ is the transpose of the inverse of $\\mu$ and $\\Gamma=\\partial\\Omega$ i.e. the boundary of the domain. The first integral remaining on the right hand side is the weak curl operator which is implemented in MFEM as a BilinearFormIntegrator named MixedVectorWeakCurlIntegrator 1 . This operator is setup between lines 178 and 184 of the file maxwell_solver.cpp . The boundary integral term shown above is ignored in the maxwell miniapp which implies that it is assumed to be zero. This gives rise to a so-called natural boundary condition which in this case implies that $\\hat{n}\\times{\\bf H}=0$. Any portion of the boundary where an essential (a.k.a. Dirichlet) boundary condition is set will override this implicit boundary condition. Alternatively an inhomogeneous Neumann boundary condition can be applied by providing a nonzero function in place of $\\hat{n}\\times{\\bf H}$ in this integral. This would be accomplished by passing a known vector function to the LinearFormIntegrator named VectorFEDomainLFIntegrator and using this as a boundary integrator in ParLinearForm . Unfortunately we don't seem to have an example of this usage in either of the tesla or maxwell miniapps. Loss term $\\overline{\\sigma}{\\bf E}$ This would seem to be a simple term but, of course, there is a complication. According to the Candy and Rozmus paper this piece of the Hamiltonian should not depend on ${\\bf E}$. Furthermore, to properly model such a loss term it is best to handle it implicitly. To accomplish this the MaxwellSolver stores the current value of the electric field internally since the SIAVSolver will not provide this data to the update method (which is called ImplicitSolve ). The integral needed to model this term simply computes the product of the H(Curl) basis functions against each other along with the material coefficient, $\\overline{\\sigma}$ in this case. This integrator is called VectorFEMassIntegrator . 
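A minimal sketch of assembling such a term is shown below; the names sigmaCoef, HCurlFESpace, and lossForm are illustrative placeholders rather than the identifiers actually used in maxwell_solver.cpp:

// sigma is the electric conductivity; a constant value is used here purely for brevity.
ConstantCoefficient sigmaCoef(1.0);

// HCurlFESpace is assumed to be a ParFiniteElementSpace built on an ND collection.
ParBilinearForm lossForm(&HCurlFESpace);
lossForm.AddDomainIntegrator(new VectorFEMassIntegrator(sigmaCoef));
lossForm.Assemble();
lossForm.Finalize();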
The portion of this operator which will be used with the current value of the electric field is setup between lines 195 and 208 of the file maxwell_solver.cpp . The implicit portion is setup between lines 399 and 407 using the same integrator. Current density ${\\bf J}$ The maxwell miniapp does not place ${\\bf J}$ in H(Div) despite the comments in Section J and M . The reason for this is that the maxwell miniapp does not use a GridFunction representation of ${\\bf J}$ in any computations. It does, however, write ${\\bf J}$ to its data files for visualization and this really should be done using an H(Div) field. The way the current density enters the wave equation is a source term which is computed using the following integral: $$\\int_\\Omega{\\bf W}_i\\cdot{\\bf J}\\,d\\Omega$$ This is accomplished by using the LinearFormIntegrator named VectorFEDomainLFIntegrator and a ParLinearForm object. The setup of this object can be found between lines 264 and 266 of the file maxwell_solver.cpp . Integrals such as this, which directly evaluate a c-style function, avoid the continuity concerns raised in Section J and M . Setting up the solver The time derivative in Amp\u00e8re's Law is of the form: $$\\frac{\\partial\\epsilon{\\bf E}}{\\partial t} \\approx \\frac{\\partial}{\\partial t}(\\epsilon\\sum_ie(t){\\bf W}_i) = \\epsilon\\sum_i\\dot{e}(t){\\bf W}_i$$ Where we have assumed that $\\epsilon$ is constant in time. For the weak form of Amp\u00e8re's Law we need to again multiply by the H(Curl) basis functions and integrate over the problem domain. $$\\int_\\Omega{\\bf W}_i\\cdot(\\epsilon\\sum_j\\dot{e}(t){\\bf W}_j)d\\Omega = \\sum_j\\dot{e}(t)\\{\\int_\\Omega{\\bf W}_i\\cdot(\\epsilon{\\bf W}_j)d\\Omega \\}$$ The integral in the curly braces is a mass matrix which is again computed using the BilinearFormIntegrator named VectorFEMassIntegrator . This is setup between lines 388 and 395 of the file maxwell_solver.cpp . The more unusual part of this operator comes from the implicit handling of the loss term and the absorbing boundary condition. The latter is a simple Sommerfeld first order radiation boundary condition. Each of these implicit terms multiplies the electric field which we approximate at the time $t+\\Delta t/2$. Each of these bilinear forms which multiply the time derivative are mass matrices so a conjugate gradient iterative solver with a diagonal scaling preconditioner should work quite well. These are setup between lines 423 and 428 of the file maxwell_solver.cpp . One odd thing does appear in this setupSolver member function (and a few other places) and that is the variable idt . This is an integer related to the double precision time step dt . The reason for this is that our variable order symplectic time integrator breaks up a time step into a handful of smaller time steps which are generally not the same size. If we need to handle loss terms implicitly this variable time step will appear in the matrix passed to our solver. Of course we don't want to rebuild this matrix every time the time step changes so we build and cache the matrices in a container. The integer idt is simply the key used to access these cached matrices and the solvers that were setup to work with them. Putting it all together The only remaining thing to discuss is the way in which we use a combination of primal and dual vectors within the simulation code. However, it's hard to know what level of detail will be useful here. 
At this point I would recommend referring to our online documentation which can be found at Primal and Dual Vectors for an overview of this concept. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}}); A list of the various BilinearFormIntegrators can be found at Bilinear Form Integrators . More detailed descriptions can be found in the files fem/biliniteg.[ch]pp . \u21a9", "title": "_Maxwell Notes"}, {"location": "maxwell-notes/#maxwells-equations", "text": "$$\\begin{align} \\nabla\\times{\\bf H}& = & \\frac{\\partial{\\bf D}}{\\partial t} + {\\bf J}+ \\overline{\\sigma}{\\bf E}\\label{ampere} \\\\ \\nabla\\times{\\bf E}& = & -\\frac{\\partial{\\bf B}}{\\partial t} - {\\bf M}- \\overline{\\sigma}_M{\\bf H}\\label{faraday} \\\\ \\nabla\\cdot{\\bf D}& = & \\rho\\label{gauss} \\\\ \\nabla\\cdot{\\bf B}& = & 0\\label{trans} \\end{align}$$ With electric current density, ${\\bf J}$, magnetic current density, ${\\bf M}$, electric conductivity, $\\overline{\\sigma}$, magnetic conductivity, $\\overline{\\sigma}_M$, and electric charge density, $\\rho$. We will sometimes refer to these equations by the names Amp\u00e8re's Law, Faraday's Law, Gauss's Law, and the Transversality Condition respectively. It is also necessary to define the constitutive relations ${\\bf D}\\equiv\\epsilon{\\bf E}$ and ${\\bf B}\\equiv\\mu{\\bf H}$. It is also common to combine equations \\eqref{ampere} and \\eqref{faraday} into a single second order PDE. $$\\begin{align} \\frac{\\partial^2\\left(\\epsilon{\\bf E}\\right)}{\\partial t^2} + \\frac{\\partial\\left(\\overline{\\sigma}{\\bf E}\\right)}{\\partial t} + \\nabla\\times\\left(\\mu^{-1}\\nabla\\times{\\bf E}\\right) & \\nonumber \\\\ + \\nabla\\times\\left(\\mu^{-1}\\overline{\\sigma}_M{\\bf H}\\right) & = -\\frac{\\partial{\\bf J}}{\\partial t} - \\nabla\\times\\left(\\mu^{-1}{\\bf M}\\right) \\label{curlcurle} %\\end{align} {or}&\\\\ %\\begin{equation} \\frac{\\partial^2\\left(\\mu{\\bf H}\\right)}{\\partial t^2} + \\frac{\\partial\\left(\\overline{\\sigma}_M{\\bf H}\\right)}{\\partial t} + \\nabla\\times\\left(\\epsilon^{-1}\\nabla\\times{\\bf H}\\right) & \\nonumber \\\\ - \\nabla\\times\\left(\\epsilon^{-1}\\overline{\\sigma}{\\bf E}\\right) & = -\\frac{\\partial{\\bf M}}{\\partial t} +\\nabla\\times\\left(\\epsilon^{-1}{\\bf J}\\right) \\label{curlcurlh} \\end{align}$$ One drawback of these formulations is the appearance of ${\\bf H}$ in equation \\eqref{curlcurle} or ${\\bf E}$ in equation \\eqref{curlcurlh}. The only way to formulate these equations entirely in terms of ${\\bf E}$ or ${\\bf H}$ is to make assumptions about the spatial variation of $\\epsilon^{-1}\\overline{\\sigma}$ or $\\mu^{-1}\\overline{\\sigma}_M$. For this reason these second order formulations should be avoided unless $\\overline{\\sigma}_M=0$ or $\\overline{\\sigma}=0$.", "title": "Maxwell's Equations"}, {"location": "maxwell-notes/#discretization", "text": "", "title": "Discretization"}, {"location": "maxwell-notes/#basis-functions", "text": "There are two sets of basis functions particularly well suited for electromagnetics; Nedelec and Raviart-Thomas. The Nedelec basis functions guarantee tangential continuity of their approximations across element interfaces. This makes them well suited for the fields ${\\bf E}$ and ${\\bf H}$ which share this constraint on material interfaces. The Raviart-Thomas basis functions guarantee continuity of the normal component of their approximations across element interfaces. 
This makes them well suited for the fields ${\\bf B}$ and ${\\bf D}$ which share this constraint on material interfaces. The Nedelec basis functions which discretize the H(Curl) space are indispensable due to the presence of the Curl operators in equations \\eqref{ampere}, \\eqref{faraday}, \\eqref{curlcurle}, and \\eqref{curlcurlh}. The Raviart-Thomas basis functions which discretize the H(Div) space are convenient and reduce the computational cost but are optional, strictly speaking.", "title": "Basis Functions"}, {"location": "maxwell-notes/#discretization-of-the-primary-fields", "text": "There are three choices for discretizing the set of coupled first order partial differential equations: ${\\bf E}\\in$ H(Curl) and ${\\bf B},{\\bf J},{\\bf M}\\in$ H(Div) ${\\bf H}\\in$ H(Curl) and ${\\bf D},{\\bf J},{\\bf M}\\in$ H(Div) ${\\bf E}\\in$ H(Curl), ${\\bf H}\\in$ H(Curl), and ${\\bf J},{\\bf M}\\in$ H(Curl) (grudgingly) There is only one choice for discretizing the second order equations i.e. ${\\bf E}$ or ${\\bf H}$ in H(Curl). These basis function choices merely ensure that the approximate fields maintain the proper interface constraints at material boundaries. The choice of formulation can be made based on the required sources, boundary conditions, and/or post-processing requirements. Hence, different physical requirements can lead to different choices of formulation i.e. there is no single best choice for all problems.", "title": "Discretization of the primary fields"}, {"location": "maxwell-notes/#sec:JM", "text": "The electric and magnetic current source densities are both flux vectors and as such they are best represented using the H(Div) space. This is most apparent when modeling the eddy current equation but H(Div) can be important in wave equations as well. Imagine modeling a current carrying conductor surrounded by some insulating material. The current density ${\\bf J}$ may be non-zero inside the conductor but it should be identically zero outside of it. Assuming the computational mesh conforms to the surface of this conductor, an H(Div) field can accurately represent such a current flow as long as the current at the surface of the conductor remains parallel to that surface. In other words the current will not \"leak\" out of the conductor as long as the normal component of the current is zero at the surface. On the other hand, if H(Curl) basis functions were used for ${\\bf J}$ its tangential components would need to be continuous across the surface of the conductor. This produces a non-physical current within the first layer of elements surrounding the conductor. Non-physical currents leaking out of conductors when using H(Curl) basis functions for the current density ${\\bf J}$ can lead to inaccurate eddy current simulations either by producing a larger than expected magnetic field outside the conductor or a reduced thermal heat load within the conductor. Similarly, in wave simulations the total power emanating from an antenna can be either over- or under-estimated depending upon how ${\\bf J}$ is computed on the surface of the antenna. Such matters can be eliminated by simply representing ${\\bf J}$ as an H(Div) function. I'm sure similar arguments can be made for the magnetization ${\\bf M}$ although I have less experience with that.", "title": "Discretization of ${\\bf J}$ and ${\\bf M}$"}, {"location": "maxwell-notes/#the-maxwell-miniapp", "text": "The maxwell Miniapp uses the EB formulation with $\\overline{\\sigma}_M$ and ${\\bf M}$ assumed to be zero. 
It evolves the first order coupled system of equations using a symplectic time integration algorithm by Candy and Rozmus described in \"A Symplectic Integration Algorithm for Separable Hamiltonian Functions\", Journal of Computational Physics, Vol. 92, pages 230-256 (1991). The main advantage of this algorithm is that it conserves energy. Another advantage is that the approximations of ${\\bf E}$ and ${\\bf B}$ correspond to the same simulation time rather than being staggered as in other methods. The variable order symplectic integration class in MFEM called SIAVSolver requires that we implement our coupled set of PDEs as a pair of operators. The first is an Operator which can be used to update the magnetic field, ${\\bf B}$, using Faraday's Law by computing $-\\nabla\\times{\\bf E}$. The second is a TimeDependentOperator which can be used to update the electric field, ${\\bf E}$, using Amp\u00e8re's Law by computing $\\nabla\\times\\left(\\mu^{-1}{\\bf B}\\right)-{\\bf J}-\\overline{\\sigma}{\\bf E}$. We choose to implement both of these operators in a single class which we call MaxwellSolver . The first operator, $-\\nabla\\times{\\bf E}$, acts on ${\\bf E}\\in$ H(Curl) to produce a result $\\frac{\\partial{\\bf B}}{\\partial t}\\in$ H(Div). By design our discrete representation of H(Div) contains the curl of any field in our discrete representation of H(curl). Consequently we can compute this operator by simply evaluating the curl of our H(Curl) basis functions in terms of our H(Div) basis functions. This evaluation is handled by a DiscreteInterpolator called CurlInterpolator . The process of looping over each element to compute these interpolations is conducted by the ParDiscreteLinearOperator . In the MaxwellSolver this curl operator is simply named Curl_ and its negative, needed by the SIAVSolver , is named NegCurl_ . These operators are setup between lines 227 and 236 of the file maxwell_solver.cpp . The second operator, $\\nabla\\times\\left(\\mu^{-1}{\\bf B}\\right)-{\\bf J}-\\overline{\\sigma}{\\bf E}$, requires a bit more effort. The first thing to notice is that we cannot compute the curl of $\\mu^{-1}{\\bf B}$ precisely. Primarily this is due to the fact that ${\\bf B}\\in$ H(Div) rather than H(Curl) but, in general, the presence of $\\mu^{-1}$ is also a problem since we don't know its derivatives at all. These complications require that we compute the curl operator in a weak sense.", "title": "The maxwell Miniapp"}, {"location": "maxwell-notes/#setup-of-the-timedependentoperator", "text": "", "title": "Setup of the TimeDependentOperator"}, {"location": "maxwell-notes/#weak-curl-of-mu-1bf-b", "text": "Often in wave propagation $\\mu$ is assumed to be constant but we will not make this assumption. In principle $\\mu$ could be anisotropic and inhomogeneous although we do assume it is constant in time. The magnetic field ${\\bf B}$ will be written as a linear combination of basis functions in H(Div) which we will label as ${\\bf F}_i$ e.g. ${\\bf B}(\\vec{x})\\approx\\sum_i b_i(t){\\bf F}_i(\\vec{x})$. Our goal is to compute $\\frac{\\partial{\\bf E}}{\\partial t}$ where ${\\bf E}\\in$ H(Curl) so we need to represent $\\nabla\\times\\mu^{-1}{\\bf B}$ also in H(Curl). The basis functions of H(Curl) will be labeled as ${\\bf W}_i$. To compute the weak form of this term we multiply the operator of interest by each of our H(Curl) basis functions and integrate over the entire problem domain to obtain an equation corresponding to each basis function in H(Curl). 
For example $$\\begin{align} \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}{\\bf B})] d\\Omega &=& \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}\\sum_j b_j{\\bf F}_j(\\vec{x}))] d\\Omega \\\\ &=& \\sum_j b_j\\{\\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}{\\bf F}_j(\\vec{x}))] d\\Omega \\} \\end{align}$$ The expression in curly braces depends only on our material coefficient and our basis functions so we can precompute this if we assume $\\mu$ does not change in time. This particular integral requires a little more manipulation to move the curl operator onto the H(Curl) basis function. $$\\begin{align} \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot\\left[\\nabla\\times\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\right]\\,d\\Omega &=& \\int_\\Omega\\left(\\nabla\\times{\\bf W}_i(\\vec{x})\\right)\\cdot\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\,d\\Omega \\\\ &-& \\int_\\Omega\\nabla\\cdot\\left[{\\bf W}_i(\\vec{x})\\times\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\right]\\,d\\Omega \\\\ &=& \\int_\\Omega\\left(\\mu^{-T}\\nabla\\times{\\bf W}_i(\\vec{x})\\right)\\cdot{\\bf F}_j(\\vec{x})\\,d\\Omega \\\\ &-& \\int_\\Gamma\\hat{n}\\cdot\\left[{\\bf W}_i(\\vec{x})\\times\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\right]\\,d\\Gamma \\\\ &=& \\int_\\Omega\\left(\\mu^{-T}\\nabla\\times{\\bf W}_i(\\vec{x})\\right)\\cdot{\\bf F}_j(\\vec{x})\\,d\\Omega \\\\ &+& \\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot\\left[\\hat{n}\\times\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\right]\\,d\\Gamma \\end{align}$$ Where $\\mu^{-T}$ is the transpose of the inverse of $\\mu$ and $\\Gamma=\\partial\\Omega$ i.e. the boundary of the domain. The first integral remaining on the right hand side is the weak curl operator which is implemented in MFEM as a BilinearFormIntegrator named MixedVectorWeakCurlIntegrator 1 . This operator is setup between lines 178 and 184 of the file maxwell_solver.cpp . The boundary integral term shown above is ignored in the maxwell miniapp which implies that it is assumed to be zero. This gives rise to a so-called natural boundary condition which in this case implies that $\\hat{n}\\times{\\bf H}=0$. Any portion of the boundary where an essential (a.k.a. Dirichlet) boundary condition is set will override this implicit boundary condition. Alternatively an inhomogeneous Neumann boundary condition can be applied by providing a nonzero function in place of $\\hat{n}\\times{\\bf H}$ in this integral. This would be accomplished by passing a known vector function to the LinearFormIntegrator named VectorFEDomainLFIntegrator and using this as a boundary integrator in ParLinearForm . Unfortunately we don't seem to have an example of this usage in either of the tesla or maxwell miniapps.", "title": "Weak curl of $\\mu^{-1}{\\bf B}$"}, {"location": "maxwell-notes/#loss-term-overlinesigmabf-e", "text": "This would seem to be a simple term but, of course, there is a complication. According to the Candy and Rozmus paper this piece of the Hamiltonian should not depend on ${\\bf E}$. Furthermore, to properly model such a loss term it is best to handle it implicitly. To accomplish this the MaxwellSolver stores the current value of the electric field internally since the SIAVSolver will not provide this data to the update method (which is called ImplicitSolve ). The integral needed to model this term simply computes the product of the H(Curl) basis functions against each other along with the material coefficient, $\\overline{\\sigma}$ in this case. 
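Continuing the serial sketch above (the miniapp uses the parallel classes and its own coefficient objects, and the conductivity value below is a placeholder), such a coefficient-weighted product of H(Curl) basis functions can be assembled as a mass-type bilinear form using the integrator named in the next sentence:
// Assemble the sigma-weighted product of H(Curl) basis functions against each other;
// the conductivity value here is a placeholder, not a physical choice.
ConstantCoefficient sigma_coef(1.0);
BilinearForm lossTerm(&HCurlFESpace);
lossTerm.AddDomainIntegrator(new VectorFEMassIntegrator(sigma_coef));
lossTerm.Assemble();
lossTerm.Finalize();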
This integrator is called VectorFEMassIntegrator . The portion of this operator which will be used with the current value of the electric field is setup between lines 195 and 208 of the file maxwell_solver.cpp . The implicit portion is setup between lines 399 and 407 using the same integrator.", "title": "Loss term $\\overline{\\sigma}{\\bf E}$"}, {"location": "maxwell-notes/#current-density-bf-j", "text": "The maxwell miniapp does not place ${\\bf J}$ in H(Div) despite the comments in Section J and M . The reason for this is that the maxwell miniapp does not use a GridFunction representation of ${\\bf J}$ in any computations. It does, however, write ${\\bf J}$ to its data files for visualization and this really should be done using an H(Div) field. The way the current density enters the wave equation is a source term which is computed using the following integral: $$\\int_\\Omega{\\bf W}_i\\cdot{\\bf J}\\,d\\Omega$$ This is accomplished by using the LinearFormIntegrator named VectorFEDomainLFIntegrator and a ParLinearForm object. The setup of this object can be found between lines 264 and 266 of the file maxwell_solver.cpp . Integrals such as this, which directly evaluate a c-style function, avoid the continuity concerns raised in Section J and M .", "title": "Current density ${\\bf J}$"}, {"location": "maxwell-notes/#setting-up-the-solver", "text": "The time derivative in Amp\u00e8re's Law is of the form: $$\\frac{\\partial\\epsilon{\\bf E}}{\\partial t} \\approx \\frac{\\partial}{\\partial t}(\\epsilon\\sum_ie(t){\\bf W}_i) = \\epsilon\\sum_i\\dot{e}(t){\\bf W}_i$$ Where we have assumed that $\\epsilon$ is constant in time. For the weak form of Amp\u00e8re's Law we need to again multiply by the H(Curl) basis functions and integrate over the problem domain. $$\\int_\\Omega{\\bf W}_i\\cdot(\\epsilon\\sum_j\\dot{e}(t){\\bf W}_j)d\\Omega = \\sum_j\\dot{e}(t)\\{\\int_\\Omega{\\bf W}_i\\cdot(\\epsilon{\\bf W}_j)d\\Omega \\}$$ The integral in the curly braces is a mass matrix which is again computed using the BilinearFormIntegrator named VectorFEMassIntegrator . This is setup between lines 388 and 395 of the file maxwell_solver.cpp . The more unusual part of this operator comes from the implicit handling of the loss term and the absorbing boundary condition. The latter is a simple Sommerfeld first order radiation boundary condition. Each of these implicit terms multiplies the electric field which we approximate at the time $t+\\Delta t/2$. Each of these bilinear forms which multiply the time derivative are mass matrices so a conjugate gradient iterative solver with a diagonal scaling preconditioner should work quite well. These are setup between lines 423 and 428 of the file maxwell_solver.cpp . One odd thing does appear in this setupSolver member function (and a few other places) and that is the variable idt . This is an integer related to the double precision time step dt . The reason for this is that our variable order symplectic time integrator breaks up a time step into a handful of smaller time steps which are generally not the same size. If we need to handle loss terms implicitly this variable time step will appear in the matrix passed to our solver. Of course we don't want to rebuild this matrix every time the time step changes so we build and cache the matrices in a container. 
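Here is a minimal sketch of that caching idea; the container, the key derivation, and all of the names are hypothetical and not taken from maxwell_solver.cpp.
#include <cmath>
#include <map>
#include <memory>
#include \"mfem.hpp\"
using namespace mfem;

// Hypothetical cache of dt-dependent operators and solvers, keyed by an integer
// derived from the (sub-)time step. This only illustrates the idea; the actual
// container and key live in maxwell_solver.cpp.
struct DtSolverCache
{
   std::map<int, std::unique_ptr<Operator>> matrices;  // assembled dt-dependent matrices
   std::map<int, std::unique_ptr<Solver>>   solvers;   // matching diagonally scaled CG solvers

   static int Key(double dt) { return (int)std::lround(1.0 / dt); } // illustrative key only
   bool Built(double dt) const { return solvers.count(Key(dt)) > 0; }
};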
The integer idt is simply the key used to access these cached matrices and the solvers that were setup to work with them.", "title": "Setting up the solver"}, {"location": "maxwell-notes/#putting-it-all-together", "text": "The only remaining thing to discuss is the way in which we use a combination of primal and dual vectors within the simulation code. However, it's hard to know what level of detail will be useful here. At this point I would recommend referring to our online documentation which can be found at Primal and Dual Vectors for an overview of this concept. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}}); A list of the various BilinearFormIntegrators can be found at Bilinear Form Integrators . More detailed descriptions can be found in the files fem/biliniteg.[ch]pp . \u21a9", "title": "Putting it all together"}, {"location": "mesh-format-v1.0/", "text": "Mesh Formats MFEM mesh v1.0 This is the default format in GLVis. It can be used to describe simple (triangular, quadrilateral, tetrahedral and hexahedral meshes with straight edges) or complicated (curvilinear and more general) meshes. Straight meshes In the simple case of a mesh with straight edges the format looks as follows MFEM mesh v1.0 # Dimension of the mesh: 1, 2 or 3 (e.g. 2 for a 2D surface mesh in 3D space) dimension # Mesh elements, e.g. tetrahedrons (4) elements ... ... # Mesh faces/edges on the boundary, e.g. triangles (2) boundary ... ... # Vertex coordinates vertices ... > ... Lines starting with \"#\" denote comments. The supported geometry types are: POINT = 0 SEGMENT = 1 TRIANGLE = 2 SQUARE = 3 TETRAHEDRON = 4 CUBE = 5 PRISM = 6 see the comments in this source file for more details. For example, the beam-quad.mesh file from the data directory looks like this: MFEM mesh v1.0 dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 1 0 3 1 2 1 3 1 3 2 3 1 4 3 3 1 5 4 3 1 6 5 3 1 7 6 3 1 8 7 3 1 9 10 3 1 10 11 3 1 11 12 3 1 12 13 3 1 13 14 3 1 14 15 3 1 15 16 3 1 16 17 1 1 0 9 2 1 17 8 vertices 18 2 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 which corresponds to the mesh visualized with glvis -m beam-quad.mesh -k \"Ame****\" Curvilinear and more general meshes The MFEM mesh v1.0 format also support the general description of meshes based on a vector finite element grid function with degrees of freedom in the \"nodes\" of the mesh. This general format is described briefly below, and in more details on the General Mesh Format page . MFEM mesh v1.0 # Dimension of the mesh: 1, 2 or 3 (e.g. 2 for a 2D surface mesh in 3D space) dimension # Mesh elements, e.g. tetrahedrons (4) elements ... ... # Mesh faces/edges on the boundary, e.g. triangles (2) boundary ... ... # Number of vertices (no coordinates) vertices # Mesh nodes as degrees of freedom of a finite element grid function nodes FiniteElementSpace FiniteElementCollection: VDim: Ordering: 0 ... ... ... Some possible finite element collection choices are: Linear , Quadratic and Cubic corresponding to curvilinear P1/Q1, P2/Q2 and P3/Q3 meshes. The algorithm for the numbering of the degrees of freedom can be found in MFEM's source code . For example, the escher-p3.mesh from MFEM's data directory describes a tetrahedral mesh with nodes given by a P3 vector Lagrangian finite element function. 
Visualizing this mesh with glvis -m escher-p3.mesh -k \"Aaaoooooooooo**************tt\" we get: Topologically periodic meshes can also be described in this format, see for example the periodic-segment , periodic-square , and periodic-cube meshes in the data directory, as well as Example 9 . MFEM NC mesh v1.0 The MFEM NC mesh v1.0 is a format for nonconforming meshes in MFEM. It is similar in style to the default (conforming) MFEM mesh v1.0 format, but is in fact independent and supports advanced AMR features such as storing refined elements and the refinement hierarchy, anisotropic element refinement, hanging nodes (vertices), parallel partitioning. The file starts with a signature and the mesh dimension: MFEM NC mesh v1.0 # NCMesh supported geometry types: # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # PRISM = 6 # mesh dimension 1, 2 or 3 dimension # optional rank for parallel files, defaults to 0 rank The rank section defines the MPI rank of the process that saved the file. This section can be omitted in serial meshes. Similarly to the conforming format, the next section lists all elements. This time however, we recognize two kinds of elements: Regular, active elements ( refinement type == 0 ). These elements participate in the computation (are listed in the Mesh class) and reference vertex indices. Inactive, previously refined elements ( refinement type > 0 ). Instead of vertices, these elements contain links to their child elements, and are not visible in the Mesh class. All elements also have their geometry type and user attribute defined, as well as the MPI rank of their owner process (only used in parallel meshes). # mesh elements, both regular and refined elements 0 ... Storing the complete refinement hierarchy allows MFEM to coarsen some of the fine elements if necessary, and also to naturally define an ordering of the fine elements that can be used for fast parallel partitioning of the mesh (a depth-first traversal of all refinement trees defines a space-filling curve (SFC) that can be easily partitioned among parallel processes). The following picture illustrates the refinement hierarchy of a mesh that started as two quadrilaterals and then underwent two anisotropic refinements (blue numbers are vertex indices): The corresponding elements section of the mesh file could look like this: elements 6 0 1 3 2 2 3 # element 0: refinement 2 (Y), children 2, 3 0 1 3 0 1 2 5 4 # element 1: no refinement, vertices 1, 2, 5, 4 0 1 3 1 4 5 # element 2: refinement 1 (X), children 4, 5 0 1 3 0 6 7 4 3 # element 3: no refinement, vertices 6, 7, 4, 3 0 1 3 0 0 8 9 6 # element 4: no refinement, vertices 0, 8, 9, 6 0 1 3 0 8 1 7 9 # element 5: no refinement, vertices 8, 1, 7, 9 The refinement types are numbered as follows: Note that the type is encoded as a 3-bit number, where bits 0, 1, 2 correspond to the X, Y, Z axes, respectively. Other element geometries allow fewer but similar refinement types: triangle (3), square (1, 2, 3), tetrahedron (7), prism (3, 4, 7). The next section is the boundary section, which is exactly the same as in the conforming format: boundary ... The nonconforming mesh however needs to identify hanging vertices, which may occur in the middle of edges or faces as elements are refined. In fact, any vertex that was created as a result of refinement always has two \"parent\" vertices and needs to be listed in the vertex_parents section: vertex_parents ... 
In our example above, vertices 6, 7, 8, 9 have these parents: vertex_parents 4 6 0 3 7 1 4 8 0 1 9 6 7 Vertices can appear in any order in this section. The only limitation is that the first N vertex indices (not listed in this section) be reserved for top-level vertices (those with no parents, typically the vertices of the coarse mesh). The next section is optional and can be safely omitted when creating the mesh file manually. The root_state section affects leaf ordering when traversing the refinement trees and is used to optimize the SFC-based partitioning. There is one number per root element. The default state for all root elements is zero. root_state ... Finally, we have the coordinates section which assigns physical positions to the N top-level vertices. Note that the positions of hanging vertices are always derived from their parent vertices and are not listed in the mesh file. coordinates ... > ... If the mesh is curvilinear, the coordinates section can be replaced with an alternative section called nodes . The nodes keyword is then followed by a serialized GridFunction representing a vector-valued finite element function defining the curvature of the elements, similarly as in the conforming case. The end of the mesh file is marked with the line mfem_mesh_end . For examples of meshes using the NC mesh v1.0 format, see amr-quad.mesh , amr-hex.mesh and fichera-amr.mesh (visualized below) in the data directory of MFEM. MFEM mesh v1.3 Version 1.3 of the MFEM mesh file format adds support for named attribute sets. This is a convenience feature which allows application users (or developers) to refer to a set of attribute numbers or boundary attribute numbers using a text string as a shorthand. Domain attribute numbers and boundary attribute numbers cannot coexist in the same set. Attribute numbers can appear in more than one set so that a given region may be referenced for different purposes in different parts of an application. Domain attribute sets are listed after the elements section of the mesh file in a new section titled attribute_sets . Similarly, boundary attribute sets follow boundary in a new section titled bdr_attribute_sets . MFEM mesh v1.3 ... elements ... attribute_sets \"\" ... ... boundary ... bdr_attribute_sets \"\" ... ... vertices ... mfem_mesh_end A specific example of a v1.3 mesh file can be seen in compass.mesh , shown above, which includes names based on compass directions for illustration. NURBS meshes MFEM provides full support for meshes and discretization spaces based on Non-uniform Rational B-Splines (NURBS). These are treated similarly to general curvilinear meshes where the NURBS nodes are specified as a grid function at the end of the mesh file. For example, here is a simple quadratic NURBS mesh for a square domain with a (perfectly) circular hole in the middle. (The exact representation of conical sections is a major advantage of the NURBS approach over high-order finite elements.) 
MFEM NURBS mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # SEGMENT = 1 # SQUARE = 3 # CUBE = 5 # dimension 2 elements 4 1 3 0 1 5 4 1 3 1 2 6 5 1 3 2 3 7 6 1 3 3 0 4 7 boundary 8 1 1 0 1 1 1 1 2 1 1 2 3 1 1 3 0 1 1 5 4 1 1 6 5 1 1 7 6 1 1 4 7 edges 12 0 0 1 0 4 5 1 1 2 1 5 6 2 2 3 2 6 7 3 3 0 3 7 4 4 0 4 4 1 5 4 2 6 4 3 7 vertices 8 knotvectors 5 2 3 0 0 0 1 1 1 2 3 0 0 0 1 1 1 2 3 0 0 0 1 1 1 2 3 0 0 0 1 1 1 2 3 0 0 0 1 1 1 weights 1 1 1 1 1 1 1 1 1 0.707106781 1 0.707106781 1 0.707106781 1 0.707106781 1 1 1 1 0.853553391 0.853553391 0.853553391 0.853553391 FiniteElementSpace FiniteElementCollection: NURBS2 VDim: 2 Ordering: 1 0 0 1 0 1 1 0 1 0.358578644 0.358578644 0.641421356 0.358578644 0.641421356 0.641421356 0.358578644 0.641421356 0.5 0 0.5 0.217157288 1 0.5 0.782842712 0.5 0.5 1 0.5 0.782842712 0 0.5 0.217157288 0.5 0.15 0.15 0.85 0.15 0.85 0.85 0.15 0.85 0.5 0.108578644 0.891421356 0.5 0.5 0.891421356 0.108578644 0.5 This above file, as well as other examples of NURBS meshes, can be found in MFEM's data directory . It can be visualized directly with glvis -m square-disc-nurbs.mesh which after several refinements with the \" i \" key looks like To explain MFEM's NURBS mesh file format, we first note that the topological part of the mesh (the elements and boundary sections) describe the 4 NURBS patches visible above. We use the vertex numbers as labels, so we only need the number of vertices. In the NURBS case we need to also provide description of the edges on the patch boundaries and associate a knot vector with each of them. This is done in the edges section where the first index in each row refers to the knot vector id (from the following knotvectors section), while the remaining two indexes are the edge vertex numbers. The position of the NURBS nodes (control points) is given as a NURBS grid function at the end of the file, while the associated weights are listed in the preceding weights section. Some examples of VTK meshes can be found in MFEM's data directory . Here is one of the 3D NURBS meshes The image above was produced with some refinement (key \" o \") and mouse manipulations from glvis -m pipe-nurbs.mesh Solutions from NURBS discretization spaces are also natively supported. For example here is the approximation for the solution of a simple Poisson problem on a refined version of the above mesh. glvis -m square-disc-nurbs.mesh -g sol.gf Curvilinear VTK meshes MFEM also supports quadratic triangular, quadrilaterals, tetrahedral and hexahedral curvilinear meshes in VTK format. This format is described in the VTK file format documentation . The local numbering of degrees of freedom for the biquadratic quads and triquadratic hexes can be found in the Doxygen reference of the vtkBiQuadraticQuad and vtkTriQuadraticHexahedron classes. Currently VTK does not support cubic, and higher-order meshes. As an example, consider a simple curved quadrilateral saved in a file quad.vtk : # vtk DataFile Version 3.0 Generated by MFEM ASCII DATASET UNSTRUCTURED_GRID POINTS 9 double 0 0 0 1 0 0 1 1 0 0.1 0.9 0 0.5 -0.05 0 0.9 0.5 0 0.5 1 0 0 0.5 0 0.45 0.55 0 CELLS 1 10 9 0 1 2 3 4 5 6 7 8 CELL_TYPES 1 28 CELL_DATA 1 SCALARS material int LOOKUP_TABLE default 1 Visualizing it with \" glvis -m quad.vtk \" and typing \" Aemiii \" in the GLVis window we get: The \" i \" key increases the reference element subdivision which gives an increasingly better approximation of the actual curvature of the element. 
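The same file can also be read directly by MFEM. Here is a small self-contained sketch (the printed labels are purely illustrative) that loads the quadratic VTK mesh and reports its size:
#include <iostream>
#include \"mfem.hpp\"
using namespace mfem;

int main()
{
   // Read the curved quadrilateral saved above; MFEM detects the VTK format from the file.
   Mesh mesh(\"quad.vtk\", 1, 1);
   std::cout << \"elements: \" << mesh.GetNE()
             << \"  vertices: \" << mesh.GetNV()
             << \"  curved: \" << (mesh.GetNodes() != NULL) << std::endl;
   return 0;
}
Returning to visualization in GLVis: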
To view the curvature of the mapping inside the element we can use the \"I\" key, e.g., glvis -m quad.vtk -k \"AemIIiii\" Here is a slightly more complicated quadratic quadrilateral mesh example (the different colors in the GLVis window are used to distinguish neighboring elements): glvis -m star-q2.vtk -k \"Am\" MFEM and GLVis can also handle quadratic triangular meshes: glvis -m square-disc-p2.vtk -k \"Am\" As well as quadratic tetrahedral and quadratic hexahedral VTK meshes: glvis -m escher-p2.vtk -k \"Aaaooooo**************\" glvis -m fichera-q2.vtk -k \"Aaaooooo******\"", "title": "_Mesh Format v1.0"}, {"location": "mesh-format-v1.0/#mesh-formats", "text": "", "title": "Mesh Formats"}, {"location": "mesh-format-v1.0/#mfem-mesh-v10", "text": "This is the default format in GLVis. It can be used to describe simple (triangular, quadrilateral, tetrahedral and hexahedral meshes with straight edges) or complicated (curvilinear and more general) meshes.", "title": "MFEM mesh v1.0"}, {"location": "mesh-format-v1.0/#straight-meshes", "text": "In the simple case of a mesh with straight edges the format looks as follows MFEM mesh v1.0 # Dimension of the mesh: 1, 2 or 3 (e.g. 2 for a 2D surface mesh in 3D space) dimension # Mesh elements, e.g. tetrahedrons (4) elements ... ... # Mesh faces/edges on the boundary, e.g. triangles (2) boundary ... ... # Vertex coordinates vertices ... > ... Lines starting with \"#\" denote comments. The supported geometry types are: POINT = 0 SEGMENT = 1 TRIANGLE = 2 SQUARE = 3 TETRAHEDRON = 4 CUBE = 5 PRISM = 6 see the comments in this source file for more details. For example, the beam-quad.mesh file from the data directory looks like this: MFEM mesh v1.0 dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 1 0 3 1 2 1 3 1 3 2 3 1 4 3 3 1 5 4 3 1 6 5 3 1 7 6 3 1 8 7 3 1 9 10 3 1 10 11 3 1 11 12 3 1 12 13 3 1 13 14 3 1 14 15 3 1 15 16 3 1 16 17 1 1 0 9 2 1 17 8 vertices 18 2 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 which corresponds to the mesh visualized with glvis -m beam-quad.mesh -k \"Ame****\"", "title": "Straight meshes"}, {"location": "mesh-format-v1.0/#curvilinear-and-more-general-meshes", "text": "The MFEM mesh v1.0 format also support the general description of meshes based on a vector finite element grid function with degrees of freedom in the \"nodes\" of the mesh. This general format is described briefly below, and in more details on the General Mesh Format page . MFEM mesh v1.0 # Dimension of the mesh: 1, 2 or 3 (e.g. 2 for a 2D surface mesh in 3D space) dimension # Mesh elements, e.g. tetrahedrons (4) elements ... ... # Mesh faces/edges on the boundary, e.g. triangles (2) boundary ... ... # Number of vertices (no coordinates) vertices # Mesh nodes as degrees of freedom of a finite element grid function nodes FiniteElementSpace FiniteElementCollection: VDim: Ordering: 0 ... ... ... Some possible finite element collection choices are: Linear , Quadratic and Cubic corresponding to curvilinear P1/Q1, P2/Q2 and P3/Q3 meshes. The algorithm for the numbering of the degrees of freedom can be found in MFEM's source code . For example, the escher-p3.mesh from MFEM's data directory describes a tetrahedral mesh with nodes given by a P3 vector Lagrangian finite element function. 
Visualizing this mesh with glvis -m escher-p3.mesh -k \"Aaaoooooooooo**************tt\" we get: Topologically periodic meshes can also be described in this format, see for example the periodic-segment , periodic-square , and periodic-cube meshes in the data directory, as well as Example 9 .", "title": "Curvilinear and more general meshes"}, {"location": "mesh-format-v1.0/#mfem-nc-mesh-v10", "text": "The MFEM NC mesh v1.0 is a format for nonconforming meshes in MFEM. It is similar in style to the default (conforming) MFEM mesh v1.0 format, but is in fact independent and supports advanced AMR features such as storing refined elements and the refinement hierarchy, anisotropic element refinement, hanging nodes (vertices), parallel partitioning. The file starts with a signature and the mesh dimension: MFEM NC mesh v1.0 # NCMesh supported geometry types: # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # PRISM = 6 # mesh dimension 1, 2 or 3 dimension # optional rank for parallel files, defaults to 0 rank The rank section defines the MPI rank of the process that saved the file. This section can be omitted in serial meshes. Similarly to the conforming format, the next section lists all elements. This time however, we recognize two kinds of elements: Regular, active elements ( refinement type == 0 ). These elements participate in the computation (are listed in the Mesh class) and reference vertex indices. Inactive, previously refined elements ( refinement type > 0 ). Instead of vertices, these elements contain links to their child elements, and are not visible in the Mesh class. All elements also have their geometry type and user attribute defined, as well as the MPI rank of their owner process (only used in parallel meshes). # mesh elements, both regular and refined elements 0 ... Storing the complete refinement hierarchy allows MFEM to coarsen some of the fine elements if necessary, and also to naturally define an ordering of the fine elements that can be used for fast parallel partitioning of the mesh (a depth-first traversal of all refinement trees defines a space-filling curve (SFC) that can be easily partitioned among parallel processes). The following picture illustrates the refinement hierarchy of a mesh that started as two quadrilaterals and then underwent two anisotropic refinements (blue numbers are vertex indices): The corresponding elements section of the mesh file could look like this: elements 6 0 1 3 2 2 3 # element 0: refinement 2 (Y), children 2, 3 0 1 3 0 1 2 5 4 # element 1: no refinement, vertices 1, 2, 5, 4 0 1 3 1 4 5 # element 2: refinement 1 (X), children 4, 5 0 1 3 0 6 7 4 3 # element 3: no refinement, vertices 6, 7, 4, 3 0 1 3 0 0 8 9 6 # element 4: no refinement, vertices 0, 8, 9, 6 0 1 3 0 8 1 7 9 # element 5: no refinement, vertices 8, 1, 7, 9 The refinement types are numbered as follows: Note that the type is encoded as a 3-bit number, where bits 0, 1, 2 correspond to the X, Y, Z axes, respectively. Other element geometries allow fewer but similar refinement types: triangle (3), square (1, 2, 3), tetrahedron (7), prism (3, 4, 7). The next section is the boundary section, which is exactly the same as in the conforming format: boundary ... The nonconforming mesh however needs to identify hanging vertices, which may occur in the middle of edges or faces as elements are refined. In fact, any vertex that was created as a result of refinement always has two \"parent\" vertices and needs to be listed in the vertex_parents section: vertex_parents ... 
In our example above, vertices 6, 7, 8, 9 have these parents: vertex_parents 4 6 0 3 7 1 4 8 0 1 9 6 7 Vertices can appear in any order in this section. The only limitation is that the first N vertex indices (not listed in this section) be reserved for top-level vertices (those with no parents, typically the vertices of the coarse mesh). The next section is optional and can be safely omitted when creating the mesh file manually. The root_state section affects leaf ordering when traversing the refinement trees and is used to optimize the SFC-based partitioning. There is one number per root element. The default state for all root elements is zero. root_state ... Finally, we have the coordinates section which assigns physical positions to the N top-level vertices. Note that the positions of hanging vertices are always derived from their parent vertices and are not listed in the mesh file. coordinates ... > ... If the mesh is curvilinear, the coordinates section can be replaced with an alternative section called nodes . The nodes keyword is then followed by a serialized GridFunction representing a vector-valued finite element function defining the curvature of the elements, similarly as in the conforming case. The end of the mesh file is marked with the line mfem_mesh_end . For examples of meshes using the NC mesh v1.0 format, see amr-quad.mesh , amr-hex.mesh and fichera-amr.mesh (visualized below) in the data directory of MFEM.", "title": "MFEM NC mesh v1.0"}, {"location": "mesh-format-v1.0/#mfem-mesh-v13", "text": "Version 1.3 of the MFEM mesh file format adds support for named attribute sets. This is a convenience feature which allows application users (or developers) to refer to a set of attribute numbers or boundary attribute numbers using a text string as a shorthand. Domain attribute numbers and boundary attribute numbers cannot coexist in the same set. Attribute numbers can appear in more than one set so that a given region may be referenced for different purposes in different parts of an application. Domain attribute sets are listed after the elements section of the mesh file in a new section titled attribute_sets . Similarly, boundary attribute sets follow boundary in a new section titled bdr_attribute_sets . MFEM mesh v1.3 ... elements ... attribute_sets \"\" ... ... boundary ... bdr_attribute_sets \"\" ... ... vertices ... mfem_mesh_end A specific example of a v1.3 mesh file can be seen in compass.mesh , shown above, which includes names based on compass directions for illustration.", "title": "MFEM mesh v1.3"}, {"location": "mesh-format-v1.0/#nurbs-meshes", "text": "MFEM provides full support for meshes and discretization spaces based on Non-uniform Rational B-Splines (NURBS). These are treated similarly to general curvilinear meshes where the NURBS nodes are specified as a grid function at the end of the mesh file. For example, here is a simple quadratic NURBS mesh for a square domain with a (perfectly) circular hole in the middle. (The exact representation of conical sections is a major advantage of the NURBS approach over high-order finite elements.) 
MFEM NURBS mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # SEGMENT = 1 # SQUARE = 3 # CUBE = 5 # dimension 2 elements 4 1 3 0 1 5 4 1 3 1 2 6 5 1 3 2 3 7 6 1 3 3 0 4 7 boundary 8 1 1 0 1 1 1 1 2 1 1 2 3 1 1 3 0 1 1 5 4 1 1 6 5 1 1 7 6 1 1 4 7 edges 12 0 0 1 0 4 5 1 1 2 1 5 6 2 2 3 2 6 7 3 3 0 3 7 4 4 0 4 4 1 5 4 2 6 4 3 7 vertices 8 knotvectors 5 2 3 0 0 0 1 1 1 2 3 0 0 0 1 1 1 2 3 0 0 0 1 1 1 2 3 0 0 0 1 1 1 2 3 0 0 0 1 1 1 weights 1 1 1 1 1 1 1 1 1 0.707106781 1 0.707106781 1 0.707106781 1 0.707106781 1 1 1 1 0.853553391 0.853553391 0.853553391 0.853553391 FiniteElementSpace FiniteElementCollection: NURBS2 VDim: 2 Ordering: 1 0 0 1 0 1 1 0 1 0.358578644 0.358578644 0.641421356 0.358578644 0.641421356 0.641421356 0.358578644 0.641421356 0.5 0 0.5 0.217157288 1 0.5 0.782842712 0.5 0.5 1 0.5 0.782842712 0 0.5 0.217157288 0.5 0.15 0.15 0.85 0.15 0.85 0.85 0.15 0.85 0.5 0.108578644 0.891421356 0.5 0.5 0.891421356 0.108578644 0.5 This above file, as well as other examples of NURBS meshes, can be found in MFEM's data directory . It can be visualized directly with glvis -m square-disc-nurbs.mesh which after several refinements with the \" i \" key looks like To explain MFEM's NURBS mesh file format, we first note that the topological part of the mesh (the elements and boundary sections) describe the 4 NURBS patches visible above. We use the vertex numbers as labels, so we only need the number of vertices. In the NURBS case we need to also provide description of the edges on the patch boundaries and associate a knot vector with each of them. This is done in the edges section where the first index in each row refers to the knot vector id (from the following knotvectors section), while the remaining two indexes are the edge vertex numbers. The position of the NURBS nodes (control points) is given as a NURBS grid function at the end of the file, while the associated weights are listed in the preceding weights section. Some examples of VTK meshes can be found in MFEM's data directory . Here is one of the 3D NURBS meshes The image above was produced with some refinement (key \" o \") and mouse manipulations from glvis -m pipe-nurbs.mesh Solutions from NURBS discretization spaces are also natively supported. For example here is the approximation for the solution of a simple Poisson problem on a refined version of the above mesh. glvis -m square-disc-nurbs.mesh -g sol.gf", "title": "NURBS meshes"}, {"location": "mesh-format-v1.0/#curvilinear-vtk-meshes", "text": "MFEM also supports quadratic triangular, quadrilaterals, tetrahedral and hexahedral curvilinear meshes in VTK format. This format is described in the VTK file format documentation . The local numbering of degrees of freedom for the biquadratic quads and triquadratic hexes can be found in the Doxygen reference of the vtkBiQuadraticQuad and vtkTriQuadraticHexahedron classes. Currently VTK does not support cubic, and higher-order meshes. As an example, consider a simple curved quadrilateral saved in a file quad.vtk : # vtk DataFile Version 3.0 Generated by MFEM ASCII DATASET UNSTRUCTURED_GRID POINTS 9 double 0 0 0 1 0 0 1 1 0 0.1 0.9 0 0.5 -0.05 0 0.9 0.5 0 0.5 1 0 0 0.5 0 0.45 0.55 0 CELLS 1 10 9 0 1 2 3 4 5 6 7 8 CELL_TYPES 1 28 CELL_DATA 1 SCALARS material int LOOKUP_TABLE default 1 Visualizing it with \" glvis -m quad.vtk \" and typing \" Aemiii \" in the GLVis window we get: The \" i \" key increases the reference element subdivision which gives an increasingly better approximation of the actual curvature of the element. 
To view the curvature of the mapping inside the element we can use the \"I\" key, e.g., glvis -m quad.vtk -k \"AemIIiii\" Here is a slightly more complicated quadratic quadrilateral mesh example (the different colors in the GLVis window are used to distinguish neighboring elements): glvis -m star-q2.vtk -k \"Am\" MFEM and GLVis can also handle quadratic triangular meshes: glvis -m square-disc-p2.vtk -k \"Am\" As well as quadratic tetrahedral and quadratic hexahedral VTK meshes: glvis -m escher-p2.vtk -k \"Aaaooooo**************\" glvis -m fichera-q2.vtk -k \"Aaaooooo******\"", "title": "Curvilinear VTK meshes"}, {"location": "mesh-format-v1.x/", "text": "General MFEM Mesh Format The MFEM mesh v1.x format supports the general description of meshes based on a vector finite element grid function with degrees of freedom in the nodes of the mesh. For simplicity, in this document we refer to this version of the format as MFEM mesh v1.x . The legacy version for meshes with straight edges we will call MFEM linear mesh format. A mesh in the MFEM mesh v1.x format consists of two parts: Topology and Geometry. We illustrate these concepts by comparing with the beam-quad.mesh from MFEM's data/ directory. This is just a simple quadrilateral beam mesh with 8 elements, 18 vertices (numbered 0 to 17) and 18 boundary segments: The original linear mesh version of this file is given in Listing 1 . Topology The topological part of the mesh describes the relations between the elements in the mesh, in terms of neighborhood implied by shared vertices. Actual coordinates do not play a role in this part, so the vertices are just labels used to imply which elements share a vertex, an edge or a face. Some examples: General version of data/beam-quad.mesh Below is the annotated topological part of the MFEM mesh v1.x format for the beam mesh. The complete file is given in Listing 2 . ... # BEGIN Topology Part dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary # Skipping the 18 boundary segments for simplicity vertices 18 # END Topology Part ... The element format above is: ... . Type 3 is quadrilateral, which requires 4 vertex indices. The attribute identify e.g. material sub-domains (2 in this case). NOTE: The topology part of this mesh will be the same, irrespective of the order. Compare e.g. Listing 2 , Listing 3 and Listing 4 . WARNING: The vertices are used only to imply topology, and so there coordinates are not important. The mesh coordinates are implied by the mesh nodes not vertices . In particular, while the Mesh object can return vertex coordinates, they are not used an may be incorrect for high-order mesh. Periodic version of data/beam-quad.mesh The topology part can be used to describe more complicated mesh relations. For example we can identify the two vertical lines of the beam mesh, turning it topologically into a cylinder. The complete file is given in Listing 5 . ... # BEGIN Topology Part dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 0 9 16 # Last element uses vertices 0 and 9 # two vertical boundary have been removed boundary 16 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 0 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 9 16 vertices 18 # END Topology Part ... Compared to the non-periodic version, e.g. 
Listing 2 , the main difference above is that we have fused vertices 8 and 0 and vertices 17 and 9. The difference between the two topologies can be illustrated by solving a simple Laplace problem with homogeneous essential boundary conditions on the resulting mesh. In the periodic case we get: while the solution on the non-periodic mesh looks like: NOTE: Meshes with periodic topology allow us to solve problems with periodic boundary conditions without modifying the application to impose them -- we simply run on a different mesh. Geometry The geometry of the mesh, i.e. the actual position of mesh elements in physical space is described by specifying the mesh nodes as a general finite element (vector) function. In MFEM, finite element functions are objects of type GridFunction which belong to discrete finite element spaces specified by objects FiniteElementSpace and FiniteElementCollection . The actual geometry of each element is obtained by extracting the local degrees of freedom from the global nodes , expanding them in the corresponding (reference element) finite element basis, and using the resulting polynomial vector field to map the reference element. An example of a first order geometry is given in Listing 2 : ... # BEGIN Geometry Part nodes FiniteElementSpace FiniteElementCollection: H1_2D_P1 VDim: 2 Ordering: 1 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 # END Geometry Part Here VDim: 2 means that the nodes grid function is a vector field with two components (i.e. the mesh is embedded in R^2); H1_2D_P1 describes the finite element space (H1/continuous finite elements in 2D of order 1); Ordering refers to how the vector field values are serialized (in this case x,y,x,y,...); and the rest is just the global degrees of freedom representing in this case the vertex coordinates. Compare the above with the linear mesh vertex coordinates from Listing 1 : vertices 18 2 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 In the MFEM mesh v1.x format, the nodes are a regular grid function, just like an other discretized field in a simulation, which has several advantages: The nodes can be part of the discretization, and be evolved directly e.g. in a Lagrangian/ALE simulation. Mesh optimization problems can be posed directly for the nodes variable. Since the nodes can be any finite element function, a wide variety of meshes are easily supported. As an illustration of the last point, consider the geometry of the periodic version of the mesh in Listing 5 ... # BEGIN Geometry Part nodes FiniteElementSpace FiniteElementCollection: L2_T1_2D_P1 VDim: 2 Ordering: 1 0 0 1 0 0 1 1 1 1 0 2 0 1 1 2 1 2 0 3 0 2 1 3 1 3 0 4 0 3 1 4 1 4 0 5 0 4 1 5 1 5 0 6 0 5 1 6 1 6 0 7 0 6 1 7 1 7 0 8 0 7 1 8 1 8 0 9 0 8 1 9 1 9 0 10 0 9 1 10 1 # END Geometry Part ... Note that the space here is L2 , which means a discontinuous linear vector field, where four vertex coordinates are specified on each element. This allows us to plot the periodic mesh as a regular beam, which is what you'd expect for periodic boundary conditions. Finite Element Spaces To fully specify the MFEM mesh v1.x format, we need to describe the degrees of freedom of the nodes finite element space and their global numbering. This is something that the MFEM team is very interested to discuss and standardize with other high-order projects and applications. Below is a description of our current approach... 
Finite element spaces have degrees of freedom (dofs) that are associated with the (interiors of the) mesh vertices, edges, faces and elements. There may be multiple dofs associated with the same geometric entity (e.g. vector fields), and different spaces have different sets of degrees of freedom. For example H1/continuous spaces can have degrees of freedom associated with the Gauss-Lobatto points in a quadrilateral, while L2/discontinuous spaces can have degrees of freedom associated with the Gauss-Legendre points. These are just examples, many choices for the basis are actually possible to be encoded in the FiniteElementCollection string above. In general, based just on the mesh topology and the type of the space, the FiniteElementSpace object can determine a global set of dofs, that will be the values listed for the mesh nodes . The algorithm starts with the given numbering of the elements and the vertices, from which a numbering of the edges and the faces is derived as follows: loop over elements loop over edges and faces inside each element (see below) number currently the edges and faces that have not been numbered yet The ordering of edges/faces within each element is defined by the arrays Edges and FaceVert in the classes Geometry::Constants which are defined in the file fem/geom.cpp , e.g. search for ::Edges or ::FaceVert . Here is the result of this numbering for the beam mesh In addition to a number, each edges and face is also given a global orientation. In 2D and 3D, an edge is oriented from the vertex with the lower vertex id to the vertex with the higher vertex id. In 3D, a face is oriented according to the face-to-vertex mappings in the first element in which the face is enumerated. See the FaceVert arrays in fem/geom.cpp mentioned above, as well as the Mesh::GenerateFaces method in mesh/mesh.cpp . In particular, the normal of the face between two elements points from the element with lower number to the element with higher number. Face orientation however includes not just the normal direction, but also any rotation of the vertices compared to the base, i.e. orientation here means permutation of vertices. The global numbering of degrees of freedom is now performed as follows: loop over vertices list the dofs associated with each vertex loop over edges list the dofs associated with the interior of the edge, lexicographically with respect to the edge orientation loop over faces list the dofs associated with the interior of the face, lexicographically with respect to the face orientation loop over elements list the dofs associated with the interior of the element An example of this is the quadratic mesh in Listing 3 ... # BEGIN Geometry Part nodes FiniteElementSpace FiniteElementCollection: H1_2D_P2 VDim: 2 Ordering: 1 # 18 vertex dofs 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 # 25 edge dofs 0.5 0 1 0.5 0.5 1 0 0.5 1.5 0 2 0.5 1.5 1 2.5 0 3 0.5 2.5 1 3.5 0 4 0.5 3.5 1 4.5 0 5 0.5 4.5 1 5.5 0 6 0.5 5.5 1 6.5 0 7 0.5 6.5 1 7.5 0 8 0.5 7.5 1 # 8 element dofs 0.5 0.5 1.5 0.5 2.5 0.5 3.5 0.5 4.5 0.5 5.5 0.5 6.5 0.5 7.5 0.5 # END Geometry Part ... Listings Listing 1 This is the original version of the beam-quad.mesh using the linear mesh format. 
MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 8 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 17 16 1 1 9 0 2 1 8 17 vertices 18 2 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 Listing 2 This is a MFEM mesh v1.x version of the beam-quad.mesh which is first order. The mesh is identical to the one of Listing 1 , it is just described in a different format. MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 8 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 17 16 1 1 9 0 2 1 8 17 vertices 18 nodes FiniteElementSpace FiniteElementCollection: H1_2D_P1 VDim: 2 Ordering: 1 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 Listing 3 This is a second order version of the beam-quad.mesh . MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 8 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 17 16 1 1 9 0 2 1 8 17 vertices 18 nodes FiniteElementSpace FiniteElementCollection: H1_2D_P2 VDim: 2 Ordering: 1 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 0.5 0 1 0.5 0.5 1 0 0.5 1.5 0 2 0.5 1.5 1 2.5 0 3 0.5 2.5 1 3.5 0 4 0.5 3.5 1 4.5 0 5 0.5 4.5 1 5.5 0 6 0.5 5.5 1 6.5 0 7 0.5 6.5 1 7.5 0 8 0.5 7.5 1 0.5 0.5 1.5 0.5 2.5 0.5 3.5 0.5 4.5 0.5 5.5 0.5 6.5 0.5 7.5 0.5 Listing 4 This is a third order version of the beam-quad.mesh . 
MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 8 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 17 16 1 1 9 0 2 1 8 17 vertices 18 nodes FiniteElementSpace FiniteElementCollection: H1_2D_P3 VDim: 2 Ordering: 1 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 0.27639320225002 0 0.72360679774998 0 1 0.27639320225002 1 0.72360679774998 0.27639320225002 1 0.72360679774998 1 0 0.27639320225002 0 0.72360679774998 1.27639320225 0 1.72360679775 0 2 0.27639320225002 2 0.72360679774998 1.27639320225 1 1.72360679775 1 2.27639320225 0 2.72360679775 0 3 0.27639320225002 3 0.72360679774998 2.27639320225 1 2.72360679775 1 3.27639320225 0 3.72360679775 0 4 0.27639320225002 4 0.72360679774998 3.27639320225 1 3.72360679775 1 4.27639320225 0 4.72360679775 0 5 0.27639320225002 5 0.72360679774998 4.27639320225 1 4.72360679775 1 5.27639320225 0 5.72360679775 0 6 0.27639320225002 6 0.72360679774998 5.27639320225 1 5.72360679775 1 6.27639320225 0 6.72360679775 0 7 0.27639320225002 7 0.72360679774998 6.27639320225 1 6.72360679775 1 7.27639320225 0 7.72360679775 0 8 0.27639320225002 8 0.72360679774998 7.27639320225 1 7.72360679775 1 0.27639320225002 0.27639320225002 0.72360679774998 0.27639320225002 0.27639320225002 0.72360679774998 0.72360679774998 0.72360679774998 1.27639320225 0.27639320225002 1.72360679775 0.27639320225002 1.27639320225 0.72360679774998 1.72360679775 0.72360679774998 2.27639320225 0.27639320225002 2.72360679775 0.27639320225002 2.27639320225 0.72360679774998 2.72360679775 0.72360679774998 3.27639320225 0.27639320225002 3.72360679775 0.27639320225002 3.27639320225 0.72360679774998 3.72360679775 0.72360679774998 4.27639320225 0.27639320225002 4.72360679775 0.27639320225002 4.27639320225 0.72360679774998 4.72360679775 0.72360679774998 5.27639320225 0.27639320225002 5.72360679775 0.27639320225002 5.27639320225 0.72360679774998 5.72360679775 0.72360679774998 6.27639320225 0.27639320225002 6.72360679775 0.27639320225002 6.27639320225 0.72360679774998 6.72360679775 0.72360679774998 7.27639320225 0.27639320225002 7.72360679775 0.27639320225002 7.27639320225 0.72360679774998 7.72360679775 0.72360679774998 Listing 5 Periodic version of the first-order mesh from Listing 1 . MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 0 9 16 boundary 16 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 0 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 9 16 vertices 18 nodes FiniteElementSpace FiniteElementCollection: L2_T1_2D_P1 VDim: 2 Ordering: 1 0 0 1 0 0 1 1 1 1 0 2 0 1 1 2 1 2 0 3 0 2 1 3 1 3 0 4 0 3 1 4 1 4 0 5 0 4 1 5 1 5 0 6 0 5 1 6 1 6 0 7 0 6 1 7 1 7 0 8 0 7 1 8 1 8 0 9 0 8 1 9 1 9 0 10 0 9 1 10 1", "title": "_Mesh Format v1.x"}, {"location": "mesh-format-v1.x/#general-mfem-mesh-format", "text": "The MFEM mesh v1.x format supports the general description of meshes based on a vector finite element grid function with degrees of freedom in the nodes of the mesh. 
For simplicity, in this document we refer to this version of the format as MFEM mesh v1.x . The legacy version for meshes with straight edges we will call MFEM linear mesh format. A mesh in the MFEM mesh v1.x format consists of two parts: Topology and Geometry. We illustrate these concepts by comparing with the beam-quad.mesh from MFEM's data/ directory. This is just a simple quadrilateral beam mesh with 8 elements, 18 vertices (numbered 0 to 17) and 18 boundary segments: The original linear mesh version of this file is given in Listing 1 .", "title": "General MFEM Mesh Format"}, {"location": "mesh-format-v1.x/#topology", "text": "The topological part of the mesh describes the relations between the elements in the mesh, in terms of neighborhood implied by shared vertices. Actual coordinates do not play a role in this part, so the vertices are just labels used to imply which elements share a vertex, an edge or a face. Some examples:", "title": "Topology"}, {"location": "mesh-format-v1.x/#general-version-of-databeam-quadmesh", "text": "Below is the annotated topological part of the MFEM mesh v1.x format for the beam mesh. The complete file is given in Listing 2 . ... # BEGIN Topology Part dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary # Skipping the 18 boundary segments for simplicity vertices 18 # END Topology Part ... The element format above is: ... . Type 3 is quadrilateral, which requires 4 vertex indices. The attribute identify e.g. material sub-domains (2 in this case). NOTE: The topology part of this mesh will be the same, irrespective of the order. Compare e.g. Listing 2 , Listing 3 and Listing 4 . WARNING: The vertices are used only to imply topology, and so there coordinates are not important. The mesh coordinates are implied by the mesh nodes not vertices . In particular, while the Mesh object can return vertex coordinates, they are not used an may be incorrect for high-order mesh.", "title": "General version of data/beam-quad.mesh"}, {"location": "mesh-format-v1.x/#periodic-version-of-databeam-quadmesh", "text": "The topology part can be used to describe more complicated mesh relations. For example we can identify the two vertical lines of the beam mesh, turning it topologically into a cylinder. The complete file is given in Listing 5 . ... # BEGIN Topology Part dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 0 9 16 # Last element uses vertices 0 and 9 # two vertical boundary have been removed boundary 16 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 0 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 9 16 vertices 18 # END Topology Part ... Compared to the non-periodic version, e.g. Listing 2 , the main difference above is that we have fused vertices 8 and 0 and vertices 17 and 9. The difference between the two topologies can be illustrated by solving a simple Laplace problem with homogeneous essential boundary conditions on the resulting mesh. In the periodic case we get: while the solution on the non-periodic mesh looks like: NOTE: Meshes with periodic topology allow us to solve problems with periodic boundary conditions without modifying the application to impose them -- we simply run on a different mesh.", "title": "Periodic version of data/beam-quad.mesh"}, {"location": "mesh-format-v1.x/#geometry", "text": "The geometry of the mesh, i.e. 
the actual position of mesh elements in physical space is described by specifying the mesh nodes as a general finite element (vector) function. In MFEM, finite element functions are objects of type GridFunction which belong to discrete finite element spaces specified by objects FiniteElementSpace and FiniteElementCollection . The actual geometry of each element is obtained by extracting the local degrees of freedom from the global nodes , expanding them in the corresponding (reference element) finite element basis, and using the resulting polynomial vector field to map the reference element. An example of a first order geometry is given in Listing 2 : ... # BEGIN Geometry Part nodes FiniteElementSpace FiniteElementCollection: H1_2D_P1 VDim: 2 Ordering: 1 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 # END Geometry Part Here VDim: 2 means that the nodes grid function is a vector field with two components (i.e. the mesh is embedded in R^2); H1_2D_P1 describes the finite element space (H1/continuous finite elements in 2D of order 1); Ordering refers to how the vector field values are serialized (in this case x,y,x,y,...); and the rest is just the global degrees of freedom representing in this case the vertex coordinates. Compare the above with the linear mesh vertex coordinates from Listing 1 : vertices 18 2 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 In the MFEM mesh v1.x format, the nodes are a regular grid function, just like an other discretized field in a simulation, which has several advantages: The nodes can be part of the discretization, and be evolved directly e.g. in a Lagrangian/ALE simulation. Mesh optimization problems can be posed directly for the nodes variable. Since the nodes can be any finite element function, a wide variety of meshes are easily supported. As an illustration of the last point, consider the geometry of the periodic version of the mesh in Listing 5 ... # BEGIN Geometry Part nodes FiniteElementSpace FiniteElementCollection: L2_T1_2D_P1 VDim: 2 Ordering: 1 0 0 1 0 0 1 1 1 1 0 2 0 1 1 2 1 2 0 3 0 2 1 3 1 3 0 4 0 3 1 4 1 4 0 5 0 4 1 5 1 5 0 6 0 5 1 6 1 6 0 7 0 6 1 7 1 7 0 8 0 7 1 8 1 8 0 9 0 8 1 9 1 9 0 10 0 9 1 10 1 # END Geometry Part ... Note that the space here is L2 , which means a discontinuous linear vector field, where four vertex coordinates are specified on each element. This allows us to plot the periodic mesh as a regular beam, which is what you'd expect for periodic boundary conditions.", "title": "Geometry"}, {"location": "mesh-format-v1.x/#finite-element-spaces", "text": "To fully specify the MFEM mesh v1.x format, we need to describe the degrees of freedom of the nodes finite element space and their global numbering. This is something that the MFEM team is very interested to discuss and standardize with other high-order projects and applications. Below is a description of our current approach... Finite element spaces have degrees of freedom (dofs) that are associated with the (interiors of the) mesh vertices, edges, faces and elements. There may be multiple dofs associated with the same geometric entity (e.g. vector fields), and different spaces have different sets of degrees of freedom. For example H1/continuous spaces can have degrees of freedom associated with the Gauss-Lobatto points in a quadrilateral, while L2/discontinuous spaces can have degrees of freedom associated with the Gauss-Legendre points. 
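As a small illustration (the order and dimension are placeholder values), the corresponding collections can be constructed in MFEM with an explicit choice of basis points:
#include \"mfem.hpp\"
using namespace mfem;

const int order = 3, dim = 2;   // placeholder order and dimension

// A continuous (H1) collection with dofs at Gauss-Lobatto points and a
// discontinuous (L2) collection with dofs at Gauss-Legendre points, as described above.
H1_FECollection h1_coll(order, dim, BasisType::GaussLobatto);
L2_FECollection l2_coll(order, dim, BasisType::GaussLegendre);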
The Gauss-Lobatto and Gauss-Legendre points are just examples; many different basis choices can be encoded in the FiniteElementCollection string above. In general, based just on the mesh topology and the type of the space, the FiniteElementSpace object can determine a global set of dofs, which will be the values listed for the mesh nodes . The algorithm starts with the given numbering of the elements and the vertices, from which a numbering of the edges and the faces is derived as follows: loop over elements loop over edges and faces inside each element (see below) number the edges and faces that have not been numbered yet The ordering of edges/faces within each element is defined by the arrays Edges and FaceVert in the classes Geometry::Constants which are defined in the file fem/geom.cpp , e.g. search for ::Edges or ::FaceVert . Here is the result of this numbering for the beam mesh: In addition to a number, each edge and face is also given a global orientation. In 2D and 3D, an edge is oriented from the vertex with the lower vertex id to the vertex with the higher vertex id. In 3D, a face is oriented according to the face-to-vertex mappings in the first element in which the face is enumerated. See the FaceVert arrays in fem/geom.cpp mentioned above, as well as the Mesh::GenerateFaces method in mesh/mesh.cpp . In particular, the normal of the face between two elements points from the element with lower number to the element with higher number. Face orientation, however, includes not just the normal direction, but also any rotation of the vertices compared to the base, i.e. orientation here means permutation of vertices. The global numbering of degrees of freedom is now performed as follows: loop over vertices list the dofs associated with each vertex loop over edges list the dofs associated with the interior of the edge, lexicographically with respect to the edge orientation loop over faces list the dofs associated with the interior of the face, lexicographically with respect to the face orientation loop over elements list the dofs associated with the interior of the element An example of this is the quadratic mesh in Listing 3 ... # BEGIN Geometry Part nodes FiniteElementSpace FiniteElementCollection: H1_2D_P2 VDim: 2 Ordering: 1 # 18 vertex dofs 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 # 25 edge dofs 0.5 0 1 0.5 0.5 1 0 0.5 1.5 0 2 0.5 1.5 1 2.5 0 3 0.5 2.5 1 3.5 0 4 0.5 3.5 1 4.5 0 5 0.5 4.5 1 5.5 0 6 0.5 5.5 1 6.5 0 7 0.5 6.5 1 7.5 0 8 0.5 7.5 1 # 8 element dofs 0.5 0.5 1.5 0.5 2.5 0.5 3.5 0.5 4.5 0.5 5.5 0.5 6.5 0.5 7.5 0.5 # END Geometry Part ...", "title": "Finite Element Spaces"}, {"location": "mesh-format-v1.x/#listings", "text": "", "title": "Listings"}, {"location": "mesh-format-v1.x/#listing-1", "text": "This is the original version of the beam-quad.mesh using the linear mesh format.
MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 8 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 17 16 1 1 9 0 2 1 8 17 vertices 18 2 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1", "title": "Listing 1"}, {"location": "mesh-format-v1.x/#listing-2", "text": "This is a MFEM mesh v1.x version of the beam-quad.mesh which is first order. The mesh is identical to the one of Listing 1 , it is just described in a different format. MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 8 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 17 16 1 1 9 0 2 1 8 17 vertices 18 nodes FiniteElementSpace FiniteElementCollection: H1_2D_P1 VDim: 2 Ordering: 1 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1", "title": "Listing 2"}, {"location": "mesh-format-v1.x/#listing-3", "text": "This is a second order version of the beam-quad.mesh . MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 8 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 17 16 1 1 9 0 2 1 8 17 vertices 18 nodes FiniteElementSpace FiniteElementCollection: H1_2D_P2 VDim: 2 Ordering: 1 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 0.5 0 1 0.5 0.5 1 0 0.5 1.5 0 2 0.5 1.5 1 2.5 0 3 0.5 2.5 1 3.5 0 4 0.5 3.5 1 4.5 0 5 0.5 4.5 1 5.5 0 6 0.5 5.5 1 6.5 0 7 0.5 6.5 1 7.5 0 8 0.5 7.5 1 0.5 0.5 1.5 0.5 2.5 0.5 3.5 0.5 4.5 0.5 5.5 0.5 6.5 0.5 7.5 0.5", "title": "Listing 3"}, {"location": "mesh-format-v1.x/#listing-4", "text": "This is a third order version of the beam-quad.mesh . 
MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 8 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 17 16 1 1 9 0 2 1 8 17 vertices 18 nodes FiniteElementSpace FiniteElementCollection: H1_2D_P3 VDim: 2 Ordering: 1 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 0.27639320225002 0 0.72360679774998 0 1 0.27639320225002 1 0.72360679774998 0.27639320225002 1 0.72360679774998 1 0 0.27639320225002 0 0.72360679774998 1.27639320225 0 1.72360679775 0 2 0.27639320225002 2 0.72360679774998 1.27639320225 1 1.72360679775 1 2.27639320225 0 2.72360679775 0 3 0.27639320225002 3 0.72360679774998 2.27639320225 1 2.72360679775 1 3.27639320225 0 3.72360679775 0 4 0.27639320225002 4 0.72360679774998 3.27639320225 1 3.72360679775 1 4.27639320225 0 4.72360679775 0 5 0.27639320225002 5 0.72360679774998 4.27639320225 1 4.72360679775 1 5.27639320225 0 5.72360679775 0 6 0.27639320225002 6 0.72360679774998 5.27639320225 1 5.72360679775 1 6.27639320225 0 6.72360679775 0 7 0.27639320225002 7 0.72360679774998 6.27639320225 1 6.72360679775 1 7.27639320225 0 7.72360679775 0 8 0.27639320225002 8 0.72360679774998 7.27639320225 1 7.72360679775 1 0.27639320225002 0.27639320225002 0.72360679774998 0.27639320225002 0.27639320225002 0.72360679774998 0.72360679774998 0.72360679774998 1.27639320225 0.27639320225002 1.72360679775 0.27639320225002 1.27639320225 0.72360679774998 1.72360679775 0.72360679774998 2.27639320225 0.27639320225002 2.72360679775 0.27639320225002 2.27639320225 0.72360679774998 2.72360679775 0.72360679774998 3.27639320225 0.27639320225002 3.72360679775 0.27639320225002 3.27639320225 0.72360679774998 3.72360679775 0.72360679774998 4.27639320225 0.27639320225002 4.72360679775 0.27639320225002 4.27639320225 0.72360679774998 4.72360679775 0.72360679774998 5.27639320225 0.27639320225002 5.72360679775 0.27639320225002 5.27639320225 0.72360679774998 5.72360679775 0.72360679774998 6.27639320225 0.27639320225002 6.72360679775 0.27639320225002 6.27639320225 0.72360679774998 6.72360679775 0.72360679774998 7.27639320225 0.27639320225002 7.72360679775 0.27639320225002 7.27639320225 0.72360679774998 7.72360679775 0.72360679774998", "title": "Listing 4"}, {"location": "mesh-format-v1.x/#listing-5", "text": "Periodic version of the first-order mesh from Listing 1 . 
MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 0 9 16 boundary 16 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 0 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 9 16 vertices 18 nodes FiniteElementSpace FiniteElementCollection: L2_T1_2D_P1 VDim: 2 Ordering: 1 0 0 1 0 0 1 1 1 1 0 2 0 1 1 2 1 2 0 3 0 2 1 3 1 3 0 4 0 3 1 4 1 4 0 5 0 4 1 5 1 5 0 6 0 5 1 6 1 6 0 7 0 6 1 7 1 7 0 8 0 7 1 8 1 8 0 9 0 8 1 9 1 9 0 10 0 9 1 10 1", "title": "Listing 5"}, {"location": "mesh-formats/", "text": "Supported Mesh Formats MFEM supports a number of mesh formats, including: MFEM's built-in formats, including arbitrary high-order curvilinear meshes and non-conforming (AMR) meshes. VTK format (XML VTU format and legacy ASCII format). The CUBIT meshes through the Genesis (NetCDF) binary format. The NETGEN triangular and tetrahedral mesh formats. The TrueGrid hexahedral mesh format. See below for more details and information on the specific formats that are supported. All of these mesh formats are also supported by MFEM's native visualization tool, GLVis . MFEM Mesh Formats Detailed description of these formats can be found on MFEM's mesh formats page. MFEM supports: MFEM's mesh v1.0 format for straight meshes. MFEM's mesh v1.x format for arbitrary high-order curvilinear and more general meshes. MFEM's mesh v1.2 format, which adds support for parallel meshes. MFEM's mesh v1.3 format , which adds support for named attribute sets. MFEM's NC mesh v1.0 format , supporting non-conforming (AMR) meshes. MFEM's format for NURBS meshes. VTK Mesh Formats MFEM supports reading VTK (ASCII) and VTU (XML) unstructured meshes. For more details on these formats, see the VTK User's Guide and the VTK Wiki . Specifically, MFEM supports: Meshes with high-order Lagrange elements . Mixed meshes with all element types. XML format with inline or appended binary data, including zlib compression. If the VTK or VTU file has a cell data array named \"material\" or \"attribute\", this cell data will be used for MFEM's element attribute numbers. If both data arrays are present, the one named \"material\" will take precedence. Gmsh Mesh Formats MFEM supports reading version 2.2 of the Gmsh ASCII and binary formats for 2D and 3D meshes. High-order elements (up to order 9) are supported, as are periodic meshes. Note that newer versions of Gmsh output files in version 4.1 of the Gmsh format, which is not compatible with MFEM. Users should either specify Mesh.MshFileVersion = 2.2; in their geometry file or run Gmsh with -format msh22 from the command line. Elements' physical tags in Gmsh correspond to their attribute numbers in MFEM. MFEM only supports strictly positive (\u2265 1) attributes, so users should be sure to define all physical groups with strictly positive tag numbers. The one exception to this is in cases where all elements have physical tag zero (which happens by default in Gmsh when no physical groups are defined). In this case, MFEM will reassign all the elements to have attribute number 1 instead of failing to read the mesh.", "title": "Mesh Formats"}, {"location": "mesh-formats/#supported-mesh-formats", "text": "MFEM supports a number of mesh formats, including: MFEM's built-in formats, including arbitrary high-order curvilinear meshes and non-conforming (AMR) meshes. 
VTK format (XML VTU format and legacy ASCII format). The CUBIT meshes through the Genesis (NetCDF) binary format. The NETGEN triangular and tetrahedral mesh formats. The TrueGrid hexahedral mesh format. See below for more details and information on the specific formats that are supported. All of these mesh formats are also supported by MFEM's native visualization tool, GLVis .", "title": "Supported Mesh Formats"}, {"location": "mesh-formats/#mfem-mesh-formats", "text": "Detailed description of these formats can be found on MFEM's mesh formats page. MFEM supports: MFEM's mesh v1.0 format for straight meshes. MFEM's mesh v1.x format for arbitrary high-order curvilinear and more general meshes. MFEM's mesh v1.2 format, which adds support for parallel meshes. MFEM's mesh v1.3 format , which adds support for named attribute sets. MFEM's NC mesh v1.0 format , supporting non-conforming (AMR) meshes. MFEM's format for NURBS meshes.", "title": "MFEM Mesh Formats"}, {"location": "mesh-formats/#vtk-mesh-formats", "text": "MFEM supports reading VTK (ASCII) and VTU (XML) unstructured meshes. For more details on these formats, see the VTK User's Guide and the VTK Wiki . Specifically, MFEM supports: Meshes with high-order Lagrange elements . Mixed meshes with all element types. XML format with inline or appended binary data, including zlib compression. If the VTK or VTU file has a cell data array named \"material\" or \"attribute\", this cell data will be used for MFEM's element attribute numbers. If both data arrays are present, the one named \"material\" will take precedence.", "title": "VTK Mesh Formats"}, {"location": "mesh-formats/#gmsh-mesh-formats", "text": "MFEM supports reading version 2.2 of the Gmsh ASCII and binary formats for 2D and 3D meshes. High-order elements (up to order 9) are supported, as are periodic meshes. Note that newer versions of Gmsh output files in version 4.1 of the Gmsh format, which is not compatible with MFEM. Users should either specify Mesh.MshFileVersion = 2.2; in their geometry file or run Gmsh with -format msh22 from the command line. Elements' physical tags in Gmsh correspond to their attribute numbers in MFEM. MFEM only supports strictly positive (\u2265 1) attributes, so users should be sure to define all physical groups with strictly positive tag numbers. The one exception to this is in cases where all elements have physical tag zero (which happens by default in Gmsh when no physical groups are defined). In this case, MFEM will reassign all the elements to have attribute number 1 instead of failing to read the mesh.", "title": "Gmsh Mesh Formats"}, {"location": "meshing-miniapps/", "text": "Meshing Miniapps The miniapps/meshing directory contains a collection of meshing-related miniapps based on MFEM. Compared to the example codes , the miniapps are more complex, demonstrating more advanced usage of the library. They are intended to be more representative of MFEM-based application codes. We recommend that new users start with the example codes before moving to the miniapps. The current meshing miniapps are described below. Mobius Strip This miniapp generates various Mobius strip-like surface meshes. It is a good way to generate complex surface meshes. Manipulating the mesh topology and performing mesh transformation are demonstrated. The mobius-strip mesh in the data directory was generated with this miniapp. Klein Bottle This miniapp generates three types of Klein bottle surfaces. It is similar to the mobius-strip miniapp. 
The klein-bottle and klein-donut meshes in the data directory were generated with this miniapp. Toroid This miniapp generates two types of toroidal volume meshes; one with triangular cross sections and one with square cross sections. A wide variety of toroidal meshes can be generated by varying the amount of twist as well as the major and minor radii and other variables. The toroid-wedge and toroid-hex meshes in the data directory were generated with this miniapp. Twist This miniapp generates simple periodic meshes made from different types of elements. A wide variety of twisted meshes can be generated by varying the amount of twist as well as the dimensions, element types, and other variables. Extruder This miniapp creates higher dimensional meshes from lower dimensional meshes by extrusion. Simple coordinate transformations can also be applied if desired. The initial mesh can be 1D or 2D. 1D meshes can be extruded in the y-direction first and then in the z-direction. 2D meshes can be triangular, quadrilateral, or contain both element types. Trimmer This miniapp creates a new mesh file from an existing mesh by trimming away elements with selected attributes. High order and/or periodic meshes are supported, although NURBS meshes are not. By default, newly exposed boundaries will be assigned unique boundary attributes. The new boundary attributes are determined by adding the volume attribute of the exposing elements to the maximum boundary attribute in the original mesh. Alternatively, the user can specify new boundary attributes to be associated with each volume attribute being trimmed away. In the latter case, the new attributes need not be unique. Polar-NC This miniapp generates a circular sector mesh that consists of quadrilaterals and triangles of similar sizes. The 3D version of the mesh is made of prisms and tetrahedra: The mesh is non-conforming by design, and can optionally be made curvilinear. The elements are ordered along a space-filling curve by default, which makes the mesh ready for parallel non-conforming AMR in MFEM. Shaper This miniapp performs multiple levels of adaptive mesh refinement to resolve the interfaces between different \"materials\" in the mesh, as specified by a given material() function. It can be used as a simple initial mesh generator, for example in the case when the interface is too complex to describe without local refinement. Both conforming and non-conforming refinements are supported. Mesh Explorer This miniapp is a handy tool to examine, visualize and manipulate a given mesh. Some of its features are: visualization of mesh materials and individual mesh elements; mesh scaling, randomization, and general transformation; manipulation of the mesh curvature; the ability to simulate parallel partitioning; and quantitative and visual reports of mesh quality. Mesh Optimizer This miniapp performs mesh optimization using the Target-Matrix Optimization Paradigm (TMOP) by P. Knupp et al., and a global variational minimization approach. It minimizes the quantity $\sum_T \int_T \mu(J(x))$, where $T$ are the target (ideal) elements, $J$ is the Jacobian of the transformation from the target to the physical element, and $\mu$ is the mesh quality metric. This metric can measure shape, size or alignment of the region around each quadrature point. The combination of targets and quality metrics is used to optimize the physical node positions, i.e., to make the elements as close as possible to the shape / size / alignment of their targets.
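In practice, this objective is evaluated by numerical quadrature on each target element. Schematically (a sketch only, with $x_k$ and $w_k$ denoting assumed quadrature points and weights on the reference element, and $J_T$ the Jacobian of the map from the reference element to the target element $T$): $$\sum_T \int_T \mu(J(x)) \;\approx\; \sum_T \sum_k w_k \, \det\!\big(J_T(x_k)\big)\, \mu\!\big(J(x_k)\big),$$ which is why the quality metric only needs to be evaluated at the quadrature points mentioned above.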
Minimal Surface This miniapp solves Plateau's nonlinear elliptic problem: the Dirichlet problem for the minimal surface equation. The weak form of the equation, with prescribed boundary conditions, is given by: $$\\int_\\Omega\\frac{\\nabla{u}\\cdot\\nabla{v}}{\\sqrt{1+|\\nabla{u}|^2}}dx = 0$$ Two problems can be run: Problem 0 solves the minimal surface equation of parametric surfaces . The command line options allow the selection of different parametrization: Catenoid, Helicoid, Enneper, Hold, Costa, Shell, Scherk or simply one from an input mesh file. Problem 1 solves the minimal surface equation for surfaces restricted to be graphs of the form $z=f(x,y)$ . This problem is solved using the Picard iterations: $$\\int_\\Omega\\frac{\\nabla{u_{n+1}}\\cdot\\nabla{v}}{\\sqrt{1+|\\nabla{u_n}|^2}}dx = 0$$ MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Meshing"}, {"location": "meshing-miniapps/#meshing-miniapps", "text": "The miniapps/meshing directory contains a collection of meshing-related miniapps based on MFEM. Compared to the example codes , the miniapps are more complex, demonstrating more advanced usage of the library. They are intended to be more representative of MFEM-based application codes. We recommend that new users start with the example codes before moving to the miniapps. The current meshing miniapps are described below.", "title": "Meshing Miniapps"}, {"location": "meshing-miniapps/#mobius-strip", "text": "This miniapp generates various Mobius strip-like surface meshes. It is a good way to generate complex surface meshes. Manipulating the mesh topology and performing mesh transformation are demonstrated. The mobius-strip mesh in the data directory was generated with this miniapp.", "title": "Mobius Strip"}, {"location": "meshing-miniapps/#klein-bottle", "text": "This miniapp generates three types of Klein bottle surfaces. It is similar to the mobius-strip miniapp. The klein-bottle and klein-donut meshes in the data directory were generated with this miniapp.", "title": "Klein Bottle"}, {"location": "meshing-miniapps/#toroid", "text": "This miniapp generates two types of toroidal volume meshes; one with triangular cross sections and one with square cross sections. A wide variety of toroidal meshes can be generated by varying the amount of twist as well as the major and minor radii and other variables. The toroid-wedge and toroid-hex meshes in the data directory were generated with this miniapp.", "title": "Toroid"}, {"location": "meshing-miniapps/#twist", "text": "This miniapp generates simple periodic meshes made from different types of elements. A wide variety of twisted meshes can be generated by varying the amount of twist as well as the dimensions, element types, and other variables.", "title": "Twist"}, {"location": "meshing-miniapps/#extruder", "text": "This miniapp creates higher dimensional meshes from lower dimensional meshes by extrusion. Simple coordinate transformations can also be applied if desired. The initial mesh can be 1D or 2D. 1D meshes can be extruded in the y-direction first and then in the z-direction. 2D meshes can be triangular, quadrilateral, or contain both element types.", "title": "Extruder"}, {"location": "meshing-miniapps/#trimmer", "text": "This miniapp creates a new mesh file from an existing mesh by trimming away elements with selected attributes. High order and/or periodic meshes are supported although NURBS meshes are not. 
By default, newly exposed boundaries will be assigned unique boundary attributes. The new boundary attributes are determined by adding the volume attribute of the exposing elements to the maximum boundary attribute in the original mesh. Alternatively, the user can specify new boundary attributes to be associated with each volume attribute being trimmed away. In the latter case, the new attributes need not be unique.", "title": "Trimmer"}, {"location": "meshing-miniapps/#polar-nc", "text": "This miniapp generates a circular sector mesh that consists of quadrilaterals and triangles of similar sizes. The 3D version of the mesh is made of prisms and tetrahedra: The mesh is non-conforming by design, and can optionally be made curvilinear. The elements are ordered along a space-filling curve by default, which makes the mesh ready for parallel non-conforming AMR in MFEM.", "title": "Polar-NC"}, {"location": "meshing-miniapps/#shaper", "text": "This miniapp performs multiple levels of adaptive mesh refinement to resolve the interfaces between different \"materials\" in the mesh, as specified by a given material() function. It can be used as a simple initial mesh generator, for example in the case when the interface is too complex to describe without local refinement. Both conforming and non-conforming refinements are supported.", "title": "Shaper"}, {"location": "meshing-miniapps/#mesh-explorer", "text": "This miniapp is a handy tool to examine, visualize and manipulate a given mesh. Some of its features are: visualization of mesh materials and individual mesh elements; mesh scaling, randomization, and general transformation; manipulation of the mesh curvature; the ability to simulate parallel partitioning; and quantitative and visual reports of mesh quality.", "title": "Mesh Explorer"}, {"location": "meshing-miniapps/#mesh-optimizer", "text": "This miniapp performs mesh optimization using the Target-Matrix Optimization Paradigm (TMOP) by P. Knupp et al., and a global variational minimization approach. It minimizes the quantity $\sum_T \int_T \mu(J(x))$, where $T$ are the target (ideal) elements, $J$ is the Jacobian of the transformation from the target to the physical element, and $\mu$ is the mesh quality metric. This metric can measure shape, size or alignment of the region around each quadrature point. The combination of targets and quality metrics is used to optimize the physical node positions, i.e., to make the elements as close as possible to the shape / size / alignment of their targets.", "title": "Mesh Optimizer"}, {"location": "meshing-miniapps/#minimal-surface", "text": "This miniapp solves Plateau's nonlinear elliptic problem: the Dirichlet problem for the minimal surface equation. The weak form of the equation, with prescribed boundary conditions, is given by: $$\int_\Omega\frac{\nabla{u}\cdot\nabla{v}}{\sqrt{1+|\nabla{u}|^2}}dx = 0$$ Two problems can be run: Problem 0 solves the minimal surface equation for parametric surfaces . The command line options allow the selection of different parametrizations: Catenoid, Helicoid, Enneper, Hold, Costa, Shell, Scherk or simply one from an input mesh file. Problem 1 solves the minimal surface equation for surfaces restricted to be graphs of the form $z=f(x,y)$ .
This problem is solved using the Picard iterations: $$\\int_\\Omega\\frac{\\nabla{u_{n+1}}\\cdot\\nabla{v}}{\\sqrt{1+|\\nabla{u_n}|^2}}dx = 0$$ MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Minimal Surface"}, {"location": "news/", "text": "MFEM News Oct 28, 2024 Postdoc position on the MFEM team at LLNL. Oct 22, 2024 2024 MFEM community workshop . Jun 5, 2024 MFEM in the cloud tutorial as part of the HPCIC Tutorial series. May 7, 2024 Version 4.7 released . May 2, 2024 New MFEM paper to appear in the International Journal of High Performance Computing Application. Nov 13, 2023 Recap of the 2023 Workshop , held on October 26. Oct 26, 2023 2023 MFEM community workshop . Sep 27, 2023 Version 4.6 released . Sep 11, 2023 MFEM now available in Homebrew . Jul 17, 2023 The third MFEM Community Workshop will take place on October 26th, 2023. Jul 11, 2023 MFEM in the cloud tutorial as part of the RADIUSS AWS Tutorial series. Apr 11, 2023 GitHub ReadME project article on open-source software for fusion mentions MFEM. Mar 23, 2023 Version 4.5.2 released . Feb 22, 2023 AWS releases the Palace code for cloud-based electromagnetics simulations of quantum computing hardware based on MFEM Jan 6, 2023 Complete YouTube playlist of 2022 Workshop videos now available. Nov 16, 2022 Recap of the 2022 Workshop , held on October 25. Oct 22, 2022 Version 4.5 released . Oct 11, 2022 New Enzyme + MFEM project to efficiently differentiate large-scale finite element applications. Aug 18, 2022 The second MFEM Community Workshop will take place on October 25th, 2022. Aug 15, 2022 MFEM in the cloud tutorial as part of the RADIUSS AWS Tutorial series. Mar 21, 2022 Version 4.4 released . Jan 20, 2022 FEM@LLNL seminar series starting. Nov 30, 2021 New page with recorded talks + videos. Nov 12, 2021 Article summarizing the October 20th, 2021, community workshop . Jul 29, 2021 Version 4.3 released . Jul 10, 2021 The inaugural MFEM Community Workshop will take place on October 20th, 2021. Apr 22, 2021 MFEM featured on S&TR magazine cover . Mar 1, 2021 Logo featured throughout LLNL 2020 annual report . Feb 16, 2021 New documentation page on GPU performance . Dec 19, 2020 PyMFEM available with pip install mfem . Oct 30, 2020 Version 4.2 released . Jul 11, 2020 MFEM paper in Computers & Mathematics with Applications. Jun 24, 2020 MFEM video available on YouTube. Jun 8, 2020 ECP podcast about mfem-4.1. Jun 8, 2020 Matrix-free high-order solvers research highlighted in CASC Newsletter #9. Mar 30, 2020 Remhos a new MFEM-based miniapp for high-order DG remap released. Mar 29, 2020 CEED v3.0 and libCEED v0.6 released with updated MFEM support. Mar 27, 2020 Laghos v3.0 released with direct device support based on MFEM-4.1. Mar 10, 2020 Version 4.1 released . Nov 20, 2019 MFEM overview paper available on arXiv. May 24, 2019 Version 4.0 released with initial GPU support. May 10, 2019 AMR and TMOP papers available on arXiv. Mar 30, 2019 CEED v2.0 and libCEED v0.4 released with MFEM support. Mar 22, 2019 A version of the Laghos miniapp released for use in the second edition of the Commodity Technology Systems procurement process. Nov 19, 2018 Laghos v2.0 released with CUDA, RAJA, OCCA and AMR versions. Nov 9, 2018 MFEM part of the first release of the Extreme-Scale Scientific Software Stack (E4S) by the Software Technologies focus area of the ECP. Aug 6, 2018 Unstructured technologies presentation at ATPESC18 . May 29, 2018 Version 3.4 released . 
Apr 2, 2018 MFEM part of OpenHPC , a Linux Foundation project for software components required to deploy and manage HPC Linux clusters. Mar 30, 2018 CEED v1.0 and libCEED v0.2 released with MFEM support. Mar 1, 2018 MFEM highlighted in LLNL's Science & Technology Review magazine, including on the cover . Dec 30, 2017 Initial version of libCEED , the low-level CEED API, released. Nov 10, 2017 Version 3.3.2 released . Nov 7, 2017 ECP article: Co-Design Center Develops Next-Generation Simulation Tools , also in HPCwire . Oct 30, 2017 Laghos part of the ECP Proxy App Suite 1.0 , CORAL-2 Benchmarks and ASC co-design miniapps . Oct 16, 2017 Postdoc position available for electromagnetic simulations with MFEM. Sep 22, 2017 LLNL Newsline: LLNL gears up for next generation of computer-aided design and engineering . Jun 15, 2017 Laghos miniapp and CEED benchmarks released. May 8, 2017 News highlight: Accelerating Simulation Software with Graphics Processing Units . Feb 16, 2017 Moved main development to GitHub. Jan 28, 2017 Version 3.3 released . Dec 15, 2016 Postdoc position for exascale computing with MFEM. Nov 11, 2016 MFEM part of the new ECP co-design Center for Efficient Exascale Discretizations (CEED) . Nov 11, 2016 LLNL Newsline: Lawrence Livermore tapped to lead co-design center for exascale computing ecosystem . Oct 6, 2016 Science & Technology Review article: Laying the Groundwork for Extreme-Scale Computing , see also the YouTube preview . Sep 19, 2016 PyMFEM - a Python wrapper for MFEM by Syun'ichi Shiraiwa from MIT's Plasma Science and Fusion Center released. Jun 30, 2016 Version 3.2 released . May 6, 2016 MFEM packages available in homebrew and spack . Mar 9, 2016 VisIt 2.10.1 released with MFEM 3.1 support. Mar 4, 2016 New LLNL open-source software Blog and Twitter . Feb 16, 2016 Version 3.1 released . Feb 5, 2016 MFEM simulation images part of the Art of Science exhibition at the Livermore public library. Jan 6, 2016 News highlight: High-order finite element library provides scientists with access to cutting-edge algorithms . Aug 18, 2015 Moved to GitHub and mfem.org . Jan 26, 2015 Version 3.0 released .", "title": "News"}, {"location": "news/#mfem-news", "text": "Oct 28, 2024 Postdoc position on the MFEM team at LLNL. Oct 22, 2024 2024 MFEM community workshop . Jun 5, 2024 MFEM in the cloud tutorial as part of the HPCIC Tutorial series. May 7, 2024 Version 4.7 released . May 2, 2024 New MFEM paper to appear in the International Journal of High Performance Computing Application. Nov 13, 2023 Recap of the 2023 Workshop , held on October 26. Oct 26, 2023 2023 MFEM community workshop . Sep 27, 2023 Version 4.6 released . Sep 11, 2023 MFEM now available in Homebrew . Jul 17, 2023 The third MFEM Community Workshop will take place on October 26th, 2023. Jul 11, 2023 MFEM in the cloud tutorial as part of the RADIUSS AWS Tutorial series. Apr 11, 2023 GitHub ReadME project article on open-source software for fusion mentions MFEM. Mar 23, 2023 Version 4.5.2 released . Feb 22, 2023 AWS releases the Palace code for cloud-based electromagnetics simulations of quantum computing hardware based on MFEM Jan 6, 2023 Complete YouTube playlist of 2022 Workshop videos now available. Nov 16, 2022 Recap of the 2022 Workshop , held on October 25. Oct 22, 2022 Version 4.5 released . Oct 11, 2022 New Enzyme + MFEM project to efficiently differentiate large-scale finite element applications. Aug 18, 2022 The second MFEM Community Workshop will take place on October 25th, 2022. 
Aug 15, 2022 MFEM in the cloud tutorial as part of the RADIUSS AWS Tutorial series. Mar 21, 2022 Version 4.4 released . Jan 20, 2022 FEM@LLNL seminar series starting. Nov 30, 2021 New page with recorded talks + videos. Nov 12, 2021 Article summarizing the October 20th, 2021, community workshop . Jul 29, 2021 Version 4.3 released . Jul 10, 2021 The inaugural MFEM Community Workshop will take place on October 20th, 2021. Apr 22, 2021 MFEM featured on S&TR magazine cover . Mar 1, 2021 Logo featured throughout LLNL 2020 annual report . Feb 16, 2021 New documentation page on GPU performance . Dec 19, 2020 PyMFEM available with pip install mfem . Oct 30, 2020 Version 4.2 released . Jul 11, 2020 MFEM paper in Computers & Mathematics with Applications. Jun 24, 2020 MFEM video available on YouTube. Jun 8, 2020 ECP podcast about mfem-4.1. Jun 8, 2020 Matrix-free high-order solvers research highlighted in CASC Newsletter #9. Mar 30, 2020 Remhos a new MFEM-based miniapp for high-order DG remap released. Mar 29, 2020 CEED v3.0 and libCEED v0.6 released with updated MFEM support. Mar 27, 2020 Laghos v3.0 released with direct device support based on MFEM-4.1. Mar 10, 2020 Version 4.1 released . Nov 20, 2019 MFEM overview paper available on arXiv. May 24, 2019 Version 4.0 released with initial GPU support. May 10, 2019 AMR and TMOP papers available on arXiv. Mar 30, 2019 CEED v2.0 and libCEED v0.4 released with MFEM support. Mar 22, 2019 A version of the Laghos miniapp released for use in the second edition of the Commodity Technology Systems procurement process. Nov 19, 2018 Laghos v2.0 released with CUDA, RAJA, OCCA and AMR versions. Nov 9, 2018 MFEM part of the first release of the Extreme-Scale Scientific Software Stack (E4S) by the Software Technologies focus area of the ECP. Aug 6, 2018 Unstructured technologies presentation at ATPESC18 . May 29, 2018 Version 3.4 released . Apr 2, 2018 MFEM part of OpenHPC , a Linux Foundation project for software components required to deploy and manage HPC Linux clusters. Mar 30, 2018 CEED v1.0 and libCEED v0.2 released with MFEM support. Mar 1, 2018 MFEM highlighted in LLNL's Science & Technology Review magazine, including on the cover . Dec 30, 2017 Initial version of libCEED , the low-level CEED API, released. Nov 10, 2017 Version 3.3.2 released . Nov 7, 2017 ECP article: Co-Design Center Develops Next-Generation Simulation Tools , also in HPCwire . Oct 30, 2017 Laghos part of the ECP Proxy App Suite 1.0 , CORAL-2 Benchmarks and ASC co-design miniapps . Oct 16, 2017 Postdoc position available for electromagnetic simulations with MFEM. Sep 22, 2017 LLNL Newsline: LLNL gears up for next generation of computer-aided design and engineering . Jun 15, 2017 Laghos miniapp and CEED benchmarks released. May 8, 2017 News highlight: Accelerating Simulation Software with Graphics Processing Units . Feb 16, 2017 Moved main development to GitHub. Jan 28, 2017 Version 3.3 released . Dec 15, 2016 Postdoc position for exascale computing with MFEM. Nov 11, 2016 MFEM part of the new ECP co-design Center for Efficient Exascale Discretizations (CEED) . Nov 11, 2016 LLNL Newsline: Lawrence Livermore tapped to lead co-design center for exascale computing ecosystem . Oct 6, 2016 Science & Technology Review article: Laying the Groundwork for Extreme-Scale Computing , see also the YouTube preview . Sep 19, 2016 PyMFEM - a Python wrapper for MFEM by Syun'ichi Shiraiwa from MIT's Plasma Science and Fusion Center released. Jun 30, 2016 Version 3.2 released . 
May 6, 2016 MFEM packages available in homebrew and spack . Mar 9, 2016 VisIt 2.10.1 released with MFEM 3.1 support. Mar 4, 2016 New LLNL open-source software Blog and Twitter . Feb 16, 2016 Version 3.1 released . Feb 5, 2016 MFEM simulation images part of the Art of Science exhibition at the Livermore public library. Jan 6, 2016 News highlight: High-order finite element library provides scientists with access to cutting-edge algorithms . Aug 18, 2015 Moved to GitHub and mfem.org . Jan 26, 2015 Version 3.0 released .", "title": "MFEM News"}, {"location": "nonlininteg/", "text": "Nonlinear Form Integrators $ \newcommand{\cross}{\times} \newcommand{\inner}{\cdot} \newcommand{\div}{\nabla\cdot} \newcommand{\curl}{\nabla\times} \newcommand{\grad}{\nabla} \newcommand{\ddx}[1]{\frac{d#1}{dx}} $ Nonlinear form integrators are used to express the local action of a general nonlinear finite element operator. Depending on the implementation, they can also provide the capability to assemble the local gradient operator or to compute the local energy. TMOP integrator for variational minimization The TMOP_Integrator is used for mesh optimization by node movement. It represents the nonlinear objective function that arises in the Target-Matrix Optimization Paradigm (TMOP), as described in this publication . For an element $E_p$ in physical space, the local action and gradient of the integrator compute \begin{equation} F(x) = \int_{E_t} \frac{\partial \mu(J_{pt})}{\partial x} ~ d x_t \,, \quad \partial F(x) = \int_{E_t} \frac{\partial^2 \mu(J_{pt})}{\partial{x^2}} ~ d x_t \,, \end{equation} where $x$ is the vector of positions for the mesh nodes of $E_p$; $x_t$ are positions in the target element $E_t$, which corresponds to $E_p$ (see class TargetConstructor ), and $J_{pt}$ is the Jacobian of the transformation from $E_t$ to $E_p$; and $\mu$ is a mesh quality metric that is evaluated at quadrature points (see class TMOP_QualityMetric ). The local energy of the integrator represents the integral of $\mu$ over the target element. Convective acceleration The VectorConvectionNLFIntegrator implements the local action of $(u \cdot \grad u, v)$, where $u, v \in H_1^d$ for $d = 2, 3$. This term arises e.g. in the weak form of the Navier-Stokes equations. It also allows one to assemble the local gradient, which is represented by the linearization of the local action around $u$ in the direction $\delta u$. Using the definition of the Gateaux derivative for functions \begin{equation} F'(u, \delta u) = \lim_{\epsilon \to 0} \frac{F(u + \epsilon \delta u) - F(u)}{\epsilon} \end{equation} with $F(u) = u \cdot \grad u$, we arrive at \begin{equation} F'(u, \delta u) = u \cdot \grad \delta u + \delta u \cdot \grad u. \end{equation} The local gradient $(F'(u, \delta u), v)$ can be computed by calling the GetGradient method of NonlinearForm .", "title": "_Nonlinear Form Integrators"}, {"location": "nonlininteg/#nonlinear-form-integrators", "text": "$ \newcommand{\cross}{\times} \newcommand{\inner}{\cdot} \newcommand{\div}{\nabla\cdot} \newcommand{\curl}{\nabla\times} \newcommand{\grad}{\nabla} \newcommand{\ddx}[1]{\frac{d#1}{dx}} $ Nonlinear form integrators are used to express the local action of a general nonlinear finite element operator.
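For instance, a minimal serial C++ sketch (not taken from the MFEM documentation; the mesh file, order and unit coefficient are placeholder choices) of wiring the convective term described above into a NonlinearForm and evaluating its action and gradient could look like:
#include \"mfem.hpp\"
using namespace mfem;

int main()
{
   Mesh mesh(\"../data/star.mesh\");            // assumed 2D example mesh
   const int dim = mesh.Dimension();
   H1_FECollection fec(2, dim);                 // order-2 continuous elements
   FiniteElementSpace vfes(&mesh, &fec, dim);   // vector-valued H1 space

   ConstantCoefficient one(1.0);
   NonlinearForm N(&vfes);
   N.AddDomainIntegrator(new VectorConvectionNLFIntegrator(one));

   GridFunction u(&vfes);
   u = 1.0;                                     // placeholder velocity field

   Vector y(vfes.GetTrueVSize());
   N.Mult(u, y);                                // action (u . grad u, v)
   Operator &J = N.GetGradient(u);              // linearization around u
   (void) J;                                    // e.g. to be used inside a Newton solver
   return 0;
}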
Depending on the implementation, such integrators can also provide the capability to assemble the local gradient operator or to compute the local energy.", "title": "Nonlinear Form Integrators"}, {"location": "nonlininteg/#tmop-integrator-for-variational-minimization", "text": "The TMOP_Integrator is used for mesh optimization by node movement. It represents the nonlinear objective function that arises in the Target-Matrix Optimization Paradigm (TMOP), as described in this publication . For an element $E_p$ in physical space, the local action and gradient of the integrator compute \begin{equation} F(x) = \int_{E_t} \frac{\partial \mu(J_{pt})}{\partial x} ~ d x_t \,, \quad \partial F(x) = \int_{E_t} \frac{\partial^2 \mu(J_{pt})}{\partial{x^2}} ~ d x_t \,, \end{equation} where $x$ is the vector of positions for the mesh nodes of $E_p$; $x_t$ are positions in the target element $E_t$, which corresponds to $E_p$ (see class TargetConstructor ), and $J_{pt}$ is the Jacobian of the transformation from $E_t$ to $E_p$; and $\mu$ is a mesh quality metric that is evaluated at quadrature points (see class TMOP_QualityMetric ). The local energy of the integrator represents the integral of $\mu$ over the target element.", "title": "TMOP integrator for variational minimization"}, {"location": "nonlininteg/#convective-acceleration", "text": "The VectorConvectionNLFIntegrator implements the local action of $(u \cdot \grad u, v)$, where $u, v \in H_1^d$ for $d = 2, 3$. This term arises e.g. in the weak form of the Navier-Stokes equations. It also allows one to assemble the local gradient, which is represented by the linearization of the local action around $u$ in the direction $\delta u$. Using the definition of the Gateaux derivative for functions \begin{equation} F'(u, \delta u) = \lim_{\epsilon \to 0} \frac{F(u + \epsilon \delta u) - F(u)}{\epsilon} \end{equation} with $F(u) = u \cdot \grad u$, we arrive at \begin{equation} F'(u, \delta u) = u \cdot \grad \delta u + \delta u \cdot \grad u. \end{equation} The local gradient $(F'(u, \delta u), v)$ can be computed by calling the GetGradient method of NonlinearForm .", "title": "Convective acceleration"}, {"location": "nurbs/", "text": "NURBS Miniapps These miniapps demonstrate the use of NURBS-based Isogeometric analysis 1 , 2 . NURBS Ex 1: Laplace problem This example code solves a simple Laplace problem \begin{align} -\Delta u = 1 \end{align} with homogeneous Dirichlet boundary conditions. For implementation see miniapps/nurbs/nurbs_ex1 . NURBS Ex 3: Maxwell problem This example code solves a simple 3D electromagnetic diffusion problem corresponding to the second order definite Maxwell equation \begin{align} \nabla\times\nabla\times\, E + E = f \end{align} with boundary condition $ E \times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 2D or 3D. The example demonstrates the use of $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. Static condensation is also illustrated. For implementation see miniapps/nurbs/nurbs_ex1 .
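The core assembly behind the Maxwell example above looks schematically as follows. This is only a sketch using a standard Nedelec space for brevity (the actual miniapp uses a NURBS-based space and additional setup; the mesh file, order and unit coefficients are placeholders):
#include \"mfem.hpp\"
using namespace mfem;

int main()
{
   Mesh mesh(\"../data/fichera.mesh\");          // assumed 3D example mesh
   ND_FECollection fec(1, mesh.Dimension());     // lowest-order Nedelec elements
   FiniteElementSpace fes(&mesh, &fec);

   ConstantCoefficient one(1.0);
   BilinearForm a(&fes);
   a.AddDomainIntegrator(new CurlCurlIntegrator(one));       // (curl E, curl v)
   a.AddDomainIntegrator(new VectorFEMassIntegrator(one));   // (E, v)
   a.Assemble();
   a.Finalize();
   return 0;
}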
NURBS Ex 5: Darcy problem This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system \\begin{align} \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} \\end{align} with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize with Raviart-Thomas finite elements (velocity $\\bf u$) and piecewise discontinuous polynomials (pressure $p$). For implementation see miniapps/nurbs/nurbs__ex5 . MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}}); T.J.R. Hughes, J.A. Cottrell, Y. Bazilevs: \"Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement\", Computer Methods in Applied Mechanics and Engineering, Elsevier, 2005, 194 (39-41), pp.4135-4195. \u21a9 T.J.R. Hughes, J.A. Cottrell, Y. Bazilevs: \"Isogeometric analysis: toward integration of CAD and FEA\", Wiley&Sons 2009 \u21a9", "title": "NURBS Discretization"}, {"location": "nurbs/#nurbs-miniapps", "text": "These miniapps demonstrate the use of NURBS-based Isogeometric analysis 1 , 2 .", "title": "NURBS Miniapps"}, {"location": "nurbs/#nurbs-ex-1-laplace-problem", "text": "This example code solves a simple Laplace problem \\begin{align} -\\Delta u = 1 \\end{align} with homogeneous Dirichlet boundary conditions. For implementation see miniapps/nurbs/nurbs__ex1 .", "title": "NURBS Ex 1: Laplace problem"}, {"location": "nurbs/#nurbs-ex-3-maxwell-problem", "text": "This example code solves a simple 3D electromagnetic diffusion problem corresponding to the second order definite Maxwell equation \\begin{align} \\nabla\\times\\nabla\\times\\, E + E = f \\end{align} with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 2D or 3D. The example demonstrates the use of $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. Static condensation is also illustrated. For implementation see miniapps/nurbs/nurbs__ex1 .", "title": "NURBS Ex 3: Maxwell problem"}, {"location": "nurbs/#nurbs-ex-5-darcy-problem", "text": "This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system \\begin{align} \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} \\end{align} with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize with Raviart-Thomas finite elements (velocity $\\bf u$) and piecewise discontinuous polynomials (pressure $p$). For implementation see miniapps/nurbs/nurbs__ex5 . MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}}); T.J.R. Hughes, J.A. Cottrell, Y. Bazilevs: \"Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement\", Computer Methods in Applied Mechanics and Engineering, Elsevier, 2005, 194 (39-41), pp.4135-4195. \u21a9 T.J.R. Hughes, J.A. Cottrell, Y. 
Bazilevs: \"Isogeometric analysis: toward integration of CAD and FEA\", Wiley&Sons 2009 \u21a9", "title": "NURBS Ex 5: Darcy problem"}, {"location": "parallel-tutorial/", "text": "Parallel Tutorial Summary This tutorial illustrates the building and sample use of the following MFEM parallel example codes: Example 1p Example 2p Example 3p An interactive documentation of all example codes is available here . Building Follow the building instructions to build the parallel MFEM library and to start a GLVis server. The latter is the recommended visualization software for MFEM (though its use is optional). To build the parallel example codes, type make in MFEM's examples directory: ~/mfem/examples> make mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex1p.cpp -o ex1p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex2p.cpp -o ex2p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex3p.cpp -o ex3p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex4p.cpp -o ex4p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex5p.cpp -o ex5p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex7p.cpp -o ex7p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex8p.cpp -o ex8p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex9p.cpp -o ex9p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex10p.cpp -o ex10p ... Example 1p This is a parallel version of Example 1 using hypre 's BoomerAMG preconditioner. Run this example as follows: ~/mfem/examples> mpirun -np 16 ex1p -m ../data/square-disc.mesh ... PCG Iterations = 26 Final PCG Relative Residual Norm = 4.30922e-13 If a GLVis server is running, the computed finite element solution combined from all processors , will appear in an interactive window: You can examine the solution using the mouse and the GLVis command keystrokes . To view the parallel partitioning, for example, press the following keys in the GLVis window: \" RAjlmm \" followed by F11/F12 and zooming with the right mouse button. To examine the solution only in one, or a few parallel subdomains, one can use the F9/F10 and the F8 keys. In 2D, one can also use press \" b \" to draw the only the boundaries between the subdomains. For example was produced by glvis -np 16 -m mesh -g sol -k \"RAjlb\" followed by F9 and scaling/position adjustment with the mouse. Three-dimensional and curvilinear meshes are also supported in parallel: ~/mfem/examples> mpirun -np 16 ex1p -m ../data/escher-p3.mesh ... PCG Iterations = 24 Final PCG Relative Residual Norm = 3.59964e-13 ~/mfem/examples> glvis -np 16 -m mesh -g sol -k \"Aooogtt\" The continuity of the solution across the inter-processor interfaces can be seen by using a cutting plane (keys \" AoooiMMtmm \" followed by \" z \" and \" Y \" adjustments): Example 2p This is a parallel version of Example 2 using the systems version of hypre 's BoomerAMG preconditioner, which can be run analogous to the serial case: ~/mfem/examples> mpirun -np 16 ex2p -m ../data/beam-hex.mesh -o 1 ... PCG Iterations = 39 Final PCG Relative Residual Norm = 2.91528e-09 To view the parallel partitioning with the magnitude of the computed displacement field, type \" Atttaa \" in the GLVis window followed by subdomain shrinking with F11 and scaling adjustments with the mouse: Example 3p This is a parallel version of Example 3 using hypre 's AMS preconditioner. Its use is analogous to the serial case: /mfem/examples> mpirun -np 16 ex3p -m ../data/fichera-q3.mesh ... 
PCG Iterations = 17 Final PCG Relative Residual Norm = 7.61595e-13 || E_h - E ||_{L^2} = 0.0821685 Note that AMS leads to far fewer iterations than the Gauss-Seidel preconditioner used in the serial code. The parallel subdomain partitioning can be seen with \" ooogt \" and F11/F12: One can also visualize individual components of the Nedelec solution and remove the elements in a cutting plane to see the surfaces corresponding to inter-processor boundaries: glvis -np 16 -m mesh -g sol -k \"ooottmiEF\"", "title": "_Parallel Tutorial"}, {"location": "parallel-tutorial/#parallel-tutorial", "text": "", "title": "Parallel Tutorial"}, {"location": "parallel-tutorial/#summary", "text": "This tutorial illustrates the building and sample use of the following MFEM parallel example codes: Example 1p Example 2p Example 3p Interactive documentation of all example codes is available here .", "title": "Summary"}, {"location": "parallel-tutorial/#building", "text": "Follow the building instructions to build the parallel MFEM library and to start a GLVis server. The latter is the recommended visualization software for MFEM (though its use is optional). To build the parallel example codes, type make in MFEM's examples directory: ~/mfem/examples> make mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex1p.cpp -o ex1p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex2p.cpp -o ex2p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex3p.cpp -o ex3p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex4p.cpp -o ex4p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex5p.cpp -o ex5p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex7p.cpp -o ex7p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex8p.cpp -o ex8p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex9p.cpp -o ex9p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex10p.cpp -o ex10p ...", "title": "Building"}, {"location": "parallel-tutorial/#example-1p", "text": "This is a parallel version of Example 1 using hypre 's BoomerAMG preconditioner. Run this example as follows: ~/mfem/examples> mpirun -np 16 ex1p -m ../data/square-disc.mesh ... PCG Iterations = 26 Final PCG Relative Residual Norm = 4.30922e-13 If a GLVis server is running, the computed finite element solution, combined from all processors , will appear in an interactive window: You can examine the solution using the mouse and the GLVis command keystrokes . To view the parallel partitioning, for example, press the following keys in the GLVis window: \" RAjlmm \" followed by F11/F12 and zooming with the right mouse button. To examine the solution only in one, or a few, parallel subdomains, one can use the F9/F10 and the F8 keys. In 2D, one can also press \" b \" to draw only the boundaries between the subdomains. For example, such a view can be produced by glvis -np 16 -m mesh -g sol -k \"RAjlb\" followed by F9 and scaling/position adjustment with the mouse. Three-dimensional and curvilinear meshes are also supported in parallel: ~/mfem/examples> mpirun -np 16 ex1p -m ../data/escher-p3.mesh ...
PCG Iterations = 24 Final PCG Relative Residual Norm = 3.59964e-13 ~/mfem/examples> glvis -np 16 -m mesh -g sol -k \"Aooogtt\" The continuity of the solution across the inter-processor interfaces can be seen by using a cutting plane (keys \" AoooiMMtmm \" followed by \" z \" and \" Y \" adjustments):", "title": "Example 1p"}, {"location": "parallel-tutorial/#example-2p", "text": "This is a parallel version of Example 2 using the systems version of hypre 's BoomerAMG preconditioner, which can be run analogously to the serial case: ~/mfem/examples> mpirun -np 16 ex2p -m ../data/beam-hex.mesh -o 1 ... PCG Iterations = 39 Final PCG Relative Residual Norm = 2.91528e-09 To view the parallel partitioning with the magnitude of the computed displacement field, type \" Atttaa \" in the GLVis window followed by subdomain shrinking with F11 and scaling adjustments with the mouse:", "title": "Example 2p"}, {"location": "parallel-tutorial/#example-3p", "text": "This is a parallel version of Example 3 using hypre 's AMS preconditioner. Its use is analogous to the serial case: ~/mfem/examples> mpirun -np 16 ex3p -m ../data/fichera-q3.mesh ... PCG Iterations = 17 Final PCG Relative Residual Norm = 7.61595e-13 || E_h - E ||_{L^2} = 0.0821685 Note that AMS leads to far fewer iterations than the Gauss-Seidel preconditioner used in the serial code. The parallel subdomain partitioning can be seen with \" ooogt \" and F11/F12: One can also visualize individual components of the Nedelec solution and remove the elements in a cutting plane to see the surfaces corresponding to inter-processor boundaries: glvis -np 16 -m mesh -g sol -k \"ooottmiEF\"", "title": "Example 3p"}, {"location": "performance/", "text": "Performance and Partial Assembly This document provides a brief overview of the tensor-based high-performance and partial assembly features in MFEM. In the traditional finite element setting, the operator is assembled in the form of a matrix. The action of the operator is computed by multiplying with this matrix. At high orders this requires both a large amount of memory to store the matrix and many floating point operations to compute and apply it. Partial assembly is a technique that allows for efficiently applying the action of finite element operators without forming the corresponding matrix. This is particularly important when running on GPUs . Partial assembly is enabled at the level of the BilinearForm by setting the assembly level: a->SetAssemblyLevel(AssemblyLevel::PARTIAL); Once partial assembly is enabled, subsequent calls to member functions such as FormLinearSystem will result in an Operator that represents the action of the bilinear form a , without assembling a matrix. This functionality is illustrated in several MFEM examples , including examples 1, 3, 4, 5, 6, 9, 24, and 26. Note that partial assembly is currently implemented for tensor-product elements (i.e. quadrilaterals and hexahedra). Partial assembly for simplex elements (triangles and tetrahedra) is planned. Preconditioning with Partial Assembly When using partial assembly, the system matrix is no longer available for constructing preconditioners. This means that some of the standard preconditioners in MFEM, such as HypreBoomerAMG and GSSmoother , cannot be used. MFEM allows for the efficient construction of diagonal (Jacobi) smoothers for partially assembled operators on quad and hex meshes using the class OperatorJacobiSmoother .
This class efficiently assembles the diagonal of the corresponding matrix, exploiting the tensor-product structure for efficient evaluation. MFEM also allows for Chebyshev smoothing with partial assembly using the class OperatorChebyshevSmoother . This smoother uses estimates of the eigenvalues of the operator computed using the power method , and is built upon the functionality of OperatorJacobiSmoother . Very efficient partially assembled h-multigrid and p-multigrid preconditioners can be constructed by leveraging a hierarchy of discretizations and the smoothers described above. This functionality is illustrated in Example 26 . Finite Element Operator Decomposition The partial assembly functionality in MFEM is based on decomposing the finite element operator into a nested sequence of operations that act on different levels of the discretization. Finite element operators are typically defined through weak formulations of partial differential equations that involve integration over a computational mesh. The required integrals are computed by splitting them as a sum over the mesh elements, mapping each element to a simple reference element (e.g. the unit square) and applying a quadrature rule in reference space. This sequence of operations highlights an inherent hierarchical structure present in all finite element operators where the evaluation starts on global (trial) degrees of freedom (dofs) on the whole mesh , restricts to degrees of freedom on subdomains (groups of elements), then moves to independent degrees of freedom on each element , transitions to independent quadrature points in reference space, performs the integration, and then goes back in reverse order to global (test) degrees of freedom on the whole mesh. This is illustrated below for the case of a symmetric linear operator. We use the notions T-vector , L-vector , E-vector and Q-vector to represent the sets corresponding to the (true) degrees of freedom on the global mesh, the split local degrees of freedom on the subdomains, the split degrees of freedom on the mesh elements, and the values at quadrature points, respectively. We refer to the operators that connect the different types of vectors as: Subdomain restriction P Element restriction G Basis (Dofs-to-Qpts) evaluator B Operator at quadrature points D More generally, if the operator is nonsymmetric or the test and trial space differ, then the operators mapping back from quadrature points to test spaces may not be transposes of P , G and B , but they still have the same basic structure and interpretation. Note that in the case of adaptive mesh refinement (AMR), the prolongation operator P involves not only extracting sub-vectors, but evaluating values at constrained degrees of freedom through the AMR interpolation. There can also be several levels of subdomains ( P1 , P2 , etc.), and it may be convenient to split D as the product of several operators ( D1 , D2 , etc.). Partial Assembly in MFEM Since the global operator A is just a series of variational restrictions with B , G and P , starting from its point-wise kernel D , a \"matrix-vector product\" with A can be performed by evaluating and storing some of the innermost variational restriction matrices, and applying the rest of the operators \"on-the-fly\". For example, one can compute and store a global matrix on T-vector level. Alternatively, one can compute and store only the subdomain (L-vector) or element (E-vector) matrices and perform the action of A using matvecs with P or P and G . 
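In symbols, for the symmetric case sketched above, the decomposition reads (a schematic summary using the operator names P, G, B and D introduced earlier, not additional MFEM notation): $$A = P^T G^T B^T D \, B \, G \, P,$$ so storing the element (E-vector) matrices corresponds to precomputing $B^T D B$, storing the subdomain (L-vector) matrices corresponds to $G^T B^T D B \, G$, and the action $y = A x$ can always be evaluated by applying $P$, $G$ and $B$, multiplying by the pointwise data $D$, and applying the transposes in reverse order.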
While these options are natural for low-order discretizations, they are not a good fit for high-order methods due to the amount of FLOPs needed for their evaluation, as well as the memory transfer needed for a matvec. MFEM's partial assembly functionality computes and stores only D (or portions of it) and evaluates the actions of P , G and B on-the-fly. Critically for performance, MFEM takes advantage of the tensor-product structure of the degrees of freedom and quadrature points on quadrilateral and hexahedral elements to perform the action of B without storing it as a matrix. Note that the action of B is performed element-wise (it corresponds to a block-diagonal matrix), and the blocks depend only on the element order and reference geometry. Currently, only fixed order and geometry is supported, meaning that all the blocks of B are identical. The partial assembly algorithm requires the optimal amount of memory transfers (with respect to the polynomial order) and near-optimal FLOPs for operator evaluation. It consists of an operator setup phase, that evaluates and stores D and an operator apply (evaluation) phase that computes the action of A on an input vector. When desired, the setup phase may be done as a side-effect of evaluating a different operator, such as a nonlinear residual. The relative costs of the setup and apply phases are different depending on the physics being expressed and the representation of D . Parallel Decomposition After the application of each of the first three transition operators, P , G and B , the operator evaluation is decoupled on their ranges, so P , G and B allow us to \"zoom-in\" to subdomain, element and quadrature point level, ignoring the coupling at higher levels. Thus, a natural mapping of A on a parallel computer is to split the T-vector over MPI ranks (a non-overlapping decomposition, as is typically used for sparse matrices), and then split the rest of the vector types over computational devices (CPUs, GPUs, etc.) as indicated by the shaded regions in the diagram above. One of the advantages of the decomposition perspective in these settings is that the operators P , G , B and D clearly separate the MPI parallelism in the operator ( P ) from the unstructured mesh topology ( G ), the choice of the finite element space/basis ( B ) and the geometry and point-wise physics D . These components also naturally fall in different classes of numerical algorithms: parallel (multi-device) linear algebra for P , sparse (on-device) linear algebra for G , dense/structured linear algebra (tensor contractions) for B and parallel point-wise evaluations for D . Essential Boundary Conditions Essential boundary conditions for partially assembled operators are enforced using the class ConstrainedOperator (or, for rectangular systems, RectangularConstrainedOperator ). These operators represent the action of the partially assembled operator, together with specified constraints on essential degrees of freedom. The Operator returned from, for example, BilinearForm::FormLinearSystem or BilinearForm::FormSystemMatrix will in fact be a ConstrainedOperator . The Operator returned from MixedBilinearForm::FormRectangularSystemMatrix will be a RectangularConstrainedOperator . These classes perform the matrix-free equivalent of eliminating the rows and columns of the system matrix corresponding to the essential degrees of freedom. 
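The pieces above (partial assembly, Jacobi preconditioning, and essential boundary conditions) fit together in only a few lines of user code. The following minimal sketch is illustrative rather than part of the original page: it assumes a recent serial MFEM build, and the mesh size, polynomial order, and solver tolerances are arbitrary choices.

#include "mfem.hpp"
using namespace mfem;

int main()
{
   // Simple quadrilateral mesh and a high-order H1 space (tensor-product elements).
   Mesh mesh = Mesh::MakeCartesian2D(16, 16, Element::QUADRILATERAL);
   H1_FECollection fec(3, mesh.Dimension());
   FiniteElementSpace fes(&mesh, &fec);

   // Essential (Dirichlet) boundary: all boundary attributes.
   Array<int> ess_bdr(mesh.bdr_attributes.Max()), ess_tdof_list;
   ess_bdr = 1;
   fes.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

   // Right-hand side and solution grid function.
   ConstantCoefficient one(1.0);
   LinearForm b(&fes);
   b.AddDomainIntegrator(new DomainLFIntegrator(one));
   b.Assemble();
   GridFunction x(&fes);
   x = 0.0;

   // Diffusion bilinear form with partial assembly: no global matrix is formed.
   BilinearForm a(&fes);
   a.AddDomainIntegrator(new DiffusionIntegrator(one));
   a.SetAssemblyLevel(AssemblyLevel::PARTIAL);
   a.Assemble();

   // FormLinearSystem returns a (matrix-free) ConstrainedOperator in A.
   OperatorPtr A;
   Vector B, X;
   a.FormLinearSystem(ess_tdof_list, x, b, A, X, B);

   // Matrix-free diagonal (Jacobi) preconditioner and preconditioned CG solve.
   OperatorJacobiSmoother jacobi(a, ess_tdof_list);
   PCG(*A, jacobi, B, X, 1, 400, 1e-12, 0.0);

   // Recover the primal solution vector from the true-dof solution X.
   a.RecoverFEMSolution(X, b, x);
   return 0;
}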
Partial Assembly for Discontinuous Galerkin methods A complementary partial assembly decomposition is used for Discontinuous Galerkin methods to handle face terms, where a similar sequence of operators is applied on the faces to compute the numerical fluxes. However, since elements are decoupled, the element restriction G is the identity, and a face restriction G F is used instead to compute the numerical fluxes and couple elements together. This face restriction G F goes from element degrees of freedom to face degrees of freedom. Then a B F operator can be applied on the faces. An analogous D F operator is then applied at the face quadrature points. Currently, we support partial assembly only for Gauss-Lobatto and Bernstein bases, with integrators that don't require derivatives on the faces. High-Performance Templated Operators MFEM also offers a set of templated classes to evaluate finite element operators on tensor-product (quadrilateral and hexahedral) meshes, described in further detail here .", "title": "Performance"}, {"location": "performance/#performance-and-partial-assembly", "text": "This document provides a brief overview of the tensor-based high-performance and partial assembly features in MFEM. In the traditional finite element setting, the operator is assembled in the form of a matrix. The action of the operator is computed by multiplying with this matrix. At high orders this requires both a large amount of memory to store the matrix, as well as many floating point operations to compute and apply it. Partial assembly is a technique that allows for efficiently applying the action of finite element operators without forming the corresponding matrix. This is particularly important when running on GPUs . Partial assembly is enabled at the level of the BilinearForm by setting the assembly level: a->SetAssemblyLevel(AssemblyLevel::PARTIAL); Once partial assembly is enabled, subsequent calls to member functions such as FormLinearSystem will result in an Operator that represents the action of the bilinear form a , without assembling a matrix. This functionality is illustrated in several MFEM examples , including examples 1, 3, 4, 5, 6, 9, 24, and 26. Note that partial assembly is currently implemented for tensor-product elements (i.e. quadrilaterals and hexahedra). Partial assembly for simplex elements (triangles and tetrahedra) is planned.", "title": "Performance and Partial Assembly"}, {"location": "performance/#preconditioning-with-partial-assembly", "text": "When using partial assembly, the system matrix is no longer available for constructing preconditioners. This means that some of the standard preconditioners in MFEM such as HypreBoomerAMG and GSSmoother cannot be used. MFEM allows for the efficient construction of diagonal (Jacobi) smoothers for partially assembled operators on quad and hex meshes using the class OperatorJacobiSmoother . This class efficiently assembles the diagonal of the corresponding matrix, exploiting the tensor-product structure for efficient evaluation. MFEM also allows for Chebyshev smoothing with partial assembly using the class OperatorChebyshevSmoother . This smoother uses estimates of the eigenvalues of the operator computed using the power method , and is built upon the functionality of OperatorJacobiSmoother . Very efficient partially assembled h-multigrid and p-multigrid preconditioners can be constructed by leveraging a hierarchy of discretizations and the smoothers described above. 
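As a rough sketch of how such smoothers can be set up (not taken from the page, and with constructor signatures that may differ slightly between MFEM versions), assume a partially assembled BilinearForm a on a FiniteElementSpace fes, with essential true dofs ess_tdof_list and constrained operator A as in the earlier example:

// Assemble the operator diagonal matrix-free, exploiting the tensor-product structure.
Vector diag(fes.GetTrueVSize());
a.AssembleDiagonal(diag);

// Jacobi smoother built directly from the assembled diagonal.
OperatorJacobiSmoother jacobi(diag, ess_tdof_list);

// Chebyshev smoother of polynomial order 2 on top of the same diagonal;
// the required eigenvalue estimate is computed internally via the power method.
OperatorChebyshevSmoother cheby(*A, diag, ess_tdof_list, 2);

Either smoother can then serve as a CG preconditioner on its own, or as the relaxation step on each level of a matrix-free multigrid hierarchy.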
This functionality is illustrated in Example 26 .", "title": "Preconditioning with Partial Assembly"}, {"location": "performance/#finite-element-operator-decomposition", "text": "The partial assembly functionality in MFEM is based on decomposing the finite element operator into a nested sequence of operations that act on different levels of the discretization. Finite element operators are typically defined through weak formulations of partial differential equations that involve integration over a computational mesh. The required integrals are computed by splitting them as a sum over the mesh elements, mapping each element to a simple reference element (e.g. the unit square) and applying a quadrature rule in reference space. This sequence of operations highlights an inherent hierarchical structure present in all finite element operators where the evaluation starts on global (trial) degrees of freedom (dofs) on the whole mesh , restricts to degrees of freedom on subdomains (groups of elements), then moves to independent degrees of freedom on each element , transitions to independent quadrature points in reference space, performs the integration, and then goes back in reverse order to global (test) degrees of freedom on the whole mesh. This is illustrated below for the case of a symmetric linear operator. We use the notions T-vector , L-vector , E-vector and Q-vector to represent the sets corresponding to the (true) degrees of freedom on the global mesh, the split local degrees of freedom on the subdomains, the split degrees of freedom on the mesh elements, and the values at quadrature points, respectively. We refer to the operators that connect the different types of vectors as: Subdomain restriction P Element restriction G Basis (Dofs-to-Qpts) evaluator B Operator at quadrature points D More generally, if the operator is nonsymmetric or the test and trial space differ, then the operators mapping back from quadrature points to test spaces may not be transposes of P , G and B , but they still have the same basic structure and interpretation. Note that in the case of adaptive mesh refinement (AMR), the prolongation operator P involves not only extracting sub-vectors, but evaluating values at constrained degrees of freedom through the AMR interpolation. There can also be several levels of subdomains ( P1 , P2 , etc.), and it may be convenient to split D as the product of several operators ( D1 , D2 , etc.).", "title": "Finite Element Operator Decomposition"}, {"location": "performance/#partial-assembly-in-mfem", "text": "Since the global operator A is just a series of variational restrictions with B , G and P , starting from its point-wise kernel D , a \"matrix-vector product\" with A can be performed by evaluating and storing some of the innermost variational restriction matrices, and applying the rest of the operators \"on-the-fly\". For example, one can compute and store a global matrix on T-vector level. Alternatively, one can compute and store only the subdomain (L-vector) or element (E-vector) matrices and perform the action of A using matvecs with P or P and G . While these options are natural for low-order discretizations, they are not a good fit for high-order methods due to the amount of FLOPs needed for their evaluation, as well as the memory transfer needed for a matvec. MFEM's partial assembly functionality computes and stores only D (or portions of it) and evaluates the actions of P , G and B on-the-fly. 
Critically for performance, MFEM takes advantage of the tensor-product structure of the degrees of freedom and quadrature points on quadrilateral and hexahedral elements to perform the action of B without storing it as a matrix. Note that the action of B is performed element-wise (it corresponds to a block-diagonal matrix), and the blocks depend only on the element order and reference geometry. Currently, only fixed order and geometry is supported, meaning that all the blocks of B are identical. The partial assembly algorithm requires the optimal amount of memory transfers (with respect to the polynomial order) and near-optimal FLOPs for operator evaluation. It consists of an operator setup phase, that evaluates and stores D and an operator apply (evaluation) phase that computes the action of A on an input vector. When desired, the setup phase may be done as a side-effect of evaluating a different operator, such as a nonlinear residual. The relative costs of the setup and apply phases are different depending on the physics being expressed and the representation of D .", "title": "Partial Assembly in MFEM"}, {"location": "performance/#parallel-decomposition", "text": "After the application of each of the first three transition operators, P , G and B , the operator evaluation is decoupled on their ranges, so P , G and B allow us to \"zoom-in\" to subdomain, element and quadrature point level, ignoring the coupling at higher levels. Thus, a natural mapping of A on a parallel computer is to split the T-vector over MPI ranks (a non-overlapping decomposition, as is typically used for sparse matrices), and then split the rest of the vector types over computational devices (CPUs, GPUs, etc.) as indicated by the shaded regions in the diagram above. One of the advantages of the decomposition perspective in these settings is that the operators P , G , B and D clearly separate the MPI parallelism in the operator ( P ) from the unstructured mesh topology ( G ), the choice of the finite element space/basis ( B ) and the geometry and point-wise physics D . These components also naturally fall in different classes of numerical algorithms: parallel (multi-device) linear algebra for P , sparse (on-device) linear algebra for G , dense/structured linear algebra (tensor contractions) for B and parallel point-wise evaluations for D .", "title": "Parallel Decomposition"}, {"location": "performance/#essential-boundary-conditions", "text": "Essential boundary conditions for partially assembled operators are enforced using the class ConstrainedOperator (or, for rectangular systems, RectangularConstrainedOperator ). These operators represent the action of the partially assembled operator, together with specified constraints on essential degrees of freedom. The Operator returned from, for example, BilinearForm::FormLinearSystem or BilinearForm::FormSystemMatrix will in fact be a ConstrainedOperator . The Operator returned from MixedBilinearForm::FormRectangularSystemMatrix will be a RectangularConstrainedOperator . These classes perform the matrix-free equivalent of eliminating the rows and columns of the system matrix corresponding to the essential degrees of freedom.", "title": "Essential Boundary Conditions"}, {"location": "performance/#partial-assembly-for-discontinuous-galerkin-methods", "text": "A complementary partial assembly decomposition is used for Discontinuous Galerkin methods to handle face terms, where a similar sequence of operators is applied on the faces to compute the numerical fluxes. 
However, since elements are decoupled, the element restriction G is the identity, and a face restriction G F is used instead to compute the numerical fluxes and couple elements together. This face restriction G F goes from element degrees of freedom to face degrees of freedom. Then a B F operator can be applied on the faces. An analogous D F operator is then applied at the face quadrature points. Currently, we support partial assembly only for Gauss-Lobatto and Bernstein bases, with integrators that don't require derivatives on the faces.", "title": "Partial Assembly for Discontinuous Galerkin methods"}, {"location": "performance/#high-performance-templated-operators", "text": "MFEM also offers a set of templated classes to evaluate finite element operators on tensor-product (quadrilateral and hexahedral) meshes, described in further detail here .", "title": "High-Performance Templated Operators"}, {"location": "pri-dual-vec/", "text": "Primal and Dual Vectors The finite element method uses vectors of data in a variety of ways and the differences can be subtle. MFEM defines GridFunction , LinearForm , and Vector classes which help to distinguish the different roles that vectors of data can play. Graphical summary of Primal, Dual, DoF (dofs), and True DoF (tdofs) vectors Primal Vectors The finite element method is based on the notion that a smooth function can be approximated by a sum of piece-wise smooth functions (typically piece-wise polynomials) called basis functions : $$f(\\vec{x})\\approx\\sum_i f_i \\phi_i(\\vec{x}) \\label{expan}$$ The support of an individual basis function, $\\;\\phi_i(\\vec{x})$, will either be a single zone or a collection of zones that share a common vertex, edge, or face. The expansion coefficients, $\\;f_i$, are linear functionals of the field being approximated, $\\;f(\\vec{x})$ in this case. The $\\;f_i$ could be as simple as values of the function at particular points, called interpolation points, e.g. $\\;f_i=f(\\vec{x}_i)$, or they could be integrals of the field over submanifolds of the domain, e.g. $\\;f_i = \\int_{\\Omega_i}f(\\vec{x})d\\vec{x}$. There are many possibilities but the expansion coefficients must be linear functionals of $\\;f(\\vec{x})$. The expansion coefficients are often called degrees of freedom , or DoFs for short, though in certain cases they may not be actually independent because of some problem specific constraints. We'll discuss this more in a later section on True DoFs . Once the basis functions are defined, with some unique ordering, the expansion coefficients can be stored in a vector using the same order. Such a vector of coefficients is called a primal vector . The original function, $\\;f(\\vec{x})$, can then be approximated using \\eqref{expan}. In practice this requires not only the primal vector of coefficients but also knowledge of the mesh and the basis functions for each element of the mesh. In MFEM these collections of information are combined into GridFunction objects (or ParGridFunction objects when used in parallel) which represent piece-wise functions belonging to a finite element approximation space. The GridFunction class contains many Get methods which can compute the expansion \\eqref{expan} at particular locations within an element. The primal vector of expansion coefficients can be computed by solving a linear system or by using any of the various Project methods provided by the GridFunction class. 
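For example, a small sketch along the following lines (illustrative only; the mesh, polynomial order, and function are arbitrary choices) projects an analytic function onto an H1 space to obtain its primal vector:

#include "mfem.hpp"
#include <cmath>
#include <iostream>
using namespace mfem;

// An analytic field f(x) to be approximated.
double f_exact(const Vector &p) { return std::sin(p(0)) * std::cos(p(1)); }

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(8, 8, Element::QUADRILATERAL);
   H1_FECollection fec(2, mesh.Dimension());
   FiniteElementSpace fes(&mesh, &fec);

   FunctionCoefficient f_coef(f_exact);
   GridFunction f_h(&fes);            // primal vector plus mesh/basis information
   f_h.ProjectCoefficient(f_coef);    // fills the expansion coefficients f_i

   // One of the error measures mentioned in the text: the L2 norm of (f_h - f).
   std::cout << "L2 error: " << f_h.ComputeL2Error(f_coef) << std::endl;
   return 0;
}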
These methods compute the degrees of freedom, $\\;f_i$, or some subset of them, from a Coefficient object representing $\\;f(\\vec{x})$. Other methods in this class can be used to compute various measures of the error in the finite element approximation of $\\;f(\\vec{x})$. Dual Vectors Any vector space, such as the space of primal vectors , has a dual space containing co-vectors a.k.a. dual vectors . In this context a dual vector is a linear functional of a primal vector meaning that the action of a dual vector upon a primal vector is a real number. For example, the integral of a field over a domain, $\\;\\alpha=\\int_\\Omega g(\\vec{x})d\\vec{x}$, is a linear functional because the integral is linear with respect to the function being integrated and the result is a real number. Indeed we can derive similar linear functionals using compatible functions, $\\;f(\\vec{x})$, in a variety of ways, for example $G(f)=\\int_\\Omega g(\\vec{x})f(\\vec{x})d\\vec{x}$. If we compute the action of our functional on the finite element basis functions, $$G_i=G(\\phi_i(\\vec{x})) = \\int_\\Omega g(\\vec{x})\\phi_i(\\vec{x})d\\vec{x}\\label{dualvec},$$ and we collect the results into a vector with entries $\\;G_i$, we call this a dual vector of $\\;g(\\vec{x})$. Integrals such as this often arise when enforcing energy balance in physical systems. For example, if $\\vec{J}$ is a current density describing a flow of charged particles and $\\vec{E}$ is an electric field acting upon those particles, then $\\int_\\Omega\\vec{J}\\cdot\\vec{E}\\,d\\vec{x}$ is the rate at which work is being done by the field on the charged particles. MFEM provides LinearForm objects (or ParLinearForm objects in parallel) which can compute dual vectors from a given function, $\\;g(\\vec{x})$, described by a Coefficient object. (Par)LinearForm objects require not only the mesh, basis functions, and the field $\\;g(\\vec{x})$ but also a LinearFormIntegrator which defines precisely what type of linear functional is being computed. See Linear Form Integrators for more information about MFEM's linear form integrators. If, instead of a Coefficient object, you have a primal vector , $\\;g_j$, representing $\\;g(\\vec{x})$ you can form a dual vector by multiplying $\\;g_j$ by a bilinear form, see Bilinear Form Integrators for more information on bilinear forms. To understand why this is so, consider inserting the expansion \\eqref{expan} into \\eqref{dualvec}. $$ G_i=\\int_\\Omega \\left(\\sum_j g_j \\phi_j(\\vec{x})\\right)\\phi_i(\\vec{x})d\\vec{x} = \\sum_j \\left(\\int_\\Omega \\phi_j(\\vec{x})\\phi_i(\\vec{x})d\\vec{x}\\right)g_j \\label{dualvecprod}$$ The last integral contains two indices and can therefore be viewed as an entry in a square matrix. Furthermore, each dual vector entry, $\\;G_i$, is equivalent to one row of a matrix-vector product between this matrix of basis function integrals and the primal vector $\\;g_j$. This particular matrix, involving only the product of basis functions, is traditionally called a mass matrix . However, the action of any matrix, resulting from a bilinear form, upon a primal vector will produce a dual vector . In general, such dual vectors will have more complicated definitions than \\eqref{dualvec} or \\eqref{dualvecprod} but they will still be linear functionals of primal vectors . True Degree-of-Freedom Vectors Primal vectors contain all of the expansion coefficients needed to compute the finite element approximation of a function in each element of a mesh. 
When run in parallel, the local portion of a primal vector only contains data for the locally owned elements. Regardless of whether or not the simulation is being run in parallel, some of these coefficients may in fact be redundant or interdependent. Sources of redundancy: In parallel some coefficients must be shared between processors. When using static condensation or hybridization many coefficients will depend upon the coefficients which are associated with the skeleton of the mesh as well as upon other data. When using non-conforming meshes some of the coefficients on the finer side of a non-conforming interface between elements will depend upon those on the coarser side of the interface. For any or all of these reasons primal vectors may not contain the true degrees-of-freedom for describing a finite element approximation of a field. The true set of degrees-of-freedom may in fact be much smaller than the size of the primal vector. When setting up and solving a linear system to determine the finite element approximation of a field, the size of the linear system is determined by the number of true degrees-of-freedom . The details of creating this linear system are mostly hidden within the BilinearForm object. To convert individual bilinear form objects the user can call the BilinearForm::FormSystemMatrix() method, however, the more common task is to form the entire linear system with BilinearForm::FormLinearSystem() . As input, this method requires a primal vector , a dual vector , and an array of Dirichlet boundary degree-of-freedom indices. The degree-of-freedom array contains the true degrees-of-freedom, as obtained from a FiniteElementSpace object, which coincide with the Dirichlet, a.k.a. essential , boundaries. // Given a bilinear form 'a', a primal vector 'x', a dual vector 'b', // and an array of essential boundary true dof indices... SparseMatrix A; Vector B, X; a.FormLinearSystem(ess_tdof_list, x, b, A, X, B); // Solve X = A^{-1}B ... a.RecoverFEMSolution(X, b, x); The primal vector must contain the appropriate values for the solution on the essential boundaries. The interior of the primal vector is ignored by default although it can be used to supply an initial guess when using certain solvers. The dual vector should be an assembled LinearForm object or the product of a GridFunction and a BilinearForm . As output, BilinearForm::FormLinearSystem() produces the objects $A$, $X$, and $B$ in the linear system $A X=B$. Where $A$ is ready to be passed to the appropriate MFEM solver, $X$ is properly initialized, and $B$ has been modified to incorporate the essential boundary conditions. After the linear system has been solved the primal vector representing the solution must be built from $X$ and the original dual vector by calling BilinearForm::RecoverFEMSolution() . Technical Details Constructing Dual Vectors It was mentioned above, in the section on Dual Vectors , that you can create a dual vector by multiplying a primal vector by a bilinear form. But of course if you have a primal vector you can also use a GridFunctionCoefficient to create a dual vector using a LinearForm and an appropriate LinearFormIntegrator . These two choices should produce nearly identical results if the BilinearFormIntegrator and the LinearFormIntegrator use the same integration rule order. The order of the summation might differ between BilinearFormIntegrator and LinearFormIntegrator , potentially resulting in round-off error differences. 
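Concretely, the two constructions might look as follows (a sketch, not from the page, assuming a FiniteElementSpace fes and a GridFunction g holding the primal vector of $\;g(\vec{x})$):

// (a) Dual vector via a bilinear form: assemble a mass matrix and multiply.
BilinearForm m(&fes);
m.AddDomainIntegrator(new MassIntegrator);
m.Assemble();
m.Finalize();
Vector G_mat(fes.GetVSize());
m.Mult(g, G_mat);                        // G_i = sum_j (int phi_i phi_j dx) g_j

// (b) Dual vector via a linear form: integrate g(x) against the basis functions.
GridFunctionCoefficient g_coef(&g);
LinearForm G_lf(&fes);
G_lf.AddDomainIntegrator(new DomainLFIntegrator(g_coef));
G_lf.Assemble();                         // G_i = int g(x) phi_i(x) dx

Construction (a) corresponds to \eqref{dualvecprod} and construction (b) to \eqref{dualvec}.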
When considering to use a BilinearForm or a LinearForm, one must be aware of their different computational and memory costs. A bilinear form must create a sparse matrix which can require a great deal of memory. Integrating a GridFunctionCoefficient in a LinearForm object will require very little memory. On the other hand, computing the integrals inside a LinearForm object can be computationally expensive even in comparison to assembling the bilinear form. Which is the better option? As always, there are trade-offs. The answer depends on many variables; the complexities of the BilinearFormIntegrator and the LinearFormIntegrator , the complexity of other coefficients that may be present, the order of the basis functions, can the bilinear form be reused or is this a one-time calculation, whether the code runs on a CPU or GPU , etc. On some architectures the motion of data through memory during a matrix-vector multiplication may be expensive enough that using a LinearForm and recomputing the integrals is more efficient. Often the construction of dual vectors is a small portion of the overall compute time so this choice may not be critical. The best choice is to test your application and determine which method is more appropriate for your algorithm on your hardware. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Primal and Dual Vectors"}, {"location": "pri-dual-vec/#primal-and-dual-vectors", "text": "The finite element method uses vectors of data in a variety of ways and the differences can be subtle. MFEM defines GridFunction , LinearForm , and Vector classes which help to distinguish the different roles that vectors of data can play. Graphical summary of Primal, Dual, DoF (dofs), and True DoF (tdofs) vectors", "title": "Primal and Dual Vectors"}, {"location": "pri-dual-vec/#primal-vectors", "text": "The finite element method is based on the notion that a smooth function can be approximated by a sum of piece-wise smooth functions (typically piece-wise polynomials) called basis functions : $$f(\\vec{x})\\approx\\sum_i f_i \\phi_i(\\vec{x}) \\label{expan}$$ The support of an individual basis function, $\\;\\phi_i(\\vec{x})$, will either be a single zone or a collection of zones that share a common vertex, edge, or face. The expansion coefficients, $\\;f_i$, are linear functionals of the field being approximated, $\\;f(\\vec{x})$ in this case. The $\\;f_i$ could be as simple as values of the function at particular points, called interpolation points, e.g. $\\;f_i=f(\\vec{x}_i)$, or they could be integrals of the field over submanifolds of the domain, e.g. $\\;f_i = \\int_{\\Omega_i}f(\\vec{x})d\\vec{x}$. There are many possibilities but the expansion coefficients must be linear functionals of $\\;f(\\vec{x})$. The expansion coefficients are often called degrees of freedom , or DoFs for short, though in certain cases they may not be actually independent because of some problem specific constraints. We'll discuss this more in a later section on True DoFs . Once the basis functions are defined, with some unique ordering, the expansion coefficients can be stored in a vector using the same order. Such a vector of coefficients is called a primal vector . The original function, $\\;f(\\vec{x})$, can then be approximated using \\eqref{expan}. In practice this requires not only the primal vector of coefficients but also knowledge of the mesh and the basis functions for each element of the mesh. 
In MFEM these collections of information are combined into GridFunction objects (or ParGridFunction objects when used in parallel) which represent piece-wise functions belonging to a finite element approximation space. The GridFunction class contains many Get methods which can compute the expansion \\eqref{expan} at particular locations within an element. The primal vector of expansion coefficients can be computed by solving a linear system or by using any of the various Project methods provided by the GridFunction class. These methods compute the degrees of freedom, $\\;f_i$, or some subset of them, from a Coefficient object representing $\\;f(\\vec{x})$. Other methods in this class can be used to compute various measures of the error in the finite element approximation of $\\;f(\\vec{x})$.", "title": "Primal Vectors"}, {"location": "pri-dual-vec/#dual-vectors", "text": "Any vector space, such as the space of primal vectors , has a dual space containing co-vectors a.k.a. dual vectors . In this context a dual vector is a linear functional of a primal vector meaning that the action of a dual vector upon a primal vector is a real number. For example, the integral of a field over a domain, $\\;\\alpha=\\int_\\Omega g(\\vec{x})d\\vec{x}$, is a linear functional because the integral is linear with respect to the function being integrated and the result is a real number. Indeed we can derive similar linear functionals using compatible functions, $\\;f(\\vec{x})$, in a variety of ways, for example $G(f)=\\int_\\Omega g(\\vec{x})f(\\vec{x})d\\vec{x}$. If we compute the action of our functional on the finite element basis functions, $$G_i=G(\\phi_i(\\vec{x})) = \\int_\\Omega g(\\vec{x})\\phi_i(\\vec{x})d\\vec{x}\\label{dualvec},$$ and we collect the results into a vector with entries $\\;G_i$, we call this a dual vector of $\\;g(\\vec{x})$. Integrals such as this often arise when enforcing energy balance in physical systems. For example, if $\\vec{J}$ is a current density describing a flow of charged particles and $\\vec{E}$ is an electric field acting upon those particles, then $\\int_\\Omega\\vec{J}\\cdot\\vec{E}\\,d\\vec{x}$ is the rate at which work is being done by the field on the charged particles. MFEM provides LinearForm objects (or ParLinearForm objects in parallel) which can compute dual vectors from a given function, $\\;g(\\vec{x})$, described by a Coefficient object. (Par)LinearForm objects require not only the mesh, basis functions, and the field $\\;g(\\vec{x})$ but also a LinearFormIntegrator which defines precisely what type of linear functional is being computed. See Linear Form Integrators for more information about MFEM's linear form integrators. If, instead of a Coefficient object, you have a primal vector , $\\;g_j$, representing $\\;g(\\vec{x})$ you can form a dual vector by multiplying $\\;g_j$ by a bilinear form, see Bilinear Form Integrators for more information on bilinear forms. To understand why this is so, consider inserting the expansion \\eqref{expan} into \\eqref{dualvec}. $$ G_i=\\int_\\Omega \\left(\\sum_j g_j \\phi_j(\\vec{x})\\right)\\phi_i(\\vec{x})d\\vec{x} = \\sum_j \\left(\\int_\\Omega \\phi_j(\\vec{x})\\phi_i(\\vec{x})d\\vec{x}\\right)g_j \\label{dualvecprod}$$ The last integral contains two indices and can therefore be viewed as an entry in a square matrix. Furthermore, each dual vector entry, $\\;G_i$, is equivalent to one row of a matrix-vector product between this matrix of basis function integrals and the primal vector $\\;g_j$. 
This particular matrix, involving only the product of basis functions, is traditionally called a mass matrix . However, the action of any matrix, resulting from a bilinear form, upon a primal vector will produce a dual vector . In general, such dual vectors will have more complicated definitions than \\eqref{dualvec} or \\eqref{dualvecprod} but they will still be linear functionals of primal vectors .", "title": "Dual Vectors"}, {"location": "pri-dual-vec/#true-degree-of-freedom-vectors", "text": "Primal vectors contain all of the expansion coefficients needed to compute the finite element approximation of a function in each element of a mesh. When run in parallel, the local portion of a primal vector only contains data for the locally owned elements. Regardless of whether or not the simulation is being run in parallel, some of these coefficients may in fact be redundant or interdependent. Sources of redundancy: In parallel some coefficients must be shared between processors. When using static condensation or hybridization many coefficients will depend upon the coefficients which are associated with the skeleton of the mesh as well as upon other data. When using non-conforming meshes some of the coefficients on the finer side of a non-conforming interface between elements will depend upon those on the coarser side of the interface. For any or all of these reasons primal vectors may not contain the true degrees-of-freedom for describing a finite element approximation of a field. The true set of degrees-of-freedom may in fact be much smaller than the size of the primal vector. When setting up and solving a linear system to determine the finite element approximation of a field, the size of the linear system is determined by the number of true degrees-of-freedom . The details of creating this linear system are mostly hidden within the BilinearForm object. To convert individual bilinear form objects the user can call the BilinearForm::FormSystemMatrix() method, however, the more common task is to form the entire linear system with BilinearForm::FormLinearSystem() . As input, this method requires a primal vector , a dual vector , and an array of Dirichlet boundary degree-of-freedom indices. The degree-of-freedom array contains the true degrees-of-freedom, as obtained from a FiniteElementSpace object, which coincide with the Dirichlet, a.k.a. essential , boundaries. // Given a bilinear form 'a', a primal vector 'x', a dual vector 'b', // and an array of essential boundary true dof indices... SparseMatrix A; Vector B, X; a.FormLinearSystem(ess_tdof_list, x, b, A, X, B); // Solve X = A^{-1}B ... a.RecoverFEMSolution(X, b, x); The primal vector must contain the appropriate values for the solution on the essential boundaries. The interior of the primal vector is ignored by default although it can be used to supply an initial guess when using certain solvers. The dual vector should be an assembled LinearForm object or the product of a GridFunction and a BilinearForm . As output, BilinearForm::FormLinearSystem() produces the objects $A$, $X$, and $B$ in the linear system $A X=B$. Where $A$ is ready to be passed to the appropriate MFEM solver, $X$ is properly initialized, and $B$ has been modified to incorporate the essential boundary conditions. 
After the linear system has been solved the primal vector representing the solution must be built from $X$ and the original dual vector by calling BilinearForm::RecoverFEMSolution() .", "title": "True Degree-of-Freedom Vectors"}, {"location": "pri-dual-vec/#technical-details", "text": "", "title": "Technical Details"}, {"location": "pri-dual-vec/#constructing-dual-vectors", "text": "It was mentioned above, in the section on Dual Vectors , that you can create a dual vector by multiplying a primal vector by a bilinear form. But of course if you have a primal vector you can also use a GridFunctionCoefficient to create a dual vector using a LinearForm and an appropriate LinearFormIntegrator . These two choices should produce nearly identical results if the BilinearFormIntegrator and the LinearFormIntegrator use the same integration rule order. The order of the summation might differ between BilinearFormIntegrator and LinearFormIntegrator , potentially resulting in round-off error differences. When considering to use a BilinearForm or a LinearForm, one must be aware of their different computational and memory costs. A bilinear form must create a sparse matrix which can require a great deal of memory. Integrating a GridFunctionCoefficient in a LinearForm object will require very little memory. On the other hand, computing the integrals inside a LinearForm object can be computationally expensive even in comparison to assembling the bilinear form. Which is the better option? As always, there are trade-offs. The answer depends on many variables; the complexities of the BilinearFormIntegrator and the LinearFormIntegrator , the complexity of other coefficients that may be present, the order of the basis functions, can the bilinear form be reused or is this a one-time calculation, whether the code runs on a CPU or GPU , etc. On some architectures the motion of data through memory during a matrix-vector multiplication may be expensive enough that using a LinearForm and recomputing the integrals is more efficient. Often the construction of dual vectors is a small portion of the overall compute time so this choice may not be critical. The best choice is to test your application and determine which method is more appropriate for your algorithm on your hardware. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Constructing Dual Vectors"}, {"location": "publications/", "text": "Publications Google Scholar Citations Recent All time Selected Publications 2024 T. Dzanic, K. Mittal, D. Kim, J. Yang, S. Petrides, B. Keith, R. Anderson, DynAMO: Multi-agent reinforcement learning for dynamic anticipatory mesh optimization with applications to hyperbolic conservation laws , Journal of Computational Physics , 506, 112924, 2024 K. Mittal, V. Dobrev, P. Knupp, T. Kolev, F. Ledoux, C. Roche, V. Tomov, Mixed-Order Meshes through rp-adaptivity for Surface Fitting to Implicit Geometries , Proceedings of the 2024 SIAM International Meshing Roundtable (IMR) . 2024 . T. Stitt, K. Belcher, A. Campos, T. Kolev, P. Mocz, R. Rieben, A. Skinner, V. Tomov, A. Vargas, K. Weiss, Performance portable GPU acceleration of a high-order finite element multiphysics application , Journal of Fluids Engineering , 146(4):041102, 2024 . V. Dobrev, P. Knupp, T. Kolev, K. Mittal, R. Rieben, M. Stees, V. Tomov, Asymptotic Analysis of Compound Volume+ Shape Metrics for Mesh Optimization , Proceedings of the 2024 SIAM International Meshing Roundtable (IMR) . 2024 . W. Pazner, Tz. 
Kolev, P. Vassilevski, Matrix-free high-performance saddle-point solvers for high-order problems in H(div) , SIAM Journal on Scientific Computing , 2024 . Also available as arXiv:2304.12387 . G. Fu, S. Osher, W. Pazner, and W. Li. Generalized optimal transport and mean field control problems for reaction-diffusion systems with high-order finite element computation , Journal of Computational Physics , 2024 . Also available as arXiv:2306.06287 . J. Andrej, N. Atallah, J.-P. B\u00e4cker, J. Camier, D. Copeland, V. Dobrev, Y. Dudouit, T. Duswald, B. Keith, D. Kim, Tz. Kolev, B. Lazarov, K. Mittal, W. Pazner, S. Petrides, S. Shiraiwa, M. Stowell, V. Tomov. High-performance finite elements with MFEM , accepted for publication in the International Journal of High Performance Computing Applications, 2024 . Also available as arXiv:2402.15940 . A. Gillette, B. Keith, S. Petrides, Learning robust marking policies for adaptive mesh refinement , SIAM Journal on Scientific Computing , 2024 . Also available as arXiv:2207.06339 . T. Duswald, B. Keith, B. Lazarov, S. Petrides, B. Wohlmuth, Finite elements for Mat\u00e9rn-type random fields: Uncertainty in computational mechanics and design optimization (in-review). Also available as arXiv:2403.03658 2023 J. Vedral, Dissipative WENO stabilization of high-order discontinuous Galerkin methods for hyperbolic problems , in review . D. Kuzmin, H. Hajduk, Property-Preserving Numerical Schemes for Conservation Laws , World Scientific , 2023 D. Kuzmin, J. Vedral, Dissipation-based WENO stabilization of high-order finite element methods for scalar conservation laws , Journal of Computational Physics , 487, 112153, 2023 B. Keith, T.M. Surowiec, Proximal Galerkin: A structure-preserving finite element method for pointwise bound constraints , 2023 . R. Bollapragada, C. Karamanli, B. Keith, B. Lazarov, S. Petrides, J. Wang, An Adaptive Sampling Augmented Lagrangian Method for Stochastic Optimization with Deterministic Constraints , Computers & Mathematics with Applications , 2023 . Also available as arXiv:2305.01018 . J. Yang, K. Mittal, T. Dzanic, S. Petrides, B. Keith, B. Petersen, D. Faissol, R. Anderson, Multi-Agent Reinforcement Learning for Adaptive Mesh Refinement , Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems , 2023 . J. Yang, T. Dzanic, B. Petersen, J. Kudo, K. Mittal, V. Tomov, J.S. Camier, T. Zhao, H. Zha, T. Kolev, R. Anderson, Reinforcement learning for adaptive mesh refinement , Proceedings of the International Conference on Artificial Intelligence and Statistics , 2023 . W. Pazner, Tz. Kolev, and J. Camier, End-to-end GPU acceleration of low-order-refined preconditioning for high-order finite element discretizations , The International Journal of High Performance Computing Applications , 2023 . Also available as arXiv:2210.12253 . W. Pazner, Tz. Kolev, and C. Dohrmann, Low-order preconditioning for the high-order finite element de Rham complex , SIAM Journal on Scientific Computing , 2023 . Also available as arXiv:2203.02465 . J. Barrera, Tz. Kolev, K. Mittal, and V. Tomov, High-Order Mesh Morphing for Boundary and Interface Fitting to Implicit Geometries , Computer-Aided Design , 158, 103499, 2023 . Also available as arXiv:2208.05062 . J. Camier, V. Dobrev, P. Knupp, Tz. Kolev, K. Mittal, R. Rieben, and V. Tomov, Accelerating high-order mesh optimization using finite element partial assembly on GPUs , Journal of Computational Physics , 474, 111808, 2023 . Also available as arXiv:2205.12721 . F. 
G\u00f3mez-Lozada, C. Andr\u00e9s del Valle, J. D. Jim\u00e9nez-Paz, B. S. Lazarov and J. Galvis, Modelling and simulation of brinicle formation , Royal Society Open Science , 10, 10, 230268, 2023 . 2022 D. Kuzmin, J.-P. B\u00e4cker, An unfitted finite element method using level set functions for extrapolation into deformable diffuse interfaces , Journal of Computational Physics , 461, 111218, 2022 A. Vargas, T. Stitt, K. Weiss, V. Tomov, J. Camier, Tz. Kolev, and R. Rieben, Matrix-free approaches for GPU acceleration of a high-order finite element hydrodynamics application using MFEM, Umpire, and RAJA , The International Journal of High Performance Computing Applications , 36(4):492-509, 2022 . Also available as arXiv:2112.07075 . J. Nikl, M. Kucha\u0159\u00edk, and S. Weber, High-Order Curvilinear Finite Element Magneto-Hydrodynamics I: A Conservative Lagrangian Scheme , Journal of Computational Physics , 464, 111158, 2022 . Also available as arXiv:2110.11669 . T. L. Horvath and S. Rhebergen, A conforming sliding mesh technique for an embedded-hybridized discontinuous Galerkin discretization for fluid-rigid body interaction , in review , 2022 . N. Yavich, N. Koshev, M. Malovichko, A. Razorenova and M. Fedorov, Conservative Finite Element Modeling of EEG and MEG on Unstructured Grids , IEEE Transactions on Medical Imaging , 41(3):647-656, 2022 . Q. Tang, L. Chacon, Tz. Kolev, J. N. Shadid and X.-Z. Tang, An adaptive scalable fully implicit algorithm based on stabilized finite element for reduced visco-resistive MHD , Journal of Computational Physics , (454) 110967, 2022 . Also available as arXiv:2106.00260 . J. A. Turner, J. Belak, N. Barton, M. Bement, N. Carlson, R. Carson, S. DeWitt, J.-L. Fattebert, N. Hodge, Z. Jibben, W. King, L. Levine, C. Newman, A. Plotkowski, B. Radhakrishnan, S. T. Reeve, M. Rolchigo, A. Sabau, S. Slattery, and B. Stump. ExaAM: Metal additive manufacturing simulation at the fidelity of the microstructure. The International Journal of High Performance Computing Applications , 36(1):13-39, 2022 . Tz. Kolev and W. Pazner, Conservative and accurate solution transfer between high-order and low-order refined finite element spaces , SIAM Journal on Scientific Computing , 44(1), A1-A27, 2022 . Also available as arXiv:2103.05283 . 2021 A. Abdelfattah, V. Barra, N. Beams, R. Bleile, J. Brown, J. Camier, R. Carson, N. Chalmers, V. Dobrev, Y. Dudouit, P. Fischer, A. Karakus, S. Kerkemeier, Tz. Kolev, Y. Lan, E. Merzari, M. Min, M. Phillips, T. Rathnayake, R. Rieben, T. Stitt, A. Tomboulides, S. Tomov, V. Tomov, A. Vargas, T. Warburton, K. Weiss, GPU Algorithms for Efficient Exascale Discretizations , Parallel Computing , 108, 102841, 2021 . W. Pazner and Tz. Kolev, Uniform subspace correction preconditioners for discontinuous Galerkin methods with hp -refinement , Communications on Applied Mathematics and Computation , 2021 . Also available as arXiv:2009.01287 . Tz. Kolev, P. Fischer, J. Brown, V. Dobrev, J. Dongarra, M. Min, M. Shephard, S. Tomov, T. Warburton, A. Abdelfattah, V. Barra, N. Beams, J.-S. Camier, N. Chalmers, Y. Dudouit, W. Pazner, C. Smith, K. Swirydowicz, J. Thompson and V. Tomov, Efficient Exascale Discretizations: High Order Finite Element Methods , The International Journal on High Performance Computing Applications , 35(6), 527-552, 2021 . V. Dobrev, P. Knupp, Tz. Kolev, K. Mittal, and V. Tomov, hr -adaptivity for nonconforming high-order meshes with the target matrix optimization paradigm , Engineering with Computers , 2021 . 
Also available as arXiv:2010.02166 . W. Pazner, Sparse invariant domain preserving discontinuous Galerkin methods with subcell convex limiting , Computer Methods in Applied Mechanics and Engineering , 382, 113876, 2021 . Also available as arXiv:2004.08503 . J. Yang, T. Dzanic, B. Petersen, J. Kudo, K. Mittal, V. Tomov, J.-S. Camier, T. Zhao, H. Zha, Tz. Kolev, R. Anderson, D. Faissol, Reinforcement Learning for Adaptive Mesh Refinement , in review , 2021 . D. Kalchev, P. Vassilevski, and U. Villa, Parallel Element-based Algebraic Multigrid for H(curl) and H(div) Problems Using the ParELAG Library , in review , 2021 . N. Whitman, T. Palmer, P. Greaney, S. Hosseini, D. Burkes, and D. Senor, Gray Phonon Transport Prediction of Thermal Conductivity in Lithium Aluminate with Higher-Order Finite Elements on Meshes with Curved Surfaces , Journal of Computational and Theoretical Transport , 2021 . H. Hajduk, Monolithic convex limiting in discontinuous Galerkin discretizations of hyperbolic conservation laws , Computers & Mathematics with Applications , (87) 120-138, 2021 . Also available as arXiv:2007.01212 . J. Nikl, I. G\u00f6thel, M. Kucha\u0159\u00edk, S. Weber, and M. Bussmann, Implicit reduced Vlasov-Fokker-Planck-Maxwell model based on high-order mixed elements , Journal of Computational Physics , (434) 110214, 2021 . D. Kalchev, P. Vassilevski, and U. Villa, On ParELAG's Parallel Element-based Algebraic Multigrid and its MFEM Miniapps for H(curl) and H(div) Problems: a report including lowest and next to the lowest order numerical results , LLNL Tech. Report , LLNL-TR-824455, 2021 . J. Brown, A. Abdelfattah, V. Barra, N. Beams, J. Camier, V. Dobrev, Y. Dudouit, L. Ghaffari, Tz. Kolev, D. Medina, W. Pazner, T. Ratnayaka, J. Thompson and S. Tomov, libCEED: Fast algebra for high-order element-based discretizations , The Journal of Open Source Software , 2021 . P. Knupp, Tz. Kolev, K. Mittal, V. Tomov, Adaptive Surface Fitting and Tangential Relaxation for High-Order Mesh Optimization . International Meshing Roundtable , 2021 . 2020 N. Beams, A. Abdelfattah, S. Tomov, J. Dongarra, T. Kolev, and Y. Dudouit, High-Order Finite Element Method using Standard and Device-Level Batch GEMM on GPUs , IEEE/ACM 11th ScalA Workshop , 53-60, 2020 . A. Barker and Tz. Kolev, Matrix-free preconditioning for high-order H(curl) discretizations , Numerical Linear Algebra with Applications , 28(2) e2348, 2020 . D. Kuzmin and M. Quezada de Luna, Entropy conservation property and entropy stabilization of high-order continuous Galerkin approximations to scalar conservation laws , Computers & Fluids , (213) 104742, 2020 . A. Sandu, V. Tomov, L. Cervena, and Tz. Kolev, Conservative High-Order Time Integration for Lagrangian Hydrodynamics , SIAM Journal on Scientific Computing , 43(1), A221-A241, 2020 . B. S. Southworth, M. Holec, and T. Haut. Diffusion synthetic acceleration for heterogeneous domains, compatible with voids , Nuclear Science and Engineering , 195(2), 119-136, 2020 . T. Haut, B. Southworth, P. Maginot, V. Tomov, Diffusion Synthetic Acceleration Preconditioning for Discontinuous Galerkin Discretizations of SN Transport on High-Order Curved Meshes , SIAM Journal on Scientific Computing , 42(5), B1271-B1301, 2020 . R. Anderson, J. Andrej, A. Barker, J. Bramwell, J.-S. Camier, J. Cerveny V. Dobrev, Y. Dudouit, A. Fisher, Tz. Kolev, W. Pazner, M. Stowell, V. Tomov, I. Akkerman, J. Dahm, D. Medina, and S. 
Zampini, MFEM: A Modular Finite Element Library , Computers & Mathematics with Applications , (81) 42-74, 2020 . Also available as arXiv:1911.09220 . R. Li and C. Zhang, Efficient Parallel Implementations of Sparse Triangular Solves for GPU Architectures , Proceedings of the 2020 SIAM Conference on Parallel Processing for Scientific Computing , 2020 . W. Pazner, Efficient low-order refined preconditioners for high-order matrix-free continuous and discontinuous Galerkin methods , SIAM Journal on Scientific Computing , 42(5), pp. A3055-A3083, 2020 . B. Yee, S. Olivier, T. Haut, M. Holec, V. Tomov, P. Maginot, A Quadratic Programming Flux Correction Method for High-Order DG Discretizations of SN Transport , Journal of Computational Physics , (419) 109696, 2020 . T. L. Horvath and S. Rhebergen, An exactly mass conserving space-time embedded-hybridized discontinuous Galerkin method for the Navier-Stokes equations on moving domains , Journal of Computational Physics , (417) 109577, 2020 . S. Rhebergen and G. N. Wells, An embedded-hybridized discontinuous Galerkin finite element method for the Stokes equations , Computer Methods in Applied Mechanics and Engineering , (358) 112619, 2020 . P. Bello-Maldonado, Tz. Kolev, R. Rieben, and V. Tomov, A Matrix-Free Hyperviscosity Formulation for High-Order ALE Hydrodynamics , Computers & Fluids , (205) 104577, 2020 . V. Dobrev, P. Knupp, Tz. Kolev, K. Mittal, R. Rieben, and V. Tomov, Simulation-Driven Optimization of High-Order Meshes in ALE Hydrodynamics , Computers & Fluids , (208) 104602, 2020 . H. Hajduk, D. Kuzmin, Tz. Kolev, V. Tomov, I. Tomas, and J. Shadid, Matrix-free subcell residual distribution for Bernstein finite elements: Monolithic limiting , Computers & Fluids , (200) 104451, 2020 . M. Franco, J.-S. Camier, J. Andrej, and W. Pazner, High-order matrix-free incompressible flow solvers with GPU acceleration and low-order refined preconditioners , Computers & Fluids , (203) 104541, 2020 . S. Friedhoff and B. S. Southworth, On \"Optimal\" h-independent convergence of Parareal and multigrid-reduction-in-time using Runge-Kutta time integration , Numerical Linear Algebra with Applications , e2301, 2020 . B. S. Southworth, A. A. Sivas, and S. Rhebergen, On fixed-point, Krylov, and 2x2 block preconditioners for nonsymmetric problems , SIAM Journal on Matrix Analysis and Applications , 41(2), pp. 871-900, 2020 . P. Fischer, M. Min, T. Rathnayake, S. Dutta, Tz. Kolev, V. Dobrev, J.S. Camier, M. Kronbichler, T. Warburton, K. Swirydowicz, and J. Brown, Scalability of High-Performance PDE Solvers , The International Journal on High Performance Computing Applications , 34(5), pp. 562-586, 2020 . G. Sosa Jones, J. J. Lee, and S. Rhebergen, A space-time hybridizable discontinuous Galerkin method for linear free-surface waves , Journal of Scientific Computing , (85) 61, 2020 . Also available as arXiv:1910.07315 Z. Peng, Q. Tang and X.-Z. Tang. An adaptive discontinuous Petrov-Galerkin method for the Grad-Shafranov equation , SIAM Journal on Scientific Computing , 42(5):B1227-B1249, 2020 . 2019 H. Hajduk, D. Kuzmin, Tz. Kolev, and R. Abgrall, Matrix-free subcell residual distribution for Bernstein finite elements: Low-order schemes and FCT , Comp. Meth. Appl. Mech. Eng. , (359) 112658, 2019 . K. Suzuki, M. Fujisawa, and M. Mikawa, Simulation Controlling Method for Generating Desired Water Caustics , 2019 International Conference on Cyberworlds (CW) , Kyoto, Japan, pp. 163-170, 2019 . D. White, Y. Choit, and J. 
Kudo, A dual mesh method with adaptivity for stress constrained topology optimization , Structural and Multidisciplinary Optimization , 61, pp. 749-762, 2019 . S. Watts, W. Arrighi, J. Kudo, D. A. Tortorelli, and D. A. White, Simple, accurate surrogate models of the elastic response of three-dimensional open truss micro-architectures with applications to multiscale topology design , Structural and Multidisciplinary Optimization , 60, pp. 1887-1920, 2019 . V. Dobrev, P. Knupp, Tz. Kolev, and V. Tomov, Towards Simulation-Driven Optimization of High-Order Meshes by the Target-Matrix Optimization Paradigm , 27th International Meshing Roundtable, Oct 1-8, 2018, Albuquerque , Lecture Notes in Computational Science and Engineering, 127, pp. 285-302, 2019 . J. Cerveny, V. Dobrev, and Tz. Kolev, Non-Conforming Mesh Refinement For High-Order Finite Elements , SIAM Journal on Scientific Computing , 41(4):C367-C392, 2019 . D. White, W. Arrighi, J. Kudo, and S. Watts, Multiscale topology optimization using neural network surrogate models , Comp. Meth. Appl. Mech. Eng. , 346, pp. 1118-1135, 2019 . V. A. Dobrev, T. V. Kolev, C. S. Lee, V. Z. Tomov, and P. S. Vassilevski, Algebraic Hybridization and Static Condensation with Application to Scalable H(div) Preconditioning , SIAM Journal on Scientific Computing , 41(3):B425-B447, 2019 . D. White, and A. Voronin, A computational study of symmetry and well-posedness of structural topology optimization , Structural and Multidisciplinary Optimization , 59(3), pp. 759-766, 2019 . T. Haut, P. Maginot, V. Tomov, B. Southworth, T. Brunner and T. Bailey, An Efficient Sweep-Based Solver for the SN Equations on High-Order Meshes , Nuclear Science and Engineering , 193(7):746-759, 2019 . V. Dobrev, P. Knupp, Tz. Kolev, K. Mittal, and V. Tomov, The Target-Matrix Optimization Paradigm For High-Order Meshes , SIAM Journal on Scientific Computing , 41(1):B50-B68, 2019 . K. L. A. Kirk, T. L. Horvath, A. Cesmelioglu, and S. Rhebergen, Analysis of a space-time hybridizable discontinuous Galerkin method for the advection-diffusion problem on time-dependent domains , SIAM Journal on Numerical Analysis , 57(4), pp. 1677-1696, 2019 . T. L. Horvath and S. Rhebergen, A locally conservative and energy-stable finite element method for the Navier-Stokes problem on time-dependent domains , International Journal for Numerical Methods in Fluids , 89(12):519-532, 2019 . R. Li, Y. Xi, L. Erlandson, and Y. Saad, The Eigenvalues Slicing Library (EVSL): Algorithms, Implementation, and Software , SIAM Journal on Scientific Computing , 41(4), pp. C393-C415, 2019 . 2018 H. Auten, The High Value of Open Source Software , Science & Technology Review , January/February 2018, pp. 5-11, 2018 . R. W. Anderson, V. A. Dobrev, Tz. V. Kolev, R. N. Rieben, and V. Z. Tomov, High-Order Multi-Material ALE Hydrodynamics , SIAM Journal on Scientific Computing , 40(1), pp. B32-B58, 2018 . A. T. Barker, V. Dobrev, J. Gopalakrishnan, and Tz. Kolev, A scalable preconditioner for a primal discontinuous Petrov-Galerkin method , SIAM Journal on Scientific Computing , 40(2), pp. A1187-A1203, 2018 . V. Dobrev, T. Kolev, D. Kuzmin, R. Rieben, and V. Tomov, Sequential limiting in continuous and discontinuous Galerkin methods for the Euler equations , Journal of Computational Physics , 356, pp. 372-390, 2018 . M. Reberol and B. L\u00e9vy, Computing the Distance between Two Finite Element Solutions Defined on Different 3D Meshes on a GPU , SIAM Journal on Scientific Computing , 40(1), pp. C131-C155, 2018 . A. Mazuyer, P. 
Cupillard, R. Giot, M. Conin, Y. Leroy, and P. Thore, Stress estimation in reservoirs using an integrated inverse method , Computers & Geosciences , 114, pp. 30-40, 2018 . J. Gopalakrishnan, M. Neum\u00fcller, and P. Vassilevski, The auxiliary space preconditioner for the de Rham complex , SIAM Journal on Numerical Analysis , 56(6), pp. 3196-3218, 2018 . D. A. White, M. Stowell, and D. A. Tortorelli, Topological optimization of structures using Fourier representations , Structural and Multidisciplinary Optimization , pp. 1-16, 2018 . S. Rhebergen and G. N. Wells, Preconditioning of a hybridized discontinuous Galerkin finite element method for the Stokes equations , Journal of Scientific Computing , 77(3), pp. 1936-1501, 2018 . T. S. Haut, P. G. Maginot, V. Z. Tomov, T. A. Brunner, and T. S. Bailey, An Efficient Sweep-based Solver for the $S_N$ Equations on High-Order Meshes , American Nuclear Society 2018 Annual Meeting, June 14-21, Philadelphia, PA , 2018 . A. S\u00e1nchez-Villar and M. Merino, Advances in Wave-Plasma Modelling in ECR Thrusters , 2018 Space Propulsion Conference, May 14-18, Seville, Spain , 2018 . 2017 S. Osborn, P. S. Vassilevski, and U. Villa, A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields , SIAM Journal on Scientific Computing , 39(5), pp. S543-S562, 2017 . R. D. Falgout, T. A. Manteuffel, B. O'Neill, and J. B. Schroder, Multigrid Reduction In Time For Nonlinear Parabolic Problems: A Case Study , SIAM Journal on Scientific Computing , 39(5), pp. S298-S322, 2017 . T. A. Manteuffel, L. N. Olson, J. B. Schroder, and B. S. Southworth, A Root-Node Based Algebraic Multigrid Method , SIAM Journal on Scientific Computing , 39(5), pp. S723-S756, 2017 . A. T. Barker, C. S. Lee, and P. S. Vassilevski, Spectral Upscaling for Graph Laplacian Problems with Application to Reservoir Simulation , SIAM Journal on Scientific Computing , 39(5), pp. S323-S346, 2017 . V. A. Dobrev, Tz. Kolev, N. A. Peterson, and J. B. Schroder, Two-level Convergence Theory For Multigrid Reduction In Time (MGRIT) , SIAM Journal on Scientific Computing , 39(5), pp. S501-S527, 2017 . R. E. Bank, P. S. Vassilevski, and L. T. Zikatanov, Arbitrary Dimension Convection-Diffusion Schemes For Space-Time Discretizations , Journal of Computational and Applied Mathematics , 310, pp. 19-31, 2017 . S. Osborn, P. Zulian, T. Benson, U. Villa, R. Krause, and P. S. Vassilevski, Scalable hierarchical PDE sampler for generating spatially correlated random fields using non-matching meshes , Numerical Linear Algebra with Applications , 25, pp. e2146, 2017 . J. H. Adler, I. Lashuk, and S. P. MacLachlan, Composite-grid multigrid for diffusion on the sphere , Numerical Linear Algebra with Applications , 25(1), pp. e2115, 2017 . S. Zampini, P. S. Vassilevski, V. Dobrev, and T. Kolev, Balancing Domain Decomposition by Constraints Algorithms for Curl-conforming Spaces of Arbitrary Order , Domain Decomposition Methods in Science and Engineering XXIV , 2017 . M. Larsen, J. Ahrens, U. Ayachit, E. Brugger, H. Childs, B. Geveci, and C. Harrison, The ALPINE In Situ Infrastructure: Ascending from the Ashes of Strawman , ISAV 2017: In Situ Infrastructures for Enabling Extreme-scale Analysis and Visualization , 2017 . J. Wright and S. Shiraiwa, Antenna to Core: A New Approach to RF Modelling , 22 Topical Conference on Radio-Frequency Power in Plasmas , 2017 . S. Shiraiwa, J. C. Wright, P. T. Bonoli, Tz. Kolev, and M. 
Stowell, RF wave simulation for cold edge plasmas using the MFEM library , 22 Topical Conference on Radio-Frequency Power in Plasmas , 2017 . C. Hofer, U. Langer, M. Neum\u00fcller, and I. Toulopoulos, Time-Multipatch Discontinuous Galerkin Space-Time Isogeometric Analysis of Parabolic Evolution Problems , RICAM-Report 2017-26 , 2017 . J. Billings, A. McCaskey, G. Vallee, and G. Watson, Will humans even write code in 2040 and what would that mean for extreme heterogeneity in computing? , arXiv:1712.00676 , 2017 . M. L. C. Christensen, U. Villa, A. Engsig-Karup, and P. S. Vassilevski, Numerical Multilevel Upscaling For Incompressible Flow in Reservoir Simulation: An Element-Based Algebraic Multigrid (AMGe) Approach , SIAM Journal on Scientific Computing , 39(1), pp. B102-B137, 2017 . R. Anderson, V. Dobrev, Tz. Kolev, D. Kuzmin, M. Q. de Luna, R. Rieben, and V. Tomov, High-order local maximum principle preserving (MPP) discontinuous Galerkin finite element method for the transport equation , Journal of Computational Physics , 334, pp. 102-124, 2017 . R. Li and Y. Saad, Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners , SIAM Journal on Matrix Analysis and Applications , 38(3), pp. 807-828, 2017 . 2016 D. Z. Kalchev, C. S. Lee, U. Villa, Y. Efendiev, and P. S. Vassilevski, Upscaling of Mixed Finite Element Discretization Problems by the Spectral AMGe Method , SIAM Journal on Scientific Computing , 38(5), pp. A2912-A2933, 2016 . V. A. Dobrev, Tz. V. Kolev, R. N. Rieben, and V. Z. Tomov, Multi-material closure model for high-order finite element Lagrangian hydrodynamics , International Journal for Numerical Methods in Fluids , 82(10), pp. 689-706, 2016 . J. Guermond, B. Popov, and V. Tomov, Entropy-viscosity method for the single material Euler equations in Lagrangian frame , Computer Methods in Applied Mechanics and Engineering , 300, pp. 402-426, 2016 . M. Holec, J. Limpouch, R. Liska, and S. Weber, High-order discontinuous Galerkin nonlocal transport and energy equations scheme for radiation hydrodynamics , International Journal for Numerical Methods in Fluids , 83(10), pp. 779-797, 2016 . Tz. V. Kolev, J. Xu, and Y. Zhu, Multilevel Preconditioners for Reaction-Diffusion Problems with Discontinuous Coefficients , Journal of Scientific Computing , 67(1), pp. 324-350, 2016 . M. Reberol and B. L\u00e9vy, Low-order continuous finite element spaces on hybrid non-conforming hexahedral-tetrahedral meshes , CoRR , abs/1605.02626, 2016 . O. Marques, A. Druinsky, X. S. Li, A. T. Barker, P. Vassilevski, and D. Kalchev, Tuning the Coarse Space Construction in a Spectral AMG Solver , Procedia Computer Science , 80, pp. 212-221, International Conference on Computational Science 2016, ICCS 2016, 6-8 June 2016, San Diego, California, USA, 2016 . J. S. Yeom, J. J. Thiagarajan, A. Bhatele, G. Bronevetsky, and T. Kolev, Data-Driven Performance Modeling of Linear Solvers for Sparse Matrices , 2016 7th International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS) , 2016 . 2015 and earlier D. Osei-Kuffuor, R. Li, and Y. Saad, Matrix Reordering Using Multilevel Graph Coarsening for ILU Preconditioning , SIAM Journal on Scientific Computing , 37(1), pp. A391-A419, 2015 . R. Anderson, V. Dobrev, Tz. Kolev, and R. Rieben, Monotonicity in high-order curvilinear finite element ALE remap , Int. J. Numer. Meth. Fluids , 77(5), pp. 249-273, 2014 . V. Dobrev, Tz. Kolev, and R. 
Rieben, High-order curvilinear finite element methods for elastic-plastic Lagrangian dynamics , J. Comp. Phys. , (257B), pp. 1062-1080, 2014 . P. Vassilevski and U. Villa, A mixed formulation for the Brinkman problem , SIAM Journal on Numerical Analysis , 52-1, pp. 258-281, 2014 . J. H. Adler and P. S. Vassilevski, Error Analysis for Constrained First-Order System Least-Squares Finite-Element Methods , SIAM Journal on Scientific Computing , 36(3), pp. A1071-A1088, 2014 . A. Aposporidis, P. S. Vassilevski, and A. Veneziani, Multigrid preconditioning of the non-regularized augmented Bingham fluid problem , ETNA. Electronic Transactions on Numerical Analysis , 41, 2014 . P. S. Vassilevski and U. M. Yang, Reducing communication in algebraic multigrid using additive variants , Numerical Linear Algebra with Applications , 21(2), pp. 275-296, 2014 . T. Dong, V. Dobrev, T. Kolev, R. Rieben, S. Tomov, and J. Dongarra, A Step towards Energy Efficient Computing: Redesigning a Hydrodynamic Application on CPU-GPU , 2014 IEEE 28th International Parallel and Distributed Processing Symposium , May 2014 . P. Vassilevski and U. Villa, A block-diagonal algebraic multigrid preconditioner for the Brinkman problem , SIAM Journal on Scientific Computing , 35-5, pp. S3-S17, 2013 . V. Dobrev, T. Ellis, Tz. Kolev, and R. Rieben, High-order curvilinear finite elements for axisymmetric Lagrangian hydrodynamics , Computers & Fluids , pp. 58-69, 2013 . D. Kalchev, C. Ketelsen, and P. S. Vassilevski, Two-level adaptive algebraic multigrid for sequence of problems with slowly varying random coefficients , SIAM Journal on Scientific Computing , 35(6), pp. B1215-B1234, 2013 . P. D'Ambra and P. S. Vassilevski, Adaptive AMG with coarsening based on compatible weighted matching , Computing and Visualization in Science , 16(2), pp. 59-76, 2013 . T. A. Brunner, T. V. Kolev, T. S. Bailey, and A. T. Till, Preserving Spherical Symmetry in Axisymmetric Coordinates for Diffusion , International Conference on Mathematics and Computational Methods Applied to Nuclear Science & Engineering , 2013 . Tz. Kolev and P. Vassilevski, Parallel auxiliary space AMG solver for H(div) problems , SIAM Journal on Scientific Computing , 34, pp. A3079-A3098, 2012 . V. Dobrev, Tz. Kolev, and R. Rieben, High-order curvilinear finite element methods for Lagrangian hydrodynamics , SIAM Journal on Scientific Computing , 34, pp. B606-B641, 2012 . I. Lashuk and P. Vassilevski, Element agglomeration coarse Raviart-Thomas spaces with improved approximation properties , Numerical Linear Algebra with Applications , 19, pp. 414-426, 2012 . D. Kalchev, Adaptive algebraic multigrid for finite element elliptic equations with random coefficients , LLNL Tech. Report , LLNL-TR-553254, 2012 . A. Aposporidis, P. Vassilevski, and A. Veneziani, A geometric nonlinear AMLI preconditioner for the Bingham fluid flow in mixed variables , LLNL Tech. Report , LLNL-JRNL-600372, 2012 . P. Knupp, Introducing the target-matrix paradigm for mesh optimization by node movement , Engineering with Computers , 28(4), pp. 419-429, 2012 . T. A. Brunner, Mulard: A Multigroup Thermal Radiation Diffusion Mini-Application , DOE Exascale Research Conference, Portland, Oregon , 2012 . A. Baker, R. Falgout, T. Kolev, and U. Yang, Multigrid smoothers for ultra-parallel computing , SIAM Journal on Scientific Computing , 33(5), pp. 2864-2887, 2011 . V. Dobrev, T. Ellis, Tz. Kolev, and R. Rieben, Curvilinear finite elements for Lagrangian hydrodynamics , Int. J. Numer. Meth. Fluids , 65, pp. 
1295-1310, 2011 . V. Dobrev, J.-L. Guermond, and B. Popov, Surface reconstruction and image enhancement via L1-minimization , SIAM Journal on Scientific Computing , 32 (3), pp. 1591-1616, 2010 . J. Brannick and R. Falgout, Compatible relaxation and coarsening in algebraic multigrid , SIAM Journal on Scientific Computing , 32, pp. 1393-1416, 2010 . A. Baker, Tz. Kolev, and U. M. Yang, Improving algebraic multigrid interpolation operators for linear elasticity problems , Numerical Linear Algebra with Applications , 17, pp. 495-517, 2010 . U. M. Yang, On long-range interpolation operators for aggressive coarsening , Numerical Linear Algebra with Applications , 17, pp. 453-472, 2010 . Tz. Kolev and P. Vassilevski, Parallel auxiliary space AMG for H(curl) problems , Journal of Computational Mathematics , 27, pp. 604-623, 2009 . Tz. V. Kolev and R. N. Rieben, A tensor artificial viscosity using a finite element approach , Journal of Computational Physics , 228(22), pp. 8336 - 8366, 2009 . A. Baker, E. Jessup, and Tz. Kolev, A simple strategy for varying the restart parameter in GMRES(m) , J. Comp. Appl. Math. , 230, pp. 751-761, 2009 . Tz. Kolev, J. Pasciak, and P. Vassilevski, H(curl) auxiliary mesh preconditioning , Numerical Linear Algebra with Applications , 15, pp. 455-471, 2008 . H. De Sterck, R. Falgout, J. Nolting, and U. M. Yang, Distance-two interpolation for parallel algebraic multigrid , Numerical Linear Algebra with Applications , 15, pp. 115-139, 2008 . V. Dobrev, R. Lazarov, and L. Zikatanov, Preconditioning of symmetric interior penalty discontinuous Galerkin FEM for second order elliptic problems , in Domain Decomposition Methods in Science and Engineering XVII, Lecture Notes in Computational Science and Engineering, vol. 60, U. Langer et al. eds, Springer-Verlag, Berlin, Heidelberg, pp. 33-44, 2008 . D. Alber and L. Olson, Parallel coarse grid selection , Numerical Linear Algebra with Applications , 14, pp. 611-643, 2007 . V. Dobrev, R. Lazarov, P. Vassilevski, and L. Zikatanov, Two-level preconditioning of discontinuous Galerkin approximations of second-order elliptic equations , Numerical Linear Algebra with Applications , 13 (9), pp. 753-770, 2006 . Tz. Kolev and P. Vassilevski, AMG by element agglomeration and constrained energy minimization interpolation , Numerical Linear Algebra with Applications , 13, pp. 771-788, 2006 . J. Bramble, Tz. Kolev, and J. Pasciak, A least-squares approximation method for the time-harmonic Maxwell equations , Journal of Numerical Mathematics , 13(4), pp. 237-263, 2005 . P. Vassilevski, Sparse matrix element topology with application to AMG(e) and preconditioning , Numerical Linear Algebra with Applications , 9, pp. 429-444, 2002 . MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Publications"}, {"location": "publications/#publications", "text": "", "title": "Publications"}, {"location": "publications/#google-scholar-citations", "text": "Recent All time", "title": "Google Scholar Citations"}, {"location": "publications/#selected-publications", "text": "", "title": "Selected Publications"}, {"location": "publications/#2024", "text": "T. Dzanic, K. Mittal, D. Kim, J. Yang, S. Petrides, B. Keith, R. Anderson, DynAMO: Multi-agent reinforcement learning for dynamic anticipatory mesh optimization with applications to hyperbolic conservation laws , Journal of Computational Physics , 506, 112924, 2024 K. Mittal, V. Dobrev, P. Knupp, T. Kolev, F. Ledoux, C. Roche, V. 
Tomov, Mixed-Order Meshes through rp-adaptivity for Surface Fitting to Implicit Geometries , Proceedings of the 2024 SIAM International Meshing Roundtable (IMR) . 2024 . T. Stitt, K. Belcher, A. Campos, T. Kolev, P. Mocz, R. Rieben, A. Skinner, V. Tomov, A. Vargas, K. Weiss, Performance portable GPU acceleration of a high-order finite element multiphysics application , Journal of Fluids Engineering , 146(4):041102, 2024 . V. Dobrev, P. Knupp, T. Kolev, K. Mittal, R. Rieben, M. Stees, V. Tomov, Asymptotic Analysis of Compound Volume+ Shape Metrics for Mesh Optimization , Proceedings of the 2024 SIAM International Meshing Roundtable (IMR) . 2024 . W. Pazner, Tz. Kolev, P. Vassilevski, Matrix-free high-performance saddle-point solvers for high-order problems in H(div) , SIAM Journal on Scientific Computing , 2024 . Also available as arXiv:2304.12387 . G. Fu, S. Osher, W. Pazner, and W. Li. Generalized optimal transport and mean field control problems for reaction-diffusion systems with high-order finite element computation , Journal of Computational Physics , 2024 . Also available as arXiv:2306.06287 . J. Andrej, N. Atallah, J.-P. B\u00e4cker, J. Camier, D. Copeland, V. Dobrev, Y. Dudouit, T. Duswald, B. Keith, D. Kim, Tz. Kolev, B. Lazarov, K. Mittal, W. Pazner, S. Petrides, S. Shiraiwa, M. Stowell, V. Tomov. High-performance finite elements with MFEM , accepted for publication in the International Journal of High Performance Computing Applications, 2024 . Also available as arXiv:2402.15940 . A. Gillette, B. Keith, S. Petrides, Learning robust marking policies for adaptive mesh refinement , SIAM Journal on Scientific Computing , 2024 . Also available as arXiv:2207.06339 . T. Duswald, B. Keith, B. Lazarov, S. Petrides, B. Wohlmuth, Finite elements for Mat\u00e9rn-type random fields: Uncertainty in computational mechanics and design optimization (in-review). Also available as arXiv:2403.03658", "title": "2024"}, {"location": "publications/#2023", "text": "J. Vedral, Dissipative WENO stabilization of high-order discontinuous Galerkin methods for hyperbolic problems , in review . D. Kuzmin, H. Hajduk, Property-Preserving Numerical Schemes for Conservation Laws , World Scientific , 2023 D. Kuzmin, J. Vedral, Dissipation-based WENO stabilization of high-order finite element methods for scalar conservation laws , Journal of Computational Physics , 487, 112153, 2023 B. Keith, T.M. Surowiec, Proximal Galerkin: A structure-preserving finite element method for pointwise bound constraints , 2023 . R. Bollapragada, C. Karamanli, B. Keith, B. Lazarov, S. Petrides, J. Wang, An Adaptive Sampling Augmented Lagrangian Method for Stochastic Optimization with Deterministic Constraints , Computers & Mathematics with Applications , 2023 . Also available as arXiv:2305.01018 . J. Yang, K. Mittal, T. Dzanic, S. Petrides, B. Keith, B. Petersen, D. Faissol, R. Anderson, Multi-Agent Reinforcement Learning for Adaptive Mesh Refinement , Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems , 2023 . J. Yang, T. Dzanic, B. Petersen, J. Kudo, K. Mittal, V. Tomov, J.S. Camier, T. Zhao, H. Zha, T. Kolev, R. Anderson, Reinforcement learning for adaptive mesh refinement , Proceedings of the International Conference on Artificial Intelligence and Statistics , 2023 . W. Pazner, Tz. Kolev, and J. Camier, End-to-end GPU acceleration of low-order-refined preconditioning for high-order finite element discretizations , The International Journal of High Performance Computing Applications , 2023 . 
Also available as arXiv:2210.12253 . W. Pazner, Tz. Kolev, and C. Dohrmann, Low-order preconditioning for the high-order finite element de Rham complex , SIAM Journal on Scientific Computing , 2023 . Also available as arXiv:2203.02465 . J. Barrera, Tz. Kolev, K. Mittal, and V. Tomov, High-Order Mesh Morphing for Boundary and Interface Fitting to Implicit Geometries , Computer-Aided Design , 158, 103499, 2023 . Also available as arXiv:2208.05062 . J. Camier, V. Dobrev, P. Knupp, Tz. Kolev, K. Mittal, R. Rieben, and V. Tomov, Accelerating high-order mesh optimization using finite element partial assembly on GPUs , Journal of Computational Physics , 474, 111808, 2023 . Also available as arXiv:2205.12721 . F. G\u00f3mez-Lozada, C. Andr\u00e9s del Valle, J. D. Jim\u00e9nez-Paz, B. S. Lazarov and J. Galvis, Modelling and simulation of brinicle formation , Royal Society Open Science , 10, 10, 230268, 2023 .", "title": "2023"}, {"location": "publications/#2022", "text": "D. Kuzmin, J.-P. B\u00e4cker, An unfitted finite element method using level set functions for extrapolation into deformable diffuse interfaces , Journal of Computational Physics , 461, 111218, 2022 A. Vargas, T. Stitt, K. Weiss, V. Tomov, J. Camier, Tz. Kolev, and R. Rieben, Matrix-free approaches for GPU acceleration of a high-order finite element hydrodynamics application using MFEM, Umpire, and RAJA , The International Journal of High Performance Computing Applications , 36(4):492-509, 2022 . Also available as arXiv:2112.07075 . J. Nikl, M. Kucha\u0159\u00edk, and S. Weber, High-Order Curvilinear Finite Element Magneto-Hydrodynamics I: A Conservative Lagrangian Scheme , Journal of Computational Physics , 464, 111158, 2022 . Also available as arXiv:2110.11669 . T. L. Horvath and S. Rhebergen, A conforming sliding mesh technique for an embedded-hybridized discontinuous Galerkin discretization for fluid-rigid body interaction , in review , 2022 . N. Yavich, N. Koshev, M. Malovichko, A. Razorenova and M. Fedorov, Conservative Finite Element Modeling of EEG and MEG on Unstructured Grids , IEEE Transactions on Medical Imaging , 41(3):647-656, 2022 . Q. Tang, L. Chacon, Tz. Kolev, J. N. Shadid and X.-Z. Tang, An adaptive scalable fully implicit algorithm based on stabilized finite element for reduced visco-resistive MHD , Journal of Computational Physics , (454) 110967, 2022 . Also available as arXiv:2106.00260 . J. A. Turner, J. Belak, N. Barton, M. Bement, N. Carlson, R. Carson, S. DeWitt, J.-L. Fattebert, N. Hodge, Z. Jibben, W. King, L. Levine, C. Newman, A. Plotkowski, B. Radhakrishnan, S. T. Reeve, M. Rolchigo, A. Sabau, S. Slattery, and B. Stump. ExaAM: Metal additive manufacturing simulation at the fidelity of the microstructure. The International Journal of High Performance Computing Applications , 36(1):13-39, 2022 . Tz. Kolev and W. Pazner, Conservative and accurate solution transfer between high-order and low-order refined finite element spaces , SIAM Journal on Scientific Computing , 44(1), A1-A27, 2022 . Also available as arXiv:2103.05283 .", "title": "2022"}, {"location": "publications/#2021", "text": "A. Abdelfattah, V. Barra, N. Beams, R. Bleile, J. Brown, J. Camier, R. Carson, N. Chalmers, V. Dobrev, Y. Dudouit, P. Fischer, A. Karakus, S. Kerkemeier, Tz. Kolev, Y. Lan, E. Merzari, M. Min, M. Phillips, T. Rathnayake, R. Rieben, T. Stitt, A. Tomboulides, S. Tomov, V. Tomov, A. Vargas, T. Warburton, K. Weiss, GPU Algorithms for Efficient Exascale Discretizations , Parallel Computing , 108, 102841, 2021 . W. 
Pazner and Tz. Kolev, Uniform subspace correction preconditioners for discontinuous Galerkin methods with hp -refinement , Communications on Applied Mathematics and Computation , 2021 . Also available as arXiv:2009.01287 . Tz. Kolev, P. Fischer, J. Brown, V. Dobrev, J. Dongarra, M. Min, M. Shephard, S. Tomov, T. Warburton, A. Abdelfattah, V. Barra, N. Beams, J.-S. Camier, N. Chalmers, Y. Dudouit, W. Pazner, C. Smith, K. Swirydowicz, J. Thompson and V. Tomov, Efficient Exascale Discretizations: High Order Finite Element Methods , The International Journal on High Performance Computing Applications , 35(6), 527-552, 2021 . V. Dobrev, P. Knupp, Tz. Kolev, K. Mittal, and V. Tomov, hr -adaptivity for nonconforming high-order meshes with the target matrix optimization paradigm , Engineering with Computers , 2021 . Also available as arXiv:2010.02166 . W. Pazner, Sparse invariant domain preserving discontinuous Galerkin methods with subcell convex limiting , Computer Methods in Applied Mechanics and Engineering , 382, 113876, 2021 . Also available as arXiv:2004.08503 . J. Yang, T. Dzanic, B. Petersen, J. Kudo, K. Mittal, V. Tomov, J.-S. Camier, T. Zhao, H. Zha, Tz. Kolev, R. Anderson, D. Faissol, Reinforcement Learning for Adaptive Mesh Refinement , in review , 2021 . D. Kalchev, P. Vassilevski, and U. Villa, Parallel Element-based Algebraic Multigrid for H(curl) and H(div) Problems Using the ParELAG Library , in review , 2021 . N. Whitman, T. Palmer, P. Greaney, S. Hosseini, D. Burkes, and D. Senor, Gray Phonon Transport Prediction of Thermal Conductivity in Lithium Aluminate with Higher-Order Finite Elements on Meshes with Curved Surfaces , Journal of Computational and Theoretical Transport , 2021 . H. Hajduk, Monolithic convex limiting in discontinuous Galerkin discretizations of hyperbolic conservation laws , Computers & Mathematics with Applications , (87) 120-138, 2021 . Also available as arXiv:2007.01212 . J. Nikl, I. G\u00f6thel, M. Kucha\u0159\u00edk, S. Weber, and M. Bussmann, Implicit reduced Vlasov-Fokker-Planck-Maxwell model based on high-order mixed elements , Journal of Computational Physics , (434) 110214, 2021 . D. Kalchev, P. Vassilevski, and U. Villa, On ParELAG's Parallel Element-based Algebraic Multigrid and its MFEM Miniapps for H(curl) and H(div) Problems: a report including lowest and next to the lowest order numerical results , LLNL Tech. Report , LLNL-TR-824455, 2021 . J. Brown, A. Abdelfattah, V. Barra, N. Beams, J. Camier, V. Dobrev, Y. Dudouit, L. Ghaffari, Tz. Kolev, D. Medina, W. Pazner, T. Ratnayaka, J. Thompson and S. Tomov, libCEED: Fast algebra for high-order element-based discretizations , The Journal of Open Source Software , 2021 . P. Knupp, Tz. Kolev, K. Mittal, V. Tomov, Adaptive Surface Fitting and Tangential Relaxation for High-Order Mesh Optimization . International Meshing Roundtable , 2021 .", "title": "2021"}, {"location": "publications/#2020", "text": "N. Beams, A. Abdelfattah, S. Tomov, J. Dongarra, T. Kolev, and Y. Dudouit, High-Order Finite Element Method using Standard and Device-Level Batch GEMM on GPUs , IEEE/ACM 11th ScalA Workshop , 53-60, 2020 . A. Barker and Tz. Kolev, Matrix-free preconditioning for high-order H(curl) discretizations , Numerical Linear Algebra with Applications , 28(2) e2348, 2020 . D. Kuzmin and M. Quezada de Luna, Entropy conservation property and entropy stabilization of high-order continuous Galerkin approximations to scalar conservation laws , Computers & Fluids , (213) 104742, 2020 . A. Sandu, V. Tomov, L. 
Cervena, and Tz. Kolev, Conservative High-Order Time Integration for Lagrangian Hydrodynamics , SIAM Journal on Scientific Computing , 43(1), A221-A241, 2020 . B. S. Southworth, M. Holec, and T. Haut. Diffusion synthetic acceleration for heterogeneous domains, compatible with voids , Nuclear Science and Engineering , 195(2), 119-136, 2020 . T. Haut, B. Southworth, P. Maginot, V. Tomov, Diffusion Synthetic Acceleration Preconditioning for Discontinuous Galerkin Discretizations of SN Transport on High-Order Curved Meshes , SIAM Journal on Scientific Computing , 42(5), B1271-B1301, 2020 . R. Anderson, J. Andrej, A. Barker, J. Bramwell, J.-S. Camier, J. Cerveny V. Dobrev, Y. Dudouit, A. Fisher, Tz. Kolev, W. Pazner, M. Stowell, V. Tomov, I. Akkerman, J. Dahm, D. Medina, and S. Zampini, MFEM: A Modular Finite Element Library , Computers & Mathematics with Applications , (81) 42-74, 2020 . Also available as arXiv:1911.09220 . R. Li and C. Zhang, Efficient Parallel Implementations of Sparse Triangular Solves for GPU Architectures , Proceedings of the 2020 SIAM Conference on Parallel Processing for Scientific Computing , 2020 . W. Pazner, Efficient low-order refined preconditioners for high-order matrix-free continuous and discontinuous Galerkin methods , SIAM Journal on Scientific Computing , 42(5), pp. A3055-A3083, 2020 . B. Yee, S. Olivier, T. Haut, M. Holec, V. Tomov, P. Maginot, A Quadratic Programming Flux Correction Method for High-Order DG Discretizations of SN Transport , Journal of Computational Physics , (419) 109696, 2020 . T. L. Horvath and S. Rhebergen, An exactly mass conserving space-time embedded-hybridized discontinuous Galerkin method for the Navier-Stokes equations on moving domains , Journal of Computational Physics , (417) 109577, 2020 . S. Rhebergen and G. N. Wells, An embedded-hybridized discontinuous Galerkin finite element method for the Stokes equations , Computer Methods in Applied Mechanics and Engineering , (358) 112619, 2020 . P. Bello-Maldonado, Tz. Kolev, R. Rieben, and V. Tomov, A Matrix-Free Hyperviscosity Formulation for High-Order ALE Hydrodynamics , Computers & Fluids , (205) 104577, 2020 . V. Dobrev, P. Knupp, Tz. Kolev, K. Mittal, R. Rieben, and V. Tomov, Simulation-Driven Optimization of High-Order Meshes in ALE Hydrodynamics , Computers & Fluids , (208) 104602, 2020 . H. Hajduk, D. Kuzmin, Tz. Kolev, V. Tomov, I. Tomas, and J. Shadid, Matrix-free subcell residual distribution for Bernstein finite elements: Monolithic limiting , Computers & Fluids , (200) 104451, 2020 . M. Franco, J.-S. Camier, J. Andrej, and W. Pazner, High-order matrix-free incompressible flow solvers with GPU acceleration and low-order refined preconditioners , Computers & Fluids , (203) 104541, 2020 . S. Friedhoff and B. S. Southworth, On \"Optimal\" h-independent convergence of Parareal and multigrid-reduction-in-time using Runge-Kutta time integration , Numerical Linear Algebra with Applications , e2301, 2020 . B. S. Southworth, A. A. Sivas, and S. Rhebergen, On fixed-point, Krylov, and 2x2 block preconditioners for nonsymmetric problems , SIAM Journal on Matrix Analysis and Applications , 41(2), pp. 871-900, 2020 . P. Fischer, M. Min, T. Rathnayake, S. Dutta, Tz. Kolev, V. Dobrev, J.S. Camier, M. Kronbichler, T. Warburton, K. Swirydowicz, and J. Brown, Scalability of High-Performance PDE Solvers , The International Journal on High Performance Computing Applications , 34(5), pp. 562-586, 2020 . G. Sosa Jones, J. J. Lee, and S. 
Rhebergen, A space-time hybridizable discontinuous Galerkin method for linear free-surface waves , Journal of Scientific Computing , (85) 61, 2020 . Also available as arXiv:1910.07315 Z. Peng, Q. Tang and X.-Z. Tang. An adaptive discontinuous Petrov-Galerkin method for the Grad-Shafranov equation , SIAM Journal on Scientific Computing , 42(5):B1227-B1249, 2020 .", "title": "2020"}, {"location": "publications/#2019", "text": "H. Hajduk, D. Kuzmin, Tz. Kolev, and R. Abgrall, Matrix-free subcell residual distribution for Bernstein finite elements: Low-order schemes and FCT , Comp. Meth. Appl. Mech. Eng. , (359) 112658, 2019 . K. Suzuki, M. Fujisawa, and M. Mikawa, Simulation Controlling Method for Generating Desired Water Caustics , 2019 International Conference on Cyberworlds (CW) , Kyoto, Japan, pp. 163-170, 2019 . D. White, Y. Choi, and J. Kudo, A dual mesh method with adaptivity for stress constrained topology optimization , Structural and Multidisciplinary Optimization , 61, pp. 749-762, 2019 . S. Watts, W. Arrighi, J. Kudo, D. A. Tortorelli, and D. A. White, Simple, accurate surrogate models of the elastic response of three-dimensional open truss micro-architectures with applications to multiscale topology design , Structural and Multidisciplinary Optimization , 60, pp. 1887-1920, 2019 . V. Dobrev, P. Knupp, Tz. Kolev, and V. Tomov, Towards Simulation-Driven Optimization of High-Order Meshes by the Target-Matrix Optimization Paradigm , 27th International Meshing Roundtable, Oct 1-8, 2018, Albuquerque , Lecture Notes in Computational Science and Engineering, 127, pp. 285-302, 2019 . J. Cerveny, V. Dobrev, and Tz. Kolev, Non-Conforming Mesh Refinement For High-Order Finite Elements , SIAM Journal on Scientific Computing , 41(4):C367-C392, 2019 . D. White, W. Arrighi, J. Kudo, and S. Watts, Multiscale topology optimization using neural network surrogate models , Comp. Meth. Appl. Mech. Eng. , 346, pp. 1118-1135, 2019 . V. A. Dobrev, T. V. Kolev, C. S. Lee, V. Z. Tomov, and P. S. Vassilevski, Algebraic Hybridization and Static Condensation with Application to Scalable H(div) Preconditioning , SIAM Journal on Scientific Computing , 41(3):B425-B447, 2019 . D. White, and A. Voronin, A computational study of symmetry and well-posedness of structural topology optimization , Structural and Multidisciplinary Optimization , 59(3), pp. 759-766, 2019 . T. Haut, P. Maginot, V. Tomov, B. Southworth, T. Brunner and T. Bailey, An Efficient Sweep-Based Solver for the SN Equations on High-Order Meshes , Nuclear Science and Engineering , 193(7):746-759, 2019 . V. Dobrev, P. Knupp, Tz. Kolev, K. Mittal, and V. Tomov, The Target-Matrix Optimization Paradigm For High-Order Meshes , SIAM Journal on Scientific Computing , 41(1):B50-B68, 2019 . K. L. A. Kirk, T. L. Horvath, A. Cesmelioglu, and S. Rhebergen, Analysis of a space-time hybridizable discontinuous Galerkin method for the advection-diffusion problem on time-dependent domains , SIAM Journal on Numerical Analysis , 57(4), pp. 1677-1696, 2019 . T. L. Horvath and S. Rhebergen, A locally conservative and energy-stable finite element method for the Navier-Stokes problem on time-dependent domains , International Journal for Numerical Methods in Fluids , 89(12):519-532, 2019 . R. Li, Y. Xi, L. Erlandson, and Y. Saad, The Eigenvalues Slicing Library (EVSL): Algorithms, Implementation, and Software , SIAM Journal on Scientific Computing , 41(4), pp. C393-C415, 2019 .", "title": "2019"}, {"location": "publications/#2018", "text": "H. 
Auten, The High Value of Open Source Software , Science & Technology Review , January/February 2018, pp. 5-11, 2018 . R. W. Anderson, V. A. Dobrev, Tz. V. Kolev, R. N. Rieben, and V. Z. Tomov, High-Order Multi-Material ALE Hydrodynamics , SIAM Journal on Scientific Computing , 40(1), pp. B32-B58, 2018 . A. T. Barker, V. Dobrev, J. Gopalakrishnan, and Tz. Kolev, A scalable preconditioner for a primal discontinuous Petrov-Galerkin method , SIAM Journal on Scientific Computing , 40(2), pp. A1187-A1203, 2018 . V. Dobrev, T. Kolev, D. Kuzmin, R. Rieben, and V. Tomov, Sequential limiting in continuous and discontinuous Galerkin methods for the Euler equations , Journal of Computational Physics , 356, pp. 372-390, 2018 . M. Reberol and B. L\u00e9vy, Computing the Distance between Two Finite Element Solutions Defined on Different 3D Meshes on a GPU , SIAM Journal on Scientific Computing , 40(1), pp. C131-C155, 2018 . A. Mazuyer, P. Cupillard, R. Giot, M. Conin, Y. Leroy, and P. Thore, Stress estimation in reservoirs using an integrated inverse method , Computers & Geosciences , 114, pp. 30-40, 2018 . J. Gopalakrishnan, M. Neum\u00fcller, and P. Vassilevski, The auxiliary space preconditioner for the de Rham complex , SIAM Journal on Numerical Analysis , 56(6), pp. 3196-3218, 2018 . D. A. White, M. Stowell, and D. A. Tortorelli, Topological optimization of structures using Fourier representations , Structural and Multidisciplinary Optimization , pp. 1-16, 2018 . S. Rhebergen and G. N. Wells, Preconditioning of a hybridized discontinuous Galerkin finite element method for the Stokes equations , Journal of Scientific Computing , 77(3), pp. 1936-1501, 2018 . T. S. Haut, P. G. Maginot, V. Z. Tomov, T. A. Brunner, and T. S. Bailey, An Efficient Sweep-based Solver for the $S_N$ Equations on High-Order Meshes , American Nuclear Society 2018 Annual Meeting, June 14-21, Philadelphia, PA , 2018 . A. S\u00e1nchez-Villar and M. Merino, Advances in Wave-Plasma Modelling in ECR Thrusters , 2018 Space Propulsion Conference, May 14-18, Seville, Spain , 2018 .", "title": "2018"}, {"location": "publications/#2017", "text": "S. Osborn, P. S. Vassilevski, and U. Villa, A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields , SIAM Journal on Scientific Computing , 39(5), pp. S543-S562, 2017 . R. D. Falgout, T. A. Manteuffel, B. O'Neill, and J. B. Schroder, Multigrid Reduction In Time For Nonlinear Parabolic Problems: A Case Study , SIAM Journal on Scientific Computing , 39(5), pp. S298-S322, 2017 . T. A. Manteuffel, L. N. Olson, J. B. Schroder, and B. S. Southworth, A Root-Node Based Algebraic Multigrid Method , SIAM Journal on Scientific Computing , 39(5), pp. S723-S756, 2017 . A. T. Barker, C. S. Lee, and P. S. Vassilevski, Spectral Upscaling for Graph Laplacian Problems with Application to Reservoir Simulation , SIAM Journal on Scientific Computing , 39(5), pp. S323-S346, 2017 . V. A. Dobrev, Tz. Kolev, N. A. Peterson, and J. B. Schroder, Two-level Convergence Theory For Multigrid Reduction In Time (MGRIT) , SIAM Journal on Scientific Computing , 39(5), pp. S501-S527, 2017 . R. E. Bank, P. S. Vassilevski, and L. T. Zikatanov, Arbitrary Dimension Convection-Diffusion Schemes For Space-Time Discretizations , Journal of Computational and Applied Mathematics , 310, pp. 19-31, 2017 . S. Osborn, P. Zulian, T. Benson, U. Villa, R. Krause, and P. S. 
Vassilevski, Scalable hierarchical PDE sampler for generating spatially correlated random fields using non-matching meshes , Numerical Linear Algebra with Applications , 25, pp. e2146, 2017 . J. H. Adler, I. Lashuk, and S. P. MacLachlan, Composite-grid multigrid for diffusion on the sphere , Numerical Linear Algebra with Applications , 25(1), pp. e2115, 2017 . S. Zampini, P. S. Vassilevski, V. Dobrev, and T. Kolev, Balancing Domain Decomposition by Constraints Algorithms for Curl-conforming Spaces of Arbitrary Order , Domain Decomposition Methods in Science and Engineering XXIV , 2017 . M. Larsen, J. Ahrens, U. Ayachit, E. Brugger, H. Childs, B. Geveci, and C. Harrison, The ALPINE In Situ Infrastructure: Ascending from the Ashes of Strawman , ISAV 2017: In Situ Infrastructures for Enabling Extreme-scale Analysis and Visualization , 2017 . J. Wright and S. Shiraiwa, Antenna to Core: A New Approach to RF Modelling , 22 Topical Conference on Radio-Frequency Power in Plasmas , 2017 . S. Shiraiwa, J. C. Wright, P. T. Bonoli, Tz. Kolev, and M. Stowell, RF wave simulation for cold edge plasmas using the MFEM library , 22 Topical Conference on Radio-Frequency Power in Plasmas , 2017 . C. Hofer, U. Langer, M. Neum\u00fcller, and I. Toulopoulos, Time-Multipatch Discontinuous Galerkin Space-Time Isogeometric Analysis of Parabolic Evolution Problems , RICAM-Report 2017-26 , 2017 . J. Billings, A. McCaskey, G. Vallee, and G. Watson, Will humans even write code in 2040 and what would that mean for extreme heterogeneity in computing? , arXiv:1712.00676 , 2017 . M. L. C. Christensen, U. Villa, A. Engsig-Karup, and P. S. Vassilevski, Numerical Multilevel Upscaling For Incompressible Flow in Reservoir Simulation: An Element-Based Algebraic Multigrid (AMGe) Approach , SIAM Journal on Scientific Computing , 39(1), pp. B102-B137, 2017 . R. Anderson, V. Dobrev, Tz. Kolev, D. Kuzmin, M. Q. de Luna, R. Rieben, and V. Tomov, High-order local maximum principle preserving (MPP) discontinuous Galerkin finite element method for the transport equation , Journal of Computational Physics , 334, pp. 102-124, 2017 . R. Li and Y. Saad, Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners , SIAM Journal on Matrix Analysis and Applications , 38(3), pp. 807-828, 2017 .", "title": "2017"}, {"location": "publications/#2016", "text": "D. Z. Kalchev, C. S. Lee, U. Villa, Y. Efendiev, and P. S. Vassilevski, Upscaling of Mixed Finite Element Discretization Problems by the Spectral AMGe Method , SIAM Journal on Scientific Computing , 38(5), pp. A2912-A2933, 2016 . V. A. Dobrev, Tz. V. Kolev, R. N. Rieben, and V. Z. Tomov, Multi-material closure model for high-order finite element Lagrangian hydrodynamics , International Journal for Numerical Methods in Fluids , 82(10), pp. 689-706, 2016 . J. Guermond, B. Popov, and V. Tomov, Entropy-viscosity method for the single material Euler equations in Lagrangian frame , Computer Methods in Applied Mechanics and Engineering , 300, pp. 402-426, 2016 . M. Holec, J. Limpouch, R. Liska, and S. Weber, High-order discontinuous Galerkin nonlocal transport and energy equations scheme for radiation hydrodynamics , International Journal for Numerical Methods in Fluids , 83(10), pp. 779-797, 2016 . Tz. V. Kolev, J. Xu, and Y. Zhu, Multilevel Preconditioners for Reaction-Diffusion Problems with Discontinuous Coefficients , Journal of Scientific Computing , 67(1), pp. 324-350, 2016 . M. Reberol and B. 
L\u00e9vy, Low-order continuous finite element spaces on hybrid non-conforming hexahedral-tetrahedral meshes , CoRR , abs/1605.02626, 2016 . O. Marques, A. Druinsky, X. S. Li, A. T. Barker, P. Vassilevski, and D. Kalchev, Tuning the Coarse Space Construction in a Spectral AMG Solver , Procedia Computer Science , 80, pp. 212-221, International Conference on Computational Science 2016, ICCS 2016, 6-8 June 2016, San Diego, California, USA, 2016 . J. S. Yeom, J. J. Thiagarajan, A. Bhatele, G. Bronevetsky, and T. Kolev, Data-Driven Performance Modeling of Linear Solvers for Sparse Matrices , 2016 7th International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS) , 2016 .", "title": "2016"}, {"location": "publications/#2015-and-earlier", "text": "D. Osei-Kuffuor, R. Li, and Y. Saad, Matrix Reordering Using Multilevel Graph Coarsening for ILU Preconditioning , SIAM Journal on Scientific Computing , 37(1), pp. A391-A419, 2015 . R. Anderson, V. Dobrev, Tz. Kolev, and R. Rieben, Monotonicity in high-order curvilinear finite element ALE remap , Int. J. Numer. Meth. Fluids , 77(5), pp. 249-273, 2014 . V. Dobrev, Tz. Kolev, and R. Rieben, High-order curvilinear finite element methods for elastic-plastic Lagrangian dynamics , J. Comp. Phys. , (257B), pp. 1062-1080, 2014 . P. Vassilevski and U. Villa, A mixed formulation for the Brinkman problem , SIAM Journal on Numerical Analysis , 52-1, pp. 258-281, 2014 . J. H. Adler and P. S. Vassilevski, Error Analysis for Constrained First-Order System Least-Squares Finite-Element Methods , SIAM Journal on Scientific Computing , 36(3), pp. A1071-A1088, 2014 . A. Aposporidis, P. S. Vassilevski, and A. Veneziani, Multigrid preconditioning of the non-regularized augmented Bingham fluid problem , ETNA. Electronic Transactions on Numerical Analysis , 41, 2014 . P. S. Vassilevski and U. M. Yang, Reducing communication in algebraic multigrid using additive variants , Numerical Linear Algebra with Applications , 21(2), pp. 275-296, 2014 . T. Dong, V. Dobrev, T. Kolev, R. Rieben, S. Tomov, and J. Dongarra, A Step towards Energy Efficient Computing: Redesigning a Hydrodynamic Application on CPU-GPU , 2014 IEEE 28th International Parallel and Distributed Processing Symposium , May 2014 . P. Vassilevski and U. Villa, A block-diagonal algebraic multigrid preconditioner for the Brinkman problem , SIAM Journal on Scientific Computing , 35-5, pp. S3-S17, 2013 . V. Dobrev, T. Ellis, Tz. Kolev, and R. Rieben, High-order curvilinear finite elements for axisymmetric Lagrangian hydrodynamics , Computers & Fluids , pp. 58-69, 2013 . D. Kalchev, C. Ketelsen, and P. S. Vassilevski, Two-level adaptive algebraic multigrid for sequence of problems with slowly varying random coefficients , SIAM Journal on Scientific Computing , 35(6), pp. B1215-B1234, 2013 . P. D'Ambra and P. S. Vassilevski, Adaptive AMG with coarsening based on compatible weighted matching , Computing and Visualization in Science , 16(2), pp. 59-76, 2013 . T. A. Brunner, T. V. Kolev, T. S. Bailey, and A. T. Till, Preserving Spherical Symmetry in Axisymmetric Coordinates for Diffusion , International Conference on Mathematics and Computational Methods Applied to Nuclear Science & Engineering , 2013 . Tz. Kolev and P. Vassilevski, Parallel auxiliary space AMG solver for H(div) problems , SIAM Journal on Scientific Computing , 34, pp. A3079-A3098, 2012 . V. Dobrev, Tz. Kolev, and R. 
Rieben, High-order curvilinear finite element methods for Lagrangian hydrodynamics , SIAM Journal on Scientific Computing , 34, pp. B606-B641, 2012 . I. Lashuk and P. Vassilevski, Element agglomeration coarse Raviart-Thomas spaces with improved approximation properties , Numerical Linear Algebra with Applications , 19, pp. 414-426, 2012 . D. Kalchev, Adaptive algebraic multigrid for finite element elliptic equations with random coefficients , LLNL Tech. Report , LLNL-TR-553254, 2012 . A. Aposporidis, P. Vassilevski, and A. Veneziani, A geometric nonlinear AMLI preconditioner for the Bingham fluid flow in mixed variables , LLNL Tech. Report , LLNL-JRNL-600372, 2012 . P. Knupp, Introducing the target-matrix paradigm for mesh optimization by node movement , Engineering with Computers , 28(4), pp. 419-429, 2012 . T. A. Brunner, Mulard: A Multigroup Thermal Radiation Diffusion Mini-Application , DOE Exascale Research Conference, Portland, Oregon , 2012 . A. Baker, R. Falgout, T. Kolev, and U. Yang, Multigrid smoothers for ultra-parallel computing , SIAM Journal on Scientific Computing , 33(5), pp. 2864-2887, 2011 . V. Dobrev, T. Ellis, Tz. Kolev, and R. Rieben, Curvilinear finite elements for Lagrangian hydrodynamics , Int. J. Numer. Meth. Fluids , 65, pp. 1295-1310, 2011 . V. Dobrev, J.-L. Guermond, and B. Popov, Surface reconstruction and image enhancement via L1-minimization , SIAM Journal on Scientific Computing , 32 (3), pp. 1591-1616, 2010 . J. Brannick and R. Falgout, Compatible relaxation and coarsening in algebraic multigrid , SIAM Journal on Scientific Computing , 32, pp. 1393-1416, 2010 . A. Baker, Tz. Kolev, and U. M. Yang, Improving algebraic multigrid interpolation operators for linear elasticity problems , Numerical Linear Algebra with Applications , 17, pp. 495-517, 2010 . U. M. Yang, On long-range interpolation operators for aggressive coarsening , Numerical Linear Algebra with Applications , 17, pp. 453-472, 2010 . Tz. Kolev and P. Vassilevski, Parallel auxiliary space AMG for H(curl) problems , Journal of Computational Mathematics , 27, pp. 604-623, 2009 . Tz. V. Kolev and R. N. Rieben, A tensor artificial viscosity using a finite element approach , Journal of Computational Physics , 228(22), pp. 8336 - 8366, 2009 . A. Baker, E. Jessup, and Tz. Kolev, A simple strategy for varying the restart parameter in GMRES(m) , J. Comp. Appl. Math. , 230, pp. 751-761, 2009 . Tz. Kolev, J. Pasciak, and P. Vassilevski, H(curl) auxiliary mesh preconditioning , Numerical Linear Algebra with Applications , 15, pp. 455-471, 2008 . H. De Sterck, R. Falgout, J. Nolting, and U. M. Yang, Distance-two interpolation for parallel algebraic multigrid , Numerical Linear Algebra with Applications , 15, pp. 115-139, 2008 . V. Dobrev, R. Lazarov, and L. Zikatanov, Preconditioning of symmetric interior penalty discontinuous Galerkin FEM for second order elliptic problems , in Domain Decomposition Methods in Science and Engineering XVII, Lecture Notes in Computational Science and Engineering, vol. 60, U. Langer et al. eds, Springer-Verlag, Berlin, Heidelberg, pp. 33-44, 2008 . D. Alber and L. Olson, Parallel coarse grid selection , Numerical Linear Algebra with Applications , 14, pp. 611-643, 2007 . V. Dobrev, R. Lazarov, P. Vassilevski, and L. Zikatanov, Two-level preconditioning of discontinuous Galerkin approximations of second-order elliptic equations , Numerical Linear Algebra with Applications , 13 (9), pp. 753-770, 2006 . Tz. Kolev and P. 
Vassilevski, AMG by element agglomeration and constrained energy minimization interpolation , Numerical Linear Algebra with Applications , 13, pp. 771-788, 2006 . J. Bramble, Tz. Kolev, and J. Pasciak, A least-squares approximation method for the time-harmonic Maxwell equations , Journal of Numerical Mathematics , 13(4), pp. 237-263, 2005 . P. Vassilevski, Sparse matrix element topology with application to AMG(e) and preconditioning , Numerical Linear Algebra with Applications , 9, pp. 429-444, 2002 . MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "2015 and earlier"}, {"location": "seminar/", "text": "FEM@LLNL Seminar Series The FEM@LLNL seminar series is focused on finite element research and applications talks of interest to the MFEM community. Videos will be added to a YouTube playlist as well as this site's videos page . Sign-Up Fill in this form to sign-up for future FEM@LLNL seminar announcements. Next Talk TBD TBD 9am PDT Webex Abstract: TBD Previous Talks Pablo Brubeck (University of Oxford) FIAT: from basis functions to efficient finite element solvers November 12, 2024 Slides Talk Recording Abstract: The FInite element Automatic Tabulator (FIAT) is a powerful Python library for tabulating basis functions. In this talk, we present two major recent developments in FIAT. First, we have extended the FIAT abstraction to natively support macroelements. Macroelements offer conforming discretizations with highly desirable properties, such as divergence-free vector fields, and divergence-conforming symmetric tensors with low-order polynomial degrees. Elements implemented include the Hsieh-Clough-Tocher macroelement for biharmonic problems, the divergence-free, H1-conforming, inf-sup stable Guzm\u00e1n-Neilan macroelement for Stokes, and the Johnson-Mercier macroelement for strongly-symmetric, H(div)-conforming stresses in solid mechanics. We also improved the performance of tabulation and quadrature for simplicial high-order elements, and introduced novel basis functions, leading to solvers with better complexity in polynomial degree. Inspired by the fast diagonalization method, we define new degrees of freedom on simplices as moments against a numerically-computed orthogonal polynomial basis to decouple element interiors in the stiffness matrix. We exploit this decoupling in a domain decomposition method with vertex or edge subdomains on the interface degrees of freedom, and Jacobi relaxation for the interior degrees of freedom. This enables fast solvers for high-order discretizations of the Riesz maps of the spaces of the de Rham complex (Lagrange, N\u00e9d\u00e9lec, Raviart-Thomas, and Brezzi-Douglas-Marini). For each case, we illustrate the performance gains with numerical examples in Firedrake. Denis Ridzal (Sandia National Laboratories) R-Adaptive Mesh Optimization to Enhance Finite Element Basis Compression October 15, 2024 Slides Talk Recording Abstract: Modern computing systems are capable of exascale calculations. While these systems continue to grow in processing power, the available system memory has not increased commensurately. A predominant approach to limit the memory usage in large-scale applications is to exploit the abundant processing power and continually recompute many low-level simulation quantities, rather than storing them. However, this approach can adversely impact the throughput of the simulation and diminish the benefits of modern computing architectures. 
We present two novel contributions to reduce the memory burden while maintaining performance in simulations based on finite element discretizations. The first contribution develops dictionary-based data compression schemes that detect and exploit the structure of the discretization, due to redundancies across the finite element mesh. These schemes are shown to reduce the memory requirements of key computational kernels by more than 99% on meshes with large numbers of nearly identical mesh cells. For applications where this structure does not exist, our second contribution leverages a recently developed augmented Lagrangian sequential quadratic programming algorithm to enable r-adaptive mesh optimization, with the goal of enhancing redundancies in the mesh. Numerical results demonstrate the effectiveness of the proposed methods to detect, exploit and enhance mesh structure on examples inspired by large-scale applications. Daniele Panozzo (Courant Institute, NYU) Geometric Predicates for Unconditionally Robust Elastodynamics Simulation October 1, 2024 Slides Talk Recording Abstract: The numerical solution of partial differential equations (PDE) is ubiquitously used for physical simulation in scientific computing and engineering. Ideally, a PDE solver should be opaque: the user provides as input the domain boundary, boundary conditions, and the governing equations, and the code returns an evaluator that can compute the value of the solution at any point of the input domain. This is surprisingly far from being the case for all existing open-source or commercial software, despite the research efforts in this direction and the large academic and industrial interest. To a large extent, this is due to lack of robustness in geometric algorithms used to create the discretization, detect collisions, and evaluate element validity. I will present the incremental potential contact simulation paradigm, which provides strong robustness guarantees in simulation codes, ensuring, for the first time, validity of the trajectories accounting for floating point rounding errors over an entire elastodynamic simulation with contact. A core part of this approach is the use of a conservative line-search to check for collisions between geometric primitives and for ensuring validity of the deforming elements over linear trajectories. I will discuss both problems in depth, showing that SOTA approaches favor numerical efficiency but are unfortunately not robust to floating point rounding, leading to major failures in simulation. I will then present an alternative approach based on judiciously using rational and interval types to ensure provable correctness, while keeping a running time comparable with non-conservative methods. To conclude, I will discuss a set of applications enabled by this approach in microscopy and biomechanics, including traction force estimation on a live zebrafish and efficient modeling and simulation of fibrous materials. Rub\u00e9n Sevilla (Swansea University) Mesh Generation and Adaptation using Green AI September 17, 2024 Slides Talk Recording Abstract: Most methods used to solve partial differential equations require creating a mesh that represents the model's geometry. Today, unstructured mesh technology is widely used, allowing three-dimensional meshes with hundreds of millions of elements to be generated in just a few minutes. However, when optimising a design, many simulations are needed for different operating conditions and geometric configurations. 
Creating the best mesh for each setup becomes time-consuming due to the requirement of excessive human intervention and expertise. This talk will cover our recent work on using artificial intelligence to predict near-optimal meshes suitable for simulations. The main idea is to take advantage of the large amount of data that already exists in the industry to improve the selection of a suitable spacing function, including anisotropic spacing. The proposed approach aims to use knowledge from previous simulations to guide the mesh generation process. I will assess the proposed method based on the accuracy of the predictions, efficiency, and environmental impact. This includes considering the carbon footprint and energy consumption of the computations required for a parametric CFD analysis under different flow conditions and angles of attack. For transient problems, the use of high order methods provides several advantages due to the low dissipation and dispersion errors associated with these schemes. An attractive approach to simulate these problems is to incorporate degree adaptive schemes to enhance the approximation only where needed. In this talk I will also present our recent work on using artificial intelligence to aid a degree adaptive process. Esteban Ferrer and David Huergo (Universidad Polit\u00e9cnica de Madrid) New Avenues in High Order Fluid Dynamics September 3, 2024 Slides Talk Recording Abstract: We present the latest developments of our High-Order Spectral Element Solver (HORSES3D), an open source high-order discontinuous Galerkin framework capable of solving a variety of flow applications, including compressible flows (with or without shocks), incompressible flows, various RANS and LES turbulence models, particle dynamics, multiphase flows, and aeroacoustics [ 1 ]. Recent developments allow us to simulate challenging multiphysics including turbulent flows, multiphase and moving bodies, using local h and p-adaption. In addition, we present recent work that couples Machine Learning and reinforcement learning techniques with high order simulations. Patrick Farrell (University of Oxford) Designing conservative and accurately dissipative numerical integrators in time July 30, 2024 Slides Talk Recording Abstract: Numerical methods for the simulation of transient systems with structure-preserving properties are known to exhibit greater accuracy and physical reliability, in particular over long durations. These schemes are often built on powerful geometric ideas for broad classes of problems, such as Hamiltonian or reversible systems. However, there remain difficulties in devising higher-order-in-time structure-preserving discretizations for nonlinear problems, and in conserving non-polynomial invariants. In this work we propose a new, general framework for the construction of structure-preserving timesteppers via finite elements in time and the systematic introduction of auxiliary variables. The framework reduces to Gauss methods where those are structure-preserving, but extends to generate arbitrary-order structure-preserving schemes for nonlinear problems, and allows for the construction of schemes that conserve multiple higher-order invariants. 
We demonstrate the ideas by devising novel schemes that exactly conserve all known invariants of the Kepler and Kovalevskaya problems, arbitrary-order schemes for the compressible Navier\u2013Stokes equations that conserve mass, momentum, and energy, and provably dissipate entropy, and multi-conservative schemes for the Benjamin-Bona-Mahony equation. Gonzalo de Diego (Courant Institute) Numerical Solvers for Viscous Contact Problems in Glaciology May 6, 2024 Slides Talk Recording Abstract: Viscous contact problems are time-dependent viscous flow problems where the fluid is in contact with a solid surface from which it can detach and reattach. Over sufficiently long timescales, ice is assumed to flow like a viscous fluid with a nonlinear rheology. Therefore, certain phenomena in glaciology, like the formation of subglacial cavities in the base of an ice sheet or the dynamics of marine ice sheets (continental ice sheets that slide into the ocean and go afloat at a grounding line, detaching from the bedrock), can be modelled as viscous contact problems. In particular, these problems can be described by coupling the Stokes equations with contact boundary conditions to free boundary equations that evolve the ice domain in time. In this talk, I will describe the difficulties that arise when attempting to solve this system numerically and I will introduce a method that is capable of overcoming them. Nat Trask (University of Pennsylvania) A Data Driven Finite Element Exterior Calculus April 2, 2024 Slides Talk Recording Abstract: Despite the recent flurry of work employing machine learning to develop surrogate models to accelerate scientific computation, the \"black-box\" underpinnings of current techniques fail to provide the verification and validation guarantees provided by modern finite element methods. In this talk we present a data-driven finite element exterior calculus for developing reduced-order models of multiphysics systems when the governing equations are either unknown or require closure. The framework employs deep learning architectures typically used for logistic classification to construct a trainable partition of unity which provides notions of control volumes with associated boundary operators. This alternative to a traditional finite element mesh is fully differentiable and allows construction of a discrete de Rham complex with a corresponding Hodge theory. We demonstrate how models may be obtained with the same robustness guarantees as traditional mixed finite element discretization, with deep connections to contemporary techniques in graph neural networks. For applications developing digital twins where surrogates are intended to support real time data assimilation and optimal control, we further develop the framework to support Bayesian optimization of unknown physics on the underlying adjacency matrices of the chain complex. By framing the learning of fluxes via an optimal recovery problem with a computationally tractable posterior distribution, we are able to develop models with intrinsic representations of epistemic uncertainty. William Moses (University of Illinois Urbana-Champaign) Supercharging Programming Through Compiler Technology March 14, 2024 Slides Talk Recording Abstract: The decline of Moore's law and an increasing reliance on computation has led to an explosion of specialized software packages and hardware architectures. 
While this diversity enables unprecedented flexibility, it also requires domain-experts to learn how to customize programs to efficiently leverage the latest platform-specific API's and data structures, instead of working on their intended problem. Rather than forcing each user to bear this burden, I propose building high-level abstractions within general-purpose compilers that enable fast, portable, and composable programs to be automatically generated. This talk will demonstrate this approach through compilers that I built for two domains: automatic differentiation and parallelism. These domains are critical to both scientific computing and machine learning, forming the basis of neural network training, uncertainty quantification, and high-performance computing. For example, a researcher hoping to incorporate their climate simulation into a machine learning model must also provide a corresponding derivative simulation. My compiler, Enzyme, automatically generates these derivatives from existing computer programs, without modifying the original application. Moreover, operating within the compiler enables Enzyme to combine differentiation with program optimization, resulting in asymptotically and empirically faster code. Looking forward, this talk will also touch on how this domain-agnostic compiler approach can be applied to new directions, including probabilistic programming. Sungho Lee (University of Memphis) LAGHOST: Development of Lagrangian High-Order Solver for Tectonics March 5, 2024 Slides Talk Recording Abstract: Long-term geological and tectonic processes associated with large deformation highlight the importance of using a moving Lagrangian frame. However, modern advancements in the finite element method, such as MPI parallelization, GPU acceleration, high-order elements, and adaptive grid refinement for tectonics based on this frame, have not been updated. Moreover, the existing solvers available in open access suffer from limited tutorials, a poor user manual, and several dependencies that make model building complex. These limitations can discourage both new users and developers from utilizing and improving these models. As a result, we are motivated to develop a user-friendly, Lagrangian thermo-mechanical numerical model that incorporates visco-elastoplastic rheology to simulate long-term tectonic processes like mountain building, mantle convection and so on. We introduce an ongoing project called LAGHOST (Lagrangian High-Order Solver for Tectonics), which is an MFEM-based tectonic solver. LAGHOST expands the capabilities of MFEM's LAGHOS mini-app. Currently, our solver incorporates constitutive equation, body force, mass scaling, dynamic relaxation, Mohr-Coulomb plasticity, plastic softening, Winkler foundation, remeshing, and remapping. To evaluate LAGHOST, we conducted four benchmark tests. The first test involved compressing an elastic box at a constant velocity, while the second test focused on the compaction of a self-weighted elastic column. To enable larger time-step sizes and achieve quasi-static solutions in the benchmarks, we introduced a fictitious density and implemented dynamic relaxation. This involved scaling the density factor and introducing a portion of force component opposing the previous velocity direction at nodal points. Our results exhibited good agreement with analytical solutions. Subsequently, we incorporated Mohr-Coulomb plasticity, a reliable model for predicting rock failure, into LAGHOST. 
We revisited the elastic box benchmark and considered plastic materials. By considering stress correction arising from plastic yielding, we confirmed that the updated solution from elastic guess aligned with the analytical solution. Furthermore, we applied LAGHOST to simulate the evolution of a normal fault, a significant tectonic phenomenon. To model normal fault evolution, we introduced strain softening on cohesion as the dominant factor based on geological evidence. Our simulations successfully captured the normal fault's evolution, with plastic strain localizing at shallow depths before propagating deeper. The fault angle reached approximately 60 degrees, in line with the Mohr-Coulomb failure theory. Kevin Chung (LLNL) Data-Driven DG FEM Via Reduced Order Modeling and Domain Decomposition February 6, 2024 Slides Talk Recording Abstract: Numerous cutting-edge scientific technologies originate at the laboratory scale, but transitioning them to practical industry applications can be a formidable challenge. Traditional pilot projects at intermediate scales are costly and time-consuming. Alternatives such as E-pilots can rely on high-fidelity numerical simulations, but even these simulations can be computationally prohibitive at larger scales. To overcome these limitations, we propose a scalable, component reduced order model (CROM) method. We employ Discontinuous Galerkin Domain Decomposition (DG-DD) to decompose the physics governing equation for a large-scale system into repeated small-scale unit components. Critical physics modes are identified via proper orthogonal decomposition (POD) from small-scale unit component samples. The computationally expensive, high-fidelity discretization of the physics governing equation is then projected onto these modes to create a reduced order model (ROM) that retains essential physics details. The combination of DG-DD and POD enables ROMs to be used as building blocks comprised of unit components and interfaces, which can then be used to construct a global large-scale ROM without data at such large scales. This method is demonstrated on the Poisson and Stokes flow equations, showing that it can solve equations about 15\u221240 times faster with only \u223c 1% relative error, even at scales 1000 times larger than the unit components. This research is ongoing, with efforts to apply these methods to more complex physics such as Navier-Stokes equation, highlighting their potential for transitioning laboratory-scale technologies to practical industrial use. Brian Young A Full-Wave Electromagnetic Simulator for Frequency-Domain S-Parameter Calculations January 9, 2024 Slides Talk Recording Abstract: An open-source and free full-wave electromagnetic simulator is presented that addresses the engineering community\u2019s need for the calculation of frequency-domain S-parameters. Two-dimensional port simulations are used to excite the 3D space and to extract S-parameters using modal projections. Matrix solutions are performed using complex computations. Features enabled by the MFEM library include adaptive mesh refinement, arbitrary order finite elements, and parallel processing using MPI. Implementation details are presented along with sample results and accuracy demonstrations. 
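To make the POD/Galerkin-projection step in the CROM/DG-DD abstract above more concrete, here is a minimal NumPy sketch of how dominant modes might be extracted from snapshot data and used to build a reduced operator. The array names, sizes, and the stand-in operator are illustrative assumptions for this sketch only, not the actual CROM implementation described in the talk.

```python
import numpy as np

# Illustrative snapshot matrix: each column is one sampled unit-component
# solution (n_dof unknowns, n_snap samples). Values here are synthetic.
rng = np.random.default_rng(0)
n_dof, n_snap = 2000, 50
X = rng.standard_normal((n_dof, n_snap))

# POD via thin SVD: left singular vectors, ordered by energy content.
U, s, _ = np.linalg.svd(X, full_matrices=False)

# Keep enough modes to capture ~99.9% of the snapshot "energy".
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1
V = U[:, :r]                      # reduced basis, n_dof x r

# Galerkin projection of a (synthetic) high-fidelity system A u = b
# onto the reduced basis: A_r = V^T A V, b_r = V^T b.
A = np.diag(np.linspace(1.0, 10.0, n_dof))   # stand-in for an assembled matrix
b = rng.standard_normal(n_dof)
A_r = V.T @ A @ V
b_r = V.T @ b
u_r = np.linalg.solve(A_r, b_r)   # reduced solve: r x r instead of n_dof x n_dof
u_approx = V @ u_r                # lift the reduced solution back to the full space
```

In the component-ROM setting described in the abstract, such bases would be built per unit component and coupled through interface terms; this sketch only shows the generic offline (basis construction) and online (reduced solve) split.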
Jesse Chan (Rice University) High order positivity-preserving entropy stable discontinuous Galerkin discretizations December 5, 2023 Slides Talk Recording Abstract: High order discontinuous Galerkin (DG) methods provide high order accuracy and geometric flexibility, but are known to be unstable when applied to nonlinear conservation laws whose solutions exhibit shocks and under-resolved solution features. Entropy stable schemes improve robustness by ensuring that physically relevant solutions satisfy a semi-discrete cell entropy inequality independently of numerical resolution and solution regularization while retaining formal high order accuracy. In this talk, we will review the construction of entropy stable high order discontinuous Galerkin methods and describe approaches for enforcing that solutions are \"physically relevant\" (i.e., the thermodynamic variables remain positive). Youngsoo Choi (Lawrence Livermore National Laboratory) Physics-guided interpretable data-driven simulations November 14, 2023 Slides Talk Recording Abstract: A computationally demanding physical simulation often presents a significant impediment to scientific and technological progress. Fortunately, recent advancements in machine learning (ML) and artificial intelligence have given rise to data-driven methods that can expedite these simulations. For instance, a well-trained 2D convolutional deep neural network can provide a 100,000-fold acceleration in solving complex problems like Richtmyer-Meshkov instability [ 1 ]. However, conventional black-box ML models lack the integration of fundamental physics principles, such as the conservation of mass, momentum, and energy. Consequently, they often run afoul of critical physical laws, raising concerns among physicists. These models attempt to compensate for the absence of physics information by relying on vast amounts of data. Additionally, they suffer from various drawbacks, including a lack of structure-preservation, computationally intensive training phases, reduced interpretability, and susceptibility to extrapolation issues. To address these shortcomings, we propose an approach that incorporates physics into the data-driven framework. This integration occurs at different stages of the modeling process, including the sampling and model-building phases. A physics-informed greedy sampling procedure minimizes the necessary training data while maintaining target accuracy [ 2 ]. A physics-guided data-driven model not only preserves the underlying physical structure more effectively but also demonstrates greater robustness in extrapolation compared to traditional black-box ML models. We will showcase numerical results in areas such as hydrodynamics [ 3 , 4 ], particle transport [ 5 ], plasma physics, pore-collapse, and 3D printing to highlight the efficacy of these data-driven approaches. The advantages of these methods will also become apparent in multi-query decision-making applications, such as design optimization [ 6 , 7 ]. Ben Southworth (Los Alamos National Laboratory) Superior discretizations and AMG solvers for extremely anisotropic diffusion via hyperbolic operators October 17, 2023 Slides Talk Recording Abstract: Heat conduction in magnetic confinement fusion can reach anisotropy ratios of 10^9-10^10, and in complex problems the direction of anisotropy may not be aligned with (or is impossible to align with) the spatial mesh. Such problems pose major challenges for both discretization accuracy and efficient implicit linear solvers. 
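For context on the extreme-anisotropy regime described above, the heat-conduction operator is commonly written with a unit vector b along the magnetic field, splitting the conductivity into parallel and perpendicular parts (a standard textbook form, with notation chosen here for illustration rather than taken from the talk):

```latex
-\,\nabla \cdot \Big( \big[\, \kappa_\parallel\, \mathbf{b}\mathbf{b}^{T}
  + \kappa_\perp \big( \mathbf{I} - \mathbf{b}\mathbf{b}^{T} \big) \,\big] \nabla T \Big) = f,
\qquad \frac{\kappa_\parallel}{\kappa_\perp} \sim 10^{9}\text{--}10^{10}.
```

When b is not aligned with the mesh, the very large parallel conductivity can contaminate the perpendicular direction numerically, which is what makes both accurate discretization and efficient implicit solvers difficult in this regime.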
Although the underlying problem is elliptic or parabolic in nature, we argue that the problem is better approached from the perspective of hyperbolic operators. The problem is posed in a directional gradient first order formulation, introducing a directional heat flux along magnetic field lines as an auxiliary variable. We then develop novel continuous and discontinuous discretizations of the mixed system, using stabilization techniques developed for hyperbolic problems. The resulting block matrix system is then reordered so that the advective operators are on the diagonal, and the system is solved using AMG based on approximate ideal restriction (AIR), which is particularly efficient for upwind discretizations of advection. Compared with traditional discretizations and AMG solvers, we achieve orders of magnitude reduction in error and AMG iterations in the extremely anisotropic regime. Natasha Sharma (University of Texas at El Paso) A Continuous Interior Penalty Method Framework for Sixth Order Cahn-Hilliard-type Equations with applications to microstructure evolution and microemulsions July 18, 2023 Slides Talk Recording Abstract: The focus of this talk is on presenting unconditionally stable, uniquely solvable, and convergent numerical methods to solve two classes of the sixth-order Cahn-Hilliard-type equations. The first class arises as the so-called phase field crystal atomistic model of crystal growth, which has been employed to simulate a number of physical phenomena such as crystal growth in a supercooled liquid, crack propagation in ductile material, dendritic and eutectic solidification. The second class, henceforth referred to as Microemulsion systems (ME systems) appears as a model capturing the dynamics of phase transitions in ternary oil-water-surfactant systems in which three phases namely a microemulsion, almost pure oil, and almost pure water can coexist in equilibrium. ME systems have several applications ranging from enhanced oil recovery to the development of environmentally friendly solvents and drug delivery systems. Despite the widespread applications of these models, the major challenge impeding their use has been and continues to be a lack of understanding of the complex systems which they model. Thus, building computational models for these systems is crucial to the understanding of these systems. The presence of the higher order derivative in combination with a time-dependent process poses many challenges to the creation of stable, convergent, and efficient numerical methods approximating solutions to these equations. In this talk, we present a continuous interior penalty Galerkin framework for solving these equations and theoretically establish the desirable properties of stability, unique solvability, and first-order convergence. We close the talk by presenting the numerical results of some benchmark problems to verify the practical performance of the proposed approach and discuss some exciting current and future applications. Freddie Witherden (Texas A&M University) FSSpMDM \u2014 Accelerating Small Sparse Matrix Multiplications by Run-Time Code Generation June 20, 2023 Slides Talk Recording Abstract: Small matrix multiplications are a key building block of modern high-order finite element method solvers. Such multiplications describe the act of applying a specific finite element operator onto a set of state vectors. The small and irregular size of these multiplications makes them poor candidates for generic matrix multiplication routines. 
Moreover, for elements with a tensor product construction, the operators themselves can exhibit a significant degree of sparsity. In this talk, I will describe the code generation strategies employed by our Fixed Size Sparse Matrix-Dense Matrix (FSSpMDM) routine in libxsmm and show how these result in performant operator kernels for prismatic and hexahedral elements. Strategies will be described for both x86-64 (AVX2/AVX-512) and AARCH64 (NEON/SVE) instruction sets. Results will be presented on recent Intel and Apple CPUs and compared against the well-known GiMMiK C code generation library. Frank Giraldo (Naval Postgraduate School) Using High-Order Element-Based Galerkin Methods to Capture Hurricane Intensification May 16, 2023 Slides Talk Recording Abstract: Properly capturing hurricane rapid intensification (where the winds increase by 30 knots in the first 24 hours) remains challenging for atmospheric models. The reason is that we need LES-type scales \ud835\udcaa(100m), which are still elusive due to the computational cost. In this talk, I describe the work that we are doing in this area and how element-based Galerkin methods are being used to approximate spatial derivatives. I will also discuss the time-integration strategy that we are exploring for this class of problems. In particular, we are exploring process Multirate methods, whereby each process in a system of nonlinear partial differential equations (PDEs) uses a time-integrator and time-step commensurate with the wave speed of that process. We have constructed Multirate methods of any order using extrapolation methods. Along this same idea, we have also developed a multi-modeling framework (MMF) designed to replace the physical parameterizations used in weather/climate models. Our approach is to view the coarse-scale and fine-scale models through the lens of Variational Multi-Scale (VMS) methods in order to give MMF a more rigorous mathematical foundation. Our end goal is to use MMF in order to better resolve the inner core of hurricanes. In addition, I will show some results using flux differencing discontinuous Galerkin methods for constructing both Kinetic Energy Preserving and Entropy Stable methods and discuss why we need scalable models in order to achieve our goals. Our model, NUMA, is a 3D nonhydrostatic atmospheric model that runs on large CPU clusters and on GPUs. Leszek F. Demkowicz (University of Texas at Austin) Full Envelope DPG Approximation for Electromagnetic Waveguides. Stability and Convergence Analysis April 25, 2023 Slides Talk Recording Abstract: The presented work started with a convergence and stability analysis for the so-called full envelope approximation used in analyzing optical amplifiers (lasers). The specific problem of interest was the simulation of Transverse Mode Instabilities (TMI). The problem translates into the solution of a system of two nonlinear time-harmonic Maxwell equations coupled with a transient heat equation. Simulation of a 1 m long fiber involves the resolution of 10 M wavelengths. A superefficient MPI/openMP hp FE code run on a supercomputer gets you to the range of ten thousand wavelengths. The resolution of the remaining factor of a thousand in wavelengths is achieved using an exponential ansatz e^{ikz} in the z-coordinate. This results in a non-standard Maxwell problem. The stability and convergence analysis for the problem has been restricted to the linear case only.
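As an aside for readers unfamiliar with the envelope idea just mentioned: the ansatz factors the unknown field into a fast carrier oscillation along the fiber axis and a slowly varying envelope (the notation below is generic and chosen for illustration, not taken from the talk),

```latex
\mathbf{E}(x,y,z) \;=\; \tilde{\mathbf{E}}(x,y,z)\, e^{ikz},
```

so the discretization only needs to resolve the slowly varying envelope rather than every optical wavelength along the fiber.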
It turns out that the modified Maxwell problem is stable if and only if the original waveguide problem is stable and the boundedness below stability constants are identical. We have converged to the task of determining the boundedness below constant. The stability analysis started with an easier, acoustic waveguide problem. Separation of variables leads to an eigenproblem for a self-adjoint operator in the transverse plane (in x,y). Expansion of the solution in terms of the corresponding eigenvectors leads then to a decoupled system of ODEs, and a stability analysis for a two-point BVP for an ODE parametrized with the corresponding eigenvalues. The L^2-orthogonality of the eigenmodes and the stability result for a single mode, lead then to the final result: the inverse boundedness below constant depends inversely linearly upon the length L of the waveguide. The corresponding stability for the Maxwell waveguide turned out to be unexpectedly difficult. The equation is vector-valued so a direct separation of variables is out to begin with. An exponential ansatz in z leads to a non-standard eigenproblem involving an operator that is non-self adjoint even for the easiest, homogeneous case. The answer to the problem came from a tricky analysis of the eigenproblem combined with the perturbation technique for perturbed self-adjoint operators. The use of perturbation theory requires an assumption on the smallness of perturbation of the dielectric constant (around a constant value) but with no additional assumptions on its differentiability (discontinuities are allowed). In the end, the final result is similar to that for the acoustic waveguide - the boundedness below constant depends inversely linearly on L. Joachim Sch\u00f6berl (Vienna University of Technology) The Netgen/NGSolve Finite Element Software March 28, 2023 Slides Talk Recording Abstract: In this talk we give an overview of the open source finite element software Netgen/NGSolve, available from www.ngsolve.org . We show how to setup various physical models using FEniCS-like Python scripting. We discuss how we use NGSolve for teaching finite element methods, and how recent research projects have contributed to the further development of the NGSolve software. Some recent highlights are matrix-valued finite elements with applications in elasticity, fluid dynamics, and numerical relativity. We show how the recently extended framework of linear operators allows the utilization of GPUs for linear solvers, as well as time-dependent problems. Vikram Gavini (University of Michigan) Fast, Accurate and Large-scale Ab-initio Calculations for Materials Modeling March 7, 2023 Slides Talk Recording Abstract: Electronic structure calculations, especially those using density functional theory (DFT), have been very useful in understanding and predicting a wide range of materials properties. The importance of DFT calculations to engineering and physical sciences is evident from the fact that ~20% of computational resources on some of the world's largest public supercomputers are devoted to DFT calculations. Despite the wide adoption of DFT, the state-of-the-art implementations of DFT suffer from cell-size and geometry limitations, with the widely used codes in solid state physics being limited to periodic geometries and typical simulation domains containing a few hundred atoms. 
This talk will present our recent advances towards the development of computational methods and numerical algorithms for conducting fast and accurate large-scale DFT calculations using adaptive finite-element discretization, which form the basis for the recently released DFT-FE open-source code. Details of the implementation, including mixed precision algorithms and asynchronous computing, will be presented. The computational efficiency, scalability and performance of DFT-FE will be discussed, demonstrating a significant outperformance of widely used plane-wave DFT codes. Som Dutta (Utah State University) Quantifying the Potential of Covid-19 Transmission Across Scales: Using SEM based Navier-Stokes solver and the CEAT February 7, 2023 Slides Talk Recording Abstract: The ongoing Covid-19 pandemic has redefined our understanding of respiratory infectious disease transmission. The primary modes of transmission of the SARS-CoV-2 virus have been identified as airborne, with human-generated respiratory aerosols being the main carrier of the virus. Understanding the dispersion of these aerosols/droplets generated during speaking and coughing has helped quantify the potential for transmission and design effective mitigation strategies. Through my talk, I will present how models at two ends of the spatio-temporal resolution spectrum helped quantify the physics and helped NASA Ames administrators design mitigation strategies. For the higher spatio-temporal resolution, I will illustrate how the high-order SEM based Navier-Stokes solver Nek5000/NekRS was utilized to develop the models, including algorithms developed through CEED. I will present the two main modes of respiratory aerosol/droplet dispersal indoors, first at a shorter time-scale through expiratory events like coughing, and second at a longer time-scale through the flow and turbulence induced by the room ventilation system. At the other end of the spatio-temporal resolution, I will talk briefly about the Covid-19 Exposure Assessment Tool (CEAT), a novel tool developed to account for multiple factors that affect transmission. I will end my talk by briefly discussing how we can bridge the scales and heterogeneities in the problem with the aid of cutting-edge computing and data-driven methods, so that we are fully prepared for the next pandemic. The work presented here has been facilitated by funding through DOE's National Virtual Biotechnology Laboratory (NVBL). Stefan Henneking (University of Texas at Austin) Bayesian Inversion of an Acoustic-Gravity Model for Predictive Tsunami Simulation January 10, 2023 Slides Talk Recording Abstract: To improve tsunami preparedness, early-alert systems and real-time monitoring are essential. We use a novel approach for predictive tsunami modeling within the Bayesian inversion framework. This effort focuses on informing the immediate response to an occurring tsunami event using near-field data observations. Our forward model is based on a coupled acoustic-gravity model (e.g., Lotto and Dunham, Comput Geosci (2015) 19:327-340). Similar to other tsunami models, our forward model relies on transient boundary data describing the location and magnitude of the seafloor deformation. In a real-time scenario, these parameter fields must be inferred from a variety of measurements, including observations from pressure gauges mounted on the seafloor.
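As a schematic reminder of the Bayesian inversion framework referred to above (the notation is generic and chosen here for illustration, not taken from the talk): with m the unknown seafloor-deformation parameters, F the acoustic-gravity parameter-to-observable map, and d the recorded pressure data, the posterior combines a Gaussian data misfit with a prior,

```latex
\pi_{\mathrm{post}}(m \mid d) \;\propto\;
\exp\!\left( -\tfrac{1}{2}\, \big\| F(m) - d \big\|^{2}_{\Gamma_{\mathrm{noise}}^{-1}} \right)
\pi_{\mathrm{prior}}(m).
```

Each likelihood evaluation requires a full space-time forward solve, which is why compact representations of the parameter-to-observable map are emphasized for real-time use.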
One particular difficulty of this inference problem lies in the accurate inversion from sparse pressure data recorded in the near-field where strong hydroacoustic waves propagate in the compressible ocean; these acoustic waves complicate the task of estimating the hydrostatic pressure changes related to the forming surface gravity wave. Our space-time model is discretized with finite elements in space and finite differences in time. The forward model incurs a high computational complexity, since the pressure waves must be resolved in the 3D compressible ocean over a sufficiently long time span. Due to the infeasibility of rapidly solving the corresponding inverse problem for the fully discretized space-time operator, we discuss approaches for using compact representations of the parameter-to-observable map. Lin Mu (University of Georgia) An Efficient and Effective FEM Solver for Diffusion Equation with Strong Anisotropy December 13, 2022 Slides Talk Recording Abstract: The diffusion equation with strong anisotropy has broad applications. In this project, we discuss the numerical solution of diffusion equations with strong anisotropy on meshes not aligned with the anisotropic vector field, focusing on applications to magnetic confinement fusion. In order to resolve the numerical pollution for simulations on a non-anisotropy-aligned mesh and reduce the associated high computational cost, we developed a high-order discontinuous Galerkin scheme with an efficient preconditioner. The auxiliary space preconditioning framework is designed by employing a continuous finite element space as the auxiliary space for the discontinuous finite element space. An effective line smoother that mitigates the high-frequency error perpendicular to the magnetic field has been designed using a graph-based approach, which picks lines that are approximately perpendicular to the vector field when the mesh does not align with the anisotropy. Numerical experiments for several benchmark problems are presented to validate the effectiveness and robustness. Garth Wells (University of Cambridge) FEniCSx: design of the next generation FEniCS libraries for finite element methods November 8, 2022 Slides Talk Recording Abstract: The FEniCS Project provides libraries for solving partial differential equations using the finite element method. An aim of the FEniCS Project has been to provide high-performance solver environments that closely mirror mathematical syntax, with the hypothesis that high-level representations mean that solvers are faster to write, easier to debug, and can deliver faster runtime performance than is reasonably possible by hand. Using domain-specific languages and code generation techniques, arguably the FEniCS libraries delivered on these goals for a set of problems. However, over time limitations, including performance and extensibility, became clear, and maintainability/sustainability became an issue. Building on experiences from the FEniCS libraries, I will present and discuss the design of the next generation of tools, FEniCSx. The new design retains strengths of the past approach, and addresses limitations using new designs and new tools. Solvers can be written in C++ or Python, and a number of design changes allow the creation of flexible, fast solvers in Python.
In the second part of my presentation, I will discuss high-performance finite element kernels (limited to CPUs on this occasion), motivated by the Center for Efficient Exascale Discretizations 'bake-off' problems, which would not have been possible in the original FEniCS libraries. Double, single and half-precision kernels are considered, and results include (i) the observation that kernels with vector intrinsics can be slower than auto-vectorised kernels for common cases, and (ii) a cache-aware performance model which is remarkably accurate in predicting performance across architectures. Dennis Ogiermann (University of Bochum) Computing Meets Cardiology: Making Heart Simulations Fast and Accurate September 13, 2022 Slides Talk Recording Abstract: Heart diseases are a ubiquitous societal burden responsible for a majority of deaths worldwide. A central problem in developing effective treatments for heart diseases is the inherent complexity of the heart as an organ. From a modeling perspective, the heart can be interpreted as a biological pump involving multiple physical fields, namely fluid and solid mechanics, as well as chemistry and electricity, all interacting on different time scales. This multiphysics and multiscale aspect makes simulations inherently expensive, especially when approached with naive numerical techniques. However, computational models can be extraordinarily useful in helping us understand how the healthy heart functions and especially how malfunctions influence different diseases. In this context, information about possible weaknesses of therapies can also be obtained to ultimately improve clinical treatment and decision support. In this talk, we will focus primarily on two important model classes in computational cardiology and their respective efficient numerical treatment without compromising accuracy. The first class is the problem of computing electrocardiograms (ECG) from electrical heart simulations. Since ECG measurements can give insights into a wide range of heart diseases, they offer suitable data to validate our electrophysiological models and verify our numerical schemes at the organ scale. Known numerical issues, arising in the context of electrophysiological models, will be reviewed. The second class addresses bidirectionally coupled electromechanical models and their efficient numerical treatment. Focus will be on a unified space-time adaptive operator splitting framework developed on top of MFEM, which has so far proven highly efficient for the investigated model classes while still preserving high accuracy. Ricardo Vinuesa (KTH) Modeling and Controlling Turbulent Flows through Deep Learning August 23, 2022 Slides Talk Recording Abstract: The advent of new powerful deep neural networks (DNNs) has fostered their application in a wide range of research areas, including more recently in fluid mechanics. In this presentation, we will cover some of the fundamentals of deep learning applied to computational fluid dynamics (CFD). Furthermore, we explore the capabilities of DNNs to perform various predictions in turbulent flows: we will use convolutional neural networks (CNNs) for non-intrusive sensing, i.e., to predict the flow in a turbulent open channel based on quantities measured at the wall. We show that it is possible to obtain very good flow predictions, outperforming traditional linear models, and we showcase the potential of transfer learning between friction Reynolds numbers of 180 and 550.
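To illustrate the input/output structure of such wall-based, non-intrusive sensing, here is a minimal, self-contained PyTorch sketch; the architecture, channel counts, and grid size are illustrative assumptions and are not the networks used in the work described above.

```python
import torch
import torch.nn as nn

class WallToFlowCNN(nn.Module):
    """Toy fully convolutional net mapping wall quantities to an off-wall plane."""
    def __init__(self, in_channels=3, out_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Synthetic shape check: a batch of 8 "wall measurement" maps on a 64x64 grid,
# with 3 input channels (e.g. two wall-shear components and wall pressure).
model = WallToFlowCNN()
wall = torch.randn(8, 3, 64, 64)
pred = model(wall)       # predicted velocity components on one wall-parallel plane
print(pred.shape)        # torch.Size([8, 3, 64, 64])
```

In practice the inputs would be wall-shear-stress and wall-pressure fields from simulation or experiment and the targets velocity fluctuations on a wall-parallel plane; the random tensors here only demonstrate the shapes.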
We also discuss other modelling methods based on autoencoders (AEs) and generative adversarial networks (GANs), and we present results of deep-reinforcement-learning-based flow control. Jeffrey Banks (RPI) Efficient Techniques for Fluid Structure Interaction: Compatibility Coupling and Galerkin Differences July 26, 2022 Slides Talk Recording Abstract: Predictive simulation increasingly involves the dynamics of complex systems with multiple interacting physical processes. In designing simulation tools for these problems, both the formulation of individual constituent solvers, as well as coupling of such solvers into a cohesive simulation tool must be addressed. In this talk, I discuss both of these aspects in the context of fluid-structure interaction, where we have recently developed a new class of stable and accurate partitioned solvers that overcome added-mass instability through the use of so-called compatibility boundary conditions. Here I will present partitioned coupling strategies for incompressible FSI. One interesting aspect of CBC-based coupling is the occurrence of nonstandard and/or high-derivative operators, which can make adoption of the techniques challenging, e.g. in the context of FEM methods. To address this, I will also discuss our newly developed Galerkin Difference approximations, which may provide a natural pathway for CBCs in an FEM context. Although GD is fundamentally a finite element approximation based on a Galerkin projection, the underlying GD space is nonstandard and is derived using profitable ideas from the finite difference literature. The resulting schemes possess remarkable properties including nodal superconvergence and the ability to use large CFL-one time steps. I will also present preliminary results for GD discretizations on unstructured grids using MFEM. Paul Fischer (UIUC/ANL) Outlook for Exascale Fluid Dynamics Simulations June 21, 2022 Slides Talk Recording Abstract: We consider design, development, and use of simulation software for exascale computing, with a particular emphasis on fluid dynamics simulation. Our perspective is through the lens of the high-order code Nek5000/RS, which has been developed under DOE's Center for Efficient Exascale Discretizations (CEED). Nek5000/RS is an open source thermal fluids simulation code with a long development history on leadership computing platforms--it was the first commercial software on distributed memory platforms and a Gordon Bell Prize winner on Intel's ASCII Red. There are a myriad of objectives that drive software design choices in HPC, such as scalability, low-memory, portability, and maintainability. Throughout, our design objective has been to address the needs of the user, including facilitating data analysis and ensuring flexibility with respect to platform and number of processors that can be used. When running on large-scale HPC platforms, three of the most common user questions are How long will my job take? How many nodes will be required? Is there anything I can do to make my job run faster? Additionally, one might have concerns about storage, post-processing (Will I be able to analyze the results? Where?), and queue times. This talk will seek to answer several of these questions over a broad range of fluid-thermal problems from the perspective of a Nek5000/RS user. We specifically address performance with data for NekRS on several of the DOE's pre-exascale architectures, which feature AMD MI250X or NVIDIA V100 or A100 GPUs. 
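The user questions quoted above ("How long will my job take? How many nodes will be required?") ultimately reduce to back-of-envelope arithmetic with measured throughput. The sketch below shows the shape of that estimate; all numbers are placeholders chosen for illustration, not NekRS or CEED benchmark data.

```python
# Rough job-sizing estimate from measured throughput.
# All numbers are illustrative placeholders, not benchmark results.
n_gridpoints  = 5.0e9   # total grid points in the simulation
n_timesteps   = 2.0e5   # time steps needed to reach the target time
points_per_gpu = 2.5e6  # grid points per GPU for acceptable parallel efficiency
throughput    = 1.0e9   # grid-point updates per second per GPU (measured)
gpus_per_node = 4

n_gpus  = n_gridpoints / points_per_gpu
n_nodes = n_gpus / gpus_per_node
wall_s  = n_gridpoints * n_timesteps / (throughput * n_gpus)

print(f"GPUs: {n_gpus:.0f}, nodes: {n_nodes:.0f}, wall time: {wall_s/3600:.1f} h")
```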
Mike Puso (LLNL) Topics in Immersed Boundary and Contact Methods: Current LLNL Projects and Research May 24, 2022 Slides Talk Recording Abstract: Many of the most interesting phenomena in solid mechanics occur at material interfaces. This can be in the form of fluid-structure interaction, cracks, material discontinuities, impact, etc. Solutions to these problems often require some form of immersed/embedded boundary method, contact, or a combination of both. This talk will provide a brief overview of different lab efforts in these areas and present some of the current research aspects and results from LLNL production codes. Technically speaking, the methods discussed here all require Lagrange multipliers to satisfy the constraints on the interface of overlapping or dissimilar meshes, which complicates the solution. Stability and consistency of Lagrange multiplier approaches can be hard to achieve both in space and time. For example, the wrong choice of multiplier space can over-constrain the problem and/or cause oscillations at the material interfaces for simple statics problems. For dynamics, many of the basic time integration schemes such as Newmark's method are known to be unstable due to gaps opening and closing. Here we introduce some (non-Nitsche) stabilized multiplier spaces for immersed boundary and contact problems and a structure-preserving time integration scheme for long-time dynamic contact problems. Finally, I will describe some ongoing efforts extending this work. Robert Chiodi (UIUC) CHyPS: An MFEM-Based Material Response Solver for Hypersonic Thermal Protection Systems April 26, 2022 Slides Talk Recording Abstract: The University of Illinois at Urbana-Champaign's Center for Hypersonics and Entry Systems Studies has developed a material response solver, named CHyPS, to predict the behavior of thermal protection systems for hypersonic flight. CHyPS uses MFEM to provide the underlying discontinuous Galerkin spatial discretization and the linear solvers used to solve the equations. In this talk, we will briefly present the physics and corresponding equations governing material response in hypersonic environments. We will also include a discussion on the implementation of a direct Arbitrary Lagrangian-Eulerian approach to handle mesh movement resulting from the ablation of the material surface. Results for standard community test cases developed at a series of Ablation Workshop meetings over the past decade will be presented and compared to other material response solvers. We will also show the potential of high-order solutions for simulating thermal protection system material response. Tamas Horvath (Oakland University) Space-Time Hybridizable Discontinuous Galerkin with MFEM March 29, 2022 Slides Talk Recording Abstract: Unsteady partial differential equations on deforming domains appear in many real-life scenarios, such as wind turbines, helicopter rotors, car wheels, free-surface flows, etc. We will focus on the space-time finite element method, which is an excellent approach for discretizing problems on evolving domains. This method uses discontinuous Galerkin discretizations in both the spatial and temporal directions, allowing for an arbitrarily high-order approximation in space and time. Furthermore, this method automatically satisfies the geometric conservation law, which is essential for accurate solutions on time-dependent domains. The biggest criticism is that the application of space-time discretization increases the computational complexity significantly.
To overcome this, we can use the high-order accurate Hybridizable or Embedded discontinuous Galerkin method. Numerical results will be presented to illustrate the applicability of the method for fluid flow around rigid bodies. Tobin Isaac (Georgia Tech) Unifying the Analysis of Geometric Decomposition in FEEC March 22, 2022 Slides Talk Recording Abstract: Two operations take function spaces and make them suitable for finite element computations. The first is the construction of trace-free subspaces (which creates \"bubble\" functions) and the second is the extension of functions from cell boundaries into cell interiors (which creates edge functions with the correct continuity): together these operations define the geometric decomposition of a function space. In finite element exterior calculus (FEEC), these two operations have been treated separately for the two main families of finite elements: full polynomial elements and trimmed polynomial elements. In this talk we will see how one constructor of trace-free functions and one extension operator can be used for both families, and indeed for all differential forms. We will also examine the practicality of these two operators as tools for implementing geometric decompositions in actual finite element codes. Rapha\u00ebl Zanella (UT Austin) Axisymmetric MFEM-Based Solvers for the Compressible Navier-Stokes Equations and Other Problems March 1, 2022 Slides Talk Recording Abstract: An axisymmetric model leads, when suitable, to a substantial cut in the computational cost with respect to a 3D model. Although not as accurate, the axisymmetric model allows one to quickly obtain a result that can be satisfactory. Simple modifications to a 2D finite element solver allow one to obtain an axisymmetric solver. We present MFEM-based parallel axisymmetric solvers for different problems. We first present simple axisymmetric solvers for the Laplacian problem and the heat equation. We then present an axisymmetric solver for the compressible Navier-Stokes equations. All solvers are based on H^1-conforming finite element spaces. The correctness of the implementation is verified with convergence tests on manufactured solutions. The Navier-Stokes solver is used to simulate axisymmetric flows with an analytical solution (Poiseuille and Taylor-Couette) and an air flow in a plasma torch geometry. Robert Carson (LLNL) An Overview of ExaConstit and Its Use in the ExaAM Project February 1, 2022 Slides Talk Recording Abstract: As additively manufactured (AM) parts become increasingly popular in industry, a growing need exists to help expedite the certification process for parts. The ExaAM project seeks to help this process by producing a workflow to model the AM process from the melt pool process all the way up to the part-scale response by leveraging multiple physics codes run on upcoming exascale computing platforms. As part of this workflow, ExaConstit is a next-generation quasi-static, solid mechanics FEM code built upon MFEM, used to connect local microstructures and local properties within the part-scale response. Within this talk, we will first provide an overview of ExaConstit, how we have ported it over to the GPU, and some performance numbers on a number of different platforms. Next, we will discuss how we have leveraged MFEM and the FLUX workflow to run hundreds of high-fidelity simulations on Summit in order to generate the local properties needed to drive the part-scale simulation in the ExaAM workflow.
Finally, we will show case a few other areas ExaConstit has been used in. Guglielmo Scovazzi (Duke University) The Shifted Boundary Method: An Immersed Approach for Computational Mechanics January 20, 2022 Slides Talk Recording Abstract: Immersed/embedded/unfitted boundary methods obviate the need for continual re-meshing in many applications involving rapid prototyping and design. Unfortunately, many finite element embedded boundary methods are also difficult to implement due to the need to perform complex cell cutting operations at boundaries, and the consequences that these operations may have on the overall conditioning of the ensuing algebraic problems. We present a new, stable, and simple embedded boundary method, named \"shifted boundary method\" (SBM), which eliminates the need to perform cell cutting. Boundary conditions are imposed on a surrogate discrete boundary, lying on the interior of the true boundary interface. We then construct appropriate field extension operators, by way of Taylor expansions, with the purpose of preserving accuracy when imposing the boundary conditions. We demonstrate the SBM on large-scale solid and fracture mechanics problems; thermomechanics problems; porous media flow problems; incompressible flow problems governed by the Navier-Stokes equations (also including free surfaces); and problems governed by hyperbolic conservation laws. Future Talks Martin Kronbichler (Ruhr University Bochum) December 17, 2024 Svetlana Tokareva (Los Alamos National Laboratory) January 14, 2025 Patrick Zulian (Universit\u00e0 della Svizzera italiana / UniDistance Suisse) February 18, 2025 Stefan Turek (Technical University Dortmund) March 11, 2025 \u0141ukasz Kaczmarczyk (University of Glasgow) April 8, 2025", "title": "Seminar"}, {"location": "seminar/#femllnl-seminar-series", "text": "The FEM@LLNL seminar series is focused on finite element research and applications talks of interest to the MFEM community. Videos will be added to a YouTube playlist as well as this site's videos page .", "title": "FEM@LLNL Seminar Series"}, {"location": "seminar/#sign-up", "text": "Fill in this form to sign-up for future FEM@LLNL seminar announcements.", "title": " Sign-Up"}, {"location": "seminar/#next-talk", "text": "", "title": " Next Talk"}, {"location": "seminar/#tbd", "text": "", "title": "TBD"}, {"location": "seminar/#tbd_1", "text": "", "title": "TBD"}, {"location": "seminar/#9am-pdt", "text": "Webex Abstract: TBD", "title": "9am PDT"}, {"location": "seminar/#previous-talks", "text": "", "title": " Previous Talks"}, {"location": "seminar/#pablo-brubeck-university-of-oxford", "text": "", "title": "Pablo Brubeck (University of Oxford)"}, {"location": "seminar/#fiat-from-basis-functions-to-efficient-finite-element-solvers", "text": "", "title": "FIAT: from basis functions to efficient finite element solvers"}, {"location": "seminar/#november-12-2024", "text": "Slides Talk Recording Abstract: The FInite element Automatic Tabulator (FIAT) is a powerful Python library for tabulating basis functions. In this talk, we present two major recent developments in FIAT. First, we have extended the FIAT abstraction to natively support macroelements. Macroelements offer conforming discretizations with highly desirable properties, such as divergence-free vector fields, and divergence-conforming symmetric tensors with low-order polynomial degrees. 
Elements implemented include the Hsieh-Clough-Tocher macroelement for biharmonic problems, the divergence-free, H1-conforming, inf-sup stable Guzm\u00e1n-Neilan macroelement for Stokes, and the Johnson-Mercier macroelement for strongly-symmetric, H(div)-conforming stresses in solid mechanics. We also improved the performance of tabulation and quadrature for simplicial high-order elements, and introduced novel basis functions, leading to solvers with better complexity in polynomial degree. Inspired by the fast diagonalization method, we define new degrees of freedom on simplices as moments against a numerically-computed orthogonal polynomial basis to decouple element interiors in the stiffness matrix. We exploit this decoupling in a domain decomposition method with vertex or edge subdomains on the interface degrees of freedom, and Jacobi relaxation for the interior degrees of freedom. This enables fast solvers for high-order discretizations of the Riesz maps of the spaces of the de Rham complex (Lagrange, N\u00e9d\u00e9lec, Raviart-Thomas, and Brezzi-Douglas-Marini). For each case, we illustrate the performance gains with numerical examples in Firedrake.", "title": "November 12, 2024"}, {"location": "seminar/#denis-ridzal-sandia-national-laboratories", "text": "", "title": "Denis Ridzal (Sandia National Laboratories)"}, {"location": "seminar/#r-adaptive-mesh-optimization-to-enhance-finite-element-basis-compression", "text": "", "title": "R-Adaptive Mesh Optimization to Enhance Finite Element Basis Compression"}, {"location": "seminar/#october-15-2024", "text": "Slides Talk Recording Abstract: Modern computing systems are capable of exascale calculations. While these systems continue to grow in processing power, the available system memory has not increased commensurately. A predominant approach to limit the memory usage in large-scale applications is to exploit the abundant processing power and continually recompute many low-level simulation quantities, rather than storing them. However, this approach can adversely impact the throughput of the simulation and diminish the benefits of modern computing architectures. We present two novel contributions to reduce the memory burden while maintaining performance in simulations based on finite element discretizations. The first contribution develops dictionary-based data compression schemes that detect and exploit the structure of the discretization, due to redundancies across the finite element mesh. These schemes are shown to reduce the memory requirements of key computational kernels by more than 99% on meshes with large numbers of nearly identical mesh cells. For applications where this structure does not exist, our second contribution leverages a recently developed augmented Lagrangian sequential quadratic programming algorithm to enable r-adaptive mesh optimization, with the goal of enhancing redundancies in the mesh. 
Numerical results demonstrate the effectiveness of the proposed methods to detect, exploit and enhance mesh structure on examples inspired by large-scale applications.", "title": "October 15, 2024"}, {"location": "seminar/#daniele-panozzo-courant-institute-nyu", "text": "", "title": "Daniele Panozzo (Courant Institute, NYU)"}, {"location": "seminar/#geometric-predicates-for-unconditionally-robust-elastodynamics-simulation", "text": "", "title": "Geometric Predicates for Unconditionally Robust Elastodynamics Simulation"}, {"location": "seminar/#october-1-2024", "text": "Slides Talk Recording Abstract: The numerical solution of partial differential equations (PDE) is ubiquitously used for physical simulation in scientific computing and engineering. Ideally, a PDE solver should be opaque: the user provides as input the domain boundary, boundary conditions, and the governing equations, and the code returns an evaluator that can compute the value of the solution at any point of the input domain. This is surprisingly far from being the case for all existing open-source or commercial software, despite the research efforts in this direction and the large academic and industrial interest. To a large extent, this is due to lack of robustness in geometric algorithms used to create the discretization, detect collisions, and evaluate element validity. I will present the incremental potential contact simulation paradigm, which provides strong robustness guarantees in simulation codes, ensuring, for the first time, validity of the trajectories accounting for floating point rounding errors over an entire elastodynamic simulation with contact. A core part of this approach is the use of a conservative line-search to check for collisions between geometric primitives and for ensuring validity of the deforming elements over linear trajectories. I will discuss both problems in depth, showing that SOTA approaches favor numerical efficiency but are unfortunately not robust to floating point rounding, leading to major failures in simulation. I will then present an alternative approach based on judiciously using rational and interval types to ensure provable correctness, while keeping a running time comparable with non-conservative methods. To conclude, I will discuss a set of applications enabled by this approach in microscopy and biomechanics, including traction force estimation on a live zebrafish and efficient modeling and simulation of fibrous materials.", "title": "October 1, 2024"}, {"location": "seminar/#ruben-sevilla-swansea-university", "text": "", "title": "Rub\u00e9n Sevilla (Swansea University)"}, {"location": "seminar/#mesh-generation-and-adaptation-using-green-ai", "text": "", "title": "Mesh Generation and Adaptation using Green AI"}, {"location": "seminar/#september-17-2024", "text": "Slides Talk Recording Abstract: Most methods used to solve partial differential equations require creating a mesh that represents the model's geometry. Today, unstructured mesh technology is widely used, allowing three-dimensional meshes with hundreds of millions of elements to be generated in just a few minutes. However, when optimising a design, many simulations are needed for different operating conditions and geometric configurations. Creating the best mesh for each setup becomes time-consuming due to the requirement of excessive human intervention and expertise. This talk will cover our recent work on using artificial intelligence to predict near-optimal meshes suitable for simulations. 
The main idea is to take advantage of the large amount of data that already exists in the industry to improve the selection of a suitable spacing function, including anisotropic spacing. The proposed approach aims to use knowledge from previous simulations to guide the mesh generation process. I will assess the proposed method based on the accuracy of the predictions, efficiency, and environmental impact. This includes considering the carbon footprint and energy consumption of the computations required for a parametric CFD analysis under different flow conditions and angles of attack. For transient problems, the use of high order methods provides several advantages due to the low dissipation and dispersion errors associated to these schemes. An attractive approach to simulate these problems is to incorporate degree adaptive schemes to enhance the approximation only where needed. In this talk I will also present our recent work on using artificial intelligence to aid a degree adaptive process.", "title": "September 17, 2024"}, {"location": "seminar/#esteban-ferrer-and-david-huergo-universidad-politecnica-de-madrid", "text": "", "title": "Esteban Ferrer and David Huergo (Universidad Polit\u00e9cnica de Madrid)"}, {"location": "seminar/#new-avenues-in-high-order-fluid-dynamics", "text": "", "title": "New Avenues in High Order Fluid Dynamics"}, {"location": "seminar/#september-3-2024", "text": "Slides Talk Recording Abstract: We present the latest developments of our High-Order Spectral Element Solver (HORSES3D), an open source high-order discontinuous Galerkin framework capable of solving a variety of flow applications, including compressible flows (with or without shocks), incompressible flows, various RANS and LES turbulence models, particle dynamics, multiphase flows, and aeroacoustics [ 1 ]. Recent developments allow us to simulate challenging multiphysics including turbulent flows, multiphase and moving bodies, using local h and p-adaption. In addition, we present recent work that couples Machine Learning and reinforcement learning techniques with high order simulations.", "title": "September 3, 2024"}, {"location": "seminar/#patrick-farrell-university-of-oxford", "text": "", "title": "Patrick Farrell (University of Oxford)"}, {"location": "seminar/#designing-conservative-and-accurately-dissipative-numerical-integrators-in-time", "text": "", "title": "Designing conservative and accurately dissipative numerical integrators in time"}, {"location": "seminar/#july-30-2024", "text": "Slides Talk Recording Abstract: Numerical methods for the simulation of transient systems with structure-preserving properties are known to exhibit greater accuracy and physical reliability, in particular over long durations. These schemes are often built on powerful geometric ideas for broad classes of problems, such as Hamiltonian or reversible systems. However, there remain difficulties in devising higher-order- in-time structure-preserving discretizations for nonlinear problems, and in conserving non-polynomial invariants. In this work we propose a new, general framework for the construction of structure-preserving timesteppers via finite elements in time and the systematic introduction of auxiliary variables. The framework reduces to Gauss methods where those are structure-preserving, but extends to generate arbitrary-order structure-preserving schemes for nonlinear problems, and allows for the construction of schemes that conserve multiple higher-order invariants. 
We demonstrate the ideas by devising novel schemes that exactly conserve all known invariants of the Kepler and Kovalevskaya problems, arbitrary-order schemes for the compressible Navier\u2013Stokes equations that conserve mass, momentum, and energy, and provably dissipate entropy, and multi-conservative schemes for the Benjamin-Bona-Mahony equation.", "title": "July 30, 2024"}, {"location": "seminar/#gonzalo-de-diego-courant-institute", "text": "", "title": "Gonzalo de Diego (Courant Institute)"}, {"location": "seminar/#numerical-solvers-for-viscous-contact-problems-in-glaciology", "text": "", "title": "Numerical Solvers for Viscous Contact Problems in Glaciology"}, {"location": "seminar/#may-6-2024", "text": "Slides Talk Recording Abstract: Viscous contact problems are time-dependent viscous flow problems where the fluid is in contact with a solid surface from which it can detach and reattach. Over sufficiently long timescales, ice is assumed to flow like a viscous fluid with a nonlinear rheology. Therefore, certain phenomena in glaciology, like the formation of subglacial cavities in the base of an ice sheet or the dynamics of marine ice sheets (continental ice sheets that slide into the ocean and go afloat at a grounding line, detaching from the bedrock), can be modelled as viscous contact problems. In particular, these problems can be described by coupling the Stokes equations with contact boundary conditions to free boundary equations that evolve the ice domain in time. In this talk, I will describe the difficulties that arise when attempting to solve this system numerically and I will introduce a method that is capable of overcoming them.", "title": "May 6, 2024"}, {"location": "seminar/#nat-trask-university-of-pennsylvania", "text": "", "title": "Nat Trask (University of Pennsylvania)"}, {"location": "seminar/#a-data-driven-finite-element-exterior-calculus", "text": "", "title": "A Data Driven Finite Element Exterior Calculus"}, {"location": "seminar/#april-2-2024", "text": "Slides Talk Recording Abstract: Despite the recent flurry of work employing machine learning to develop surrogate models to accelerate scientific computation, the \"black-box\" underpinnings of current techniques fail to provide the verification and validation guarantees provided by modern finite element methods. In this talk we present a data-driven finite element exterior calculus for developing reduced-order models of multiphysics systems when the governing equations are either unknown or require closure. The framework employs deep learning architectures typically used for logistic classification to construct a trainable partition of unity which provides notions of control volumes with associated boundary operators. This alternative to a traditional finite element mesh is fully differentiable and allows construction of a discrete de Rham complex with a corresponding Hodge theory. We demonstrate how models may be obtained with the same robustness guarantees as traditional mixed finite element discretization, with deep connections to contemporary techniques in graph neural networks. For applications developing digital twins where surrogates are intended to support real time data assimilation and optimal control, we further develop the framework to support Bayesian optimization of unknown physics on the underlying adjacency matrices of the chain complex. 
By framing the learning of fluxes via an optimal recovery problem with a computationally tractable posterior distribution, we are able to develop models with intrinsic representations of epistemic uncertainty.", "title": "April 2, 2024"}, {"location": "seminar/#william-moses-university-of-illinois-urbana-champaign", "text": "", "title": "William Moses (University of Illinois Urbana-Champaign)"}, {"location": "seminar/#supercharging-programming-through-compiler-technology", "text": "", "title": "Supercharging Programming Through Compiler Technology"}, {"location": "seminar/#march-14-2024", "text": "Slides Talk Recording Abstract: The decline of Moore's law and an increasing reliance on computation has led to an explosion of specialized software packages and hardware architectures. While this diversity enables unprecedented flexibility, it also requires domain-experts to learn how to customize programs to efficiently leverage the latest platform-specific API's and data structures, instead of working on their intended problem. Rather than forcing each user to bear this burden, I propose building high-level abstractions within general-purpose compilers that enable fast, portable, and composable programs to be automatically generated. This talk will demonstrate this approach through compilers that I built for two domains: automatic differentiation and parallelism. These domains are critical to both scientific computing and machine learning, forming the basis of neural network training, uncertainty quantification, and high-performance computing. For example, a researcher hoping to incorporate their climate simulation into a machine learning model must also provide a corresponding derivative simulation. My compiler, Enzyme, automatically generates these derivatives from existing computer programs, without modifying the original application. Moreover, operating within the compiler enables Enzyme to combine differentiation with program optimization, resulting in asymptotically and empirically faster code. Looking forward, this talk will also touch on how this domain-agnostic compiler approach can be applied to new directions, including probabilistic programming.", "title": "March 14, 2024"}, {"location": "seminar/#sungho-lee-university-of-memphis", "text": "", "title": "Sungho Lee (University of Memphis)"}, {"location": "seminar/#laghost-development-of-lagrangian-high-order-solver-for-tectonics", "text": "", "title": "LAGHOST: Development of Lagrangian High-Order Solver for Tectonics"}, {"location": "seminar/#march-5-2024", "text": "Slides Talk Recording Abstract: Long-term geological and tectonic processes associated with large deformation highlight the importance of using a moving Lagrangian frame. However, modern advancements in the finite element method, such as MPI parallelization, GPU acceleration, high-order elements, and adaptive grid refinement for tectonics based on this frame, have not been updated. Moreover, the existing solvers available in open access suffer from limited tutorials, a poor user manual, and several dependencies that make model building complex. These limitations can discourage both new users and developers from utilizing and improving these models. As a result, we are motivated to develop a user-friendly, Lagrangian thermo-mechanical numerical model that incorporates visco-elastoplastic rheology to simulate long-term tectonic processes like mountain building, mantle convection and so on. 
We introduce an ongoing project called LAGHOST (Lagrangian High-Order Solver for Tectonics), which is an MFEM-based tectonic solver. LAGHOST expands the capabilities of MFEM's LAGHOS mini-app. Currently, our solver incorporates constitutive equation, body force, mass scaling, dynamic relaxation, Mohr-Coulomb plasticity, plastic softening, Winkler foundation, remeshing, and remapping. To evaluate LAGHOST, we conducted four benchmark tests. The first test involved compressing an elastic box at a constant velocity, while the second test focused on the compaction of a self-weighted elastic column. To enable larger time-step sizes and achieve quasi-static solutions in the benchmarks, we introduced a fictitious density and implemented dynamic relaxation. This involved scaling the density factor and introducing a portion of force component opposing the previous velocity direction at nodal points. Our results exhibited good agreement with analytical solutions. Subsequently, we incorporated Mohr-Coulomb plasticity, a reliable model for predicting rock failure, into LAGHOST. We revisited the elastic box benchmark and considered plastic materials. By considering stress correction arising from plastic yielding, we confirmed that the updated solution from elastic guess aligned with the analytical solution. Furthermore, we applied LAGHOST to simulate the evolution of a normal fault, a significant tectonic phenomenon. To model normal fault evolution, we introduced strain softening on cohesion as the dominant factor based on geological evidence. Our simulations successfully captured the normal fault's evolution, with plastic strain localizing at shallow depths before propagating deeper. The fault angle reached approximately 60 degrees, in line with the Mohr-Coulomb failure theory.", "title": "March 5, 2024"}, {"location": "seminar/#kevin-chung-llnl", "text": "", "title": "Kevin Chung (LLNL)"}, {"location": "seminar/#data-driven-dg-fem-via-reduced-order-modeling-and-domain-decomposition", "text": "", "title": "Data-Driven DG FEM Via Reduced Order Modeling and Domain Decomposition"}, {"location": "seminar/#february-6-2024", "text": "Slides Talk Recording Abstract: Numerous cutting-edge scientific technologies originate at the laboratory scale, but transitioning them to practical industry applications can be a formidable challenge. Traditional pilot projects at intermediate scales are costly and time-consuming. Alternatives such as E-pilots can rely on high-fidelity numerical simulations, but even these simulations can be computationally prohibitive at larger scales. To overcome these limitations, we propose a scalable, component reduced order model (CROM) method. We employ Discontinuous Galerkin Domain Decomposition (DG-DD) to decompose the physics governing equation for a large-scale system into repeated small-scale unit components. Critical physics modes are identified via proper orthogonal decomposition (POD) from small-scale unit component samples. The computationally expensive, high-fidelity discretization of the physics governing equation is then projected onto these modes to create a reduced order model (ROM) that retains essential physics details. The combination of DG-DD and POD enables ROMs to be used as building blocks comprised of unit components and interfaces, which can then be used to construct a global large-scale ROM without data at such large scales. 
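To make the projection step described above concrete, here is a generic sketch of a POD-based Galerkin reduced order model (the notation is ours, not taken from the talk). Given a snapshot matrix $S$ collected from the small-scale unit component samples, the POD modes $\\Phi$ are the leading left singular vectors of $S$, and a full-order system $A u = b$ is reduced by Galerkin projection: $$S = \\Phi\\,\\Sigma\\,V^T, \\qquad \\hat{A} = \\Phi^T A\\,\\Phi, \\quad \\hat{b} = \\Phi^T b, \\qquad \\hat{A}\\hat{u} = \\hat{b}, \\quad u \\approx \\Phi\\,\\hat{u}.$$ In the component ROM setting described above, this projection is performed per unit component and interface, and the resulting small blocks are assembled into the global large-scale ROM.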
This method is demonstrated on the Poisson and Stokes flow equations, showing that it can solve equations about 15\u221240 times faster with only \u223c 1% relative error, even at scales 1000 times larger than the unit components. This research is ongoing, with efforts to apply these methods to more complex physics such as Navier-Stokes equation, highlighting their potential for transitioning laboratory-scale technologies to practical industrial use.", "title": "February 6, 2024"}, {"location": "seminar/#brian-young", "text": "", "title": "Brian Young"}, {"location": "seminar/#a-full-wave-electromagnetic-simulator-for-frequency-domain-s-parameter-calculations", "text": "", "title": "A Full-Wave Electromagnetic Simulator for Frequency-Domain S-Parameter Calculations"}, {"location": "seminar/#january-9-2024", "text": "Slides Talk Recording Abstract: An open-source and free full-wave electromagnetic simulator is presented that addresses the engineering community\u2019s need for the calculation of frequency-domain S-parameters. Two-dimensional port simulations are used to excite the 3D space and to extract S-parameters using modal projections. Matrix solutions are performed using complex computations. Features enabled by the MFEM library include adaptive mesh refinement, arbitrary order finite elements, and parallel processing using MPI. Implementation details are presented along with sample results and accuracy demonstrations.", "title": "January 9, 2024"}, {"location": "seminar/#jesse-chan-rice-university", "text": "", "title": "Jesse Chan (Rice University)"}, {"location": "seminar/#high-order-positivity-preserving-entropy-stable-discontinuous-galerkin-discretizations", "text": "", "title": "High order positivity-preserving entropy stable discontinuous Galerkin discretizations"}, {"location": "seminar/#december-5-2023", "text": "Slides Talk Recording Abstract: High order discontinuous Galerkin (DG) methods provide high order accuracy and geometric flexibility, but are known to be unstable when applied to nonlinear conservation laws whose solutions exhibit shocks and under-resolved solution features. Entropy stable schemes improve robustness by ensuring that physically relevant solutions satisfy a semi-discrete cell entropy inequality independently of numerical resolution and solution regularization while retaining formal high order accuracy. In this talk, we will review the construction of entropy stable high order discontinuous Galerkin methods and describe approaches for enforcing that solutions are \"physically relevant\" (i.e., the thermodynamic variables remain positive).", "title": "December 5, 2023"}, {"location": "seminar/#youngsoo-choi-lawrence-livermore-national-laboratory", "text": "", "title": "Youngsoo Choi (Lawrence Livermore National Laboratory)"}, {"location": "seminar/#physics-guided-interpretable-data-driven-simulations", "text": "", "title": "Physics-guided interpretable data-driven simulations"}, {"location": "seminar/#november-14-2023", "text": "Slides Talk Recording Abstract: A computationally demanding physical simulation often presents a significant impediment to scientific and technological progress. Fortunately, recent advancements in machine learning (ML) and artificial intelligence have given rise to data-driven methods that can expedite these simulations. For instance, a well-trained 2D convolutional deep neural network can provide a 100,000-fold acceleration in solving complex problems like Richtmyer-Meshkov instability [ 1 ]. 
However, conventional black-box ML models lack the integration of fundamental physics principles, such as the conservation of mass, momentum, and energy. Consequently, they often run afoul of critical physical laws, raising concerns among physicists. These models attempt to compensate for the absence of physics information by relying on vast amounts of data. Additionally, they suffer from various drawbacks, including a lack of structure-preservation, computationally intensive training phases, reduced interpretability, and susceptibility to extrapolation issues. To address these shortcomings, we propose an approach that incorporates physics into the data-driven framework. This integration occurs at different stages of the modeling process, including the sampling and model-building phases. A physics-informed greedy sampling procedure minimizes the necessary training data while maintaining target accuracy [ 2 ]. A physics-guided data-driven model not only preserves the underlying physical structure more effectively but also demonstrates greater robustness in extrapolation compared to traditional black-box ML models. We will showcase numerical results in areas such as hydrodynamics [ 3 , 4 ], particle transport [ 5 ], plasma physics, pore-collapse, and 3D printing to highlight the efficacy of these data-driven approaches. The advantages of these methods will also become apparent in multi-query decision-making applications, such as design optimization [ 6 , 7 ].", "title": "November 14, 2023"}, {"location": "seminar/#ben-southworth-los-alamos-national-laboratory", "text": "", "title": "Ben Southworth (Los Alamos National Laboratory)"}, {"location": "seminar/#superior-discretizations-and-amg-solvers-for-extremely-anisotropic-diffusion-via-hyperbolic-operators", "text": "", "title": "Superior discretizations and AMG solvers for extremely anisotropic diffusion via hyperbolic operators"}, {"location": "seminar/#october-17-2023", "text": "Slides Talk Recording Abstract: Heat conduction in magnetic confinement fusion can reach anisotropy ratios of 10^9-10^10, and in complex problems the direction of anisotropy may not be aligned with (or is impossible to align with) the spatial mesh. Such problems pose major challenges for both discretization accuracy and efficient implicit linear solvers. Although the underlying problem is elliptic or parabolic in nature, we argue that the problem is better approached from the perspective of hyperbolic operators. The problem is posed in a directional gradient first order formulation, introducing a directional heat flux along magnetic field lines as an auxiliary variable. We then develop novel continuous and discontinuous discretizations of the mixed system, using stabilization techniques developed for hyperbolic problems. The resulting block matrix system is then reordered so that the advective operators are on the diagonal, and the system is solved using AMG based on approximate ideal restriction (AIR), which is particularly efficient for upwind discretizations of advection. 
Compared with traditional discretizations and AMG solvers, we achieve orders of magnitude reduction in error and AMG iterations in the extremely anisotropic regime.", "title": "October 17, 2023"}, {"location": "seminar/#natasha-sharma-university-of-texas-at-el-paso", "text": "", "title": "Natasha Sharma (University of Texas at El Paso)"}, {"location": "seminar/#a-continuous-interior-penalty-method-framework-for-sixth-order-cahn-hilliard-type-equations-with-applications-to-microstructure-evolution-and-microemulsions", "text": "", "title": "A Continuous Interior Penalty Method Framework for Sixth Order Cahn-Hilliard-type Equations with applications to microstructure evolution and microemulsions"}, {"location": "seminar/#july-18-2023", "text": "Slides Talk Recording Abstract: The focus of this talk is on presenting unconditionally stable, uniquely solvable, and convergent numerical methods to solve two classes of the sixth-order Cahn-Hilliard-type equations. The first class arises as the so-called phase field crystal atomistic model of crystal growth, which has been employed to simulate a number of physical phenomena such as crystal growth in a supercooled liquid, crack propagation in ductile material, dendritic and eutectic solidification. The second class, henceforth referred to as Microemulsion systems (ME systems) appears as a model capturing the dynamics of phase transitions in ternary oil-water-surfactant systems in which three phases namely a microemulsion, almost pure oil, and almost pure water can coexist in equilibrium. ME systems have several applications ranging from enhanced oil recovery to the development of environmentally friendly solvents and drug delivery systems. Despite the widespread applications of these models, the major challenge impeding their use has been and continues to be a lack of understanding of the complex systems which they model. Thus, building computational models for these systems is crucial to the understanding of these systems. The presence of the higher order derivative in combination with a time-dependent process poses many challenges to the creation of stable, convergent, and efficient numerical methods approximating solutions to these equations. In this talk, we present a continuous interior penalty Galerkin framework for solving these equations and theoretically establish the desirable properties of stability, unique solvability, and first-order convergence. We close the talk by presenting the numerical results of some benchmark problems to verify the practical performance of the proposed approach and discuss some exciting current and future applications.", "title": "July 18, 2023"}, {"location": "seminar/#freddie-witherden-texas-am-university", "text": "", "title": "Freddie Witherden (Texas A&M University)"}, {"location": "seminar/#fsspmdm-accelerating-small-sparse-matrix-multiplications-by-run-time-code-generation", "text": "", "title": "FSSpMDM \u2014 Accelerating Small Sparse Matrix Multiplications by Run-Time Code Generation"}, {"location": "seminar/#june-20-2023", "text": "Slides Talk Recording Abstract: Small matrix multiplications are a key building block of modern high-order finite element method solvers. Such multiplications describe the act of applying a specific finite element operator onto a set of state vectors. The small and irregular size of these multiplications makes them poor candidates for generic matrix multiplication routines. 
Moreover, for elements with a tensor product construction, the operators themselves can exhibit a significant degree of sparsity. In this talk, I will describe the code generation strategies employed by our Fixed Size Sparse Matrix-Dense Matrix (FSSpMDM) routine in libxsmm and show how these result in performant operator kernels for prismatic and hexahedral elements. Strategies will be described for both x86-64 (AVX2/AVX-512) and AARCH64 (NEON/SVE) instruction sets. Results will be presented on recent Intel and Apple CPUs and compared against the well-known GiMMiK C code generation library.", "title": "June 20, 2023"}, {"location": "seminar/#frank-giraldo-naval-postgraduate-school", "text": "", "title": "Frank Giraldo (Naval Postgraduate School)"}, {"location": "seminar/#using-high-order-element-based-galerkin-methods-to-capture-hurricane-intensification", "text": "", "title": "Using High-Order Element-Based Galerkin Methods to Capture Hurricane Intensification"}, {"location": "seminar/#may-16-2023", "text": "Slides Talk Recording Abstract: Properly capturing hurricane rapid intensification (where the winds increase by 30 knots in the first 24 hours) remains challenging for atmospheric models. The reason is that we need LES-type scales \ud835\udcaa(100m), which are still elusive due to computational cost. In this talk, I describe the work that we are doing in this area and how element-based Galerkin Methods are being used to approximate spatial derivatives. I will also discuss the time-integration strategy that we are exploring for this class of problems. In particular, we are exploring process Multirate methods whereby each process in a system of nonlinear partial differential equations (PDEs) uses a time-integrator and time-step commensurate with the wave-speed of that process. We have constructed Multirate methods of any order using extrapolation methods. Along this same idea, we have also developed a multi-modeling framework (MMF) designed to replace the physical parameterizations used in weather/climate models. Our approach is to view the coarse-scale and fine-scale models through the lens of Variational Multi-Scale (VMS) methods in order to give MMF a more rigorous mathematical foundation. Our end goal is to use MMF in order to better resolve the inner core of hurricanes. In addition, I will show some results using flux differencing discontinuous Galerkin Methods for constructing both Kinetic Energy Preserving and Entropy Stable methods and discuss why we need scalable models in order to achieve our goals. Our model, NUMA, is a 3D nonhydrostatic atmospheric model that runs on large CPU clusters and on GPUs.", "title": "May 16, 2023"}, {"location": "seminar/#leszek-f-demkowicz-university-of-texas-at-austin", "text": "", "title": "Leszek F. Demkowicz (University of Texas at Austin)"}, {"location": "seminar/#full-envelope-dpg-approximation-for-electromagnetic-waveguides-stability-and-convergence-analysis", "text": "", "title": "Full Envelope DPG Approximation for Electromagnetic Waveguides. Stability and Convergence Analysis"}, {"location": "seminar/#april-25-2023", "text": "Slides Talk Recording Abstract: The presented work started with a convergence and stability analysis for the so-called full envelope approximation used in analyzing optical amplifiers (lasers). The specific problem of interest was the simulation of Transverse Mode Instabilities (TMI).
The problem translates into the solution of a system of two nonlinear time-harmonic Maxwell equations coupled with a transient heat equation. Simulation of a 1 m long fiber involves the resolution of 10 M wavelengths. A superefficient MPI/openMP hp FE code run on a supercomputer gets you to the range of ten thousand wavelengths. The resolution of the additional thousand wavelengths is done using an exponential ansatz e^{ikz} in the z-coordinate. This results in a non-standard Maxwell problem. The stability and convergence analysis for the problem has been restricted to the linear case only. It turns out that the modified Maxwell problem is stable if and only if the original waveguide problem is stable and the boundedness below stability constants are identical. We have converged to the task of determining the boundedness below constant. The stability analysis started with an easier, acoustic waveguide problem. Separation of variables leads to an eigenproblem for a self-adjoint operator in the transverse plane (in x,y). Expansion of the solution in terms of the corresponding eigenvectors leads then to a decoupled system of ODEs, and a stability analysis for a two-point BVP for an ODE parametrized with the corresponding eigenvalues. The L^2-orthogonality of the eigenmodes and the stability result for a single mode, lead then to the final result: the inverse boundedness below constant depends inversely linearly upon the length L of the waveguide. The corresponding stability for the Maxwell waveguide turned out to be unexpectedly difficult. The equation is vector-valued so a direct separation of variables is out to begin with. An exponential ansatz in z leads to a non-standard eigenproblem involving an operator that is non-self adjoint even for the easiest, homogeneous case. The answer to the problem came from a tricky analysis of the eigenproblem combined with the perturbation technique for perturbed self-adjoint operators. The use of perturbation theory requires an assumption on the smallness of perturbation of the dielectric constant (around a constant value) but with no additional assumptions on its differentiability (discontinuities are allowed). In the end, the final result is similar to that for the acoustic waveguide - the boundedness below constant depends inversely linearly on L.", "title": "April 25, 2023"}, {"location": "seminar/#joachim-schoberl-vienna-university-of-technology", "text": "", "title": "Joachim Sch\u00f6berl (Vienna University of Technology)"}, {"location": "seminar/#the-netgenngsolve-finite-element-software", "text": "", "title": "The Netgen/NGSolve Finite Element Software"}, {"location": "seminar/#march-28-2023", "text": "Slides Talk Recording Abstract: In this talk we give an overview of the open source finite element software Netgen/NGSolve, available from www.ngsolve.org . We show how to setup various physical models using FEniCS-like Python scripting. We discuss how we use NGSolve for teaching finite element methods, and how recent research projects have contributed to the further development of the NGSolve software. Some recent highlights are matrix-valued finite elements with applications in elasticity, fluid dynamics, and numerical relativity. 
We show how the recently extended framework of linear operators allows the utilization of GPUs for linear solvers, as well as time-dependent problems.", "title": "March 28, 2023"}, {"location": "seminar/#vikram-gavini-university-of-michigan", "text": "", "title": "Vikram Gavini (University of Michigan)"}, {"location": "seminar/#fast-accurate-and-large-scale-ab-initio-calculations-for-materials-modeling", "text": "", "title": "Fast, Accurate and Large-scale Ab-initio Calculations for Materials Modeling"}, {"location": "seminar/#march-7-2023", "text": "Slides Talk Recording Abstract: Electronic structure calculations, especially those using density functional theory (DFT), have been very useful in understanding and predicting a wide range of materials properties. The importance of DFT calculations to engineering and physical sciences is evident from the fact that ~20% of computational resources on some of the world's largest public supercomputers are devoted to DFT calculations. Despite the wide adoption of DFT, the state-of-the-art implementations of DFT suffer from cell-size and geometry limitations, with the widely used codes in solid state physics being limited to periodic geometries and typical simulation domains containing a few hundred atoms. This talk will present our recent advances towards the development of computational methods and numerical algorithms for conducting fast and accurate large-scale DFT calculations using adaptive finite-element discretization, which form the basis for the recently released DFT-FE open-source code . Details of the implementation, including mixed precision algorithms and asynchronous computing, will be presented. The computational efficiency, scalability and performance of DFT-FE will be presented, which demonstrates a significant outperformance of widely used plane-wave DFT codes.", "title": "March 7, 2023"}, {"location": "seminar/#som-dutta-utah-state-university", "text": "", "title": "Som Dutta (Utah State University)"}, {"location": "seminar/#quantifying-the-potential-of-covid-19-transmission-across-scales-using-sem-based-navier-stokes-solver-and-the-ceat", "text": "", "title": "Quantifying the Potential of Covid-19 Transmission Across Scales: Using SEM based Navier-Stokes solver and the CEAT"}, {"location": "seminar/#february-7-2023", "text": "Slides Talk Recording Abstract: The ongoing Covid-19 pandemic has redefined our understanding of respiratory infectious disease transmission. The primary modes of transmission of the SARS-CoV-2 virus has been identified to be airborne, with human generated respiratory aerosols being the main carrier of the virus. Understanding the dispersion of these aerosols/droplets generated during speaking and coughing, has helped quantify potential for transmission and design effective mitigation strategies. Through my talk I will present how models at two ends of the spatio-temporal resolution spectrum helped quantify the physics and aid NASA Ames administrators design mitigation strategies. For the higher spatio-temporal resolution I will illustrate how the high-order SEM based Navier-Stokes solver Nek5000/NekRS was utilized to develop the models, including algorithms developed through CEED. I will present the two main modes of respiratory aerosol/droplet dispersal indoors, first at a shorter time-scale through expiratory events like coughing, and second at a longer time-scale through the room ventilation system induced flow and turbulence. 
At the other end of the spatio-temporal resolution, I will talk briefly about Covid-19 Exposure Assessment Tool (CEAT), a novel tool developed to account for multiple factors that affect transmission. I will end my talk by briefly discussing how we can bridge the scales and heterogeneities in the problem with the aid of cutting edge computing and data-driven methods, so that we are fully prepared for the next pandemic. The work presented here has been facilitated by funding through DOE's National Virtual Biotechnology Laboratory (NVBL).", "title": "February 7, 2023"}, {"location": "seminar/#stefan-henneking-university-of-texas-at-austin", "text": "", "title": "Stefan Henneking (University of Texas at Austin)"}, {"location": "seminar/#bayesian-inversion-of-an-acoustic-gravity-model-for-predictive-tsunami-simulation", "text": "", "title": "Bayesian Inversion of an Acoustic-Gravity Model for Predictive Tsunami Simulation"}, {"location": "seminar/#january-10-2023", "text": "Slides Talk Recording Abstract: To improve tsunami preparedness, early-alert systems and real-time monitoring are essential. We use a novel approach for predictive tsunami modeling within the Bayesian inversion framework. This effort focuses on informing the immediate response to an occurring tsunami event using near-field data observation. Our forward model is based on a coupled acoustic-gravity model (e.g., Lotto and Dunham, Comput Geosci (2015) 19:327-340). Similar to other tsunami models, our forward model relies on transient boundary data describing the location and magnitude of the seafloor deformation. In a real-time scenario, these parameter fields must be inferred from a variety of measurements, including observations from pressure gauges mounted on the seafloor. One particular difficulty of this inference problem lies in the accurate inversion from sparse pressure data recorded in the near-field where strong hydroacoustic waves propagate in the compressible ocean; these acoustic waves complicate the task of estimating the hydrostatic pressure changes related to the forming surface gravity wave. Our space-time model is discretized with finite elements in space and finite differences in time. The forward model incurs a high computational complexity, since the pressure waves must be resolved in the 3D compressible ocean over a sufficiently long time span. Due to the infeasibility of rapidly solving the corresponding inverse problem for the fully discretized space-time operator, we discuss approaches for using compact representations of the parameter-to-observable map.", "title": "January 10, 2023"}, {"location": "seminar/#lin-mu-university-of-georgia", "text": "", "title": "Lin Mu (University of Georgia)"}, {"location": "seminar/#an-efficient-and-effective-fem-solver-for-diffusion-equation-with-strong-anisotropy", "text": "", "title": "An Efficient and Effective FEM Solver for Diffusion Equation with Strong Anisotropy"}, {"location": "seminar/#december-13-2022", "text": "Slides Talk Recording Abstract: The Diffusion equation with strong anisotropy has broad applications. In this project, we discuss numerical solution of diffusion equations with strong anisotropy on meshes not aligned with the anisotropic vector field, focusing on application to magnetic confinement fusion. In order to resolve the numerical pollution for simulations on a non-anisotropy-aligned mesh and reduce the associated high computational cost, we developed a high-order discontinuous Galerkin scheme with an efficient preconditioner. 
The auxiliary space preconditioning framework is designed by employing a continuous finite element space as the auxiliary space for the discontinuous finite element space. An effective line smoother that can mitigate the high-frequency error perpendicular to the magnetic field has been designed using a graph-based approach that picks lines approximately perpendicular to the vector field when the mesh does not align with the anisotropy. Numerical experiments for several benchmark problems are presented to validate the effectiveness and robustness of the approach.", "title": "December 13, 2022"}, {"location": "seminar/#garth-wells-university-of-cambridge", "text": "", "title": "Garth Wells (University of Cambridge)"}, {"location": "seminar/#fenicsx-design-of-the-next-generation-fenics-libraries-for-finite-element-methods", "text": "", "title": "FEniCSx: design of the next generation FEniCS libraries for finite element methods"}, {"location": "seminar/#november-8-2022", "text": "Slides Talk Recording Abstract: The FEniCS Project provides libraries for solving partial differential equations using the finite element method. An aim of the FEniCS Project has been to provide high-performance solver environments that closely mirror mathematical syntax, with the hypothesis that high-level representations mean that solvers are faster to write, easier to debug, and can deliver faster runtime performance than is reasonably possible by hand. Using domain-specific languages and code generation techniques, arguably the FEniCS libraries delivered on these goals for a set of problems. However, over time limitations, including performance and extensibility, became clear, and maintainability/sustainability became an issue. Building on experiences from the FEniCS libraries, I will present and discuss the design of the next generation of tools, FEniCSx. The new design retains strengths of the past approach, and addresses limitations using new designs and new tools. Solvers can be written in C++ or Python, and a number of design changes allow the creation of flexible, fast solvers in Python. In the second part of my presentation, I will discuss high-performance finite element kernels (limited to CPUs on this occasion), motivated by the Center for Efficient Exascale Discretizations 'bake-off' problems, and which would not have been possible in the original FEniCS libraries. Double, single and half-precision kernels are considered, and results include (i) the observation that kernels with vector intrinsics can be slower than auto-vectorised kernels for common cases, and (ii) a cache-aware performance model which is remarkably accurate in predicting performance across architectures.", "title": "November 8, 2022"}, {"location": "seminar/#dennis-ogiermann-university-of-bochum", "text": "", "title": "Dennis Ogiermann (University of Bochum)"}, {"location": "seminar/#computing-meets-cardiology-making-heart-simulations-fast-and-accurate", "text": "", "title": "Computing Meets Cardiology: Making Heart Simulations Fast and Accurate"}, {"location": "seminar/#september-13-2022", "text": "Slides Talk Recording Abstract: Heart diseases are a ubiquitous societal burden responsible for a majority of deaths worldwide. A central problem in developing effective treatments for heart diseases is the inherent complexity of the heart as an organ.
From a modeling perspective, the heart can be interpreted as a biological pump involving multiple physical fields, namely fluid and solid mechanics, as well as chemistry and electricity, all interacting on different time scales. This multiphysics and multiscale aspect makes simulations inherently expensive, especially when approached with naive numerical techniques. However, computational models can be extraordinarily useful in helping us understand how the healthy heart functions and especially how malfunctions influence different diseases. In this context, information about possible weaknesses of therapies can also be obtained to ultimately improve clinical treatment and decision support. In this talk, we will focus primarily on two important model classes in computational cardiology and their respective efficient numerical treatment without significantly compromising accuracy. The first class is the problem of computing electrocardiograms (ECG) from electrical heart simulations. Since ECG measurements can give insights into a wide range of heart diseases, they offer suitable data to validate our electrophysiological models and verify our numerical schemes at the organ scale. Known numerical issues, arising in the context of electrophysiological models, will be reviewed. The second class addresses bidirectionally coupled electromechanical models and their efficient numerical treatment. Focus will be on a unified space-time adaptive operator splitting framework developed on top of MFEM, which has proven highly efficient so far for the investigated model classes while still preserving high accuracy.", "title": "September 13, 2022"}, {"location": "seminar/#ricardo-vinuesa-kth", "text": "", "title": "Ricardo Vinuesa (KTH)"}, {"location": "seminar/#modeling-and-controlling-turbulent-flows-through-deep-learning", "text": "", "title": "Modeling and Controlling Turbulent Flows through Deep Learning"}, {"location": "seminar/#august-23-2022", "text": "Slides Talk Recording Abstract: The advent of new powerful deep neural networks (DNNs) has fostered their application in a wide range of research areas, including more recently in fluid mechanics. In this presentation, we will cover some of the fundamentals of deep learning applied to computational fluid dynamics (CFD). Furthermore, we explore the capabilities of DNNs to perform various predictions in turbulent flows: we will use convolutional neural networks (CNNs) for non-intrusive sensing, i.e., to predict the flow in a turbulent open channel based on quantities measured at the wall. We show that it is possible to obtain very good flow predictions, outperforming traditional linear models, and we showcase the potential of transfer learning between friction Reynolds numbers of 180 and 550. We also discuss other modelling methods based on autoencoders (AEs) and generative adversarial networks (GANs), and we present results of deep-reinforcement-learning-based flow control.", "title": "August 23, 2022"}, {"location": "seminar/#jeffrey-banks-rpi", "text": "", "title": "Jeffrey Banks (RPI)"}, {"location": "seminar/#efficient-techniques-for-fluid-structure-interaction-compatibility-coupling-and-galerkin-differences", "text": "", "title": "Efficient Techniques for Fluid Structure Interaction: Compatibility Coupling and Galerkin Differences"}, {"location": "seminar/#july-26-2022", "text": "Slides Talk Recording Abstract: Predictive simulation increasingly involves the dynamics of complex systems with multiple interacting physical processes.
In designing simulation tools for these problems, both the formulation of individual constituent solvers, as well as coupling of such solvers into a cohesive simulation tool must be addressed. In this talk, I discuss both of these aspects in the context of fluid-structure interaction, where we have recently developed a new class of stable and accurate partitioned solvers that overcome added-mass instability through the use of so-called compatibility boundary conditions. Here I will present partitioned coupling strategies for incompressible FSI. One interesting aspect of CBC-based coupling is the occurrence of nonstandard and/or high-derivative operators, which can make adoption of the techniques challenging, e.g. in the context of FEM methods. To address this, I will also discuss our newly developed Galerkin Difference approximations, which may provide a natural pathway for CBCs in an FEM context. Although GD is fundamentally a finite element approximation based on a Galerkin projection, the underlying GD space is nonstandard and is derived using profitable ideas from the finite difference literature. The resulting schemes possess remarkable properties including nodal superconvergence and the ability to use large CFL-one time steps. I will also present preliminary results for GD discretizations on unstructured grids using MFEM.", "title": "July 26, 2022"}, {"location": "seminar/#paul-fischer-uiucanl", "text": "", "title": "Paul Fischer (UIUC/ANL)"}, {"location": "seminar/#outlook-for-exascale-fluid-dynamics-simulations", "text": "", "title": "Outlook for Exascale Fluid Dynamics Simulations"}, {"location": "seminar/#june-21-2022", "text": "Slides Talk Recording Abstract: We consider design, development, and use of simulation software for exascale computing, with a particular emphasis on fluid dynamics simulation. Our perspective is through the lens of the high-order code Nek5000/RS, which has been developed under DOE's Center for Efficient Exascale Discretizations (CEED). Nek5000/RS is an open source thermal fluids simulation code with a long development history on leadership computing platforms--it was the first commercial software on distributed memory platforms and a Gordon Bell Prize winner on Intel's ASCII Red. There are a myriad of objectives that drive software design choices in HPC, such as scalability, low-memory, portability, and maintainability. Throughout, our design objective has been to address the needs of the user, including facilitating data analysis and ensuring flexibility with respect to platform and number of processors that can be used. When running on large-scale HPC platforms, three of the most common user questions are How long will my job take? How many nodes will be required? Is there anything I can do to make my job run faster? Additionally, one might have concerns about storage, post-processing (Will I be able to analyze the results? Where?), and queue times. This talk will seek to answer several of these questions over a broad range of fluid-thermal problems from the perspective of a Nek5000/RS user. 
We specifically address performance with data for NekRS on several of the DOE's pre-exascale architectures, which feature AMD MI250X or NVIDIA V100 or A100 GPUs.", "title": "June 21, 2022"}, {"location": "seminar/#mike-puso-llnl", "text": "", "title": "Mike Puso (LLNL)"}, {"location": "seminar/#topics-in-immersed-boundary-and-contact-methods-current-llnl-projects-and-research", "text": "", "title": "Topics in Immersed Boundary and Contact Methods: Current LLNL Projects and Research"}, {"location": "seminar/#may-24-2022", "text": "Slides Talk Recording Abstract: Many of the most interesting phenomena in solid mechanics occur at material interfaces. This can be in the form of fluid structure interaction, cracks, material discontinuities, impact, etc. Solutions to these problems often require some form of immersed/embedded boundary method or contact, or a combination of both. This talk will provide a brief overview of different lab efforts in these areas and present some of the current research aspects and results from LLNL production codes. Technically speaking, the methods discussed here all require Lagrange multipliers to satisfy the constraints on the interface of overlapping or dissimilar meshes, which complicates the solution. Stability and consistency of Lagrange multiplier approaches can be hard to achieve both in space and time. For example, the wrong choice of multiplier space will either lead to an over-constrained system and/or cause oscillations at the material interfaces for simple statics problems. For dynamics, many of the basic time integration schemes such as Newmark's method are known to be unstable due to gaps opening and closing. Here we introduce some (non-Nitsche) stabilized multiplier spaces for immersed boundary and contact problems and a structure-preserving time integration scheme for long-time dynamic contact problems. Finally, I will describe some ongoing efforts extending this work.", "title": "May 24, 2022"}, {"location": "seminar/#robert-chiodi-uiuc", "text": "", "title": "Robert Chiodi (UIUC)"}, {"location": "seminar/#chyps-an-mfem-based-material-response-solver-for-hypersonic-thermal-protection-systems", "text": "", "title": "CHyPS: An MFEM-Based Material Response Solver for Hypersonic Thermal Protection Systems"}, {"location": "seminar/#april-26-2022", "text": "Slides Talk Recording Abstract: The University of Illinois at Urbana-Champaign's Center for Hypersonics and Entry Systems Studies has developed a material response solver, named CHyPS, to predict the behavior of thermal protection systems for hypersonic flight. CHyPS uses MFEM to provide the underlying discontinuous Galerkin spatial discretization and linear solvers used to solve the equations. In this talk, we will briefly present the physics and corresponding equations governing material response in hypersonic environments. We will also include a discussion on the implementation of a direct Arbitrary Lagrangian-Eulerian approach to handle mesh movement resulting from the ablation of the material surface. Results for standard community test cases developed at a series of Ablation Workshop meetings over the past decade will be presented and compared to other material response solvers.
We will also show the potential of high-order solutions for simulating thermal protection system material response.", "title": "April 26, 2022"}, {"location": "seminar/#tamas-horvath-oakland-university", "text": "", "title": "Tamas Horvath (Oakland University)"}, {"location": "seminar/#space-time-hybridizable-discontinuous-galerkin-with-mfem", "text": "", "title": "Space-Time Hybridizable Discontinuous Galerkin with MFEM"}, {"location": "seminar/#march-29-2022", "text": "Slides Talk Recording Abstract: Unsteady partial differential equations on deforming domains appear in many real-life scenarios, such as wind turbines, helicopter rotors, car wheels, free-surface flows, etc. We will focus on the space-time finite element method, which is an excellent approach to discretize problems on evolving domains. This method uses discontinuous Galerkin to discretize both in the spatial and temporal directions, allowing for an arbitrarily high-order approximation in space and time. Furthermore, this method automatically satisfies the geometric conservation law, which is essential for accurate solutions on time-dependent domains. The biggest criticism is that the application of space-time discretization increases the computational complexity significantly. To overcome this, we can use the high-order accurate Hybridizable or Embedded discontinuous Galerkin method. Numerical results will be presented to illustrate the applicability of the method for fluid flow around rigid bodies.", "title": "March 29, 2022"}, {"location": "seminar/#tobin-isaac-georgia-tech", "text": "", "title": "Tobin Isaac (Georgia Tech)"}, {"location": "seminar/#unifying-the-analysis-of-geometric-decomposition-in-feec", "text": "", "title": "Unifying the Analysis of Geometric Decomposition in FEEC"}, {"location": "seminar/#march-22-2022", "text": "Slides Talk Recording Abstract: Two operations take function spaces and make them suitable for finite element computations. The first is the construction of trace-free subspaces (which creates \"bubble\" functions) and the second is the extension of functions from cell boundaries into cell interiors (which create edge functions with the correct continuity): together these operations define the geometric decomposition of a function space. In finite element exterior calculus (FEEC), these two operations have been treated separately for the two main families of finite elements: full polynomial elements and trimmed polynomial elements. In this talk we will see how one constructor of trace-free functions and one extension operator can be used for both families, and indeed for all differential forms. We will also examine the practicality of these two operators as tools for implementing geometric decompositions in actual finite element codes.", "title": "March 22, 2022"}, {"location": "seminar/#raphael-zanella-ut-austin", "text": "", "title": "Rapha\u00ebl Zanella (UT Austin)"}, {"location": "seminar/#axisymmetric-mfem-based-solvers-for-the-compressible-navier-stokes-equations-and-other-problems", "text": "", "title": "Axisymmetric MFEM-Based Solvers for the Compressible Navier-Stokes Equations and Other Problems"}, {"location": "seminar/#march-1-2022", "text": "Slides Talk Recording Abstract: An axisymmetric model leads, when suitable, to a substantial cut in the computational cost with respect to a 3D model. Although not as accurate, the axisymmetric model allows to quickly obtain a result which can be satisfying. 
Simple modifications to a 2D finite element solver allow one to obtain an axisymmetric solver. We present MFEM-based parallel axisymmetric solvers for different problems. We first present simple axisymmetric solvers for the Laplacian problem and the heat equation. We then present an axisymmetric solver for the compressible Navier-Stokes equations. All solvers are based on H^1-conforming finite element spaces. The correctness of the implementation is verified with convergence tests on manufactured solutions. The Navier-Stokes solver is used to simulate axisymmetric flows with an analytical solution (Poiseuille and Taylor-Couette) and an air flow in a plasma torch geometry.", "title": "March 1, 2022"}, {"location": "seminar/#robert-carson-llnl", "text": "", "title": "Robert Carson (LLNL)"}, {"location": "seminar/#an-overview-of-exaconstit-and-its-use-in-the-exaam-project", "text": "", "title": "An Overview of ExaConstit and Its Use in the ExaAM Project"}, {"location": "seminar/#february-1-2022", "text": "Slides Talk Recording Abstract: As additively manufactured (AM) parts become increasingly popular in industry, a growing need exists to help expedite the certification process of parts. The ExaAM project seeks to help this process by producing a workflow to model the AM process from the melt pool process all the way up to the part scale response by leveraging multiple physics codes run on upcoming exascale computing platforms. As part of this workflow, ExaConstit is a next-generation quasi-static, solid mechanics FEM code built upon MFEM, used to connect local microstructures and local properties within the part scale response. Within this talk, we will first provide an overview of ExaConstit, how we have ported it over to the GPU, and some performance numbers on a number of different platforms. Next, we will discuss how we have leveraged MFEM and the FLUX workflow to run hundreds of high-fidelity simulations on Summit in order to generate the local properties needed to drive the part scale simulation in the ExaAM workflow. Finally, we will showcase a few other areas in which ExaConstit has been used.", "title": "February 1, 2022"}, {"location": "seminar/#guglielmo-scovazzi-duke-university", "text": "", "title": "Guglielmo Scovazzi (Duke University)"}, {"location": "seminar/#the-shifted-boundary-method-an-immersed-approach-for-computational-mechanics", "text": "", "title": "The Shifted Boundary Method: An Immersed Approach for Computational Mechanics"}, {"location": "seminar/#january-20-2022", "text": "Slides Talk Recording Abstract: Immersed/embedded/unfitted boundary methods obviate the need for continual re-meshing in many applications involving rapid prototyping and design. Unfortunately, many finite element embedded boundary methods are also difficult to implement due to the need to perform complex cell cutting operations at boundaries, and the consequences that these operations may have on the overall conditioning of the ensuing algebraic problems. We present a new, stable, and simple embedded boundary method, named \"shifted boundary method\" (SBM), which eliminates the need to perform cell cutting. Boundary conditions are imposed on a surrogate discrete boundary, lying on the interior of the true boundary interface. We then construct appropriate field extension operators, by way of Taylor expansions, with the purpose of preserving accuracy when imposing the boundary conditions.
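Schematically (in our notation, not taken from the talk), for a Dirichlet condition $u = g$ on the true boundary $\\Gamma$, the SBM replaces it with a shifted condition on the surrogate boundary $\\tilde{\\Gamma}$ obtained from a first-order Taylor expansion: $$u(\\tilde{\\bf x}) + \\nabla u(\\tilde{\\bf x}) \\cdot {\\bf d}(\\tilde{\\bf x}) = g(\\tilde{\\bf x} + {\\bf d}(\\tilde{\\bf x})), \\qquad \\tilde{\\bf x} \\in \\tilde{\\Gamma},$$ where ${\\bf d}(\\tilde{\\bf x})$ is the distance vector from the surrogate point to its closest point on $\\Gamma$; in practice this shifted condition is typically imposed weakly, in a Nitsche-like fashion.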
We demonstrate the SBM on large-scale solid and fracture mechanics problems; thermomechanics problems; porous media flow problems; incompressible flow problems governed by the Navier-Stokes equations (also including free surfaces); and problems governed by hyperbolic conservation laws.", "title": "January 20, 2022"}, {"location": "seminar/#future-talks", "text": "", "title": " Future Talks"}, {"location": "seminar/#martin-kronbichler-ruhr-university-bochum", "text": "", "title": "Martin Kronbichler (Ruhr University Bochum)"}, {"location": "seminar/#december-17-2024", "text": "", "title": "December 17, 2024"}, {"location": "seminar/#svetlana-tokareva-los-alamos-national-laboratory", "text": "", "title": "Svetlana Tokareva (Los Alamos National Laboratory)"}, {"location": "seminar/#january-14-2025", "text": "", "title": "January 14, 2025"}, {"location": "seminar/#patrick-zulian-universita-della-svizzera-italiana-unidistance-suisse", "text": "", "title": "Patrick Zulian (Universit\u00e0 della Svizzera italiana / UniDistance Suisse)"}, {"location": "seminar/#february-18-2025", "text": "", "title": "February 18, 2025"}, {"location": "seminar/#stefan-turek-technical-university-dortmund", "text": "", "title": "Stefan Turek (Technical University Dortmund)"}, {"location": "seminar/#march-11-2025", "text": "", "title": "March 11, 2025"}, {"location": "seminar/#ukasz-kaczmarczyk-university-of-glasgow", "text": "", "title": "\u0141ukasz Kaczmarczyk (University of Glasgow)"}, {"location": "seminar/#april-8-2025", "text": "", "title": "April 8, 2025"}, {"location": "serial-tutorial/", "text": "MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$']]}}); Serial Tutorial Summary This tutorial illustrates the building and sample use of the following MFEM serial example codes: Example 1 Example 2 Example 3 An interactive documentation of all example codes is available here . Building Follow the serial instructions to build the MFEM library and to start a GLVis server. The latter is the recommended visualization software for MFEM (though its use is optional). To build the serial example codes, type make in MFEM's examples directory: ~/mfem/examples> make g++ -O3 -I.. ex1.cpp -o ex1 -L.. -lmfem g++ -O3 -I.. ex2.cpp -o ex2 -L.. -lmfem g++ -O3 -I.. ex3.cpp -o ex3 -L.. -lmfem g++ -O3 -I.. ex4.cpp -o ex4 -L.. -lmfem g++ -O3 -I.. ex5.cpp -o ex5 -L.. -lmfem g++ -O3 -I.. ex6.cpp -o ex6 -L.. -lmfem g++ -O3 -I.. ex7.cpp -o ex7 -L.. -lmfem g++ -O3 -I.. ex8.cpp -o ex8 -L.. -lmfem g++ -O3 -I.. ex9.cpp -o ex9 -L.. -lmfem g++ -O3 -I.. ex10.cpp -o ex10 -L.. -lmfem Example 1 This example code demonstrates the use of MFEM to define a simple linear finite element discretization of the Laplace problem $-\\Delta u = 1$ with homogeneous Dirichlet boundary conditions. To run it, simply specify the input mesh file (which will be refined to a final mesh with no more than 50,000 elements): ~/mfem/examples> ex1 -m ../data/star.mesh Iteration : 0 (B r, r) = 0.00111712 Iteration : 1 (B r, r) = 0.00674088 Iteration : 2 (B r, r) = 0.0123008 ... Iteration : 88 (B r, r) = 5.28955e-15 Iteration : 89 (B r, r) = 1.99155e-15 Iteration : 90 (B r, r) = 9.91309e-16 Average reduction factor = 0.857127 If a GLVis server is running, the computed finite element solution will appear in an interactive window: You can examine the solution using the mouse and the GLVis command keystrokes . 
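For orientation, the core of what ex1 does internally can be summarized by the following heavily condensed C++ sketch (our abridgment of the structure of ex1.cpp: option parsing, the element-count-driven refinement loop, and the GLVis socket output are omitted, and exact signatures may differ slightly between MFEM versions):

```cpp
#include "mfem.hpp"
#include <fstream>
using namespace mfem;

int main()
{
   // Load the mesh and refine it (the real ex1 refines until the mesh
   // has no more than roughly 50,000 elements).
   Mesh mesh("../data/star.mesh", 1, 1);
   mesh.UniformRefinement();

   // Linear H1 finite element space.
   H1_FECollection fec(1, mesh.Dimension());
   FiniteElementSpace fespace(&mesh, &fec);

   // Homogeneous Dirichlet boundary conditions on the whole boundary.
   Array<int> ess_tdof_list, ess_bdr(mesh.bdr_attributes.Max());
   ess_bdr = 1;
   fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

   // Right-hand side (1, v) and bilinear form (grad u, grad v).
   ConstantCoefficient one(1.0);
   LinearForm b(&fespace);
   b.AddDomainIntegrator(new DomainLFIntegrator(one));
   b.Assemble();

   GridFunction x(&fespace);
   x = 0.0;
   BilinearForm a(&fespace);
   a.AddDomainIntegrator(new DiffusionIntegrator(one));
   a.Assemble();

   // Form the linear system and solve it with preconditioned CG,
   // which produces an iteration log like the one shown above.
   OperatorPtr A;
   Vector B, X;
   a.FormLinearSystem(ess_tdof_list, x, b, A, X, B);
   GSSmoother M((SparseMatrix&)(*A));
   PCG(*A, M, B, X, 1, 200, 1e-12, 0.0);
   a.RecoverFEMSolution(X, b, x);

   // Save the refined mesh and the solution grid function.
   std::ofstream mesh_ofs("refined.mesh");
   mesh.Print(mesh_ofs);
   std::ofstream sol_ofs("sol.gf");
   x.Save(sol_ofs);
   return 0;
}
```

The keystrokes discussed next refer to the interactive GLVis window opened by the full example.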
Pressing \" RAfjlmm \", for example, will give us a 2D view without light or perspective showing the computed level lines: This example saves two files called refined.mesh and sol.gf , which represent the refined mesh and the computed solution as a grid function. These can be visualized with glvis -m refined.mesh -g sol.gf as discussed here . Example 1 can be run on any mesh that is supported by MFEM, including 3D, curvilinear and VTK meshes, e.g., ~/mfem/examples> ex1 -m ../data/fichera-q2.vtk Iteration : 0 (B r, r) = 0.0235996 Iteration : 1 (B r, r) = 0.0476694 Iteration : 2 (B r, r) = 0.0200109 ... Iteration : 27 (B r, r) = 7.77888e-14 Iteration : 28 (B r, r) = 2.36255e-14 Iteration : 29 (B r, r) = 8.56679e-15 Average reduction factor = 0.610261 The picture above shows the solution with level lines plotted in normal direction of a cutting plane, and was produced by typing \" AaafmIMMooo \" followed by cutting plane adjustments with \" z \", \" y \" and \" w \". Example 2 This example code solves a simple linear elasticity problem describing a multi-material Cantilever beam. Note that the input mesh should have at least two materials and two boundary attributes as shown below: +----------+----------+ boundary --->| material | material |<--- boundary attribute 1 | 1 | 2 | attribute 2 (fixed) +----------+----------+ (pull down) The example demonstrates the use of (high-order) vector finite element spaces by supporting several different discretization options: ~/mfem/examples> ex2 -m ../data/beam-quad.mesh -o 2 Assembling: r.h.s. ... matrix ... done. Iteration : 0 (B r, r) = 1.88755e-06 Iteration : 1 (B r, r) = 8.2357e-07 Iteration : 2 (B r, r) = 9.9098e-07 ... Iteration : 498 (B r, r) = 2.78279e-11 Iteration : 499 (B r, r) = 3.75298e-11 Iteration : 500 (B r, r) = 4.95682e-11 PCG: No convergence! (B r_0, r_0) = 1.88755e-06 (B r_N, r_N) = 4.95682e-11 Number of PCG iterations: 500 Average reduction factor = 0.989508 The output shows the (curved) displaced mesh together with the inverse displacement vector field: The above plot can be alternatively produced with: glvis -m displaced.mesh -g sol.gf -k \"RfjliiiiimmAbb\" Example 2 also works in 3D: ~/mfem/examples> ex2 -m ../data/beam-tet.mesh -o 3 Assembling: r.h.s. ... matrix ... done. Iteration : 0 (B r, r) = 2.7147e-06 Iteration : 1 (B r, r) = 1.95756e-06 Iteration : 2 (B r, r) = 2.24159e-06 ... Iteration : 426 (B r, r) = 3.37563e-14 Iteration : 427 (B r, r) = 3.06198e-14 Iteration : 428 (B r, r) = 2.5706e-14 Average reduction factor = 0.978648 One can visualize the vector field, e.g., by pressing \" dbAfmeoooovvaa \" followed by scale and position adjustments with the mouse: Example 3 This example code solves a simple 3D electromagnetic diffusion problem corresponding to the second order definite Maxwell equation ${\\rm curl\\, curl}\\, E + E = f$ discretized with the lowest order Nedelec finite elements. It computes the approximation error with a know exact solution, and requires a 3D input mesh: ~/mfem/examples> ex3 -m ../data/fichera.mesh Iteration : 0 (B r, r) = 121.209 Iteration : 1 (B r, r) = 21.1137 Iteration : 2 (B r, r) = 12.6503 ... 
Iteration : 149 (B r, r) = 2.40571e-10 Iteration : 150 (B r, r) = 1.39788e-10 Iteration : 151 (B r, r) = 9.43635e-11 Average reduction factor = 0.911811 || E_h - E ||_{L^2} = 0.00976655 To visualize the magnitude of the solution with the proportionally-sized vector field shown only on the boundary of the domain, type \" Vfooogt \" in the GLVis window (or run glvis -m refined.mesh -g sol.gf -k \"Vfooogt\" ): Curved meshes are also supported: ~/mfem/examples> ex3 -m ../data/fichera-q3.mesh Iteration : 0 (B r, r) = 135.613 Iteration : 1 (B r, r) = 22.3785 Iteration : 2 (B r, r) = 12.5215 ... Iteration : 168 (B r, r) = 4.95911e-10 Iteration : 169 (B r, r) = 2.23499e-10 Iteration : 170 (B r, r) = 1.25714e-10 Average reduction factor = 0.921741 || E_h - E ||_{L^2} = 0.0821686 To visualize the entire vector field, type \" fooogtevv \" instead, which will use uniform sized arrows colored according to their magnitude. Here is the corresponding plot from \" ex3 -m ../data/beam-hex.mesh \": Since entire vector fields in 3D might be difficult to see, a good alternative might be to plot the separate components of the field as scalar functions. For example: ~/mfem/examples> ex3 -m ../data/escher.mesh Iteration : 0 (B r, r) = 348.797 Iteration : 1 (B r, r) = 32.0699 Iteration : 2 (B r, r) = 14.902 ... Iteration : 159 (B r, r) = 4.16076e-10 Iteration : 160 (B r, r) = 3.50907e-10 Iteration : 161 (B r, r) = 3.22923e-10 Average reduction factor = 0.917548 || E_h - E ||_{L^2} = 0.36541 ~/mfem/examples> glvis -m refined.mesh -g sol.gf -gc 0 -k \"gooottF\" The discontinuity of the Nedelec functions is clearly seen in the above plot.", "title": "_Serial Tutorial"}, {"location": "serial-tutorial/#serial-tutorial", "text": "", "title": "Serial Tutorial"}, {"location": "serial-tutorial/#summary", "text": "This tutorial illustrates the building and sample use of the following MFEM serial example codes: Example 1 Example 2 Example 3 An interactive documentation of all example codes is available here .", "title": "Summary"}, {"location": "serial-tutorial/#building", "text": "Follow the serial instructions to build the MFEM library and to start a GLVis server. The latter is the recommended visualization software for MFEM (though its use is optional). To build the serial example codes, type make in MFEM's examples directory: ~/mfem/examples> make g++ -O3 -I.. ex1.cpp -o ex1 -L.. -lmfem g++ -O3 -I.. ex2.cpp -o ex2 -L.. -lmfem g++ -O3 -I.. ex3.cpp -o ex3 -L.. -lmfem g++ -O3 -I.. ex4.cpp -o ex4 -L.. -lmfem g++ -O3 -I.. ex5.cpp -o ex5 -L.. -lmfem g++ -O3 -I.. ex6.cpp -o ex6 -L.. -lmfem g++ -O3 -I.. ex7.cpp -o ex7 -L.. -lmfem g++ -O3 -I.. ex8.cpp -o ex8 -L.. -lmfem g++ -O3 -I.. ex9.cpp -o ex9 -L.. -lmfem g++ -O3 -I.. ex10.cpp -o ex10 -L.. -lmfem", "title": "Building"}, {"location": "serial-tutorial/#example-1", "text": "This example code demonstrates the use of MFEM to define a simple linear finite element discretization of the Laplace problem $-\\Delta u = 1$ with homogeneous Dirichlet boundary conditions. To run it, simply specify the input mesh file (which will be refined to a final mesh with no more than 50,000 elements): ~/mfem/examples> ex1 -m ../data/star.mesh Iteration : 0 (B r, r) = 0.00111712 Iteration : 1 (B r, r) = 0.00674088 Iteration : 2 (B r, r) = 0.0123008 ... 
Iteration : 88 (B r, r) = 5.28955e-15 Iteration : 89 (B r, r) = 1.99155e-15 Iteration : 90 (B r, r) = 9.91309e-16 Average reduction factor = 0.857127 If a GLVis server is running, the computed finite element solution will appear in an interactive window: You can examine the solution using the mouse and the GLVis command keystrokes . Pressing \" RAfjlmm \", for example, will give us a 2D view without light or perspective showing the computed level lines: This example saves two files called refined.mesh and sol.gf , which represent the refined mesh and the computed solution as a grid function. These can be visualized with glvis -m refined.mesh -g sol.gf as discussed here . Example 1 can be run on any mesh that is supported by MFEM, including 3D, curvilinear and VTK meshes, e.g., ~/mfem/examples> ex1 -m ../data/fichera-q2.vtk Iteration : 0 (B r, r) = 0.0235996 Iteration : 1 (B r, r) = 0.0476694 Iteration : 2 (B r, r) = 0.0200109 ... Iteration : 27 (B r, r) = 7.77888e-14 Iteration : 28 (B r, r) = 2.36255e-14 Iteration : 29 (B r, r) = 8.56679e-15 Average reduction factor = 0.610261 The picture above shows the solution with level lines plotted in normal direction of a cutting plane, and was produced by typing \" AaafmIMMooo \" followed by cutting plane adjustments with \" z \", \" y \" and \" w \".", "title": "Example 1"}, {"location": "serial-tutorial/#example-2", "text": "This example code solves a simple linear elasticity problem describing a multi-material Cantilever beam. Note that the input mesh should have at least two materials and two boundary attributes as shown below: +----------+----------+ boundary --->| material | material |<--- boundary attribute 1 | 1 | 2 | attribute 2 (fixed) +----------+----------+ (pull down) The example demonstrates the use of (high-order) vector finite element spaces by supporting several different discretization options: ~/mfem/examples> ex2 -m ../data/beam-quad.mesh -o 2 Assembling: r.h.s. ... matrix ... done. Iteration : 0 (B r, r) = 1.88755e-06 Iteration : 1 (B r, r) = 8.2357e-07 Iteration : 2 (B r, r) = 9.9098e-07 ... Iteration : 498 (B r, r) = 2.78279e-11 Iteration : 499 (B r, r) = 3.75298e-11 Iteration : 500 (B r, r) = 4.95682e-11 PCG: No convergence! (B r_0, r_0) = 1.88755e-06 (B r_N, r_N) = 4.95682e-11 Number of PCG iterations: 500 Average reduction factor = 0.989508 The output shows the (curved) displaced mesh together with the inverse displacement vector field: The above plot can be alternatively produced with: glvis -m displaced.mesh -g sol.gf -k \"RfjliiiiimmAbb\" Example 2 also works in 3D: ~/mfem/examples> ex2 -m ../data/beam-tet.mesh -o 3 Assembling: r.h.s. ... matrix ... done. Iteration : 0 (B r, r) = 2.7147e-06 Iteration : 1 (B r, r) = 1.95756e-06 Iteration : 2 (B r, r) = 2.24159e-06 ... Iteration : 426 (B r, r) = 3.37563e-14 Iteration : 427 (B r, r) = 3.06198e-14 Iteration : 428 (B r, r) = 2.5706e-14 Average reduction factor = 0.978648 One can visualize the vector field, e.g., by pressing \" dbAfmeoooovvaa \" followed by scale and position adjustments with the mouse:", "title": "Example 2"}, {"location": "serial-tutorial/#example-3", "text": "This example code solves a simple 3D electromagnetic diffusion problem corresponding to the second order definite Maxwell equation ${\\rm curl\\, curl}\\, E + E = f$ discretized with the lowest order Nedelec finite elements. 
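In MFEM the two terms of this operator map naturally onto the CurlCurlIntegrator and VectorFEMassIntegrator classes acting on a Nedelec (ND) space. The lines below are only a minimal sketch of that assembly; the names mesh and order are placeholders chosen for illustration rather than code taken verbatim from ex3.cpp:

// Minimal sketch (assumed names: mesh, order); both integrators exist in MFEM.
ND_FECollection fec(order, mesh->Dimension());
FiniteElementSpace fespace(mesh, &fec);

ConstantCoefficient one(1.0);
BilinearForm a(&fespace);
a.AddDomainIntegrator(new CurlCurlIntegrator(one));     // the curl curl E term
a.AddDomainIntegrator(new VectorFEMassIntegrator(one)); // the + E term
a.Assemble();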
It computes the approximation error with a know exact solution, and requires a 3D input mesh: ~/mfem/examples> ex3 -m ../data/fichera.mesh Iteration : 0 (B r, r) = 121.209 Iteration : 1 (B r, r) = 21.1137 Iteration : 2 (B r, r) = 12.6503 ... Iteration : 149 (B r, r) = 2.40571e-10 Iteration : 150 (B r, r) = 1.39788e-10 Iteration : 151 (B r, r) = 9.43635e-11 Average reduction factor = 0.911811 || E_h - E ||_{L^2} = 0.00976655 To visualize the magnitude of the solution with the proportionally-sized vector field shown only on the boundary of the domain, type \" Vfooogt \" in the GLVis window (or run glvis -m refined.mesh -g sol.gf -k \"Vfooogt\" ): Curved meshes are also supported: ~/mfem/examples> ex3 -m ../data/fichera-q3.mesh Iteration : 0 (B r, r) = 135.613 Iteration : 1 (B r, r) = 22.3785 Iteration : 2 (B r, r) = 12.5215 ... Iteration : 168 (B r, r) = 4.95911e-10 Iteration : 169 (B r, r) = 2.23499e-10 Iteration : 170 (B r, r) = 1.25714e-10 Average reduction factor = 0.921741 || E_h - E ||_{L^2} = 0.0821686 To visualize the entire vector field, type \" fooogtevv \" instead, which will use uniform sized arrows colored according to their magnitude. Here is the corresponding plot from \" ex3 -m ../data/beam-hex.mesh \": Since entire vector fields in 3D might be difficult to see, a good alternative might be to plot the separate components of the field as scalar functions. For example: ~/mfem/examples> ex3 -m ../data/escher.mesh Iteration : 0 (B r, r) = 348.797 Iteration : 1 (B r, r) = 32.0699 Iteration : 2 (B r, r) = 14.902 ... Iteration : 159 (B r, r) = 4.16076e-10 Iteration : 160 (B r, r) = 3.50907e-10 Iteration : 161 (B r, r) = 3.22923e-10 Average reduction factor = 0.917548 || E_h - E ||_{L^2} = 0.36541 ~/mfem/examples> glvis -m refined.mesh -g sol.gf -gc 0 -k \"gooottF\" The discontinuity of the Nedelec functions is clearly seen in the above plot.", "title": "Example 3"}, {"location": "tesla-notes/", "text": "Magnetostatic Equations The magnetostatic equations that we start from are the following: $$\\nabla\\times\\bf H = \\bf J \\label{ampere}$$ $$\\nabla\\cdot{\\bf B}= 0 \\label{mag_gauss}$$ $${\\bf B} = \\mu{\\bf H}+\\mu_0{\\bf M} \\label{const}$$ Where \\eqref{ampere} is Amp\u00e8re's Law, \\eqref{mag_gauss} is Gauss's Law for Magnetism, and \\eqref{const} is a somewhat atypical way to write the Constitutive Relation between ${\\bf B}$ and ${\\bf H}$. The constitutive relation used here follows \"Classical Electrodynamics\" 3rd edition by J.D. Jackson and uses ${\\bf M}$, measured in A/m, to represent the magnetization of a permanent magnet. Some sources would instead use ${\\bf B}_r=\\mu_0{\\bf M}$ to represent a residual magnetization, measured in tesla. These conventions are, of course, mathematically equivalent but the choice made in this miniapp does seem a bit odd as I look at it now. These equations can be combined if we make use of the fact that $\\nabla\\cdot{\\bf B}=0$ implies ${\\bf B}=\\nabla\\times{\\bf A}$ for some vector potential ${\\bf A}$. This leads to: $$\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf A}) = {\\bf J}+ \\nabla\\times(\\mu^{-1}\\mu_0{\\bf M})$$ This equation supports a current source density, a permanent magnetization, surface current boundary conditions, and fixed ${\\bf A}$ boundary condition which can be used to apply an external magnetic field. There also exists a special case in magnetostatics when the current density is equal to zero. 
In this case $\\nabla\\times{\\bf H}=0$ which implies that the magnetic field can be computed as ${\\bf H}=-\\nabla\\Phi_M$. This leads to the scalar potential formulation which we will not consider further except to say that the electrostatic solver, named volta , can be adapted to model such situations. The tesla Miniapp The tesla miniapp models the magnetostatic equation for the magnetic vector potential ${\\bf A}$. It includes source terms derived from a volumetric current source ${\\bf J}$, magnetization vector ${\\bf M}$, or surface currents ${\\bf K}$. $$\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf A}) = {\\bf J}+\\nabla\\times(\\mu^{-1}\\mu_0{\\bf M})$$ $$\\hat{n}\\times(\\mu^{-1}\\nabla\\times{\\bf A}) = \\hat{n}\\times{\\bf K}$$ The magnetic vector potential will be approximated in H(Curl) so that the left hand side operator is well defined. $${\\bf A} \\approx \\sum_i a_i {\\bf W}_i (\\vec{x})$$ Inserting this into the left hand side of the equation and integrating the resulting equation against each H(Curl) basis function leads to the following weak form: $$\\begin{align} \\int_{\\Omega}{\\bf W}_{i}(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf A})]d\\Omega & \\approx \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot\\{\\nabla\\times[\\mu^{-1}\\nabla\\times(\\sum_j a_j{\\bf W}_j(\\vec{x}))]\\}d\\Omega \\\\ & = \\sum_j a_j\\{\\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]d\\Omega\\} \\end{align}$$ The expression in curly braces depends only on our material coefficient and our basis functions. This particular integral requires a little more manipulation to move the outermost curl operator onto the H(Curl) basis function. $$\\begin{aligned} \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Omega &=& \\int_\\Omega(\\nabla\\times{\\bf W}_i(\\vec{x}))\\cdot(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))\\,d\\Omega \\\\ &-& \\int_\\Omega\\nabla\\cdot[{\\bf W}_i(\\vec{x})\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Omega \\\\ &=& \\int_\\Omega(\\nabla\\times{\\bf W}_i(\\vec{x}))\\cdot(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))\\,d\\Omega \\\\ &-& \\int_\\Gamma\\hat{n}\\cdot[{\\bf W}_i(\\vec{x})\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Gamma \\\\ &=& \\int_\\Omega(\\nabla\\times{\\bf W}_i(\\vec{x}))\\cdot(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))\\,d\\Omega \\\\ &+& \\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot[\\hat{n}\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Gamma \\\\ \\end{aligned}$$ The first integral remaining on the right hand side is implemented in MFEM as a BilinearFormIntegrator named CurlCurlIntegrator . The second integral, the boundary integral, gives rise to a Neumann boundary condition which will be discussed further in Section 2.1.3 . Source Terms Current Density ${\\bf J}$ The current density ${\\bf J}$ requires special care. In order for the magnetostatic equations to possess a solution ${\\bf J}$ must be in the range of the curl operator. Another way to say this is that the divergence of ${\\bf J}$ must be zero. If $\\nabla\\cdot{\\bf J}\\neq 0$ we can correct this by adding the gradient of a scalar field. 
If we start with some initial estimate of the current density which we call ${\\bf J}_0$, $$\\begin{aligned} \\nabla\\cdot({\\bf J}_0-\\nabla\\Psi) &=& 0 \\\\ \\nabla\\cdot\\nabla\\Psi &=& \\nabla\\cdot{\\bf J}_0 \\\\ {\\bf J}& = & {\\bf J}_0 - \\nabla\\Psi \\end{aligned}$$ The current density ${\\bf J}$ computed in this manner will be divergence free and therefore it will be in the range of the curl operator. Normally, in the continuous world, we simply define ${\\bf J}$ directly, however, in the discrete world we can only approximate ${\\bf J}$ so we must always perform this divergence cleaning procedure on our approximations of ${\\bf J}$. Failure to do so can lead to lack of convergence or complete failure of the solve. In MFEM the divergence cleaning procedure is handled by a class called DivergenceFreeProjector which is not a part of the MFEM library itself. It is provided as part of a collection of convenience classes in the miniapps/common subdirectory. Magnetization ${\\bf M}$ The magnetization ${\\bf M}$ is intended to represent permanent magnetics or other regions of prescribed magnetization. In the Tesla miniapp ${\\bf M}$ is discretized using H(Div) basis functions which allow its tangential components to be discontinuous. Its curl appears in the magnetostatic equations as a source term and this curl operation ensures that this source lies in the range of the curl operator so no divergence cleaning operation is needed for this portion of the source. In the Tesla miniapp this source is computed and applied on lines 338-343 in the TeslaSolver::Solve() function. The weak curl operator is configured on lines 168-175 in the TeslaSolver constructor. Surface Current ${\\bf K}$ The integration by parts needed to create the weak form of the curl-curl operators also leads to a boundary integral: $$\\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot[\\hat{n}\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Gamma$$ This means that our weak curl-curl operator applied to ${\\bf A}$ differs from the continuous curl-curl operator by a surface integral of the form: $$\\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot[\\hat{n}\\times(\\mu^{-1}\\nabla\\times{\\bf A})]\\,d\\Gamma = \\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot(\\hat{n}\\times{\\bf H})\\,d\\Gamma$$ If we do nothing to account for this boundary integral we are implicitly setting it equal to zero which amounts to a boundary condition on the tangential part of the magnetic field i.e. $\\hat{n}\\times{\\bf H}=0$. Another possibility is to set a surface current boundary condition i.e. $\\hat{n}\\times{\\bf H}=\\hat{n}\\times{\\bf K}$. This could be done by using a ParLinearForm object to integrate $\\hat{n}\\times{\\bf K}$ over the portion of the boundary where ${\\bf K}$ is non-zero and adding the resulting vector to the right hand side of the linear system. However, this is not the approach used in the Tesla miniapp. In Tesla we employ a trick based on the Stoke's theorem. A surface current leads to a discontinuity in the tangential part of ${\\bf H}$ on the boundary. Similarly, a discontinuity in ${\\bf H}$ leads to a discontinuity in ${\\bf A}$ on the boundary. Therefore we can set the tangential part of ${\\bf A}$ to equal ${\\bf K}$ and we get the correct behavior as long as we set the tangential part of ${\\bf A}=0$ elsewhere on the boundary. To be honest I'm not sure how valid this approach is but it does seem to work and it can improve solver convergence. I would recommend confirming this approach before relying on it. 
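For reference, the tangential trace of the vector potential A can be imposed on a marked subset of the boundary with GridFunction::ProjectBdrCoefficientTangent. The snippet below is only a sketch of that idea; a_gf, KCoef, pmesh and k_attr are assumed names used for illustration, not quantities defined in the miniapp itself:

// Sketch: impose the tangential part of A on boundary attribute k_attr,
// leaving it zero on the rest of the boundary.
// Assumed names: pmesh (ParMesh), a_gf (ParGridFunction in H(Curl)),
// KCoef (VectorCoefficient for the surface value), k_attr (attribute number).
Array<int> k_bdr(pmesh->bdr_attributes.Max());
k_bdr = 0;
k_bdr[k_attr - 1] = 1;
a_gf = 0.0;                                      // start from zero everywhere
a_gf.ProjectBdrCoefficientTangent(KCoef, k_bdr); // set the K-carrying boundary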
Post-Processing Computation of ${\\bf H}$ The magnetic field ${\\bf H}$ needs to have tangential continuity so we approximate it using the H(Curl) basis: $${\\bf H}\\approx\\sum_i h_i{\\bf W}_i(\\vec{x})$$ Recall that the magnetic flux ${\\bf B}$ is approximated using the H(Div) basis due to the continuity of its normal component. $${\\bf B}\\approx\\sum_i b_i{\\bf F}_i(\\vec{x})$$ To compute ${\\bf H}$ from ${\\bf B}$ we make use of the constitutive equation ${\\bf B}=\\mu{\\bf H}$. Inserting our approximations and integrating this equation against each H(Curl) basis function we obtain the following: $$\\sum_j h_j\\int_\\Omega\\mu{\\bf W}_i\\cdot{\\bf W}_j\\,d\\Omega = \\sum_k b_k\\int_\\Omega{\\bf W}_i\\cdot{\\bf F}_k\\,d\\Omega$$ This set of linear equations is equivalent to the matrix equation: $$M_1(\\mu)h = M_{21}b$$ Where $M_1(\\mu)$ is an H(Curl) mass matrix incorporating the material coefficient $\\mu$ which is implemented in MFEM as a BilinearFormIntegrator named VectorFEMassIntegrator . The $M_{21}$ operator is a rectangular matrix which maps H(Div) to H(Curl) and is also built using the VectorFEMassIntegrator but with the default material coefficient which is equal to 1. The solution of this linear system is usually obtained with a conjugate gradient iterative solver along with a diagonal scaling preconditioner. Since the matrix to be inverted is a mass matrix this solution is usually very efficient involving fewer than thirty solver iterations. It is important to point out that an H(Curl) approximation usually has more degrees of freedom than a comparable H(Div) approximation. In the interior of the domain the density of degrees of freedom are approximately equal but H(Curl) approximations tend to have more degrees of freedom on the boundary. Consequently, this type of conversion can produce H(Curl) approximations with poor accuracy near the boundary. If the tangential components of ${\\bf B}$ are nearly constant within the elements adjacent to the boundary the conversion can produce a good approximation. However, if these tangential components vary too rapidly non-physical oscillations can occur in ${\\bf H}$. To alleviate these oscillations Dirichlet boundary conditions can be applied during the solution of ${\\bf H}$ provided that reasonable values for $(\\hat{n}\\times{\\bf H})\\times\\hat{n}$ can be determined. In the present magnetostatics context we can reuse any Neumann boundary conditions used during the solution of ${\\bf A}$ since these were equivalent to setting $\\hat{n}\\times{\\bf H}$ on the boundary. Magnetic Energy in a Region The tesla miniapp does not compute the energy in the magnetic field but such a computation should be easy to add. There are two basic procedures for computing energy in MFEM. One involves a bilinear form and the other a linear form. The bilinear form approach makes sense when the energies of multiple fields will be computed with the same operator so that the cost of building the bilinear form can be amortized. In a magnetostatic problem the linear form approach is likely to be more efficient. The usual formula for magnetic energy is $u = \\frac{1}{2}\\int_\\Omega{\\bf H}\\cdot{\\bf B}\\,d\\Omega$. There are many ways to compute this quantity in MFEM but perhaps the most convenient is to make use of a VectorCoefficient and a ParLinearForm . 
For example let's assume we have a coefficient for $\\mu^{-1}$ and a GridFunction for ${\\bf B}$ called Bgf : { VectorGridFunctionCoefficient BCoef(&Bgf); ScalarVectorProductCoefficient HCoef(muInvCoef, BCoef); ParLinearForm Hlf(&HDivFESpace); Hlf.AddDomainIntegrator(new VectorFEDomainIntegrator(HCoef)); Hlf.Assemble(); double energy = 0.5 * Hlf(Bgf); } This integral can be restricted to some region, defined by a set of element attributes, by incorporating a VectorRestrictedCoefficient . Other forms of energy such as $\\frac{1}{2}\\int_\\Omega{\\bf J}\\cdot{\\bf A}\\,d\\Omega$ or perhaps $\\int_\\Omega{\\bf M}\\cdot{\\bf B}\\,d\\Omega$ could be computed in a similar manner. Torque on a Current Density Torque can also be defined as a volume integral so we can employ a technique similar to the one used for the energy computation. The important difference is that torque is a vector quantity so we will need to integrate each of its vector components separately. This will likely require custom coefficients but the procedure should be straightforward. The existing vector coefficient classes ScalarVectorProductCoefficient and VectorCrossProductCoefficient should serve as guides for how this can be accomplished. Torque on a Permanent Magnet Torque on a Surface Current In theory a surface integral can be computed in a very similar manner to a volume integral. However, discontinuous finite element spaces such as H(Curl), H(Div), or L2 create a complication. Approximations made with these discontinuous fields do not possess well defined values on surfaces. Consequently such an integral could lack precision or even be multi-valued. To overcome this limitation it may be necessary to compute different contributions to the torque in different manners and combine the results. For example the normal component of ${\\bf B}$ is well defined on surfaces. Therefore the force ${\\bf K}\\times{\\bf B}$ may be inaccurate but the quantity $(\\hat{n}\\cdot{\\bf B}){\\bf K}\\times\\hat{n}$ will be more reliable. To obtain another contribution to the torque we can use the tangential components of ${\\bf H}$ as $\\mu{\\bf K}\\times[(\\hat{n}\\times{\\bf H})\\times\\hat{n}]$. This of course assumes that we have an accurate representation of ${\\bf H}$ on this surface which may not be the case if the surface is an outer boundary (see Section Computation of H ). 
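As a complement to the appendix code below, the volume torque integral discussed above can also be sketched purely with existing coefficient classes, following the same linear-form pattern as the energy example. This is only an illustration under assumed names (Jgf, Bgf, a scalar H1 space fes on the same mesh, and cent for the pivot point); it is not code from the tesla miniapp:

// Torque about the point cent from the force density J x B (sketch only).
VectorGridFunctionCoefficient JCoef(&Jgf), BCoef(&Bgf);
VectorCrossProductCoefficient FCoef(JCoef, BCoef);        // f = J x B
VectorFunctionCoefficient RCoef(3, [cent](const Vector &x, Vector &r)
                                   { r = x; r -= cent; }); // r = x - cent
VectorCrossProductCoefficient TCoef(RCoef, FCoef);        // torque density r x f

ParGridFunction one(&fes);
one = 1.0;
Vector e(3), trq(3);
for (int k = 0; k < 3; k++)
{
   e = 0.0; e(k) = 1.0;
   VectorConstantCoefficient eCoef(e);
   InnerProductCoefficient tk(eCoef, TCoef);              // k-th torque component
   ParLinearForm lf(&fes);
   lf.AddDomainIntegrator(new DomainLFIntegrator(tk));
   lf.Assemble();
   trq(k) = lf(one);                                      // integral over the domain
}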
Appendix A: Magnetic Energy class MagneticEnergy { private: const ParGridFunction & b_; const ParGridFunction & h_; public: MagneticEnergy(const ParGridFunction & b, const ParGridFunction & h) : b_(b), h_(h) {} double ComputeEnergy() { VectorGridFunctionCoefficient h_coef(&h_); ParLinearForm h_lf(b_.ParFESpace()); h_lf.AddDomainIntegrator(new VectorFEDomainLFIntegrator(h_coef)); h_lf.Assemble(); return 0.5 * h_lf(b_); } double ComputeEnergy(const Array & elem_attr_marker) { VectorGridFunctionCoefficient h_coef(&h_); ParLinearForm h_lf(b_.ParFESpace()); h_lf.AddDomainIntegrator(new VectorFEDomainLFIntegrator(h_coef), const_cast&>(elem_attr_marker)); h_lf.Assemble(); return 0.5 * h_lf(b_); } }; Appendix B: Torque class Torque { private: const ParGridFunction & b_; const ParGridFunction & h_; const ParGridFunction & j_; public: Torque(const ParGridFunction & b, const ParGridFunction & h, const ParGridFunction & j) : b_(b), h_(h), j_(j) {} void ComputeTorqueOnSurface(const Array &bdr_attr_marker, const Vector ¢, Vector &T); void ComputeTorqueOnVolume(const Array &vol_attr_marker, const Vector ¢, Vector &T); }; void Torque::ComputeTorqueOnSurface(const Array &bdr_attr_marker, const Vector ¢, Vector &trq) { trq = 0.0; ParFiniteElementSpace * fes = b_.ParFESpace(); ParMesh *mesh = b_.ParFESpace()->GetParMesh(); ElementTransformation *eltrans = NULL; Vector b, h, ht(3), nor(3), x(3), f(3), loc_trq(3); loc_trq = 0.0; for (int i=0; iGetNBE(); i++) { const int bdr_attr = mesh->GetBdrAttribute(i); if (bdr_attr_marker[bdr_attr-1] == 0) { continue; } eltrans = fes->GetBdrElementTransformation(i); const FiniteElement &el = *fes->GetBE(i); const IntegrationRule *ir = NULL; if (ir == NULL) { const int order = 2*el.GetOrder() + eltrans->OrderW(); // <----- ir = &IntRules.Get(eltrans->GetGeometryType(), order); } for (int pi = 0; pi < ir->GetNPoints(); ++pi) { const IntegrationPoint &ip = ir->IntPoint(pi); eltrans->SetIntPoint(&ip); CalcOrtho(eltrans->Jacobian(), nor); double a = nor.Norml2(); eltrans->Transform(ip, x); b_.GetVectorValue(*eltrans, ip, b); h_.GetVectorValue(*eltrans, ip, h); double bn = b * nor / a; double hn = h * nor / a; add(h, -hn / a, nor, ht); f.Set(ip.weight * bn * bn / mu0_, nor); f.Add(ip.weight * a * bn, ht); f.Add(-0.5 * ip.weight * (mu0_ * (ht * ht) + bn * bn / mu0_), nor); loc_trq[0] += (x[1]-cent[1]) * f[2] - (x[2]-cent[2]) * f[1]; loc_trq[1] += (x[2]-cent[2]) * f[0] - (x[0]-cent[0]) * f[2]; loc_trq[2] += (x[0]-cent[0]) * f[1] - (x[1]-cent[1]) * f[0]; } } MPI_Allreduce(loc_trq, trq, 3, MPI_DOUBLE, MPI_SUM, fes->GetComm()); } void Torque::ComputeTorqueOnVolume(const Array &attr_marker, const Vector ¢, Vector &trq) { trq = 0.0; ParFiniteElementSpace * fes = b_.ParFESpace(); ParMesh *mesh = b_.ParFESpace()->GetParMesh(); ElementTransformation *eltrans = NULL; Vector b, j, x(3), f(3), t(3), loc_trq(3); loc_trq = 0.0; for (int i=0; iGetNE(); i++) { const int attr = mesh->GetAttribute(i); if (attr_marker[attr-1] == 0) { continue; } eltrans = fes->GetElementTransformation(i); const FiniteElement &el = *fes->GetFE(i); const IntegrationRule *ir = NULL; if (ir == NULL) { const int order = 2*el.GetOrder() + eltrans->OrderW(); // <----- ir = &IntRules.Get(eltrans->GetGeometryType(), order); } for (int pi = 0; pi < ir->GetNPoints(); ++pi) { const IntegrationPoint &ip = ir->IntPoint(pi); eltrans->SetIntPoint(&ip); eltrans->Transform(ip, x); b_.GetVectorValue(*eltrans, ip, b); j_.GetVectorValue(*eltrans, ip, j); f[0] = j[1] * b[2] - j[2] * b[1]; f[1] = j[2] * b[0] - j[0] * b[2]; f[2] = j[0] * 
b[1] - j[1] * b[0]; t[0] = (x[1]-cent[1]) * f[2] - (x[2]-cent[2]) * f[1]; t[1] = (x[2]-cent[2]) * f[0] - (x[0]-cent[0]) * f[2]; t[2] = (x[0]-cent[0]) * f[1] - (x[1]-cent[1]) * f[0]; loc_trq.Add(ip.weight * eltrans->Weight(), t); } } MPI_Allreduce(loc_trq, trq, 3, MPI_DOUBLE, MPI_SUM, fes->GetComm()); } MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Tesla Notes"}, {"location": "tesla-notes/#magnetostatic-equations", "text": "The magnetostatic equations that we start from are the following: $$\\nabla\\times\\bf H = \\bf J \\label{ampere}$$ $$\\nabla\\cdot{\\bf B}= 0 \\label{mag_gauss}$$ $${\\bf B} = \\mu{\\bf H}+\\mu_0{\\bf M} \\label{const}$$ Where \\eqref{ampere} is Amp\u00e8re's Law, \\eqref{mag_gauss} is Gauss's Law for Magnetism, and \\eqref{const} is a somewhat atypical way to write the Constitutive Relation between ${\\bf B}$ and ${\\bf H}$. The constitutive relation used here follows \"Classical Electrodynamics\" 3rd edition by J.D. Jackson and uses ${\\bf M}$, measured in A/m, to represent the magnetization of a permanent magnet. Some sources would instead use ${\\bf B}_r=\\mu_0{\\bf M}$ to represent a residual magnetization, measured in tesla. These conventions are, of course, mathematically equivalent but the choice made in this miniapp does seem a bit odd as I look at it now. These equations can be combined if we make use of the fact that $\\nabla\\cdot{\\bf B}=0$ implies ${\\bf B}=\\nabla\\times{\\bf A}$ for some vector potential ${\\bf A}$. This leads to: $$\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf A}) = {\\bf J}+ \\nabla\\times(\\mu^{-1}\\mu_0{\\bf M})$$ This equation supports a current source density, a permanent magnetization, surface current boundary conditions, and fixed ${\\bf A}$ boundary condition which can be used to apply an external magnetic field. There also exists a special case in magnetostatics when the current density is equal to zero. In this case $\\nabla\\times{\\bf H}=0$ which implies that the magnetic field can be computed as ${\\bf H}=-\\nabla\\Phi_M$. This leads to the scalar potential formulation which we will not consider further except to say that the electrostatic solver, named volta , can be adapted to model such situations.", "title": "Magnetostatic Equations"}, {"location": "tesla-notes/#the-tesla-miniapp", "text": "The tesla miniapp models the magnetostatic equation for the magnetic vector potential ${\\bf A}$. It includes source terms derived from a volumetric current source ${\\bf J}$, magnetization vector ${\\bf M}$, or surface currents ${\\bf K}$. $$\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf A}) = {\\bf J}+\\nabla\\times(\\mu^{-1}\\mu_0{\\bf M})$$ $$\\hat{n}\\times(\\mu^{-1}\\nabla\\times{\\bf A}) = \\hat{n}\\times{\\bf K}$$ The magnetic vector potential will be approximated in H(Curl) so that the left hand side operator is well defined. 
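In MFEM such an H(Curl) space is built from an ND_FECollection. A minimal, illustrative sketch, where pmesh and p are assumed names for the parallel mesh and the basis order rather than variables from the miniapp:

ND_FECollection fec_nd(p, pmesh->Dimension());
ParFiniteElementSpace HCurlFESpace(pmesh, &fec_nd);
ParGridFunction a_gf(&HCurlFESpace); // holds the coefficients a_i in the expansion below
a_gf = 0.0;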
$${\\bf A} \\approx \\sum_i a_i {\\bf W}_i (\\vec{x})$$ Inserting this into the left hand side of the equation and integrating the resulting equation against each H(Curl) basis function leads to the following weak form: $$\\begin{align} \\int_{\\Omega}{\\bf W}_{i}(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf A})]d\\Omega & \\approx \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot\\{\\nabla\\times[\\mu^{-1}\\nabla\\times(\\sum_j a_j{\\bf W}_j(\\vec{x}))]\\}d\\Omega \\\\ & = \\sum_j a_j\\{\\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]d\\Omega\\} \\end{align}$$ The expression in curly braces depends only on our material coefficient and our basis functions. This particular integral requires a little more manipulation to move the outermost curl operator onto the H(Curl) basis function. $$\\begin{aligned} \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Omega &=& \\int_\\Omega(\\nabla\\times{\\bf W}_i(\\vec{x}))\\cdot(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))\\,d\\Omega \\\\ &-& \\int_\\Omega\\nabla\\cdot[{\\bf W}_i(\\vec{x})\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Omega \\\\ &=& \\int_\\Omega(\\nabla\\times{\\bf W}_i(\\vec{x}))\\cdot(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))\\,d\\Omega \\\\ &-& \\int_\\Gamma\\hat{n}\\cdot[{\\bf W}_i(\\vec{x})\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Gamma \\\\ &=& \\int_\\Omega(\\nabla\\times{\\bf W}_i(\\vec{x}))\\cdot(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))\\,d\\Omega \\\\ &+& \\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot[\\hat{n}\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Gamma \\\\ \\end{aligned}$$ The first integral remaining on the right hand side is implemented in MFEM as a BilinearFormIntegrator named CurlCurlIntegrator . The second integral, the boundary integral, gives rise to a Neumann boundary condition which will be discussed further in Section 2.1.3 .", "title": "The tesla Miniapp"}, {"location": "tesla-notes/#source-terms", "text": "", "title": "Source Terms"}, {"location": "tesla-notes/#current-density-bf-j", "text": "The current density ${\\bf J}$ requires special care. In order for the magnetostatic equations to possess a solution ${\\bf J}$ must be in the range of the curl operator. Another way to say this is that the divergence of ${\\bf J}$ must be zero. If $\\nabla\\cdot{\\bf J}\\neq 0$ we can correct this by adding the gradient of a scalar field. If we start with some initial estimate of the current density which we call ${\\bf J}_0$, $$\\begin{aligned} \\nabla\\cdot({\\bf J}_0-\\nabla\\Psi) &=& 0 \\\\ \\nabla\\cdot\\nabla\\Psi &=& \\nabla\\cdot{\\bf J}_0 \\\\ {\\bf J}& = & {\\bf J}_0 - \\nabla\\Psi \\end{aligned}$$ The current density ${\\bf J}$ computed in this manner will be divergence free and therefore it will be in the range of the curl operator. Normally, in the continuous world, we simply define ${\\bf J}$ directly, however, in the discrete world we can only approximate ${\\bf J}$ so we must always perform this divergence cleaning procedure on our approximations of ${\\bf J}$. Failure to do so can lead to lack of convergence or complete failure of the solve. In MFEM the divergence cleaning procedure is handled by a class called DivergenceFreeProjector which is not a part of the MFEM library itself. 
It is provided as part of a collection of convenience classes in the miniapps/common subdirectory.", "title": "Current Density ${\\bf J}$"}, {"location": "tesla-notes/#magnetization-bf-m", "text": "The magnetization ${\\bf M}$ is intended to represent permanent magnetics or other regions of prescribed magnetization. In the Tesla miniapp ${\\bf M}$ is discretized using H(Div) basis functions which allow its tangential components to be discontinuous. Its curl appears in the magnetostatic equations as a source term and this curl operation ensures that this source lies in the range of the curl operator so no divergence cleaning operation is needed for this portion of the source. In the Tesla miniapp this source is computed and applied on lines 338-343 in the TeslaSolver::Solve() function. The weak curl operator is configured on lines 168-175 in the TeslaSolver constructor.", "title": "Magnetization ${\\bf M}$"}, {"location": "tesla-notes/#sec:surf_current", "text": "The integration by parts needed to create the weak form of the curl-curl operators also leads to a boundary integral: $$\\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot[\\hat{n}\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Gamma$$ This means that our weak curl-curl operator applied to ${\\bf A}$ differs from the continuous curl-curl operator by a surface integral of the form: $$\\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot[\\hat{n}\\times(\\mu^{-1}\\nabla\\times{\\bf A})]\\,d\\Gamma = \\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot(\\hat{n}\\times{\\bf H})\\,d\\Gamma$$ If we do nothing to account for this boundary integral we are implicitly setting it equal to zero which amounts to a boundary condition on the tangential part of the magnetic field i.e. $\\hat{n}\\times{\\bf H}=0$. Another possibility is to set a surface current boundary condition i.e. $\\hat{n}\\times{\\bf H}=\\hat{n}\\times{\\bf K}$. This could be done by using a ParLinearForm object to integrate $\\hat{n}\\times{\\bf K}$ over the portion of the boundary where ${\\bf K}$ is non-zero and adding the resulting vector to the right hand side of the linear system. However, this is not the approach used in the Tesla miniapp. In Tesla we employ a trick based on the Stoke's theorem. A surface current leads to a discontinuity in the tangential part of ${\\bf H}$ on the boundary. Similarly, a discontinuity in ${\\bf H}$ leads to a discontinuity in ${\\bf A}$ on the boundary. Therefore we can set the tangential part of ${\\bf A}$ to equal ${\\bf K}$ and we get the correct behavior as long as we set the tangential part of ${\\bf A}=0$ elsewhere on the boundary. To be honest I'm not sure how valid this approach is but it does seem to work and it can improve solver convergence. I would recommend confirming this approach before relying on it.", "title": "Surface Current ${\\bf K}$"}, {"location": "tesla-notes/#post-processing", "text": "", "title": "Post-Processing"}, {"location": "tesla-notes/#sec:h_comp", "text": "The magnetic field ${\\bf H}$ needs to have tangential continuity so we approximate it using the H(Curl) basis: $${\\bf H}\\approx\\sum_i h_i{\\bf W}_i(\\vec{x})$$ Recall that the magnetic flux ${\\bf B}$ is approximated using the H(Div) basis due to the continuity of its normal component. $${\\bf B}\\approx\\sum_i b_i{\\bf F}_i(\\vec{x})$$ To compute ${\\bf H}$ from ${\\bf B}$ we make use of the constitutive equation ${\\bf B}=\\mu{\\bf H}$. 
Inserting our approximations and integrating this equation against each H(Curl) basis function we obtain the following: $$\\sum_j h_j\\int_\\Omega\\mu{\\bf W}_i\\cdot{\\bf W}_j\\,d\\Omega = \\sum_k b_k\\int_\\Omega{\\bf W}_i\\cdot{\\bf F}_k\\,d\\Omega$$ This set of linear equations is equivalent to the matrix equation: $$M_1(\\mu)h = M_{21}b$$ Where $M_1(\\mu)$ is an H(Curl) mass matrix incorporating the material coefficient $\\mu$ which is implemented in MFEM as a BilinearFormIntegrator named VectorFEMassIntegrator . The $M_{21}$ operator is a rectangular matrix which maps H(Div) to H(Curl) and is also built using the VectorFEMassIntegrator but with the default material coefficient which is equal to 1. The solution of this linear system is usually obtained with a conjugate gradient iterative solver along with a diagonal scaling preconditioner. Since the matrix to be inverted is a mass matrix this solution is usually very efficient involving fewer than thirty solver iterations. It is important to point out that an H(Curl) approximation usually has more degrees of freedom than a comparable H(Div) approximation. In the interior of the domain the density of degrees of freedom are approximately equal but H(Curl) approximations tend to have more degrees of freedom on the boundary. Consequently, this type of conversion can produce H(Curl) approximations with poor accuracy near the boundary. If the tangential components of ${\\bf B}$ are nearly constant within the elements adjacent to the boundary the conversion can produce a good approximation. However, if these tangential components vary too rapidly non-physical oscillations can occur in ${\\bf H}$. To alleviate these oscillations Dirichlet boundary conditions can be applied during the solution of ${\\bf H}$ provided that reasonable values for $(\\hat{n}\\times{\\bf H})\\times\\hat{n}$ can be determined. In the present magnetostatics context we can reuse any Neumann boundary conditions used during the solution of ${\\bf A}$ since these were equivalent to setting $\\hat{n}\\times{\\bf H}$ on the boundary.", "title": "Computation of ${\\bf H}$"}, {"location": "tesla-notes/#magnetic-energy-in-a-region", "text": "The tesla miniapp does not compute the energy in the magnetic field but such a computation should be easy to add. There are two basic procedures for computing energy in MFEM. One involves a bilinear form and the other a linear form. The bilinear form approach makes sense when the energies of multiple fields will be computed with the same operator so that the cost of building the bilinear form can be amortized. In a magnetostatic problem the linear form approach is likely to be more efficient. The usual formula for magnetic energy is $u = \\frac{1}{2}\\int_\\Omega{\\bf H}\\cdot{\\bf B}\\,d\\Omega$. There are many ways to compute this quantity in MFEM but perhaps the most convenient is to make use of a VectorCoefficient and a ParLinearForm . For example let's assume we have a coefficient for $\\mu^{-1}$ and a GridFunction for ${\\bf B}$ called Bgf : { VectorGridFunctionCoefficient BCoef(&Bgf); ScalarVectorProductCoefficient HCoef(muInvCoef, BCoef); ParLinearForm Hlf(&HDivFESpace); Hlf.AddDomainIntegrator(new VectorFEDomainIntegrator(HCoef)); Hlf.Assemble(); double energy = 0.5 * Hlf(Bgf); } This integral can be restricted to some region, defined by a set of element attributes, by incorporating a VectorRestrictedCoefficient . 
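A sketch of that restriction, reusing the names from the example above (HCoef, HDivFESpace, Bgf) together with the assumed names pmesh and region_attr, and using the VectorFEDomainLFIntegrator class as in Appendix A:

// Restrict the energy integral to elements with attribute region_attr (sketch).
Array<int> region(pmesh->attributes.Max());
region = 0;
region[region_attr - 1] = 1;
VectorRestrictedCoefficient HRegionCoef(HCoef, region);
ParLinearForm Hlf_r(&HDivFESpace);
Hlf_r.AddDomainIntegrator(new VectorFEDomainLFIntegrator(HRegionCoef));
Hlf_r.Assemble();
double region_energy = 0.5 * Hlf_r(Bgf);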
Other forms of energy such as $\\frac{1}{2}\\int_\\Omega{\\bf J}\\cdot{\\bf A}\\,d\\Omega$ or perhaps $\\int_\\Omega{\\bf M}\\cdot{\\bf B}\\,d\\Omega$ could be computed in a similar manner.", "title": "Magnetic Energy in a Region"}, {"location": "tesla-notes/#torque-on-a-current-density", "text": "Torque can also be defined as a volume integral so we can employ a technique similar to the one used for the energy computation. The important difference is that torque is a vector quantity so we will need to integrate each of its vector components separately. This will likely require custom coefficients but the procedure should be straightforward. The existing vector coefficient classes ScalarVectorProductCoefficient and VectorCrossProductCoefficient should serve as guides for how this can be accomplished.", "title": "Torque on a Current Density"}, {"location": "tesla-notes/#torque-on-a-permanent-magnet", "text": "", "title": "Torque on a Permanent Magnet"}, {"location": "tesla-notes/#torque-on-a-surface-current", "text": "In theory a surface integral can be computed in a very similar manner to a volume integral. However, discontinuous finite element spaces such as H(Curl), H(Div), or L2 create a complication. Approximations made with these discontinuous fields do not possess well defined values on surfaces. Consequently such an integral could lack precision or even be multi-valued. To overcome this limitation it may be necessary to compute different contributions to the torque in different manners and combine the results. For example the normal component of ${\\bf B}$ is well defined on surfaces. Therefore the force ${\\bf K}\\times{\\bf B}$ may be inaccurate but the quantity $(\\hat{n}\\cdot{\\bf B}){\\bf K}\\times\\hat{n}$ will be more reliable. To obtain another contribution to the torque we can use the tangential components of ${\\bf H}$ as $\\mu{\\bf K}\\times[(\\hat{n}\\times{\\bf H})\\times\\hat{n}]$. 
This of course assumes that we have an accurate representation of ${\\bf H}$ on this surface which may not be the case if the surface is an outer boundary (see Section Computation of H ).", "title": "Torque on a Surface Current"}, {"location": "tesla-notes/#appendix-a-magnetic-energy", "text": "class MagneticEnergy { private: const ParGridFunction & b_; const ParGridFunction & h_; public: MagneticEnergy(const ParGridFunction & b, const ParGridFunction & h) : b_(b), h_(h) {} double ComputeEnergy() { VectorGridFunctionCoefficient h_coef(&h_); ParLinearForm h_lf(b_.ParFESpace()); h_lf.AddDomainIntegrator(new VectorFEDomainLFIntegrator(h_coef)); h_lf.Assemble(); return 0.5 * h_lf(b_); } double ComputeEnergy(const Array & elem_attr_marker) { VectorGridFunctionCoefficient h_coef(&h_); ParLinearForm h_lf(b_.ParFESpace()); h_lf.AddDomainIntegrator(new VectorFEDomainLFIntegrator(h_coef), const_cast&>(elem_attr_marker)); h_lf.Assemble(); return 0.5 * h_lf(b_); } };", "title": "Appendix A: Magnetic Energy"}, {"location": "tesla-notes/#appendix-b-torque", "text": "class Torque { private: const ParGridFunction & b_; const ParGridFunction & h_; const ParGridFunction & j_; public: Torque(const ParGridFunction & b, const ParGridFunction & h, const ParGridFunction & j) : b_(b), h_(h), j_(j) {} void ComputeTorqueOnSurface(const Array &bdr_attr_marker, const Vector ¢, Vector &T); void ComputeTorqueOnVolume(const Array &vol_attr_marker, const Vector ¢, Vector &T); }; void Torque::ComputeTorqueOnSurface(const Array &bdr_attr_marker, const Vector ¢, Vector &trq) { trq = 0.0; ParFiniteElementSpace * fes = b_.ParFESpace(); ParMesh *mesh = b_.ParFESpace()->GetParMesh(); ElementTransformation *eltrans = NULL; Vector b, h, ht(3), nor(3), x(3), f(3), loc_trq(3); loc_trq = 0.0; for (int i=0; iGetNBE(); i++) { const int bdr_attr = mesh->GetBdrAttribute(i); if (bdr_attr_marker[bdr_attr-1] == 0) { continue; } eltrans = fes->GetBdrElementTransformation(i); const FiniteElement &el = *fes->GetBE(i); const IntegrationRule *ir = NULL; if (ir == NULL) { const int order = 2*el.GetOrder() + eltrans->OrderW(); // <----- ir = &IntRules.Get(eltrans->GetGeometryType(), order); } for (int pi = 0; pi < ir->GetNPoints(); ++pi) { const IntegrationPoint &ip = ir->IntPoint(pi); eltrans->SetIntPoint(&ip); CalcOrtho(eltrans->Jacobian(), nor); double a = nor.Norml2(); eltrans->Transform(ip, x); b_.GetVectorValue(*eltrans, ip, b); h_.GetVectorValue(*eltrans, ip, h); double bn = b * nor / a; double hn = h * nor / a; add(h, -hn / a, nor, ht); f.Set(ip.weight * bn * bn / mu0_, nor); f.Add(ip.weight * a * bn, ht); f.Add(-0.5 * ip.weight * (mu0_ * (ht * ht) + bn * bn / mu0_), nor); loc_trq[0] += (x[1]-cent[1]) * f[2] - (x[2]-cent[2]) * f[1]; loc_trq[1] += (x[2]-cent[2]) * f[0] - (x[0]-cent[0]) * f[2]; loc_trq[2] += (x[0]-cent[0]) * f[1] - (x[1]-cent[1]) * f[0]; } } MPI_Allreduce(loc_trq, trq, 3, MPI_DOUBLE, MPI_SUM, fes->GetComm()); } void Torque::ComputeTorqueOnVolume(const Array &attr_marker, const Vector ¢, Vector &trq) { trq = 0.0; ParFiniteElementSpace * fes = b_.ParFESpace(); ParMesh *mesh = b_.ParFESpace()->GetParMesh(); ElementTransformation *eltrans = NULL; Vector b, j, x(3), f(3), t(3), loc_trq(3); loc_trq = 0.0; for (int i=0; iGetNE(); i++) { const int attr = mesh->GetAttribute(i); if (attr_marker[attr-1] == 0) { continue; } eltrans = fes->GetElementTransformation(i); const FiniteElement &el = *fes->GetFE(i); const IntegrationRule *ir = NULL; if (ir == NULL) { const int order = 2*el.GetOrder() + eltrans->OrderW(); // <----- ir = 
&IntRules.Get(eltrans->GetGeometryType(), order); } for (int pi = 0; pi < ir->GetNPoints(); ++pi) { const IntegrationPoint &ip = ir->IntPoint(pi); eltrans->SetIntPoint(&ip); eltrans->Transform(ip, x); b_.GetVectorValue(*eltrans, ip, b); j_.GetVectorValue(*eltrans, ip, j); f[0] = j[1] * b[2] - j[2] * b[1]; f[1] = j[2] * b[0] - j[0] * b[2]; f[2] = j[0] * b[1] - j[1] * b[0]; t[0] = (x[1]-cent[1]) * f[2] - (x[2]-cent[2]) * f[1]; t[1] = (x[2]-cent[2]) * f[0] - (x[0]-cent[0]) * f[2]; t[2] = (x[0]-cent[0]) * f[1] - (x[1]-cent[1]) * f[0]; loc_trq.Add(ip.weight * eltrans->Weight(), t); } } MPI_Allreduce(loc_trq, trq, 3, MPI_DOUBLE, MPI_SUM, fes->GetComm()); } MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Appendix B: Torque"}, {"location": "tools/", "text": "Tools This page provides a brief description of several useful tool programs that are distributed in the MFEM's miniapps/tools directory. General Tools Display Basis The display-basis miniapp, found under miniapps/tools , visualizes various types of finite element basis functions on a single mesh element in 1D, 2D, and 3D. The element type, basis type and order can be changed interactively. The mesh element is either the reference element, or a simple transformation of it. Low-Order Refined Transfer The lor-transfer miniapp, found under miniapps/tools demonstrates the capability to generate a low-order refined mesh from a high-order mesh, and to transfer solutions between these meshes. Grid functions can be transferred between the coarse, high-order mesh and the low-order refined mesh using either $L^2$ projection or pointwise evaluation. These transfer operators can be designed to discretely conserve mass and to recover the original high-order solution when transferring a low-order grid function that was obtained by restricting a high-order grid function to the low-order refined space. DataCollection Tools Convert DC This tool, named convert-dc in the miniapps/tools subdirectory, demonstrates how to convert between MFEM's different concrete DataCollection options. Currently supported data collection type options: Nickname Full Class Name visit VisItDataCollection (default) sidre or sidre_hdf5 SidreDataCollection json ConduitDataCollection w/ protocol json conduit_json ConduitDataCollection w/ protocol conduit_json conduit_bin ConduitDataCollection w/ protocol conduit_bin hdf5 ConduitDataCollection w/ protocol hdf5 Load DC The load-dc miniapp, found in the miniapps/tools subdirectory, loads and visualizes (in GLVis) previously saved data using DataCollection sub-classes, see e.g. Example 5/5p. Currently, only the VisItDataCollection class is supported. Get Values The get-values miniapp, found in miniapps/tools , loads previously saved data using DataCollection sub-classes and outputs field values at a set of points. Currently, only the VisItDataCollection class is supported. # Number of fields 3 # Legend # \"Index\" \"Location\":2 \"pressure\":1 \"velocity\":2 2 1 2 # Number of points 6 0 0.0 0.8 0.717336 -0.716172 -0.696674 1 0.2 0.8 0.876045 -0.875874 -0.852278 2 0.4 0.8 1.06999 -1.07106 -1.03923 3 0.6 0.8 1.30719 -1.30931 -1.26903 4 0.8 0.8 1.59678 -1.59601 -1.54949 5 1.0 0.8 1.94995 -1.94853 -1.89371 Point locations can be specified on the command line using -p or within a data file whose name can be given with option -pf . The data file format is: number_of_points space_dimension x_0 y_0 ... x_1 y_1 ... etc. By default all available fields are evaluated. 
The list of fields can be reduced by specifying the desired field names with -fn . The -fn option takes a space separated list of field names surrounded by quotes. Field names containing spaces, such as \"Field 1\" and \"Field 2\", can be entered as: get-values -fn \"Field\\ 1 Field\\ 2\" By default the data is written to standard out. This can be overwritten with the -o [filename] option. The output format contains comments as well as sizing information to aid in subsequent processing. The bulk of the data consists of one line per point with a 0-based integer index followed by the point coordinates and then the field data. A legend, appearing before the bulk data, shows the order of the fields along with the number of values per field (for vector data). MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Tools"}, {"location": "tools/#tools", "text": "This page provides a brief description of several useful tool programs that are distributed in the MFEM's miniapps/tools directory.", "title": "Tools"}, {"location": "tools/#general-tools", "text": "", "title": "General Tools"}, {"location": "tools/#display-basis", "text": "The display-basis miniapp, found under miniapps/tools , visualizes various types of finite element basis functions on a single mesh element in 1D, 2D, and 3D. The element type, basis type and order can be changed interactively. The mesh element is either the reference element, or a simple transformation of it.", "title": "Display Basis"}, {"location": "tools/#low-order-refined-transfer", "text": "The lor-transfer miniapp, found under miniapps/tools demonstrates the capability to generate a low-order refined mesh from a high-order mesh, and to transfer solutions between these meshes. Grid functions can be transferred between the coarse, high-order mesh and the low-order refined mesh using either $L^2$ projection or pointwise evaluation. These transfer operators can be designed to discretely conserve mass and to recover the original high-order solution when transferring a low-order grid function that was obtained by restricting a high-order grid function to the low-order refined space.", "title": "Low-Order Refined Transfer"}, {"location": "tools/#datacollection-tools", "text": "", "title": "DataCollection Tools"}, {"location": "tools/#convert-dc", "text": "This tool, named convert-dc in the miniapps/tools subdirectory, demonstrates how to convert between MFEM's different concrete DataCollection options. Currently supported data collection type options: Nickname Full Class Name visit VisItDataCollection (default) sidre or sidre_hdf5 SidreDataCollection json ConduitDataCollection w/ protocol json conduit_json ConduitDataCollection w/ protocol conduit_json conduit_bin ConduitDataCollection w/ protocol conduit_bin hdf5 ConduitDataCollection w/ protocol hdf5", "title": "Convert DC"}, {"location": "tools/#load-dc", "text": "The load-dc miniapp, found in the miniapps/tools subdirectory, loads and visualizes (in GLVis) previously saved data using DataCollection sub-classes, see e.g. Example 5/5p. Currently, only the VisItDataCollection class is supported.", "title": "Load DC"}, {"location": "tools/#get-values", "text": "The get-values miniapp, found in miniapps/tools , loads previously saved data using DataCollection sub-classes and outputs field values at a set of points. Currently, only the VisItDataCollection class is supported. 
# Number of fields 3 # Legend # \"Index\" \"Location\":2 \"pressure\":1 \"velocity\":2 2 1 2 # Number of points 6 0 0.0 0.8 0.717336 -0.716172 -0.696674 1 0.2 0.8 0.876045 -0.875874 -0.852278 2 0.4 0.8 1.06999 -1.07106 -1.03923 3 0.6 0.8 1.30719 -1.30931 -1.26903 4 0.8 0.8 1.59678 -1.59601 -1.54949 5 1.0 0.8 1.94995 -1.94853 -1.89371 Point locations can be specified on the command line using -p or within a data file whose name can be given with option -pf . The data file format is: number_of_points space_dimension x_0 y_0 ... x_1 y_1 ... etc. By default all available fields are evaluated. The list of fields can be reduced by specifying the desired field names with -fn . The -fn option takes a space separated list of field names surrounded by quotes. Field names containing spaces, such as \"Field 1\" and \"Field 2\", can be entered as: get-values -fn \"Field\\ 1 Field\\ 2\" By default the data is written to standard out. This can be overwritten with the -o [filename] option. The output format contains comments as well as sizing information to aid in subsequent processing. The bulk of the data consists of one line per point with a 0-based integer index followed by the point coordinates and then the field data. A legend, appearing before the bulk data, shows the order of the fields along with the number of values per field (for vector data). MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Get Values"}, {"location": "toys/", "text": "Toys A handful of \"toy\" miniapps of less serious nature demonstrate the flexibility of MFEM (and provide a bit of fun): Automata The automata miniapp implements a one dimensional elementary cellular automata as described in: Wolfram MathWorld . This miniapp shows a completely unnecessary use of the finite element method to simply display binary data (but it's fun to play with). The automata miniapp has only three options; -vis or -no-vis to enable or disable visualization, -ns which defines the number of steps to evolve the cellular automata, and -r to select the rule which is applied at each step. Rules for this type of cellular automata consist of a sequence of 8 bits which are normally passed as an integer 0-255. The rule defines how to update each cell based on the current values of that cell and its two nearest neighbors. Life The life miniapp implements Conway's Game of Life. A few simple starting positions are available as well as a random initial state. The game will terminate only if two successive iterations are identical. Users can control the size of the domain and the initial placement of simple objects like blinkers and gliders . Arbitrary patterns can be supplied through the --sketch-pad or -sp option. The sketch pad was used to produce the above image with the command line: life -nx 30 -sp '11 11 1 1 1 1 1 1 1 1 2 1 0 1 1 1 1 0 1 2 1 1 1 1 1 1 1 1' The values following -sp are the starting coordinates of the pattern followed by zeros or ones to indicate pixels that should be off or on, any twos indicate new lines in the pattern. Lissajous The lissajous miniapp generates two different Lissajous curves in 3D which appear to spin vertically and/or horizontally, even though the net motion is the same. Vertical Rotation Horizontal Rotation Based on the 2019 Illusion of the year \"Dual Axis Illusion\" by Frank Force, see Dual Axis Illusion . Mandel The mandel miniapp is a specialized version of the shaper miniapp which adapts a mesh to the Mandelbrot set. 
Both planar and surface meshes are supported. Mondrian The mondrian miniapp is a specialized version of the shaper miniapp that converts an input image to an AMR mesh. It allows the fast approximate meshing of any domain for which there is an image. The input image should be in 8-bit grayscale PGM format. You can use a number of image manipulation tools, such as GIMP (gimp.org) and ImageMagick's convert utility (imagemagick.org/script/convert.php) to convert your image to this format as a pre-processing step, e.g.: /usr/bin/convert australia.svg -compress none -depth 8 australia.pgm Rubik The rubik miniapp implements an interactive model of a Rubik's Cube\u2122 puzzle. The basic interactive command is of the form [xyz][1,2,3][0-3] which rotates, about the x, y, or z axis, a single tier, indicated by the first integer, by a number of increments, indicated by the final integer. Any manipulation of the cube can be accomplished with a sequence of these simple three character commands. Common commands: Command Action R Resets or re-paints the cube S or s Solve the cube starting from the top and working down r[0-9]+ Specific number of random moves p Print the current state of the cube to the screen q Quit Other commands: Command Action T Solve the top tier only M Solve the middle tier assuming the top has already been solved B Solve the bottom tier assuming the top and middle are done c Swap bottom tier corners in positions 0 and 1 t[0,1] Twist, in place, three of the bottom tier corners e[0,1] Permute three of the bottom tier edges f[2,4] Flip, in place, 2 or 4 of the bottom tier edges Snake The snake miniapp provides a light-hearted example of mesh manipulation and GLVis integration. The Rubik's Snake\u2122 a.k.a. Twist is a simple tool for experimenting with geometric shapes in 3D. It consists of 24 triangular prisms attached in a row so that neighboring wedges can rotate against each other but cannot be separated. An astonishing variety of different configurations can be reached. Thirteen pre-programmed configurations are available via the -c [0-12] command line option. Other configurations can be reached with the -u option. Each configuration must be 23 integers long corresponding to the 23 joints making up the Snake\u2122 puzzle. The values can be 0-3 indicating how far to rotate the joint in the clockwise direction when looking along the snake from the starting (lower) end. The values 0, 1, 2, and 3 correspond to angles of 0, 90, 180, and 270 degrees respectively.", "title": "Toys"}, {"location": "toys/#toys", "text": "A handful of \"toy\" miniapps of less serious nature demonstrate the flexibility of MFEM (and provide a bit of fun):", "title": "Toys"}, {"location": "toys/#automata", "text": "The automata miniapp implements a one dimensional elementary cellular automata as described in: Wolfram MathWorld . This miniapp shows a completely unnecessary use of the finite element method to simply display binary data (but it's fun to play with). The automata miniapp has only three options; -vis or -no-vis to enable or disable visualization, -ns which defines the number of steps to evolve the cellular automata, and -r to select the rule which is applied at each step. Rules for this type of cellular automata consist of a sequence of 8 bits which are normally passed as an integer 0-255. 
The rule defines how to update each cell based on the current values of that cell and its two nearest neighbors.", "title": "Automata"}, {"location": "toys/#life", "text": "The life miniapp implements Conway's Game of Life. A few simple starting positions are available as well as a random initial state. The game will terminate only if two successive iterations are identical. Users can control the size of the domain and the initial placement of simple objects like blinkers and gliders . Arbitrary patterns can be supplied through the --sketch-pad or -sp option. The sketch pad was used to produce the above image with the command line: life -nx 30 -sp '11 11 1 1 1 1 1 1 1 1 2 1 0 1 1 1 1 0 1 2 1 1 1 1 1 1 1 1' The values following -sp are the starting coordinates of the pattern followed by zeros or ones to indicate pixels that should be off or on, any twos indicate new lines in the pattern.", "title": "Life"}, {"location": "toys/#lissajous", "text": "The lissajous miniapp generates two different Lissajous curves in 3D which appear to spin vertically and/or horizontally, even though the net motion is the same. Vertical Rotation Horizontal Rotation Based on the 2019 Illusion of the year \"Dual Axis Illusion\" by Frank Force, see Dual Axis Illusion .", "title": "Lissajous"}, {"location": "toys/#mandel", "text": "The mandel miniapp is a specialized version of the shaper miniapp which adapts a mesh to the Mandelbrot set. Both planar and surface meshes are supported.", "title": "Mandel"}, {"location": "toys/#mondrian", "text": "The mondrian miniapp is a specialized version of the shaper miniapp that converts an input image to an AMR mesh. It allows the fast approximate meshing of any domain for which there is an image. The input image should be in 8-bit grayscale PGM format. You can use a number of image manipulation tools, such as GIMP (gimp.org) and ImageMagick's convert utility (imagemagick.org/script/convert.php) to convert your image to this format as a pre-processing step, e.g.: /usr/bin/convert australia.svg -compress none -depth 8 australia.pgm", "title": "Mondrian"}, {"location": "toys/#rubik", "text": "The rubik miniapp implements an interactive model of a Rubik's Cube\u2122 puzzle. The basic interactive command is of the form [xyz][1,2,3][0-3] which rotates, about the x, y, or z axis, a single tier, indicated by the first integer, by a number of increments, indicated by the final integer. Any manipulation of the cube can be accomplished with a sequence of these simple three character commands. Common commands: Command Action R Resets or re-paints the cube S or s Solve the cube starting from the top and working down r[0-9]+ Specific number of random moves p Print the current state of the cube to the screen q Quit Other commands: Command Action T Solve the top tier only M Solve the middle tier assuming the top has already been solved B Solve the bottom tier assuming the top and middle are done c Swap bottom tier corners in positions 0 and 1 t[0,1] Twist, in place, three of the bottom tier corners e[0,1] Permute three of the bottom tier edges f[2,4] Flip, in place, 2 or 4 of the bottom tier edges", "title": "Rubik"}, {"location": "toys/#snake", "text": "The snake miniapp provides a light-hearted example of mesh manipulation and GLVis integration. The Rubik's Snake\u2122 a.k.a. Twist is a simple tool for experimenting with geometric shapes in 3D. It consists of 24 triangular prisms attached in a row so that neighboring wedges can rotate against each other but cannot be separated. 
An astonishing variety of different configurations can be reached. Thirteen pre-programmed configurations are available via the -c [0-12] command line option. Other configurations can be reached with the -u option. Each configuration must be 23 integers long corresponding to the 23 joints making up the Snake\u2122 puzzle. The values can be 0-3, indicating how far to rotate the joint in the clockwise direction when looking along the snake from the starting (lower) end. The values 0, 1, 2, and 3 correspond to angles of 0, 90, 180, and 270 degrees, respectively.", "title": "Snake"}, {"location": "videos/", "text": "MFEM Videos A collection of MFEM-related videos, including recorded talks from the MFEM workshops and conference presentations. MFEM Workshop 2024 Aaron Fisher (LLNL) Welcome and Overview October 22-24, 2024 | MFEM Workshop 2024 Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community resources. Tzanio Kolev (LLNL) The State of MFEM October 22-24, 2024 | MFEM Workshop 2024 MFEM project lead Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities, examples, and mini-apps. Kolev also highlighted the growth of the global community as well as features developed during 2024. Veselin Dobrev (LLNL) Recent Developments October 22-24, 2024 | MFEM Workshop 2024 Veselin Dobrev of LLNL detailed the project\u2019s recent developments including meshing and discretization improvements, GPU acceleration and partial/full assembly support, new examples and mini-apps, and more. He also highlighted functionality such as anisotropic refinement, conforming H1 spaces, square-pyramid-shaped elements, and hybridized discontinuous Galerkin solutions. Ketan Mittal (LLNL) Interpolation at Arbitrary Points in High-Order Meshes on GPUs October 22-24, 2024 | MFEM Workshop 2024 Robust and scalable arbitrary point interpolation is required in the finite element method and spectral element method for querying the partial differential equation solution at points of interest in the domain, comparison of solutions between different meshes, and Lagrangian particle tracking. This is a challenging problem, particularly for high-order unstructured meshes partitioned in parallel with MPI, as it requires identifying the element that overlaps a given point and computing the reference-space coordinates inside the element corresponding to the point. We present a robust and efficient way to address this problem for large-scale high-order meshes. First, a combination of globally partitioned and processor-local maps is used to determine a list of candidate MPI ranks and element pairs that could contain the point. Next, element-wise bounding boxes are used to further narrow down the list of candidate elements. Finally, Newton's method with a trust-region-based approach is used to invert the element map for the candidate elements and determine the reference-space coordinates corresponding to the point. Since GPU-based architectures have been shown to accelerate computational analyses using meshes with tensor-product elements, specialized kernels have been developed to perform the arbitrary point search and interpolation on GPUs. We demonstrate the effectiveness of this approach using various high-order meshes.
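The Newton-based point-location step described above can be illustrated with a minimal sketch. The snippet below is an illustration only (not MFEM's actual FindPointsGSLIB machinery): it inverts the mapping of a single bilinear quadrilateral to recover the reference coordinates of a physical point. The high-order, GPU-resident version discussed in the talk follows the same idea with a richer basis, bounding-box prefiltering, and a trust-region safeguard.

```cpp
// Minimal sketch: invert the mapping of one bilinear (Q1) quadrilateral to find
// reference coordinates (r,s) in [-1,1]^2 for a physical point (x,y).
#include <array>
#include <cmath>
#include <cstdio>

struct Pt { double x, y; };

// Bilinear map from reference (r,s) to physical space; vertices are ordered
// counter-clockwise, corresponding to (-1,-1), (1,-1), (1,1), (-1,1).
Pt map_q1(const std::array<Pt,4>& v, double r, double s)
{
    double N0 = 0.25*(1-r)*(1-s), N1 = 0.25*(1+r)*(1-s),
           N2 = 0.25*(1+r)*(1+s), N3 = 0.25*(1-r)*(1+s);
    return { N0*v[0].x + N1*v[1].x + N2*v[2].x + N3*v[3].x,
             N0*v[0].y + N1*v[1].y + N2*v[2].y + N3*v[3].y };
}

// Newton iteration for the reference coordinates of physical point p.
// Returns true if the converged point lies inside (or very near) the element.
bool invert_map(const std::array<Pt,4>& v, Pt p, double& r, double& s)
{
    r = s = 0.0;                               // start at the element center
    for (int it = 0; it < 20; ++it)
    {
        Pt X = map_q1(v, r, s);
        double fx = X.x - p.x, fy = X.y - p.y;
        if (std::hypot(fx, fy) < 1e-12) { break; }
        // Analytic Jacobian of the bilinear map.
        double dxdr = 0.25*(-(1-s)*v[0].x + (1-s)*v[1].x + (1+s)*v[2].x - (1+s)*v[3].x);
        double dxds = 0.25*(-(1-r)*v[0].x - (1+r)*v[1].x + (1+r)*v[2].x + (1-r)*v[3].x);
        double dydr = 0.25*(-(1-s)*v[0].y + (1-s)*v[1].y + (1+s)*v[2].y - (1+s)*v[3].y);
        double dyds = 0.25*(-(1-r)*v[0].y - (1+r)*v[1].y + (1+r)*v[2].y + (1-r)*v[3].y);
        double det = dxdr*dyds - dxds*dydr;
        r -= ( dyds*fx - dxds*fy) / det;       // 2x2 Newton update
        s -= (-dydr*fx + dxdr*fy) / det;
    }
    return std::abs(r) <= 1.0 + 1e-10 && std::abs(s) <= 1.0 + 1e-10;
}

int main()
{
    std::array<Pt,4> quad = {{ {0,0}, {2,0}, {2.5,1.5}, {0,1} }};
    double r, s;
    bool inside = invert_map(quad, {1.2, 0.6}, r, s);
    std::printf("inside=%d  (r,s)=(%.6f, %.6f)\n", (int)inside, r, s);
    return 0;
}
```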
Michael Tupek (LLNL) Automatic Parameter Sensitivities in Serac for Engineering Applications October 22-24, 2024 | MFEM Workshop 2024 We present a framework for automatically calculating sensitivities for both topology and shape design optimization workflows. Building on MFEM infrastructure, we provide abstractions for quickly specifying, solving, coupling, and differentiating new PDEs for engineering applications. Recent developments in Serac include: highly robust nonlinear solvers, integration of the Tribol library for contact enforcement, coupled thermal-mechanics, differentiable material model library, and checkpointing for transient adjoint calculations. Jan Nikl (LLNL) Hybridization of Convection-Diffusion Systems in MFEM October 22-24, 2024 | MFEM Workshop 2024 Convection-diffusion systems are likely the most common class of partial differential equations appearing in practically all different applications. However, their mixed formulation typically suffers from prohibitively high computational costs and difficult preconditioning, especially close to the steady state where the system becomes a saddle point problem. The hybridization technique offers an appealing answer to these issues. The new framework for mixed systems enables single-line hybridization, reducing the problem to face traces of the total flux only. Solution of such system is then inexpensive, and preconditioning becomes nearly trivial. Non-linear convection is also supported with the action-based regime of operation. Description of the mechanism as well as code examples to show ease of usage are presented. Vladimir Tomov (LLNL) Miniapps for Shock Hydro, Field Remap, and Mesh Optimization October 22-24, 2024 | MFEM Workshop 2024 This presentation discusses recent advancements, research, and exploratory work in the MFEM miniapps for shock hydrodynamics (Laghos), field remap (Remhos), and mesh optimization. For shock hydro, we present the implementation of slip wall boundary conditions for curved domains, along with research involving material interfaces using the shifted interface method or cut-element integration through Algoim and moments-based integration. In the field remap miniapp, we cover developments in stabilized remap for continuous fields, interface sharpening techniques, and matrix-free methods for GPU execution. Lastly, we explore recent progress in mesh optimization, including surface fitting and its GPU implementation, tangential relaxation, automatic differentiation (AD) for complex objective functionals, enhanced metric theory and quality metrics, and hpr-adaptivity for the mesh representation. While some of these advancements are public, general methods that can be applied across various practical miniapps, others are exploratory, demonstrating how the miniapps can serve as a starting point for research in specific areas. Dylan Copeland (LLNL) Sparse, Approximate Quadrature for Acceleration of Isogeometric Analysis & ROMs October 22-24, 2024 | MFEM Workshop 2024 Numerical integration for assembly of FEM systems typically employs quadrature rules selected for the polynomial order of basis functions in each element. In some cases, a much sparser rule can maintain accuracy. We present an algebraic method for constructing sparse rules, by formulating a constraint system of states required to be integrated accurately. A nonnegative least squares solver finds a sparse, approximate solution to this constraint system, yielding a quadrature rule with fewer points. 
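As a rough illustration of the constraint system mentioned above, the sketch below assembles a 1D moment-matching system A w = b for monomials on [0, 1] over a set of candidate points; the degree, interval, and candidate set are arbitrary choices for this example and are not taken from the talk. Handing A and b to any nonnegative least-squares routine (not shown here) would return sparse nonnegative weights w, i.e., a reduced quadrature rule.

```cpp
// Illustrative setup of a moment-matching constraint system A w = b: rows are
// monomial moments on [0,1], columns are candidate quadrature points. An NNLS
// solve of A w = b with w >= 0 (not shown) keeps only a few nonzero weights,
// which form the sparse, approximate rule.
#include <cstdio>
#include <vector>

int main()
{
    const int degree = 4;            // integrate monomials x^0 ... x^4 exactly
    const int num_candidates = 16;   // candidate points (e.g., an existing full rule)

    std::vector<double> pts(num_candidates);
    for (int j = 0; j < num_candidates; ++j)
    {
        pts[j] = (j + 0.5) / num_candidates;   // equispaced candidates on [0,1]
    }

    // A(k,j) = pts[j]^k, b(k) = integral of x^k over [0,1] = 1/(k+1).
    std::vector<std::vector<double>> A(degree + 1,
                                       std::vector<double>(num_candidates, 1.0));
    std::vector<double> b(degree + 1);
    for (int k = 0; k <= degree; ++k)
    {
        b[k] = 1.0 / (k + 1);
        for (int j = 0; j < num_candidates; ++j)
        {
            A[k][j] = (k == 0) ? 1.0 : A[k - 1][j] * pts[j];
        }
    }

    for (int k = 0; k <= degree; ++k)
    {
        std::printf("moment %d: target %.6f over %d candidate columns\n",
                    k, b[k], num_candidates);
    }
    return 0;
}
```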
One application we demonstrate is isogeometric analysis, where a NURBS FEM space is defined on patches consisting of many elements. Setup times are greatly accelerated, by using patch-wise integration with sum factorization and reduced quadrature rules constructed on patches. Another area of application is reduced order models (ROM), where the FEM system is restricted to a reduced POD basis formed from training data. Instead of hyper-reduction methods such as DEIM, the empirical quadrature procedure (EQP) can be used to accelerate ROM simulations with a sparse quadrature rule in the reduced subspace. We demonstrate this on several benchmark problems in the Laghos miniapp and show that energy conservation is maintained. Jacob Spainhour (CU Boulder) Robust Containment Queries over Collections of Parametric Curves via Generalized Winding Numbers October 22-24, 2024 | MFEM Workshop 2024 The containment query is an important geometric primitive in many multiphysics applications. For example, when initializing multimaterial Arbitrary Lagrangian-Eulerian (ALE) simulations, we often need to determine whether arbitrary quadrature points from the background mesh are inside or outside the regions associated with each material. However, existing methods require expensive refinement to accurately capture curved regions. At the same time, many methods are wholly incompatible with user-defined geometries that contain geometric and numeric gaps and/or self-intersections. In this work, we develop a containment query for 2D regions defined by rational Bezier curves that operates directly on curved objects. Our method relies on the generalized winding number (GWN), a mathematical construction that can be evaluated for each curve independently, making the derived containment query robust to non-watertightness. We use an adaptive algorithm to compute the GWN field exactly, which permits fast evaluation for points considered \"distant\" to the curve while being numerically stable for points that are arbitrarily close. Overall, this classification scheme greatly expands the types of bounding geometry that can be used directly in shaping applications without the need for otherwise expensive repair techniques. If time permits, we will also discuss our extensions of this idea to 3D shapes defined by parametric surfaces. Mathias Schmidt (LLNL) Level-Set Topology Optimization with PDE Generated Conformal Meshes October 22-24, 2024 | MFEM Workshop 2024 The promise of topology optimization (TO) is to provide engineers with a systematic computational tool to support the development of optimal designs. A shortcoming of classic density based multi-material TO designs is the nebulous interphase region between materials, which leads to inaccurate response predictions in these very regions. In contrast, designs based on boundary and interface regions, rather than interphase regions, yield accurate response predictions. Level-set based TO is an example of such; however, the analysis of the response often requires repeated mesh generation or non-standard finite element computations. We present a solely PDE-based, level-set topology optimization approach in which geometries are described through the iso-contour of one or multiple level-set fields which are discretized over a mesh. The nodal heights serve as the design parameters. The governing field equations are discretized by a conformal discretization over a separate \u201canalysis\u201d mesh. 
In the optimization, the \u201canalysis\u201d mesh is morphed such that its boundary and interfaces conform with the isocontours of the LS fields. The mesh morphing is performed using the Target-Matrix Optimization Paradigm (TMOP) approach. Our TMOP formulation is a PDE-based mesh morphing operation which aims to improve the interface conformity while preserving mesh quality. Design sensitivities of the optimization cost and constraint functions with respect to all design level-set fields are computed through an adjoint approach which accounts for the mesh morphing process. The proposed analysis and optimization framework is based on MFEM, a free, lightweight, scalable C++ library for finite element methods which supports the optimization of large-scale problems. We investigate the robustness of the proposed optimization methodology by solving two- and three-dimensional multi-material optimization problems involving linear diffusion and elasticity. We discuss the advantages and challenges of our approach with regards to the mesh morphing process. LS regularization techniques are employed to produce a well-behaved mesh morphing problem throughout the optimization. Finally, select aspects and challenges of our approach with respect to parallel computing and processor decomposition are discussed. Yohann Dudouit (LLNL) Mitigating Rays-Effect in Phase-Space Advection with Matrix-Free HD DG Methods October 22-24, 2024 | MFEM Workshop 2024 The mitigation of the rays-effect in phase-space advection problems is a critical challenge in deterministic transport simulations, particularly when using traditional methods that struggle with numerical artifacts. In this work, we propose a novel high-dimensional matrix-free discontinuous Galerkin (DG) approach designed to address the rays-effect by fully discretizing phase space, including velocity components, up to six dimensions. This methodology avoids the excessive computational cost associated with Monte Carlo simulations while offering a deterministic alternative that preserves accuracy and scalability. A key component of our approach is the use of advanced coordinate transformations, which optimize the coordinate system to minimize the rays-effect by aligning the coordinate system with the net flux. Our matrix-free formulation minimizes memory usage and improves computational efficiency by avoiding the assembly of large sparse matrices, a critical factor when scaling to high-dimensional problems. Numerical experiments demonstrate the effectiveness of this approach in reducing rays-effect artifacts, providing a robust and scalable solution for high-dimensional transport problems. FEM@LLNL Seminars Denis Ridzal (Sandia National Laboratories) R-Adaptive Mesh Optimization to Enhance Finite Element Basis Compression October 15, 2024 | FEM@LLNL Seminar Series Modern computing systems are capable of exascale calculations. While these systems continue to grow in processing power, the available system memory has not increased commensurately. A predominant approach to limit the memory usage in large-scale applications is to exploit the abundant processing power and continually recompute many low-level simulation quantities, rather than storing them. However, this approach can adversely impact the throughput of the simulation and diminish the benefits of modern computing architectures. We present two novel contributions to reduce the memory burden while maintaining performance in simulations based on finite element discretizations. 
The first contribution develops dictionary-based data compression schemes that detect and exploit the structure of the discretization, due to redundancies across the finite element mesh. These schemes are shown to reduce the memory requirements of key computational kernels by more than 99% on meshes with large numbers of nearly identical mesh cells. For applications where this structure does not exist, our second contribution leverages a recently developed augmented Lagrangian sequential quadratic programming algorithm to enable r-adaptive mesh optimization, with the goal of enhancing redundancies in the mesh. Numerical results demonstrate the effectiveness of the proposed methods to detect, exploit and enhance mesh structure on examples inspired by large-scale applications. Rub\u00e9n Sevilla (Swansea University) Mesh Generation and Adaptation using Green AI September 17, 2024 | FEM@LLNL Seminar Series Most methods used to solve partial differential equations require creating a mesh that represents the model's geometry. Today, unstructured mesh technology is widely used, allowing three-dimensional meshes with hundreds of millions of elements to be generated in just a few minutes. However, when optimising a design, many simulations are needed for different operating conditions and geometric configurations. Creating the best mesh for each setup becomes time-consuming due to the requirement of excessive human intervention and expertise. This talk will cover our recent work on using artificial intelligence to predict near-optimal meshes suitable for simulations. The main idea is to take advantage of the large amount of data that already exists in the industry to improve the selection of a suitable spacing function, including anisotropic spacing. The proposed approach aims to use knowledge from previous simulations to guide the mesh generation process. I will assess the proposed method based on the accuracy of the predictions, efficiency, and environmental impact. This includes considering the carbon footprint and energy consumption of the computations required for a parametric CFD analysis under different flow conditions and angles of attack. For transient problems, the use of high order methods provides several advantages due to the low dissipation and dispersion errors associated to these schemes. An attractive approach to simulate these problems is to incorporate degree adaptive schemes to enhance the approximation only where needed. In this talk I will also present our recent work on using artificial intelligence to aid a degree adaptive process. Esteban Ferrer and David Huergo (Universidad Polit\u00e9cnica de Madrid) New Avenues in High Order Fluid Dynamics September 3, 2024 | FEM@LLNL Seminar Series We present the latest developments of our High-Order Spectral Element Solver (HORSES3D), an open source high-order discontinuous Galerkin framework capable of solving a variety of flow applications, including compressible flows (with or without shocks), incompressible flows, various RANS and LES turbulence models, particle dynamics, multiphase flows, and aeroacoustics [ 1 ]. Recent developments allow us to simulate challenging multiphysics including turbulent flows, multiphase and moving bodies, using local h and p-adaption. In addition, we present recent work that couples Machine Learning and reinforcement learning techniques with high order simulations. 
Patrick Farrell (University of Oxford) Designing conservative and accurately dissipative numerical integrators in time July 30, 2024 | FEM@LLNL Seminar Series Numerical methods for the simulation of transient systems with structure-preserving properties are known to exhibit greater accuracy and physical reliability, in particular over long durations. These schemes are often built on powerful geometric ideas for broad classes of problems, such as Hamiltonian or reversible systems. However, there remain difficulties in devising higher-order- in-time structure-preserving discretizations for nonlinear problems, and in conserving non-polynomial invariants. In this work we propose a new, general framework for the construction of structure-preserving timesteppers via finite elements in time and the systematic introduction of auxiliary variables. The framework reduces to Gauss methods where those are structure-preserving, but extends to generate arbitrary-order structure-preserving schemes for nonlinear problems, and allows for the construction of schemes that conserve multiple higher-order invariants. We demonstrate the ideas by devising novel schemes that exactly conserve all known invariants of the Kepler and Kovalevskaya problems, arbitrary-order schemes for the compressible Navier\u2013Stokes equations that conserve mass, momentum, and energy, and provably dissipate entropy, and multi-conservative schemes for the Benjamin-Bona-Mahony equation. Gonzalo de Diego (Courant Institute) Numerical Solvers for Viscous Contact Problems in Glaciology May 6, 2024 | FEM@LLNL Seminar Series Viscous contact problems are time-dependent viscous flow problems where the fluid is in contact with a solid surface from which it can detach and reattach. Over sufficiently long timescales, ice is assumed to flow like a viscous fluid with a nonlinear rheology. Therefore, certain phenomena in glaciology, like the formation of subglacial cavities in the base of an ice sheet or the dynamics of marine ice sheets (continental ice sheets that slide into the ocean and go afloat at a grounding line, detaching from the bedrock), can be modelled as viscous contact problems. In particular, these problems can be described by coupling the Stokes equations with contact boundary conditions to free boundary equations that evolve the ice domain in time. In this talk, I will describe the difficulties that arise when attempting to solve this system numerically and I will introduce a method that is capable of overcoming them. Nat Trask (University of Pennsylvania) A Data Driven Finite Element Exterior Calculus April 2, 2024 | FEM@LLNL Seminar Series Despite the recent flurry of work employing machine learning to develop surrogate models to accelerate scientific computation, the \"black-box\" underpinnings of current techniques fail to provide the verification and validation guarantees provided by modern finite element methods. In this talk we present a data-driven finite element exterior calculus for developing reduced-order models of multiphysics systems when the governing equations are either unknown or require closure. The framework employs deep learning architectures typically used for logistic classification to construct a trainable partition of unity which provides notions of control volumes with associated boundary operators. This alternative to a traditional finite element mesh is fully differentiable and allows construction of a discrete de Rham complex with a corresponding Hodge theory. 
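The "trainable partition of unity" idea above can be made concrete with a toy sketch. The snippet below (an assumption-laden illustration, not the speaker's framework) shows the core property being exploited: a softmax over a few parameterized score functions yields nonnegative weights that sum to one at every point, so the learned regions behave like overlapping control volumes.

```cpp
// Toy partition of unity from a classification-style model: softmax over a few
// arbitrary score (logit) functions of a 1D coordinate. The weights are
// nonnegative and sum to one at every point, the property used above to define
// control-volume-like regions.
#include <cmath>
#include <cstdio>

int main()
{
    const int num_regions = 3;
    // Arbitrary "learned" score functions, peaked at x = 0.25, 0.5, 0.75.
    auto score = [](int k, double x)
    {
        double c = 0.25 * (k + 1);
        return -20.0 * (x - c) * (x - c);
    };

    for (double x = 0.0; x <= 1.0; x += 0.25)
    {
        double w[num_regions], sum = 0.0;
        for (int k = 0; k < num_regions; ++k) { w[k] = std::exp(score(k, x)); sum += w[k]; }

        double total = 0.0;
        std::printf("x = %.2f :", x);
        for (int k = 0; k < num_regions; ++k)
        {
            w[k] /= sum;                       // softmax: nonnegative, sums to 1
            total += w[k];
            std::printf("  w%d = %.3f", k, w[k]);
        }
        std::printf("   (sum = %.3f)\n", total);
    }
    return 0;
}
```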
We demonstrate how models may be obtained with the same robustness guarantees as traditional mixed finite element discretization, with deep connections to contemporary techniques in graph neural networks. For applications developing digital twins where surrogates are intended to support real time data assimilation and optimal control, we further develop the framework to support Bayesian optimization of unknown physics on the underlying adjacency matrices of the chain complex. By framing the learning of fluxes via an optimal recovery problem with a computationally tractable posterior distribution, we are able to develop models with intrinsic representations of epistemic uncertainty. William Moses (University of Illinois Urbana-Champaign) Supercharging Programming Through Compiler Technology March 14, 2024 | FEM@LLNL Seminar Series The decline of Moore's law and an increasing reliance on computation has led to an explosion of specialized software packages and hardware architectures. While this diversity enables unprecedented flexibility, it also requires domain-experts to learn how to customize programs to efficiently leverage the latest platform-specific API's and data structures, instead of working on their intended problem. Rather than forcing each user to bear this burden, I propose building high-level abstractions within general-purpose compilers that enable fast, portable, and composable programs to be automatically generated. This talk will demonstrate this approach through compilers that I built for two domains: automatic differentiation and parallelism. These domains are critical to both scientific computing and machine learning, forming the basis of neural network training, uncertainty quantification, and high-performance computing. For example, a researcher hoping to incorporate their climate simulation into a machine learning model must also provide a corresponding derivative simulation. My compiler, Enzyme, automatically generates these derivatives from existing computer programs, without modifying the original application. Moreover, operating within the compiler enables Enzyme to combine differentiation with program optimization, resulting in asymptotically and empirically faster code. Looking forward, this talk will also touch on how this domain-agnostic compiler approach can be applied to new directions, including probabilistic programming. Sungho Lee (University of Memphis) LAGHOST: Development of Lagrangian High-Order Solver for Tectonics March 5, 2024 | FEM@LLNL Seminar Series Long-term geological and tectonic processes associated with large deformation highlight the importance of using a moving Lagrangian frame. However, modern advancements in the finite element method, such as MPI parallelization, GPU acceleration, high-order elements, and adaptive grid refinement for tectonics based on this frame, have not been updated. Moreover, the existing solvers available in open access suffer from limited tutorials, a poor user manual, and several dependencies that make model building complex. These limitations can discourage both new users and developers from utilizing and improving these models. As a result, we are motivated to develop a user-friendly, Lagrangian thermo-mechanical numerical model that incorporates visco-elastoplastic rheology to simulate long-term tectonic processes like mountain building, mantle convection and so on. We introduce an ongoing project called LAGHOST (Lagrangian High-Order Solver for Tectonics), which is an MFEM-based tectonic solver. 
LAGHOST expands the capabilities of MFEM's LAGHOS mini-app. Currently, our solver incorporates constitutive equation, body force, mass scaling, dynamic relaxation, Mohr-Coulomb plasticity, plastic softening, Winkler foundation, remeshing, and remapping. To evaluate LAGHOST, we conducted four benchmark tests. The first test involved compressing an elastic box at a constant velocity, while the second test focused on the compaction of a self-weighted elastic column. To enable larger time-step sizes and achieve quasi-static solutions in the benchmarks, we introduced a fictitious density and implemented dynamic relaxation. This involved scaling the density factor and introducing a portion of force component opposing the previous velocity direction at nodal points. Our results exhibited good agreement with analytical solutions. Subsequently, we incorporated Mohr-Coulomb plasticity, a reliable model for predicting rock failure, into LAGHOST. We revisited the elastic box benchmark and considered plastic materials. By considering stress correction arising from plastic yielding, we confirmed that the updated solution from elastic guess aligned with the analytical solution. Furthermore, we applied LAGHOST to simulate the evolution of a normal fault, a significant tectonic phenomenon. To model normal fault evolution, we introduced strain softening on cohesion as the dominant factor based on geological evidence. Our simulations successfully captured the normal fault's evolution, with plastic strain localizing at shallow depths before propagating deeper. The fault angle reached approximately 60 degrees, in line with the Mohr-Coulomb failure theory. Kevin Chung (LLNL) Data-Driven DG FEM Via Reduced Order Modeling and Domain Decomposition February 6, 2024 | FEM@LLNL Seminar Series Numerous cutting-edge scientific technologies originate at the laboratory scale, but transitioning them to practical industry applications can be a formidable challenge. Traditional pilot projects at intermediate scales are costly and time-consuming. Alternatives such as E-pilots can rely on high-fidelity numerical simulations, but even these simulations can be computationally prohibitive at larger scales. To overcome these limitations, we propose a scalable, component reduced order model (CROM) method. We employ Discontinuous Galerkin Domain Decomposition (DG-DD) to decompose the physics governing equation for a large-scale system into repeated small-scale unit components. Critical physics modes are identified via proper orthogonal decomposition (POD) from small-scale unit component samples. The computationally expensive, high-fidelity discretization of the physics governing equation is then projected onto these modes to create a reduced order model (ROM) that retains essential physics details. The combination of DG-DD and POD enables ROMs to be used as building blocks comprised of unit components and interfaces, which can then be used to construct a global large-scale ROM without data at such large scales. This method is demonstrated on the Poisson and Stokes flow equations, showing that it can solve equations about 15\u221240 times faster with only \u223c 1% relative error, even at scales 1000 times larger than the unit components. This research is ongoing, with efforts to apply these methods to more complex physics such as Navier-Stokes equation, highlighting their potential for transitioning laboratory-scale technologies to practical industrial use. 
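To make the POD/Galerkin step described above concrete, here is a minimal sketch. It uses Eigen purely for brevity (an assumption for the example; the workflow described in the talk is built on MFEM-based infrastructure), and the snapshot data are synthetic: snapshots are collected into a matrix, an SVD yields the reduced basis, and the full operator is Galerkin-projected onto that basis before solving the small reduced system.

```cpp
// Minimal POD / Galerkin-projection sketch (Eigen used only for compactness).
#include <Eigen/Dense>
#include <iostream>

int main()
{
    const int n = 200;   // full-order dimension (stand-in for a unit component)
    const int m = 30;    // number of snapshots
    const int r = 5;     // reduced dimension

    // Snapshot matrix: each column is one sampled full-order state (synthetic here).
    Eigen::MatrixXd S = Eigen::MatrixXd::Random(n, m);

    // POD basis = leading left singular vectors of the snapshot matrix.
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(S, Eigen::ComputeThinU);
    Eigen::MatrixXd Phi = svd.matrixU().leftCols(r);

    // Synthetic SPD full-order operator and right-hand side.
    Eigen::MatrixXd M = Eigen::MatrixXd::Random(n, n);
    Eigen::MatrixXd A = M * M.transpose() + Eigen::MatrixXd::Identity(n, n);
    Eigen::VectorXd b = Eigen::VectorXd::Random(n);

    // Galerkin projection onto the POD subspace: A_r = Phi^T A Phi, b_r = Phi^T b.
    Eigen::MatrixXd Ar = Phi.transpose() * A * Phi;
    Eigen::VectorXd br = Phi.transpose() * b;

    // Solve the small reduced system and lift back to the full space.
    Eigen::VectorXd xr = Ar.ldlt().solve(br);
    Eigen::VectorXd x_rom = Phi * xr;

    Eigen::VectorXd x_fom = A.ldlt().solve(b);
    std::cout << "FOM vs ROM relative difference: "
              << (x_fom - x_rom).norm() / x_fom.norm() << "\n";
    return 0;
}
```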
Brian Young A Full-Wave Electromagnetic Simulator for Frequency-Domain S-Parameter Calculations January 9, 2024 | FEM@LLNL Seminar Series An open-source and free full-wave electromagnetic simulator is presented that addresses the engineering community\u2019s need for the calculation of frequency-domain S-parameters. Two-dimensional port simulations are used to excite the 3D space and to extract S-parameters using modal projections. Matrix solutions are performed using complex computations. Features enabled by the MFEM library include adaptive mesh refinement, arbitrary order finite elements, and parallel processing using MPI. Implementation details are presented along with sample results and accuracy demonstrations. Jesse Chan (Rice University) High order positivity-preserving entropy stable discontinuous Galerkin discretizations December 5, 2023 | FEM@LLNL Seminar Series High order discontinuous Galerkin (DG) methods provide high order accuracy and geometric flexibility, but are known to be unstable when applied to nonlinear conservation laws whose solutions exhibit shocks and under-resolved solution features. Entropy stable schemes improve robustness by ensuring that physically relevant solutions satisfy a semi-discrete cell entropy inequality independently of numerical resolution and solution regularization while retaining formal high order accuracy. In this talk, we will review the construction of entropy stable high order discontinuous Galerkin methods and describe approaches for enforcing that solutions are \"physically relevant\" (i.e., the thermodynamic variables remain positive). Youngsoo Choi (LLNL) Physics-guided interpretable data-driven simulations November 14, 2023 | FEM@LLNL Seminar Series A computationally demanding physical simulation often presents a significant impediment to scientific and technological progress. Fortunately, recent advancements in machine learning (ML) and artificial intelligence have given rise to data-driven methods that can expedite these simulations. For instance, a well-trained 2D convolutional deep neural network can provide a 100,000-fold acceleration in solving complex problems like Richtmyer-Meshkov instability [ 1 ]. However, conventional black-box ML models lack the integration of fundamental physics principles, such as the conservation of mass, momentum, and energy. Consequently, they often run afoul of critical physical laws, raising concerns among physicists. These models attempt to compensate for the absence of physics information by relying on vast amounts of data. Additionally, they suffer from various drawbacks, including a lack of structure-preservation, computationally intensive training phases, reduced interpretability, and susceptibility to extrapolation issues. To address these shortcomings, we propose an approach that incorporates physics into the data-driven framework. This integration occurs at different stages of the modeling process, including the sampling and model-building phases. A physics-informed greedy sampling procedure minimizes the necessary training data while maintaining target accuracy [ 2 ]. A physics-guided data-driven model not only preserves the underlying physical structure more effectively but also demonstrates greater robustness in extrapolation compared to traditional black-box ML models. We will showcase numerical results in areas such as hydrodynamics [ 3 , 4 ], particle transport [ 5 ], plasma physics, pore-collapse, and 3D printing to highlight the efficacy of these data-driven approaches. 
The advantages of these methods will also become apparent in multi-query decision-making applications, such as design optimization [ 6 , 7 ]. Ben Southworth (Los Alamos National Laboratory) Superior discretizations and AMG solvers for extremely anisotropic diffusion via hyperbolic operators October 17, 2023 | FEM@LLNL Seminar Series Heat conduction in magnetic confinement fusion can reach anisotropy ratios of 10^9-10^10, and in complex problems the direction of anisotropy may not be aligned with (or is impossible to align with) the spatial mesh. Such problems pose major challenges for both discretization accuracy and efficient implicit linear solvers. Although the underlying problem is elliptic or parabolic in nature, we argue that the problem is better approached from the perspective of hyperbolic operators. The problem is posed in a directional gradient first order formulation, introducing a directional heat flux along magnetic field lines as an auxiliary variable. We then develop novel continuous and discontinuous discretizations of the mixed system, using stabilization techniques developed for hyperbolic problems. The resulting block matrix system is then reordered so that the advective operators are on the diagonal, and the system is solved using AMG based on approximate ideal restriction (AIR), which is particularly efficient for upwind discretizations of advection. Compared with traditional discretizations and AMG solvers, we achieve orders of magnitude reduction in error and AMG iterations in the extremely anisotropic regime. Natasha Sharma (University of Texas at El Paso) A Continuous Interior Penalty Method Framework for Sixth Order Cahn-Hilliard-type Equations with applications to microstructure evolution and microemulsions July 18, 2023 | FEM@LLNL Seminar Series The focus of this talk is on presenting unconditionally stable, uniquely solvable, and convergent numerical methods to solve two classes of the sixth-order Cahn-Hilliard-type equations. The first class arises as the so-called phase field crystal atomistic model of crystal growth, which has been employed to simulate a number of physical phenomena such as crystal growth in a supercooled liquid, crack propagation in ductile material, dendritic and eutectic solidification. The second class, henceforth referred to as Microemulsion systems (ME systems) appears as a model capturing the dynamics of phase transitions in ternary oil-water-surfactant systems in which three phases namely a microemulsion, almost pure oil, and almost pure water can coexist in equilibrium. ME systems have several applications ranging from enhanced oil recovery to the development of environmentally friendly solvents and drug delivery systems. Despite the widespread applications of these models, the major challenge impeding their use has been and continues to be a lack of understanding of the complex systems which they model. Thus, building computational models for these systems is crucial to the understanding of these systems. The presence of the higher order derivative in combination with a time-dependent process poses many challenges to the creation of stable, convergent, and efficient numerical methods approximating solutions to these equations. In this talk, we present a continuous interior penalty Galerkin framework for solving these equations and theoretically establish the desirable properties of stability, unique solvability, and first-order convergence. 
We close the talk by presenting the numerical results of some benchmark problems to verify the practical performance of the proposed approach and discuss some exciting current and future applications. Freddie Witherden (Texas A&M University) FSSpMDM \u2014 Accelerating Small Sparse Matrix Multiplications by Run-Time Code Generation June 20, 2023 | FEM@LLNL Seminar Series Small matrix multiplications are a key building block of modern high-order finite element method solvers. Such multiplications describe the act of applying a specific finite element operator onto a set of state vectors. The small and irregular size of these multiplications makes them poor candidates for generic matrix multiplication routines. Moreover, for elements with a tensor product construction, the operators themselves can exhibit a significant degree of sparsity. In this talk, I will describe the code generation strategies employed by our Fixed Size Sparse Matrix-Dense Matrix (FSSpMDM) routine in libxsmm and show how these result in performant operator kernels for prismatic and hexahedral elements. Strategies will be described for both x86-64 (AVX2/AVX-512) and AARCH64 (NEON/SVE) instruction sets. Results will be presented on recent Intel and Apple CPUs and compared against the well-known GiMMiK C code generation library. Frank Giraldo (Naval Postgraduate School) Using High-Order Element-Based Galerkin Methods to Capture Hurricane Intensification May 16, 2023 | FEM@LLNL Seminar Series Properly capturing hurricane rapid intensification (where the winds increase by 30 knots in the first 24 hours) remains challenging for atmospheric models. The reason is that we need LES-type scales \ud835\udcaa(100m), which are still elusive due to computational cost. In this talk, I describe the work that we are doing in this area and how element-based Galerkin methods are being used to approximate spatial derivatives. I will also discuss the time-integration strategy that we are exploring for this class of problems. In particular, we are exploring process Multirate methods whereby each process in a system of nonlinear partial differential equations (PDEs) uses a time-integrator and time-step commensurate with the wave-speed of that process. We have constructed Multirate methods of any order using extrapolation methods. Along this same idea, we have also developed a multi-modeling framework (MMF) designed to replace the physical parameterizations used in weather/climate models. Our approach is to view the coarse-scale and fine-scale models through the lens of Variational Multi-Scale (VMS) methods in order to give MMF a more rigorous mathematical foundation. Our end goal is to use MMF in order to better resolve the inner core of hurricanes. In addition, I will show some results using flux differencing discontinuous Galerkin methods for constructing both Kinetic Energy Preserving and Entropy Stable methods and discuss why we need scalable models in order to achieve our goals. Our model, NUMA, is a 3D nonhydrostatic atmospheric model that runs on large CPU clusters and on GPUs. Leszek F. Demkowicz (University of Texas at Austin) Full Envelope DPG Approximation for Electromagnetic Waveguides. Stability and Convergence Analysis April 25, 2023 | FEM@LLNL Seminar Series The presented work started with a convergence and stability analysis for the so-called full envelope approximation used in analyzing optical amplifiers (lasers). The specific problem of interest was the simulation of Transverse Mode Instabilities (TMI).
The problem translates into the solution of a system of two nonlinear time-harmonic Maxwell equations coupled with a transient heat equation. Simulation of a 1 m long fiber involves the resolution of 10 M wavelengths. A superefficient MPI/openMP hp FE code run on a supercomputer gets you to the range of ten thousand wavelengths. The resolution of the additional thousand wavelengths is done using an exponential ansatz e^{ikz} in the z-coordinate. This results in a non-standard Maxwell problem. The stability and convergence analysis for the problem has been restricted to the linear case only. It turns out that the modified Maxwell problem is stable if and only if the original waveguide problem is stable and the boundedness below stability constants are identical. We have converged to the task of determining the boundedness below constant. The stability analysis started with an easier, acoustic waveguide problem. Separation of variables leads to an eigenproblem for a self-adjoint operator in the transverse plane (in x,y). Expansion of the solution in terms of the corresponding eigenvectors leads then to a decoupled system of ODEs, and a stability analysis for a two-point BVP for an ODE parametrized with the corresponding eigenvalues. The L^2-orthogonality of the eigenmodes and the stability result for a single mode, lead then to the final result: the inverse boundedness below constant depends inversely linearly upon the length L of the waveguide. The corresponding stability for the Maxwell waveguide turned out to be unexpectedly difficult. The equation is vector-valued so a direct separation of variables is out to begin with. An exponential ansatz in z leads to a non-standard eigenproblem involving an operator that is non-self adjoint even for the easiest, homogeneous case. The answer to the problem came from a tricky analysis of the eigenproblem combined with the perturbation technique for perturbed self-adjoint operators. The use of perturbation theory requires an assumption on the smallness of perturbation of the dielectric constant (around a constant value) but with no additional assumptions on its differentiability (discontinuities are allowed). In the end, the final result is similar to that for the acoustic waveguide - the boundedness below constant depends inversely linearly on L. Joachim Sch\u00f6berl (Vienna University of Technology) The Netgen/NGSolve Finite Element Software March 28, 2023 | FEM@LLNL Seminar Series In this talk we give an overview of the open source finite element software Netgen/NGSolve, available from www.ngsolve.org . We show how to setup various physical models using FEniCS-like Python scripting. We discuss how we use NGSolve for teaching finite element methods, and how recent research projects have contributed to the further development of the NGSolve software. Some recent highlights are matrix-valued finite elements with applications in elasticity, fluid dynamics, and numerical relativity. We show how the recently extended framework of linear operators allows the utilization of GPUs for linear solvers, as well as time-dependent problems. Vikram Gavini (University of Michigan) Fast, Accurate and Large-scale Ab-initio Calculations for Materials Modeling March 7, 2023 | FEM@LLNL Seminar Series Electronic structure calculations, especially those using density functional theory (DFT), have been very useful in understanding and predicting a wide range of materials properties. 
The importance of DFT calculations to engineering and physical sciences is evident from the fact that ~20% of computational resources on some of the world's largest public supercomputers are devoted to DFT calculations. Despite the wide adoption of DFT, the state-of-the-art implementations of DFT suffer from cell-size and geometry limitations, with the widely used codes in solid state physics being limited to periodic geometries and typical simulation domains containing a few hundred atoms. This talk will present our recent advances towards the development of computational methods and numerical algorithms for conducting fast and accurate large-scale DFT calculations using adaptive finite-element discretization, which form the basis for the recently released DFT-FE open-source code . Details of the implementation, including mixed precision algorithms and asynchronous computing, will be presented. The computational efficiency, scalability and performance of DFT-FE will be presented, which demonstrates a significant outperformance of widely used plane-wave DFT codes. Stefan Henneking (University of Texas at Austin) Bayesian Inversion of an Acoustic-Gravity Model for Predictive Tsunami Simulation January 10, 2023 | FEM@LLNL Seminar Series To improve tsunami preparedness, early-alert systems and real-time monitoring are essential. We use a novel approach for predictive tsunami modeling within the Bayesian inversion framework. This effort focuses on informing the immediate response to an occurring tsunami event using near-field data observation. Our forward model is based on a coupled acoustic-gravity model (e.g., Lotto and Dunham, Comput Geosci (2015) 19:327\u2014340). Similar to other tsunami models, our forward model relies on transient boundary data describing the location and magnitude of the seafloor deformation. In a real-time scenario, these parameter fields must be inferred from a variety of measurements, including observations from pressure gauges mounted on the seafloor. One particular difficulty of this inference problem lies in the accurate inversion from sparse pressure data recorded in the near-field where strong hydroacoustic waves propagate in the compressible ocean; these acoustic waves complicate the task of estimating the hydrostatic pressure changes related to the forming surface gravity wave. Our space-time model is discretized with finite elements in space and finite differences in time. The forward model incurs a high computational complexity, since the pressure waves must be resolved in the 3D compressible ocean over a sufficiently long time span. Due to the infeasibility of rapidly solving the corresponding inverse problem for the fully discretized space-time operator, we discuss approaches for using compact representations of the parameter-to-observable map. Lin Mu (University of Georgia) An Efficient and Effective FEM Solver for Diffusion Equation with Strong Anisotropy December 13, 2022 | FEM@LLNL Seminar Series The Diffusion equation with strong anisotropy has broad applications. In this project, we discuss numerical solution of diffusion equations with strong anisotropy on meshes not aligned with the anisotropic vector field, focusing on application to magnetic confinement fusion. In order to resolve the numerical pollution for simulations on a non-anisotropy-aligned mesh and reduce the associated high computational cost, we developed a high-order discontinuous Galerkin scheme with an efficient preconditioner. 
The auxiliary space preconditioning framework is designed by employing a continuous finite element space as the auxiliary space for the discontinuous finite element space. An effective line smoother that can mitigate the high-frequency error perpendicular to the magnetic field has been designed using a graph-based approach that picks lines approximately perpendicular to the vector field when the mesh does not align with the anisotropy. Numerical experiments for several benchmark problems are presented to validate the effectiveness and robustness. Garth Wells (University of Cambridge) FEniCSx: design of the next generation FEniCS libraries for finite element methods November 8, 2022 | FEM@LLNL Seminar Series The FEniCS Project provides libraries for solving partial differential equations using the finite element method. An aim of the FEniCS Project has been to provide high-performance solver environments that closely mirror mathematical syntax, with the hypothesis that high-level representations mean that solvers are faster to write, easier to debug, and can deliver faster runtime performance than is reasonably possible by hand. Using domain-specific languages and code generation techniques, arguably the FEniCS libraries delivered on these goals for a set of problems. However, over time limitations, including performance and extensibility, became clear, and maintainability/sustainability became an issue. Building on experiences from the FEniCS libraries, I will present and discuss the design of the next generation of tools, FEniCSx. The new design retains strengths of the past approach, and addresses limitations using new designs and new tools. Solvers can be written in C++ or Python, and a number of design changes allow the creation of flexible, fast solvers in Python. In the second part of my presentation, I will discuss high-performance finite element kernels (limited to CPUs on this occasion), motivated by the Center for Efficient Exascale Discretizations 'bake-off' problems, and which would not have been possible in the original FEniCS libraries. Double, single and half-precision kernels are considered, and results include (i) the observation that kernels with vector intrinsics can be slower than auto-vectorised kernels for common cases, and (ii) a cache-aware performance model which is remarkably accurate in predicting performance across architectures. Dennis Ogiermann (University of Bochum) Computing Meets Cardiology: Making Heart Simulations Fast and Accurate September 13, 2022 | FEM@LLNL Seminar Series Heart diseases are a ubiquitous societal burden responsible for a majority of deaths worldwide. A central problem in developing effective treatments for heart diseases is the inherent complexity of the heart as an organ. From a modeling perspective, the heart can be interpreted as a biological pump involving multiple physical fields, namely fluid and solid mechanics, as well as chemistry and electricity, all interacting on different time scales. This multiphysics and multiscale aspect makes simulations inherently expensive, especially when approached with naive numerical techniques. However, computational models can be extraordinarily useful in helping us understand how the healthy heart functions and especially how malfunctions influence different diseases. In this context, information about possible weaknesses of therapies can also be obtained to ultimately improve clinical treatment and decision support.
In this talk, we will focus primarily on two important model classes in computational cardiology and their respective efficient numerical treatment without compromising significant accuracy. The first class is the problem of computing electrocardiograms (ECG) from electrical heart simulations. Since ECG measurements can give a wide range of insights about a wide range of heart diseases they offer suitable data to validate our electrophysiological models and verify our numerical schemes on organ-scale. Known numerical issues, arising in the context of electrophysiological models, will be reviewed. The second class addresses bidirectionally coupled electromechanical models and their efficient numerical treatment. Focus will be on a unified space-time adaptive operator splitting framework developed on top of MFEM which proves highly efficient so far for the investigated model classes while still preserving high accuracy. Ricardo Vinuesa (KTH) Modeling and Controlling Turbulent Flows through Deep Learning August 23, 2022 | FEM@LLNL Seminar Series The advent of new powerful deep neural networks (DNNs) has fostered their application in a wide range of research areas, including more recently in fluid mechanics. In this presentation, we will cover some of the fundamentals of deep learning applied to computational fluid dynamics (CFD). Furthermore, we explore the capabilities of DNNs to perform various predictions in turbulent flows: we will use convolutional neural networks (CNNs) for non-intrusive sensing, i.e. to predict the flow in a turbulent open channel based on quantities measured at the wall. We show that it is possible to obtain very good flow predictions, outperforming traditional linear models, and we showcase the potential of transfer learning between friction Reynolds numbers of 180 and 550. We also discuss other modelling methods based on autoencoders (AEs) and generative adversarial networks (GANs), and we present results of deep-reinforcement-learning-based flow control. Jeffrey Banks (RPI) Efficient Techniques for Fluid Structure Interaction: Compatibility Coupling and Galerkin Differences July 26, 2022 | FEM@LLNL Seminar Series Predictive simulation increasingly involves the dynamics of complex systems with multiple interacting physical processes. In designing simulation tools for these problems, both the formulation of individual constituent solvers, as well as coupling of such solvers into a cohesive simulation tool must be addressed. In this talk, I discuss both of these aspects in the context of fluid-structure interaction, where we have recently developed a new class of stable and accurate partitioned solvers that overcome added-mass instability through the use of so-called compatibility boundary conditions. Here I will present partitioned coupling strategies for incompressible FSI. One interesting aspect of CBC-based coupling is the occurrence of nonstandard and/or high-derivative operators, which can make adoption of the techniques challenging, e.g. in the context of FEM methods. To address this, I will also discuss our newly developed Galerkin Difference approximations, which may provide a natural pathway for CBCs in an FEM context. Although GD is fundamentally a finite element approximation based on a Galerkin projection, the underlying GD space is nonstandard and is derived using profitable ideas from the finite difference literature. The resulting schemes possess remarkable properties including nodal superconvergence and the ability to use large CFL-one time steps. 
I will also present preliminary results for GD discretizations on unstructured grids using MFEM. Paul Fischer (UIUC/ANL) Outlook for Exascale Fluid Dynamics Simulations June 21, 2022 | FEM@LLNL Seminar Series We consider design, development, and use of simulation software for exascale computing, with a particular emphasis on fluid dynamics simulation. Our perspective is through the lens of the high-order code Nek5000/RS, which has been developed under DOE's Center for Efficient Exascale Discretizations (CEED). Nek5000/RS is an open source thermal fluids simulation code with a long development history on leadership computing platforms\u2014it was the first commercial software on distributed memory platforms and a Gordon Bell Prize winner on Intel's ASCI Red. There are a myriad of objectives that drive software design choices in HPC, such as scalability, low memory usage, portability, and maintainability. Throughout, our design objective has been to address the needs of the user, including facilitating data analysis and ensuring flexibility with respect to platform and number of processors that can be used. When running on large-scale HPC platforms, three of the most common user questions are: How long will my job take? How many nodes will be required? Is there anything I can do to make my job run faster? Additionally, one might have concerns about storage, post-processing (Will I be able to analyze the results? Where?), and queue times. This talk will seek to answer several of these questions over a broad range of fluid-thermal problems from the perspective of a Nek5000/RS user. We specifically address performance with data for NekRS on several of the DOE's pre-exascale architectures, which feature AMD MI250X or NVIDIA V100 or A100 GPUs. Mike Puso (LLNL) Topics in Immersed Boundary and Contact Methods: Current LLNL Projects and Research May 24, 2022 | FEM@LLNL Seminar Series Many of the most interesting phenomena in solid mechanics occur at material interfaces. This can be in the form of fluid-structure interaction, cracks, material discontinuities, impact, etc. Solutions to these problems often require some form of immersed/embedded boundary method or contact, or a combination of both. This talk will provide a brief overview of different lab efforts in these areas and present some of the current research aspects and results from LLNL production codes. Technically speaking, the methods discussed here all require Lagrange multipliers to satisfy the constraints on the interface of overlapping or dissimilar meshes, which complicates the solution. Stability and consistency of Lagrange multiplier approaches can be hard to achieve both in space and time. For example, the wrong choice of multiplier space will either over-constrain the problem and/or cause oscillations at the material interfaces for simple statics problems. For dynamics, many of the basic time integration schemes such as Newmark's method are known to be unstable due to gaps opening and closing. Here we introduce some (non-Nitsche) stabilized multiplier spaces for immersed boundary and contact problems and a structure-preserving time integration scheme for long-time dynamic contact problems. Finally, I will describe some ongoing efforts extending this work.
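As a small, self-contained illustration of the Lagrange-multiplier coupling discussed above (a toy example, not LLNL production-code machinery; Eigen is assumed here only for compactness): tying two displacement degrees of freedom together with a single multiplier produces the indefinite saddle-point (KKT) system whose choice of multiplier space and stability the talk addresses.

```cpp
// Tiny saddle-point illustration: two spring DOFs tied together at an "interface"
// by one Lagrange multiplier, the same algebraic structure that arises when
// coupling overlapping or dissimilar meshes.
#include <Eigen/Dense>
#include <iostream>

int main()
{
    // Two independent springs: energy 0.5*k1*u1^2 - f1*u1 + 0.5*k2*u2^2 - f2*u2.
    Eigen::Matrix2d K;
    K << 4.0, 0.0,
         0.0, 1.0;
    Eigen::Vector2d f(1.0, 2.0);

    // Constraint B u = 0 enforcing u1 = u2 at the shared interface.
    Eigen::RowVector2d B(1.0, -1.0);

    // KKT (saddle-point) system: [K  B^T; B  0] [u; lambda] = [f; 0].
    Eigen::Matrix3d KKT = Eigen::Matrix3d::Zero();
    KKT.topLeftCorner<2,2>()    = K;
    KKT.topRightCorner<2,1>()   = B.transpose();
    KKT.bottomLeftCorner<1,2>() = B;
    Eigen::Vector3d rhs(f(0), f(1), 0.0);

    // The block system is indefinite, so use an LU factorization.
    Eigen::Vector3d sol = KKT.fullPivLu().solve(rhs);
    std::cout << "u = (" << sol(0) << ", " << sol(1) << "), "
              << "lambda (interface force) = " << sol(2) << "\n";
    return 0;
}
```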
Robert Chiodi (UIUC) CHyPS: An MFEM-Based Material Response Solver for Hypersonic Thermal Protection Systems April 16, 2022 | FEM@LLNL Seminar Series The University of Illinois at Urbana-Champaign\u2019s Center for Hypersonics and Entry Systems Studies has developed a material response solver, named CHyPS, to predict the behavior of thermal protection systems for hypersonic flight. CHyPS uses MFEM to provide the underlying discontinuous Galerkin spatial discretization and linear solvers used to solve the equations. In this talk, we will briefly present the physics and corresponding equations governing material response in hypersonic environments. We will also include a discussion on the implementation of a direct Arbitrary Lagrangian-Eulerian approach to handle mesh movement resulting from the ablation of the material surface. Results for standard community test cases developed at a series of Ablation Workshop meetings over the past decade will be presented and compared to other material response solvers. We will also show the potential of high-order solutions for simulating thermal protection system material response. Tamas Horvath (Oakland University) Space-Time Hybridizable Discontinuous Galerkin with MFEM March 29, 2022 | FEM@LLNL Seminar Series Unsteady partial differential equations on deforming domains appear in many real-life scenarios, such as wind turbines, helicopter rotors, car wheels, free-surface flows, etc. We will focus on the space-time finite element method, which is an excellent approach to discretize problems on evolving domains. This method uses discontinuous Galerkin to discretize both in the spatial and temporal directions, allowing for an arbitrarily high-order approximation in space and time. Furthermore, this method automatically satisfies the geometric conservation law, which is essential for accurate solutions on time-dependent domains. The biggest criticism is that the application of space-time discretization increases the computational complexity significantly. To overcome this, we can use the high-order accurate Hybridizable or Embedded discontinuous Galerkin method. Numerical results will be presented to illustrate the applicability of the method for fluid flow around rigid bodies. Tobin Isaac (Georgia Tech) Unifying the Analysis of Geometric Decomposition in FEEC March 22, 2022 | FEM@LLNL Seminar Series Two operations take function spaces and make them suitable for finite element computations. The first is the construction of trace-free subspaces (which creates \"bubble\" functions) and the second is the extension of functions from cell boundaries into cell interiors (which create edge functions with the correct continuity): together these operations define the geometric decomposition of a function space. In finite element exterior calculus (FEEC), these two operations have been treated separately for the two main families of finite elements: full polynomial elements and trimmed polynomial elements. In this talk we will see how one constructor of trace-free functions and one extension operator can be used for both families, and indeed for all differential forms. We will also examine the practicality of these two operators as tools for implementing geometric decompositions in actual finite element codes. 
Rapha\u00ebl Zanella (UT Austin) Axisymmetric MFEM-Based Solvers for the Compressible Navier-Stokes Equations and Other Problems March 1, 2022 | FEM@LLNL Seminar Series An axisymmetric model leads, when suitable, to a substantial cut in the computational cost with respect to a 3D model. Although not as accurate, the axisymmetric model allows one to quickly obtain a result that can be satisfactory. Simple modifications to a 2D finite element solver allow one to obtain an axisymmetric solver. We present MFEM-based parallel axisymmetric solvers for different problems. We first present simple axisymmetric solvers for the Laplacian problem and the heat equation. We then present an axisymmetric solver for the compressible Navier-Stokes equations. All solvers are based on H^1-conforming finite element spaces. The correctness of the implementation is verified with convergence tests on manufactured solutions. The Navier-Stokes solver is used to simulate axisymmetric flows with an analytical solution (Poiseuille and Taylor-Couette) and an air flow in a plasma torch geometry. Robert Carson (LLNL) An Overview of ExaConstit and Its Use in the ExaAM Project February 1, 2022 | FEM@LLNL Seminar Series As additively manufactured (AM) parts become increasingly popular in industry, a growing need exists to help expedite the certification process for parts. The ExaAM project seeks to help this process by producing a workflow to model the AM process from the melt pool all the way up to the part-scale response by leveraging multiple physics codes run on upcoming exascale computing platforms. As part of this workflow, ExaConstit is a next-generation quasi-static, solid mechanics FEM code built upon MFEM that is used to connect local microstructures and local properties within the part-scale response. Within this talk, we will first provide an overview of ExaConstit, how we have ported it over to the GPU, and some performance numbers on a number of different platforms. Next, we will discuss how we have leveraged MFEM and the FLUX workflow to run hundreds of high-fidelity simulations on Summit in order to generate the local properties needed to drive the part-scale simulation in the ExaAM workflow. Finally, we will showcase a few other areas where ExaConstit has been used. Guglielmo Scovazzi (Duke) The Shifted Boundary Method: An Immersed Approach for Computational Mechanics January 20, 2022 | FEM@LLNL Seminar Series Immersed/embedded/unfitted boundary methods obviate the need for continual re-meshing in many applications involving rapid prototyping and design. Unfortunately, many finite element embedded boundary methods are also difficult to implement due to the need to perform complex cell cutting operations at boundaries, and the consequences that these operations may have on the overall conditioning of the ensuing algebraic problems. We present a new, stable, and simple embedded boundary method, named the \u201cshifted boundary method\u201d (SBM), which eliminates the need to perform cell cutting. Boundary conditions are imposed on a surrogate discrete boundary, lying on the interior of the true boundary interface. We then construct appropriate field extension operators, by way of Taylor expansions, with the purpose of preserving accuracy when imposing the boundary conditions.
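Picking up the axisymmetric solvers Zanella describes: the essential modification to a 2D solver is to weight the integrands by the radius r. A minimal, hypothetical MFEM sketch of that idea for the Laplacian (not code from the talk; the mesh and order are placeholders):

```cpp
// Hedged sketch of the standard axisymmetric trick: solve in (r, z) but
// weight the 2D integrand by r = x(0), so the bilinear form approximates
// the cylindrical integral of r * grad(u) . grad(v).
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(32, 32, Element::QUADRILATERAL);
   H1_FECollection fec(2, mesh.Dimension());
   FiniteElementSpace fes(&mesh, &fec);

   // Radial weight r (the first coordinate of the 2D (r, z) mesh).
   FunctionCoefficient r_weight([](const Vector &x) { return x(0); });

   BilinearForm a(&fes);
   a.AddDomainIntegrator(new DiffusionIntegrator(r_weight));
   a.Assemble();
   return 0;
}
```

In principle the same radial weighting carries over to the mass and convection terms of the heat-equation and Navier-Stokes weak forms.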
We demonstrate the SBM on large-scale solid and fracture mechanics problems; thermomechanics problems; porous media flow problems; incompressible flow problems governed by the Navier-Stokes equations (also including free surfaces); and problems governed by hyperbolic conservation laws. MFEM Workshop 2023 Aaron Fisher (LLNL) Welcome and Overview October 26, 2023 | MFEM Workshop 2023 Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community resources. Tzanio Kolev (LLNL) The State of MFEM October 26, 2023 | MFEM Workshop 2023 MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities (including adaptive mesh refinement, GPU support, and FEM operator decomposition and partial assembly), examples, and mini-apps. Kolev also highlighted the growth of the global community as well as features included in the recent v4.5 software release. Veselin Dobrev (LLNL) Recent Developments October 26, 2023 | MFEM Workshop 2023 Veselin Dobrev of LLNL detailed the project\u2019s recent developments including sub-mesh extraction, linear form assembly on GPUs, coefficient evaluation on GPUs, new mini-apps and examples, Windows 2022 CI testing on GitHub, and more. He also summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Extreme-scale Scientific Software Development Kit, SciDAC, and the FASTMath Institute. Sebastian Grimberg (AWS) Palace: PArallel LArge-scale Computational Electromagnetics October 26, 2023 | MFEM Workshop 2023 Palace is a parallel finite element code for full-wave electromagnetics simulations based on the MFEM library. Palace is used at the AWS Center for Quantum Computing to perform large-scale 3D simulations of complex electromagnetics models and enable the design of quantum computing hardware. Grimberg provided an overview of the simulation capabilities of Palace as well as some recent developments for conforming and nonconforming adaptive mesh refinement, operator partial assembly, and GPU support. Jacob Lotz (Delft University of Technology) Computation and Reduced Order Modelling of Periodic Flows October 26, 2023 | MFEM Workshop 2023 Many types of periodic flows can be found in nature and industrial applications and their computation is expensive due to lengthy time simulations. His work aims to reduce the cost of these computations. His team solves periodic flows in a space-time domain in which both ends in time are periodic such that they only have to model one period. MFEM is used to discretize the space-time domain and solve our discretized system of equations. Lotz applies a hyper-reduced Proper Orthogonal Decomposition Galerkin reduced order model to speed up our computations. During the presentation he showed (results of) their full order model and their advances in reduced order modelling. Boyan Lazarov (LLNL) Scalable Design and Optimization with MFEM October 26, 2023 | MFEM Workshop 2023 Lazarov discussed recently added and ongoing code development facilitating the solution of shape and topology optimization problems. Both topology and shape optimization are gradient-based iterative algorithms aiming to find a material distribution that minimizes an objective and fulfills a set of constraints. 
Every optimization step includes a solution to a forward problem, an evaluation of the objective and constraints, a solution to an adjoint problem associated with every objective or constraint, an evaluation of gradients, and an update of the design based on mathematical programming techniques. All these steps can be easily implemented and executed by using MFEM in a scalable manner, allowing the design and optimization of large-scale realistic industrial problems. Thus, the goal is to exemplify these features, highlight the techniques that simplify the implementation of new problems, and provide a glimpse into the future. Student Lightning Talks Part 1 October 26, 2023 | MFEM Workshop 2023 The following four students presented in this video: Shani Martinez Weissberg (Tel Aviv University): \u201c\u00b5FEA of a Rabbit Femur\u201d Paul Moujaes (TU-Dortmund): \u201cDissipation-Based Entropy Stabilization for Slope-Limited Discontinuous Galerkin Approximations of Hyperbolic Problems\u201d Alejandro Mu\u00f1oz (Universidad de Granada): \u201cDiscontinuous Galerkin in the Time Domain for Maxwell\u2019s Equations\u201d Bill Ellis (UK Atomic Energy Authority): \u201cComparing Thermo-Mechanical Solves in MOOSE and MFEM\u201d Student Lightning Talks Part 2 October 26, 2023 | MFEM Workshop 2023 The following four students presented in this video: Alexander Mote (Oregon State University): \u201cA Neural Network Surrogate Model for Nonlocal Thermal Flux Calculations\u201d (LLNL-PRES-854134) Amit Rotem (Virginia Tech): \u201cGPU Acceleration of IPDG in MFEM\u201d Josiah Brown (Relogic Research): \u201cProject Minerva\u201d Mike Pozulp (UC Berkeley): \u201cAn Implicit Monte Carlo Acceleration Scheme\u201d Syun'ichi Shiraiwa (PPPL) Radio-Frequency Wave Simulation in Hot Magnetized Plasma using Differential Operator for Non-Local Conductivity Response October 26, 2023 | MFEM Workshop 2023 In high-temperature plasmas, the dielectric response to the RF fields is caused by freely moving charged particles, which naturally makes the response non-local; correspondingly, the Maxwell wave problem becomes an integro-differential equation. A differential form of the dielectric operator, based on the small k\u22a5\u03c1 expansion, is widely used. However, such operators typically include only terms up to second order, and thus their use is limited to waves that satisfy k\u22a5\u03c1 < 1. We propose an alternative approach to construct a dielectric operator, which includes all-order finite Larmor radius effects without explicitly containing higher-order derivatives. We use a rational approximation of the plasma dielectric tensor in wave number space in order to yield a differential operator acting on the dielectric current (J). The 1D O-X-B mode-conversion of the electron Bernstein wave in a non-relativistic Maxwellian plasma was modeled using this approach. Agreement with analytic calculations and conservation of the wave energy carried by the Poynting flux and electron thermal motion (\u201csloshing\u201d) are found. The connection between our construction method and the superposition of Green\u2019s functions for the resulting screened Poisson equations is presented. An approach to extend the operator in a multi-dimensional setting will also be discussed.
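Schematically (the coefficients and exact form below are illustrative, not taken from the talk), the rational-approximation construction Shiraiwa outlines replaces the non-local conductivity kernel by a sum of local terms, each of which yields a screened-Poisson-type equation for a partial current:

```latex
% Illustrative only: a rational fit of the susceptibility in wavenumber space,
% term by term, turns the non-local convolution into local differential solves.
\chi(k_\perp) \;\approx\; \sum_j \frac{c_j}{1 + \lambda_j^2 k_\perp^2}
\qquad\Longleftrightarrow\qquad
\left(1 - \lambda_j^2 \nabla_\perp^2\right)\mathbf{J}_j = \varepsilon_0\, c_j\, \mathbf{E},
\qquad \mathbf{J} = \sum_j \mathbf{J}_j .
```

Only second-order differential operators then appear, regardless of how many terms of the finite-Larmor-radius expansion the rational fit effectively captures.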
Tamas Horvath (Oakland University) Implementation of Hybridizable Discontinuous Galerkin Methods via the HDG Branch October 26, 2023 | MFEM Workshop 2023 Horvath presented the HDG branch, which was initially developed for HDG discretizations of advection-diffusion problems. Recent updates have made the branch highly adaptable for various applications, allowing a flexible implementation of HDG for many different PDEs. He showcased these enhancements and provided insights into their versatile usage across different problems. Yohann Dudouit (LLNL) Empowering MFEM Using libCEED October 26, 2023 | MFEM Workshop 2023 Dudouit began with an overview of the features introduced to MFEM through the integration of libCEED. He emphasized capabilities that are distinct from native MFEM functionalities, marking an enhancement in the software\u2019s suite of tools, such as support for simplices, handling of mixed meshes, and support for p-adaptivity. The presentation concluded by showcasing benchmarks for various problems executed on different HPC architectures, illustrating the performance gains and efficiencies achieved through the libCEED integration. Zhang Chunyu (Sun Yat-Sen University) Homogenized Energy Theory for Solution of Elasticity Problems with Consideration of Higher-Order Microscopic Deformations October 26, 2023 | MFEM Workshop 2023 Classical continuum mechanics faces difficulties in solving problems involving highly inhomogeneous deformations. The proposed theory investigates the impact of high-order microscopic deformation on the modeling of material behaviors and provides a refined interpretation of strain gradients through the averaged strain energy density. Only one scale parameter, i.e., the size of the Representative Volume Element (RVE), is required by the proposed theory. By employing the variational approach and the Augmented Lagrangian Method (ALM), the governing equations for deformation as well as the numerical solution procedure are derived. It is demonstrated that the homogenized energy theory offers plausible explanations and reasonable predictions for problems yet unsolved by the classical theory, such as the size effect of deformation and the stress singularity at the crack tip. The concept of averaged strain energy proves to be more suitable for describing the intricate mechanical behavior of materials. Moreover, high-order partial differential equations can be solved effectively by the ALM by introducing supplementary variables to lower the highest order of the equations. Eric Chin (LLNL) Contact Constraint Enforcement Using the Tribol Interface Physics Library October 26, 2023 | MFEM Workshop 2023 Chin discussed recent additions to the Tribol interface physics library to simplify MPI-parallel contact constraint enforcement in large-deformation, implicit and explicit continuum solid mechanics simulations using MFEM. Tribol is an open-source software package available on GitHub and includes tools for contact detection, state-of-the-art Lagrangian contact methods such as common plane and mortar, and various enforcement techniques such as penalty and Lagrange multiplier. Additionally, Tribol recently added a domain redecomposer for coalescing proximal contact pairs on a single rank. Tribol\u2019s features are designed to interact seamlessly with MFEM and other codes that use MFEM, with native support for MFEM data structures such as ParMesh, ParGridFunction, and HypreParMatrix.
Chin highlighted the simplicity of adding Tribol features to an MFEM-based code by looking at integration with Serac , an open-source implicit nonlinear thermal-structural simulation code. Milan Holec (LLNL) Deterministic Transport MFEM-Miniapp October 26, 2023 | MFEM Workshop 2023 Holec introduced a new multidimensional discretization in MFEM enabling efficient high-order phase-space simulations of various types of Boltzmann transport. In terms of a generalized form of the standard discrete ordinate SN method for the phase-space, his team carefully designs discrete analogs obeying important continuous properties such as conservation of energy, preservation of positivity, preservation of the diffusion limit of transport, preservation of symmetry leading to rays-effect mitigation, and other laws of physics. Finally, Holec showed how to apply this new phase-space MFEM feature to increase the fidelity of modeling of fusion energy experiments. Aaron Fisher (LLNL) Wrap-Up and Visualization Contest Winners October 26, 2023 | MFEM Workshop 2023 The workshop concluded with the announcement of winners of the simulation and visualization contest: (1) displacement distribution of a loaded excavator arm under static equilibrium, rendered by Mehran Ebrahimi from Autodesk Research; and (2) leapfrogging vortex rings based on an MFEM incompressible Schr\u00f6dinger fluid solver, rendered by John Camier from LLNL. Contest winners are featured in the gallery . Conferences in 2023 Tzanio Kolev (LLNL) PDE Simulations on Unstructured Grids with Finite Element Discretizations March 15, 2023 | IPAM at UCLA LLNL computational mathematician Tzanio Kolev presented an overview of MFEM as part of the long program on New Mathematics for the Exascale: Applications to Materials Science at the Institute for Pure and Applied Mathematics. MFEM Workshop 2022 Aaron Fisher (LLNL) Welcome and Overview October 25, 2022 | MFEM Workshop 2022 Held on October 25, 2022, the second annual MFEM community workshop brought together users and developers for a review of software features and the development roadmap, a showcase of technical talks and applications, an interactive Q&A session, and a visualization contest. Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community survey results. Tzanio Kolev (LLNL) The State of MFEM October 25, 2022 | MFEM Workshop 2022 MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities (including adaptive mesh refinement, GPU support, and FEM operator decomposition and partial assembly), examples, and mini-apps. Kolev also highlighted the growth of the global community as well as features included in the recent v4.5 software release. Veselin Dobrev (LLNL) Recent Developments in MFEM October 25, 2022 | MFEM Workshop 2022 Veselin Dobrev of LLNL detailed the project\u2019s recent developments including sub-mesh extraction, linear form assembly on GPUs, coefficient evaluation on GPUs, new mini-apps and examples, Windows 2022 CI testing on GitHub, and more. He also summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Extreme-scale Scientific Software Development Kit, SciDAC, and the FASTMath Institute. 
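As a concrete illustration of the sub-mesh extraction Dobrev mentions, here is a small hedged sketch using MFEM's SubMesh class (requires MFEM 4.5 or newer; the mesh and attribute choice are placeholders):

```cpp
// Hedged sketch: extract the part of a mesh with a given attribute as its own
// mesh and transfer a grid function onto it.
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(8, 8, Element::QUADRILATERAL);
   H1_FECollection fec(1, mesh.Dimension());
   FiniteElementSpace fes(&mesh, &fec);
   GridFunction u(&fes);
   u = 1.0;

   // Extract the sub-domain with attribute 1 (the Cartesian mesh assigns a
   // single attribute, so here the sub-mesh covers the whole domain).
   Array<int> domain_attrs(1);
   domain_attrs[0] = 1;
   SubMesh submesh = SubMesh::CreateFromDomain(mesh, domain_attrs);

   // Build a matching space on the sub-mesh and transfer the field.
   FiniteElementSpace sub_fes(&submesh, &fec);
   GridFunction sub_u(&sub_fes);
   SubMesh::Transfer(u, sub_u);
   return 0;
}
```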
Ben Zwick (University of Western Australia) Solution of the Electroencephalography (EEG) Forward Problem October 25, 2022 | MFEM Workshop 2022 Ben Zwick of the University of Western Australia presented \"Solution of the Electroencephalography (EEG) Forward Problem.\" The brain's electrical activity can be measured using EEG with electrodes attached to the scalp, or electrocorticography (ECoG), also known as intracranial EEG (iEEG), with electrodes implanted on the brain's surface. EEG source localization combines measurements from EEG or iEEG with data from medical imaging to estimate the location and strengths of the current sources that generated the measured electric potential at the electrodes. Source localization can be used to locate the epileptic zone in pharmaco-resistant focal epilepsies and study evoked related potentials. Accurate source localization requires fast and accurate solutions of the EEG forward problem, which involves calculating the electric potential within the brain volume given a predefined source. This presentation demonstrates how MFEM can be used to solve the EEG forward problem using patient-specific geometry and tissue conductivity obtained from medical images. Carlos Brito Pacheco (Universit\u00e9 Grenoble Alpes) Rodin: Density and Topology Optimization Framework October 25, 2022 | MFEM Workshop 2022 Carlos Brito Pacheco of Universit\u00e9 Grenoble Alpes presented \"Rodin: Lightweight and Modern C++17 Shape, Density and Topology Optimization Framework.\" He introduced the shape optimization library Rodin; a lightweight and modular shape optimization framework which provides many of the associated functionalities that are needed when implementing shape and topology optimization algorithms. These functionalities range from refining and remeshing the underlying shape, to providing elegant mechanisms to specify and solve variational problems. Learn more about Rodin on GitHub . Tobias Duswald (CERN/TUM) Stochastic Fractional PDEs: Random Field Generation & Topology Optimization October 25, 2022 | MFEM Workshop 2022 Tobias Duswald of CERN/Technical University of Munich presented \"Stochastic Fractional PDEs with MFEM with Applications to Random Field Generation and Topology Optimization.\" Over the last several centuries, engineers, physicists, and mathematicians have learned how to describe their problems accurately with partial differential equations (PDEs). PDEs govern the laws of continuum mechanics, quantum mechanics, heat transfer, and many other phenomena. More recently, fractional PDEs have gained popularity in the scientific community because they allow for a more general description of complicated systems (e.g., multiphysics) by leveraging a real-valued exponent for the operators. Besides fractional operators, stochastic PDEs have also sparked the community's interest because they generalize the PDE framework to account for randomness appearing in many disciplines. This talk addresses the numerical solution of stochastic, fractional PDEs with MFEM. To deal with these two flavors of PDEs, Duswald introduced MFEM\u2019s WhiteNoiseIntegrator to treat a stochastic linear form and adopt a rational approximation for the fractional operator. He presented results for three different use cases. First, he showed numerical results for the fractional Laplace problem with homogeneous Dirichlet boundary conditions. 
Second, he generated Mat\u00e9rn-type Gaussian random fields (GRFs) by solving a specific stochastic, fractional PDE using an approach commonly referred to as the SPDE method in the spatial statistics literature. Third, he used GRFs to model geometric uncertainties in additive manufacturing processes and applied the model to topology optimization under uncertainty. Alvaro S\u00e1nchez Villar (Princeton Plasma Physics Laboratory) MFEM Application to EM-Wave Simulation in ECR Space Plasma Thrusters October 25, 2022 | MFEM Workshop 2022 Alvaro S\u00e1nchez Villar of the Princeton Plasma Physics Laboratory presented \"MFEM Application to EM-Wave Simulation in ECR Space Plasma Thrusters.\" The solution of Maxwell equations using the cold-plasma approximation is shown in the context of the design of electron cyclotron resonance plasma thrusters for space propulsion applications. This thruster class utilizes the electron cyclotron resonance to energize the plasma constituents and to sustain the plasma discharge. MFEM finite element discretization is used to solve for the time-harmonic electromagnetic waves. The shape and magnitude of the electromagnetic power density absorbed by the plasma are coupled to the plasma transport variables, and therefore determine the thruster operation performance parameters. Coupled simulations of the electromagnetic-wave and plasma transport problems are used to interpret thruster operational principles, to understand the thruster's sensitivity to operational and design parameters, and are compared to experimental measurements both to assess the accuracy of the current numerical model and to highlight its main limitations. Brian Young OpenParEM2D: A 2D Simulator for Guided Waves October 25, 2022 | MFEM Workshop 2022 Independent software developer Brian Young presented \"OpenParEM2D: A Free, Open-Source Electromagnetic Simulator for 2D Waveguides and Transmission Lines.\" An overview is provided of a 2D electromagnetic simulator for guided waves called OpenParEM2D. It is an open-source and free project licensed under GPLv3 or later and released at its website . Capabilities and methodology are presented. Christina Migliore (MIT) The Development of the EM RF-Edge Interactions Mini-app \u201cStix\u201d Using MFEM October 25, 2022 | MFEM Workshop 2022 Christina Migliore of MIT presented \"The Development of the EM RF-Edge Interactions Mini-App Stix Using MFEM.\" Ion cyclotron radio frequency range (ICRF) power plays an important role in heating and current drive in fusion devices. However, experiments show that in the ICRF regime there is a formation of a radio frequency (RF) sheath at the material and antenna boundaries that influences sputtering and power dissipation. Given the size of the sheath relative to the scale of the device, it can be approximated as a boundary condition (BC). Electromagnetic field solvers in the ICRF regime typically treat material boundaries as perfectly conducting, thus ignoring the effect of the RF sheath. Here, progress is described on implementing a model for the RF sheath based on a finite impedance sheath BC formulated by J. Myra and D. A. D\u2019Ippolito, Physics of Plasmas 22 (2015), which provides a representation of the RF rectified sheath including capacitive and resistive effects. This talk discusses results from the development of a parallelized cold-plasma wave equation solver, Stix, that implements this non-linear sheath impedance BC through the method of finite elements in pseudo-1D and pseudo-2D using the MFEM library.
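At their core, both the ECR-thruster solver and Stix are time-harmonic electromagnetic solves; the generic MFEM pattern for such a curl-curl operator, in the spirit of MFEM's example 3 (illustrative coefficients, not code from either project), looks like:

```cpp
// Generic sketch of a time-harmonic Maxwell setup on H(curl)-conforming
// Nedelec elements: (1/mu curl E, curl F) - omega^2 (eps E, F), with
// mu = eps = 1 and an illustrative frequency.
#include "mfem.hpp"
#include <cmath>
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian3D(8, 8, 8, Element::HEXAHEDRON);
   ND_FECollection fec(2, mesh.Dimension());  // H(curl) Nedelec space
   FiniteElementSpace fes(&mesh, &fec);

   const double omega = 2.0 * M_PI;
   ConstantCoefficient muinv(1.0), negeps(-omega * omega);
   BilinearForm a(&fes);
   a.AddDomainIntegrator(new CurlCurlIntegrator(muinv));
   a.AddDomainIntegrator(new VectorFEMassIntegrator(negeps));
   a.Assemble();
   return 0;
}
```

Plasma physics then enters through the coefficients, for example a tensor-valued dielectric in the cold-plasma approximation or the sheath impedance boundary condition described above.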
Will Pazner (Portland State University) High-Order Solvers + GPU Acceleration October 25, 2022 | MFEM Workshop 2022 Will Pazner of Portland State University presented \"High-Order Solvers + GPU Acceleration.\" He discussed the benefits of high-order (HO) methods in modeling under-resolved physics and on modern computing architectures, acknowledging that solving HO finite element problems remains challenging. His talk included details about how MFEM supports matrix-free solvers for HO methods, HO operator setup and application, low-order-refined (LOR) preconditioning and matrix assembly, LOR assembly throughput on GPUs (including CPU and GPU comparisons and parallel scalability), and LOR adaptive mesh refinement preconditioning. Jorge-Luis Barrera (LLNL) Shape and Topology Optimization Powered by MFEM October 25, 2022 | MFEM Workshop 2022 Jorge-Luis Barrera of LLNL presented \"Shape and Topology Optimization Powered by MFEM.\" He discussed the Livermore Design Optimization (LiDO) code, which solves optimization problems for a wide range of Lab-relevant engineering applications. Leveraging MFEM and the LLNL-developed engineering simulation code Serac, LiDO delivers a powerful suite of design tools that run on HPC systems. The talk highlighted several design examples that benefit from LiDO\u2019s integration with MFEM, including multi-material geometries, octet truss lattices, and a concrete dam under stress. LiDO\u2019s graph architecture that seamlessly integrates MFEM features ensures robust topology optimization, as well as shape optimization using nodal coordinates and level set fields as optimization variables. Siu Wun Cheung (LLNL) Reduced Order Modeling for FE Simulations with MFEM & libROM October 25, 2022 | MFEM Workshop 2022 Siu Wun Cheung of LLNL presented \"Reduced Order Modeling for Finite Element Simulations Through the Partnership of MFEM and libROM.\" MFEM provides a wide variety of mesh types and high-order finite element discretizations. However, subject to the model complexity and fine resolution of the discretization, the computational cost can be high, requiring a long time to complete a single forward simulation. In this talk, we will introduce various reduced order modeling techniques, which aim to lower the computational complexity and maintain good accuracy, including intrusive projection-based model reduction and non-intrusive approaches. We will demonstrate the use of reduced order modeling techniques in libROM (www.librom.net), which can be applied to various MFEM examples, including the Poisson problem, linear elasticity, linear advection, mixed nonlinear diffusion, nonlinear elasticity, nonlinear heat conduction, Euler equation, and optimal control problems. Devlin Hayduke (ReLogic Research) Accelerated Deployment of MFEM Based Solvers in Large Scale Industrial Problems October 25, 2022 | MFEM Workshop 2022 Devlin Hayduke of ReLogic Research presented \"Project Minerva: Accelerated Deployment of MFEM Based Solvers in Large Scale Industrial Problems.\" While many Advanced Scientific Computing Research (ASCR) supported software packages are open source, they are often complicated to use, distributed primarily in source-code form targeting HPC systems, and potential adopters lack options for purchasing commercial support, training, and custom-development services. In response to this need, ReLogic Research, Inc., in collaboration with LLNL, is developing a secure, cloud deployable platform based on the MFEM software termed Minerva. 
Minerva will feature an integration layer allowing users of commercially available finite element pre/post-processing software (e.g., Abaqus/CAE, Hypermesh, Femap), typically used with the Abaqus solver, to run simulation studies with the MFEM discretization library, and will further strengthen MFEM-implemented solvers to make them applicable to large-scale industrial design and optimization problems. Synthetik Applied Technologies blastFEM: GPU-Accelerated, High-Performance, Energy-Efficient Solver October 25, 2022 | MFEM Workshop 2022 Tim Brewer, Ben Shields, Peter Vonk, Jeff Heylmun, and Barlev Raymond of Synthetik Applied Technologies presented \"blastFEM: A GPU-Accelerated, Very High-Performance and Energy-Efficient Solver for Highly Compressible Flows.\" Highly compressible multiphase and reactive flows are important and manifest across a myriad of practical applications: novel energy production and propulsion methods, building design, safety and energy efficiency, material discovery, and maintenance of our nuclear arsenal. There are, however, few tools available to industry capable of simulating these flows at a resolution and scale suitable to make predictions of adequate detail\u2014at least within reasonable timeframes and budgetary constraints\u2014to inform engineers and designers. A next-generation, highly efficient simulation code is needed that can deliver results within useful timeframes, with sufficient detail to support simulation-driven design, discovery, and optimization. Furthermore, the code must be designed to run on modern and emerging heterogeneous architectures, and must efficiently leverage these architectures through the use of numerical schemes designed to maximize computational efficiency. Adolfo Rodriguez (OpenSim Technology) Using MFEM for Wellbore Stability Analysis October 25, 2022 | MFEM Workshop 2022 Adolfo Rodriguez of OpenSim Technology presented \"Using MFEM for Wellbore Stability Analysis.\" He discussed the results from a Department of Energy Small Business Innovation Research project regarding the implementation of wellbore stability analysis for hydrocarbon-producing wells. Julian Andrej (LLNL) AWS Tutorial October 25, 2022 | MFEM Workshop 2022 In this tutorial, Julian Andrej of LLNL demonstrated how to use MFEM in the cloud (e.g., an Amazon Web Services instance) for scalable finite element discretization application development. Step-by-step instructions for the tutorial can be found on the tutorial page . Aaron Fisher (LLNL) Wrap-Up and Simulation Contest Winners October 25, 2022 | MFEM Workshop 2022 Aaron Fisher of LLNL concluded the workshop by announcing the winners of the simulation and visualization contest: (1) streamlines of the electric field generated by a current dipole source located in the temporal lobe of an epilepsy patient, rendered by Ben Zwick of the University of Western Australia; (2) a topology-optimized heat sink, rendered by Tobias Duswald of CERN/Technical University of Munich; (3) the magnetic field induced by current running through copper wire in air, rendered by Will Pazner of Portland State University. Contest winners are featured in the MFEM gallery .
Conferences in 2022 Vladimir Tomov (LLNL) Finite Element Algorithms and Research Topics in ALE Hydrodynamics November 17, 2022 | Texas A&M University-Corpus Christi Department of Math & Statistics LLNL computational mathematician Vladimir Tomov discussed high-order finite element methods research, development, and application in the context of shock hydrodynamics simulations. The method is based on an Arbitrary Lagrangian-Eulerian (ALE) formulation consisting of separate Lagrangian, mesh optimization, and remap phases. The presentation addressed the following topics: Lagrangian shock hydrodynamics on curved meshes; multi-material closure models; coupling to multigroup radiation diffusion; optimization, r-adaptivity, and surface fitting of high-order meshes; advection-based remap with nonlinear sharpening of material interfaces; synchronization between the max/min bounds of primal and conservative fields during remap; computationally efficient finite element kernels based on partial assembly and sum factorization. The talk also covered the existing methods followed by a discussion about the outstanding research challenges and ongoing work to address them. John Camier (LLNL) All-Out Kernel Fusion: Reaching Peak Performance Faster in High-Order Finite Element Simulations March 21\u201324, 2022 | NVIDIA GTC22 LLNL research scientist John Camier described recent improvements of high-order finite element CUDA kernels that can reduce the time-to-solution by a factor of 10. Augmenting traditional compiler representations with a general mathematical description enables a sustainable way to generate optimized kernels, matching the peak performance of hand-tuned CUDA code. Such intermediate graph-based representation provides significant potential for optimization, both in terms of minimizing the number of kernel launches and in reducing the memory bandwidth. Camier also presented results on single and multiple GPUs that demonstrate significant reduction in the local problem size required to reach peak performance, leading to faster time-to-solution in finite element applications. MFEM Workshop 2021 Aaron Fisher (LLNL) Wrap-Up and Simulation Contest Winners October 20, 2021 | MFEM Workshop 2021 MFEM\u2019s first community workshop was held virtually on October 20, 2021, with participants around the world. Aaron Fisher of LLNL concluded the workshop by announcing the results of the simulation and visualization contest. The winners represent two very different research applications using MFEM: (1) the electric field generated by electrocardiogram waves of a rabbit\u2019s heart ventricles, rendered by Dennis Ogiermann of Ruhr-University Bochum (Germany); (2) incompressible fluid flow around a rotating turbine, animated by Tamas Horvath of Oakland University (Michigan). Contest submissions are featured in the gallery . Will Pazner (LLNL) High-Order Matrix-Free Solvers October 20, 2021 | MFEM Workshop 2021 For users unfamiliar with MFEM\u2019s solver library, Will Pazner of LLNL demonstrated a few ways\u2014in some cases adding just a single line of code\u2014to run scalable solvers for differential equations. These solvers execute hierarchical finite element discretizations for both low- and high-order problems. Vladimir Tomov (LLNL) MFEM Capabilities for High-Order Mesh Optimization October 20, 2021 | MFEM Workshop 2021 Vladimir Tomov of LLNL described MFEM\u2019s mesh optimization strategies including ways the user can define target elements. 
He demonstrated optimizing a mesh\u2019s shape by limiting displacements to preserve a boundary layer and by changing the size of a uniform mesh in a specific region. MFEM\u2019s mesh-optimizing miniapps are available online . William Dawn (NCSU) Unstructured Finite Element Neutron Transport using MFEM October 20, 2021 | MFEM Workshop 2021 William Dawn from North Carolina State University described his work on unstructured neutron transport. His team models microreactors, a new class of compact reactor with relatively small electrical output. As part of the Exascale Computing Project, Dawn\u2019s team is modeling the MARVEL reactor, which is planned for construction at Idaho National Laboratory. MFEM satisfies their need for a finite element framework with GPU support and rapid prototyping. With MFEM, the team discretizes a neutron transport equation with six independent variables in space, direction, and energy. Traditional neutron transport methods use a \u201csweeping\u201d method to transport particles through a problem, but this is not feasible for generally unstructured meshes. In Dawn\u2019s models, the Self-Adjoint Angular Flux (SAAF) form of the neutron transport equation is used to transform the neutron transport equation from a first-order hyperbolic form to a second-order elliptic form. Then, the SAAF equations are discretized with the finite element method and solved using MFEM. Due to the dependence of the neutron flux on direction and energy, these problems have a high vector dimension, with hundreds to thousands of degrees of freedom (DOF) per mesh vertex. Also, due to the second-order nature of these equations, highly refined meshes are required to sufficiently resolve reactor geometries, with millions of vertices in a mesh. Results have been prepared for problems with billions of DOF using the Summit supercomputer at Oak Ridge National Laboratory. Syun\u2019ichi Shiraiwa (PPPL) Development of PyMFEM Python Wrapper for MFEM & Scalable RF Wave Simulation for Nuclear Fusion October 20, 2021 | MFEM Workshop 2021 Syun\u2019ichi Shiraiwa of the Princeton Plasma Physics Laboratory introduced PyMFEM, a Python wrapper for MFEM that his team uses in radiofrequency (RF) wave simulations for the RF-SciDAC project. RF waves can be used to heat plasma in a nuclear fusion reaction. Simulations of this process present multiple challenges when incorporating a variety of antenna structures at different frequencies, waves with different wavelengths in the same space or in spatially diverse regions, and RF wave effects on the background plasma. To integrate MFEM, a C++ software library, into their multiphysics platform, Shiraiwa\u2019s team created a code \u201cwrapper\u201d that binds MFEM to the external Python components of RF wave simulations, ultimately extending MFEM\u2019s features to Python users. Shiraiwa described how the PyMFEM module works in serial and parallel and invited the audience to contribute to the open-source code. Qi Tang (LANL) An Adaptive, Scalable Fully Implicit Resistive MHD Solver October 20, 2021 | MFEM Workshop 2021 Qi Tang of Los Alamos National Laboratory described his team\u2019s development of an efficient, scalable solver for tokamak plasma simulations. Magnetohydrodynamics (MHD) equations are important for studying plasma systems, but efficient numerical solutions for MHD are extremely challenging due to disparate time and length scales, strong hyperbolic phenomena, and nonlinearity.
Tang\u2019s team has developed a high-order stabilized finite element algorithm for incompressible resistive MHD equations based on MFEM, which provides physics-based preconditioners, adaptive mesh refinement, parallelization, and load balancing. Tang showed animated examples of the model\u2019s scalable and efficient results. Jan Nikl (ELI Beamlines) Laser Plasma Modeling with High-Order Finite Elements October 20, 2021 | MFEM Workshop 2021 Jan Nikl outlined how his team at the ELI Beamlines Centre uses MFEM for laser plasma modeling. Lasers have found their application in many scientific disciplines, where generation of plasma, the fourth state of matter, holds great potential for the future. A detailed description of laser produced plasmas is then essential for many applications, like (pre)pulses of ultra-intense lasers and ion acceleration beamlines, laboratory astrophysics, inertial confinement fusion, and many others. All of the mentioned are investigated at ELI Beamlines in the Czech Republic, a European laser facility aiming to operate the most intense laser system in the world. In this context, Nikl described the numerical construction based on the finite element method. This effort concentrates mainly on the Lagrangian hydrodynamics and Vlasov\u2013Fokker\u2013Planck\u2013Maxwell kinetic description of plasma, utilizing the MFEM library for its flexibility and scalability. Mathias Davids (Harvard) Modeling Peripheral Nerve Stimulations (PNS) in Magnetic Resonance Imaging (MRI) October 20, 2021 | MFEM Workshop 2021 Mathias Davids from Harvard Medical School presented MFEM\u2019s use in a medical setting. Peripheral nerve stimulation (PNS) limits the usable image encoding performance in the latest generation of magnetic resonance imaging (MRI) scanners. The rapid switching of the MRI gradient coils\u2019 magnetic fields induces electric fields in the human body strong enough to evoke unwanted action potential in peripheral nerves, leading to muscle contractions or touch perceptions. Despite its limiting role in MRI, PNS effects are not directly included during the coil design phase. Davids\u2019 team developed a modeling tool to predict PNS thresholds and locations in the human body, allowing them to directly incorporate PNS metrics in the numeric coil winding optimization to design PNS-optimized coil layouts. This modeling tool relies on electromagnetic field simulations in heterogeneous finite element body models coupled to neurodynamic models of myelinated nerve fibers. This tool enables researchers to develop strategies that mitigate PNS effects without building expensive prototype MRI systems, maximizing the usable image encoding performance. Marc Bolinches (UT) Development of DG Compressible Navier-Stokes Solver with MFEM October 20, 2021 | MFEM Workshop 2021 Marc Bolinches from the University of Texas at Austin described a compressible Navier-Stokes solver using MFEM v4.2 which did not include full support for GPUs. The solver uses the discontinuous Galerkin (DG) method as a space discretization and an explicit Runge-Kutta time-integration scheme. An effort has been made to fully support GPU computation by taking over some of the loops internal to the NonLinearForm class. This has also allowed us to implement overlap between computation and communication. The team hopes their open-source code will help other researchers in creating high-fidelity simulations of compressible flows. 
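The explicit Runge-Kutta stepping mentioned in the Bolinches talk follows MFEM's standard ODESolver pattern; below is a minimal hedged sketch with a trivial stand-in operator (the real DG Navier-Stokes right-hand side is, of course, far more involved):

```cpp
// Hedged sketch of MFEM's ODE-solver pattern for explicit time stepping.
// The operator solves du/dt = -u as a placeholder for a DG residual.
#include "mfem.hpp"
using namespace mfem;

class DecayOperator : public TimeDependentOperator
{
public:
   DecayOperator(int n) : TimeDependentOperator(n) {}
   void Mult(const Vector &u, Vector &dudt) const override
   {
      dudt = u;
      dudt *= -1.0;   // du/dt = -u
   }
};

int main()
{
   DecayOperator oper(10);
   Vector u(10);
   u = 1.0;

   RK4Solver ode_solver;     // classic explicit 4th-order Runge-Kutta
   ode_solver.Init(oper);

   double t = 0.0, dt = 0.01;
   for (int step = 0; step < 100; ++step) { ode_solver.Step(u, t, dt); }
   return 0;
}
```

Swapping the time integrator (e.g., for another explicit RK variant) only changes the ODESolver object, which is part of what makes this pattern convenient for solver development.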
Robert Rieben (LLNL) The Multiphysics on Advanced Platforms Project: Performance, Portability and Scaling October 20, 2021 | MFEM Workshop 2021 High-energy-density physics (HEDP) experiments performed at LLNL and other Department of Energy laboratories require multiphysics simulations to predict the behavior of complex physical systems for applications including inertial confinement fusion, pulsed power, and material strength/equations-of-state studies. Robert Rieben described the variety of mathematical algorithms needed for these simulations, including ALE methods, unstructured adaptive mesh refinement, and high-order discretizations. LLNL\u2019s Multiphysics on Advanced Platforms Project (MAPP) is developing a next-generation multiphysics code, called MARBL, based on high-order numerical methods and modular infrastructure for deployment on advanced HPC architectures. MARBL\u2019s use of high-order methods produce better throughput on GPUs. MARBL uses MFEM for finite elements and mesh/field/operator abstractions while leveraging its support for efficient memory management. Rieben explained that co-design efforts among the MARBL, MFEM, and RAJA (portability software) teams led to better device utilization and improved performance for the MARBL code. Felipe G\u00f3mez, Carlos del Valle, & Juli\u00e1n Jim\u00e9nez (National University of Colombia) Phase Change Heat and Mass Transfer Simulation with MFEM October 20, 2021 | MFEM Workshop 2021 Three undergraduate students\u2014Felipe G\u00f3mez, Carlos del Valle, and Juli\u00e1n Jim\u00e9nez\u2014from the National University of Colombia presented their work using MFEM in an oceanographic model. Below the Arctic sea ice, and under the right conditions, a flux of icy brine flows down into the sea. The icy brine has a much lower fusion point and a higher density than normal seawater. As a result, it sinks while freezing everything around it, forming an ice channel called a brinicle (also known as ice stalactite). The team shared their simulations of this phenomenon assuming cylindrical symmetry. The fluid is considered viscous and quasi-stationary, and the problem is simulated taking advantage of the setup symmetries. The heat and salt transport are weakly coupled to the fluid motion and are modeled with the corresponding conservation equations, taking into account diffusive and convective effects. The coupled system of partial differential equations is discretized and solved with the help of the MFEM finite element library. Thomas Helfer (CEA) MFEM-MGIS-MFront, a MFEM-Based Library for Nonlinear Solid Thermomechanic October 20, 2021 | MFEM Workshop 2021 Thomas Helfer from the French Atomic Energy Commission (CEA) introduced the MFEM-MGIS-MFront library (MMM), which aims for efficient use of supercomputers in the field of implicit nonlinear thermomechanics. His team\u2019s primary focus is to develop advanced nuclear fuel element simulations where the evolution of materials under irradiation are influenced by multiple phenomena (e.g., viscoplasticity, damage, phase transitions, swelling due to solid and gaseous fission products). MFEM provides this project with finite element abstractions, adaptive mesh refinement, and a parallel API. However, as applications dedicated to solid mechanics in MFEM are mostly limited to a few constitutive equations such as elasticity and hyperelasticity, Helfer explained that his team extended the software\u2019s functionality to cover a broader spectrum of mechanics. 
Thus, this MMM project combines MFEM with the MFrontGenericInterfaceSupport (MGIS), an open-source C++ library that provides data structures to support arbitrarily complex nonlinear constitutive equations generated by the MFront code generator. MMM is developed within the scope of CEA\u2019s PLEIADES project. Helfer\u2019s presentation provided (1) an introduction to MMM goals; (2) a tutorial of MMM usage with a focus on the high-level user interface; (3) an overview of the core design choices of MMM and how MFEM was extended to support a range of scenarios; and (4) feedback on the two main issues encountered during MMM development. Jamie Bramwell (LLNL) Serac: User-Friendly Abstractions for MFEM-Based Engineering Applications October 20, 2021 | MFEM Workshop 2021 Jamie Bramwell of LLNL presented an overview of the open-source Serac project , whose goal is to provide user-friendly abstractions and modules that enable rapid development of complex nonlinear multiphysics simulation codes. She provided an overview of both the high-level physics modules (thermal conduction, solid mechanics, incompressible flow, electromagnetics) as well as the serac::Functional framework for quickly developing nonlinear GPU-enabled finite element method kernels. Veselin Dobrev (LLNL) Recent Developments in MFEM October 20, 2021 | MFEM Workshop 2021 Veselin Dobrev of LLNL detailed the project\u2019s recent developments including memory manager improvements; serial support for p- and hp-refinement; high-order/low-order refined solution transfer; GLVis visualization via Jupyter Notebooks; and additional GPU support regarding HYPRE preconditioners, PETSc tools, and mesh optimization. MFEM now also integrates with various new libraries (AmgX, Gingko, FMS, and others), and continuous integration testing has been conducted on LLNL\u2019s Quartz, Lassen, and Corona machines. Additionally, Dobrev summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Exascale Computing Project, SciDAC, the FASTMath Institute, and other projects. Tzanio Kolev (LLNL) The State of MFEM October 20, 2021 | MFEM Workshop 2021 MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities of discretization algorithms, built-in solvers, parallel scalability, adaptive mesh refinement, and support for a range of computing architectures. Kolev also highlighted the global community\u2019s contributions as well as features included in the recent v4.3 software release. Aaron Fisher (LLNL) Welcome and Overview October 20, 2021 | MFEM Workshop 2021 The MFEM community workshop held virtually on October 20, 2021, brought together users and developers for a review of software features and the development roadmap, a showcase of technical talks and applications, collaborative breakout sessions, and a simulation contest. Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community survey results. Conferences in 2021 Tzanio Kolev (LLNL) Efficient Finite Element Discretizations for Exascale Applications February 25, 2021 | ExCALIBUR SLE 3 workshop ATPESC 2017, 2018 Tzanio Kolev (LLNL), Mark Shephard (RPI) and Cameron Smith (RPI) Unstructured Meshing Technologies August 6, 2018 | ATPESC 2018 Presented at the Argonne Training Program on Extreme-Scale Computing 2018. Slides for this presentation are available here . 
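For readers following the meshing and adaptivity tutorials listed here, a minimal hedged sketch of MFEM's nonconforming (hanging-node) refinement, with a purely illustrative marking rule:

```cpp
// Hedged sketch: mark a few elements of a quadrilateral mesh and refine them
// nonconformingly, letting MFEM manage the resulting hanging nodes.
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(4, 4, Element::QUADRILATERAL);
   mesh.EnsureNCMesh();                 // enable nonconforming (hanging-node) AMR

   // Mark elements whose center lies in the lower-left quadrant.
   Array<Refinement> refs;
   for (int e = 0; e < mesh.GetNE(); e++)
   {
      Vector c(2);
      mesh.GetElementCenter(e, c);
      if (c(0) < 0.5 && c(1) < 0.5) { refs.Append(Refinement(e)); }
   }
   mesh.GeneralRefinement(refs);        // isotropic refinement of marked elements
   return 0;
}
```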
Tzanio Kolev (LLNL) and Mark Shephard (RPI) Unstructured Meshing Technologies August 7, 2017 | ATPESC 2017 Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here . Tzanio Kolev (LLNL) and Mark Shephard (RPI) Conforming & Nonconforming Adaptivity for Unstructured Meshes August 7, 2017 | ATPESC 2017 Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here . Other Videos LLNL HPC Software Tutorials: MFEM Aug 22, 2024 Instructions for a self-paced overview of MFEM. MFEM: Advanced Simulation Algorithms for HPC Applications Jun 24, 2020 Overview of MFEM 4.0 featuring some of its developers. Center for Applied Scientific Computing Jul 12, 2019 Overview of the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory, including a highlight of MFEM. S&TR Preview: Exascale Computing October 6, 2016 Some early MFEM results in the BLAST project.", "title": "Videos"}, {"location": "videos/#mfem-videos", "text": "A collection of MFEM-related videos, including recorded talks from the MFEM workshops and conference presentations.", "title": "MFEM Videos"}, {"location": "videos/#mfem-workshop-2024", "text": "", "title": "MFEM Workshop 2024"}, {"location": "videos/#aaron-fisher-llnl", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos/#welcome-and-overview", "text": "", "title": "Welcome and Overview"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024", "text": "Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community resources.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#tzanio-kolev-llnl", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos/#the-state-of-mfem", "text": "", "title": "The State of MFEM"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_1", "text": "MFEM project lead Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities, examples, and mini-apps. Kolev also highlighted the growth of the global community as well as features developed during 2024.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#veselin-dobrev-llnl", "text": "", "title": "Veselin Dobrev (LLNL)"}, {"location": "videos/#recent-developments", "text": "", "title": "Recent Developments"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_2", "text": "Veselin Dobrev of LLNL detailed the project\u2019s recent developments including meshing and discretization improvements, GPU acceleration and partial/full assembly support, new examples and mini-apps, and more. 
He also highlighted functionality such as anisotropic refinement, conforming H1 spaces, square-pyramid-shaped elements, and hybridized discontinuous Galerkin solutions.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#ketan-mittal-llnl", "text": "", "title": "Ketan Mittal (LLNL)"}, {"location": "videos/#interpolation-at-arbitrary-points-in-high-order-meshes-on-gpus", "text": "", "title": "Interpolation at Arbitrary Points in High-Order Meshes on GPUs"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_3", "text": "Robust and scalable arbitrary point interpolation is required in the finite element method and spectral element method for querying the partial differential equation solution at points of interest in the domain, comparison of solutions between different meshes, and Lagrangian particle tracking. This is a challenging problem, particularly for high-order unstructured meshes partitioned in parallel with MPI, as it requires identifying the element that overlaps a given point and computing the reference space coordinates inside the element corresponding to the point. We present a robust and efficient way to address this problem for large-scale high-order meshes. First, a combination of globally partitioned and processor-local maps is used to determine a list of candidate MPI ranks and element pairs that could contain the point. Next, element-wise bounding boxes are used to further narrow down the list of candidate elements. Finally, Newton's method with a trust-region-based approach is used to invert the affine map for the candidate elements and determine the reference space coordinates corresponding to the point. Since GPU-based architectures have been demonstrated to accelerate computational analyses using meshes with tensor-product elements, specialized kernels have been developed to perform the arbitrary point search and interpolation on GPUs. We demonstrate the effectiveness of this approach using various high-order meshes.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#michael-tupek-llnl", "text": "", "title": "Michael Tupek (LLNL)"}, {"location": "videos/#automatic-parameter-sensitivities-in-serac-for-engineering-applications", "text": "", "title": "Automatic Parameter Sensitivities in Serac for Engineering Applications"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_4", "text": "We present a framework for automatically calculating sensitivities for both topology and shape design optimization workflows. Building on MFEM infrastructure, we provide abstractions for quickly specifying, solving, coupling, and differentiating new PDEs for engineering applications. Recent developments in Serac include: highly robust nonlinear solvers, integration of the Tribol library for contact enforcement, coupled thermal-mechanics, a differentiable material model library, and checkpointing for transient adjoint calculations.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#jan-nikl-llnl", "text": "", "title": "Jan Nikl (LLNL)"}, {"location": "videos/#hybridization-of-convection-diffusion-systems-in-mfem", "text": "", "title": "Hybridization of Convection-Diffusion Systems in MFEM"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_5", "text": "Convection-diffusion systems are likely the most common class of partial differential equations appearing in practically all different applications.
However, their mixed formulation typically suffers from prohibitively high computational costs and difficult preconditioning, especially close to the steady state where the system becomes a saddle point problem. The hybridization technique offers an appealing answer to these issues. The new framework for mixed systems enables single-line hybridization, reducing the problem to face traces of the total flux only. Solution of such system is then inexpensive, and preconditioning becomes nearly trivial. Non-linear convection is also supported with the action-based regime of operation. Description of the mechanism as well as code examples to show ease of usage are presented.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#vladimir-tomov-llnl", "text": "", "title": "Vladimir Tomov (LLNL)"}, {"location": "videos/#miniapps-for-shock-hydro-field-remap-and-mesh-optimization", "text": "", "title": "Miniapps for Shock Hydro, Field Remap, and Mesh Optimization"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_6", "text": "This presentation discusses recent advancements, research, and exploratory work in the MFEM miniapps for shock hydrodynamics (Laghos), field remap (Remhos), and mesh optimization. For shock hydro, we present the implementation of slip wall boundary conditions for curved domains, along with research involving material interfaces using the shifted interface method or cut-element integration through Algoim and moments-based integration. In the field remap miniapp, we cover developments in stabilized remap for continuous fields, interface sharpening techniques, and matrix-free methods for GPU execution. Lastly, we explore recent progress in mesh optimization, including surface fitting and its GPU implementation, tangential relaxation, automatic differentiation (AD) for complex objective functionals, enhanced metric theory and quality metrics, and hpr-adaptivity for the mesh representation. While some of these advancements are public, general methods that can be applied across various practical miniapps, others are exploratory, demonstrating how the miniapps can serve as a starting point for research in specific areas.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#dylan-copeland-llnl", "text": "", "title": "Dylan Copeland (LLNL)"}, {"location": "videos/#sparse-approximate-quadrature-for-acceleration-of-isogeometric-analysis-roms", "text": "", "title": "Sparse, Approximate Quadrature for Acceleration of Isogeometric Analysis & ROMs"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_7", "text": "Numerical integration for assembly of FEM systems typically employs quadrature rules selected for the polynomial order of basis functions in each element. In some cases, a much sparser rule can maintain accuracy. We present an algebraic method for constructing sparse rules, by formulating a constraint system of states required to be integrated accurately. A nonnegative least squares solver finds a sparse, approximate solution to this constraint system, yielding a quadrature rule with fewer points. One application we demonstrate is isogeometric analysis, where a NURBS FEM space is defined on patches consisting of many elements. Setup times are greatly accelerated, by using patch-wise integration with sum factorization and reduced quadrature rules constructed on patches. Another area of application is reduced order models (ROM), where the FEM system is restricted to a reduced POD basis formed from training data. 
Instead of hyper-reduction methods such as DEIM, the empirical quadrature procedure (EQP) can be used to accelerate ROM simulations with a sparse quadrature rule in the reduced subspace. We demonstrate this on several benchmark problems in the Laghos miniapp and show that energy conservation is maintained.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#jacob-spainhour-cu-boulder", "text": "", "title": "Jacob Spainhour (CU Boulder)"}, {"location": "videos/#robust-containment-queries-over-collections-of-parametric-curves-via-generalized-winding-numbers", "text": "", "title": "Robust Containment Queries over Collections of Parametric Curves via Generalized Winding Numbers"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_8", "text": "The containment query is an important geometric primitive in many multiphysics applications. For example, when initializing multimaterial Arbitrary Lagrangian-Eulerian (ALE) simulations, we often need to determine whether arbitrary quadrature points from the background mesh are inside or outside the regions associated with each material. However, existing methods require expensive refinement to accurately capture curved regions. At the same time, many methods are wholly incompatible with user-defined geometries that contain geometric and numeric gaps and/or self-intersections. In this work, we develop a containment query for 2D regions defined by rational Bezier curves that operates directly on curved objects. Our method relies on the generalized winding number (GWN), a mathematical construction that can be evaluated for each curve independently, making the derived containment query robust to non-watertightness. We use an adaptive algorithm to compute the GWN field exactly, which permits fast evaluation for points considered \"distant\" to the curve while being numerically stable for points that are arbitrarily close. Overall, this classification scheme greatly expands the types of bounding geometry that can be used directly in shaping applications without the need for otherwise expensive repair techniques. If time permits, we will also discuss our extensions of this idea to 3D shapes defined by parametric surfaces.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#mathias-schmidt-llnl", "text": "", "title": "Mathias Schmidt (LLNL)"}, {"location": "videos/#level-set-topology-optimization-with-pde-generated-conformal-meshes", "text": "", "title": "Level-Set Topology Optimization with PDE Generated Conformal Meshes"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_9", "text": "The promise of topology optimization (TO) is to provide engineers with a systematic computational tool to support the development of optimal designs. A shortcoming of classic density based multi-material TO designs is the nebulous interphase region between materials, which leads to inaccurate response predictions in these very regions. In contrast, designs based on boundary and interface regions, rather than interphase regions, yield accurate response predictions. Level-set based TO is an example of such; however, the analysis of the response often requires repeated mesh generation or non-standard finite element computations. We present a solely PDE-based, level-set topology optimization approach in which geometries are described through the iso-contour of one or multiple level-set fields which are discretized over a mesh. The nodal heights serve as the design parameters. 
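As a notational aside on the level-set description in the Schmidt abstract above (a standard formulation, stated here as an assumption rather than a quote from the talk): with a level-set field $\phi$ expanded in the finite element basis of the design mesh, the material regions and their interface are identified as

```latex
\phi(x) = \sum_i \phi_i\, N_i(x), \qquad
\Omega_1 = \{\, x : \phi(x) > 0 \,\}, \quad
\Omega_2 = \{\, x : \phi(x) < 0 \,\}, \quad
\Gamma = \{\, x : \phi(x) = 0 \,\},
```

so the nodal values $\phi_i$ are exactly the design parameters referred to above.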
The governing field equations are discretized by a conformal discretization over a separate \u201canalysis\u201d mesh. In the optimization, the \u201canalysis\u201d mesh is morphed such that its boundary and interfaces conform with the isocontours of the LS fields. The mesh morphing is performed using the Target-Matrix Optimization Paradigm (TMOP) approach. Our TMOP formulation is a PDE-based mesh morphing operation which aims to improve the interface conformity while preserving mesh quality. Design sensitivities of the optimization cost and constraint functions with respect to all design level-set fields are computed through an adjoint approach which accounts for the mesh morphing process. The proposed analysis and optimization framework is based on MFEM, a free, lightweight, scalable C++ library for finite element methods which supports the optimization of large-scale problems. We investigate the robustness of the proposed optimization methodology by solving two- and three-dimensional multi-material optimization problems involving linear diffusion and elasticity. We discuss the advantages and challenges of our approach with regards to the mesh morphing process. LS regularization techniques are employed to produce a well-behaved mesh morphing problem throughout the optimization. Finally, select aspects and challenges of our approach with respect to parallel computing and processor decomposition are discussed.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#yohann-dudouit-llnl", "text": "", "title": "Yohann Dudouit (LLNL)"}, {"location": "videos/#mitigating-rays-effect-in-phase-space-advection-with-matrix-free-hd-dg-methods", "text": "", "title": "Mitigating Rays-Effect in Phase-Space Advection with Matrix-Free HD DG Methods"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_10", "text": "The mitigation of the rays-effect in phase-space advection problems is a critical challenge in deterministic transport simulations, particularly when using traditional methods that struggle with numerical artifacts. In this work, we propose a novel high-dimensional matrix-free discontinuous Galerkin (DG) approach designed to address the rays-effect by fully discretizing phase space, including velocity components, up to six dimensions. This methodology avoids the excessive computational cost associated with Monte Carlo simulations while offering a deterministic alternative that preserves accuracy and scalability. A key component of our approach is the use of advanced coordinate transformations, which optimize the coordinate system to minimize the rays-effect by aligning the coordinate system with the net flux. Our matrix-free formulation minimizes memory usage and improves computational efficiency by avoiding the assembly of large sparse matrices, a critical factor when scaling to high-dimensional problems. 
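In MFEM terms, avoiding assembled sparse matrices as described above corresponds to partial assembly, where only quadrature-point data are stored and the operator is applied matrix-free. Below is a minimal hedged sketch with a generic advection form; it is illustrative only and not the actual high-dimensional phase-space solver from the talk:

```cpp
#include "mfem.hpp"
using namespace mfem;

int main(int argc, char *argv[])
{
   Mpi::Init(argc, argv);
   Hypre::Init();

   Mesh mesh = Mesh::MakeCartesian2D(32, 32, Element::QUADRILATERAL);
   ParMesh pmesh(MPI_COMM_WORLD, mesh);
   H1_FECollection fec(4, pmesh.Dimension());
   ParFiniteElementSpace fes(&pmesh, &fec);

   // Constant advection field; stands in for the phase-space transport terms.
   Vector v(2); v(0) = 1.0; v(1) = 0.5;
   VectorConstantCoefficient velocity(v);

   ParBilinearForm a(&fes);
   a.AddDomainIntegrator(new ConvectionIntegrator(velocity, -1.0));
   a.SetAssemblyLevel(AssemblyLevel::PARTIAL);  // store only quadrature data
   a.Assemble();                                // no global sparse matrix is formed

   Array<int> empty_tdofs;                      // no essential BCs in this sketch
   OperatorPtr A;
   a.FormSystemMatrix(empty_tdofs, A);

   Vector x(fes.GetTrueVSize()), y(fes.GetTrueVSize());
   x = 1.0;
   A->Mult(x, y);                               // matrix-free operator action
   return 0;
}
```

In recent MFEM versions the same assembly-level switch also covers DG forms with face integrators, which is closer to the setting of the talk.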
Numerical experiments demonstrate the effectiveness of this approach in reducing rays-effect artifacts, providing a robust and scalable solution for high-dimensional transport problems.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#femllnl-seminars", "text": "", "title": "FEM@LLNL Seminars"}, {"location": "videos/#denis-ridzal-sandia-national-laboratories", "text": "", "title": "Denis Ridzal (Sandia National Laboratories)"}, {"location": "videos/#r-adaptive-mesh-optimization-to-enhance-finite-element-basis-compression", "text": "", "title": "R-Adaptive Mesh Optimization to Enhance Finite Element Basis Compression"}, {"location": "videos/#october-15-2024-femllnl-seminar-series", "text": "Modern computing systems are capable of exascale calculations. While these systems continue to grow in processing power, the available system memory has not increased commensurately. A predominant approach to limit the memory usage in large-scale applications is to exploit the abundant processing power and continually recompute many low-level simulation quantities, rather than storing them. However, this approach can adversely impact the throughput of the simulation and diminish the benefits of modern computing architectures. We present two novel contributions to reduce the memory burden while maintaining performance in simulations based on finite element discretizations. The first contribution develops dictionary-based data compression schemes that detect and exploit the structure of the discretization, due to redundancies across the finite element mesh. These schemes are shown to reduce the memory requirements of key computational kernels by more than 99% on meshes with large numbers of nearly identical mesh cells. For applications where this structure does not exist, our second contribution leverages a recently developed augmented Lagrangian sequential quadratic programming algorithm to enable r-adaptive mesh optimization, with the goal of enhancing redundancies in the mesh. Numerical results demonstrate the effectiveness of the proposed methods to detect, exploit and enhance mesh structure on examples inspired by large-scale applications.", "title": "October 15, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#ruben-sevilla-swansea-university", "text": "", "title": "Rub\u00e9n Sevilla (Swansea University)"}, {"location": "videos/#mesh-generation-and-adaptation-using-green-ai", "text": "", "title": "Mesh Generation and Adaptation using Green AI"}, {"location": "videos/#september-17-2024-femllnl-seminar-series", "text": "Most methods used to solve partial differential equations require creating a mesh that represents the model's geometry. Today, unstructured mesh technology is widely used, allowing three-dimensional meshes with hundreds of millions of elements to be generated in just a few minutes. However, when optimising a design, many simulations are needed for different operating conditions and geometric configurations. Creating the best mesh for each setup becomes time-consuming due to the requirement of excessive human intervention and expertise. This talk will cover our recent work on using artificial intelligence to predict near-optimal meshes suitable for simulations. The main idea is to take advantage of the large amount of data that already exists in the industry to improve the selection of a suitable spacing function, including anisotropic spacing. The proposed approach aims to use knowledge from previous simulations to guide the mesh generation process. 
I will assess the proposed method based on the accuracy of the predictions, efficiency, and environmental impact. This includes considering the carbon footprint and energy consumption of the computations required for a parametric CFD analysis under different flow conditions and angles of attack. For transient problems, the use of high order methods provides several advantages due to the low dissipation and dispersion errors associated to these schemes. An attractive approach to simulate these problems is to incorporate degree adaptive schemes to enhance the approximation only where needed. In this talk I will also present our recent work on using artificial intelligence to aid a degree adaptive process.", "title": "September 17, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#esteban-ferrer-and-david-huergo-universidad-politecnica-de-madrid", "text": "", "title": "Esteban Ferrer and David Huergo (Universidad Polit\u00e9cnica de Madrid)"}, {"location": "videos/#new-avenues-in-high-order-fluid-dynamics", "text": "", "title": "New Avenues in High Order Fluid Dynamics"}, {"location": "videos/#september-3-2024-femllnl-seminar-series", "text": "We present the latest developments of our High-Order Spectral Element Solver (HORSES3D), an open source high-order discontinuous Galerkin framework capable of solving a variety of flow applications, including compressible flows (with or without shocks), incompressible flows, various RANS and LES turbulence models, particle dynamics, multiphase flows, and aeroacoustics [ 1 ]. Recent developments allow us to simulate challenging multiphysics including turbulent flows, multiphase and moving bodies, using local h and p-adaption. In addition, we present recent work that couples Machine Learning and reinforcement learning techniques with high order simulations.", "title": "September 3, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#patrick-farrell-university-of-oxford", "text": "", "title": "Patrick Farrell (University of Oxford)"}, {"location": "videos/#designing-conservative-and-accurately-dissipative-numerical-integrators-in-time", "text": "", "title": "Designing conservative and accurately dissipative numerical integrators in time"}, {"location": "videos/#july-30-2024-femllnl-seminar-series", "text": "Numerical methods for the simulation of transient systems with structure-preserving properties are known to exhibit greater accuracy and physical reliability, in particular over long durations. These schemes are often built on powerful geometric ideas for broad classes of problems, such as Hamiltonian or reversible systems. However, there remain difficulties in devising higher-order- in-time structure-preserving discretizations for nonlinear problems, and in conserving non-polynomial invariants. In this work we propose a new, general framework for the construction of structure-preserving timesteppers via finite elements in time and the systematic introduction of auxiliary variables. The framework reduces to Gauss methods where those are structure-preserving, but extends to generate arbitrary-order structure-preserving schemes for nonlinear problems, and allows for the construction of schemes that conserve multiple higher-order invariants. 
We demonstrate the ideas by devising novel schemes that exactly conserve all known invariants of the Kepler and Kovalevskaya problems, arbitrary-order schemes for the compressible Navier\u2013Stokes equations that conserve mass, momentum, and energy, and provably dissipate entropy, and multi-conservative schemes for the Benjamin-Bona-Mahony equation.", "title": "July 30, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#gonzalo-de-diego-courant-institute", "text": "", "title": "Gonzalo de Diego (Courant Institute)"}, {"location": "videos/#numerical-solvers-for-viscous-contact-problems-in-glaciology", "text": "", "title": "Numerical Solvers for Viscous Contact Problems in Glaciology"}, {"location": "videos/#may-6-2024-femllnl-seminar-series", "text": "Viscous contact problems are time-dependent viscous flow problems where the fluid is in contact with a solid surface from which it can detach and reattach. Over sufficiently long timescales, ice is assumed to flow like a viscous fluid with a nonlinear rheology. Therefore, certain phenomena in glaciology, like the formation of subglacial cavities in the base of an ice sheet or the dynamics of marine ice sheets (continental ice sheets that slide into the ocean and go afloat at a grounding line, detaching from the bedrock), can be modelled as viscous contact problems. In particular, these problems can be described by coupling the Stokes equations with contact boundary conditions to free boundary equations that evolve the ice domain in time. In this talk, I will describe the difficulties that arise when attempting to solve this system numerically and I will introduce a method that is capable of overcoming them.", "title": "May 6, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#nat-trask-university-of-pennsylvania", "text": "", "title": "Nat Trask (University of Pennsylvania)"}, {"location": "videos/#a-data-driven-finite-element-exterior-calculus", "text": "", "title": "A Data Driven Finite Element Exterior Calculus"}, {"location": "videos/#april-2-2024-femllnl-seminar-series", "text": "Despite the recent flurry of work employing machine learning to develop surrogate models to accelerate scientific computation, the \"black-box\" underpinnings of current techniques fail to provide the verification and validation guarantees provided by modern finite element methods. In this talk we present a data-driven finite element exterior calculus for developing reduced-order models of multiphysics systems when the governing equations are either unknown or require closure. The framework employs deep learning architectures typically used for logistic classification to construct a trainable partition of unity which provides notions of control volumes with associated boundary operators. This alternative to a traditional finite element mesh is fully differentiable and allows construction of a discrete de Rham complex with a corresponding Hodge theory. We demonstrate how models may be obtained with the same robustness guarantees as traditional mixed finite element discretization, with deep connections to contemporary techniques in graph neural networks. For applications developing digital twins where surrogates are intended to support real time data assimilation and optimal control, we further develop the framework to support Bayesian optimization of unknown physics on the underlying adjacency matrices of the chain complex. 
By framing the learning of fluxes via an optimal recovery problem with a computationally tractable posterior distribution, we are able to develop models with intrinsic representations of epistemic uncertainty.", "title": "April 2, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#william-moses-university-of-illinois-urbana-champaign", "text": "", "title": "William Moses (University of Illinois Urbana-Champaign)"}, {"location": "videos/#supercharging-programming-through-compiler-technology", "text": "", "title": "Supercharging Programming Through Compiler Technology"}, {"location": "videos/#march-14-2024-femllnl-seminar-series", "text": "The decline of Moore's law and an increasing reliance on computation has led to an explosion of specialized software packages and hardware architectures. While this diversity enables unprecedented flexibility, it also requires domain-experts to learn how to customize programs to efficiently leverage the latest platform-specific API's and data structures, instead of working on their intended problem. Rather than forcing each user to bear this burden, I propose building high-level abstractions within general-purpose compilers that enable fast, portable, and composable programs to be automatically generated. This talk will demonstrate this approach through compilers that I built for two domains: automatic differentiation and parallelism. These domains are critical to both scientific computing and machine learning, forming the basis of neural network training, uncertainty quantification, and high-performance computing. For example, a researcher hoping to incorporate their climate simulation into a machine learning model must also provide a corresponding derivative simulation. My compiler, Enzyme, automatically generates these derivatives from existing computer programs, without modifying the original application. Moreover, operating within the compiler enables Enzyme to combine differentiation with program optimization, resulting in asymptotically and empirically faster code. Looking forward, this talk will also touch on how this domain-agnostic compiler approach can be applied to new directions, including probabilistic programming.", "title": "March 14, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#sungho-lee-university-of-memphis", "text": "", "title": "Sungho Lee (University of Memphis)"}, {"location": "videos/#laghost-development-of-lagrangian-high-order-solver-for-tectonics", "text": "", "title": "LAGHOST: Development of Lagrangian High-Order Solver for Tectonics"}, {"location": "videos/#march-5-2024-femllnl-seminar-series", "text": "Long-term geological and tectonic processes associated with large deformation highlight the importance of using a moving Lagrangian frame. However, modern advancements in the finite element method, such as MPI parallelization, GPU acceleration, high-order elements, and adaptive grid refinement for tectonics based on this frame, have not been updated. Moreover, the existing solvers available in open access suffer from limited tutorials, a poor user manual, and several dependencies that make model building complex. These limitations can discourage both new users and developers from utilizing and improving these models. As a result, we are motivated to develop a user-friendly, Lagrangian thermo-mechanical numerical model that incorporates visco-elastoplastic rheology to simulate long-term tectonic processes like mountain building, mantle convection and so on. 
We introduce an ongoing project called LAGHOST (Lagrangian High-Order Solver for Tectonics), which is an MFEM-based tectonic solver. LAGHOST expands the capabilities of MFEM's LAGHOS mini-app. Currently, our solver incorporates constitutive equation, body force, mass scaling, dynamic relaxation, Mohr-Coulomb plasticity, plastic softening, Winkler foundation, remeshing, and remapping. To evaluate LAGHOST, we conducted four benchmark tests. The first test involved compressing an elastic box at a constant velocity, while the second test focused on the compaction of a self-weighted elastic column. To enable larger time-step sizes and achieve quasi-static solutions in the benchmarks, we introduced a fictitious density and implemented dynamic relaxation. This involved scaling the density factor and introducing a portion of force component opposing the previous velocity direction at nodal points. Our results exhibited good agreement with analytical solutions. Subsequently, we incorporated Mohr-Coulomb plasticity, a reliable model for predicting rock failure, into LAGHOST. We revisited the elastic box benchmark and considered plastic materials. By considering stress correction arising from plastic yielding, we confirmed that the updated solution from elastic guess aligned with the analytical solution. Furthermore, we applied LAGHOST to simulate the evolution of a normal fault, a significant tectonic phenomenon. To model normal fault evolution, we introduced strain softening on cohesion as the dominant factor based on geological evidence. Our simulations successfully captured the normal fault's evolution, with plastic strain localizing at shallow depths before propagating deeper. The fault angle reached approximately 60 degrees, in line with the Mohr-Coulomb failure theory.", "title": "March 5, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#kevin-chung-llnl", "text": "", "title": "Kevin Chung (LLNL)"}, {"location": "videos/#data-driven-dg-fem-via-reduced-order-modeling-and-domain-decomposition", "text": "", "title": "Data-Driven DG FEM Via Reduced Order Modeling and Domain Decomposition"}, {"location": "videos/#february-6-2024-femllnl-seminar-series", "text": "Numerous cutting-edge scientific technologies originate at the laboratory scale, but transitioning them to practical industry applications can be a formidable challenge. Traditional pilot projects at intermediate scales are costly and time-consuming. Alternatives such as E-pilots can rely on high-fidelity numerical simulations, but even these simulations can be computationally prohibitive at larger scales. To overcome these limitations, we propose a scalable, component reduced order model (CROM) method. We employ Discontinuous Galerkin Domain Decomposition (DG-DD) to decompose the physics governing equation for a large-scale system into repeated small-scale unit components. Critical physics modes are identified via proper orthogonal decomposition (POD) from small-scale unit component samples. The computationally expensive, high-fidelity discretization of the physics governing equation is then projected onto these modes to create a reduced order model (ROM) that retains essential physics details. The combination of DG-DD and POD enables ROMs to be used as building blocks comprised of unit components and interfaces, which can then be used to construct a global large-scale ROM without data at such large scales. 
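For reference, the projection step sketched above follows the standard POD/Galerkin pattern, written here in generic form (an assumption for illustration; the component-wise construction in the talk adds interface coupling between unit components on top of this):

```latex
X = [\, u_1 \;\; u_2 \;\; \cdots \;\; u_{n_s} \,] \approx V \Sigma W^{T}, \qquad
\hat{A} = V^{T} A V, \quad \hat{b} = V^{T} b, \qquad
\hat{A}\hat{u} = \hat{b}, \quad u \approx V \hat{u},
```

where the columns of $V$ are the leading POD modes of the snapshot matrix $X$ and the reduced operator $\hat{A}$ is $r \times r$ with $r$ much smaller than the full dimension.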
This method is demonstrated on the Poisson and Stokes flow equations, showing that it can solve equations about 15\u221240 times faster with only \u223c 1% relative error, even at scales 1000 times larger than the unit components. This research is ongoing, with efforts to apply these methods to more complex physics such as Navier-Stokes equation, highlighting their potential for transitioning laboratory-scale technologies to practical industrial use.", "title": "February 6, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#brian-young", "text": "", "title": "Brian Young"}, {"location": "videos/#a-full-wave-electromagnetic-simulator-for-frequency-domain-s-parameter-calculations", "text": "", "title": "A Full-Wave Electromagnetic Simulator for Frequency-Domain S-Parameter Calculations"}, {"location": "videos/#january-9-2024-femllnl-seminar-series", "text": "An open-source and free full-wave electromagnetic simulator is presented that addresses the engineering community\u2019s need for the calculation of frequency-domain S-parameters. Two-dimensional port simulations are used to excite the 3D space and to extract S-parameters using modal projections. Matrix solutions are performed using complex computations. Features enabled by the MFEM library include adaptive mesh refinement, arbitrary order finite elements, and parallel processing using MPI. Implementation details are presented along with sample results and accuracy demonstrations.", "title": "January 9, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#jesse-chan-rice-university", "text": "", "title": "Jesse Chan (Rice University)"}, {"location": "videos/#high-order-positivity-preserving-entropy-stable-discontinuous-galerkin-discretizations", "text": "", "title": "High order positivity-preserving entropy stable discontinuous Galerkin discretizations"}, {"location": "videos/#december-5-2023-femllnl-seminar-series", "text": "High order discontinuous Galerkin (DG) methods provide high order accuracy and geometric flexibility, but are known to be unstable when applied to nonlinear conservation laws whose solutions exhibit shocks and under-resolved solution features. Entropy stable schemes improve robustness by ensuring that physically relevant solutions satisfy a semi-discrete cell entropy inequality independently of numerical resolution and solution regularization while retaining formal high order accuracy. In this talk, we will review the construction of entropy stable high order discontinuous Galerkin methods and describe approaches for enforcing that solutions are \"physically relevant\" (i.e., the thermodynamic variables remain positive).", "title": "December 5, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#youngsoo-choi-llnl", "text": "", "title": "Youngsoo Choi (LLNL)"}, {"location": "videos/#physics-guided-interpretable-data-driven-simulations", "text": "", "title": "Physics-guided interpretable data-driven simulations"}, {"location": "videos/#november-14-2023-femllnl-seminar-series", "text": "A computationally demanding physical simulation often presents a significant impediment to scientific and technological progress. Fortunately, recent advancements in machine learning (ML) and artificial intelligence have given rise to data-driven methods that can expedite these simulations. For instance, a well-trained 2D convolutional deep neural network can provide a 100,000-fold acceleration in solving complex problems like Richtmyer-Meshkov instability [ 1 ]. 
However, conventional black-box ML models lack the integration of fundamental physics principles, such as the conservation of mass, momentum, and energy. Consequently, they often run afoul of critical physical laws, raising concerns among physicists. These models attempt to compensate for the absence of physics information by relying on vast amounts of data. Additionally, they suffer from various drawbacks, including a lack of structure-preservation, computationally intensive training phases, reduced interpretability, and susceptibility to extrapolation issues. To address these shortcomings, we propose an approach that incorporates physics into the data-driven framework. This integration occurs at different stages of the modeling process, including the sampling and model-building phases. A physics-informed greedy sampling procedure minimizes the necessary training data while maintaining target accuracy [ 2 ]. A physics-guided data-driven model not only preserves the underlying physical structure more effectively but also demonstrates greater robustness in extrapolation compared to traditional black-box ML models. We will showcase numerical results in areas such as hydrodynamics [ 3 , 4 ], particle transport [ 5 ], plasma physics, pore-collapse, and 3D printing to highlight the efficacy of these data-driven approaches. The advantages of these methods will also become apparent in multi-query decision-making applications, such as design optimization [ 6 , 7 ].", "title": "November 14, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#ben-southworth-los-alamos-national-laboratory", "text": "", "title": "Ben Southworth (Los Alamos National Laboratory)"}, {"location": "videos/#superior-discretizations-and-amg-solvers-for-extremely-anisotropic-diffusion-via-hyperbolic-operators", "text": "", "title": "Superior discretizations and AMG solvers for extremely anisotropic diffusion via hyperbolic operators"}, {"location": "videos/#october-17-2023-femllnl-seminar-series", "text": "Heat conduction in magnetic confinement fusion can reach anisotropy ratios of 10^9-10^10, and in complex problems the direction of anisotropy may not be aligned with (or is impossible to align with) the spatial mesh. Such problems pose major challenges for both discretization accuracy and efficient implicit linear solvers. Although the underlying problem is elliptic or parabolic in nature, we argue that the problem is better approached from the perspective of hyperbolic operators. The problem is posed in a directional gradient first order formulation, introducing a directional heat flux along magnetic field lines as an auxiliary variable. We then develop novel continuous and discontinuous discretizations of the mixed system, using stabilization techniques developed for hyperbolic problems. The resulting block matrix system is then reordered so that the advective operators are on the diagonal, and the system is solved using AMG based on approximate ideal restriction (AIR), which is particularly efficient for upwind discretizations of advection. 
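MFEM exposes hypre's AIR variant of BoomerAMG through its HypreBoomerAMG wrapper; the fragment below mirrors the AIR setup in MFEM's ex9p example and is only a hedged illustration of the solver strategy described above (A_adv stands for an upwind-discretized advective block assembled elsewhere; this is not the authors' actual solver):

```cpp
// A_adv: HypreParMatrix for the upwind advective block, assembled elsewhere.
HypreBoomerAMG amg(A_adv);
amg.SetAdvectiveOptions(1, "", "FA");   // AIR: distance-1 restriction, FF-A relaxation
amg.SetStrongThresholdR(0.6);           // strength threshold used to build R
amg.SetPrintLevel(0);

HypreGMRES gmres(A_adv);
gmres.SetPreconditioner(amg);
gmres.SetTol(1e-10);
gmres.SetMaxIter(200);
gmres.SetPrintLevel(0);
// gmres.Mult(B, X);   // solve A_adv * X = B for HypreParVectors B, X
```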
Compared with traditional discretizations and AMG solvers, we achieve orders of magnitude reduction in error and AMG iterations in the extremely anisotropic regime.", "title": "October 17, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#natasha-sharma-university-of-texas-at-el-paso", "text": "", "title": "Natasha Sharma (University of Texas at El Paso)"}, {"location": "videos/#a-continuous-interior-penalty-method-framework-for-sixth-order-cahn-hilliard-type-equations-with-applications-to-microstructure-evolution-and-microemulsions", "text": "", "title": "A Continuous Interior Penalty Method Framework for Sixth Order Cahn-Hilliard-type Equations with applications to microstructure evolution and microemulsions"}, {"location": "videos/#july-18-2023-femllnl-seminar-series", "text": "The focus of this talk is on presenting unconditionally stable, uniquely solvable, and convergent numerical methods to solve two classes of the sixth-order Cahn-Hilliard-type equations. The first class arises as the so-called phase field crystal atomistic model of crystal growth, which has been employed to simulate a number of physical phenomena such as crystal growth in a supercooled liquid, crack propagation in ductile material, dendritic and eutectic solidification. The second class, henceforth referred to as Microemulsion systems (ME systems) appears as a model capturing the dynamics of phase transitions in ternary oil-water-surfactant systems in which three phases namely a microemulsion, almost pure oil, and almost pure water can coexist in equilibrium. ME systems have several applications ranging from enhanced oil recovery to the development of environmentally friendly solvents and drug delivery systems. Despite the widespread applications of these models, the major challenge impeding their use has been and continues to be a lack of understanding of the complex systems which they model. Thus, building computational models for these systems is crucial to the understanding of these systems. The presence of the higher order derivative in combination with a time-dependent process poses many challenges to the creation of stable, convergent, and efficient numerical methods approximating solutions to these equations. In this talk, we present a continuous interior penalty Galerkin framework for solving these equations and theoretically establish the desirable properties of stability, unique solvability, and first-order convergence. We close the talk by presenting the numerical results of some benchmark problems to verify the practical performance of the proposed approach and discuss some exciting current and future applications.", "title": "July 18, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#freddie-witherden-texas-am-university", "text": "", "title": "Freddie Witherden (Texas A&M University)"}, {"location": "videos/#fsspmdm-accelerating-small-sparse-matrix-multiplications-by-run-time-code-generation", "text": "", "title": "FSSpMDM \u2014 Accelerating Small Sparse Matrix Multiplications by Run-Time Code Generation"}, {"location": "videos/#june-20-2023-femllnl-seminar-series", "text": "Small matrix multiplications are a key building block of modern high-order finite element method solvers. Such multiplications describe the act of applying a specific finite element operator onto a set of state vectors. The small and irregular size of these multiplications makes them poor candidates for generic matrix multiplication routines. 
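Concretely, the operation in question is a small sparse operator matrix applied simultaneously to many element-local state vectors. A naive reference loop (CSR storage, purely illustrative, and nothing like the runtime-generated, architecture-specific kernels the talk describes) looks like this:

```cpp
#include <cstddef>
#include <vector>

// y[m x n] = A[m x k] * x[k x n], with the small operator A stored in CSR.
// n is the number of state vectors (e.g. elements times variables);
// all names are illustrative.
struct CSRMatrix
{
   int m = 0, k = 0;
   std::vector<int> row_ptr;    // size m + 1
   std::vector<int> col_idx;    // size nnz
   std::vector<double> val;     // size nnz
};

void SpMDM(const CSRMatrix &A, const double *x, double *y, int n)
{
   for (int i = 0; i < A.m; ++i)
   {
      for (int j = 0; j < n; ++j) { y[i * n + j] = 0.0; }
      for (int p = A.row_ptr[i]; p < A.row_ptr[i + 1]; ++p)
      {
         const double a = A.val[p];
         const double *xr = x + static_cast<std::size_t>(A.col_idx[p]) * n;
         for (int j = 0; j < n; ++j) { y[i * n + j] += a * xr[j]; }
      }
   }
}
```

Run-time code generation replaces the inner loops with unrolled instructions in which the nonzero values of A are embedded as constants, which is the general idea behind the generated kernels compared in the talk.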
Moreover, for elements with a tensor product construction, the operators themselves can exhibit a significant degree of sparsity. In this talk, I will describe the code generation strategies employed by our Fixed Size Sparse Matrix-Dense Matrix (FSSpMDM) routine in libxsmm and show how these result in performant operator kernels for prismatic and hexahedral elements. Strategies will be described for both x86-64 (AVX2/AVX-512) and AARCH64 (NEON/SVE) instruction sets. Results will be presented on recent Intel and Apple CPUs and compared against the well-known GiMMiK C code generation library.", "title": "June 20, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#frank-giraldo-naval-postgraduate-school", "text": "", "title": "Frank Giraldo (Naval Postgraduate School)"}, {"location": "videos/#using-high-order-element-based-galerkin-methods-to-capture-hurricane-intensification", "text": "", "title": "Using High-Order Element-Based Galerkin Methods to Capture Hurricane Intensification"}, {"location": "videos/#may-16-2023-femllnl-seminar-series", "text": "Properly capturing hurricane rapid intensification (where the winds increase by 30 knots in the first 24 hours) remains challenging for atmospheric models. The reason is that we need LES-type scales \ud835\udcaa(100m), which are still elusive due to computational cost. In this talk, I describe the work that we are doing in this area and how element-based Galerkin Methods are being used to approximate spatial derivatives. I will also discuss the time-integration strategy that we are exploring for this class of problems. In particular, we are exploring process Multirate methods whereby each process in a system of nonlinear partial differential equations (PDEs) uses a time-integrator and time-step commensurate with the wave-speed of that process. We have constructed Multirate methods of any order using extrapolation methods. Along this same idea, we have also developed a multi-modeling framework (MMF) designed to replace the physical parameterizations used in weather/climate models. Our approach is to view the coarse-scale and fine-scale models through the lens of Variational Multi-Scale (VMS) methods in order to give MMF a more rigorous mathematical foundation. Our end goal is to use MMF in order to better resolve the inner core of hurricanes. In addition, I will show some results using flux differencing discontinuous Galerkin Methods for constructing both Kinetic Energy Preserving and Entropy Stable methods and discuss why we need scalable models in order to achieve our goals. Our model, NUMA, is a 3D nonhydrostatic atmospheric model that runs on large CPU clusters and on GPUs.", "title": "May 16, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#leszek-f-demkowicz-university-of-texas-at-austin", "text": "", "title": "Leszek F. Demkowicz (University of Texas at Austin)"}, {"location": "videos/#full-envelope-dpg-approximation-for-electromagnetic-waveguides-stability-and-convergence-analysis", "text": "", "title": "Full Envelope DPG Approximation for Electromagnetic Waveguides. Stability and Convergence Analysis"}, {"location": "videos/#april-25-2023-femllnl-seminar-series", "text": "The presented work started with a convergence and stability analysis for the so-called full envelope approximation used in analyzing optical amplifiers (lasers). The specific problem of interest was the simulation of Transverse Mode Instabilities (TMI).
The problem translates into the solution of a system of two nonlinear time-harmonic Maxwell equations coupled with a transient heat equation. Simulation of a 1 m long fiber involves the resolution of 10 M wavelengths. A superefficient MPI/openMP hp FE code run on a supercomputer gets you to the range of ten thousand wavelengths. The resolution of the additional thousand wavelengths is done using an exponential ansatz e^{ikz} in the z-coordinate. This results in a non-standard Maxwell problem. The stability and convergence analysis for the problem has been restricted to the linear case only. It turns out that the modified Maxwell problem is stable if and only if the original waveguide problem is stable and the boundedness below stability constants are identical. We have converged to the task of determining the boundedness below constant. The stability analysis started with an easier, acoustic waveguide problem. Separation of variables leads to an eigenproblem for a self-adjoint operator in the transverse plane (in x,y). Expansion of the solution in terms of the corresponding eigenvectors leads then to a decoupled system of ODEs, and a stability analysis for a two-point BVP for an ODE parametrized with the corresponding eigenvalues. The L^2-orthogonality of the eigenmodes and the stability result for a single mode, lead then to the final result: the inverse boundedness below constant depends inversely linearly upon the length L of the waveguide. The corresponding stability for the Maxwell waveguide turned out to be unexpectedly difficult. The equation is vector-valued so a direct separation of variables is out to begin with. An exponential ansatz in z leads to a non-standard eigenproblem involving an operator that is non-self adjoint even for the easiest, homogeneous case. The answer to the problem came from a tricky analysis of the eigenproblem combined with the perturbation technique for perturbed self-adjoint operators. The use of perturbation theory requires an assumption on the smallness of perturbation of the dielectric constant (around a constant value) but with no additional assumptions on its differentiability (discontinuities are allowed). In the end, the final result is similar to that for the acoustic waveguide - the boundedness below constant depends inversely linearly on L.", "title": "April 25, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#joachim-schoberl-vienna-university-of-technology", "text": "", "title": "Joachim Sch\u00f6berl (Vienna University of Technology)"}, {"location": "videos/#the-netgenngsolve-finite-element-software", "text": "", "title": "The Netgen/NGSolve Finite Element Software"}, {"location": "videos/#march-28-2023-femllnl-seminar-series", "text": "In this talk we give an overview of the open source finite element software Netgen/NGSolve, available from www.ngsolve.org . We show how to setup various physical models using FEniCS-like Python scripting. We discuss how we use NGSolve for teaching finite element methods, and how recent research projects have contributed to the further development of the NGSolve software. Some recent highlights are matrix-valued finite elements with applications in elasticity, fluid dynamics, and numerical relativity. 
We show how the recently extended framework of linear operators allows the utilization of GPUs for linear solvers, as well as time-dependent problems.", "title": "March 28, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#vikram-gavini-university-of-michigan", "text": "", "title": "Vikram Gavini (University of Michigan)"}, {"location": "videos/#fast-accurate-and-large-scale-ab-initio-calculations-for-materials-modeling", "text": "", "title": "Fast, Accurate and Large-scale Ab-initio Calculations for Materials Modeling"}, {"location": "videos/#march-7-2023-femllnl-seminar-series", "text": "Electronic structure calculations, especially those using density functional theory (DFT), have been very useful in understanding and predicting a wide range of materials properties. The importance of DFT calculations to engineering and physical sciences is evident from the fact that ~20% of computational resources on some of the world's largest public supercomputers are devoted to DFT calculations. Despite the wide adoption of DFT, the state-of-the-art implementations of DFT suffer from cell-size and geometry limitations, with the widely used codes in solid state physics being limited to periodic geometries and typical simulation domains containing a few hundred atoms. This talk will present our recent advances towards the development of computational methods and numerical algorithms for conducting fast and accurate large-scale DFT calculations using adaptive finite-element discretization, which form the basis for the recently released DFT-FE open-source code . Details of the implementation, including mixed precision algorithms and asynchronous computing, will be presented. The computational efficiency, scalability and performance of DFT-FE will be presented, which demonstrates a significant outperformance of widely used plane-wave DFT codes.", "title": "March 7, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#stefan-henneking-university-of-texas-at-austin", "text": "", "title": "Stefan Henneking (University of Texas at Austin)"}, {"location": "videos/#bayesian-inversion-of-an-acoustic-gravity-model-for-predictive-tsunami-simulation", "text": "", "title": "Bayesian Inversion of an Acoustic-Gravity Model for Predictive Tsunami Simulation"}, {"location": "videos/#january-10-2023-femllnl-seminar-series", "text": "To improve tsunami preparedness, early-alert systems and real-time monitoring are essential. We use a novel approach for predictive tsunami modeling within the Bayesian inversion framework. This effort focuses on informing the immediate response to an occurring tsunami event using near-field data observation. Our forward model is based on a coupled acoustic-gravity model (e.g., Lotto and Dunham, Comput Geosci (2015) 19:327\u2014340). Similar to other tsunami models, our forward model relies on transient boundary data describing the location and magnitude of the seafloor deformation. In a real-time scenario, these parameter fields must be inferred from a variety of measurements, including observations from pressure gauges mounted on the seafloor. One particular difficulty of this inference problem lies in the accurate inversion from sparse pressure data recorded in the near-field where strong hydroacoustic waves propagate in the compressible ocean; these acoustic waves complicate the task of estimating the hydrostatic pressure changes related to the forming surface gravity wave. Our space-time model is discretized with finite elements in space and finite differences in time. 
The forward model incurs a high computational complexity, since the pressure waves must be resolved in the 3D compressible ocean over a sufficiently long time span. Due to the infeasibility of rapidly solving the corresponding inverse problem for the fully discretized space-time operator, we discuss approaches for using compact representations of the parameter-to-observable map.", "title": "January 10, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#lin-mu-university-of-georgia", "text": "", "title": "Lin Mu (University of Georgia)"}, {"location": "videos/#an-efficient-and-effective-fem-solver-for-diffusion-equation-with-strong-anisotropy", "text": "", "title": "An Efficient and Effective FEM Solver for Diffusion Equation with Strong Anisotropy"}, {"location": "videos/#december-13-2022-femllnl-seminar-series", "text": "The diffusion equation with strong anisotropy has broad applications. In this project, we discuss the numerical solution of diffusion equations with strong anisotropy on meshes not aligned with the anisotropic vector field, focusing on applications to magnetic confinement fusion. In order to resolve the numerical pollution for simulations on a non-anisotropy-aligned mesh and reduce the associated high computational cost, we developed a high-order discontinuous Galerkin scheme with an efficient preconditioner. The auxiliary space preconditioning framework is designed by employing a continuous finite element space as the auxiliary space for the discontinuous finite element space. An effective line smoother that can mitigate the high-frequency error perpendicular to the magnetic field has been designed using a graph-based approach that picks lines approximately perpendicular to the vector field when the mesh does not align with the anisotropy. Numerical experiments for several benchmark problems are presented to validate the effectiveness and robustness.", "title": "December 13, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#garth-wells-university-of-cambridge", "text": "", "title": "Garth Wells (University of Cambridge)"}, {"location": "videos/#fenicsx-design-of-the-next-generation-fenics-libraries-for-finite-element-methods", "text": "", "title": "FEniCSx: design of the next generation FEniCS libraries for finite element methods"}, {"location": "videos/#november-8-2022-femllnl-seminar-series", "text": "The FEniCS Project provides libraries for solving partial differential equations using the finite element method. An aim of the FEniCS Project has been to provide high-performance solver environments that closely mirror mathematical syntax, with the hypothesis that high-level representations mean that solvers are faster to write, easier to debug, and can deliver faster runtime performance than is reasonably possible by hand. Using domain-specific languages and code generation techniques, arguably the FEniCS libraries delivered on these goals for a set of problems. However, over time limitations, including performance and extensibility, became clear, and maintainability/sustainability became an issue. Building on experiences from the FEniCS libraries, I will present and discuss the design of the next generation of tools, FEniCSx. The new design retains strengths of the past approach, and addresses limitations using new designs and new tools. Solvers can be written in C++ or Python, and a number of design changes allow the creation of flexible, fast solvers in Python.
In the second part of my presentation, I will discuss high-performance finite element kernels (limited to CPUs on this occasion), motivated by the Center for Efficient Exascale Discretizations 'bake-off' problems, which would not have been possible in the original FEniCS libraries. Double, single and half-precision kernels are considered, and results include (i) the observation that kernels with vector intrinsics can be slower than auto-vectorised kernels for common cases, and (ii) a cache-aware performance model which is remarkably accurate in predicting performance across architectures.", "title": "November 8, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#dennis-ogiermann-university-of-bochum", "text": "", "title": "Dennis Ogiermann (University of Bochum)"}, {"location": "videos/#computing-meets-cardiology-making-heart-simulations-fast-and-accurate", "text": "", "title": "Computing Meets Cardiology: Making Heart Simulations Fast and Accurate"}, {"location": "videos/#september-13-2022-femllnl-seminar-series", "text": "Heart diseases are a ubiquitous societal burden responsible for a majority of deaths worldwide. A central problem in developing effective treatments for heart diseases is the inherent complexity of the heart as an organ. From a modeling perspective, the heart can be interpreted as a biological pump involving multiple physical fields, namely fluid and solid mechanics, as well as chemistry and electricity, all interacting on different time scales. This multiphysics and multiscale aspect makes simulations inherently expensive, especially when approached with naive numerical techniques. However, computational models can be extraordinarily useful in helping us understand how the healthy heart functions and especially how malfunctions influence different diseases. In this context, information about possible weaknesses of therapies can also be obtained to ultimately improve clinical treatment and decision support. In this talk, we will focus primarily on two important model classes in computational cardiology and their respective efficient numerical treatment without significantly compromising accuracy. The first class is the problem of computing electrocardiograms (ECG) from electrical heart simulations. Since ECG measurements can give insights into a wide range of heart diseases, they offer suitable data to validate our electrophysiological models and verify our numerical schemes at the organ scale. Known numerical issues, arising in the context of electrophysiological models, will be reviewed. The second class addresses bidirectionally coupled electromechanical models and their efficient numerical treatment. The focus will be on a unified space-time adaptive operator splitting framework developed on top of MFEM, which has so far proven highly efficient for the investigated model classes while still preserving high accuracy.", "title": "September 13, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#ricardo-vinuesa-kth", "text": "", "title": "Ricardo Vinuesa (KTH)"}, {"location": "videos/#modeling-and-controlling-turbulent-flows-through-deep-learning", "text": "", "title": "Modeling and Controlling Turbulent Flows through Deep Learning"}, {"location": "videos/#august-23-2022-femllnl-seminar-series", "text": "The advent of new powerful deep neural networks (DNNs) has fostered their application in a wide range of research areas, including more recently in fluid mechanics.
In this presentation, we will cover some of the fundamentals of deep learning applied to computational fluid dynamics (CFD). Furthermore, we explore the capabilities of DNNs to perform various predictions in turbulent flows: we will use convolutional neural networks (CNNs) for non-intrusive sensing, i.e. to predict the flow in a turbulent open channel based on quantities measured at the wall. We show that it is possible to obtain very good flow predictions, outperforming traditional linear models, and we showcase the potential of transfer learning between friction Reynolds numbers of 180 and 550. We also discuss other modelling methods based on autoencoders (AEs) and generative adversarial networks (GANs), and we present results of deep-reinforcement-learning-based flow control.", "title": "August 23, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#jeffrey-banks-rpi", "text": "", "title": "Jeffrey Banks (RPI)"}, {"location": "videos/#efficient-techniques-for-fluid-structure-interaction-compatibility-coupling-and-galerkin-differences", "text": "", "title": "Efficient Techniques for Fluid Structure Interaction: Compatibility Coupling and Galerkin Differences"}, {"location": "videos/#july-26-2022-femllnl-seminar-series", "text": "Predictive simulation increasingly involves the dynamics of complex systems with multiple interacting physical processes. In designing simulation tools for these problems, both the formulation of individual constituent solvers, as well as coupling of such solvers into a cohesive simulation tool must be addressed. In this talk, I discuss both of these aspects in the context of fluid-structure interaction, where we have recently developed a new class of stable and accurate partitioned solvers that overcome added-mass instability through the use of so-called compatibility boundary conditions. Here I will present partitioned coupling strategies for incompressible FSI. One interesting aspect of CBC-based coupling is the occurrence of nonstandard and/or high-derivative operators, which can make adoption of the techniques challenging, e.g. in the context of FEM methods. To address this, I will also discuss our newly developed Galerkin Difference approximations, which may provide a natural pathway for CBCs in an FEM context. Although GD is fundamentally a finite element approximation based on a Galerkin projection, the underlying GD space is nonstandard and is derived using profitable ideas from the finite difference literature. The resulting schemes possess remarkable properties including nodal superconvergence and the ability to use large CFL-one time steps. I will also present preliminary results for GD discretizations on unstructured grids using MFEM.", "title": "July 26, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#paul-fischer-uiucanl", "text": "", "title": "Paul Fischer (UIUC/ANL)"}, {"location": "videos/#outlook-for-exascale-fluid-dynamics-simulations", "text": "", "title": "Outlook for Exascale Fluid Dynamics Simulations"}, {"location": "videos/#june-21-2022-femllnl-seminar-series", "text": "We consider design, development, and use of simulation software for exascale computing, with a particular emphasis on fluid dynamics simulation. Our perspective is through the lens of the high-order code Nek5000/RS, which has been developed under DOE's Center for Efficient Exascale Discretizations (CEED). 
Nek5000/RS is an open source thermal fluids simulation code with a long development history on leadership computing platforms\u2014it was the first commercial software on distributed memory platforms and a Gordon Bell Prize winner on Intel's ASCI Red. There are a myriad of objectives that drive software design choices in HPC, such as scalability, low memory usage, portability, and maintainability. Throughout, our design objective has been to address the needs of the user, including facilitating data analysis and ensuring flexibility with respect to platform and number of processors that can be used. When running on large-scale HPC platforms, three of the most common user questions are: How long will my job take? How many nodes will be required? Is there anything I can do to make my job run faster? Additionally, one might have concerns about storage, post-processing (Will I be able to analyze the results? Where?), and queue times. This talk will seek to answer several of these questions over a broad range of fluid-thermal problems from the perspective of a Nek5000/RS user. We specifically address performance with data for NekRS on several of the DOE's pre-exascale architectures, which feature AMD MI250X or NVIDIA V100 or A100 GPUs.", "title": "June 21, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#mike-puso-llnl", "text": "", "title": "Mike Puso (LLNL)"}, {"location": "videos/#topics-in-immersed-boundary-and-contact-methods-current-llnl-projects-and-research", "text": "", "title": "Topics in Immersed Boundary and Contact Methods: Current LLNL Projects and Research"}, {"location": "videos/#may-24-2022-femllnl-seminar-series", "text": "Many of the most interesting phenomena in solid mechanics occur at material interfaces. This can be in the form of fluid structure interaction, cracks, material discontinuities, impact, etc. Solutions to these problems often require some form of immersed/embedded boundary method or contact, or a combination of both. This talk will provide a brief overview of different lab efforts in these areas and present some of the current research aspects and results from LLNL production codes. Technically speaking, the methods discussed here all require Lagrange multipliers to satisfy the constraints on the interface of overlapping or dissimilar meshes, which complicates the solution. Stability and consistency of Lagrange multiplier approaches can be hard to achieve both in space and time. For example, the wrong choice of multiplier space will either over-constrain the problem or cause oscillations at the material interfaces for simple statics problems. For dynamics, many of the basic time integration schemes such as Newmark's method are known to be unstable due to gaps opening and closing. Here we introduce some (non-Nitsche) stabilized multiplier spaces for immersed boundary and contact problems and a structure-preserving time integration scheme for long-time dynamic contact problems.
Finally, I will describe some on-going efforts extending this work.", "title": "May 24, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#robert-chiodi-uiuc", "text": "", "title": "Robert Chiodi (UIUC)"}, {"location": "videos/#chyps-an-mfem-based-material-response-solver-for-hypersonic-thermal-protection-systems", "text": "", "title": "CHyPS: An MFEM-Based Material Response Solver for Hypersonic Thermal Protection Systems"}, {"location": "videos/#april-16-2022-femllnl-seminar-series", "text": "The University of Illinois at Urbana-Champaign\u2019s Center for Hypersonics and Entry Systems Studies has developed a material response solver, named CHyPS, to predict the behavior of thermal protection systems for hypersonic flight. CHyPS uses MFEM to provide the underlying discontinuous Galerkin spatial discretization and linear solvers used to solve the equations. In this talk, we will briefly present the physics and corresponding equations governing material response in hypersonic environments. We will also include a discussion on the implementation of a direct Arbitrary Lagrangian-Eulerian approach to handle mesh movement resulting from the ablation of the material surface. Results for standard community test cases developed at a series of Ablation Workshop meetings over the past decade will be presented and compared to other material response solvers. We will also show the potential of high-order solutions for simulating thermal protection system material response.", "title": "April 16, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#tamas-horvath-oakland-university", "text": "", "title": "Tamas Horvath (Oakland University)"}, {"location": "videos/#space-time-hybridizable-discontinuous-galerkin-with-mfem", "text": "", "title": "Space-Time Hybridizable Discontinuous Galerkin with MFEM"}, {"location": "videos/#march-29-2022-femllnl-seminar-series", "text": "Unsteady partial differential equations on deforming domains appear in many real-life scenarios, such as wind turbines, helicopter rotors, car wheels, free-surface flows, etc. We will focus on the space-time finite element method, which is an excellent approach to discretize problems on evolving domains. This method uses discontinuous Galerkin to discretize both in the spatial and temporal directions, allowing for an arbitrarily high-order approximation in space and time. Furthermore, this method automatically satisfies the geometric conservation law, which is essential for accurate solutions on time-dependent domains. The biggest criticism is that the application of space-time discretization increases the computational complexity significantly. To overcome this, we can use the high-order accurate Hybridizable or Embedded discontinuous Galerkin method. Numerical results will be presented to illustrate the applicability of the method for fluid flow around rigid bodies.", "title": "March 29, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#tobin-isaac-georgia-tech", "text": "", "title": "Tobin Isaac (Georgia Tech)"}, {"location": "videos/#unifying-the-analysis-of-geometric-decomposition-in-feec", "text": "", "title": "Unifying the Analysis of Geometric Decomposition in FEEC"}, {"location": "videos/#march-22-2022-femllnl-seminar-series", "text": "Two operations take function spaces and make them suitable for finite element computations. 
The first is the construction of trace-free subspaces (which creates \"bubble\" functions) and the second is the extension of functions from cell boundaries into cell interiors (which creates edge functions with the correct continuity): together these operations define the geometric decomposition of a function space. In finite element exterior calculus (FEEC), these two operations have been treated separately for the two main families of finite elements: full polynomial elements and trimmed polynomial elements. In this talk we will see how one constructor of trace-free functions and one extension operator can be used for both families, and indeed for all differential forms. We will also examine the practicality of these two operators as tools for implementing geometric decompositions in actual finite element codes.", "title": "March 22, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#raphael-zanella-ut-austin", "text": "", "title": "Rapha\u00ebl Zanella (UT Austin)"}, {"location": "videos/#axisymmetric-mfem-based-solvers-for-the-compressible-navier-stokes-equations-and-other-problems", "text": "", "title": "Axisymmetric MFEM-Based Solvers for the Compressible Navier-Stokes Equations and Other Problems"}, {"location": "videos/#march-1-2022-femllnl-seminar-series", "text": "An axisymmetric model leads, when suitable, to a substantial cut in the computational cost with respect to a 3D model. Although not as accurate, the axisymmetric model allows one to quickly obtain a result which can be satisfying. Simple modifications to a 2D finite element solver allow one to obtain an axisymmetric solver. We present MFEM-based parallel axisymmetric solvers for different problems. We first present simple axisymmetric solvers for the Laplacian problem and the heat equation. We then present an axisymmetric solver for the compressible Navier-Stokes equations. All solvers are based on H^1-conforming finite element spaces. The correctness of the implementation is verified with convergence tests on manufactured solutions. The Navier-Stokes solver is used to simulate axisymmetric flows with an analytical solution (Poiseuille and Taylor-Couette) and an air flow in a plasma torch geometry.", "title": "March 1, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#robert-carson-llnl", "text": "", "title": "Robert Carson (LLNL)"}, {"location": "videos/#an-overview-of-exaconstit-and-its-use-in-the-exaam-project", "text": "", "title": "An Overview of ExaConstit and Its Use in the ExaAM Project"}, {"location": "videos/#february-1-2022-femllnl-seminar-series", "text": "As additively manufactured (AM) parts become increasingly popular in industry, a growing need exists to help expedite the certification process for these parts. The ExaAM project seeks to help this process by producing a workflow to model the AM process from the melt pool all the way up to the part-scale response by leveraging multiple physics codes run on upcoming exascale computing platforms. As part of this workflow, ExaConstit is a next-generation quasi-static, solid mechanics FEM code built upon MFEM, used to connect local microstructures and local properties within the part-scale response. Within this talk, we will first provide an overview of ExaConstit, how we have ported it over to the GPU, and some performance numbers on a number of different platforms. 
Next, we will discuss how we have leveraged MFEM and the FLUX workflow to run hundreds of high-fidelity simulations on Summit in order to generate the local properties needed to drive the part-scale simulation in the ExaAM workflow. Finally, we will showcase a few other areas in which ExaConstit has been used.", "title": "February 1, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#guglielmo-scovazzi-duke", "text": "", "title": "Guglielmo Scovazzi (Duke)"}, {"location": "videos/#the-shifted-boundary-method-an-immersed-approach-for-computational-mechanics", "text": "", "title": "The Shifted Boundary Method: An Immersed Approach for Computational Mechanics"}, {"location": "videos/#january-20-2022-femllnl-seminar-series", "text": "Immersed/embedded/unfitted boundary methods obviate the need for continual re-meshing in many applications involving rapid prototyping and design. Unfortunately, many finite element embedded boundary methods are also difficult to implement due to the need to perform complex cell cutting operations at boundaries, and the consequences that these operations may have on the overall conditioning of the ensuing algebraic problems. We present a new, stable, and simple embedded boundary method, named the \u201cshifted boundary method\u201d (SBM), which eliminates the need to perform cell cutting. Boundary conditions are imposed on a surrogate discrete boundary, lying on the interior of the true boundary interface. We then construct appropriate field extension operators, by way of Taylor expansions, with the purpose of preserving accuracy when imposing the boundary conditions. We demonstrate the SBM on large-scale solid and fracture mechanics problems; thermomechanics problems; porous media flow problems; incompressible flow problems governed by the Navier-Stokes equations (also including free surfaces); and problems governed by hyperbolic conservation laws.", "title": "January 20, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#mfem-workshop-2023", "text": "", "title": "MFEM Workshop 2023"}, {"location": "videos/#aaron-fisher-llnl_1", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos/#welcome-and-overview_1", "text": "", "title": "Welcome and Overview"}, {"location": "videos/#october-26-2023-mfem-workshop-2023", "text": "Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community resources.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#tzanio-kolev-llnl_1", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos/#the-state-of-mfem_1", "text": "", "title": "The State of MFEM"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_1", "text": "MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities (including adaptive mesh refinement, GPU support, and FEM operator decomposition and partial assembly), examples, and mini-apps. 
Kolev also highlighted the growth of the global community as well as features included in the recent v4.5 software release.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#veselin-dobrev-llnl_1", "text": "", "title": "Veselin Dobrev (LLNL)"}, {"location": "videos/#recent-developments_1", "text": "", "title": "Recent Developments"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_2", "text": "Veselin Dobrev of LLNL detailed the project\u2019s recent developments including sub-mesh extraction, linear form assembly on GPUs, coefficient evaluation on GPUs, new mini-apps and examples, Windows 2022 CI testing on GitHub, and more. He also summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Extreme-scale Scientific Software Development Kit, SciDAC, and the FASTMath Institute.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#sebastian-grimberg-aws", "text": "", "title": "Sebastian Grimberg (AWS)"}, {"location": "videos/#palace-parallel-large-scale-computational-electromagnetics", "text": "", "title": "Palace: PArallel LArge-scale Computational Electromagnetics"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_3", "text": "Palace is a parallel finite element code for full-wave electromagnetics simulations based on the MFEM library. Palace is used at the AWS Center for Quantum Computing to perform large-scale 3D simulations of complex electromagnetics models and enable the design of quantum computing hardware. Grimberg provided an overview of the simulation capabilities of Palace as well as some recent developments for conforming and nonconforming adaptive mesh refinement, operator partial assembly, and GPU support.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#jacob-lotz-delft-university-of-technology", "text": "", "title": "Jacob Lotz (Delft University of Technology)"}, {"location": "videos/#computation-and-reduced-order-modelling-of-periodic-flows", "text": "", "title": "Computation and Reduced Order Modelling of Periodic Flows"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_4", "text": "Many types of periodic flows can be found in nature and industrial applications, and their computation is expensive due to lengthy time simulations. Lotz\u2019s work aims to reduce the cost of these computations. His team solves periodic flows in a space-time domain in which both ends in time are periodic such that they only have to model one period. MFEM is used to discretize the space-time domain and solve the discretized system of equations. Lotz applies a hyper-reduced Proper Orthogonal Decomposition Galerkin reduced order model to speed up the computations. During the presentation, he showed results of their full order model and their advances in reduced order modelling.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#boyan-lazarov-llnl", "text": "", "title": "Boyan Lazarov (LLNL)"}, {"location": "videos/#scalable-design-and-optimization-with-mfem", "text": "", "title": "Scalable Design and Optimization with MFEM"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_5", "text": "Lazarov discussed recently added and ongoing code development facilitating the solution of shape and topology optimization problems. Both topology and shape optimization are gradient-based iterative algorithms aiming to find a material distribution that minimizes an objective and fulfills a set of constraints. 
Every optimization step includes a solution to a forward problem, an evaluation of the objective and constraints, a solution to an adjoint problem associated with every objective or constraint, an evaluation of gradients, and an update of the design based on mathematical programming techniques. All these steps can be easily implemented and executed by using MFEM in a scalable manner, allowing the design and optimization of large-scale realistic industrial problems. Thus, the goal is to exemplify these features, highlight the techniques that simplify the implementation of new problems, and provide a glimpse into the future.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#student-lightning-talks", "text": "", "title": "Student Lightning Talks"}, {"location": "videos/#part-1", "text": "", "title": "Part 1"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_6", "text": "The following four students presented in this video: Shani Martinez Weissberg (Tel Aviv University): \u201c\u00b5FEA of a Rabbit Femur\u201d Paul Moujaes (TU-Dortmund): \u201cDissipation-Based Entropy Stabilization for Slope-Limited Discontinuous Galerkin Approximations of Hyperbolic Problems\u201d Alejandro Mu\u00f1oz (Universidad de Granada): \u201cDiscontinuous Galerkin in the Time Domain for Maxwell\u2019s Equations\u201d Bill Ellis (UK Atomic Energy Authority): \u201cComparing Thermo-Mechanical Solves in MOOSE and MFEM\u201d", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#student-lightning-talks_1", "text": "", "title": "Student Lightning Talks"}, {"location": "videos/#part-2", "text": "", "title": "Part 2"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_7", "text": "The following four students presented in this video: Alexander Mote (Oregon State University): \u201cA Neural Network Surrogate Model for Nonlocal Thermal Flux Calculations\u201d (LLNL-PRES-854134) Amit Rotem (Virginia Tech): \u201cGPU Acceleration of IPDG in MFEM\u201d Josiah Brown (Relogic Research): \u201cProject Minerva\u201d Mike Pozulp (UC Berkeley): \u201cAn Implicit Monte Carlo Acceleration Scheme\u201d", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#syunichi-shiraiwa-pppl", "text": "", "title": "Syun'ichi Shiraiwa (PPPL)"}, {"location": "videos/#radio-frequency-wave-simulation-in-hot-magnetized-plasma-using-differential-operator-for-non-local-conductivity-response", "text": "", "title": "Radio-Frequency Wave Simulation in Hot Magnetized Plasma using Differential Operator for Non-Local Conductivity Response"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_8", "text": "In high-temperature plasmas, the dielectric response to the RF fields is caused by freely moving charged particles, which naturally makes such a response non-local; correspondingly, the Maxwell wave problem becomes an integro-differential equation. A differential form of the dielectric operator, based on the small k\u22a5\u03c1 expansion, is widely used. However, it typically includes only up to second-order terms, and thus the use of such an operator is limited to waves that satisfy k\u22a5\u03c1 < 1. We propose an alternative approach to construct a dielectric operator, which includes all-order finite Larmor radius effects without explicitly containing higher order derivatives. We use a rational approximation of the plasma dielectric tensor in the wave number space, in order to yield a differential operator acting on the dielectric current (J). 
The 1D O-X-B mode-conversion of the electron Bernstein wave in the non-relativistic Maxwellian plasma was modeled using this approach. An agreement with analytic calculation and the conservation of wave energy carried by the Poynting flux and electron thermal motion (\u201csloshing\u201d) is found. The connection between our construction method and the superposition of Green\u2019s functions for these screened Poisson equations is presented. An approach to extend the operator in a multi-dimensional setting will also be discussed.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#tamas-horvath-oakland-university_1", "text": "", "title": "Tamas Horvath (Oakland University)"}, {"location": "videos/#implementation-of-hybridizable-discontinuous-galerkin-methods-via-the-hdg-branch", "text": "", "title": "Implementation of Hybridizable Discontinuous Galerkin Methods via the HDG Branch"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_9", "text": "Horvath presented the HDG branch, which was initially developed for HDG discretizations of advection-diffusion problems. Recent updates have made the branch highly adaptable for various applications, allowing a flexible implementation of HDG for many different PDEs. He showcased these enhancements and provided insights into their versatile usage across different problems.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#yohann-dudouit-llnl_1", "text": "", "title": "Yohann Dudouit (LLNL)"}, {"location": "videos/#empowering-mfem-using-libceed", "text": "", "title": "Empowering MFEM Using libCEED"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_10", "text": "Dudouit began with an overview of the features introduced to MFEM through the integration of libCEED. He emphasized capabilities that are distinct from native MFEM functionalities, marking an enhancement in the software\u2019s suite of tools, such as support for simplices, handling of mixed meshes, and support for p-adaptivity. The presentation concluded by showcasing benchmarks for various problems executed on different HPC architectures, illustrating the performance gains and efficiencies achieved through the libCEED integration.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#zhang-chunyu-sun-yat-sen-university", "text": "", "title": "Zhang Chunyu (Sun Yat-Sen University)"}, {"location": "videos/#homogenized-energy-theory-for-solution-of-elasticity-problems-with-consideration-of-higher-order-microscopic-deformations", "text": "", "title": "Homogenized Energy Theory for Solution of Elasticity Problems with Consideration of Higher-Order Microscopic Deformations"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_11", "text": "Classical continuum mechanics faces difficulties in solving problems involving highly inhomogeneous deformations. The proposed theory investigates the impact of high-order microscopic deformation on modeling of material behaviors and provides a refined interpretation of strain gradients through the averaged strain energy density. Only one scale parameter, i.e., the size of the Representative Volume Element (RVE), is required by the proposed theory. By employing the variational approach and the Augmented Lagrangian Method (ALM), the governing equations for deformation as well as the numerical solution procedure are derived. 
It is demonstrated that the homogenized energy theory offers plausible explanations and reasonable predictions for problems yet unsolved by the classical theory, such as the size effect of deformation and the stress singularity at the crack tip. The concept of averaged strain energy proves to be more suitable for describing the intricate mechanical behavior of materials. High-order partial differential equations can be solved effectively with the ALM by introducing supplementary variables to lower the highest order of the equations.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#eric-chin-llnl", "text": "", "title": "Eric Chin (LLNL)"}, {"location": "videos/#contact-constraint-enforcement-using-the-tribol-interface-physics-library", "text": "", "title": "Contact Constraint Enforcement Using the Tribol Interface Physics Library"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_12", "text": "Chin discussed recent additions to the Tribol interface physics library to simplify MPI parallel contact constraint enforcement in large deformation, implicit and explicit continuum solid mechanics simulations using MFEM. Tribol is an open-source software package available on GitHub and includes tools for contact detection, state-of-the-art Lagrangian contact methods such as common plane and mortar, and various enforcement techniques such as penalty and Lagrange multiplier. Additionally, Tribol recently added a domain redecomposer for coalescing proximal contact pairs on a single rank. Tribol\u2019s features are designed to interact seamlessly with MFEM and other codes that use MFEM, with native support for MFEM data structures such as ParMesh, ParGridFunction, and HypreParMatrix. Chin highlighted the simplicity of adding Tribol features to an MFEM-based code by looking at integration with Serac , an open-source implicit nonlinear thermal-structural simulation code.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#milan-holec-llnl", "text": "", "title": "Milan Holec (LLNL)"}, {"location": "videos/#deterministic-transport-mfem-miniapp", "text": "", "title": "Deterministic Transport MFEM-Miniapp"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_13", "text": "Holec introduced a new multidimensional discretization in MFEM enabling efficient high-order phase-space simulations of various types of Boltzmann transport. In terms of a generalized form of the standard discrete ordinate SN method for the phase-space, his team carefully designs discrete analogs obeying important continuous properties such as conservation of energy, preservation of positivity, preservation of the diffusion limit of transport, preservation of symmetry leading to ray-effect mitigation, and other laws of physics. 
Finally, Holec showed how to apply this new phase-space MFEM feature to increase the fidelity of modeling of fusion energy experiments.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#aaron-fisher-llnl_2", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos/#wrap-up-and-visualization-contest-winners", "text": "", "title": "Wrap-Up and Visualization Contest Winners"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_14", "text": "The workshop concluded with the announcement of winners of the simulation and visualization contest: (1) displacement distribution of a loaded excavator arm under static equilibrium, rendered by Mehran Ebrahimi from Autodesk Research; and (2) leapfrogging vortex rings based on an MFEM incompressible Schr\u00f6dinger fluid solver, rendered by John Camier from LLNL. Contest winners are featured in the gallery .", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#conferences-in-2023", "text": "", "title": "Conferences in 2023"}, {"location": "videos/#tzanio-kolev-llnl_2", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos/#pde-simulations-on-unstructured-grids-with-finite-element-discretizations", "text": "", "title": "PDE Simulations on Unstructured Grids with Finite Element Discretizations"}, {"location": "videos/#march-15-2023-ipam-at-ucla", "text": "LLNL computational mathematician Tzanio Kolev presented an overview of MFEM as part of the long program on New Mathematics for the Exascale: Applications to Materials Science at the Institute for Pure and Applied Mathematics.", "title": "March 15, 2023 | IPAM at UCLA"}, {"location": "videos/#mfem-workshop-2022", "text": "", "title": "MFEM Workshop 2022"}, {"location": "videos/#aaron-fisher-llnl_3", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos/#welcome-and-overview_2", "text": "", "title": "Welcome and Overview"}, {"location": "videos/#october-25-2022-mfem-workshop-2022", "text": "Held on October 25, 2022, the second annual MFEM community workshop brought together users and developers for a review of software features and the development roadmap, a showcase of technical talks and applications, an interactive Q&A session, and a visualization contest. Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community survey results.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#tzanio-kolev-llnl_3", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos/#the-state-of-mfem_2", "text": "", "title": "The State of MFEM"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_1", "text": "MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities (including adaptive mesh refinement, GPU support, and FEM operator decomposition and partial assembly), examples, and mini-apps. 
Kolev also highlighted the growth of the global community as well as features included in the recent v4.5 software release.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#veselin-dobrev-llnl_2", "text": "", "title": "Veselin Dobrev (LLNL)"}, {"location": "videos/#recent-developments-in-mfem", "text": "", "title": "Recent Developments in MFEM"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_2", "text": "Veselin Dobrev of LLNL detailed the project\u2019s recent developments including sub-mesh extraction, linear form assembly on GPUs, coefficient evaluation on GPUs, new mini-apps and examples, Windows 2022 CI testing on GitHub, and more. He also summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Extreme-scale Scientific Software Development Kit, SciDAC, and the FASTMath Institute.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#ben-zwick-university-of-western-australia", "text": "", "title": "Ben Zwick (University of Western Australia)"}, {"location": "videos/#solution-of-the-electroencephalography-eeg-forward-problem", "text": "", "title": "Solution of the Electroencephalography (EEG) Forward Problem"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_3", "text": "Ben Zwick of the University of Western Australia presented \"Solution of the Electroencephalography (EEG) Forward Problem.\" The brain's electrical activity can be measured using EEG with electrodes attached to the scalp, or electrocorticography (ECoG), also known as intracranial EEG (iEEG), with electrodes implanted on the brain's surface. EEG source localization combines measurements from EEG or iEEG with data from medical imaging to estimate the location and strengths of the current sources that generated the measured electric potential at the electrodes. Source localization can be used to locate the epileptic zone in pharmaco-resistant focal epilepsies and study evoked related potentials. Accurate source localization requires fast and accurate solutions of the EEG forward problem, which involves calculating the electric potential within the brain volume given a predefined source. This presentation demonstrates how MFEM can be used to solve the EEG forward problem using patient-specific geometry and tissue conductivity obtained from medical images.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#carlos-brito-pacheco-universite-grenoble-alpes", "text": "", "title": "Carlos Brito Pacheco (Universit\u00e9 Grenoble Alpes)"}, {"location": "videos/#rodin-density-and-topology-optimization-framework", "text": "", "title": "Rodin: Density and Topology Optimization Framework"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_4", "text": "Carlos Brito Pacheco of Universit\u00e9 Grenoble Alpes presented \"Rodin: Lightweight and Modern C++17 Shape, Density and Topology Optimization Framework.\" He introduced the shape optimization library Rodin; a lightweight and modular shape optimization framework which provides many of the associated functionalities that are needed when implementing shape and topology optimization algorithms. These functionalities range from refining and remeshing the underlying shape, to providing elegant mechanisms to specify and solve variational problems. 
Learn more about Rodin on GitHub .", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#tobias-duswald-cerntum", "text": "", "title": "Tobias Duswald (CERN/TUM)"}, {"location": "videos/#stochastic-fractional-pdes-random-field-generation-topology-optimization", "text": "", "title": "Stochastic Fractional PDEs: Random Field Generation & Topology Optimization"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_5", "text": "Tobias Duswald of CERN/Technical University of Munich presented \"Stochastic Fractional PDEs with MFEM with Applications to Random Field Generation and Topology Optimization.\" Over the last several centuries, engineers, physicists, and mathematicians have learned how to describe their problems accurately with partial differential equations (PDEs). PDEs govern the laws of continuum mechanics, quantum mechanics, heat transfer, and many other phenomena. More recently, fractional PDEs have gained popularity in the scientific community because they allow for a more general description of complicated systems (e.g., multiphysics) by leveraging a real-valued exponent for the operators. Besides fractional operators, stochastic PDEs have also sparked the community's interest because they generalize the PDE framework to account for randomness appearing in many disciplines. This talk addresses the numerical solution of stochastic, fractional PDEs with MFEM. To deal with these two flavors of PDEs, Duswald introduced MFEM\u2019s WhiteNoiseIntegrator to treat a stochastic linear form and adopt a rational approximation for the fractional operator. He presented results for three different use cases. First, he showed numerical results for the fractional Laplace problem with homogeneous Dirichlet boundary conditions. Second, he generated Mat\u00e9rn-type Gaussian random fields (GRFs) by solving a specific stochastic, fractional PDE using an approach commonly referred to as SPDE method in the spatial statistics literature. Thirdly, he used GRFs to model geometric uncertainties in additive manufacturing processes and apply the model for topology optimization under uncertainty.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#alvaro-sanchez-villar-princeton-plasma-physics-laboratory", "text": "", "title": "Alvaro S\u00e1nchez Villar (Princeton Plasma Physics Laboratory)"}, {"location": "videos/#mfem-application-to-em-wave-simulation-in-ecr-space-plasma-thrusters", "text": "", "title": "MFEM Application to EM-Wave Simulation in ECR Space Plasma Thrusters"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_6", "text": "Alvaro S\u00e1nchez Villar of the Princeton Plasma Physics Laboratory presented \"MFEM Application to EM-Wave Simulation in ECR Space Plasma Thrusters.\" The solution of Maxwell equations using the cold-plasma approximation is shown in the context of the design of electron cyclotron resonance plasma thrusters for space propulsion applications. This thruster class utilizes the electron cyclotron resonance to energize the plasma constituents and to sustain the plasma discharge. MFEM finite element discretization is used to solve for the time-harmonic electromagnetic waves. The shape and magnitude of the electromagnetic power density absorbed by the plasma is coupled to the plasma transport variables, and therefore determines the thruster operation performance parameters. 
Coupled simulations of the electromagnetic-wave and the plasma transport problems are used to interpret thruster operational principles, to understand the thruster's sensitivity to operational and design parameters, and to compare against experimental measurements, both to assess the accuracy of the current numerical model and to highlight its main limitations.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#brian-young_1", "text": "", "title": "Brian Young"}, {"location": "videos/#openparem2d-a-2d-simulator-for-guided-waves", "text": "", "title": "OpenParEM2D: A 2D Simulator for Guided Waves"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_7", "text": "Independent software developer Brian Young presented \"OpenParEM2D: A Free, Open-Source Electromagnetic Simulator for 2D Waveguides and Transmission Lines.\" An overview is provided on a 2D electromagnetic simulator for guided waves called OpenParEM2D. It is an open-source and free project licensed under GPLv3 or later and released at its website . Capabilities and methodology are presented.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#christina-migliore-mit", "text": "", "title": "Christina Migliore (MIT)"}, {"location": "videos/#the-development-of-the-em-rf-edge-interactions-mini-app-stix-using-mfem", "text": "", "title": "The Development of the EM RF-Edge Interactions Mini-app \u201cStix\u201d Using MFEM"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_8", "text": "Christina Migliore of MIT presented \"The Development of the EM RF-Edge Interactions Mini-App Stix Using MFEM.\" Ion cyclotron radio frequency range (ICRF) power plays an important role in heating and current drive in fusion devices. However, experiments show that in the ICRF regime there is a formation of a radio frequency (RF) sheath at the material and antenna boundaries that influences sputtering and power dissipation. Given the size of the sheath relative to the scale of the device, it can be approximated as a boundary condition (BC). Electromagnetic field solvers in the ICRF regime typically treat material boundaries as perfectly conducting, thus ignoring the effect of the RF sheath. Here, progress is described on implementing a model for the RF sheath based on a finite impedance sheath BC formulated by J. Myra and D. A. D\u2019Ippolito, Physics of Plasmas 22 (2015), which provides a representation of the RF rectified sheath including capacitive and resistive effects. This talk will discuss the results from the development of a parallelized cold-plasma wave equation solver, Stix, that implements this non-linear sheath impedance BC through the method of finite elements in pseudo-1D and pseudo-2D using the MFEM library.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#will-pazner-portland-state-university", "text": "", "title": "Will Pazner (Portland State University)"}, {"location": "videos/#high-order-solvers-gpu-acceleration", "text": "", "title": "High-Order Solvers + GPU Acceleration"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_9", "text": "Will Pazner of Portland State University presented \"High-Order Solvers + GPU Acceleration.\" He discussed the benefits of high-order (HO) methods in modeling under-resolved physics and on modern computing architectures, acknowledging that solving HO finite element problems remains challenging. 
His talk included details about how MFEM supports matrix-free solvers for HO methods, HO operator setup and application, low-order-refined (LOR) preconditioning and matrix assembly, LOR assembly throughput on GPUs (including CPU and GPU comparisons and parallel scalability), and LOR adaptive mesh refinement preconditioning.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#jorge-luis-barrera-llnl", "text": "", "title": "Jorge-Luis Barrera (LLNL)"}, {"location": "videos/#shape-and-topology-optimization-powered-by-mfem", "text": "", "title": "Shape and Topology Optimization Powered by MFEM"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_10", "text": "Jorge-Luis Barrera of LLNL presented \"Shape and Topology Optimization Powered by MFEM.\" He discussed the Livermore Design Optimization (LiDO) code, which solves optimization problems for a wide range of Lab-relevant engineering applications. Leveraging MFEM and the LLNL-developed engineering simulation code Serac, LiDO delivers a powerful suite of design tools that run on HPC systems. The talk highlighted several design examples that benefit from LiDO\u2019s integration with MFEM, including multi-material geometries, octet truss lattices, and a concrete dam under stress. LiDO\u2019s graph architecture that seamlessly integrates MFEM features ensures robust topology optimization, as well as shape optimization using nodal coordinates and level set fields as optimization variables.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#siu-wun-cheung-llnl", "text": "", "title": "Siu Wun Cheung (LLNL)"}, {"location": "videos/#reduced-order-modeling-for-fe-simulations-with-mfem-librom", "text": "", "title": "Reduced Order Modeling for FE Simulations with MFEM & libROM"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_11", "text": "Siu Wun Cheung of LLNL presented \"Reduced Order Modeling for Finite Element Simulations Through the Partnership of MFEM and libROM.\" MFEM provides a wide variety of mesh types and high-order finite element discretizations. However, subject to the model complexity and fine resolution of the discretization, the computational cost can be high, requiring a long time to complete a single forward simulation. In this talk, we will introduce various reduced order modeling techniques, which aim to lower the computational complexity and maintain good accuracy, including intrusive projection-based model reduction and non-intrusive approaches. 
We will demonstrate the use of reduced order modeling techniques in libROM (www.librom.net), which can be applied to various MFEM examples, including the Poisson problem, linear elasticity, linear advection, mixed nonlinear diffusion, nonlinear elasticity, nonlinear heat conduction, the Euler equations, and optimal control problems.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#devlin-hayduke-relogic-research", "text": "", "title": "Devlin Hayduke (ReLogic Research)"}, {"location": "videos/#accelerated-deployment-of-mfem-based-solvers-in-large-scale-industrial-problems", "text": "", "title": "Accelerated Deployment of MFEM Based Solvers in Large Scale Industrial Problems"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_12", "text": "Devlin Hayduke of ReLogic Research presented \"Project Minerva: Accelerated Deployment of MFEM Based Solvers in Large Scale Industrial Problems.\" While many Advanced Scientific Computing Research (ASCR) supported software packages are open source, they are often complicated to use, distributed primarily in source-code form targeting HPC systems, and potential adopters lack options for purchasing commercial support, training, and custom-development services. In response to this need, ReLogic Research, Inc., in collaboration with LLNL, is developing a secure, cloud-deployable platform, termed Minerva, based on the MFEM software. Minerva will feature an integration layer allowing users of commercially available finite element pre/post-processing software for the Abaqus solver (e.g., Abaqus/CAE, Hypermesh, Femap) to run simulation studies with the MFEM discretization library, and will further strengthen MFEM-implemented solvers to make them applicable for solving large-scale industrial design and optimization problems.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#synthetik-applied-technologies", "text": "", "title": "Synthetik Applied Technologies"}, {"location": "videos/#blastfem-gpu-accelerated-high-performance-energy-efficient-solver", "text": "", "title": "blastFEM: GPU-Accelerated, High-Performance, Energy-Efficient Solver"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_13", "text": "Tim Brewer, Ben Shields, Peter Vonk, Jeff Heylmun, and Barlev Raymond of Synthetik Applied Technologies presented \"blastFEM: A GPU-Accelerated, Very High-Performance and Energy-Efficient Solver for Highly Compressible Flows.\" Highly compressible multiphase and reactive flows are important, and manifest across a myriad of practical applications: novel energy production and propulsion methods, building design, safety and energy efficiency, material discovery, and maintenance of our nuclear arsenal. There are, however, few tools available to industry capable of simulating these flows at a resolution and scale suitable to make predictions of adequate detail\u2014at least within reasonable timeframes and budgetary constraints\u2014to inform engineers and designers. A next generation, highly efficient simulation code is needed that can deliver results within useful timeframes, with sufficient detail to support simulation-driven design, discovery, and optimization. 
Furthermore, the code must be designed to run on modern and emerging heterogeneous architectures, and must efficiently leverage these architectures through the use of numerical schemes designed to maximize computational efficiency.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#adolfo-rodriguez-opensim-technology", "text": "", "title": "Adolfo Rodriguez (OpenSim Technology)"}, {"location": "videos/#using-mfem-for-wellbore-stability-analysis", "text": "", "title": "Using MFEM for Wellbore Stability Analysis"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_14", "text": "Adolfo Rodriguez of OpenSim Technology presented \"Using MFEM for Wellbore Stability Analysis.\" He discussed the results from a Department of Energy Small Business Innovation Research project regarding the implementation of wellbore stability analysis for hydrocarbon-producing wells.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#julian-andrej-llnl", "text": "", "title": "Julian Andrej (LLNL)"}, {"location": "videos/#aws-tutorial", "text": "", "title": "AWS Tutorial"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_15", "text": "In this tutorial, Julian Andrej of LLNL demonstrated how to use MFEM in the cloud (e.g., an Amazon Web Services instance) for scalable finite element discretization application development. Step-by-step instructions for the tutorial can be found on the tutorial page .", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#aaron-fisher-llnl_4", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos/#wrap-up-and-simulation-contest-winners", "text": "", "title": "Wrap-Up and Simulation Contest Winners"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_16", "text": "Aaron Fisher of LLNL concluded the workshop by announcing the winners of the simulation and visualization contest: (1) streamlines of the electric field generated by a current dipole source located in the temporal lobe of an epilepsy patient, rendered by Ben Zwick of the University of Western Australia; (2) a topology-optimized heat sink, rendered by Tobias Duswald of CERN/Technical University of Munich; (3) the magnetic field induced by current running through copper wire in air, rendered by Will Pazner of Portland State University. Contest winners are featured in the MFEM gallery .", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#conferences-in-2022", "text": "", "title": "Conferences in 2022"}, {"location": "videos/#vladimir-tomov-llnl_1", "text": "", "title": "Vladimir Tomov (LLNL)"}, {"location": "videos/#finite-element-algorithms-and-research-topics-in-ale-hydrodynamics", "text": "", "title": "Finite Element Algorithms and Research Topics in ALE Hydrodynamics"}, {"location": "videos/#november-17-2022-texas-am-university-corpus-christi-department-of-math-statistics", "text": "LLNL computational mathematician Vladimir Tomov discussed high-order finite element methods research, development, and application in the context of shock hydrodynamics simulations. The method is based on an Arbitrary Lagrangian-Eulerian (ALE) formulation consisting of separate Lagrangian, mesh optimization, and remap phases. 
The presentation addressed the following topics: Lagrangian shock hydrodynamics on curved meshes; multi-material closure models; coupling to multigroup radiation diffusion; optimization, r-adaptivity, and surface fitting of high-order meshes; advection-based remap with nonlinear sharpening of material interfaces; synchronization between the max/min bounds of primal and conservative fields during remap; computationally efficient finite element kernels based on partial assembly and sum factorization. The talk also covered the existing methods followed by a discussion about the outstanding research challenges and ongoing work to address them.", "title": "November 17, 2022 | Texas A&M University-Corpus Christi Department of Math & Statistics"}, {"location": "videos/#john-camier-llnl", "text": "", "title": "John Camier (LLNL)"}, {"location": "videos/#all-out-kernel-fusion-reaching-peak-performance-faster-in-high-order-finite-element-simulations", "text": "", "title": "All-Out Kernel Fusion: Reaching Peak Performance Faster in High-Order Finite Element Simulations"}, {"location": "videos/#march-2124-2022-nvidia-gtc22", "text": "LLNL research scientist John Camier described recent improvements of high-order finite element CUDA kernels that can reduce the time-to-solution by a factor of 10. Augmenting traditional compiler representations with a general mathematical description enables a sustainable way to generate optimized kernels, matching the peak performance of hand-tuned CUDA code. Such intermediate graph-based representation provides significant potential for optimization, both in terms of minimizing the number of kernel launches and in reducing the memory bandwidth. Camier also presented results on single and multiple GPUs that demonstrate significant reduction in the local problem size required to reach peak performance, leading to faster time-to-solution in finite element applications.", "title": "March 21\u201324, 2022 | NVIDIA GTC22"}, {"location": "videos/#mfem-workshop-2021", "text": "", "title": "MFEM Workshop 2021"}, {"location": "videos/#aaron-fisher-llnl_5", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos/#wrap-up-and-simulation-contest-winners_1", "text": "", "title": "Wrap-Up and Simulation Contest Winners"}, {"location": "videos/#october-20-2021-mfem-workshop-2021", "text": "MFEM\u2019s first community workshop was held virtually on October 20, 2021, with participants around the world. Aaron Fisher of LLNL concluded the workshop by announcing the results of the simulation and visualization contest. The winners represent two very different research applications using MFEM: (1) the electric field generated by electrocardiogram waves of a rabbit\u2019s heart ventricles, rendered by Dennis Ogiermann of Ruhr-University Bochum (Germany); (2) incompressible fluid flow around a rotating turbine, animated by Tamas Horvath of Oakland University (Michigan). Contest submissions are featured in the gallery .", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#will-pazner-llnl", "text": "", "title": "Will Pazner (LLNL)"}, {"location": "videos/#high-order-matrix-free-solvers", "text": "", "title": "High-Order Matrix-Free Solvers"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_1", "text": "For users unfamiliar with MFEM\u2019s solver library, Will Pazner of LLNL demonstrated a few ways\u2014in some cases adding just a single line of code\u2014to run scalable solvers for differential equations. 
These solvers execute hierarchical finite element discretizations for both low- and high-order problems.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#vladimir-tomov-llnl_2", "text": "", "title": "Vladimir Tomov (LLNL)"}, {"location": "videos/#mfem-capabilities-for-high-order-mesh-optimization", "text": "", "title": "MFEM Capabilities for High-Order Mesh Optimization"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_2", "text": "Vladimir Tomov of LLNL described MFEM\u2019s mesh optimization strategies including ways the user can define target elements. He demonstrated optimizing a mesh\u2019s shape by limiting displacements to preserve a boundary layer and by changing the size of a uniform mesh in a specific region. MFEM\u2019s mesh-optimizing miniapps are available online .", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#william-dawn-ncsu", "text": "", "title": "William Dawn (NCSU)"}, {"location": "videos/#unstructured-finite-element-neutron-transport-using-mfem", "text": "", "title": "Unstructured Finite Element Neutron Transport using MFEM"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_3", "text": "William Dawn from North Carolina State University described his work with unstructured neutron transport. His team models microreactors, a new class of compact reactor with relatively small electrical output. As part of the Exascale Computing Project, Dawn\u2019s team is modeling the MARVEL reactor, which is planned for construction at Idaho National Laboratory. MFEM satisfies their need for a finite element framework with GPU support and rapid prototyping. With MFEM, the team discretizes a neutron transport equation with six independent variables in space, direction, and energy. Traditional neutron transport methods use a \u201csweeping\u201d method to transport particles through a problem, but this is not feasible for generally unstructured meshes. In Dawn\u2019s models, the Self-Adjoint Angular Flux (SAAF) form of the neutron transport equation is used to transform the neutron transport equation from a first-order hyperbolic form to a second-order elliptic form. Then, the SAAF equations are discretized with the finite element method and solved using MFEM. Due to the dependence of the neutron flux on angle and direction, these problems have a high vector-dimension with hundreds to thousands of degrees of freedom (DOF) per mesh vertex. Also, due to the second-order nature of these equations, highly refined meshes are required to sufficiently resolve reactor geometries with millions of vertices in a mesh. Results have been prepared for problems with billions of DOF using the Summit supercomputer at Oak Ridge National Laboratory.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#syunichi-shiraiwa-pppl_1", "text": "", "title": "Syun\u2019ichi Shiraiwa (PPPL)"}, {"location": "videos/#development-of-pymfem-python-wrapper-for-mfem-scalable-rf-wave-simulation-for-nuclear-fusion", "text": "", "title": "Development of PyMFEM Python Wrapper for MFEM & Scalable RF Wave Simulation for Nuclear Fusion"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_4", "text": "Syun\u2019ichi Shiraiwa of Pacific Northwest National Laboratory introduced PyMFEM, a Python wrapper for MFEM that his team uses in radiofrequency (RF) wave simulations for the RF-SciDAC project. RF waves can be used to heat plasma in a nuclear fusion reaction. 
Simulations of this process present multiple challenges when incorporating a variety of antenna structures at different frequencies, waves with different wave lengths in the same space or spatially diverse, and RF wave effects on background plasma. To integrate MFEM, a C++ software library, into their multiphysics platform, Shiraiwa\u2019s team created a code \u201cwrapper\u201d that binds MFEM to the external Python components of RF wave simulations, ultimately extending MFEM\u2019s features to Python users. Shiraiwa described how the PyMFEM module works in serial and parallel and invited the audience to contribute to the open-source code.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#qi-tang-lanl", "text": "", "title": "Qi Tang (LANL)"}, {"location": "videos/#an-adaptive-scalable-fully-implicit-resistive-mhd-solver", "text": "", "title": "An Adaptive, Scalable Fully Implicit Resistive MHD Solver"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_5", "text": "Qi Tang of Los Alamos National Laboratory described his team\u2019s development of an efficient, scalable solver for tokamak plasma simulations. Magnetohydrodynamics (MHD) equations are important for studying plasma systems, but efficient numerical solutions for MHD are extremely challenging due to disparate time and length scales, strong hyperbolic phenomena, and nonlinearity. Tang\u2019s team has developed a high-order stabilized finite element algorithm for incompressible resistive MHD equations based on MFEM, which provides physics-based preconditioners, adaptive mesh refinement, parallelization, and load balancing. Tang showed animated examples of the model\u2019s scalable and efficient results.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#jan-nikl-eli-beamlines", "text": "", "title": "Jan Nikl (ELI Beamlines)"}, {"location": "videos/#laser-plasma-modeling-with-high-order-finite-elements", "text": "", "title": "Laser Plasma Modeling with High-Order Finite Elements"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_6", "text": "Jan Nikl outlined how his team at the ELI Beamlines Centre uses MFEM for laser plasma modeling. Lasers have found their application in many scientific disciplines, where generation of plasma, the fourth state of matter, holds great potential for the future. A detailed description of laser produced plasmas is then essential for many applications, like (pre)pulses of ultra-intense lasers and ion acceleration beamlines, laboratory astrophysics, inertial confinement fusion, and many others. All of the mentioned are investigated at ELI Beamlines in the Czech Republic, a European laser facility aiming to operate the most intense laser system in the world. In this context, Nikl described the numerical construction based on the finite element method. 
This effort concentrates mainly on the Lagrangian hydrodynamics and Vlasov\u2013Fokker\u2013Planck\u2013Maxwell kinetic description of plasma, utilizing the MFEM library for its flexibility and scalability.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#mathias-davids-harvard", "text": "", "title": "Mathias Davids (Harvard)"}, {"location": "videos/#modeling-peripheral-nerve-stimulations-pns-in-magnetic-resonance-imaging-mri", "text": "", "title": "Modeling Peripheral Nerve Stimulations (PNS) in Magnetic Resonance Imaging (MRI)"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_7", "text": "Mathias Davids from Harvard Medical School presented MFEM\u2019s use in a medical setting. Peripheral nerve stimulation (PNS) limits the usable image encoding performance in the latest generation of magnetic resonance imaging (MRI) scanners. The rapid switching of the MRI gradient coils\u2019 magnetic fields induces electric fields in the human body strong enough to evoke unwanted action potentials in peripheral nerves, leading to muscle contractions or touch perceptions. Despite its limiting role in MRI, PNS effects are not directly included during the coil design phase. Davids\u2019 team developed a modeling tool to predict PNS thresholds and locations in the human body, allowing them to directly incorporate PNS metrics in the numeric coil winding optimization to design PNS-optimized coil layouts. This modeling tool relies on electromagnetic field simulations in heterogeneous finite element body models coupled to neurodynamic models of myelinated nerve fibers. This tool enables researchers to develop strategies that mitigate PNS effects without building expensive prototype MRI systems, maximizing the usable image encoding performance.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#marc-bolinches-ut", "text": "", "title": "Marc Bolinches (UT)"}, {"location": "videos/#development-of-dg-compressible-navier-stokes-solver-with-mfem", "text": "", "title": "Development of DG Compressible Navier-Stokes Solver with MFEM"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_8", "text": "Marc Bolinches from the University of Texas at Austin described a compressible Navier-Stokes solver built with MFEM v4.2, which did not include full support for GPUs. The solver uses the discontinuous Galerkin (DG) method as a space discretization and an explicit Runge-Kutta time-integration scheme. An effort has been made to fully support GPU computation by taking over some of the loops internal to the NonLinearForm class. This has also allowed the team to implement overlap between computation and communication. The team hopes their open-source code will help other researchers in creating high-fidelity simulations of compressible flows.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#robert-rieben-llnl", "text": "", "title": "Robert Rieben (LLNL)"}, {"location": "videos/#the-multiphysics-on-advanced-platforms-project-performance-portability-and-scaling", "text": "", "title": "The Multiphysics on Advanced Platforms Project: Performance, Portability and Scaling"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_9", "text": "High-energy-density physics (HEDP) experiments performed at LLNL and other Department of Energy laboratories require multiphysics simulations to predict the behavior of complex physical systems for applications including inertial confinement fusion, pulsed power, and material strength/equations-of-state studies. 
Robert Rieben described the variety of mathematical algorithms needed for these simulations, including ALE methods, unstructured adaptive mesh refinement, and high-order discretizations. LLNL\u2019s Multiphysics on Advanced Platforms Project (MAPP) is developing a next-generation multiphysics code, called MARBL, based on high-order numerical methods and modular infrastructure for deployment on advanced HPC architectures. MARBL\u2019s use of high-order methods produces better throughput on GPUs. MARBL uses MFEM for finite elements and mesh/field/operator abstractions while leveraging its support for efficient memory management. Rieben explained that co-design efforts among the MARBL, MFEM, and RAJA (portability software) teams led to better device utilization and improved performance for the MARBL code.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#felipe-gomez-carlos-del-valle-julian-jimenez-national-university-of-colombia", "text": "", "title": "Felipe G\u00f3mez, Carlos del Valle, & Juli\u00e1n Jim\u00e9nez (National University of Colombia)"}, {"location": "videos/#phase-change-heat-and-mass-transfer-simulation-with-mfem", "text": "", "title": "Phase Change Heat and Mass Transfer Simulation with MFEM"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_10", "text": "Three undergraduate students\u2014Felipe G\u00f3mez, Carlos del Valle, and Juli\u00e1n Jim\u00e9nez\u2014from the National University of Colombia presented their work using MFEM in an oceanographic model. Below the Arctic sea ice, and under the right conditions, a flux of icy brine flows down into the sea. The icy brine has a much lower fusion point and a higher density than normal seawater. As a result, it sinks while freezing everything around it, forming an ice channel called a brinicle (also known as ice stalactite). The team shared their simulations of this phenomenon assuming cylindrical symmetry. The fluid is considered viscous and quasi-stationary, and the problem is simulated taking advantage of the setup symmetries. The heat and salt transport are weakly coupled to the fluid motion and are modeled with the corresponding conservation equations, taking into account diffusive and convective effects. The coupled system of partial differential equations is discretized and solved with the help of the MFEM finite element library.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#thomas-helfer-cea", "text": "", "title": "Thomas Helfer (CEA)"}, {"location": "videos/#mfem-mgis-mfront-a-mfem-based-library-for-nonlinear-solid-thermomechanic", "text": "", "title": "MFEM-MGIS-MFront, a MFEM-Based Library for Nonlinear Solid Thermomechanic"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_11", "text": "Thomas Helfer from the French Atomic Energy Commission (CEA) introduced the MFEM-MGIS-MFront library (MMM), which aims for efficient use of supercomputers in the field of implicit nonlinear thermomechanics. His team\u2019s primary focus is to develop advanced nuclear fuel element simulations where the evolution of materials under irradiation are influenced by multiple phenomena (e.g., viscoplasticity, damage, phase transitions, swelling due to solid and gaseous fission products). MFEM provides this project with finite element abstractions, adaptive mesh refinement, and a parallel API. 
However, as applications dedicated to solid mechanics in MFEM are mostly limited to a few constitutive equations such as elasticity and hyperelasticity, Helfer explained that his team extended the software\u2019s functionality to cover a broader spectrum of mechanics. Thus, this MMM project combines MFEM with the MFrontGenericInterfaceSupport (MGIS), an open-source C++ library that provides data structures to support arbitrarily complex nonlinear constitutive equations generated by the MFront code generator. MMM is developed within the scope of CEA\u2019s PLEIADES project. Helfer\u2019s presentation provided (1) an introduction to MMM goals; (2) a tutorial of MMM usage with a focus on the high-level user interface; (3) an overview of the core design choices of MMM and how MFEM was extended to support a range of scenarios; and (4) feedback on the two main issues encountered during MMM development.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#jamie-bramwell-llnl", "text": "", "title": "Jamie Bramwell (LLNL)"}, {"location": "videos/#serac-user-friendly-abstractions-for-mfem-based-engineering-applications", "text": "", "title": "Serac: User-Friendly Abstractions for MFEM-Based Engineering Applications"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_12", "text": "Jamie Bramwell of LLNL presented an overview of the open-source Serac project , whose goal is to provide user-friendly abstractions and modules that enable rapid development of complex nonlinear multiphysics simulation codes. She provided an overview of both the high-level physics modules (thermal conduction, solid mechanics, incompressible flow, electromagnetics) as well as the serac::Functional framework for quickly developing nonlinear GPU-enabled finite element method kernels.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#veselin-dobrev-llnl_3", "text": "", "title": "Veselin Dobrev (LLNL)"}, {"location": "videos/#recent-developments-in-mfem_1", "text": "", "title": "Recent Developments in MFEM"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_13", "text": "Veselin Dobrev of LLNL detailed the project\u2019s recent developments including memory manager improvements; serial support for p- and hp-refinement; high-order/low-order refined solution transfer; GLVis visualization via Jupyter Notebooks; and additional GPU support regarding HYPRE preconditioners, PETSc tools, and mesh optimization. MFEM now also integrates with various new libraries (AmgX, Ginkgo, FMS, and others), and continuous integration testing has been conducted on LLNL\u2019s Quartz, Lassen, and Corona machines. Additionally, Dobrev summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Exascale Computing Project, SciDAC, the FASTMath Institute, and other projects.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#tzanio-kolev-llnl_4", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos/#the-state-of-mfem_3", "text": "", "title": "The State of MFEM"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_14", "text": "MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities of discretization algorithms, built-in solvers, parallel scalability, adaptive mesh refinement, and support for a range of computing architectures. 
Kolev also highlighted the global community\u2019s contributions as well as features included in the recent v4.3 software release.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#aaron-fisher-llnl_6", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos/#welcome-and-overview_3", "text": "", "title": "Welcome and Overview"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_15", "text": "The MFEM community workshop held virtually on October 20, 2021, brought together users and developers for a review of software features and the development roadmap, a showcase of technical talks and applications, collaborative breakout sessions, and a simulation contest. Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community survey results.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#conferences-in-2021", "text": "", "title": "Conferences in 2021"}, {"location": "videos/#tzanio-kolev-llnl_5", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos/#efficient-finite-element-discretizations-for-exascale-applications", "text": "", "title": "Efficient Finite Element Discretizations for Exascale Applications"}, {"location": "videos/#february-25-2021-excalibur-sle-3-workshop", "text": "", "title": "February 25, 2021 | ExCALIBUR SLE 3 workshop"}, {"location": "videos/#atpesc-2017-2018", "text": "", "title": "ATPESC 2017, 2018"}, {"location": "videos/#tzanio-kolev-llnl-mark-shephard-rpi-and-cameron-smith-rpi", "text": "", "title": "Tzanio Kolev (LLNL), Mark Shephard (RPI) and Cameron Smith (RPI)"}, {"location": "videos/#unstructured-meshing-technologies", "text": "", "title": "Unstructured Meshing Technologies"}, {"location": "videos/#august-6-2018-atpesc-2018", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2018. Slides for this presentation are available here .", "title": "August 6, 2018 | ATPESC 2018"}, {"location": "videos/#tzanio-kolev-llnl-and-mark-shephard-rpi", "text": "", "title": "Tzanio Kolev (LLNL) and Mark Shephard (RPI)"}, {"location": "videos/#unstructured-meshing-technologies_1", "text": "", "title": "Unstructured Meshing Technologies"}, {"location": "videos/#august-7-2017-atpesc-2017", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here .", "title": "August 7, 2017 | ATPESC 2017"}, {"location": "videos/#tzanio-kolev-llnl-and-mark-shephard-rpi_1", "text": "", "title": "Tzanio Kolev (LLNL) and Mark Shephard (RPI)"}, {"location": "videos/#conforming-nonconforming-adaptivity-for-unstructured-meshes", "text": "", "title": "Conforming & Nonconforming Adaptivity for Unstructured Meshes"}, {"location": "videos/#august-7-2017-atpesc-2017_1", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2017. 
Slides for this presentation are available here .", "title": "August 7, 2017 | ATPESC 2017"}, {"location": "videos/#other-videos", "text": "", "title": "Other Videos"}, {"location": "videos/#llnl-hpc-software-tutorials-mfem", "text": "", "title": "LLNL HPC Software Tutorials: MFEM"}, {"location": "videos/#aug-22-2024", "text": "Instructions for a self-paced overview of MFEM.", "title": "Aug 22, 2024"}, {"location": "videos/#mfem-advanced-simulation-algorithms-for-hpc-applications", "text": "", "title": "MFEM: Advanced Simulation Algorithms for HPC Applications"}, {"location": "videos/#jun-24-2020", "text": "Overview of MFEM 4.0 featuring some of its developers.", "title": "Jun 24, 2020"}, {"location": "videos/#center-for-applied-scientific-computing", "text": "", "title": "Center for Applied Scientific Computing"}, {"location": "videos/#jul-12-2019", "text": "Overview of the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory, including a highlight of MFEM.", "title": "Jul 12, 2019"}, {"location": "videos/#str-preview-exascale-computing", "text": "", "title": "S&TR Preview: Exascale Computing"}, {"location": "videos/#october-6-2016", "text": "Some early MFEM results in the BLAST project.", "title": "October 6, 2016"}, {"location": "videos2/", "text": "MFEM Videos A collection of MFEM-related videos, including recorded talks from the MFEM workshops and conference presentations. 2021 Aaron Fisher (LLNL) Wrap-Up and Simulation Contest Winners October 20, 2021 | MFEM Workshop 2021 MFEM\u2019s first community workshop was held virtually on October 20, 2021, with participants around the world. Aaron Fisher of LLNL concluded the workshop by announcing the results of the simulation and visualization contest. The winners represent two very different research applications using MFEM: (1) the electric field generated by electrocardiogram waves of a rabbit\u2019s heart ventricles, rendered by Dennis Ogiermann of Ruhr-University Bochum (Germany); (2) incompressible fluid flow around a rotating turbine, animated by Tamas Horvath of Oakland University (Michigan). Contest submissions are featured at https://mfem.org/gallery . Will Pazner (LLNL) High-Order Matrix-Free Solvers October 20, 2021 | MFEM Workshop 2021 For users unfamiliar with MFEM\u2019s solver library, Will Pazner of LLNL demonstrated a few ways\u2014in some cases adding just a single line of code\u2014to run scalable solvers for differential equations. These solvers execute hierarchical finite element discretizations for both low- and high-order problems. Vladimir Tomov (LLNL) MFEM Capabilities for High-Order Mesh Optimization October 20, 2021 | MFEM Workshop 2021 Vladimir Tomov of LLNL described MFEM\u2019s mesh optimization strategies including ways the user can define target elements. He demonstrated optimizing a mesh\u2019s shape by limiting displacements to preserve a boundary layer and by changing the size of a uniform mesh in a specific region. MFEM\u2019s mesh-optimizing miniapps are available online at https://mfem.org/meshing-miniapps . William Dawn (NCSU) Unstructured Finite Element Neutron Transport using MFEM October 20, 2021 | MFEM Workshop 2021 William Dawn from North Carolina State University described his work with unstructured neutron transport. His team models microreactors, a new class of compact reactor with relatively small electrical output. 
As part of the Exascale Computing Project, Dawn\u2019s team is modeling the MARVEL reactor, which is planned for construction at Idaho National Laboratory. MFEM satisfies their need for a finite element framework with GPU support and rapid prototyping. With MFEM, the team discretizes a neutron transport equation with six independent variables in space, direction, and energy. Traditional neutron transport methods use a \u201csweeping\u201d method to transport particles through a problem, but this is not feasible for generally unstructured meshes. In Dawn\u2019s models, the Self-Adjoint Angular Flux (SAAF) form of the neutron transport equation is used to transform the neutron transport equation from a first-order hyperbolic form to a second-order elliptic form. Then, the SAAF equations are discretized with the finite element method and solved using MFEM. Due to the dependence of the neutron flux on angle and direction, these problems have a high vector-dimension with hundreds to thousands of degrees of freedom (DOF) per mesh vertex. Also, due to the second-order nature of these equations, highly refined meshes are required to sufficiently resolve reactor geometries with millions of vertices in a mesh. Results have been prepared for problems with billions of DOF using the Summit supercomputer at Oak Ridge National Laboratory. Syun\u2019ichi Shiraiwa (PPPL) Development of PyMFEM Python Wrapper for MFEM & Scalable RF Wave Simulation for Nuclear Fusion October 20, 2021 | MFEM Workshop 2021 Syun\u2019ichi Shiraiwa of Pacific Northwest National Laboratory introduced PyMFEM, a Python wrapper for MFEM that his team uses in radiofrequency (RF) wave simulations for the RF-SciDAC project. RF waves can be used to heat plasma in a nuclear fusion reaction. Simulations of this process present multiple challenges when incorporating a variety of antenna structures at different frequencies, waves with different wave lengths in the same space or spatially diverse, and RF wave effects on background plasma. To integrate MFEM, a C++ software library, into their multiphysics platform, Shiraiwa\u2019s team created a code \u201cwrapper\u201d that binds MFEM to the external Python components of RF wave simulations, ultimately extending MFEM\u2019s features to Python users. Shiraiwa described how the PyMFEM module works in serial and parallel and invited the audience to contribute to the open-source code. Qi Tang (LANL) An Adaptive, Scalable Fully Implicit Resistive MHD Solver October 20, 2021 | MFEM Workshop 2021 Qi Tang of Los Alamos National Laboratory described his team\u2019s development of an efficient, scalable solver for tokamak plasma simulations. Magnetohydrodynamics (MHD) equations are important for studying plasma systems, but efficient numerical solutions for MHD are extremely challenging due to disparate time and length scales, strong hyperbolic phenomena, and nonlinearity. Tang\u2019s team has developed a high-order stabilized finite element algorithm for incompressible resistive MHD equations based on MFEM, which provides physics-based preconditioners, adaptive mesh refinement, parallelization, and load balancing. Tang showed animated examples of the model\u2019s scalable and efficient results. Jan Nikl (ELI Beamlines) Laser Plasma Modeling with High-Order Finite Elements October 20, 2021 | MFEM Workshop 2021 Jan Nikl outlined how his team at the ELI Beamlines Centre uses MFEM for laser plasma modeling. 
Lasers have found their application in many scientific disciplines, where generation of plasma, the fourth state of matter, holds great potential for the future. A detailed description of laser produced plasmas is then essential for many applications, like (pre)pulses of ultra-intense lasers and ion acceleration beamlines, laboratory astrophysics, inertial confinement fusion, and many others. All of the mentioned are investigated at ELI Beamlines in the Czech Republic, a European laser facility aiming to operate the most intense laser system in the world. In this context, Nikl described the numerical construction based on the finite element method. This effort concentrates mainly on the Lagrangian hydrodynamics and Vlasov\u2013Fokker\u2013Planck\u2013Maxwell kinetic description of plasma, utilizing the MFEM library for its flexibility and scalability. Mathias Davids (Harvard) Modeling Peripheral Nerve Stimulations (PNS) in Magnetic Resonance Imaging (MRI) October 20, 2021 | MFEM Workshop 2021 Mathias Davids from Harvard Medical School presented MFEM\u2019s use in a medical setting. Peripheral nerve stimulation (PNS) limits the usable image encoding performance in the latest generation of magnetic resonance imaging (MRI) scanners. The rapid switching of the MRI gradient coils\u2019 magnetic fields induces electric fields in the human body strong enough to evoke unwanted action potential in peripheral nerves, leading to muscle contractions or touch perceptions. Despite its limiting role in MRI, PNS effects are not directly included during the coil design phase. Davids\u2019 team developed a modeling tool to predict PNS thresholds and locations in the human body, allowing them to directly incorporate PNS metrics in the numeric coil winding optimization to design PNS-optimized coil layouts. This modeling tool relies on electromagnetic field simulations in heterogeneous finite element body models coupled to neurodynamic models of myelinated nerve fibers. This tool enables researchers to develop strategies that mitigate PNS effects without building expensive prototype MRI systems, maximizing the usable image encoding performance. Marc Bolinches (UT) Development of DG Compressible Navier-Stokes Solver with MFEM October 20, 2021 | MFEM Workshop 2021 Marc Bolinches from the University of Texas at Austin described a compressible Navier-Stokes solver using MFEM v4.2 which did not include full support for GPUs. The solver uses the discontinuous Galerkin (DG) method as a space discretization and an explicit Runge-Kutta time-integration scheme. An effort has been made to fully support GPU computation by taking over some of the loops internal to the NonLinearForm class. This has also allowed us to implement overlap between computation and communication. The team hopes their open-source code will help other researchers in creating high-fidelity simulations of compressible flows. Robert Rieben (LLNL) The Multiphysics on Advanced Platforms Project: Performance, Portability and Scaling October 20, 2021 | MFEM Workshop 2021 High-energy-density physics (HEDP) experiments performed at LLNL and other Department of Energy laboratories require multiphysics simulations to predict the behavior of complex physical systems for applications including inertial confinement fusion, pulsed power, and material strength/equations-of-state studies. 
Robert Rieben described the variety of mathematical algorithms needed for these simulations, including ALE methods, unstructured adaptive mesh refinement, and high-order discretizations. LLNL\u2019s Multiphysics on Advanced Platforms Project (MAPP) is developing a next-generation multiphysics code, called MARBL, based on high-order numerical methods and modular infrastructure for deployment on advanced HPC architectures. MARBL\u2019s use of high-order methods produces better throughput on GPUs. MARBL uses MFEM for finite elements and mesh/field/operator abstractions while leveraging its support for efficient memory management. Rieben explained that co-design efforts among the MARBL, MFEM, and RAJA (portability software) teams led to better device utilization and improved performance for the MARBL code. Felipe G\u00f3mez, Carlos del Valle, & Juli\u00e1n Jim\u00e9nez (National University of Colombia) Phase Change Heat and Mass Transfer Simulation with MFEM October 20, 2021 | MFEM Workshop 2021 Three undergraduate students\u2014Felipe G\u00f3mez, Carlos del Valle, and Juli\u00e1n Jim\u00e9nez\u2014from the National University of Colombia presented their work using MFEM in an oceanographic model. Below the Arctic sea ice, and under the right conditions, a flux of icy brine flows down into the sea. The icy brine has a much lower fusion point and a higher density than normal seawater. As a result, it sinks while freezing everything around it, forming an ice channel called a brinicle (also known as ice stalactite). The team shared their simulations of this phenomenon assuming cylindrical symmetry. The fluid is considered viscous and quasi-stationary, and the problem is simulated taking advantage of the setup symmetries. The heat and salt transport are weakly coupled to the fluid motion and are modeled with the corresponding conservation equations, taking into account diffusive and convective effects. The coupled system of partial differential equations is discretized and solved with the help of the MFEM finite element library. Thomas Helfer (CEA) MFEM-MGIS-MFront, a MFEM-Based Library for Nonlinear Solid Thermomechanic October 20, 2021 | MFEM Workshop 2021 Thomas Helfer from the French Atomic Energy Commission (CEA) introduced the MFEM-MGIS-MFront library (MMM), which aims for efficient use of supercomputers in the field of implicit nonlinear thermomechanics. His team\u2019s primary focus is to develop advanced nuclear fuel element simulations where the evolution of materials under irradiation are influenced by multiple phenomena (e.g., viscoplasticity, damage, phase transitions, swelling due to solid and gaseous fission products). MFEM provides this project with finite element abstractions, adaptive mesh refinement, and a parallel API. However, as applications dedicated to solid mechanics in MFEM are mostly limited to a few constitutive equations such as elasticity and hyperelasticity, Helfer explained that his team extended the software\u2019s functionality to cover a broader spectrum of mechanics. Thus, this MMM project combines MFEM with the MFrontGenericInterfaceSupport (MGIS), an open-source C++ library that provides data structures to support arbitrarily complex nonlinear constitutive equations generated by the MFront code generator. MMM is developed within the scope of CEA\u2019s PLEIADES project. 
Helfer\u2019s presentation provided (1) an introduction to MMM goals; (2) a tutorial of MMM usage with a focus on the high-level user interface; (3) an overview of the core design choices of MMM and how MFEM was extended to support a range of scenarios; and (4) feedback on the two main issues encountered during MMM development. Jamie Bramwell (LLNL) Serac: User-Friendly Abstractions for MFEM-Based Engineering Applications October 20, 2021 | MFEM Workshop 2021 Jamie Bramwell of LLNL presented an overview of the open-source Serac project ( https://serac.readthedocs.io/en/latest ), whose goal is to provide user-friendly abstractions and modules that enable rapid development of complex nonlinear multiphysics simulation codes. She provided an overview of both the high-level physics modules (thermal conduction, solid mechanics, incompressible flow, electromagnetics) as well as the serac::Functional framework for quickly developing nonlinear GPU-enabled finite element method kernels. Veselin Dobrev (LLNL) Recent Developments in MFEM October 20, 2021 | MFEM Workshop 2021 Veselin Dobrev of LLNL detailed the project\u2019s recent developments including memory manager improvements; serial support for p- and hp-refinement; high-order/low-order refined solution transfer; GLVis visualization via Jupyter Notebooks; and additional GPU support regarding HYPRE preconditioners, PETSc tools, and mesh optimization. MFEM now also integrates with various new libraries (AmgX, Ginkgo, FMS, and others), and continuous integration testing has been conducted on LLNL\u2019s Quartz, Lassen, and Corona machines. Additionally, Dobrev summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Exascale Computing Project, SciDAC, the FASTMath Institute, and other projects. Tzanio Kolev (LLNL) The State of MFEM October 20, 2021 | MFEM Workshop 2021 MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities of discretization algorithms, built-in solvers, parallel scalability, adaptive mesh refinement, and support for a range of computing architectures. Kolev also highlighted the global community\u2019s contributions as well as features included in the recent v4.3 software release. Aaron Fisher (LLNL) Welcome and Overview October 20, 2021 | MFEM Workshop 2021 The MFEM community workshop held virtually on October 20, 2021, brought together users and developers for a review of software features and the development roadmap, a showcase of technical talks and applications, collaborative breakout sessions, and a simulation contest. Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community survey results. Tzanio Kolev (LLNL) Efficient Finite Element Discretizations for Exascale Applications February 25, 2021 | ExCALIBUR SLE 3 workshop 2020 MFEM: Advanced Simulation Algorithms for HPC Applications Jun 24, 2020 | YouTube Overview of MFEM 4.0 featuring some of its developers. 2019 Center for Applied Scientific Computing Jul 12, 2019 | YouTube Overview of the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory, including a highlight of MFEM. 2018 Tzanio Kolev (LLNL), Mark Shephard (RPI) and Cameron Smith (RPI) Unstructured Meshing Technologies August 6, 2018 | ATPESC 2018 Presented at the Argonne Training Program on Extreme-Scale Computing 2018. Slides for this presentation are available here . 
2017 Tzanio Kolev (LLNL) and Mark Shephard (RPI) Unstructured Meshing Technologies August 7, 2017 | ATPESC 2017 Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here . Tzanio Kolev (LLNL) and Mark Shephard (RPI) Conforming & Nonconforming Adaptivity for Unstructured Meshes August 7, 2017 | ATPESC 2017 Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here . 2016 S&TR Preview: Exascale Computing October 6, 2016 | YouTube Some early MFEM results in the BLAST project.", "title": "MFEM Videos"}, {"location": "videos2/#mfem-videos", "text": "A collection of MFEM-related videos, including recorded talks from the MFEM workshops and conference presentations.", "title": "MFEM Videos"}, {"location": "videos2/#2021", "text": "", "title": "2021"}, {"location": "videos2/#aaron-fisher-llnl", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos2/#wrap-up-and-simulation-contest-winners", "text": "", "title": "Wrap-Up and Simulation Contest Winners"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021", "text": "MFEM\u2019s first community workshop was held virtually on October 20, 2021, with participants around the world. Aaron Fisher of LLNL concluded the workshop by announcing the results of the simulation and visualization contest. The winners represent two very different research applications using MFEM: (1) the electric field generated by electrocardiogram waves of a rabbit\u2019s heart ventricles, rendered by Dennis Ogiermann of Ruhr-University Bochum (Germany); (2) incompressible fluid flow around a rotating turbine, animated by Tamas Horvath of Oakland University (Michigan). Contest submissions are featured at https://mfem.org/gallery .", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#will-pazner-llnl", "text": "", "title": "Will Pazner (LLNL)"}, {"location": "videos2/#high-order-matrix-free-solvers", "text": "", "title": "High-Order Matrix-Free Solvers"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_1", "text": "For users unfamiliar with MFEM\u2019s solver library, Will Pazner of LLNL demonstrated a few ways\u2014in some cases adding just a single line of code\u2014to run scalable solvers for differential equations. These solvers execute hierarchical finite element discretizations for both low- and high-order problems.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#vladimir-tomov-llnl", "text": "", "title": "Vladimir Tomov (LLNL)"}, {"location": "videos2/#mfem-capabilities-for-high-order-mesh-optimization", "text": "", "title": "MFEM Capabilities for High-Order Mesh Optimization"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_2", "text": "Vladimir Tomov of LLNL described MFEM\u2019s mesh optimization strategies including ways the user can define target elements. He demonstrated optimizing a mesh\u2019s shape by limiting displacements to preserve a boundary layer and by changing the size of a uniform mesh in a specific region. 
MFEM\u2019s mesh-optimizing miniapps are available online at https://mfem.org/meshing-miniapps .", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#william-dawn-ncsu", "text": "", "title": "William Dawn (NCSU)"}, {"location": "videos2/#unstructured-finite-element-neutron-transport-using-mfem", "text": "", "title": "Unstructured Finite Element Neutron Transport using MFEM"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_3", "text": "William Dawn from North Carolina State University described his work with unstructured neutron transport. His team models microreactors, a new class of compact reactor with relatively small electrical output. As part of the Exascale Computing Project, Dawn\u2019s team is modeling the MARVEL reactor, which is planned for construction at Idaho National Laboratory. MFEM satisfies their need for a finite element framework with GPU support and rapid prototyping. With MFEM, the team discretizes a neutron transport equation with six independent variables in space, direction, and energy. Traditional neutron transport methods use a \u201csweeping\u201d method to transport particles through a problem, but this is not feasible for generally unstructured meshes. In Dawn\u2019s models, the Self-Adjoint Angular Flux (SAAF) form of the neutron transport equation is used to transform the neutron transport equation from a first-order hyperbolic form to a second-order elliptic form. Then, the SAAF equations are discretized with the finite element method and solved using MFEM. Due to the dependence of the neutron flux on angle and direction, these problems have a high vector-dimension with hundreds to thousands of degrees of freedom (DOF) per mesh vertex. Also, due to the second-order nature of these equations, highly refined meshes are required to sufficiently resolve reactor geometries with millions of vertices in a mesh. Results have been prepared for problems with billions of DOF using the Summit supercomputer at Oak Ridge National Laboratory.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#syunichi-shiraiwa-pppl", "text": "", "title": "Syun\u2019ichi Shiraiwa (PPPL)"}, {"location": "videos2/#development-of-pymfem-python-wrapper-for-mfem-scalable-rf-wave-simulation-for-nuclear-fusion", "text": "", "title": "Development of PyMFEM Python Wrapper for MFEM & Scalable RF Wave Simulation for Nuclear Fusion"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_4", "text": "Syun\u2019ichi Shiraiwa of Pacific Northwest National Laboratory introduced PyMFEM, a Python wrapper for MFEM that his team uses in radiofrequency (RF) wave simulations for the RF-SciDAC project. RF waves can be used to heat plasma in a nuclear fusion reaction. Simulations of this process present multiple challenges when incorporating a variety of antenna structures at different frequencies, waves with different wave lengths in the same space or spatially diverse, and RF wave effects on background plasma. To integrate MFEM, a C++ software library, into their multiphysics platform, Shiraiwa\u2019s team created a code \u201cwrapper\u201d that binds MFEM to the external Python components of RF wave simulations, ultimately extending MFEM\u2019s features to Python users. 
Shiraiwa described how the PyMFEM module works in serial and parallel and invited the audience to contribute to the open-source code.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#qi-tang-lanl", "text": "", "title": "Qi Tang (LANL)"}, {"location": "videos2/#an-adaptive-scalable-fully-implicit-resistive-mhd-solver", "text": "", "title": "An Adaptive, Scalable Fully Implicit Resistive MHD Solver"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_5", "text": "Qi Tang of Los Alamos National Laboratory described his team\u2019s development of an efficient, scalable solver for tokamak plasma simulations. Magnetohydrodynamics (MHD) equations are important for studying plasma systems, but efficient numerical solutions for MHD are extremely challenging due to disparate time and length scales, strong hyperbolic phenomena, and nonlinearity. Tang\u2019s team has developed a high-order stabilized finite element algorithm for incompressible resistive MHD equations based on MFEM, which provides physics-based preconditioners, adaptive mesh refinement, parallelization, and load balancing. Tang showed animated examples of the model\u2019s scalable and efficient results.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#jan-nikl-eli-beamlines", "text": "", "title": "Jan Nikl (ELI Beamlines)"}, {"location": "videos2/#laser-plasma-modeling-with-high-order-finite-elements", "text": "", "title": "Laser Plasma Modeling with High-Order Finite Elements"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_6", "text": "Jan Nikl outlined how his team at the ELI Beamlines Centre uses MFEM for laser plasma modeling. Lasers have found their application in many scientific disciplines, where generation of plasma, the fourth state of matter, holds great potential for the future. A detailed description of laser produced plasmas is then essential for many applications, like (pre)pulses of ultra-intense lasers and ion acceleration beamlines, laboratory astrophysics, inertial confinement fusion, and many others. All of the mentioned are investigated at ELI Beamlines in the Czech Republic, a European laser facility aiming to operate the most intense laser system in the world. In this context, Nikl described the numerical construction based on the finite element method. This effort concentrates mainly on the Lagrangian hydrodynamics and Vlasov\u2013Fokker\u2013Planck\u2013Maxwell kinetic description of plasma, utilizing the MFEM library for its flexibility and scalability.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#mathias-davids-harvard", "text": "", "title": "Mathias Davids (Harvard)"}, {"location": "videos2/#modeling-peripheral-nerve-stimulations-pns-in-magnetic-resonance-imaging-mri", "text": "", "title": "Modeling Peripheral Nerve Stimulations (PNS) in Magnetic Resonance Imaging (MRI)"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_7", "text": "Mathias Davids from Harvard Medical School presented MFEM\u2019s use in a medical setting. Peripheral nerve stimulation (PNS) limits the usable image encoding performance in the latest generation of magnetic resonance imaging (MRI) scanners. The rapid switching of the MRI gradient coils\u2019 magnetic fields induces electric fields in the human body strong enough to evoke unwanted action potential in peripheral nerves, leading to muscle contractions or touch perceptions. 
Despite its limiting role in MRI, PNS effects are not directly included during the coil design phase. Davids\u2019 team developed a modeling tool to predict PNS thresholds and locations in the human body, allowing them to directly incorporate PNS metrics in the numeric coil winding optimization to design PNS-optimized coil layouts. This modeling tool relies on electromagnetic field simulations in heterogeneous finite element body models coupled to neurodynamic models of myelinated nerve fibers. This tool enables researchers to develop strategies that mitigate PNS effects without building expensive prototype MRI systems, maximizing the usable image encoding performance.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#marc-bolinches-ut", "text": "", "title": "Marc Bolinches (UT)"}, {"location": "videos2/#development-of-dg-compressible-navier-stokes-solver-with-mfem", "text": "", "title": "Development of DG Compressible Navier-Stokes Solver with MFEM"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_8", "text": "Marc Bolinches from the University of Texas at Austin described a compressible Navier-Stokes solver using MFEM v4.2 which did not include full support for GPUs. The solver uses the discontinuous Galerkin (DG) method as a space discretization and an explicit Runge-Kutta time-integration scheme. An effort has been made to fully support GPU computation by taking over some of the loops internal to the NonLinearForm class. This has also allowed us to implement overlap between computation and communication. The team hopes their open-source code will help other researchers in creating high-fidelity simulations of compressible flows.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#robert-rieben-llnl", "text": "", "title": "Robert Rieben (LLNL)"}, {"location": "videos2/#the-multiphysics-on-advanced-platforms-project-performance-portability-and-scaling", "text": "", "title": "The Multiphysics on Advanced Platforms Project: Performance, Portability and Scaling"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_9", "text": "High-energy-density physics (HEDP) experiments performed at LLNL and other Department of Energy laboratories require multiphysics simulations to predict the behavior of complex physical systems for applications including inertial confinement fusion, pulsed power, and material strength/equations-of-state studies. Robert Rieben described the variety of mathematical algorithms needed for these simulations, including ALE methods, unstructured adaptive mesh refinement, and high-order discretizations. LLNL\u2019s Multiphysics on Advanced Platforms Project (MAPP) is developing a next-generation multiphysics code, called MARBL, based on high-order numerical methods and modular infrastructure for deployment on advanced HPC architectures. MARBL\u2019s use of high-order methods produces better throughput on GPUs. MARBL uses MFEM for finite elements and mesh/field/operator abstractions while leveraging its support for efficient memory management. 
Rieben explained that co-design efforts among the MARBL, MFEM, and RAJA (portability software) teams led to better device utilization and improved performance for the MARBL code.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#felipe-gomez-carlos-del-valle-julian-jimenez-national-university-of-colombia", "text": "", "title": "Felipe G\u00f3mez, Carlos del Valle, & Juli\u00e1n Jim\u00e9nez (National University of Colombia)"}, {"location": "videos2/#phase-change-heat-and-mass-transfer-simulation-with-mfem", "text": "", "title": "Phase Change Heat and Mass Transfer Simulation with MFEM"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_10", "text": "Three undergraduate students\u2014Felipe G\u00f3mez, Carlos del Valle, and Juli\u00e1n Jim\u00e9nez\u2014from the National University of Colombia presented their work using MFEM in an oceanographic model. Below the Arctic sea ice, and under the right conditions, a flux of icy brine flows down into the sea. The icy brine has a much lower fusion point and a higher density than normal seawater. As a result, it sinks while freezing everything around it, forming an ice channel called a brinicle (also known as ice stalactite). The team shared their simulations of this phenomenon assuming cylindrical symmetry. The fluid is considered viscous and quasi-stationary, and the problem is simulated taking advantage of the setup symmetries. The heat and salt transport are weakly coupled to the fluid motion and are modeled with the corresponding conservation equations, taking into account diffusive and convective effects. The coupled system of partial differential equations is discretized and solved with the help of the MFEM finite element library.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#thomas-helfer-cea", "text": "", "title": "Thomas Helfer (CEA)"}, {"location": "videos2/#mfem-mgis-mfront-a-mfem-based-library-for-nonlinear-solid-thermomechanic", "text": "", "title": "MFEM-MGIS-MFront, a MFEM-Based Library for Nonlinear Solid Thermomechanic"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_11", "text": "Thomas Helfer from the French Atomic Energy Commission (CEA) introduced the MFEM-MGIS-MFront library (MMM), which aims for efficient use of supercomputers in the field of implicit nonlinear thermomechanics. His team\u2019s primary focus is to develop advanced nuclear fuel element simulations where the evolution of materials under irradiation are influenced by multiple phenomena (e.g., viscoplasticity, damage, phase transitions, swelling due to solid and gaseous fission products). MFEM provides this project with finite element abstractions, adaptive mesh refinement, and a parallel API. However, as applications dedicated to solid mechanics in MFEM are mostly limited to a few constitutive equations such as elasticity and hyperelasticity, Helfer explained that his team extended the software\u2019s functionality to cover a broader spectrum of mechanics. Thus, this MMM project combines MFEM with the MFrontGenericInterfaceSupport (MGIS), an open-source C++ library that provides data structures to support arbitrarily complex nonlinear constitutive equations generated by the MFront code generator. MMM is developed within the scope of CEA\u2019s PLEIADES project. 
Helfer\u2019s presentation provided (1) an introduction to MMM goals; (2) a tutorial of MMM usage with a focus on the high-level user interface; (3) an overview of the core design choices of MMM and how MFEM was extended to support a range of scenarios; and (4) feedback on the two main issues encountered during MMM development.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#jamie-bramwell-llnl", "text": "", "title": "Jamie Bramwell (LLNL)"}, {"location": "videos2/#serac-user-friendly-abstractions-for-mfem-based-engineering-applications", "text": "", "title": "Serac: User-Friendly Abstractions for MFEM-Based Engineering Applications"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_12", "text": "Jamie Bramwell of LLNL presented an overview of the open-source Serac project ( https://serac.readthedocs.io/en/latest ), whose goal is to provide user-friendly abstractions and modules that enable rapid development of complex nonlinear multiphysics simulation codes. She provided an overview of both the high-level physics modules (thermal conduction, solid mechanics, incompressible flow, electromagnetics) as well as the serac::Functional framework for quickly developing nonlinear GPU-enabled finite element method kernels.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#veselin-dobrev-llnl", "text": "", "title": "Veselin Dobrev (LLNL)"}, {"location": "videos2/#recent-developments-in-mfem", "text": "", "title": "Recent Developments in MFEM"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_13", "text": "Veselin Dobrev of LLNL detailed the project\u2019s recent developments including memory manager improvements; serial support for p- and hp-refinement; high-order/low-order refined solution transfer; GLVis visualization via Jupyter Notebooks; and additional GPU support regarding HYPRE preconditioners, PETSc tools, and mesh optimization. MFEM now also integrates with various new libraries (AmgX, Ginkgo, FMS, and others), and continuous integration testing has been conducted on LLNL\u2019s Quartz, Lassen, and Corona machines. Additionally, Dobrev summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Exascale Computing Project, SciDAC, the FASTMath Institute, and other projects.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#tzanio-kolev-llnl", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos2/#the-state-of-mfem", "text": "", "title": "The State of MFEM"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_14", "text": "MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities of discretization algorithms, built-in solvers, parallel scalability, adaptive mesh refinement, and support for a range of computing architectures. 
Kolev also highlighted the global community\u2019s contributions as well as features included in the recent v4.3 software release.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#aaron-fisher-llnl_1", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos2/#welcome-and-overview", "text": "", "title": "Welcome and Overview"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_15", "text": "The MFEM community workshop held virtually on October 20, 2021, brought together users and developers for a review of software features and the development roadmap, a showcase of technical talks and applications, collaborative breakout sessions, and a simulation contest. Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community survey results.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#tzanio-kolev-llnl_1", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos2/#efficient-finite-element-discretizations-for-exascale-applications", "text": "", "title": "Efficient Finite Element Discretizations for Exascale Applications"}, {"location": "videos2/#february-25-2021-excalibur-sle-3-workshop", "text": "", "title": "February 25, 2021 | ExCALIBUR SLE 3 workshop"}, {"location": "videos2/#2020", "text": "", "title": "2020"}, {"location": "videos2/#mfem-advanced-simulation-algorithms-for-hpc-applications", "text": "", "title": "MFEM: Advanced Simulation Algorithms for HPC Applications"}, {"location": "videos2/#jun-24-2020-youtube", "text": "Overview of MFEM 4.0 featuring some of its developers.", "title": "Jun 24, 2020 | YouTube"}, {"location": "videos2/#2019", "text": "", "title": "2019"}, {"location": "videos2/#center-for-applied-scientific-computing", "text": "", "title": "Center for Applied Scientific Computing"}, {"location": "videos2/#jul-12-2019-youtube", "text": "Overview of the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory, including a highlight of MFEM.", "title": "Jul 12, 2019 | YouTube"}, {"location": "videos2/#2018", "text": "", "title": "2018"}, {"location": "videos2/#tzanio-kolev-llnl-mark-shephard-rpi-and-cameron-smith-rpi", "text": "", "title": "Tzanio Kolev (LLNL), Mark Shephard (RPI) and Cameron Smith (RPI)"}, {"location": "videos2/#unstructured-meshing-technologies", "text": "", "title": "Unstructured Meshing Technologies"}, {"location": "videos2/#august-6-2018-atpesc-2018", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2018. Slides for this presentation are available here .", "title": "August 6, 2018 | ATPESC 2018"}, {"location": "videos2/#2017", "text": "", "title": "2017"}, {"location": "videos2/#tzanio-kolev-llnl-and-mark-shephard-rpi", "text": "", "title": "Tzanio Kolev (LLNL) and Mark Shephard (RPI)"}, {"location": "videos2/#unstructured-meshing-technologies_1", "text": "", "title": "Unstructured Meshing Technologies"}, {"location": "videos2/#august-7-2017-atpesc-2017", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2017. 
Slides for this presentation are available here .", "title": "August 7, 2017 | ATPESC 2017"}, {"location": "videos2/#tzanio-kolev-llnl-and-mark-shephard-rpi_1", "text": "", "title": "Tzanio Kolev (LLNL) and Mark Shephard (RPI)"}, {"location": "videos2/#conforming-nonconforming-adaptivity-for-unstructured-meshes", "text": "", "title": "Conforming & Nonconforming Adaptivity for Unstructured Meshes"}, {"location": "videos2/#august-7-2017-atpesc-2017_1", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here .", "title": "August 7, 2017 | ATPESC 2017"}, {"location": "videos2/#2016", "text": "", "title": "2016"}, {"location": "videos2/#str-preview-exascale-computing", "text": "", "title": "S&TR Preview: Exascale Computing"}, {"location": "videos2/#october-6-2016-youtube", "text": "Some early MFEM results in the BLAST project.", "title": "October 6, 2016 | YouTube"}, {"location": "videos3/", "text": "MFEM Videos A collection of MFEM-related videos, including recorded talks from the MFEM workshops and conference presentations. MFEM Workshop 2021 Aaron Fisher (LLNL) Wrap-Up and Simulation Contest Winners October 20, 2021 | MFEM Workshop 2021 MFEM\u2019s first community workshop was held virtually on October 20, 2021, with participants around the world. Aaron Fisher of LLNL concluded the workshop by announcing the results of the simulation and visualization contest. The winners represent two very different research applications using MFEM: (1) the electric field generated by electrocardiogram waves of a rabbit\u2019s heart ventricles, rendered by Dennis Ogiermann of Ruhr-University Bochum (Germany); (2) incompressible fluid flow around a rotating turbine, animated by Tamas Horvath of Oakland University (Michigan). Contest submissions are featured in the gallery . Will Pazner (LLNL) High-Order Matrix-Free Solvers October 20, 2021 | MFEM Workshop 2021 For users unfamiliar with MFEM\u2019s solver library, Will Pazner of LLNL demonstrated a few ways\u2014in some cases adding just a single line of code\u2014to run scalable solvers for differential equations. These solvers execute hierarchical finite element discretizations for both low- and high-order problems. Vladimir Tomov (LLNL) MFEM Capabilities for High-Order Mesh Optimization October 20, 2021 | MFEM Workshop 2021 Vladimir Tomov of LLNL described MFEM\u2019s mesh optimization strategies including ways the user can define target elements. He demonstrated optimizing a mesh\u2019s shape by limiting displacements to preserve a boundary layer and by changing the size of a uniform mesh in a specific region. MFEM\u2019s mesh-optimizing miniapps are available online . William Dawn (NCSU) Unstructured Finite Element Neutron Transport using MFEM October 20, 2021 | MFEM Workshop 2021 William Dawn from North Carolina State University described his work with unstructured neutron transport. His team models microreactors, a new class of compact reactor with relatively small electrical output. As part of the Exascale Computing Project, Dawn\u2019s team is modeling the MARVEL reactor, which is planned for construction at Idaho National Laboratory. MFEM satisfies their need for a finite element framework with GPU support and rapid prototyping. With MFEM, the team discretizes a neutron transport equation with six independent variables in space, direction, and energy. 
Traditional neutron transport methods use a \u201csweeping\u201d method to transport particles through a problem, but this is not feasible for generally unstructured meshes. In Dawn\u2019s models, the Self-Adjoint Angular Flux (SAAF) form of the neutron transport equation is used to transform the neutron transport equation from a first-order hyperbolic form to a second-order elliptic form. Then, the SAAF equations are discretized with the finite element method and solved using MFEM. Due to the dependence of the neutron flux on angle and direction, these problems have a high vector-dimension with hundreds to thousands of degrees of freedom (DOF) per mesh vertex. Also, due to the second-order nature of these equations, highly refined meshes are required to sufficiently resolve reactor geometries with millions of vertices in a mesh. Results have been prepared for problems with billions of DOF using the Summit supercomputer at Oak Ridge National Laboratory. Syun\u2019ichi Shiraiwa (PPPL) Development of PyMFEM Python Wrapper for MFEM & Scalable RF Wave Simulation for Nuclear Fusion October 20, 2021 | MFEM Workshop 2021 Syun\u2019ichi Shiraiwa of Pacific Northwest National Laboratory introduced PyMFEM, a Python wrapper for MFEM that his team uses in radiofrequency (RF) wave simulations for the RF-SciDAC project. RF waves can be used to heat plasma in a nuclear fusion reaction. Simulations of this process present multiple challenges when incorporating a variety of antenna structures at different frequencies, waves with different wave lengths in the same space or spatially diverse, and RF wave effects on background plasma. To integrate MFEM, a C++ software library, into their multiphysics platform, Shiraiwa\u2019s team created a code \u201cwrapper\u201d that binds MFEM to the external Python components of RF wave simulations, ultimately extending MFEM\u2019s features to Python users. Shiraiwa described how the PyMFEM module works in serial and parallel and invited the audience to contribute to the open-source code. Qi Tang (LANL) An Adaptive, Scalable Fully Implicit Resistive MHD Solver October 20, 2021 | MFEM Workshop 2021 Qi Tang of Los Alamos National Laboratory described his team\u2019s development of an efficient, scalable solver for tokamak plasma simulations. Magnetohydrodynamics (MHD) equations are important for studying plasma systems, but efficient numerical solutions for MHD are extremely challenging due to disparate time and length scales, strong hyperbolic phenomena, and nonlinearity. Tang\u2019s team has developed a high-order stabilized finite element algorithm for incompressible resistive MHD equations based on MFEM, which provides physics-based preconditioners, adaptive mesh refinement, parallelization, and load balancing. Tang showed animated examples of the model\u2019s scalable and efficient results. Jan Nikl (ELI Beamlines) Laser Plasma Modeling with High-Order Finite Elements October 20, 2021 | MFEM Workshop 2021 Jan Nikl outlined how his team at the ELI Beamlines Centre uses MFEM for laser plasma modeling. Lasers have found their application in many scientific disciplines, where generation of plasma, the fourth state of matter, holds great potential for the future. A detailed description of laser produced plasmas is then essential for many applications, like (pre)pulses of ultra-intense lasers and ion acceleration beamlines, laboratory astrophysics, inertial confinement fusion, and many others. 
All of the mentioned are investigated at ELI Beamlines in the Czech Republic, a European laser facility aiming to operate the most intense laser system in the world. In this context, Nikl described the numerical construction based on the finite element method. This effort concentrates mainly on the Lagrangian hydrodynamics and Vlasov\u2013Fokker\u2013Planck\u2013Maxwell kinetic description of plasma, utilizing the MFEM library for its flexibility and scalability. Mathias Davids (Harvard) Modeling Peripheral Nerve Stimulations (PNS) in Magnetic Resonance Imaging (MRI) October 20, 2021 | MFEM Workshop 2021 Mathias Davids from Harvard Medical School presented MFEM\u2019s use in a medical setting. Peripheral nerve stimulation (PNS) limits the usable image encoding performance in the latest generation of magnetic resonance imaging (MRI) scanners. The rapid switching of the MRI gradient coils\u2019 magnetic fields induces electric fields in the human body strong enough to evoke unwanted action potential in peripheral nerves, leading to muscle contractions or touch perceptions. Despite its limiting role in MRI, PNS effects are not directly included during the coil design phase. Davids\u2019 team developed a modeling tool to predict PNS thresholds and locations in the human body, allowing them to directly incorporate PNS metrics in the numeric coil winding optimization to design PNS-optimized coil layouts. This modeling tool relies on electromagnetic field simulations in heterogeneous finite element body models coupled to neurodynamic models of myelinated nerve fibers. This tool enables researchers to develop strategies that mitigate PNS effects without building expensive prototype MRI systems, maximizing the usable image encoding performance. Marc Bolinches (UT) Development of DG Compressible Navier-Stokes Solver with MFEM October 20, 2021 | MFEM Workshop 2021 Marc Bolinches from the University of Texas at Austin described a compressible Navier-Stokes solver using MFEM v4.2 which did not include full support for GPUs. The solver uses the discontinuous Galerkin (DG) method as a space discretization and an explicit Runge-Kutta time-integration scheme. An effort has been made to fully support GPU computation by taking over some of the loops internal to the NonLinearForm class. This has also allowed us to implement overlap between computation and communication. The team hopes their open-source code will help other researchers in creating high-fidelity simulations of compressible flows. Robert Rieben (LLNL) The Multiphysics on Advanced Platforms Project: Performance, Portability and Scaling October 20, 2021 | MFEM Workshop 2021 High-energy-density physics (HEDP) experiments performed at LLNL and other Department of Energy laboratories require multiphysics simulations to predict the behavior of complex physical systems for applications including inertial confinement fusion, pulsed power, and material strength/equations-of-state studies. Robert Rieben described the variety of mathematical algorithms needed for these simulations, including ALE methods, unstructured adaptive mesh refinement, and high-order discretizations. LLNL\u2019s Multiphysics on Advanced Platforms Project (MAPP) is developing a next-generation multiphysics code, called MARBL, based on high-order numerical methods and modular infrastructure for deployment on advanced HPC architectures. MARBL\u2019s use of high-order methods produces better throughput on GPUs. 
MARBL uses MFEM for finite elements and mesh/field/operator abstractions while leveraging its support for efficient memory management. Rieben explained that co-design efforts among the MARBL, MFEM, and RAJA (portability software) teams led to better device utilization and improved performance for the MARBL code. Felipe G\u00f3mez, Carlos del Valle, & Juli\u00e1n Jim\u00e9nez (National University of Colombia) Phase Change Heat and Mass Transfer Simulation with MFEM October 20, 2021 | MFEM Workshop 2021 Three undergraduate students\u2014Felipe G\u00f3mez, Carlos del Valle, and Juli\u00e1n Jim\u00e9nez\u2014from the National University of Colombia presented their work using MFEM in an oceanographic model. Below the Arctic sea ice, and under the right conditions, a flux of icy brine flows down into the sea. The icy brine has a much lower fusion point and a higher density than normal seawater. As a result, it sinks while freezing everything around it, forming an ice channel called a brinicle (also known as ice stalactite). The team shared their simulations of this phenomenon assuming cylindrical symmetry. The fluid is considered viscous and quasi-stationary, and the problem is simulated taking advantage of the setup symmetries. The heat and salt transport are weakly coupled to the fluid motion and are modeled with the corresponding conservation equations, taking into account diffusive and convective effects. The coupled system of partial differential equations is discretized and solved with the help of the MFEM finite element library. Thomas Helfer (CEA) MFEM-MGIS-MFront, a MFEM-Based Library for Nonlinear Solid Thermomechanic October 20, 2021 | MFEM Workshop 2021 Thomas Helfer from the French Atomic Energy Commission (CEA) introduced the MFEM-MGIS-MFront library (MMM), which aims for efficient use of supercomputers in the field of implicit nonlinear thermomechanics. His team\u2019s primary focus is to develop advanced nuclear fuel element simulations where the evolution of materials under irradiation are influenced by multiple phenomena (e.g., viscoplasticity, damage, phase transitions, swelling due to solid and gaseous fission products). MFEM provides this project with finite element abstractions, adaptive mesh refinement, and a parallel API. However, as applications dedicated to solid mechanics in MFEM are mostly limited to a few constitutive equations such as elasticity and hyperelasticity, Helfer explained that his team extended the software\u2019s functionality to cover a broader spectrum of mechanics. Thus, this MMM project combines MFEM with the MFrontGenericInterfaceSupport (MGIS), an open-source C++ library that provides data structures to support arbitrarily complex nonlinear constitutive equations generated by the MFront code generator. MMM is developed within the scope of CEA\u2019s PLEIADES project. Helfer\u2019s presentation provided (1) an introduction to MMM goals; (2) a tutorial of MMM usage with a focus on the high-level user interface; (3) an overview of the core design choices of MMM and how MFEM was extended to support a range of scenarios; and (4) feedback on the two main issues encountered during MMM development. Jamie Bramwell (LLNL) Serac: User-Friendly Abstractions for MFEM-Based Engineering Applications October 20, 2021 | MFEM Workshop 2021 Jamie Bramwell of LLNL presented an overview of the open-source Serac project , whose goal is to provide user-friendly abstractions and modules that enable rapid development of complex nonlinear multiphysics simulation codes. 
She provided an overview of both the high-level physics modules (thermal conduction, solid mechanics, incompressible flow, electromagnetics) as well as the serac::Functional framework for quickly developing nonlinear GPU-enabled finite element method kernels. Veselin Dobrev (LLNL) Recent Developments in MFEM October 20, 2021 | MFEM Workshop 2021 Veselin Dobrev of LLNL detailed the project\u2019s recent developments including memory manager improvements; serial support for p- and hp-refinement; high-order/low-order refined solution transfer; GLVis visualization via Jupyter Notebooks; and additional GPU support regarding HYPRE preconditioners, PETSc tools, and mesh optimization. MFEM now also integrates with various new libraries (AmgX, Gingko, FMS, and others), and continuous integration testing has been conducted on LLNL\u2019s Quartz, Lassen, and Corona machines. Additionally, Dobrev summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Exascale Computing Project, SciDAC, the FASTMath Institute, and other projects. Tzanio Kolev (LLNL) The State of MFEM October 20, 2021 | MFEM Workshop 2021 MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities of discretization algorithms, built-in solvers, parallel scalability, adaptive mesh refinement, and support for a range of computing architectures. Kolev also highlighted the global community\u2019s contributions as well as features included in the recent v4.3 software release. Aaron Fisher (LLNL) Welcome and Overview October 20, 2021 | MFEM Workshop 2021 The MFEM community workshop held virtually on October 20, 2021, brought together users and developers for a review of software features and the development roadmap, a showcase of technical talks and applications, collaborative breakout sessions, and a simulation contest. Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community survey results. Conferences in 2021 Tzanio Kolev (LLNL) Efficient Finite Element Discretizations for Exascale Applications February 25, 2021 | ExCALIBUR SLE 3 workshop ATPESC 2017, 2018 Tzanio Kolev (LLNL), Mark Shephard (RPI) and Cameron Smith (RPI) Unstructured Meshing Technologies August 6, 2018 | ATPESC 2018 Presented at the Argonne Training Program on Extreme-Scale Computing 2018. Slides for this presentation are available here . Tzanio Kolev (LLNL) and Mark Shephard (RPI) Unstructured Meshing Technologies August 7, 2017 | ATPESC 2017 Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here . Tzanio Kolev (LLNL) and Mark Shephard (RPI) Conforming & Nonconforming Adaptivity for Unstructured Meshes August 7, 2017 | ATPESC 2017 Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here . Other Videos MFEM: Advanced Simulation Algorithms for HPC Applications Jun 24, 2020 | YouTube Overview of MFEM 4.0 featuring some of its developers. Center for Applied Scientific Computing Jul 12, 2019 | YouTube Overview of the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory, including a highlight of MFEM. 
S&TR Preview: Exascale Computing October 6, 2016 | YouTube Some early MFEM results in the BLAST project.", "title": "MFEM Videos"}, {"location": "videos3/#mfem-videos", "text": "A collection of MFEM-related videos, including recorded talks from the MFEM workshops and conference presentations.", "title": "MFEM Videos"}, {"location": "videos3/#mfem-workshop-2021", "text": "", "title": "MFEM Workshop 2021"}, {"location": "videos3/#aaron-fisher-llnl", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos3/#wrap-up-and-simulation-contest-winners", "text": "", "title": "Wrap-Up and Simulation Contest Winners"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021", "text": "MFEM\u2019s first community workshop was held virtually on October 20, 2021, with participants around the world. Aaron Fisher of LLNL concluded the workshop by announcing the results of the simulation and visualization contest. The winners represent two very different research applications using MFEM: (1) the electric field generated by electrocardiogram waves of a rabbit\u2019s heart ventricles, rendered by Dennis Ogiermann of Ruhr-University Bochum (Germany); (2) incompressible fluid flow around a rotating turbine, animated by Tamas Horvath of Oakland University (Michigan). Contest submissions are featured in the gallery .", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#will-pazner-llnl", "text": "", "title": "Will Pazner (LLNL)"}, {"location": "videos3/#high-order-matrix-free-solvers", "text": "", "title": "High-Order Matrix-Free Solvers"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_1", "text": "For users unfamiliar with MFEM\u2019s solver library, Will Pazner of LLNL demonstrated a few ways\u2014in some cases adding just a single line of code\u2014to run scalable solvers for differential equations. These solvers execute hierarchical finite element discretizations for both low- and high-order problems.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#vladimir-tomov-llnl", "text": "", "title": "Vladimir Tomov (LLNL)"}, {"location": "videos3/#mfem-capabilities-for-high-order-mesh-optimization", "text": "", "title": "MFEM Capabilities for High-Order Mesh Optimization"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_2", "text": "Vladimir Tomov of LLNL described MFEM\u2019s mesh optimization strategies including ways the user can define target elements. He demonstrated optimizing a mesh\u2019s shape by limiting displacements to preserve a boundary layer and by changing the size of a uniform mesh in a specific region. MFEM\u2019s mesh-optimizing miniapps are available online .", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#william-dawn-ncsu", "text": "", "title": "William Dawn (NCSU)"}, {"location": "videos3/#unstructured-finite-element-neutron-transport-using-mfem", "text": "", "title": "Unstructured Finite Element Neutron Transport using MFEM"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_3", "text": "William Dawn from North Carolina State University described his work with unstructured neutron transport. His team models microreactors, a new class of compact reactor with relatively small electrical output. As part of the Exascale Computing Project, Dawn\u2019s team is modeling the MARVEL reactor, which is planned for construction at Idaho National Laboratory. MFEM satisfies their need for a finite element framework with GPU support and rapid prototyping. 
With MFEM, the team discretizes a neutron transport equation with six independent variables in space, direction, and energy. Traditional neutron transport methods use a \u201csweeping\u201d method to transport particles through a problem, but this is not feasible for generally unstructured meshes. In Dawn\u2019s models, the Self-Adjoint Angular Flux (SAAF) form of the neutron transport equation is used to transform the neutron transport equation from a first-order hyperbolic form to a second-order elliptic form. Then, the SAAF equations are discretized with the finite element method and solved using MFEM. Due to the dependence of the neutron flux on angle and direction, these problems have a high vector-dimension with hundreds to thousands of degrees of freedom (DOF) per mesh vertex. Also, due to the second-order nature of these equations, highly refined meshes are required to sufficiently resolve reactor geometries with millions of vertices in a mesh. Results have been prepared for problems with billions of DOF using the Summit supercomputer at Oak Ridge National Laboratory.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#syunichi-shiraiwa-pppl", "text": "", "title": "Syun\u2019ichi Shiraiwa (PPPL)"}, {"location": "videos3/#development-of-pymfem-python-wrapper-for-mfem-scalable-rf-wave-simulation-for-nuclear-fusion", "text": "", "title": "Development of PyMFEM Python Wrapper for MFEM & Scalable RF Wave Simulation for Nuclear Fusion"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_4", "text": "Syun\u2019ichi Shiraiwa of Pacific Northwest National Laboratory introduced PyMFEM, a Python wrapper for MFEM that his team uses in radiofrequency (RF) wave simulations for the RF-SciDAC project. RF waves can be used to heat plasma in a nuclear fusion reaction. Simulations of this process present multiple challenges when incorporating a variety of antenna structures at different frequencies, waves with different wave lengths in the same space or spatially diverse, and RF wave effects on background plasma. To integrate MFEM, a C++ software library, into their multiphysics platform, Shiraiwa\u2019s team created a code \u201cwrapper\u201d that binds MFEM to the external Python components of RF wave simulations, ultimately extending MFEM\u2019s features to Python users. Shiraiwa described how the PyMFEM module works in serial and parallel and invited the audience to contribute to the open-source code.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#qi-tang-lanl", "text": "", "title": "Qi Tang (LANL)"}, {"location": "videos3/#an-adaptive-scalable-fully-implicit-resistive-mhd-solver", "text": "", "title": "An Adaptive, Scalable Fully Implicit Resistive MHD Solver"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_5", "text": "Qi Tang of Los Alamos National Laboratory described his team\u2019s development of an efficient, scalable solver for tokamak plasma simulations. Magnetohydrodynamics (MHD) equations are important for studying plasma systems, but efficient numerical solutions for MHD are extremely challenging due to disparate time and length scales, strong hyperbolic phenomena, and nonlinearity. Tang\u2019s team has developed a high-order stabilized finite element algorithm for incompressible resistive MHD equations based on MFEM, which provides physics-based preconditioners, adaptive mesh refinement, parallelization, and load balancing. 
Tang showed animated examples of the model\u2019s scalable and efficient results.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#jan-nikl-eli-beamlines", "text": "", "title": "Jan Nikl (ELI Beamlines)"}, {"location": "videos3/#laser-plasma-modeling-with-high-order-finite-elements", "text": "", "title": "Laser Plasma Modeling with High-Order Finite Elements"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_6", "text": "Jan Nikl outlined how his team at the ELI Beamlines Centre uses MFEM for laser plasma modeling. Lasers have found their application in many scientific disciplines, where generation of plasma, the fourth state of matter, holds great potential for the future. A detailed description of laser produced plasmas is then essential for many applications, like (pre)pulses of ultra-intense lasers and ion acceleration beamlines, laboratory astrophysics, inertial confinement fusion, and many others. All of the mentioned are investigated at ELI Beamlines in the Czech Republic, a European laser facility aiming to operate the most intense laser system in the world. In this context, Nikl described the numerical construction based on the finite element method. This effort concentrates mainly on the Lagrangian hydrodynamics and Vlasov\u2013Fokker\u2013Planck\u2013Maxwell kinetic description of plasma, utilizing the MFEM library for its flexibility and scalability.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#mathias-davids-harvard", "text": "", "title": "Mathias Davids (Harvard)"}, {"location": "videos3/#modeling-peripheral-nerve-stimulations-pns-in-magnetic-resonance-imaging-mri", "text": "", "title": "Modeling Peripheral Nerve Stimulations (PNS) in Magnetic Resonance Imaging (MRI)"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_7", "text": "Mathias Davids from Harvard Medical School presented MFEM\u2019s use in a medical setting. Peripheral nerve stimulation (PNS) limits the usable image encoding performance in the latest generation of magnetic resonance imaging (MRI) scanners. The rapid switching of the MRI gradient coils\u2019 magnetic fields induces electric fields in the human body strong enough to evoke unwanted action potential in peripheral nerves, leading to muscle contractions or touch perceptions. Despite its limiting role in MRI, PNS effects are not directly included during the coil design phase. Davids\u2019 team developed a modeling tool to predict PNS thresholds and locations in the human body, allowing them to directly incorporate PNS metrics in the numeric coil winding optimization to design PNS-optimized coil layouts. This modeling tool relies on electromagnetic field simulations in heterogeneous finite element body models coupled to neurodynamic models of myelinated nerve fibers. This tool enables researchers to develop strategies that mitigate PNS effects without building expensive prototype MRI systems, maximizing the usable image encoding performance.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#marc-bolinches-ut", "text": "", "title": "Marc Bolinches (UT)"}, {"location": "videos3/#development-of-dg-compressible-navier-stokes-solver-with-mfem", "text": "", "title": "Development of DG Compressible Navier-Stokes Solver with MFEM"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_8", "text": "Marc Bolinches from the University of Texas at Austin described a compressible Navier-Stokes solver using MFEM v4,2 which did not include full support for GPUs. 
The solver uses the discontinuous Galerkin (DG) method as a space discretization and an explicit Runge-Kutta time-integration scheme. An effort has been made to fully support GPU computation by taking over some of the loops internal to the NonLinearForm class. This has also allowed us to implement overlap between computation and communication. The team hopes their open-source code will help other researchers in creating high-fidelity simulations of compressible flows.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#robert-rieben-llnl", "text": "", "title": "Robert Rieben (LLNL)"}, {"location": "videos3/#the-multiphysics-on-advanced-platforms-project-performance-portability-and-scaling", "text": "", "title": "The Multiphysics on Advanced Platforms Project: Performance, Portability and Scaling"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_9", "text": "High-energy-density physics (HEDP) experiments performed at LLNL and other Department of Energy laboratories require multiphysics simulations to predict the behavior of complex physical systems for applications including inertial confinement fusion, pulsed power, and material strength/equations-of-state studies. Robert Rieben described the variety of mathematical algorithms needed for these simulations, including ALE methods, unstructured adaptive mesh refinement, and high-order discretizations. LLNL\u2019s Multiphysics on Advanced Platforms Project (MAPP) is developing a next-generation multiphysics code, called MARBL, based on high-order numerical methods and modular infrastructure for deployment on advanced HPC architectures. MARBL\u2019s use of high-order methods produce better throughput on GPUs. MARBL uses MFEM for finite elements and mesh/field/operator abstractions while leveraging its support for efficient memory management. Rieben explained that co-design efforts among the MARBL, MFEM, and RAJA (portability software) teams led to better device utilization and improved performance for the MARBL code.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#felipe-gomez-carlos-del-valle-julian-jimenez-national-university-of-colombia", "text": "", "title": "Felipe G\u00f3mez, Carlos del Valle, & Juli\u00e1n Jim\u00e9nez (National University of Colombia)"}, {"location": "videos3/#phase-change-heat-and-mass-transfer-simulation-with-mfem", "text": "", "title": "Phase Change Heat and Mass Transfer Simulation with MFEM"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_10", "text": "Three undergraduate students\u2014Felipe G\u00f3mez, Carlos del Valle, and Juli\u00e1n Jim\u00e9nez\u2014from the National University of Colombia presented their work using MFEM in an oceanographic model. Below the Arctic sea ice, and under the right conditions, a flux of icy brine flows down into the sea. The icy brine has a much lower fusion point and a higher density than normal seawater. As a result, it sinks while freezing everything around it, forming an ice channel called a brinicle (also known as ice stalactite). The team shared their simulations of this phenomenon assuming cylindrical symmetry. The fluid is considered viscous and quasi-stationary, and the problem is simulated taking advantage of the setup symmetries. The heat and salt transport are weakly coupled to the fluid motion and are modeled with the corresponding conservation equations, taking into account diffusive and convective effects. 
The coupled system of partial differential equations is discretized and solved with the help of the MFEM finite element library.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#thomas-helfer-cea", "text": "", "title": "Thomas Helfer (CEA)"}, {"location": "videos3/#mfem-mgis-mfront-a-mfem-based-library-for-nonlinear-solid-thermomechanic", "text": "", "title": "MFEM-MGIS-MFront, a MFEM-Based Library for Nonlinear Solid Thermomechanic"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_11", "text": "Thomas Helfer from the French Atomic Energy Commission (CEA) introduced the MFEM-MGIS-MFront library (MMM), which aims for efficient use of supercomputers in the field of implicit nonlinear thermomechanics. His team\u2019s primary focus is to develop advanced nuclear fuel element simulations where the evolution of materials under irradiation are influenced by multiple phenomena (e.g., viscoplasticity, damage, phase transitions, swelling due to solid and gaseous fission products). MFEM provides this project with finite element abstractions, adaptive mesh refinement, and a parallel API. However, as applications dedicated to solid mechanics in MFEM are mostly limited to a few constitutive equations such as elasticity and hyperelasticity, Helfer explained that his team extended the software\u2019s functionality to cover a broader spectrum of mechanics. Thus, this MMM project combines MFEM with the MFrontGenericInterfaceSupport (MGIS), an open-source C++ library that provides data structures to support arbitrarily complex nonlinear constitutive equations generated by the MFront code generator. MMM is developed within the scope of CEA\u2019s PLEIADES project. Helfer\u2019s presentation provided (1) an introduction to MMM goals; (2) a tutorial of MMM usage with a focus on the high-level user interface; (3) an overview of the core design choices of MMM and how MFEM was extended to support a range of scenarios; and (4) feedback on the two main issues encountered during MMM development.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#jamie-bramwell-llnl", "text": "", "title": "Jamie Bramwell (LLNL)"}, {"location": "videos3/#serac-user-friendly-abstractions-for-mfem-based-engineering-applications", "text": "", "title": "Serac: User-Friendly Abstractions for MFEM-Based Engineering Applications"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_12", "text": "Jamie Bramwell of LLNL presented an overview of the open-source Serac project , whose goal is to provide user-friendly abstractions and modules that enable rapid development of complex nonlinear multiphysics simulation codes. 
She provided an overview of both the high-level physics modules (thermal conduction, solid mechanics, incompressible flow, electromagnetics) as well as the serac::Functional framework for quickly developing nonlinear GPU-enabled finite element method kernels.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#veselin-dobrev-llnl", "text": "", "title": "Veselin Dobrev (LLNL)"}, {"location": "videos3/#recent-developments-in-mfem", "text": "", "title": "Recent Developments in MFEM"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_13", "text": "Veselin Dobrev of LLNL detailed the project\u2019s recent developments including memory manager improvements; serial support for p- and hp-refinement; high-order/low-order refined solution transfer; GLVis visualization via Jupyter Notebooks; and additional GPU support regarding HYPRE preconditioners, PETSc tools, and mesh optimization. MFEM now also integrates with various new libraries (AmgX, Gingko, FMS, and others), and continuous integration testing has been conducted on LLNL\u2019s Quartz, Lassen, and Corona machines. Additionally, Dobrev summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Exascale Computing Project, SciDAC, the FASTMath Institute, and other projects.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#tzanio-kolev-llnl", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos3/#the-state-of-mfem", "text": "", "title": "The State of MFEM"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_14", "text": "MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities of discretization algorithms, built-in solvers, parallel scalability, adaptive mesh refinement, and support for a range of computing architectures. Kolev also highlighted the global community\u2019s contributions as well as features included in the recent v4.3 software release.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#aaron-fisher-llnl_1", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos3/#welcome-and-overview", "text": "", "title": "Welcome and Overview"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_15", "text": "The MFEM community workshop held virtually on October 20, 2021, brought together users and developers for a review of software features and the development roadmap, a showcase of technical talks and applications, collaborative breakout sessions, and a simulation contest. 
Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community survey results.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#conferences-in-2021", "text": "", "title": "Conferences in 2021"}, {"location": "videos3/#tzanio-kolev-llnl_1", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos3/#efficient-finite-element-discretizations-for-exascale-applications", "text": "", "title": "Efficient Finite Element Discretizations for Exascale Applications"}, {"location": "videos3/#february-25-2021-excalibur-sle-3-workshop", "text": "", "title": "February 25, 2021 | ExCALIBUR SLE 3 workshop"}, {"location": "videos3/#atpesc-2017-2018", "text": "", "title": "ATPESC 2017, 2018"}, {"location": "videos3/#tzanio-kolev-llnl-mark-shephard-rpi-and-cameron-smith-rpi", "text": "", "title": "Tzanio Kolev (LLNL), Mark Shephard (RPI) and Cameron Smith (RPI)"}, {"location": "videos3/#unstructured-meshing-technologies", "text": "", "title": "Unstructured Meshing Technologies"}, {"location": "videos3/#august-6-2018-atpesc-2018", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2018. Slides for this presentation are available here .", "title": "August 6, 2018 | ATPESC 2018"}, {"location": "videos3/#tzanio-kolev-llnl-and-mark-shephard-rpi", "text": "", "title": "Tzanio Kolev (LLNL) and Mark Shephard (RPI)"}, {"location": "videos3/#unstructured-meshing-technologies_1", "text": "", "title": "Unstructured Meshing Technologies"}, {"location": "videos3/#august-7-2017-atpesc-2017", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here .", "title": "August 7, 2017 | ATPESC 2017"}, {"location": "videos3/#tzanio-kolev-llnl-and-mark-shephard-rpi_1", "text": "", "title": "Tzanio Kolev (LLNL) and Mark Shephard (RPI)"}, {"location": "videos3/#conforming-nonconforming-adaptivity-for-unstructured-meshes", "text": "", "title": "Conforming & Nonconforming Adaptivity for Unstructured Meshes"}, {"location": "videos3/#august-7-2017-atpesc-2017_1", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here .", "title": "August 7, 2017 | ATPESC 2017"}, {"location": "videos3/#other-videos", "text": "", "title": "Other Videos"}, {"location": "videos3/#mfem-advanced-simulation-algorithms-for-hpc-applications", "text": "", "title": "MFEM: Advanced Simulation Algorithms for HPC Applications"}, {"location": "videos3/#jun-24-2020-youtube", "text": "Overview of MFEM 4.0 featuring some of its developers.", "title": "Jun 24, 2020 | YouTube"}, {"location": "videos3/#center-for-applied-scientific-computing", "text": "", "title": "Center for Applied Scientific Computing"}, {"location": "videos3/#jul-12-2019-youtube", "text": "Overview of the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory, including a highlight of MFEM.", "title": "Jul 12, 2019 | YouTube"}, {"location": "videos3/#str-preview-exascale-computing", "text": "", "title": "S&TR Preview: Exascale Computing"}, {"location": "videos3/#october-6-2016-youtube", "text": "Some early MFEM results in the BLAST project.", "title": "October 6, 2016 | YouTube"}, {"location": "workshop/", "text": "MFEM Community Workshop October 22-24, 2024 LLNL + Virtual Speakers' slides are linked in the agenda below. 
Watch the video playlist of workshop presentations (linked individually below and available on the videos page ), and view contest submissions in the gallery . Overview The MFEM team is happy to invite you to the 2024 MFEM Community Workshop, which will take place on October 22-24, 2024 in a hybrid format: in-person at Lawrence Livermore National Laboratory (LLNL) + virtually on Zoom. The goal of the workshop is to foster collaboration among all MFEM users and developers, share the latest MFEM features with the broader community, deepen application engagements, and solicit feedback to guide future development directions for the project. We encourage you to join us in person if you can! For questions, please contact the meeting organizers at mfem@llnl.gov . Registration Registration closed on October 15th . Venue The meeting will take place at the University of California Livermore Collaboration Center (UCLCC) which is just outside of LLNL's East Gate. Lodging Options There are many hotels in Livermore, and others are available in Pleasanton and nearby cities. See LLNL's recommended list of area hotels or this Google Maps search . If you stay outside of Livermore, we recommend staying west of the city to have a reverse commute to the Lab. Meeting Format This will be the first hybrid edition of the MFEM community workshop that will include the following elements: Project news and development updates from the MFEM team An overview of the latest features in MFEM-4.7 and future roadmap Contributed talks from application developers utilizing MFEM Student lightning talks and visualization contest Office hours on the last day See also the agenda for the previous 2023 , 2022 and 2021 MFEM workshops. Workshop participants are encouraged to join the MFEM Community Slack workspace to communicate with other MFEM users and developers before, during and after the MFEM workshop. Agenda Tuesday, October 22 Time Activity Presenter 8:00-8:30 Breakfast + Registration on site at UCLCC 8:30-9:00 Welcome & Overview ( PDF , video ) Aaron Fisher (LLNL) 9:00-9:30 The State of MFEM ( PDF , video ) Tzanio Kolev (LLNL) 9:30-10:00 Recent Developments ( video ) Veselin Dobrev (LLNL) 10:00-10:30 Coffee Break discussions on Slack 10:30-12:00 Presentations (30 mins each) Chair: Will Pazner M\u00e1t\u00e9 Kov\u00e1cs (Braid Technologies) Rust Wrapper for MFEM ( PDF ) Rust is quickly emerging as a modern alternative to C++ for systems and performance-critical programming. With a user-centered design, \"batteries included\" philosophy around tooling, and principled approach to correctness, Rust holds a lot of potential to make complex libraries easier to use. Building a Rust wrapper for MFEM would achieve most of the benefits of a rewrite at a fraction of the effort. By showcasing this prototype, I hope to convince you that creating and maintaining a Rust wrapper for MFEM is a worthy goal. I will further argue that the small modifications to the C++ API that may be necessary to reach optimal integration with Rust would also improve the usability for C++. Adrian Butscher (Autodesk Research) Geometrically Constrained Level Set Topology Optimization Using a Novel Hilbert Space Extension Method ( PDF ) We propose an approach for level-set based topology optimization which pairs conventional free-form shape updates with highly constrained shape updates along a user-specified part of the shape boundary. 
It is intended for the optimal design of shapes where certain parts of the shape boundary are required to preserve their geometry, up to well-defined parametric variations such as translations, rotations, and scalings. For instance, our approach could be used to optimize a shape that must include a circular aperture of optimal radius to accommodate a pin joint to another shape. Our approach allows us to optimize both the free-form geometry of the shape, as well as the position, orientation, and scale of the circular aperture. To generate the shape updates, we construct a velocity field over the entire design space and transport the level-set function defining the shape along the field at each iteration. We construct this velocity field using a novel constrained Hilbert space extension (C-HSE) method that expands upon existing Hilbert space extension methods by incorporating the affine motion constraints into the variational problem. As a result, the C-HSE method generates a velocity field for the entire design domain that constitutes a descent direction for a user-specified optimization objective function, while ensuring that all constraints are met. The C-HSE allows multiple distinct regions to have different constraints, with many possible constraint types such as translation, rotation and scaling (or all three simultaneously). We show results on a variety of geometrically constrained boundary conditions on some canonical problems. Ketan Mittal (LLNL) Interpolation at Arbitrary Points in High-Order Meshes on GPUs ( PDF , video ) Robust and scalable arbitrary point interpolation is required in the finite element method and spectral element method for querying the partial differential equation solution at points of interest in the domain, comparison of solutions between different meshes, and Lagrangian particle tracking. This is a challenging problem, particularly for high-order unstructured meshes partitioned in parallel with MPI, as it requires identifying the element that overlaps a given point and computing the reference space coordinates inside the element corresponding to the point. We present a robust and efficient way to address this problem for large-scale high-order meshes. First, a combination of globally partitioned and processor-local maps is used to determine a list of candidate MPI ranks and element pairs that could contain the point. Next, element-wise bounding boxes are used to further narrow down the list of candidate elements. Finally, Newton's method with a trust-region-based approach is used to invert the affine map for the candidate elements and determine the reference space coordinates corresponding to the point. Since GPU-based architectures have been demonstrated to accelerate computational analyses using meshes with tensor-product elements, specialized kernels have been developed to perform the arbitrary point search and interpolation on GPUs. We demonstrate the effectiveness of this approach using various high-order meshes. 12:00-1:00 Lunch on site at UCLCC 1:00-2:00 Student Session 1 (10 mins each) Chair: Ketan Mittal Nanna Berre (Norwegian University of Science and Technology) High-Order CutFEM Solvers in MFEM Creating conforming meshes for complex, realistic problems can be challenging and consume a significant portion of the total simulation time. 
The cut finite element method (CutFEM) allows the geometry to be represented independently of the computational domain, thus circumventing the mesh generation while maintaining the accuracy and robustness of the standard finite element method. In this talk, we present recent implementations of CutFEM solvers in MFEM, along with numerical convergence studies. Julian L\u00fcken (University of Antwerp) Simulating Atom Probe Tomography Using MFEM ( PDF ) In atom probe tomography (APT), spatial reconstruction enables volumetric insight into a specimen's nanostructure. To this day, a fast reconstruction method which utilizes the true potential of APT in terms of resolution does not exist. A model of its effective inverse, the field evaporation, which provides a physically accurate description of the ion trajectories, is a crucial component in reconstruction. The simulation of each individual evaporation, however, has been time inefficient. We introduce AdAPTS, an adaptive atom probe tomography simulation library based on MFEM. AdAPTS is capable of generating accurate detector hit maps of various specimens, efficiently representing and simulating the experimental domain from specimen to detector. Using AdAPTS, we are able to accurately simulate the field evaporation of various specimens, revealing realistic poles and zone lines. Aditya Parik (Utah State University) Arbitrary Point Search and Interpolation on Surface Meshes ( PDF ) Scalable high-order interpolation at arbitrary locations on finite element meshes is essential in applications such as Lagrangian particle tracking coupled to Eulerian fields, coupled overlapping grids, and grid-to-grid interpolation. This is currently achieved in MFEM for volume meshes using FindPointsGSLIB, which is based on the high-order interpolation library findpts. Therein, global and local hash maps are constructed to rapidly narrow down the search space to determine first the correct rank, and then the candidate elements on that rank that may contain a given point in physical space. Next, element-wise bounding boxes help further narrow down the list of candidate elements. Finally, a Newton's method based approach is used to determine whether the point overlaps with the element and, if so, the corresponding reference coordinates. Through this work, we extend FindPointsGSLIB to surface meshes, where we encounter interesting implementation challenges in the construction of the global and local maps, bounding boxes, and the convergence criterion for the Newton search. The effectiveness of this approach is tested by searching for a large number of points on various 2D and 3D meshes and then obtaining the accuracy of interpolation of a test field at the found coordinates. We also test the GPU scaling characteristics of this approach with respect to the number of points for both search and interpolation operations. Gabriel Pinochet-Soto (Portland State University) Exploring Generalized Jacobi Preconditioners and Smoothers in MFEM ( PDF ) This talk will present a new type of smoother called the L(p,q)-Jacobi family of smoothers, which is a generalization of the L(1)-Jacobi smoother. We will discuss how these smoothers are implemented in MFEM and compare the performance of the solvers. Additionally, we will delve into a specific case of the L(1)-Jacobi preconditioner for partially assembled operators and explain their implementation and effectiveness. 
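Both the Mittal and Parik abstracts above build on MFEM's FindPointsGSLIB class, a wrapper around the gslib findpts library. For orientation only, a minimal serial sketch of that workflow is given below; it assumes an MFEM build with GSLIB enabled (MFEM_USE_GSLIB), the mesh file name and interpolated field are placeholders, and the method signatures should be double-checked against the gslib miniapps in the MFEM repository.

```cpp
// Minimal serial sketch of arbitrary-point search + interpolation with
// mfem::FindPointsGSLIB (requires an MFEM build with GSLIB support).
// The mesh file and the interpolated field below are placeholders.
#include "mfem.hpp"
using namespace mfem;

double field_fn(const Vector &x) { return x(0) + 2.0 * x(1); } // sample field

int main()
{
   Mesh mesh("star.mesh");                       // any (possibly high-order) 2D mesh file
   H1_FECollection fec(3, mesh.Dimension());
   FiniteElementSpace fes(&mesh, &fec);
   GridFunction field(&fes);
   FunctionCoefficient coeff(field_fn);
   field.ProjectCoefficient(coeff);              // field to query

   // Two physical-space query points, stored byNODES: [x1 x2 y1 y2].
   Vector points(4);
   points(0) = 0.1; points(1) = -0.3;            // x-coordinates
   points(2) = 0.2; points(3) =  0.4;            // y-coordinates

   FindPointsGSLIB finder;
   finder.Setup(mesh);                           // hash maps + element bounding boxes
   finder.FindPoints(points, Ordering::byNODES); // owning element + reference coordinates
   Vector values;
   finder.Interpolate(field, values);            // evaluate the field at the found points
   values.Print();
   finder.FreeData();
   return 0;
}
```

In the parallel setting the finder is typically constructed with an MPI communicator and set up on a ParMesh; the same Setup, FindPoints, and Interpolate pattern applies.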
2:00-3:00 Student Session 2 (10 mins each) Chair: Ketan Mittal Matthew Blomquist (University of California Merced) Semi-Lagrangian Characteristic Reconstruction and Projection for Transport under Incompressible Velocity Fields ( PDF ) We present a novel semi-Lagrangian characteristic reconstruction method that leverages a volume preserving projection to advect quantities under incompressible velocity fields. A key advantage of this framework is to see the traditional semi-Lagrangian scheme as the construction of a diffeomorphism between the deformed and original geometry (reference map). This representation allows us to use the local deformation of the geometry to design a projection for the reference map onto the space of volume preserving diffeomorphisms. In the context of the advection of an implicit surface representation (level set method), this results in significant improvements to the interface precision and mass conservation. In this short talk, I will demonstrate our new method with a variety of canonical two-dimensional examples and compare this new approach to traditional schemes. Paul Moujaes (Technical University Dortmund) Clip and Scale Limiting for Remapping H1 Velocity Fields in Lagrangian Hydrodynamics Simulations ( PDF ) The mesh quality in Lagrangian hydrodynamics simulations can worsen drastically over time. Therefore, pausing the simulation and remapping the quantities is needed at some point. The remapping process can be written as a linear advection equation. In this talk, we present the application of the Clip and Scale limiter for remapping the velocity field which is discretized with continuous finite elements. Arjun Vijaywargiya (University of Notre Dame) High Order Computation of MFC Barycenters with MFEM ( PDF ) We develop a class of barycenter problems based on mean field control problems in three dimensions with associated reactive-diffusion systems of unnormalized multi-species densities. The primary objective is to present a comprehensive framework for efficiently computing the proposed variational problem: generalized Benamou-Brenier formulas with multiple input density vectors as boundary conditions. Our approach involves the utilization of high-order finite element discretizations of the spacetime domain to achieve improved accuracy. The discrete optimization problem is then solved using the primal-dual hybrid gradient (PDHG) algorithm, a first-order optimization method for effectively addressing a wide range of constrained optimization problems. The efficacy and robustness of our proposed framework are illustrated through several numerical examples in three dimensions, such as the computation of the barycenter of multi-density systems consisting of Gaussian distributions and reactive-diffusive multi-density systems involving 3D voxel densities. Additional examples highlighting computations on 2D embedded surfaces are also provided. Yi Zong (Tsinghua University) FP16 Acceleration in Structured Multigrid Preconditioner for Real-World Problems ( PDF ) Half-precision hardware support is now almost ubiquitous. In contrast to its active use in AI, half-precision is less commonly employed in scientific and engineering computing. The valuable proposition of accelerating scientific computing applications using half-precision prompted this study. Focusing on solving sparse linear systems in scientific computing, we explore the technique of utilizing FP16 in multigrid preconditioners. 
Based on observations of sparse matrix formats, numerical features of scientific applications, and the performance characteristics of multigrid, this study formulates four guidelines for FP16 utilization in multigrid. The proposed algorithm demonstrates how to avoid FP16 overflow through scaling. A setup-then-scale strategy prevents FP16\u2019s limited accuracy and narrow range from interfering with the multigrid\u2019s numerical properties. Another strategy, recover-and-rescale on the fly, reduces the memory footprint of hotspot kernels. The extra precision-conversion overhead in mix-precision kernels is addressed by the transformation of storage formats and SIMD implementation. Two ablation experiments validate the effectiveness of our algorithm and parallel kernel implementation on ARM and X86 architectures. We further evaluate three idealized and five real-world problems to demonstrate the advantage of utilizing FP16 in a multigrid preconditioner. The average speedups are approximately 2.75x and 1.95x in preconditioner and end-to-end workflow, respectively. 3:00-3:30 Coffee Break & Group Photo download a virtual background below 3:30-5:00 Presentations (30 mins each) Chair: Tzanio Kolev Yu Leng (Los Alamos National Laboratory) Arbitrary Order Virtual Element Methods for High-Order Phase-Field Modeling of Dynamic Fracture ( PDF ) Accurate modeling of fracture nucleation and propagation in brittle and ductile materials subjected to dynamic loading is important in predicting material damage and failure under extreme conditions. Phase-field fracture models have garnered a lot of attention in recent years due to their success in representing damage and fracture processes in a wide class of materials and under a variety of loading conditions. Second-order phase-field fracture models are by far the most popular among researchers (and increasingly, among practitioners), but fourth-order models have started to gain broader acceptance since their more recent introduction. The exact solution corresponding to these high-order phase-field fracture models has higher regularity. Thus, numerical solutions of the model equations can achieve improved accuracy and higher spatial convergence rates. In this work, we develop a virtual element framework for the high-order phase-field model of dynamic fracture. The virtual element method (VEM) can be regarded as a generalization of the classical finite element method. In addition to many other desirable characteristics, the VEM allows computing on polytopal meshes. Here, we use H1-conforming virtual elements and the generalized-\u03b1 time integration method for the momentum balance equation, and adopt H2-conforming virtual elements for the high-order phase-field equation. We verify our virtual element framework using classical quasi-static benchmark problems and demonstrate its capabilities with the aid of numerical simulations of dynamic fracture in brittle materials. Michael Tupek (LLNL) Automatic Parameter Sensitivities in Serac for Engineering Applications ( PDF , video ) We present a framework for automatically calculating sensitivities for both topology and shape design optimization workflows. Building on MFEM infrastructure, we provide abstractions for quickly specifying, solving, coupling, and differentiating new PDEs for engineering applications. 
Recent developments in Serac include: highly robust nonlinear solvers, integration of the Tribol library for contact enforcement, coupled thermal-mechanics, differentiable material model library, and checkpointing for transient adjoint calculations. Jan Nikl (LLNL) Hybridization of Convection-Diffusion Systems in MFEM ( PDF , video ) Convection-diffusion systems are likely the most common class of partial differential equations appearing in practically all different applications. However, their mixed formulation typically suffers from prohibitively high computational costs and difficult preconditioning, especially close to the steady state where the system becomes a saddle point problem. The hybridization technique offers an appealing answer to these issues. The new framework for mixed systems enables single-line hybridization, reducing the problem to face traces of the total flux only. Solution of such system is then inexpensive, and preconditioning becomes nearly trivial. Non-linear convection is also supported with the action-based regime of operation. Description of the mechanism as well as code examples to show ease of usage are presented. 5:00 Day 1 Wrap-up MFEM team 5:30-8:00 Workshop Dinner First Street Alehouse Wednesday, October 23 Time Activity Presenter 8:00-8:30 Breakfast on site at UCLCC 8:30-9:00 Visualization Contest Winners Will Pazner (Portland State University) 9:00-10:00 Presentations (30 mins each) Chair: Sohail Reddy Gourab Panigrahi (Indian Institute of Science) Hardware Aware Matrix-Free Approach for Accelerating FE Discretized Eigenvalue Problems: Application to Large-Scale Kohn-Sham Density Functional Theory ( PDF ) The finite-element (FE) discretization of a partial differential equation usually involves construction of a FE discretized operator, and computing its action on trial FE discretized fields for the solution of a linear system of equations or eigenvalue problems using iterative solvers. This is traditionally computed using global sparse-vector multiplication algorithms. However, recent hardware-aware algorithms for evaluating such higher-order FE discretized matrix-vector multiplications suggest that on-the-fly matrix-vector products without building and storing the cell-level dense matrices (cell-matrix approach) reduce both arithmetic complexity and memory footprint and are referred to as matrix-free approaches. These approaches exploit the tensor-structured nature of the FE polynomial basis for evaluating the underlying integrals, and the current state-of-the-art matrix-free implementations deal with the action of FE discretized matrix on a single vector. These are neither optimal nor readily applicable for matrix multi-vector products involving large number of vectors (>1000). We discuss a computationally efficient and scalable matrix-free algorithm and implementation strategies to compute the FE discretized matrix multi-vector products on multi-node GPU architectures. We use batched evaluation strategies, with the batchsize tailored to underlying hardware architectures, leading to better data locality and allowing for parallelization over multiple batches. We devise an algorithm to overlap compute and data movement in conjunction with GPU shared memory, constant memory, and kernel fusion to reduce data accesses to and from device memory and registers to reduce bank conflicts. Further, we propose a strategy where the memory of both the registers and shared memory is utilized to mitigate the memory constraints. 
We benchmark the performance of our implementation using a representative FE discretized matrix acting on multivectors of various sizes on multi-node GPU architectures and compare the performance against cell-matrix approach and matrix-free approaches implemented in MFEM and deal.ii. Further, usefulness of the proposed approach is demonstrated in accelerating large-scale eigenvalue problems arising in FE discretized Density Functional Theory calculations, a quantum mechanical theory used for first principle material modeling. Julian Andrej (LLNL) Differentiating Large-Scale Finite Element Applications with MFEM This presentation will go over the details of dFEM by explaining how MFEM leverages the Finite Element Operator Decomposition to introduce an automatic differentiation interface. We discuss advantages of this approach over traditional AD techniques and our integration with Enzyme. The talk is concluded with examples and a live demo. 10:00-10:30 Coffee Break discussions on Slack 10:30-12:00 Presentations (30 mins each) Chair: Tzanio Kolev Vladimir Tomov (LLNL) Recent Work in the MFEM Miniapps for Shock Hydro, Field Remap, and Mesh Optimization ( PDF , video ) This presentation discusses recent advancements, research, and exploratory work in the MFEM miniapps for shock hydrodynamics (Laghos), field remap (Remhos), and mesh optimization. For shock hydro, we present the implementation of slip wall boundary conditions for curved domains, along with research involving material interfaces using the shifted interface method or cut-element integration through Algoim and moments-based integration. In the field remap miniapp, we cover developments in stabilized remap for continuous fields, interface sharpening techniques, and matrix-free methods for GPU execution. Lastly, we explore recent progress in mesh optimization, including surface fitting and its GPU implementation, tangential relaxation, automatic differentiation (AD) for complex objective functionals, enhanced metric theory and quality metrics, and hpr-adaptivity for the mesh representation. While some of these advancements are public, general methods that can be applied across various practical miniapps, others are exploratory, demonstrating how the miniapps can serve as a starting point for research in specific areas. Hui-Chia Yu (Michigan State University) Battery Electrode Simulation Toolkit using MFEM (BESFEM) ( PDF ) Conventional sharp-interface simulations require mesh systems conformal to the domain of interest for solving governing equations. Our research team employs an alternative approach, the smoothed boundary method (SBM), that utilizes a continuous domain function to describe geometries and reformulate governing equations. This formulation enables solving governing equations on a regular Cartesian grid, eliminating the need for body-conforming meshes. We have been developing an Open-Source Battery Electrode Simulation Toolkit using MFEM (BESFEM). This toolkit integrates the SBM approach on the MFEM solver library (a product of the DOE's Exascale Computing Project). To enhance accuracy and computational efficiency, our team leverage MFEM's built-in adaptive mesh refinement (AMR) functionality, where elements near SBM diffuse interfaces are multilevel refined. BESFEM will be made fully available as a research and education tool for the battery science and materials science communities. 
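The BESFEM summary above mentions multilevel refinement of elements near the diffuse interface of the smoothed boundary method. Purely as an illustration of that general idea (this is not the BESFEM code), the sketch below marks and locally refines elements whose smoothed domain function lies between 0 and 1; the domain function, thresholds, and number of refinement passes are invented placeholders.

```cpp
// Illustration only: local non-conforming refinement of elements that lie in
// the diffuse-interface band of a smoothed domain function. The function,
// thresholds, and number of passes are hypothetical.
#include "mfem.hpp"
#include <cmath>
#include <fstream>
using namespace mfem;

// Smoothed domain function: ~1 inside a disk of radius 0.25 centered at (0.5, 0.5).
double domain_fn(const Vector &x)
{
   const double r = std::hypot(x(0) - 0.5, x(1) - 0.5);
   return 0.5 * (1.0 + std::tanh((0.25 - r) / 0.05));
}

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(16, 16, Element::QUADRILATERAL);
   mesh.EnsureNCMesh();                  // allow hanging-node (non-conforming) refinement
   const int dim = mesh.Dimension();

   for (int pass = 0; pass < 3; pass++)  // a few levels of local refinement
   {
      Array<int> marked;
      for (int e = 0; e < mesh.GetNE(); e++)
      {
         // Element center as the average of its vertex coordinates.
         Array<int> verts;
         mesh.GetElementVertices(e, verts);
         Vector center(dim); center = 0.0;
         for (int v = 0; v < verts.Size(); v++)
         {
            const double *c = mesh.GetVertex(verts[v]);
            for (int d = 0; d < dim; d++) { center(d) += c[d] / verts.Size(); }
         }
         // Mark elements inside the diffuse-interface band.
         const double val = domain_fn(center);
         if (val > 0.05 && val < 0.95) { marked.Append(e); }
      }
      mesh.GeneralRefinement(marked);    // refine only the marked elements
   }

   std::ofstream out("refined.mesh");
   mesh.Print(out);                      // inspect, e.g., with GLVis
   return 0;
}
```

In an actual application the domain function would usually live on the mesh as a GridFunction, and the FiniteElementSpace and any GridFunctions would be updated after each refinement pass.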
Dylan Copeland (LLNL) Sparse, Approximate Quadrature for Acceleration of Isogeometric Analysis and Reduced Order Models ( PDF , video ) Numerical integration for assembly of FEM systems typically employs quadrature rules selected for the polynomial order of basis functions in each element. In some cases, a much sparser rule can maintain accuracy. We present an algebraic method for constructing sparse rules, by formulating a constraint system of states required to be integrated accurately. A nonnegative least squares solver finds a sparse, approximate solution to this constraint system, yielding a quadrature rule with fewer points. One application we demonstrate is isogeometric analysis, where a NURBS FEM space is defined on patches consisting of many elements. Setup times are greatly accelerated, by using patch-wise integration with sum factorization and reduced quadrature rules constructed on patches. Another area of application is reduced order models (ROM), where the FEM system is restricted to a reduced POD basis formed from training data. Instead of hyper-reduction methods such as DEIM, the empirical quadrature procedure (EQP) can be used to accelerate ROM simulations with a sparse quadrature rule in the reduced subspace. We demonstrate this on several benchmark problems in the Laghos miniapp and show that energy conservation is maintained. 12:00-1:00 Lunch on site at UCLCC 1:00-3:00 Presentations (30 mins each) Chair: Aaron Fisher Jacob Spainhour (CU Boulder) Robust Containment Queries over Collections of Parametric Curves via Generalized Winding Numbers ( PDF , video ) The containment query is an important geometric primitive in many multiphysics applications. For example, when initializing multimaterial Arbitrary Lagrangian-Eulerian (ALE) simulations, we often need to determine whether arbitrary quadrature points from the background mesh are inside or outside the regions associated with each material. However, existing methods require expensive refinement to accurately capture curved regions. At the same time, many methods are wholly incompatible with user-defined geometries that contain geometric and numeric gaps and/or self-intersections. In this work, we develop a containment query for 2D regions defined by rational Bezier curves that operates directly on curved objects. Our method relies on the generalized winding number (GWN), a mathematical construction that can be evaluated for each curve independently, making the derived containment query robust to non-watertightness. We use an adaptive algorithm to compute the GWN field exactly, which permits fast evaluation for points considered \"distant\" to the curve while being numerically stable for points that are arbitrarily close. Overall, this classification scheme greatly expands the types of bounding geometry that can be used directly in shaping applications without the need for otherwise expensive repair techniques. If time permits, we will also discuss our extensions of this idea to 3D shapes defined by parametric surfaces. Alexander Blair (UK Atomic Energy Authority) Platypus: An Open-Source Application for MFEM Problem Set-Up and Assembly in the MOOSE Framework ( PDF ) The large-scale open-source finite element simulation framework MOOSE has built an extensive user community around its capabilities in solving large-scale FE problems across a wide range of physics domains whilst maintaining a simple interface for users. 
However, it currently lacks support for problem set-up and solution on GPU architectures, due in part to its default finite element library backend, libMesh, which restricts the range of facilities that it can effectively leverage. Here we present Platypus, an open-source MOOSE application under development for massively parallel multiphysics simulation of finite element problems using the MFEM finite element library, supporting problem assembly and solves on both CPU and GPU architectures. We shall show some initial results on simple thermal and electromagnetic test problems and outline our development plans for supporting upcoming experiments at UKAEA at the HIVE and CHIMERA facilities. Qi Tang (Georgia Institute of Technology) An Adaptive Newton-Based Free-Boundary Grad-Shafranov Solver ( PDF ) Equilibria in magnetic confinement devices result from the force balance between the Lorentz force and the plasma pressure gradient. In an axisymmetric configuration like a tokamak, such an equilibrium is described by an elliptic equation for the poloidal magnetic flux, commonly known as the Grad-Shafranov equation. It is challenging to develop a scalable and accurate free-boundary Grad-Shafranov solver, since it is a fully nonlinear optimization problem that simultaneously solves for the magnetic field coil current outside the plasma to control the plasma shape. In this work, we develop a Newton-based free-boundary Grad-Shafranov solver using adaptive finite elements and preconditioning strategies. The free-boundary interaction leads to the evaluation of a domain-dependent nonlinear form whose contribution to the Jacobian matrix is obtained through shape calculus. The optimization problem aims to minimize the distance between the plasma boundary and specified control points while satisfying two non-trivial constraints, which correspond to the nonlinear finite element discretization of the Grad-Shafranov equation and a constraint on the total plasma current involving a nonlocal coupling term. The linear system is solved by a block factorization, and AMG is called for sub-block elliptic operators. The unique contributions of this work include the treatment of a global constraint, preconditioning strategies, nonlocal reformulation, and the implementation of adaptive finite elements. It is found that the resulting Newton solver is robust, successfully reducing the nonlinear residual to 1e-6 and lower in a small handful of iterations while addressing the challenging case of finding a Taylor-state equilibrium where conventional Picard-based solvers fail to converge. Dohyun Kim (Brown University) SiMPL Method: A Fast and Simple Method for Density-Based Topology Optimization ( PDF ) This talk will present a new first-order method for density-based topology optimization called SiMPL: Sigmoidal Mirror descent with Projected Lagrangian. This method delivers pointwise bound-preserving density fields at every iteration. The design updates are based only on the first-order derivative information of the objective function, significantly simplifying practical implementations. We accelerate this method with an adaptive step size and backtracking line search. We numerically verified the mesh-independent behavior of the SiMPL method and observed significantly faster convergence compared to other popular first-order optimization algorithms for topology optimization. 
To outline the general applicability of the technique, we also include examples with (self-load) compliance minimization and compliant mechanism problems. 3:00-3:30 Coffee Break discussions on Slack 3:30-5:00 Presentations (30 mins each) Chair: Justin Laughlin Mathias Schmidt (LLNL) Level-Set Topology Optimization with PDE Generated Conformal Meshes ( PDF , video ) The promise of Topology Optimization (TO) is to provide engineers with a systematic computational tool to support the development of optimal designs. A shortcoming of classic density based multi-material TO designs is the nebulous interphase region between materials, which leads to inaccurate response predictions in these very regions. In contrast, designs based on boundary and interface regions, rather than interphase regions, yield accurate response predictions. Level-set based TO is an example of such; however, the analysis of the response often requires repeated mesh generation or non-standard finite element computations. We present a solely PDE-based, level-set topology optimization approach in which geometries are described through the iso-contour of one or multiple level-set fields which are discretized over a mesh. The nodal heights serve as the design parameters. The governing field equations are discretized by a conformal discretization over a separate \u201canalysis\u201d mesh. In the optimization, the \u201canalysis\u201d mesh is morphed such that its boundary and interfaces conform with the isocontours of the LS fields. The mesh morphing is performed using the Target-Matrix Optimization Paradigm (TMOP) approach. Our TMOP formulation is a PDE based mesh morphing operation which aims to improve the interface conformity while preserving mesh quality. Design sensitivities of the optimization cost and constraint functions with respect to all design level-set fields are computed through an adjoint approach which accounts for the mesh morphing process. The proposed analysis and optimization framework is based on MFEM, a free, lightweight, scalable C++ library for finite element methods which supports the optimization of large-scale problems. We investigate the robustness of the proposed optimization methodology by solving two- and three-dimensional multi-material optimization problems involving linear diffusion and elasticity. We discuss the advantages and challenges of our approach with regards to the mesh morphing process. LS regularization techniques are employed to produce a well-behaved mesh morphing problem throughout the optimization. Finally, select aspects and challenges of our approach with respect to parallel computing and processor decomposition are discussed. Milan Holec (Xcimer Energy) Towards Predictive Modeling of the World's Most Powerful Fusion Laser at Xcimer ( PDF ) According to the techno-economic studies, the ultra-violet excimer lasers offer the most straightforward path to the commercial fusion given the lowest J/$ price and their capacity to withstand MJ laser pulses, a fluence when the traditional solid state lasers break. We present our vision on how to model the future laser system spanning the micro-scales at 248nm laser wavelength and macro-scales at tens of meters of the actual laser beamline, where MFEM allows us to design a computationally efficient and accurate discretization based on mathematical details which we will describe in the presentation. 
Yohann Dudouit (LLNL) Mitigating Rays-Effect in Phase-Space Advection with Matrix-Free High-Dimensional DG Methods ( PDF , video ) The mitigation of the rays-effect in phase-space advection problems is a critical challenge in deterministic transport simulations, particularly when using traditional methods that struggle with numerical artifacts. In this work, we propose a novel high-dimensional matrix-free discontinuous Galerkin (DG) approach designed to address the rays-effect by fully discretizing phase space, including velocity components, up to six dimensions. This methodology avoids the excessive computational cost associated with Monte Carlo simulations while offering a deterministic alternative that preserves accuracy and scalability. A key component of our approach is the use of advanced coordinate transformations, which optimize the coordinate system to minimize the rays-effect by aligning the coordinate system with the net flux. Our matrix-free formulation minimizes memory usage and improves computational efficiency by avoiding the assembly of large sparse matrices, a critical factor when scaling to high-dimensional problems. Numerical experiments demonstrate the effectiveness of this approach in reducing rays-effect artifacts, providing a robust and scalable solution for high-dimensional transport problems. 5:00 Day 2 Wrap-up MFEM team Thursday, October 24 Time Activity Presenter 8:00-8:30 Breakfast on site at UCLCC 8:30-12:00 Office Hours Q&A with MFEM team 12:00-1:00 Lunch on site at UCLCC 1:00-5:00 Additional Meetings and Discussions Simulation and Visualization Contest We will be holding a simulation and visualization contest open to all attendees. Participants can submit visualizations (images or videos) from MFEM-related simulations. The winner of the competition (selected by the organizing committee) will receive an MFEM T-shirt. We will also feature the images in the gallery . Here are the winners from the 2023 workshop: Mehran Ebrahimi : Displacement distribution of a loaded excavator arm under static equilibrium John Camier : Leapfrogging vortex rings using an incompressible Schr\u00f6dinger fluid solver To submit an entry in the contest, please fill out the Google form . Alternatively, you may email your submission to mfem@llnl.gov , including your name, institution, a short description of the simulation (the underlying physics, discretization, application details, etc.), and visualization software used (GLVis, ParaView, VisIt, etc.). Virtual Backgrounds We invite workshop participants to use the virtual backgrounds designed for this event. Click each image to enlarge, then right-click to save locally. About Livermore and LLNL Founded in 1869, Livermore is California's oldest wine region, framed by award-winning wineries, farmlands, and ranches that mirror the valley's western heritage. As home to renowned science and technology centers, Lawrence Livermore and Sandia national labs, Livermore is a technological hub and an academically engaged community. It has become an integral part of the Bay Area, successfully competing in the global market powered by its wealth of research, technology, and innovation. For more than 70 years, LLNL has applied science and technology to make the world a safer place. World-class facilities include the National Ignition Facility, the Advanced Manufacturing Laboratory, and the Livermore Computing Center hosting the Sierra supercomputer and home of the future exascale machine, El Capitan. 
Organizing Committee Holly Auten \u250a Aaron Fisher \u250a Tzanio Kolev \u250a Justin Laughlin \u250a Ketan Mittal \u250a Will Pazner \u250a Sohail Reddy \u250a Haley Shuey Previous Workshops MFEM Community Workshop 2023 MFEM Community Workshop 2022 MFEM Community Workshop 2021", "title": "Workshop"}, {"location": "workshop/#mfem-community-workshop", "text": "", "title": "MFEM Community Workshop"}, {"location": "workshop/#overview", "text": "The MFEM team is happy to invite you to the 2024 MFEM Community Workshop, which will take place on October 22-24, 2024 in a hybrid format: in-person at Lawrence Livermore National Laboratory (LLNL) + virtually on Zoom. The goal of the workshop is to foster collaboration among all MFEM users and developers, share the latest MFEM features with the broader community, deepen application engagements, and solicit feedback to guide future development directions for the project. We encourage you to join us in person if you can! For questions, please contact the meeting organizers at mfem@llnl.gov .", "title": "Overview"}, {"location": "workshop/#registration", "text": "Registration closed on October 15th .", "title": "Registration"}, {"location": "workshop/#venue", "text": "The meeting will take place at the University of California Livermore Collaboration Center (UCLCC) which is just outside of LLNL's East Gate.", "title": "Venue"}, {"location": "workshop/#lodging-options", "text": "There are many hotels in Livermore, and others are available in Pleasanton and nearby cities. See LLNL's recommended list of area hotels or this Google Maps search . If you stay outside of Livermore, we recommend staying west of the city to have a reverse commute to the Lab.", "title": "Lodging Options"}, {"location": "workshop/#meeting-format", "text": "This will be the first hybrid edition of the MFEM community workshop that will include the following elements: Project news and development updates from the MFEM team An overview of the latest features in MFEM-4.7 and future roadmap Contributed talks from application developers utilizing MFEM Student lightning talks and visualization contest Office hours on the last day See also the agenda for the previous 2023 , 2022 and 2021 MFEM workshops. 
Workshop participants are encouraged to join the MFEM Community Slack workspace to communicate with other MFEM users and developers before, during and after the MFEM workshop.", "title": "Meeting Format"}, {"location": "workshop/#agenda", "text": "", "title": "Agenda"}, {"location": "workshop/#tuesday-october-22", "text": "Time Activity Presenter 8:00-8:30 Breakfast + Registration on site at UCLCC 8:30-9:00 Welcome & Overview ( PDF , video ) Aaron Fisher (LLNL) 9:00-9:30 The State of MFEM ( PDF , video ) Tzanio Kolev (LLNL) 9:30-10:00 Recent Developments ( video ) Veselin Dobrev (LLNL) 10:00-10:30 Coffee Break discussions on Slack 10:30-12:00 Presentations (30 mins each) Chair: Will Pazner M\u00e1t\u00e9 Kov\u00e1cs (Braid Technologies) Rust Wrapper for MFEM ( PDF )", "title": "Tuesday, October 22"}, {"location": "workshop/#wednesday-october-23", "text": "Time Activity Presenter 8:00-8:30 Breakfast on site at UCLCC 8:30-9:00 Visualization Contest Winners Will Pazner (Portland State University) 9:00-10:00 Presentations (30 mins each) Chair: Sohail Reddy Gourab Panigrahi (Indian Institute of Science) Hardware Aware Matrix-Free Approach for Accelerating FE Discretized Eigenvalue Problems: Application to Large-Scale Kohn-Sham Density Functional Theory ( PDF )", "title": "Wednesday, October 23"}, {"location": "workshop/#thursday-october-24", "text": "Time Activity Presenter 8:00-8:30 Breakfast on site at UCLCC 8:30-12:00 Office Hours Q&A with MFEM team 12:00-1:00 Lunch on site at UCLCC 1:00-5:00 Additional Meetings and Discussions", "title": "Thursday, October 24"}, {"location": "workshop/#simulation-and-visualization-contest", "text": "We will be holding a simulation and visualization contest open to all attendees. Participants can submit visualizations (images or videos) from MFEM-related simulations. The winner of the competition (selected by the organizing committee) will receive an MFEM T-shirt. We will also feature the images in the gallery . Here are the winners from the 2023 workshop: Mehran Ebrahimi : Displacement distribution of a loaded excavator arm under static equilibrium John Camier : Leapfrogging vortex rings using an incompressible Schr\u00f6dinger fluid solver To submit an entry in the contest, please fill out the Google form . Alternatively, you may email your submission to mfem@llnl.gov , including your name, institution, a short description of the simulation (the underlying physics, discretization, application details, etc.), and visualization software used (GLVis, ParaView, VisIt, etc.).", "title": "Simulation and Visualization Contest"}, {"location": "workshop/#virtual-backgrounds", "text": "We invite workshop participants to use the virtual backgrounds designed for this event. Click each image to enlarge, then right-click to save locally.", "title": "Virtual Backgrounds"}, {"location": "workshop/#about-livermore-and-llnl", "text": "Founded in 1869, Livermore is California's oldest wine region, framed by award-winning wineries, farmlands, and ranches that mirror the valley's western heritage. As home to renowned science and technology centers, Lawrence Livermore and Sandia national labs, Livermore is a technological hub and an academically engaged community. It has become an integral part of the Bay Area, successfully competing in the global market powered by its wealth of research, technology, and innovation. For more than 70 years, LLNL has applied science and technology to make the world a safer place. 
World-class facilities include the National Ignition Facility, the Advanced Manufacturing Laboratory, and the Livermore Computing Center hosting the Sierra supercomputer and home of the future exascale machine, El Capitan.", "title": "About Livermore and LLNL"}, {"location": "workshop/#organizing-committee", "text": "Holly Auten \u250a Aaron Fisher \u250a Tzanio Kolev \u250a Justin Laughlin \u250a Ketan Mittal \u250a Will Pazner \u250a Sohail Reddy \u250a Haley Shuey", "title": "Organizing Committee"}, {"location": "workshop/#previous-workshops", "text": "MFEM Community Workshop 2023 MFEM Community Workshop 2022 MFEM Community Workshop 2021", "title": "Previous Workshops"}, {"location": "workshop21/", "text": "MFEM Community Workshop October 20, 2021 Virtual Meeting Speakers' slides are linked in the agenda below. Read the article about the workshop on LLNL's Computing website. Watch the video playlist of workshop presentations (linked individually below and available on the videos page ), and view contest submissions in the gallery . Overview The MFEM team is happy to announce the first MFEM Community Workshop, which will take place on October 20, 2021, virtually, using WebEx for videoconferencing. The goal of the workshop is to foster collaboration among all MFEM users and developers, share the latest MFEM features with the broader community, deepen application engagements, and solicit feedback to guide future development directions for the project. For questions, please contact the meeting organizers at mfem@llnl.gov . Registration Registration closed on October 18th. Meeting format Depending on the interest and user feedback, the meeting will include the following elements: Project news and development updates from the MFEM team An overview of the latest features in MFEM-4.3 and GLVis-4.1 Contributed talks from application developers utilizing MFEM Roadmap discussion for future development Technical discussions in breakout rooms for Electromagnetics, Fluids, and Structural Mechanics applications Workshop participants are encouraged to join the MFEM Community Slack workspace to communicate with other MFEM users and developers before, during and after the MFEM workshop. The meeting activities will take place 7:45am-2:45pm Pacific Daylight Time (GMT-7): Wednesday, October 20 PDFs and videos are linked below. 
Time (PDT, GMT-7) Activity Presenter 7:45-8:00 Welcome and Overview ( PDF , video ) Aaron Fisher 8:00-8:30 The State of MFEM ( PDF , video ) Tzanio Kolev 8:30-9:00 Recent Developments in MFEM ( PDF , video ) Veselin Dobrev 9:00-10:00 Talks, Session I (20 mins each) \u2022 Jamie Bramwell (LLNL), Serac: User-Friendly Abstractions for MFEM-Based Engineering Applications ( PDF , video ) \u2022 Thomas Helfer (CEA), MFEM-MGIS-MFront, a MFEM-Based Library for Nonlinear Solid Thermomechanic ( PDF , video ) \u2022 Felipe G\u00f3mez, Carlos del Valle, & Juli\u00e1n Jim\u00e9nez (National University of Colombia), Phase Change Heat and Mass Transfer Simulation with MFEM ( PDF , video ) 10:00-10:30 Break & Group Photo All Download a virtual background below 10:30-12:30 Talks, Session II (20 mins each) \u2022 Robert Rieben (LLNL), The Multiphysics on Advanced Platforms Project: Performance, Portability and Scaling ( video ) \u2022 Marc Bolinches (UT), Development of DG Compressible Navier-Stokes Solver with MFEM ( PDF , video ) \u2022 Mathias Davids (Harvard), Modeling Peripheral Nerve Stimulations (PNS) in Magnetic Resonance Imaging (MRI) ( PDF , video ) \u2022 Jan Nikl (ELI Beamlines), Laser Plasma Modeling with High-Order Finite Element ( PDF , video ) \u2022 Qi Tang (LANL), An Adaptive, Scalable Fully Implicit Resistive MHD Solver ( video ) \u2022 Syun\u2019ichi Shiraiwa (PPPL), Development of PyMFEM Python Wrapper for MFEM & Scalable RF Wave Simulation for Nuclear Fusion ( PDF , video ) 12:30-1:00 Break All 1:00-2:00 Talks, Session III (20 mins each) \u2022 William Dawn (NCSU), Unstructured Finite Element Neutron Transport using MFEM ( PDF , video ) \u2022 Vladimir Tomov (LLNL), MFEM Capabilities for High-Order Mesh Optimization ( PDF , video ) \u2022 Will Pazner (LLNL), High-Order Matrix-Free Solvers ( PDF , video ) 2:00-2:30 Wrap-Up and Simulation Contest Winners ( PDF , video ) Aaron Fisher Simulation and Visualization Contest The 2021 MFEM Workshop featured a simulation and visualization contest. The submitted entries can be viewed in the gallery . Virtual Backgrounds We invite workshop participants to use the virtual backgrounds designed for this event. Click each image to enlarge, then right-click to save locally. Organizing Committee Holly Auten \u250a Aaron Fisher \u250a Tzanio Kolev \u250a Will Pazner \u250a Mark Stowell", "title": "_Workshop21"}, {"location": "workshop21/#mfem-community-workshop", "text": "", "title": "MFEM Community Workshop"}, {"location": "workshop21/#october-20-2021", "text": "", "title": "October 20, 2021"}, {"location": "workshop21/#virtual-meeting", "text": "Speakers' slides are linked in the agenda below. Read the article about the workshop on LLNL's Computing website. Watch the video playlist of workshop presentations (linked individually below and available on the videos page ), and view contest submissions in the gallery .", "title": "Virtual Meeting"}, {"location": "workshop21/#overview", "text": "The MFEM team is happy to announce the first MFEM Community Workshop, which will take place on October 20, 2021, virtually, using WebEx for videoconferencing. The goal of the workshop is to foster collaboration among all MFEM users and developers, share the latest MFEM features with the broader community, deepen application engagements, and solicit feedback to guide future development directions for the project. 
For questions, please contact the meeting organizers at mfem@llnl.gov .", "title": "Overview"}, {"location": "workshop21/#registration", "text": "Registration closed on October 18th.", "title": "Registration"}, {"location": "workshop21/#meeting-format", "text": "Depending on the interest and user feedback, the meeting will include the following elements: Project news and development updates from the MFEM team An overview of the latest features in MFEM-4.3 and GLVis-4.1 Contributed talks from application developers utilizing MFEM Roadmap discussion for future development Technical discussions in breakout rooms for Electromagnetics, Fluids, and Structural Mechanics applications Workshop participants are encouraged to join the MFEM Community Slack workspace to communicate with other MFEM users and developers before, during and after the MFEM workshop. The meeting activities will take place 7:45am-2:45pm Pacific Daylight Time (GMT-7):", "title": "Meeting format"}, {"location": "workshop21/#wednesday-october-20", "text": "PDFs and videos are linked below. Time (PDT, GMT-7) Activity Presenter 7:45-8:00 Welcome and Overview ( PDF , video ) Aaron Fisher 8:00-8:30 The State of MFEM ( PDF , video ) Tzanio Kolev 8:30-9:00 Recent Developments in MFEM ( PDF , video ) Veselin Dobrev 9:00-10:00 Talks, Session I (20 mins each) \u2022 Jamie Bramwell (LLNL), Serac: User-Friendly Abstractions for MFEM-Based Engineering Applications ( PDF , video ) \u2022 Thomas Helfer (CEA), MFEM-MGIS-MFront, a MFEM-Based Library for Nonlinear Solid Thermomechanic ( PDF , video ) \u2022 Felipe G\u00f3mez, Carlos del Valle, & Juli\u00e1n Jim\u00e9nez (National University of Colombia), Phase Change Heat and Mass Transfer Simulation with MFEM ( PDF , video ) 10:00-10:30 Break & Group Photo All Download a virtual background below 10:30-12:30 Talks, Session II (20 mins each) \u2022 Robert Rieben (LLNL), The Multiphysics on Advanced Platforms Project: Performance, Portability and Scaling ( video ) \u2022 Marc Bolinches (UT), Development of DG Compressible Navier-Stokes Solver with MFEM ( PDF , video ) \u2022 Mathias Davids (Harvard), Modeling Peripheral Nerve Stimulations (PNS) in Magnetic Resonance Imaging (MRI) ( PDF , video ) \u2022 Jan Nikl (ELI Beamlines), Laser Plasma Modeling with High-Order Finite Element ( PDF , video ) \u2022 Qi Tang (LANL), An Adaptive, Scalable Fully Implicit Resistive MHD Solver ( video ) \u2022 Syun\u2019ichi Shiraiwa (PPPL), Development of PyMFEM Python Wrapper for MFEM & Scalable RF Wave Simulation for Nuclear Fusion ( PDF , video ) 12:30-1:00 Break All 1:00-2:00 Talks, Session III (20 mins each) \u2022 William Dawn (NCSU), Unstructured Finite Element Neutron Transport using MFEM ( PDF , video ) \u2022 Vladimir Tomov (LLNL), MFEM Capabilities for High-Order Mesh Optimization ( PDF , video ) \u2022 Will Pazner (LLNL), High-Order Matrix-Free Solvers ( PDF , video ) 2:00-2:30 Wrap-Up and Simulation Contest Winners ( PDF , video ) Aaron Fisher", "title": "Wednesday, October 20"}, {"location": "workshop21/#simulation-and-visualization-contest", "text": "The 2021 MFEM Workshop featured a simulation and visualization contest. The submitted entries can be viewed in the gallery .", "title": "Simulation and Visualization Contest"}, {"location": "workshop21/#virtual-backgrounds", "text": "We invite workshop participants to use the virtual backgrounds designed for this event. 
Click each image to enlarge, then right-click to save locally.", "title": "Virtual Backgrounds"}, {"location": "workshop21/#organizing-committee", "text": "Holly Auten \u250a Aaron Fisher \u250a Tzanio Kolev \u250a Will Pazner \u250a Mark Stowell", "title": "Organizing Committee"}, {"location": "workshop22/", "text": "MFEM Community Workshop October 25, 2022 Virtual Meeting Speakers' slides are linked in the agenda below. Read the article about the workshop on LLNL's Computing website. Watch the video playlist of workshop presentations (linked individually below and available on the videos page ), and view contest submissions in the gallery . Overview The MFEM team is happy to announce the third MFEM Community Workshop, which will take place on October 25, 2022, virtually, using Zoom for video conferencing. The goal of the workshop is to foster collaboration among all MFEM users and developers, share the latest MFEM features with the broader community, deepen application engagements, and solicit feedback to guide future development directions for the project. For questions, please contact the meeting organizers at mfem@llnl.gov . Registration Registration closed on October 11th. Meeting format Depending on the interest and user feedback, the meeting will include the following elements: Project news and development updates from the MFEM team An overview of the latest features in MFEM-4.4 and GLVis-4.2 Contributed talks from application developers utilizing MFEM Roadmap discussion for future development Technical discussions in breakout rooms for Electromagnetics, Fluids, and Structural Mechanics applications See also the agenda for the previous 2021 MFEM workshop. Meeting format Workshop participants are encouraged to join the MFEM Community Slack workspace to communicate with other MFEM users and developers before, during and after the MFEM workshop. 
The meeting activities will take place 7:40am-4:00pm Pacific Daylight Time (GMT-7): Tuesday, October 25 Time (PDT, GMT-7) Activity Presenter 7:40-8:00 Welcome & Overview ( PDF , video ) Aaron Fisher (LLNL) 8:00-8:20 The State of MFEM ( PDF , video ) Tzanio Kolev (LLNL) 8:20-8:40 Recent Developments ( PDF , video ) Veselin Dobrev (LLNL) 8:40-9:00 Break All 9:00-10:00 Talks, Session I (20 mins each) Chair: Will Pazner Ben Zwick (University of Western Australia) Solution of the Electroencephalography Forward Problem Using MFEM ( PDF , video ) Carlos Brito Pacheco (Universit\u00e9 Grenoble Alpes) Rodin: Lightweight and Modern C++17 Shape, Density and Topology Optimization Framework ( PDF , video ) Tobias Duswald (CERN | TUM) Solving Stochastic, Fractional PDEs with MFEM with Applications to Random Field Generation and Topology Optimization ( PDF , video ) 10:00-10:20 Break & Group Photo All Download a virtual background below 10:20-11:20 Talks, Session II (20 mins each) Chair: Socratis Petrides Alvaro Sanchez Villar (PPPL) MFEM Application to EM-wave Simulation in ECR Space Plasma Thrusters ( PDF , video ) Brian Young OpenParEM2D: A 2D Simulator for Guided Waves ( PDF , video ) Christina Migliore (MIT) The Development of the EM RF Edge Interactions Miniapp \u201cStix\u201d Using MFEM ( PDF , video ) 11:20-11:40 Break All 11:40-12:40 Talks, Session III (20 mins each) Chair: Aaron Fisher Will Pazner (PDX) High-Order Solvers + GPU Acceleration ( PDF , video ) Jorge-Luis Barrera (LLNL) Shape and Topology Optimization Powered by MFEM ( PDF , video ) Siu Wun Cheung (LLNL) Reduced Order Modeling for Finite Element Simulations through the Partnership of MFEM and libROM ( PDF , video ) 1:00-2:00 Talks, Session IV (20 mins each) Chair: Tzanio Kolev Devlin Hayduke (ReLogic) Project Minerva: Accelerated Deployment of MFEM Based Solvers in Large Scale Industrial Problems ( PDF , video ) Tim Brewer (Synthetik) blastFEM: A GPU-accelerated, Very High-performance and Energy-efficient Solver for Highly Compressible Flows ( PDF , video ) Adolfo Rodriguez (OpenSim) Using MFEM for Wellbore Stability Analysis ( PDF , video ) 2:00-2:20 Break All 2:20-2:40 MFEM AWS tutorial ( Instructions , video ) Julian Andrej (LLNL) 2:40-3:00 Wrap-up & Contest Winners ( PDF , video ) Aaron Fisher (LLNL) 3:00-4:00 Q&A Session MFEM team available on Zoom + Slack Simulation and Visualization Contest We will be holding a simulation and visualization contest open to all attendees. Participants can submit visualizations (images or videos) from MFEM-related simulations. The winner of the competition (selected by the organizing committee) will receive an MFEM T-shirt. We will also feature the images in the gallery . Here are the winners from the 2021 workshop: Dennis Ogiermann : Electric field in rabbit heart Tamas Horvath : Incompressible flow around rotating turbine To submit an entry in the contest, please fill out the Google form . Alternatively, you may email your submission to mfem@llnl.gov , including your name, institution, a short description of the simulation (the underlying physics, discretization, application details, etc.), and visualization software used (GLVis, ParaView, VisIt, etc.). Virtual Backgrounds We invite workshop participants to use the virtual backgrounds designed for this event. Click each image to enlarge, then right-click to save locally. 
Organizing Committee Holly Auten \u250a Aaron Fisher \u250a Tzanio Kolev \u250a Ketan Mittal \u250a Will Pazner \u250a Socratis Petrides Previous Workshops MFEM Community Workshop 2021", "title": "_Workshop22"}, {"location": "workshop22/#mfem-community-workshop", "text": "", "title": "MFEM Community Workshop"}, {"location": "workshop22/#october-25-2022", "text": "", "title": "October 25, 2022"}, {"location": "workshop22/#virtual-meeting", "text": "Speakers' slides are linked in the agenda below. Read the article about the workshop on LLNL's Computing website. Watch the video playlist of workshop presentations (linked individually below and available on the videos page ), and view contest submissions in the gallery .", "title": "Virtual Meeting"}, {"location": "workshop22/#overview", "text": "The MFEM team is happy to announce the third MFEM Community Workshop, which will take place on October 25, 2022, virtually, using Zoom for video conferencing. The goal of the workshop is to foster collaboration among all MFEM users and developers, share the latest MFEM features with the broader community, deepen application engagements, and solicit feedback to guide future development directions for the project. For questions, please contact the meeting organizers at mfem@llnl.gov .", "title": "Overview"}, {"location": "workshop22/#registration", "text": "Registration closed on October 11th.", "title": "Registration"}, {"location": "workshop22/#meeting-format", "text": "Depending on the interest and user feedback, the meeting will include the following elements: Project news and development updates from the MFEM team An overview of the latest features in MFEM-4.4 and GLVis-4.2 Contributed talks from application developers utilizing MFEM Roadmap discussion for future development Technical discussions in breakout rooms for Electromagnetics, Fluids, and Structural Mechanics applications See also the agenda for the previous 2021 MFEM workshop.", "title": "Meeting format"}, {"location": "workshop22/#meeting-format_1", "text": "Workshop participants are encouraged to join the MFEM Community Slack workspace to communicate with other MFEM users and developers before, during and after the MFEM workshop. 
The meeting activities will take place 7:40am-4:00pm Pacific Daylight Time (GMT-7):", "title": "Meeting format"}, {"location": "workshop22/#tuesday-october-25", "text": "Time (PDT, GMT-7) Activity Presenter 7:40-8:00 Welcome & Overview ( PDF , video ) Aaron Fisher (LLNL) 8:00-8:20 The State of MFEM ( PDF , video ) Tzanio Kolev (LLNL) 8:20-8:40 Recent Developments ( PDF , video ) Veselin Dobrev (LLNL) 8:40-9:00 Break All 9:00-10:00 Talks, Session I (20 mins each) Chair: Will Pazner Ben Zwick (University of Western Australia) Solution of the Electroencephalography Forward Problem Using MFEM ( PDF , video ) Carlos Brito Pacheco (Universit\u00e9 Grenoble Alpes) Rodin: Lightweight and Modern C++17 Shape, Density and Topology Optimization Framework ( PDF , video ) Tobias Duswald (CERN | TUM) Solving Stochastic, Fractional PDEs with MFEM with Applications to Random Field Generation and Topology Optimization ( PDF , video ) 10:00-10:20 Break & Group Photo All Download a virtual background below 10:20-11:20 Talks, Session II (20 mins each) Chair: Socratis Petrides Alvaro Sanchez Villar (PPPL) MFEM Application to EM-wave Simulation in ECR Space Plasma Thrusters ( PDF , video ) Brian Young OpenParEM2D: A 2D Simulator for Guided Waves ( PDF , video ) Christina Migliore (MIT) The Development of the EM RF Edge Interactions Miniapp \u201cStix\u201d Using MFEM ( PDF , video ) 11:20-11:40 Break All 11:40-12:40 Talks, Session III (20 mins each) Chair: Aaron Fisher Will Pazner (PDX) High-Order Solvers + GPU Acceleration ( PDF , video ) Jorge-Luis Barrera (LLNL) Shape and Topology Optimization Powered by MFEM ( PDF , video ) Siu Wun Cheung (LLNL) Reduced Order Modeling for Finite Element Simulations through the Partnership of MFEM and libROM ( PDF , video ) 1:00-2:00 Talks, Session IV (20 mins each) Chair: Tzanio Kolev Devlin Hayduke (ReLogic) Project Minerva: Accelerated Deployment of MFEM Based Solvers in Large Scale Industrial Problems ( PDF , video ) Tim Brewer (Synthetik) blastFEM: A GPU-accelerated, Very High-performance and Energy-efficient Solver for Highly Compressible Flows ( PDF , video ) Adolfo Rodriguez (OpenSim) Using MFEM for Wellbore Stability Analysis ( PDF , video ) 2:00-2:20 Break All 2:20-2:40 MFEM AWS tutorial ( Instructions , video ) Julian Andrej (LLNL) 2:40-3:00 Wrap-up & Contest Winners ( PDF , video ) Aaron Fisher (LLNL) 3:00-4:00 Q&A Session MFEM team available on Zoom + Slack", "title": "Tuesday, October 25"}, {"location": "workshop22/#simulation-and-visualization-contest", "text": "We will be holding a simulation and visualization contest open to all attendees. Participants can submit visualizations (images or videos) from MFEM-related simulations. The winner of the competition (selected by the organizing committee) will receive an MFEM T-shirt. We will also feature the images in the gallery . Here are the winners from the 2021 workshop: Dennis Ogiermann : Electric field in rabbit heart Tamas Horvath : Incompressible flow around rotating turbine To submit an entry in the contest, please fill out the Google form . Alternatively, you may email your submission to mfem@llnl.gov , including your name, institution, a short description of the simulation (the underlying physics, discretization, application details, etc.), and visualization software used (GLVis, ParaView, VisIt, etc.).", "title": "Simulation and Visualization Contest"}, {"location": "workshop22/#virtual-backgrounds", "text": "We invite workshop participants to use the virtual backgrounds designed for this event. 
Click each image to enlarge, then right-click to save locally.", "title": "Virtual Backgrounds"}, {"location": "workshop22/#organizing-committee", "text": "Holly Auten \u250a Aaron Fisher \u250a Tzanio Kolev \u250a Ketan Mittal \u250a Will Pazner \u250a Socratis Petrides", "title": "Organizing Committee"}, {"location": "workshop22/#previous-workshops", "text": "MFEM Community Workshop 2021", "title": "Previous Workshops"}, {"location": "workshop23/", "text": "MFEM Community Workshop October 26, 2023 Virtual Meeting Speakers' slides are linked in the agenda below. Read the article about the workshop on LLNL's Computing website. Watch the video playlist of workshop presentations (linked individually below and available on the videos page ), and view contest submissions in the gallery . Overview The MFEM team is happy to announce the third MFEM Community Workshop, which will take place on October 26, 2023, virtually, using Zoom for videoconferencing. The goal of the workshop is to foster collaboration among all MFEM users and developers, share the latest MFEM features with the broader community, deepen application engagements, and solicit feedback to guide future development directions for the project. For questions, please contact the meeting organizers at mfem@llnl.gov . Registration Registration closed on October 19th. Meeting format Depending on the interest and user feedback, the meeting will include the following elements: Project news and development updates from the MFEM team An overview of the latest features in MFEM-4.5, MFEM-4.5.2 and MFEM-4.6 Contributed talks from application developers utilizing MFEM Roadmap discussion for future development See also the agenda for the previous 2022 and 2021 MFEM workshops. Workshop participants are encouraged to join the MFEM Community Slack workspace to communicate with other MFEM users and developers before, during and after the MFEM workshop. Agenda The meeting activities will take place 8:00am-4:00pm Pacific Daylight Time (GMT-7): Thursday, October 26 Time Activity Presenter 8:00-8:20 Welcome & Overview ( PDF , video ) Aaron Fisher (LLNL) 8:20-8:40 The State of MFEM ( PDF , video ) Tzanio Kolev (LLNL) 8:40-9:00 Recent Developments ( PDF , video ) Veselin Dobrev (LLNL) 9:00-9:20 Break Discussions on Slack 9:20-10:20 Session I (20 mins each) Chair: Will Pazner Sebastian Grimberg (Amazon Web Services) Palace: PArallel LArge-scale Computational Electromagnetics ( PDF , video ) Palace, for PArallel, LArge-scale Computational Electromagnetics, is a parallel finite element code for full-wave electromagnetics simulations based on the MFEM library. Palace is used at the AWS Center for Quantum Computing to perform large-scale 3D simulations of complex electromagnetics models and enable the design of quantum computing hardware. In this talk we will give an overview of the simulation capabilities of Palace as well as some recent developments for conforming and nonconforming adaptive mesh refinement, operator partial assembly, and GPU support. Jacob Lotz (Delft University of Technology) Computation and Reduced Order Modelling of Periodic Flows ( PDF , video ) Many types of periodic flows can be found in nature and industrial applications and their computation is expensive due to lengthy time simulations. Our work aims to reduce the cost of these computations. We solve periodic flows in a space-time domain in which both ends in time are periodic such that we only have to model one period. 
MFEM is used to discretise the space-time domain and solve our discretised system of equations. We apply a hyper-reduced Proper Orthogonal Decomposition Galerkin reduced order model to speed up our computations. During the presentation we show (results of) our full order model and our advances in de reduced order modelling. Boyan Lazarov (LLNL) Scalable Design and Optimization with MFEM ( PDF , video ) The talk aims to present recently added and ongoing code development facilitating the solution of shape and topology optimization problems. Both topology and shape optimization are gradient-based iterative algorithms aiming to find a material distribution that minimizes an objective and fulfills a set of constraints. Every optimization step includes a solution to a forward optimization problem, an evaluation of the objective and constraints, a solution to an adjoint problem associated with every objective or constraint, an evaluation of gradients, and an update of the design based on mathematical programming techniques. All these steps can be easily implemented and executed by using MFEM in a scalable manner, allowing the design and optimization of large-scale realistic industrial problems. Thus, the goal is to exemplify these features, highlight the techniques that simplify the implementation of new problems, and provide a glimpse into the future. 10:20-10:40 Break & Group Photo Download a virtual background below 10:40-11:40 Session II (5 mins each) Chair: Milan Holec Student Lightning Talks Part 1 ( video ) Shani Martinez Weissberg (Tel Aviv University) \u00b5FEA of a Rabbit Femur ( PDF ) Given the ethical and practical limitations of conducting preliminary medical studies on humans, New Zealand White (NZW) rabbits serve as a common model for treatment validation. An important such medical study is the prediction of the risk of fracture in femurs with metastatic bone tumors following radiation therapy and image-based treatment. For such studies, micro-computed tomography (\u00b5CT) scans of NZW rabbit femurs are essential for capturing the detailed bone architecture. These \u00b5CT scans are used to construct micro finite element models (\u00b5FEMs) of the femurs that are being virtually loaded to predict the mechanical response required for validation of the \u00b5FEMs via experiments on fresh frozen rabbit femurs. This presentation outlines the step-by-step process of creating patient-specific \u00b5FEMs of rabbit femurs using MFEM. The workflow spans from \u00b5CT imaging to segmentation and 3D reconstruction, culminating in the MFEM solution of a linear elastic problem with over 125 million degrees of freedom. Paul Moujaes (TU-Dortmund) Dissipation-Based Entropy Stabilization for Slope-Limited Discontinuous Galerkin Approximations of Hyperbolic Problems ( PDF ) Dissipation-based entropy stabilization for slope-limited DG-approximations of hyperbolic problems with focus on the Euler equations. Alejandro Mu\u00f1oz (Universidad de Granada) Discontinuous Galerkin in the Time Domain for Maxwell\u2019s Equations ( PDF ) The Discontinuous Galerkin method is a type of finite element method which uses discontinuous basis functions, almost always piecewise polynomials. Through the use of MFEM, we aim to implement an explicit scheme Maxwell Equations' solver capable of 1D, 2D and 3D problem solving. 
Thanks to the library's capabilities, we can focus on the implementation of operators and integrators while retaining the capacity to use multiple types of meshes with various element types and the subsequent visualization through GLVis or ParaView. Bill Ellis (UKAEA) Comparing Thermo-Mechanical Solves in MOOSE and MFEM ( PDF ) Fusion energy requires confinement of a very hot plasma. Given these high temperatures, it is necessary to model how materials and components react in these environments. The Multiphysics Object-Oriented Simulation Environment (MOOSE) offers functionality to model the mechanical effects of these temperature fields. As MFEM is increasingly utilised for electromagnetic modelling in fusion, interest in the benefits of a purely MFEM workflow has arisen. This short talk aims to offer a comparison of the performance and stability of some thermal expansion problems in MFEM and MOOSE by modelling some fusion-relevant components. Student Lightning Talks Part 2 ( video ) Alexander Mote (Oregon State University) A Neural Network Surrogate Model for Nonlocal Thermal Flux Calculations ( PDF ) Mathematically, a neural network can produce a prediction of thermal flux in a plasma physics simulation as much as 1,000,000 times faster than a direct numerical calculation. Using a dataset of MFEM simulations, we were able to train a neural network to predict nonlocal thermal flux within a 1D2V ICF simulation with 99.3% accuracy. This model was then used to evolve temperature over time in a similar simulation setup, demonstrating accurate nonlocal heat transport properties useful to experimenters. Amit Rotem (Virginia Tech) GPU Acceleration of IPDG in MFEM ( PDF ) This talk will present the new partial assembly implementation of the DGDiffusion bilinear form integrator. The partial assembly implementation uses sum factorization and can be compiled with CUDA to gain a substantial speed-up. In the second half of the talk, an example solving the Wave Equation will be presented. Josiah Brown (Relogic Research) Project Minerva ( PDF ) MFEM is a very fast solver for structural problems thanks to its efficient implementation and parallel capability, but because it is for the most part strictly a C++ library driven by user-written C++ code, it can be difficult to create a structural mesh. A material solver like Abaqus, though slow in computing a solution, has many visual aids for creating a structural mesh, making it very user friendly. Relogic has created a C++ code that takes Abaqus input data, parses it, generates a mesh file, and then runs MFEM on this data. This program allows one to create a structural mesh in Abaqus and solve it in MFEM, in the hope of making MFEM more user friendly and accessible. Mike Pozulp (UC Berkeley) An Implicit Monte Carlo Acceleration Scheme ( PDF ) This is a joint research project with Terry Haut to use Monte Carlo to compute a linear form arising in one of Sam Olivier's DG discretizations of radiation diffusion that Olivier described in his PhD thesis and implemented using MFEM. We are investigating the impact of the Monte Carlo noise on the radiation diffusion solution quality. 
11:40-12:00 Break Discussions on Slack 12:00-1:00 Session III (20 mins each) Chair: Tzanio Kolev Syun'ichi Shiraiwa (PPPL) Radio-Frequency Wave Simulation in Hot Magnetized Plasma using Differential Operator for Non-Local Conductivity Response ( PDF , video ) In high-temperature plasmas, the dielectric response to the RF fields is caused by freely moving charged particles, which naturally makes such a response non-local and correspondingly, the Maxwell wave problem becomes an integro-differential equation. A differential form of dielectric operator, based on the small k\u22a5\u03c1 expansion, is widely used. However, they typically includes up-to the second order terms, and thus the use of such an operator is limited to the waves that satisfy k\u22a5\u03c1 < 1. We propose an alternative approach to construct a dielectric operator, which includes all-order finite Larmor radius effects without explicitly containing higher order derivatives. We use a rational approximation of the plasma dielectric tensor in the wave number space, in order to yield a differential operator acting on the dielectric current (J). The 1D O-X-B mode-conversion of the electron Bernstein wave in the non-relativistic Maxwellian plasma was modeled using this approach. An agreement with analytic calculation and the conservation of wave energy carried by the Poynting flux and electron thermal motion (\u201csloshing\u201d) is found. The connection between our construction method and superposition of Green\u2019s function for these screened Poisson\u2019s equations is presented. An approach to extend the operator in a multi-dimensional setting will also be discussed. Tamas Horvath (Oakland University) Implementation of Hybridizable Discontinuous Galerkin Methods via the HDG Branch ( PDF , video ) In this talk, we present the HDG branch, which was initially developed for HDG discretizations of advection-diffusion problems. Recent updates have made the branch highly adaptable for various applications, allowing a flexible implementation of HDG for many different PDEs. We showcase these enhancements and provide insights into their versatile usage across different problems. Yohann Dudouit (LLNL) Empowering MFEM Using libCEED: Features and Performance Analysis ( PDF , video ) This presentation will begin with an overview of the features introduced to MFEM through the integration of libCEED. We will particularly emphasize capabilities that are distinct from native MFEM functionalities, marking an enhancement in the software's suite of tools, such as support for simplices, handling of mixed meshes, and support for p-adaptivity. The presentation will conclude by showcasing benchmarks for various problems executed on different HPC architectures, illustrating the performance gains and efficiencies achieved through the libCEED integration. 1:00-1:20 Break Discussions on Slack 1:20-2:20 Session IV (20 mins each) Chair: Ketan Mittal Zhang Chunyu (Sun Yat-Sen University) Homogenized Energy Theory for Solution of Elasticity Problems with Consideration of Higher-Order Microscopic Deformations ( PDF , video ) The classical continuum mechanics faces difficulties in solving problems involving highly inhomogeneous deformations. The proposed theory investigates the impact of high-order microscopic deformation on modeling of material behaviors and provides a refined interpretation of strain gradients through the averaged strain energy density. 
Only one scale parameter, i.e., the size of the Representative Volume Element(RVE), is required by the proposed theory. By employing the variational approach and the Augmented Lagrangian Method(ALM), the governing equations for deformation as well as the numerical solution procedure are derived. It is demonstrated that the homogenized energy theory offers plausible explanations and reasonable predictions for the problems yet unsolved by the classical theory such as the size effect of deformation and the stress singularity at the crack tip. The concept of averaged strain energy proves to be more suitable for describing the intricate mechanical behavior of materials. And high order partial differential equations can be effectively solved by the ALM by introducing supplementary variables to lower the highest order of the equations. Eric Chin (LLNL) Contact Constraint Enforcement Using the Tribol Interface Physics Library ( PDF , video ) In this talk, we will discuss recent additions to the Tribol interface physics library to simplify MPI parallel contact constraint enforcement in large deformation, implicit and explicit continuum solid mechanics simulations using MFEM. Tribol is an open-source software package available on GitHub (https://github.com/LLNL/Tribol) and includes tools for contact detection, state-of-the-art Lagrangian contact methods such as common plane and mortar, and various enforcement techniques such as penalty and Lagrange multiplier. Additionally, Tribol recently added a domain redecomposer for coalescing proximal contact pairs on a single rank. Tribol\u2019s features are designed to interact seamlessly with MFEM, and other codes that use MFEM, with native support for MFEM data structures such as ParMesh, ParGridFunction, and HypreParMatrix. We highlight the simplicity of adding Tribol features to an MFEM-based code by looking at integration with Serac: an open-source implicit nonlinear thermal-structural simulation code (https://github.com/LLNL/serac). Milan Holec (LLNL) Deterministic Transport MFEM-Miniapp: Advancing Fidelity of Fusion Energy Simulations ( PDF , video ) We introduce a new multi-dimensional discretization in MFEM enabling efficient high-order phase-space simulations of various types of Boltzmann transport. In terms of a generalized form of the standard discrete ordinate SN method for the phase-space, we carefully design discrete analogs obeying important continuous properties such as conservation of energy, preservation of positivity, preservation of the diffusion limit of transport, preservation of symmetry leading to rays-effect mitigation, and other laws of physics. Finally, we show how to apply this new phase-space MFEM feature to increase the fidelity of modeling of fusion energy experiments. 2:20-2:40 Break Discussions on Slack 2:40-3:00 Wrap-up & Contest Winners ( PDF , video ) Aaron Fisher (LLNL) 3:00-4:00 Q&A Session MFEM team available on Zoom + Slack Simulation and Visualization Contest We will be holding a simulation and visualization contest open to all attendees. Participants can submit visualizations (images or videos) from MFEM-related simulations. The winner of the competition (selected by the organizing committee) will receive an MFEM T-shirt. We will also feature the images in the gallery . 
Here are the winners from the 2022 workshop: Ben Zwick : Electric field generated by a current dipole source in epilepsy patient Tobias Duswald : Topology-optimized heat sink Will Pazner : Magnetic field computed with GPU-accelerated LOR solvers To submit an entry in the contest, please fill out the Google form . Alternatively, you may email your submission to mfem@llnl.gov , including your name, institution, a short description of the simulation (the underlying physics, discretization, application details, etc.), and visualization software used (GLVis, ParaView, VisIt, etc.). Virtual Backgrounds We invite workshop participants to use the virtual backgrounds designed for this event. Click each image to enlarge, then right-click to save locally. Organizing Committee Holly Auten \u250a Aaron Fisher \u250a Milan Holec \u250a Tzanio Kolev \u250a Ketan Mittal \u250a Will Pazner \u250a Socratis Petrides \u250a Vladimir Tomov Previous Workshops MFEM Community Workshop 2022 MFEM Community Workshop 2021", "title": "_Workshop23"}, {"location": "workshop23/#mfem-community-workshop", "text": "", "title": "MFEM Community Workshop"}, {"location": "workshop23/#october-26-2023", "text": "", "title": "October 26, 2023"}, {"location": "workshop23/#virtual-meeting", "text": "Speakers' slides are linked in the agenda below. Read the article about the workshop on LLNL's Computing website. Watch the video playlist of workshop presentations (linked individually below and available on the videos page ), and view contest submissions in the gallery .", "title": "Virtual Meeting"}, {"location": "workshop23/#overview", "text": "The MFEM team is happy to announce the third MFEM Community Workshop, which will take place on October 26, 2023, virtually, using Zoom for videoconferencing. The goal of the workshop is to foster collaboration among all MFEM users and developers, share the latest MFEM features with the broader community, deepen application engagements, and solicit feedback to guide future development directions for the project. For questions, please contact the meeting organizers at mfem@llnl.gov .", "title": "Overview"}, {"location": "workshop23/#registration", "text": "Registration closed on October 19th.", "title": "Registration"}, {"location": "workshop23/#meeting-format", "text": "Depending on the interest and user feedback, the meeting will include the following elements: Project news and development updates from the MFEM team An overview of the latest features in MFEM-4.5, MFEM-4.5.2 and MFEM-4.6 Contributed talks from application developers utilizing MFEM Roadmap discussion for future development See also the agenda for the previous 2022 and 2021 MFEM workshops. 
Workshop participants are encouraged to join the MFEM Community Slack workspace to communicate with other MFEM users and developers before, during and after the MFEM workshop.", "title": "Meeting format"}, {"location": "workshop23/#agenda", "text": "The meeting activities will take place 8:00am-4:00pm Pacific Daylight Time (GMT-7):", "title": "Agenda"}, {"location": "workshop23/#thursday-october-26", "text": "Time Activity Presenter 8:00-8:20 Welcome & Overview ( PDF , video ) Aaron Fisher (LLNL) 8:20-8:40 The State of MFEM ( PDF , video ) Tzanio Kolev (LLNL) 8:40-9:00 Recent Developments ( PDF , video ) Veselin Dobrev (LLNL) 9:00-9:20 Break Discussions on Slack 9:20-10:20 Session I (20 mins each) Chair: Will Pazner Sebastian Grimberg (Amazon Web Services) Palace: PArallel LArge-scale Computational Electromagnetics ( PDF , video )", "title": "Thursday, October 26"}, {"location": "workshop23/#simulation-and-visualization-contest", "text": "We will be holding a simulation and visualization contest open to all attendees. Participants can submit visualizations (images or videos) from MFEM-related simulations. The winner of the competition (selected by the organizing committee) will receive an MFEM T-shirt. We will also feature the images in the gallery . Here are the winners from the 2022 workshop: Ben Zwick : Electric field generated by a current dipole source in epilepsy patient Tobias Duswald : Topology-optimized heat sink Will Pazner : Magnetic field computed with GPU-accelerated LOR solvers To submit an entry in the contest, please fill out the Google form . Alternatively, you may email your submission to mfem@llnl.gov , including your name, institution, a short description of the simulation (the underlying physics, discretization, application details, etc.), and visualization software used (GLVis, ParaView, VisIt, etc.).", "title": "Simulation and Visualization Contest"}, {"location": "workshop23/#virtual-backgrounds", "text": "We invite workshop participants to use the virtual backgrounds designed for this event. Click each image to enlarge, then right-click to save locally.", "title": "Virtual Backgrounds"}, {"location": "workshop23/#organizing-committee", "text": "Holly Auten \u250a Aaron Fisher \u250a Milan Holec \u250a Tzanio Kolev \u250a Ketan Mittal \u250a Will Pazner \u250a Socratis Petrides \u250a Vladimir Tomov", "title": "Organizing Committee"}, {"location": "workshop23/#previous-workshops", "text": "MFEM Community Workshop 2022 MFEM Community Workshop 2021", "title": "Previous Workshops"}, {"location": "howto/assembly_levels/", "text": "HowTo: Use partial assembly and matrix-free assembly MFEM provides different levels of assembly for mfem::BilinearForm , mfem::MixedBilinearForm , mfem::DiscreteLinearOperator , and mfem::NonlinearForm based on the operator decomposition A = G^T B^T D B G: These different levels of assembly are: LEGACY, in the case of a mfem::BilinearForm LEGACY corresponds to a fully assembled form, i.e. a global sparse matrix in MFEM, Hypre or PETSc format. In the case of a mfem::NonlinearForm LEGACY corresponds to an operator that is fully evaluated on the fly. The LEGACY assembly level is ALWAYS performed on the host. FULL, fully assembled form, i.e. a global sparse matrix in MFEM format. This assembly is compatible with device execution, and therefore the sparse matrix is assembled on device if available. This corresponds to storing the whole A = G^T B^T D B G operator as a sparse matrix. 
ELEMENT, Form assembled at element level, which computes and stores dense element matrices. This corresponds to storing the element-local dense matrices $A_E = B^T D B$. This format allows some access to the matrix entries, while also providing a data layout that is friendlier to GPU architectures. PARTIAL, Partially-assembled form, which computes and stores data only at quadrature points. This corresponds to storing only the quadrature point values D; this format results in significantly faster computations and lower storage usage compared to formats that store matrices. Only the diagonal entries of the operator are accessible. NONE, \"Matrix-free\" form that computes all of its action on-the-fly without any substantial storage. In this case D is computed on the fly; this format is also significantly faster than the matrix formats, but is currently slower than partial assembly due to the increased number of computations. However, in the case of operators that need to be reassembled frequently, this assembly level might be faster than partial assembly by skipping any reassembly steps. The different assembly levels are accessed through the following unified interface: AssemblyLevel assembly_level = ...; a->SetAssemblyLevel(assembly_level); where a is either an mfem::BilinearForm , mfem::MixedBilinearForm , mfem::DiscreteLinearOperator , or mfem::NonlinearForm . Assembly levels and backend device configuration MFEM integrates three backends that interact with the assembly levels, namely the RAJA backend, the OCCA backend, and the libCEED backend. Backends are accessible by configuring the mfem::Device accordingly. Device Configuration cpu Default CPU backend: sequential execution on each MPI rank. omp OpenMP backend. Enabled when MFEM_USE_OPENMP = YES. cuda CUDA backend. Enabled when MFEM_USE_CUDA = YES. hip HIP backend. Enabled when MFEM_USE_HIP = YES. raja-cpu RAJA CPU backend: sequential execution on each MPI rank. Enabled when MFEM_USE_RAJA = YES. raja-omp RAJA OpenMP backend. Enabled when MFEM_USE_RAJA = YES and MFEM_USE_OPENMP = YES. raja-cuda RAJA CUDA backend. Enabled when MFEM_USE_RAJA = YES and MFEM_USE_CUDA = YES. raja-hip RAJA HIP backend. Enabled when MFEM_USE_RAJA = YES and MFEM_USE_HIP = YES. occa-cpu OCCA CPU backend: sequential execution on each MPI rank. Enabled when MFEM_USE_OCCA = YES. occa-omp OCCA OpenMP backend. Enabled when MFEM_USE_OCCA = YES. occa-cuda OCCA CUDA backend. Enabled when MFEM_USE_OCCA = YES and MFEM_USE_CUDA = YES. ceed-cpu CEED CPU backend. GPU backends can still be used, but with expensive memory transfers. Enabled when MFEM_USE_CEED = YES. ceed-cuda CEED CUDA backend working together with the CUDA backend. Enabled when MFEM_USE_CEED = YES and MFEM_USE_CUDA = YES. NOTE: The current default libCEED CUDA backend is non-deterministic! ceed-hip CEED HIP backend working together with the HIP backend. Enabled when MFEM_USE_CEED = YES and MFEM_USE_HIP = YES. debug Debug backend: host memory is READ/WRITE protected while a device is in use. It allows testing the \"device\" code-path (using separate host/device memory pools and host <-> device transfers) without any GPU hardware. As 'DEBUG' is sometimes used as a macro, _DEVICE has been added to avoid conflicts. 
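Below is a minimal sketch tying the unified SetAssemblyLevel interface above to a device backend selected at run time. The -d/--device option mirrors the pattern used in the MFEM example codes; the Cartesian mesh, the order-3 H1 space, and the diffusion form are illustrative assumptions rather than part of this HowTo.

#include "mfem.hpp"
#include <iostream>
using namespace mfem;

int main(int argc, char *argv[])
{
   // Sketch: pick any backend string from the table above at run time.
   const char *device_config = "cpu";   // e.g. "cuda", "raja-omp", "occa-cpu"
   OptionsParser args(argc, argv);
   args.AddOption(&device_config, "-d", "--device",
                  "Device configuration string, see Device::Configure().");
   args.Parse();
   if (!args.Good()) { args.PrintUsage(std::cout); return 1; }

   Device device(device_config);
   device.Print();                              // report configured backends

   Mesh mesh = Mesh::MakeCartesian2D(16, 16, Element::QUADRILATERAL);
   H1_FECollection fec(3, mesh.Dimension());    // assumption: order-3 H1 space
   FiniteElementSpace fes(&mesh, &fec);

   ConstantCoefficient one(1.0);
   BilinearForm a(&fes);
   a.AddDomainIntegrator(new DiffusionIntegrator(one));
   a.SetAssemblyLevel(AssemblyLevel::PARTIAL);  // or ELEMENT, FULL, NONE, LEGACY
   a.Assemble();                                // only quadrature data is stored
   return 0;
}

With AssemblyLevel::PARTIAL or NONE the subsequent operator action runs through the selected backend's kernels, while the LEGACY level, as noted above, is always performed on the host.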
It is also possible to request the backend of a backend; for instance, if we want to use the /gpu/cuda/shared backend of libCEED, one can specify this with the following syntax: mfem::Device device(\"ceed-cuda:/gpu/cuda/shared\"); Device support The native MFEM backend and the RAJA backend support the same features and Integrators. However, the OCCA backend and the libCEED backend each offer different features and support different Integrators with different performance characteristics. Supported Integrators native MFEM OCCA backend libCEED backend Mass Integrator \u2705 \u2705 \u2705 Vector Mass Integrator \u2705 \u274c \u2705 Vector FE Mass Integrator \u2705 \u274c \u274c Convection Integrator \u2705 \u274c \u2705 Non-linear Convection Integrator \u2705 \u274c \u2705 Diffusion Integrator \u2705 \u2705 \u2705 Vector Diffusion Integrator \u2705 \u274c \u2705 DGTrace Integrator \u2705 \u274c \u274c Mixed Vector Gradient Integrator \u2705 \u274c \u274c Mixed Vector Curl Integrator \u2705 \u274c \u274c Mixed Vector Weak Curl Integrator \u2705 \u274c \u274c Gradient Integrator \u2705 \u274c \u274c Vector Divergence Integrator \u2705 \u274c \u274c Vector FE Divergence Integrator \u2705 \u274c \u274c Curl Curl Integrator \u2705 \u274c \u274c Div Div Integrator \u2705 \u274c \u274c Features native MFEM OCCA backend libCEED backend Tensor elements support \u2705 \u2705 \u2705 Simplices support \u274c \u274c \u2705 Mixed elements support \u274c \u274c \u2705 Assembly: None \u274c \u274c \u2705 Assembly: Partial \u2705 \u2705 \u2705 Assembly: Element \u2705 \u274c \u274c Assembly: Full \u2705 \u274c \u274c", "title": "HowTo: Use partial assembly and matrix-free assembly"}, {"location": "howto/assembly_levels/#howto-use-partial-assembly-and-matrix-free-assembly", "text": "MFEM provides different levels of assembly for mfem::BilinearForm , mfem::MixedBilinearForm , mfem::DiscreteLinearOperator , and mfem::NonlinearForm based on the operator decomposition: These different levels of assembly are: LEGACY, in the case of a mfem::BilinearForm LEGACY corresponds to a fully assembled form, i.e. a global sparse matrix in MFEM, Hypre or PETSc format. In the case of a mfem::NonlinearForm LEGACY corresponds to an operator that is fully evaluated on the fly. The LEGACY assembly level is ALWAYS performed on the host. FULL, fully assembled form, i.e. a global sparse matrix in MFEM format. This assembly is compatible with device execution, and therefore the sparse matrix is assembled on device if available. This corresponds to storing the whole $A = G^T B^T D B G$ operator as a sparse matrix. ELEMENT, Form assembled at element level, which computes and stores dense element matrices. This corresponds to storing the element-local dense matrices $A_E = B^T D B$. This format allows some access to the matrix entries, while also providing a data layout that is friendlier to GPU architectures. PARTIAL, Partially-assembled form, which computes and stores data only at quadrature points. This corresponds to storing only the quadrature point values D; this format results in significantly faster computations and lower storage usage compared to formats that store matrices. Only the diagonal entries of the operator are accessible. NONE, \"Matrix-free\" form that computes all of its action on-the-fly without any substantial storage. In this case D is computed on the fly; this format is also significantly faster than the matrix formats, but is currently slower than partial assembly due to the increased number of computations. 
However, in the case of operators that need to be reassembled frequently, this assembly level might be faster than partial assembly by skipping any reassembly steps. The different assembly levels are accessed through the following unified interface: AssemblyLevel assembly_level = ...; a->SetAssemblyLevel(assembly_level); where a is either an mfem::BilinearForm , mfem::MixedBilinearForm , mfem::DiscreteLinearOperator , or mfem::NonlinearForm .", "title": "HowTo: Use partial assembly and matrix-free assembly"}, {"location": "howto/assembly_levels/#assembly-levels-and-backend-device-configuration", "text": "MFEM integrates three backends that interact with the assembly levels, namely the RAJA backend, the OCCA backend, and the libCEED backend. Backends are accessible by configuring the mfem::Device accordingly. Device Configuration cpu Default CPU backend: sequential execution on each MPI rank. omp OpenMP backend. Enabled when MFEM_USE_OPENMP = YES. cuda CUDA backend. Enabled when MFEM_USE_CUDA = YES. hip HIP backend. Enabled when MFEM_USE_HIP = YES. raja-cpu RAJA CPU backend: sequential execution on each MPI rank. Enabled when MFEM_USE_RAJA = YES. raja-omp RAJA OpenMP backend. Enabled when MFEM_USE_RAJA = YES and MFEM_USE_OPENMP = YES. raja-cuda RAJA CUDA backend. Enabled when MFEM_USE_RAJA = YES and MFEM_USE_CUDA = YES. raja-hip RAJA HIP backend. Enabled when MFEM_USE_RAJA = YES and MFEM_USE_HIP = YES. occa-cpu OCCA CPU backend: sequential execution on each MPI rank. Enabled when MFEM_USE_OCCA = YES. occa-omp OCCA OpenMP backend. Enabled when MFEM_USE_OCCA = YES. occa-cuda OCCA CUDA backend. Enabled when MFEM_USE_OCCA = YES and MFEM_USE_CUDA = YES. ceed-cpu CEED CPU backend. GPU backends can still be used, but with expensive memory transfers. Enabled when MFEM_USE_CEED = YES. ceed-cuda CEED CUDA backend working together with the CUDA backend. Enabled when MFEM_USE_CEED = YES and MFEM_USE_CUDA = YES. NOTE: The current default libCEED CUDA backend is non-deterministic! ceed-hip CEED HIP backend working together with the HIP backend. Enabled when MFEM_USE_CEED = YES and MFEM_USE_HIP = YES. debug Debug backend: host memory is READ/WRITE protected while a device is in use. It allows testing the \"device\" code-path (using separate host/device memory pools and host <-> device transfers) without any GPU hardware. As 'DEBUG' is sometimes used as a macro, _DEVICE has been added to avoid conflicts. It is also possible to request the backend of a backend; for instance, if we want to use the /gpu/cuda/shared backend of libCEED, one can specify this with the following syntax: mfem::Device device(\"ceed-cuda:/gpu/cuda/shared\");", "title": "Assembly levels and backend device configuration"}, {"location": "howto/assembly_levels/#device-support", "text": "The native MFEM backend and the RAJA backend support the same features and Integrators. However, the OCCA backend and the libCEED backend each offer different features and support different Integrators with different performance characteristics. 
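As a small illustration of how one of these alternative backends is paired with a supported integrator, here is a sketch using libCEED; the "ceed-cpu" string, the order-2 H1 space, and the mass form are assumptions, and the tables below list exactly which integrators each backend supports.

#include "mfem.hpp"
using namespace mfem;

// Sketch: run a mass bilinear form with partial assembly through libCEED.
// "ceed-cpu" (or e.g. "ceed-cuda:/gpu/cuda/shared") requires MFEM_USE_CEED = YES.
void CeedMassSketch(Mesh &mesh)
{
   Device device("ceed-cpu");
   device.Print();

   H1_FECollection fec(2, mesh.Dimension());       // assumption: order-2 H1 space
   FiniteElementSpace fes(&mesh, &fec);

   ConstantCoefficient one(1.0);
   BilinearForm m(&fes);
   m.AddDomainIntegrator(new MassIntegrator(one)); // supported by libCEED, see table
   m.SetAssemblyLevel(AssemblyLevel::PARTIAL);
   m.Assemble();
}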
Supported Integrators native MFEM OCCA backend libCEED backend Mass Integrator \u2705 \u2705 \u2705 Vector Mass Integrator \u2705 \u274c \u2705 Vector FE Mass Integrator \u2705 \u274c \u274c Convection Integrator \u2705 \u274c \u2705 Non-linear Convection Integrator \u2705 \u274c \u2705 Diffusion Integrator \u2705 \u2705 \u2705 Vector Diffusion Integrator \u2705 \u274c \u2705 DGTrace Integrator \u2705 \u274c \u274c Mixed Vector Gradient Integrator \u2705 \u274c \u274c Mixed Vector Curl Integrator \u2705 \u274c \u274c Mixed Vector Weak Curl Integrator \u2705 \u274c \u274c Gradient Integrator \u2705 \u274c \u274c Vector Divergence Integrator \u2705 \u274c \u274c Vector FE Divergence Integrator \u2705 \u274c \u274c Curl Curl Integrator \u2705 \u274c \u274c Div Div Integrator \u2705 \u274c \u274c Features native MFEM OCCA backend libCEED backend Tensor elements support \u2705 \u2705 \u2705 Simplices support \u274c \u274c \u2705 Mixed elements support \u274c \u274c \u2705 Assembly: None \u274c \u274c \u2705 Assembly: Partial \u2705 \u2705 \u2705 Assembly: Element \u2705 \u274c \u274c Assembly: Full \u2705 \u274c \u274c", "title": "Device support"}, {"location": "howto/block_operators_matrices/", "text": "HowTo: Use Block Operators and Matrices Some problem formulations are defined in block form and need to be implemented in terms of block operators. Examples include saddle point problems ( ex5.cpp ), DPG discretization ( ex8.cpp ), and problems with multiple variables ( ex19.cpp ). The resulting discretized system is expressed in terms of block operators and vectors, which may be distributed in parallel. This article gives an overview of working with block operators and their matrix representations. It should be noted in general that operators and matrices are appropriate in different situations, regardless of whether they are in block form. Generally, it is preferable to have an operator and not its matrix representation when only its action is needed and can be computed faster than matrix assembly, or when matrix storage requires too much memory. For example, this is the case for high-order FEM, when partial assembly (PA) is used for fast operator multiplication on GPUs without storing matrices. Also, matrix storage becomes increasingly expensive (more nonzeros per row) as FEM order increases, which is another reason to avoid matrix assembly and matrix-based preconditioners for very high order. On the other hand, for low-order FEM, matrices are necessary for example in order to use AMG preconditioning (e.g. with hypre). Thus there are cases where operators or matrices are preferable, in general and in block form. First, it is important to understand how a single, monolithic operator or matrix is distributed in parallel in MFEM. Vectors, matrices, and operators are distributed consistently with hypre, which decomposes the rows of a parallel matrix ( HypreParMatrix , see mfem/hypre.hpp ) but stores all columns of the locally owned rows on each MPI rank. On each process, a Vector or HypreParVector is of size equal to the number of locally owned rows, and a HypreParMatrix stores the local rows. The parallel communication necessary for matrix-vector multiplication is performed in hypre. Similarly, an Operator should act on a Vector of local entries, perform any necessary communication, and compute a Vector of local entries. In the case of block operators and vectors, a Vector stores the local entries for each block contiguously in its data. Offsets define where each block begins and ends. 
For example, in ex5.cpp , there are two blocks for spaces R_space and W_space , and block_offsets is of size three, storing offsets 0 , R_space->GetVSize() , and R_space->GetVSize() + W_space->GetVSize() . The class BlockOperator (see mfem/linalg/blockoperator.hpp ) can be used to form one operator from operators defining the blocks. It operates on vectors of local entries, stored block-wise. Similarly, a monolithic HypreParMatrix can be constructed, using the function HypreParMatrixFromBlocks (see hypre.hpp ), from blocks defined as HypreParMatrix pointers or null pointers for empty blocks. The blocks may be rectangular, but their sizes must be consistent. Scalar coefficients can optionally be used. The monolithic matrix will have copies of the entries from the blocks, so it can be modified or destroyed independently of the blocks. The unit test mfem/tests/unit/linalg/test_matrix_rectangular.cpp provides an example that compares a BlockOperator and a monolithic HypreParMatrix . As noted above, it is not practical to have both an operator and a matrix, but this test illustrates the equivalence of the two approaches. The capability to form a monolithic matrix is available only for HypreParMatrix , not for the serial class SparseMatrix .", "title": "HowTo: Use Block Operators and Matrices"}, {"location": "howto/block_operators_matrices/#howto-use-block-operators-and-matrices", "text": "Some problem formulations are defined in block form and need to be implemented in terms of block operators. Examples include saddle point problems ( ex5.cpp ), DPG discretization ( ex8.cpp ), and problems with multiple variables ( ex19.cpp ). The resulting discretized system is expressed in terms of block operators and vectors, which may be distributed in parallel. This article gives an overview of working with block operators and their matrix representations. It should be noted in general that operators and matrices are appropriate in different situations, regardless of whether they are in block form. Generally, it is preferable to have an operator and not its matrix representation when only its action is needed and can be computed faster than matrix assembly, or when matrix storage requires too much memory. For example, this is the case for high-order FEM, when partial assembly (PA) is used for fast operator multiplication on GPUs without storing matrices. Also, matrix storage becomes increasingly expensive (more nonzeros per row) as FEM order increases, which is another reason to avoid matrix assembly and matrix-based preconditioners for very high order. On the other hand, for low-order FEM, matrices are necessary for example in order to use AMG preconditioning (e.g. with hypre). Thus there are cases where operators or matrices are preferable, in general and in block form. First, it is important to understand how a single, monolithic operator or matrix is distributed in parallel in MFEM. Vectors, matrices, and operators are distributed consistently with hypre, which decomposes the rows of a parallel matrix ( HypreParMatrix , see mfem/hypre.hpp ) but stores all columns of the locally owned rows on each MPI rank. On each process, a Vector or HypreParVector is of size equal to the number of locally owned rows, and a HypreParMatrix stores the local rows. The parallel communication necessary for matrix-vector multiplication is performed in hypre. Similarly, an Operator should act on a Vector of local entries, perform any necessary communication, and compute a Vector of local entries. 
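A compact sketch of the block offsets and BlockOperator setup described above, following the ex5.cpp naming for the spaces; M, Bt, and B are assumed to be already assembled operators (for example HypreParMatrix or partially assembled forms) with matching block sizes.

// Sketch (assumptions: R_space, W_space, M, Bt, B already exist).
Array<int> block_offsets(3);
block_offsets[0] = 0;
block_offsets[1] = R_space->GetVSize();
block_offsets[2] = W_space->GetVSize();
block_offsets.PartialSum();            // 0, n_R, n_R + n_W, as in ex5.cpp

BlockVector x(block_offsets), rhs(block_offsets);

BlockOperator blockOp(block_offsets);
blockOp.SetBlock(0, 0, &M);
blockOp.SetBlock(0, 1, &Bt);
blockOp.SetBlock(1, 0, &B);            // the (1,1) block is left empty
blockOp.Mult(x, rhs);                  // acts on block-wise local Vectors

When every block is available as a HypreParMatrix pointer, the same layout can instead be passed to HypreParMatrixFromBlocks (with null pointers for empty blocks) to obtain the monolithic matrix discussed above.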
In the case of block operators and vectors, a Vector stores the local entries for each block contiguously in its data. Offsets define where each block begins and ends. For example, in ex5.cpp , there are two blocks for spaces R_space and W_space , and block_offsets is of size three, storing offsets 0 , R_space->GetVSize() , and R_space->GetVSize() + W_space->GetVSize() . The class BlockOperator (see mfem/linalg/blockoperator.hpp ) can be used to form one operator from operators defining the blocks. It operates on vectors of local entries, stored block-wise. Similarly, a monolithic HypreParMatrix can be constructed, using the function HypreParMatrixFromBlocks (see hypre.hpp ), from blocks defined as HypreParMatrix pointers or null pointers for empty blocks. The blocks may be rectangular, but their sizes must be consistent. Scalar coefficients can optionally be used. The monolithic matrix will have copies of the entries from the blocks, so it can be modified or destroyed independently of the blocks. The unit test mfem/tests/unit/linalg/test_matrix_rectangular.cpp provides an example that compares a BlockOperator and a monolithic HypreParMatrix . As noted above, it is not practical to have both an operator and a matrix, but this test illustrates the equivalence of the two approaches. The capability to form a monolithic matrix is available only for HypreParMatrix , not for the serial class SparseMatrix .", "title": "HowTo: Use Block Operators and Matrices"}, {"location": "howto/build-systems/", "text": "HowTo: Build and test MFEM, syntax for each build-system MFEM has two build systems: - Makefile. We will refer to it as \"original Makefile\" - CMake, an out-of-source build system generator, that will generate a build-system in Makefile or another language like Ninja . The most important difference between the two is that CMake being an out-of-source build system, it will require the creation of a build directory, and all commands will be run from there. The original Makefile system will build the code in source from the root directory. The original Makefile cd make config [...options...] make all -j 8 # Build everything make test # Run the tests CMake + Makefile (option 1: explicit makefile) cd mkdir build cd build cmake [...options...] .. # Note the \"..\" make -j 8 # Build MFEM make tests -j 8 # Build unit-tests make examples -j 8 # Build examples make miniapps -j 8 # Build miniapps make test # Run the tests CMake + Makefile (option 2: generic build, cmake wrappers) cd mkdir build cd build cmake [...options...] .. # Note the \"..\" cmake --build . -j 8 # Build MFEM cmake --build . --target tests -j 8 # Build unit-tests cmake --build . --target examples -j 8 # Build examples cmake --build . --target miniapps -j 8 # Build miniapps ctest --output-on-failure -T test # Run the tests CMake + Ninja (this is not what we are used to doing, but it works) cd mkdir build cd build cmake [...options...] -GNinja .. # Note the \"..\" cmake --build . -j 8 # Build MFEM cmake --build . --target tests -j 8 # Build unit-tests cmake --build . --target examples -j 8 # Build examples cmake --build . --target miniapps -j 8 # Build miniapps ctest --output-on-failure -T test # Run the tests", "title": "HowTo: Build and test MFEM, syntax for each build-system"}, {"location": "howto/build-systems/#howto-build-and-test-mfem-syntax-for-each-build-system", "text": "MFEM has two build systems: - Makefile. 
We will refer to it as \"original Makefile\" - CMake, an out-of-source build system generator, that will generate a build-system in Makefile or another language like Ninja . The most important difference between the two is that CMake being an out-of-source build system, it will require the creation of a build directory, and all commands will be run from there. The original Makefile system will build the code in source from the root directory. The original Makefile cd make config [...options...] make all -j 8 # Build everything make test # Run the tests CMake + Makefile (option 1: explicit makefile) cd mkdir build cd build cmake [...options...] .. # Note the \"..\" make -j 8 # Build MFEM make tests -j 8 # Build unit-tests make examples -j 8 # Build examples make miniapps -j 8 # Build miniapps make test # Run the tests CMake + Makefile (option 2: generic build, cmake wrappers) cd mkdir build cd build cmake [...options...] .. # Note the \"..\" cmake --build . -j 8 # Build MFEM cmake --build . --target tests -j 8 # Build unit-tests cmake --build . --target examples -j 8 # Build examples cmake --build . --target miniapps -j 8 # Build miniapps ctest --output-on-failure -T test # Run the tests CMake + Ninja (this is not what we are used to doing, but it works) cd mkdir build cd build cmake [...options...] -GNinja .. # Note the \"..\" cmake --build . -j 8 # Build MFEM cmake --build . --target tests -j 8 # Build unit-tests cmake --build . --target examples -j 8 # Build examples cmake --build . --target miniapps -j 8 # Build miniapps ctest --output-on-failure -T test # Run the tests", "title": "HowTo: Build and test MFEM, syntax for each build-system"}, {"location": "howto/custom_precond/", "text": "HowTo: Create a custom preconditioner using only matrix actions For many problems of interest the off the shelf preconditioners are insufficient and something more tailored to the equations of interest is required. MFEM has a flexible approach to defining preconditioners enabled by deriving from the existing Solver class and overriding the necessary methods to define the action. See the following example: // Define a custom solver class that can be used as the preconditioner for a broader problem solvers // Here we will define the example preconditioner: P x = M x + Ainv x class SumSolver : mfem::Solver { private: const mfem::Operator *M; //Since these are Operators only their const mfem::Operator *Ainv; //actions need to be defined public: SumSolver(const mfem::Operator *M_, const mfem::Operator *Ainv_) : mfem::Solver(M_->Height(), M_->Width(), false) { MFEM_VERIFY(M_->Height() == Ainv_->Height()); MFEM_VERIFY(M_->Width() == Ainv_->Width()); M = M_; Ainv = Ainv_; }; // Define the action of the Solver // y = P x = M x + Ainv x void Mult(const mfem::Vector &x, mfem::Vector &y) const { y = 0.0; mfem::Vector M_x(M->Height()); mfem::Vector Ainv_x(Ainv->Height()); M->Mult(x, M_x); // M_x = A x Ainv->Mult(x, Ainv_x); // Ainv_x = Ainv x y.Add(1.0, M_x); // y += M_x y.Add(1.0, Ainv_x); // y += Ainv_x }; void SetOperator(const Operator &op) { M = &op;}; }; In this example we defined a new MFEM solver that can be applied as a preconditioner for a broader solution. In this case we demonstrated an example where we have a matrix M, the action of the inverse of a matrix A, and we want to define the action of a preconditioner that is the sum of the two. In this case we cannot simply sum the matrices to form the new preconditioner because we don't have access to the elements of Ainv. 
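For context, here is a brief usage sketch of how such a solver might be attached to a Krylov method as the preconditioner. It assumes SumSolver derives publicly from mfem::Solver, and that an operator A and vectors b and x of compatible sizes already exist; M and Ainv are the operators from the example above.

// Sketch: use the SumSolver defined above as a CG preconditioner.
// Assumes: SumSolver publicly derives from mfem::Solver, and A, b, x exist.
SumSolver prec(&M, &Ainv);     // P x = M x + Ainv x

CGSolver cg(MPI_COMM_WORLD);
cg.SetRelTol(1e-8);
cg.SetMaxIter(500);
cg.SetPrintLevel(1);
cg.SetOperator(A);             // operator of the linear system
cg.SetPreconditioner(prec);    // custom action applied every iteration
cg.Mult(b, x);                 // solve A x = b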
As you can see this approach is quite flexible and can be utilized to create custom preconditioners of arbitrary complexity.", "title": "HowTo: Create a custom preconditioner using only matrix actions"}, {"location": "howto/custom_precond/#howto-create-a-custom-preconditioner-using-only-matrix-actions", "text": "For many problems of interest the off the shelf preconditioners are insufficient and something more tailored to the equations of interest is required. MFEM has a flexible approach to defining preconditioners enabled by deriving from the existing Solver class and overriding the necessary methods to define the action. See the following example: // Define a custom solver class that can be used as the preconditioner for a broader problem solvers // Here we will define the example preconditioner: P x = M x + Ainv x class SumSolver : mfem::Solver { private: const mfem::Operator *M; //Since these are Operators only their const mfem::Operator *Ainv; //actions need to be defined public: SumSolver(const mfem::Operator *M_, const mfem::Operator *Ainv_) : mfem::Solver(M_->Height(), M_->Width(), false) { MFEM_VERIFY(M_->Height() == Ainv_->Height()); MFEM_VERIFY(M_->Width() == Ainv_->Width()); M = M_; Ainv = Ainv_; }; // Define the action of the Solver // y = P x = M x + Ainv x void Mult(const mfem::Vector &x, mfem::Vector &y) const { y = 0.0; mfem::Vector M_x(M->Height()); mfem::Vector Ainv_x(Ainv->Height()); M->Mult(x, M_x); // M_x = A x Ainv->Mult(x, Ainv_x); // Ainv_x = Ainv x y.Add(1.0, M_x); // y += M_x y.Add(1.0, Ainv_x); // y += Ainv_x }; void SetOperator(const Operator &op) { M = &op;}; }; In this example we defined a new MFEM solver that can be applied as a preconditioner for a broader solution. In this case we demonstrated an example where we have a matrix M, the action of the inverse of a matrix A, and we want to define the action of a preconditioner that is the sum of the two. In this case we cannot simply sum the matrices to form the new preconditioner because we don't have access to the elements of Ainv. As you can see this approach is quite flexible and can be utilized to create custom preconditioners of arbitrary complexity.", "title": "HowTo: Create a custom preconditioner using only matrix actions"}, {"location": "howto/element-local-global-numbering/", "text": "HowTo: Map between local element numbering and parallel global element numbering With MPI parallelization, a distributed mesh is represented by the ParMesh class. On each MPI rank, ParMesh stores data about the local elements owned by the rank. The parallel partitioning of elements is non-overlapping. The local elements have local indexing from 0 to Mesh::GetNE() - 1 . Globally, the elements are numbered sequentially with respect to the MPI ranks and in their local order, starting from 0, so that the global index of an element is the local index plus an offset for its owning rank. The ParMesh class provides functions for mapping between local and global element indices, as described below. These functions support conforming or AMR meshes. Getting the global index corresponding to a local index For a local index local_element_num of an element owned by the current MPI rank, the global index is returned by ParMesh::GetGlobalElementNum(local_element_num) . Getting the local index corresponding to a global index For a global index global_element_num of an element owned by the current MPI rank, the local index is returned by ParMesh::GetLocalElementNum(global_element_num) . 
The return value is -1 if the element is owned by a different MPI rank. Getting all global indices of locally owned elements ParMesh::GetGlobalElementIndices sets an Array of the global indices of all the locally owned elements on the current MPI rank. The indices set here could alternatively be obtained by calling ParMesh::GetGlobalElementNum(i) for all i from 0 to GetNE() - 1 . Getting global indices of other mesh entities A related topic is how to get global indices for other mesh entities, meaning vertices, edges, or faces. We use the convention that in 1D, edges and faces are actually vertices, and in 2D, faces are actually edges. Whereas elements have local and global indices that are used by ParFiniteElementSpace to determine ordering of local and global finite element degrees of freedom, there are no global indices for the other mesh entities (vertices, edges, and faces). That is, the other mesh entities only have local indices in MFEM, defined in the Mesh class. Although there is no definition or meaning to global indices for the other mesh entities, the user may wish to have global indices for the user's own purposes, and the capability to generate them is provided by the following functions in the ParMesh class: GetGlobalVertexIndices GetGlobalEdgeIndices GetGlobalFaceIndices It should be noted that AMR meshes are currently not supported by these functions (only conforming meshes). Also, since these global indices are meaningless to the MFEM library, their definition is arbitrary and based on lowest-order finite element spaces (H1 for vertices, Nedelec for edges, Raviart-Thomas for faces). There is no implementation of maps between local and global indices for these other mesh entities.", "title": "HowTo: Map between local element numbering and parallel global element numbering"}, {"location": "howto/element-local-global-numbering/#howto-map-between-local-element-numbering-and-parallel-global-element-numbering", "text": "With MPI parallelization, a distributed mesh is represented by the ParMesh class. On each MPI rank, ParMesh stores data about the local elements owned by the rank. The parallel partitioning of elements is non-overlapping. The local elements have local indexing from 0 to Mesh::GetNE() - 1 . Globally, the elements are numbered sequentially with respect to the MPI ranks and in their local order, starting from 0, so that the global index of an element is the local index plus an offset for its owning rank. The ParMesh class provides functions for mapping between local and global element indices, as described below. These functions support conforming or AMR meshes.", "title": "HowTo: Map between local element numbering and parallel global element numbering"}, {"location": "howto/element-local-global-numbering/#getting-the-global-index-corresponding-to-a-local-index", "text": "For a local index local_element_num of an element owned by the current MPI rank, the global index is returned by ParMesh::GetGlobalElementNum(local_element_num) .", "title": "Getting the global index corresponding to a local index"}, {"location": "howto/element-local-global-numbering/#getting-the-local-index-corresponding-to-a-global-index", "text": "For a global index global_element_num of an element owned by the current MPI rank, the local index is returned by ParMesh::GetLocalElementNum(global_element_num) . 
The return value is -1 if the element is owned by a different MPI rank.", "title": "Getting the local index corresponding to a global index"}, {"location": "howto/element-local-global-numbering/#getting-all-global-indices-of-locally-owned-elements", "text": "ParMesh::GetGlobalElementIndices sets an Array of the global indices of all the locally owned elements on the current MPI rank. The indices set here could alternatively be obtained by calling ParMesh::GetGlobalElementNum(i) for all i from 0 to GetNE() - 1 .", "title": "Getting all global indices of locally owned elements"}, {"location": "howto/element-local-global-numbering/#getting-global-indices-of-other-mesh-entities", "text": "A related topic is how to get global indices for other mesh entities, meaning vertices, edges, or faces. We use the convention that in 1D, edges and faces are actually vertices, and in 2D, faces are actually edges. Whereas elements have local and global indices that are used by ParFiniteElementSpace to determine ordering of local and global finite element degrees of freedom, there are no global indices for the other mesh entities (vertices, edges, and faces). That is, the other mesh entities only have local indices in MFEM, defined in the Mesh class. Although there is no definition or meaning to global indices for the other mesh entities, the user may wish to have global indices for the user's own purposes, and the capability to generate them is provided by the following functions in the ParMesh class: GetGlobalVertexIndices GetGlobalEdgeIndices GetGlobalFaceIndices It should be noted that AMR meshes are currently not supported by these functions (only conforming meshes). Also, since these global indices are meaningless to the MFEM library, their definition is arbitrary and based on lowest-order finite element spaces (H1 for vertices, Nedelec for edges, Raviart-Thomas for faces). There is no implementation of maps between local and global indices for these other mesh entities.", "title": "Getting global indices of other mesh entities"}, {"location": "howto/findpts/", "text": "HowTo: Use FindPointsGSLIB for high-order interpolation FindPointsGSLIB provides a wrapper for high-order interpolation via findpts , a set of routines that were developed as a part of the gather-scatter library, gslib . While findpts was originally developed for interpolation of grid functions in H1 for meshes with quadrilateral or hexahedron elements, FindPointsGSLIB also enables interpolation of functions in L2, H(div), H(curl) on meshes with triangle and tetrahedral elements. The key steps of using FindPointsGSLIB , as demonstrated in the gslib miniapps are: First, setup the internal data structures required by the gslib library for the mesh of interest. This is done by using the FindPointsGSLIB::Setup(mesh) method with the desired mfem::Mesh or mfem::ParMesh . Next, use the FindPointsGSLIB::FindPoints(xyz) method with the mfem::Vector xyz of physical-space coordinates where we seek to interpolate the desired grid function. At this step, findpts determines the computational coordinates ( q j = {e j , r j , p j }) for each point. These computational coordinates include the element number (e j in mfem::Array gsl_elem ) in which the point is found, the reference-space coordinates ( r j in mfem::Vector gsl_ref ) inside e j , and the MPI rank that the element is partitioned on (p j in mfem::Array gsl_proc ). 
FindPoints also returns a code ( mfem::Array gsl_code ) to indicate whether the point was found inside an element ( gsl_code[j] = 0 ), on the edge/face of an element ( gsl_code[j] = 1 ), or not found at all ( gsl_code[j] = 2 ) for the case when the point is located outside the mesh. Note that if a point ( x j ) is located outside the mesh within a certain tolerance, findpts tries to find the closest location on the mesh surface (i.e. gsl_code[j] = 1 ) and returns the distance ( mfem::Vector gsl_dist ) between the sought point and the point found on the mesh surface. Finally, use FindPointsGSLIB::Interpolate(u, ui) to interpolate the desired mfem::(Par)GridFunction u at the physical-space coordinates given by xyz and return the interpolated values in mfem::Vector ui . If u is in H1 , we use findpts for interpolation. Otherwise, we use findpts only for communicating computational coordinates of each point across MPI ranks, followed by MFEM's internal methods ( mfem::GridFunction::GetValues ) for interpolation. Note: the FindPointsGSLIB::FreeData() method must be used before the program is terminated to free up the memory set up internally by findpts during the setup phase. For convenience, the FindPointsGSLIB class provides methods such as FindPointsGSLIB::Interpolate(mesh, xyz, u, ui) that combine the three steps described above (setup, finding the computational coordinates of the sought points, and interpolation) into a single method. Please see the class definition for more details. Application of FindPointsGSLIB The gslib miniapps demonstrate several applications of FindPointsGSLIB : findpts/pfindpts miniapps demonstrate high-order interpolation of a function in H1 , L2 , H(div) , or H(curl) at an arbitrary set of points in physical space. field-diff miniapp demonstrates comparison of grid functions defined on two different meshes. field-interp miniapp demonstrates transfer of a grid function from one mesh onto another mesh. schwarz_ex1/ex1p miniapp demonstrates the use of an overlapping Schwarz method to solve the Poisson problem in overlapping meshes. Here, we use FindPointsGSLIB to transfer the solution between overlapping meshes to enforce Dirichlet conditions at the inter-domain boundaries. cht Navier miniapp demonstrates how a conjugate heat transfer problem can be solved with the incompressible Navier-Stokes equations and the unsteady heat equation solved on different grids. Here, FindPointsGSLIB is used to transfer the solution from one mesh to another to couple the two PDEs.
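As a minimal sketch of the Setup/FindPoints/Interpolate sequence described above (the parallel mesh pmesh, the grid function u, and the way the point coordinates are filled are assumptions for illustration):

// Sketch: interpolate a ParGridFunction u at a few physical-space points.
// Assumes pmesh and u already exist; npt points in 'dim' space dimensions.
const int dim = pmesh.Dimension(), npt = 100;
Vector xyz(npt * dim);          // physical coordinates of the sought points
// ... fill xyz, by default all x coordinates first, then all y's, then z's ...

FindPointsGSLIB finder(MPI_COMM_WORLD);
finder.Setup(pmesh);            // build gslib's internal data structures
finder.FindPoints(xyz);         // computational coordinates + owning ranks
Vector ui(npt);
finder.Interpolate(u, ui);      // values of u at the points, returned in ui
finder.FreeData();              // release gslib's internal storage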
Next, use the FindPointsGSLIB::FindPoints(xyz) method with the mfem::Vector xyz of physical-space coordinates where we seek to interpolate the desired grid function. At this step, findpts determines the computational coordinates ( q j = {e j , r j , p j }) for each point. These computational coordinates include the element number (e j in mfem::Array gsl_elem ) in which the point is found, the reference-space coordinates ( r j in mfem::Vector gsl_ref ) inside e j , and the MPI rank that the element is partitioned on (p j in mfem::Array gsl_proc ). FindPoints also returns a code ( mfem::Array gsl_code ) to indicate whether the point was found inside an element ( gsl_code[j] = 0 ), on the edge/face of an element ( gsl_code[j] = 1 ), or not found at all ( gsl_code[j] = 2 ) for the case when the point is located outside the mesh. Note that if a point ( x j ) is located outside the mesh within a certain tolerance, findpts tries to find the closest location on the mesh surface (i.e. gsl_code[j] = 1 ) and returns the distance ( mfem::Vector gsl_dist ) between the sought point and the point found on the mesh surface. Finally, use FindPointsGSLIB::Interpolate(u, ui) to interpolate the desired mfem::(Par)GridFunction u at the physical-space coordinates given by xyz and return the interpolated values in mfem::Vector ui . If u is in H1 , we use findpts for interpolation. Otherwise, we use findpts only for communicating computational coordinates of each point across MPI ranks, followed by MFEM's internal methods ( mfem::GridFunction::GetValues ) for interpolation. Note: the FindPointsGSLIB::FreeData() method must be used before the program is terminated to free up the memory set up internally by findpts during the setup phase. For convenience, the FindPointsGSLIB class provides methods such as FindPointsGSLIB::Interpolate(mesh, xyz, u, ui) that combine the three steps described above (setup, finding the computational coordinates of the sought points, and interpolation) into a single method. Please see the class definition for more details.", "title": "HowTo: Use FindPointsGSLIB for high-order interpolation"}, {"location": "howto/findpts/#application-of-findpointsgslib", "text": "The gslib miniapps demonstrate several applications of FindPointsGSLIB : findpts/pfindpts miniapps demonstrate high-order interpolation of a function in H1 , L2 , H(div) , or H(curl) at an arbitrary set of points in physical space. field-diff miniapp demonstrates comparison of grid functions defined on two different meshes. field-interp miniapp demonstrates transfer of a grid function from one mesh onto another mesh. schwarz_ex1/ex1p miniapp demonstrates the use of an overlapping Schwarz method to solve the Poisson problem in overlapping meshes. Here, we use FindPointsGSLIB to transfer the solution between overlapping meshes to enforce Dirichlet conditions at the inter-domain boundaries. cht Navier miniapp demonstrates how a conjugate heat transfer problem can be solved with the incompressible Navier-Stokes equations and the unsteady heat equation solved on different grids. Here, FindPointsGSLIB is used to transfer the solution from one mesh to another to couple the two PDEs.", "title": "Application of FindPointsGSLIB"}, {"location": "howto/howto-index/", "text": "HowTo Articles This is a growing collection of \"how-to\" articles on topics encountered by our users in practice. Please feel free to suggest a missing topic! \ud83d\udd0e Search the articles... 
Build, Install, and Test Overview of the MFEM Build and Test System Install MFEM with Spack Finite Elements Using Partial and Matrix-free Assembly Meshes Navigating Mesh Connectivity Parallel Element Numbering Finding Local Element Coordinates of Physical Points Working with Nonconforming Meshes for AMR Linear Algebra Using Block Operators and Matrices Solvers Using a Custom Preconditioner Boundaries Compute Outer Normals of Boundary Elements Using Periodic Boundaries", "title": "HowTo Articles"}, {"location": "howto/howto-index/#howto-articles", "text": "This is a growing collection of \"how-to\" articles on topics encountered by our users in practice. Please feel free to suggest a missing topic! \ud83d\udd0e Search the articles...", "title": "HowTo Articles"}, {"location": "howto/howto-index/#build-install-and-test", "text": "Overview of the MFEM Build and Test System Install MFEM with Spack", "title": "Build, Install, and Test"}, {"location": "howto/howto-index/#finite-elements", "text": "Using Partial and Matrix-free Assembly", "title": "Finite Elements"}, {"location": "howto/howto-index/#meshes", "text": "Navigating Mesh Connectivity Parallel Element Numbering Finding Local Element Coordinates of Physical Points Working with Nonconforming Meshes for AMR", "title": "Meshes"}, {"location": "howto/howto-index/#linear-algebra", "text": "Using Block Operators and Matrices", "title": "Linear Algebra"}, {"location": "howto/howto-index/#solvers", "text": "Using a Custom Preconditioner", "title": "Solvers"}, {"location": "howto/howto-index/#boundaries", "text": "Compute Outer Normals of Boundary Elements Using Periodic Boundaries", "title": "Boundaries"}, {"location": "howto/install-with-spack/", "text": "HowTo: Use Spack to install MFEM. MFEM can be built with make or CMake . But MFEM has also been packaged to be built with Spack . What does it mean to use Spack, and why use it? Packaging vs. Build-System In concrete terms, packaging with Spack here means that: Spack will interface with the build system: no make or CMake command required. Build options are specified as \"variants\". There may not be a variant for every option or combination of options allowed by building from source \"manually\". Spack will also install the dependencies, which may also be activated using \"variants\". (Note that so far, the MFEM Spack package interfaces with the MFEM makefile build system, not CMake.) The first takeaway is that using Spack may not allow as much configuration as a manual build, but it will manage the installation of dependencies. When to use Spack? Spack is a from-source package manager. So Spack will allow you to build mfem from source using the underlying makefile build system. To manage your libraries for development Spack is typically used to deploy software. You may use it to install MFEM among other libraries in a shared location for developers using MFEM as a dependency: all will have access to the same configuration and you will be able to reproduce this installation at will. But you will be limited to a predefined set of versions. Typically the releases and the latest state of the master branch. In that sense Spack is not meant to be used to develop in MFEM a priori . (For those looking to use Spack to develop in MFEM, see Spack workflow feature ) To install dependencies automatically Spack will automatically build the dependencies, which can be especially valuable to get started quickly with an advanced configuration of MFEM.
This is a great way to get students started quickly with a configuration that would otherwise require far too many steps. To reproduce a vetted configuration Spack is used in the GitLab CI context to automate the build of dependencies, easily update them, and improve reproducibility. For more details about this, explore MFEM Uberenv configuration , and the documentation mentioned in the README. How to use Spack to install MFEM. Using Spack is easy to start with, complex when it comes to getting exactly what you want, and can be tedious to maintain over the long term. Best practices for a long-term sane relationship with Spack Unless you want to develop in Spack, these rules will help keep things under control: Use a single Spack instance. Spack has environments that mimic the way Python environments work to allow you to partition things so that all the packages installed do not show up in a big mess. Stick to a release of Spack. Packages evolve along with the Spack source code. This means that updating Spack will likely affect reproducing the build of specs already installed. Expect to reinstall everything when you update Spack. Using Spack to install MFEM on LLNL's Lassen and Quartz systems These machines are used to test MFEM. The tests running in GitLab CI use Spack to manage MFEM dependencies. The configuration used for those tests can be reproduced exactly. This guarantees a working installation through Spack. Unfortunately, only a handful of configurations are being tested. But this is a good starting point to explore further. See MFEM Uberenv configuration for more details.", "title": "HowTo: Use Spack to install MFEM."}, {"location": "howto/install-with-spack/#howto-use-spack-to-install-mfem", "text": "MFEM can be built with make or CMake . But MFEM has also been packaged to be built with Spack .", "title": "HowTo: Use Spack to install MFEM."}, {"location": "howto/install-with-spack/#what-does-it-mean-to-use-spack-and-why-use-it", "text": "", "title": "What does it mean to use Spack, and why use it?"}, {"location": "howto/install-with-spack/#packaging-vs-build-system", "text": "In concrete terms, packaging with Spack here means that: Spack will interface with the build system: no make or CMake command required. Build options are specified as \"variants\". There may not be a variant for every option or combination of options allowed by building from source \"manually\". Spack will also install the dependencies, which may also be activated using \"variants\". (Note that so far, the MFEM Spack package interfaces with the MFEM makefile build system, not CMake.) The first takeaway is that using Spack may not allow as much configuration as a manual build, but it will manage the installation of dependencies.", "title": "Packaging vs. Build-System"}, {"location": "howto/install-with-spack/#when-to-use-spack", "text": "Spack is a from-source package manager. So Spack will allow you to build mfem from source using the underlying makefile build system.", "title": "When to use Spack?"}, {"location": "howto/install-with-spack/#to-manage-your-libraries-for-development", "text": "Spack is typically used to deploy software. You may use it to install MFEM among other libraries in a shared location for developers using MFEM as a dependency: all will have access to the same configuration and you will be able to reproduce this installation at will. But you will be limited to a predefined set of versions. Typically the releases and the latest state of the master branch. 
In that sense Spack is not meant to be used to develop in MFEM a priori . (For those looking to use Spack to develop in MFEM, see Spack workflow feature )", "title": "To manage your libraries for development"}, {"location": "howto/install-with-spack/#to-install-dependencies-automatically", "text": "Spack will automatically build the dependencies, which can be especially valuable to get started quickly with an advanced configuration of MFEM. This is a great way to get students started quickly with a configuration that would otherwise require far too many steps.", "title": "To install dependencies automatically"}, {"location": "howto/install-with-spack/#to-reproduce-a-vetted-configuration", "text": "Spack is used in the GitLab CI context to automate the build of dependencies, easily update them, and improve reproducibility. For more details about this, explore MFEM Uberenv configuration , and the documentation mentioned in the README.", "title": "To reproduce a vetted configuration"}, {"location": "howto/install-with-spack/#how-to-use-spack-to-install-mfem", "text": "Using Spack is easy to start with, complex when it comes to getting exactly what you want, and can be tedious to maintain over the long term.", "title": "How to use Spack to install MFEM."}, {"location": "howto/install-with-spack/#best-practices-for-a-long-term-sane-relationship-with-spack", "text": "Unless you want to develop in Spack, these rules will help keep things under control: Use a single Spack instance. Spack has environments that mimic the way Python environments work to allow you to partition things so that all the packages installed do not show up in a big mess. Stick to a release of Spack. Packages evolve along with the Spack source code. This means that updating Spack will likely affect reproducing the build of specs already installed. Expect to reinstall everything when you update Spack.", "title": "Best practices for a long-term sane relationship with Spack"}, {"location": "howto/install-with-spack/#using-spack-to-install-mfem-on-llnls-lassen-and-quartz-systems", "text": "These machines are used to test MFEM. The tests running in GitLab CI use Spack to manage MFEM dependencies. The configuration used for those tests can be reproduced exactly. This guarantees a working installation through Spack. Unfortunately, only a handful of configurations are being tested. But this is a good starting point to explore further. See MFEM Uberenv configuration for more details.", "title": "Using Spack to install MFEM on LLNL's Lassen and Quartz systems"}, {"location": "howto/nav-mesh-connectivity/", "text": "HowTo: Navigate the connections between mesh primitives with Table objects Elements, faces, edges, and vertices are all connected to each other to form a cohesive mesh. In some lower-level applications it may be necessary to navigate the MFEM mesh through these connections to find the mesh primitives you need. Each of the mesh primitives has its own numbering, and MFEM represents the connections between these primitives in Table objects ( general/table.hpp ) that are stored in the Mesh object ( mesh/mesh.hpp ). 
You can access these Table objects through 7 different accessor methods in Mesh: Mesh Method Dimension Mesh object owns data const Table &ElementToElementTable() 1D, 2D, 3D Yes const Table &ElementToFaceTable() 1D, 2D, 3D Yes const Table &ElementToEdgeTable() 1D, 2D, 3D Yes Table *GetFaceEdgeTable() 3D Yes Table *GetEdgeVertexTable() 1D, 2D, 3D Yes Table *GetVertexToElementTable() 1D, 2D, 3D No Table *GetFaceToElementTable() 1D, 2D, 3D No The interfaces for these accessors are unfortunately not uniform, and care must be taken to use them properly. For example the Mesh object owns the data for most, but not all of them, so care must be taken to delete the Table objects returned by the last two. In addition, two of the methods are only defined in 3D because they use the strict definitions of faces and edges, while the others use the looser definition by letting the faces be edges in 2D and the edges vertices in 1D. Once you have the table with the information you want, you can access it through the table methods as in the following example: const Table &elem_edge = mesh.ElementToEdgeTable(); int num_elems = mesh.GetNE(); for (int elem_id = 0; elem_id < num_elems; elem_id++) { int num_edges = elem_edge.RowSize(elem_id); const int *edges = elem_edge.GetRow(elem_id); for (int edgei = 0; edgei < num_edges; edgei++) { int edge_id = edges[edgei]; .... Do something with the edge ID .... } } Another useful method related to navigating mesh connections with these Table objects is the Transpose method. This method takes an A_to_B table and transposes it into a B_to_A table. Usage is as follows: Table &face_edge = *mesh.GetFaceEdgeTable(); Table edge_face; Transpose(face_edge, edge_face); int num_edges = mesh.GetNEdges(); for (int edge_id = 0; edge_id < num_edges; edge_id++) { .... }", "title": "HowTo: Navigate the connections between mesh primitives with Table objects"}, {"location": "howto/nav-mesh-connectivity/#howto-navigate-the-connections-between-mesh-primitives-with-table-objects", "text": "Elements, faces, edges, and vertices are all connected to each other to form a cohesive mesh. In some lower-level applications it may be necessary to navigate the MFEM mesh through these connections to find the mesh primitives you need. Each of the mesh primitives has its own numbering, and MFEM represents the connections between these primitives in Table objects ( general/table.hpp ) that are stored in the Mesh object ( mesh/mesh.hpp ). You can access these Table objects through 7 different accessor methods in Mesh: Mesh Method Dimension Mesh object owns data const Table &ElementToElementTable() 1D, 2D, 3D Yes const Table &ElementToFaceTable() 1D, 2D, 3D Yes const Table &ElementToEdgeTable() 1D, 2D, 3D Yes Table *GetFaceEdgeTable() 3D Yes Table *GetEdgeVertexTable() 1D, 2D, 3D Yes Table *GetVertexToElementTable() 1D, 2D, 3D No Table *GetFaceToElementTable() 1D, 2D, 3D No The interfaces for these accessors are unfortunately not uniform, and care must be taken to use them properly. For example the Mesh object owns the data for most, but not all of them, so care must be taken to delete the Table objects returned by the last two. In addition, two of the methods are only defined in 3D because they use the strict definitions of faces and edges, while the others use the looser definition by letting the faces be edges in 2D and the edges vertices in 1D. 
Once you have the table with the information you want, you can access it through the table methods as in the following example: const Table &elem_edge = mesh.ElementToEdgeTable(); int num_elems = mesh.GetNE(); for (int elem_id = 0; elem_id < num_elems; elem_id++) { int num_edges = elem_edge.RowSize(elem_id); const int *edges = elem_edge.GetRow(elem_id); for (int edgei = 0; edgei < num_edges; edgei++) { int edge_id = edges[edgei]; .... Do something with the edge ID .... } } Another useful method related to navigating mesh connections with these Table objects is the Transpose method. This method takes an A_to_B table and transposes it into a B_to_A table. Usage is as follows: Table &face_edge = *mesh.GetFaceEdgeTable(); Table edge_face; Transpose(face_edge, edge_face); int num_edges = mesh.GetNEdges(); for (int edge_id = 0; edge_id < num_edges; edge_id++) { .... }", "title": "HowTo: Navigate the connections between mesh primitives with Table objects"}, {"location": "howto/ncmesh/", "text": "HowTo: Nonconforming and AMR meshes The Mesh class provides basic element refinement capabilities: All elements may be refined uniformly with Mesh::UniformRefinement . Local refinement is supported, but only for simplex elements. The method Mesh::GeneralRefinement uses recursive bisection in this case. These basic refinement methods preserve mesh conformity, i.e., no hanging nodes are created. This also means that quadrilaterals and hexahedra cannot be refined locally by the Mesh class. For more advanced AMR, MFEM has the class NCMesh : Tensor product element refinement (quad, hex, prism) is supported, including anisotropic refinement. Hanging nodes are created and handled transparently. Triangles and tetrahedra use \"red\" (isotropic) refinement, also producing hanging nodes in this mode. Derefinement (coarsening) of previously refined elements is possible. In parallel, the mesh can be load balanced. The user does not interact directly with the NCMesh class \u2014 it is created behind the scenes, and the Mesh class in nonconforming mode, continually updated to contain the finest elements of the refinement hierarchy, still serves as an interface for the user and other MFEM classes. To switch to the nonconforming mode (or convert an existing conforming Mesh ), you need to call EnsureNCMesh , typically at the beginning after loading the mesh: Mesh *mesh = new Mesh(mesh_file, 1, 1); mesh->EnsureNCMesh(true); The boolean parameter, if true , forces simplex meshes to use nonconforming refinement (the default is false ). Nonconforming refinement Once the Mesh is in nonconforming mode, you can simply call Mesh::GeneralRefinement to locally refine a subset of elements: Array<int> refinement_list; for (int i = 0; i < mesh->GetNE(); i++) { if (/*element i refinement condition*/) { refinement_list.Append(i); } } mesh->GeneralRefinement(refinement_list); The resulting hanging nodes will be treated transparently by the FiniteElementSpace and BilinearForm classes: FiniteElementSpace will internally construct a conforming interpolation matrix $P$ that, when applied to a vector of unconstrained (\"true\") DOFs, will augment the vector with interpolated constrained DOFs. Once the linear system $Ax = b$ is assembled, BilinearForm::FormLinearSystem will eliminate constrained nodes by transforming the linear system to $P^TAPx = P^Tb$ (see ex1.cpp ). After the reduced system is solved, the conforming solution on all nodes is recovered as $y = P x$ with BilinearForm::RecoverFEMSolution . 
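A brief sketch of that assemble/solve/recover cycle on the nonconformingly refined mesh, following the pattern referenced above from ex1.cpp; the finite element space fespace, the essential boundary DOF list, and the plain CG solver are assumptions.

// Sketch: standard solve cycle after nonconforming refinement. The
// conforming interpolation P is handled internally by the calls below.
ConstantCoefficient one(1.0);
BilinearForm a(&fespace);                  // fespace: assumed FiniteElementSpace
a.AddDomainIntegrator(new DiffusionIntegrator(one));
a.Assemble();

LinearForm b(&fespace);
b.AddDomainIntegrator(new DomainLFIntegrator(one));
b.Assemble();

GridFunction x(&fespace);
x = 0.0;

Array<int> ess_tdof_list;                  // assumed: filled from boundary attributes
OperatorPtr A;
Vector B, X;
a.FormLinearSystem(ess_tdof_list, x, b, A, X, B);   // builds P^T A P and P^T b

CGSolver cg;
cg.SetOperator(*A);
cg.SetRelTol(1e-12);
cg.SetMaxIter(2000);
cg.Mult(B, X);                             // solve the reduced system

a.RecoverFEMSolution(X, b, x);             // x = P X: conforming solution on all nodes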
Limiting the level of hanging nodes By default, MFEM does not limit the sizes of adjacent elements in nonconforming meshes. For some applications, it may be necessary to ensure that the refinement level of neighboring elements differs by at most one, for example. The optional parameter nc_limit of Mesh::GeneralRefinement can be used to control the maximum level of nonconformity. If nc_limit is greater than zero, the method will automatically perform additional refinements to make sure the difference of refinement levels of adjacent elements is at most nc_limit . Anisotropic refinement Uniquely, MFEM offers the capability to perform anisotropic refinement of tensor product elements in both 2D and 3D. The method Mesh::GeneralRefinement has two overloads, one taking a simple list of elements to refine (as seen above), and the other taking a list of struct Refinement { int index; char ref_type; } , where one can specify a refinement type for each element in the list: Array<Refinement> refinement_list; refinement_list.Append(Refinement(0, 2)); refinement_list.Append(Refinement(1, 4)); mesh->GeneralRefinement(refinement_list); This code will refine the first element (index 0) of the mesh in the Y direction only (provided it is a quad or hex element) and the second element (index 1) in the Z direction only. The directions are given in the element reference coordinates and are encoded as follows: Note that the refinement type is encoded as a 3-bit number, where bits 0, 1, 2 correspond to the X, Y, Z directions, respectively. Other element geometries allow fewer but similar refinement types: triangle (3), quadrilateral (1, 2, 3), tetrahedron (7), prism (3, 4, 7). In 3D meshes with anisotropic refinements it is easy to arrive at conflicting situations, where the refined faces of adjacent elements are not subsets of each other. For example, running the above code on a mesh with two hexahedra adjacent in the X direction will create an interface that cannot be constrained correctly. In such cases, MFEM will automatically adjust one side of the interface with additional refinements (called forced refinements) to ensure that the mesh remains a valid FEM mesh. In pathological cases the forced refinements may propagate. Using a reasonable nc_limit may reduce this effect. Nevertheless, a valid mesh is produced in all cases. Derefinement To coarsen elements, use the method Mesh::DerefineByError . The interface is different from refinement, because it is not possible to coarsen arbitrary groups of fine elements: it is only possible to reintroduce previously existing coarse elements by undoing their refinement (hence the term \"derefinement\"). Since one cannot supply the indices of elements that no longer exist in the Mesh class (the refinement trees are kept internal to NCMesh ), the method DerefineByError works indirectly by taking an array of \"error\" values corresponding to each element of the current Mesh . If the sum of error values of the children of some coarse element is below a supplied threshold, the children are removed and the coarse element is restored in Mesh . If the user specifies a nonzero nc_limit , care is taken not to derefine elements that are needed to keep the required level of nonconformity. Note: derefinement is not yet supported for meshes containing 3D anisotropic refinements. Parallel nonconforming meshes Just as the Mesh class has a parallel counterpart ( ParMesh ), so does the NCMesh class have a parallel descendant: ParNCMesh . 
The parallel class is again kept internal and the user can continue to interact with the standard ParMesh class (see examples ex1p , ex6p and ex15p ). The refinement hierarchy in parallel NC mode is fully distributed and scales to billions of elements and hundreds of thousands of MPI tasks. Ghost elements are automatically tracked by the ParNCMesh class, so that a parallel conforming interpolation matrix can be constructed by ParFiniteElementSpace . Depending on the assembly level, ParBilinearForm will either explicitly assemble the parallel $P^TAP$ system using the Hypre library, or the action of the $P$ matrix will be applied during solver iterations. Parallel refinement is still done through Mesh::GeneralRefinement inherited by the ParMesh class. The method takes local element indices and works the same as in serial. All parallel concerns such as keeping the ghost layers synchronized are handled internally in ParNCMesh . Note: parallel anisotropic refinement of 3D meshes is not supported yet. After each mesh operation (refinement, derefinement, load balancing) the ParMesh is updated to reflect the current parallel mesh state (minus the ghost elements, which are not exported to ParMesh ). Communication groups, used in conforming mode for reductions/broadcasts over parallel solution vectors, are approximated in the NC mode as if the mesh was cut along the nonconforming interfaces. Load balancing In conforming mode, a serial Mesh can only be partitioned statically (with METIS) when constructing a ParMesh . In nonconforming mode, the internal ParNCMesh class is capable of load balancing the distributed mesh at any time. This functionality is available to the user through ParMesh::Rebalance (see ex6p and ex15p ). The dynamic load balancing algorithm is based on partitioning a space-filling curve (SFC) that naturally arises when traversing the distributed refinement trees. Compared to spectral partitioners like METIS the partitions are not as high quality but the process is extremely fast and scales to hundreds of thousands of processors. For best results with SFC-based partitioning, one condition has to be met: the elements of the coarse Mesh from which the ParMesh is constructed need to be ordered, ideally as a sequence of face-neighbors. This makes it possible for ParNCMesh to order the leaves of all refinement trees into a global linear sequence, which when equipartitioned should produce compact (albeit not minimal surface) mesh partitions. Take for example a coarse mesh produced by the polar-nc miniapp. Except for two discontinuities, the elements are mostly ordered as a sequence of face-neighbors: When we start refining elements (in both serial and parallel), MFEM will try to keep the space-filling curve continuous by inserting local Hilbert curves in the refined areas (press Ctrl+O in GLVis to visualize the ordering curve): In a parallel computation, the global curve is then used for fast assignment of elements to MPI ranks. In the following run of ex15p , each processor is assigned the same number of elements (+/- one element). Note that the last partition is discontinuous due to a jump in ordering in the coarse mesh. This only affects the efficiency of MPI communication \u2014 the numerical results will be the same regardless of the partitioning. MFEM provides several methods to help with mesh ordering: Procedurally generated rectangular grids ( Mesh::MakeCartesian2D , Mesh::MakeCartesian3D and also MFEM INLINE mesh v1.0 files) are by default ordered along a pseudo-Hilbert curve. 
Note that even grid dimensions are recommended, as explained here . General unstructured meshes may be ordered by a spatial sort algorithm ( Mesh::GetHilbertElementOrdering ). This is a fast method that will leave a number of jumps in complex meshes, but it is still highly recommended over not ordering the mesh at all. High-quality orderings of general meshes can be obtained with the Gecko library, now included directly in MFEM and available as Mesh::GetGeckoElementOrdering . The optimization algorithm used is more costly than a simple spatial sort, but it should produce better orderings for meshes with complex geometries. Beware the exponential cost of increasing the window parameter. Large meshes should probably be ordered in a preprocessing step (you may use the mesh-explorer miniapp for that). Nonconforming mesh I/O Nonconforming meshes have their own file format MFEM NC mesh v1.0 , which supports all the additional internal structures (refinement trees, hanging nodes, etc.) and works for both serial and parallel NC meshes. The method ParMesh::ParPrint will automatically choose the right format and can be used to save and restart an AMR computation, as demonstrated in example ex6p . ParMesh::ParPrint should not be confused with the method ParMesh::Print , an analog of Mesh::Print , which is only suitable for visualization, as it uses the serial MFEM mesh v1.0 format and only adds the parallel shared faces to the output. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "HowTo: Nonconforming and AMR meshes"}, {"location": "howto/ncmesh/#howto-nonconforming-and-amr-meshes", "text": "The Mesh class provides basic element refinement capabilities: All elements may be refined uniformly with Mesh::UniformRefinement . Local refinement is supported, but only for simplex elements. The method Mesh::GeneralRefinement uses recursive bisection in this case. These basic refinement methods preserve mesh conformity, i.e., no hanging nodes are created. This also means that quadrilaterals and hexahedra cannot be refined locally by the Mesh class. For more advanced AMR, MFEM has the class NCMesh : Tensor product element refinement (quad, hex, prism) is supported, including anisotropic refinement. Hanging nodes are created and handled transparently. Triangles and tetrahedra use \"red\" (isotropic) refinement, also producing hanging nodes in this mode. Derefinement (coarsening) of previously refined elements is possible. In parallel, the mesh can be load balanced. The user does not interact directly with the NCMesh class \u2014 it is created behind the scenes, and the Mesh class in nonconforming mode, continually updated to contain the finest elements of the refinement hierarchy, still serves as an interface for the user and other MFEM classes. 
To switch to the nonconforming mode (or convert an existing conforming Mesh ), you need to call EnsureNCMesh , typically at the beginning after loading the mesh: Mesh *mesh = new Mesh(mesh_file, 1, 1); mesh->EnsureNCMesh(true); The boolean parameter, if true , forces simplex meshes to use nonconforming refinement (the default is false ).", "title": "HowTo: Nonconforming and AMR meshes"}, {"location": "howto/ncmesh/#nonconforming-refinement", "text": "Once the Mesh is in nonconforming mode, you can simply call Mesh::GeneralRefinement to locally refine a subset of elements: Array<int> refinement_list; for (int i = 0; i < mesh->GetNE(); i++) { if (/*element i refinement condition*/) { refinement_list.Append(i); } } mesh->GeneralRefinement(refinement_list); The resulting hanging nodes will be treated transparently by the FiniteElementSpace and BilinearForm classes: FiniteElementSpace will internally construct a conforming interpolation matrix $P$ that, when applied to a vector of unconstrained (\"true\") DOFs, will augment the vector with interpolated constrained DOFs. Once the linear system $Ax = b$ is assembled, BilinearForm::FormLinearSystem will eliminate constrained nodes by transforming the linear system to $P^TAPx = P^Tb$ (see ex1.cpp ). After the reduced system is solved, the conforming solution on all nodes is recovered as $y = P x$ with BilinearForm::RecoverFEMSolution .", "title": "Nonconforming refinement"}, {"location": "howto/ncmesh/#limiting-the-level-of-hanging-nodes", "text": "By default, MFEM does not limit the sizes of adjacent elements in nonconforming meshes. For some applications, it may be necessary to ensure that the refinement level of neighboring elements differs by at most one, for example. The optional parameter nc_limit of Mesh::GeneralRefinement can be used to control the maximum level of nonconformity. If nc_limit is greater than zero, the method will automatically perform additional refinements to make sure the difference of refinement levels of adjacent elements is at most nc_limit .", "title": "Limiting the level of hanging nodes"}, {"location": "howto/ncmesh/#anisotropic-refinement", "text": "Uniquely, MFEM offers the capability to perform anisotropic refinement of tensor product elements in both 2D and 3D. The method Mesh::GeneralRefinement has two overloads, one taking a simple list of elements to refine (as seen above), and the other taking a list of struct Refinement { int index; char ref_type; } , where one can specify a refinement type for each element in the list: Array<Refinement> refinement_list; refinement_list.Append(Refinement(0, 2)); refinement_list.Append(Refinement(1, 4)); mesh->GeneralRefinement(refinement_list); This code will refine the first element (index 0) of the mesh in the Y direction only (provided it is a quad or hex element) and the second element (index 1) in the Z direction only. The directions are given in the element reference coordinates and are encoded as follows: Note that the refinement type is encoded as a 3-bit number, where bits 0, 1, 2 correspond to the X, Y, Z directions, respectively. Other element geometries allow fewer but similar refinement types: triangle (3), quadrilateral (1, 2, 3), tetrahedron (7), prism (3, 4, 7). In 3D meshes with anisotropic refinements it is easy to arrive at conflicting situations, where the refined faces of adjacent elements are not subsets of each other. 
For example, running the above code on a mesh with two hexahedra adjacent in the X direction will create an interface that cannot be constrained correctly. In such cases, MFEM will automatically adjust one side of the interface with additional refinements (called forced refinements) to ensure that the mesh remains a valid FEM mesh. In pathological cases the forced refinements may propagate. Using a reasonable nc_limit may reduce this effect. Nevertheless, a valid mesh is produced in all cases.", "title": "Anisotropic refinement"}, {"location": "howto/ncmesh/#derefinement", "text": "To coarsen elements, use the method Mesh::DerefineByError . The interface is different from refinement, because it is not possible to coarsen arbitrary groups of fine elements: it is only possible to reintroduce previously existing coarse elements by undoing their refinement (hence the term \"derefinement\"). Since one cannot supply the indices of elements that no longer exist in the Mesh class (the refinement trees are kept internal to NCMesh ), the method DerefineByError works indirectly by taking an array of \"error\" values corresponding to each element of the current Mesh . If the sum of error values of the children of some coarse element is below a supplied threshold, the children are removed and the coarse element is restored in Mesh . If the user specifies a nonzero nc_limit , care is taken not to derefine elements that are needed to keep the required level of nonconformity. Note: derefinement is not yet supported for meshes containing 3D anisotropic refinements.", "title": "Derefinement"}, {"location": "howto/ncmesh/#parallel-nonconforming-meshes", "text": "Just as the Mesh class has a parallel counterpart ( ParMesh ), so does the NCMesh class have a parallel descendant: ParNCMesh . The parallel class is again kept internal and the user can continue to interact with the standard ParMesh class (see examples ex1p , ex6p and ex15p ). The refinement hierarchy in parallel NC mode is fully distributed and scales to billions of elements and hundreds of thousands of MPI tasks. Ghost elements are automatically tracked by the ParNCMesh class, so that a parallel conforming interpolation matrix can be constructed by ParFiniteElementSpace . Depending on the assembly level, ParBilinearForm will either explicitly assemble the parallel $P^TAP$ system using the Hypre library, or the action of the $P$ matrix will be applied during solver iterations. Parallel refinement is still done through Mesh::GeneralRefinement inherited by the ParMesh class. The method takes local element indices and works the same as in serial. All parallel concerns such as keeping the ghost layers synchronized are handled internally in ParNCMesh . Note: parallel anisotropic refinement of 3D meshes is not supported yet. After each mesh operation (refinement, derefinement, load balancing) the ParMesh is updated to reflect the current parallel mesh state (minus the ghost elements, which are not exported to ParMesh ). Communication groups, used in conforming mode for reductions/broadcasts over parallel solution vectors, are approximated in the NC mode as if the mesh was cut along the nonconforming interfaces.", "title": "Parallel nonconforming meshes"}, {"location": "howto/ncmesh/#load-balancing", "text": "In conforming mode, a serial Mesh can only be partitioned statically (with METIS) when constructing a ParMesh . In nonconforming mode, the internal ParNCMesh class is capable of load balancing the distributed mesh at any time. 
This functionality is available to the user through ParMesh::Rebalance (see ex6p and ex15p ). The dynamic load balancing algorithm is based on partitioning a space-filling curve (SFC) that naturally arises when traversing the distributed refinement trees. Compared to spectral partitioners like METIS the partitions are not as high quality but the process is extremely fast and scales to hundreds of thousands of processors. For best results with SFC-based partitioning, one condition has to be met: the elements of the coarse Mesh from which the ParMesh is constructed need to be ordered, ideally as a sequence of face-neighbors. This makes it possible for ParNCMesh to order the leaves of all refinement trees into a global linear sequence, which when equipartitioned should produce compact (albeit not minimal surface) mesh partitions. Take for example a coarse mesh produced by the polar-nc miniapp. Except for two discontinuities, the elements are mostly ordered as a sequence of face-neighbors: When we start refining elements (in both serial and parallel), MFEM will try to keep the space-filling curve continuous by inserting local Hilbert curves in the refined areas (press Ctrl+O in GLVis to visualize the ordering curve): In a parallel computation, the global curve is then used for fast assignment of elements to MPI ranks. In the following run of ex15p , each processor is assigned the same number of elements (+/- one element). Note that the last partition is discontinuous due to a jump in ordering in the coarse mesh. This only affects the efficiency of MPI communication \u2014 the numerical results will be the same regardless of the partitioning. MFEM provides several methods to help with mesh ordering: Procedurally generated rectangular grids ( Mesh::MakeCartesian2D , Mesh::MakeCartesian3D and also MFEM INLINE mesh v1.0 files) are by default ordered along a pseudo-Hilbert curve. Note that even grid dimensions are recommended, as explained here . General unstructured meshes may be ordered by a spatial sort algorithm ( Mesh::GetHilbertElementOrdering ). This is a fast method that will leave a number of jumps in complex meshes, but it is still highly recommended over not ordering the mesh at all. High-quality orderings of general meshes can be obtained with the Gecko library, now included directly in MFEM and available as Mesh::GetGeckoElementOrdering . The optimization algorithm used is more costly than a simple spatial sort, but it should produce better orderings for meshes with complex geometries. Beware the exponential cost of increasing the window parameter. Large meshes should probably be ordered in a preprocessing step (you may use the mesh-explorer miniapp for that).", "title": "Load balancing"}, {"location": "howto/ncmesh/#nonconforming-mesh-io", "text": "Nonconforming meshes have their own file format MFEM NC mesh v1.0 , which supports all the additional internal structures (refinement trees, hanging nodes, etc.) and works for both serial and parallel NC meshes. The method ParMesh::ParPrint will automatically choose the right format and can be used to save and restart an AMR computation, as demonstrated in example ex6p . ParMesh::ParPrint should not be confused with the method ParMesh::Print , an analog of Mesh::Print , which is only suitable for visualization, as it uses the serial MFEM mesh v1.0 format and only adds the parallel shared faces to the output. 
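A minimal sketch of such a save/restart cycle (assuming an existing ParMesh pmesh and the MPI rank myid; the file naming is illustrative, not the exact ex6p convention, and the snippet needs <fstream>, <sstream> and <iomanip>):
std::ostringstream fname;
fname << \"checkpoint-mesh.\" << std::setfill('0') << std::setw(6) << myid;
std::ofstream ofs(fname.str());
ofs.precision(16);
pmesh.ParPrint(ofs); // writes the nonconforming/parallel format, one file per MPI rank
// To restart later, each rank can read back its own file:
// std::ifstream ifs(fname.str());
// ParMesh restarted(MPI_COMM_WORLD, ifs);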
MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Nonconforming mesh I/O"}, {"location": "howto/outer_normals/", "text": "HowTo: Compute the outer normals of the boundary elements of a mesh In numerous applications it is important to obtain the outer normals to the boundary of your mesh. In 2D this will simply be the vector normal to the vector tangent to the boundary point of interest, and in 3D this will be the vector normal to the plane tangent to the boundary point of interest. An easy way to obtain these vector/plane tangents is to use the Jacobian of the element transformation at the point of interest as in the following example: // Loop through the boundary elements and compute the normals at the centers of those elements for (int it = 0; it < fespace->GetNBE(); it++) { Vector normal(dim); ElementTransformation *Trans = fespace->GetBdrElementTransformation(it); Trans->SetIntPoint(&Geometries.GetCenter(Trans->GetGeometryType())); CalcOrtho(Trans->Jacobian(), normal); ... Do something of interest with the normals } The ElementTransformation object handles transformations between the elements and their corresponding reference elements. We start by getting the ElementTransformation object for the boundary element we are interested in. In order to move forward, we then need to set the point in the element that we are interested in with the SetIntPoint method. In this example we are setting it to the geometric center of the boundary element. Finally, we can get the Jacobian of the boundary element and use the tangent vector/plane that it defines to compute the boundary element normal at the boundary element center. The CalcOrtho method simply takes a 2x1 or 3x2 matrix and computes the normal to the column vectors of that matrix. It should be noted that the vectors computed in this process are not necessarily of unit length.", "title": "HowTo: Compute the outer normals of the boundary elements of a mesh"}, {"location": "howto/outer_normals/#howto-compute-the-outer-normals-of-the-boundary-elements-of-a-mesh", "text": "In numerous applications it is important to obtain the outer normals to the boundary of your mesh. In 2D this will simply be the vector normal to the vector tangent to the boundary point of interest, and in 3D this will be the vector normal to the plane tangent to the boundary point of interest. An easy way to obtain these vector/plane tangents is to use the Jacobian of the element transformation at the point of interest as in the following example: // Loop through the boundary elements and compute the normals at the centers of those elements for (int it = 0; it < fespace->GetNBE(); it++) { Vector normal(dim); ElementTransformation *Trans = fespace->GetBdrElementTransformation(it); Trans->SetIntPoint(&Geometries.GetCenter(Trans->GetGeometryType())); CalcOrtho(Trans->Jacobian(), normal); ... Do something of interest with the normals } The ElementTransformation object handles transformations between the elements and their corresponding reference elements. We start by getting the ElementTransformation object for the boundary element we are interested in. In order to move forward, we then need to set the point in the element that we are interested in with the SetIntPoint method. In this example we are setting it to the geometric center of the boundary element. 
Finally, we can get the Jacobian of the boundary element and use the tangent vector/plane that it defines to compute the boundary element normal at the boundary element center. The CalcOrtho method simply takes a 2x1 or 3x2 matrix and computes the normal to the column vectors of that matrix. It should be noted that the vectors computed in this process are not necessarily of unit length.", "title": "HowTo: Compute the outer normals of the boundary elements of a mesh"}, {"location": "howto/periodic-boundaries/", "text": "HowTo: Use periodic meshes and enforce periodic boundary conditions In order to solve a problem with periodic boundary conditions, the Mesh object should have a periodic topology. This can be achieved in one of two ways: By reading a periodic mesh from disk. By identifying periodic vertices (e.g. through a translation vector), and then creating a new periodic mesh. Reading a periodic mesh from disk MFEM supports reading periodic meshes from a variety of mesh file formats . Several periodic sample meshes are included with MFEM in the data directory: MFEM format: periodic-square.mesh : a 3x3 Cartesian mesh of the (periodic) square [-1,1]^2 periodic-hexagon.mesh : a quad mesh of a periodic hexagonal domain with 12 elements periodic-cube.mesh : a 3x3x3 Cartesian mesh of the (periodic) cube [-1,1]^3 Gmsh format (the corresponding .geo files are also included): periodic-square.msh : a 4x4 Cartesian mesh of the (periodic) unit square periodic-cube.msh : a 4x4x4 Cartesian mesh of the (periodic) unit cube periodic-annulus-sector.msh : a 2D mesh of an annular sector with periodic boundaries defined by a rotation periodic-torus-sector.msh : a 3D mesh of a torus sector with periodic boundaries defined by a rotation Any of these meshes can be loaded as usual using MFEM (e.g. using the -m flag in the MFEM examples ), and the periodic topology will be automatically handled. (Note that some periodic boundaries (such as periodic-cube.mesh ) contain so-called \"internal boundary elements\", which may result in boundary conditions being enforced for some examples.) Example 0 on Periodic Annulus Example 0 on Periodic Torus Creating a periodic mesh by identifying vertices MFEM can also create periodic meshes from non-periodic meshes by identifying periodic vertices. The function Mesh::MakePeriodic creates a periodic mesh from a non-periodic mesh given such a vertex identification. For example, if we wish to create a periodic line segment, then we would like to identify the two endpoints of the line segment since they represent the same point in the periodic topology. An example of creating this vertex mapping in the case of a line segment is described here . It is often more convenient to describe the periodicity constraints in terms of translation vectors . Any two vertices that are coincident under any of the given translation vectors will be considered topologically identical. MFEM can generate a vertex mapping from these translation vectors using the Mesh::CreatePeriodicVertexMapping method. An example using this functionality to create a mesh of the periodic square is shown here . (Note that periodic meshes use a discontinuous nodal function for mapping the reference space to the physical one (see Mesh::SetCurvature ). The vertex coordinates are no longer meaningful after calling Mesh::MakePeriodic . You should refrain from accessing them and use the nodal grid function returned by Mesh::GetNodes or single nodes through Mesh::GetNode instead.) 
Example: creating a periodic line segment with a vertex map Mesh mesh = Mesh::MakeCartesian1D(10); // Make a mesh of the unit interval with 10 elements // Create the vertex mapping. To begin, create the identity mapping. std::vector<int> v2v(mesh.GetNV()); for (int i = 0; i < mesh.GetNV(); ++i) { v2v[i] = i; } // Modify the mapping so that the last vertex gets mapped to the first vertex. v2v.back() = 0; Mesh periodic_mesh = Mesh::MakePeriodic(mesh, v2v); // Create the periodic mesh Example: creating a periodic square with translation vectors // Create a 10x10 quad mesh of the unit square Mesh mesh = Mesh::MakeCartesian2D(10, 10, Element::QUADRILATERAL); // Create translation vectors defining the periodicity Vector x_translation({1.0, 0.0}); Vector y_translation({0.0, 1.0}); std::vector<Vector> translations = {x_translation, y_translation}; // Create the periodic mesh using the vertex mapping defined by the translation vectors Mesh periodic_mesh = Mesh::MakePeriodic(mesh, mesh.CreatePeriodicVertexMapping(translations));", "title": "HowTo: Use periodic meshes and enforce periodic boundary conditions"}, {"location": "howto/periodic-boundaries/#howto-use-periodic-meshes-and-enforce-periodic-boundary-conditions", "text": "In order to solve a problem with periodic boundary conditions, the Mesh object should have a periodic topology. This can be achieved in one of two ways: By reading a periodic mesh from disk. By identifying periodic vertices (e.g. through a translation vector), and then creating a new periodic mesh.", "title": "HowTo: Use periodic meshes and enforce periodic boundary conditions"}, {"location": "howto/periodic-boundaries/#reading-a-periodic-mesh-from-disk", "text": "MFEM supports reading periodic meshes from a variety of mesh file formats . Several periodic sample meshes are included with MFEM in the data directory: MFEM format: periodic-square.mesh : a 3x3 Cartesian mesh of the (periodic) square [-1,1]^2 periodic-hexagon.mesh : a quad mesh of a periodic hexagonal domain with 12 elements periodic-cube.mesh : a 3x3x3 Cartesian mesh of the (periodic) cube [-1,1]^3 Gmsh format (the corresponding .geo files are also included): periodic-square.msh : a 4x4 Cartesian mesh of the (periodic) unit square periodic-cube.msh : a 4x4x4 Cartesian mesh of the (periodic) unit cube periodic-annulus-sector.msh : a 2D mesh of an annular sector with periodic boundaries defined by a rotation periodic-torus-sector.msh : a 3D mesh of a torus sector with periodic boundaries defined by a rotation Any of these meshes can be loaded as usual using MFEM (e.g. using the -m flag in the MFEM examples ), and the periodic topology will be automatically handled. (Note that some periodic boundaries (such as periodic-cube.mesh ) contain so-called \"internal boundary elements\", which may result in boundary conditions being enforced for some examples.) Example 0 on Periodic Annulus Example 0 on Periodic Torus", "title": "Reading a periodic mesh from disk"}, {"location": "howto/periodic-boundaries/#creating-a-periodic-mesh-by-identifying-vertices", "text": "MFEM can also create periodic meshes from non-periodic meshes by identifying periodic vertices. The function Mesh::MakePeriodic creates a periodic mesh from a non-periodic mesh given such a vertex identification. For example, if we wish to create a periodic line segment, then we would like to identify the two endpoints of the line segment since they represent the same point in the periodic topology. 
An example of creating this vertex mapping in the case of a line segment is described here . It is often more convenient to describe the periodicity constraints in terms of translation vectors . Any two vertices that are coincident under any of the given translation vectors will be considered topologically identical. MFEM can generate a vertex mapping from these translation vectors using the Mesh::CreatePeriodicVertexMapping . An example using this functionality to create a mesh of the periodic square is shown here . (Note that periodic meshes use a discontinuous nodal function for mapping the reference space to the physical one (see Mesh::SetCurvature ). The vertex coordinates are no longer meaningful after calling Mesh::MakePeriodic . You should refrain from accessing them and use the nodal grid function returned by Mesh::GetNodes or single nodes through Mesh::GetNode instead.)", "title": "Creating a periodic mesh by identifying vertices"}, {"location": "tutorial/", "text": "MFEM Tutorial on AWS August 22, 2024 Welcome to the MFEM tutorial, part of the LLNL HPC Software Tutorials Series in collaboration with AWS . MFEM is a modular parallel C++ library for finite element methods developed at CASC , LLNL with the help of the MFEM community worldwide. The pages below provide a self-paced overview of MFEM and its use for scalable finite element discretizations and application development. You can follow along in your own Amazon EC2 instance or in a Local Docker Container . No previous experience is necessary. Watch the video import mermaid from 'https://cdn.jsdelivr.net/npm/mermaid@9/dist/mermaid.esm.min.mjs'; mermaid.initialize({ startOnLoad: true }); %%{init: { 'theme': 'base', 'themeVariables': { 'primaryColor': '#deebf7', 'primaryBorderColor': '#3182bd' }}}%% graph LR; A[fa:fa-play-circle Getting Started]; B[fa:fa-book Finite Element Basics]; C[fa:fa-gears Tour of MFEM Examples]; D[fa:fa-picture-o Meshing and Visualization]; E[fa:fa-tasks Solvers and Scalability]; F[fa:fa-rocket Further Steps]; A-->B; B-->C; B-->D; B-->E; C-->F; D-->F; E-->F; click A \"start\" click B \"fem\" click C \"examples\" click D \"meshvis\" click E \"solvers\" click F \"further\" We recommend that you start with the Getting Started and Finite Element Basics lessons, and then, depending on your interests, pick some of the next 3 lessons: Tour of MFEM Examples , Meshing and Visualization , and Solvers and Scalability . The tutorial concludes with additional suggestions in the Further Steps page. Getting Started This is the first page you should visit to setup your tutorial environment. You will learn about: Setting up Visual Studio Code editor and terminal Setting up GLVis for visualization Testing the setup with a simple MFEM example Finite Element Basics Once you have the tutorial environment working, visit this page to learn about the basics of the finite element method and its implementation in MFEM. 
The lesson covers: Annotated Example 1 Serial and parallel runs GLVis keys/web interface Tour of MFEM Examples This is an optional lesson where you can learn about MFEM's main features: support for high-order methods, adaptive mesh refinement, $H^1$, $H(curl)$, $H(div)$ and $L^2$ discretizations, through several of the examples included with the library: High-order methods for the full de Rham complex (Examples 1, 2, 3, 4) Discontinuous Galerkin (Example 9) Nonlinear elasticity (Example 10) Adaptive mesh refinement (Example 15) Complex methods, PML (Examples 22, 25) Meshing and Visualization This is an optional lesson that illustrates MFEM's support for external mesh generators, internal meshing tools, and external visualization tools. You will learn about: Importing meshes from Gmsh and Cubit MFEM's meshing tools: Mesh Explorer, Mesh Optimizer, and Shaper Visualizing results in VisIt and ParaView Solvers and Scalability This is an optional lesson that showcases MFEM's parallel scalability and support for efficient solvers and preconditioners. The lesson covers: Scalable algebraic multigrid preconditioners from hypre (Examples 1, 2, 3, 4) MFEM's native Multigrid solver (Example 26) Low-order refined methods (Solvers and Transfer miniapps) Additional solver integrations via PETSc, SuperLU, and STRUMPACK Further Steps This is the final lesson with further activities, including: Explore additional examples and miniapps Write your own simple simulation starting from one of the MFEM examples Learn about integrations with other libraries and MFEM's GPU capabilities Visit the MFEM website, watch MFEM-related videos and seminar talks Join the MFEM organization on GitHub to contribute to the project MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Tutorial"}, {"location": "tutorial/#mfem-tutorial-on-aws", "text": "", "title": "MFEM Tutorial on AWS"}, {"location": "tutorial/#getting-started", "text": "This is the first page you should visit to setup your tutorial environment. You will learn about: Setting up Visual Studio Code editor and terminal Setting up GLVis for visualization Testing the setup with a simple MFEM example", "title": " Getting Started"}, {"location": "tutorial/#finite-element-basics", "text": "Once you have the tutorial environment working, visit this page to learn about the basics of the finite element method and its implementation in MFEM. The lesson covers: Annotated Example 1 Serial and parallel runs GLVis keys/web interface", "title": " Finite Element Basics"}, {"location": "tutorial/#tour-of-mfem-examples", "text": "This is an optional lesson where you can learn about MFEM's main features: support for high-order methods, adaptive mesh refinement, $H^1$, $H(curl)$, $H(div)$ and $L^2$ discretizations, through several of the examples included with the library: High-order methods for the full de Rham complex (Examples 1, 2, 3, 4) Discontinuous Galerkin (Example 9) Nonlinear elasticity (Example 10) Adaptive mesh refinement (Example 15) Complex methods, PML (Examples 22, 25)", "title": " Tour of MFEM Examples"}, {"location": "tutorial/#meshing-and-visualization", "text": "This is an optional lesson that illustrates MFEM's support for external mesh generators, internal meshing tools, and external visualization tools. 
You will learn about: Importing meshes from Gmsh and Cubit MFEM's meshing tools: Mesh Explorer, Mesh Optimizer, and Shaper Visualizing results in VisIt and ParaView", "title": " Meshing and Visualization"}, {"location": "tutorial/#solvers-and-scalability", "text": "This is an optional lesson that showcases MFEM's parallel scalability and support for efficient solvers and preconditioners. The lesson covers: Scalable algebraic multigrid preconditioners from hypre (Examples 1, 2, 3, 4) MFEM's native Multigrid solver (Example 26) Low-order refined methods (Solvers and Transfer miniapps) Additional solver integrations via PETSc, SuperLU, and STRUMPACK", "title": " Solvers and Scalability"}, {"location": "tutorial/#further-steps", "text": "This is the final lesson with further activities, including: Explore additional examples and miniapps Write your own simple simulation starting from one of the MFEM examples Learn about integrations with other libraries and MFEM's GPU capabilities Visit the MFEM website, watch MFEM-related videos and seminar talks Join the MFEM organization on GitHub to contribute to the project MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": " Further Steps"}, {"location": "tutorial/docker/", "text": "Local Docker Container 15 minutes basic You don't need a cloud instance to run the MFEM tutorial. Instead, you can directly run the MFEM Docker container on a computer available to you. The mfem/developer containers have been specifically created to kickstart the exploration of MFEM and its capabilities in a variety of computing environments: from the cloud (like AWS), to HPC clusters, and your own laptop. There are CPU and GPU variations of the image; we will refer to it generically as mfem/developer during the tutorial. Below are instructions on how to start the container on Linux and macOS , and how to use it to run the tutorial locally . You can also use the container (and similar commands) to set up your own cloud instance. See for example this AWS script . Linux Depending on your Linux distribution, you have to first install Docker . See the official instructions for e.g. Ubuntu . Once the installation is complete and the docker command is in your path, pull the prebuilt mfem/developer-cpu container with: docker pull ghcr.io/mfem/containers/developer-cpu:latest Depending on your connection, this may take a while to download and extract (the image is about 2GB). To start the container, run: docker run --cap-add=SYS_PTRACE -p 3000:3000 -p 8000:8000 -p 8080:8080 ghcr.io/mfem/containers/developer-cpu:latest You can later stop this by pressing Ctrl-C . See the docker documentation for more details. We provide two variations of our containers that are configured with CPU or CPU and GPU capabilities. If you have an NVIDIA-supported CUDA GPU, you have to install the NVIDIA Container Toolkit . Our CUDA images are built with the sm_70 compute capability by default. 
If your GPU is an sm_70 , you can use the prebuilt mfem/developer-cuda-sm70 image with: docker pull ghcr.io/mfem/containers/developer-cuda-sm70:latest To start the container, use docker run --gpus all --cap-add=SYS_PTRACE -p 3000:3000 -p 8000:8000 -p 8080:8080 ghcr.io/mfem/containers/developer-cuda-sm70:latest If you need a different compute capability, you can clone the mfem/containers repository and build an image, e.g. for sm_80 , with git clone git@github.com:mfem/containers.git cd containers docker-compose build --build-arg cuda_arch_sm=80 cuda && docker image tag cuda:latest cuda-sm80:latest docker-compose build --build-arg cuda_arch_sm=80 cuda-tpls && docker image tag cuda-tpls:latest cuda-tpls-sm80:latest This automatically builds all libraries with the correctly supported CUDA compute capability. Note The forwarding of ports 3000 , 8000 and 8080 is needed for VS Code , GLVis and the websocket connection between them. The --cap-add=SYS_PTRACE option is added to resolve MPI warnings. macOS On macOS we recommend using Podman . See the official installation instructions here . After installing it, use the following commands to create a Podman machine and pull the mfem/developer container: podman machine init podman pull ghcr.io/mfem/containers/developer-cpu:latest Both of these can take a while, depending on your hardware and network connection. To start the virtual machine and the container in it, run: podman machine start podman run --cap-add=SYS_PTRACE -p 3000:3000 -p 8000:8000 -p 8080:8080 ghcr.io/mfem/containers/developer-cpu:latest You can later stop these by pressing Ctrl-C and typing podman machine stop . Note One can also use Docker Desktop on macOS and follow the Linux instructions above. Running the tutorial locally Once the mfem/developer container is running, you can proceed with the Getting Started page using the following IP : 127.0.0.1 . You can alternatively use localhost for the IP . In particular, the VS Code and GLVis windows can be accessed at localhost:3000 and localhost:8000/live respectively. Furthermore, you can use the above pages from any other devices (tablets, phones) that are connected to the same network as the machine running the container. For example you can run an example from the VS Code terminal on your laptop and visualize the results on a GLVis window on your phone. To connect other devices, first run hostname -s to get the local host name and then use that {hostname} for the IP in the rest of the tutorial. Questions? Ask for help in the tutorial Slack channel . Next Steps Go to the Getting Started page. Back to the MFEM tutorial page MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Docker"}, {"location": "tutorial/docker/#local-docker-container", "text": "15 minutes basic You don't need a cloud instance to run the MFEM tutorial. Instead, you can directly run the MFEM Docker container on a computer available to you. The mfem/developer containers have been specifically created to kickstart the exploration of MFEM and its capabilities in a variety of computing environments: from the cloud (like AWS), to HPC clusters, and your own laptop. There are CPU and GPU variations of the image; we will refer to it generically as mfem/developer during the tutorial. Below are instructions on how to start the container on Linux and macOS , and how to use it to run the tutorial locally . You can also use the container (and similar commands) to set up your own cloud instance. 
See for example this AWS script .", "title": "  Local Docker Container"}, {"location": "tutorial/docker/#linux", "text": "Depending on your Linux distribution, you have to first install Docker . See the official instructions for e.g. Ubuntu . Once the installation is complete and the docker command is in your path, pull the prebuilt mfem/developer-cpu container with: docker pull ghcr.io/mfem/containers/developer-cpu:latest Depending on your connection, this may take a while to download and extract (the image is about 2GB). To start the container, run: docker run --cap-add=SYS_PTRACE -p 3000:3000 -p 8000:8000 -p 8080:8080 ghcr.io/mfem/containers/developer-cpu:latest You can later stop this by pressing Ctrl-C . See the docker documentation for more details. We provide two variations of our containers that are configured with CPU or CPU and GPU capabilities. If you have an NVIDIA supported CUDA GPU you have to install the NVIDIA Container Toolkit . Our CUDA images are built with the sm_70 compute capability by default. If your GPU is an sm_70 you can use the prebuilt mfem/developer-cuda-sm70 image with: docker pull ghcr.io/mfem/containers/developer-cuda-sm70:latest To start the container use docker run --gpus all --cap-add=SYS_PTRACE -p 3000:3000 -p 8000:8000 -p 8080:8080 ghcr.io/mfem/containers/developer-cuda-sm70:latest If you need a different compute capability, you can clone the mfem/containers repository and build an image e.g., for sm_80 , with git clone git@github.com:mfem/containers.git cd containers docker-compose build --build-arg cuda_arch_sm=80 cuda && docker image tag cuda:latest cuda-sm80:latest docker-compose build --build-arg cuda_arch_sm=80 cuda-tpls && docker image tag cuda-tpls:latest cuda-tpls-sm80:latest This automatically builds all libraries with the correctly supported CUDA compute capability.", "title": "  Linux"}, {"location": "tutorial/docker/#macos", "text": "On macOS we recommend using Podman . See the official installation instructions here . After installing it, use the following commands to create a Podman machine and pull the mfem/developer container: podman machine init podman pull ghcr.io/mfem/containers/developer-cpu:latest Both of these can take a while, depending on your hardware and network connection. To start the virtual machine and the container in it, run: podman machine start podman run --cap-add=SYS_PTRACE -p 3000:3000 -p 8000:8000 -p 8080:8080 ghcr.io/mfem/containers/developer-cpu:latest You can later stop these by pressing Ctrl-C and typing podman machine stop .", "title": "  macOS"}, {"location": "tutorial/docker/#running-the-tutorial-locally", "text": "Once the mfem/developer container is running, you can proceed with the Getting Started page using the following IP : 127.0.0.1 . You can alternatively use localhost for the IP . In particular, the VS Code and GLVis windows can be accessed at localhost:3000 and localhost:8000/live respectively. Furthermore, you can use the above pages from any other devices (tablets, phones) that are connected to the same network as the machine running the container. For example you can run an example from the VS Code terminal on your laptop and visualize the results on a GLVis window on your phone. 
To connect other devices, first run hostname -s to get the local host name and then use that {hostname} for the IP in the rest of the tutorial.", "title": "  Running the tutorial locally"}, {"location": "tutorial/examples/", "text": "Tour of MFEM Examples 45 minutes intermediate Lesson Objectives Learn about MFEM's main features through several of the examples included with the library. Note Please complete the Getting Started and Finite Element Basics pages before this lesson. High-order methods MFEM includes support for the full de Rham complex , $H^1-$conforming (continuous), $H(curl)-$conforming (continuous tangential component), $H(div)-$conforming (continuous normal component), and $L^2-$conforming (discontinuous) finite element discretization spaces in 2D and 3D. A compatible high-order de Rham complex on the discrete level can be constructed using the *_FECollection classes with * replaced by H1 , ND , RT , and L2 , respectively. Note that MFEM supports arbitrary discretization order for the full de Rham complex. For example, here is an illustration of the FEM degrees of freedom on quadrilaterals for orders 1\u20143: The first four MFEM examples serve as an introduction on how to construct and use these discrete spaces for the solution of various PDEs. All of them have the -o / --order command line parameter to specify the finite element space order at runtime. Before building the example codes, make sure you are in the examples directory: cd ~/mfem/examples . Note Remember to compile each numbered example before executing its sample runs: make ex* for the serial version or make ex*p for the parallel version. You can build multiple examples in the same command: make ex3 ex4 ex3p ex4p . Example 1 ( ex1.cpp and ex1p.cpp ) solves a simple Poisson problem using a scalar $H^1$ space. More specifically, it solves the problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Try the following sample runs: ./ex1 -m ../data/square-disc.mesh ./ex1 -m ../data/fichera.mesh mpirun -np 4 ex1p -m ../data/star-surf.mesh mpirun -np 4 ex1p -m ../data/mobius-strip.mesh The plot on the right corresponds to the 2nd sample run with i , Z and m pressed in the GLVis window, followed by rotation with the mouse Left button. Example 2 ( ex2.cpp and ex2p.cpp ) solves a linear elasticity problem using a vector $H^1$ space. The problem describes a multi-material cantilever beam. The weak form is $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder with $f$ being a constant pull down vector on boundary elements with attribute 2, and zero otherwise. Try the following sample runs: ./ex2 -m ../data/beam-tri.mesh ./ex2 -m ../data/beam-hex.mesh mpirun -np 4 ex2p -m ../data/beam-wedge.mesh mpirun -np 4 ex2p -m ../data/beam-quad.mesh -o 3 -elast The plot on the right corresponds to the 2nd sample run with m pressed in the GLVis window. Example 3 ( ex3.cpp and ex3p.cpp ) solves a 3D electromagnetic diffusion problem (definite Maxwell) using an $H(curl)$ finite element space. It solves the equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, the r.h.s. 
$f$ and the boundary condition data are computed using a given exact solution $E$. Try the following sample runs: ./ex3 -m ../data/star.mesh ./ex3 -m ../data/beam-tri.mesh -o 2 mpirun -np 4 ex3p -m ../data/fichera.mesh mpirun -np 4 ex3p -m ../data/escher.mesh -o 2 The plot on the right corresponds to the 3rd sample run with m and A pressed in the GLVis window. Example 4 ( ex4.cpp and ex4p.cpp ) solves a 2D/3D $H(div)$ diffusion problem using an $H(div)$ finite element space. The $H(div)$ diffusion problem corresponds to the second-order definite equation $$-{\\rm grad}(\\alpha\\,{\\rm div}(F)) + \\beta F = f$$ with boundary condition $F \\cdot n$ = \"given normal field\". Here, the r.h.s. $f$ and the boundary condition data are computed using a given exact solution $F$. Try the following sample runs: ./ex4 -m ../data/square-disc.mesh ./ex4 -m ../data/periodic-square.mesh -no-bc mpirun -np 4 ex4p -m ../data/fichera-q2.vtk mpirun -np 4 ex4p -m ../data/amr-quad.mesh The plot on the right is similar to the 1st sample run with R , j and l pressed in the GLVis window. Discontinuous Galerkin MFEM supports high-order Discontinuous Galerkin (DG) discretizations through various face integrators. Additionally, it includes numerous explicit and implicit ODE time integrators which are used for the solution of time-dependent PDEs. Example 9 ( ex9.cpp and ex9p.cpp ) solves the time-dependent advection equation $$\\frac{\\partial u}{\\partial t} + v \\cdot \\nabla u = 0,$$ where $v$ is a given fluid velocity, and $u_0(x)=u(0,x)$ is a given initial condition. The example demonstrates the use of DG bilinear forms, the use of explicit and implicit (with block ILU preconditioning) ODE time integrators, the definition of periodic boundary conditions through periodic meshes, as well as the use of GLVis for persistent visualization of a time-evolving solution. Try the following sample runs: ./ex9 -m ../data/periodic-square.mesh -p 3 -r 4 -dt 0.0025 -tf 9 -vs 20 ./ex9 -m ../data/disc-nurbs.mesh -p 1 -r 3 -dt 0.005 -tf 9 mpirun -np 4 ex9p -m ../data/star-q3.mesh -p 1 -rp 1 -dt 0.004 -tf 9 mpirun -np 16 ex9p -m ../data/amr-hex.mesh -p 1 -rs 1 -rp 0 -dt 0.005 -tf 0.5 The plot on the right corresponds to the 1st sample run with R , j and l pressed in the GLVis window. Note In time-dependent simulations, the GLVis window will be automatically updated with the solutions at the new time steps as they are computed (how frequently this is done is governed by the -vs command line parameter above). To start/pause these updates press space in the GLVis window, or click the icon in the upper center portion of the window. Nonlinear elasticity Example 10 ( ex10.cpp and ex10p.cpp ) solves a time dependent nonlinear elasticity problem of the form $$ \\frac{dv}{dt} = H(x) + S v\\,,\\qquad \\frac{dx}{dt} = v\\,, $$ where $H$ is a hyperelastic model and $S$ is a viscosity operator of Laplacian type. The geometry of the domain is assumed to be as follows: The example demonstrates the use of nonlinear operators, as well as their implicit time integration using a Newton method for solving an associated reduced backward-Euler type nonlinear equation. Each Newton step requires the inversion of a Jacobian matrix, which is done through a (preconditioned) inner solver. 
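The following schematic sketch shows that Newton/inner-solver pairing in isolation (the nonlinear operator oper and the vector x are assumed to exist; ex10 itself configures this inside its hyperelastic operator with a MINRES-type inner solver, so this is an illustration rather than the example's exact code):
GMRESSolver j_solver; // inner linear solver used to invert the Jacobian at each Newton step
j_solver.SetRelTol(1e-8);
j_solver.SetMaxIter(300);
j_solver.SetPrintLevel(0);
NewtonSolver newton;
newton.SetSolver(j_solver); // every Newton iteration solves a Jacobian system with j_solver
newton.SetOperator(oper);   // 'oper' is an assumed nonlinear mfem::Operator
newton.SetRelTol(1e-6);
newton.SetMaxIter(10);
Vector zero;          // empty right-hand side, i.e. solve F(x) = 0
newton.Mult(zero, x); // 'x' provides the initial guess and receives the solution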
Before trying this example, modify the source code of ex10.cpp to disable the second visualization stream as follows: @@ -298,7 +298,7 @@ int main(int argc, char *argv[]) vis_v.precision(8); v.SetFromTrueVector(); x.SetFromTrueVector(); visualize(vis_v, mesh, &x, &v, \"Velocity\", true); - vis_w.open(vishost, visport); + // vis_w.open(vishost, visport); if (vis_w) { oper.GetElasticEnergyDensity(x, w); Make the identical change in ex10p.cpp , line 347. Now rebuild both examples: make ex10 ex10p , and try the following sample runs: ./ex10 -m ../data/beam-hex.mesh -s 2 -r 1 -o 2 -dt 3 ./ex10 -m ../data/beam-tri.mesh -s 3 -r 2 -o 2 -dt 3 mpirun -np 4 ex10p -m ../data/beam-wedge.mesh -s 2 -rs 1 -dt 3 mpirun -np 4 ex10p -m ../data/beam-tet.mesh -s 2 -rs 1 -dt 3 The plot on the right corresponds to the 1st sample run. Adaptive mesh refinement MFEM provides support for local conforming and non-conforming adaptive mesh refinement (AMR) with arbitrary-order hanging nodes, anisotropic refinement, derefinement, and parallel load balancing. The AMR support covers the full de Rham complex, i.e., the energy spaces $H^1$, $H(curl)$, $H(div)$ and $L^2$. You can choose from several error estimators, such as the Zienkiewicz-Zhu (ZZ) or the Kelly estimator, to drive the AMR process. We recommend looking at examples 6, 15, 21, and 30 for some simulations with AMR. Example 15 ( ex15.cpp and ex15p.cpp ) demonstrates MFEM's capability to refine, derefine, and load balance non-conforming meshes in 2D and 3D as well as on linear, curved, and surface meshes. In this example the mesh is adapted to a time-dependent solution. At each time step the problem is solved on a sequence of adaptive meshes that are refined based on a simple ZZ estimator. At the end of the refinement process, the error estimates are used to identify elements that are over-refined, and a single derefinement step is performed. Finally, in the parallel case, a load-balancing step is executed. Try the following sample runs: ./ex15 -n 3 ./ex15 -m ../data/square-disc.mesh ./ex15 -est 1 -e 0.0001 mpirun -np 4 ex15p -m ../data/mobius-strip.mesh mpirun -np 4 ex15p -m ../data/fichera.mesh -tf 0.5 The plot on the right is related to the parallel version of the 1st sample run with R , j , l and m pressed in the GLVis window. Complex-valued problems MFEM provides a user-friendly interface for solving complex-valued systems. These kinds of problems can be formulated using the classes ComplexOperator , ComplexLinearForm , SesquilinearForm , ComplexGridFunction , and their parallel counterparts. You can define the weak formulation by providing the integrators of real and imaginary parts independently and then use similar methods as in the real problems (such as Assemble , FormLinearSystem , and RecoverFEMSolution ) to recover the solution. Currently, there are two examples demonstrating the use of complex-valued systems. Example 22 ( ex22.cpp and ex22p.cpp ) implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ In each case the field is driven by a forced oscillation, with angular frequency $\\omega$ imposed at the boundary or a portion of the boundary. 
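As a rough sketch of how the real-part and imaginary-part integrators of the scalar $H^1$ variant could be supplied (the space fespace, the double omega and the list ess_tdof_list are assumed to exist, the coefficients below take $b = c = 1$, and this is an illustration in the spirit of ex22, not its exact source):
ConstantCoefficient a_coef(1.0);            // a
ConstantCoefficient m_real(-omega * omega); // -omega^2 b
ConstantCoefficient m_imag(omega);          // omega c
ComplexGridFunction u(&fespace);
u.real() = 0.0; u.imag() = 0.0;
ComplexLinearForm b(&fespace, ComplexOperator::HERMITIAN);
b.Assemble();
SesquilinearForm a(&fespace, ComplexOperator::HERMITIAN);
a.AddDomainIntegrator(new DiffusionIntegrator(a_coef), NULL); // real part
a.AddDomainIntegrator(new MassIntegrator(m_real), NULL);      // real part
a.AddDomainIntegrator(NULL, new MassIntegrator(m_imag));      // imaginary part
a.Assemble();
OperatorHandle A;
Vector B, U;
a.FormLinearSystem(ess_tdof_list, u, b, A, U, B);
// ... solve A U = B, then a.RecoverFEMSolution(U, b, u); ...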
Before trying this example, modify the source code of ex22.cpp to disable the additional visualization streams as follows: @@ -272,8 +272,8 @@ int main(int argc, char *argv[]) { char vishost[] = \"localhost\"; int visport = 19916; - socketstream sol_sock_r(vishost, visport); - socketstream sol_sock_i(vishost, visport); + socketstream sol_sock_r(vishost, visport+1); + socketstream sol_sock_i(vishost, visport+2); sol_sock_r.precision(8); sol_sock_i.precision(8); sol_sock_r < < \"solution\\n\" < < *mesh < < u_exact->real() @@ -482,8 +482,8 @@ int main(int argc, char *argv[]) { char vishost[] = \"localhost\"; int visport = 19916; - socketstream sol_sock_r(vishost, visport); - socketstream sol_sock_i(vishost, visport); + socketstream sol_sock_r(vishost, visport+3); + socketstream sol_sock_i(vishost, visport+4); sol_sock_r.precision(8); sol_sock_i.precision(8); sol_sock_r < < \"solution\\n\" < < *mesh < < u.real() @@ -497,8 +497,8 @@ int main(int argc, char *argv[]) char vishost[] = \"localhost\"; int visport = 19916; - socketstream sol_sock_r(vishost, visport); - socketstream sol_sock_i(vishost, visport); + socketstream sol_sock_r(vishost, visport+5); + socketstream sol_sock_i(vishost, visport+6); sol_sock_r.precision(8); sol_sock_i.precision(8); sol_sock_r < < \"solution\\n\" < < *mesh < < u_exact->real() @@ -522,7 +522,7 @@ int main(int argc, char *argv[]) < < \" Press space (in the GLVis window) to resume it.\\n\"; int num_frames = 32; int i = 0; - while (sol_sock) + while (sol_sock && i < 3*num_frames) { double t = (double)(i % num_frames) / num_frames; ostringstream oss; Make identical changes in ex22p.cpp , lines 304-305, 532-533, 549-550 and 577. Now rebuild both examples: make ex22 ex22p , and try the following sample runs: ./ex22 -m ../data/inline-quad.mesh -o 3 -p 1 ./ex22 -m ../data/inline-hex.mesh -o 2 -p 2 -pa mpirun -np 1 ex22p -m ../data/star.mesh -o 2 -sigma 10.0 mpirun -np 16 ex22p -m ../data/star.mesh -o 2 -sigma 10.0 -rs 4 -rp 3 -no-vis mpirun -np 1 ex22p -m ../data/inline-pyramid.mesh -o 1 mpirun -np 16 ex22p -m ../data/inline-pyramid.mesh -o 1 -rs 2 -rp 2 -no-vis The plot on the right corresponds to the 3rd and 4th sample runs with R , j and l pressed in the GLVis window. Example 25 ( ex25.cpp and ex25p.cpp ) illustrates the use of a Perfectly Matched Layer (PML) for the simulation of time-harmonic electromagnetic waves propagating in unbounded domains. The implementation involves the introduction of an artificial absorbing layer that minimizes undesired reflections. Inside this layer a complex coordinate stretching map forces the wave modes to decay exponentially. The example solves the indefinite Maxwell equations $$ \\nabla \\times (a \\nabla \\times E) - \\omega^2 b E = f $$ where $a = \\mu^{-1} |J|^{-1} J^T J$, $b= \\epsilon |J| J^{-1} J^{-T}$ and $J$ is the Jacobian matrix of the coordinate transformation. 
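To see why the stretching forces decay, it helps to look at the standard one-dimensional picture (an illustration of the general idea, not the specific map used in ex25): inside the layer the coordinate is stretched as $$\tilde{x}(x) = x + \frac{i}{\omega}\int_0^x \sigma(s)\,ds, \qquad J = \frac{d\tilde{x}}{dx} = 1 + \frac{i\,\sigma(x)}{\omega},$$ with $\sigma \ge 0$ supported only in the layer, so an outgoing wave $e^{i\omega x}$ becomes $e^{i\omega \tilde{x}} = e^{i\omega x}\, e^{-\int_0^x \sigma(s)\,ds}$: it is unchanged where $\sigma = 0$ and decays exponentially inside the absorbing layer.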
Before trying this example, modify the source code of ex25.cpp to disable the additional visualization streams as follows: @@ -570,13 +570,13 @@ int main(int argc, char *argv[]) char vishost[] = \"localhost\"; int visport = 19916; - socketstream sol_sock_re(vishost, visport); + socketstream sol_sock_re(vishost, visport+1); sol_sock_re.precision(8); sol_sock_re < < \"solution\\n\" < < *mesh < < x.real() < < keys < < \"window_title 'Solution real part'\" < < flush; - socketstream sol_sock_im(vishost, visport); + socketstream sol_sock_im(vishost, visport+2); sol_sock_im.precision(8); sol_sock_im < < \"solution\\n\" < < *mesh < < x.imag() < < keys @@ -594,7 +594,7 @@ int main(int argc, char *argv[]) < < \" Press space (in the GLVis window) to resume it.\\n\"; int num_frames = 32; int i = 0; - while (sol_sock) + while (sol_sock && i < 3*num_frames) { double t = (double)(i % num_frames) / num_frames; ostringstream oss; Make identical changes in ex25p.cpp , lines 638, 647 and 674. Now rebuild both examples: make ex25 ex25p , and try the following sample runs: ./ex25 -o 2 -f 5.0 -ref 4 -prob 2 ./ex25 -o 2 -f 1.0 -ref 2 -prob 3 mpirun -np 1 ex25p -o 2 -f 8.0 -rs 2 -rp 2 -prob 4 -m ../data/inline-quad.mesh mpirun -np 32 ex25p -o 2 -f 8.0 -rs 3 -rp 3 -prob 4 -m ../data/inline-quad.mesh -no-vis mpirun -np 1 ex25p -o 2 -f 1.0 -rs 2 -rp 2 -prob 0 -m ../data/beam-quad.mesh mpirun -np 48 ex25p -o 2 -f 1.0 -rs 4 -rp 4 -prob 0 -m ../data/beam-quad.mesh -no-vis The plot on the right corresponds to the 1st sample run with aaa , mm , c and several p pressed in the GLVis window. Questions? Ask for help in the tutorial Slack channel . Next Steps Depending on your interests pick one of the following lessons: Meshing and Visualization Solvers and Scalability Further Steps Back to the MFEM tutorial page MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Examples"}, {"location": "tutorial/examples/#tour-of-mfem-examples", "text": "45 minutes intermediate", "title": "  Tour of MFEM Examples"}, {"location": "tutorial/examples/#high-order-methods", "text": "MFEM includes support for the full de Rham complex , $H^1-$conforming (continuous), $H(curl)-$conforming (continuous tangential component), $H(div)-$conforming (continuous normal component), and $L^2-$conforming (discontinuous) finite element discretization spaces in 2D and 3D. A compatible high-order de Rham complex on the discrete level can be constructed using the *_FECollection classes with * replaced by H1 , ND , RT , and L2 , respectively. Note that MFEM supports arbitrary discretization order for the full de Rham complex. For example, here is an illustration of the FEM degrees of freedom on quadrilaterals for orders 1\u20143: The first four MFEM examples serve as an introduction on how to construct and use these discrete spaces for the solution of various PDEs. All of them have the -o / --order command line parameter to specify the finite element space order at runtime. Before building the example codes, make sure you are in the examples directory: cd ~/mfem/examples .", "title": "  High-order methods"}, {"location": "tutorial/examples/#discontinuous-galerkin", "text": "MFEM supports high-order Discontinuous Galerkin (DG) discretizations through various face integrators. Additionally, it includes numerous explicit and implicit ODE time integrators which are used for the solution of time-dependent PDEs. 
Example 9 ( ex9.cpp and ex9p.cpp ) solves the time-dependent advection equation $$\\frac{\\partial u}{\\partial t} + v \\cdot \\nabla u = 0,$$ where $v$ is a given fluid velocity, and $u_0(x)=u(0,x)$ is a given initial condition. The example demonstrates the use of DG bilinear forms, the use of explicit and implicit (with block ILU preconditioning) ODE time integrators, the definition of periodic boundary conditions through periodic meshes, as well as the use of GLVis for persistent visualization of a time-evolving solution. Try the following sample runs: ./ex9 -m ../data/periodic-square.mesh -p 3 -r 4 -dt 0.0025 -tf 9 -vs 20 ./ex9 -m ../data/disc-nurbs.mesh -p 1 -r 3 -dt 0.005 -tf 9 mpirun -np 4 ex9p -m ../data/star-q3.mesh -p 1 -rp 1 -dt 0.004 -tf 9 mpirun -np 16 ex9p -m ../data/amr-hex.mesh -p 1 -rs 1 -rp 0 -dt 0.005 -tf 0.5 The plot on the right corresponds to the 1st sample run with R , j and l pressed in the GLVis window.", "title": "  Discontinuous Galerkin"}, {"location": "tutorial/examples/#nonlinear-elasticity", "text": "Example 10 ( ex10.cpp and ex10p.cpp ) solves a time dependent nonlinear elasticity problem of the form $$ \\frac{dv}{dt} = H(x) + S v\\,,\\qquad \\frac{dx}{dt} = v\\,, $$ where $H$ is a hyperelastic model and $S$ is a viscosity operator of Laplacian type. The geometry of the domain is assumed to be as follows: The example demonstrates the use of nonlinear operators, as well as their implicit time integration using a Newton method for solving an associated reduced backward-Euler type nonlinear equation. Each Newton step requires the inversion of a Jacobian matrix, which is done through a (preconditioned) inner solver. Before trying this example, modify the source code of ex10.cpp to disable the second visualization stream as follows: @@ -298,7 +298,7 @@ int main(int argc, char *argv[]) vis_v.precision(8); v.SetFromTrueVector(); x.SetFromTrueVector(); visualize(vis_v, mesh, &x, &v, \"Velocity\", true); - vis_w.open(vishost, visport); + // vis_w.open(vishost, visport); if (vis_w) { oper.GetElasticEnergyDensity(x, w); Make identical change in ex10p.cpp , line 347. Now rebuild both examples: make ex10 ex10p , and try the following sample runs: ./ex10 -m ../data/beam-hex.mesh -s 2 -r 1 -o 2 -dt 3 ./ex10 -m ../data/beam-tri.mesh -s 3 -r 2 -o 2 -dt 3 mpirun -np 4 ex10p -m ../data/beam-wedge.mesh -s 2 -rs 1 -dt 3 mpirun -np 4 ex10p -m ../data/beam-tet.mesh -s 2 -rs 1 -dt 3 The plot on the right corresponds to the 1st sample run.", "title": "  Nonlinear elasticity"}, {"location": "tutorial/examples/#adaptive-mesh-refinement", "text": "MFEM provides support for local conforming and non-conforming adaptive mesh refinement (AMR) with arbitrary-order hanging nodes, anisotropic refinement, derefinement, and parallel load balancing. The AMR support covers the full de Rham complex, i.e., the energy spaces $H^1$, $H(curl)$, $H(div)$ and $L^2$. You can choose from several error estimators, such as the Zienkiewicz-Zhu (ZZ) or the Kelly estimator, to drive the AMRs. We recommend looking at examples 6, 15, 21, and 30 for some simulations with AMR. Example 15 ( ex15.cpp and ex15p.cpp ) demonstrates MFEM's capability to refine, derefine, and load balance non-conforming meshes in 2D and 3D as well as on linear, curved, and surface meshes. In this example the mesh is adapted to a time-dependent solution. At each time step the problem is solved on a sequence of adaptive meshes that are refined based on a simple ZZ estimator. 
At the end of the refinement process, the error estimates are used to identify elements that are over-refined, and a single derefinement step is performed. Finally, in the parallel case, a load-balancing step is executed. Try the following sample runs: ./ex15 -n 3 ./ex15 -m ../data/square-disc.mesh ./ex15 -est 1 -e 0.0001 mpirun -np 4 ex15p -m ../data/mobius-strip.mesh mpirun -np 4 ex15p -m ../data/fichera.mesh -tf 0.5 The plot on the right is related to the parallel version of the 1st sample run with R , j , l and m pressed in the GLVis window.", "title": "  Adaptive mesh refinement"}, {"location": "tutorial/examples/#complex-valued-problems", "text": "MFEM provides a user-friendly interface for solving complex valued systems. These kinds of problems can be formulated using the classes ComplexOperator , ComplexLinearForm , SesquilinearForm , ComplexGridFunction , and their parallel counterparts. You can define the weak formulation by providing the integrators of real and imaginary parts independently and then use similar methods as in the real problems (such us Assemble , FormLinearSystem , and RecoverFEMSolution ) to recover the solution. Currently, there are two examples demonstrating the use of complex-valued systems. Example 22 ( ex22.cpp and ex22p.cpp ) implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ In each case the field is driven by a forced oscillation, with angular frequency $\\omega$ imposed at the boundary or a portion of the boundary. Before trying this example, modify the source code of ex22.cpp to disable the additional visualization streams as follows: @@ -272,8 +272,8 @@ int main(int argc, char *argv[]) { char vishost[] = \"localhost\"; int visport = 19916; - socketstream sol_sock_r(vishost, visport); - socketstream sol_sock_i(vishost, visport); + socketstream sol_sock_r(vishost, visport+1); + socketstream sol_sock_i(vishost, visport+2); sol_sock_r.precision(8); sol_sock_i.precision(8); sol_sock_r < < \"solution\\n\" < < *mesh < < u_exact->real() @@ -482,8 +482,8 @@ int main(int argc, char *argv[]) { char vishost[] = \"localhost\"; int visport = 19916; - socketstream sol_sock_r(vishost, visport); - socketstream sol_sock_i(vishost, visport); + socketstream sol_sock_r(vishost, visport+3); + socketstream sol_sock_i(vishost, visport+4); sol_sock_r.precision(8); sol_sock_i.precision(8); sol_sock_r < < \"solution\\n\" < < *mesh < < u.real() @@ -497,8 +497,8 @@ int main(int argc, char *argv[]) char vishost[] = \"localhost\"; int visport = 19916; - socketstream sol_sock_r(vishost, visport); - socketstream sol_sock_i(vishost, visport); + socketstream sol_sock_r(vishost, visport+5); + socketstream sol_sock_i(vishost, visport+6); sol_sock_r.precision(8); sol_sock_i.precision(8); sol_sock_r < < \"solution\\n\" < < *mesh < < u_exact->real() @@ -522,7 +522,7 @@ int main(int argc, char *argv[]) < < \" Press space (in the GLVis window) to resume it.\\n\"; int num_frames = 32; int i = 0; - while (sol_sock) + while (sol_sock && i < 3*num_frames) { double t = (double)(i % num_frames) / num_frames; ostringstream oss; Make identical changes in ex22p.cpp , lines 304-305, 532-533, 549-550 and 577. 
Now rebuild both examples: make ex22 ex22p , and try the following sample runs: ./ex22 -m ../data/inline-quad.mesh -o 3 -p 1 ./ex22 -m ../data/inline-hex.mesh -o 2 -p 2 -pa mpirun -np 1 ex22p -m ../data/star.mesh -o 2 -sigma 10.0 mpirun -np 16 ex22p -m ../data/star.mesh -o 2 -sigma 10.0 -rs 4 -rp 3 -no-vis mpirun -np 1 ex22p -m ../data/inline-pyramid.mesh -o 1 mpirun -np 16 ex22p -m ../data/inline-pyramid.mesh -o 1 -rs 2 -rp 2 -no-vis The plot on the right corresponds to the 3rd and 4th sample runs with R , j and l pressed in the GLVis window. Example 25 ( ex25.cpp and ex25p.cpp ) illustrates the use of a Perfectly Matched Layer (PML) for the simulation of time-harmonic electromagnetic waves propagating in unbounded domains. The implementation involves the introduction of an artificial absorbing layer that minimizes undesired reflections. Inside this layer a complex coordinate stretching map forces the wave modes to decay exponentially. The example solves the indefinite Maxwell equations $$ \\nabla \\times (a \\nabla \\times E) - \\omega^2 b E = f $$ where $a = \\mu^{-1} |J|^{-1} J^T J$, $b= \\epsilon |J| J^{-1} J^{-T}$ and $J$ is the Jacobian matrix of the coordinate transformation. Before trying this example, modify the source code of ex25.cpp to disable the additional visualization streams as follows: @@ -570,13 +570,13 @@ int main(int argc, char *argv[]) char vishost[] = \"localhost\"; int visport = 19916; - socketstream sol_sock_re(vishost, visport); + socketstream sol_sock_re(vishost, visport+1); sol_sock_re.precision(8); sol_sock_re < < \"solution\\n\" < < *mesh < < x.real() < < keys < < \"window_title 'Solution real part'\" < < flush; - socketstream sol_sock_im(vishost, visport); + socketstream sol_sock_im(vishost, visport+2); sol_sock_im.precision(8); sol_sock_im < < \"solution\\n\" < < *mesh < < x.imag() < < keys @@ -594,7 +594,7 @@ int main(int argc, char *argv[]) < < \" Press space (in the GLVis window) to resume it.\\n\"; int num_frames = 32; int i = 0; - while (sol_sock) + while (sol_sock && i < 3*num_frames) { double t = (double)(i % num_frames) / num_frames; ostringstream oss; Make identical changes in ex25p.cpp , lines 638, 647 and 674. Now rebuild both examples: make ex25 ex25p , and try the following sample runs: ./ex25 -o 2 -f 5.0 -ref 4 -prob 2 ./ex25 -o 2 -f 1.0 -ref 2 -prob 3 mpirun -np 1 ex25p -o 2 -f 8.0 -rs 2 -rp 2 -prob 4 -m ../data/inline-quad.mesh mpirun -np 32 ex25p -o 2 -f 8.0 -rs 3 -rp 3 -prob 4 -m ../data/inline-quad.mesh -no-vis mpirun -np 1 ex25p -o 2 -f 1.0 -rs 2 -rp 2 -prob 0 -m ../data/beam-quad.mesh mpirun -np 48 ex25p -o 2 -f 1.0 -rs 4 -rp 4 -prob 0 -m ../data/beam-quad.mesh -no-vis The plot on the right corresponds to the 1st sample run with aaa , mm , c and several p pressed in the GLVis window.", "title": "  Complex-valued problems"}, {"location": "tutorial/fem/", "text": "Finite Element Basics 45 minutes basic Lesson Objectives Understand a basic finite element discretization of the Poisson equation in MFEM. Learn how to launch serial and parallel runs of MFEM examples. Learn how to visualize the results of MFEM simulations. Note Please complete the Getting Started page before this lesson. Poisson equation The Poisson Equation is a partial differential equation (PDE) that can be used to model steady-state heat conduction, electric potentials, and gravitational fields. In mathematical terms $$ -\\Delta u = f $$ where u is the potential field and f is the source function. This PDE is a generalization of the Laplace Equation . 
To approximately solve the above continuous equation on computers, we need to discretize it by introducing a finite (discrete) number of unknowns to compute for. In the Finite Element Method (FEM), this is done using the concept of basis functions . Instead of calculating the exact analytic solution u , we approximate it $$ u \\approx u_h := \\sum_{j=1}^n c_j \\varphi_j $$ where $u_h$ is the finite element approximation with degrees of freedom (unknown coefficients) $c_j$, and $\\varphi_j$ are known basis functions . The FEM basis functions are typically piecewise-polynomial functions on a given computational mesh, which are only non-zero on small portions of the mesh. With finite elements, the mesh can be totally unstructured, curved, and non-conforming: To solve for the unknown coefficients in (2), we consider the weak (or variational) form of the Poisson equation. This is obtained by first multiplying with another (test) basis function $\\varphi_i$: $$-\\sum_{j=1}^n c_j \\int_\\Omega \\Delta \\varphi_j \\varphi_i = \\int_\\Omega f \\varphi_i$$ and then integrating by parts using the divergence theorem : $$\\sum_{j=1}^n c_j \\int_\\Omega \\nabla \\varphi_j \\cdot \\nabla \\varphi_i = \\int_\\Omega f \\varphi_i$$ Here we are assuming that the boundary term vanishes due to homogeneous Dirichlet boundary conditions corresponding, for example, to zero temperature on the whole boundary. Since the basis functions are known, we can rewrite (4) as $$ A x = b $$ where $$ A_{ij} = \\int_\\Omega \\nabla \\varphi_j \\cdot \\nabla \\varphi_i $$ $$ b_i = \\int_\\Omega f \\varphi_i $$ $$ x_j = c_j $$ This is a $n \\times n$ linear system that can be solved directly or iteratively for the unknown coefficients. Note that we are free to choose the computational mesh and the basis functions $\\varphi_i$, and therefore the finite space, as we see fit. Note The above is a basic introduction to finite elements in the simplest possible settings. To learn more, you can visit MFEM's Finite Element Method page. Annotated Example 1 MFEM's Example 1 implements the above simple FEM for the Poisson problem in the source file examples/ex1.cpp . We set $f=1$ in (1) and enforce homogeneous Dirichlet boundary conditions on the whole boundary. Below we highlight selected portions of the example code and connect them with the description in the previous section. You can follow along by browsing ex1.cpp in your VS Code browser window. In the settings of this tutorial, the visualization will automatically update in the GLVis browser window. The computational mesh is provided as input (option -m ) that could be 3D, 2D, surface, hex/tet, etc. (It defaults to star.mesh in line 77 .) The code in lines 120-124 loads the mesh from the given file, mesh_file and creates the corresponding MFEM object mesh of class Mesh . Mesh mesh(mesh_file, 1, 1); int dim = mesh.Dimension(); The following code (lines 126-137 ) refines the mesh uniformly to about 50,000 elements. You can easily modify the refinement by changing the definition of ref_levels . int ref_levels = (int)floor(log(50000./mesh.GetNE())/log(2.)/dim); for (int l = 0; l < ref_levels; l++) { mesh.UniformRefinement(); } In the next section we create the finite element space, i.e., specify the finite element basis functions $\\varphi_j$ on the mesh. This involves the MFEM classes FiniteElementCollection , which specifies the space (including its order , provided as input via -o ), and FiniteElementSpace , which connects the space and the mesh. 
Focusing on the common case order > 0 , the code in lines 139-162 is essentially: FiniteElementCollection *fec = new H1_FECollection(order, dim); FiniteElementSpace fespace(&mesh, fec); cout << \"Number of finite element unknowns: \" << fespace.GetTrueVSize() << endl; The printed number of finite element unknowns (typically) corresponds to the size of the linear system $n$ from the previous section. The finite element degrees of freedom that are on the domain boundary are then extracted in lines 164-174 . We need those to impose the Dirichlet boundary conditions. Array ess_tdof_list; if (mesh.bdr_attributes.Size()) { Array ess_bdr(mesh.bdr_attributes.Max()); ess_bdr = 1; fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list); } The method GetEssentialTrueDofs takes a marker array of Mesh boundary attributes and returns the FiniteElementSpace degrees of freedom that belong to the marked attributes (the non-zero entries of ess_bdr ). The right-hand side $b$ is constructed in lines 176-182 . In MFEM terminology, integrals of the form (7) are implemented in the class LinearForm . The Coefficient object corresponds to $f$ from the previous section, which here is set to $1$. You can easily specify more general $f$ with other coefficient classes, e.g., FunctionCoefficient . LinearForm b(&fespace); ConstantCoefficient one(1.0); b.AddDomainIntegrator(new DomainLFIntegrator(one)); b.Assemble(); The finite element approximation $u_h$ is described in MFEM as a GridFunction belonging to the FiniteElementSpace . Note that a GridFunction object can be viewed both as the function $u_h$ in (2) as well as the vector of degrees of freedom $x$ in (8). See lines 184-188 . GridFunction x(&fespace); x = 0.0; We need to initialize x with the boundary values we want to impose as Dirichlet boundary conditions (for simplicity, here we just set x=0 in the whole domain). The matrix $A$ is represented as a BilinearForm object, with a specific DiffusionIntegrator corresponding to the weak form (6). See lines 190-210 . BilinearForm a(&fespace); if (pa) { a.SetAssemblyLevel(AssemblyLevel::PARTIAL); } if (fa) { a.SetAssemblyLevel(AssemblyLevel::FULL); } a.AddDomainIntegrator(new DiffusionIntegrator(one)); a.Assemble(); MFEM supports different assembly levels for $A$ (from global matrix to matrix-free) and many different integrators . You can also provide a variety of coefficients to the integrator, for example, PWConstCoefficient to specify different material properties in different portions of the domain. The linear system (5) is formed in lines 212-216 and solved with a variety of options in lines 218-252 . One simple case is: OperatorPtr A; Vector B, X; a.FormLinearSystem(ess_tdof_list, x, b, A, X, B); cout << \"Size of linear system: \" << A->Height() << endl; GSSmoother M((SparseMatrix&)(*A)); PCG(*A, M, B, X, 1, 200, 1e-12, 0.0); The method FormLinearSystem takes the BilinearForm , LinearForm , GridFunction , and boundary conditions (i.e., a , b , x , and ess_tdof_list ); applies any necessary transformations such as eliminating boundary conditions (specified by the boundary values of x , applying conforming constraints for non-conforming AMR, static condensation, etc.); and produces the corresponding matrix $A$, right-hand side vector $B$, and unknown vector $X$. In the above example, we then solve A X = B with conjugate gradient iterations, using a simple Gauss-Seidel preconditioner. 
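For instance, a piecewise-constant diffusion coefficient keyed to the mesh attributes could be used instead of the single constant coefficient above. The sketch below uses hypothetical values and is not part of ex1.cpp; it simply shows how PWConstCoefficient plugs into the same DiffusionIntegrator.

#include "mfem.hpp"
using namespace mfem;

// Illustrative sketch (hypothetical values): assemble a diffusion bilinear
// form whose coefficient differs per mesh attribute.
void AssemblePiecewiseDiffusion(FiniteElementSpace &fespace)
{
   Mesh &mesh = *fespace.GetMesh();

   Vector kappa_vals(mesh.attributes.Max());
   kappa_vals = 1.0;                                       // default material value
   if (kappa_vals.Size() > 1) { kappa_vals(1) = 100.0; }   // different value on attribute 2

   PWConstCoefficient kappa(kappa_vals);   // entry i applies to elements with attribute i+1
   BilinearForm a(&fespace);
   a.AddDomainIntegrator(new DiffusionIntegrator(kappa));
   a.Assemble();   // kappa stays in scope through assembly, as required
}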
We set the maximum number of iterations to 200 and a convergence criteria of residual norm reduction by 6 orders of magnitude ( 1e-12 is the square of that relative tolerance). Solving the linear system is one of the main computational bottlenecks in the FEM. It can take many preconditioned conjugate gradient (PCG) iterations depending on the problem size, the difficulty of the problem, and the choice of the preconditioner. Once the linear system is solved, we recover the solution as a finite element grid function, and then visualize and save the final results to disk (files refined.mesh and sol.gf ). See lines 254-274 . a.RecoverFEMSolution(X, b, x); ofstream mesh_ofs(\"refined.mesh\"); mesh.Print(mesh_ofs); ofstream sol_ofs(\"sol.gf\"); x.Save(sol_ofs); socketstream sol_sock(\"localhost\", 19916); sol_sock << \"solution\\n\" << mesh << x << flush; Parallel Example 1p Like most MFEM examples, Example 1 has also a parallel version in the source file examples/ex1p.cpp , which illustrates the ease of transition between sequential and MPI-parallel code. The parallel version supports all options of the serial example, and can be executed on varying numbers of MPI ranks, e.g., with mpirun -np . Besides MPI, in parallel we also depend on METIS for mesh partitioning and hypre for solvers. The differences between the two versions are small, and you can compare them for yourself by opening both files in your VS Code window. The main additions in ex1p.cpp compared to ex1.cpp are: Initializing MPI and hypre Mpi::Init(); Hypre::Init(); Splitting the serial mesh in parallel with additional parallel refinement ParMesh pmesh(MPI_COMM_WORLD, mesh); int par_ref_levels = 2; for (int l = 0; l < par_ref_levels; l++) { pmesh.UniformRefinement(); } Using the Par -prefixed versions of the classes ParFiniteElementSpace fespace(&pmesh, fec); ParLinearForm b(&fespace); ParGridFunction x(&fespace); ParBilinearForm a(&fespace); Parallel PCG with hypre's algebraic multigrid BoomerAMG preconditioner Solver *prec = new HypreBoomerAMG; CGSolver cg(MPI_COMM_WORLD); cg.SetRelTol(1e-12); cg.SetMaxIter(2000); cg.SetPrintLevel(1); cg.SetPreconditioner(*prec); cg.SetOperator(*A); cg.Mult(B, X); Note Unlike in the serial version, we expect the number of PCG iterations to remain relatively bounded with the BoomerAMG preconditioner independent of the mesh size, coefficient jumps, and number of MPI ranks. Note, however, that algebraic multigrid has a non-trivial setup phase, which can be comparable in terms of time with the PCG solve phase. For more details, see the Solvers and Scalability page. Serial and parallel runs Both ex1 and ex1p come pre-built in the tutorial environment. You can see a number of sample runs at the beginning of their corresponding source files when you open them in VS Code. To get a feel for how these examples work, you can copy and paste some of these runs from the source to the terminal in VS Code. Try this! Specify a couple different meshes with -m in the VS Code terminal to see how the image rendered by GLVis changes. Run ./ex1 -m ../data/escher.mesh ./ex1 -m ../data/l-shape.mesh ./ex1 -m ../data/mobius-strip.mesh Warning The current directory is not in the VS Code PATH so make sure to add ./ before the executable, e.g., ./ex1 -m ../data/pipe-nurbs.mesh not ex1 -m ../data/pipe-nurbs.mesh . Note The GLVis visualization is local to your browser, so it may take a while to update after a sample run. Once the data arrives, interaction with the visualization window should be fast. Try this! 
Now try out some sample parallel runs: mpirun -np 16 ex1p mpirun -np 16 ex1p -m ../data/pipe-nurbs.mesh mpirun -np 48 ex1p -m ../data/escher-p2.mesh Warning If you are getting errors from mpirun that there are \"not enough slots available in the system\" , try adding the --oversubscribe option. For example: mpirun --oversubscribe -np 16 ex1p The above plot shows the parallel decomposition in the first sample run, with the following manipulations in the GLVis window: pressing keys R , j , b , g , F11 twice, p a number of times, and zooming in with the Right mouse button. GPU runs If your container supports CUDA you can explore GPU computations with: mpirun -np 4 ex1p -pa -d cuda Additionally you can try out AmgX by changing your directory to examples/amgx and building: cd amgx && make ex1p After that you can run the example with mpirun -np 4 ex1p -d cuda --amgx-file amg_pcg.json GLVis interface GLVis is a lightweight tool for accurate and flexible finite element visualization based on MFEM. In this tutorial we use its web version, which should work on any machine with a modern browser, including mobile touch devices such as tablets and phones. Note The GLVis and VS Code browser windows do not need to be on the same device. For example, you can run VS Code on a computer, while GLVis shows the results on your phone/tablet. GLVis natively understands finite element data and can manipulate it in various ways through the web interface or by typing (case sensitive) keystrokes in the GLVis window. To access the web interface, move to the top right of the GLVis window and press the Visualization controls icon . This will open a number of buttons for controlling the mesh, colors, and position of the plot: You can perform additional operations with the GLVis key commands and mouse functions. Most of them are described in the Help window that appears when clicking the icon in the upper left corner, or by pressing the h key. Some of the more useful key commands and mouse functions are: A \u2014 Turn on/off the use of anti-aliasing/multi-sampling b \u2014 Toggle the boundary in 2D scalar mode c \u2014 Show/hide color bar F11 / F12 \u2014 Shrink/Zoom parallel subdomains g \u2014 Toggle background color (white/black) i \u2014 Toggle cutting plane j \u2014 Turn on/off perspective Left \u2014 Rotate the plot Left + Shift \u2014 Spin the plot (according to the dragging vector) m \u2014 Toggle the mesh state. p / P \u2014 Cycle through color palettes (lots of options) r \u2014 Reset the plot to 3D view R \u2014 Cycle through 2D projections (looking above/below in x / y / z directions) Right \u2014 Zoom in/out S \u2014 Take an image snapshot space \u2014 Pause solution update in time-dependent simulations t \u2014 Cycle materials and lights x / X \u2014 Rotate cutting plane ( \\phi ) in 3D y / Y \u2014 Rotate cutting plane ( \\theta ) in 3D z / Z \u2014 Translate cutting plane in 3D Note that you may need to press fn and/or Ctrl to escape some of the function keys. Try this! After running Example 1, experiment with the key command m in the GLVis window to change the appearance of the mesh. Use i to make a cut through the visual and y to change the position of the cutting plane. For more details, see the full list of key commands and mouse functions in the GLVis README . Warning If the GLVis window becomes unresponsive, try disconnecting and connecting again. 
If this doesn't help, run the following in the VS Code terminal: pkill -f glvis-browser-server , then force-reload the GLVis browser window and connect again. Questions? Ask for help in the tutorial Slack channel . Next Steps Depending on your interests pick one of the following lessons: Tour of MFEM Examples Meshing and Visualization Solvers and Scalability Back to the MFEM tutorial page MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Fem"}, {"location": "tutorial/fem/#finite-element-basics", "text": "45 minutes basic", "title": "  Finite Element Basics"}, {"location": "tutorial/fem/#poisson-equation", "text": "The Poisson Equation is a partial differential equation (PDE) that can be used to model steady-state heat conduction, electric potentials, and gravitational fields. In mathematical terms $$ -\\Delta u = f $$ where u is the potential field and f is the source function. This PDE is a generalization of the Laplace Equation . To approximately solve the above continuous equation on computers, we need to discretize it by introducing a finite (discrete) number of unknowns to compute for. In the Finite Element Method (FEM), this is done using the concept of basis functions . Instead of calculating the exact analytic solution u , we approximate it $$ u \\approx u_h := \\sum_{j=1}^n c_j \\varphi_j $$ where $u_h$ is the finite element approximation with degrees of freedom (unknown coefficients) $c_j$, and $\\varphi_j$ are known basis functions . The FEM basis functions are typically piecewise-polynomial functions on a given computational mesh, which are only non-zero on small portions of the mesh. With finite elements, the mesh can be totally unstructured, curved, and non-conforming: To solve for the unknown coefficients in (2), we consider the weak (or variational) form of the Poisson equation. This is obtained by first multiplying with another (test) basis function $\\varphi_i$: $$-\\sum_{j=1}^n c_j \\int_\\Omega \\Delta \\varphi_j \\varphi_i = \\int_\\Omega f \\varphi_i$$ and then integrating by parts using the divergence theorem : $$\\sum_{j=1}^n c_j \\int_\\Omega \\nabla \\varphi_j \\cdot \\nabla \\varphi_i = \\int_\\Omega f \\varphi_i$$ Here we are assuming that the boundary term vanishes due to homogeneous Dirichlet boundary conditions corresponding, for example, to zero temperature on the whole boundary. Since the basis functions are known, we can rewrite (4) as $$ A x = b $$ where $$ A_{ij} = \\int_\\Omega \\nabla \\varphi_j \\cdot \\nabla \\varphi_i $$ $$ b_i = \\int_\\Omega f \\varphi_i $$ $$ x_j = c_j $$ This is a $n \\times n$ linear system that can be solved directly or iteratively for the unknown coefficients. Note that we are free to choose the computational mesh and the basis functions $\\varphi_i$, and therefore the finite space, as we see fit.", "title": "  Poisson equation"}, {"location": "tutorial/fem/#annotated-example-1", "text": "MFEM's Example 1 implements the above simple FEM for the Poisson problem in the source file examples/ex1.cpp . We set $f=1$ in (1) and enforce homogeneous Dirichlet boundary conditions on the whole boundary. Below we highlight selected portions of the example code and connect them with the description in the previous section. You can follow along by browsing ex1.cpp in your VS Code browser window. In the settings of this tutorial, the visualization will automatically update in the GLVis browser window. 
The computational mesh is provided as input (option -m ) that could be 3D, 2D, surface, hex/tet, etc. (It defaults to star.mesh in line 77 .) The code in lines 120-124 loads the mesh from the given file, mesh_file and creates the corresponding MFEM object mesh of class Mesh . Mesh mesh(mesh_file, 1, 1); int dim = mesh.Dimension(); The following code (lines 126-137 ) refines the mesh uniformly to about 50,000 elements. You can easily modify the refinement by changing the definition of ref_levels . int ref_levels = (int)floor(log(50000./mesh.GetNE())/log(2.)/dim); for (int l = 0; l < ref_levels; l++) { mesh.UniformRefinement(); } In the next section we create the finite element space, i.e., specify the finite element basis functions $\\varphi_j$ on the mesh. This involves the MFEM classes FiniteElementCollection , which specifies the space (including its order , provided as input via -o ), and FiniteElementSpace , which connects the space and the mesh. Focusing on the common case order > 0 , the code in lines 139-162 is essentially: FiniteElementCollection *fec = new H1_FECollection(order, dim); FiniteElementSpace fespace(&mesh, fec); cout << \"Number of finite element unknowns: \" << fespace.GetTrueVSize() << endl; The printed number of finite element unknowns (typically) corresponds to the size of the linear system $n$ from the previous section. The finite element degrees of freedom that are on the domain boundary are then extracted in lines 164-174 . We need those to impose the Dirichlet boundary conditions. Array ess_tdof_list; if (mesh.bdr_attributes.Size()) { Array ess_bdr(mesh.bdr_attributes.Max()); ess_bdr = 1; fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list); } The method GetEssentialTrueDofs takes a marker array of Mesh boundary attributes and returns the FiniteElementSpace degrees of freedom that belong to the marked attributes (the non-zero entries of ess_bdr ). The right-hand side $b$ is constructed in lines 176-182 . In MFEM terminology, integrals of the form (7) are implemented in the class LinearForm . The Coefficient object corresponds to $f$ from the previous section, which here is set to $1$. You can easily specify more general $f$ with other coefficient classes, e.g., FunctionCoefficient . LinearForm b(&fespace); ConstantCoefficient one(1.0); b.AddDomainIntegrator(new DomainLFIntegrator(one)); b.Assemble(); The finite element approximation $u_h$ is described in MFEM as a GridFunction belonging to the FiniteElementSpace . Note that a GridFunction object can be viewed both as the function $u_h$ in (2) as well as the vector of degrees of freedom $x$ in (8). See lines 184-188 . GridFunction x(&fespace); x = 0.0; We need to initialize x with the boundary values we want to impose as Dirichlet boundary conditions (for simplicity, here we just set x=0 in the whole domain). The matrix $A$ is represented as a BilinearForm object, with a specific DiffusionIntegrator corresponding to the weak form (6). See lines 190-210 . BilinearForm a(&fespace); if (pa) { a.SetAssemblyLevel(AssemblyLevel::PARTIAL); } if (fa) { a.SetAssemblyLevel(AssemblyLevel::FULL); } a.AddDomainIntegrator(new DiffusionIntegrator(one)); a.Assemble(); MFEM supports different assembly levels for $A$ (from global matrix to matrix-free) and many different integrators . You can also provide a variety of coefficients to the integrator, for example, PWConstCoefficient to specify different material properties in different portions of the domain. 
The linear system (5) is formed in lines 212-216 and solved with a variety of options in lines 218-252 . One simple case is: OperatorPtr A; Vector B, X; a.FormLinearSystem(ess_tdof_list, x, b, A, X, B); cout << \"Size of linear system: \" << A->Height() << endl; GSSmoother M((SparseMatrix&)(*A)); PCG(*A, M, B, X, 1, 200, 1e-12, 0.0); The method FormLinearSystem takes the BilinearForm , LinearForm , GridFunction , and boundary conditions (i.e., a , b , x , and ess_tdof_list ); applies any necessary transformations such as eliminating boundary conditions (specified by the boundary values of x , applying conforming constraints for non-conforming AMR, static condensation, etc.); and produces the corresponding matrix $A$, right-hand side vector $B$, and unknown vector $X$. In the above example, we then solve A X = B with conjugate gradient iterations, using a simple Gauss-Seidel preconditioner. We set the maximum number of iterations to 200 and a convergence criteria of residual norm reduction by 6 orders of magnitude ( 1e-12 is the square of that relative tolerance). Solving the linear system is one of the main computational bottlenecks in the FEM. It can take many preconditioned conjugate gradient (PCG) iterations depending on the problem size, the difficulty of the problem, and the choice of the preconditioner. Once the linear system is solved, we recover the solution as a finite element grid function, and then visualize and save the final results to disk (files refined.mesh and sol.gf ). See lines 254-274 . a.RecoverFEMSolution(X, b, x); ofstream mesh_ofs(\"refined.mesh\"); mesh.Print(mesh_ofs); ofstream sol_ofs(\"sol.gf\"); x.Save(sol_ofs); socketstream sol_sock(\"localhost\", 19916); sol_sock << \"solution\\n\" << mesh << x << flush;", "title": "  Annotated Example 1"}, {"location": "tutorial/fem/#parallel-example-1p", "text": "Like most MFEM examples, Example 1 has also a parallel version in the source file examples/ex1p.cpp , which illustrates the ease of transition between sequential and MPI-parallel code. The parallel version supports all options of the serial example, and can be executed on varying numbers of MPI ranks, e.g., with mpirun -np . Besides MPI, in parallel we also depend on METIS for mesh partitioning and hypre for solvers. The differences between the two versions are small, and you can compare them for yourself by opening both files in your VS Code window. The main additions in ex1p.cpp compared to ex1.cpp are: Initializing MPI and hypre Mpi::Init(); Hypre::Init(); Splitting the serial mesh in parallel with additional parallel refinement ParMesh pmesh(MPI_COMM_WORLD, mesh); int par_ref_levels = 2; for (int l = 0; l < par_ref_levels; l++) { pmesh.UniformRefinement(); } Using the Par -prefixed versions of the classes ParFiniteElementSpace fespace(&pmesh, fec); ParLinearForm b(&fespace); ParGridFunction x(&fespace); ParBilinearForm a(&fespace); Parallel PCG with hypre's algebraic multigrid BoomerAMG preconditioner Solver *prec = new HypreBoomerAMG; CGSolver cg(MPI_COMM_WORLD); cg.SetRelTol(1e-12); cg.SetMaxIter(2000); cg.SetPrintLevel(1); cg.SetPreconditioner(*prec); cg.SetOperator(*A); cg.Mult(B, X);", "title": "  Parallel Example 1p"}, {"location": "tutorial/fem/#serial-and-parallel-runs", "text": "Both ex1 and ex1p come pre-built in the tutorial environment. You can see a number of sample runs at the beginning of their corresponding source files when you open them in VS Code. 
To get a feel for how these examples work, you can copy and paste some of these runs from the source to the terminal in VS Code.", "title": "  Serial and parallel runs"}, {"location": "tutorial/fem/#gpu-runs", "text": "If your container supports CUDA you can explore GPU computations with: mpirun -np 4 ex1p -pa -d cuda Additionally you can try out AmgX by changing your directory to examples/amgx and building: cd amgx && make ex1p After that you can run the example with mpirun -np 4 ex1p -d cuda --amgx-file amg_pcg.json", "title": "  GPU runs"}, {"location": "tutorial/fem/#glvis-interface", "text": "GLVis is a lightweight tool for accurate and flexible finite element visualization based on MFEM. In this tutorial we use its web version, which should work on any machine with a modern browser, including mobile touch devices such as tablets and phones.", "title": "  GLVis interface"}, {"location": "tutorial/further/", "text": "Further Steps 30 minutes advanced Lesson Objectives Explore additional examples and miniapps. Write a simple simulation by extending existing examples. Learn more about MFEM and join the community. Note Please complete Getting Started , Finite Element Basics and at least one of the Tour of MFEM Examples , Meshing and Visualization , or Solvers and Scalability pages before this lesson. Explore additional examples and miniapps MFEM includes a number of well-documented example codes and miniapps that can be used as tutorials, as well as simple starting points for user applications. These examples and miniapps are available in the mfem/examples and mfem/miniapps subdirectories of your VS Code terminal. The full list of examples is below. Feel free to explore any of them depending on your interests, but we recommend starting with the ones marked with a \u2b50. Example 0 \u2014 Simplest MFEM example, good starting point for new users (nodal H1 FEM for the Laplace problem). \u2b50 Example 1 \u2014 Nodal H1 FEM for the Laplace problem. \u2b50 Example 2 \u2014 Vector FEM for linear elasticity. Example 3 \u2014 Nedelec H(curl) FEM for the definite Maxwell problem. Example 4 \u2014 Raviart-Thomas H(div) FEM for the grad-div problem. Example 5 \u2014 Mixed pressure-velocity FEM for the Darcy problem. Example 6 \u2014 Non-conforming adaptive mesh refinement (AMR) for the Laplace problem. Example 7 \u2014 Laplace problem on a surface (the unit sphere). \u2b50 Example 8 \u2014 Discontinuous Petrov-Galerkin (DPG) for the Laplace problem. Example 9 \u2014 Discontinuous Galerkin (DG) time-dependent advection. \u2b50 Example 10 \u2014 Time-dependent implicit nonlinear elasticity. \u2b50 Example 11 \u2014 Parallel Laplace eigensolver. Example 12 \u2014 Parallel linear elasticity eigensolver. Example 13 \u2014 Parallel Maxwell eigensolver. Example 14 \u2014 DG for the Laplace problem. Example 15 \u2014 Dynamic AMR for Laplace with prescribed time-dependent source. \u2b50 Example 16 \u2014 Time-dependent nonlinear heat equation. Example 17 \u2014 DG for linear elasticity. Example 18 \u2014 DG for the Euler equations. Example 19 \u2014 Incompressible nonlinear elasticity. Example 20 \u2014 Symplectic ODE integration. Example 21 \u2014 AMR for linear elasticity. Example 22 \u2014 Complex-valued linear systems. \u2b50 Example 23 \u2014 Second-order in time wave equation. \u2b50 Example 24 \u2014 Mixed finite element spaces and interpolators. Example 25 \u2014 Perfectly Matched Layer (PML) for Maxwell equations. Example 26 \u2014 Multigrid preconditioner for the Laplace problem. 
\u2b50 Example 27 \u2014 Boundary conditions for the Laplace problem. Example 28 \u2014 Constraints and sliding boundary conditions. Example 29 \u2014 Solving PDEs on embedded surfaces. Example 30 \u2014 Mesh preprocessing, resolving problem data. Example 31 \u2014 Nedelec H(curl) FEM for the anisotropic definite Maxwell problem. Example 32 \u2014 Parallel Nedelec Maxwell eigensolver with anisotropic permittivity. Example 33 \u2014 Nodal C0 FEM for the fractional Laplacian problem. Example 34 \u2014 Source function from SubMesh. Example 35 \u2014 Port boundary condition from SubMesh. Example 36 \u2014 High-order FEM for the obstacle problem. Example 37 \u2014 Topology optimization. Example 38 \u2014 Cut-surface and cut-volume integration. Example 39 \u2014 Named mesh attributes. Most of these examples have a serial and a parallel version, illustrating the ease of transition and the minimal code changes between the two. Many examples also have modifications that take advantage of optional third-party libraries such as PETSc , SLEPc , SUNDIALS , PUMI , Ginkgo , and HiOp . Beyond the examples, a number of miniapps are available that are more representative of the advanced usage of the library in physics/application codes. Some of the included miniapps are: Volta \u2014 Simple electrostatics simulation code. Tesla \u2014 Simple magnetostatics simulation code. Maxwell \u2014 Transient electromagnetics simulation code. Joule \u2014 Transient magnetics and Joule heating miniapp. Navier \u2014 Solver for the incompressible time-dependent Navier-Stokes equations. Mesh Explorer \u2014 Visualize and manipulate meshes. Mesh Optimizer \u2014 Optimize high-order meshes. Shaper \u2014 Resolve material interfaces by mesh refinement. Interpolation \u2014 Evaluation of high-order finite element functions in physical space. Overlapping Grids \u2014 Schwarz coupling of single- and multi-physics problems. Extrapolation \u2014 Finite element extrapolation solver. Distance \u2014 Finite element distance solver. Shifted Diffusion \u2014 High-Order shifted boundary method for non body-fitted meshes. Minimal Surface \u2014 Compute the minimal surface of a given mesh. Display Basis \u2014 Visualize finite element basis functions. LOR Transfer \u2014 Map functions between high-order and low-order-refined spaces. SPDE \u2014 Generate a Gaussian random field via the SPDE method; i.e., by solving a fractional PDE with random load. Contact \u2014 Mortar contact patch test for elasticity using the Tribol library. Multidomain \u2014 Multidomain and SubMesh demonstration Miniapp. DPG \u2014 Discontinuous Petrov-Galerkin (DPG) for various examples. In addition, the sources for several external benchmark/proxy-apps built on top of MFEM are available: Laghos \u2014 High-Order Lagrangian hydrodynamics miniapp. Remhos \u2014 High-Order advection remap miniapp. Mulard \u2014 Multigroup thermal radiation diffusion miniapp. A handful of \"toy\" miniapps of a less serious nature demonstrate the flexibility of MFEM (and provide a bit of fun): Automata \u2014 Model of a simple cellular automata. Life \u2014 Model of Conway's game of life. Lissajous \u2014 Spinning optical illusion. Mandel \u2014 Fractal visualization with AMR. Mondrian \u2014 Convert any image to an AMR mesh. Rubik \u2014 Interactive Rubik's Cube\u2122 puzzle. Snake \u2014 Model of the Rubik's Snake\u2122 puzzle. Write a simple simulation Modify the miniapps and example codes to create a simple simulation of your own. 
You can edit the source code and rebuild the binary simply with make . For example, you can solve a steady-state heat conduction problem in 2D and 3D using the shaper miniapp (modified for the cable shape) to define the mesh and ex1 or ex1p to solve it (modified to include separate coefficients for air and cable). Please consult the MFEM code documentation and don't hesitate to ask if you have any implementation questions. We want to see your creativity! Post your visualization images in the Slack channel for a chance to be featured on MFEM's gallery page ! Install MFEM + GLVis on your own machine Download MFEM from mfem.org/download or clone it from GitHub and follow the building instructions here: mfem.org/building . You should be able to download and install the serial version in 10 minutes. The parallel version of MFEM requires installing hypre and METIS (see the building instructions ). Alternatively, if you already have Spack, you can build with spack install mfem glvis . With your own installation, you can explore additional topics not covered in this tutorial such as: Partial Assembly and the Finite Element Operator Decomposition . GPU Support on NVIDIA and AMD hardware. Integrations with PETSc , SUNDIALS , SuperLU , libCEED , PUMI , Ginkgo , HiOp , and more. Python support with the PyMFEM wrapper and Jupyter notebooks . Visit the MFEM website For more information about MFEM, visit the website, mfem.org , including the Features , Examples , Publications , and Finite Elements , pages. Review the Videos for recordings from MFEM seminars , workshops , and conference presentations: You may also be interested in visiting the websites of the related GLVis , CEED , and BLAST projects. Join the community If MFEM looks exciting to you, please join the community on GitHub and help us make it better! \ud83d\ude80 We welcome contributions and feedback at all levels: bugfixes; code improvements; simplifications; new mesh, discretization, or solver capabilities; improved documentation; new examples and miniapps; HPC performance improvements; etc. See CONTRIBUTING.md for more details. You can contact the MFEM team by posting to the GitHub issue tracker or at mfem-dev@llnl.gov . Thank you! Thank you for participating in the MFEM tutorial. Please let us know if you have any questions in the Slack channel . Back to the MFEM tutorial page MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Further"}, {"location": "tutorial/further/#further-steps", "text": "30 minutes advanced", "title": "  Further Steps"}, {"location": "tutorial/further/#explore-additional-examples-and-miniapps", "text": "MFEM includes a number of well-documented example codes and miniapps that can be used as tutorials, as well as simple starting points for user applications. These examples and miniapps are available in the mfem/examples and mfem/miniapps subdirectories of your VS Code terminal. The full list of examples is below. Feel free to explore any of them depending on your interests, but we recommend starting with the ones marked with a \u2b50. Example 0 \u2014 Simplest MFEM example, good starting point for new users (nodal H1 FEM for the Laplace problem). \u2b50 Example 1 \u2014 Nodal H1 FEM for the Laplace problem. \u2b50 Example 2 \u2014 Vector FEM for linear elasticity. Example 3 \u2014 Nedelec H(curl) FEM for the definite Maxwell problem. Example 4 \u2014 Raviart-Thomas H(div) FEM for the grad-div problem. 
Example 5 \u2014 Mixed pressure-velocity FEM for the Darcy problem. Example 6 \u2014 Non-conforming adaptive mesh refinement (AMR) for the Laplace problem. Example 7 \u2014 Laplace problem on a surface (the unit sphere). \u2b50 Example 8 \u2014 Discontinuous Petrov-Galerkin (DPG) for the Laplace problem. Example 9 \u2014 Discontinuous Galerkin (DG) time-dependent advection. \u2b50 Example 10 \u2014 Time-dependent implicit nonlinear elasticity. \u2b50 Example 11 \u2014 Parallel Laplace eigensolver. Example 12 \u2014 Parallel linear elasticity eigensolver. Example 13 \u2014 Parallel Maxwell eigensolver. Example 14 \u2014 DG for the Laplace problem. Example 15 \u2014 Dynamic AMR for Laplace with prescribed time-dependent source. \u2b50 Example 16 \u2014 Time-dependent nonlinear heat equation. Example 17 \u2014 DG for linear elasticity. Example 18 \u2014 DG for the Euler equations. Example 19 \u2014 Incompressible nonlinear elasticity. Example 20 \u2014 Symplectic ODE integration. Example 21 \u2014 AMR for linear elasticity. Example 22 \u2014 Complex-valued linear systems. \u2b50 Example 23 \u2014 Second-order in time wave equation. \u2b50 Example 24 \u2014 Mixed finite element spaces and interpolators. Example 25 \u2014 Perfectly Matched Layer (PML) for Maxwell equations. Example 26 \u2014 Multigrid preconditioner for the Laplace problem. \u2b50 Example 27 \u2014 Boundary conditions for the Laplace problem. Example 28 \u2014 Constraints and sliding boundary conditions. Example 29 \u2014 Solving PDEs on embedded surfaces. Example 30 \u2014 Mesh preprocessing, resolving problem data. Example 31 \u2014 Nedelec H(curl) FEM for the anisotropic definite Maxwell problem. Example 32 \u2014 Parallel Nedelec Maxwell eigensolver with anisotropic permittivity. Example 33 \u2014 Nodal C0 FEM for the fractional Laplacian problem. Example 34 \u2014 Source function from SubMesh. Example 35 \u2014 Port boundary condition from SubMesh. Example 36 \u2014 High-order FEM for the obstacle problem. Example 37 \u2014 Topology optimization. Example 38 \u2014 Cut-surface and cut-volume integration. Example 39 \u2014 Named mesh attributes. Most of these examples have a serial and a parallel version, illustrating the ease of transition and the minimal code changes between the two. Many examples also have modifications that take advantage of optional third-party libraries such as PETSc , SLEPc , SUNDIALS , PUMI , Ginkgo , and HiOp . Beyond the examples, a number of miniapps are available that are more representative of the advanced usage of the library in physics/application codes. Some of the included miniapps are: Volta \u2014 Simple electrostatics simulation code. Tesla \u2014 Simple magnetostatics simulation code. Maxwell \u2014 Transient electromagnetics simulation code. Joule \u2014 Transient magnetics and Joule heating miniapp. Navier \u2014 Solver for the incompressible time-dependent Navier-Stokes equations. Mesh Explorer \u2014 Visualize and manipulate meshes. Mesh Optimizer \u2014 Optimize high-order meshes. Shaper \u2014 Resolve material interfaces by mesh refinement. Interpolation \u2014 Evaluation of high-order finite element functions in physical space. Overlapping Grids \u2014 Schwarz coupling of single- and multi-physics problems. Extrapolation \u2014 Finite element extrapolation solver. Distance \u2014 Finite element distance solver. Shifted Diffusion \u2014 High-Order shifted boundary method for non body-fitted meshes. Minimal Surface \u2014 Compute the minimal surface of a given mesh. 
Display Basis \u2014 Visualize finite element basis functions. LOR Transfer \u2014 Map functions between high-order and low-order-refined spaces. SPDE \u2014 Generate a Gaussian random field via the SPDE method; i.e., by solving a fractional PDE with random load. Contact \u2014 Mortar contact patch test for elasticity using the Tribol library. Multidomain \u2014 Multidomain and SubMesh demonstration Miniapp. DPG \u2014 Discontinuous Petrov-Galerkin (DPG) for various examples. In addition, the sources for several external benchmark/proxy-apps built on top of MFEM are available: Laghos \u2014 High-Order Lagrangian hydrodynamics miniapp. Remhos \u2014 High-Order advection remap miniapp. Mulard \u2014 Multigroup thermal radiation diffusion miniapp. A handful of \"toy\" miniapps of a less serious nature demonstrate the flexibility of MFEM (and provide a bit of fun): Automata \u2014 Model of a simple cellular automata. Life \u2014 Model of Conway's game of life. Lissajous \u2014 Spinning optical illusion. Mandel \u2014 Fractal visualization with AMR. Mondrian \u2014 Convert any image to an AMR mesh. Rubik \u2014 Interactive Rubik's Cube\u2122 puzzle. Snake \u2014 Model of the Rubik's Snake\u2122 puzzle.", "title": "  Explore additional examples and miniapps"}, {"location": "tutorial/further/#write-a-simple-simulation", "text": "Modify the miniapps and example codes to create a simple simulation of your own. You can edit the source code and rebuild the binary simply with make . For example, you can solve a steady-state heat conduction problem in 2D and 3D using the shaper miniapp (modified for the cable shape) to define the mesh and ex1 or ex1p to solve it (modified to include separate coefficients for air and cable). Please consult the MFEM code documentation and don't hesitate to ask if you have any implementation questions.", "title": "  Write a simple simulation"}, {"location": "tutorial/further/#install-mfem-glvis-on-your-own-machine", "text": "Download MFEM from mfem.org/download or clone it from GitHub and follow the building instructions here: mfem.org/building . You should be able to download and install the serial version in 10 minutes. The parallel version of MFEM requires installing hypre and METIS (see the building instructions ). Alternatively, if you already have Spack, you can build with spack install mfem glvis . With your own installation, you can explore additional topics not covered in this tutorial such as: Partial Assembly and the Finite Element Operator Decomposition . GPU Support on NVIDIA and AMD hardware. Integrations with PETSc , SUNDIALS , SuperLU , libCEED , PUMI , Ginkgo , HiOp , and more. Python support with the PyMFEM wrapper and Jupyter notebooks .", "title": "  Install MFEM + GLVis on your own machine"}, {"location": "tutorial/further/#visit-the-mfem-website", "text": "For more information about MFEM, visit the website, mfem.org , including the Features , Examples , Publications , and Finite Elements , pages. Review the Videos for recordings from MFEM seminars , workshops , and conference presentations: You may also be interested in visiting the websites of the related GLVis , CEED , and BLAST projects.", "title": "  Visit the MFEM website"}, {"location": "tutorial/further/#join-the-community", "text": "If MFEM looks exciting to you, please join the community on GitHub and help us make it better! 
\ud83d\ude80 We welcome contributions and feedback at all levels: bugfixes; code improvements; simplifications; new mesh, discretization, or solver capabilities; improved documentation; new examples and miniapps; HPC performance improvements; etc. See CONTRIBUTING.md for more details. You can contact the MFEM team by posting to the GitHub issue tracker or at mfem-dev@llnl.gov .", "title": "  Join the community"}, {"location": "tutorial/meshvis/", "text": "Meshing and Visualization 45 minutes intermediate Lesson Objectives Learn about external mesh generators that can be used with MFEM. Learn about MFEM's internal meshing tools. Learn about external visualization tools that can be used with MFEM. Note Please complete the Getting Started and Finite Element Basics pages before this lesson. Importing meshes from Gmsh and Cubit In this section we demonstrate the common steps necessary for generating high-quality meshes in Gmsh and Cubit and how to use them in finite element simulations with MFEM. Gmsh is an open-source, freely available mesh generation tool with built-in computer-aided design (CAD) functionality and a postprocessor. The input to Gmsh can be a simple text file that provides a description of the geometry of the finite element model. The geometry can be generated using the Gmsh graphical user interface (GUI), simple text editors such as Vi/Vim/Emacs, or using more sophisticated CAD tools such as SolidWorks or Autocad. CAD models in IGES or STEP formats can be imported by the CAD engine of Gmsh, meshed, and prepared as inputs to the MFEM examples. Here, however, we focus on simpler examples showing the process of generating meshes suitable for MFEM and not on the actual geometry. Many examples together with documentation on the input syntax can be found at the Gmsh website . Users familiar with Gmsh can skip the first steps and download already prepared geometries for meshing. If Gmsh is not installed on your local machine, please download it and follow the installation instructions . We will start with the definitions of a cube with edge length L=1 and two cylinders with a radius L/10 and heights equal to L. The following snippet defines these objects: SetFactory(\"OpenCASCADE\"); Mesh.Algorithm = 6; Mesh.CharacteristicLengthMin = 0.1; Mesh.CharacteristicLengthMax = 0.1; L=1.0; Box(1) = {0,0,0,L,L,L}; Rc=L/10; Cylinder(2) = {L/8, 0.0, L/4, 0, L, 0, Rc , 2*Pi}; Cylinder(3) = {4*L/8, 0.0, L/4, 0, L, 0, Rc , 2*Pi}; Here is a screenshot of the GUI of Gmsh with the generated objects: The first line in the Gmsh input file defines the geometric engine. Here it is assumed that Gmsh is compiled with CAD support. Such precompiled binaries for Windows, Mac, and Linux can be downloaded from the Gmsh website . The next three lines define the mesh algorithm, which will be used later for generating the mesh and the associated characteristic length scale. Finer or coarser meshes can be obtained by adjusting these numbers. The following line defines a parameter L which is utilized in the definition of the cube. A parameter R defines the radius of the base of the two cylinders. The final geometry, which will be used for simulations, is obtained by subtracting the two cylinders from the cube as: BooleanDifference(50) = { Volume{1}; Delete; }{ Volume{2,3}; Delete; }; Gmsh uses the obtained geometry for generating the mesh. However, without additional specifications, we cannot impose boundary conditions without any attributes assigned to the boundaries. 
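(On the MFEM side, element attributes are what tie a region of the mesh to a material coefficient. As a minimal illustration of the kind of change suggested in the \"Write a simple simulation\" step earlier — assuming a serial ex1-style code where mesh and fespace already exist, and a mesh with two element attributes, 1 for air and 2 for a cable; the conductivity values are purely hypothetical — separate coefficients could be set up as follows.)
// Minimal sketch (hypothetical values): one diffusion coefficient per mesh attribute.
Vector sigma(mesh.attributes.Max());
sigma = 1.0;        // attribute 1: air
sigma(1) = 100.0;   // attribute 2: cable (0-based index 1)
PWConstCoefficient sigma_coeff(sigma);
BilinearForm a(&fespace);
a.AddDomainIntegrator(new DiffusionIntegrator(sigma_coeff));
a.Assemble();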
Different attributes can be assigned to the volumetric part of the mesh for using different material coefficients within the domain. Here, however, we use only a single attribute, as the first example uses only a single diffusion coefficient. Physical Volume(1) = {50}; Physical Surface(1) = {1,6,8}; Mesh.MshFileVersion = 2.2; The first line from the above snippet defines physical volume 1 to coincide with the geometry volume 50, which is the final volume obtained by the Boolean operation. The second line defines physical surface 1 to include geometric surfaces {1,6,8}. Finally, the last line specifies the file format. Note that MFEM can only read ASCII Gmsh format version 2.2. The generated mesh is shown in the figures above. Careful inspection reveals that the cylindrical surface is not represented well by the linear elements. We can improve the representation by refining the mesh. We encourage you to play with the mesh and to generate finer discretizations for the simulations. You can download the Gmsh input file here and the resulting mesh file here . For users without access to the Gmsh GUI, a mesh can be generated in your local terminal with the following command: gmsh -3 cross_heat.geo To run simulations with the generated mesh, drag-and-drop the mesh file from your computer to the AWS browser window in the MFEM examples directory: To run Example 1 with the newly prepared mesh, be sure you are in the examples directory and then run the following command: mpirun -np 24 ./ex1p -m cross_heat.msh -no-vis The solution of the diffusion equation for the generated mesh is shown in the following two pictures. The figures are generated with ParaView, and the process of visualization is explained at the end of this tutorial session. If we want to enforce Dirichlet boundary conditions different than zero on some other surface, we must export it as a physical surface. For example, to enforce value one on the other cylindrical surface, add the following line to the cross_heat.geo file: Physical Surface(2) = {7}; The line should be inserted in any place after the definition of geometrical surface 7, e.g., after the boolean operation defining the final geometry. If we run ex1.cpp without modifications, a zero value will be assigned to the newly defined surface. Thus, in order to set it to one, modify section 10 in ex1p.cpp : // 10. Define the solution vector x as a parallel finite element grid // function corresponding to fespace. Initialize x with initial guess of // zero, which satisfies the boundary conditions. ParGridFunction x(&fespace); x = 0.0; { Array ess_bdr(pmesh.bdr_attributes.Max()); ess_bdr = 0; ess_bdr[1] = 1; ConstantCoefficient zero(0.0); Coefficient* coeff[1]; coeff[0]=&one; x.ProjectBdrCoefficient(coeff,ess_bdr); } In the above snippet, we project coefficient one on the degrees of freedom associated with physical surface 2 (the indexing starts at zero). Executing the modified code with the newly created mesh will result in the following solution: The results can be seen in the GLVis windows as well. However, the users will see only the defined physical surfaces (1,2) and the boundaries between the parallel partitions. Any 2D cuts will work as usual. MFEM can import meshes saved in Exodus II format generated with Cubit . However, this feature requires compilation of the library with HDF5, NetCDF, and Exodus, which is not available in the AWS tutorial image. MFEM's meshing tools MFEM provides many tools, routines, and examples for mesh manipulation. 
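(For completeness, here is a slightly fuller sketch of the ex1p modification shown above, with the Array template argument and the projected coefficient written out explicitly; it assumes, as above, that the Gmsh mesh defines physical surface 2 and that fespace and pmesh come from the unmodified ex1p.cpp.)
// Sketch of the modified section 10 of ex1p.cpp:
ParGridFunction x(&fespace);
x = 0.0;
{
   Array<int> ess_bdr(pmesh.bdr_attributes.Max());
   ess_bdr = 0;
   ess_bdr[1] = 1;                   // boundary attribute 2 (0-based index 1)
   ConstantCoefficient one(1.0);     // Dirichlet value on that surface
   Coefficient *coeff[1] = { &one };
   x.ProjectBdrCoefficient(coeff, ess_bdr);
}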
The miniapp examples illustrate a large part of the MFEM functionality in the miniapps/meshing subdirectory. Below we provide more details about only two of these miniapps. However, users are encouraged to also explore the other meshing miniapps . Mesh Explorer The mesh explorer miniapp is a handy tool to examine, visualize and manipulate a given mesh. Users have to compile it in the miniapps/meshing subdirectory: cd ~/mfem/miniapps/meshing make mesh-explorer Once compiled, it can be executed in the same directory by typing in the terminal: ./mesh-explorer Before executing it, users should ensure that the GLVis window is open and connected to the AWS machine. Once started, many options will appear in the terminal window. An example screenshot is provided below. By pressing the corresponding keys, a number of operations can be performed on the input mesh files, including: Visualization of mesh materials with m , and individual mesh elements with e . Mesh refinement with r , scaling with s , randomization with j , and transformation with t . Manipulation of the mesh curvature with c . The ability to simulate parallel partitioning with p . Quantitative and visual reports of mesh quality with x , h and J . Saving the resulting mesh in MFEM or VTK format with S and V . For example, selecting v in the prompt and pressing enter will display the default mesh of a hex-meshed beam in the GLVis window. To operate on a different mesh, users should exit the miniapp with q and start it again with the following line: ./mesh-explorer -m new_mesh_file.msh Here new_mesh_file.msh is the mesh file selected by the user. The input mesh can be in any format supported by MFEM. In addition, the miniapp can save the loaded mesh in native MFEM and VTK formats. Shaper Shaper is a miniapp that performs multiple levels of adaptive mesh refinement to resolve the interfaces between different \"materials\" in the mesh, as specified by a given material function. It can be used as a simple initial mesh generator, for example in the case when the interface is too complex to describe without local refinement. Both conforming and non-conforming refinements are supported. To experiment with it, go to the miniapps/meshing subdirectory and type: cd ~/mfem/miniapps/meshing make shaper ./shaper The result of the execution with five levels of refinement and default settings can be seen in the following screenshot. Users can specify different material distributions by modifying the function int material(Vector &x, Vector &xmin, Vector &xmax) at the beginning of shaper.cpp . The current function returns an integer value of 1 if a point is located within a simple annulus/shell with a relative inner radius of 0.4 and outer radius of 0.6, and 2 otherwise. The coordinates of a point within the mesh are mapped to values between minus one and one. Users are encouraged to modify the material distribution function and use different meshes as input. The refinement level is controlled in the terminal by pressing y for further refinement or n for completing the run. The resulting mesh is written to the file shaper.mesh . Once the mesh is written, users can use it as an input to other examples or miniapps. Note See also the related Mandel and Mondrian miniapps in the miniapps/toys subdirectory. Visualizing results in ParaView and VisIt To save the simulation results from the parallel version of Example 1 ( ex1p.cpp ) in ParaView format, add the following lines just before step 17 in the file.
{ ParaViewDataCollection *pd = NULL; pd = new ParaViewDataCollection(\"Example1P\", &pmesh); pd->SetPrefixPath(\"ParaView\"); pd->RegisterField(\"solution\", &x); pd->SetLevelsOfDetail(order); pd->SetDataFormat(VTKFormat::BINARY); pd->SetHighOrderOutput(true); pd->SetCycle(0); pd->SetTime(0.0); pd->Save(); delete pd; } The first line defines a ParaViewDataCollection for saving data in ParaView data format. The following two lines define the name of the data collection and the prefix path, which is set to ParaView. Thus, the data set will be written in the directory ParaView relative to the current execution path. The following line registers the ParGridFunction x in the data collection. The remaining lines set different parameters for the format and the data set, and finally, the set is saved and deleted. See MFEM documentation for more detailed information about ParaView. Compile and execute the modified example. To download the results saved in ParaView format to your local machine, compress and gather all files in a single archive with the following command: tar cvfz paraview.tgz ParaView/ which will generate the file paraview.tgz in the current directory. Download the file to your local machine by dragging it from the Explorer window: Then go to the download location and extract the archive with tar vxfz paraview.tgz ParaView/ The above assumes a UNIX type of environment. Windows users could use the GUI or WSL/WSL2 engines. ParaView can be freely downloaded both as a source code or precompiled binaries. The precompiled binaries are available for Linux, macOS, and Windows. Please follow the instructions for the corresponding operating system for installation instructions. To visualize the downloaded simulation data, run ParaView and open the file Example1P.pvd in the ParaView/Example1P directory, where the path is relative to the directory where the archive was downloaded. Next, click on the Apply button and select Solution in the drop-down menu in the second row of buttons. The geometry, together with the solution, can be rotated on the screen by holding and dragging the mouse. Replacing ParaviewDataCollection with VisItDataCollection allows you to write data in VisIt data format. VisIt can be freely downloaded and installed on Linux, macOS, and Windows and provides another alternative to ParaView. The steps for downloading and the simulation data are the same as the steps outlined above for ParaView. Questions? Ask for help in the tutorial Slack channel . Next Steps Depending on your interests pick one of the following lessons: Tour of MFEM Examples Solvers and Scalability Further Steps Back to the MFEM tutorial page MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Meshvis"}, {"location": "tutorial/meshvis/#meshing-and-visualization", "text": "45 minutes intermediate", "title": "  Meshing and Visualization"}, {"location": "tutorial/meshvis/#importing-meshes-from-gmsh-and-cubit", "text": "In this section we demonstrate the common steps necessary for generating high-quality meshes in Gmsh and Cubit and how to use them in finite element simulations with MFEM. Gmsh is an open-source, freely available mesh generation tool with built-in computer-aided design (CAD) functionality and a postprocessor. The input to Gmsh can be a simple text file that provides a description of the geometry of the finite element model. 
The geometry can be generated using the Gmsh graphical user interface (GUI), simple text editors such as Vi/Vim/Emacs, or using more sophisticated CAD tools such as SolidWorks or Autocad. CAD models in IGES or STEP formats can be imported by the CAD engine of Gmsh, meshed, and prepared as inputs to the MFEM examples. Here, however, we focus on simpler examples showing the process of generating meshes suitable for MFEM and not on the actual geometry. Many examples together with documentation on the input syntax can be found at the Gmsh website . Users familiar with Gmsh can skip the first steps and download already prepared geometries for meshing. If Gmsh is not installed on your local machine, please download it and follow the installation instructions . We will start with the definitions of a cube with edge length L=1 and two cylinders with a radius L/10 and heights equal to L. The following snippet defines these objects: SetFactory(\"OpenCASCADE\"); Mesh.Algorithm = 6; Mesh.CharacteristicLengthMin = 0.1; Mesh.CharacteristicLengthMax = 0.1; L=1.0; Box(1) = {0,0,0,L,L,L}; Rc=L/10; Cylinder(2) = {L/8, 0.0, L/4, 0, L, 0, Rc , 2*Pi}; Cylinder(3) = {4*L/8, 0.0, L/4, 0, L, 0, Rc , 2*Pi}; Here is a screenshot of the GUI of Gmsh with the generated objects: The first line in the Gmsh input file defines the geometric engine. Here it is assumed that Gmsh is compiled with CAD support. Such precompiled binaries for Windows, Mac, and Linux can be downloaded from the Gmsh website . The next three lines define the mesh algorithm, which will be used later for generating the mesh and the associated characteristic length scale. Finer or coarser meshes can be obtained by adjusting these numbers. The following line defines a parameter L which is utilized in the definition of the cube. A parameter R defines the radius of the base of the two cylinders. The final geometry, which will be used for simulations, is obtained by subtracting the two cylinders from the cube as: BooleanDifference(50) = { Volume{1}; Delete; }{ Volume{2,3}; Delete; }; Gmsh uses the obtained geometry for generating the mesh. However, without additional specifications, we cannot impose boundary conditions without any attributes assigned to the boundaries. Different attributes can be assigned to the volumetric part of the mesh for using different material coefficients within the domain. Here, however, we use only a single attribute, as the first example uses only a single diffusion coefficient. Physical Volume(1) = {50}; Physical Surface(1) = {1,6,8}; Mesh.MshFileVersion = 2.2; The first line from the above snippet defines physical volume 1 to coincide with the geometry volume 50, which is the final volume obtained by the Boolean operation. The second line defines physical surface 1 to include geometric surfaces {1,6,8}. Finally, the last line specifies the file format. Note that MFEM can only read ASCII Gmsh format version 2.2. The generated mesh is shown in the figures above. Careful inspection reveals that the cylindrical surface is not represented well by the linear elements. We can improve the representation by refining the mesh. We encourage you to play with the mesh and to generate finer discretizations for the simulations. You can download the Gmsh input file here and the resulting mesh file here . 
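(Once cross_heat.msh is available, MFEM reads it directly; a minimal, self-contained sketch of the relevant lines, roughly as they appear at the beginning of ex1.cpp:)
#include \"mfem.hpp\"
#include <iostream>
using namespace mfem;

int main()
{
   Mesh mesh(\"cross_heat.msh\", 1, 1);   // parses the ASCII Gmsh 2.2 file
   std::cout << \"elements: \" << mesh.GetNE()
             << \"  boundary elements: \" << mesh.GetNBE() << std::endl;
   return 0;
}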
For users without access to the Gmsh GUI, a mesh can be generated in your local terminal with the following command: gmsh -3 cross_heat.geo To run simulations with the generated mesh, drag-and-drop the mesh file from your computer to the AWS browser window in the MFEM examples directory: To run Example 1 with the newly prepared mesh, be sure you are in the examples directory and then run the following command: mpirun -np 24 ./ex1p -m cross_heat.msh -no-vis The solution of the diffusion equation for the generated mesh is shown in the following two pictures. The figures are generated with ParaView, and the process of visualization is explained at the end of this tutorial session. If we want to enforce Dirichlet boundary conditions different than zero on some other surface, we must export it as a physical surface. For example, to enforce value one on the other cylindrical surface, add the following line to the cross_heat.geo file: Physical Surface(2) = {7}; The line should be inserted in any place after the definition of geometrical surface 7, e.g., after the boolean operation defining the final geometry. If we run ex1.cpp without modifications, a zero value will be assigned to the newly defined surface. Thus, in order to set it to one, modify section 10 in ex1p.cpp : // 10. Define the solution vector x as a parallel finite element grid // function corresponding to fespace. Initialize x with initial guess of // zero, which satisfies the boundary conditions. ParGridFunction x(&fespace); x = 0.0; { Array ess_bdr(pmesh.bdr_attributes.Max()); ess_bdr = 0; ess_bdr[1] = 1; ConstantCoefficient zero(0.0); Coefficient* coeff[1]; coeff[0]=&one; x.ProjectBdrCoefficient(coeff,ess_bdr); } In the above snippet, we project coefficient one on the degrees of freedom associated with physical surface 2 (the indexing starts at zero). Executing the modified code with the newly created mesh will result in the following solution: The results can be seen in the GLVis windows as well. However, the users will see only the defined physical surfaces (1,2) and the boundaries between the parallel partitions. Any 2D cuts will work as usual. MFEM can import meshes saved in Exodus II format generated with Cubit . However, this feature requires compilation of the library with HDF5, NetCDF, and Exodus, which is not available in the AWS tutorial image.", "title": "  Importing meshes from Gmsh and Cubit"}, {"location": "tutorial/meshvis/#mfems-meshing-tools", "text": "MFEM provides many tools, routines, and examples for mesh manipulation. The miniapp examples illustrate a large part of the MFEM functionality in the miniapps/meshing subdirectory. Below we provide more details about only two of these miniapps. However, users are encouraged to also explore the other meshing miniapps .", "title": "  MFEM's meshing tools"}, {"location": "tutorial/meshvis/#mesh-explorer", "text": "The mesh explorer miniapp is a handy tool to examine, visualize and manipulate a given mesh. Users have to compile it in the miniapps/meshing subdirectory: cd ~/mfem/miniapps/meshing make mesh-explorer Once compiled, it can be executed in the same directory by typing in the terminal ./mesh-explorer Before executing it, users should ensure that the GLVis window is open and connected to the AWS machine. Once started, many options will appear in the terminal window. 
An example screenshot is provided below. By pressing the corresponding keys, a number of operations can be performed on the input mesh files, including: Visualization of mesh materials with m , and individual mesh elements with e . Mesh refinement with r , scaling with s , randomization with j , and transformation with t . Manipulation of the mesh curvature with c . The ability to simulate parallel partitioning with p . Quantitative and visual reports of mesh quality with x , h and J . Saving the resulting mesh in MFEM or VTK format with S and V . For example, selecting v in the prompt and pressing enter will display the default mesh of a hex-meshed beam in the GLVis window. To operate on a different mesh, users should exit the miniapp with q and start it again with the following line: ./mesh-explorer -m new_mesh_file.msh Here new_mesh_file.msh is the mesh file selected by the user. The input mesh can be in any format supported by MFEM. In addition, the miniapp can save the loaded mesh in native MFEM and VTK formats.", "title": "  Mesh Explorer"}, {"location": "tutorial/meshvis/#shaper", "text": "Shaper is a miniapp that performs multiple levels of adaptive mesh refinement to resolve the interfaces between different \"materials\" in the mesh, as specified by a given material function. It can be used as a simple initial mesh generator, for example in the case when the interface is too complex to describe without local refinement. Both conforming and non-conforming refinements are supported. To experiment with it, go to the miniapps/meshing subdirectory and type: cd ~/mfem/miniapps/meshing make shaper ./shaper The result of the execution with five levels of refinement and default settings can be seen in the following screenshot. Users can specify different material distributions by modifying the function int material(Vector &x, Vector &xmin, Vector &xmax) at the beginning of shaper.cpp . The current function returns an integer value of 1 if a point is located within a simple annulus/shell with a relative inner radius of 0.4 and outer radius of 0.6, and 2 otherwise. The coordinates of a point within the mesh are mapped to values between minus one and one. Users are encouraged to modify the material distribution function and use different meshes as input. The refinement level is controlled in the terminal by pressing y for further refinement or n for completing the run. The resulting mesh is written to the file shaper.mesh . Once the mesh is written, users can use it as an input to other examples or miniapps.", "title": "  Shaper"}, {"location": "tutorial/meshvis/#visualizing-results-in-paraview-and-visit", "text": "To save the simulation results from the parallel version of Example 1 ( ex1p.cpp ) in ParaView format, add the following lines just before step 17 in the file. { ParaViewDataCollection *pd = NULL; pd = new ParaViewDataCollection(\"Example1P\", &pmesh); pd->SetPrefixPath(\"ParaView\"); pd->RegisterField(\"solution\", &x); pd->SetLevelsOfDetail(order); pd->SetDataFormat(VTKFormat::BINARY); pd->SetHighOrderOutput(true); pd->SetCycle(0); pd->SetTime(0.0); pd->Save(); delete pd; } The first line defines a ParaViewDataCollection for saving data in ParaView data format. The following two lines define the name of the data collection and the prefix path, which is set to ParaView. Thus, the data set will be written in the directory ParaView relative to the current execution path. The following line registers the ParGridFunction x in the data collection.
The remaining lines set different parameters for the format and the data set, and finally, the set is saved and deleted. See MFEM documentation for more detailed information about ParaView. Compile and execute the modified example. To download the results saved in ParaView format to your local machine, compress and gather all files in a single archive with the following command: tar cvfz paraview.tgz ParaView/ which will generate the file paraview.tgz in the current directory. Download the file to your local machine by dragging it from the Explorer window: Then go to the download location and extract the archive with tar vxfz paraview.tgz ParaView/ The above assumes a UNIX type of environment. Windows users could use the GUI or WSL/WSL2 engines. ParaView can be freely downloaded both as a source code or precompiled binaries. The precompiled binaries are available for Linux, macOS, and Windows. Please follow the instructions for the corresponding operating system for installation instructions. To visualize the downloaded simulation data, run ParaView and open the file Example1P.pvd in the ParaView/Example1P directory, where the path is relative to the directory where the archive was downloaded. Next, click on the Apply button and select Solution in the drop-down menu in the second row of buttons. The geometry, together with the solution, can be rotated on the screen by holding and dragging the mouse. Replacing ParaviewDataCollection with VisItDataCollection allows you to write data in VisIt data format. VisIt can be freely downloaded and installed on Linux, macOS, and Windows and provides another alternative to ParaView. The steps for downloading and the simulation data are the same as the steps outlined above for ParaView.", "title": "  Visualizing results in ParaView and VisIt"}, {"location": "tutorial/solvers/", "text": "Solvers and Scalability 45 minutes intermediate Lesson Objectives Learn about MFEM's parallel scalability. Learn about MFEM's support for efficient solvers and preconditioners. Note Please complete the Getting Started and Finite Element Basics pages before this lesson. MFEM is designed to be highly scalable and efficient on a wide variety of platforms: from laptops to GPU-accelerated supercomputers . The solvers described in this lesson play a critical role in this parallel scalability. Scalable algebraic multigrid preconditioners from hypre MFEM comes with a large number of example codes that demonstrate different physical applications, finite element discretizations, and linear solvers: Example 1 solves a Poisson problem, Example 2 solves a linear elasticity problem, Example 3 solves a definite Maxwell (electromagnetics) problem, and Example 4 solves grad-div diffusion problem. The parallel versions of these examples ( ex1p , ex2p , ex3p , and ex4p ) each use suitable algebraic multigrid (AMG) preconditioners from the hypre solvers library. We describe sample runs with each of these examples in more details below. Example 1: Poisson problem and AMG First, make sure you are in the examples subdirectory: cd ~/mfem/examples Build the parallel version of Example 1: make ex1p Run the parallel version of Example 1, solving a Poisson problem: ./ex1p After forming the linear system, MFEM uses hypre to construct and apply an AMG preconditioner. Details of the AMG preconditioner are provided in the example output under the headers BoomerAMG SETUP PARAMETERS and BoomerAMG SOLVER PARAMETERS . 
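(In code, this boils down to a few lines; a simplified sketch of the solve phase, assuming A, B, and X are the HypreParMatrix and parallel vectors produced by FormLinearSystem in ex1p.cpp:)
// Simplified sketch: AMG-preconditioned CG, as in the full-assembly path of ex1p.
HypreBoomerAMG prec(A);    // BoomerAMG preconditioner from hypre
prec.SetPrintLevel(1);     // prints the SETUP/SOLVER parameter tables
HyprePCG pcg(A);           // preconditioned conjugate gradients
pcg.SetTol(1e-12);
pcg.SetMaxIter(200);
pcg.SetPrintLevel(2);
pcg.SetPreconditioner(prec);
pcg.Mult(B, X);            // solve A X = B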
Click here to view the terminal output A key feature of AMG methods is their scalability: with default options, convergence is achieved in only 18 conjugate gradient iterations. Let's see what happens if we increase the mesh refinement. Edit ex1p.cpp changing line 153 as follows: @@ -150,7 +150,7 @@ int main(int argc, char *argv[]) ParMesh pmesh(MPI_COMM_WORLD, mesh); mesh.Clear(); { - int par_ref_levels = 2; + int par_ref_levels = 3; for (int l = 0; l < par_ref_levels; l++) { pmesh.UniformRefinement(); This adds one additional level of refinement, making the problem roughly 4 times as large in 2D, or 8 times as large in 3D. Rebuild the example ( make ex1p ) and re-run it: ./ex1p Although the number of unknowns for this problem has increased by roughly 4x, the iteration count remains at 18 due to the scalability of the AMG preconditioner. Let's now try a 3D problem. For that, we just need to choose a 3D mesh using the -m or --mesh command line argument. Because these problems are more computationally expensive, let's first reduce the refinement level, setting int par_ref_levels = 1; in the ex1p.cpp source code. Rebuild the example ( make ex1p ) and re-run it using the three-dimensional Fichera mesh: ./ex1p -m ../data/fichera.mesh . Convergence is attained in only 16 iterations. Finally, let's take a look at the parallel scalability of the solvers: Increase the refinement level: int par_ref_levels = 2; Recompile: make ex1p Now run the 3D example on 8 cores: mpirun -np 8 ./ex1p -m ../data/fichera.mesh This is an example of a weak scaling test : the problem size and the number of processors are both increased by a factor of 8. Because the PCG iteration counts remain roughly constant, the total time to solution should remain roughly fixed (minus some overhead and communication cost), even though we are solving a problem that is 8 times larger. Example 2: Linear Elasticity This example demonstrates solving a linear elasticity cantilever beam problem with different materials. This example is designed to work with any of the \"beam\" meshes provided by MFEM. Run ls ../data | grep beam to list the available 2D and 3D meshes: beam-hex-nurbs.mesh , beam-hex.mesh , beam-hex.vtk , beam-quad-amr.mesh , beam-quad-nurbs.mesh , beam-quad.mesh , beam-quad.vtk , beam-tet.mesh , beam-tet.vtk , beam-tri.mesh , beam-tri.vtk , beam-wedge.mesh , and beam-wedge.vtk . The elements and boundaries of these meshes are assigned attributes/materials suitable for the cantilever problem: +----------+----------+ boundary --->| material | material |<--- boundary attribute 1 | 1 | 2 | attribute 2 (fixed) +----------+----------+ (pull down) Build the example with make ex2p . Try running ./ex2p in the terminal to run a 2D elasticity problem. As in Example 1, the linear system is solved using AMG. For this example, two types of AMG solvers can be used: A special version of AMG designed specifically for elasticity ( see this paper ). AMG for systems. To enable the special elasticity AMG, add the flag -elast to the command line, otherwise, AMG for systems will be used. For example: ./ex2p -elast . The polynomial degree (order) can be changed with the --order command line argument ( -o for short). For example: ./ex2p -o 2 . By default, low-order $(p=1)$ elements are used. Warning Using higher-order elements can quickly become computationally expensive. See the section below on Low-order-refined methods for a more efficient approach. 
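(In the source, the difference between the two AMG variants is a single call on the preconditioner; a sketch in the spirit of ex2p.cpp, where A is the assembled HypreParMatrix, fespace is the vector H1 space, dim is the mesh dimension, and amg_elast mirrors the -elast flag:)
// Sketch: special elasticity AMG vs. AMG for systems.
HypreBoomerAMG amg(A);
if (amg_elast) { amg.SetElasticityOptions(&fespace); }   // the -elast variant
else           { amg.SetSystemsOptions(dim); }            // nodal AMG for systems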
Additionally, static condensation can be used to eliminate interior high-order degrees of freedom and obtain a smaller system. For --order 1 , this has no effect. For higher-order problems, static condensation can improve efficiency. In this example, as before, the mesh refinement level can be controlled in the source code through par_ref_levels . Note Remember to recompile the example after editing the source code ( make ex2p ). Running with more than one MPI rank will partition the mesh and run the problem in parallel. Here is a sample 3D run: mpirun -np 8 ./ex2p -m ../data/beam-hex.mesh Try experimenting with different discretization, solver, and parallelization options. Examples 3 and 4: the de Rham Complex The next two examples demonstrate the use of vector finite element spaces . Example 3 solves an electromagnetics problem using $H(\\mathrm{curl})$ finite elements. Example 4 solves a grad-div problem using $H(\\mathrm{div})$ finite elements. Standard multigrid methods don't always work well for these problems, so we need specialized solvers! (See here for a paper on this topic.) For $H(\\mathrm{curl})$ problems, we use the AMS solver from hypre. For $H(\\mathrm{div})$ problems, we either use the ADS solver from hypre or a special hybridization solver . A recent saddle-point $H(\\mathrm{div})$ solver is also available in the miniapps/hdiv-linear-solver directory . See this paper for more details. Try experimenting with different options to get a feel for the performance of the discretizations and solvers: Change the mesh (2D or 3D) using the --mesh ( -m ) command line argument. For example: mpirun -np 16 ex3p -m ../data/beam-hex.mesh . Change the polynomial degree using the --order ( -o ) command line argument. For example: mpirun -np 32 ex4p -m ../data/square-disc-nurbs.mesh -o 3 . Run problems in parallel using mpirun . For ex4p , enable hybridization using the -hb flag. For example: mpirun -np 48 ex4p -m ../data/star-surf.mesh -o 3 -hb . Note Remember to build the examples first: make ex3 ex4 ex3p ex4p MFEM's native Multigrid solver The previous examples ( ex1p , ex2p , ex3p , and ex4p ) all used algebraic multigrid methods. MFEM also supports geometric ($h$- and $p$-multigrid) methods. These solvers are illustrated in Example 26 (and its parallel variant); see the ex26.cpp and ex26p.cpp source files. Mesh refinement can be set using the --geometric-refinements ( -gr ) command line argument. The finite element order can be controlled using the --order-refinements ( -or ) command line argument. Warning Each additional order refinement increases the order by a factor of 2. This quickly becomes computationally expensive, so be careful when increasing the order refinements. This example runs matrix-free using MFEM's partial assembly algorithms . Matrix-free methods are much more efficient for high-order problems and also work better on GPU architectures. Try comparing the performance of ex1p and ex26p for higher-order problems. For example, compare the run time of the following two runs: mpirun -np 32 ./ex26p -m ../data/fichera.mesh -or 2 mpirun -np 32 ./ex1p -m ../data/fichera.mesh -o 1 Both examples solve a degree-4 Poisson problem with 1,884,545 degrees of freedom, but one of them is significantly faster. Explore how the number of CG iterations changes as -or and -gr are increased. (For large problems, it may be worth running ex26p in parallel with mpirun .) 
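(Partial assembly is requested per bilinear form; a minimal sketch of the pattern used by the matrix-free examples, assuming fespace, x, b, and ess_tdof_list are set up as usual:)
// Sketch: partial assembly — the operator is applied matrix-free.
ParBilinearForm a(&fespace);
a.SetAssemblyLevel(AssemblyLevel::PARTIAL);
a.AddDomainIntegrator(new DiffusionIntegrator);
a.Assemble();
OperatorPtr A;
Vector B, X;
a.FormLinearSystem(ess_tdof_list, x, b, A, X, B);   // A applies the action without a matrix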
Low-order-refined methods Examples 1, 2, 3, and 4 used algebraic methods applied to the discretization matrix for each of the problems. Example 26 showed how to use geometric multigrid together with matrix-free methods. Low-order-refined (LOR) is an alternative matrix-free methodology for solving these problems. The LOR solvers miniapp provides matrix-free solvers for the same problems solved in Examples 1, 3, and 4. Go to the LOR solvers miniapp directory: cd ~/mfem/miniapps/solvers Run make plor_solvers to build the parallel LOR solvers miniapp. The --fe-type (or -fe ) command line argument can be used to choose the problem type. -fe h solves an $H^1$ problem (Poisson, equivalent to ex1 ). -fe n solves a Nedelec problem (Maxwell in $H(\\mathrm{curl})$, equivalent to ex3 ). -fe r solves a Raviart-Thomas problem (grad-div in $H(\\mathrm{div})$, equivalent to ex4 ). As usual, the --mesh ( -m ) argument can be used to choose the mesh file. (Keep in mind that MFEM's meshes in the data directory are now found in ../../data relative to the miniapp directory.) The number of mesh refinements in serial and parallel can be controlled with the --refine-serial and --refine-parallel ( -rs and -rp ) command line arguments The polynomial degree can be controlled with the --order ( -o ) argument. Compare the performance of high-order problems with plor_solvers to that of Examples 1, 3, and 4. Here are some sample runs to compare: // 2D, 5th order, 256,800 DOFs mpirun -np 8 ./plor_solvers -fe n -m ../../data/star.mesh -rs 2 -rp 2 -o 5 -no-vis mpirun -np 8 ../../examples/ex3p -m ../../data/star.mesh -o 5 // 3D, 2nd order, 2,378,016 DOFs mpirun -np 24 ./plor_solvers -fe n -m ../../data/fichera.mesh -rs 2 -rp 2 -o 3 -no-vis mpirun -np 24 ../../examples/ex3p -m ../../data/fichera.mesh -o 3 For more details on how LOR solvers work in MFEM, see the High-Order Matrix-Free Solvers talk ( PDF , video ) from the 2021 MFEM community workshop . Additional solver integrations In addition to the hypre AMG solvers and MFEM's built-in solvers illustrated above, MFEM also integrates with a number of third-party solver libraries, including: PETSc \u2014 see the ~/mfem/examples/petsc directory SuperLU \u2014 see the ~/mfem/examples/superlu directory STRUMPACK \u2014 see ~/mfem/examples/ex11p.cpp Ginkgo \u2014 see the ~/mfem/examples/ginkgo directory AmgX \u2014 see the ~/mfem/examples/amgx directory Most third-party libraries are not pre-installed in the AWS image, but you can still peruse the example source code to see the capabilities of the various integrations. You can check the containers repository to see which third-party libraries are available for the image you chose. As of December 2023, we pre-install PETSc and SuperLU for the CPU images and AmgX for the CUDA images. Note If you install MFEM locally , you can enable these third-party solver library integrations with the MFEM_USE_* configuration variables, e.g., by specifying MFEM_USE_PETSC=YES . Questions? Ask for help in the tutorial Slack channel . 
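(The essential pattern behind the LOR miniapp is to build an AMG preconditioner on a low-order-refined version of the same problem and use it inside CG on the high-order, partially assembled operator; a rough sketch, assuming a, A, B, X, x, b, and ess_tdof_list are as in the partial-assembly snippet above — plor_solvers itself is the authoritative reference:)
// Rough sketch of LOR preconditioning.
LORSolver<HypreBoomerAMG> lor_amg(a, ess_tdof_list);   // AMG assembled on the LOR space
CGSolver cg(MPI_COMM_WORLD);
cg.SetRelTol(1e-12);
cg.SetMaxIter(500);
cg.SetPrintLevel(1);
cg.SetPreconditioner(lor_amg);
cg.SetOperator(*A);
cg.Mult(B, X);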
Next Steps Depending on your interests pick one of the following lessons: Tour of MFEM Examples Meshing and Visualization Further Steps Back to the MFEM tutorial page MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Solvers"}, {"location": "tutorial/solvers/#solvers-and-scalability", "text": "45 minutes intermediate", "title": "  Solvers and Scalability"}, {"location": "tutorial/solvers/#scalable-algebraic-multigrid-preconditioners-from-hypre", "text": "MFEM comes with a large number of example codes that demonstrate different physical applications, finite element discretizations, and linear solvers: Example 1 solves a Poisson problem, Example 2 solves a linear elasticity problem, Example 3 solves a definite Maxwell (electromagnetics) problem, and Example 4 solves grad-div diffusion problem. The parallel versions of these examples ( ex1p , ex2p , ex3p , and ex4p ) each use suitable algebraic multigrid (AMG) preconditioners from the hypre solvers library. We describe sample runs with each of these examples in more details below.", "title": "  Scalable algebraic multigrid preconditioners from hypre"}, {"location": "tutorial/solvers/#example-1-poisson-problem-and-amg", "text": "First, make sure you are in the examples subdirectory: cd ~/mfem/examples Build the parallel version of Example 1: make ex1p Run the parallel version of Example 1, solving a Poisson problem: ./ex1p After forming the linear system, MFEM uses hypre to construct and apply an AMG preconditioner. Details of the AMG preconditioner are provided in the example output under the headers BoomerAMG SETUP PARAMETERS and BoomerAMG SOLVER PARAMETERS . Click here to view the terminal output A key feature of AMG methods is their scalability: with default options, convergence is achieved in only 18 conjugate gradient iterations. Let's see what happens if we increase the mesh refinement. Edit ex1p.cpp changing line 153 as follows: @@ -150,7 +150,7 @@ int main(int argc, char *argv[]) ParMesh pmesh(MPI_COMM_WORLD, mesh); mesh.Clear(); { - int par_ref_levels = 2; + int par_ref_levels = 3; for (int l = 0; l < par_ref_levels; l++) { pmesh.UniformRefinement(); This adds one additional level of refinement, making the problem roughly 4 times as large in 2D, or 8 times as large in 3D. Rebuild the example ( make ex1p ) and re-run it: ./ex1p Although the number of unknowns for this problem has increased by roughly 4x, the iteration count remains at 18 due to the scalability of the AMG preconditioner. Let's now try a 3D problem. For that, we just need to choose a 3D mesh using the -m or --mesh command line argument. Because these problems are more computationally expensive, let's first reduce the refinement level, setting int par_ref_levels = 1; in the ex1p.cpp source code. Rebuild the example ( make ex1p ) and re-run it using the three-dimensional Fichera mesh: ./ex1p -m ../data/fichera.mesh . Convergence is attained in only 16 iterations. Finally, let's take a look at the parallel scalability of the solvers: Increase the refinement level: int par_ref_levels = 2; Recompile: make ex1p Now run the 3D example on 8 cores: mpirun -np 8 ./ex1p -m ../data/fichera.mesh This is an example of a weak scaling test : the problem size and the number of processors are both increased by a factor of 8. 
Because the PCG iteration counts remain roughly constant, the total time to solution should remain roughly fixed (minus some overhead and communication cost), even though we are solving a problem that is 8 times larger.", "title": "  Example 1: Poisson problem and AMG"}, {"location": "tutorial/solvers/#example-2-linear-elasticity", "text": "This example demonstrates solving a linear elasticity cantilever beam problem with different materials. This example is designed to work with any of the \"beam\" meshes provided by MFEM. Run ls ../data | grep beam to list the available 2D and 3D meshes: beam-hex-nurbs.mesh , beam-hex.mesh , beam-hex.vtk , beam-quad-amr.mesh , beam-quad-nurbs.mesh , beam-quad.mesh , beam-quad.vtk , beam-tet.mesh , beam-tet.vtk , beam-tri.mesh , beam-tri.vtk , beam-wedge.mesh , and beam-wedge.vtk . The elements and boundaries of these meshes are assigned attributes/materials suitable for the cantilever problem: +----------+----------+ boundary --->| material | material |<--- boundary attribute 1 | 1 | 2 | attribute 2 (fixed) +----------+----------+ (pull down) Build the example with make ex2p . Try running ./ex2p in the terminal to run a 2D elasticity problem. As in Example 1, the linear system is solved using AMG. For this example, two types of AMG solvers can be used: A special version of AMG designed specifically for elasticity ( see this paper ). AMG for systems. To enable the special elasticity AMG, add the flag -elast to the command line, otherwise, AMG for systems will be used. For example: ./ex2p -elast . The polynomial degree (order) can be changed with the --order command line argument ( -o for short). For example: ./ex2p -o 2 . By default, low-order $(p=1)$ elements are used.", "title": "  Example 2: Linear Elasticity"}, {"location": "tutorial/solvers/#examples-3-and-4-the-de-rham-complex", "text": "The next two examples demonstrate the use of vector finite element spaces . Example 3 solves an electromagnetics problem using $H(\\mathrm{curl})$ finite elements. Example 4 solves a grad-div problem using $H(\\mathrm{div})$ finite elements. Standard multigrid methods don't always work well for these problems, so we need specialized solvers! (See here for a paper on this topic.) For $H(\\mathrm{curl})$ problems, we use the AMS solver from hypre. For $H(\\mathrm{div})$ problems, we either use the ADS solver from hypre or a special hybridization solver . A recent saddle-point $H(\\mathrm{div})$ solver is also available in the miniapps/hdiv-linear-solver directory . See this paper for more details. Try experimenting with different options to get a feel for the performance of the discretizations and solvers: Change the mesh (2D or 3D) using the --mesh ( -m ) command line argument. For example: mpirun -np 16 ex3p -m ../data/beam-hex.mesh . Change the polynomial degree using the --order ( -o ) command line argument. For example: mpirun -np 32 ex4p -m ../data/square-disc-nurbs.mesh -o 3 . Run problems in parallel using mpirun . For ex4p , enable hybridization using the -hb flag. For example: mpirun -np 48 ex4p -m ../data/star-surf.mesh -o 3 -hb .", "title": "  Examples 3 and 4: the de Rham Complex"}, {"location": "tutorial/solvers/#mfems-native-multigrid-solver", "text": "The previous examples ( ex1p , ex2p , ex3p , and ex4p ) all used algebraic multigrid methods. MFEM also supports geometric ($h$- and $p$-multigrid) methods. These solvers are illustrated in Example 26 (and its parallel variant); see the ex26.cpp and ex26p.cpp source files. 
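(Back to the H(curl) case mentioned above: wiring up AMS looks much like the AMG case; a sketch in the spirit of ex3p.cpp, assuming A is the assembled HypreParMatrix and fespace is the parallel Nedelec space:)
// Sketch: AMS-preconditioned PCG for the definite Maxwell problem.
HypreAMS ams(A, &fespace);
HyprePCG pcg(A);
pcg.SetTol(1e-12);
pcg.SetMaxIter(500);
pcg.SetPrintLevel(2);
pcg.SetPreconditioner(ams);
pcg.Mult(B, X);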
Mesh refinement can be set using the --geometric-refinements ( -gr ) command line argument. The finite element order can be controlled using the --order-refinements ( -or ) command line argument.", "title": "  MFEM's native Multigrid solver"}, {"location": "tutorial/solvers/#low-order-refined-methods", "text": "Examples 1, 2, 3, and 4 used algebraic methods applied to the discretization matrix for each of the problems. Example 26 showed how to use geometric multigrid together with matrix-free methods. Low-order-refined (LOR) is an alternative matrix-free methodology for solving these problems. The LOR solvers miniapp provides matrix-free solvers for the same problems solved in Examples 1, 3, and 4. Go to the LOR solvers miniapp directory: cd ~/mfem/miniapps/solvers Run make plor_solvers to build the parallel LOR solvers miniapp. The --fe-type (or -fe ) command line argument can be used to choose the problem type. -fe h solves an $H^1$ problem (Poisson, equivalent to ex1 ). -fe n solves a Nedelec problem (Maxwell in $H(\\mathrm{curl})$, equivalent to ex3 ). -fe r solves a Raviart-Thomas problem (grad-div in $H(\\mathrm{div})$, equivalent to ex4 ). As usual, the --mesh ( -m ) argument can be used to choose the mesh file. (Keep in mind that MFEM's meshes in the data directory are now found in ../../data relative to the miniapp directory.) The number of mesh refinements in serial and parallel can be controlled with the --refine-serial and --refine-parallel ( -rs and -rp ) command line arguments The polynomial degree can be controlled with the --order ( -o ) argument. Compare the performance of high-order problems with plor_solvers to that of Examples 1, 3, and 4. Here are some sample runs to compare: // 2D, 5th order, 256,800 DOFs mpirun -np 8 ./plor_solvers -fe n -m ../../data/star.mesh -rs 2 -rp 2 -o 5 -no-vis mpirun -np 8 ../../examples/ex3p -m ../../data/star.mesh -o 5 // 3D, 2nd order, 2,378,016 DOFs mpirun -np 24 ./plor_solvers -fe n -m ../../data/fichera.mesh -rs 2 -rp 2 -o 3 -no-vis mpirun -np 24 ../../examples/ex3p -m ../../data/fichera.mesh -o 3 For more details on how LOR solvers work in MFEM, see the High-Order Matrix-Free Solvers talk ( PDF , video ) from the 2021 MFEM community workshop .", "title": "  Low-order-refined methods"}, {"location": "tutorial/solvers/#additional-solver-integrations", "text": "In addition to the hypre AMG solvers and MFEM's built-in solvers illustrated above, MFEM also integrates with a number of third-party solver libraries, including: PETSc \u2014 see the ~/mfem/examples/petsc directory SuperLU \u2014 see the ~/mfem/examples/superlu directory STRUMPACK \u2014 see ~/mfem/examples/ex11p.cpp Ginkgo \u2014 see the ~/mfem/examples/ginkgo directory AmgX \u2014 see the ~/mfem/examples/amgx directory Most third-party libraries are not pre-installed in the AWS image, but you can still peruse the example source code to see the capabilities of the various integrations. You can check the containers repository to see which third-party libraries are available for the image you chose. As of December 2023, we pre-install PETSc and SuperLU for the CPU images and AmgX for the CUDA images.", "title": "  Additional solver integrations"}, {"location": "tutorial/start/", "text": "Getting Started 15 minutes basic Lesson Objectives Setup a browser-based MFEM development environment. Run a simple MFEM code to test the environment. Note You need an IP address to follow the steps described below. 
If you are part of the HPC software tutorial series , you should have received an email with the AWS instance IP address allocated to you. Use that in place of IP in the instructions below. If you are running a Docker container locally, as described in the Local Docker Container page, use localhost in place of IP in the instructions below. If you setup your own cloud instance with the Docker container, you should use the cloud instance IP address. Warning If you use VPN, make sure to turn it off before following the instructions below. Set up VS Code Open a new browser window and load http://IP:3000 . You should see the Visual Studio Code (VS Code) interface. Click on Mark Done to continue. Click on open a folder (under Recent ), then select mfem , then click OK . In the left pane, open examples and select ex1.cpp . Open a new terminal by clicking on in the upper left corner, then Terminal , and then New Terminal . Alternatively you can open a new terminal by pressing Ctrl + Shift + ` . You should now see the MFEM source tree and a terminal in the ~/mfem directory. Note The browser window contains a fully functioning copy of Visual Studio Code. You can customize it further, and adjust it similarly to the desktop version. Set up GLVis In this tutorial we use GLVis for finite element visualization based on MFEM. Open a new browser window and load http://IP:8000/live . When you move the mouse to the top of the window you should see the GLVis interface: Click on the Connect to socket icon in the upper left corner, then click CONNECT . Note The Host field in the Connect to socket dialog should match your IP . When the button switches to DISCONNECT , click outside of the Connect to socket dialog to close it. Your environment should now look like: Simple test To test your environment, run ex1 , which together with the MFEM library itself, comes pre-build in the AWS image. In the VS Code terminal, type cd examples ./ex1 You should see 111 iterations printed in the terminal and the image in the GLVis window should change: To test the visualization, click in the GLVis window, and make sure you can rotate the plot with the Left mouse button and zoom in/out with the Right mouse button. Questions? Ask for help in the tutorial Slack channel . Next Steps Go to the Finite Element Basics page. Back to the MFEM tutorial page MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Start"}, {"location": "tutorial/start/#getting-started", "text": "15 minutes basic", "title": "  Getting Started"}, {"location": "tutorial/start/#set-up-vs-code", "text": "Open a new browser window and load http://IP:3000 . You should see the Visual Studio Code (VS Code) interface. Click on Mark Done to continue. Click on open a folder (under Recent ), then select mfem , then click OK . In the left pane, open examples and select ex1.cpp . Open a new terminal by clicking on in the upper left corner, then Terminal , and then New Terminal . Alternatively you can open a new terminal by pressing Ctrl + Shift + ` . You should now see the MFEM source tree and a terminal in the ~/mfem directory.", "title": "  Set up VS Code"}, {"location": "tutorial/start/#set-up-glvis", "text": "In this tutorial we use GLVis for finite element visualization based on MFEM. Open a new browser window and load http://IP:8000/live . 
When you move the mouse to the top of the window you should see the GLVis interface: Click on the Connect to socket icon in the upper left corner, then click CONNECT .", "title": "  Set up GLVis"}, {"location": "tutorial/start/#simple-test", "text": "To test your environment, run ex1 , which together with the MFEM library itself, comes pre-build in the AWS image. In the VS Code terminal, type cd examples ./ex1 You should see 111 iterations printed in the terminal and the image in the GLVis window should change: To test the visualization, click in the GLVis window, and make sure you can rotate the plot with the Left mouse button and zoom in/out with the Right mouse button.", "title": "  Simple test"}]} \ No newline at end of file +{"config": {"lang": ["en"], "prebuild_index": false, "separator": "[\\s\\-]+"}, "docs": [{"location": "", "text": "2024 Visualization Contest Winner Mathias Schmidt 2024 Visualization Contest Winner Jan Nikl Electromagnetic wave propagation in the NSTX-U tokamak High-order multi-material hydrodynamics in the BLAST code Topology optimization of a drone body using LLNL's LiDO code , based on MFEM Non-conforming adaptive mesh refinement with parallel load-balancing Previous Next MFEM is a free , lightweight , scalable C++ library for finite element methods. Features Arbitrary high-order finite element meshes and spaces . Wide variety of finite element discretization approaches. Conforming and nonconforming adaptive mesh refinement . Scalable from laptops to GPU-accelerated supercomputers. ... and many more . MFEM is used in many projects, including BLAST , Cardioid , Palace , VisIt , RF-SciDAC , FASTMath , xSDK , and CEED in the Exascale Computing Project . We host an annual workshop and FEM@LLNL seminar series series. See also our Gallery , Publications , Videos and News pages. News Date Message Nov 25, 2024 Recap of the 2024 MFEM Community Workshop . Oct 28, 2024 Postdoc position on the MFEM team. Apply May 7, 2024 Version 4.7 released . May 2, 2024 New MFEM paper in IJHPCA. Feb 22, 2023 AWS releases Palace based on MFEM. Latest Release New features \u250a Examples \u250a Code documentation \u250a Sources Download mfem-4.7.tgz Older releases \u250a Python wrapper \u250a Documentation Building MFEM \u250a Getting Started \u250a Finite Elements \u250a Performance New users should start by examining the example codes . We also recommend using GLVis for visualization. Contact Use the GitHub issue tracker to report bugs or post questions or comments . See the About page for citation information.", "title": "Home"}, {"location": "#features", "text": "Arbitrary high-order finite element meshes and spaces . Wide variety of finite element discretization approaches. Conforming and nonconforming adaptive mesh refinement . Scalable from laptops to GPU-accelerated supercomputers. ... and many more . MFEM is used in many projects, including BLAST , Cardioid , Palace , VisIt , RF-SciDAC , FASTMath , xSDK , and CEED in the Exascale Computing Project . We host an annual workshop and FEM@LLNL seminar series series. See also our Gallery , Publications , Videos and News pages.", "title": "Features"}, {"location": "#news", "text": "Date Message Nov 25, 2024 Recap of the 2024 MFEM Community Workshop . Oct 28, 2024 Postdoc position on the MFEM team. Apply May 7, 2024 Version 4.7 released . May 2, 2024 New MFEM paper in IJHPCA. 
Feb 22, 2023 AWS releases Palace based on MFEM.", "title": "News"}, {"location": "#latest-release", "text": "New features \u250a Examples \u250a Code documentation \u250a Sources Download mfem-4.7.tgz Older releases \u250a Python wrapper \u250a", "title": "Latest Release"}, {"location": "#documentation", "text": "Building MFEM \u250a Getting Started \u250a Finite Elements \u250a Performance New users should start by examining the example codes . We also recommend using GLVis for visualization.", "title": "Documentation"}, {"location": "#contact", "text": "Use the GitHub issue tracker to report bugs or post questions or comments . See the About page for citation information.", "title": "Contact"}, {"location": "about/", "text": "About MFEM MFEM originates from previous research effort in the (unreleased) AggieFEM/aFEM project. Please cite with: @article{mfem, title = {{MFEM}: A Modular Finite Element Methods Library}, author = {R. Anderson and J. Andrej and A. Barker and J. Bramwell and J.-S. Camier and J. Cerveny and V. Dobrev and Y. Dudouit and A. Fisher and Tz. Kolev and W. Pazner and M. Stowell and V. Tomov and I. Akkerman and J. Dahm and D. Medina and S. Zampini}, journal = {Computers \\& Mathematics with Applications}, doi = {10.1016/j.camwa.2020.06.009}, volume = {81}, pages = {42-74}, year = {2021} } @misc{mfem-web, key = {mfem}, title = {{MFEM}: Modular Finite Element Methods {[Software]}}, howpublished = {\\url{mfem.org}}, doi = {10.11578/dc.20171025.1248} } Contributors Ido Akkerman Robert Anderson Thomas Anderson Julian Andrej Mikhail Artemyev Nabil Atallah Tucker Babcock Jan-Phillip B\u00e4cker Cody Balos Andrew Barker Natalie Beams Thomas Benson Adrien Bernede Aaron Black Jamie Bramwell Thomas Brunner Jean-Sylvain Camier Hugh Carson Robert Carson Eric Chin Lenka \u010cerven\u00e1 Jakub \u010cerven\u00fd Dylan Copeland Johann Dahm William Dawn Victor DeCaria Veselin Dobrev Daniel Drzisga Yohann Dudouit Tobias Duswald Truman Ellis Josh Essman Aaron Fisher David Gardner Pieter Ghysels Andrew Gillette Sebastian Grimberg Hennes Hajduk Cyrus Harrison Stefan Henneking Milan Holec Delyan Kalchev Kazem Kamran Brendan Keith Dohyun Kim Patrick Knupp Tzanio Kolev \u2014 Project Leader Chris Laganella Ilya Lashuk Boyan Lazarov Chak Shing Lee Jacob Lotz Scott MacLachlan Peter Maginot Victor Magri David Medina Mark Miller Ketan Mittal William Moses Jan Nikl Dennis Ogiermann Geoffrey Oxberry Will Pazner Cosmin Petra Socratis Petrides Robert Rieben Amit Rotem Michael Schneier Joachim Sch\u00f6berl Jean Sexton Syun'ichi Shiraiwa Morteza Siboni Joseph Signorelli Cameron Smith Vanessa Sochat Gabriel Pinochet-Soto Ben Southworth Mike Stees Thomas Stitt Mark Stowell Jeremy Thompson Stanimire Tomov Vladimir Tomov Jean-\u00c9tienne Tremblay Arturo Vargas Umberto Villa Chris Vogl Seth Watts Kenneth Weiss Daniel White Brad Whitlock Christian Woltering Jonathan Wong Max Yang George Zagaris Stefano Zampini Patrick Zulian License BSD This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore Laboratory under Contract DE-AC52-07NA27344. Software release number: LLNL-CODE-806117. DOI: 10.11578/dc.20171025.1248 . Website built with MkDocs , Bootstrap and Bootswatch . Hosted on GitHub .", "title": "About"}, {"location": "about/#about-mfem", "text": "MFEM originates from previous research effort in the (unreleased) AggieFEM/aFEM project. Please cite with: @article{mfem, title = {{MFEM}: A Modular Finite Element Methods Library}, author = {R. Anderson and J. Andrej and A. 
Barker and J. Bramwell and J.-S. Camier and J. Cerveny and V. Dobrev and Y. Dudouit and A. Fisher and Tz. Kolev and W. Pazner and M. Stowell and V. Tomov and I. Akkerman and J. Dahm and D. Medina and S. Zampini}, journal = {Computers \\& Mathematics with Applications}, doi = {10.1016/j.camwa.2020.06.009}, volume = {81}, pages = {42-74}, year = {2021} } @misc{mfem-web, key = {mfem}, title = {{MFEM}: Modular Finite Element Methods {[Software]}}, howpublished = {\\url{mfem.org}}, doi = {10.11578/dc.20171025.1248} }", "title": "About MFEM"}, {"location": "about/#contributors", "text": "Ido Akkerman Robert Anderson Thomas Anderson Julian Andrej Mikhail Artemyev Nabil Atallah Tucker Babcock Jan-Phillip B\u00e4cker Cody Balos Andrew Barker Natalie Beams Thomas Benson Adrien Bernede Aaron Black Jamie Bramwell Thomas Brunner Jean-Sylvain Camier Hugh Carson Robert Carson Eric Chin Lenka \u010cerven\u00e1 Jakub \u010cerven\u00fd Dylan Copeland Johann Dahm William Dawn Victor DeCaria Veselin Dobrev Daniel Drzisga Yohann Dudouit Tobias Duswald Truman Ellis Josh Essman Aaron Fisher David Gardner Pieter Ghysels Andrew Gillette Sebastian Grimberg Hennes Hajduk Cyrus Harrison Stefan Henneking Milan Holec Delyan Kalchev Kazem Kamran Brendan Keith Dohyun Kim Patrick Knupp Tzanio Kolev \u2014 Project Leader Chris Laganella Ilya Lashuk Boyan Lazarov Chak Shing Lee Jacob Lotz Scott MacLachlan Peter Maginot Victor Magri David Medina Mark Miller Ketan Mittal William Moses Jan Nikl Dennis Ogiermann Geoffrey Oxberry Will Pazner Cosmin Petra Socratis Petrides Robert Rieben Amit Rotem Michael Schneier Joachim Sch\u00f6berl Jean Sexton Syun'ichi Shiraiwa Morteza Siboni Joseph Signorelli Cameron Smith Vanessa Sochat Gabriel Pinochet-Soto Ben Southworth Mike Stees Thomas Stitt Mark Stowell Jeremy Thompson Stanimire Tomov Vladimir Tomov Jean-\u00c9tienne Tremblay Arturo Vargas Umberto Villa Chris Vogl Seth Watts Kenneth Weiss Daniel White Brad Whitlock Christian Woltering Jonathan Wong Max Yang George Zagaris Stefano Zampini Patrick Zulian", "title": "Contributors"}, {"location": "about/#license", "text": "BSD This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore Laboratory under Contract DE-AC52-07NA27344. Software release number: LLNL-CODE-806117. DOI: 10.11578/dc.20171025.1248 . Website built with MkDocs , Bootstrap and Bootswatch . Hosted on GitHub .", "title": "License"}, {"location": "autodiff/", "text": "Automatic Differentiation Mini Applications The code in the miniapps/autodiff subdirectory of MFEM provides methods for automatic differentiation (AD) of arbitrary functions implemented in C++, either as lambda functions or functors. AD consists of a set of techniques to evaluate the derivative of a function implemented as a computer program. AD does not provide a symbolic form of the derivatives, and AD is not a numerical approximation technique. Instead, the derivatives obtained by AD are exact and exploit the fact that every function implemented on a computer can be represented by a sequence of arithmetic operations and basic functions, i.e., addition, multiplication, sin , cos , log , etc. The derivatives in AD with respect to the input arguments are obtained by applying the chain rule on the recorded sequence of operations. For more theoretical details, the users are referred to 1 . AD can be implemented on a compiler level by source code transformations or by using some of the features of modern object-oriented languages like operator overloading and templating. 
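To make the operator-overloading approach concrete, the following self-contained sketch shows forward-mode AD with a small dual-number type. This is an illustration only and is not MFEM's implementation in fdual.hpp ; the Dual type and the overloads below are invented for this example.

```cpp
// Minimal illustration of forward-mode AD via operator overloading.
// NOT MFEM's fdual.hpp implementation -- all names here are invented
// for illustration only.
#include <cmath>
#include <iostream>

struct Dual
{
   double val;  // primal part x
   double dot;  // dual part x', carries the derivative
};

Dual operator+(Dual a, Dual b) { return {a.val + b.val, a.dot + b.dot}; }
Dual operator*(Dual a, Dual b)
{
   // product rule: (x*y)' = x'*y + x*y'
   return {a.val * b.val, a.dot * b.val + a.val * b.dot};
}
Dual sin(Dual a) { return {std::sin(a.val), std::cos(a.val) * a.dot}; }

int main()
{
   // Differentiate f(x) = x * sin(x) at x = 1 by seeding the dual part with 1.
   Dual x{1.0, 1.0};
   Dual y = x * sin(x);
   std::cout << "f(1)  = " << y.val << "\n";  // sin(1)
   std::cout << "f'(1) = " << y.dot << "\n";  // sin(1) + 1*cos(1)
   return 0;
}
```

The overloads above correspond one-to-one to the dual-number rules listed in the Dual numbers section below.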
Even though several AD implementations on a compiler level exist, they are often utilized for simple functions written in languages like Fortran and C, and developments for general C++ applications are still in their infancy. The MFEM implementation relies on native and external C++ libraries like CoDiPack 2 . The users can choose the AD engine during the configuration phase. The choice does not affect how AD is used in the code; it only impacts performance and memory usage. Two distinct modes, forward and reverse, can be easily identified in software implementations of automatic differentiation. For a function $f(\\mathbf{x}):\\mathbb{R}^n \\rightarrow\\mathbb{R}^m$, forward mode implementations evaluate \\begin{align} \\dot{\\mathbf{y}}&=f'\\left(\\mathbf{x}\\right)\\dot{\\mathbf{x}}, \\quad \\dot{\\mathbf{x}}\\in\\mathbb{R}^{n\\times 1}, \\dot{\\mathbf{y}}\\in\\mathbb{R}^{m\\times1}, \\end{align} where the vector $\\dot{\\mathbf{x}}$ is specified by the user. Therefore, to extract the Jacobian $f'\\left(\\mathbf{x}\\right)$ one has to call the AD procedure $n$ times with $n$ different vectors $\\dot{\\mathbf{x}}$, where the entries of the $j$-th vector, $j=1,\\ldots, n$, are defined as $\\dot{\\mathbf{x}}_i=\\delta_{i,j}$. The Jacobian is extracted column by column. For a function $f(\\mathbf{x}):\\mathbb{R}^n \\rightarrow\\mathbb{R}^m$, reverse mode evaluates \\begin{align} \\bar{\\mathbf{x}}^{\\sf{T}}&=\\bar{\\mathbf{y}}^{\\sf{T}} f'\\left(\\mathbf{x}\\right), &\\bar{\\mathbf{x}}\\in\\mathbb{R}^{n\\times 1}, \\bar{\\mathbf{y}}\\in\\mathbb{R}^{m\\times1}, \\end{align} where $\\bar{\\mathbf{y}}$ is a vector specified by the user. In contrast to forward mode, the Jacobian, in this case, can be extracted row by row. Thus, for a vector function with fewer arguments than outputs, forward mode is preferable; for a vector function with more arguments than outputs, reverse mode is preferable. Note that reverse mode introduces additional overhead for storing the computational graph in memory, which can easily exhaust the available memory. Interested users are referred to 3 for detailed comparisons. In MFEM, users can choose between a native forward-mode implementation and a forward- and reverse-mode implementation based on CoDiPack 2 . The native implementation is based on the so-called dual numbers briefly described below. Dual numbers In forward mode, the derivative information propagates from the input arguments to the output results. In MFEM, this is achieved with the help of the so-called dual number arithmetic. The native low-level implementation can be found in the header file fdual.hpp . The file implements a large number of basic functions, and, if necessary, additional basic and more complex functions can easily be added by following the examples. A dual number $x+\\varepsilon x'$ consists of a primal/real part and a dual part carrying the derivative information. Every real number can be represented as $x+\\varepsilon 0 $. The arithmetic is defined with the help of the dummy symbol $\\varepsilon$ by specifying that $\\varepsilon^2=0$. Based on the above, the following set of rules can be easily derived.
$\\left(x+\\varepsilon x'\\right)+\\left(y+\\varepsilon y'\\right)=\\left(x+y\\right)+\\varepsilon\\left(x'+y'\\right)$ $\\left(x+\\varepsilon x'\\right)*\\left(y+\\varepsilon y'\\right)=xy+\\varepsilon\\left(yx'+xy'\\right)$ $f\\left(x+\\varepsilon x'\\right)=f\\left(x\\right)+\\varepsilon f'\\left(x\\right)x'$ $f\\left(g \\left(x+\\varepsilon x'\\right) \\right)= f\\left(g \\left(x\\right)+\\varepsilon g'\\left(x\\right) x'\\right) = f\\left(g \\left(x \\right)\\right)+\\varepsilon f'\\left(g \\left(x \\right)\\right) g'\\left(x\\right) x'$ Example of AD differentiated function The following vector function, defined as lambda expression, has two parameters kappa and load . The input of the function input_vector is a vector $\\left[\\partial u/\\partial x, \\partial u/\\partial y,\\partial u/\\partial z,u \\right]^{\\sf{T}}$ with 4 components (the last one is not used in the output of the function), and the result is a vector $\\left[\\kappa \\partial u/\\partial x, \\kappa \\partial u/\\partial y, \\kappa \\partial u/\\partial z, -f \\right]$ output_vector of size 4. //using lambda expression auto func = [](mfem::Vector& vparam, mfem::ad::ADVectorType& input_vector, mfem::ad::ADVectorType& output_vector) { auto kappa = vparam[0]; //diffusion coefficient auto load = vparam[1]; //volumetric influx output_vector[0] = kappa * input_vector[0]; output_vector[1] = kappa * input_vector[1]; output_vector[2] = kappa * input_vector[2]; output_vector[3] = -load; }; The gradient of output_vector will be a matrix of size 4x4 and is computed with the help of the following object: constexpr int output_length = 4; constexpr int input_length = 4; constexpr int parameter_length = 2; mfem::VectorFuncAutoDiff function_derivative(func); The first parameter in the above template specifies the length of the result, the second parameter the length of the input vector input_vector , and the third template parameter specifies the length of vparam . Once function_derivative is defined, the following statement computes the gradients: function_derivative.Jacobian(param,state, grad_mat); The input consists of parameters and a state vector, and the output is 4x4 grad_mat matrix. The parameter vector consists of the coefficients $\\kappa$ and $f$ (referred to as load in the code). Example of AD differentiated function using functors The following vector function, defined as a functor, has zero parameters. The input of the function input_vector is a vector with 6 components, and the result is a vector output_vector of size 3. template class ExampleResidual { public: void operator ()(ParamVector& vparam, StateVector& input_vector, StateVector& output_vector) { output_vector[0]=sin(input_vector[0]+input_vector[1]+input_vector[2]); output_vector[1]=cos(input_vector[1]+input_vector[2]+input_vector[3]); output_vector[2]=tan(input_vector[2]+input_vector[3]+input_vector[4]+input_vector[5]); } }; The gradient of output_vector will be a matrix of size 3x6 and is computed with the help of the following object: constexpr int output_length = 3; constexpr int input_length = 6; constexpr int parameter_length = 0; mfem::VectorFuncAutoDiff erdf; The Jacobian for a vector input_vector is calculated using the following lines: mfem::DenseMatrix jac(3,6); mfem::Vector param; //dummy vector - we do not have parameters mfem::Vector input_vector(6); input_vector=1.0; // all values are set to one erdf.Jacobian(param,input_vector,jac); The elements of the state vector input_vector are set to one. 
In a real application they should be set to the actual arguments of the function. The Jacobian is returned in the matrix jac(3,6) . The template parameters output_length , input_length , and parameter_length should match the vector function signature. It is important to mention that the current AD interface is intended to be used at the integration point level. Thus, all vectors and matrices used as arguments in the functors and the lambda expressions should be serial objects. The examples provided in the mini-app directory, which solve a $p$-Laplacian problem, further exemplify the intended use of the current implementation. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}}); Griewank, A. & Walther, A. Evaluating derivatives: principles and techniques of algorithmic differentiation SIAM, 2008 \u21a9 Sagebaum, M.; Albring, T. & Gauger, N. R. High-Performance Derivative Computations Using CoDiPack ACM Trans. Math. Softw., Association for Computing Machinery, 2019, 45 \u21a9 \u21a9 N\u00f8rgaard, S. A.; Sagebaum, M.; Gauger, N. R. & Lazarov, B. Applications of automatic differentiation in topology optimization Structural and Multidisciplinary Optimization, 2017, 56, 1135-1146 \u21a9", "title": "Automatic Differentiation"}, {"location": "autodiff/#automatic-differentiation-mini-applications", "text": "The code in the miniapps/autodiff subdirectory of MFEM provides methods for automatic differentiation (AD) of arbitrary functions implemented in C++, either as lambda functions or functors. AD consists of a set of techniques to evaluate the derivative of a function implemented as a computer program. AD does not provide a symbolic form of the derivatives, and AD is not a numerical approximation technique. Instead, the derivatives obtained by AD are exact and exploit the fact that every function implemented on a computer can be represented by a sequence of arithmetic operations and basic functions, i.e., addition, multiplication, sin , cos , log , etc. The derivatives in AD with respect to the input arguments are obtained by applying the chain rule on the recorded sequence of operations. For more theoretical details, the users are referred to 1 . AD can be implemented on a compiler level by source code transformations or by using some of the features of modern object-oriented languages like operator overloading and templating. Even though several AD implementations on a compiler level exist, they are often utilized for simple functions written in languages like Fortran and C, and developments for general C++ applications are still in their infancy. The MFEM implementation relies on native and external C++ libraries like CoDiPack 2 . The users can choose the AD engine during the configuration phase. The choice does not affect how AD is used in the code; it only impacts performance and memory usage. Two distinct modes, forward and reverse, can be easily identified in software implementations of automatic differentiation. For a function $f(\\mathbf{x}):\\mathbb{R}^n \\rightarrow\\mathbb{R}^m$, forward mode implementations evaluate \\begin{align} \\dot{\\mathbf{y}}&=f'\\left(\\mathbf{x}\\right)\\dot{\\mathbf{x}}, \\quad \\dot{\\mathbf{x}}\\in\\mathbb{R}^{n\\times 1}, \\dot{\\mathbf{y}}\\in\\mathbb{R}^{m\\times1}, \\end{align} where the vector $\\dot{\\mathbf{x}}$ is specified by the user.
Therefore, to extract the Jacobian $f'\\left(\\mathbf{x}\\right)$ one has to call the AD procedure $n$ times with $n$ different vectors $\\dot{\\mathbf{x}}$, where the entries of the $j$-th vector, $j=1,\\ldots, n$, are defined as $\\dot{\\mathbf{x}}_i=\\delta_{i,j}$. The Jacobian is extracted column by column. For a function $f(\\mathbf{x}):\\mathbb{R}^n \\rightarrow\\mathbb{R}^m$, reverse mode evaluates \\begin{align} \\bar{\\mathbf{x}}^{\\sf{T}}&=\\bar{\\mathbf{y}}^{\\sf{T}} f'\\left(\\mathbf{x}\\right), &\\bar{\\mathbf{x}}\\in\\mathbb{R}^{n\\times 1}, \\bar{\\mathbf{y}}\\in\\mathbb{R}^{m\\times1}, \\end{align} where $\\bar{\\mathbf{y}}$ is a vector specified by the user. In contrast to forward mode, the Jacobian, in this case, can be extracted row by row. Thus, for a vector function with fewer arguments than outputs, forward mode is preferable; for a vector function with more arguments than outputs, reverse mode is preferable. Note that reverse mode introduces additional overhead for storing the computational graph in memory, which can easily exhaust the available memory. Interested users are referred to 3 for detailed comparisons. In MFEM, users can choose between a native forward-mode implementation and a forward- and reverse-mode implementation based on CoDiPack 2 . The native implementation is based on the so-called dual numbers briefly described below.", "title": "Automatic Differentiation Mini Applications"}, {"location": "autodiff/#dual-numbers", "text": "In forward mode, the derivative information propagates from the input arguments to the output results. In MFEM, this is achieved with the help of the so-called dual number arithmetic. The native low-level implementation can be found in the header file fdual.hpp . The file implements a large number of basic functions, and, if necessary, additional basic and more complex functions can easily be added by following the examples. A dual number $x+\\varepsilon x'$ consists of a primal/real part and a dual part carrying the derivative information. Every real number can be represented as $x+\\varepsilon 0 $. The arithmetic is defined with the help of the dummy symbol $\\varepsilon$ by specifying that $\\varepsilon^2=0$. Based on the above, the following set of rules can be easily derived. $\\left(x+\\varepsilon x'\\right)+\\left(y+\\varepsilon y'\\right)=\\left(x+y\\right)+\\varepsilon\\left(x'+y'\\right)$ $\\left(x+\\varepsilon x'\\right)*\\left(y+\\varepsilon y'\\right)=xy+\\varepsilon\\left(yx'+xy'\\right)$ $f\\left(x+\\varepsilon x'\\right)=f\\left(x\\right)+\\varepsilon f'\\left(x\\right)x'$ $f\\left(g \\left(x+\\varepsilon x'\\right) \\right)= f\\left(g \\left(x\\right)+\\varepsilon g'\\left(x\\right) x'\\right) = f\\left(g \\left(x \\right)\\right)+\\varepsilon f'\\left(g \\left(x \\right)\\right) g'\\left(x\\right) x'$", "title": "Dual numbers"}, {"location": "autodiff/#example-of-ad-differentiated-function", "text": "The following vector function, defined as lambda expression, has two parameters kappa and load .
The input of the function input_vector is a vector $\\left[\\partial u/\\partial x, \\partial u/\\partial y,\\partial u/\\partial z,u \\right]^{\\sf{T}}$ with 4 components (the last one is not used in the output of the function), and the result is a vector $\\left[\\kappa \\partial u/\\partial x, \\kappa \\partial u/\\partial y, \\kappa \\partial u/\\partial z, -f \\right]$ output_vector of size 4. //using lambda expression auto func = [](mfem::Vector& vparam, mfem::ad::ADVectorType& input_vector, mfem::ad::ADVectorType& output_vector) { auto kappa = vparam[0]; //diffusion coefficient auto load = vparam[1]; //volumetric influx output_vector[0] = kappa * input_vector[0]; output_vector[1] = kappa * input_vector[1]; output_vector[2] = kappa * input_vector[2]; output_vector[3] = -load; }; The gradient of output_vector will be a matrix of size 4x4 and is computed with the help of the following object: constexpr int output_length = 4; constexpr int input_length = 4; constexpr int parameter_length = 2; mfem::VectorFuncAutoDiff function_derivative(func); The first parameter in the above template specifies the length of the result, the second parameter the length of the input vector input_vector , and the third template parameter specifies the length of vparam . Once function_derivative is defined, the following statement computes the gradients: function_derivative.Jacobian(param,state, grad_mat); The input consists of parameters and a state vector, and the output is 4x4 grad_mat matrix. The parameter vector consists of the coefficients $\\kappa$ and $f$ (referred to as load in the code).", "title": "Example of AD differentiated function"}, {"location": "autodiff/#example-of-ad-differentiated-function-using-functors", "text": "The following vector function, defined as a functor, has zero parameters. The input of the function input_vector is a vector with 6 components, and the result is a vector output_vector of size 3. template class ExampleResidual { public: void operator ()(ParamVector& vparam, StateVector& input_vector, StateVector& output_vector) { output_vector[0]=sin(input_vector[0]+input_vector[1]+input_vector[2]); output_vector[1]=cos(input_vector[1]+input_vector[2]+input_vector[3]); output_vector[2]=tan(input_vector[2]+input_vector[3]+input_vector[4]+input_vector[5]); } }; The gradient of output_vector will be a matrix of size 3x6 and is computed with the help of the following object: constexpr int output_length = 3; constexpr int input_length = 6; constexpr int parameter_length = 0; mfem::VectorFuncAutoDiff erdf; The Jacobian for a vector input_vector is calculated using the following lines: mfem::DenseMatrix jac(3,6); mfem::Vector param; //dummy vector - we do not have parameters mfem::Vector input_vector(6); input_vector=1.0; // all values are set to one erdf.Jacobian(param,input_vector,jac); The elements of the state vector input_vector are set to one. In a real application they should be set to the actual arguments of the function. The Jacobian is returned in the matrix jac(3,6) . The template parameters output_length , input_length , and parameter_length should match the vector function signature. It is important to mention that the current AD interface is intended to be used at the integration point level. Thus, all vectors and matrices used as arguments in the functors and the lambda expressions should be serial objects. The examples provided in the mini-app directory, which solve a $p$-Laplacian problem, further exemplify the intended use of the current implementation.
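Note that the angle-bracket template arguments in the snippets above (e.g. in the mfem::VectorFuncAutoDiff declarations and in the ExampleResidual template) appear to have been stripped when this page text was extracted; they are present in the rendered documentation and in miniapps/autodiff . As a reading aid only, here is a hedged sketch of how the lambda-based example might be put together, assuming from the description above that the three template parameters are the result length, the input length, and the parameter length, in that order; the values chosen for param and state are arbitrary placeholders. Check the miniapps/autodiff sources for the authoritative declarations.

```cpp
#include "mfem.hpp"
#include <iostream>

int main()
{
   constexpr int output_length = 4;     // size of output_vector
   constexpr int input_length = 4;      // size of input_vector
   constexpr int parameter_length = 2;  // kappa and load

   // Lambda from the example above (unchanged).
   auto func = [](mfem::Vector &vparam,
                  mfem::ad::ADVectorType &input_vector,
                  mfem::ad::ADVectorType &output_vector)
   {
      auto kappa = vparam[0];  // diffusion coefficient
      auto load  = vparam[1];  // volumetric influx
      output_vector[0] = kappa * input_vector[0];
      output_vector[1] = kappa * input_vector[1];
      output_vector[2] = kappa * input_vector[2];
      output_vector[3] = -load;
   };

   // Assumed reconstruction of the stripped template arguments:
   // <result length, input length, parameter length>.
   mfem::VectorFuncAutoDiff<output_length, input_length, parameter_length>
      function_derivative(func);

   mfem::Vector param(parameter_length);   // kappa, load (placeholder values)
   param[0] = 1.0; param[1] = 0.5;
   mfem::Vector state(input_length);       // du/dx, du/dy, du/dz, u
   state = 1.0;
   mfem::DenseMatrix grad_mat(output_length, input_length);

   // grad_mat(i,j) = d output_vector[i] / d input_vector[j]
   function_derivative.Jacobian(param, state, grad_mat);
   grad_mat.Print(std::cout);
   return 0;
}
```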
MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}}); Griewank, A. & Walther, A. Evaluating derivatives: principles and techniques of algorithmic differentiation SIAM, 2008 \u21a9 Sagebaum, M.; Albring, T. & Gauger, N. R. High-Performance Derivative Computations Using CoDiPack ACM Trans. Math. Softw., Association for Computing Machinery, 2019, 45 \u21a9 \u21a9 N\u00f8rgaard, S. A.; Sagebaum, M.; Gauger, N. R. & Lazarov, B. Applications of automatic differentiation in topology optimization Structural and Multidisciplinary Optimization, 2017, 56, 1135-1146 \u21a9", "title": "Example of AD differentiated function using functors"}, {"location": "bilininteg/", "text": "Bilinear Form Integrators $ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} \\newcommand{\\abs}[1]{|#1|} $ Bilinear form integrators are at the heart of any finite element method; they are used to compute the integrals of products of basis functions over individual mesh elements (or sometimes over edges or faces). Typically, each element is contained in the support of several basis functions of both the domain and range spaces; therefore, bilinear integrators simultaneously compute the integrals of all combinations of the relevant basis functions from the domain and range spaces. This produces a two-dimensional array of results that are arranged into a small dense matrix of integral values called a local element (stiffness) matrix . To put this another way, the BilinearForm class builds a global, sparse, finite element matrix, glb_mat , by performing the outer loop in the following pseudocode snippet, whereas the BilinearFormIntegrator class performs the nested inner loops to compute the dense local element matrix, loc_mat . for each elem in elements loc_mat = 0.0 for each pt in quadrature_points for each u_j in elem for each v_i in elem loc_mat(i,j) += w(pt) * u_j(pt) v_i(pt) end end end glb_mat += loc_mat end There are three types of integrals that typically arise, although many other, more exotic, forms are possible: Integrals involving Scalar basis functions: $\\int_\\Omega \\lambda\\, u v$ Integrals involving Vector basis functions: $\\int_\\Omega \\lambda\\, \\vec{u}\\cdot\\vec{v}$ Integrals involving Scalar and Vector basis functions: $\\int_\\Omega u\\,\\vec{\\lambda}\\cdot\\vec{v}$ The BilinearFormIntegrator classes allow MFEM to produce a wide variety of local element matrices without modifying the BilinearForm class. Many of the possible operators are collected below into tables that briefly describe their action and requirements. For more information on integration and developing custom BilinearFormIntegrator classes see Integration . In the tables below, the Space column refers to finite element spaces which implement the following methods: Space Operator Derivative Operator H1 CalcShape CalcDShape ND CalcVShape CalcCurlShape RT CalcVShape CalcDivShape L2 CalcShape None The Coef. column refers to the types of coefficients that are available. A boldface coefficient type is required, whereas most coefficients are optional. Coef.
Type of Function Argument Type S Scalar Valued Function Coefficient V Vector Valued Function VectorCoefficient D Diagonal Matrix Function VectorCoefficient M General Matrix Function MatrixCoefficient Notation: The integrals performed by the various integrators listed below are shown using inner product notation, $(\\cdot,\\cdot)$, defined as follows. $$(\\lambda u, v)\\equiv \\int_\\Omega \\lambda u v$$ $$(\\lambda\\vec{u}, \\vec{v})\\equiv \\int_\\Omega\\lambda\\vec{u}\\cdot\\vec{v}$$ Where $u$ or $\\vec{u}$ is a function in the domain (or trial) space and $v$ or $\\vec{v}$ is in the range (or test) space. For boundary integrators, the integrals are over $\\partial \\Omega$. Face integrators integrate over the interior and boundary faces of mesh elements and are denoted with $\\left<\\cdot,\\cdot\\right>$. Note that any operators involving a derivative of the range function $v$ or $\\vec{v}$ are computed using integration by parts. This leads to a boundary integral which can be used to apply Neumann boundary conditions. Some of these operators are listed along with their boundary terms in section Weak Operators . Scalar Field Operators These operators require scalar-valued trial spaces. Many of these operators will work with either H1 or L2 basis functions but some that require a gradient operator should be used with H1. Square Operators These integrators are designed to be used with the BilinearForm object to assemble square linear operators. Class Name Spaces Coef. Operator Continuous Op. Dimension MassIntegrator H1, L2 S $(\\lambda u, v)$ $\\lambda u$ 1D, 2D, 3D DiffusionIntegrator H1 S, M $(\\lambda\\grad u, \\grad v)$ $-\\div(\\lambda\\grad u)$ 1D, 2D, 3D Mixed Operators These integrators are designed to be used with the MixedBilinearForm object to assemble square or rectangular linear operators. Class Name Domain Range Coef. Operator Continuous Op. 
Dimension MixedScalarMassIntegrator H1, L2 H1, L2 S $(\\lambda u, v)$ $\\lambda u$ 1D, 2D, 3D MixedScalarWeakDivergenceIntegrator H1, L2 H1 V $(-\\vec{\\lambda}u,\\grad v)$ $\\div(\\vec{\\lambda}u)$ 2D, 3D MixedScalarWeakDerivativeIntegrator H1, L2 H1 S $(-\\lambda u, \\ddx{v})$ $\\ddx{}(\\lambda u)\\;$ 1D MixedScalarWeakCurlIntegrator H1, L2 ND S $(\\lambda u,\\curl\\vec{v})$ $\\curl(\\lambda\\,u\\,\\hat{z})\\;$ 2D MixedVectorProductIntegrator H1, L2 ND, RT V $(\\vec{\\lambda}u,\\vec{v})$ $\\vec{\\lambda}u$ 2D, 3D MixedScalarWeakCrossProductIntegrator H1, L2 ND, RT V $(\\vec{\\lambda} u\\,\\hat{z},\\vec{v})$ $\\vec{\\lambda}\\times\\,\\hat{z}\\,u$ 2D MixedScalarWeakGradientIntegrator H1, L2 RT S $(-\\lambda u, \\div\\vec{v})$ $\\grad(\\lambda u)$ 2D, 3D MixedDirectionalDerivativeIntegrator H1 H1, L2 V $(\\vec{\\lambda}\\cdot\\grad u, v)$ $\\vec{\\lambda}\\cdot\\grad u$ 2D, 3D MixedScalarCrossGradIntegrator H1 H1, L2 V $(\\vec{\\lambda}\\cross\\grad u, v)$ $\\vec{\\lambda}\\cross\\grad u$ 2D MixedScalarDerivativeIntegrator H1 H1, L2 S $(\\lambda \\ddx{u}, v)$ $\\lambda\\ddx{u}\\;$ 1D MixedGradGradIntegrator H1 H1 S, D, M $(\\lambda\\grad u,\\grad v)$ $-\\div(\\lambda\\grad u)$ 2D, 3D MixedCrossGradGradIntegrator H1 H1 V $(\\vec{\\lambda}\\cross\\grad u,\\grad v)$ $-\\div(\\vec{\\lambda}\\cross\\grad u)$ 2D, 3D MixedVectorGradientIntegrator H1 ND, RT S, D, M $(\\lambda\\grad u,\\vec{v})$ $\\lambda\\grad u$ 2D, 3D MixedCrossGradIntegrator H1 ND, RT V $(\\vec{\\lambda}\\cross\\grad u,\\vec{v})$ $\\vec{\\lambda}\\cross\\grad u$ 3D MixedCrossGradCurlIntegrator H1 ND V $(\\vec{\\lambda}\\times\\grad u, \\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\times\\grad u)$ 3D MixedGradDivIntegrator H1 RT V $(\\vec{\\lambda}\\cdot\\grad u, \\div\\vec{v})$ $-\\grad(\\vec{\\lambda}\\cdot\\grad u)$ 2D, 3D Other Scalar Operators Class Name Domain Range Coef. Dimension Operator Notes DerivativeIntegrator H1, L2 H1, L2 S 1D, 2D, 3D $(\\lambda\\frac{\\partial u}{\\partial x_i}, v)$ The direction index \"i\" is passed by the user. See MixedDirectionalDerivativeIntegrator for a more general alternative. ConvectionIntegrator H1 H1 V 1D, 2D, 3D $(\\vec{\\lambda}\\cdot\\grad u, v)$ This is designed to be used with BilinearForm to produce a square matrix. See MixedDirectionalDerivativeIntegrator for a rectangular version. GroupConvectionIntegrator H1 H1 V 1D, 2D, 3D $(\\alpha\\vec{\\lambda}\\cdot\\grad u, v)$ Uses the \"group\" finite element formulation for advection due to Fletcher . BoundaryMassIntegrator H1, L2 H1, L2 S 1D, 2D, 3D $(\\lambda\\,u,v)$ Computes a mass matrix on the exterior faces of a domain. See MassIntegrator above for a more general version. Vector Finite Element Operators These operators require vector-valued basis functions in the trial space. Many of these operators will work with either ND or RT basis functions but others require one or the other. Square Operators These integrators are designed to be used with the BilinearForm object to assemble square linear operators. Class Name Spaces Coef. Operator Continuous Op. Dimension VectorFEMassIntegrator ND, RT S, D, M $(\\lambda\\vec{u},\\vec{v})$ $\\lambda\\vec{u}$ 2D, 3D CurlCurlIntegrator ND S, D, M $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ 2D, 3D DivDivIntegrator RT S $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $-\\grad(\\lambda\\div\\vec{u})$ 2D, 3D Mixed Operators These integrators are designed to be used with the MixedBilinearForm object to assemble square or rectangular linear operators. 
Class Name Domain Range Coef. Operator Continuous Op. Dimension MixedDotProductIntegrator ND, RT H1, L2 V $(\\vec{\\lambda}\\cdot\\vec{u},v)$ $\\vec{\\lambda}\\cdot\\vec{u}$ 2D, 3D MixedScalarCrossProductIntegrator ND, RT H1, L2 V $(\\vec{\\lambda}\\cross\\vec{u},v)$ $\\vec{\\lambda}\\cross\\vec{u}$ 2D MixedVectorWeakDivergenceIntegrator ND, RT H1 S, D, M $(-\\lambda\\vec{u},\\grad v)$ $\\div(\\lambda\\vec{u})$ 2D, 3D MixedWeakDivCrossIntegrator ND, RT H1 V $(-\\vec{\\lambda}\\cross\\vec{u},\\grad v)$ $\\div(\\vec{\\lambda}\\cross\\vec{u})$ 3D MixedVectorMassIntegrator ND, RT ND, RT S, D, M $(\\lambda\\vec{u},\\vec{v})$ $\\lambda\\vec{u}$ 2D, 3D MixedCrossProductIntegrator ND, RT ND, RT V $(\\vec{\\lambda}\\cross\\vec{u},\\vec{v})$ $\\vec{\\lambda}\\cross\\vec{u}$ 3D MixedVectorWeakCurlIntegrator ND, RT ND S, D, M $(\\lambda\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\vec{u})$ 3D MixedWeakCurlCrossIntegrator ND, RT ND V $(\\vec{\\lambda}\\cross\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\vec{u})$ 3D MixedScalarWeakCurlCrossIntegrator ND, RT ND V $(\\vec{\\lambda}\\cross\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\vec{u})$ 2D MixedWeakGradDotIntegrator ND, RT RT V $(-\\vec{\\lambda}\\cdot\\vec{u},\\div\\vec{v})$ $\\grad(\\vec{\\lambda}\\cdot\\vec{u})$ 2D, 3D MixedScalarCurlIntegrator ND H1, L2 S $(\\lambda\\curl\\vec{u},v)$ $\\lambda\\curl\\vec{u}\\;$ 2D MixedCrossCurlGradIntegrator ND H1 V $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\grad v)$ $-\\div(\\vec{\\lambda}\\cross\\curl\\vec{u})$ 3D MixedVectorCurlIntegrator ND ND, RT S, D, M $(\\lambda\\curl\\vec{u},\\vec{v})$ $\\lambda\\curl\\vec{u}$ 3D MixedCrossCurlIntegrator ND ND, RT V $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\vec{v})$ $\\vec{\\lambda}\\cross\\curl\\vec{u}$ 3D MixedScalarCrossCurlIntegrator ND ND, RT V $(\\vec{\\lambda}\\cross\\hat{z}\\,\\curl\\vec{u},\\vec{v})$ $\\vec{\\lambda}\\cross\\hat{z}\\,\\curl\\vec{u}$ 2D MixedCurlCurlIntegrator ND ND S, D, M $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ 3D MixedCrossCurlCurlIntegrator ND ND V $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\curl\\vec{u})$ 3D MixedScalarDivergenceIntegrator RT H1, L2 S $(\\lambda\\div\\vec{u}, v)$ $\\lambda \\div\\vec{u}$ 2D, 3D MixedDivGradIntegrator RT H1 V $(\\vec{\\lambda}\\div\\vec{u}, \\grad v)$ $-\\div(\\vec{\\lambda}\\div\\vec{u})$ 2D, 3D MixedVectorDivergenceIntegrator RT ND, RT V $(\\vec{\\lambda}\\div\\vec{u}, \\vec{v})$ $\\vec{\\lambda}\\div\\vec{u}$ 2D, 3D Other Vector Finite Element Operators Class Name Domain Range Coef. Operator Dimension Notes VectorFEDivergenceIntegrator RT H1, L2 S $(\\lambda\\div\\vec{u}, v)$ 2D, 3D Alternate implementation of MixedScalarDivergenceIntegrator. VectorFEWeakDivergenceIntegrator ND H1 S $(-\\lambda\\vec{u},\\grad v)$ 2D, 3D See MixedVectorWeakDivergenceIntegrator for a more general implementation. VectorFECurlIntegrator ND, RT ND, RT S $(\\lambda\\curl\\vec{u},\\vec{v})$ or $(\\lambda\\vec{u},\\curl\\vec{v})$ 3D If the domain is ND then the Curl operator is returned, if the range is ND then the weak Curl is returned, otherwise a failure is encountered. See MixedVectorCurlIntegrator and MixedVectorWeakCurlIntegrator for more general implementations. Vector Field Operators These operators require vector-valued basis functions constructed by using multiple copies of scalar fields. In each of these integrators the scalar basis function index increments most quickly followed by the vector index. 
This leads to local element matrices that have a block structure. Square Operators Class Name Spaces Coef. Dimension Operator Notes VectorMassIntegrator H1$^d$, L2$^d$ S, D, M 1D, 2D, 3D $(\\lambda\\vec{u},\\vec{v})$ VectorCurlCurlIntegrator H1$^d$, L2$^d$ S 2D, 3D $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ VectorDiffusionIntegrator H1$^d$, L2$^d$ S 1D, 2D, 3D $(\\lambda\\grad u_i,\\grad v_i)$ Produces a block diagonal matrix where $i\\in[0,dim)$ indicates the index of the block ElasticityIntegrator H1$^d$, L2$^d$ $2\\times$S 1D, 2D, 3D $(c_{ikjl}\\grad u_j,\\grad v_i)$ Takes two scalar coefficients $\\lambda$ and $\\mu$ and produces a $dim\\times dim$ block structured matrix where $i$ and $j$ are indices in this matrix. The coefficient is defined by $c_{ikjl} = \\lambda\\delta_{ik}\\delta_{jl}+\\mu(\\delta_{ij}\\delta_{kl}+\\delta_{il}\\delta_{jk})$ Mixed Operators Class Name Domain Range Coef. Dimension Operator VectorDivergenceIntegrator H1$^d$, L2$^d$ H1, L2 S 1D, 2D, 3D $(\\lambda\\div\\vec{u},v)$ GradientIntegrator H1 H1$^d$, L2$^d$ S 1D, 2D, 3D $(\\lambda\\grad u, \\vec{v})$ Discontinuous Galerkin Operators Class Name Domain Range Operator Notes DGTraceIntegrator H1, L2 H1, L2 $\\alpha \\left<\\rho_u(\\vec{u}\\cdot\\hat{n}) \\{v\\},[w]\\right> \\\\ + \\beta \\left<\\rho_u \\abs{\\vec{u}\\cdot\\hat{n}}[v],[w]\\right>$ DGDiffusionIntegrator H1, L2 H1, L2 $-\\left<\\{Q\\grad u\\cdot\\hat{n}\\},[v]\\right> \\\\ + \\sigma \\left<[u],\\{Q\\grad v\\cdot\\hat{n}\\}\\right> \\\\ + \\kappa \\left<\\{h^{-1}Q\\}[u],[v]\\right> $ DGElasticityIntegrator H1, L2 H1, L2 see $(\\ref{dg-elast})$ TraceJumpIntegrator $\\left< v, [w] \\right>$ NormalTraceJumpIntegrator $\\left< v, \\left[\\vec{w}\\cdot \\hat{n}\\right] \\right>$ Integrator for the DG elasticity form, for the formulations see: PhD Thesis of Jonas De Basabe, High-Order Finite Element Methods for Seismic Wave Propagation, UT Austin, 2009, p. 23, and references therein Peter Hansbo and Mats G. Larson, Discontinuous Galerkin and the Crouzeix-Raviart Element: Application to Elasticity, PREPRINT 2000-09, p.3 $$ - \\left< \\{ \\tau(u) \\}, [v] \\right> + \\alpha \\left< \\{ \\tau(v) \\}, [u] \\right> + \\kappa \\left< h^{-1} \\{ \\lambda + 2 \\mu \\} [u], [v] \\right> $$ where $ \\left< u, v\\right> = \\int_{F} u \\cdot v $, and $ F $ is a face which is either a boundary face $ F_b $ of an element $ K $ or an interior face $ F_i $ separating elements $ K_1 $ and $ K_2 $. In the bilinear form above $ \\tau(u) $ is traction, and it's also $ \\tau(u) = \\sigma(u) \\cdot \\hat{n} $, where $ \\sigma(u) $ is stress, and $ \\hat{n} $ is the unit normal vector w.r.t. to $ F $. In other words, we have $$\\label{dg-elast} - \\left< \\{ \\sigma(u) \\cdot \\hat{n} \\}, [v] \\right> + \\alpha \\left< \\{ \\sigma(v) \\cdot \\hat{n} \\}, [u] \\right> + \\kappa \\left< h^{-1} \\{ \\lambda + 2 \\mu \\} [u], [v] \\right> $$ For isotropic media $$ \\begin{split} \\sigma(u) &= \\lambda \\nabla \\cdot u I + 2 \\mu \\varepsilon(u) \\\\ &= \\lambda \\nabla \\cdot u I + 2 \\mu \\frac{1}{2} \\left( \\nabla u + \\nabla u^T \\right) \\\\ &= \\lambda \\nabla \\cdot u I + \\mu \\left( \\nabla u + \\nabla u^T \\right) \\end{split} $$ where $ I $ is identity matrix, $ \\lambda $ and $ \\mu $ are Lame coefficients (see ElasticityIntegrator), $ u, v $ are the trial and test functions, respectively. The parameters $ \\alpha $ and $ \\kappa $ determine the DG method to use (when this integrator is added to the \"broken\" ElasticityIntegrator): IIPG , $\\alpha = 0$, C. Dawson, S. 
Sun, M. Wheeler, Compatible algorithms for coupled flow and transport, Comp. Meth. Appl. Mech. Eng., 193(23-26), 2565-2580, 2004. SIPG , $\\alpha = -1$, M. Grote, A. Schneebeli, D. Schotzau, Discontinuous Galerkin Finite Element Method for the Wave Equation, SINUM, 44(6), 2408-2431, 2006. NIPG , $\\alpha = 1$, B. Riviere, M. Wheeler, V. Girault, A Priori Error Estimates for Finite Element Methods Based on Discontinuous Approximation Spaces for Elliptic Problems, SINUM, 39(3), 902-931, 2001. This is a 'Vector' integrator, i.e. defined for FE spaces using multiple copies of a scalar FE space. Special Purpose Integrators These \"integrators\" do not actually perform integrations they merely alter the results of other integrators. As such they provide a convenient and easy way to reuse existing integrators in special situations rather than needing to reimplement their functionality. Class Name Description TransposeIntegrator Returns the transpose of the local matrix computed by another BilinearFormIntegrator LumpedIntegrator Returns a diagonal local matrix where each entry is the sum of the corresponding row of a local matrix computed by another BilinearFormIntegrator (only implemented for square matrices) InverseIntegrator Returns the inverse of the local matrix computed by another BilinearFormIntegrator which produces a square local matrix SumIntegrator Returns the sum of a series of integrators with compatible dimensions (only implemented for square matrices) Weak Operators and Their Boundary Integrals Weak operators use integration by parts to move a spatial derivative onto the test function. This results in an implied boundary integral that is often assumed to be zero but can be used to apply a non-homogeneous Neumann boundary condition given a known function $u_\\mathrm{bc}$ (or $\\vec{u}_\\mathrm{bc}$ for operators with a vector domain). Operator with Scalar Range The following weak operators require the range (or test) space to be $H_1$ i.e. a scalar basis function with a gradient operator. The implied natural boundary condition when using these operators is for the continuous boundary operator (shown in the last column) to be equal to zero. On the other hand an inhomogeneous Neumann boundary condition can be applied by using a linear form boundary integrator to compute this boundary term for a known function e.g. when using the DiffusionIntegrator one could provide a known function for $\\lambda\\,\\grad u_\\mathrm{bc}$ to the BoundaryNormalLFIntegrator which would then integrate the normal component of this function over the boundary of the domain. See Linear Form Integrators for more information. Class Name Operator Continuous Op. Continuous Boundary Op. 
DiffusionIntegrator $(\\lambda\\grad u, \\grad v)$ $-\\div(\\lambda\\grad u)$ $\\lambda\\,\\hat{n}\\cdot\\grad u_\\mathrm{bc}$ MixedGradGradIntegrator $(\\lambda\\grad u, \\grad v)$ $-\\div(\\lambda\\grad u)$ $\\lambda\\,\\hat{n}\\cdot\\grad u_\\mathrm{bc}$ MixedCrossGradGradIntegrator $(\\vec{\\lambda}\\cross\\grad u,\\grad v)$ $-\\div(\\vec{\\lambda}\\cross\\grad u)$ $\\hat{n}\\cdot(\\vec{\\lambda}\\times\\grad u_\\mathrm{bc})$ MixedScalarWeakDivergenceIntegrator $(-\\vec{\\lambda}u,\\grad v)$ $\\div(\\vec{\\lambda}u)$ $-\\hat{n}\\cdot\\vec{\\lambda}\\,u_\\mathrm{bc}$ MixedScalarWeakDerivativeIntegrator $(-\\lambda u, \\ddx{v})$ $\\ddx{}(\\lambda u)\\;$ $-\\hat{n}\\cdot\\hat{x}\\,\\lambda\\,u_\\mathrm{bc}$ MixedVectorWeakDivergenceIntegrator $(-\\lambda\\vec{u},\\grad v)$ $\\div(\\lambda\\vec{u})$ $-\\hat{n}\\cdot(\\lambda\\,\\vec{u}_\\mathrm{bc})$ MixedWeakDivCrossIntegrator $(-\\vec{\\lambda}\\cross\\vec{u},\\grad v)$ $\\div(\\vec{\\lambda}\\cross\\vec{u})$ $-\\hat{n}\\cdot(\\vec{\\lambda}\\times\\vec{u}_\\mathrm{bc})$ MixedCrossCurlGradIntegrator $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\grad v)$ $-\\div(\\vec{\\lambda}\\cross\\curl\\vec{u})$ $\\hat{n}\\cdot(\\vec{\\lambda}\\cross\\curl\\vec{u}_\\mathrm{bc})$ MixedDivGradIntegrator $(\\vec{\\lambda}\\div\\vec{u}, \\grad v)$ $-\\div(\\vec{\\lambda}\\div\\vec{u})$ $\\hat{n}\\cdot(\\vec{\\lambda}\\div\\vec{u}_\\mathrm{bc})$ Operator with Vector Range The following weak operators require the range (or test) space to be H(Curl) i.e. a vector basis function with a curl operator. The implied natural boundary condition when using these operators is for the continuous boundary operator (shown in the last column) to be equal to zero. On the other hand a non-homogeneous Neumann boundary condition can be applied by using a linear form boundary integrator to compute this boundary term for a known function e.g. when using the CurlCurlIntegrator one could provide a known function for $-\\lambda\\,\\curl\\vec{u}_\\mathrm{bc}$ to the VectorFEBoundaryTangentLFIntegrator which would then integrate the product of the tangential portion of this function with that of the ND basis function over the boundary of the domain. See Linear Form Integrators for more information. Class Name Operator Continuous Op. Continuous Boundary Op. 
CurlCurlIntegrator $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ $-\\lambda\\,\\hat{n}\\times\\curl\\vec{u}_\\mathrm{bc}$ MixedCurlCurlIntegrator $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ $-\\lambda\\,\\hat{n}\\times\\curl\\vec{u}_\\mathrm{bc}$ MixedCrossCurlCurlIntegrator $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\curl\\vec{u})$ $-\\hat{n}\\times(\\vec{\\lambda}\\cross\\curl\\vec{u}_\\mathrm{bc})$ MixedCrossGradCurlIntegrator $(\\vec{\\lambda}\\cross\\grad u,\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\grad u)$ $-\\hat{n}\\times(\\vec{\\lambda}\\cross\\grad u_\\mathrm{bc})$ MixedVectorWeakCurlIntegrator $(\\lambda\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\vec{u})$ $-\\lambda\\,\\hat{n}\\times\\vec{u}_\\mathrm{bc}$ MixedScalarWeakCurlIntegrator $(\\lambda u,\\curl\\vec{v})$ $\\curl(\\lambda\\,u\\,\\hat{z})\\;$ $-\\lambda\\,u_\\mathrm{bc}\\,\\hat{n}\\times\\hat{z}$ MixedWeakCurlCrossIntegrator $(\\vec{\\lambda}\\cross\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\vec{u})$ $-\\hat{n}\\times(\\vec{\\lambda}\\cross\\vec{u}_\\mathrm{bc})$ MixedScalarWeakCurlCrossIntegrator $(\\vec{\\lambda}\\cross\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\vec{u})$ $-\\hat{n}\\times(\\vec{\\lambda}\\cross\\vec{u}_\\mathrm{bc})$ The following weak operators require the range (or test) space to be H(Div) i.e. a vector basis function with a divergence operator. The implied natural boundary condition when using these operators is for the continuous boundary operator (shown in the last column) to be equal to zero. On the other hand a non-homogeneous Neumann boundary condition can be applied by using a linear form boundary integrator to compute this boundary term for a known function e.g. when using the DivDivIntegrator one could provide a known function for $\\lambda\\,\\div\\vec{u}_\\mathrm{bc}$ to the VectorFEBoundaryFluxLFIntegrator which would then integrate the product of this function with the normal component of the RT basis function over the boundary of the domain. See Linear Form Integrators for more information. Class Name Operator Continuous Op. Continuous Boundary Op. DivDivIntegrator $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $-\\grad(\\lambda\\div\\vec{u})$ $\\lambda\\div\\vec{u}_\\mathrm{bc}\\,\\hat{n}$ MixedGradDivIntegrator $(\\vec{\\lambda}\\cdot\\grad u, \\div\\vec{v})$ $-\\grad(\\vec{\\lambda}\\cdot\\grad u)$ $\\vec{\\lambda}\\cdot\\grad u_\\mathrm{bc}\\,\\hat{n}$ MixedScalarWeakGradientIntegrator $(-\\lambda u, \\div\\vec{v})$ $\\grad(\\lambda u)$ $-\\lambda u_\\mathrm{bc}\\,\\hat{n}$ MixedWeakGradDotIntegrator $(-\\vec{\\lambda}\\cdot\\vec{u},\\div\\vec{v})$ $\\grad(\\vec{\\lambda}\\cdot\\vec{u})$ $-\\vec{\\lambda}\\cdot\\vec{u}_\\mathrm{bc}\\,\\hat{n}$ Device support A list of the MFEM integrators that support device acceleration is available here . 
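To connect these tables back to the assembly loop sketched at the top of the page, here is a minimal usage sketch in the spirit of the MFEM examples (e.g. ex1 ): a DiffusionIntegrator is attached to a BilinearForm , and a BoundaryNormalLFIntegrator supplies Neumann boundary data of the kind discussed under Weak Operators above. The mesh file name, polynomial order, and coefficient values are placeholders, not part of the documentation above.

```cpp
#include "mfem.hpp"
using namespace mfem;

int main()
{
   // Placeholder mesh and order.
   Mesh mesh("star.mesh");
   int order = 2;
   H1_FECollection fec(order, mesh.Dimension());
   FiniteElementSpace fes(&mesh, &fec);

   // Scalar coefficient ("S" in the Coef. column of the tables).
   ConstantCoefficient lambda(1.0);

   // The BilinearForm drives the element loop; the integrator computes the
   // local element (stiffness) matrices for (lambda grad u, grad v).
   BilinearForm a(&fes);
   a.AddDomainIntegrator(new DiffusionIntegrator(lambda));
   a.Assemble();
   a.Finalize();

   // Inhomogeneous Neumann data: a constant vector field standing in for
   // lambda * grad(u_bc); its normal component is integrated over the boundary.
   Vector flux(mesh.Dimension());
   flux = 1.0;
   VectorConstantCoefficient neumann_data(flux);
   LinearForm b(&fes);
   b.AddBoundaryIntegrator(new BoundaryNormalLFIntegrator(neumann_data));
   b.Assemble();

   return 0;
}
```

The same pattern applies to the other integrators in the tables: mixed operators are attached to a MixedBilinearForm , and the boundary terms listed under Weak Operators are supplied through the corresponding linear form integrators.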
MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Bilinear Form Integrators"}, {"location": "bilininteg/#bilinear-form-integrators", "text": "$ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} \\newcommand{\\abs}[1]{|#1|} $ Bilinear form integrators are at the heart of any finite element method, they are used to compute the integrals of products of basis functions over individual mesh elements (or sometimes over edges or faces). Typically each element is contained in the support of several basis functions of both the domain and range spaces, therefore bilinear integrators simultaneously compute the integrals of all combinations of the relevant basis functions from the domain and range spaces. This produces a two dimensional array of results that are arranged into a small dense matrix of integral values called a local element (stiffness) matrix . To put this another way, the BilinearForm class builds a global, sparse, finite element matrix, glb_mat , by performing the outer loop in the following pseudocode snippet whereas the BilinearFormIntegrator class performs the nested inner loops to compute the dense local element matrix, loc_mat . for each elem in elements loc_mat = 0.0 for each pt in quadrature_points for each u_j in elem for each v_i in elem loc_mat(i,j) += w(pt) * u_j(pt) v_i(pt) end end end glb_mat += loc_mat end There are three types of integrals that typically arise although many other, more exotic, forms are possible: Integrals involving Scalar basis functions: $\\int_\\Omega \\lambda\\, u v$ Integrals involving Vector basis functions: $\\int_\\Omega \\lambda\\, \\vec{u}\\cdot\\vec{v}$ Integrals involving Scalar and Vector basis functions: $\\int_\\Omega u\\,\\vec{\\lambda}\\cdot\\vec{v}$ The BilinearFormIntegrator classes allow MFEM to produce a wide variety of local element matrices without modifying the BilinearForm class. Many of the possible operators are collected below into tables that briefly describe their action and requirements. For more information on integration and developing custom BilinearFormIntegrator classes see Integration . In the tables below the Space column refers to finite element spaces which implement the following methods: Space Operator Derivative Operator H1 CalcShape CalcDShape ND CalcVShape CalcCurlShape RT CalcVShape CalcDivShape L2 CalcShape None The Coef. column refers to the types of coefficients that are available. A boldface coefficient type is required whereas most coefficients are optional. Coef. Type of Function Argument Type S Scalar Valued Function Coefficient V Vector Valued Function VectorCoefficient D Diagonal Matrix Function VectorCoefficient M General Matrix Function MatrixCoefficient Notation: The integrals performed by the various integrators listed below are shown using inner product notation, $(\\cdot,\\cdot)$, defined as follows. $$(\\lambda u, v)\\equiv \\int_\\Omega \\lambda u v$$ $$(\\lambda\\vec{u}, \\vec{v})\\equiv \\int_\\Omega\\lambda\\vec{u}\\cdot\\vec{v}$$ Where $u$ or $\\vec{u}$ is a function in the domain (or trial) space and $v$ or $\\vec{v}$ is in the range (or test) space. For boundary integrators, the integrals are over $\\partial \\Omega$. Face integrators integrate over the interior and boundary faces of mesh elements and are denoted with $\\left<\\cdot,\\cdot\\right>$. 
Note that any operators involving a derivative of the range function $v$ or $\\vec{v}$ are computed using integration by parts. This leads to a boundary integral which can be used to apply Neumann boundary conditions. Some of these operators are listed along with their boundary terms in section Weak Operators .", "title": "Bilinear Form Integrators"}, {"location": "bilininteg/#scalar-field-operators", "text": "These operators require scalar-valued trial spaces. Many of these operators will work with either H1 or L2 basis functions but some that require a gradient operator should be used with H1.", "title": "Scalar Field Operators"}, {"location": "bilininteg/#square-operators", "text": "These integrators are designed to be used with the BilinearForm object to assemble square linear operators. Class Name Spaces Coef. Operator Continuous Op. Dimension MassIntegrator H1, L2 S $(\\lambda u, v)$ $\\lambda u$ 1D, 2D, 3D DiffusionIntegrator H1 S, M $(\\lambda\\grad u, \\grad v)$ $-\\div(\\lambda\\grad u)$ 1D, 2D, 3D", "title": "Square Operators"}, {"location": "bilininteg/#mixed-operators", "text": "These integrators are designed to be used with the MixedBilinearForm object to assemble square or rectangular linear operators. Class Name Domain Range Coef. Operator Continuous Op. Dimension MixedScalarMassIntegrator H1, L2 H1, L2 S $(\\lambda u, v)$ $\\lambda u$ 1D, 2D, 3D MixedScalarWeakDivergenceIntegrator H1, L2 H1 V $(-\\vec{\\lambda}u,\\grad v)$ $\\div(\\vec{\\lambda}u)$ 2D, 3D MixedScalarWeakDerivativeIntegrator H1, L2 H1 S $(-\\lambda u, \\ddx{v})$ $\\ddx{}(\\lambda u)\\;$ 1D MixedScalarWeakCurlIntegrator H1, L2 ND S $(\\lambda u,\\curl\\vec{v})$ $\\curl(\\lambda\\,u\\,\\hat{z})\\;$ 2D MixedVectorProductIntegrator H1, L2 ND, RT V $(\\vec{\\lambda}u,\\vec{v})$ $\\vec{\\lambda}u$ 2D, 3D MixedScalarWeakCrossProductIntegrator H1, L2 ND, RT V $(\\vec{\\lambda} u\\,\\hat{z},\\vec{v})$ $\\vec{\\lambda}\\times\\,\\hat{z}\\,u$ 2D MixedScalarWeakGradientIntegrator H1, L2 RT S $(-\\lambda u, \\div\\vec{v})$ $\\grad(\\lambda u)$ 2D, 3D MixedDirectionalDerivativeIntegrator H1 H1, L2 V $(\\vec{\\lambda}\\cdot\\grad u, v)$ $\\vec{\\lambda}\\cdot\\grad u$ 2D, 3D MixedScalarCrossGradIntegrator H1 H1, L2 V $(\\vec{\\lambda}\\cross\\grad u, v)$ $\\vec{\\lambda}\\cross\\grad u$ 2D MixedScalarDerivativeIntegrator H1 H1, L2 S $(\\lambda \\ddx{u}, v)$ $\\lambda\\ddx{u}\\;$ 1D MixedGradGradIntegrator H1 H1 S, D, M $(\\lambda\\grad u,\\grad v)$ $-\\div(\\lambda\\grad u)$ 2D, 3D MixedCrossGradGradIntegrator H1 H1 V $(\\vec{\\lambda}\\cross\\grad u,\\grad v)$ $-\\div(\\vec{\\lambda}\\cross\\grad u)$ 2D, 3D MixedVectorGradientIntegrator H1 ND, RT S, D, M $(\\lambda\\grad u,\\vec{v})$ $\\lambda\\grad u$ 2D, 3D MixedCrossGradIntegrator H1 ND, RT V $(\\vec{\\lambda}\\cross\\grad u,\\vec{v})$ $\\vec{\\lambda}\\cross\\grad u$ 3D MixedCrossGradCurlIntegrator H1 ND V $(\\vec{\\lambda}\\times\\grad u, \\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\times\\grad u)$ 3D MixedGradDivIntegrator H1 RT V $(\\vec{\\lambda}\\cdot\\grad u, \\div\\vec{v})$ $-\\grad(\\vec{\\lambda}\\cdot\\grad u)$ 2D, 3D", "title": "Mixed Operators"}, {"location": "bilininteg/#other-scalar-operators", "text": "Class Name Domain Range Coef. Dimension Operator Notes DerivativeIntegrator H1, L2 H1, L2 S 1D, 2D, 3D $(\\lambda\\frac{\\partial u}{\\partial x_i}, v)$ The direction index \"i\" is passed by the user. See MixedDirectionalDerivativeIntegrator for a more general alternative. 
ConvectionIntegrator H1 H1 V 1D, 2D, 3D $(\\vec{\\lambda}\\cdot\\grad u, v)$ This is designed to be used with BilinearForm to produce a square matrix. See MixedDirectionalDerivativeIntegrator for a rectangular version. GroupConvectionIntegrator H1 H1 V 1D, 2D, 3D $(\\alpha\\vec{\\lambda}\\cdot\\grad u, v)$ Uses the \"group\" finite element formulation for advection due to Fletcher . BoundaryMassIntegrator H1, L2 H1, L2 S 1D, 2D, 3D $(\\lambda\\,u,v)$ Computes a mass matrix on the exterior faces of a domain. See MassIntegrator above for a more general version.", "title": "Other Scalar Operators"}, {"location": "bilininteg/#vector-finite-element-operators", "text": "These operators require vector-valued basis functions in the trial space. Many of these operators will work with either ND or RT basis functions but others require one or the other.", "title": "Vector Finite Element Operators"}, {"location": "bilininteg/#square-operators_1", "text": "These integrators are designed to be used with the BilinearForm object to assemble square linear operators. Class Name Spaces Coef. Operator Continuous Op. Dimension VectorFEMassIntegrator ND, RT S, D, M $(\\lambda\\vec{u},\\vec{v})$ $\\lambda\\vec{u}$ 2D, 3D CurlCurlIntegrator ND S, D, M $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ 2D, 3D DivDivIntegrator RT S $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $-\\grad(\\lambda\\div\\vec{u})$ 2D, 3D", "title": "Square Operators"}, {"location": "bilininteg/#mixed-operators_1", "text": "These integrators are designed to be used with the MixedBilinearForm object to assemble square or rectangular linear operators. Class Name Domain Range Coef. Operator Continuous Op. Dimension MixedDotProductIntegrator ND, RT H1, L2 V $(\\vec{\\lambda}\\cdot\\vec{u},v)$ $\\vec{\\lambda}\\cdot\\vec{u}$ 2D, 3D MixedScalarCrossProductIntegrator ND, RT H1, L2 V $(\\vec{\\lambda}\\cross\\vec{u},v)$ $\\vec{\\lambda}\\cross\\vec{u}$ 2D MixedVectorWeakDivergenceIntegrator ND, RT H1 S, D, M $(-\\lambda\\vec{u},\\grad v)$ $\\div(\\lambda\\vec{u})$ 2D, 3D MixedWeakDivCrossIntegrator ND, RT H1 V $(-\\vec{\\lambda}\\cross\\vec{u},\\grad v)$ $\\div(\\vec{\\lambda}\\cross\\vec{u})$ 3D MixedVectorMassIntegrator ND, RT ND, RT S, D, M $(\\lambda\\vec{u},\\vec{v})$ $\\lambda\\vec{u}$ 2D, 3D MixedCrossProductIntegrator ND, RT ND, RT V $(\\vec{\\lambda}\\cross\\vec{u},\\vec{v})$ $\\vec{\\lambda}\\cross\\vec{u}$ 3D MixedVectorWeakCurlIntegrator ND, RT ND S, D, M $(\\lambda\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\vec{u})$ 3D MixedWeakCurlCrossIntegrator ND, RT ND V $(\\vec{\\lambda}\\cross\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\vec{u})$ 3D MixedScalarWeakCurlCrossIntegrator ND, RT ND V $(\\vec{\\lambda}\\cross\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\vec{u})$ 2D MixedWeakGradDotIntegrator ND, RT RT V $(-\\vec{\\lambda}\\cdot\\vec{u},\\div\\vec{v})$ $\\grad(\\vec{\\lambda}\\cdot\\vec{u})$ 2D, 3D MixedScalarCurlIntegrator ND H1, L2 S $(\\lambda\\curl\\vec{u},v)$ $\\lambda\\curl\\vec{u}\\;$ 2D MixedCrossCurlGradIntegrator ND H1 V $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\grad v)$ $-\\div(\\vec{\\lambda}\\cross\\curl\\vec{u})$ 3D MixedVectorCurlIntegrator ND ND, RT S, D, M $(\\lambda\\curl\\vec{u},\\vec{v})$ $\\lambda\\curl\\vec{u}$ 3D MixedCrossCurlIntegrator ND ND, RT V $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\vec{v})$ $\\vec{\\lambda}\\cross\\curl\\vec{u}$ 3D MixedScalarCrossCurlIntegrator ND ND, RT V $(\\vec{\\lambda}\\cross\\hat{z}\\,\\curl\\vec{u},\\vec{v})$ 
$\\vec{\\lambda}\\cross\\hat{z}\\,\\curl\\vec{u}$ 2D MixedCurlCurlIntegrator ND ND S, D, M $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ 3D MixedCrossCurlCurlIntegrator ND ND V $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\curl\\vec{u})$ 3D MixedScalarDivergenceIntegrator RT H1, L2 S $(\\lambda\\div\\vec{u}, v)$ $\\lambda \\div\\vec{u}$ 2D, 3D MixedDivGradIntegrator RT H1 V $(\\vec{\\lambda}\\div\\vec{u}, \\grad v)$ $-\\div(\\vec{\\lambda}\\div\\vec{u})$ 2D, 3D MixedVectorDivergenceIntegrator RT ND, RT V $(\\vec{\\lambda}\\div\\vec{u}, \\vec{v})$ $\\vec{\\lambda}\\div\\vec{u}$ 2D, 3D", "title": "Mixed Operators"}, {"location": "bilininteg/#other-vector-finite-element-operators", "text": "Class Name Domain Range Coef. Operator Dimension Notes VectorFEDivergenceIntegrator RT H1, L2 S $(\\lambda\\div\\vec{u}, v)$ 2D, 3D Alternate implementation of MixedScalarDivergenceIntegrator. VectorFEWeakDivergenceIntegrator ND H1 S $(-\\lambda\\vec{u},\\grad v)$ 2D, 3D See MixedVectorWeakDivergenceIntegrator for a more general implementation. VectorFECurlIntegrator ND, RT ND, RT S $(\\lambda\\curl\\vec{u},\\vec{v})$ or $(\\lambda\\vec{u},\\curl\\vec{v})$ 3D If the domain is ND then the Curl operator is returned, if the range is ND then the weak Curl is returned, otherwise a failure is encountered. See MixedVectorCurlIntegrator and MixedVectorWeakCurlIntegrator for more general implementations.", "title": "Other Vector Finite Element Operators"}, {"location": "bilininteg/#vector-field-operators", "text": "These operators require vector-valued basis functions constructed by using multiple copies of scalar fields. In each of these integrators the scalar basis function index increments most quickly followed by the vector index. This leads to local element matrices that have a block structure.", "title": "Vector Field Operators"}, {"location": "bilininteg/#square-operators_2", "text": "Class Name Spaces Coef. Dimension Operator Notes VectorMassIntegrator H1$^d$, L2$^d$ S, D, M 1D, 2D, 3D $(\\lambda\\vec{u},\\vec{v})$ VectorCurlCurlIntegrator H1$^d$, L2$^d$ S 2D, 3D $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ VectorDiffusionIntegrator H1$^d$, L2$^d$ S 1D, 2D, 3D $(\\lambda\\grad u_i,\\grad v_i)$ Produces a block diagonal matrix where $i\\in[0,dim)$ indicates the index of the block ElasticityIntegrator H1$^d$, L2$^d$ $2\\times$S 1D, 2D, 3D $(c_{ikjl}\\grad u_j,\\grad v_i)$ Takes two scalar coefficients $\\lambda$ and $\\mu$ and produces a $dim\\times dim$ block structured matrix where $i$ and $j$ are indices in this matrix. The coefficient is defined by $c_{ikjl} = \\lambda\\delta_{ik}\\delta_{jl}+\\mu(\\delta_{ij}\\delta_{kl}+\\delta_{il}\\delta_{jk})$", "title": "Square Operators"}, {"location": "bilininteg/#mixed-operators_2", "text": "Class Name Domain Range Coef. 
Dimension Operator VectorDivergenceIntegrator H1$^d$, L2$^d$ H1, L2 S 1D, 2D, 3D $(\\lambda\\div\\vec{u},v)$ GradientIntegrator H1 H1$^d$, L2$^d$ S 1D, 2D, 3D $(\\lambda\\grad u, \\vec{v})$", "title": "Mixed Operators"}, {"location": "bilininteg/#discontinuous-galerkin-operators", "text": "Class Name Domain Range Operator Notes DGTraceIntegrator H1, L2 H1, L2 $\\alpha \\left<\\rho_u(\\vec{u}\\cdot\\hat{n}) \\{v\\},[w]\\right> \\\\ + \\beta \\left<\\rho_u \\abs{\\vec{u}\\cdot\\hat{n}}[v],[w]\\right>$ DGDiffusionIntegrator H1, L2 H1, L2 $-\\left<\\{Q\\grad u\\cdot\\hat{n}\\},[v]\\right> \\\\ + \\sigma \\left<[u],\\{Q\\grad v\\cdot\\hat{n}\\}\\right> \\\\ + \\kappa \\left<\\{h^{-1}Q\\}[u],[v]\\right> $ DGElasticityIntegrator H1, L2 H1, L2 see $(\\ref{dg-elast})$ TraceJumpIntegrator $\\left< v, [w] \\right>$ NormalTraceJumpIntegrator $\\left< v, \\left[\\vec{w}\\cdot \\hat{n}\\right] \\right>$ Integrator for the DG elasticity form, for the formulations see: PhD Thesis of Jonas De Basabe, High-Order Finite Element Methods for Seismic Wave Propagation, UT Austin, 2009, p. 23, and references therein Peter Hansbo and Mats G. Larson, Discontinuous Galerkin and the Crouzeix-Raviart Element: Application to Elasticity, PREPRINT 2000-09, p.3 $$ - \\left< \\{ \\tau(u) \\}, [v] \\right> + \\alpha \\left< \\{ \\tau(v) \\}, [u] \\right> + \\kappa \\left< h^{-1} \\{ \\lambda + 2 \\mu \\} [u], [v] \\right> $$ where $ \\left< u, v\\right> = \\int_{F} u \\cdot v $, and $ F $ is a face which is either a boundary face $ F_b $ of an element $ K $ or an interior face $ F_i $ separating elements $ K_1 $ and $ K_2 $. In the bilinear form above $ \\tau(u) $ is traction, and it's also $ \\tau(u) = \\sigma(u) \\cdot \\hat{n} $, where $ \\sigma(u) $ is stress, and $ \\hat{n} $ is the unit normal vector w.r.t. to $ F $. In other words, we have $$\\label{dg-elast} - \\left< \\{ \\sigma(u) \\cdot \\hat{n} \\}, [v] \\right> + \\alpha \\left< \\{ \\sigma(v) \\cdot \\hat{n} \\}, [u] \\right> + \\kappa \\left< h^{-1} \\{ \\lambda + 2 \\mu \\} [u], [v] \\right> $$ For isotropic media $$ \\begin{split} \\sigma(u) &= \\lambda \\nabla \\cdot u I + 2 \\mu \\varepsilon(u) \\\\ &= \\lambda \\nabla \\cdot u I + 2 \\mu \\frac{1}{2} \\left( \\nabla u + \\nabla u^T \\right) \\\\ &= \\lambda \\nabla \\cdot u I + \\mu \\left( \\nabla u + \\nabla u^T \\right) \\end{split} $$ where $ I $ is identity matrix, $ \\lambda $ and $ \\mu $ are Lame coefficients (see ElasticityIntegrator), $ u, v $ are the trial and test functions, respectively. The parameters $ \\alpha $ and $ \\kappa $ determine the DG method to use (when this integrator is added to the \"broken\" ElasticityIntegrator): IIPG , $\\alpha = 0$, C. Dawson, S. Sun, M. Wheeler, Compatible algorithms for coupled flow and transport, Comp. Meth. Appl. Mech. Eng., 193(23-26), 2565-2580, 2004. SIPG , $\\alpha = -1$, M. Grote, A. Schneebeli, D. Schotzau, Discontinuous Galerkin Finite Element Method for the Wave Equation, SINUM, 44(6), 2408-2431, 2006. NIPG , $\\alpha = 1$, B. Riviere, M. Wheeler, V. Girault, A Priori Error Estimates for Finite Element Methods Based on Discontinuous Approximation Spaces for Elliptic Problems, SINUM, 39(3), 902-931, 2001. This is a 'Vector' integrator, i.e. defined for FE spaces using multiple copies of a scalar FE space.", "title": "Discontinuous Galerkin Operators"}, {"location": "bilininteg/#special-purpose-integrators", "text": "These \"integrators\" do not actually perform integrations they merely alter the results of other integrators. 
As such they provide a convenient and easy way to reuse existing integrators in special situations rather than needing to reimplement their functionality. Class Name Description TransposeIntegrator Returns the transpose of the local matrix computed by another BilinearFormIntegrator LumpedIntegrator Returns a diagonal local matrix where each entry is the sum of the corresponding row of a local matrix computed by another BilinearFormIntegrator (only implemented for square matrices) InverseIntegrator Returns the inverse of the local matrix computed by another BilinearFormIntegrator which produces a square local matrix SumIntegrator Returns the sum of a series of integrators with compatible dimensions (only implemented for square matrices)", "title": "Special Purpose Integrators"}, {"location": "bilininteg/#weak-operators-and-their-boundary-integrals", "text": "Weak operators use integration by parts to move a spatial derivative onto the test function. This results in an implied boundary integral that is often assumed to be zero but can be used to apply a non-homogeneous Neumann boundary condition given a known function $u_\\mathrm{bc}$ (or $\\vec{u}_\\mathrm{bc}$ for operators with a vector domain).", "title": "Weak Operators and Their Boundary Integrals"}, {"location": "bilininteg/#operator-with-scalar-range", "text": "The following weak operators require the range (or test) space to be $H_1$ i.e. a scalar basis function with a gradient operator. The implied natural boundary condition when using these operators is for the continuous boundary operator (shown in the last column) to be equal to zero. On the other hand an inhomogeneous Neumann boundary condition can be applied by using a linear form boundary integrator to compute this boundary term for a known function e.g. when using the DiffusionIntegrator one could provide a known function for $\\lambda\\,\\grad u_\\mathrm{bc}$ to the BoundaryNormalLFIntegrator which would then integrate the normal component of this function over the boundary of the domain. See Linear Form Integrators for more information. Class Name Operator Continuous Op. Continuous Boundary Op. 
DiffusionIntegrator $(\\lambda\\grad u, \\grad v)$ $-\\div(\\lambda\\grad u)$ $\\lambda\\,\\hat{n}\\cdot\\grad u_\\mathrm{bc}$ MixedGradGradIntegrator $(\\lambda\\grad u, \\grad v)$ $-\\div(\\lambda\\grad u)$ $\\lambda\\,\\hat{n}\\cdot\\grad u_\\mathrm{bc}$ MixedCrossGradGradIntegrator $(\\vec{\\lambda}\\cross\\grad u,\\grad v)$ $-\\div(\\vec{\\lambda}\\cross\\grad u)$ $\\hat{n}\\cdot(\\vec{\\lambda}\\times\\grad u_\\mathrm{bc})$ MixedScalarWeakDivergenceIntegrator $(-\\vec{\\lambda}u,\\grad v)$ $\\div(\\vec{\\lambda}u)$ $-\\hat{n}\\cdot\\vec{\\lambda}\\,u_\\mathrm{bc}$ MixedScalarWeakDerivativeIntegrator $(-\\lambda u, \\ddx{v})$ $\\ddx{}(\\lambda u)\\;$ $-\\hat{n}\\cdot\\hat{x}\\,\\lambda\\,u_\\mathrm{bc}$ MixedVectorWeakDivergenceIntegrator $(-\\lambda\\vec{u},\\grad v)$ $\\div(\\lambda\\vec{u})$ $-\\hat{n}\\cdot(\\lambda\\,\\vec{u}_\\mathrm{bc})$ MixedWeakDivCrossIntegrator $(-\\vec{\\lambda}\\cross\\vec{u},\\grad v)$ $\\div(\\vec{\\lambda}\\cross\\vec{u})$ $-\\hat{n}\\cdot(\\vec{\\lambda}\\times\\vec{u}_\\mathrm{bc})$ MixedCrossCurlGradIntegrator $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\grad v)$ $-\\div(\\vec{\\lambda}\\cross\\curl\\vec{u})$ $\\hat{n}\\cdot(\\vec{\\lambda}\\cross\\curl\\vec{u}_\\mathrm{bc})$ MixedDivGradIntegrator $(\\vec{\\lambda}\\div\\vec{u}, \\grad v)$ $-\\div(\\vec{\\lambda}\\div\\vec{u})$ $\\hat{n}\\cdot(\\vec{\\lambda}\\div\\vec{u}_\\mathrm{bc})$", "title": "Operator with Scalar Range"}, {"location": "bilininteg/#operator-with-vector-range", "text": "The following weak operators require the range (or test) space to be H(Curl) i.e. a vector basis function with a curl operator. The implied natural boundary condition when using these operators is for the continuous boundary operator (shown in the last column) to be equal to zero. On the other hand a non-homogeneous Neumann boundary condition can be applied by using a linear form boundary integrator to compute this boundary term for a known function e.g. when using the CurlCurlIntegrator one could provide a known function for $-\\lambda\\,\\curl\\vec{u}_\\mathrm{bc}$ to the VectorFEBoundaryTangentLFIntegrator which would then integrate the product of the tangential portion of this function with that of the ND basis function over the boundary of the domain. See Linear Form Integrators for more information. Class Name Operator Continuous Op. Continuous Boundary Op. 
CurlCurlIntegrator $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ $-\\lambda\\,\\hat{n}\\times\\curl\\vec{u}_\\mathrm{bc}$ MixedCurlCurlIntegrator $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ $-\\lambda\\,\\hat{n}\\times\\curl\\vec{u}_\\mathrm{bc}$ MixedCrossCurlCurlIntegrator $(\\vec{\\lambda}\\cross\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\curl\\vec{u})$ $-\\hat{n}\\times(\\vec{\\lambda}\\cross\\curl\\vec{u}_\\mathrm{bc})$ MixedCrossGradCurlIntegrator $(\\vec{\\lambda}\\cross\\grad u,\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\grad u)$ $-\\hat{n}\\times(\\vec{\\lambda}\\cross\\grad u_\\mathrm{bc})$ MixedVectorWeakCurlIntegrator $(\\lambda\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\vec{u})$ $-\\lambda\\,\\hat{n}\\times\\vec{u}_\\mathrm{bc}$ MixedScalarWeakCurlIntegrator $(\\lambda u,\\curl\\vec{v})$ $\\curl(\\lambda\\,u\\,\\hat{z})\\;$ $-\\lambda\\,u_\\mathrm{bc}\\,\\hat{n}\\times\\hat{z}$ MixedWeakCurlCrossIntegrator $(\\vec{\\lambda}\\cross\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\vec{u})$ $-\\hat{n}\\times(\\vec{\\lambda}\\cross\\vec{u}_\\mathrm{bc})$ MixedScalarWeakCurlCrossIntegrator $(\\vec{\\lambda}\\cross\\vec{u},\\curl\\vec{v})$ $\\curl(\\vec{\\lambda}\\cross\\vec{u})$ $-\\hat{n}\\times(\\vec{\\lambda}\\cross\\vec{u}_\\mathrm{bc})$ The following weak operators require the range (or test) space to be H(Div) i.e. a vector basis function with a divergence operator. The implied natural boundary condition when using these operators is for the continuous boundary operator (shown in the last column) to be equal to zero. On the other hand a non-homogeneous Neumann boundary condition can be applied by using a linear form boundary integrator to compute this boundary term for a known function e.g. when using the DivDivIntegrator one could provide a known function for $\\lambda\\,\\div\\vec{u}_\\mathrm{bc}$ to the VectorFEBoundaryFluxLFIntegrator which would then integrate the product of this function with the normal component of the RT basis function over the boundary of the domain. See Linear Form Integrators for more information. Class Name Operator Continuous Op. Continuous Boundary Op. DivDivIntegrator $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $-\\grad(\\lambda\\div\\vec{u})$ $\\lambda\\div\\vec{u}_\\mathrm{bc}\\,\\hat{n}$ MixedGradDivIntegrator $(\\vec{\\lambda}\\cdot\\grad u, \\div\\vec{v})$ $-\\grad(\\vec{\\lambda}\\cdot\\grad u)$ $\\vec{\\lambda}\\cdot\\grad u_\\mathrm{bc}\\,\\hat{n}$ MixedScalarWeakGradientIntegrator $(-\\lambda u, \\div\\vec{v})$ $\\grad(\\lambda u)$ $-\\lambda u_\\mathrm{bc}\\,\\hat{n}$ MixedWeakGradDotIntegrator $(-\\vec{\\lambda}\\cdot\\vec{u},\\div\\vec{v})$ $\\grad(\\vec{\\lambda}\\cdot\\vec{u})$ $-\\vec{\\lambda}\\cdot\\vec{u}_\\mathrm{bc}\\,\\hat{n}$", "title": "Operator with Vector Range"}, {"location": "bilininteg/#device-support", "text": "A list of the MFEM integrators that support device acceleration is available here . MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Device support"}, {"location": "building/", "text": "Building MFEM A simple tutorial on how to build and run the serial and parallel versions of MFEM together with GLVis. For more details, see the INSTALL file and make help . 
In addition to the native build system described below, MFEM packages are also available in the following package managers: Homebrew Spack OpenHPC MFEM can also be installed as part of xSDK E4S FASTMath RADIUSS CEED A pre-built version of MFEM is also available in a container form, see our AWS tutorial and the mfem/containers repo. Instructions Download MFEM and GLVis https://mfem.org https://glvis.org Below we assume that we are working with versions mfem-4.5 and glvis-4.2 . Serial version of MFEM and GLVis Put everything in the same directory: ~> ls glvis-4.2.tgz mfem-4.5.tgz Build the serial version of MFEM: ~> tar -zxvf mfem-4.5.tgz ~> cd mfem-4.5 ~/mfem-4.5> make serial -j ~/mfem-4.5> cd .. Build GLVis: ~> tar -zxvf glvis-4.2.tgz ~> cd glvis-4.2 ~/glvis-4.2> make MFEM_DIR=../mfem-4.5 -j ~/glvis-4.2> cd .. That's it! The MFEM library can be found in mfem-4.5/libmfem.a , while the glvis executable will be in the glvis-4.2 directory. Note: as of version 4.0, GLVis has additional dependencies that need to be installed first, see its building documentation . To start a GLVis server, open a new terminal and type ~> cd glvis-4.2 ~/glvis-4.2> ./glvis The serial examples can be built with: ~> cd mfem-4.5/examples ~/mfem-4.5/examples> make -j All serial examples and miniapps can be built with: ~> cd mfem-4.5 ~/mfem-4.5> make all -j Parallel MPI version of MFEM Download hypre and METIS from https://github.com/hypre-space/hypre/tags https://github.com/mfem/tpls Note: We recommend MFEM's mirror of metis-4.0.3 and metis-5.1.0 above because the METIS webpage , is often down and we don't support yet the new GitHub repo . Below we assume that we are working with hypre-2.26.0 and metis-4.0.3 (see below for METIS version 5 and later). We also assume that the serial version of MFEM and GLVis have been built as described above. Put everything in the same directory: ~> ls glvis-4.2/ hypre-2.26.0.tar.gz metis-4.0.3.tar.gz mfem-4.5/ Build hypre: ~> tar -zxvf hypre-2.26.0.tar.gz ~> cd hypre-2.26.0/src/ ~/hypre-2.26.0/src> ./configure --disable-fortran ~/hypre-2.26.0/src> make -j ~/hypre-2.26.0/src> cd ../.. ~> ln -s hypre-2.26.0 hypre Build METIS: ~> tar -zxvf metis-4.0.3.tar.gz ~> cd metis-4.0.3 ~/metis-4.0.3> make OPTFLAGS=-Wno-error=implicit-function-declaration ~/metis-4.0.3> cd .. ~> ln -s metis-4.0.3 metis-4.0 (If you are using METIS 5, see the instructions below .) Build the parallel version of MFEM: ~> cd mfem-4.5 ~/mfem-4.5> make parallel -j ~/mfem-4.5> cd .. Note that if hypre or METIS are in different locations, or you have different versions of these libraries, you will need to update the corresponding paths in the config/defaults.mk file, or create you own config/user.mk , as described in the INSTALL file. The parallel examples can be built with: ~> cd mfem-4.5/examples ~/mfem-4.5/examples> make -j The serial examples can also be built with the parallel version of the library, e.g. ~/mfem-4.5/examples> make ex1 ex2 All parallel examples and miniapps can be built with: ~> cd mfem-4.5 ~/mfem-4.5> make all -j One can also use the parallel library to optionally (re-)build GLVis: ~> cd glvis-4.2 ~/glvis-4.2> make clean ~/glvis-4.2> make MFEM_DIR=../mfem-4.5 -j This, however, is generally not recommended , since the additional MPI thread can interfere with the other GLVis threads. 
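Once the parallel library is built, a quick sanity check is to compile and run a small program against it. The sketch below is a minimal example written for this tutorial (it is not part of the MFEM distribution); it assumes the library, hypre and METIS are linked with the settings reported in config/config.mk, and it simply partitions a small Cartesian mesh across the available MPI ranks.

```cpp
// check_parallel.cpp -- minimal sanity check for a parallel MFEM build
// (hypothetical file; compile and link using the flags from config/config.mk)
#include "mfem.hpp"
#include <iostream>

using namespace mfem;

int main(int argc, char *argv[])
{
   Mpi::Init(argc, argv);   // wraps MPI initialization/finalization
   Hypre::Init();           // initializes hypre

   // Build a small serial mesh and partition it across all MPI ranks.
   Mesh serial_mesh = Mesh::MakeCartesian2D(8, 8, Element::QUADRILATERAL);
   ParMesh mesh(MPI_COMM_WORLD, serial_mesh);

   if (Mpi::Root())
   {
      std::cout << "Parallel MFEM build OK: " << mesh.GetGlobalNE()
                << " elements on " << Mpi::WorldSize() << " ranks" << std::endl;
   }
   return 0;
}
```

Running the resulting executable with, e.g., mpirun -np 4 should print the global element count once from rank 0; if it links and runs, hypre and METIS were found correctly.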
Parallel build using METIS 5 Build METIS 5: ~> tar zvxf metis-5.1.0.tar.gz ~> cd metis-5.1.0 ~/metis-5.1.0> make BUILDDIR=lib config ~/metis-5.1.0> make BUILDDIR=lib ~/metis-5.1.0> cp lib/libmetis/libmetis.a lib Build the parallel version of MFEM, setting the options MFEM_USE_METIS_5 and METIS_DIR , e.g.: ~> cd mfem-4.5 ~/mfem-4.5> make parallel -j MFEM_USE_METIS_5=YES METIS_DIR=@MFEM_DIR@/../metis-5.1.0 CUDA version of MFEM To build the CUDA version of MFEM, one needs to specify the CUDA compute capability , with the CUDA_ARCH flag. In the examples below we use CUDA_ARCH=sm_70 to build the MFEM serial and parallel versions for compute capability 7.0 (V100). Build the serial CUDA version of MFEM: ~/mfem> make cuda CUDA_ARCH=sm_70 -j Build the parallel CUDA version of MFEM: ~/mfem> make pcuda CUDA_ARCH=sm_70 -j To use hypre with CUDA support in MFEM, follow the instructions above but configure it with the following command, specifying the CUDA compute capability: ~/hypre-2.26.0/src> ./configure --with-cuda --with-gpu-arch=\"70\" --disable-fortran HIP version of MFEM To build the HIP version of MFEM, one needs to specify the HIP architecture , with the HIP_ARCH flag. In the examples below we use HIP_ARCH=gfx908 to build the MFEM serial and parallel versions for gfx908 (MI100). Build the serial HIP version of MFEM: ~/mfem> make hip HIP_ARCH=gfx908 -j Build the parallel HIP version of MFEM: ~/mfem> make phip HIP_ARCH=gfx908 -j To use hypre with HIP support in MFEM, follow the instructions above but configure it with the following command, specifying the HIP architecture: ~/hypre-2.26.0/src> ./configure --with-hip --with-gpu-arch=\"gfx908\" --disable-fortran Installing MFEM with Spack If Spack is already available on your system and is visible in your PATH , you can install the MFEM software simply with: spack install mfem To enable package testing during the build process, use instead: spack install -v --test=all mfem If you don't have Spack, you can download it and install MFEM with the following commands: git clone https://github.com/spack/spack.git cd spack ./bin/spack install -v mfem", "title": "_Building MFEM"}, {"location": "building/#building-mfem", "text": "A simple tutorial on how to build and run the serial and parallel versions of MFEM together with GLVis. For more details, see the INSTALL file and make help . In addition to the native build system described below, MFEM packages are also available in the following package managers: Homebrew Spack OpenHPC MFEM can also be installed as part of xSDK E4S FASTMath RADIUSS CEED A pre-built version of MFEM is also available in a container form, see our AWS tutorial and the mfem/containers repo.", "title": "Building MFEM"}, {"location": "building/#instructions", "text": "Download MFEM and GLVis https://mfem.org https://glvis.org Below we assume that we are working with versions mfem-4.5 and glvis-4.2 .", "title": "Instructions"}, {"location": "building/#serial-version-of-mfem-and-glvis", "text": "Put everything in the same directory: ~> ls glvis-4.2.tgz mfem-4.5.tgz Build the serial version of MFEM: ~> tar -zxvf mfem-4.5.tgz ~> cd mfem-4.5 ~/mfem-4.5> make serial -j ~/mfem-4.5> cd .. Build GLVis: ~> tar -zxvf glvis-4.2.tgz ~> cd glvis-4.2 ~/glvis-4.2> make MFEM_DIR=../mfem-4.5 -j ~/glvis-4.2> cd .. That's it! The MFEM library can be found in mfem-4.5/libmfem.a , while the glvis executable will be in the glvis-4.2 directory. 
Note: as of version 4.0, GLVis has additional dependencies that need to be installed first, see its building documentation . To start a GLVis server, open a new terminal and type ~> cd glvis-4.2 ~/glvis-4.2> ./glvis The serial examples can be built with: ~> cd mfem-4.5/examples ~/mfem-4.5/examples> make -j All serial examples and miniapps can be built with: ~> cd mfem-4.5 ~/mfem-4.5> make all -j", "title": "Serial version of MFEM and GLVis"}, {"location": "building/#parallel-mpi-version-of-mfem", "text": "Download hypre and METIS from https://github.com/hypre-space/hypre/tags https://github.com/mfem/tpls Note: We recommend MFEM's mirror of metis-4.0.3 and metis-5.1.0 above because the METIS webpage , is often down and we don't support yet the new GitHub repo . Below we assume that we are working with hypre-2.26.0 and metis-4.0.3 (see below for METIS version 5 and later). We also assume that the serial version of MFEM and GLVis have been built as described above. Put everything in the same directory: ~> ls glvis-4.2/ hypre-2.26.0.tar.gz metis-4.0.3.tar.gz mfem-4.5/ Build hypre: ~> tar -zxvf hypre-2.26.0.tar.gz ~> cd hypre-2.26.0/src/ ~/hypre-2.26.0/src> ./configure --disable-fortran ~/hypre-2.26.0/src> make -j ~/hypre-2.26.0/src> cd ../.. ~> ln -s hypre-2.26.0 hypre Build METIS: ~> tar -zxvf metis-4.0.3.tar.gz ~> cd metis-4.0.3 ~/metis-4.0.3> make OPTFLAGS=-Wno-error=implicit-function-declaration ~/metis-4.0.3> cd .. ~> ln -s metis-4.0.3 metis-4.0 (If you are using METIS 5, see the instructions below .) Build the parallel version of MFEM: ~> cd mfem-4.5 ~/mfem-4.5> make parallel -j ~/mfem-4.5> cd .. Note that if hypre or METIS are in different locations, or you have different versions of these libraries, you will need to update the corresponding paths in the config/defaults.mk file, or create you own config/user.mk , as described in the INSTALL file. The parallel examples can be built with: ~> cd mfem-4.5/examples ~/mfem-4.5/examples> make -j The serial examples can also be built with the parallel version of the library, e.g. ~/mfem-4.5/examples> make ex1 ex2 All parallel examples and miniapps can be built with: ~> cd mfem-4.5 ~/mfem-4.5> make all -j One can also use the parallel library to optionally (re-)build GLVis: ~> cd glvis-4.2 ~/glvis-4.2> make clean ~/glvis-4.2> make MFEM_DIR=../mfem-4.5 -j This, however, is generally not recommended , since the additional MPI thread can interfere with the other GLVis threads.", "title": "Parallel MPI version of MFEM"}, {"location": "building/#parallel-build-using-metis-5", "text": "Build METIS 5: ~> tar zvxf metis-5.1.0.tar.gz ~> cd metis-5.1.0 ~/metis-5.1.0> make BUILDDIR=lib config ~/metis-5.1.0> make BUILDDIR=lib ~/metis-5.1.0> cp lib/libmetis/libmetis.a lib Build the parallel version of MFEM, setting the options MFEM_USE_METIS_5 and METIS_DIR , e.g.: ~> cd mfem-4.5 ~/mfem-4.5> make parallel -j MFEM_USE_METIS_5=YES METIS_DIR=@MFEM_DIR@/../metis-5.1.0", "title": "Parallel build using METIS 5"}, {"location": "building/#cuda-version-of-mfem", "text": "To build the CUDA version of MFEM, one needs to specify the CUDA compute capability , with the CUDA_ARCH flag. In the examples below we use CUDA_ARCH=sm_70 to build the MFEM serial and parallel versions for compute capability 7.0 (V100). 
Build the serial CUDA version of MFEM: ~/mfem> make cuda CUDA_ARCH=sm_70 -j Build the parallel CUDA version of MFEM: ~/mfem> make pcuda CUDA_ARCH=sm_70 -j To use hypre with CUDA support in MFEM, follow the instructions above but configure it with the following command, specifying the CUDA compute capability: ~/hypre-2.26.0/src> ./configure --with-cuda --with-gpu-arch=\"70\" --disable-fortran", "title": "CUDA version of MFEM"}, {"location": "building/#hip-version-of-mfem", "text": "To build the HIP version of MFEM, one needs to specify the HIP architecture , with the HIP_ARCH flag. In the examples below we use HIP_ARCH=gfx908 to build the MFEM serial and parallel versions for gfx908 (MI100). Build the serial HIP version of MFEM: ~/mfem> make hip HIP_ARCH=gfx908 -j Build the parallel HIP version of MFEM: ~/mfem> make phip HIP_ARCH=gfx908 -j To use hypre with HIP support in MFEM, follow the instructions above but configure it with the following command, specifying the HIP architecture: ~/hypre-2.26.0/src> ./configure --with-hip --with-gpu-arch=\"gfx908\" --disable-fortran", "title": "HIP version of MFEM"}, {"location": "building/#installing-mfem-with-spack", "text": "If Spack is already available on your system and is visible in your PATH , you can install the MFEM software simply with: spack install mfem To enable package testing during the build process, use instead: spack install -v --test=all mfem If you don't have Spack, you can download it and install MFEM with the following commands: git clone https://github.com/spack/spack.git cd spack ./bin/spack install -v mfem", "title": "Installing MFEM with Spack"}, {"location": "coefficient/", "text": "Coefficients Coefficient objects serve many purposes within MFEM. As the name suggests they often represent the material coefficients appearing in partial differential equations. However, Coefficients can also be used to specify initial conditions, boundary conditions, exact solutions, etc.. Coefficients come in three varieties; scalar-valued, vector-valued, and matrix-valued. The primary purpose of any Coefficient class is to define an Eval method which returns a scalar, vector, or matrix given an element and a location within that element expressed as a point in reference space i.e. an IntegrationPoint . Coefficients can also be time dependent. Time is treated as a parameter which changes infrequently by passing the current time though a SetTime(t) method. A Coefficient's Eval method depends on not only the position within an element but also on the element attribute number which allows the Coefficient to return different results from different regions of the domain or boundary. This can be a powerful feature but it can lead to unexpected results. As a rule domain integrals will have access to element attributes and boundary integrals will access the boundary attributes. This seems obvious but there may be cases where the outcome is not so clear cut and careful thought is required. It is important to know when a Coefficient will be accessed, particularly in the case of time-dependent or field-dependent coefficients. When used with GridFunction::Project , GridFunction::ComputeL2Error , and other GridFunction methods the Coefficient is used immediately. When used in BilinearForm and LinearForm objects the coefficients are only accessed during calls to the Assemble methods. An important side note is that GridFunction and LinearForm objects will overwrite their values during such calls but a BilinearForm will not. 
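The minimal sketch below (written for illustration, assuming a low-order H1 space on a small Cartesian mesh) shows where these evaluations happen in practice: the coefficient is evaluated immediately inside ProjectCoefficient, but only inside Assemble for the bilinear form, and the Update call whose purpose is explained next keeps the assembled matrix from accumulating across time steps.

```cpp
#include "mfem.hpp"
using namespace mfem;

// Time-dependent scalar coefficient: f(x,t) = (1 + t) * x_0
double f(const Vector &x, double t) { return (1.0 + t) * x(0); }

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(4, 4, Element::QUADRILATERAL);
   H1_FECollection fec(1, mesh.Dimension());
   FiniteElementSpace fes(&mesh, &fec);

   FunctionCoefficient coef(f);   // accepts a function of (x, t)
   GridFunction u(&fes);
   BilinearForm m(&fes);
   m.AddDomainIntegrator(new MassIntegrator(coef));

   for (double t = 0.0; t <= 1.0; t += 0.5)
   {
      coef.SetTime(t);
      u.ProjectCoefficient(coef);  // coefficient evaluated here, immediately
      m.Update();                  // reset the stored matrix before re-assembly
      m.Assemble();                // coefficient evaluated here, at the current time
   }
   return 0;
}
```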
Consequently, when using a time-dependent coefficient with a BilinearForm object it is crucial that the user calls BilinearForm::Update to reset the internally stored matrix to zero before calling BilinearForm::Assemble . Otherwise the new matrix entries will be added to the previous values leading to odd behavior. Scalar Coefficients Basic Scalar Coefficients Class Name Description ConstantCoefficient Returns a constant value: $\\alpha$ FunctionCoefficient Computes a value from a standard function, $f(\\vec{x},t)$, or a lambda expression PWConstCoefficient Returns different constants based e.g. on element attribute GridFunctionCoefficient Returns values interpolated from a scalar-valued GridFunction : $u(\\vec{x})$ DivergenceGridFunctionCoefficient Returns the divergence of a vector-valued GridFunction : $\\nabla\\cdot\\vec{u}$ DeltaCoefficient A weighted Dirac delta function: $s\\,w(\\vec{x},t)\\,T(t)\\,\\delta(\\vec{x}-\\vec{x}_c)$ Derived Scalar Coefficients These classes provide a means of creating functions of existing coefficients. In performance critical situations it would clearly be preferable to write specialized Coefficient classes but these offer a quick and, hopefully, easy to use alternative. Class Name Formula TransformedCoefficient $T(Q_1(\\vec{x},t))\\mbox{ or }T(Q_1(\\vec{x},t),Q_2(\\vec{x},t))$ RestrictedCoefficient $Q(\\vec{x})\\,\\forall a\\in A, 0\\mbox{ otherwise}$ SumCoefficient $\\alpha\\,Q_1(\\vec{x}) + \\beta\\,Q_2(\\vec{x})$ ProductCoefficient $Q_1(\\vec{x})\\,Q_2(\\vec{x})$ PowerCoefficient $Q(\\vec{x})^p$ InnerProductCoefficient $\\vec{Q}_1\\cdot\\vec{Q}_2$ VectorRotProductCoefficient $\\vec{Q}_1\\times\\vec{Q}_2\\mbox{ in }\\mathbb{R}^2$ DeterminantCoefficient $|\\overleftrightarrow{Q}|$ Vector Coefficients Basic Vector Coefficients Class Name Description VectorConstantCoefficient Returns a constant vector value: $\\vec{\\alpha}$ VectorFunctionCoefficient Computes a value from a standard function, $\\vec{f}(\\vec{x})$, or a lambda expression VectorGridFunctionCoefficient Returns values interpolated from a vector-valued GridFunction : $\\vec{u}(\\vec{x})$ GradientGridFunctionCoefficient Returns the gradient of a scalar-valued GridFunction : $\\nabla u(\\vec{x})$ CurlGridFunctionCoefficient Returns the curl of a vector-valued GridFunction : $\\nabla\\times\\vec{u}(\\vec{x})$ VectorDeltaCoefficient $s\\,\\vec{\\alpha}\\,\\delta(\\vec{x}-\\vec{x}_c)$ Derived Vector Coefficients Again these classes provide a means of creating functions of existing coefficients. Class Name Formula VectorArrayCoefficient Construct a vector value from an array of scalar coefficients: $\\vec{Q}_a$ VectorRestrictedCoefficient $\\vec{Q}(\\vec{x})\\,\\forall a\\in A, 0\\mbox{ otherwise}$ VectorSumCoefficient $\\alpha\\,\\vec{Q}_1(\\vec{x}) + \\beta\\,\\vec{Q}_2(\\vec{x})$ ScalarVectorProductCoefficient $Q_1\\,\\vec{Q}_2$ VectorCrossProductCoefficient $\\vec{Q}_1\\times\\vec{Q}_2$ MatVecCoefficient $\\overleftrightarrow{Q}_1\\cdot\\vec{Q}_2$ Matrix Coefficients Basic Matrix Coefficients Class Name Description MatrixConstantCoefficient Returns a constant matrix value: $\\overleftrightarrow{\\alpha}$ MatrixFunctionCoefficient Computes a value from a standard function, $\\overleftrightarrow{f}$, or a lambda expression IdentityMatrixCoefficient Returns the identity matrix of the appropriate dimension: $\\overleftrightarrow{I}$ Derived Matrix Coefficients Again these classes provide a means of creating functions of existing coefficients. 
Class Name Formula MatrixArrayCoefficient Construct a matrix value from an array of scalar coefficients: $\\overleftrightarrow{Q}_a$ MatrixRestrictedCoefficient $\\overleftrightarrow{Q}(\\vec{x})\\,\\forall a\\in A, 0\\mbox{ otherwise}$ MatrixSumCoefficient $\\alpha\\,\\overleftrightarrow{Q}_1(\\vec{x}) + \\beta\\,\\overleftrightarrow{Q}_2(\\vec{x})$ ScalarMatrixProductCoefficient $Q_1\\,\\overleftrightarrow{Q}_2$ TransposeMatrixCoefficient $\\overleftrightarrow{Q}^T$ InverseMatrixCoefficient $\\overleftrightarrow{Q}^{-1}$ OuterProductCoefficient $\\vec{Q}_1\\otimes\\vec{Q}_2$ MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Coefficients"}, {"location": "coefficient/#coefficients", "text": "Coefficient objects serve many purposes within MFEM. As the name suggests they often represent the material coefficients appearing in partial differential equations. However, Coefficients can also be used to specify initial conditions, boundary conditions, exact solutions, etc.. Coefficients come in three varieties; scalar-valued, vector-valued, and matrix-valued. The primary purpose of any Coefficient class is to define an Eval method which returns a scalar, vector, or matrix given an element and a location within that element expressed as a point in reference space i.e. an IntegrationPoint . Coefficients can also be time dependent. Time is treated as a parameter which changes infrequently by passing the current time though a SetTime(t) method. A Coefficient's Eval method depends on not only the position within an element but also on the element attribute number which allows the Coefficient to return different results from different regions of the domain or boundary. This can be a powerful feature but it can lead to unexpected results. As a rule domain integrals will have access to element attributes and boundary integrals will access the boundary attributes. This seems obvious but there may be cases where the outcome is not so clear cut and careful thought is required. It is important to know when a Coefficient will be accessed, particularly in the case of time-dependent or field-dependent coefficients. When used with GridFunction::Project , GridFunction::ComputeL2Error , and other GridFunction methods the Coefficient is used immediately. When used in BilinearForm and LinearForm objects the coefficients are only accessed during calls to the Assemble methods. An important side note is that GridFunction and LinearForm objects will overwrite their values during such calls but a BilinearForm will not. Consequently, when using a time-dependent coefficient with a BilinearForm object it is crucial that the user calls BilinearForm::Update to reset the internally stored matrix to zero before calling BilinearForm::Assemble . Otherwise the new matrix entries will be added to the previous values leading to odd behavior.", "title": "Coefficients"}, {"location": "coefficient/#scalar-coefficients", "text": "", "title": "Scalar Coefficients"}, {"location": "coefficient/#basic-scalar-coefficients", "text": "Class Name Description ConstantCoefficient Returns a constant value: $\\alpha$ FunctionCoefficient Computes a value from a standard function, $f(\\vec{x},t)$, or a lambda expression PWConstCoefficient Returns different constants based e.g. 
on element attribute GridFunctionCoefficient Returns values interpolated from a scalar-valued GridFunction : $u(\\vec{x})$ DivergenceGridFunctionCoefficient Returns the divergence of a vector-valued GridFunction : $\\nabla\\cdot\\vec{u}$ DeltaCoefficient A weighted Dirac delta function: $s\\,w(\\vec{x},t)\\,T(t)\\,\\delta(\\vec{x}-\\vec{x}_c)$", "title": "Basic Scalar Coefficients"}, {"location": "coefficient/#derived-scalar-coefficients", "text": "These classes provide a means of creating functions of existing coefficients. In performance critical situations it would clearly be preferable to write specialized Coefficient classes but these offer a quick and, hopefully, easy to use alternative. Class Name Formula TransformedCoefficient $T(Q_1(\\vec{x},t))\\mbox{ or }T(Q_1(\\vec{x},t),Q_2(\\vec{x},t))$ RestrictedCoefficient $Q(\\vec{x})\\,\\forall a\\in A, 0\\mbox{ otherwise}$ SumCoefficient $\\alpha\\,Q_1(\\vec{x}) + \\beta\\,Q_2(\\vec{x})$ ProductCoefficient $Q_1(\\vec{x})\\,Q_2(\\vec{x})$ PowerCoefficient $Q(\\vec{x})^p$ InnerProductCoefficient $\\vec{Q}_1\\cdot\\vec{Q}_2$ VectorRotProductCoefficient $\\vec{Q}_1\\times\\vec{Q}_2\\mbox{ in }\\mathbb{R}^2$ DeterminantCoefficient $|\\overleftrightarrow{Q}|$", "title": "Derived Scalar Coefficients"}, {"location": "coefficient/#vector-coefficients", "text": "", "title": "Vector Coefficients"}, {"location": "coefficient/#basic-vector-coefficients", "text": "Class Name Description VectorConstantCoefficient Returns a constant vector value: $\\vec{\\alpha}$ VectorFunctionCoefficient Computes a value from a standard function, $\\vec{f}(\\vec{x})$, or a lambda expression VectorGridFunctionCoefficient Returns values interpolated from a vector-valued GridFunction : $\\vec{u}(\\vec{x})$ GradientGridFunctionCoefficient Returns the gradient of a scalar-valued GridFunction : $\\nabla u(\\vec{x})$ CurlGridFunctionCoefficient Returns the curl of a vector-valued GridFunction : $\\nabla\\times\\vec{u}(\\vec{x})$ VectorDeltaCoefficient $s\\,\\vec{\\alpha}\\,\\delta(\\vec{x}-\\vec{x}_c)$", "title": "Basic Vector Coefficients"}, {"location": "coefficient/#derived-vector-coefficients", "text": "Again these classes provide a means of creating functions of existing coefficients. Class Name Formula VectorArrayCoefficient Construct a vector value from an array of scalar coefficients: $\\vec{Q}_a$ VectorRestrictedCoefficient $\\vec{Q}(\\vec{x})\\,\\forall a\\in A, 0\\mbox{ otherwise}$ VectorSumCoefficient $\\alpha\\,\\vec{Q}_1(\\vec{x}) + \\beta\\,\\vec{Q}_2(\\vec{x})$ ScalarVectorProductCoefficient $Q_1\\,\\vec{Q}_2$ VectorCrossProductCoefficient $\\vec{Q}_1\\times\\vec{Q}_2$ MatVecCoefficient $\\overleftrightarrow{Q}_1\\cdot\\vec{Q}_2$", "title": "Derived Vector Coefficients"}, {"location": "coefficient/#matrix-coefficients", "text": "", "title": "Matrix Coefficients"}, {"location": "coefficient/#basic-matrix-coefficients", "text": "Class Name Description MatrixConstantCoefficient Returns a constant matrix value: $\\overleftrightarrow{\\alpha}$ MatrixFunctionCoefficient Computes a value from a standard function, $\\overleftrightarrow{f}$, or a lambda expression IdentityMatrixCoefficient Returns the identity matrix of the appropriate dimension: $\\overleftrightarrow{I}$", "title": "Basic Matrix Coefficients"}, {"location": "coefficient/#derived-matrix-coefficients", "text": "Again these classes provide a means of creating functions of existing coefficients. 
Class Name Formula MatrixArrayCoefficient Construct a matrix value from an array of scalar coefficients: $\\overleftrightarrow{Q}_a$ MatrixRestrictedCoefficient $\\overleftrightarrow{Q}(\\vec{x})\\,\\forall a\\in A, 0\\mbox{ otherwise}$ MatrixSumCoefficient $\\alpha\\,\\overleftrightarrow{Q}_1(\\vec{x}) + \\beta\\,\\overleftrightarrow{Q}_2(\\vec{x})$ ScalarMatrixProductCoefficient $Q_1\\,\\overleftrightarrow{Q}_2$ TransposeMatrixCoefficient $\\overleftrightarrow{Q}^T$ InverseMatrixCoefficient $\\overleftrightarrow{Q}^{-1}$ OuterProductCoefficient $\\vec{Q}_1\\otimes\\vec{Q}_2$ MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Derived Matrix Coefficients"}, {"location": "dox/", "text": "", "title": "Doxygen"}, {"location": "electromagnetics/", "text": "Electromagnetics Mini Applications $\\newcommand{\\A}{\\vec{A}}\\newcommand{\\B}{\\vec{B}} \\newcommand{\\D}{\\vec{D}}\\newcommand{\\E}{\\vec{E}} \\newcommand{\\H}{\\vec{H}}\\newcommand{\\J}{\\vec{J}} \\newcommand{\\M}{\\vec{M}}\\newcommand{\\P}{\\vec{P}} \\newcommand{\\F}{\\vec{F}} \\newcommand{\\dd}[2]{\\frac{\\partial #1}{\\partial #2}} \\newcommand{\\cross}{\\times}\\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot}\\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla}$ The miniapps/electromagnetics directory contains a collection of electromagnetic miniapps based on MFEM. Compared to the example codes , the miniapps are more complex, demonstrating more advanced usage of the library. They are intended to be more representative of MFEM-based application codes. We recommend that new users start with the example codes before moving to the miniapps. The current electromagnetic miniapps are described below. Electromagnetics The equations describing electromagnetic phenomena are known collectively as the Maxwell Equations. They are usually given as: $$\\begin{align} \\curl\\H - \\dd{\\D}{t} & = \\J \\label{ampere} \\\\ \\curl\\E + \\dd{\\B}{t} & = 0 \\label{faraday} \\\\ \\div\\D & = \\rho \\label{gauss} \\\\ \\div\\B & = 0 \\label{divb} \\end{align}$$ Where equation \\eqref{ampere} can be referred to as Amp\u00e8re's Law , equation \\eqref{faraday} is called Faraday's Law , equation \\eqref{gauss} is Gauss's Law , and equation \\eqref{divb} doesn't generally have a name but is related to the nonexistence of magnetic monopoles. The various fields in these equations are: Symbol Name SI Units $\\H$ magnetic field Ampere/meter $\\B$ magnetic flux density Tesla $\\E$ electric field Volt/meter $\\D$ electric displacement Coulomb/meter$^2$ $\\J$ current density Ampere/meter$^2$ $\\rho$ charge density Coulomb/meter$^3$ In the literature these names do vary, particularly those for $\\H$ and $\\B$, but in this document we will try to adhere to the convention laid out above. Generally we also need constitutive relations between $\\E$ and $\\D$ and/or between $\\H$ and $\\B$. These relations start with the definitions: $$\\begin{align} \\D & = \\epsilon_0\\E + \\P \\label{const_d} \\\\ \\B & = \\mu_0(\\H + \\M) \\label{const_b} \\end{align}$$ Where $\\P$ is the polarization density , and $\\M$ is the magnetization . Also, $\\epsilon_0$ is the permittivity of free space and $\\mu_0$ is the permeability of free space which are both constants of nature. In many common materials the polarization density can be approximated as a scalar multiple of the electric field, i.e., $\\P = \\epsilon_0\\chi\\E$, where $\\chi$ is called the electric susceptibility . 
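Substituting this expression for $\\P$ into equation \\eqref{const_d} makes the step explicit: $$\\D = \\epsilon_0\\E + \\epsilon_0\\chi\\E = \\epsilon_0(1 + \\chi)\\E. \\nonumber$$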
In such cases we usually use the relation $\\D = \\epsilon\\E$ with $\\epsilon = \\epsilon_0(1 + \\chi)$ and call $\\epsilon$ the permittivity of the material. The nature of magnetization is more complicated but we will take a very simplified view which is valid in many situations. Specifically, we will assume that either $\\M$ is proportional to $\\H$ yielding the relation $\\B = \\mu\\H$ where $\\mu = \\mu_0(1 + \\chi_M)$ and $\\chi_M$ is the magnetic susceptibility or that $\\M$ is independent of the applied field. The former case pertains to both diamagnetic and paramagnetic materials and the latter to ferromagnetic materials. Finally we should note that equations \\eqref{ampere} and \\eqref{gauss} can be combined to yield the equation of charge continuity $\\dd{\\rho}{t} + \\div\\J = 0$ which can be important in plasma physics and magnetohydrodynamics (MHD). Electrostatics Electrostatic problems come in a variety of subtypes but they all derive from Gauss's Law and Faraday's Law (equations \\eqref{gauss} and \\eqref{faraday}). When we assume no time variation, Faraday's Law becomes simply $\\curl\\E = 0$. This suggests that the electric field can be expressed as the gradient of a scalar field which is traditionally taken to be $-\\varphi$, i.e. $$\\E = -\\grad\\varphi \\label{gradphi}$$ where $\\varphi$ is called the electric potential and has units of Volts in the SI system. Inserting this definition into equation \\eqref{gauss} gives: $$-\\div\\epsilon\\grad\\varphi = \\rho - \\div\\P \\label{poisson}$$ which is Poisson's equation for the electric potential, where we have assumed a linear constitutive relation between $\\D$ and $\\E$ of the form $\\D = \\epsilon\\E + \\P$. This allows a polarization which is proportional to $\\E$ as well as a polarization independent of $\\E$. If this relation happens to be nonlinear then Poisson's equation would need to be replaced with a more complicated nonlinear expression. The solutions to equation \\eqref{poisson} are non unique because they can be shifted by any additive constant. This means that we must apply a Dirichlet boundary condition at least at one point in the problem domain in order to obtain a solution. Typically this point will be on the boundary but it need not be so. Such a Dirichlet value is equivalent to fixing the voltage (a.k.a. potential) at one or more locations. Additionally, this equation admits a normal derivative boundary condition. This corresponds to setting $\\hat{n}\\cdot\\D$ to a prescribed value on some portion of the boundary. This is equivalent to defining a surface charge density on that portion of the boundary. Volta Mini Application The electrostatics mini application, named volta after the inventor of the voltaic pile , is intended to demonstrate how to solve standard electrostatics problems in MFEM. Its source terms and boundary conditions are simple but they should indicate how more specialized sources or boundary conditions could be implemented. Note that this application assumes the mesh coordinates are given in meters. Mini Application Features Permittivity: The permittivity, $\\epsilon$, is assumed to be that of free space except for an optional sphere of dielectric material which can be defined by the user. The command line option -ds can be used to set the parameters for this dielectric sphere. For example, to produce a sphere at the origin with a radius of 0.5 and a relative permittivity of 3 the user would specify: -ds '0 0 0 0.5 3' . 
Charge Density: The charge density, $\\rho$, is assumed to be zero except for an optional sphere of uniform charge density which can be defined by the user. The command line option for this is -cs which follows the same pattern as the dielectric sphere. Note that the last entry is the total charge of the sphere and not its charge density. Polarization: A polarization vector function, $\\P$, can be imposed as a source of the electric field. The command line option -vp creates a polarization due to a simple voltaic pile, i.e., a cylinder which is electrically polarized along its axis. The user should specify the two end points of the cylinder axis, its radius and the magnitude of the polarization vector. Dirichlet BC: Dirichlet boundary conditions can either specify piecewise constant voltages on a collection of surfaces or they can specify a gradient field which approximates a uniform applied electric field. In either case the user specifies the surfaces where the Dirichlet boundary condition should be applied using the -dbcs option followed by a list of boundary attributes. For example to select surfaces 2, 3, and 4 the user would use the following: -dbcs '2 3 4' . To apply a gradient field on these surfaces the user would also use the -dbcg option. This defaults to the uniform field $\\E = (0,0,1)$ in 3D or $\\E = (0,1)$ in 2D. An arbitrary vector can be specified with -uebc followed by the desired vector, e.g., to apply $\\E = (1,2,3)$ the user would supply: -uebc '1 2 3' . To specify piecewise constant potential values the user would list the desired values after -dbcv as follows: -dbcv '0.0 1.0 -1.0' . Neumann BC: Neumann boundary conditions set the normal component of the electric displacement on portions of the boundary. This normal component is equivalent to the surface charge density on the surface. This is rarely used because surface charge densities are rarely known unless they are known to be zero. However, if the surface charge density is zero then the Neumann BCs are not needed because this is the natural boundary condition. Only piecewise constant Neumann boundary conditions are supported. They can be set analogously to piecewise Dirichlet boundary conditions but using options -nbcs and -nbcv . Magnetostatics Magnetostatic problems arise when we assume no time variation in Amp\u00e8re's Law \\eqref{ampere} which leads to: $$\\curl\\H = \\J \\nonumber$$ We will again assume a somewhat more general constitutive relation between $\\H$ and $\\vec{B}$ than is normally seen: $$\\B = \\mu\\H + \\mu_0\\M = \\mu_0(1 + \\chi_M)\\H + \\mu_0\\M \\nonumber$$ Where the magnetization is split into two portions; one which is proportional to $\\H$ and given by $\\chi_M\\H$, and another which is independent of $\\H$ and is given by $\\M$. This allows for paramagnetic and/or diamagnetic materials defined through $\\mu$ as well as ferromagnetic materials represented by $\\M$. This choice yields: $$\\curl\\mu^{-1}\\B = \\J + \\curl\\mu^{-1}\\mu_0\\M \\nonumber$$ Which, when combined with equation \\eqref{divb}, becomes: $$\\curl\\mu^{-1}\\curl\\A = \\J + \\curl\\mu^{-1}\\mu_0\\M $$ If $\\J$ happens to be zero we have another option because we can assume that $\\H = -\\grad\\varphi_M$ for some scalar potential $\\varphi_M$. When combined with equation \\eqref{divb} this leads to: $$\\div\\mu\\grad\\varphi_M = \\div\\mu_0\\M $$ Currently only the vector potential equation is used so we will focus on that for the remainder of this document. 
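To make the discretization concrete, the sketch below shows how the weak form of the vector potential equation, $(\\mu^{-1}\\curl\\A, \\curl\\A') = (\\J, \\A')$, might be assembled on a Nedelec (ND) space. This is an illustrative sketch, not the tesla miniapp itself: the constant free-space permeability and the current density function are placeholders, the magnetization source term is omitted, and a robust solve of the resulting semi-definite system requires a divergence-free $\\J$ and a preconditioner such as hypre's AMS.

```cpp
#include "mfem.hpp"
#include <cmath>
using namespace mfem;

// Placeholder divergence-free current density (profile and units are
// illustrative only -- the miniapp builds its own ring-current source).
void J_func(const Vector &x, Vector &J)
{
   J.SetSize(3);
   J = 0.0;
   J(2) = 1.0; // constant z-directed current density
}

int main()
{
   Mesh mesh = Mesh::MakeCartesian3D(8, 8, 8, Element::HEXAHEDRON);
   int order = 1;
   ND_FECollection fec(order, mesh.Dimension());
   FiniteElementSpace fes(&mesh, &fec);

   // 1/mu, here the free-space value as a placeholder.
   ConstantCoefficient muInv(1.0 / (4.0e-7 * M_PI));
   VectorFunctionCoefficient JCoef(3, J_func);

   // Bilinear form (mu^{-1} curl A, curl A') and linear form (J, A').
   BilinearForm a(&fes);
   a.AddDomainIntegrator(new CurlCurlIntegrator(muInv));
   a.Assemble();

   LinearForm b(&fes);
   b.AddDomainIntegrator(new VectorFEDomainLFIntegrator(JCoef));
   b.Assemble();

   // Homogeneous Dirichlet BC: zero tangential A on the whole boundary,
   // so the normal component of B vanishes there.
   GridFunction A_gf(&fes);
   A_gf = 0.0;
   Array<int> ess_bdr(mesh.bdr_attributes.Max()), ess_tdofs;
   ess_bdr = 1;
   fes.GetEssentialTrueDofs(ess_bdr, ess_tdofs);

   OperatorPtr A;
   Vector X, B;
   a.FormLinearSystem(ess_tdofs, A_gf, b, A, X, B);
   // X would now be solved for, e.g. with CG plus an AMS-type preconditioner.
   return 0;
}
```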
The vector potential is again non unique so we must apply additional constraints in order to arrive at a solution for $\\A$. When working analytically it is common to constrain the solution by restricting the divergence of $\\A$ but numerically this leads to other complications. For our problems of interest it will be necessary to require Dirichlet boundary conditions on the entire outer surface in order to sufficiently constrain the solution. Dirichlet boundary conditions for the vector potential on a surface provide a means to specify the component of $\\B$ normal to that surface. For example, setting the tangential components of $\\A$ to be zero on a particular surface results in a magnetic flux density which must be tangent to that surface. Tesla Mini Application The magnetostatics mini application, named tesla after the unit of magnetic field strength (and of course the man Nikola Tesla), is intended to demonstrate how to solve standard magnetostatics problems in MFEM. Its source terms and boundary conditions are simple but they should indicate how more specialized sources of boundary conditions could be implemented. A detailed overview of the equations being solved and their discretization can be found here: Tesla Theory Notes . Note that this application assumes the mesh coordinates are given in meters. Mini Application Features Permeability: The permeability, $\\mu$, is assumed to be that of free space except for an optional spherical shell of diamagnetic or paramagnetic material which can be defined by the user. The command line option -ms can be used to set the parameters for this shell. For example, to produce a shell at the origin with inner and outer radii of 0.4 and 0.5 respectively and a relative permeability of 3 the user would specify: -ms '0 0 0 0.4 0.5 3' . Current Density: The current density, $\\J$, is assumed to be zero except for an optional ring of constant current which can be defined by the user. The command line option for this is -cr which requires two points giving the end points of the ring's axis, inner and outer radii, and a constant total current. For example, to specify a ring centered at the origin and laying in the XY plane with a thickness of 0.2 and radii 0.4 and 0.5, and a current of 2 amps the user would give: -cr 0 0 -0.1 0 0 0.1 0.4 0.5 2 . Magnetization: A permanent magnetization, $\\M$, can be applied in the form of a cylindrical magnet with poles at its circular ends. The command line option is -bm which indicates a 'bar magnet'. The option requires the two end points of the cylinder's axis, its radius, and the magnitude of the magnetization. Surface Current Density: A surface current can be imposed indirectly by specifying separate surface patches with different voltages as well as a collection of surface patches connecting the voltages through which the current will flow. The voltage surfaces and their voltages can be specified using -vbcs followed by the indices of the surfaces and -vbcv followed by their voltages. The path for the surface current ($\\vec{K}$) is specified by using -kbcs followed by a set of surface indices. For example, applying voltages 1 and -1 to surfaces 2 and 3 with a current path along surfaces 4 and 6 would be specified as: -vbcs '2 3' -vbcv '1 -1' -kbcs '4 6' . Any surfaces not listed as voltage or current surfaces will be assigned as homogeneous Dirichlet boundaries. Note that when this option is selected an auxiliary electrostatic problem will be solved on the surface of the geometry to compute the surface current. 
Dirichlet BC: Dirichlet boundary conditions are required if a surface current density is not defined. For this reason the user need not specify boundary surfaces by number since the boundary condition must be applied on all of them. The default boundary condition is a homogeneous Dirichlet boundary condition on all outer surfaces. This means that the normal component of $\\B$ will be zero at the outer boundary. An alternative is to specify a desired uniform magnetic flux density on the entire outer surface. This is accomplished with the -ubbc command line option followed by the desired $\\B$ vector. Transient Full-Wave Electromagnetics Transient electromagnetics problems are governed by the time-dependent Maxwell equations \\eqref{ampere} and \\eqref{faraday} when combined using the constitutive relations \\eqref{const_d} and \\eqref{const_b}. When combined these equations can describe the evolution and propagation of electromagnetic waves. $$\\begin{align} \\dd{(\\epsilon\\E)}{t} & = \\curl(\\mu^{-1}\\B) - \\sigma \\E - \\J \\\\ \\dd{\\B}{t} & = - \\curl\\E \\end{align}$$ The term $\\sigma\\E$ arises in the presence of electrically conductive materials where the electric field induces a current which can be separated from $\\J$. In such cases the total current appearing in Amp\u00e8re's Law \\eqref{ampere} can be expressed as the sum of an applied current (also labeled as $\\J$) and an induced current $\\sigma\\E$. Solving these equations requires initial conditions for both the electric and magnetic fields $\\E$ and $\\B$ as well as boundary conditions related to the tangential components of $\\E$ or $\\H$. Other formulations are possible such as evolving $\\H$ and $\\D$ or the potentials $\\varphi$ and $\\A$. This system of equations can also be written as a single second order equation involving only $\\E$, $\\H$, $\\varphi$, or $\\A$. Each of these formulations has a different set of sources, initial and boundary conditions for which it is well-suited. The choice we make here is perhaps the most common but it may not be the most convenient choice for a given application. These equations can be used to evolve their initial conditions or they can be driven by either a current source or through time-varying boundary conditions. It is also possible to combine all three of these sources in a single simulation. Maxwell Mini Application The electrodynamics mini application, named maxwell after James Clerk Maxwell who first formulated the classical theory of electromagnetic radiation, is intended to demonstrate how to solve transient wave problems in MFEM. Its source terms and boundary conditions are simple but they should indicate how more specialized sources or boundary conditions could be implemented. A detailed overview of the equations being solved and their discretization can be found here: Maxwell Theory Notes . An example simulation is depicted below (click to animate the wave propagation). Time integration is handled by a variable order symplectic time integration algorithm. This algorithm is designed for systems of equations which are derived from a Hamiltonian and it helps to ensure energy conservation within some tolerance. The time step used during integration is automatically chosen based on the largest stable time step as computed from the largest eigenvalue of the update equations. This determination involves a user-adjustable factor which creates a safety margin. By default the actual time step is less than 95% of the estimate for the largest stable time step. 
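The general idea behind such an estimate can be sketched with a simple power iteration: approximate the largest eigenvalue of the (symmetric) update operator and scale the resulting stability limit by a safety factor. The snippet below only illustrates that idea with a generic mfem::Operator and a stand-in 2x2 matrix; it is not the maxwell miniapp's actual implementation, and the stability constant depends on the particular scheme.

```cpp
#include "mfem.hpp"
#include <cmath>
#include <iostream>
using namespace mfem;

// Estimate the largest eigenvalue of a symmetric operator by power iteration
// and derive a time step with a safety factor (0.95 here, mirroring the text).
double EstimateStableTimeStep(const Operator &op, double safety = 0.95)
{
   Vector v(op.Height()), w(op.Height());
   v.Randomize(1);
   v /= v.Norml2();
   double lambda = 0.0;
   for (int it = 0; it < 50; it++)
   {
      op.Mult(v, w);
      lambda = w.Norml2();          // estimate of the largest eigenvalue
      if (lambda == 0.0) { break; }
      v = w;
      v /= lambda;
   }
   // For a leapfrog/symplectic update the stability limit scales like
   // 2/sqrt(lambda_max); the exact constant depends on the scheme.
   return safety * 2.0 / std::sqrt(lambda);
}

int main()
{
   // Stand-in operator: a small SPD matrix playing the role of the update matrix.
   DenseMatrix A(2);
   A(0,0) = 4.0; A(0,1) = 1.0;
   A(1,0) = 1.0; A(1,1) = 3.0;
   std::cout << "dt <= " << EstimateStableTimeStep(A) << std::endl;
   return 0;
}
```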
Note that this application assumes the mesh coordinates are given in meters. Internally the code assumes time is in seconds but the command line options use nanoseconds for convenience. Mini Application Features Time Evolution: The initial and final times for the simulation can be specified, in nanoseconds, with the -ti and -tf options. Visualization snapshots of data will be written out after time intervals specified by -ts which again given in nanoseconds. The order of the time integration can be specified, from 1 to 4, using the -to option. Permittivity: The permittivity, $\\epsilon$, is assumed to be that of free space except for an optional sphere of dielectric material which can be defined by the user. The command line option -ds can be used to set the parameters for this dielectric sphere. For example, to produce a sphere at the origin with a radius of 0.5 and a relative permittivity of 3 the user would specify: -ds '0 0 0 0.5 3' . Permeability: The permeability, $\\mu$, is assumed to be that of free space except for an optional spherical shell of diamagnetic or paramagnetic material which can be defined by the user. The command line option -ms can be used to set the parameters for this shell. For example, to produce a shell at the origin with inner and outer radii of 0.4 and 0.5 respectively and a relative permeability of 3 the user would specify: -ms '0 0 0 0.4 0.5 3' . Conductivity: The conductivity, $\\sigma$, is assumed to be zero except for an optional sphere of conductive material which can be defined by the user. The command line option -cs can be used to set the parameters for this conductive sphere. For example, to produce a sphere at the origin with a radius of 0.5 and a conductivity of 3,000,000 S/m the user would specify: -cs '0 0 0 0.5 3e6' . Current Density: The current density, $\\J$, is assumed to be zero except for an optional cylinder of pulsed current which can be defined by the user. The command line option for this is -dp , short for 'dipole pulse', which requires two points giving the end points of the cylinder's axis, radius, amplitude ($\\alpha$), pulse center ($\\beta$), and a pulse width ($\\gamma$). The time dependence of this pulse is given by: $$\\J(t) = \\hat{a} \\alpha e^{-(t-\\beta)^2/(2\\gamma^2)}$$ Where $\\hat{a}$ is the unit vector along the cylinder's axis and both $\\beta$ and $\\gamma$ are specified in nanoseconds. Dirichlet BC: Homogeneous Dirichlet boundary conditions, which constrain the tangential components of $\\frac{\\partial\\E}{\\partial t}$ to be zero, can be activated on a portion of the boundary by specifying a list of boundary attributes such as -dbcs '4 8' . For convenience a boundary attribute of '-1' can be used to specify all boundary surfaces. Non-Homogeneous, time-dependent Dirichlet boundary conditions are supported by the Maxwell solver so a user can edit maxwell.cpp and supply their own function if desired. Absorbing BC: A first order Sommerfeld absorbing boundary condition can be applied to a portion of the boundary using the -abcs option along with a list of boundary attributes such as -abcs '4 18' . Again, the special purpose boundary attribute '-1' can be used to specify all boundary surfaces. This boundary condition depends on a coefficient, $\\eta^{-1}=\\sqrt{\\epsilon/\\mu}$, which must be matched to the materials just inside the boundary. 
The code assumes that the permittivity and permeability are those of the vacuum near the surface but, if this is not the case, an ambitious user can replace etaInvCoef_ with a more appropriate function. Transient Magnetics and Joule Heating Joule Mini Application The transient magnetics mini application, named joule after the SI unit of energy (and the scientist James Prescott Joule, who was also a brewer), is intended to demonstrate how to solve transient implicit diffusion problems. The equations of low-frequency electromagnetics are coupled with the equations of heat transfer. The coupling is one way, electromagnetics generates Joule heating, but the heating does not affect the electromagnetics. The thermal problem is solved using an $H(\\mathrm{div})$ method, i.e. temperature is discontinuous and the thermal flux $\\F$ is in $H(\\mathrm{div})$. There are three linear solves per time step: Poisson's equation for the scalar electric potential is solved using the AMG preconditioner, the electric diffusion equation is solved using the AMS preconditioner, and the thermal diffusion equation is solved using the ADS preconditioner. Two example meshes are provided, one is a straight circular metal rod in vacuum, the other is a helical coil in vacuum (the latter is 21MB and can be downloaded from here ). The idea is that a voltage is applied to the ends of the rod/coil, the electric field diffuses into the metal, the metal is heated by Joule heating, the heat diffuses out. The equations are: $$\\begin{align} \\div\\sigma\\grad\\Phi &= 0 \\\\ \\sigma \\E &= \\curl\\mu^{-1} \\B - \\sigma \\grad \\Phi \\\\ \\frac{d \\B}{d t} &= - \\curl \\E \\\\ \\F &= -k \\grad T \\\\ c \\frac{d T}{d t} &= - \\div \\F + \\sigma \\E \\cdot \\E \\end{align}$$ The equations are integrated in time using implicit time integration, either midpoint or higher order SDIRK. Since there are three solves, three sets of boundary conditions must be specified. The essential BC's are the scalar potential, the electric field, and the thermal flux. These are not set via command line arguments, you have to edit the code to change these. To change these, search the code for ess_bdr . There are conducting and non-conducting material regions, and the mesh must have integer attributes to specify these regions. To change these, search the code for std::map this maps the integer attribute to the floating-point material value. Note that this application assumes the mesh coordinates are given in meters. The above picture shows Joule heating of a cylinder using the mesh cylinder-hex.mesh . The cylinder is surrounded by vacuum. The black arrows show the magnetic field $\\B$, the magenta arrows show the heat flux $\\F$, and the pseudocolor in the center of the cylinder shows the temperature. Mini Application Features Boundary Conditions: Since there are three solves, three sets of boundary conditions must be specified. The essential BC's are the voltage for the scalar potential, the tangential electric field, and the normal thermal flux. These are not set via command line arguments, you have to edit the code to change these. To change these, search the code for ess_bdr . Note that the essential BC's can be time varying. Material Properties: There are conducting and non-conducting material regions, and the mesh must have integer attributes to specify these regions. To change these, search the code for std::map this maps the integer attribute to the floating-point material value. 
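As an illustration of that pattern, here is a sketch (not the joule miniapp's exact code) that turns an attribute-to-value std::map into a piecewise-constant MFEM coefficient; the attribute numbers and conductivity values are placeholders, and the mesh file name refers to the cylinder mesh mentioned above (adjust the path as needed).

```cpp
#include "mfem.hpp"
#include <map>
using namespace mfem;

// Build a piecewise-constant coefficient from a map of
// mesh attribute -> material value (placeholder values).
PWConstCoefficient MakeMaterialCoefficient(const Mesh &mesh,
                                           const std::map<int, double> &values,
                                           double default_value)
{
   Vector vals(mesh.attributes.Max());
   vals = default_value;
   for (const auto &kv : values)
   {
      vals(kv.first - 1) = kv.second;  // mesh attributes are 1-based
   }
   return PWConstCoefficient(vals);
}

int main()
{
   Mesh mesh("cylinder-hex.mesh");
   // Hypothetical assignment: attribute 1 = copper-like conductor,
   // attribute 2 = (numerically small) vacuum conductivity.
   std::map<int, double> sigma = { {1, 5.8e7}, {2, 1.0e-6} };
   PWConstCoefficient sigmaCoef = MakeMaterialCoefficient(mesh, sigma, 0.0);
   // sigmaCoef can now be passed to any integrator taking a scalar Coefficient.
   return 0;
}
```

The resulting coefficient can then be handed to, for example, MassIntegrator or DiffusionIntegrator, which keeps the attribute-to-material bookkeeping in one place.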
MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Electromagnetics"}, {"location": "electromagnetics/#electromagnetics-mini-applications", "text": "$\\newcommand{\\A}{\\vec{A}}\\newcommand{\\B}{\\vec{B}} \\newcommand{\\D}{\\vec{D}}\\newcommand{\\E}{\\vec{E}} \\newcommand{\\H}{\\vec{H}}\\newcommand{\\J}{\\vec{J}} \\newcommand{\\M}{\\vec{M}}\\newcommand{\\P}{\\vec{P}} \\newcommand{\\F}{\\vec{F}} \\newcommand{\\dd}[2]{\\frac{\\partial #1}{\\partial #2}} \\newcommand{\\cross}{\\times}\\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot}\\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla}$ The miniapps/electromagnetics directory contains a collection of electromagnetic miniapps based on MFEM. Compared to the example codes , the miniapps are more complex, demonstrating more advanced usage of the library. They are intended to be more representative of MFEM-based application codes. We recommend that new users start with the example codes before moving to the miniapps. The current electromagnetic miniapps are described below.", "title": "Electromagnetics Mini Applications"}, {"location": "electromagnetics/#electromagnetics", "text": "The equations describing electromagnetic phenomena are known collectively as the Maxwell Equations. They are usually given as: $$\\begin{align} \\curl\\H - \\dd{\\D}{t} & = \\J \\label{ampere} \\\\ \\curl\\E + \\dd{\\B}{t} & = 0 \\label{faraday} \\\\ \\div\\D & = \\rho \\label{gauss} \\\\ \\div\\B & = 0 \\label{divb} \\end{align}$$ Where equation \\eqref{ampere} can be referred to as Amp\u00e8re's Law , equation \\eqref{faraday} is called Faraday's Law , equation \\eqref{gauss} is Gauss's Law , and equation \\eqref{divb} doesn't generally have a name but is related to the nonexistence of magnetic monopoles. The various fields in these equations are: Symbol Name SI Units $\\H$ magnetic field Ampere/meter $\\B$ magnetic flux density Tesla $\\E$ electric field Volt/meter $\\D$ electric displacement Coulomb/meter$^2$ $\\J$ current density Ampere/meter$^2$ $\\rho$ charge density Coulomb/meter$^3$ In the literature these names do vary, particularly those for $\\H$ and $\\B$, but in this document we will try to adhere to the convention laid out above. Generally we also need constitutive relations between $\\E$ and $\\D$ and/or between $\\H$ and $\\B$. These relations start with the definitions: $$\\begin{align} \\D & = \\epsilon_0\\E + \\P \\label{const_d} \\\\ \\B & = \\mu_0(\\H + \\M) \\label{const_b} \\end{align}$$ Where $\\P$ is the polarization density , and $\\M$ is the magnetization . Also, $\\epsilon_0$ is the permittivity of free space and $\\mu_0$ is the permeability of free space which are both constants of nature. In many common materials the polarization density can be approximated as a scalar multiple of the electric field, i.e., $\\P = \\epsilon_0\\chi\\E$, where $\\chi$ is called the electric susceptibility . In such cases we usually use the relation $\\D = \\epsilon\\E$ with $\\epsilon = \\epsilon_0(1 + \\chi)$ and call $\\epsilon$ the permittivity of the material. The nature of magnetization is more complicated but we will take a very simplified view which is valid in many situations. Specifically, we will assume that either $\\M$ is proportional to $\\H$ yielding the relation $\\B = \\mu\\H$ where $\\mu = \\mu_0(1 + \\chi_M)$ and $\\chi_M$ is the magnetic susceptibility or that $\\M$ is independent of the applied field. 
The former case pertains to both diamagnetic and paramagnetic materials and the latter to ferromagnetic materials. Finally we should note that equations \\eqref{ampere} and \\eqref{gauss} can be combined to yield the equation of charge continuity $\\dd{\\rho}{t} + \\div\\J = 0$ which can be important in plasma physics and magnetohydrodynamics (MHD).", "title": "Electromagnetics"}, {"location": "electromagnetics/#electrostatics", "text": "Electrostatic problems come in a variety of subtypes but they all derive from Gauss's Law and Faraday's Law (equations \\eqref{gauss} and \\eqref{faraday}). When we assume no time variation, Faraday's Law becomes simply $\\curl\\E = 0$. This suggests that the electric field can be expressed as the gradient of a scalar field which is traditionally taken to be $-\\varphi$, i.e. $$\\E = -\\grad\\varphi \\label{gradphi}$$ where $\\varphi$ is called the electric potential and has units of Volts in the SI system. Inserting this definition into equation \\eqref{gauss} gives: $$-\\div\\epsilon\\grad\\varphi = \\rho - \\div\\P \\label{poisson}$$ which is Poisson's equation for the electric potential, where we have assumed a linear constitutive relation between $\\D$ and $\\E$ of the form $\\D = \\epsilon\\E + \\P$. This allows a polarization which is proportional to $\\E$ as well as a polarization independent of $\\E$. If this relation happens to be nonlinear then Poisson's equation would need to be replaced with a more complicated nonlinear expression. The solutions to equation \\eqref{poisson} are non unique because they can be shifted by any additive constant. This means that we must apply a Dirichlet boundary condition at least at one point in the problem domain in order to obtain a solution. Typically this point will be on the boundary but it need not be so. Such a Dirichlet value is equivalent to fixing the voltage (a.k.a. potential) at one or more locations. Additionally, this equation admits a normal derivative boundary condition. This corresponds to setting $\\hat{n}\\cdot\\D$ to a prescribed value on some portion of the boundary. This is equivalent to defining a surface charge density on that portion of the boundary.", "title": "Electrostatics"}, {"location": "electromagnetics/#volta-mini-application", "text": "The electrostatics mini application, named volta after the inventor of the voltaic pile , is intended to demonstrate how to solve standard electrostatics problems in MFEM. Its source terms and boundary conditions are simple but they should indicate how more specialized sources or boundary conditions could be implemented. Note that this application assumes the mesh coordinates are given in meters.", "title": "Volta Mini Application"}, {"location": "electromagnetics/#mini-application-features", "text": "Permittivity: The permittivity, $\\epsilon$, is assumed to be that of free space except for an optional sphere of dielectric material which can be defined by the user. The command line option -ds can be used to set the parameters for this dielectric sphere. For example, to produce a sphere at the origin with a radius of 0.5 and a relative permittivity of 3 the user would specify: -ds '0 0 0 0.5 3' . Charge Density: The charge density, $\\rho$, is assumed to be zero except for an optional sphere of uniform charge density which can be defined by the user. The command line option for this is -cs which follows the same pattern as the dielectric sphere. Note that the last entry is the total charge of the sphere and not its charge density. 
Polarization: A polarization vector function, $\\P$, can be imposed as a source of the electric field. The command line option -vp creates a polarization due to a simple voltaic pile, i.e., a cylinder which is electrically polarized along its axis. The user should specify the two end points of the cylinder axis, its radius and the magnitude of the polarization vector. Dirichlet BC: Dirichlet boundary conditions can either specify piecewise constant voltages on a collection of surfaces or they can specify a gradient field which approximates a uniform applied electric field. In either case the user specifies the surfaces where the Dirichlet boundary condition should be applied using the -dbcs option followed by a list of boundary attributes. For example to select surfaces 2, 3, and 4 the user would use the following: -dbcs '2 3 4' . To apply a gradient field on these surfaces the user would also use the -dbcg option. This defaults to the uniform field $\\E = (0,0,1)$ in 3D or $\\E = (0,1)$ in 2D. An arbitrary vector can be specified with -uebc followed by the desired vector, e.g., to apply $\\E = (1,2,3)$ the user would supply: -uebc '1 2 3' . To specify piecewise constant potential values the user would list the desired values after -dbcv as follows: -dbcv '0.0 1.0 -1.0' . Neumann BC: Neumann boundary conditions set the normal component of the electric displacement on portions of the boundary. This normal component is equivalent to the surface charge density on the surface. This is rarely used because surface charge densities are rarely known unless they are known to be zero. However, if the surface charge density is zero then the Neumann BCs are not needed because this is the natural boundary condition. Only piecewise constant Neumann boundary conditions are supported. They can be set analogously to piecewise Dirichlet boundary conditions but using options -nbcs and -nbcv .", "title": "Mini Application Features"}, {"location": "electromagnetics/#magnetostatics", "text": "Magnetostatic problems arise when we assume no time variation in Amp\u00e8re's Law \\eqref{ampere} which leads to: $$\\curl\\H = \\J \\nonumber$$ We will again assume a somewhat more general constitutive relation between $\\H$ and $\\vec{B}$ than is normally seen: $$\\B = \\mu\\H + \\mu_0\\M = \\mu_0(1 + \\chi_M)\\H + \\mu_0\\M \\nonumber$$ Where the magnetization is split into two portions; one which is proportional to $\\H$ and given by $\\chi_M\\H$, and another which is independent of $\\H$ and is given by $\\M$. This allows for paramagnetic and/or diamagnetic materials defined through $\\mu$ as well as ferromagnetic materials represented by $\\M$. This choice yields: $$\\curl\\mu^{-1}\\B = \\J + \\curl\\mu^{-1}\\mu_0\\M \\nonumber$$ Which, when combined with equation \\eqref{divb}, becomes: $$\\curl\\mu^{-1}\\curl\\A = \\J + \\curl\\mu^{-1}\\mu_0\\M $$ If $\\J$ happens to be zero we have another option because we can assume that $\\H = -\\grad\\varphi_M$ for some scalar potential $\\varphi_M$. When combined with equation \\eqref{divb} this leads to: $$\\div\\mu\\grad\\varphi_M = \\div\\mu_0\\M $$ Currently only the vector potential equation is used so we will focus on that for the remainder of this document. The vector potential is again non unique so we must apply additional constraints in order to arrive at a solution for $\\A$. When working analytically it is common to constrain the solution by restricting the divergence of $\\A$ but numerically this leads to other complications. 
For our problems of interest it will be necessary to require Dirichlet boundary conditions on the entire outer surface in order to sufficiently constrain the solution. Dirichlet boundary conditions for the vector potential on a surface provide a means to specify the component of $\\B$ normal to that surface. For example, setting the tangential components of $\\A$ to be zero on a particular surface results in a magnetic flux density which must be tangent to that surface.", "title": "Magnetostatics"}, {"location": "electromagnetics/#tesla-mini-application", "text": "The magnetostatics mini application, named tesla after the unit of magnetic field strength (and of course the man Nikola Tesla), is intended to demonstrate how to solve standard magnetostatics problems in MFEM. Its source terms and boundary conditions are simple but they should indicate how more specialized sources of boundary conditions could be implemented. A detailed overview of the equations being solved and their discretization can be found here: Tesla Theory Notes . Note that this application assumes the mesh coordinates are given in meters.", "title": "Tesla Mini Application"}, {"location": "electromagnetics/#mini-application-features_1", "text": "Permeability: The permeability, $\\mu$, is assumed to be that of free space except for an optional spherical shell of diamagnetic or paramagnetic material which can be defined by the user. The command line option -ms can be used to set the parameters for this shell. For example, to produce a shell at the origin with inner and outer radii of 0.4 and 0.5 respectively and a relative permeability of 3 the user would specify: -ms '0 0 0 0.4 0.5 3' . Current Density: The current density, $\\J$, is assumed to be zero except for an optional ring of constant current which can be defined by the user. The command line option for this is -cr which requires two points giving the end points of the ring's axis, inner and outer radii, and a constant total current. For example, to specify a ring centered at the origin and laying in the XY plane with a thickness of 0.2 and radii 0.4 and 0.5, and a current of 2 amps the user would give: -cr 0 0 -0.1 0 0 0.1 0.4 0.5 2 . Magnetization: A permanent magnetization, $\\M$, can be applied in the form of a cylindrical magnet with poles at its circular ends. The command line option is -bm which indicates a 'bar magnet'. The option requires the two end points of the cylinder's axis, its radius, and the magnitude of the magnetization. Surface Current Density: A surface current can be imposed indirectly by specifying separate surface patches with different voltages as well as a collection of surface patches connecting the voltages through which the current will flow. The voltage surfaces and their voltages can be specified using -vbcs followed by the indices of the surfaces and -vbcv followed by their voltages. The path for the surface current ($\\vec{K}$) is specified by using -kbcs followed by a set of surface indices. For example, applying voltages 1 and -1 to surfaces 2 and 3 with a current path along surfaces 4 and 6 would be specified as: -vbcs '2 3' -vbcv '1 -1' -kbcs '4 6' . Any surfaces not listed as voltage or current surfaces will be assigned as homogeneous Dirichlet boundaries. Note that when this option is selected an auxiliary electrostatic problem will be solved on the surface of the geometry to compute the surface current. Dirichlet BC: Dirichlet boundary conditions are required if a surface current density is not defined. 
For this reason the user need not specify boundary surfaces by number since the boundary condition must be applied on all of them. The default boundary condition is a homogeneous Dirichlet boundary condition on all outer surfaces. This means that the normal component of $\\B$ will be zero at the outer boundary. An alternative is to specify a desired uniform magnetic flux density on the entire outer surface. This is accomplished with the -ubbc command line option followed by the desired $\\B$ vector.", "title": "Mini Application Features"}, {"location": "electromagnetics/#transient-full-wave-electromagnetics", "text": "Transient electromagnetics problems are governed by the time-dependent Maxwell equations \\eqref{ampere} and \\eqref{faraday} when combined using the constitutive relations \\eqref{const_d} and \\eqref{const_b}. When combined these equations can describe the evolution and propagation of electromagnetic waves. $$\\begin{align} \\dd{(\\epsilon\\E)}{t} & = \\curl(\\mu^{-1}\\B) - \\sigma \\E - \\J \\\\ \\dd{\\B}{t} & = - \\curl\\E \\end{align}$$ The term $\\sigma\\E$ arises in the presence of electrically conductive materials where the electric field induces a current which can be separated from $\\J$. In such cases the total current appearing in Amp\u00e8re's Law \\eqref{ampere} can be expressed as the sum of an applied current (also labeled as $\\J$) and an induced current $\\sigma\\E$. Solving these equations requires initial conditions for both the electric and magnetic fields $\\E$ and $\\B$ as well as boundary conditions related to the tangential components of $\\E$ or $\\H$. Other formulations are possible such as evolving $\\H$ and $\\D$ or the potentials $\\varphi$ and $\\A$. This system of equations can also be written as a single second order equation involving only $\\E$, $\\H$, $\\varphi$, or $\\A$. Each of these formulations has a different set of sources, initial and boundary conditions for which it is well-suited. The choice we make here is perhaps the most common but it may not be the most convenient choice for a given application. These equations can be used to evolve their initial conditions or they can be driven by either a current source or through time-varying boundary conditions. It is also possible to combine all three of these sources in a single simulation.", "title": "Transient Full-Wave Electromagnetics"}, {"location": "electromagnetics/#maxwell-mini-application", "text": "The electrodynamics mini application, named maxwell after James Clerk Maxwell who first formulated the classical theory of electromagnetic radiation, is intended to demonstrate how to solve transient wave problems in MFEM. Its source terms and boundary conditions are simple but they should indicate how more specialized sources or boundary conditions could be implemented. A detailed overview of the equations being solved and their discretization can be found here: Maxwell Theory Notes . An example simulation is depicted below (click to animate the wave propagation). Time integration is handled by a variable order symplectic time integration algorithm. This algorithm is designed for systems of equations which are derived from a Hamiltonian and it helps to ensure energy conservation within some tolerance. The time step used during integration is automatically chosen based on the largest stable time step as computed from the largest eigenvalue of the update equations. This determination involves a user-adjustable factor which creates a safety margin. 
By default the actual time step is less than 95% of the estimate for the largest stable time step. Note that this application assumes the mesh coordinates are given in meters. Internally the code assumes time is in seconds but the command line options use nanoseconds for convenience.", "title": "Maxwell Mini Application"}, {"location": "electromagnetics/#mini-application-features_2", "text": "Time Evolution: The initial and final times for the simulation can be specified, in nanoseconds, with the -ti and -tf options. Visualization snapshots of data will be written out after time intervals specified by -ts which again given in nanoseconds. The order of the time integration can be specified, from 1 to 4, using the -to option. Permittivity: The permittivity, $\\epsilon$, is assumed to be that of free space except for an optional sphere of dielectric material which can be defined by the user. The command line option -ds can be used to set the parameters for this dielectric sphere. For example, to produce a sphere at the origin with a radius of 0.5 and a relative permittivity of 3 the user would specify: -ds '0 0 0 0.5 3' . Permeability: The permeability, $\\mu$, is assumed to be that of free space except for an optional spherical shell of diamagnetic or paramagnetic material which can be defined by the user. The command line option -ms can be used to set the parameters for this shell. For example, to produce a shell at the origin with inner and outer radii of 0.4 and 0.5 respectively and a relative permeability of 3 the user would specify: -ms '0 0 0 0.4 0.5 3' . Conductivity: The conductivity, $\\sigma$, is assumed to be zero except for an optional sphere of conductive material which can be defined by the user. The command line option -cs can be used to set the parameters for this conductive sphere. For example, to produce a sphere at the origin with a radius of 0.5 and a conductivity of 3,000,000 S/m the user would specify: -cs '0 0 0 0.5 3e6' . Current Density: The current density, $\\J$, is assumed to be zero except for an optional cylinder of pulsed current which can be defined by the user. The command line option for this is -dp , short for 'dipole pulse', which requires two points giving the end points of the cylinder's axis, radius, amplitude ($\\alpha$), pulse center ($\\beta$), and a pulse width ($\\gamma$). The time dependence of this pulse is given by: $$\\J(t) = \\hat{a} \\alpha e^{-(t-\\beta)^2/(2\\gamma^2)}$$ Where $\\hat{a}$ is the unit vector along the cylinder's axis and both $\\beta$ and $\\gamma$ are specified in nanoseconds. Dirichlet BC: Homogeneous Dirichlet boundary conditions, which constrain the tangential components of $\\frac{\\partial\\E}{\\partial t}$ to be zero, can be activated on a portion of the boundary by specifying a list of boundary attributes such as -dbcs '4 8' . For convenience a boundary attribute of '-1' can be used to specify all boundary surfaces. Non-Homogeneous, time-dependent Dirichlet boundary conditions are supported by the Maxwell solver so a user can edit maxwell.cpp and supply their own function if desired. Absorbing BC: A first order Sommerfeld absorbing boundary condition can be applied to a portion of the boundary using the -abcs option along with a list of boundary attributes such as -abcs '4 18' . Again, the special purpose boundary attribute '-1' can be used to specify all boundary surfaces. 
This boundary condition depends on a coefficient, $\\eta^{-1}=\\sqrt{\\epsilon/\\mu}$, which must be matched to the materials just inside the boundary. The code assumes that the permittivity and permeability are those of the vacuum near the surface but, if this is not the case, an ambitious user can replace etaInvCoef_ with a more appropriate function.", "title": "Mini Application Features"}, {"location": "electromagnetics/#transient-magnetics-and-joule-heating", "text": "", "title": "Transient Magnetics and Joule Heating"}, {"location": "electromagnetics/#joule-mini-application", "text": "The transient magnetics mini application, named joule after the SI unit of energy (and the scientist James Prescott Joule, who was also a brewer), is intended to demonstrate how to solve transient implicit diffusion problems. The equations of low-frequency electromagnetics are coupled with the equations of heat transfer. The coupling is one way, electromagnetics generates Joule heating, but the heating does not affect the electromagnetics. The thermal problem is solved using an $H(\\mathrm{div})$ method, i.e. temperature is discontinuous and the thermal flux $\\F$ is in $H(\\mathrm{div})$. There are three linear solves per time step: Poisson's equation for the scalar electric potential is solved using the AMG preconditioner, the electric diffusion equation is solved using the AMS preconditioner, and the thermal diffusion equation is solved using the ADS preconditioner. Two example meshes are provided, one is a straight circular metal rod in vacuum, the other is a helical coil in vacuum (the latter is 21MB and can be downloaded from here ). The idea is that a voltage is applied to the ends of the rod/coil, the electric field diffuses into the metal, the metal is heated by Joule heating, the heat diffuses out. The equations are: $$\\begin{align} \\div\\sigma\\grad\\Phi &= 0 \\\\ \\sigma \\E &= \\curl\\mu^{-1} \\B - \\sigma \\grad \\Phi \\\\ \\frac{d \\B}{d t} &= - \\curl \\E \\\\ \\F &= -k \\grad T \\\\ c \\frac{d T}{d t} &= - \\div \\F + \\sigma \\E \\cdot \\E \\end{align}$$ The equations are integrated in time using implicit time integration, either midpoint or higher order SDIRK. Since there are three solves, three sets of boundary conditions must be specified. The essential BC's are the scalar potential, the electric field, and the thermal flux. These are not set via command line arguments, you have to edit the code to change these. To change these, search the code for ess_bdr . There are conducting and non-conducting material regions, and the mesh must have integer attributes to specify these regions. To change these, search the code for std::map this maps the integer attribute to the floating-point material value. Note that this application assumes the mesh coordinates are given in meters. The above picture shows Joule heating of a cylinder using the mesh cylinder-hex.mesh . The cylinder is surrounded by vacuum. The black arrows show the magnetic field $\\B$, the magenta arrows show the heat flux $\\F$, and the pseudocolor in the center of the cylinder shows the temperature.", "title": "Joule Mini Application"}, {"location": "electromagnetics/#mini-application-features_3", "text": "Boundary Conditions: Since there are three solves, three sets of boundary conditions must be specified. The essential BC's are the voltage for the scalar potential, the tangential electric field, and the normal thermal flux. These are not set via command line arguments, you have to edit the code to change these. 
To change these, search the code for ess_bdr . Note that the essential BC's can be time varying. Material Properties: There are conducting and non-conducting material regions, and the mesh must have integer attributes to specify these regions. To change these, search the code for std::map this maps the integer attribute to the floating-point material value. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Mini Application Features"}, {"location": "examples-orig/", "text": "MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$']]}}); Example Codes and Miniapps This page provides a brief overview of MFEM's example codes and miniapps. For detailed documentation of the MFEM sources, including the examples, see the online Doxygen documentation , or the doc directory in the distribution. The goal of the example codes is to provide a step-by-step introduction to MFEM in simple model settings. The miniapps are more complex, and are intended to be more representative of the advanced usage of the library in physics/application codes. We recommend that new users start with the example codes before moving to the miniapps. Select from the categories below to display examples and miniapps that contain the respective feature. All examples support (arbitrarily) high-order meshes and finite element spaces . The numerical results from the example codes can be visualized using the GLVis visualization tool (based on MFEM). See the GLVis website for more details. Users are encouraged to submit any example codes and miniapps that they have created and would like to share. Contact a member of the MFEM team to report bugs or post questions or comments . Application (PDE) All Diffusion Convection-diffusion Elasticity Electromagnetics Acoustics grad-div Darcy Advection Conduction Wave Compressible flow Incompressible flow Meshing Nonlocal Stochastic Free boundary Finite Elements All H1 nodal elements L2 discontinuous elements H(curl) Nedelec elements H(div) Raviart-Thomas elements H^{1/2} interfacial elements H^{-1/2} interfacial elements Discretization All Galerkin FEM Mixed FEM Discontinuous Galerkin (DG) Discont. Petrov-Galerkin (DPG) Hybridization Static condensation Isogeometric analysis (NURBS) Adaptive mesh refinement (AMR) Partial assembly Solver All Jacobi Gauss-Seidel PCG MINRES GMRES Algebraic Multigrid (BoomerAMG) Auxiliary-space Maxwell Solver (AMS) Auxiliary-space Divergence Solver (ADS) SuperLU/STRUMPACK (parallel direct) UMFPACK (serial direct) Newton method (nonlinear solver) Explicit Runge-Kutta (ODE integration) Implicit Runge-Kutta (ODE integration) Newmark (ODE Integration) Symplectic Algorithm (ODE Integration) LOBPCG, AME (eigensolvers) SUNDIALS solvers PETSc solvers SLEPc eigensolvers HiOp solvers None Example 0: Simplest Laplace Problem This is the simplest MFEM example and a good starting point for new users. The example demonstrates the use of MFEM to define and solve an $H^1$ finite element discretization of the Laplace problem $$-\\Delta u = 1 \\quad\\text{in } \\Omega$$ with homogeneous Dirichlet boundary conditions $$ u = 0 \\quad\\text{on } \\partial\\Omega$$ The example illustrates the use of the basic MFEM classes for defining the mesh, finite element space, as well as linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. The example has serial ( ex0.cpp ) and parallel ( ex0p.cpp ) versions. 
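Since Example 0 is the recommended starting point, a compact sketch of the workflow it describes may help orient new users. This is written from the description above rather than copied from ex0.cpp; the mesh file name, polynomial order, and solver settings are placeholders.

```cpp
#include "mfem.hpp"
using namespace mfem;

// Sketch of the ex0-style workflow: solve -Delta u = 1 with homogeneous
// Dirichlet boundary conditions. Mesh file and solver parameters are placeholders.
int main()
{
   Mesh mesh("star.mesh");                      // placeholder mesh file
   H1_FECollection fec(1, mesh.Dimension());    // linear H1 elements
   FiniteElementSpace fespace(&mesh, &fec);

   // Mark all boundary attributes as essential (u = 0 there).
   Array<int> ess_bdr(mesh.bdr_attributes.Max()), ess_tdof_list;
   ess_bdr = 1;
   fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

   // Right-hand side: (1, v).
   ConstantCoefficient one(1.0);
   LinearForm b(&fespace);
   b.AddDomainIntegrator(new DomainLFIntegrator(one));
   b.Assemble();

   // Bilinear form: (grad u, grad v).
   BilinearForm a(&fespace);
   a.AddDomainIntegrator(new DiffusionIntegrator);
   a.Assemble();

   GridFunction u(&fespace);
   u = 0.0;

   SparseMatrix A;
   Vector B, X;
   a.FormLinearSystem(ess_tdof_list, u, b, A, X, B);

   // Solve with preconditioned CG and recover the finite element solution.
   GSSmoother prec(A);
   PCG(A, prec, B, X, 1, 400, 1e-12, 0.0);
   a.RecoverFEMSolution(X, b, u);

   u.Save("sol.gf");
   return 0;
}
```

The same skeleton (space, forms, essential DOFs, FormLinearSystem, solve, RecoverFEMSolution) recurs throughout the examples below.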
Example 1: Laplace Problem This example code demonstrates the use of MFEM to define a simple isoparametric finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Specifically, we discretize with the finite element space coming from the mesh (linear by default, quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The problem solved in this example is the same as ex0 , but with more sophisticated options and features. The example highlights the use of mesh refinement, finite element grid functions, as well as linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. We also cover the explicit elimination of essential boundary conditions, static condensation, and the optional connection to the GLVis tool for visualization. The example has a serial ( ex1.cpp ), a parallel ( ex1p.cpp ), and HPC versions: performance/ex1.cpp , performance/ex1p.cpp . It also has a PETSc modification in examples/petsc , a PUMI modification in examples/pumi and a Ginkgo modification in examples/ginkgo . Partial assembly and GPU devices are supported. Example 2: Linear Elasticity This example code solves a simple linear elasticity problem describing a multi-material cantilever beam. Specifically, we approximate the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder with $f$ being a constant pull down vector on boundary elements with attribute 2, and zero otherwise. The geometry of the domain is assumed to be as follows: The example demonstrates the use of high-order and NURBS vector finite element spaces with the linear elasticity bilinear form, meshes with curved elements, and the definition of piece-wise constant and vector coefficient objects. Static condensation is also illustrated. The example has a serial ( ex2.cpp ) and a parallel ( ex2p.cpp ) version. It also has a PETSc modification in examples/petsc and a PUMI modification in examples/pumi . We recommend viewing Example 1 before viewing this example. Example 3: Definite Maxwell Problem This example code solves a simple 3D electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 2D or 3D. The example demonstrates the use of $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. Static condensation is also illustrated. The example has a serial ( ex3.cpp ) and a parallel ( ex3p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly and GPU devices are supported. We recommend viewing examples 1-2 before viewing this example. 
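For readers who want to see what the $H(curl)$ assembly in Example 3 boils down to, here is a small hypothetical sketch (not the actual ex3.cpp source); the polynomial order and unit coefficients are placeholders.

```cpp
#include "mfem.hpp"
using namespace mfem;

// Sketch, in the spirit of Example 3: set up the Nedelec space and the
// curl-curl + mass bilinear form for  curl curl E + E = f.
void SetupDefiniteMaxwell(Mesh &mesh, int order = 1)
{
   ND_FECollection fec(order, mesh.Dimension());
   FiniteElementSpace fespace(&mesh, &fec);

   ConstantCoefficient one(1.0);           // unit coefficients as placeholders
   BilinearForm a(&fespace);
   a.AddDomainIntegrator(new CurlCurlIntegrator(one));      // (curl E, curl v)
   a.AddDomainIntegrator(new VectorFEMassIntegrator(one));  // (E, v)
   a.Assemble();
   // ... impose the tangential BC, form the linear system, and solve
   //     following the same pattern as in Example 1.
}
```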
Example 4: Grad-div Problem This example code solves a simple 2D/3D $H(div)$ diffusion problem corresponding to the second order definite equation $$-{\\rm grad}(\\alpha\\,{\\rm div}(F)) + \\beta F = f$$ with boundary condition $F \\cdot n$ = \"given normal field\". Here we use a given exact solution $F$ and compute the corresponding right hand side $f$. We discretize with the Raviart-Thomas finite elements. The example demonstrates the use of $H(div)$ finite element spaces with the grad-div and $H(div)$ vector finite element mass bilinear form, as well as the computation of discretization error when the exact solution is known. Bilinear form hybridization and static condensation are also illustrated. The example has a serial ( ex4.cpp ) and a parallel ( ex4p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly and GPU devices are supported. We recommend viewing examples 1-3 before viewing this example. Example 5: Darcy Problem This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system $$ \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} $$ with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize with Raviart-Thomas finite elements (velocity $\\bf u$) and piecewise discontinuous polynomials (pressure $p$). The example demonstrates the use of the BlockMatrix and BlockOperator classes, as well as the collective saving of several grid functions in VisIt and ParaView formats. The example has a serial ( ex5.cpp ) and a parallel ( ex5p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly is supported. We recommend viewing examples 1-4 before viewing this example. Example 6: Laplace Problem with AMR This is a version of Example 1 with a simple adaptive mesh refinement loop. The problem being solved is again the Laplace equation $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. The problem is solved on a sequence of meshes which are locally refined in a conforming (triangles, tetrahedrons) or non-conforming (quadrilaterals, hexahedra) manner according to a simple ZZ error estimator. The example demonstrates MFEM's capability to work with both conforming and nonconforming refinements, in 2D and 3D, on linear, curved and surface meshes. Interpolation of functions from coarse to fine meshes, as well as persistent GLVis visualization are also illustrated. The example has a serial ( ex6.cpp ) and a parallel ( ex6p.cpp ) version. It also has a PETSc modification in examples/petsc and a PUMI modification in examples/pumi . Partial assembly and GPU devices are supported. We recommend viewing Example 1 before viewing this example. Example 7: Surface Meshes This example code demonstrates the use of MFEM to define a triangulation of a unit sphere and a simple isoparametric finite element discretization of the Laplace problem with mass term, $$-\\Delta u + u = f.$$ The example highlights mesh generation, the use of mesh refinement, high-order meshes and finite elements, as well as surface-based linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. Simple local mesh refinement is also demonstrated. The example has a serial ( ex7.cpp ) and a parallel ( ex7p.cpp ) version. We recommend viewing Example 1 before viewing this example. 
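Relating back to Example 6 above, the ZZ-estimator-driven refinement loop can be summarized in a few calls. The sketch below is hypothetical (not ex6.cpp itself); the error fraction, auxiliary flux space order, and iteration cap are placeholders.

```cpp
#include "mfem.hpp"
using namespace mfem;

// Sketch of an ex6-style AMR loop driven by a ZZ error estimator.
void AdaptiveLoopSketch(Mesh &mesh, FiniteElementSpace &fespace, GridFunction &u)
{
   ConstantCoefficient one(1.0);
   DiffusionIntegrator flux_integ(one);   // integrator used to recover the flux

   // Auxiliary space in which the ZZ estimator represents the recovered flux.
   L2_FECollection flux_fec(1, mesh.Dimension());
   FiniteElementSpace flux_fespace(&mesh, &flux_fec, mesh.SpaceDimension());
   ZienkiewiczZhuEstimator estimator(flux_integ, u, flux_fespace);

   ThresholdRefiner refiner(estimator);
   refiner.SetTotalErrorFraction(0.7);    // refine elements holding 70% of the error

   for (int it = 0; it < 10; it++)        // placeholder iteration cap
   {
      // ... assemble and solve on the current mesh, updating u ...
      refiner.Apply(mesh);                // refine where the estimator is large
      if (refiner.Stop()) { break; }      // no elements were marked
      fespace.Update();                   // propagate the mesh change
      u.Update();
   }
}
```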
Example 8: DPG for the Laplace Problem This example code demonstrates the use of the Discontinuous Petrov-Galerkin (DPG) method in its primal 2x2 block form as a simple finite element discretization of the Laplace problem $$-\\Delta u = f$$ with homogeneous Dirichlet boundary conditions. We use high-order continuous trial space, a high-order interfacial (trace) space, and a high-order discontinuous test space defining a local dual ($H^{-1}$) norm. We use the primal form of DPG, see \"A primal DPG method without a first-order reformulation\" , Demkowicz and Gopalakrishnan, CAM 2013. The example highlights the use of interfacial (trace) finite elements and spaces, trace face integrators and the definition of block operators and preconditioners. The example has a serial ( ex8.cpp ) and a parallel ( ex8p.cpp ) version. We recommend viewing examples 1-5 before viewing this example. Example 9: DG Advection This example code solves the time-dependent advection equation $$\\frac{\\partial u}{\\partial t} + v \\cdot \\nabla u = 0,$$ where $v$ is a given fluid velocity, and $u_0(x)=u(0,x)$ is a given initial condition. The example demonstrates the use of Discontinuous Galerkin (DG) bilinear forms in MFEM (face integrators), the use of explicit and implicit (with block ILU preconditioning) ODE time integrators, the definition of periodic boundary conditions through periodic meshes, as well as the use of GLVis for persistent visualization of a time-evolving solution. The saving of time-dependent data files for external visualization with VisIt and ParaView is also illustrated. The example has a serial ( ex9.cpp ) and a parallel ( ex9p.cpp ) version. It also has a SUNDIALS modification in examples/sundials , a PETSc modification in examples/petsc , and a HiOp modification in examples/hiop . Example 10: Nonlinear Elasticity This example solves a time dependent nonlinear elasticity problem of the form $$ \\frac{dv}{dt} = H(x) + S v\\,,\\qquad \\frac{dx}{dt} = v\\,, $$ where $H$ is a hyperelastic model and $S$ is a viscosity operator of Laplacian type. The geometry of the domain is assumed to be as follows: The example demonstrates the use of nonlinear operators, as well as their implicit time integration using a Newton method for solving an associated reduced backward-Euler type nonlinear equation. Each Newton step requires the inversion of a Jacobian matrix, which is done through a (preconditioned) inner solver. The example has a serial ( ex10.cpp ) and a parallel ( ex10p.cpp ) version. It also has a SUNDIALS modification in examples/sundials and a PETSc modification in examples/petsc . We recommend viewing examples 2 and 9 before viewing this example. Example 11: Laplace Eigenproblem This example code demonstrates the use of MFEM to solve the eigenvalue problem $$-\\Delta u = \\lambda u$$ with homogeneous Dirichlet boundary conditions. We compute a number of the lowest eigenmodes by discretizing the Laplacian and Mass operators using a finite element space of the specified order, or an isoparametric/isogeometric space if order < 1 (quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE, as well as optionally the SuperLU or STRUMPACK parallel direct solvers. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex11p.cpp ) version. It also has a SLEPc modification in examples/petsc . 
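To make Example 11's solver pairing concrete, here is a condensed, hypothetical sketch of the LOBPCG setup, assuming the stiffness matrix A and mass matrix M have already been assembled in parallel; the tolerances and iteration limits are placeholders.

```cpp
#include "mfem.hpp"
using namespace mfem;

// Sketch of the ex11p-style eigensolver setup: find the lowest modes of
// A x = lambda M x with LOBPCG preconditioned by BoomerAMG.
void SolveLaplaceEigenproblem(HypreParMatrix &A, HypreParMatrix &M, int nev = 5)
{
   HypreBoomerAMG amg(A);
   amg.SetPrintLevel(0);

   HypreLOBPCG lobpcg(A.GetComm());
   lobpcg.SetNumModes(nev);              // number of requested eigenpairs
   lobpcg.SetPreconditioner(amg);
   lobpcg.SetMaxIter(200);               // placeholder iteration limit
   lobpcg.SetTol(1e-8);                  // placeholder tolerance
   lobpcg.SetPrintLevel(1);
   lobpcg.SetMassMatrix(M);
   lobpcg.SetOperator(A);
   lobpcg.Solve();

   Array<double> eigenvalues;
   lobpcg.GetEigenvalues(eigenvalues);   // lambda_1 <= ... <= lambda_nev
}
```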
We recommend viewing Example 1 before viewing this example. Example 12: Linear Elasticity Eigenproblem This example code solves the linear elasticity eigenvalue problem for a multi-material cantilever beam. Specifically, we compute a number of the lowest eigenmodes by approximating the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = \\lambda {\\bf u} \\,,$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field $\\bf u$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder. The geometry of the domain is assumed to be as follows: The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex12p.cpp ) version. We recommend viewing examples 2 and 11 before viewing this example. Example 13: Maxwell Eigenproblem This example code solves the Maxwell (electromagnetic) eigenvalue problem $$\\nabla\\times\\nabla\\times\\, E = \\lambda\\, E $$ with homogeneous Dirichlet boundary conditions $E \\times n = 0$. We compute a number of the lowest nonzero eigenmodes by discretizing the curl curl operator using a Nedelec finite element space of the specified order in 2D or 3D. The example highlights the use of the AME subspace eigenvalue solver from HYPRE, which uses LOBPCG and AMS internally. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex13p.cpp ) version. We recommend viewing examples 3 and 11 before viewing this example. Example 14: DG Diffusion This example code demonstrates the use of MFEM to define a discontinuous Galerkin (DG) finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Finite element spaces of any order, including zero on regular grids, are supported. The example highlights the use of discontinuous spaces and DG-specific face integrators. The example has a serial ( ex14.cpp ) and a parallel ( ex14p.cpp ) version. We recommend viewing examples 1 and 9 before viewing this example. Example 15: Dynamic AMR Building on Example 6 , this example demonstrates dynamic adaptive mesh refinement. The mesh is adapted to a time-dependent solution by refinement as well as by derefinement. For simplicity, the solution is prescribed and no time integration is done. However, the error estimation and refinement/derefinement decisions are realistic. At each outer iteration the right hand side function is changed to mimic a time dependent problem. Within each inner iteration the problem is solved on a sequence of meshes which are locally refined according to a simple ZZ error estimator. At the end of the inner iteration the error estimates are also used to identify any elements which may be over-refined and a single derefinement step is performed. After each refinement or derefinement step a rebalance operation is performed to keep the mesh evenly distributed among the available processors. The example demonstrates MFEM's capability to refine, derefine and load balance nonconforming meshes, in 2D and 3D, and on linear, curved and surface meshes. 
Interpolation of functions between coarse and fine meshes, persistent GLVis visualization, and saving of time-dependent fields for external visualization with VisIt are also illustrated. The example has a serial ( ex15.cpp ) and a parallel ( ex15p.cpp ) version. We recommend viewing examples 1, 6 and 9 before viewing this example. Example 16: Time Dependent Heat Conduction This example code solves a simple 2D/3D time dependent nonlinear heat conduction problem $$\\frac{du}{dt} = \\nabla \\cdot \\left( \\kappa + \\alpha u \\right) \\nabla u$$ with a natural insulating boundary condition $\\frac{du}{dn} = 0$. We linearize the problem by using the temperature field $u$ from the previous time step to compute the conductivity coefficient. This example demonstrates both implicit and explicit time integration as well as a single Picard step method for linearization. The saving of time dependent data files for external visualization with VisIt is also illustrated. The example has a serial ( ex16.cpp ) and a parallel ( ex16p.cpp ) version. We recommend viewing examples 2, 9, and 10 before viewing this example. Example 17: DG Linear Elasticity This example code solves a simple linear elasticity problem describing a multi-material cantilever beam using symmetric or non-symmetric discontinuous Galerkin (DG) formulation. Specifically, we approximate the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are Dirichlet, $\\bf{u}=\\bf{u_D}$, on the fixed part of the boundary, namely boundary attributes 1 and 2; on the rest of the boundary we use ${\\sigma}({\\bf u})\\cdot n = {\\bf 0}$. The geometry of the domain is assumed to be as follows: The example demonstrates the use of high-order DG vector finite element spaces with the linear DG elasticity bilinear form, meshes with curved elements, and the definition of piece-wise constant and function vector-coefficient objects. The use of non-homogeneous Dirichlet b.c. imposed weakly, is also illustrated. The example has a serial ( ex17.cpp ) and a parallel ( ex17p.cpp ) version. We recommend viewing examples 2 and 14 before viewing this example. Example 18: DG Euler Equations This example code solves the compressible Euler system of equations, a model nonlinear hyperbolic PDE, with a discontinuous Galerkin (DG) formulation. The primary purpose is to show how a transient system of nonlinear equations can be formulated in MFEM. The equations are solved in conservative form $$\\frac{\\partial u}{\\partial t} + \\nabla \\cdot {\\bf F}(u) = 0$$ with a state vector $u = [ \\rho, \\rho v_0, \\rho v_1, \\rho E ]$, where $\\rho$ is the density, $v_i$ is the velocity in the $i^{\\rm th}$ direction, $E$ is the total specific energy, and $H = E + p / \\rho$ is the total specific enthalpy. The pressure, $p$ is computed through a simple equation of state (EOS) call. The conservative hydrodynamic flux ${\\bf F}$ in each direction $i$ is $${\\bf F_{\\it i}} = [ \\rho v_i, \\rho v_0 v_i + p \\delta_{i,0}, \\rho v_1 v_i + p \\delta_{i,1}, \\rho v_i H ]$$ Specifically, the example solves for an exact solution of the equations whereby a vortex is transported by a uniform flow. 
Since all boundaries are periodic here, the method's accuracy can be assessed by measuring the difference between the solution and the initial condition at a later time when the vortex returns to its initial location. Note that as the order of the spatial discretization increases, the timestep must become smaller. This example currently uses a simple estimate derived by Cockburn and Shu for the 1D RKDG method. An additional factor can be tuned by passing the --cfl (or -c shorter) flag. The example demonstrates user-defined nonlinear form with hyperbolic form integrator for systems of equations that are defined with block vectors, and how these are used with an operator for explicit time integrators. In this case the system also involves an external approximate Riemann solver for the DG interface flux. It also demonstrates how to use GLVis for in-situ visualization of vector grid functions. The example has a serial ( ex18.cpp ) and a parallel ( ex18p.cpp ) version. We recommend viewing examples 9, 14 and 17 before viewing this example. Example 19: Incompressible Nonlinear Elasticity This example code solves the quasi-static incompressible nonlinear hyperelasticity equations. Specifically, it solves the nonlinear equation $$ \\nabla \\cdot \\sigma(F) = 0 $$ subject to the constraint $$ \\text{det } F = 1 $$ where $\\sigma$ is the Cauchy stress and $F_{ij} = \\delta_{ij} + u_{i,j}$ is the deformation gradient. To handle the incompressibility constraint, pressure is included as an independent unknown $p$ and the stress response is modeled as an incompressible neo-Hookean hyperelastic solid . The geometry of the domain is assumed to be as follows: This formulation requires solving the saddle point system $$ \\left[ \\begin{array}{cc} K &B^T \\\\ B & 0 \\end{array} \\right] \\left[\\begin{array}{c} \\Delta u \\\\ \\Delta p \\end{array} \\right] = \\left[\\begin{array}{c} R_u \\\\ R_p \\end{array} \\right] $$ at each Newton step. To solve this linear system, we implement a specialized block preconditioner of the form $$ P^{-1} = \\left[\\begin{array}{cc} I & -\\tilde{K}^{-1}B^T \\\\ 0 & I \\end{array} \\right] \\left[\\begin{array}{cc} \\tilde{K}^{-1} & 0 \\\\ 0 & -\\gamma \\tilde{S}^{-1} \\end{array} \\right] $$ where $\\tilde{K}^{-1}$ is an approximation of the inverse of the stiffness matrix $K$ and $\\tilde{S}^{-1}$ is an approximation of the inverse of the Schur complement $S = BK^{-1}B^T$. To approximate the Schur complement, we use the mass matrix for the pressure variable $p$. The example demonstrates how to solve nonlinear systems of equations that are defined with block vectors as well as how to implement specialized block preconditioners for use in iterative solvers. The example has a serial ( ex19.cpp ) and a parallel ( ex19p.cpp ) version. We recommend viewing examples 2, 5 and 10 before viewing this example. Example 20: Symplectic Integration of Hamiltonian Systems This example demonstrates the use of the variable order, symplectic time integration algorithm. Symplectic integration algorithms are designed to conserve energy when integrating systems of ODEs which are derived from Hamiltonian systems. Hamiltonian systems define the energy of a system as a function of time (t), a set of generalized coordinates (q), and their corresponding generalized momenta (p). 
$$ H(q,p,t) = T(p) + V(q,t) $$ Hamilton's equations then specify how q and p evolve in time: $$ \\frac{dq}{dt} = \\frac{dH}{dp}\\,,\\qquad \\frac{dp}{dt} = -\\frac{dH}{dq} $$ To use the symplectic integration classes we need to define an mfem::Operator ${\\bf P}$ which evaluates the action of dH/dp, and an mfem::TimeDependentOperator ${\\bf F}$ which computes -dH/dq. This example visualizes its results as an evolution in phase space by defining the axes to be $q$, $p$, and $t$ rather than $x$, $y$, and $z$. In this space we build a ribbon-like mesh with nodes at $(0,0,t)$ and $(q,p,t)$. Finally we plot the energy as a function of time as a scalar field on this ribbon-like mesh. This scheme highlights any variations in the energy of the system. This example offers five simple 1D Hamiltonians: Simple Harmonic Oscillator (mass on a spring) $$H = \\frac{1}{2}\\left( \\frac{p^2}{m} + \\frac{q^2}{k} \\right)$$ Pendulum $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} - k \\left( 1 - cos(q) \\right) \\right]$$ Gaussian Potential Well $$H = \\frac{p^2}{2m} - k e^{-q^2 / 2}$$ Quartic Potential $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} + k \\left( 1 + q^2 \\right) q^2 \\right]$$ Negative Quartic Potential $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} + k \\left( 1 - \\frac{q^2}{8} \\right) q^2 \\right]$$ In all cases these Hamiltonians are shifted by constant values so that the energy will remain positive. The mean and standard deviation of the computed energies at each time step are displayed upon completion. When run in parallel, each processor integrates the same Hamiltonian system but starting from different initial conditions. The example has a serial ( ex20.cpp ) and a parallel ( ex20p.cpp ) version. See the Maxwell miniapp for another application of symplectic integration. Example 21: Adaptive mesh refinement for linear elasticity This is a version of Example 2 with a simple adaptive mesh refinement loop. The problem being solved is again linear elasticity describing a multi-material cantilever beam. The problem is solved on a sequence of meshes which are locally refined in a conforming (triangles, tetrahedrons) or non-conforming (quadrilaterals, hexahedra) manner according to a simple ZZ error estimator. The example demonstrates MFEM's capability to work with both conforming and nonconforming refinements, in 2D and 3D, on linear and curved meshes. Interpolation of functions from coarse to fine meshes, as well as persistent GLVis visualization are also illustrated. The example has a serial ( ex21.cpp ) and a parallel ( ex21p.cpp ) version. We recommend viewing Examples 2 and 6 before viewing this example. Example 22: Complex Linear Systems This example code demonstrates the use of MFEM to define and solve a complex-valued linear system. It implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ In each case the field is driven by a forced oscillation, with angular frequency $\\omega$, imposed at the boundary or a portion of the boundary. The example also demonstrates how to display a time-varying solution as a sequence of fields sent to a single GLVis socket. 
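Returning briefly to Example 20 above: the two objects it asks for can be very small. Below is a hypothetical sketch for the simple harmonic oscillator Hamiltonian listed there; the class names are ours, not those used in ex20.cpp.

```cpp
#include "mfem.hpp"
using namespace mfem;

// Sketch of the operators Example 20 needs for the simple harmonic
// oscillator H = (p^2/m + q^2/k)/2: one evaluates dH/dp, the other -dH/dq.
class GradHdp : public Operator                   // dq/dt =  dH/dp = p/m
{
   double m;
public:
   GradHdp(double m_) : Operator(1), m(m_) {}
   void Mult(const Vector &p, Vector &dqdt) const override { dqdt(0) = p(0)/m; }
};

class NegGradHdq : public TimeDependentOperator   // dp/dt = -dH/dq = -q/k
{
   double k;
public:
   NegGradHdq(double k_) : TimeDependentOperator(1), k(k_) {}
   void Mult(const Vector &q, Vector &dpdt) const override { dpdt(0) = -q(0)/k; }
};
```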
Example 22 has a serial ( ex22.cpp ) and a parallel ( ex22p.cpp ) version. We recommend viewing examples 1, 3, and 4 before viewing this example. Example 23: Wave Problem This example code solves a simple 2D/3D wave equation with a second-order time derivative: $$\\frac{\\partial^2 u}{\\partial t^2} - c^2\\Delta u = 0$$ The boundary conditions are either Dirichlet or Neumann. The example demonstrates the use of time-dependent operators, implicit solvers, and second-order time integration. The example has only a serial ( ex23.cpp ) version. We recommend viewing examples 9 and 10 before viewing this example. Example 24: Mixed finite element spaces This example code illustrates the usage of mixed finite element spaces, with three variants: $H^1 \\times H(curl)$ $H(curl) \\times H(div)$ $H(div) \\times L_2$ Using different approaches for demonstration purposes, we project or interpolate a gradient, curl, or divergence in the appropriate spaces, comparing the errors in each case. Partial assembly and GPU devices are supported. The example has a serial ( ex24.cpp ) and a parallel ( ex24p.cpp ) version. We recommend viewing examples 1 and 3 before viewing this example. Example 25: Perfectly Matched Layers The example illustrates the use of a Perfectly Matched Layer (PML) for the simulation of time-harmonic electromagnetic waves propagating in unbounded domains. PML was originally introduced by Berenger in \"A Perfectly Matched Layer for the Absorption of Electromagnetic Waves\" . It is a technique used to solve wave propagation problems posed in infinite domains. The implementation involves the introduction of an artificial absorbing layer that minimizes undesired reflections. Inside this layer, a complex coordinate stretching map is used which forces the wave modes to decay exponentially. The example solves the indefinite Maxwell equations $$\\nabla \\times (a \\nabla \\times E) - \\omega^2 b E = f,$$ where $a = \\mu^{-1} |J|^{-1} J^T J$, $b = \\epsilon |J| J^{-1} J^{-T}$ and $J$ is the Jacobian matrix of the coordinate transformation. The example demonstrates discretization with Nedelec finite elements in 2D or 3D, as well as the use of complex-valued bilinear and linear forms. Several test problems are included, with known exact solutions. The example has a serial ( ex25.cpp ) and a parallel ( ex25p.cpp ) version. We recommend viewing Example 22 before viewing this example. Example 26: Multigrid Preconditioner This example code demonstrates the use of MFEM to define a simple isoparametric finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions and how to solve it efficiently using a matrix-free multigrid preconditioner. The example highlights the creation of a hierarchy of discretization spaces and diffusion bilinear forms using partial assembly. The levels in the hierarchy of finite element spaces may be constructed through geometric or order refinements. Moreover, the construction of a multigrid preconditioner for the PCG solver is shown. The multigrid uses a PCG solver on the coarsest level and second-order Chebyshev accelerated smoothers on the other levels. The example has a serial ( ex26.cpp ) and a parallel ( ex26p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 27: Laplace Boundary Conditions This example code demonstrates the use of MFEM to define a simple finite element discretization of the Laplace problem: $$ -\\Delta u = 0 $$ with a variety of boundary conditions. 
Specifically, we discretize using a FE space of the specified order using a continuous or discontinuous space. We then apply Dirichlet, Neumann (both homogeneous and inhomogeneous), Robin, and Periodic boundary conditions on different portions of a predefined mesh. Boundary conditions: $u = u_{dbc}$ on $\\Gamma_{dbc}$ $\\hat{n}\\cdot\\nabla u = g_{nbc}$ on $\\Gamma_{nbc}$ $\\hat{n}\\cdot\\nabla u = 0$ on $\\Gamma_{nbc_0}$ $\\hat{n}\\cdot\\nabla u + a u = b$ on $\\Gamma_{rbc}$ as well as periodic boundary conditions which are enforced topologically. The example has a serial ( ex27.cpp ) and a parallel ( ex27p.cpp ) version. We recommend viewing examples 1 and 14 before viewing this example. Example 28: Constraints and Sliding Boundary Conditions This example code illustrates the use of constraints in linear solvers by solving an elasticity problem where the normal component of the displacement is constrained to zero on two boundaries but tangential displacement is allowed. The constraints can be enforced in several different ways, including eliminating them from the linear system or solving a saddle-point system that explicitly includes constraint conditions. The example has a serial ( ex28.cpp ) and a parallel ( ex28p.cpp ) version. We recommend viewing example 2 before viewing this example. Example 29: Solving PDEs on embedded surfaces This example demonstrates setting up and solving an anisotropic Laplace problem $$-\\nabla\\cdot(\\sigma\\nabla u) = 1 \\quad\\text{in } \\Omega$$ with homogeneous Dirichlet boundary conditions $$ u = 0 \\quad\\text{on } \\partial\\Omega$$ where $\\Omega$ is a two dimensional curved surface embedded in three dimensions and $\\sigma$ is an anisotropic diffusion tensor. The example demonstrates and validates our DiffusionIntegrator 's ability to properly integrate three dimensional fluxes on a two dimensional domain. Not all of our integrators currently support such cases but the DiffusionIntegrator can be used as a simple example of how extend other integrators when necessary. The example has a serial ( ex29.cpp ) and a parallel ( ex29p.cpp ) version. We recommend viewing examples 1 and 7 before viewing this example. Example 30: Resolving rough and fine-scale problem data Unresolved problem data will affect the accuracy of a discretized PDE solution as well as a posteriori estimates of the solution error. This example uses a CoefficientRefiner object to preprocess an input mesh until the resolution of the prescribed problem data $f \\in L^2$ is below a prescribed tolerance. In this example, the resolution is identified with a data oscillation function on the mesh $\\mathcal{T}$, defined $$ \\mathrm{osc}(f) = \\Big( \\sum_{T\\in\\mathcal{T}} \\| h \\cdot (I - \\Pi)\\, f \\|^2_{L^2(T)} \\Big)^{1/2}, $$ where $h$ is the local element size function and $\\Pi$ is a finite element projection operator, and the sum is taken over all elements $T$ in the mesh. In this example, the coarse initial mesh is adaptively refined until $\\mathrm{osc}(f)$ is below a prescribed tolerance for various candidate functions $f \\in L^2$. When using rough problem data, it is recommended to perform this type of preprocessing before a posteriori error estimation. The example has a serial ( ex30.cpp ) and a parallel ( ex30p.cpp ) version. We recommend viewing examples 1 and 6 before viewing this example. 
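As a brief aside on Example 27's Robin condition above, $\\hat{n}\\cdot\\nabla u + a u = b$: in weak form it contributes a boundary mass term to the bilinear form and a boundary load term to the right-hand side. A hypothetical sketch follows (the helper name, coefficients, and marker array are ours, not ex27.cpp's actual code).

```cpp
#include "mfem.hpp"
using namespace mfem;

// Hypothetical sketch of imposing the Robin condition n.grad(u) + a*u = b
// on the boundary attributes marked in rbc_marker. In weak form this adds
//   + int_Gamma a u v   to the bilinear form and
//   + int_Gamma b v     to the linear form.
void AddRobinBC(BilinearForm &aform, LinearForm &bform,
                Coefficient &a_coef, Coefficient &b_coef,
                Array<int> &rbc_marker)
{
   aform.AddBoundaryIntegrator(new MassIntegrator(a_coef), rbc_marker);
   bform.AddBoundaryIntegrator(new BoundaryLFIntegrator(b_coef), rbc_marker);
}
```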
Example 31: Anisotropic Definite Maxwell Problem This example code solves a simple electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + \\sigma E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". In this example $\\sigma$ is an anisotropic 3x3 tensor. Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 1D, 2D, or 3D. The example demonstrates the use of restricted $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. These restricted spaces allow the solution of 1D or 2D electromagnetic problems which involve 3D field vectors. Such problems arise in plasma physics and crystallography. The example has a serial ( ex31.cpp ) and a parallel ( ex31p.cpp ) version. We recommend viewing example 3 before viewing this example. Example 32: Anisotropic Maxwell Eigenproblem This example code solves the Maxwell (electromagnetic) eigenvalue problem with anisotropic permittivity, $\\epsilon$ $$\\nabla\\times\\nabla\\times\\, E = \\lambda\\, \\epsilon E $$ with homogeneous Dirichlet boundary conditions $E \\times n = 0$. We compute a number of the lowest nonzero eigenmodes by discretizing the curl curl operator using a Nedelec finite element space of the specified order in 1D, 2D, or 3D. The example demonstrates the use of restricted $H(curl)$ finite element spaces in an eigenmode context. These restricted spaces allow the solution of 1D or 2D electromagnetic problems which involve 3D field vectors. Such problems arise in plasma physics and crystallography. The example highlights the use of the AME subspace eigenvalue solver from HYPRE, which uses LOBPCG and AMS internally. Reusing multiple GLVis visualization windows for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex32p.cpp ) version. We recommend viewing examples 13 and 31 before viewing this example. Example 33: Spectral fractional Laplacian This example code demonstrates the use of MFEM to solve the fractional Laplacian problem $$ (-\\Delta)^\\alpha u = 1, \\quad \\alpha > 0, $$ with homogeneous Dirichlet boundary conditions. The problem solved in this example is similar to ex1 , but involves a fractional-order diffusion operator whose inverse can be approximated by a series of inverses of integer-order diffusion operators. Solving each of these independent integer-order PDEs with MFEM and summing their solutions results in a discrete solution to the fractional Laplacian problem above. The example has a serial ( ex33.cpp ) and a parallel ( ex33p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 34: Source Function using a SubMesh Transfer This example demonstrates the use of a SubMesh object to transfer solution data from a sub-domain and use this as a source function on the full domain. In this case we compute a volumetric current density $\\vec{J}$ as the gradient of a scalar potential $\\varphi$ on a portion of the domain. $$\\nabla\\cdot(\\sigma\\nabla\\varphi)=0$$ $$\\vec{J} = -\\sigma\\nabla\\varphi$$ Where a voltage difference is applied on surfaces of the sub-domain (shown on the left) to generate the current density restricted to this sub-domain. The current density is then transferred to the full domain (shown on the right) using a SubMesh object. 
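A minimal serial sketch of the SubMesh workflow used by Example 34 is shown below; the mesh file name, attribute number, and the use of a scalar $H^1$ field are assumptions, while the example itself transfers an $H(div)$ current density and has a parallel ParSubMesh counterpart:

```cpp
#include "mfem.hpp"
using namespace mfem;

int main()
{
   // Full mesh with (assumed) domain attributes 1 and 2.
   Mesh mesh("full-domain.mesh");

   // Extract the sub-domain carrying attribute 2 as its own mesh.
   Array<int> subdomain_attrs(1);
   subdomain_attrs[0] = 2;
   SubMesh submesh = SubMesh::CreateFromDomain(mesh, subdomain_attrs);

   // A field on each mesh (the example works with an H(div) current density).
   H1_FECollection fec(1, mesh.Dimension());
   FiniteElementSpace fes_full(&mesh, &fec);
   FiniteElementSpace fes_sub(&submesh, &fec);

   GridFunction phi_sub(&fes_sub);   // computed on the sub-domain
   phi_sub = 1.0;

   GridFunction phi_full(&fes_full); // receives the transferred values
   phi_full = 0.0;
   SubMesh::Transfer(phi_sub, phi_full);
   return 0;
}
```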
We then use this current density on the full domain as a source term in a magnetostatic solve for a vector potential $\\vec{A}$. $$\\nabla\\times(\\mu^{-1}\\nabla\\times\\vec{A})=\\vec{J}$$ $$\\vec{B} = \\nabla\\times\\vec{A}$$ This example verifies the recreation of boundary attributes on a sub-domain mesh as well as transfer of Raviart-Thomas vector fields between the SubMesh and the full Mesh. Note that the data transfer in this particular example involves arbitrary order Raviart-Thomas degrees of freedom on a mixture of tetrahedral and triangular prism elements. The example has a serial ( ex34.cpp ) and a parallel ( ex34p.cpp ) version. We recommend viewing Examples 1 and 3 before viewing this example. Example 35: Port Boundary Conditions using SubMesh Transfers This example demonstrates the use of a SubMesh object to transfer a port boundary condition from a portion of the boundary to the corresponding portion of the full domain. Just as in Example 22 this example implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0\\mbox{ with }u|_\\Gamma=v$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0\\mbox{ with }\\hat{n}\\times(\\vec{u}\\times\\hat{n})|_\\Gamma=\\vec{v}$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0\\mbox{ with }\\hat{n}\\cdot\\vec{u}|_\\Gamma=v$$ Where $\\Gamma$ is a portion of the boundary called the port . In each case the field is driven by a forced oscillation, with angular frequency $\\omega$, imposed at the boundary or a portion of the boundary. In Example 22 this boundary condition was simply a constant in space. In this example the boundary condition is an eigenmode of a lower dimensional eigenvalue problem defined on a portion of the boundary as follows: For the scalar $H^1$ field: $$-\\nabla\\cdot\\left(\\nabla v\\right) = \\lambda\\,v\\mbox{ with }v|_{\\partial\\Gamma}=0$$ For the vector $H(curl)$ field: $$\\nabla\\times\\left(\\nabla\\times\\vec{v}\\right) = \\lambda\\,\\vec{v}\\mbox{ with }\\hat{n}_{\\partial\\Gamma}\\times\\vec{v}|_{\\partial\\Gamma}=0$$ For the vector $H(div)$ field: $$-\\nabla\\cdot\\left(\\nabla v\\right) = \\lambda\\,v\\mbox{ with }\\hat{n}_{\\partial\\Gamma}\\cdot\\nabla v|_{\\partial\\Gamma}=0$$ The different cases implemented in this example can be used to verify the transfer of an $H^1$ scalar field, the tangential components of an $H(curl)$ vector field, and the normal component of an $H(div)$ vector field (as a scalar $L^2$ field in this case) between a SubMesh and its parent mesh. The example has only a parallel ( ex35p.cpp ) version because the eigenmode solver used to compute the field on the port is only implemented in parallel. We recommend viewing Examples 11, 13, and 22 before viewing this example. Example 36: Obstacle Problem This example code solves the pointwise bound-constrained energy minimization problem $$ \\text{minimize } \\frac{1}{2}\\|\\nabla u\\|^2 \\text{ in } H^1_0(\\Omega)\\, \\text{ subject to } u \\ge \\varphi\\,.$$ This is known as the obstacle problem, and it is a classical motivating example in the study of variational inequalities and free boundary problems. In this example, the obstacle $\\varphi$ is the graph of a half-sphere centered at the origin of a circular domain $\\Omega$. 
After solving to a specified tolerance, the numerical solution is compared to a closed-form exact solution to assess its accuracy. The problem is solved using the Proximal Galerkin finite element method, which is a nonlinear, structure-preserving mixed method for pointwise bound constraints proposed by Keith and Surowiec . In turn, this example highlights MFEM's ability to deliver high-order solutions to variational inequalities and showcases how to set up and solve nonlinear mixed methods. The example has a serial ( ex36.cpp ) and a parallel ( ex36p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 37: Topology Optimization Density field $\\rho$ Problem set-up and domain $\\Omega$ This example code solves a classical cantilever beam topology optimization problem. The aim is to find an optimal material density field $\\rho$ in $L^1(\\Omega)$ to minimize the elastic compliance; i.e., $$\\begin{align} &\\text{minimize} \\int_\\Omega \\mathbf{f} \\cdot \\mathbf{u}(\\rho)\\, \\mathrm{d}x\\, \\text{ over }\\, \\rho \\in L^1(\\Omega) \\\\ &\\text{subject to }\\, 0 \\leq \\rho \\leq 1\\, \\text{ and } \\int_\\Omega \\rho\\, \\mathrm{d}x = \\theta\\, \\mathrm{vol}(\\Omega) \\,. \\end{align}$$ In this problem, $\\mathbf{f}$ is a localized force and the linearly elastic displacement field $\\mathbf{u} = \\mathbf{u}(\\rho)$ is determined by a material density field $\\rho$ with total volume fraction $0<\\theta<1$. The problem is solved using a mirror descent algorithm proposed by Keith and Surowiec . For further details, see the more elaborate description of this PDE-constrained optimization problem given in the example code and the aforementioned paper. The example has a serial ( ex37.cpp ) and a parallel ( ex37p.cpp ) version. We recommend viewing Example 2 before viewing this example. Example 38: Cut-Volume and Cut-Surface Integration This example code demonstrates construction of cut-surface and cut-volume IntegrationRules. The cut is specified by the zero level set of a given Coefficient $\\phi$. The resulting IntegrationRules are combined with standard LinearFormIntegrators to demonstrate integration of a function $u$ over an implicit interface, and a subdomain bounded by an implicit interface: $$ S = \\int_{\\phi = 0} u(x) ~ ds, \\quad V = \\int_{\\phi > 0} u(x) ~ dx. $$ The IntegrationRules are constructed by the moment-fitting algorithm introduced by M\u00fcller, Kummer and Oberlack . Through a set of basis functions, for each element the method defines and solves a local under-determined system for the vector of quadrature weights. All surface and volume integrals, which are required to form the system, are reduced to 1D integration over intersected segments. The example has only a serial ( ex38.cpp ) version, because the construction of the integration rules is an element-local procedure. It requires MFEM to be built with LAPACK, which is used to find the optimal solution of an under-determined system of equations. Example 39: Named Attribute Sets This example uses the Poisson equation to demonstrate the use of named attribute sets in MFEM to specify material regions, boundary regions, or source regions by name rather than attribute numbers. It also demonstrates how new named attribute sets may be created from arbitrary groupings of attribute numbers and used as a convenient shorthand to refer to those groupings in other portions of the application or through the command line. Named attribute sets also required changes to MFEM's mesh file formats. 
This example makes use of a custom input mesh file ( compass.msh ) produced using Gmsh, which includes named regions and boundaries. A related mesh file ( compass.mesh ) illustrates MFEM's representation of the new named attribute sets. See file formats for details of the augmented mesh file format. The example has a serial ( ex39.cpp ) and a parallel ( ex39p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 40: Eikonal Equation This example highlights MFEM's ability to solve a fully-nonlinear, first-order PDE with high-order finite elements. In particular, this example uses the proximal Galerkin method to solve the eikonal equation, $$ |\\nabla u| = 1 \\text{ in } \\Omega, \\quad u = 0 \\text{ on } \\partial \\Omega. $$ At each point $x$ in the domain $\\Omega$, the solution of this PDE provides the Euclidean distance to the domain boundary, $u(x) = \\min \\{ | x - y| : y \\in \\partial \\Omega\\}$. The problem is solved by recasting $u$ as the solution of the nonlinear program $$ \\text{maximize } \\int_\\Omega u\\, \\mathrm{d} x\\, \\text{ in } W^{1,\\infty}_0(\\Omega)\\, \\text{ subject to } |\\nabla u | \\leq 1 \\text{ a.e. in } \\Omega.$$ A solution is then obtained by discretizing and solving a sequence of nonlinear saddle-point problems. See the example code for a more detailed description of the method. The example has a serial ( ex40.cpp ) and a parallel ( ex40p.cpp ) version. We recommend viewing Example 5 and Example 36 before viewing this example. NURBS Example 1: Laplace Problem This example code demonstrates the use of MFEM to define a simple isogeometric NURBS discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. The problem solved in this example is the same as Example 1 . The example has a serial ( nurbs_ex1.cpp ) and a parallel ( nurbs_ex1p.cpp ) version. There is also a version that demonstrates efficient patchwise quadrature ( nurbs_ex1 patch.cpp ). NURBS Example 3: Definite Maxwell Problem This example code solves a simple electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with NURBS-based $H(curl)$ elements in 2D or 3D. The problem solved in this example is the same as Example 3 . The example has only a serial ( nurbs_ex1.cpp ) version. NURBS Example 5: Darcy Problem This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system $$ \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} $$ with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize the velocity ($\\bf u$) with NURBS-based $H(div)$ elements and the pressure ($p$) with compatible NURBS-based $H^1$ elements. The problem solved in this example is the same as Example 5 . The example has only a serial ( nurbs_ex5.cpp ) version. NURBS Example 11: Laplace Eigenproblem This example code demonstrates the use of MFEM to solve the eigenvalue problem $$-\\Delta u = \\lambda u$$ with homogeneous Dirichlet boundary conditions.
We compute a number of the lowest eigenmodes by discretizing the Laplacian and Mass operators using a finite element space of the specified order, or an isoparametric/isogeometric space if order < 1 (quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE, as well as optionally the SuperLU or STRUMPACK parallel direct solvers. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The problem solved in this example is the same as Example 11 . The example has only a parallel ( nurbs_ex11p.cpp ) version. NURBS Example 24: Mixed finite element spaces The problem solved in this example is the same as Example 24 , but NURBS-based elements are also supported. This example code illustrates usage of mixed finite element spaces, with three variants: $H^1 \\times H(curl)$ $H(curl) \\times H(div)$ $H(div) \\times L_2$ Using different approaches for demonstration purposes, we project or interpolate a gradient, curl, or divergence in the appropriate spaces, comparing the errors in each case. The example has a serial ( nurbs_ex24.cpp ). Volta Miniapp: Electrostatics This miniapp demonstrates the use of MFEM to solve realistic problems in the field of linear electrostatics. Its features include: dielectric materials charge densities surface charge densities prescribed voltages applied polarizations high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( volta.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Tesla Miniapp: Magnetostatics This miniapp showcases many of MFEM's features while solving a variety of realistic magnetostatics problems. Its features include: diamagnetic and/or paramagnetic materials ferromagnetic materials volumetric current densities surface current densities external fields high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( tesla.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Maxwell Miniapp: Transient Full-Wave Electromagnetics This miniapp solves the equations of transient full-wave electromagnetics. Its features include: mixed formulation of the coupled first-order Maxwell equations $H(\\mathrm{curl})$ discretization of the electric field $H(\\mathrm{div})$ discretization of the magnetic flux energy conserving, variable order, implicit time integration dielectric materials diamagnetic and/or paramagnetic materials conductive materials volumetric current densities Sommerfeld absorbing boundary conditions high order meshes high order basis functions advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( maxwell.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Joule Miniapp: Transient Magnetics and Joule Heating This miniapp solves the equations of transient low-frequency (a.k.a. eddy current) electromagnetics, and simultaneously computes transient heat transfer with the heat source given by the electromagnetic Joule heating. 
Its features include: $H^1$ discretization of the electrostatic potential $H(\\mathrm{curl})$ discretization of the electric field $H(\\mathrm{div})$ discretization of the magnetic field $H(\\mathrm{div})$ discretization of the heat flux $L^2$ discretization of the temperature implicit transient time integration high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( joule.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Mobius Strip Miniapp This miniapp generates various Mobius strip-like surface meshes. It is a good way to generate complex surface meshes. Manipulating the mesh topology and performing mesh transformation are demonstrated. The mobius-strip mesh in the data directory was generated with this miniapp. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( mobius-strip.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Klein Bottle Miniapp This miniapp generates three types of Klein bottle surfaces. It is similar to the mobius-strip miniapp. Manipulating the mesh topology and performing mesh transformation are demonstrated. The klein-bottle and klein-donut meshes in the data directory were generated with this miniapp. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( klein-bottle.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Toroid Miniapp This miniapp generates two types of toroidal volume meshes; one with triangular cross sections and one with square cross sections. It works by defining a stack of individual elements and bending them so that the bottom and top of the stack can be joined to form a torus. It supports various options including: The element type: 0 - Wedge, 1 - Hexahedron The geometric order of the elements The major and minor radii The number of elements in the azimuthal direction The number of nodes to offset by before rejoining the stack The initial angle of the cross sectional shape The number of uniform refinement steps to apply Along with producing some visually interesting meshes, this miniapp demonstrates how simple 3D meshes can be constructed and transformed in MFEM. It also produces a family of meshes with simple but non-trivial topology for testing various features in MFEM. This miniapp has only a serial ( toroid.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Twist Miniapp This miniapp generates simple periodic meshes to demonstrate MFEM's handling of periodic domains. MFEM's strategy is to use a discontinuous vector field to define the mesh coordinates on a topologically periodic mesh. It works by defining a stack of individual elements and stitching together the top and bottom of the mesh. The stack can also be twisted so that the vertices of the bottom and top can be joined with any integer offset (for tetrahedral and wedge meshes only even offsets are supported). 
The Twist miniapp supports various options including: The element type: 4 - Tetrahedron, 6 - Wedge, 8 - Hexahedron The geometric order of the elements The dimensions of the initial brick-shaped stack of elements The number of elements in the z direction The number of nodes to offset by before rejoining the stack The number of uniform refinement steps to apply Along with producing some visually interesting meshes, this miniapp demonstrates how simple 3D meshes can be constructed and transformed in MFEM. It also produces a family of meshes with simple but non-trivial topology for testing various features in MFEM. This miniapp has only a serial ( twist.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Extruder Miniapp This miniapp creates higher dimensional meshes from lower dimensional meshes by extrusion. Simple coordinate transformations can also be applied if desired. The initial mesh can be 1D or 2D 1D meshes can be extruded in both the y and z directions 2D meshes can be triangular, quadrilateral, or contain both element types Meshes with high order geometry are supported User can specify the number of elements and the distance to extrude Geometric order of the transformed mesh can be user selected or automatic This miniapp provides another demonstration of how simple meshes can be constructed and transformed in MFEM. This miniapp has only a serial ( extruder.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Trimmer Miniapp This miniapp creates a new mesh file from an existing mesh by trimming away elements with selected attributes. Newly exposed boundary elements will be assigned new or user specified boundary attributes. The initial mesh can be 2D or 3D Meshes with high order geometry are supported Periodic meshes are supported NURBS meshes are not supported This miniapp provides another demonstration of how simple meshes can be constructed in MFEM. This miniapp has only a serial ( trimmer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Polar-NC Miniapp This miniapp generates a circular sector mesh that consist of quadrilaterals and triangles of similar sizes. The 3D version of the mesh is made of prisms and tetrahedra. The mesh is non-conforming by design, and can optionally be made curvilinear. The elements are ordered along a space-filling curve by default, which makes the mesh ready for parallel non-conforming AMR in MFEM. The implementation also demonstrates how to initialize a non-conforming mesh on the fly by marking hanging nodes with Mesh::AddVertexParents . For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( polar-nc.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Shaper Miniapp This miniapp performs multiple levels of adaptive mesh refinement to resolve the interfaces between different \"materials\" in the mesh, as specified by a given material function. It can be used as a simple initial mesh generator, for example in the case when the interface is too complex to describe without local refinement. Both conforming and non-conforming refinements are supported. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( shaper.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. 
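In the spirit of the Shaper miniapp above, elements can be flagged for refinement wherever a user-supplied material function changes value across an element. The following stripped-down serial sketch illustrates the idea; the indicator function, loop structure, and names are assumptions rather than the miniapp's code:

```cpp
#include "mfem.hpp"
using namespace mfem;

// Assumed "material" indicator: inside vs. outside a circle of radius 0.35
// centered at (0.5, 0.5).
static int material(const Vector &x)
{
   const double dx = x(0) - 0.5, dy = x(1) - 0.5;
   return (dx*dx + dy*dy < 0.35*0.35) ? 1 : 2;
}

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(8, 8, Element::QUADRILATERAL);
   mesh.EnsureNCMesh();  // allow non-conforming (hanging-node) refinement

   for (int iter = 0; iter < 3; iter++)
   {
      Array<Refinement> refs;
      for (int i = 0; i < mesh.GetNE(); i++)
      {
         // Compare the material at the element center and at its vertices;
         // a mismatch means the interface cuts this element.
         Vector center;
         mesh.GetElementCenter(i, center);
         int mat_c = material(center);

         Array<int> verts;
         mesh.GetElementVertices(i, verts);
         for (int v = 0; v < verts.Size(); v++)
         {
            Vector vx(mesh.GetVertex(verts[v]), mesh.Dimension());
            if (material(vx) != mat_c) { refs.Append(Refinement(i)); break; }
         }
      }
      if (refs.Size() == 0) { break; }
      mesh.GeneralRefinement(refs);
   }
   return 0;
}
```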
Mesh Explorer Miniapp This miniapp is a handy tool to examine, visualize and manipulate a given mesh. Some of its features are: visualizing of mesh materials and individual mesh elements mesh scaling, randomization, and general transformation manipulation of the mesh curvature the ability to simulate parallel partitioning quantitative and visual reports of mesh quality For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( mesh-explorer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Mesh Optimizer Miniapp This miniapp performs mesh optimization using the Target-Matrix Optimization Paradigm (TMOP) by P. Knupp , and a global variational minimization approach ( Dobrev et al. ). It minimizes the quantity $$\\sum_T \\int_T \\mu(J(x)),$$ where $T$ are the target (ideal) elements, $J$ is the Jacobian of the transformation from the target to the physical element, and $\\mu$ is the mesh quality metric. This metric can measure shape, size or alignment of the region around each quadrature point. The combination of targets and quality metrics is used to optimize the physical node positions, i.e., they must be as close as possible to the shape / size / alignment of their targets. This code also demonstrates a possible use of nonlinear operators, as well as their coupling to Newton methods for solving minimization problems. Note that the utilized Newton methods are oriented towards avoiding invalid meshes with negative Jacobian determinants. Each Newton step requires the inversion of a Jacobian matrix, which is done through an inner linear solver. The miniapp has a serial ( mesh-optimizer.cpp ) and a parallel ( pmesh-optimizer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Mesh Fitting Miniapp This miniapp builds upon the mesh optimizer miniapp to enable mesh alignment with the zero isosurface of a discrete level-set. The approach is based on Dobrev et al. and Mittal et al. , where we minimize the quantity $$\\sum_T \\int_T \\mu(J(x)) + \\sum_{s \\in S} w \\,\\, \\sigma^2(x_s).$$ Here, the first term controls mesh quality and the second term enforces weak alignment of a selected subset of mesh-nodes ($s \\in S$) with the zero isosurface of the discrete level-set function ($\\sigma$). Click on the image on the right to see a demonstration of this method for generating body-fitted meshes for topology optimization in LiDO to maximize beam stiffness under a downward force on the right wall. The miniapp has a parallel ( pmesh-fitting.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Minimal Surface Miniapp This miniapp solves Plateau's problem: the Dirichlet problem for the minimal surface equation. Options to solve the minimal surface equations of both parametric surfaces as well as surfaces restricted to be graphs of the form $z=f(x,y)$ are supported, including a number of examples such as the Catenoid, Helicoid, Costa and Scherk surfaces. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has a serial ( minimal-surface.cpp ) and a parallel ( pminimal-surface.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. 
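For the Mesh Optimizer miniapp above, the TMOP objective is assembled as a NonlinearForm over the mesh-node positions. A rough serial sketch of that setup follows; the metric and target choices and the two-argument TMOP_Integrator constructor are assumptions that should be checked against the current mesh-optimizer.cpp:

```cpp
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(8, 8, Element::QUADRILATERAL);
   mesh.SetCurvature(2);                 // high-order (curved) mesh
   GridFunction &x = *mesh.GetNodes();   // node positions = optimization variable
   FiniteElementSpace &fes = *x.FESpace();

   // Shape metric mu_2 with ideal-shape, unit-size targets (assumed choices).
   TMOP_Metric_002 metric;
   TargetConstructor target_c(TargetConstructor::IDEAL_SHAPE_UNIT_SIZE);
   target_c.SetNodes(x);

   // TMOP energy sum_T int_T mu(J(x)) as a nonlinear form over the node space.
   NonlinearForm a(&fes);
   a.AddDomainIntegrator(new TMOP_Integrator(&metric, &target_c));

   // The miniapp hands 'a' to a Newton-type solver that updates x;
   // that part is omitted here.
   return 0;
}
```

The miniapp then minimizes this energy over the node positions with a Newton-type solver that guards against inverted (negative-Jacobian) elements, as described above.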
Low-Order Refined Transfer Miniapp The lor-transfer miniapp, found under miniapps/tools , demonstrates the capability to generate a low-order refined mesh from a high-order mesh, and to transfer solutions between these meshes. Grid functions can be transferred between the coarse, high-order mesh and the low-order refined mesh using either $L^2$ projection or pointwise evaluation. These transfer operators can be designed to discretely conserve mass and to recover the original high-order solution when transferring a low-order grid function that was obtained by restricting a high-order grid function to the low-order refined space. The miniapp has only a serial ( lor-transfer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Interpolation Miniapps The interpolation miniapps, found under miniapps/gslib , demonstrate the capability to interpolate high-order finite element functions at a given set of points in physical space. These miniapps utilize the gslib library's high-order interpolation utility for quad and hex meshes: Find Points miniapp has a serial ( findpts.cpp ) and a parallel ( pfindpts.cpp ) version that demonstrate the basic procedures for point search and evaluation of grid functions. Field Interp miniapp ( field-interp.cpp ) demonstrates how grid functions can be transferred between meshes. Field Diff miniapp ( field-diff.cpp ) demonstrates how grid functions on two different meshes can be compared with each other. These miniapps require installation of the gslib library. We recommend that new users start with the example codes before moving to the miniapps. Extrapolation Miniapp The extrapolate miniapp, found in the miniapps/shifted directory, extrapolates a finite element function from a set of elements (known values) to the rest of the domain. The set of elements that contains the known values is specified by the positive values of a level set Coefficient. The known values are not modified. The miniapp supports two PDE-based approaches ( Aslam , Bochkov & Gibou ), both of which rely on solving a sequence of advection problems in the direction of the unknown parts of the domain. The extrapolation can be constant (1st order), linear (2nd order), or quadratic (3rd order). These formal orders hold for a limited band around the zero level set; see the above references for further information. The miniapp has only a parallel ( extrapolate.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Distance Solver Miniapp The distance miniapp, found in the miniapps/shifted directory, demonstrates the capability to compute the \"distance\" to a given point source or to the zero level set of a given function. Here \"distance\" refers to the length of the shortest path through the mesh. The input can be a DeltaCoefficient (representing a point source), or any Coefficient (for the case of a level set). The output is a ParGridFunction that can be scalar (representing the scalar distance), or a vector (its magnitude is the distance, and its direction is the starting direction of the shortest path). The miniapp has only a parallel ( distance.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Shifted Diffusion Miniapp The diffusion miniapp, found in the miniapps/shifted directory, demonstrates the capability to formulate a boundary value problem using a surrogate computational domain.
The method uses a distance function to the true boundary to enforce Dirichlet boundary conditions on the (non-aligned) mesh faces, therefore \"shifting\" the location where boundary conditions are imposed. The implementation in the miniapp is a high-order extension of the second-generation shifted boundary method . The miniapp has only a parallel ( diffusion.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Laghos Miniapp Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. The computational motives captured in Laghos include: Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements (triangular and tetrahedral elements can also be used, but with the less efficient full assembly option). Serial and parallel mesh refinement options can be set via a command-line flag. Explicit time-stepping loop with a variety of time integrator options. Laghos supports Runge-Kutta ODE solvers of orders 1, 2, 3, 4 and 6. Continuous and discontinuous high-order finite element discretization spaces of runtime-specified order. Moving (high-order) meshes. Separation between the assembly and the quadrature point-based computations. Point-wise definition of mesh size, time-step estimate and artificial viscosity coefficient. Constant-in-time velocity mass operator that is inverted iteratively on each time step. This is an example of an operator that is prepared once (fully or partially assembled), but is applied many times. The application cost is dominant for this operator. Time-dependent force matrix that is prepared every time step (fully or partially assembled) and is applied just twice per \"assembly\". Both the preparation and the application costs are important for this operator. Domain-decomposed MPI parallelism. Optional in-situ visualization with GLVis and data output for visualization / data analysis with VisIt . The Laghos miniapp is part of the CEED software suite , a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods. See https://github.com/ceed for more information and source code availability. This is an external miniapp, available at https://github.com/CEED/Laghos . Remhos Miniapp Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. The computational motives captured in Remhos include: Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements. Serial and parallel mesh refinement options can be set via a command-line flag. Explicit time-stepping loop with a variety of time integrator options. Remhos supports Runge-Kutta ODE solvers of orders 1, 2, 3, 4 and 6. Discontinuous high-order finite element discretization spaces of runtime-specified order. Moving (high-order) meshes. Mass operator that is local per each zone. It is inverted by iterative or exact methods at each time step. This operator is constant in time (transport mode) or changing in time (remap mode). Options for full or partial assembly. Advection operator that couples neighboring zones. 
It is applied once at each time step. This operator is constant in time (transport mode) or changing in time (remap mode). Options for full or partial assembly. Domain-decomposed MPI parallelism. Optional in-situ visualization with GLVis and data output for visualization and data analysis with VisIt . The Remhos miniapp is part of the CEED software suite , a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods. See https://github.com/ceed for more information and source code availability. This is an external miniapp, available at https://github.com/CEED/Remhos . Navier Miniapp Navier is a miniapp that solves the time-dependent Navier-Stokes equations of incompressible fluid dynamics \\begin{align} \\frac{\\partial u}{\\partial t} + (u \\cdot \\nabla) u - \\frac{1}{Re} \\nabla^2 u + \\nabla p &= f \\\\ \\nabla \\cdot u &= 0 \\end{align} using a spatially high-order finite element discretization. The time-dependent problem is solved using an (up to) third-order implicit-explicit method, which leverages an extrapolation scheme for the convective parts and a backward-difference formulation for the viscous parts of the equation. The miniapp supports: Arbitrary order H1 elements High order mesh elements IMEX (EXTk-BDFk) time-stepping up to third order Convenient interface for new users A variety of test cases and benchmarks This miniapp has only a parallel ( navier_solver.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Block Solvers Miniapp The Block Solvers miniapp, found under miniapps/solvers , compares various linear solvers for the saddle point system obtained from the mixed finite element discretization of the Darcy flow problem $$ \\begin{array}{rcl} k{\\bf u} & + \\nabla p & = f \\\\ -\\nabla \\cdot {\\bf u} & & = g \\end{array} $$ The solvers being compared include: The divergence-free solver (coupled and decoupled modes), which is based on a multilevel decomposition of the Raviart-Thomas finite element space and its divergence-free subspace. MINRES preconditioned by the block diagonal preconditioner in ex5p.cpp . For more details, please see the documentation in the miniapps/solvers directory. The miniapp supports: Arbitrary order mixed finite element pair (Raviart-Thomas elements + piecewise discontinuous polynomials) Various combinations of essential and natural boundary conditions Homogeneous or heterogeneous scalar coefficient k This miniapp has only a parallel ( block-solvers.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Overlapping Grids Miniapps Overlapping grids-based frameworks can often make problems tractable that are otherwise inaccessible with a single conforming grid. The following gslib -based miniapps in MFEM demonstrate how to set up and use overlapping grids: The Schwarz Example 1 miniapp in miniapps/gslib has a serial ( schwarz_ex1.cpp ) and a parallel ( schwarz_ex1p.cpp ) version that solve the Poisson problem on overlapping grids. The serial version is restricted to two overlapping grids, while the parallel version supports an arbitrary number of overlapping grids. The Navier Conjugate Heat Transfer miniapp in miniapps/navier ( navier_cht.cpp ) demonstrates how a conjugate heat transfer problem can be solved with the fluid dynamics (incompressible Navier-Stokes equations) and heat transfer (advection-diffusion equation) PDEs modeled on different meshes.
These miniapps require installation of the gslib library. We recommend that new users start with the example codes before moving to the miniapps. ParELAG AMGe for H(curl) and H(div) Miniapp This is a miniapp that exhibits the ParELAG library and part of its capabilities. The miniapp employs MFEM and ParELAG to solve $H(\\mathrm{curl})$- and $H(\\mathrm{div})$-elliptic forms by an element based algebraic multigrid (AMGe). ParELAG is a library mostly developed at the Center for Applied Scientific Computing of Lawrence Livermore National Laboratory, California, USA. The miniapp uses: A multilevel hierarchy of de Rham complexes of finite element spaces, built by ParELAG; Hiptmair-type (hybrid) smoothers, implemented in ParELAG; AMS (Auxiliary-space Maxwell Solver) or ADS (Auxiliary-space Divergence Solver), from HYPRE, for preconditioning or solving on the coarsest levels. Alternatively, it is possible to precondition or solve the $H(\\mathrm{div})$ form on the coarsest level via a hybridization approach. However, this is not yet implemented in ParELAG for the coarse levels. Only the hybridization solver that is directly applicable to an $H(\\mathrm{div})$-$L^2$ mixed (saddle-point) system is currently available in ParELAG. We recommend viewing ex3p.cpp and ex4p.cpp before viewing this miniapp. For more details, please see the documentation in the miniapps/parelag directory. This miniapp has only a parallel ( MultilevelHcurlHdivSolver.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Generating Gaussian Random Fields via the SPDE Method This miniapp generates Gaussian random fields on meshed domains $\\Omega \\subset \\mathbb{R}^n$ via the SPDE method. The method exploits a stochastic, fractional PDE whose full-space solutions yield Gaussian random fields with a Mat\u00e9rn covariance. The method was introduced and popularized by Lindgren et. al in 2010. In this miniapp, we use a slightly modified representation following Khristenko et. al . More specifically, we solve the equation \\begin{equation} \\left( -\\frac{1}{2\\nu} \\nabla \\cdot \\left( \\Theta \\nabla \\right) + \\mathbf{1} \\right)^{\\frac{2\\nu+n}{4}} u(x,w) = \\eta W(x,w) \\ \\ \\ \\text{in} \\ \\ \\Omega, \\end{equation} with various boundary conditions. Solving this equation on $\\Omega = \\mathbb{R}^n$ delivers a homogeneous Gaussian random field with zero mean and Mat\u00e9rn covariance, \\begin{align}\\label{eq:MaternCovariance} C(x,y) &= \\sigma^2M_\\nu \\left(\\sqrt{2\\nu}\\, \\| x-y \\|_{\\Theta} \\right) , \\end{align} where $M_\\nu(z) = \\frac{2^{1-\\nu}}{\\Gamma(\\nu)} z ^{\\nu} K_\\nu \\left( z \\right)$ and $\\| x-y \\|_{\\Theta}^2 = (x-y)^\\top\\Theta (x-y)$. The Mat\u00e9rn model provides the regularity parameter $\\nu > 0$ and the anisotropic diffusion tensor $\\Theta \\in \\mathbb{R}^{n\\times n}$, which determines the spatial structure (correlation lengths). However, applying boundary conditions to the SPDE above provides the ability to model a significantly larger class of inhomogeneous random fields on complex domains. For further details, see the miniapp README . We recommend viewing ex33p.cpp before viewing this miniapp. This miniapp ( generate_random_field.cpp ) has only a parallel implementation. It further requires MFEM to be built with LAPACK, otherwise you may only use predefined values for $\\nu$. We recommend that new users start with the example codes before moving to the miniapps. 
Multidomain and SubMesh demonstration Miniapp This miniapp aims to demonstrate how to solve two PDEs, that represent different physics, on the same domain. MFEM's SubMesh interface is used to compute on and transfer between the spaces of predefined parts of the domain. For the sake of simplicity, the spaces on each domain are using the same order H1 finite elements. This does not mean that the approach is limited to this configuration. A 3D domain comprised of an outer box with a cylinder shaped inside is used. A heat equation is described on the outer box domain \\begin{align} \\frac{\\partial T}{\\partial t} &= \\kappa \\Delta T &&\\mbox{in outer box}\\\\ T &= T_{wall} &&\\mbox{on outside wall}\\\\ \\nabla T \\cdot \\vec{n} &= 0 &&\\mbox{on inside (cylinder) wall} \\end{align} with temperature $T$ and coefficient $\\kappa$ (non-physical in this example). A convection-diffusion equation is described inside the cylinder domain \\begin{align} \\frac{\\partial T}{\\partial t} &= \\kappa \\Delta T - \\alpha \\nabla \\cdot (\\vec{b} T) & &\\mbox{in inner cylinder}\\\\ T &= T_{wall} & &\\mbox{on cylinder wall}\\\\ \\nabla T \\cdot \\vec{n} &= 0 & &\\mbox{else} \\end{align} with temperature $T$, coefficients $\\kappa$, $\\alpha$ and prescribed velocity profile $\\vec{b}$, and $T_{wall}$ obtained from the heat equation. To couple the solutions of both equations, a segregated solve with one way coupling approach is used. The heat equation of the outer box is solved from the timestep $T_{box}(t)$ to $T_{box}(t+dt)$. Then for the convection-diffusion equation $T_{wall}$ is set to $T_{box}(t+dt)$ and the equation is solved for $T(t+dt)$ which results in a first-order one way coupling. This miniapp has only a parallel ( multidomain.cpp ) implementation. We recommend that new users start with the example codes before moving to the miniapps. DPG miniapp This miniapp demonstrates how to discretize and solve various PDEs using the Discontinuous Petrov-Galerkin (DPG) method. It utilizes a new user-friendly interface to assemble the block DPG systems arising from the discretization of any DPG formulation (such as Ultraweak or Primal). In addition, the miniapp supports complex-valued systems, static condensation for block systems, and AMR using the built-in DPG residual-based error indicator. This capability is showcased in the following DPG examples in miniapps/dpg . Ultraweak DPG formulation for diffusion . This example solves the simple Poisson equation $$-\u0394 u = f$$ and computes rates of convergence under successive uniform h-refinements for a smooth manufactured solution. The parallel version also includes an AMR implementation for the L-shape benchmark problem. This example has a serial ( diffusion.cpp ) and a parallel ( pdiffusion.cpp ) version. Ultraweak DPG formulation for convection-diffusion . This example solves the convection-diffusion problem: \\begin{align} -\\epsilon \\Delta u + \\nabla \\cdot (\\beta u) &= f\\\\ \\end{align} using AMR. The example demonstrates the use of mesh-dependent test norms which are suitable for problems with solutions that exhibit large gradients present in internal or boundary layers . The example has a serial ( convection-diffusion.cpp ) and a parallel ( pconvection-diffusion.cpp ) version. Ultraweak DPG formulation for time-harmonic linear acoustics . 
This example solves the indefinite Helmholtz equation \\begin{align} -\\Delta u - \\omega^2 u &= f \\end{align} The example includes formulations with manufactured plane-wave solutions as well as high-frequency scattering problems and the use of Perfectly Matched Layers (PML). It also demonstrates how to set up complex-valued systems and preconditioners for their solutions. The example has a serial ( acoustics.cpp ) and a parallel ( pacoustics.cpp ) version. Ultraweak DPG formulation for time-harmonic Maxwell . This example solves the indefinite Maxwell problem \\begin{align} \\nabla \\times (\\mu^{-1} \\nabla \\times E) - \\omega^2 \\epsilon E &= J \\end{align} The example includes formulations with smooth manufactured solutions, AMR formulations for high-frequency scattering problems, as well as a problem with a singular solution. The example has a serial ( maxwell.cpp ) and a parallel ( pmaxwell.cpp ) version. Tribol miniapp This miniapp demonstrates how to use Tribol's mortar method to solve a contact patch test. A contact patch test places two aligned, linear elastic cubes in contact, then verifies that the exact elasticity solution for this problem is recovered. The exact solution requires transmission of a uniform pressure field across a (not necessarily conforming) interface (i.e., the contact surface). Mortar methods (including the one implemented in Tribol) are generally able to pass the contact patch test. The test assumes small deformations and no accelerations, so the relationship between forces/contact pressures and deformations/contact gaps is linear and, therefore, the problem can be solved exactly with a single linear solve. The mortar implementation is based on Puso and Laursen (2004) . A description of the Tribol implementation is available in the Serac documentation . Lagrange multipliers are used to solve for the pressure required to prevent violation of the contact constraints. This miniapp has only a parallel ( contact-patch-test.cpp ) implementation. For more details, please see the documentation in miniapps/tribol/README.md . We recommend that new users start with the example codes before moving to the miniapps. ", "title": "Examples orig"}, {"location": "examples-orig/#example-codes-and-miniapps", "text": "This page provides a brief overview of MFEM's example codes and miniapps. For detailed documentation of the MFEM sources, including the examples, see the online Doxygen documentation , or the doc directory in the distribution. The goal of the example codes is to provide a step-by-step introduction to MFEM in simple model settings. The miniapps are more complex, and are intended to be more representative of the advanced usage of the library in physics/application codes. We recommend that new users start with the example codes before moving to the miniapps. Select from the categories below to display examples and miniapps that contain the respective feature. All examples support (arbitrarily) high-order meshes and finite element spaces . The numerical results from the example codes can be visualized using the GLVis visualization tool (based on MFEM). See the GLVis website for more details. Users are encouraged to submit any example codes and miniapps that they have created and would like to share.
Contact a member of the MFEM team to report bugs or post questions or comments .", "title": "Example Codes and Miniapps"}, {"location": "examples-orig/#example-0-simplest-laplace-problem", "text": "This is the simplest MFEM example and a good starting point for new users. The example demonstrates the use of MFEM to define and solve an $H^1$ finite element discretization of the Laplace problem $$-\\Delta u = 1 \\quad\\text{in } \\Omega$$ with homogeneous Dirichlet boundary conditions $$ u = 0 \\quad\\text{on } \\partial\\Omega$$ The example illustrates the use of the basic MFEM classes for defining the mesh, finite element space, as well as linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. The example has serial ( ex0.cpp ) and parallel ( ex0p.cpp ) versions.", "title": "Example 0: Simplest Laplace Problem"}, {"location": "examples-orig/#example-1-laplace-problem", "text": "This example code demonstrates the use of MFEM to define a simple isoparametric finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Specifically, we discretize with the finite element space coming from the mesh (linear by default, quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The problem solved in this example is the same as ex0 , but with more sophisticated options and features. The example highlights the use of mesh refinement, finite element grid functions, as well as linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. We also cover the explicit elimination of essential boundary conditions, static condensation, and the optional connection to the GLVis tool for visualization. The example has a serial ( ex1.cpp ), a parallel ( ex1p.cpp ), and HPC versions: performance/ex1.cpp , performance/ex1p.cpp . It also has a PETSc modification in examples/petsc , a PUMI modification in examples/pumi and a Ginkgo modification in examples/ginkgo . Partial assembly and GPU devices are supported.", "title": "Example 1: Laplace Problem"}, {"location": "examples-orig/#example-2-linear-elasticity", "text": "This example code solves a simple linear elasticity problem describing a multi-material cantilever beam. Specifically, we approximate the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder with $f$ being a constant pull down vector on boundary elements with attribute 2, and zero otherwise. The geometry of the domain is assumed to be as follows: The example demonstrates the use of high-order and NURBS vector finite element spaces with the linear elasticity bilinear form, meshes with curved elements, and the definition of piece-wise constant and vector coefficient objects. Static condensation is also illustrated. The example has a serial ( ex2.cpp ) and a parallel ( ex2p.cpp ) version. It also has a PETSc modification in examples/petsc and a PUMI modification in examples/pumi . 
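Looking back at Examples 0 and 1 above, the basic serial workflow they demonstrate can be condensed into a short standalone program in the spirit of ex0.cpp. The mesh file path and polynomial order below are assumptions; the real example adds command-line options and GLVis output:

```cpp
#include "mfem.hpp"
using namespace mfem;

int main()
{
   // Load and refine a mesh (star.mesh ships with MFEM's data directory).
   Mesh mesh("../data/star.mesh");
   mesh.UniformRefinement();

   // H^1 space of order 1 and the list of boundary (Dirichlet) true dofs.
   H1_FECollection fec(1, mesh.Dimension());
   FiniteElementSpace fespace(&mesh, &fec);
   Array<int> boundary_dofs;
   fespace.GetBoundaryTrueDofs(boundary_dofs);

   // Solution grid function, initialized to the Dirichlet value u = 0.
   GridFunction x(&fespace);
   x = 0.0;

   // Right-hand side (f = 1) and diffusion bilinear form.
   ConstantCoefficient one(1.0);
   LinearForm b(&fespace);
   b.AddDomainIntegrator(new DomainLFIntegrator(one));
   b.Assemble();
   BilinearForm a(&fespace);
   a.AddDomainIntegrator(new DiffusionIntegrator);
   a.Assemble();

   // Form A X = B, eliminate boundary dofs, solve with PCG + Gauss-Seidel.
   SparseMatrix A;
   Vector X, B;
   a.FormLinearSystem(boundary_dofs, x, b, A, X, B);
   GSSmoother M(A);
   PCG(A, M, B, X, 1, 200, 1e-12, 0.0);
   a.RecoverFEMSolution(X, b, x);

   x.Save("sol.gf");
   mesh.Save("mesh.mesh");
   return 0;
}
```

Example 1 wraps the same workflow with command-line options, optional static condensation, partial assembly/device support, and GLVis output.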
We recommend viewing Example 1 before viewing this example.", "title": "Example 2: Linear Elasticity"}, {"location": "examples-orig/#example-3-definite-maxwell-problem", "text": "This example code solves a simple 3D electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 2D or 3D. The example demonstrates the use of $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. Static condensation is also illustrated. The example has a serial ( ex3.cpp ) and a parallel ( ex3p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly and GPU devices are supported. We recommend viewing examples 1-2 before viewing this example.", "title": "Example 3: Definite Maxwell Problem"}, {"location": "examples-orig/#example-4-grad-div-problem", "text": "This example code solves a simple 2D/3D $H(div)$ diffusion problem corresponding to the second order definite equation $$-{\\rm grad}(\\alpha\\,{\\rm div}(F)) + \\beta F = f$$ with boundary condition $F \\cdot n$ = \"given normal field\". Here we use a given exact solution $F$ and compute the corresponding right hand side $f$. We discretize with the Raviart-Thomas finite elements. The example demonstrates the use of $H(div)$ finite element spaces with the grad-div and $H(div)$ vector finite element mass bilinear form, as well as the computation of discretization error when the exact solution is known. Bilinear form hybridization and static condensation are also illustrated. The example has a serial ( ex4.cpp ) and a parallel ( ex4p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly and GPU devices are supported. We recommend viewing examples 1-3 before viewing this example.", "title": "Example 4: Grad-div Problem"}, {"location": "examples-orig/#example-5-darcy-problem", "text": "This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system $$ \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} $$ with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize with Raviart-Thomas finite elements (velocity $\\bf u$) and piecewise discontinuous polynomials (pressure $p$). The example demonstrates the use of the BlockMatrix and BlockOperator classes, as well as the collective saving of several grid functions in VisIt and ParaView formats. The example has a serial ( ex5.cpp ) and a parallel ( ex5p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly is supported. We recommend viewing examples 1-4 before viewing this example.", "title": "Example 5: Darcy Problem"}, {"location": "examples-orig/#example-6-laplace-problem-with-amr", "text": "This is a version of Example 1 with a simple adaptive mesh refinement loop. The problem being solved is again the Laplace equation $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. 
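Example 5 above couples the $H(div)$ velocity block and the $L^2$ pressure block through MFEM's BlockOperator. A minimal serial sketch of that block setup is given below; the mesh, coefficient, and variable names are assumptions, and the right-hand side and solver are omitted:

```cpp
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(8, 8, Element::QUADRILATERAL);
   int order = 1;
   RT_FECollection rt_fec(order, mesh.Dimension());   // velocity u
   L2_FECollection l2_fec(order, mesh.Dimension());   // pressure p
   FiniteElementSpace R_space(&mesh, &rt_fec);
   FiniteElementSpace W_space(&mesh, &l2_fec);

   // Block offsets: [0, dim(R), dim(R)+dim(W)].
   Array<int> block_offsets(3);
   block_offsets[0] = 0;
   block_offsets[1] = R_space.GetVSize();
   block_offsets[2] = W_space.GetVSize();
   block_offsets.PartialSum();

   ConstantCoefficient k(1.0);

   // M = (k u, v) and B = -(div u, q).
   BilinearForm mVarf(&R_space);
   mVarf.AddDomainIntegrator(new VectorFEMassIntegrator(k));
   mVarf.Assemble();
   mVarf.Finalize();

   MixedBilinearForm bVarf(&R_space, &W_space);
   bVarf.AddDomainIntegrator(new VectorFEDivergenceIntegrator);
   bVarf.Assemble();
   bVarf.Finalize();

   SparseMatrix &M = mVarf.SpMat();
   SparseMatrix &B = bVarf.SpMat();
   B *= -1.0;
   TransposeOperator Bt(&B);

   // Assemble the 2x2 saddle-point operator [M B^T; B 0].
   BlockOperator darcyOp(block_offsets);
   darcyOp.SetBlock(0, 0, &M);
   darcyOp.SetBlock(0, 1, &Bt);
   darcyOp.SetBlock(1, 0, &B);
   // (ex5 then solves this with MINRES and a block-diagonal preconditioner.)
   return 0;
}
```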
The problem is solved on a sequence of meshes which are locally refined in a conforming (triangles, tetrahedrons) or non-conforming (quadrilaterals, hexahedra) manner according to a simple ZZ error estimator. The example demonstrates MFEM's capability to work with both conforming and nonconforming refinements, in 2D and 3D, on linear, curved and surface meshes. Interpolation of functions from coarse to fine meshes, as well as persistent GLVis visualization are also illustrated. The example has a serial ( ex6.cpp ) and a parallel ( ex6p.cpp ) version. It also has a PETSc modification in examples/petsc and a PUMI modification in examples/pumi . Partial assembly and GPU devices are supported. We recommend viewing Example 1 before viewing this example.", "title": "Example 6: Laplace Problem with AMR"}, {"location": "examples-orig/#example-7-surface-meshes", "text": "This example code demonstrates the use of MFEM to define a triangulation of a unit sphere and a simple isoparametric finite element discretization of the Laplace problem with mass term, $$-\\Delta u + u = f.$$ The example highlights mesh generation, the use of mesh refinement, high-order meshes and finite elements, as well as surface-based linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. Simple local mesh refinement is also demonstrated. The example has a serial ( ex7.cpp ) and a parallel ( ex7p.cpp ) version. We recommend viewing Example 1 before viewing this example.", "title": "Example 7: Surface Meshes"}, {"location": "examples-orig/#example-8-dpg-for-the-laplace-problem", "text": "This example code demonstrates the use of the Discontinuous Petrov-Galerkin (DPG) method in its primal 2x2 block form as a simple finite element discretization of the Laplace problem $$-\\Delta u = f$$ with homogeneous Dirichlet boundary conditions. We use high-order continuous trial space, a high-order interfacial (trace) space, and a high-order discontinuous test space defining a local dual ($H^{-1}$) norm. We use the primal form of DPG, see \"A primal DPG method without a first-order reformulation\" , Demkowicz and Gopalakrishnan, CAM 2013. The example highlights the use of interfacial (trace) finite elements and spaces, trace face integrators and the definition of block operators and preconditioners. The example has a serial ( ex8.cpp ) and a parallel ( ex8p.cpp ) version. We recommend viewing examples 1-5 before viewing this example.", "title": "Example 8: DPG for the Laplace Problem"}, {"location": "examples-orig/#example-9-dg-advection", "text": "This example code solves the time-dependent advection equation $$\\frac{\\partial u}{\\partial t} + v \\cdot \\nabla u = 0,$$ where $v$ is a given fluid velocity, and $u_0(x)=u(0,x)$ is a given initial condition. The example demonstrates the use of Discontinuous Galerkin (DG) bilinear forms in MFEM (face integrators), the use of explicit and implicit (with block ILU preconditioning) ODE time integrators, the definition of periodic boundary conditions through periodic meshes, as well as the use of GLVis for persistent visualization of a time-evolving solution. The saving of time-dependent data files for external visualization with VisIt and ParaView is also illustrated. The example has a serial ( ex9.cpp ) and a parallel ( ex9p.cpp ) version. 
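The time-stepping pattern used by Example 9 (and by the other transient examples above) is an ODESolver driving a TimeDependentOperator through Init and Step. The sketch below uses a toy decay operator in place of ex9's DG advection operator, which instead evaluates $M^{-1}(K u + b)$; everything else about the loop is the same:

```cpp
#include "mfem.hpp"
#include <iostream>
using namespace mfem;

// Toy operator: du/dt = -u (stands in for ex9's DG advection operator).
class DecayOperator : public TimeDependentOperator
{
public:
   DecayOperator(int n) : TimeDependentOperator(n) { }
   virtual void Mult(const Vector &u, Vector &dudt) const
   { dudt = u; dudt *= -1.0; }
};

int main()
{
   DecayOperator oper(4);
   Vector u(4);
   u = 1.0;                      // initial condition

   RK4Solver ode_solver;         // explicit RK4, one of ex9's ODE solver options
   ode_solver.Init(oper);

   double t = 0.0, t_final = 1.0, dt = 0.01;
   while (t < t_final - 1e-12)
   {
      ode_solver.Step(u, t, dt); // advances u and t in place
   }
   std::cout << "u(1) ~ " << u(0) << " (exact value exp(-1) ~ 0.368)\n";
   return 0;
}
```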
It also has a SUNDIALS modification in examples/sundials , a PETSc modification in examples/petsc , and a HiOp modification in examples/hiop .", "title": "Example 9: DG Advection"}, {"location": "examples-orig/#example-10-nonlinear-elasticity", "text": "This example solves a time dependent nonlinear elasticity problem of the form $$ \\frac{dv}{dt} = H(x) + S v\\,,\\qquad \\frac{dx}{dt} = v\\,, $$ where $H$ is a hyperelastic model and $S$ is a viscosity operator of Laplacian type. The geometry of the domain is assumed to be as follows: The example demonstrates the use of nonlinear operators, as well as their implicit time integration using a Newton method for solving an associated reduced backward-Euler type nonlinear equation. Each Newton step requires the inversion of a Jacobian matrix, which is done through a (preconditioned) inner solver. The example has a serial ( ex10.cpp ) and a parallel ( ex10p.cpp ) version. It also has a SUNDIALS modification in examples/sundials and a PETSc modification in examples/petsc . We recommend viewing examples 2 and 9 before viewing this example.", "title": "Example 10: Nonlinear Elasticity"}, {"location": "examples-orig/#example-11-laplace-eigenproblem", "text": "This example code demonstrates the use of MFEM to solve the eigenvalue problem $$-\\Delta u = \\lambda u$$ with homogeneous Dirichlet boundary conditions. We compute a number of the lowest eigenmodes by discretizing the Laplacian and Mass operators using a finite element space of the specified order, or an isoparametric/isogeometric space if order < 1 (quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE, as well as optionally the SuperLU or STRUMPACK parallel direct solvers. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex11p.cpp ) version. It also has a SLEPc modification in examples/petsc . We recommend viewing Example 1 before viewing this example.", "title": "Example 11: Laplace Eigenproblem"}, {"location": "examples-orig/#example-12-linear-elasticity-eigenproblem", "text": "This example code solves the linear elasticity eigenvalue problem for a multi-material cantilever beam. Specifically, we compute a number of the lowest eigenmodes by approximating the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = \\lambda {\\bf u} \\,,$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field $\\bf u$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder. The geometry of the domain is assumed to be as follows: The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex12p.cpp ) version. 
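The LOBPCG/BoomerAMG combination mentioned above can be sketched roughly as follows (cf. ex11p.cpp); the stiffness and mass matrices A and M are assumed to come from ParBilinearForm::ParallelAssemble(), and the function name, iteration counts, and tolerances are placeholders.

```cpp
#include "mfem.hpp"
using namespace mfem;

// Hedged sketch: compute the nev lowest eigenpairs of A x = lambda M x
// with LOBPCG preconditioned by BoomerAMG, as in Examples 11 and 12.
void SolveEigenproblem(HypreParMatrix &A, HypreParMatrix &M, int nev)
{
   HypreBoomerAMG amg(A);
   amg.SetPrintLevel(0);

   HypreLOBPCG lobpcg(A.GetComm());
   lobpcg.SetNumModes(nev);
   lobpcg.SetPreconditioner(amg);
   lobpcg.SetMaxIter(200);
   lobpcg.SetTol(1e-8);
   lobpcg.SetPrintLevel(1);
   lobpcg.SetMassMatrix(M);
   lobpcg.SetOperator(A);

   Array<double> eigenvalues;
   lobpcg.Solve();
   lobpcg.GetEigenvalues(eigenvalues);
   // Individual eigenvectors are available via lobpcg.GetEigenvector(i).
}
```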
We recommend viewing examples 2 and 11 before viewing this example.", "title": "Example 12: Linear Elasticity Eigenproblem"}, {"location": "examples-orig/#example-13-maxwell-eigenproblem", "text": "This example code solves the Maxwell (electromagnetic) eigenvalue problem $$\\nabla\\times\\nabla\\times\\, E = \\lambda\\, E $$ with homogeneous Dirichlet boundary conditions $E \\times n = 0$. We compute a number of the lowest nonzero eigenmodes by discretizing the curl curl operator using a Nedelec finite element space of the specified order in 2D or 3D. The example highlights the use of the AME subspace eigenvalue solver from HYPRE, which uses LOBPCG and AMS internally. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex13p.cpp ) version. We recommend viewing examples 3 and 11 before viewing this example.", "title": "Example 13: Maxwell Eigenproblem"}, {"location": "examples-orig/#example-14-dg-diffusion", "text": "This example code demonstrates the use of MFEM to define a discontinuous Galerkin (DG) finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Finite element spaces of any order, including zero on regular grids, are supported. The example highlights the use of discontinuous spaces and DG-specific face integrators. The example has a serial ( ex14.cpp ) and a parallel ( ex14p.cpp ) version. We recommend viewing examples 1 and 9 before viewing this example.", "title": "Example 14: DG Diffusion"}, {"location": "examples-orig/#example-15-dynamic-amr", "text": "Building on Example 6 , this example demonstrates dynamic adaptive mesh refinement. The mesh is adapted to a time-dependent solution by refinement as well as by derefinement. For simplicity, the solution is prescribed and no time integration is done. However, the error estimation and refinement/derefinement decisions are realistic. At each outer iteration the right hand side function is changed to mimic a time dependent problem. Within each inner iteration the problem is solved on a sequence of meshes which are locally refined according to a simple ZZ error estimator. At the end of the inner iteration the error estimates are also used to identify any elements which may be over-refined and a single derefinement step is performed. After each refinement or derefinement step a rebalance operation is performed to keep the mesh evenly distributed among the available processors. The example demonstrates MFEM's capability to refine, derefine and load balance nonconforming meshes, in 2D and 3D, and on linear, curved and surface meshes. Interpolation of functions between coarse and fine meshes, persistent GLVis visualization, and saving of time-dependent fields for external visualization with VisIt are also illustrated. The example has a serial ( ex15.cpp ) and a parallel ( ex15p.cpp ) version. We recommend viewing examples 1, 6 and 9 before viewing this example.", "title": "Example 15: Dynamic AMR"}, {"location": "examples-orig/#example-16-time-dependent-heat-conduction", "text": "This example code solves a simple 2D/3D time dependent nonlinear heat conduction problem $$\\frac{du}{dt} = \\nabla \\cdot \\left( \\kappa + \\alpha u \\right) \\nabla u$$ with a natural insulating boundary condition $\\frac{du}{dn} = 0$. We linearize the problem by using the temperature field $u$ from the previous time step to compute the conductivity coefficient. 
This example demonstrates both implicit and explicit time integration as well as a single Picard step method for linearization. The saving of time dependent data files for external visualization with VisIt is also illustrated. The example has a serial ( ex16.cpp ) and a parallel ( ex16p.cpp ) version. We recommend viewing examples 2, 9, and 10 before viewing this example.", "title": "Example 16: Time Dependent Heat Conduction"}, {"location": "examples-orig/#example-17-dg-linear-elasticity", "text": "This example code solves a simple linear elasticity problem describing a multi-material cantilever beam using symmetric or non-symmetric discontinuous Galerkin (DG) formulation. Specifically, we approximate the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are Dirichlet, $\\bf{u}=\\bf{u_D}$, on the fixed part of the boundary, namely boundary attributes 1 and 2; on the rest of the boundary we use ${\\sigma}({\\bf u})\\cdot n = {\\bf 0}$. The geometry of the domain is assumed to be as follows: The example demonstrates the use of high-order DG vector finite element spaces with the linear DG elasticity bilinear form, meshes with curved elements, and the definition of piece-wise constant and function vector-coefficient objects. The use of non-homogeneous Dirichlet b.c. imposed weakly, is also illustrated. The example has a serial ( ex17.cpp ) and a parallel ( ex17p.cpp ) version. We recommend viewing examples 2 and 14 before viewing this example.", "title": "Example 17: DG Linear Elasticity"}, {"location": "examples-orig/#example-18-dg-euler-equations", "text": "This example code solves the compressible Euler system of equations, a model nonlinear hyperbolic PDE, with a discontinuous Galerkin (DG) formulation. The primary purpose is to show how a transient system of nonlinear equations can be formulated in MFEM. The equations are solved in conservative form $$\\frac{\\partial u}{\\partial t} + \\nabla \\cdot {\\bf F}(u) = 0$$ with a state vector $u = [ \\rho, \\rho v_0, \\rho v_1, \\rho E ]$, where $\\rho$ is the density, $v_i$ is the velocity in the $i^{\\rm th}$ direction, $E$ is the total specific energy, and $H = E + p / \\rho$ is the total specific enthalpy. The pressure, $p$ is computed through a simple equation of state (EOS) call. The conservative hydrodynamic flux ${\\bf F}$ in each direction $i$ is $${\\bf F_{\\it i}} = [ \\rho v_i, \\rho v_0 v_i + p \\delta_{i,0}, \\rho v_1 v_i + p \\delta_{i,1}, \\rho v_i H ]$$ Specifically, the example solves for an exact solution of the equations whereby a vortex is transported by a uniform flow. Since all boundaries are periodic here, the method's accuracy can be assessed by measuring the difference between the solution and the initial condition at a later time when the vortex returns to its initial location. Note that as the order of the spatial discretization increases, the timestep must become smaller. This example currently uses a simple estimate derived by Cockburn and Shu for the 1D RKDG method. An additional factor can be tuned by passing the --cfl (or -c shorter) flag. 
The example demonstrates user-defined nonlinear form with hyperbolic form integrator for systems of equations that are defined with block vectors, and how these are used with an operator for explicit time integrators. In this case the system also involves an external approximate Riemann solver for the DG interface flux. It also demonstrates how to use GLVis for in-situ visualization of vector grid functions. The example has a serial ( ex18.cpp ) and a parallel ( ex18p.cpp ) version. We recommend viewing examples 9, 14 and 17 before viewing this example.", "title": "Example 18: DG Euler Equations"}, {"location": "examples-orig/#example-19-incompressible-nonlinear-elasticity", "text": "This example code solves the quasi-static incompressible nonlinear hyperelasticity equations. Specifically, it solves the nonlinear equation $$ \\nabla \\cdot \\sigma(F) = 0 $$ subject to the constraint $$ \\text{det } F = 1 $$ where $\\sigma$ is the Cauchy stress and $F_{ij} = \\delta_{ij} + u_{i,j}$ is the deformation gradient. To handle the incompressibility constraint, pressure is included as an independent unknown $p$ and the stress response is modeled as an incompressible neo-Hookean hyperelastic solid . The geometry of the domain is assumed to be as follows: This formulation requires solving the saddle point system $$ \\left[ \\begin{array}{cc} K &B^T \\\\ B & 0 \\end{array} \\right] \\left[\\begin{array}{c} \\Delta u \\\\ \\Delta p \\end{array} \\right] = \\left[\\begin{array}{c} R_u \\\\ R_p \\end{array} \\right] $$ at each Newton step. To solve this linear system, we implement a specialized block preconditioner of the form $$ P^{-1} = \\left[\\begin{array}{cc} I & -\\tilde{K}^{-1}B^T \\\\ 0 & I \\end{array} \\right] \\left[\\begin{array}{cc} \\tilde{K}^{-1} & 0 \\\\ 0 & -\\gamma \\tilde{S}^{-1} \\end{array} \\right] $$ where $\\tilde{K}^{-1}$ is an approximation of the inverse of the stiffness matrix $K$ and $\\tilde{S}^{-1}$ is an approximation of the inverse of the Schur complement $S = BK^{-1}B^T$. To approximate the Schur complement, we use the mass matrix for the pressure variable $p$. The example demonstrates how to solve nonlinear systems of equations that are defined with block vectors as well as how to implement specialized block preconditioners for use in iterative solvers. The example has a serial ( ex19.cpp ) and a parallel ( ex19p.cpp ) version. We recommend viewing examples 2, 5 and 10 before viewing this example.", "title": "Example 19: Incompressible Nonlinear Elasticity"}, {"location": "examples-orig/#example-20-symplectic-integration-of-hamiltonian-systems", "text": "This example demonstrates the use of the variable order, symplectic time integration algorithm. Symplectic integration algorithms are designed to conserve energy when integrating systems of ODEs which are derived from Hamiltonian systems. Hamiltonian systems define the energy of a system as a function of time (t), a set of generalized coordinates (q), and their corresponding generalized momenta (p). $$ H(q,p,t) = T(p) + V(q,t) $$ Hamilton's equations then specify how q and p evolve in time: $$ \\frac{dq}{dt} = \\frac{dH}{dp}\\,,\\qquad \\frac{dp}{dt} = -\\frac{dH}{dq} $$ To use the symplectic integration classes we need to define an mfem::Operator ${\\bf P}$ which evaluates the action of dH/dp, and an mfem::TimeDependentOperator ${\\bf F}$ which computes -dH/dq. This example visualizes its results as an evolution in phase space by defining the axes to be $q$, $p$, and $t$ rather than $x$, $y$, and $z$. 
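To make the operator interface just mentioned concrete, here is a hedged sketch of the two operators for the simple harmonic oscillator Hamiltonian listed below, H = (p^2/m + q^2/k)/2; the class names are illustrative only, and in the example such operators are then coupled to MFEM's symplectic integration classes.

```cpp
#include "mfem.hpp"
using namespace mfem;

// Evaluates dH/dp = p/m for a single degree of freedom.
class GradT : public Operator
{
   double m;
public:
   GradT(double m_) : Operator(1), m(m_) { }
   void Mult(const Vector &p, Vector &dqdt) const override { dqdt.Set(1.0/m, p); }
};

// Evaluates -dH/dq = -q/k for a single degree of freedom.
class NegGradV : public TimeDependentOperator
{
   double k;
public:
   NegGradV(double k_) : TimeDependentOperator(1), k(k_) { }
   void Mult(const Vector &q, Vector &dpdt) const override { dpdt.Set(-1.0/k, q); }
};
```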
In this space we build a ribbon-like mesh with nodes at $(0,0,t)$ and $(q,p,t)$. Finally we plot the energy as a function of time as a scalar field on this ribbon-like mesh. This scheme highlights any variations in the energy of the system. This example offers five simple 1D Hamiltonians: Simple Harmonic Oscillator (mass on a spring) $$H = \\frac{1}{2}\\left( \\frac{p^2}{m} + \\frac{q^2}{k} \\right)$$ Pendulum $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} - k \\left( 1 - cos(q) \\right) \\right]$$ Gaussian Potential Well $$H = \\frac{p^2}{2m} - k e^{-q^2 / 2}$$ Quartic Potential $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} + k \\left( 1 + q^2 \\right) q^2 \\right]$$ Negative Quartic Potential $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} + k \\left( 1 - \\frac{q^2}{8} \\right) q^2 \\right]$$ In all cases these Hamiltonians are shifted by constant values so that the energy will remain positive. The mean and standard deviation of the computed energies at each time step are displayed upon completion. When run in parallel, each processor integrates the same Hamiltonian system but starting from different initial conditions. The example has a serial ( ex20.cpp ) and a parallel ( ex20p.cpp ) version. See the Maxwell miniapp for another application of symplectic integration.", "title": "Example 20: Symplectic Integration of Hamiltonian Systems"}, {"location": "examples-orig/#example-21-adaptive-mesh-refinement-for-linear-elasticity", "text": "This is a version of Example 2 with a simple adaptive mesh refinement loop. The problem being solved is again linear elasticity describing a multi-material cantilever beam. The problem is solved on a sequence of meshes which are locally refined in a conforming (triangles, tetrahedrons) or non-conforming (quadrilaterals, hexahedra) manner according to a simple ZZ error estimator. The example demonstrates MFEM's capability to work with both conforming and nonconforming refinements, in 2D and 3D, on linear and curved meshes. Interpolation of functions from coarse to fine meshes, as well as persistent GLVis visualization are also illustrated. The example has a serial ( ex21.cpp ) and a parallel ( ex21p.cpp ) version. We recommend viewing Examples 2 and 6 before viewing this example.", "title": "Example 21: Adaptive mesh refinement for linear elasticity"}, {"location": "examples-orig/#example-22-complex-linear-systems", "text": "This example code demonstrates the use of MFEM to define and solve a complex-valued linear system. It implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ In each case the field is driven by a forced oscillation, with angular frequency $\\omega$, imposed at the boundary or a portion of the boundary. The example also demonstrates how to display a time-varying solution as a sequence of fields sent to a single GLVis socket. The example has a serial ( ex22.cpp ) and a parallel ( ex22p.cpp ) version. 
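For the scalar $H^1$ variant above, a hedged sketch of the complex-valued form assembly (in the spirit of ex22.cpp) might look as follows; the constant coefficients and the helper function are placeholders, and the subsequent solve of the equivalent block-real system is omitted.

```cpp
#include "mfem.hpp"
using namespace mfem;

// Sketch only: the real part carries the diffusion and -omega^2*b mass terms,
// the imaginary part carries the omega*c damping term.
void AssembleDampedOscillator(FiniteElementSpace &fespace,
                              double a, double b, double c, double omega)
{
   ConstantCoefficient a_coef(a);
   ConstantCoefficient m_real(-omega * omega * b);
   ConstantCoefficient m_imag(omega * c);

   SesquilinearForm sqf(&fespace, ComplexOperator::HERMITIAN);
   sqf.AddDomainIntegrator(new DiffusionIntegrator(a_coef), NULL);
   sqf.AddDomainIntegrator(new MassIntegrator(m_real),
                           new MassIntegrator(m_imag));
   sqf.Assemble();
   // The complex system is then formed and solved through the equivalent
   // 2x2 block-real system, as illustrated in the example code.
}
```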
We recommend viewing examples 1, 3, and 4 before viewing this example.", "title": "Example 22: Complex Linear Systems"}, {"location": "examples-orig/#example-23-wave-problem", "text": "This example code solves a simple 2D/3D wave equation with a second order time derivative: $$\\frac{\\partial^2 u}{\\partial t^2} - c^2\\Delta u = 0$$ The boundary conditions are either Dirichlet or Neumann. The example demonstrates the use of time dependent operators, implicit solvers and second order time integration. The example has only a serial ( ex23.cpp ) version. We recommend viewing examples 9 and 10 before viewing this example.", "title": "Example 23: Wave Problem"}, {"location": "examples-orig/#example-24-mixed-finite-element-spaces", "text": "This example code illustrates usage of mixed finite element spaces, with three variants: $H^1 \\times H(curl)$ $H(curl) \\times H(div)$ $H(div) \\times L_2$ Using different approaches for demonstration purposes, we project or interpolate a gradient, curl, or divergence in the appropriate spaces, comparing the errors in each case. Partial assembly and GPU devices are supported. The example has a serial ( ex24.cpp ) and a parallel ( ex24p.cpp ) version. We recommend viewing examples 1 and 3 before viewing this example.", "title": "Example 24: Mixed finite element spaces"}, {"location": "examples-orig/#example-25-perfectly-matched-layers", "text": "The example illustrates the use of a Perfectly Matched Layer (PML) for the simulation of time-harmonic electromagnetic waves propagating in unbounded domains. PML was originally introduced by Berenger in \"A Perfectly Matched Layer for the Absorption of Electromagnetic Waves\" . It is a technique used to solve wave propagation problems posed in infinite domains. The implementation involves the introduction of an artificial absorbing layer that minimizes undesired reflections. Inside this layer a complex coordinate stretching map is used which forces the wave modes to decay exponentially. The example solves the indefinite Maxwell equations $$\\nabla \\times (a \\nabla \\times E) - \\omega^2 b E = f,$$ where $a = \\mu^{-1} |J|^{-1} J^T J$, $b = \\epsilon |J| J^{-1} J^{-T}$ and $J$ is the Jacobian matrix of the coordinate transformation. The example demonstrates discretization with Nedelec finite elements in 2D or 3D, as well as the use of complex-valued bilinear and linear forms. Several test problems are included, with known exact solutions. The example has a serial ( ex25.cpp ) and a parallel ( ex25p.cpp ) version. We recommend viewing Example 22 before viewing this example.", "title": "Example 25: Perfectly Matched Layers"}, {"location": "examples-orig/#example-26-multigrid-preconditioner", "text": "This example code demonstrates the use of MFEM to define a simple isoparametric finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions and how to solve it efficiently using a matrix-free multigrid preconditioner. The example highlights the creation of a hierarchy of discretization spaces and diffusion bilinear forms using partial assembly. The levels in the hierarchy of finite element spaces may be constructed through geometric or order refinements. Moreover, the construction of a multigrid preconditioner for the PCG solver is shown. The multigrid uses a PCG solver on the coarsest level and second order Chebyshev accelerated smoothers on the other levels. The example has a serial ( ex26.cpp ) and a parallel ( ex26p.cpp ) version.
We recommend viewing Example 1 before viewing this example.", "title": "Example 26: Multigrid Preconditioner"}, {"location": "examples-orig/#example-27-laplace-boundary-conditions", "text": "This example code demonstrates the use of MFEM to define a simple finite element discretization of the Laplace problem: $$ -\\Delta u = 0 $$ with a variety of boundary conditions. Specifically, we discretize using a continuous or discontinuous FE space of the specified order. We then apply Dirichlet, Neumann (both homogeneous and inhomogeneous), Robin, and Periodic boundary conditions on different portions of a predefined mesh. Boundary conditions: $u = u_{dbc}$ on $\\Gamma_{dbc}$ $\\hat{n}\\cdot\\nabla u = g_{nbc}$ on $\\Gamma_{nbc}$ $\\hat{n}\\cdot\\nabla u = 0$ on $\\Gamma_{nbc_0}$ $\\hat{n}\\cdot\\nabla u + a u = b$ on $\\Gamma_{rbc}$ as well as periodic boundary conditions which are enforced topologically. The example has a serial ( ex27.cpp ) and a parallel ( ex27p.cpp ) version. We recommend viewing examples 1 and 14 before viewing this example.", "title": "Example 27: Laplace Boundary Conditions"}, {"location": "examples-orig/#example-28-constraints-and-sliding-boundary-conditions", "text": "This example code illustrates the use of constraints in linear solvers by solving an elasticity problem where the normal component of the displacement is constrained to zero on two boundaries but tangential displacement is allowed. The constraints can be enforced in several different ways, including eliminating them from the linear system or solving a saddle-point system that explicitly includes constraint conditions. The example has a serial ( ex28.cpp ) and a parallel ( ex28p.cpp ) version. We recommend viewing example 2 before viewing this example.", "title": "Example 28: Constraints and Sliding Boundary Conditions"}, {"location": "examples-orig/#example-29-solving-pdes-on-embedded-surfaces", "text": "This example demonstrates setting up and solving an anisotropic Laplace problem $$-\\nabla\\cdot(\\sigma\\nabla u) = 1 \\quad\\text{in } \\Omega$$ with homogeneous Dirichlet boundary conditions $$ u = 0 \\quad\\text{on } \\partial\\Omega$$ where $\\Omega$ is a two dimensional curved surface embedded in three dimensions and $\\sigma$ is an anisotropic diffusion tensor. The example demonstrates and validates our DiffusionIntegrator 's ability to properly integrate three dimensional fluxes on a two dimensional domain. Not all of our integrators currently support such cases, but the DiffusionIntegrator can be used as a simple example of how to extend other integrators when necessary. The example has a serial ( ex29.cpp ) and a parallel ( ex29p.cpp ) version. We recommend viewing examples 1 and 7 before viewing this example.", "title": "Example 29: Solving PDEs on embedded surfaces"}, {"location": "examples-orig/#example-30-resolving-rough-and-fine-scale-problem-data", "text": "Unresolved problem data will affect the accuracy of a discretized PDE solution as well as a posteriori estimates of the solution error. This example uses a CoefficientRefiner object to preprocess an input mesh until the resolution of the prescribed problem data $f \\in L^2$ is below a prescribed tolerance.
In this example, the resolution is identified with a data oscillation function on the mesh $\\mathcal{T}$, defined $$ \\mathrm{osc}(f) = \\Big( \\sum_{T\\in\\mathcal{T}} \\| h \\cdot (I - \\Pi)\\, f \\|^2_{L^2(T)} \\Big)^{1/2}, $$ where $h$ is the local element size function and $\\Pi$ is a finite element projection operator, and the sum is taken over all elements $T$ in the mesh. In this example, the coarse initial mesh is adaptively refined until $\\mathrm{osc}(f)$ is below a prescribed tolerance for various candidate functions $f \\in L^2$. When using rough problem data, it is recommended to perform this type of preprocessing before a posteriori error estimation. The example has a serial ( ex30.cpp ) and a parallel ( ex30p.cpp ) version. We recommend viewing examples 1 and 6 before viewing this example.", "title": "Example 30: Resolving rough and fine-scale problem data"}, {"location": "examples-orig/#example-31-anisotropic-definite-maxwell-problem", "text": "This example code solves a simple electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + \\sigma E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". In this example $\\sigma$ is an anisotropic 3x3 tensor. Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 1D, 2D, or 3D. The example demonstrates the use of restricted $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. These restricted spaces allow the solution of 1D or 2D electromagnetic problems which involve 3D field vectors. Such problems arise in plasma physics and crystallography. The example has a serial ( ex31.cpp ) and a parallel ( ex31p.cpp ) version. We recommend viewing example 3 before viewing this example.", "title": "Example 31: Anisotropic Definite Maxwell Problem"}, {"location": "examples-orig/#example-32-anisotropic-maxwell-eigenproblem", "text": "This example code solves the Maxwell (electromagnetic) eigenvalue problem with anisotropic permittivity, $\\epsilon$ $$\\nabla\\times\\nabla\\times\\, E = \\lambda\\, \\epsilon E $$ with homogeneous Dirichlet boundary conditions $E \\times n = 0$. We compute a number of the lowest nonzero eigenmodes by discretizing the curl curl operator using a Nedelec finite element space of the specified order in 1D, 2D, or 3D. The example demonstrates the use of restricted $H(curl)$ finite element spaces in an eigenmode context. These restricted spaces allow the solution of 1D or 2D electromagnetic problems which involve 3D field vectors. Such problems arise in plasma physics and crystallography. The example highlights the use of the AME subspace eigenvalue solver from HYPRE, which uses LOBPCG and AMS internally. Reusing multiple GLVis visualization windows for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex32p.cpp ) version. We recommend viewing examples 13 and 31 before viewing this example.", "title": "Example 32: Anisotropic Maxwell Eigenproblem"}, {"location": "examples-orig/#example-33-spectral-fractional-laplacian", "text": "This example code demonstrates the use of MFEM to solve the fractional Laplacian problem $$ (-\\Delta)^\\alpha u = 1, \\quad \\alpha > 0, $$ with homogeneous Dirichlet boundary conditions. 
The problem solved in this example is similar to ex1 , but involves a fractional-order diffusion operator whose inverse can be approximated by a series of inverses of integer-order diffusion operators. Solving each of these independent integer-order PDEs with MFEM and summing their solutions results in a discrete solution to the fractional Laplacian problem above. The example has a serial ( ex33.cpp ) and a parallel ( ex33p.cpp ) version. We recommend viewing Example 1 before viewing this example.", "title": "Example 33: Spectral fractional Laplacian"}, {"location": "examples-orig/#example-34-source-function-using-a-submesh-transfer", "text": "This example demonstrates the use of a SubMesh object to transfer solution data from a sub-domain and use this as a source function on the full domain. In this case we compute a volumetric current density $\\vec{J}$ as the gradient of a scalar potential $\\varphi$ on a portion of the domain. $$\\nabla\\cdot(\\sigma\\nabla\\varphi)=0$$ $$\\vec{J} = -\\sigma\\nabla\\varphi$$ Where a voltage difference is applied on surfaces of the sub-domain (shown on the left) to generate the current density restricted to this sub-domain. The current density is then transferred to the full domain (shown on the right) using a SubMesh object. We then use this current density on the full domain as a source term in a magnetostatic solve for a vector potential $\\vec{A}$. $$\\nabla\\times(\\mu^{-1}\\nabla\\times\\vec{A})=\\vec{J}$$ $$\\vec{B} = \\nabla\\times\\vec{A}$$ This example verifies the recreation of boundary attributes on a sub-domain mesh as well as transfer of Raviart-Thomas vector fields between the SubMesh and the full Mesh. Note that the data transfer in this particular example involves arbitrary order Raviart-Thomas degrees of freedom on a mixture of tetrahedral and triangular prism elements. The example has a serial ( ex34.cpp ) and a parallel ( ex34p.cpp ) version. We recommend viewing Examples 1 and 3 before viewing this example.", "title": "Example 34: Source Function using a SubMesh Transfer"}, {"location": "examples-orig/#example-35-port-boundary-conditions-using-submesh-transfers", "text": "This example demonstrates the use of a SubMesh object to transfer a port boundary condition from a portion of the boundary to the corresponding portion of the full domain. Just as in Example 22 this example implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0\\mbox{ with }u|_\\Gamma=v$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0\\mbox{ with }\\hat{n}\\times(\\vec{u}\\times\\hat{n})|_\\Gamma=\\vec{v}$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0\\mbox{ with }\\hat{n}\\cdot\\vec{u}|_\\Gamma=v$$ Where $\\Gamma$ is a portion of the boundary called the port . In each case the field is driven by a forced oscillation, with angular frequency $\\omega$, imposed at the boundary or a portion of the boundary. In Example 22 this boundary condition was simply a constant in space. 
In this example the boundary condition is an eigenmode of a lower dimensional eigenvalue problem defined on a portion of the boundary as follows: For the scalar $H^1$ field: $$-\\nabla\\cdot\\left(\\nabla v\\right) = \\lambda\\,v\\mbox{ with }v|_{\\partial\\Gamma}=0$$ For the vector $H(curl)$ field: $$\\nabla\\times\\left(\\nabla\\times\\vec{v}\\right) = \\lambda\\,\\vec{v}\\mbox{ with }\\hat{n}_{\\partial\\Gamma}\\times\\vec{v}|_{\\partial\\Gamma}=0$$ For the vector $H(div)$ field: $$-\\nabla\\cdot\\left(\\nabla v\\right) = \\lambda\\,v\\mbox{ with }\\hat{n}_{\\partial\\Gamma}\\cdot\\nabla v|_{\\partial\\Gamma}=0$$ The different cases implemented in this example can be used to verify the transfer of an $H^1$ scalar field, the tangential components of an $H(curl)$ vector field, and the normal component of an $H(div)$ vector field (as a scalar $L^2$ field in this case) between a SubMesh and its parent mesh. The example has only a parallel ( ex35p.cpp ) version because the eigenmode solver used to compute the field on the port is only implemented in parallel. We recommend viewing Examples 11, 13, and 22 before viewing this example.", "title": "Example 35: Port Boundary Conditions using SubMesh Transfers"}, {"location": "examples-orig/#example-36-obstacle-problem", "text": "This example code solves the pointwise bound-constrained energy minimization problem $$ \\text{minimize } \\frac{1}{2}\\|\\nabla u\\|^2 \\text{ in } H^1_0(\\Omega)\\, \\text{ subject to } u \\ge \\varphi\\,.$$ This is known as the obstacle problem, and it is a classical motivating example in the study of variational inequalities and free boundary problems. In this example, the obstacle $\\varphi$ is the graph of a half-sphere centered at the origin of a circular domain $\\Omega$. After solving to a specified tolerance, the numerical solution is compared to a closed-form exact solution to assess its accuracy. The problem is solved using the Proximal Galerkin finite element method, which is a nonlinear, structure-preserving mixed method for pointwise bound constraints proposed by Keith and Surowiec . In turn, this example highlights MFEM's ability to deliver high-order solutions to variational inequalities and showcases how to set up and solve nonlinear mixed methods. The example has a serial ( ex36.cpp ) and a parallel ( ex36p.cpp ) version. We recommend viewing Example 1 before viewing this example.", "title": "Example 36: Obstacle Problem"}, {"location": "examples-orig/#example-37-topology-optimization", "text": "Density field $\\rho$ Problem set-up and domain $\\Omega$ This example code solves a classical cantilever beam topology optimization problem. The aim is to find an optimal material density field $\\rho$ in $L^1(\\Omega)$ to minimize the elastic compliance; i.e., $$\\begin{align} &\\text{minimize} \\int_\\Omega \\mathbf{f} \\cdot \\mathbf{u}(\\rho)\\, \\mathrm{d}x\\, \\text{ over }\\, \\rho \\in L^1(\\Omega) \\\\ &\\text{subject to }\\, 0 \\leq \\rho \\leq 1\\, \\text{ and } \\int_\\Omega \\rho\\, \\mathrm{d}x = \\theta\\, \\mathrm{vol}(\\Omega) \\,. \\end{align}$$ In this problem, $\\mathbf{f}$ is a localized force and the linearly elastic displacement field $\\mathbf{u} = \\mathbf{u}(\\rho)$ is determined by a material density field $\\rho$ with total volume fraction $0<\\theta<1$. The problem is solved using a mirror descent algorithm proposed by Keith and Surowiec . 
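The sketch below is not the mirror descent iteration itself; it only illustrates, with placeholder names, how the compliance objective $\\int_\\Omega \\mathbf{f} \\cdot \\mathbf{u}\\, \\mathrm{d}x$ from the statement above can be evaluated once a displacement has been computed.

```cpp
#include "mfem.hpp"
using namespace mfem;

// Hedged sketch: evaluate the compliance for a given displacement u.
// f_coef is a placeholder for the localized force coefficient.
double Compliance(FiniteElementSpace &fespace, VectorCoefficient &f_coef,
                  const GridFunction &u)
{
   LinearForm load(&fespace);
   load.AddDomainIntegrator(new VectorDomainLFIntegrator(f_coef));
   load.Assemble();
   return load * u;   // discrete F^T U, approximating the integral of f . u
}
```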
For further details, see the more elaborate description of this PDE-constrained optimization problem given in the example code and the aforementioned paper. The example has a serial ( ex37.cpp ) and a parallel ( ex37p.cpp ) version. We recommend viewing Example 2 before viewing this example.", "title": "Example 37: Topology Optimization"}, {"location": "examples-orig/#example-38-cut-volume-and-cut-surface-integration", "text": "This example code demonstrates construction of cut-surface and cut-volume IntegrationRules. The cut is specified by the zero level set of a given Coefficient $\\phi$. The resulting IntegrationRules are combined with standard LinearFormIntegrators to demonstrate integration of a function $u$ over an implicit interface, and a subdomain bounded by an implicit interface: $$ S = \\int_{\\phi = 0} u(x) ~ ds, \\quad V = \\int_{\\phi > 0} u(x) ~ dx. $$ The IntegrationRules are constructed by the moment-fitting algorithm introduced by M\u00fcller, Kummer and Oberlack . Through a set of basis functions, for each element the method defines and solves a local under-determined system for the vector of quadrature weights. All surface and volume integrals, which are required to form the system, are reduced to 1D integration over intersected segments. The example has only a serial ( ex38.cpp ) version, because the construction of the integration rules is an element-local procedure. It requires MFEM to be built with LAPACK, which is used to find the optimal solution of an under-determined system of equations.", "title": "Example 38: Cut-Volume and Cut-Surface Integration"}, {"location": "examples-orig/#example-39-named-attribute-sets", "text": "This example uses the Poisson equation to demonstrate the use of named attribute sets in MFEM to specify material regions, boundary regions, or source regions by name rather than attribute numbers. It also demonstrates how new named attribute sets may be created from arbitrary groupings of attribute numbers and used as a convenient shorthand to refer to those groupings in other portions of the application or through the command line. Named attribute sets also required changes to MFEM's mesh file formats. This example makes use of a custom input mesh file ( compass.msh ) produced using Gmsh which includes named regions and boundaries. A related mesh file ( compass.mesh ) illustrates MFEM's representation of the new named attribute sets. See file formats for details of the augmented mesh file format. The example has a serial ( ex39.cpp ) and a parallel ( ex39p.cpp ) version. We recommend viewing Example 1 before viewing this example.", "title": "Example 39: Named Attribute Sets"}, {"location": "examples-orig/#example-40-eikonal-equation", "text": "This example highlights MFEM's ability to solve a fully-nonlinear, first-order PDE with high-order finite elements. In particular this example uses the proximal Galerkin method to solve the eikonal equation, $$ |\\nabla u| = 1 \\text{ in } \\Omega, \\quad u = 0 \\text{ on } \\partial \\Omega. $$ At each point $x$ in the domain $\\Omega$, the solution of this PDE provides the Euclidean distance to the domain boundary, $u(x) = \\min \\{ | x - y| : y \\in \\partial \\Omega\\}$. The problem is solved by recasting $u$ as the solution of the nonlinear program $$ \\text{maximize } \\int_\\Omega u\\, \\mathrm{d} x\\, \\text{ in } W^{1,\\infty}_0(\\Omega)\\, \\text{ subject to } |\\nabla u | \\leq 1 \\text{ a.e. 
in } \\Omega.$$ A solution is then obtained by discretizing and solving a sequence of nonlinear saddle-point problems. See the example code for a more detailed description of the method. The example has a serial ( ex40.cpp ) and a parallel ( ex40p.cpp ) version. We recommend viewing Example 5 and Example 36 before viewing this example.", "title": "Example 40: Eikonal Equation"}, {"location": "examples-orig/#nurbs-example-1-laplace-problem", "text": "This example code demonstrates the use of MFEM to define a simple isogeometric NURBS discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. The problem solved in this example is the same as Example 1 . The example has a serial ( nurbs_ex1.cpp ) and a parallel ( nurbs_ex1p.cpp ) version. There is also a version that demonstrates efficient patchwise quadrature ( nurbs_ex1 patch.cpp ).", "title": "NURBS Example 1: Laplace Problem"}, {"location": "examples-orig/#nurbs-example-3-definite-maxwell-problem", "text": "This example code solves a simple electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with NURBS-based $H(curl)$ elements in 2D or 3D. The problem solved in this example is the same as Example 3 . The example has only a serial ( nurbs_ex1.cpp ) version.", "title": "NURBS Example 3: Definite Maxwell Problem"}, {"location": "examples-orig/#nurbs-example-5-darcy-problem", "text": "This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system $$ \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} $$ with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize the velocity ($\\bf u$) with NURBS-based $H(div)$ elements and the pressure ($p$) with compatible NURBS-based $H^1$ elements. The problem solved in this example is the same as Example 5 . The example has only a serial ( nurbs_ex5.cpp ) version.", "title": "NURBS Example 5: Darcy Problem"}, {"location": "examples-orig/#nurbs-example-11-laplace-eigenproblem", "text": "This example code demonstrates the use of MFEM to solve the eigenvalue problem $$-\\Delta u = \\lambda u$$ with homogeneous Dirichlet boundary conditions. We compute a number of the lowest eigenmodes by discretizing the Laplacian and Mass operators using a finite element space of the specified order, or an isoparametric/isogeometric space if order < 1 (quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE, as well as optionally the SuperLU or STRUMPACK parallel direct solvers. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The problem solved in this example is the same as Example 11 . The example has only a parallel ( nurbs_ex11p.cpp ) version.", "title": "NURBS Example 11: Laplace Eigenproblem"}, {"location": "examples-orig/#nurbs-example-24-mixed-finite-element-spaces", "text": "The problem solved in this example is the same as Example 24 , but NURBS-based elements are also supported.
This example code illustrates usage of mixed finite element spaces, with three variants: $H^1 \\times H(curl)$ $H(curl) \\times H(div)$ $H(div) \\times L_2$ Using different approaches for demonstration purposes, we project or interpolate a gradient, curl, or divergence in the appropriate spaces, comparing the errors in each case. The example has a serial ( nurbs_ex24.cpp ).", "title": "NURBS Example 24: Mixed finite element spaces"}, {"location": "examples-orig/#volta-miniapp-electrostatics", "text": "This miniapp demonstrates the use of MFEM to solve realistic problems in the field of linear electrostatics. Its features include: dielectric materials charge densities surface charge densities prescribed voltages applied polarizations high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( volta.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Volta Miniapp: Electrostatics"}, {"location": "examples-orig/#tesla-miniapp-magnetostatics", "text": "This miniapp showcases many of MFEM's features while solving a variety of realistic magnetostatics problems. Its features include: diamagnetic and/or paramagnetic materials ferromagnetic materials volumetric current densities surface current densities external fields high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( tesla.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Tesla Miniapp: Magnetostatics"}, {"location": "examples-orig/#maxwell-miniapp-transient-full-wave-electromagnetics", "text": "This miniapp solves the equations of transient full-wave electromagnetics. Its features include: mixed formulation of the coupled first-order Maxwell equations $H(\\mathrm{curl})$ discretization of the electric field $H(\\mathrm{div})$ discretization of the magnetic flux energy conserving, variable order, implicit time integration dielectric materials diamagnetic and/or paramagnetic materials conductive materials volumetric current densities Sommerfeld absorbing boundary conditions high order meshes high order basis functions advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( maxwell.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Maxwell Miniapp: Transient Full-Wave Electromagnetics"}, {"location": "examples-orig/#joule-miniapp-transient-magnetics-and-joule-heating", "text": "This miniapp solves the equations of transient low-frequency (a.k.a. eddy current) electromagnetics, and simultaneously computes transient heat transfer with the heat source given by the electromagnetic Joule heating. 
Its features include: $H^1$ discretization of the electrostatic potential $H(\\mathrm{curl})$ discretization of the electric field $H(\\mathrm{div})$ discretization of the magnetic field $H(\\mathrm{div})$ discretization of the heat flux $L^2$ discretization of the temperature implicit transient time integration high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( joule.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Joule Miniapp: Transient Magnetics and Joule Heating"}, {"location": "examples-orig/#mobius-strip-miniapp", "text": "This miniapp generates various Mobius strip-like surface meshes. It is a good way to generate complex surface meshes. Manipulating the mesh topology and performing mesh transformation are demonstrated. The mobius-strip mesh in the data directory was generated with this miniapp. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( mobius-strip.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Mobius Strip Miniapp"}, {"location": "examples-orig/#klein-bottle-miniapp", "text": "This miniapp generates three types of Klein bottle surfaces. It is similar to the mobius-strip miniapp. Manipulating the mesh topology and performing mesh transformation are demonstrated. The klein-bottle and klein-donut meshes in the data directory were generated with this miniapp. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( klein-bottle.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Klein Bottle Miniapp"}, {"location": "examples-orig/#toroid-miniapp", "text": "This miniapp generates two types of toroidal volume meshes; one with triangular cross sections and one with square cross sections. It works by defining a stack of individual elements and bending them so that the bottom and top of the stack can be joined to form a torus. It supports various options including: The element type: 0 - Wedge, 1 - Hexahedron The geometric order of the elements The major and minor radii The number of elements in the azimuthal direction The number of nodes to offset by before rejoining the stack The initial angle of the cross sectional shape The number of uniform refinement steps to apply Along with producing some visually interesting meshes, this miniapp demonstrates how simple 3D meshes can be constructed and transformed in MFEM. It also produces a family of meshes with simple but non-trivial topology for testing various features in MFEM. This miniapp has only a serial ( toroid.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Toroid Miniapp"}, {"location": "examples-orig/#twist-miniapp", "text": "This miniapp generates simple periodic meshes to demonstrate MFEM's handling of periodic domains. MFEM's strategy is to use a discontinuous vector field to define the mesh coordinates on a topologically periodic mesh. It works by defining a stack of individual elements and stitching together the top and bottom of the mesh. 
The stack can also be twisted so that the vertices of the bottom and top can be joined with any integer offset (for tetrahedral and wedge meshes only even offsets are supported). The Twist miniapp supports various options including: The element type: 4 - Tetrahedron, 6 - Wedge, 8 - Hexahedron The geometric order of the elements The dimensions of the initial brick-shaped stack of elements The number of elements in the z direction The number of nodes to offset by before rejoining the stack The number of uniform refinement steps to apply Along with producing some visually interesting meshes, this miniapp demonstrates how simple 3D meshes can be constructed and transformed in MFEM. It also produces a family of meshes with simple but non-trivial topology for testing various features in MFEM. This miniapp has only a serial ( twist.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Twist Miniapp"}, {"location": "examples-orig/#extruder-miniapp", "text": "This miniapp creates higher dimensional meshes from lower dimensional meshes by extrusion. Simple coordinate transformations can also be applied if desired. The initial mesh can be 1D or 2D 1D meshes can be extruded in both the y and z directions 2D meshes can be triangular, quadrilateral, or contain both element types Meshes with high order geometry are supported User can specify the number of elements and the distance to extrude Geometric order of the transformed mesh can be user selected or automatic This miniapp provides another demonstration of how simple meshes can be constructed and transformed in MFEM. This miniapp has only a serial ( extruder.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Extruder Miniapp"}, {"location": "examples-orig/#trimmer-miniapp", "text": "This miniapp creates a new mesh file from an existing mesh by trimming away elements with selected attributes. Newly exposed boundary elements will be assigned new or user specified boundary attributes. The initial mesh can be 2D or 3D Meshes with high order geometry are supported Periodic meshes are supported NURBS meshes are not supported This miniapp provides another demonstration of how simple meshes can be constructed in MFEM. This miniapp has only a serial ( trimmer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Trimmer Miniapp"}, {"location": "examples-orig/#polar-nc-miniapp", "text": "This miniapp generates a circular sector mesh that consists of quadrilaterals and triangles of similar sizes. The 3D version of the mesh is made of prisms and tetrahedra. The mesh is non-conforming by design, and can optionally be made curvilinear. The elements are ordered along a space-filling curve by default, which makes the mesh ready for parallel non-conforming AMR in MFEM. The implementation also demonstrates how to initialize a non-conforming mesh on the fly by marking hanging nodes with Mesh::AddVertexParents . For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( polar-nc.cpp ) version.
We recommend that new users start with the example codes before moving to the miniapps.", "title": "Polar-NC Miniapp"}, {"location": "examples-orig/#shaper-miniapp", "text": "This miniapp performs multiple levels of adaptive mesh refinement to resolve the interfaces between different \"materials\" in the mesh, as specified by a given material function. It can be used as a simple initial mesh generator, for example in the case when the interface is too complex to describe without local refinement. Both conforming and non-conforming refinements are supported. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( shaper.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Shaper Miniapp"}, {"location": "examples-orig/#mesh-explorer-miniapp", "text": "This miniapp is a handy tool to examine, visualize and manipulate a given mesh. Some of its features are: visualization of mesh materials and individual mesh elements mesh scaling, randomization, and general transformation manipulation of the mesh curvature the ability to simulate parallel partitioning quantitative and visual reports of mesh quality For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( mesh-explorer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Mesh Explorer Miniapp"}, {"location": "examples-orig/#mesh-optimizer-miniapp", "text": "This miniapp performs mesh optimization using the Target-Matrix Optimization Paradigm (TMOP) by P. Knupp , and a global variational minimization approach ( Dobrev et al. ). It minimizes the quantity $$\\sum_T \\int_T \\mu(J(x)),$$ where $T$ are the target (ideal) elements, $J$ is the Jacobian of the transformation from the target to the physical element, and $\\mu$ is the mesh quality metric. This metric can measure shape, size or alignment of the region around each quadrature point. The combination of targets and quality metrics is used to optimize the physical node positions, i.e., they must be as close as possible to the shape / size / alignment of their targets. This code also demonstrates a possible use of nonlinear operators, as well as their coupling to Newton methods for solving minimization problems. Note that the utilized Newton methods are oriented towards avoiding invalid meshes with negative Jacobian determinants. Each Newton step requires the inversion of a Jacobian matrix, which is done through an inner linear solver. The miniapp has a serial ( mesh-optimizer.cpp ) and a parallel ( pmesh-optimizer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Mesh Optimizer Miniapp"}, {"location": "examples-orig/#mesh-fitting-miniapp", "text": "This miniapp builds upon the mesh optimizer miniapp to enable mesh alignment with the zero isosurface of a discrete level-set. The approach is based on Dobrev et al. and Mittal et al. , where we minimize the quantity $$\\sum_T \\int_T \\mu(J(x)) + \\sum_{s \\in S} w \\,\\, \\sigma^2(x_s).$$ Here, the first term controls mesh quality and the second term enforces weak alignment of a selected subset of mesh-nodes ($s \\in S$) with the zero isosurface of the discrete level-set function ($\\sigma$).
Click on the image on the right to see a demonstration of this method for generating body-fitted meshes for topology optimization in LiDO to maximize beam stiffness under a downward force on the right wall. The miniapp has a parallel ( pmesh-fitting.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Mesh Fitting Miniapp"}, {"location": "examples-orig/#minimal-surface-miniapp", "text": "This miniapp solves Plateau's problem: the Dirichlet problem for the minimal surface equation. Options to solve the minimal surface equations of both parametric surfaces as well as surfaces restricted to be graphs of the form $z=f(x,y)$ are supported, including a number of examples such as the Catenoid, Helicoid, Costa and Scherk surfaces. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has a serial ( minimal-surface.cpp ) and a parallel ( pminimal-surface.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Minimal Surface Miniapp"}, {"location": "examples-orig/#low-order-refined-transfer-miniapp", "text": "The lor-transfer miniapp, found under miniapps/tools demonstrates the capability to generate a low-order refined mesh from a high-order mesh, and to transfer solutions between these meshes. Grid functions can be transferred between the coarse, high-order mesh and the low-order refined mesh using either $L^2$ projection or pointwise evaluation. These transfer operators can be designed to discretely conserve mass and to recover the original high-order solution when transferring a low-order grid function that was obtained by restricting a high-order grid function to the low-order refined space. The miniapp has only a serial ( lor-transfer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Low-Order Refined Transfer Miniapp"}, {"location": "examples-orig/#interpolation-miniapps", "text": "The interpolation miniapps, found under miniapps/gslib , demonstrate the capability to interpolate high-order finite element functions at a given set of points in physical space. These miniapps utilize the gslib library's high-order interpolation utility for quad and hex meshes: Find Points miniapp has a serial ( findpts.cpp ) and a parallel ( pfindpts.cpp ) version that demonstrate the basic procedures for point search and evaluation of grid functions. Field Interp miniapp ( field-interp.cpp ) demonstrates how grid functions can be transferred between meshes. Field Diff miniapp ( field-diff.cpp ) demonstrates how grid functions on two different meshes can be compared with each other. These miniapps require installation of the gslib library. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Interpolation Miniapps"}, {"location": "examples-orig/#extrapolation-miniapp", "text": "The extrapolate miniapp, found in the miniapps/shifted directory, extrapolates a finite element function from a set of elements (known values) to the rest of the domain. The set of elements that contains the known values is specified by the positive values of a level set Coefficient. The known values are not modified. The miniapp supports two PDE-based approaches ( Aslam , Bochkov & Gibou ), both of which rely on solving a sequence of advection problems in the direction of the unknown parts of the domain.
The extrapolation can be constant (1st order), linear (2nd order), or quadratic (3rd order). These formal orders hold for a limited band around the zero level set, see the above references for further information. The miniapp has only a parallel ( extrapolate.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Extrapolation Miniapp"}, {"location": "examples-orig/#distance-solver-miniapp", "text": "The distance miniapp, found in the miniapps/shifted directory demonstrates the capability to compute the \"distance\" to a given point source or to the zero level set of a given function. Here \"distance\" refers to the length of the shortest path through the mesh. The input can be a DeltaCoefficient (representing a point source), or any Coefficient (for the case of a level set). The output is a ParGridFunction that can be scalar (representing the scalar distance), or a vector (its magnitude is the distance, and its direction is the starting direction of the shortest path). The miniapp has only a parallel ( distance.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Distance Solver Miniapp"}, {"location": "examples-orig/#shifted-diffusion-miniapp", "text": "The diffusion miniapp, found in the miniapps/shifted directory, demonstrates the capability to formulate a boundary value problem using a surrogate computational domain. The method uses a distance function to the true boundary to enforce Dirichlet boundary conditions on the (non-aligned) mesh faces, therefore \"shifting\" the location where boundary conditions are imposed. The implementation in the miniapp is a high-order extension of the second-generation shifted boundary method . The miniapp has only a parallel ( diffusion.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Shifted Diffusion Miniapp"}, {"location": "examples-orig/#laghos-miniapp", "text": "Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. The computational motives captured in Laghos include: Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements (triangular and tetrahedral elements can also be used, but with the less efficient full assembly option). Serial and parallel mesh refinement options can be set via a command-line flag. Explicit time-stepping loop with a variety of time integrator options. Laghos supports Runge-Kutta ODE solvers of orders 1, 2, 3, 4 and 6. Continuous and discontinuous high-order finite element discretization spaces of runtime-specified order. Moving (high-order) meshes. Separation between the assembly and the quadrature point-based computations. Point-wise definition of mesh size, time-step estimate and artificial viscosity coefficient. Constant-in-time velocity mass operator that is inverted iteratively on each time step. This is an example of an operator that is prepared once (fully or partially assembled), but is applied many times. The application cost is dominant for this operator. Time-dependent force matrix that is prepared every time step (fully or partially assembled) and is applied just twice per \"assembly\". Both the preparation and the application costs are important for this operator. 
Domain-decomposed MPI parallelism. Optional in-situ visualization with GLVis and data output for visualization / data analysis with VisIt . The Laghos miniapp is part of the CEED software suite , a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods. See https://github.com/ceed for more information and source code availability. This is an external miniapp, available at https://github.com/CEED/Laghos .", "title": "Laghos Miniapp"}, {"location": "examples-orig/#remhos-miniapp", "text": "Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. The computational motives captured in Remhos include: Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements. Serial and parallel mesh refinement options can be set via a command-line flag. Explicit time-stepping loop with a variety of time integrator options. Remhos supports Runge-Kutta ODE solvers of orders 1, 2, 3, 4 and 6. Discontinuous high-order finite element discretization spaces of runtime-specified order. Moving (high-order) meshes. Mass operator that is local to each zone. It is inverted by iterative or exact methods at each time step. This operator is constant in time (transport mode) or changing in time (remap mode). Options for full or partial assembly. Advection operator that couples neighboring zones. It is applied once at each time step. This operator is constant in time (transport mode) or changing in time (remap mode). Options for full or partial assembly. Domain-decomposed MPI parallelism. Optional in-situ visualization with GLVis and data output for visualization and data analysis with VisIt . The Remhos miniapp is part of the CEED software suite , a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods. See https://github.com/ceed for more information and source code availability. This is an external miniapp, available at https://github.com/CEED/Remhos .", "title": "Remhos Miniapp"}, {"location": "examples-orig/#navier-miniapp", "text": "Navier is a miniapp that solves the time-dependent Navier-Stokes equations of incompressible fluid dynamics \\begin{align} \\frac{\\partial u}{\\partial t} + (u \\cdot \\nabla) u - \\frac{1}{Re} \\nabla^2 u + \\nabla p &= f \\\\ \\nabla \\cdot u &= 0 \\end{align} using a spatially high-order finite element discretization. The time-dependent problem is solved using an (up to) third-order implicit-explicit method which leverages an extrapolation scheme for the convective parts and a backward-difference formulation for the viscous parts of the equation. The miniapp supports: Arbitrary order H1 elements High order mesh elements IMEX (EXTk-BDFk) time-stepping up to third order Convenient interface for new users A variety of test cases and benchmarks This miniapp has only a parallel ( navier_solver.cpp ) version.
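For illustration, the second-order member of this IMEX family (EXT2/BDF2) treats the convective term explicitly by extrapolation and the viscous and pressure terms implicitly; schematically (this is the generic scheme, not necessarily the exact splitting implemented in the miniapp): $$ \\frac{3 u^{n+1} - 4 u^n + u^{n-1}}{2 \\Delta t} + 2 N(u^n) - N(u^{n-1}) - \\frac{1}{Re} \\nabla^2 u^{n+1} + \\nabla p^{n+1} = f^{n+1}, \\qquad \\nabla \\cdot u^{n+1} = 0, $$ where $N(u) = (u \\cdot \\nabla) u$ denotes the convective term.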
We recommend that new users start with the example codes before moving to the miniapps.", "title": "Navier Miniapp"}, {"location": "examples-orig/#block-solvers-miniapp", "text": "The Block Solvers miniapp, found under miniapps/solvers , compares various linear solvers for the saddle point system obtained from mixed finite element discretization of Darcy's flow problem \\begin{array}{rcl} k{\\bf u} & + \\nabla p & = f \\\\ -\\nabla \\cdot {\\bf u} & & = g \\end{array} The solvers being compared include: The divergence-free solver (coupled and decoupled modes), which is based on a multilevel decomposition of the Raviart-Thomas finite element space and its divergence-free subspace. MINRES preconditioned by the block diagonal preconditioner in ex5p.cpp . For more details, please see the documentation in the miniapps/solvers directory. The miniapp supports: Arbitrary order mixed finite element pair (Raviart-Thomas elements + piecewise discontinuous polynomials) Various combinations of essential and natural boundary conditions Homogeneous or heterogeneous scalar coefficient k This miniapp has only a parallel ( block-solvers.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Block Solvers Miniapp"}, {"location": "examples-orig/#overlapping-grids-miniapps", "text": "Overlapping grids-based frameworks can often make problems tractable that are otherwise inaccessible with a single conforming grid. The following gslib -based miniapps in MFEM demonstrate how to set up and use overlapping grids: The Schwarz Example 1 miniapp in miniapps/gslib has a serial ( schwarz_ex1.cpp ) and a parallel ( schwarz_ex1p.cpp ) version that solves the Poisson problem on overlapping grids. The serial version is restricted to use two overlapping grids, while the parallel version supports an arbitrary number of overlapping grids. The Navier Conjugate Heat Transfer miniapp in miniapps/navier ( navier_cht.cpp ) demonstrates how a conjugate heat transfer problem can be solved with the fluid dynamics (incompressible Navier-Stokes equations) and heat transfer (advection-diffusion equation) PDEs modeled on different meshes. These miniapps require installation of the gslib library. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Overlapping Grids Miniapps"}, {"location": "examples-orig/#parelag-amge-for-hcurl-and-hdiv-miniapp", "text": "This is a miniapp that exhibits the ParELAG library and part of its capabilities. The miniapp employs MFEM and ParELAG to solve $H(\\mathrm{curl})$- and $H(\\mathrm{div})$-elliptic forms by an element-based algebraic multigrid (AMGe). ParELAG is a library mostly developed at the Center for Applied Scientific Computing of Lawrence Livermore National Laboratory, California, USA. The miniapp uses: A multilevel hierarchy of de Rham complexes of finite element spaces, built by ParELAG; Hiptmair-type (hybrid) smoothers, implemented in ParELAG; AMS (Auxiliary-space Maxwell Solver) or ADS (Auxiliary-space Divergence Solver), from HYPRE, for preconditioning or solving on the coarsest levels. Alternatively, it is possible to precondition or solve the $H(\\mathrm{div})$ form on the coarsest level via a hybridization approach. However, this is not yet implemented in ParELAG for the coarse levels. Only the hybridization solver that is directly applicable to an $H(\\mathrm{div})$-$L^2$ mixed (saddle-point) system is currently available in ParELAG.
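As background for the $H(\\mathrm{curl})$ forms that this miniapp targets, here is a minimal serial sketch of assembling the curl-curl plus mass bilinear form in MFEM, in the spirit of ex3.cpp (a sketch only, assuming a recent MFEM version; the mesh and polynomial order are arbitrary illustrative choices):

#include "mfem.hpp"
using namespace mfem;

int main()
{
   // Small 3D hexahedral mesh and a lowest-order Nedelec (H(curl)) space.
   Mesh mesh = Mesh::MakeCartesian3D(4, 4, 4, Element::HEXAHEDRON);
   ND_FECollection fec(1, mesh.Dimension());
   FiniteElementSpace fespace(&mesh, &fec);

   // Bilinear form a(E,F) = (curl E, curl F) + (E, F), i.e. the H(curl)-elliptic
   // form discussed above, with unit coefficients for simplicity.
   ConstantCoefficient one(1.0);
   BilinearForm a(&fespace);
   a.AddDomainIntegrator(new CurlCurlIntegrator(one));
   a.AddDomainIntegrator(new VectorFEMassIntegrator(one));
   a.Assemble();
   a.Finalize();

   // The assembled sparse matrix could now be handed to a solver, e.g. CG with an
   // AMS-type preconditioner in the parallel case.
   return 0;
}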
We recommend viewing ex3p.cpp and ex4p.cpp before viewing this miniapp. For more details, please see the documentation in the miniapps/parelag directory. This miniapp has only a parallel ( MultilevelHcurlHdivSolver.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "ParELAG AMGe for H(curl) and H(div) Miniapp"}, {"location": "examples-orig/#generating-gaussian-random-fields-via-the-spde-method", "text": "This miniapp generates Gaussian random fields on meshed domains $\\Omega \\subset \\mathbb{R}^n$ via the SPDE method. The method exploits a stochastic, fractional PDE whose full-space solutions yield Gaussian random fields with a Mat\u00e9rn covariance. The method was introduced and popularized by Lindgren et al. in 2010. In this miniapp, we use a slightly modified representation following Khristenko et al. More specifically, we solve the equation \\begin{equation} \\left( -\\frac{1}{2\\nu} \\nabla \\cdot \\left( \\Theta \\nabla \\right) + \\mathbf{1} \\right)^{\\frac{2\\nu+n}{4}} u(x,w) = \\eta W(x,w) \\ \\ \\ \\text{in} \\ \\ \\Omega, \\end{equation} with various boundary conditions. Solving this equation on $\\Omega = \\mathbb{R}^n$ delivers a homogeneous Gaussian random field with zero mean and Mat\u00e9rn covariance, \\begin{align}\\label{eq:MaternCovariance} C(x,y) &= \\sigma^2M_\\nu \\left(\\sqrt{2\\nu}\\, \\| x-y \\|_{\\Theta} \\right) , \\end{align} where $M_\\nu(z) = \\frac{2^{1-\\nu}}{\\Gamma(\\nu)} z ^{\\nu} K_\\nu \\left( z \\right)$ and $\\| x-y \\|_{\\Theta}^2 = (x-y)^\\top\\Theta (x-y)$. The Mat\u00e9rn model provides the regularity parameter $\\nu > 0$ and the anisotropic diffusion tensor $\\Theta \\in \\mathbb{R}^{n\\times n}$, which determines the spatial structure (correlation lengths). However, applying boundary conditions to the SPDE above provides the ability to model a significantly larger class of inhomogeneous random fields on complex domains. For further details, see the miniapp README . We recommend viewing ex33p.cpp before viewing this miniapp. This miniapp ( generate_random_field.cpp ) has only a parallel implementation. It further requires MFEM to be built with LAPACK; otherwise, you may only use predefined values for $\\nu$. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Generating Gaussian Random Fields via the SPDE Method"}, {"location": "examples-orig/#multidomain-and-submesh-demonstration-miniapp", "text": "This miniapp aims to demonstrate how to solve two PDEs that represent different physics on the same domain. MFEM's SubMesh interface is used to compute on and transfer between the spaces of predefined parts of the domain. For the sake of simplicity, the spaces on each domain use H1 finite elements of the same order. This does not mean that the approach is limited to this configuration. A 3D domain consisting of an outer box with a cylinder-shaped subdomain inside is used. A heat equation is described on the outer box domain \\begin{align} \\frac{\\partial T}{\\partial t} &= \\kappa \\Delta T &&\\mbox{in outer box}\\\\ T &= T_{wall} &&\\mbox{on outside wall}\\\\ \\nabla T \\cdot \\vec{n} &= 0 &&\\mbox{on inside (cylinder) wall} \\end{align} with temperature $T$ and coefficient $\\kappa$ (non-physical in this example).
A convection-diffusion equation is described inside the cylinder domain \\begin{align} \\frac{\\partial T}{\\partial t} &= \\kappa \\Delta T - \\alpha \\nabla \\cdot (\\vec{b} T) & &\\mbox{in inner cylinder}\\\\ T &= T_{wall} & &\\mbox{on cylinder wall}\\\\ \\nabla T \\cdot \\vec{n} &= 0 & &\\mbox{else} \\end{align} with temperature $T$, coefficients $\\kappa$, $\\alpha$ and prescribed velocity profile $\\vec{b}$, and $T_{wall}$ obtained from the heat equation. To couple the solutions of both equations, a segregated solve with a one-way coupling approach is used. The heat equation on the outer box is advanced over one timestep, from $T_{box}(t)$ to $T_{box}(t+dt)$. Then, for the convection-diffusion equation, $T_{wall}$ is set to $T_{box}(t+dt)$ and the equation is solved for $T(t+dt)$, which results in a first-order one-way coupling. This miniapp has only a parallel ( multidomain.cpp ) implementation. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Multidomain and SubMesh demonstration Miniapp"}, {"location": "examples-orig/#dpg-miniapp", "text": "This miniapp demonstrates how to discretize and solve various PDEs using the Discontinuous Petrov-Galerkin (DPG) method. It utilizes a new user-friendly interface to assemble the block DPG systems arising from the discretization of any DPG formulation (such as Ultraweak or Primal). In addition, the miniapp supports complex-valued systems, static condensation for block systems, and AMR using the built-in DPG residual-based error indicator. This capability is showcased in the following DPG examples in miniapps/dpg . Ultraweak DPG formulation for diffusion . This example solves the simple Poisson equation $$-\u0394 u = f$$ and computes rates of convergence under successive uniform h-refinements for a smooth manufactured solution. The parallel version also includes an AMR implementation for the L-shape benchmark problem. This example has a serial ( diffusion.cpp ) and a parallel ( pdiffusion.cpp ) version. Ultraweak DPG formulation for convection-diffusion . This example solves the convection-diffusion problem: \\begin{align} -\\epsilon \\Delta u + \\nabla \\cdot (\\beta u) &= f\\\\ \\end{align} using AMR. The example demonstrates the use of mesh-dependent test norms which are suitable for problems with solutions that exhibit large gradients in internal or boundary layers . The example has a serial ( convection-diffusion.cpp ) and a parallel ( pconvection-diffusion.cpp ) version. Ultraweak DPG formulation for time-harmonic linear acoustics . This example solves the indefinite Helmholtz equation \\begin{align} -\\Delta u - \\omega^2 u &= f\\\\ \\end{align} The example includes formulations with manufactured plane-wave solutions as well as high-frequency scattering problems and the use of Perfectly Matched Layers (PML). It also demonstrates how to set up complex-valued systems and preconditioners for their solutions. The example has a serial ( acoustics.cpp ) and parallel ( pacoustics.cpp ) version. Ultraweak DPG formulation for time-harmonic Maxwell . This example solves the indefinite Maxwell problem \\begin{align} \\nabla \u00d7 (\\mu^{-1} \\nabla \\times E) - \\omega^2 \\epsilon E&= J\\\\ \\end{align} The example includes formulations with smooth manufactured solutions, AMR formulations for high-frequency scattering problems as well as a problem with a singular solution.
The example has a serial ( maxwell.cpp ) and a parallel ( pmaxwell.cpp ) version.", "title": "DPG miniapp"}, {"location": "examples-orig/#tribol-miniapp", "text": "This miniapp demonstrates how to use Tribol's mortar method to solve a contact patch test. A contact patch test places two aligned, linear elastic cubes in contact, then verifies that the exact elasticity solution for this problem is recovered. The exact solution requires transmission of a uniform pressure field across a (not necessarily conforming) interface (i.e. the contact surface). Mortar methods (including the one implemented in Tribol) are generally able to pass the contact patch test. The test assumes small deformations and no accelerations, so the relationship between forces/contact pressures and deformations/contact gaps is linear and, therefore, the problem can be solved exactly with a single linear solve. The mortar implementation is based on Puso and Laursen (2004) . A description of the Tribol implementation is available in Serac documentation . Lagrange multipliers are used to solve for the pressure required to prevent violation of the contact constraints. This miniapp has only a parallel ( contact-patch-test.cpp ) implementation. For more details, please see the documentation in miniapps/tribol/README.md . We recommend that new users start with the example codes before moving to the miniapps. ", "title": "Tribol miniapp"}, {"location": "examples/", "text": "Example Codes and Miniapps This page provides a brief overview of MFEM's example codes and miniapps. For detailed documentation of the MFEM sources, including the examples, see the online Doxygen documentation , or the doc directory in the distribution. The goal of the example codes is to provide a step-by-step introduction to MFEM in simple model settings. The miniapps are more complex, and are intended to be more representative of the advanced usage of the library in physics/application codes. We recommend that new users start with the example codes before moving to the miniapps. Select from the categories below to display examples and miniapps that contain the respective feature. All examples support (arbitrarily) high-order meshes and finite element spaces . The numerical results from the example codes can be visualized using the GLVis visualization tool (based on MFEM). See the GLVis website for more details. Users are encouraged to submit any example codes and miniapps that they have created and would like to share. Contact a member of the MFEM team to report bugs or post questions or comments . Application (PDE) All Diffusion Convection-diffusion Elasticity Electromagnetics Acoustics grad-div Darcy Advection Conduction Wave Compressible flow Incompressible flow Meshing Nonlocal Stochastic Free boundary Finite Elements All H1 nodal elements L2 discontinuous elements H(curl) Nedelec elements H(div) Raviart-Thomas elements H^{1/2} interfacial elements H^{-1/2} interfacial elements Discretization All Galerkin FEM Mixed FEM Discontinuous Galerkin (DG) Discont.
Petrov-Galerkin (DPG) Hybridization Static condensation Isogeometric analysis (NURBS) Adaptive mesh refinement (AMR) Partial assembly Solver All Jacobi Gauss-Seidel PCG MINRES GMRES Algebraic Multigrid (BoomerAMG) Auxiliary-space Maxwell Solver (AMS) Auxiliary-space Divergence Solver (ADS) SuperLU/STRUMPACK (parallel direct) UMFPACK (serial direct) Newton method (nonlinear solver) Explicit Runge-Kutta (ODE integration) Implicit Runge-Kutta (ODE integration) Newmark (ODE Integration) Symplectic Algorithm (ODE Integration) LOBPCG, AME (eigensolvers) SUNDIALS solvers PETSc solvers SLEPc eigensolvers HiOp solvers None Example 0: Simplest Laplace Problem This is the simplest MFEM example and a good starting point for new users. The example demonstrates the use of MFEM to define and solve an $H^1$ finite element discretization of the Laplace problem $$-\\Delta u = 1 \\quad\\text{in } \\Omega$$ with homogeneous Dirichlet boundary conditions $$ u = 0 \\quad\\text{on } \\partial\\Omega$$ The example illustrates the use of the basic MFEM classes for defining the mesh, finite element space, as well as linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. The example has serial ( ex0.cpp ) and parallel ( ex0p.cpp ) versions. Example 1: Laplace Problem This example code demonstrates the use of MFEM to define a simple isoparametric finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Specifically, we discretize with the finite element space coming from the mesh (linear by default, quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The problem solved in this example is the same as ex0 , but with more sophisticated options and features. The example highlights the use of mesh refinement, finite element grid functions, as well as linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. We also cover the explicit elimination of essential boundary conditions, static condensation, and the optional connection to the GLVis tool for visualization. The example has a serial ( ex1.cpp ), a parallel ( ex1p.cpp ), and HPC versions: performance/ex1.cpp , performance/ex1p.cpp . It also has a PETSc modification in examples/petsc , a PUMI modification in examples/pumi and a Ginkgo modification in examples/ginkgo . Partial assembly and GPU devices are supported. Example 2: Linear Elasticity This example code solves a simple linear elasticity problem describing a multi-material cantilever beam. Specifically, we approximate the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder with $f$ being a constant pull down vector on boundary elements with attribute 2, and zero otherwise. The geometry of the domain is assumed to be as follows: The example demonstrates the use of high-order and NURBS vector finite element spaces with the linear elasticity bilinear form, meshes with curved elements, and the definition of piece-wise constant and vector coefficient objects. Static condensation is also illustrated. 
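Stepping back to Examples 0 and 1, the essential MFEM workflow they introduce fits in a few lines; the following is a condensed serial sketch in the spirit of ex0.cpp (assuming a recent MFEM version; mesh size, order and solver settings are arbitrary illustrative choices):

#include "mfem.hpp"
using namespace mfem;

int main()
{
   // Mesh, H1 space, and homogeneous Dirichlet conditions on the whole boundary.
   Mesh mesh = Mesh::MakeCartesian2D(16, 16, Element::QUADRILATERAL);
   H1_FECollection fec(2, mesh.Dimension());
   FiniteElementSpace fespace(&mesh, &fec);
   Array<int> ess_tdof_list, ess_bdr(mesh.bdr_attributes.Max());
   ess_bdr = 1;
   fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

   // Right-hand side (f = 1) and the Laplace bilinear form.
   ConstantCoefficient one(1.0);
   LinearForm b(&fespace);
   b.AddDomainIntegrator(new DomainLFIntegrator(one));
   b.Assemble();
   BilinearForm a(&fespace);
   a.AddDomainIntegrator(new DiffusionIntegrator(one));
   a.Assemble();

   // Form the linear system, solve with preconditioned CG, recover the solution.
   GridFunction x(&fespace);
   x = 0.0;
   OperatorPtr A;
   Vector B, X;
   a.FormLinearSystem(ess_tdof_list, x, b, A, X, B);
   GSSmoother M((SparseMatrix&)(*A));
   PCG(*A, M, B, X, 1, 400, 1e-12, 0.0);
   a.RecoverFEMSolution(X, b, x);
   return 0;
}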
The example has a serial ( ex2.cpp ) and a parallel ( ex2p.cpp ) version. It also has a PETSc modification in examples/petsc and a PUMI modification in examples/pumi . We recommend viewing Example 1 before viewing this example. Example 3: Definite Maxwell Problem This example code solves a simple 3D electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 2D or 3D. The example demonstrates the use of $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. Static condensation is also illustrated. The example has a serial ( ex3.cpp ) and a parallel ( ex3p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly and GPU devices are supported. We recommend viewing examples 1-2 before viewing this example. Example 4: Grad-div Problem This example code solves a simple 2D/3D $H(div)$ diffusion problem corresponding to the second order definite equation $$-{\\rm grad}(\\alpha\\,{\\rm div}(F)) + \\beta F = f$$ with boundary condition $F \\cdot n$ = \"given normal field\". Here we use a given exact solution $F$ and compute the corresponding right hand side $f$. We discretize with the Raviart-Thomas finite elements. The example demonstrates the use of $H(div)$ finite element spaces with the grad-div and $H(div)$ vector finite element mass bilinear form, as well as the computation of discretization error when the exact solution is known. Bilinear form hybridization and static condensation are also illustrated. The example has a serial ( ex4.cpp ) and a parallel ( ex4p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly and GPU devices are supported. We recommend viewing examples 1-3 before viewing this example. Example 5: Darcy Problem This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system $$ \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} $$ with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize with Raviart-Thomas finite elements (velocity $\\bf u$) and piecewise discontinuous polynomials (pressure $p$). The example demonstrates the use of the BlockMatrix and BlockOperator classes, as well as the collective saving of several grid functions in VisIt and ParaView formats. The example has a serial ( ex5.cpp ) and a parallel ( ex5p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly is supported. We recommend viewing examples 1-4 before viewing this example. Example 6: Laplace Problem with AMR This is a version of Example 1 with a simple adaptive mesh refinement loop. The problem being solved is again the Laplace equation $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. The problem is solved on a sequence of meshes which are locally refined in a conforming (triangles, tetrahedrons) or non-conforming (quadrilaterals, hexahedra) manner according to a simple ZZ error estimator. 
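For reference, the Zienkiewicz-Zhu (ZZ) estimator mentioned here measures, element by element, the mismatch between the computed gradient and a smoothed (recovered) gradient (a standard textbook form, shown for orientation): $$ \\eta_T^2 = \\| G_h(\\nabla u_h) - \\nabla u_h \\|^2_{L^2(T)}, $$ where $u_h$ is the discrete solution and $G_h$ is a gradient-recovery (flux-smoothing) operator; the elements with the largest $\\eta_T$ are marked for refinement.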
The example demonstrates MFEM's capability to work with both conforming and nonconforming refinements, in 2D and 3D, on linear, curved and surface meshes. Interpolation of functions from coarse to fine meshes, as well as persistent GLVis visualization are also illustrated. The example has a serial ( ex6.cpp ) and a parallel ( ex6p.cpp ) version. It also has a PETSc modification in examples/petsc and a PUMI modification in examples/pumi . Partial assembly and GPU devices are supported. We recommend viewing Example 1 before viewing this example. Example 7: Surface Meshes This example code demonstrates the use of MFEM to define a triangulation of a unit sphere and a simple isoparametric finite element discretization of the Laplace problem with mass term, $$-\\Delta u + u = f.$$ The example highlights mesh generation, the use of mesh refinement, high-order meshes and finite elements, as well as surface-based linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. Simple local mesh refinement is also demonstrated. The example has a serial ( ex7.cpp ) and a parallel ( ex7p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 8: DPG for the Laplace Problem This example code demonstrates the use of the Discontinuous Petrov-Galerkin (DPG) method in its primal 2x2 block form as a simple finite element discretization of the Laplace problem $$-\\Delta u = f$$ with homogeneous Dirichlet boundary conditions. We use high-order continuous trial space, a high-order interfacial (trace) space, and a high-order discontinuous test space defining a local dual ($H^{-1}$) norm. We use the primal form of DPG, see \"A primal DPG method without a first-order reformulation\" , Demkowicz and Gopalakrishnan, CAM 2013. The example highlights the use of interfacial (trace) finite elements and spaces, trace face integrators and the definition of block operators and preconditioners. The example has a serial ( ex8.cpp ) and a parallel ( ex8p.cpp ) version. We recommend viewing examples 1-5 before viewing this example. Example 9: DG Advection This example code solves the time-dependent advection equation $$\\frac{\\partial u}{\\partial t} + v \\cdot \\nabla u = 0,$$ where $v$ is a given fluid velocity, and $u_0(x)=u(0,x)$ is a given initial condition. The example demonstrates the use of Discontinuous Galerkin (DG) bilinear forms in MFEM (face integrators), the use of explicit and implicit (with block ILU preconditioning) ODE time integrators, the definition of periodic boundary conditions through periodic meshes, as well as the use of GLVis for persistent visualization of a time-evolving solution. The saving of time-dependent data files for external visualization with VisIt and ParaView is also illustrated. The example has a serial ( ex9.cpp ) and a parallel ( ex9p.cpp ) version. It also has a SUNDIALS modification in examples/sundials , a PETSc modification in examples/petsc , and a HiOp modification in examples/hiop . Example 10: Nonlinear Elasticity This example solves a time dependent nonlinear elasticity problem of the form $$ \\frac{dv}{dt} = H(x) + S v\\,,\\qquad \\frac{dx}{dt} = v\\,, $$ where $H$ is a hyperelastic model and $S$ is a viscosity operator of Laplacian type. The geometry of the domain is assumed to be as follows: The example demonstrates the use of nonlinear operators, as well as their implicit time integration using a Newton method for solving an associated reduced backward-Euler type nonlinear equation. 
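Concretely, one backward-Euler step of the system above couples the updates of $x$ and $v$, and eliminating $x^{n+1}$ leaves a single nonlinear equation for the new velocity (a schematic of the reduced equation; the example's ODE solvers generalize this to other implicit schemes): $$ v^{n+1} = v^n + \\Delta t \\left[ H\\left( x^n + \\Delta t \\, v^{n+1} \\right) + S v^{n+1} \\right], \\qquad x^{n+1} = x^n + \\Delta t \\, v^{n+1}, $$ which is solved for $v^{n+1}$ with Newton's method.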
Each Newton step requires the inversion of a Jacobian matrix, which is done through a (preconditioned) inner solver. The example has a serial ( ex10.cpp ) and a parallel ( ex10p.cpp ) version. It also has a SUNDIALS modification in examples/sundials and a PETSc modification in examples/petsc . We recommend viewing examples 2 and 9 before viewing this example. Example 11: Laplace Eigenproblem This example code demonstrates the use of MFEM to solve the eigenvalue problem $$-\\Delta u = \\lambda u$$ with homogeneous Dirichlet boundary conditions. We compute a number of the lowest eigenmodes by discretizing the Laplacian and Mass operators using a finite element space of the specified order, or an isoparametric/isogeometric space if order < 1 (quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE, as well as optionally the SuperLU or STRUMPACK parallel direct solvers. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex11p.cpp ) version. It also has a SLEPc modification in examples/petsc . We recommend viewing Example 1 before viewing this example. Example 12: Linear Elasticity Eigenproblem This example code solves the linear elasticity eigenvalue problem for a multi-material cantilever beam. Specifically, we compute a number of the lowest eigenmodes by approximating the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = \\lambda {\\bf u} \\,,$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field $\\bf u$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder. The geometry of the domain is assumed to be as follows: The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex12p.cpp ) version. We recommend viewing examples 2 and 11 before viewing this example. Example 13: Maxwell Eigenproblem This example code solves the Maxwell (electromagnetic) eigenvalue problem $$\\nabla\\times\\nabla\\times\\, E = \\lambda\\, E $$ with homogeneous Dirichlet boundary conditions $E \\times n = 0$. We compute a number of the lowest nonzero eigenmodes by discretizing the curl curl operator using a Nedelec finite element space of the specified order in 2D or 3D. The example highlights the use of the AME subspace eigenvalue solver from HYPRE, which uses LOBPCG and AMS internally. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex13p.cpp ) version. We recommend viewing examples 3 and 11 before viewing this example. Example 14: DG Diffusion This example code demonstrates the use of MFEM to define a discontinuous Galerkin (DG) finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Finite element spaces of any order, including zero on regular grids, are supported. The example highlights the use of discontinuous spaces and DG-specific face integrators. 
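To make the DG-specific pieces concrete, here is a minimal serial sketch of an interior-penalty DG assembly and solve for this problem, in the spirit of ex14.cpp (a sketch only, assuming a recent MFEM version; the penalty parameters and solver choice are illustrative):

#include "mfem.hpp"
using namespace mfem;

int main()
{
   // Illustrative choices: a small 2D quad mesh, an order-1 DG space, and the
   // symmetric interior penalty parameters sigma = -1, kappa = (order+1)^2.
   const int order = 1;
   const double sigma = -1.0, kappa = (order + 1) * (order + 1);
   Mesh mesh = Mesh::MakeCartesian2D(8, 8, Element::QUADRILATERAL);
   L2_FECollection fec(order, mesh.Dimension());
   FiniteElementSpace fespace(&mesh, &fec);
   ConstantCoefficient one(1.0), zero(0.0);

   // Right-hand side: volume source plus weakly imposed (zero) Dirichlet data.
   LinearForm b(&fespace);
   b.AddDomainIntegrator(new DomainLFIntegrator(one));
   b.AddBdrFaceIntegrator(new DGDirichletLFIntegrator(zero, one, sigma, kappa));
   b.Assemble();

   // DG diffusion: element terms plus interior and boundary face penalty terms.
   BilinearForm a(&fespace);
   a.AddDomainIntegrator(new DiffusionIntegrator(one));
   a.AddInteriorFaceIntegrator(new DGDiffusionIntegrator(one, sigma, kappa));
   a.AddBdrFaceIntegrator(new DGDiffusionIntegrator(one, sigma, kappa));
   a.Assemble();
   a.Finalize();

   // For sigma = -1 the system is symmetric, so Gauss-Seidel preconditioned CG works.
   GridFunction x(&fespace);
   x = 0.0;
   const SparseMatrix &A = a.SpMat();
   GSSmoother M(A);
   PCG(A, M, b, x, 1, 500, 1e-12, 0.0);
   return 0;
}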
The example has a serial ( ex14.cpp ) and a parallel ( ex14p.cpp ) version. We recommend viewing examples 1 and 9 before viewing this example. Example 15: Dynamic AMR Building on Example 6 , this example demonstrates dynamic adaptive mesh refinement. The mesh is adapted to a time-dependent solution by refinement as well as by derefinement. For simplicity, the solution is prescribed and no time integration is done. However, the error estimation and refinement/derefinement decisions are realistic. At each outer iteration the right hand side function is changed to mimic a time dependent problem. Within each inner iteration the problem is solved on a sequence of meshes which are locally refined according to a simple ZZ error estimator. At the end of the inner iteration the error estimates are also used to identify any elements which may be over-refined and a single derefinement step is performed. After each refinement or derefinement step a rebalance operation is performed to keep the mesh evenly distributed among the available processors. The example demonstrates MFEM's capability to refine, derefine and load balance nonconforming meshes, in 2D and 3D, and on linear, curved and surface meshes. Interpolation of functions between coarse and fine meshes, persistent GLVis visualization, and saving of time-dependent fields for external visualization with VisIt are also illustrated. The example has a serial ( ex15.cpp ) and a parallel ( ex15p.cpp ) version. We recommend viewing examples 1, 6 and 9 before viewing this example. Example 16: Time Dependent Heat Conduction This example code solves a simple 2D/3D time dependent nonlinear heat conduction problem $$\\frac{du}{dt} = \\nabla \\cdot \\left( \\kappa + \\alpha u \\right) \\nabla u$$ with a natural insulating boundary condition $\\frac{du}{dn} = 0$. We linearize the problem by using the temperature field $u$ from the previous time step to compute the conductivity coefficient. This example demonstrates both implicit and explicit time integration as well as a single Picard step method for linearization. The saving of time dependent data files for external visualization with VisIt is also illustrated. The example has a serial ( ex16.cpp ) and a parallel ( ex16p.cpp ) version. We recommend viewing examples 2, 9, and 10 before viewing this example. Example 17: DG Linear Elasticity This example code solves a simple linear elasticity problem describing a multi-material cantilever beam using symmetric or non-symmetric discontinuous Galerkin (DG) formulation. Specifically, we approximate the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are Dirichlet, $\\bf{u}=\\bf{u_D}$, on the fixed part of the boundary, namely boundary attributes 1 and 2; on the rest of the boundary we use ${\\sigma}({\\bf u})\\cdot n = {\\bf 0}$. The geometry of the domain is assumed to be as follows: The example demonstrates the use of high-order DG vector finite element spaces with the linear DG elasticity bilinear form, meshes with curved elements, and the definition of piece-wise constant and function vector-coefficient objects. The use of non-homogeneous Dirichlet b.c. imposed weakly, is also illustrated. The example has a serial ( ex17.cpp ) and a parallel ( ex17p.cpp ) version. 
We recommend viewing examples 2 and 14 before viewing this example. Example 18: DG Euler Equations This example code solves the compressible Euler system of equations, a model nonlinear hyperbolic PDE, with a discontinuous Galerkin (DG) formulation. The primary purpose is to show how a transient system of nonlinear equations can be formulated in MFEM. The equations are solved in conservative form $$\\frac{\\partial u}{\\partial t} + \\nabla \\cdot {\\bf F}(u) = 0$$ with a state vector $u = [ \\rho, \\rho v_0, \\rho v_1, \\rho E ]$, where $\\rho$ is the density, $v_i$ is the velocity in the $i^{\\rm th}$ direction, $E$ is the total specific energy, and $H = E + p / \\rho$ is the total specific enthalpy. The pressure, $p$, is computed through a simple equation of state (EOS) call. The conservative hydrodynamic flux ${\\bf F}$ in each direction $i$ is $${\\bf F_{\\it i}} = [ \\rho v_i, \\rho v_0 v_i + p \\delta_{i,0}, \\rho v_1 v_i + p \\delta_{i,1}, \\rho v_i H ]$$ Specifically, the example solves for an exact solution of the equations whereby a vortex is transported by a uniform flow. Since all boundaries are periodic here, the method's accuracy can be assessed by measuring the difference between the solution and the initial condition at a later time when the vortex returns to its initial location. Note that as the order of the spatial discretization increases, the timestep must become smaller. This example currently uses a simple estimate derived by Cockburn and Shu for the 1D RKDG method. An additional factor can be tuned by passing the --cfl (or -c for short) flag. The example demonstrates a user-defined nonlinear form with a hyperbolic form integrator for systems of equations that are defined with block vectors, and how these are used with an operator for explicit time integrators. In this case the system also involves an external approximate Riemann solver for the DG interface flux. It also demonstrates how to use GLVis for in-situ visualization of vector grid functions. The example has a serial ( ex18.cpp ) and a parallel ( ex18p.cpp ) version. We recommend viewing examples 9, 14 and 17 before viewing this example. Example 19: Incompressible Nonlinear Elasticity This example code solves the quasi-static incompressible nonlinear hyperelasticity equations. Specifically, it solves the nonlinear equation $$ \\nabla \\cdot \\sigma(F) = 0 $$ subject to the constraint $$ \\text{det } F = 1 $$ where $\\sigma$ is the Cauchy stress and $F_{ij} = \\delta_{ij} + u_{i,j}$ is the deformation gradient. To handle the incompressibility constraint, pressure is included as an independent unknown $p$ and the stress response is modeled as an incompressible neo-Hookean hyperelastic solid . The geometry of the domain is assumed to be as follows: This formulation requires solving the saddle point system $$ \\left[ \\begin{array}{cc} K &B^T \\\\ B & 0 \\end{array} \\right] \\left[\\begin{array}{c} \\Delta u \\\\ \\Delta p \\end{array} \\right] = \\left[\\begin{array}{c} R_u \\\\ R_p \\end{array} \\right] $$ at each Newton step. To solve this linear system, we implement a specialized block preconditioner of the form $$ P^{-1} = \\left[\\begin{array}{cc} I & -\\tilde{K}^{-1}B^T \\\\ 0 & I \\end{array} \\right] \\left[\\begin{array}{cc} \\tilde{K}^{-1} & 0 \\\\ 0 & -\\gamma \\tilde{S}^{-1} \\end{array} \\right] $$ where $\\tilde{K}^{-1}$ is an approximation of the inverse of the stiffness matrix $K$ and $\\tilde{S}^{-1}$ is an approximation of the inverse of the Schur complement $S = BK^{-1}B^T$.
To approximate the Schur complement, we use the mass matrix for the pressure variable $p$. The example demonstrates how to solve nonlinear systems of equations that are defined with block vectors as well as how to implement specialized block preconditioners for use in iterative solvers. The example has a serial ( ex19.cpp ) and a parallel ( ex19p.cpp ) version. We recommend viewing examples 2, 5 and 10 before viewing this example. Example 20: Symplectic Integration of Hamiltonian Systems This example demonstrates the use of the variable order, symplectic time integration algorithm. Symplectic integration algorithms are designed to conserve energy when integrating systems of ODEs which are derived from Hamiltonian systems. Hamiltonian systems define the energy of a system as a function of time (t), a set of generalized coordinates (q), and their corresponding generalized momenta (p). $$ H(q,p,t) = T(p) + V(q,t) $$ Hamilton's equations then specify how q and p evolve in time: $$ \\frac{dq}{dt} = \\frac{dH}{dp}\\,,\\qquad \\frac{dp}{dt} = -\\frac{dH}{dq} $$ To use the symplectic integration classes we need to define an mfem::Operator ${\\bf P}$ which evaluates the action of dH/dp, and an mfem::TimeDependentOperator ${\\bf F}$ which computes -dH/dq. This example visualizes its results as an evolution in phase space by defining the axes to be $q$, $p$, and $t$ rather than $x$, $y$, and $z$. In this space we build a ribbon-like mesh with nodes at $(0,0,t)$ and $(q,p,t)$. Finally we plot the energy as a function of time as a scalar field on this ribbon-like mesh. This scheme highlights any variations in the energy of the system. This example offers five simple 1D Hamiltonians: Simple Harmonic Oscillator (mass on a spring) $$H = \\frac{1}{2}\\left( \\frac{p^2}{m} + \\frac{q^2}{k} \\right)$$ Pendulum $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} - k \\left( 1 - cos(q) \\right) \\right]$$ Gaussian Potential Well $$H = \\frac{p^2}{2m} - k e^{-q^2 / 2}$$ Quartic Potential $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} + k \\left( 1 + q^2 \\right) q^2 \\right]$$ Negative Quartic Potential $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} + k \\left( 1 - \\frac{q^2}{8} \\right) q^2 \\right]$$ In all cases these Hamiltonians are shifted by constant values so that the energy will remain positive. The mean and standard deviation of the computed energies at each time step are displayed upon completion. When run in parallel, each processor integrates the same Hamiltonian system but starting from different initial conditions. The example has a serial ( ex20.cpp ) and a parallel ( ex20p.cpp ) version. See the Maxwell miniapp for another application of symplectic integration. Example 21: Adaptive mesh refinement for linear elasticity This is a version of Example 2 with a simple adaptive mesh refinement loop. The problem being solved is again linear elasticity describing a multi-material cantilever beam. The problem is solved on a sequence of meshes which are locally refined in a conforming (triangles, tetrahedrons) or non-conforming (quadrilaterals, hexahedra) manner according to a simple ZZ error estimator. The example demonstrates MFEM's capability to work with both conforming and nonconforming refinements, in 2D and 3D, on linear and curved meshes. Interpolation of functions from coarse to fine meshes, as well as persistent GLVis visualization are also illustrated. The example has a serial ( ex21.cpp ) and a parallel ( ex21p.cpp ) version. We recommend viewing Examples 2 and 6 before viewing this example. 
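As a concrete instance of the symplectic schemes used in Example 20, the first-order symplectic Euler method for a separable Hamiltonian $H(q,p,t) = T(p) + V(q,t)$ reads (a standard textbook form, shown for orientation; the example's variable order algorithm generalizes this): $$ p_{n+1} = p_n - \\Delta t \\, \\frac{\\partial V}{\\partial q}(q_n, t_n), \\qquad q_{n+1} = q_n + \\Delta t \\, \\frac{\\partial T}{\\partial p}(p_{n+1}), $$ which updates the momenta with the old coordinates and the coordinates with the new momenta, preserving the symplectic structure of the flow.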
Example 22: Complex Linear Systems This example code demonstrates the use of MFEM to define and solve a complex-valued linear system. It implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ In each case the field is driven by a forced oscillation, with angular frequency $\\omega$, imposed at the boundary or a portion of the boundary. The example also demonstrates how to display a time-varying solution as a sequence of fields sent to a single GLVis socket. The example has a serial ( ex22.cpp ) and a parallel ( ex22p.cpp ) version. We recommend viewing examples 1, 3, and 4 before viewing this example. Example 23: Wave Problem This example code solves a simple 2D/3D wave equation with a second order time derivative: $$\\frac{\\partial^2 u}{\\partial t^2} - c^2\\Delta u = 0$$ The boundary conditions are either Dirichlet or Neumann. The example demonstrates the use of time dependent operators, implicit solvers and second order time integration. The example has only a serial ( ex23.cpp ) version. We recommend viewing examples 9 and 10 before viewing this example. Example 24: Mixed finite element spaces This example code illustrates usage of mixed finite element spaces, with three variants: $H^1 \\times H(curl)$ $H(curl) \\times H(div)$ $H(div) \\times L_2$ Using different approaches for demonstration purposes, we project or interpolate a gradient, curl, or divergence in the appropriate spaces, comparing the errors in each case. Partial assembly and GPU devices are supported. The example has a serial ( ex24.cpp ) and a parallel ( ex24p.cpp ) version. We recommend viewing examples 1 and 3 before viewing this example. Example 25: Perfectly Matched Layers The example illustrates the use of a Perfectly Matched Layer (PML) for the simulation of time-harmonic electromagnetic waves propagating in unbounded domains. PML was originally introduced by Berenger in \"A Perfectly Matched Layer for the Absorption of Electromagnetic Waves\" . It is a technique used to solve wave propagation problems posed in infinite domains. The implementation involves the introduction of an artificial absorbing layer that minimizes undesired reflections. Inside this layer a complex coordinate stretching map is used which forces the wave modes to decay exponentially. The example solves the indefinite Maxwell equations $$\\nabla \\times (a \\nabla \\times E) - \\omega^2 b E = f.$$ where $a = \\mu^{-1} |J|^{-1} J^T J$, $b= \\epsilon |J| J^{-1} J^{-T}$ and $J$ is the Jacobian matrix of the coordinate transformation. The example demonstrates discretization with Nedelec finite elements in 2D or 3D, as well as the use of complex-valued bilinear and linear forms. Several test problems are included, with known exact solutions. The example has a serial ( ex25.cpp ) and a parallel ( ex25p.cpp ) version. We recommend viewing Example 22 before viewing this example. 
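To make the complex coordinate stretching in Example 25 concrete, the classical construction replaces each coordinate inside the layer by (a standard form of the PML map, shown for orientation; the example defines its own stretching function in its source): $$ \\tilde{x}_j = x_j + \\frac{i}{\\omega} \\int_0^{x_j} \\sigma_j(s) \\, ds, $$ so that outgoing waves, which oscillate like $e^{i \\omega x_j}$, decay exponentially inside the layer while the solution in the physical region is unchanged; the Jacobian $J$ of this map is what enters the modified coefficients $a$ and $b$ above.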
Example 26: Multigrid Preconditioner This example code demonstrates the use of MFEM to define a simple isoparametric finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions and how to solve it efficiently using a matrix-free multigrid preconditioner. The example highlights the creation of a hierarchy of discretization spaces and diffusion bilinear forms using partial assembly. The levels in the hierarchy of finite element spaces may be constructed through geometric or order refinements. Moreover, the construction of a multigrid preconditioner for the PCG solver is shown. The multigrid uses a PCG solver on the coarsest level and second order Chebyshev accelerated smoothers on the other levels. The example has a serial ( ex26.cpp ) and a parallel ( ex26p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 27: Laplace Boundary Conditions This example code demonstrates the use of MFEM to define a simple finite element discretization of the Laplace problem: $$ -\\Delta u = 0 $$ with a variety of boundary conditions. Specifically, we discretize using a continuous or discontinuous FE space of the specified order. We then apply Dirichlet, Neumann (both homogeneous and inhomogeneous), Robin, and Periodic boundary conditions on different portions of a predefined mesh. Boundary conditions: $u = u_{dbc}$ on $\\Gamma_{dbc}$ $\\hat{n}\\cdot\\nabla u = g_{nbc}$ on $\\Gamma_{nbc}$ $\\hat{n}\\cdot\\nabla u = 0$ on $\\Gamma_{nbc_0}$ $\\hat{n}\\cdot\\nabla u + a u = b$ on $\\Gamma_{rbc}$ as well as periodic boundary conditions which are enforced topologically. The example has a serial ( ex27.cpp ) and a parallel ( ex27p.cpp ) version. We recommend viewing examples 1 and 14 before viewing this example. Example 28: Constraints and Sliding Boundary Conditions This example code illustrates the use of constraints in linear solvers by solving an elasticity problem where the normal component of the displacement is constrained to zero on two boundaries but tangential displacement is allowed. The constraints can be enforced in several different ways, including eliminating them from the linear system or solving a saddle-point system that explicitly includes constraint conditions. The example has a serial ( ex28.cpp ) and a parallel ( ex28p.cpp ) version. We recommend viewing example 2 before viewing this example. Example 29: Solving PDEs on embedded surfaces This example demonstrates setting up and solving an anisotropic Laplace problem $$-\\nabla\\cdot(\\sigma\\nabla u) = 1 \\quad\\text{in } \\Omega$$ with homogeneous Dirichlet boundary conditions $$ u = 0 \\quad\\text{on } \\partial\\Omega$$ where $\\Omega$ is a two dimensional curved surface embedded in three dimensions and $\\sigma$ is an anisotropic diffusion tensor. The example demonstrates and validates our DiffusionIntegrator 's ability to properly integrate three dimensional fluxes on a two dimensional domain. Not all of our integrators currently support such cases but the DiffusionIntegrator can be used as a simple example of how to extend other integrators when necessary. The example has a serial ( ex29.cpp ) and a parallel ( ex29p.cpp ) version. We recommend viewing examples 1 and 7 before viewing this example. Example 30: Resolving rough and fine-scale problem data Unresolved problem data will affect the accuracy of a discretized PDE solution as well as a posteriori estimates of the solution error.
This example uses a CoefficientRefiner object to preprocess an input mesh until the resolution of the prescribed problem data $f \\in L^2$ is below a prescribed tolerance. In this example, the resolution is identified with a data oscillation function on the mesh $\\mathcal{T}$, defined $$ \\mathrm{osc}(f) = \\Big( \\sum_{T\\in\\mathcal{T}} \\| h \\cdot (I - \\Pi)\\, f \\|^2_{L^2(T)} \\Big)^{1/2}, $$ where $h$ is the local element size function and $\\Pi$ is a finite element projection operator, and the sum is taken over all elements $T$ in the mesh. In this example, the coarse initial mesh is adaptively refined until $\\mathrm{osc}(f)$ is below a prescribed tolerance for various candidate functions $f \\in L^2$. When using rough problem data, it is recommended to perform this type of preprocessing before a posteriori error estimation. The example has a serial ( ex30.cpp ) and a parallel ( ex30p.cpp ) version. We recommend viewing examples 1 and 6 before viewing this example. Example 31: Anisotropic Definite Maxwell Problem This example code solves a simple electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + \\sigma E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". In this example $\\sigma$ is an anisotropic 3x3 tensor. Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 1D, 2D, or 3D. The example demonstrates the use of restricted $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. These restricted spaces allow the solution of 1D or 2D electromagnetic problems which involve 3D field vectors. Such problems arise in plasma physics and crystallography. The example has a serial ( ex31.cpp ) and a parallel ( ex31p.cpp ) version. We recommend viewing example 3 before viewing this example. Example 32: Anisotropic Maxwell Eigenproblem This example code solves the Maxwell (electromagnetic) eigenvalue problem with anisotropic permittivity, $\\epsilon$ $$\\nabla\\times\\nabla\\times\\, E = \\lambda\\, \\epsilon E $$ with homogeneous Dirichlet boundary conditions $E \\times n = 0$. We compute a number of the lowest nonzero eigenmodes by discretizing the curl curl operator using a Nedelec finite element space of the specified order in 1D, 2D, or 3D. The example demonstrates the use of restricted $H(curl)$ finite element spaces in an eigenmode context. These restricted spaces allow the solution of 1D or 2D electromagnetic problems which involve 3D field vectors. Such problems arise in plasma physics and crystallography. The example highlights the use of the AME subspace eigenvalue solver from HYPRE, which uses LOBPCG and AMS internally. Reusing multiple GLVis visualization windows for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex32p.cpp ) version. We recommend viewing examples 13 and 31 before viewing this example. Example 33: Spectral fractional Laplacian This example code demonstrates the use of MFEM to solve the fractional Laplacian problem $$ (-\\Delta)^\\alpha u = 1, \\quad \\alpha > 0, $$ with homogeneous Dirichlet boundary conditions. The problem solved in this example is similar to ex1 , but involves a fractional-order diffusion operator whose inverse can be approximated by a series of inverses of integer-order diffusion operators. 
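Schematically, this relies on a rational approximation of the fractional power (a generic form of the idea; the example computes the actual coefficients with the rational approximation algorithm described in its source): $$ u = (-\\Delta)^{-\\alpha} f \\approx \\sum_{k=1}^{N} c_k \\left( -\\Delta + b_k I \\right)^{-1} f, $$ where the scalars $c_k$ and shifts $b_k$ come from a rational approximation of $z^{-\\alpha}$, so that each term requires only a standard integer-order diffusion solve.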
Solving each of these independent integer-order PDEs with MFEM and summing their solutions results in a discrete solution to the fractional Laplacian problem above. The example has a serial ( ex33.cpp ) and a parallel ( ex33p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 34: Source Function using a SubMesh Transfer This example demonstrates the use of a SubMesh object to transfer solution data from a sub-domain and use this as a source function on the full domain. In this case we compute a volumetric current density $\\vec{J}$ as the gradient of a scalar potential $\\varphi$ on a portion of the domain. $$\\nabla\\cdot(\\sigma\\nabla\\varphi)=0$$ $$\\vec{J} = -\\sigma\\nabla\\varphi$$ Here, a voltage difference is applied on surfaces of the sub-domain (shown on the left) to generate the current density restricted to this sub-domain. The current density is then transferred to the full domain (shown on the right) using a SubMesh object. We then use this current density on the full domain as a source term in a magnetostatic solve for a vector potential $\\vec{A}$. $$\\nabla\\times(\\mu^{-1}\\nabla\\times\\vec{A})=\\vec{J}$$ $$\\vec{B} = \\nabla\\times\\vec{A}$$ This example verifies the recreation of boundary attributes on a sub-domain mesh as well as transfer of Raviart-Thomas vector fields between the SubMesh and the full Mesh. Note that the data transfer in this particular example involves arbitrary order Raviart-Thomas degrees of freedom on a mixture of tetrahedral and triangular prism elements. The example has a serial ( ex34.cpp ) and a parallel ( ex34p.cpp ) version. We recommend viewing Examples 1 and 3 before viewing this example. Example 35: Port Boundary Conditions using SubMesh Transfers This example demonstrates the use of a SubMesh object to transfer a port boundary condition from a portion of the boundary to the corresponding portion of the full domain. Just as in Example 22, this example implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0\\mbox{ with }u|_\\Gamma=v$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0\\mbox{ with }\\hat{n}\\times(\\vec{u}\\times\\hat{n})|_\\Gamma=\\vec{v}$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0\\mbox{ with }\\hat{n}\\cdot\\vec{u}|_\\Gamma=v$$ Here $\\Gamma$ is a portion of the boundary called the port . In each case the field is driven by a forced oscillation, with angular frequency $\\omega$, imposed at the boundary or a portion of the boundary. In Example 22 this boundary condition was simply a constant in space.
In this example the boundary condition is an eigenmode of a lower dimensional eigenvalue problem defined on a portion of the boundary as follows: For the scalar $H^1$ field: $$-\\nabla\\cdot\\left(\\nabla v\\right) = \\lambda\\,v\\mbox{ with }v|_{\\partial\\Gamma}=0$$ For the vector $H(curl)$ field: $$\\nabla\\times\\left(\\nabla\\times\\vec{v}\\right) = \\lambda\\,\\vec{v}\\mbox{ with }\\hat{n}_{\\partial\\Gamma}\\times\\vec{v}|_{\\partial\\Gamma}=0$$ For the vector $H(div)$ field: $$-\\nabla\\cdot\\left(\\nabla v\\right) = \\lambda\\,v\\mbox{ with }\\hat{n}_{\\partial\\Gamma}\\cdot\\nabla v|_{\\partial\\Gamma}=0$$ The different cases implemented in this example can be used to verify the transfer of an $H^1$ scalar field, the tangential components of an $H(curl)$ vector field, and the normal component of an $H(div)$ vector field (as a scalar $L^2$ field in this case) between a SubMesh and its parent mesh. The example has only a parallel ( ex35p.cpp ) version because the eigenmode solver used to compute the field on the port is only implemented in parallel. We recommend viewing Examples 11, 13, and 22 before viewing this example. Example 36: Obstacle Problem This example code solves the pointwise bound-constrained energy minimization problem $$ \\text{minimize } \\frac{1}{2}\\|\\nabla u\\|^2 \\text{ in } H^1_0(\\Omega)\\, \\text{ subject to } u \\ge \\varphi\\,.$$ This is known as the obstacle problem, and it is a classical motivating example in the study of variational inequalities and free boundary problems. In this example, the obstacle $\\varphi$ is the graph of a half-sphere centered at the origin of a circular domain $\\Omega$. After solving to a specified tolerance, the numerical solution is compared to a closed-form exact solution to assess its accuracy. The problem is solved using the Proximal Galerkin finite element method, which is a nonlinear, structure-preserving mixed method for pointwise bound constraints proposed by Keith and Surowiec . In turn, this example highlights MFEM's ability to deliver high-order solutions to variational inequalities and showcases how to set up and solve nonlinear mixed methods. The example has a serial ( ex36.cpp ) and a parallel ( ex36p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 37: Topology Optimization Density field $\\rho$ Problem set-up and domain $\\Omega$ This example code solves a classical cantilever beam topology optimization problem. The aim is to find an optimal material density field $\\rho$ in $L^1(\\Omega)$ to minimize the elastic compliance; i.e., $$\\begin{align} &\\text{minimize} \\int_\\Omega \\mathbf{f} \\cdot \\mathbf{u}(\\rho)\\, \\mathrm{d}x\\, \\text{ over }\\, \\rho \\in L^1(\\Omega) \\\\ &\\text{subject to }\\, 0 \\leq \\rho \\leq 1\\, \\text{ and } \\int_\\Omega \\rho\\, \\mathrm{d}x = \\theta\\, \\mathrm{vol}(\\Omega) \\,. \\end{align}$$ In this problem, $\\mathbf{f}$ is a localized force and the linearly elastic displacement field $\\mathbf{u} = \\mathbf{u}(\\rho)$ is determined by a material density field $\\rho$ with total volume fraction $0<\\theta<1$. The problem is solved using a mirror descent algorithm proposed by Keith and Surowiec . For further details, see the more elaborate description of this PDE-constrained optimization problem given in the example code and the aforementioned paper. The example has a serial ( ex37.cpp ) and a parallel ( ex37p.cpp ) version. We recommend viewing Example 2 before viewing this example. 
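To make the structure of this compliance objective concrete, the sketch below assembles a density-dependent linear elasticity form and evaluates $\int_\Omega \mathbf{f} \cdot \mathbf{u}\, \mathrm{d}x$ with core MFEM classes. The cubic SIMP-style interpolation of the Lamé coefficients, the material values, and the load are illustrative assumptions rather than the exact choices made in ex37, and the linear solve and the optimization loop are elided.

```cpp
#include "mfem.hpp"
#include <iostream>
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(30, 10, Element::QUADRILATERAL, true, 3.0, 1.0);
   const int dim = mesh.Dimension();

   H1_FECollection u_fec(2, dim);           // displacement space
   L2_FECollection rho_fec(0, dim);         // piecewise-constant density
   FiniteElementSpace u_fes(&mesh, &u_fec, dim);
   FiniteElementSpace rho_fes(&mesh, &rho_fec);

   GridFunction u(&u_fes), rho(&rho_fes);
   u = 0.0;
   rho = 0.5;                               // uniform initial volume fraction

   // Density-dependent Lame coefficients, lambda(rho) = rho^3 * lambda0, etc.
   // (a SIMP-style interpolation, used here purely for illustration).
   ConstantCoefficient lambda0(1.0), mu0(1.0);
   GridFunctionCoefficient rho_c(&rho);
   PowerCoefficient rho3(rho_c, 3.0);
   ProductCoefficient lambda(rho3, lambda0), mu(rho3, mu0);

   BilinearForm a(&u_fes);
   a.AddDomainIntegrator(new ElasticityIntegrator(lambda, mu));
   a.Assemble();

   // Compliance int_Omega f . u dx for a (made-up) constant downward load f.
   Vector f_vec(dim); f_vec = 0.0; f_vec(1) = -1.0;
   VectorConstantCoefficient f(f_vec);
   LinearForm load(&u_fes);
   load.AddDomainIntegrator(new VectorDomainLFIntegrator(f));
   load.Assemble();

   // ... solve a(u,v) = load(v) for u with suitable boundary conditions ...
   double compliance = load * u;            // discrete compliance once u is known
   std::cout << "compliance = " << compliance << std::endl;
   return 0;
}
```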
Example 38: Cut-Volume and Cut-Surface Integration This example code demonstrates construction of cut-surface and cut-volume IntegrationRules. The cut is specified by the zero level set of a given Coefficient $\\phi$. The resulting IntegrationRules are combined with standard LinearFormIntegrators to demonstrate integration of a function $u$ over an implicit interface, and a subdomain bounded by an implicit interface: $$ S = \\int_{\\phi = 0} u(x) ~ ds, \\quad V = \\int_{\\phi > 0} u(x) ~ dx. $$ The IntegrationRules are constructed by the moment-fitting algorithm introduced by M\u00fcller, Kummer and Oberlack . Through a set of basis functions, for each element the method defines and solves a local under-determined system for the vector of quadrature weights. All surface and volume integrals, which are required to form the system, are reduced to 1D integration over intersected segments. The example has only a serial ( ex38.cpp ) version, because the construction of the integration rules is an element-local procedure. It requires MFEM to be built with LAPACK, which is used to find the optimal solution of an under-determined system of equations. Example 39: Named Attribute Sets This example uses the Poisson equation to demonstrate the use of named attribute sets in MFEM to specify material regions, boundary regions, or source regions by name rather than attribute numbers. It also demonstrates how new named attribute sets may be created from arbitrary groupings of attribute numbers and used as a convenient shorthand to refer to those groupings in other portions of the application or through the command line. Named attribute sets also required changes to MFEM's mesh file formats. This example makes use of a custom input mesh file ( compass.msh ) produced using Gmsh which includes named regions and boundaries. A related mesh file ( compass.mesh ) illustrates MFEM's representation of the new named attribute sets. See file formats for details of the augmented mesh file format. The example has a serial ( ex39.cpp ) and a parallel ( ex39p.cpp ) version. We recommend viewing Example 1 before viewing this example. Example 40: Eikonal Equation This example highlights MFEM's ability to solve a fully-nonlinear, first-order PDE with high-order finite elements. In particular this example uses the proximal Galerkin method to solve the eikonal equation, $$ |\\nabla u| = 1 \\text{ in } \\Omega, \\quad u = 0 \\text{ on } \\partial \\Omega. $$ At each point $x$ in the domain $\\Omega$, the solution of this PDE provides the Euclidean distance to the domain boundary, $u(x) = \\min \\{ | x - y| : y \\in \\partial \\Omega\\}$. The problem is solved by recasting $u$ as the solution of the nonlinear program $$ \\text{maximize } \\int_\\Omega u\\, \\mathrm{d} x\\, \\text{ in } W^{1,\\infty}_0(\\Omega)\\, \\text{ subject to } |\\nabla u | \\leq 1 \\text{ a.e. in } \\Omega.$$ A solution is then obtained by discretizing and solving a sequence of nonlinear saddle-point problems. See the example code for a more detailed description of the method. The example has a serial ( ex40.cpp ) and a parallel ( ex40p.cpp ) version. We recommend viewing Example 5 and Example 36 before viewing this example. NURBS Example 1: Laplace Problem This example code demonstrates the use of MFEM to define a simple isogeometric NURBS discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. The problem solved in this example is the same as Example 1 . 
The example has a serial ( nurbs_ex1.cpp ) and a parallel ( nurbs_ex1p.cpp ) version. There is also a version that demonstrates efficient patchwise quadrature ( nurbs_ex1_patch.cpp ). NURBS Example 3: Definite Maxwell Problem This example code solves a simple electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with NURBS-based $H(curl)$ elements in 2D or 3D. The problem solved in this example is the same as Example 3 . The example has only a serial ( nurbs_ex1.cpp ) version. NURBS Example 5: Darcy Problem This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system $$ \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} $$ with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize the velocity ($\\bf u$) with NURBS-based $H(div)$ elements and the pressure ($p$) with compatible NURBS-based $H^1$ elements. The problem solved in this example is the same as Example 5 . The example has only a serial ( nurbs_ex5.cpp ) version. NURBS Example 11: Laplace Eigenproblem This example code demonstrates the use of MFEM to solve the eigenvalue problem $$-\\Delta u = \\lambda u$$ with homogeneous Dirichlet boundary conditions. We compute a number of the lowest eigenmodes by discretizing the Laplacian and Mass operators using a finite element space of the specified order, or an isoparametric/isogeometric space if order < 1 (quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE, as well as optionally the SuperLU or STRUMPACK parallel direct solvers. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The problem solved in this example is the same as Example 11 . The example has only a parallel ( nurbs_ex11p.cpp ) version. NURBS Example 24: Mixed finite element spaces The problem solved in this example is the same as Example 24 , but NURBS-based elements are also supported. This example code illustrates usage of mixed finite element spaces, with three variants: $H^1 \\times H(curl)$ $H(curl) \\times H(div)$ $H(div) \\times L^2$ Using different approaches for demonstration purposes, we project or interpolate a gradient, curl, or divergence in the appropriate spaces, comparing the errors in each case. The example has only a serial ( nurbs_ex24.cpp ) version. Volta Miniapp: Electrostatics This miniapp demonstrates the use of MFEM to solve realistic problems in the field of linear electrostatics. Its features include: dielectric materials charge densities surface charge densities prescribed voltages applied polarizations high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( volta.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Tesla Miniapp: Magnetostatics This miniapp showcases many of MFEM's features while solving a variety of realistic magnetostatics problems.
Its features include: diamagnetic and/or paramagnetic materials ferromagnetic materials volumetric current densities surface current densities external fields high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( tesla.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Maxwell Miniapp: Transient Full-Wave Electromagnetics This miniapp solves the equations of transient full-wave electromagnetics. Its features include: mixed formulation of the coupled first-order Maxwell equations $H(\\mathrm{curl})$ discretization of the electric field $H(\\mathrm{div})$ discretization of the magnetic flux energy conserving, variable order, implicit time integration dielectric materials diamagnetic and/or paramagnetic materials conductive materials volumetric current densities Sommerfeld absorbing boundary conditions high order meshes high order basis functions advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( maxwell.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Joule Miniapp: Transient Magnetics and Joule Heating This miniapp solves the equations of transient low-frequency (a.k.a. eddy current) electromagnetics, and simultaneously computes transient heat transfer with the heat source given by the electromagnetic Joule heating. Its features include: $H^1$ discretization of the electrostatic potential $H(\\mathrm{curl})$ discretization of the electric field $H(\\mathrm{div})$ discretization of the magnetic field $H(\\mathrm{div})$ discretization of the heat flux $L^2$ discretization of the temperature implicit transient time integration high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( joule.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Mobius Strip Miniapp This miniapp generates various Mobius strip-like surface meshes. It is a good way to generate complex surface meshes. Manipulating the mesh topology and performing mesh transformation are demonstrated. The mobius-strip mesh in the data directory was generated with this miniapp. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( mobius-strip.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Klein Bottle Miniapp This miniapp generates three types of Klein bottle surfaces. It is similar to the mobius-strip miniapp. Manipulating the mesh topology and performing mesh transformation are demonstrated. The klein-bottle and klein-donut meshes in the data directory were generated with this miniapp. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( klein-bottle.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Toroid Miniapp This miniapp generates two types of toroidal volume meshes; one with triangular cross sections and one with square cross sections. 
It works by defining a stack of individual elements and bending them so that the bottom and top of the stack can be joined to form a torus. It supports various options including: The element type: 0 - Wedge, 1 - Hexahedron The geometric order of the elements The major and minor radii The number of elements in the azimuthal direction The number of nodes to offset by before rejoining the stack The initial angle of the cross sectional shape The number of uniform refinement steps to apply Along with producing some visually interesting meshes, this miniapp demonstrates how simple 3D meshes can be constructed and transformed in MFEM. It also produces a family of meshes with simple but non-trivial topology for testing various features in MFEM. This miniapp has only a serial ( toroid.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Twist Miniapp This miniapp generates simple periodic meshes to demonstrate MFEM's handling of periodic domains. MFEM's strategy is to use a discontinuous vector field to define the mesh coordinates on a topologically periodic mesh. It works by defining a stack of individual elements and stitching together the top and bottom of the mesh. The stack can also be twisted so that the vertices of the bottom and top can be joined with any integer offset (for tetrahedral and wedge meshes only even offsets are supported). The Twist miniapp supports various options including: The element type: 4 - Tetrahedron, 6 - Wedge, 8 - Hexahedron The geometric order of the elements The dimensions of the initial brick-shaped stack of elements The number of elements in the z direction The number of nodes to offset by before rejoining the stack The number of uniform refinement steps to apply Along with producing some visually interesting meshes, this miniapp demonstrates how simple 3D meshes can be constructed and transformed in MFEM. It also produces a family of meshes with simple but non-trivial topology for testing various features in MFEM. This miniapp has only a serial ( twist.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Extruder Miniapp This miniapp creates higher dimensional meshes from lower dimensional meshes by extrusion. Simple coordinate transformations can also be applied if desired. The initial mesh can be 1D or 2D 1D meshes can be extruded in both the y and z directions 2D meshes can be triangular, quadrilateral, or contain both element types Meshes with high order geometry are supported User can specify the number of elements and the distance to extrude Geometric order of the transformed mesh can be user selected or automatic This miniapp provides another demonstration of how simple meshes can be constructed and transformed in MFEM. This miniapp has only a serial ( extruder.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Trimmer Miniapp This miniapp creates a new mesh file from an existing mesh by trimming away elements with selected attributes. Newly exposed boundary elements will be assigned new or user specified boundary attributes. The initial mesh can be 2D or 3D Meshes with high order geometry are supported Periodic meshes are supported NURBS meshes are not supported This miniapp provides another demonstration of how simple meshes can be constructed in MFEM. This miniapp has only a serial ( trimmer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. 
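The meshing miniapps above all follow a common pattern: construct a simple mesh programmatically, optionally raise its geometric order, apply a coordinate transformation, and save the result. A minimal standalone sketch of that pattern, not taken from any particular miniapp, is:

```cpp
#include "mfem.hpp"
#include <cmath>
using namespace mfem;

// Twist the column about the z-axis by an angle that grows with height.
void twist(const Vector &x, Vector &p)
{
   const double theta = 0.25*M_PI*x(2);
   p.SetSize(3);
   p(0) = std::cos(theta)*x(0) - std::sin(theta)*x(1);
   p(1) = std::sin(theta)*x(0) + std::cos(theta)*x(1);
   p(2) = x(2);
}

int main()
{
   // A 1 x 1 x 4 column of hexahedra.
   Mesh mesh = Mesh::MakeCartesian3D(2, 2, 8, Element::HEXAHEDRON, 1.0, 1.0, 4.0);
   mesh.SetCurvature(3);       // cubic (high-order) nodal coordinates
   mesh.Transform(twist);      // apply the coordinate map defined above
   mesh.Save("twisted.mesh");  // inspect with GLVis or the mesh-explorer miniapp
   return 0;
}
```

The periodic stitching performed by the Twist miniapp requires additional steps, but the construct/transform/save skeleton above is common to all of these tools.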
Polar-NC Miniapp This miniapp generates a circular sector mesh that consist of quadrilaterals and triangles of similar sizes. The 3D version of the mesh is made of prisms and tetrahedra. The mesh is non-conforming by design, and can optionally be made curvilinear. The elements are ordered along a space-filling curve by default, which makes the mesh ready for parallel non-conforming AMR in MFEM. The implementation also demonstrates how to initialize a non-conforming mesh on the fly by marking hanging nodes with Mesh::AddVertexParents . For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( polar-nc.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Shaper Miniapp This miniapp performs multiple levels of adaptive mesh refinement to resolve the interfaces between different \"materials\" in the mesh, as specified by a given material function. It can be used as a simple initial mesh generator, for example in the case when the interface is too complex to describe without local refinement. Both conforming and non-conforming refinements are supported. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( shaper.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Mesh Explorer Miniapp This miniapp is a handy tool to examine, visualize and manipulate a given mesh. Some of its features are: visualizing of mesh materials and individual mesh elements mesh scaling, randomization, and general transformation manipulation of the mesh curvature the ability to simulate parallel partitioning quantitative and visual reports of mesh quality For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( mesh-explorer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Mesh Optimizer Miniapp This miniapp performs mesh optimization using the Target-Matrix Optimization Paradigm (TMOP) by P. Knupp , and a global variational minimization approach ( Dobrev et al. ). It minimizes the quantity $$\\sum_T \\int_T \\mu(J(x)),$$ where $T$ are the target (ideal) elements, $J$ is the Jacobian of the transformation from the target to the physical element, and $\\mu$ is the mesh quality metric. This metric can measure shape, size or alignment of the region around each quadrature point. The combination of targets and quality metrics is used to optimize the physical node positions, i.e., they must be as close as possible to the shape / size / alignment of their targets. This code also demonstrates a possible use of nonlinear operators, as well as their coupling to Newton methods for solving minimization problems. Note that the utilized Newton methods are oriented towards avoiding invalid meshes with negative Jacobian determinants. Each Newton step requires the inversion of a Jacobian matrix, which is done through an inner linear solver. The miniapp has a serial ( mesh-optimizer.cpp ) and a parallel ( pmesh-optimizer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Mesh Fitting Miniapp This miniapp builds upon the mesh optimizer miniapp to enable mesh alignment with the zero isosurface of a discrete level-set. The approach is based on Dobrev et al. and Mittal et al. 
, where we minimize the quantity $$\\sum_T \\int_T \\mu(J(x)) + \\sum_{s \\in S} w \\,\\, \\sigma^2(x_s).$$ Here, the first term controls mesh quality and the second term enforces weak alignment of a selected subset of mesh-nodes ($s \\in S$) with the zero isosurface of the discrete level-set function ($\\sigma$). Click on the image on the right to see a demonstration of this method for generating body-fitted meshes for topology optimization in LiDO to maximize beam stiffness under a downward force on the right wall. The miniapp has a parallel ( pmesh-fitting.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Minimal Surface Miniapp This miniapp solves Plateau's problem: the Dirichlet problem for the minimal surface equation. Options to solve the minimal surface equations of both parametric surfaces as well as surfaces restricted to be graphs of the form $z=f(x,y)$ are supported, including a number of examples such as the Catenoid, Helicoid, Costa and Scherk surfaces. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has a serial ( minimal-surface.cpp ) and a parallel ( pminimal-surface.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Low-Order Refined Transfer Miniapp The lor-transfer miniapp, found under miniapps/tools demonstrates the capability to generate a low-order refined mesh from a high-order mesh, and to transfer solutions between these meshes. Grid functions can be transferred between the coarse, high-order mesh and the low-order refined mesh using either $L^2$ projection or pointwise evaluation. These transfer operators can be designed to discretely conserve mass and to recover the original high-order solution when transferring a low-order grid function that was obtained by restricting a high-order grid function to the low-order refined space. The miniapp has only a serial ( lor-transfer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Interpolation Miniapps The interpolation miniapp, found under miniapps/gslib , demonstrate the capability to interpolate high-order finite element functions at given set of points in physical space. These miniapps utilize the gslib library's high-order interpolation utility for quad and hex meshes: Find Points miniapp has a serial ( findpts.cpp ) and a parallel ( pfindpts.cpp ) version that demonstrate the basic procedures for point search and evaluation of grid functions. Field Interp miniapp ( field-interp.cpp ) demonstrates how grid functions can be transferred between meshes. Field Diff miniapp ( field-diff.cpp ) demonstrates how grid functions on two different meshes can be compared with each other. These miniapps require installation of the gslib library. We recommend that new users start with the example codes before moving to the miniapps. Extrapolation Miniapp The extrapolate miniapp, found in the miniapps/shifted directory, extrapolates a finite element function from a set of elements (known values) to the rest of the domain. The set of elements that contains the known values is specified by the positive values of a level set Coefficient. The known values are not modified. The miniapp supports two PDE-based approaches ( Aslam , Bochkov & Gibou ), both of which rely on solving a sequence of advection problems in the direction of the unknown parts of the domain. 
The extrapolation can be constant (1st order), linear (2nd order), or quadratic (3rd order). These formal orders hold for a limited band around the zero level set, see the above references for further information. The miniapp has only a parallel ( extrapolate.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Distance Solver Miniapp The distance miniapp, found in the miniapps/shifted directory demonstrates the capability to compute the \"distance\" to a given point source or to the zero level set of a given function. Here \"distance\" refers to the length of the shortest path through the mesh. The input can be a DeltaCoefficient (representing a point source), or any Coefficient (for the case of a level set). The output is a ParGridFunction that can be scalar (representing the scalar distance), or a vector (its magnitude is the distance, and its direction is the starting direction of the shortest path). The miniapp has only a parallel ( distance.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Shifted Diffusion Miniapp The diffusion miniapp, found in the miniapps/shifted directory, demonstrates the capability to formulate a boundary value problem using a surrogate computational domain. The method uses a distance function to the true boundary to enforce Dirichlet boundary conditions on the (non-aligned) mesh faces, therefore \"shifting\" the location where boundary conditions are imposed. The implementation in the miniapp is a high-order extension of the second-generation shifted boundary method . The miniapp has only a parallel ( diffusion.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Laghos Miniapp Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. The computational motives captured in Laghos include: Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements (triangular and tetrahedral elements can also be used, but with the less efficient full assembly option). Serial and parallel mesh refinement options can be set via a command-line flag. Explicit time-stepping loop with a variety of time integrator options. Laghos supports Runge-Kutta ODE solvers of orders 1, 2, 3, 4 and 6. Continuous and discontinuous high-order finite element discretization spaces of runtime-specified order. Moving (high-order) meshes. Separation between the assembly and the quadrature point-based computations. Point-wise definition of mesh size, time-step estimate and artificial viscosity coefficient. Constant-in-time velocity mass operator that is inverted iteratively on each time step. This is an example of an operator that is prepared once (fully or partially assembled), but is applied many times. The application cost is dominant for this operator. Time-dependent force matrix that is prepared every time step (fully or partially assembled) and is applied just twice per \"assembly\". Both the preparation and the application costs are important for this operator. Domain-decomposed MPI parallelism. Optional in-situ visualization with GLVis and data output for visualization / data analysis with VisIt . 
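The VisIt output mentioned in the last bullet goes through MFEM's DataCollection interface, which any application can use in the same way; a minimal time-series sketch (the collection and field names are placeholders) is:

```cpp
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(10, 10, Element::QUADRILATERAL);
   H1_FECollection fec(2, mesh.Dimension());
   FiniteElementSpace fes(&mesh, &fec);
   GridFunction rho(&fes);
   rho = 1.0;                              // placeholder field

   VisItDataCollection dc("example_run", &mesh);
   dc.RegisterField("density", &rho);      // register each field once

   for (int step = 0; step <= 10; step++)  // stand-in for a time loop
   {
      // ... advance the solution here ...
      dc.SetCycle(step);
      dc.SetTime(0.01*step);
      dc.Save();                           // writes a VisIt root file plus per-cycle data
   }
   return 0;
}
```

ParaViewDataCollection offers the same interface for ParaView output.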
The Laghos miniapp is part of the CEED software suite , a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods. See https://github.com/ceed for more information and source code availability. This is an external miniapp, available at https://github.com/CEED/Laghos . Remhos Miniapp Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. The computational motives captured in Remhos include: Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements. Serial and parallel mesh refinement options can be set via a command-line flag. Explicit time-stepping loop with a variety of time integrator options. Remhos supports Runge-Kutta ODE solvers of orders 1, 2, 3, 4 and 6. Discontinuous high-order finite element discretization spaces of runtime-specified order. Moving (high-order) meshes. Mass operator that is local per each zone. It is inverted by iterative or exact methods at each time step. This operator is constant in time (transport mode) or changing in time (remap mode). Options for full or partial assembly. Advection operator that couples neighboring zones. It is applied once at each time step. This operator is constant in time (transport mode) or changing in time (remap mode). Options for full or partial assembly. Domain-decomposed MPI parallelism. Optional in-situ visualization with GLVis and data output for visualization and data analysis with VisIt . The Remhos miniapp is part of the CEED software suite , a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods. See https://github.com/ceed for more information and source code availability. This is an external miniapp, available at https://github.com/CEED/Remhos . Navier Miniapp Navier is a miniapp that solves the time-dependent Navier-Stokes equations of incompressible fluid dynamics \\begin{align} \\frac{\\partial u}{\\partial t} + (u \\cdot \\nabla) u - \\frac{1}{Re} \\nabla^2 u - \\nabla p &= f \\\\ \\nabla \\cdot u &= 0 \\end{align} using a spatially high-order finite element discretization. The time-dependent problem is solved using a (up to) third order implicit-explicit method which leverages an extrapolation scheme for the convective parts and a backward-difference formulation for the viscous parts of the equation. The miniapp supports: Arbitrary order H1 elements High order mesh elements IMEX (EXTk-BDFk) time-stepping up to third order Convenient interface for new users A variety of test cases and benchmarks This miniapp has only a parallel ( navier_solver.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. 
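For orientation, here is a heavily abridged sketch of how a solver from this miniapp is typically driven. It assumes the NavierSolver interface declared in miniapps/navier/navier_solver.hpp (constructor, Setup, Step, GetCurrentVelocity); treat the names and signatures as approximate, and note that boundary conditions, forcing terms, and output are omitted.

```cpp
#include "mfem.hpp"
#include "navier_solver.hpp"   // from miniapps/navier (interface assumed, see above)
using namespace mfem;
using namespace navier;

int main(int argc, char *argv[])
{
   Mpi::Init(argc, argv);
   Hypre::Init();

   Mesh serial_mesh = Mesh::MakeCartesian2D(16, 16, Element::QUADRILATERAL);
   ParMesh pmesh(MPI_COMM_WORLD, serial_mesh);
   serial_mesh.Clear();

   const int order = 4;              // velocity polynomial order
   const double kin_vis = 1.0/100.;  // kinematic viscosity (1/Re)
   const double dt = 1e-3;

   NavierSolver flowsolver(&pmesh, order, kin_vis);
   flowsolver.Setup(dt);

   double t = 0.0;
   for (int step = 0; t < 0.1; step++)
   {
      flowsolver.Step(t, dt, step);  // advances t by dt internally
   }

   ParGridFunction *u = flowsolver.GetCurrentVelocity();
   if (Mpi::Root()) { mfem::out << "local velocity dofs: " << u->Size() << "\n"; }
   return 0;
}
```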
Block Solvers Miniapp The Block Solvers miniapp, found under miniapps/solvers , compares various linear solvers for the saddle point system obtained from mixed finite element discretization of Darcy's flow problem \\begin{array}{rcl} k{\\bf u} & + \\nabla p & = f \\\\ -\\nabla \\cdot {\\bf u} & & = g \\end{array} The solvers being compared include: The divergence-free solver (coupled and decoupled modes), which is based on a multilevel decomposition of the Raviart-Thomas finite element space and its divergence-free subspace. MINRES preconditioned by the block diagonal preconditioner in ex5p.cpp . For more details, please see the documentation in the miniapps/solvers directory. The miniapp supports: Arbitrary order mixed finite element pair (Raviart-Thomas elements + piecewise discontinuous polynomials) Various combinations of essential and natural boundary conditions Homogeneous or heterogeneous scalar coefficient k This miniapp has only a parallel ( block-solvers.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps. Overlapping Grids Miniapps Overlapping grids-based frameworks can often make problems tractable that are otherwise inaccessible with a single conforming grid. The following gslib -based miniapps in MFEM demonstrate how to set up and use overlapping grids: The Schwarz Example 1 miniapp in miniapps/gslib has a serial ( schwarz_ex1.cpp ) and a parallel ( schwarz_ex1p.cpp ) version that solves the Poisson problem on overlapping grids. The serial version is restricted to two overlapping grids, while the parallel version supports an arbitrary number of overlapping grids. The Navier Conjugate Heat Transfer miniapp in miniapps/navier ( navier_cht.cpp ) demonstrates how a conjugate heat transfer problem can be solved with the fluid dynamics (incompressible Navier-Stokes equations) and heat transfer (advection-diffusion equation) PDEs modeled on different meshes. These miniapps require installation of the gslib library. We recommend that new users start with the example codes before moving to the miniapps. ParELAG AMGe for H(curl) and H(div) Miniapp This miniapp demonstrates the ParELAG library and part of its capabilities. The miniapp employs MFEM and ParELAG to solve $H(\\mathrm{curl})$- and $H(\\mathrm{div})$-elliptic forms by an element-based algebraic multigrid (AMGe). ParELAG is a library mostly developed at the Center for Applied Scientific Computing of Lawrence Livermore National Laboratory, California, USA. The miniapp uses: A multilevel hierarchy of de Rham complexes of finite element spaces, built by ParELAG; Hiptmair-type (hybrid) smoothers, implemented in ParELAG; AMS (Auxiliary-space Maxwell Solver) or ADS (Auxiliary-space Divergence Solver), from HYPRE, for preconditioning or solving on the coarsest levels. Alternatively, it is possible to precondition or solve the $H(\\mathrm{div})$ form on the coarsest level via a hybridization approach. However, this is not yet implemented in ParELAG for the coarse levels. Only the hybridization solver that is directly applicable to an $H(\\mathrm{div})$-$L^2$ mixed (saddle-point) system is currently available in ParELAG. We recommend viewing ex3p.cpp and ex4p.cpp before viewing this miniapp. For more details, please see the documentation in the miniapps/parelag directory. This miniapp has only a parallel ( MultilevelHcurlHdivSolver.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.
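As a point of reference for the block solvers discussed above, the sketch below assembles a Darcy saddle-point system as a BlockOperator and solves it with MINRES and a block-diagonal preconditioner. It is a minimal serial illustration with a zero right-hand side; the Gauss-Seidel block shown is a simplistic placeholder, not the preconditioner used in the Block Solvers miniapp or in ex5p.cpp.

```cpp
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(8, 8, Element::QUADRILATERAL);
   const int dim = mesh.Dimension();

   RT_FECollection u_fec(0, dim);   // lowest-order Raviart-Thomas velocity
   L2_FECollection p_fec(0, dim);   // piecewise-constant pressure
   FiniteElementSpace U(&mesh, &u_fec), P(&mesh, &p_fec);

   Array<int> offsets(3);
   offsets[0] = 0; offsets[1] = U.GetVSize(); offsets[2] = P.GetVSize();
   offsets.PartialSum();
   BlockVector x(offsets), rhs(offsets);
   x = 0.0; rhs = 0.0;              // zero data, just to exercise the solver

   ConstantCoefficient k(1.0);
   BilinearForm m(&U);              // k (u, v)
   m.AddDomainIntegrator(new VectorFEMassIntegrator(k));
   m.Assemble(); m.Finalize();

   MixedBilinearForm b(&U, &P);     // (div u, q), negated below
   b.AddDomainIntegrator(new VectorFEDivergenceIntegrator);
   b.Assemble(); b.Finalize();

   SparseMatrix &M = m.SpMat(), &B = b.SpMat();
   B *= -1.0;
   TransposeOperator Bt(&B);

   BlockOperator darcy(offsets);
   darcy.SetBlock(0, 0, &M);
   darcy.SetBlock(0, 1, &Bt);
   darcy.SetBlock(1, 0, &B);

   GSSmoother invM(M);              // placeholder diagonal block
   BlockDiagonalPreconditioner prec(offsets);
   prec.SetDiagonalBlock(0, &invM);

   MINRESSolver minres;
   minres.SetOperator(darcy);
   minres.SetPreconditioner(prec);
   minres.SetRelTol(1e-10);
   minres.SetMaxIter(500);
   minres.Mult(rhs, x);             // x = (u, p) in block form
   return 0;
}
```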
Generating Gaussian Random Fields via the SPDE Method This miniapp generates Gaussian random fields on meshed domains $\\Omega \\subset \\mathbb{R}^n$ via the SPDE method. The method exploits a stochastic, fractional PDE whose full-space solutions yield Gaussian random fields with a Mat\u00e9rn covariance. The method was introduced and popularized by Lindgren et. al in 2010. In this miniapp, we use a slightly modified representation following Khristenko et. al . More specifically, we solve the equation \\begin{equation} \\left( -\\frac{1}{2\\nu} \\nabla \\cdot \\left( \\Theta \\nabla \\right) + \\mathbf{1} \\right)^{\\frac{2\\nu+n}{4}} u(x,w) = \\eta W(x,w) \\ \\ \\ \\text{in} \\ \\ \\Omega, \\end{equation} with various boundary conditions. Solving this equation on $\\Omega = \\mathbb{R}^n$ delivers a homogeneous Gaussian random field with zero mean and Mat\u00e9rn covariance, \\begin{align}\\label{eq:MaternCovariance} C(x,y) &= \\sigma^2M_\\nu \\left(\\sqrt{2\\nu}\\, \\| x-y \\|_{\\Theta} \\right) , \\end{align} where $M_\\nu(z) = \\frac{2^{1-\\nu}}{\\Gamma(\\nu)} z ^{\\nu} K_\\nu \\left( z \\right)$ and $\\| x-y \\|_{\\Theta}^2 = (x-y)^\\top\\Theta (x-y)$. The Mat\u00e9rn model provides the regularity parameter $\\nu > 0$ and the anisotropic diffusion tensor $\\Theta \\in \\mathbb{R}^{n\\times n}$, which determines the spatial structure (correlation lengths). However, applying boundary conditions to the SPDE above provides the ability to model a significantly larger class of inhomogeneous random fields on complex domains. For further details, see the miniapp README . We recommend viewing ex33p.cpp before viewing this miniapp. This miniapp ( generate_random_field.cpp ) has only a parallel implementation. It further requires MFEM to be built with LAPACK, otherwise you may only use predefined values for $\\nu$. We recommend that new users start with the example codes before moving to the miniapps. Multidomain and SubMesh demonstration Miniapp This miniapp aims to demonstrate how to solve two PDEs, that represent different physics, on the same domain. MFEM's SubMesh interface is used to compute on and transfer between the spaces of predefined parts of the domain. For the sake of simplicity, the spaces on each domain are using the same order H1 finite elements. This does not mean that the approach is limited to this configuration. A 3D domain comprised of an outer box with a cylinder shaped inside is used. A heat equation is described on the outer box domain \\begin{align} \\frac{\\partial T}{\\partial t} &= \\kappa \\Delta T &&\\mbox{in outer box}\\\\ T &= T_{wall} &&\\mbox{on outside wall}\\\\ \\nabla T \\cdot \\hat{n} &= 0 &&\\mbox{on inside (cylinder) wall} \\end{align} with temperature $T$ and coefficient $\\kappa$ (non-physical in this example). A convection-diffusion equation is described inside the cylinder domain \\begin{align} \\frac{\\partial T}{\\partial t} &= \\kappa \\Delta T - \\alpha \\nabla \\cdot (\\vec{b} T) & &\\mbox{in inner cylinder}\\\\ T &= T_{wall} & &\\mbox{on cylinder wall}\\\\ \\nabla T \\cdot \\hat{n} &= 0 & &\\mbox{else} \\end{align} with temperature $T$, coefficients $\\kappa$, $\\alpha$ and prescribed velocity profile $\\vec{b}$, and $T_{wall}$ obtained from the heat equation. To couple the solutions of both equations, a segregated solve with one way coupling approach is used. The heat equation of the outer box is solved from the timestep $T_{box}(t)$ to $T_{box}(t+dt)$. 
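At this point in the loop the updated box temperature has to be made available on the cylinder wall, which relies on MFEM's (Par)SubMesh transfer facility. A minimal serial sketch of the mechanism is given below; the attribute numbering, the way the subdomain is marked, and the temperature values are made up for illustration and do not match the miniapp's actual mesh.

```cpp
#include "mfem.hpp"
using namespace mfem;

int main()
{
   // Parent mesh; mark a (made-up) subdomain with attribute 2.
   Mesh mesh = Mesh::MakeCartesian3D(8, 8, 8, Element::HEXAHEDRON);
   for (int i = 0; i < mesh.GetNE(); i++)
   {
      Vector c(3);
      mesh.GetElementCenter(i, c);
      mesh.SetAttribute(i, (c(0) < 0.5) ? 1 : 2);
   }
   mesh.SetAttributes();

   // Extract the subdomain with attribute 2 as its own mesh.
   Array<int> subdomain(1); subdomain[0] = 2;
   SubMesh inner = SubMesh::CreateFromDomain(mesh, subdomain);

   H1_FECollection fec(1, mesh.Dimension());
   FiniteElementSpace parent_fes(&mesh, &fec), sub_fes(&inner, &fec);
   GridFunction T_box(&parent_fes), T_inner(&sub_fes);
   T_box = 300.0; T_inner = 0.0;

   // Copy the parent solution onto the subdomain (the reverse works too).
   SubMesh::Transfer(T_box, T_inner);
   return 0;
}
```

The miniapp itself works with the parallel counterpart, ParSubMesh, in essentially the same way.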
Then for the convection-diffusion equation $T_{wall}$ is set to $T_{box}(t+dt)$ and the equation is solved for $T(t+dt)$ which results in a first-order one way coupling. This miniapp has only a parallel ( multidomain.cpp ) implementation. We recommend that new users start with the example codes before moving to the miniapps. DPG miniapp This miniapp demonstrates how to discretize and solve various PDEs using the Discontinuous Petrov-Galerkin (DPG) method. It utilizes a new user-friendly interface to assemble the block DPG systems arising from the discretization of any DPG formulation (such as Ultraweak or Primal). In addition, the miniapp supports complex-valued systems, static condensation for block systems, and AMR using the built-in DPG residual-based error indicator. This capability is showcased in the following DPG examples in miniapps/dpg . Ultraweak DPG formulation for diffusion . This example solves the simple Poisson equation $$-\u0394 u = f$$ and computes rates of convergence under successive uniform h-refinements for a smooth manufactured solution. The parallel version also includes an AMR implementation for the L-shape benchmark problem. This example has a serial ( diffusion.cpp ) and a parallel ( pdiffusion.cpp ) version. Ultraweak DPG formulation for convection-diffusion . This example solves the convection-diffusion problem: \\begin{align} -\\epsilon \\Delta u + \\nabla \\cdot (\\beta u) &= f\\\\ \\end{align} using AMR. The example demonstrates the use of mesh-dependent test norms which are suitable for problems with solutions that exhibit large gradients present in internal or boundary layers . The example has a serial ( convection-diffusion.cpp ) and a parallel ( pconvection-diffusion.cpp ) version. Ultraweak DPG formulation for time-harmonic linear acoustics . This example solves the indefinite Helmholtz equation \\begin{align} -\\Delta u - \\omega^2 u &= f\\\\ \\end{align} The example includes formulations with manufactured plane-wave solutions as well as high-frequency scattering problems and the use of Perfectly Match Layers (PML). It also demonstrates how to set up complex-valued systems and preconditioners for their solutions. The example has a serial ( acoustics.cpp ) and parallel ( pacoustics.cpp ) version. Ultraweak DPG formulation for time-harmonic Maxwell . This example solves the indefinite Maxwell problem \\begin{align} \\nabla \u00d7 (\\mu^{-1} \\nabla \\times E) - \\omega^2 \\epsilon E&= J\\\\ \\end{align} The example includes formulations with smooth manufactured solutions, AMR formulations for high-frequency scattering problems as well as a problem with a singular solution. The example has a serial ( maxwell.cpp ) and a parallel ( pmaxwell.cpp ) version. Tribol miniapp This miniapp demonstrates how to use Tribol's mortar method to solve a contact patch test. A contact patch test places two aligned, linear elastic cubes in contact, then verifies that the exact elasticity solution for this problem is recovered. The exact solution requires transmission of a uniform pressure field across a (not necessarily conforming) interface (i.e. the contact surface). Mortar methods (including the one implemented in Tribol) are generally able to pass the contact patch test. The test assumes small deformations and no accelerations, so the relationship between forces/contact pressures and deformations/contact gaps is linear and, therefore, the problem can be solved exactly with a single linear solve. The mortar implementation is based on Puso and Laursen (2004) . 
A description of the Tribol implementation is available in Serac documentation . Lagrange multipliers are used to solve for the pressure required to prevent violation of the contact constraints. This miniapp has only a parallel ( contact-patch-test.cpp ) implementation. For more details, please see the documentation in miniapps/tribol/README.md . We recommend that new users start with the example codes before moving to the miniapps. No examples or miniapps match your criteria. ", "title": "Example Codes"}, {"location": "examples/#example-codes-and-miniapps", "text": "This page provides a brief overview of MFEM's example codes and miniapps. For detailed documentation of the MFEM sources, including the examples, see the online Doxygen documentation , or the doc directory in the distribution. The goal of the example codes is to provide a step-by-step introduction to MFEM in simple model settings. The miniapps are more complex, and are intended to be more representative of the advanced usage of the library in physics/application codes. We recommend that new users start with the example codes before moving to the miniapps. Select from the categories below to display examples and miniapps that contain the respective feature. All examples support (arbitrarily) high-order meshes and finite element spaces . The numerical results from the example codes can be visualized using the GLVis visualization tool (based on MFEM). See the GLVis website for more details. Users are encouraged to submit any example codes and miniapps that they have created and would like to share. Contact a member of the MFEM team to report bugs or post questions or comments .", "title": "Example Codes and Miniapps"}, {"location": "examples/#example-0-simplest-laplace-problem", "text": "This is the simplest MFEM example and a good starting point for new users. The example demonstrates the use of MFEM to define and solve an $H^1$ finite element discretization of the Laplace problem $$-\\Delta u = 1 \\quad\\text{in } \\Omega$$ with homogeneous Dirichlet boundary conditions $$ u = 0 \\quad\\text{on } \\partial\\Omega$$ The example illustrates the use of the basic MFEM classes for defining the mesh, finite element space, as well as linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. The example has serial ( ex0.cpp ) and parallel ( ex0p.cpp ) versions.", "title": "Example 0: Simplest Laplace Problem"}, {"location": "examples/#example-1-laplace-problem", "text": "This example code demonstrates the use of MFEM to define a simple isoparametric finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Specifically, we discretize with the finite element space coming from the mesh (linear by default, quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The problem solved in this example is the same as ex0 , but with more sophisticated options and features. The example highlights the use of mesh refinement, finite element grid functions, as well as linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. We also cover the explicit elimination of essential boundary conditions, static condensation, and the optional connection to the GLVis tool for visualization. The example has a serial ( ex1.cpp ), a parallel ( ex1p.cpp ), and HPC versions: performance/ex1.cpp , performance/ex1p.cpp . 
It also has a PETSc modification in examples/petsc , a PUMI modification in examples/pumi and a Ginkgo modification in examples/ginkgo . Partial assembly and GPU devices are supported.", "title": "Example 1: Laplace Problem"}, {"location": "examples/#example-2-linear-elasticity", "text": "This example code solves a simple linear elasticity problem describing a multi-material cantilever beam. Specifically, we approximate the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder with $f$ being a constant pull down vector on boundary elements with attribute 2, and zero otherwise. The geometry of the domain is assumed to be as follows: The example demonstrates the use of high-order and NURBS vector finite element spaces with the linear elasticity bilinear form, meshes with curved elements, and the definition of piece-wise constant and vector coefficient objects. Static condensation is also illustrated. The example has a serial ( ex2.cpp ) and a parallel ( ex2p.cpp ) version. It also has a PETSc modification in examples/petsc and a PUMI modification in examples/pumi . We recommend viewing Example 1 before viewing this example.", "title": "Example 2: Linear Elasticity"}, {"location": "examples/#example-3-definite-maxwell-problem", "text": "This example code solves a simple 3D electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 2D or 3D. The example demonstrates the use of $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. Static condensation is also illustrated. The example has a serial ( ex3.cpp ) and a parallel ( ex3p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly and GPU devices are supported. We recommend viewing examples 1-2 before viewing this example.", "title": "Example 3: Definite Maxwell Problem"}, {"location": "examples/#example-4-grad-div-problem", "text": "This example code solves a simple 2D/3D $H(div)$ diffusion problem corresponding to the second order definite equation $$-{\\rm grad}(\\alpha\\,{\\rm div}(F)) + \\beta F = f$$ with boundary condition $F \\cdot n$ = \"given normal field\". Here we use a given exact solution $F$ and compute the corresponding right hand side $f$. We discretize with the Raviart-Thomas finite elements. The example demonstrates the use of $H(div)$ finite element spaces with the grad-div and $H(div)$ vector finite element mass bilinear form, as well as the computation of discretization error when the exact solution is known. Bilinear form hybridization and static condensation are also illustrated. The example has a serial ( ex4.cpp ) and a parallel ( ex4p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly and GPU devices are supported. 
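For reference, the core assembly of this grad-div problem can be sketched in a few lines of serial code; the constant coefficients are placeholders, and the exact solution, boundary conditions, hybridization, and solver of ex4.cpp are omitted.

```cpp
#include "mfem.hpp"
#include <iostream>
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian3D(4, 4, 4, Element::HEXAHEDRON);
   const int dim = mesh.Dimension(), order = 2;

   RT_FECollection fec(order - 1, dim);          // Raviart-Thomas space
   FiniteElementSpace fes(&mesh, &fec);

   ConstantCoefficient alpha(1.0), beta(1.0);    // placeholder coefficients
   BilinearForm a(&fes);
   a.AddDomainIntegrator(new DivDivIntegrator(alpha));       // -grad(alpha div F)
   a.AddDomainIntegrator(new VectorFEMassIntegrator(beta));  // + beta F
   a.Assemble();
   a.Finalize();

   std::cout << "H(div) unknowns: " << fes.GetTrueVSize() << std::endl;
   return 0;
}
```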
We recommend viewing examples 1-3 before viewing this example.", "title": "Example 4: Grad-div Problem"}, {"location": "examples/#example-5-darcy-problem", "text": "This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system $$ \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} $$ with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize with Raviart-Thomas finite elements (velocity $\\bf u$) and piecewise discontinuous polynomials (pressure $p$). The example demonstrates the use of the BlockMatrix and BlockOperator classes, as well as the collective saving of several grid functions in VisIt and ParaView formats. The example has a serial ( ex5.cpp ) and a parallel ( ex5p.cpp ) version. It also has a PETSc modification in examples/petsc . Partial assembly is supported. We recommend viewing examples 1-4 before viewing this example.", "title": "Example 5: Darcy Problem"}, {"location": "examples/#example-6-laplace-problem-with-amr", "text": "This is a version of Example 1 with a simple adaptive mesh refinement loop. The problem being solved is again the Laplace equation $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. The problem is solved on a sequence of meshes which are locally refined in a conforming (triangles, tetrahedrons) or non-conforming (quadrilaterals, hexahedra) manner according to a simple ZZ error estimator. The example demonstrates MFEM's capability to work with both conforming and nonconforming refinements, in 2D and 3D, on linear, curved and surface meshes. Interpolation of functions from coarse to fine meshes, as well as persistent GLVis visualization are also illustrated. The example has a serial ( ex6.cpp ) and a parallel ( ex6p.cpp ) version. It also has a PETSc modification in examples/petsc and a PUMI modification in examples/pumi . Partial assembly and GPU devices are supported. We recommend viewing Example 1 before viewing this example.", "title": "Example 6: Laplace Problem with AMR"}, {"location": "examples/#example-7-surface-meshes", "text": "This example code demonstrates the use of MFEM to define a triangulation of a unit sphere and a simple isoparametric finite element discretization of the Laplace problem with mass term, $$-\\Delta u + u = f.$$ The example highlights mesh generation, the use of mesh refinement, high-order meshes and finite elements, as well as surface-based linear and bilinear forms corresponding to the left-hand side and right-hand side of the discrete linear system. Simple local mesh refinement is also demonstrated. The example has a serial ( ex7.cpp ) and a parallel ( ex7p.cpp ) version. We recommend viewing Example 1 before viewing this example.", "title": "Example 7: Surface Meshes"}, {"location": "examples/#example-8-dpg-for-the-laplace-problem", "text": "This example code demonstrates the use of the Discontinuous Petrov-Galerkin (DPG) method in its primal 2x2 block form as a simple finite element discretization of the Laplace problem $$-\\Delta u = f$$ with homogeneous Dirichlet boundary conditions. We use high-order continuous trial space, a high-order interfacial (trace) space, and a high-order discontinuous test space defining a local dual ($H^{-1}$) norm. We use the primal form of DPG, see \"A primal DPG method without a first-order reformulation\" , Demkowicz and Gopalakrishnan, CAM 2013. 
The example highlights the use of interfacial (trace) finite elements and spaces, trace face integrators and the definition of block operators and preconditioners. The example has a serial ( ex8.cpp ) and a parallel ( ex8p.cpp ) version. We recommend viewing examples 1-5 before viewing this example.", "title": "Example 8: DPG for the Laplace Problem"}, {"location": "examples/#example-9-dg-advection", "text": "This example code solves the time-dependent advection equation $$\\frac{\\partial u}{\\partial t} + v \\cdot \\nabla u = 0,$$ where $v$ is a given fluid velocity, and $u_0(x)=u(0,x)$ is a given initial condition. The example demonstrates the use of Discontinuous Galerkin (DG) bilinear forms in MFEM (face integrators), the use of explicit and implicit (with block ILU preconditioning) ODE time integrators, the definition of periodic boundary conditions through periodic meshes, as well as the use of GLVis for persistent visualization of a time-evolving solution. The saving of time-dependent data files for external visualization with VisIt and ParaView is also illustrated. The example has a serial ( ex9.cpp ) and a parallel ( ex9p.cpp ) version. It also has a SUNDIALS modification in examples/sundials , a PETSc modification in examples/petsc , and a HiOp modification in examples/hiop .", "title": "Example 9: DG Advection"}, {"location": "examples/#example-10-nonlinear-elasticity", "text": "This example solves a time dependent nonlinear elasticity problem of the form $$ \\frac{dv}{dt} = H(x) + S v\\,,\\qquad \\frac{dx}{dt} = v\\,, $$ where $H$ is a hyperelastic model and $S$ is a viscosity operator of Laplacian type. The geometry of the domain is assumed to be as follows: The example demonstrates the use of nonlinear operators, as well as their implicit time integration using a Newton method for solving an associated reduced backward-Euler type nonlinear equation. Each Newton step requires the inversion of a Jacobian matrix, which is done through a (preconditioned) inner solver. The example has a serial ( ex10.cpp ) and a parallel ( ex10p.cpp ) version. It also has a SUNDIALS modification in examples/sundials and a PETSc modification in examples/petsc . We recommend viewing examples 2 and 9 before viewing this example.", "title": "Example 10: Nonlinear Elasticity"}, {"location": "examples/#example-11-laplace-eigenproblem", "text": "This example code demonstrates the use of MFEM to solve the eigenvalue problem $$-\\Delta u = \\lambda u$$ with homogeneous Dirichlet boundary conditions. We compute a number of the lowest eigenmodes by discretizing the Laplacian and Mass operators using a finite element space of the specified order, or an isoparametric/isogeometric space if order < 1 (quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE, as well as optionally the SuperLU or STRUMPACK parallel direct solvers. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex11p.cpp ) version. It also has a SLEPc modification in examples/petsc . We recommend viewing Example 1 before viewing this example.", "title": "Example 11: Laplace Eigenproblem"}, {"location": "examples/#example-12-linear-elasticity-eigenproblem", "text": "This example code solves the linear elasticity eigenvalue problem for a multi-material cantilever beam. 
Specifically, we compute a number of the lowest eigenmodes by approximating the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = \\lambda {\\bf u} \\,,$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field $\\bf u$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder. The geometry of the domain is assumed to be as follows: The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex12p.cpp ) version. We recommend viewing examples 2 and 11 before viewing this example.", "title": "Example 12: Linear Elasticity Eigenproblem"}, {"location": "examples/#example-13-maxwell-eigenproblem", "text": "This example code solves the Maxwell (electromagnetic) eigenvalue problem $$\\nabla\\times\\nabla\\times\\, E = \\lambda\\, E $$ with homogeneous Dirichlet boundary conditions $E \\times n = 0$. We compute a number of the lowest nonzero eigenmodes by discretizing the curl curl operator using a Nedelec finite element space of the specified order in 2D or 3D. The example highlights the use of the AME subspace eigenvalue solver from HYPRE, which uses LOBPCG and AMS internally. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex13p.cpp ) version. We recommend viewing examples 3 and 11 before viewing this example.", "title": "Example 13: Maxwell Eigenproblem"}, {"location": "examples/#example-14-dg-diffusion", "text": "This example code demonstrates the use of MFEM to define a discontinuous Galerkin (DG) finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Finite element spaces of any order, including zero on regular grids, are supported. The example highlights the use of discontinuous spaces and DG-specific face integrators. The example has a serial ( ex14.cpp ) and a parallel ( ex14p.cpp ) version. We recommend viewing examples 1 and 9 before viewing this example.", "title": "Example 14: DG Diffusion"}, {"location": "examples/#example-15-dynamic-amr", "text": "Building on Example 6 , this example demonstrates dynamic adaptive mesh refinement. The mesh is adapted to a time-dependent solution by refinement as well as by derefinement. For simplicity, the solution is prescribed and no time integration is done. However, the error estimation and refinement/derefinement decisions are realistic. At each outer iteration the right hand side function is changed to mimic a time dependent problem. Within each inner iteration the problem is solved on a sequence of meshes which are locally refined according to a simple ZZ error estimator. At the end of the inner iteration the error estimates are also used to identify any elements which may be over-refined and a single derefinement step is performed. After each refinement or derefinement step a rebalance operation is performed to keep the mesh evenly distributed among the available processors. The example demonstrates MFEM's capability to refine, derefine and load balance nonconforming meshes, in 2D and 3D, and on linear, curved and surface meshes. 
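The refinement half of such a loop, in its simplest serial form (essentially the pattern of Example 6; the derefinement, rebalancing, and time dependence added in ex15 are omitted), can be sketched as follows, with the solve itself elided:

```cpp
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(4, 4, Element::QUADRILATERAL);
   mesh.EnsureNCMesh();                      // allow nonconforming refinement
   const int dim = mesh.Dimension(), order = 2;

   H1_FECollection fec(order, dim);
   FiniteElementSpace fes(&mesh, &fec);
   GridFunction x(&fes);
   x = 0.0;

   // ZZ error estimator based on the diffusion flux, driving a threshold refiner.
   ConstantCoefficient one(1.0);
   DiffusionIntegrator flux_integ(one);
   L2_FECollection flux_fec(order, dim);
   FiniteElementSpace flux_fes(&mesh, &flux_fec, dim);
   ZienkiewiczZhuEstimator estimator(flux_integ, x, flux_fes);
   ThresholdRefiner refiner(estimator);
   refiner.SetTotalErrorFraction(0.7);

   for (int it = 0; it < 5; it++)
   {
      // ... assemble and solve the PDE for x on the current mesh ...

      refiner.Apply(mesh);                   // refine elements with large error
      if (refiner.Stop()) { break; }
      fes.Update(); flux_fes.Update();       // propagate the mesh change
      x.Update();
   }
   mesh.Save("amr.mesh");
   return 0;
}
```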
Interpolation of functions between coarse and fine meshes, persistent GLVis visualization, and saving of time-dependent fields for external visualization with VisIt are also illustrated. The example has a serial ( ex15.cpp ) and a parallel ( ex15p.cpp ) version. We recommend viewing examples 1, 6 and 9 before viewing this example.", "title": "Example 15: Dynamic AMR"}, {"location": "examples/#example-16-time-dependent-heat-conduction", "text": "This example code solves a simple 2D/3D time dependent nonlinear heat conduction problem $$\\frac{du}{dt} = \\nabla \\cdot \\left( \\kappa + \\alpha u \\right) \\nabla u$$ with a natural insulating boundary condition $\\frac{du}{dn} = 0$. We linearize the problem by using the temperature field $u$ from the previous time step to compute the conductivity coefficient. This example demonstrates both implicit and explicit time integration as well as a single Picard step method for linearization. The saving of time dependent data files for external visualization with VisIt is also illustrated. The example has a serial ( ex16.cpp ) and a parallel ( ex16p.cpp ) version. We recommend viewing examples 2, 9, and 10 before viewing this example.", "title": "Example 16: Time Dependent Heat Conduction"}, {"location": "examples/#example-17-dg-linear-elasticity", "text": "This example code solves a simple linear elasticity problem describing a multi-material cantilever beam using symmetric or non-symmetric discontinuous Galerkin (DG) formulation. Specifically, we approximate the weak form of $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are Dirichlet, $\\bf{u}=\\bf{u_D}$, on the fixed part of the boundary, namely boundary attributes 1 and 2; on the rest of the boundary we use ${\\sigma}({\\bf u})\\cdot n = {\\bf 0}$. The geometry of the domain is assumed to be as follows: The example demonstrates the use of high-order DG vector finite element spaces with the linear DG elasticity bilinear form, meshes with curved elements, and the definition of piece-wise constant and function vector-coefficient objects. The use of non-homogeneous Dirichlet b.c. imposed weakly, is also illustrated. The example has a serial ( ex17.cpp ) and a parallel ( ex17p.cpp ) version. We recommend viewing examples 2 and 14 before viewing this example.", "title": "Example 17: DG Linear Elasticity"}, {"location": "examples/#example-18-dg-euler-equations", "text": "This example code solves the compressible Euler system of equations, a model nonlinear hyperbolic PDE, with a discontinuous Galerkin (DG) formulation. The primary purpose is to show how a transient system of nonlinear equations can be formulated in MFEM. The equations are solved in conservative form $$\\frac{\\partial u}{\\partial t} + \\nabla \\cdot {\\bf F}(u) = 0$$ with a state vector $u = [ \\rho, \\rho v_0, \\rho v_1, \\rho E ]$, where $\\rho$ is the density, $v_i$ is the velocity in the $i^{\\rm th}$ direction, $E$ is the total specific energy, and $H = E + p / \\rho$ is the total specific enthalpy. The pressure, $p$ is computed through a simple equation of state (EOS) call. 
The conservative hydrodynamic flux ${\\bf F}$ in each direction $i$ is $${\\bf F_{\\it i}} = [ \\rho v_i, \\rho v_0 v_i + p \\delta_{i,0}, \\rho v_1 v_i + p \\delta_{i,1}, \\rho v_i H ]$$ Specifically, the example solves for an exact solution of the equations whereby a vortex is transported by a uniform flow. Since all boundaries are periodic here, the method's accuracy can be assessed by measuring the difference between the solution and the initial condition at a later time when the vortex returns to its initial location. Note that as the order of the spatial discretization increases, the timestep must become smaller. This example currently uses a simple estimate derived by Cockburn and Shu for the 1D RKDG method. An additional factor can be tuned by passing the --cfl (or -c shorter) flag. The example demonstrates user-defined nonlinear form with hyperbolic form integrator for systems of equations that are defined with block vectors, and how these are used with an operator for explicit time integrators. In this case the system also involves an external approximate Riemann solver for the DG interface flux. It also demonstrates how to use GLVis for in-situ visualization of vector grid functions. The example has a serial ( ex18.cpp ) and a parallel ( ex18p.cpp ) version. We recommend viewing examples 9, 14 and 17 before viewing this example.", "title": "Example 18: DG Euler Equations"}, {"location": "examples/#example-19-incompressible-nonlinear-elasticity", "text": "This example code solves the quasi-static incompressible nonlinear hyperelasticity equations. Specifically, it solves the nonlinear equation $$ \\nabla \\cdot \\sigma(F) = 0 $$ subject to the constraint $$ \\text{det } F = 1 $$ where $\\sigma$ is the Cauchy stress and $F_{ij} = \\delta_{ij} + u_{i,j}$ is the deformation gradient. To handle the incompressibility constraint, pressure is included as an independent unknown $p$ and the stress response is modeled as an incompressible neo-Hookean hyperelastic solid . The geometry of the domain is assumed to be as follows: This formulation requires solving the saddle point system $$ \\left[ \\begin{array}{cc} K &B^T \\\\ B & 0 \\end{array} \\right] \\left[\\begin{array}{c} \\Delta u \\\\ \\Delta p \\end{array} \\right] = \\left[\\begin{array}{c} R_u \\\\ R_p \\end{array} \\right] $$ at each Newton step. To solve this linear system, we implement a specialized block preconditioner of the form $$ P^{-1} = \\left[\\begin{array}{cc} I & -\\tilde{K}^{-1}B^T \\\\ 0 & I \\end{array} \\right] \\left[\\begin{array}{cc} \\tilde{K}^{-1} & 0 \\\\ 0 & -\\gamma \\tilde{S}^{-1} \\end{array} \\right] $$ where $\\tilde{K}^{-1}$ is an approximation of the inverse of the stiffness matrix $K$ and $\\tilde{S}^{-1}$ is an approximation of the inverse of the Schur complement $S = BK^{-1}B^T$. To approximate the Schur complement, we use the mass matrix for the pressure variable $p$. The example demonstrates how to solve nonlinear systems of equations that are defined with block vectors as well as how to implement specialized block preconditioners for use in iterative solvers. The example has a serial ( ex19.cpp ) and a parallel ( ex19p.cpp ) version. We recommend viewing examples 2, 5 and 10 before viewing this example.", "title": "Example 19: Incompressible Nonlinear Elasticity"}, {"location": "examples/#example-20-symplectic-integration-of-hamiltonian-systems", "text": "This example demonstrates the use of the variable order, symplectic time integration algorithm. 
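Examples 10 and 19 both wrap Newton's method around a nonlinear residual. A hedged sketch of how an mfem::NewtonSolver is typically wired to a NonlinearForm and an inner Jacobian solver (tolerances and iteration counts below are placeholders):

```cpp
// Sketch: Newton iteration on a NonlinearForm residual (placeholder settings).
#include "mfem.hpp"
using namespace mfem;

void SolveNonlinear(NonlinearForm &residual, Solver &jacobian_solver, Vector &x)
{
   NewtonSolver newton;
   newton.SetOperator(residual);        // provides Mult() and GetGradient()
   newton.SetSolver(jacobian_solver);   // inner solver for the Jacobian systems
   newton.SetRelTol(1e-8);
   newton.SetAbsTol(0.0);
   newton.SetMaxIter(20);
   newton.SetPrintLevel(1);

   Vector zero;                         // empty RHS => solve residual(x) = 0
   newton.Mult(zero, x);                // x is both the initial guess and the result

   MFEM_VERIFY(newton.GetConverged(), "Newton solver did not converge.");
}
```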
Symplectic integration algorithms are designed to conserve energy when integrating systems of ODEs which are derived from Hamiltonian systems. Hamiltonian systems define the energy of a system as a function of time (t), a set of generalized coordinates (q), and their corresponding generalized momenta (p). $$ H(q,p,t) = T(p) + V(q,t) $$ Hamilton's equations then specify how q and p evolve in time: $$ \\frac{dq}{dt} = \\frac{dH}{dp}\\,,\\qquad \\frac{dp}{dt} = -\\frac{dH}{dq} $$ To use the symplectic integration classes we need to define an mfem::Operator ${\\bf P}$ which evaluates the action of dH/dp, and an mfem::TimeDependentOperator ${\\bf F}$ which computes -dH/dq. This example visualizes its results as an evolution in phase space by defining the axes to be $q$, $p$, and $t$ rather than $x$, $y$, and $z$. In this space we build a ribbon-like mesh with nodes at $(0,0,t)$ and $(q,p,t)$. Finally we plot the energy as a function of time as a scalar field on this ribbon-like mesh. This scheme highlights any variations in the energy of the system. This example offers five simple 1D Hamiltonians: Simple Harmonic Oscillator (mass on a spring) $$H = \\frac{1}{2}\\left( \\frac{p^2}{m} + \\frac{q^2}{k} \\right)$$ Pendulum $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} - k \\left( 1 - cos(q) \\right) \\right]$$ Gaussian Potential Well $$H = \\frac{p^2}{2m} - k e^{-q^2 / 2}$$ Quartic Potential $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} + k \\left( 1 + q^2 \\right) q^2 \\right]$$ Negative Quartic Potential $$H = \\frac{1}{2}\\left[ \\frac{p^2}{m} + k \\left( 1 - \\frac{q^2}{8} \\right) q^2 \\right]$$ In all cases these Hamiltonians are shifted by constant values so that the energy will remain positive. The mean and standard deviation of the computed energies at each time step are displayed upon completion. When run in parallel, each processor integrates the same Hamiltonian system but starting from different initial conditions. The example has a serial ( ex20.cpp ) and a parallel ( ex20p.cpp ) version. See the Maxwell miniapp for another application of symplectic integration.", "title": "Example 20: Symplectic Integration of Hamiltonian Systems"}, {"location": "examples/#example-21-adaptive-mesh-refinement-for-linear-elasticity", "text": "This is a version of Example 2 with a simple adaptive mesh refinement loop. The problem being solved is again linear elasticity describing a multi-material cantilever beam. The problem is solved on a sequence of meshes which are locally refined in a conforming (triangles, tetrahedrons) or non-conforming (quadrilaterals, hexahedra) manner according to a simple ZZ error estimator. The example demonstrates MFEM's capability to work with both conforming and nonconforming refinements, in 2D and 3D, on linear and curved meshes. Interpolation of functions from coarse to fine meshes, as well as persistent GLVis visualization are also illustrated. The example has a serial ( ex21.cpp ) and a parallel ( ex21p.cpp ) version. We recommend viewing Examples 2 and 6 before viewing this example.", "title": "Example 21: Adaptive mesh refinement for linear elasticity"}, {"location": "examples/#example-22-complex-linear-systems", "text": "This example code demonstrates the use of MFEM to define and solve a complex-valued linear system. 
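Returning to Example 20: supplying a Hamiltonian to the symplectic integrators amounts to providing one operator for dH/dp and one for -dH/dq. A hedged sketch for a standard 1D oscillator with H = p^2/(2m) + k q^2/2; the constants and the SIAVSolver usage follow the ex20.cpp pattern but are placeholders, not a verbatim excerpt.

```cpp
// Sketch: operators for a 1D harmonic oscillator, H = p^2/(2m) + k q^2/2.
#include "mfem.hpp"
using namespace mfem;

class MomentumOp : public Operator               // evaluates dH/dp = p/m
{
   double m_;
public:
   MomentumOp(double m) : Operator(1), m_(m) {}
   void Mult(const Vector &p, Vector &dqdt) const override { dqdt(0) = p(0)/m_; }
};

class NegPotentialGradOp : public TimeDependentOperator   // evaluates -dH/dq = -k q
{
   double k_;
public:
   NegPotentialGradOp(double k) : TimeDependentOperator(1), k_(k) {}
   void Mult(const Vector &q, Vector &dpdt) const override { dpdt(0) = -k_*q(0); }
};

int main()
{
   MomentumOp P(1.0);            // placeholder mass
   NegPotentialGradOp F(1.0);    // placeholder spring constant

   SIAVSolver sia(2);            // symplectic integrator of order 2, as used in ex20
   sia.Init(P, F);

   Vector q(1), p(1);
   q(0) = 0.0; p(0) = 1.0;
   double t = 0.0, dt = 0.01;
   for (int i = 0; i < 1000; i++) { sia.Step(q, p, t, dt); }
   return 0;
}
```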
It implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ In each case the field is driven by a forced oscillation, with angular frequency $\\omega$, imposed at the boundary or a portion of the boundary. The example also demonstrates how to display a time-varying solution as a sequence of fields sent to a single GLVis socket. The example has a serial ( ex22.cpp ) and a parallel ( ex22p.cpp ) version. We recommend viewing examples 1, 3, and 4 before viewing this example.", "title": "Example 22: Complex Linear Systems"}, {"location": "examples/#example-23-wave-problem", "text": "This example code solves a simple 2D/3D wave equation with a second order time derivative: $$\\frac{\\partial^2 u}{\\partial t^2} - c^2\\Delta u = 0$$ The boundary conditions are either Dirichlet or Neumann. The example demonstrates the use of time dependent operators, implicit solvers and second order time integration. The example has only a serial ( ex23.cpp ) version. We recommend viewing examples 9 and 10 before viewing this example.", "title": "Example 23: Wave Problem"}, {"location": "examples/#example-24-mixed-finite-element-spaces", "text": "This example code illustrates usage of mixed finite element spaces, with three variants: $H^1 \\times H(curl)$ $H(curl) \\times H(div)$ $H(div) \\times L_2$ Using different approaches for demonstration purposes, we project or interpolate a gradient, curl, or divergence in the appropriate spaces, comparing the errors in each case. Partial assembly and GPU devices are supported. The example has a serial ( ex24.cpp ) and a parallel ( ex24p.cpp ) version. We recommend viewing examples 1 and 3 before viewing this example.", "title": "Example 24: Mixed finite element spaces"}, {"location": "examples/#example-25-perfectly-matched-layers", "text": "The example illustrates the use of a Perfectly Matched Layer (PML) for the simulation of time-harmonic electromagnetic waves propagating in unbounded domains. PML was originally introduced by Berenger in \"A Perfectly Matched Layer for the Absorption of Electromagnetic Waves\" . It is a technique used to solve wave propagation problems posed in infinite domains. The implementation involves the introduction of an artificial absorbing layer that minimizes undesired reflections. Inside this layer a complex coordinate stretching map is used which forces the wave modes to decay exponentially. The example solves the indefinite Maxwell equations $$\\nabla \\times (a \\nabla \\times E) - \\omega^2 b E = f.$$ where $a = \\mu^{-1} |J|^{-1} J^T J$, $b= \\epsilon |J| J^{-1} J^{-T}$ and $J$ is the Jacobian matrix of the coordinate transformation. The example demonstrates discretization with Nedelec finite elements in 2D or 3D, as well as the use of complex-valued bilinear and linear forms. Several test problems are included, with known exact solutions. The example has a serial ( ex25.cpp ) and a parallel ( ex25p.cpp ) version. 
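As a small, hedged illustration of the mixed-space projections in Example 24 above, a gradient can be mapped from an $H^1$ space into an $H(curl)$ space with a DiscreteLinearOperator; the mesh, order, and potential below are placeholders.

```cpp
// Sketch: interpolate grad(p) from H^1 into H(curl), in the spirit of ex24.
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian3D(4, 4, 4, Element::HEXAHEDRON);
   const int order = 2, dim = mesh.Dimension();

   H1_FECollection h1_fec(order, dim);
   ND_FECollection nd_fec(order, dim);
   FiniteElementSpace h1_fes(&mesh, &h1_fec);
   FiniteElementSpace nd_fes(&mesh, &nd_fec);

   // A scalar potential p (here simply projected from a coefficient).
   GridFunction p(&h1_fes);
   FunctionCoefficient p_coef([](const Vector &x) { return x(0)*x(1); });
   p.ProjectCoefficient(p_coef);

   // u = grad p, computed by the discrete gradient interpolator.
   GridFunction u(&nd_fes);
   DiscreteLinearOperator grad(&h1_fes, &nd_fes);
   grad.AddDomainInterpolator(new GradientInterpolator);
   grad.Assemble();
   grad.Finalize();
   grad.Mult(p, u);
   return 0;
}
```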
We recommend viewing Example 22 before viewing this example.", "title": "Example 25: Perfectly Matched Layers"}, {"location": "examples/#example-26-multigrid-preconditioner", "text": "This example code demonstrates the use of MFEM to define a simple isoparametric finite element discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions and how to solve it efficiently using a matrix-free multigrid preconditioner. The example highlights the creation of a hierarchy of discretization spaces and diffusion bilinear forms using partial assembly. The levels in the hierarchy of finite element spaces may be constructed through geometric or order refinements. Moreover, the construction of a multigrid preconditioner for the PCG solver is shown. The multigrid uses a PCG solver on the coarsest level and second order Chebyshev accelerated smoothers on the other levels. The example has a serial ( ex26.cpp ) and a parallel ( ex26p.cpp ) version. We recommend viewing Example 1 before viewing this example.", "title": "Example 26: Multigrid Preconditioner"}, {"location": "examples/#example-27-laplace-boundary-conditions", "text": "This example code demonstrates the use of MFEM to define a simple finite element discretization of the Laplace problem: $$ -\\Delta u = 0 $$ with a variety of boundary conditions. Specifically, we discretize using a FE space of the specified order using a continuous or discontinuous space. We then apply Dirichlet, Neumann (both homogeneous and inhomogeneous), Robin, and Periodic boundary conditions on different portions of a predefined mesh. Boundary conditions: $u = u_{dbc}$ on $\\Gamma_{dbc}$ $\\hat{n}\\cdot\\nabla u = g_{nbc}$ on $\\Gamma_{nbc}$ $\\hat{n}\\cdot\\nabla u = 0$ on $\\Gamma_{nbc_0}$ $\\hat{n}\\cdot\\nabla u + a u = b$ on $\\Gamma_{rbc}$ as well as periodic boundary conditions which are enforced topologically. The example has a serial ( ex27.cpp ) and a parallel ( ex27p.cpp ) version. We recommend viewing examples 1 and 14 before viewing this example.", "title": "Example 27: Laplace Boundary Conditions"}, {"location": "examples/#example-28-constraints-and-sliding-boundary-conditions", "text": "This example code illustrates the use of constraints in linear solvers by solving an elasticity problem where the normal component of the displacement is constrained to zero on two boundaries but tangential displacement is allowed. The constraints can be enforced in several different ways, including eliminating them from the linear system or solving a saddle-point system that explicitly includes constraint conditions. The example has a serial ( ex28.cpp ) and a parallel ( ex28p.cpp ) version. We recommend viewing example 2 before viewing this example.", "title": "Example 28: Constraints and Sliding Boundary Conditions"}, {"location": "examples/#example-29-solving-pdes-on-embedded-surfaces", "text": "This example demonstrates setting up and solving an anisotropic Laplace problem $$-\\nabla\\cdot(\\sigma\\nabla u) = 1 \\quad\\text{in } \\Omega$$ with homogeneous Dirichlet boundary conditions $$ u = 0 \\quad\\text{on } \\partial\\Omega$$ where $\\Omega$ is a two dimensional curved surface embedded in three dimensions and $\\sigma$ is an anisotropic diffusion tensor. The example demonstrates and validates our DiffusionIntegrator 's ability to properly integrate three dimensional fluxes on a two dimensional domain. 
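Connecting Example 27's list of boundary conditions to code: integrators are restricted to parts of the boundary through boundary-attribute marker arrays. A hedged sketch with placeholder attribute numbers and coefficients (not the actual ex27 setup):

```cpp
// Sketch: restricting integrators to boundary attributes (cf. Example 27).
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(8, 8, Element::QUADRILATERAL);
   H1_FECollection fec(2, mesh.Dimension());
   FiniteElementSpace fes(&mesh, &fec);

   BilinearForm a(&fes);
   LinearForm b(&fes);
   ConstantCoefficient one(1.0), g(1.5), alpha(2.0), beta(0.5);
   a.AddDomainIntegrator(new DiffusionIntegrator(one));

   const int nattr = mesh.bdr_attributes.Max();

   // Neumann data g on boundary attribute 2 (placeholder value).
   Array<int> nbc_marker(nattr);  nbc_marker = 0;  nbc_marker[1] = 1;
   b.AddBoundaryIntegrator(new BoundaryLFIntegrator(g), nbc_marker);

   // Robin condition du/dn + alpha*u = beta on attribute 3 (placeholder values).
   Array<int> rbc_marker(nattr);  rbc_marker = 0;  rbc_marker[2] = 1;
   a.AddBoundaryIntegrator(new MassIntegrator(alpha), rbc_marker);
   b.AddBoundaryIntegrator(new BoundaryLFIntegrator(beta), rbc_marker);

   a.Assemble();
   b.Assemble();
   // ... form and solve the linear system as in ex1/ex27 ...
   return 0;
}
```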
Not all of our integrators currently support such cases, but the DiffusionIntegrator can be used as a simple example of how to extend other integrators when necessary. The example has a serial ( ex29.cpp ) and a parallel ( ex29p.cpp ) version. We recommend viewing examples 1 and 7 before viewing this example.", "title": "Example 29: Solving PDEs on embedded surfaces"}, {"location": "examples/#example-30-resolving-rough-and-fine-scale-problem-data", "text": "Unresolved problem data will affect the accuracy of a discretized PDE solution as well as a posteriori estimates of the solution error. This example uses a CoefficientRefiner object to preprocess an input mesh until the resolution of the prescribed problem data $f \\in L^2$ is below a prescribed tolerance. In this example, the resolution is identified with a data oscillation function on the mesh $\\mathcal{T}$, defined as $$ \\mathrm{osc}(f) = \\Big( \\sum_{T\\in\\mathcal{T}} \\| h \\cdot (I - \\Pi)\\, f \\|^2_{L^2(T)} \\Big)^{1/2}, $$ where $h$ is the local element size function and $\\Pi$ is a finite element projection operator, and the sum is taken over all elements $T$ in the mesh. In this example, the coarse initial mesh is adaptively refined until $\\mathrm{osc}(f)$ is below a prescribed tolerance for various candidate functions $f \\in L^2$. When using rough problem data, it is recommended to perform this type of preprocessing before a posteriori error estimation. The example has a serial ( ex30.cpp ) and a parallel ( ex30p.cpp ) version. We recommend viewing examples 1 and 6 before viewing this example.", "title": "Example 30: Resolving rough and fine-scale problem data"}, {"location": "examples/#example-31-anisotropic-definite-maxwell-problem", "text": "This example code solves a simple electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + \\sigma E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". In this example $\\sigma$ is an anisotropic 3x3 tensor. Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 1D, 2D, or 3D. The example demonstrates the use of restricted $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. These restricted spaces allow the solution of 1D or 2D electromagnetic problems which involve 3D field vectors. Such problems arise in plasma physics and crystallography. The example has a serial ( ex31.cpp ) and a parallel ( ex31p.cpp ) version. We recommend viewing example 3 before viewing this example.", "title": "Example 31: Anisotropic Definite Maxwell Problem"}, {"location": "examples/#example-32-anisotropic-maxwell-eigenproblem", "text": "This example code solves the Maxwell (electromagnetic) eigenvalue problem with anisotropic permittivity, $\\epsilon$ $$\\nabla\\times\\nabla\\times\\, E = \\lambda\\, \\epsilon E $$ with homogeneous Dirichlet boundary conditions $E \\times n = 0$. We compute a number of the lowest nonzero eigenmodes by discretizing the curl curl operator using a Nedelec finite element space of the specified order in 1D, 2D, or 3D. The example demonstrates the use of restricted $H(curl)$ finite element spaces in an eigenmode context. These restricted spaces allow the solution of 1D or 2D electromagnetic problems which involve 3D field vectors. 
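For Example 31 (and the underlying Example 3 pattern), the definite Maxwell operator combines a curl-curl term and a vector FE mass term on a Nedelec space. A hedged serial sketch with placeholder scalar coefficients (ex31 itself uses an anisotropic tensor):

```cpp
// Sketch: curl-curl + mass assembly on an H(curl) Nedelec space (cf. ex3/ex31).
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian3D(4, 4, 4, Element::TETRAHEDRON);
   const int order = 1, dim = mesh.Dimension();

   ND_FECollection fec(order, dim);
   FiniteElementSpace fespace(&mesh, &fec);

   // Essential (tangential) boundary conditions on the whole boundary.
   Array<int> ess_bdr(mesh.bdr_attributes.Max()), ess_tdof_list;
   ess_bdr = 1;
   fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

   ConstantCoefficient muinv(1.0), sigma(1.0);   // placeholder coefficients
   BilinearForm a(&fespace);
   a.AddDomainIntegrator(new CurlCurlIntegrator(muinv));
   a.AddDomainIntegrator(new VectorFEMassIntegrator(sigma));
   a.Assemble();

   // Right-hand side f (placeholder: a constant vector field).
   Vector fvec(dim); fvec = 1.0;
   VectorConstantCoefficient f(fvec);
   LinearForm b(&fespace);
   b.AddDomainIntegrator(new VectorFEDomainLFIntegrator(f));
   b.Assemble();

   GridFunction x(&fespace);  x = 0.0;
   SparseMatrix A;  Vector B, X;
   a.FormLinearSystem(ess_tdof_list, x, b, A, X, B);
   GSSmoother M(A);
   PCG(A, M, B, X, 1, 500, 1e-12, 0.0);
   a.RecoverFEMSolution(X, b, x);
   return 0;
}
```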
Such problems arise in plasma physics and crystallography. The example highlights the use of the AME subspace eigenvalue solver from HYPRE, which uses LOBPCG and AMS internally. Reusing multiple GLVis visualization windows for multiple eigenfunctions is also illustrated. The example has only a parallel ( ex32p.cpp ) version. We recommend viewing examples 13 and 31 before viewing this example.", "title": "Example 32: Anisotropic Maxwell Eigenproblem"}, {"location": "examples/#example-33-spectral-fractional-laplacian", "text": "This example code demonstrates the use of MFEM to solve the fractional Laplacian problem $$ (-\\Delta)^\\alpha u = 1, \\quad \\alpha > 0, $$ with homogeneous Dirichlet boundary conditions. The problem solved in this example is similar to ex1 , but involves a fractional-order diffusion operator whose inverse can be approximated by a series of inverses of integer-order diffusion operators. Solving each of these independent integer-order PDEs with MFEM and summing their solutions results in a discrete solution to the fractional Laplacian problem above. The example has a serial ( ex33.cpp ) and a parallel ( ex33p.cpp ) version. We recommend viewing Example 1 before viewing this example.", "title": "Example 33: Spectral fractional Laplacian"}, {"location": "examples/#example-34-source-function-using-a-submesh-transfer", "text": "This example demonstrates the use of a SubMesh object to transfer solution data from a sub-domain and use this as a source function on the full domain. In this case we compute a volumetric current density $\\vec{J}$ as the gradient of a scalar potential $\\varphi$ on a portion of the domain. $$\\nabla\\cdot(\\sigma\\nabla\\varphi)=0$$ $$\\vec{J} = -\\sigma\\nabla\\varphi$$ Where a voltage difference is applied on surfaces of the sub-domain (shown on the left) to generate the current density restricted to this sub-domain. The current density is then transferred to the full domain (shown on the right) using a SubMesh object. We then use this current density on the full domain as a source term in a magnetostatic solve for a vector potential $\\vec{A}$. $$\\nabla\\times(\\mu^{-1}\\nabla\\times\\vec{A})=\\vec{J}$$ $$\\vec{B} = \\nabla\\times\\vec{A}$$ This example verifies the recreation of boundary attributes on a sub-domain mesh as well as transfer of Raviart-Thomas vector fields between the SubMesh and the full Mesh. Note that the data transfer in this particular example involves arbitrary order Raviart-Thomas degrees of freedom on a mixture of tetrahedral and triangular prism elements. The example has a serial ( ex34.cpp ) and a parallel ( ex34p.cpp ) version. We recommend viewing Examples 1 and 3 before viewing this example.", "title": "Example 34: Source Function using a SubMesh Transfer"}, {"location": "examples/#example-35-port-boundary-conditions-using-submesh-transfers", "text": "This example demonstrates the use of a SubMesh object to transfer a port boundary condition from a portion of the boundary to the corresponding portion of the full domain. 
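The SubMesh workflow of Example 34, reused here in Example 35, boils down to extracting a sub-domain, computing a field on it, and transferring the result back to the parent mesh. A hedged sketch assuming the SubMesh API of recent MFEM releases; the attribute, mesh, and "solve" below are placeholders.

```cpp
// Sketch: extract a sub-domain, define a field on it, transfer it to the parent mesh.
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh parent = Mesh::MakeCartesian3D(8, 8, 8, Element::HEXAHEDRON);

   // Extract the sub-domain with attribute 1 (placeholder; a real multi-attribute
   // mesh would normally be loaded from file).
   Array<int> domain_attrs(1);
   domain_attrs[0] = 1;
   SubMesh sub = SubMesh::CreateFromDomain(parent, domain_attrs);

   H1_FECollection fec(1, parent.Dimension());
   FiniteElementSpace parent_fes(&parent, &fec);
   FiniteElementSpace sub_fes(&sub, &fec);

   GridFunction u_sub(&sub_fes), u_parent(&parent_fes);
   ConstantCoefficient two(2.0);
   u_sub.ProjectCoefficient(two);   // stand-in for an actual solve on the sub-domain
   u_parent = 0.0;

   // Copy the sub-domain values into the corresponding parent DOFs.
   SubMesh::Transfer(u_sub, u_parent);
   return 0;
}
```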
Just as in Example 22 this example implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0\\mbox{ with }u|_\\Gamma=v$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0\\mbox{ with }\\hat{n}\\times(\\vec{u}\\times\\hat{n})|_\\Gamma=\\vec{v}$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0\\mbox{ with }\\hat{n}\\cdot\\vec{u}|_\\Gamma=v$$ Where $\\Gamma$ is a portion of the boundary called the port . In each case the field is driven by a forced oscillation, with angular frequency $\\omega$, imposed at the boundary or a portion of the boundary. In Example 22 this boundary condition was simply a constant in space. In this example the boundary condition is an eigenmode of a lower dimensional eigenvalue problem defined on a portion of the boundary as follows: For the scalar $H^1$ field: $$-\\nabla\\cdot\\left(\\nabla v\\right) = \\lambda\\,v\\mbox{ with }v|_{\\partial\\Gamma}=0$$ For the vector $H(curl)$ field: $$\\nabla\\times\\left(\\nabla\\times\\vec{v}\\right) = \\lambda\\,\\vec{v}\\mbox{ with }\\hat{n}_{\\partial\\Gamma}\\times\\vec{v}|_{\\partial\\Gamma}=0$$ For the vector $H(div)$ field: $$-\\nabla\\cdot\\left(\\nabla v\\right) = \\lambda\\,v\\mbox{ with }\\hat{n}_{\\partial\\Gamma}\\cdot\\nabla v|_{\\partial\\Gamma}=0$$ The different cases implemented in this example can be used to verify the transfer of an $H^1$ scalar field, the tangential components of an $H(curl)$ vector field, and the normal component of an $H(div)$ vector field (as a scalar $L^2$ field in this case) between a SubMesh and its parent mesh. The example has only a parallel ( ex35p.cpp ) version because the eigenmode solver used to compute the field on the port is only implemented in parallel. We recommend viewing Examples 11, 13, and 22 before viewing this example.", "title": "Example 35: Port Boundary Conditions using SubMesh Transfers"}, {"location": "examples/#example-36-obstacle-problem", "text": "This example code solves the pointwise bound-constrained energy minimization problem $$ \\text{minimize } \\frac{1}{2}\\|\\nabla u\\|^2 \\text{ in } H^1_0(\\Omega)\\, \\text{ subject to } u \\ge \\varphi\\,.$$ This is known as the obstacle problem, and it is a classical motivating example in the study of variational inequalities and free boundary problems. In this example, the obstacle $\\varphi$ is the graph of a half-sphere centered at the origin of a circular domain $\\Omega$. After solving to a specified tolerance, the numerical solution is compared to a closed-form exact solution to assess its accuracy. The problem is solved using the Proximal Galerkin finite element method, which is a nonlinear, structure-preserving mixed method for pointwise bound constraints proposed by Keith and Surowiec . In turn, this example highlights MFEM's ability to deliver high-order solutions to variational inequalities and showcases how to set up and solve nonlinear mixed methods. The example has a serial ( ex36.cpp ) and a parallel ( ex36p.cpp ) version. 
We recommend viewing Example 1 before viewing this example.", "title": "Example 36: Obstacle Problem"}, {"location": "examples/#example-37-topology-optimization", "text": "Density field $\\rho$ Problem set-up and domain $\\Omega$ This example code solves a classical cantilever beam topology optimization problem. The aim is to find an optimal material density field $\\rho$ in $L^1(\\Omega)$ to minimize the elastic compliance; i.e., $$\\begin{align} &\\text{minimize} \\int_\\Omega \\mathbf{f} \\cdot \\mathbf{u}(\\rho)\\, \\mathrm{d}x\\, \\text{ over }\\, \\rho \\in L^1(\\Omega) \\\\ &\\text{subject to }\\, 0 \\leq \\rho \\leq 1\\, \\text{ and } \\int_\\Omega \\rho\\, \\mathrm{d}x = \\theta\\, \\mathrm{vol}(\\Omega) \\,. \\end{align}$$ In this problem, $\\mathbf{f}$ is a localized force and the linearly elastic displacement field $\\mathbf{u} = \\mathbf{u}(\\rho)$ is determined by a material density field $\\rho$ with total volume fraction $0<\\theta<1$. The problem is solved using a mirror descent algorithm proposed by Keith and Surowiec . For further details, see the more elaborate description of this PDE-constrained optimization problem given in the example code and the aforementioned paper. The example has a serial ( ex37.cpp ) and a parallel ( ex37p.cpp ) version. We recommend viewing Example 2 before viewing this example.", "title": "Example 37: Topology Optimization"}, {"location": "examples/#example-38-cut-volume-and-cut-surface-integration", "text": "This example code demonstrates construction of cut-surface and cut-volume IntegrationRules. The cut is specified by the zero level set of a given Coefficient $\\phi$. The resulting IntegrationRules are combined with standard LinearFormIntegrators to demonstrate integration of a function $u$ over an implicit interface, and a subdomain bounded by an implicit interface: $$ S = \\int_{\\phi = 0} u(x) ~ ds, \\quad V = \\int_{\\phi > 0} u(x) ~ dx. $$ The IntegrationRules are constructed by the moment-fitting algorithm introduced by M\u00fcller, Kummer and Oberlack . Through a set of basis functions, for each element the method defines and solves a local under-determined system for the vector of quadrature weights. All surface and volume integrals, which are required to form the system, are reduced to 1D integration over intersected segments. The example has only a serial ( ex38.cpp ) version, because the construction of the integration rules is an element-local procedure. It requires MFEM to be built with LAPACK, which is used to find the optimal solution of an under-determined system of equations.", "title": "Example 38: Cut-Volume and Cut-Surface Integration"}, {"location": "examples/#example-39-named-attribute-sets", "text": "This example uses the Poisson equation to demonstrate the use of named attribute sets in MFEM to specify material regions, boundary regions, or source regions by name rather than attribute numbers. It also demonstrates how new named attribute sets may be created from arbitrary groupings of attribute numbers and used as a convenient shorthand to refer to those groupings in other portions of the application or through the command line. Named attribute sets also required changes to MFEM's mesh file formats. This example makes use of a custom input mesh file ( compass.msh ) produced using Gmsh which includes named regions and boundaries. A related mesh file ( compass.mesh ) illustrates MFEM's representation of the new named attribute sets. See file formats for details of the augmented mesh file format. 
The example has a serial ( ex39.cpp ) and a parallel ( ex39p.cpp ) version. We recommend viewing Example 1 before viewing this example.", "title": "Example 39: Named Attribute Sets"}, {"location": "examples/#example-40-eikonal-equation", "text": "This example highlights MFEM's ability to solve a fully-nonlinear, first-order PDE with high-order finite elements. In particular this example uses the proximal Galerkin method to solve the eikonal equation, $$ |\\nabla u| = 1 \\text{ in } \\Omega, \\quad u = 0 \\text{ on } \\partial \\Omega. $$ At each point $x$ in the domain $\\Omega$, the solution of this PDE provides the Euclidean distance to the domain boundary, $u(x) = \\min \\{ | x - y| : y \\in \\partial \\Omega\\}$. The problem is solved by recasting $u$ as the solution of the nonlinear program $$ \\text{maximize } \\int_\\Omega u\\, \\mathrm{d} x\\, \\text{ in } W^{1,\\infty}_0(\\Omega)\\, \\text{ subject to } |\\nabla u | \\leq 1 \\text{ a.e. in } \\Omega.$$ A solution is then obtained by discretizing and solving a sequence of nonlinear saddle-point problems. See the example code for a more detailed description of the method. The example has a serial ( ex40.cpp ) and a parallel ( ex40p.cpp ) version. We recommend viewing Example 5 and Example 36 before viewing this example.", "title": "Example 40: Eikonal Equation"}, {"location": "examples/#nurbs-example-1-laplace-problem", "text": "This example code demonstrates the use of MFEM to define a simple isogeometric NURBS discretization of the Laplace problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. The problem solved in this example is the same as Example 1 . The example has a serial ( nurbs_ex1.cpp ) and a parallel ( nurbs_ex1p.cpp ) version. There is also a version that demonstrates efficient patchwise quadrature ( nurbs_ex1 patch.cpp ).", "title": "NURBS Example 1: Laplace Problem"}, {"location": "examples/#nurbs-example-3-definite-maxwell-problem", "text": "This example code solves a simple electromagnetic diffusion problem corresponding to the second order definite Maxwell equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with NURBS-based $H(curl)$ elements in 2D or 3D. The problem solved in this example is the same as Example 3 . The example has only a serial ( nurbs_ex3.cpp ) version.", "title": "NURBS Example 3: Definite Maxwell Problem"}, {"location": "examples/#nurbs-example-5-darcy-problem", "text": "This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system $$ \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} $$ with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize the velocity ($\\bf u$) with NURBS-based $H(div)$ elements and the pressure ($p$) with compatible NURBS-based $H^1$ elements. The problem solved in this example is the same as Example 5 . The example has only a serial ( nurbs_ex5.cpp ) version.", "title": "NURBS Example 5: Darcy Problem"}, {"location": "examples/#nurbs-example-11-laplace-eigenproblem", "text": "This example code demonstrates the use of MFEM to solve the eigenvalue problem $$-\\Delta u = \\lambda u$$ with homogeneous Dirichlet boundary conditions. 
We compute a number of the lowest eigenmodes by discretizing the Laplacian and Mass operators using a finite element space of the specified order, or an isoparametric/isogeometric space if order < 1 (quadratic for quadratic curvilinear mesh, NURBS for NURBS mesh, etc.) The example highlights the use of the LOBPCG eigenvalue solver together with the BoomerAMG preconditioner in HYPRE, as well as optionally the SuperLU or STRUMPACK parallel direct solvers. Reusing a single GLVis visualization window for multiple eigenfunctions is also illustrated. The problem solved in this example is the same as Example 11 . The example has only a parallel ( nurbs_ex11p.cpp ) version.", "title": "NURBS Example 11: Laplace Eigenproblem"}, {"location": "examples/#nurbs-example-24-mixed-finite-element-spaces", "text": "The problem solved in this example is the same as Example 24 , but NURBS-based elements are also supported. This example code illustrates usage of mixed finite element spaces, with three variants: $H^1 \\times H(curl)$ $H(curl) \\times H(div)$ $H(div) \\times L_2$ Using different approaches for demonstration purposes, we project or interpolate a gradient, curl, or divergence in the appropriate spaces, comparing the errors in each case. The example has a serial ( nurbs_ex24.cpp ).", "title": "NURBS Example 24: Mixed finite element spaces"}, {"location": "examples/#volta-miniapp-electrostatics", "text": "This miniapp demonstrates the use of MFEM to solve realistic problems in the field of linear electrostatics. Its features include: dielectric materials charge densities surface charge densities prescribed voltages applied polarizations high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( volta.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Volta Miniapp: Electrostatics"}, {"location": "examples/#tesla-miniapp-magnetostatics", "text": "This miniapp showcases many of MFEM's features while solving a variety of realistic magnetostatics problems. Its features include: diamagnetic and/or paramagnetic materials ferromagnetic materials volumetric current densities surface current densities external fields high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( tesla.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Tesla Miniapp: Magnetostatics"}, {"location": "examples/#maxwell-miniapp-transient-full-wave-electromagnetics", "text": "This miniapp solves the equations of transient full-wave electromagnetics. Its features include: mixed formulation of the coupled first-order Maxwell equations $H(\\mathrm{curl})$ discretization of the electric field $H(\\mathrm{div})$ discretization of the magnetic flux energy conserving, variable order, implicit time integration dielectric materials diamagnetic and/or paramagnetic materials conductive materials volumetric current densities Sommerfeld absorbing boundary conditions high order meshes high order basis functions advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( maxwell.cpp ) version. 
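The "advanced visualization" feature these electromagnetics miniapps list is, at its core, the same GLVis socket mechanism used throughout the examples. A minimal, hedged sketch of pushing a mesh and grid function to a locally running GLVis server (default port 19916 assumed):

```cpp
// Sketch: push a grid function to GLVis over a socket (server assumed running).
#include "mfem.hpp"
#include <iostream>
using namespace mfem;

void SendToGLVis(Mesh &mesh, GridFunction &x)
{
   socketstream sol_sock("localhost", 19916);
   if (!sol_sock.good()) { return; }   // GLVis not running; skip visualization
   sol_sock.precision(8);
   sol_sock << "solution\n" << mesh << x
            << "window_title 'sketch'\n" << std::flush;
}
```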
We recommend that new users start with the example codes before moving to the miniapps.", "title": "Maxwell Miniapp: Transient Full-Wave Electromagnetics"}, {"location": "examples/#joule-miniapp-transient-magnetics-and-joule-heating", "text": "This miniapp solves the equations of transient low-frequency (a.k.a. eddy current) electromagnetics, and simultaneously computes transient heat transfer with the heat source given by the electromagnetic Joule heating. Its features include: $H^1$ discretization of the electrostatic potential $H(\\mathrm{curl})$ discretization of the electric field $H(\\mathrm{div})$ discretization of the magnetic field $H(\\mathrm{div})$ discretization of the heat flux $L^2$ discretization of the temperature implicit transient time integration high order meshes high order basis functions adaptive mesh refinement advanced visualization For more details, please see the documentation in the miniapps/electromagnetics directory. The miniapp has only a parallel ( joule.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Joule Miniapp: Transient Magnetics and Joule Heating"}, {"location": "examples/#mobius-strip-miniapp", "text": "This miniapp generates various Mobius strip-like surface meshes. It is a good way to generate complex surface meshes. Manipulating the mesh topology and performing mesh transformation are demonstrated. The mobius-strip mesh in the data directory was generated with this miniapp. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( mobius-strip.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Mobius Strip Miniapp"}, {"location": "examples/#klein-bottle-miniapp", "text": "This miniapp generates three types of Klein bottle surfaces. It is similar to the mobius-strip miniapp. Manipulating the mesh topology and performing mesh transformation are demonstrated. The klein-bottle and klein-donut meshes in the data directory were generated with this miniapp. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( klein-bottle.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Klein Bottle Miniapp"}, {"location": "examples/#toroid-miniapp", "text": "This miniapp generates two types of toroidal volume meshes; one with triangular cross sections and one with square cross sections. It works by defining a stack of individual elements and bending them so that the bottom and top of the stack can be joined to form a torus. It supports various options including: The element type: 0 - Wedge, 1 - Hexahedron The geometric order of the elements The major and minor radii The number of elements in the azimuthal direction The number of nodes to offset by before rejoining the stack The initial angle of the cross sectional shape The number of uniform refinement steps to apply Along with producing some visually interesting meshes, this miniapp demonstrates how simple 3D meshes can be constructed and transformed in MFEM. It also produces a family of meshes with simple but non-trivial topology for testing various features in MFEM. This miniapp has only a serial ( toroid.cpp ) version. 
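In the same spirit as the toroid generator just described, the basic recipe behind these meshing miniapps is: build a simple Cartesian mesh, optionally give it high-order geometry, and push its nodes through a coordinate transformation. A hedged sketch (the map below is an arbitrary illustrative bend, not the toroid mapping):

```cpp
// Sketch: construct a Cartesian mesh and apply a smooth coordinate transformation.
#include "mfem.hpp"
#include <cmath>
using namespace mfem;

// Illustrative transformation: shear the brick in x proportionally to sin(pi*z).
void BendMap(const Vector &in, Vector &out)
{
   out = in;
   out(0) += 0.2 * std::sin(M_PI * in(2));
}

int main()
{
   Mesh mesh = Mesh::MakeCartesian3D(8, 2, 16, Element::HEXAHEDRON,
                                     1.0, 0.25, 2.0);
   mesh.SetCurvature(2);     // high-order (quadratic) geometry, as the miniapps allow
   mesh.Transform(BendMap);  // move the mesh nodes through the map above
   mesh.Save("bent.mesh");
   return 0;
}
```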
We recommend that new users start with the example codes before moving to the miniapps.", "title": "Toroid Miniapp"}, {"location": "examples/#twist-miniapp", "text": "This miniapp generates simple periodic meshes to demonstrate MFEM's handling of periodic domains. MFEM's strategy is to use a discontinuous vector field to define the mesh coordinates on a topologically periodic mesh. It works by defining a stack of individual elements and stitching together the top and bottom of the mesh. The stack can also be twisted so that the vertices of the bottom and top can be joined with any integer offset (for tetrahedral and wedge meshes only even offsets are supported). The Twist miniapp supports various options including: The element type: 4 - Tetrahedron, 6 - Wedge, 8 - Hexahedron The geometric order of the elements The dimensions of the initial brick-shaped stack of elements The number of elements in the z direction The number of nodes to offset by before rejoining the stack The number of uniform refinement steps to apply Along with producing some visually interesting meshes, this miniapp demonstrates how simple 3D meshes can be constructed and transformed in MFEM. It also produces a family of meshes with simple but non-trivial topology for testing various features in MFEM. This miniapp has only a serial ( twist.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Twist Miniapp"}, {"location": "examples/#extruder-miniapp", "text": "This miniapp creates higher dimensional meshes from lower dimensional meshes by extrusion. Simple coordinate transformations can also be applied if desired. The initial mesh can be 1D or 2D 1D meshes can be extruded in both the y and z directions 2D meshes can be triangular, quadrilateral, or contain both element types Meshes with high order geometry are supported User can specify the number of elements and the distance to extrude Geometric order of the transformed mesh can be user selected or automatic This miniapp provides another demonstration of how simple meshes can be constructed and transformed in MFEM. This miniapp has only a serial ( extruder.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Extruder Miniapp"}, {"location": "examples/#trimmer-miniapp", "text": "This miniapp creates a new mesh file from an existing mesh by trimming away elements with selected attributes. Newly exposed boundary elements will be assigned new or user specified boundary attributes. The initial mesh can be 2D or 3D Meshes with high order geometry are supported Periodic meshes are supported NURBS meshes are not supported This miniapp provides another demonstration of how simple meshes can be constructed in MFEM. This miniapp has only a serial ( trimmer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Trimmer Miniapp"}, {"location": "examples/#polar-nc-miniapp", "text": "This miniapp generates a circular sector mesh that consists of quadrilaterals and triangles of similar sizes. The 3D version of the mesh is made of prisms and tetrahedra. The mesh is non-conforming by design, and can optionally be made curvilinear. The elements are ordered along a space-filling curve by default, which makes the mesh ready for parallel non-conforming AMR in MFEM. The implementation also demonstrates how to initialize a non-conforming mesh on the fly by marking hanging nodes with Mesh::AddVertexParents . 
For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( polar-nc.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Polar-NC Miniapp"}, {"location": "examples/#shaper-miniapp", "text": "This miniapp performs multiple levels of adaptive mesh refinement to resolve the interfaces between different \"materials\" in the mesh, as specified by a given material function. It can be used as a simple initial mesh generator, for example in the case when the interface is too complex to describe without local refinement. Both conforming and non-conforming refinements are supported. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( shaper.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Shaper Miniapp"}, {"location": "examples/#mesh-explorer-miniapp", "text": "This miniapp is a handy tool to examine, visualize and manipulate a given mesh. Some of its features are: visualization of mesh materials and individual mesh elements mesh scaling, randomization, and general transformation manipulation of the mesh curvature the ability to simulate parallel partitioning quantitative and visual reports of mesh quality For more details, please see the documentation in the miniapps/meshing directory. The miniapp has only a serial ( mesh-explorer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Mesh Explorer Miniapp"}, {"location": "examples/#mesh-optimizer-miniapp", "text": "This miniapp performs mesh optimization using the Target-Matrix Optimization Paradigm (TMOP) by P. Knupp , and a global variational minimization approach ( Dobrev et al. ). It minimizes the quantity $$\\sum_T \\int_T \\mu(J(x)),$$ where $T$ are the target (ideal) elements, $J$ is the Jacobian of the transformation from the target to the physical element, and $\\mu$ is the mesh quality metric. This metric can measure shape, size or alignment of the region around each quadrature point. The combination of targets and quality metrics is used to optimize the physical node positions, i.e., they must be as close as possible to the shape / size / alignment of their targets. This code also demonstrates a possible use of nonlinear operators, as well as their coupling to Newton methods for solving minimization problems. Note that the utilized Newton methods are oriented towards avoiding invalid meshes with negative Jacobian determinants. Each Newton step requires the inversion of a Jacobian matrix, which is done through an inner linear solver. The miniapp has a serial ( mesh-optimizer.cpp ) and a parallel ( pmesh-optimizer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Mesh Optimizer Miniapp"}, {"location": "examples/#mesh-fitting-miniapp", "text": "This miniapp builds upon the mesh optimizer miniapp to enable mesh alignment with the zero isosurface of a discrete level-set. The approach is based on Dobrev et al. and Mittal et al. , where we minimize the quantity $$\\sum_T \\int_T \\mu(J(x)) + \\sum_{s \\in S} w \\,\\, \\sigma^2(x_s).$$ Here, the first term controls mesh quality and the second term enforces weak alignment of a selected subset of mesh-nodes ($s \\in S$) with the zero isosurface of the discrete level-set function ($\\sigma$). 
Click on the image on the right to see a demonstration of this method for generating body-fitted meshes for topology optimization in LiDO to maximize beam stiffness under a downward force on the right wall. The miniapp has a parallel ( pmesh-fitting.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Mesh Fitting Miniapp"}, {"location": "examples/#minimal-surface-miniapp", "text": "This miniapp solves Plateau's problem: the Dirichlet problem for the minimal surface equation. Options to solve the minimal surface equations of both parametric surfaces as well as surfaces restricted to be graphs of the form $z=f(x,y)$ are supported, including a number of examples such as the Catenoid, Helicoid, Costa and Scherk surfaces. For more details, please see the documentation in the miniapps/meshing directory. The miniapp has a serial ( minimal-surface.cpp ) and a parallel ( pminimal-surface.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Minimal Surface Miniapp"}, {"location": "examples/#low-order-refined-transfer-miniapp", "text": "The lor-transfer miniapp, found under miniapps/tools , demonstrates the capability to generate a low-order refined mesh from a high-order mesh, and to transfer solutions between these meshes. Grid functions can be transferred between the coarse, high-order mesh and the low-order refined mesh using either $L^2$ projection or pointwise evaluation. These transfer operators can be designed to discretely conserve mass and to recover the original high-order solution when transferring a low-order grid function that was obtained by restricting a high-order grid function to the low-order refined space. The miniapp has only a serial ( lor-transfer.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Low-Order Refined Transfer Miniapp"}, {"location": "examples/#interpolation-miniapps", "text": "The interpolation miniapps, found under miniapps/gslib , demonstrate the capability to interpolate high-order finite element functions at a given set of points in physical space. These miniapps utilize the gslib library's high-order interpolation utility for quad and hex meshes: Find Points miniapp has a serial ( findpts.cpp ) and a parallel ( pfindpts.cpp ) version that demonstrate the basic procedures for point search and evaluation of grid functions. Field Interp miniapp ( field-interp.cpp ) demonstrates how grid functions can be transferred between meshes. Field Diff miniapp ( field-diff.cpp ) demonstrates how grid functions on two different meshes can be compared with each other. These miniapps require installation of the gslib library. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Interpolation Miniapps"}, {"location": "examples/#extrapolation-miniapp", "text": "The extrapolate miniapp, found in the miniapps/shifted directory, extrapolates a finite element function from a set of elements (known values) to the rest of the domain. The set of elements that contains the known values is specified by the positive values of a level set Coefficient. The known values are not modified. The miniapp supports two PDE-based approaches ( Aslam , Bochkov & Gibou ), both of which rely on solving a sequence of advection problems in the direction of the unknown parts of the domain. 
The extrapolation can be constant (1st order), linear (2nd order), or quadratic (3rd order). These formal orders hold for a limited band around the zero level set, see the above references for further information. The miniapp has only a parallel ( extrapolate.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Extrapolation Miniapp"}, {"location": "examples/#distance-solver-miniapp", "text": "The distance miniapp, found in the miniapps/shifted directory demonstrates the capability to compute the \"distance\" to a given point source or to the zero level set of a given function. Here \"distance\" refers to the length of the shortest path through the mesh. The input can be a DeltaCoefficient (representing a point source), or any Coefficient (for the case of a level set). The output is a ParGridFunction that can be scalar (representing the scalar distance), or a vector (its magnitude is the distance, and its direction is the starting direction of the shortest path). The miniapp has only a parallel ( distance.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Distance Solver Miniapp"}, {"location": "examples/#shifted-diffusion-miniapp", "text": "The diffusion miniapp, found in the miniapps/shifted directory, demonstrates the capability to formulate a boundary value problem using a surrogate computational domain. The method uses a distance function to the true boundary to enforce Dirichlet boundary conditions on the (non-aligned) mesh faces, therefore \"shifting\" the location where boundary conditions are imposed. The implementation in the miniapp is a high-order extension of the second-generation shifted boundary method . The miniapp has only a parallel ( diffusion.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Shifted Diffusion Miniapp"}, {"location": "examples/#laghos-miniapp", "text": "Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. The computational motives captured in Laghos include: Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements (triangular and tetrahedral elements can also be used, but with the less efficient full assembly option). Serial and parallel mesh refinement options can be set via a command-line flag. Explicit time-stepping loop with a variety of time integrator options. Laghos supports Runge-Kutta ODE solvers of orders 1, 2, 3, 4 and 6. Continuous and discontinuous high-order finite element discretization spaces of runtime-specified order. Moving (high-order) meshes. Separation between the assembly and the quadrature point-based computations. Point-wise definition of mesh size, time-step estimate and artificial viscosity coefficient. Constant-in-time velocity mass operator that is inverted iteratively on each time step. This is an example of an operator that is prepared once (fully or partially assembled), but is applied many times. The application cost is dominant for this operator. Time-dependent force matrix that is prepared every time step (fully or partially assembled) and is applied just twice per \"assembly\". Both the preparation and the application costs are important for this operator. Domain-decomposed MPI parallelism. 
Optional in-situ visualization with GLVis and data output for visualization / data analysis with VisIt . The Laghos miniapp is part of the CEED software suite , a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods. See https://github.com/ceed for more information and source code availability. This is an external miniapp, available at https://github.com/CEED/Laghos .", "title": "Laghos Miniapp"}, {"location": "examples/#remhos-miniapp", "text": "Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. The computational motives captured in Remhos include: Support for unstructured meshes, in 2D and 3D, with quadrilateral and hexahedral elements. Serial and parallel mesh refinement options can be set via a command-line flag. Explicit time-stepping loop with a variety of time integrator options. Remhos supports Runge-Kutta ODE solvers of orders 1, 2, 3, 4 and 6. Discontinuous high-order finite element discretization spaces of runtime-specified order. Moving (high-order) meshes. Mass operator that is local per each zone. It is inverted by iterative or exact methods at each time step. This operator is constant in time (transport mode) or changing in time (remap mode). Options for full or partial assembly. Advection operator that couples neighboring zones. It is applied once at each time step. This operator is constant in time (transport mode) or changing in time (remap mode). Options for full or partial assembly. Domain-decomposed MPI parallelism. Optional in-situ visualization with GLVis and data output for visualization and data analysis with VisIt . The Remhos miniapp is part of the CEED software suite , a collection of software benchmarks, miniapps, libraries and APIs for efficient exascale discretizations based on high-order finite element and spectral element methods. See https://github.com/ceed for more information and source code availability. This is an external miniapp, available at https://github.com/CEED/Remhos .", "title": "Remhos Miniapp"}, {"location": "examples/#navier-miniapp", "text": "Navier is a miniapp that solves the time-dependent Navier-Stokes equations of incompressible fluid dynamics \\begin{align} \\frac{\\partial u}{\\partial t} + (u \\cdot \\nabla) u - \\frac{1}{Re} \\nabla^2 u + \\nabla p &= f \\\\ \\nabla \\cdot u &= 0 \\end{align} using a spatially high-order finite element discretization. The time-dependent problem is solved using an (up to) third-order implicit-explicit method which leverages an extrapolation scheme for the convective parts and a backward-difference formulation for the viscous parts of the equation. The miniapp supports: Arbitrary order H1 elements High order mesh elements IMEX (EXTk-BDFk) time-stepping up to third order Convenient interface for new users A variety of test cases and benchmarks This miniapp has only a parallel ( navier_solver.cpp ) version. 
We recommend that new users start with the example codes before moving to the miniapps.", "title": "Navier Miniapp"}, {"location": "examples/#block-solvers-miniapp", "text": "The Block Solvers miniapp, found under miniapps/solvers , compares various linear solvers for the saddle point system obtained from mixed finite element discretization of the Darcy's flow problem \\begin{array}{rcl} k{\\bf u} & + \\nabla p & = f \\\\ -\\nabla \\cdot {\\bf u} & & = g \\end{array} The solvers being compared include: The divergence-free solver (couple and decoupled modes), which is based on a multilevel decomposition of the Raviart-Thomas finite element space and its divergence-free subspace. MINRES preconditioned by the block diagonal preconditioner in ex5p.cpp . For more details, please see the documentation in the miniapps/solvers directory. The miniapp supports: Arbitrary order mixed finite element pair (Raviart-Thomas elements + piecewise discontinuous polynomials) Various combination of essential and natural boundary conditions Homogeneous or heterogeneous scalar coefficient k This miniapp has only a parallel ( block-solvers.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Block Solvers Miniapp"}, {"location": "examples/#overlapping-grids-miniapps", "text": "Overlapping grids-based frameworks can often make problems tractable that are otherwise inaccessible with a single conforming grid. The following gslib -based miniapps in MFEM demonstrate how to set up and use overlapping grids: The Schwarz Example 1 miniapp in miniapps/gslib has a serial ( schwarz_ex1.cpp ) a parallel ( schwarz_ex1p.cpp ) version that solves the Poisson problem on overlapping grids. The serial version is restricted to use two overlapping grids, while the parallel version supports arbitrary number of overlapping grids. The Navier Conjugate Heat Transfer miniapp in miniapps/navier ( navier_cht.cpp ) demonstrates how a conjugate heat transfer problem can be solved with the fluid dynamics (incompressible Navier-Stokes equations) and heat transfer (advection-diffusion equation) PDEs modeled on different meshes. These miniapps require installation of the gslib library. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Overlapping Grids Miniapps"}, {"location": "examples/#parelag-amge-for-hcurl-and-hdiv-miniapp", "text": "This is a miniapp that exhibits the ParELAG library and part of its capabilities. The miniapp employs MFEM and ParELAG to solve $H(\\mathrm{curl})$- and $H(\\mathrm{div})$-elliptic forms by an element based algebraic multigrid (AMGe). ParELAG is a library mostly developed at the Center for Applied Scientific Computing of Lawrence Livermore National Laboratory, California, USA. The miniapp uses: A multilevel hierarchy of de Rham complexes of finite element spaces, built by ParELAG; Hiptmair-type (hybrid) smoothers, implemented in ParELAG; AMS (Auxiliary-space Maxwell Solver) or ADS (Auxiliary-space Divergence Solver), from HYPRE, for preconditioning or solving on the coarsest levels. Alternatively, it is possible to precondition or solve the $H(\\mathrm{div})$ form on the coarsest level via a hybridization approach. However, this is not yet implemented in ParELAG for the coarse levels. Only the hybridization solver that is directly applicable to an $H(\\mathrm{div})$-$L^2$ mixed (saddle-point) system is currently available in ParELAG. 
We recommend viewing ex3p.cpp and ex4p.cpp before viewing this miniapp. For more details, please see the documentation in the miniapps/parelag directory. This miniapp has only a parallel ( MultilevelHcurlHdivSolver.cpp ) version. We recommend that new users start with the example codes before moving to the miniapps.", "title": "ParELAG AMGe for H(curl) and H(div) Miniapp"}, {"location": "examples/#generating-gaussian-random-fields-via-the-spde-method", "text": "This miniapp generates Gaussian random fields on meshed domains $\\Omega \\subset \\mathbb{R}^n$ via the SPDE method. The method exploits a stochastic, fractional PDE whose full-space solutions yield Gaussian random fields with a Mat\u00e9rn covariance. The method was introduced and popularized by Lindgren et. al in 2010. In this miniapp, we use a slightly modified representation following Khristenko et. al . More specifically, we solve the equation \\begin{equation} \\left( -\\frac{1}{2\\nu} \\nabla \\cdot \\left( \\Theta \\nabla \\right) + \\mathbf{1} \\right)^{\\frac{2\\nu+n}{4}} u(x,w) = \\eta W(x,w) \\ \\ \\ \\text{in} \\ \\ \\Omega, \\end{equation} with various boundary conditions. Solving this equation on $\\Omega = \\mathbb{R}^n$ delivers a homogeneous Gaussian random field with zero mean and Mat\u00e9rn covariance, \\begin{align}\\label{eq:MaternCovariance} C(x,y) &= \\sigma^2M_\\nu \\left(\\sqrt{2\\nu}\\, \\| x-y \\|_{\\Theta} \\right) , \\end{align} where $M_\\nu(z) = \\frac{2^{1-\\nu}}{\\Gamma(\\nu)} z ^{\\nu} K_\\nu \\left( z \\right)$ and $\\| x-y \\|_{\\Theta}^2 = (x-y)^\\top\\Theta (x-y)$. The Mat\u00e9rn model provides the regularity parameter $\\nu > 0$ and the anisotropic diffusion tensor $\\Theta \\in \\mathbb{R}^{n\\times n}$, which determines the spatial structure (correlation lengths). However, applying boundary conditions to the SPDE above provides the ability to model a significantly larger class of inhomogeneous random fields on complex domains. For further details, see the miniapp README . We recommend viewing ex33p.cpp before viewing this miniapp. This miniapp ( generate_random_field.cpp ) has only a parallel implementation. It further requires MFEM to be built with LAPACK, otherwise you may only use predefined values for $\\nu$. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Generating Gaussian Random Fields via the SPDE Method"}, {"location": "examples/#multidomain-and-submesh-demonstration-miniapp", "text": "This miniapp aims to demonstrate how to solve two PDEs, that represent different physics, on the same domain. MFEM's SubMesh interface is used to compute on and transfer between the spaces of predefined parts of the domain. For the sake of simplicity, the spaces on each domain are using the same order H1 finite elements. This does not mean that the approach is limited to this configuration. A 3D domain comprised of an outer box with a cylinder shaped inside is used. A heat equation is described on the outer box domain \\begin{align} \\frac{\\partial T}{\\partial t} &= \\kappa \\Delta T &&\\mbox{in outer box}\\\\ T &= T_{wall} &&\\mbox{on outside wall}\\\\ \\nabla T \\cdot \\hat{n} &= 0 &&\\mbox{on inside (cylinder) wall} \\end{align} with temperature $T$ and coefficient $\\kappa$ (non-physical in this example). 
A convection-diffusion equation is described inside the cylinder domain \\begin{align} \\frac{\\partial T}{\\partial t} &= \\kappa \\Delta T - \\alpha \\nabla \\cdot (\\vec{b} T) & &\\mbox{in inner cylinder}\\\\ T &= T_{wall} & &\\mbox{on cylinder wall}\\\\ \\nabla T \\cdot \\hat{n} &= 0 & &\\mbox{else} \\end{align} with temperature $T$, coefficients $\\kappa$, $\\alpha$ and prescribed velocity profile $\\vec{b}$, and $T_{wall}$ obtained from the heat equation. To couple the solutions of both equations, a segregated solve with one way coupling approach is used. The heat equation of the outer box is solved from the timestep $T_{box}(t)$ to $T_{box}(t+dt)$. Then for the convection-diffusion equation $T_{wall}$ is set to $T_{box}(t+dt)$ and the equation is solved for $T(t+dt)$ which results in a first-order one way coupling. This miniapp has only a parallel ( multidomain.cpp ) implementation. We recommend that new users start with the example codes before moving to the miniapps.", "title": "Multidomain and SubMesh demonstration Miniapp"}, {"location": "examples/#dpg-miniapp", "text": "This miniapp demonstrates how to discretize and solve various PDEs using the Discontinuous Petrov-Galerkin (DPG) method. It utilizes a new user-friendly interface to assemble the block DPG systems arising from the discretization of any DPG formulation (such as Ultraweak or Primal). In addition, the miniapp supports complex-valued systems, static condensation for block systems, and AMR using the built-in DPG residual-based error indicator. This capability is showcased in the following DPG examples in miniapps/dpg . Ultraweak DPG formulation for diffusion . This example solves the simple Poisson equation $$-\u0394 u = f$$ and computes rates of convergence under successive uniform h-refinements for a smooth manufactured solution. The parallel version also includes an AMR implementation for the L-shape benchmark problem. This example has a serial ( diffusion.cpp ) and a parallel ( pdiffusion.cpp ) version. Ultraweak DPG formulation for convection-diffusion . This example solves the convection-diffusion problem: \\begin{align} -\\epsilon \\Delta u + \\nabla \\cdot (\\beta u) &= f\\\\ \\end{align} using AMR. The example demonstrates the use of mesh-dependent test norms which are suitable for problems with solutions that exhibit large gradients present in internal or boundary layers . The example has a serial ( convection-diffusion.cpp ) and a parallel ( pconvection-diffusion.cpp ) version. Ultraweak DPG formulation for time-harmonic linear acoustics . This example solves the indefinite Helmholtz equation \\begin{align} -\\Delta u - \\omega^2 u &= f\\\\ \\end{align} The example includes formulations with manufactured plane-wave solutions as well as high-frequency scattering problems and the use of Perfectly Match Layers (PML). It also demonstrates how to set up complex-valued systems and preconditioners for their solutions. The example has a serial ( acoustics.cpp ) and parallel ( pacoustics.cpp ) version. Ultraweak DPG formulation for time-harmonic Maxwell . This example solves the indefinite Maxwell problem \\begin{align} \\nabla \u00d7 (\\mu^{-1} \\nabla \\times E) - \\omega^2 \\epsilon E&= J\\\\ \\end{align} The example includes formulations with smooth manufactured solutions, AMR formulations for high-frequency scattering problems as well as a problem with a singular solution. 
The example has a serial ( maxwell.cpp ) and a parallel ( pmaxwell.cpp ) version.", "title": "DPG miniapp"}, {"location": "examples/#tribol-miniapp", "text": "This miniapp demonstrates how to use Tribol's mortar method to solve a contact patch test. A contact patch test places two aligned, linear elastic cubes in contact, then verifies that the exact elasticity solution for this problem is recovered. The exact solution requires transmission of a uniform pressure field across a (not necessarily conforming) interface (i.e. the contact surface). Mortar methods (including the one implemented in Tribol) are generally able to pass the contact patch test. The test assumes small deformations and no accelerations, so the relationship between forces/contact pressures and deformations/contact gaps is linear and, therefore, the problem can be solved exactly with a single linear solve. The mortar implementation is based on Puso and Laursen (2004) . A description of the Tribol implementation is available in Serac documentation . Lagrange multipliers are used to solve for the pressure required to prevent violation of the contact constraints. This miniapp has only a parallel ( contact-patch-test.cpp ) implementation. For more details, please see the documentation in miniapps/tribol/README.md . We recommend that new users start with the example codes before moving to the miniapps. No examples or miniapps match your criteria. ", "title": "Tribol miniapp"}, {"location": "fem/", "text": "Finite Element Method The finite element method is a general discretization technique that can utilize unstructured grids to approximate the solutions of many partial differential equations (PDEs). There is a large body of literature on finite elements, including the following excellent books: Numerical Solution of Partial Differential Equations by the Finite Element Method by Claes Johnson Theory and Practice of Finite Elements by Alexandre Ern and Jean-Luc Guermond Higher-Order Finite Element Methods by Pavel \u0160ol\u00edn , Karel Segeth and Ivo Dole\u017eel High-Order Methods for Incompressible Fluid Flow by Michel Deville , Paul Fischer and Ernest Mund Finite Elements: Theory, Fast Solvers, and Applications in Elasticity Theory by Dietrich Braess The Finite Element Method for Elliptic Problems by Philippe Ciarlet The Mathematical Theory of Finite Element Methods by Susanne Brenner and Ridgway Scott An Analysis of the Finite Element Method by Gilbert Strang and George Fix The Finite Element Method: Its Basis and Fundamentals by Olek Zienkiewicz , Robert Taylor and J.Z. Zhu The MFEM library is designed to be lightweight, general and highly scalable finite element toolkit that provides the building blocks for developing finite element algorithms in a manner similar to that of MATLAB for linear algebra methods. Some of the C++ classes for the finite element realizations of these PDE-level concepts in MFEM are described below. Primal and Dual Vectors The finite element method uses vectors of data in a variety of ways and the differences can be subtle. MFEM defines GridFunction , LinearForm , and Vector classes which help to distinguish the different roles that vectors of data can play. Bilinear Form Integrators Bilinear form integrators are at the heart of any finite element method, they are used to compute the integrals of products of basis functions over individual mesh elements (or sometimes over edges or faces). 
The BilinearForm class adds several BilinearFormIntegrator s together to build the global sparse finite element matrix. Linear Form Integrators Linear form integrators are used to compute the integrals of products of a basis function with a given source function over individual mesh elements (or sometimes over edges or faces). The LinearForm class adds several LinearFormIntegrator s together to build the global right-hand side for the finite element linear system. Integration This page offers guidance on writing custom Bilinear Form or Linear Form Integrators. Coefficients The Coefficient objects in MFEM are general functions on continuous level that are used to represent the PDE coefficients of linear and bilinear forms, as well as to specify initial conditions, boundary conditions, exact solutions, etc. Nonlinear Form Integrators Nonlinear form integrators are used to express the local action of a general nonlinear finite element operator. In addition, they may provide the capability to assemble the local gradient operator and to compute the local energy. Linear Interpolators Unlike Bilinear and Linear forms, Linear Interpolators do not perform integrations, but project one basis function (or a linear function of a basis function) onto another basis function. The DiscreteLinearOperator class adds one or more LinearInterpolators together to build a global sparse matrix representation of the linear operator. Weak Formulations Weak formulations are at the heart of the finite element method. Finite element approximations are almost always less smooth than the solutions we hope to approximate. Weak formulations provide a means of approximating derivatives of non-differentiable functions. Boundary Conditions The types of available boundary conditions and how to apply them depend on the discretizations being used. This page describes how to enforce various boundary conditions for certain classes of problems.", "title": "Finite Elements"}, {"location": "fem/#finite-element-method", "text": "The finite element method is a general discretization technique that can utilize unstructured grids to approximate the solutions of many partial differential equations (PDEs). There is a large body of literature on finite elements, including the following excellent books: Numerical Solution of Partial Differential Equations by the Finite Element Method by Claes Johnson Theory and Practice of Finite Elements by Alexandre Ern and Jean-Luc Guermond Higher-Order Finite Element Methods by Pavel \u0160ol\u00edn , Karel Segeth and Ivo Dole\u017eel High-Order Methods for Incompressible Fluid Flow by Michel Deville , Paul Fischer and Ernest Mund Finite Elements: Theory, Fast Solvers, and Applications in Elasticity Theory by Dietrich Braess The Finite Element Method for Elliptic Problems by Philippe Ciarlet The Mathematical Theory of Finite Element Methods by Susanne Brenner and Ridgway Scott An Analysis of the Finite Element Method by Gilbert Strang and George Fix The Finite Element Method: Its Basis and Fundamentals by Olek Zienkiewicz , Robert Taylor and J.Z. Zhu The MFEM library is designed to be lightweight, general and highly scalable finite element toolkit that provides the building blocks for developing finite element algorithms in a manner similar to that of MATLAB for linear algebra methods. 
Some of the C++ classes for the finite element realizations of these PDE-level concepts in MFEM are described below.", "title": "Finite Element Method"}, {"location": "fem/#primal-and-dual-vectors", "text": "The finite element method uses vectors of data in a variety of ways and the differences can be subtle. MFEM defines GridFunction , LinearForm , and Vector classes which help to distinguish the different roles that vectors of data can play.", "title": "Primal and Dual Vectors"}, {"location": "fem/#bilinear-form-integrators", "text": "Bilinear form integrators are at the heart of any finite element method, they are used to compute the integrals of products of basis functions over individual mesh elements (or sometimes over edges or faces). The BilinearForm class adds several BilinearFormIntegrator s together to build the global sparse finite element matrix.", "title": "Bilinear Form Integrators"}, {"location": "fem/#linear-form-integrators", "text": "Linear form integrators are used to compute the integrals of products of a basis function with a given source function over individual mesh elements (or sometimes over edges or faces). The LinearForm class adds several LinearFormIntegrator s together to build the global right-hand side for the finite element linear system.", "title": "Linear Form Integrators"}, {"location": "fem/#integration", "text": "This page offers guidance on writing custom Bilinear Form or Linear Form Integrators.", "title": "Integration"}, {"location": "fem/#coefficients", "text": "The Coefficient objects in MFEM are general functions on continuous level that are used to represent the PDE coefficients of linear and bilinear forms, as well as to specify initial conditions, boundary conditions, exact solutions, etc.", "title": "Coefficients"}, {"location": "fem/#nonlinear-form-integrators", "text": "Nonlinear form integrators are used to express the local action of a general nonlinear finite element operator. In addition, they may provide the capability to assemble the local gradient operator and to compute the local energy.", "title": "Nonlinear Form Integrators"}, {"location": "fem/#linear-interpolators", "text": "Unlike Bilinear and Linear forms, Linear Interpolators do not perform integrations, but project one basis function (or a linear function of a basis function) onto another basis function. The DiscreteLinearOperator class adds one or more LinearInterpolators together to build a global sparse matrix representation of the linear operator.", "title": "Linear Interpolators"}, {"location": "fem/#weak-formulations", "text": "Weak formulations are at the heart of the finite element method. Finite element approximations are almost always less smooth than the solutions we hope to approximate. Weak formulations provide a means of approximating derivatives of non-differentiable functions.", "title": "Weak Formulations"}, {"location": "fem/#boundary-conditions", "text": "The types of available boundary conditions and how to apply them depend on the discretizations being used. 
This page describes how to enforce various boundary conditions for certain classes of problems.", "title": "Boundary Conditions"}, {"location": "fem_bc/", "text": "Boundary Conditions $ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} \\newcommand{\\abs}[1]{|#1|} \\newcommand{\\dO}{{\\partial\\Omega}} $ MFEM supports boundary conditions of mixed type through the definition of boundary attributes on the mesh. A boundary attribute is a positive integer assigned to each boundary element of the mesh. Since each boundary element can have only one attribute number the boundary attributes split the boundary into a group of disjoint sets. MFEM allows the user to define boundary conditions on a subset of boundary attributes. Typically mixed boundary conditions are imposed on disjoint portions of the boundary defined as: Symbol Description $\\Gamma\\equiv\\dO$ Boundary of the Domain ($\\Omega$) $\\Gamma_D$ Dirichlet Boundary $\\Gamma_N$ Neumann Boundary $\\Gamma_R$ Robin Boundary $\\Gamma_0$ Natural Boundary Where we assume $\\Gamma = \\Gamma_D\\cup\\Gamma_N\\cup\\Gamma_R\\cup\\Gamma_0$. In MFEM boundaries are usually described by \"marker arrays\". A marker array is an array of integers containing zeros and ones with a length equal to the largest boundary attribute index. // Assume we start with an array containing boundary attribute numbers // stored in bdr_attr. // // Prepare a marker array from a set of attributes Array bdr_marker(pmesh.bdr_attributes.Max()); bdr_marker = 0; for (int i=0; i ess_tdof_list(0); fespace.GetEssentialTrueDofs(dbc_marker, ess_tdof_list); // Prepare the linear system with enforcement of the essential boundary // conditions OperatorPtr A; Vector B, X; a.FormLinearSystem(ess_tdof_list, u, b, A, X, B); Natural Boundary Conditions The so called \"Natural Boundary Conditions\" arise whenever weak derivatives occur in a PDE (see below for more on weak derivatives ). Weak derivatives must be handled using integration by parts which introduces a boundary integral. If this boundary integral is ignored, its value is implicitly set to zero which creates an implicit constraint on the solution called a \"natural boundary condition\". Continuous Operator Weak Operator Natural BC $-\\div(\\lambda\\grad u)$ $(\\lambda\\grad u,\\grad v)$ $\\hat{n}\\cdot(\\lambda\\grad u)=0$ on $\\Gamma_0$ $\\curl(\\lambda\\curl\\vec{u})$ $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\hat{n}\\cross(\\lambda\\curl\\vec{u})=0$ on $\\Gamma_0$ $-\\grad(\\lambda\\div\\vec{u})$ $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $\\lambda\\div\\vec{u}=0$ on $\\Gamma_0$ $\\div(\\vec{\\lambda}u)$ $(-\\vec{\\lambda}u,\\grad v)$ $\\hat{n}\\cdot\\vec{\\lambda}u = 0$ on $\\Gamma_0$ $\\curl(\\lambda\\vec{u})$ $(\\lambda\\vec{u},\\curl\\vec{v})$ $\\hat{n}\\cross(\\lambda\\vec{u})=0$ on $\\Gamma_0$ $-\\div(\\lambda\\grad u) + \\div(\\vec{\\beta}u)$ $(\\lambda\\grad u - \\vec{\\beta}u,\\grad v)$ $\\hat{n}\\cdot(\\lambda\\grad u-\\vec{\\beta}u)=0$ on $\\Gamma_0$ No additional implementation is necessary to impose natural boundary conditions. Any portion of the boundary where a Dirichlet, Neumann, or Robin boundary condition has not been applied will receive a natural boundary condition by default. Neumann Boundary Conditions Neumann boundary conditions are closely related to natural boundary conditions. 
Rather than ignoring the boundary integral we integrate a known function on the boundary which approximates the desired value of the boundary condition (often a involving a derivative of the field). The following table shows a variety of common operators and their related Neumann boundary condition. Operator Continuous Operator Neumann BC $(\\lambda\\grad u,\\grad v)$ $-\\div(\\lambda\\grad u)$ $\\hat{n}\\cdot(\\lambda\\grad u)=f$ on $\\Gamma_N$ $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ $\\hat{n}\\cross(\\lambda\\curl\\vec{u})=\\hat{n}\\cross\\vec{f}$ on $\\Gamma_N$ $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $-\\grad(\\lambda\\div\\vec{u})$ $\\lambda\\div\\vec{u}=\\hat{n}\\cdot\\vec{f}$ on $\\Gamma_N$ $(-\\vec{\\lambda}u,\\grad v)$ $\\div(\\vec{\\lambda}u)$ $\\hat{n}\\cdot\\vec{\\lambda}u = f$ on $\\Gamma_N$ $(\\lambda\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\vec{u})$ $\\hat{n}\\cross(\\lambda\\vec{u})=\\hat{n}\\cross\\vec{f}$ on $\\Gamma_N$ $(\\lambda\\grad u - \\vec{\\beta}u,\\grad v)$ $-\\div(\\lambda\\grad u) + \\div(\\vec{\\beta}u)$ $\\hat{n}\\cdot(\\lambda\\grad u-\\vec{\\beta}u)=f$ on $\\Gamma_N$ To impose these boundary conditions in MFEM simply modify the right-hand side of your linear system by adding the appropriate boundary integral of either $f$ or $\\vec{f}$. For $H^1$ or $L^2$ fields this can be accomplished by adding the BoundaryLFIntegrator with an appropriate coefficient for $f$ to a [Par]LinearForm object. Neumann boundary conditions can be added to the above example code by adding the following line before the call to b.Assemble() . // Add Neumann BCs n.(matCoef Grad u) = nbcCoef on the boundary marked in // the nbc_marker array. b.AddBoundaryIntegrator(new BoundaryLFIntegrator(nbcCoef), nbc_marker); For H(Curl) fields this can be accomplished by adding the VectorFEBoundaryTangentLFIntegrator with an appropriate vector coefficient for $\\vec{f}$ to a [Par]LinearForm object. And finally, for H(Div) fields this can be accomplished by adding the VectorFEBoundaryFluxLFIntegrator with an appropriate scalar coefficient for $f = \\hat{n}\\cdot\\vec{f}$ to a [Par]LinearForm object. Other integrators may be appropriate if it is desirable to express the functions $\\,f$ or $\\vec{f}$ in other ways. Robin Boundary Conditions Robin boundary conditions typically involve a linear function of the field and its normal derivative. As such they also arise from weak derivatives and the boundary integrals they introduce to the system of equations. Typical forms of the Robin boundary condition include the following. Operator Continuous Operator Robin BC $(\\lambda\\grad u,\\grad v)$ $-\\div(\\lambda\\grad u)$ $\\hat{n}\\cdot(\\lambda\\grad u)+\\gamma\\,u=f$ on $\\Gamma_R$ $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ $\\hat{n}\\cross(\\lambda\\curl\\vec{u}+\\gamma\\,\\hat{n}\\cross\\vec{u})=\\hat{n}\\cross\\vec{f}$ on $\\Gamma_R$ $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $-\\grad(\\lambda\\div\\vec{u})$ $\\lambda\\div\\vec{u}+\\gamma\\,\\hat{n}\\cdot\\vec{u}=\\hat{n}\\cdot\\vec{f}$ on $\\Gamma_R$ $(\\lambda\\grad u - \\vec{\\beta}u,\\grad v)$ $-\\div(\\lambda\\grad u) + \\div(\\vec{\\beta}u)$ $\\hat{n}\\cdot(\\lambda\\grad u-\\vec{\\beta}u)+\\gamma\\,u=f$ on $\\Gamma_R$ Robin boundary conditions are applied in the same manner as Neumann boundary conditions except that one must also add a boundary integral to the [Par]BilinearForm object to account for the term involving $\\gamma$. 
For example, when solving for an $H^1$ field one should add a MassIntegrator with an appropriate scalar coefficient for $\\gamma$. The implementation of a Robin boundary condition requires precisely the same change to the right-hand-side as the Neumann boundary condition as well as a new term in the bilinear form before a.Assemble() : // Add Robin BCs n.(matCoef Grad u) + rbcACoef u = rbcBCoef on the boundary // marked in the rbc_marker array. b.AddBoundaryIntegrator(new BoundaryLFIntegrator(rbcBCoef), rbc_marker); ... // Add Robin BCs n.(matCoef grad u) + rbcACoef u = rbcBCoef on the boundary // marked in the rbc_marker array. a.AddBoundaryIntegrator(new MassIntegrator(rbcACoef), rbc_marker); Discontinuous Galerkin Formulations In the Discontinuous Galerkin (DG) formulation the Natural , Neumann , and Robin can be implemented in a similar the same manner as in the continuous case (adding the appropriate LinearFormIntegrator as a boundary face integrator instead of a boundary integrator ). However, since DG basis functions have no degrees of freedom associated with the boundary, Dirichlet boundary conditions must be handled differently. // Add the desired value for the Dirichlet constraint on the boundary // marked in the dbc_marker array. b.AddBdrFaceIntegrator(new DGDirichletLFIntegrator(dbcCoef, matCoef, sigma, kappa), dbc_marker); ... // Add the n.Grad(u) boundary integral on the Dirichlet portion of the // boundary marked in the dbc_marker array. a.AddBdrFaceIntegrator(new DGDiffusionIntegrator(matCoef, sigma, kappa), dbc_marker); Where sigma and kappa are parameters controlling the symmetry and interior penalty used in the DG diffusion formulation. These two integrators work together to balance the natural boundary condition associated with the DiffusionIntegrator and to penalize solutions which differ from the desired Dirichlet value near the boundary. Similar pairs of integrators can be implemented to accommodate other PDEs. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Boundary Conditions"}, {"location": "fem_bc/#boundary-conditions", "text": "$ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} \\newcommand{\\abs}[1]{|#1|} \\newcommand{\\dO}{{\\partial\\Omega}} $ MFEM supports boundary conditions of mixed type through the definition of boundary attributes on the mesh. A boundary attribute is a positive integer assigned to each boundary element of the mesh. Since each boundary element can have only one attribute number the boundary attributes split the boundary into a group of disjoint sets. MFEM allows the user to define boundary conditions on a subset of boundary attributes. Typically mixed boundary conditions are imposed on disjoint portions of the boundary defined as: Symbol Description $\\Gamma\\equiv\\dO$ Boundary of the Domain ($\\Omega$) $\\Gamma_D$ Dirichlet Boundary $\\Gamma_N$ Neumann Boundary $\\Gamma_R$ Robin Boundary $\\Gamma_0$ Natural Boundary Where we assume $\\Gamma = \\Gamma_D\\cup\\Gamma_N\\cup\\Gamma_R\\cup\\Gamma_0$. In MFEM boundaries are usually described by \"marker arrays\". A marker array is an array of integers containing zeros and ones with a length equal to the largest boundary attribute index. // Assume we start with an array containing boundary attribute numbers // stored in bdr_attr. 
// // Prepare a marker array from a set of attributes Array bdr_marker(pmesh.bdr_attributes.Max()); bdr_marker = 0; for (int i=0; i ess_tdof_list(0); fespace.GetEssentialTrueDofs(dbc_marker, ess_tdof_list); // Prepare the linear system with enforcement of the essential boundary // conditions OperatorPtr A; Vector B, X; a.FormLinearSystem(ess_tdof_list, u, b, A, X, B);", "title": "Dirichlet (Essential) Boundary Conditions"}, {"location": "fem_bc/#natural-boundary-conditions", "text": "The so called \"Natural Boundary Conditions\" arise whenever weak derivatives occur in a PDE (see below for more on weak derivatives ). Weak derivatives must be handled using integration by parts which introduces a boundary integral. If this boundary integral is ignored, its value is implicitly set to zero which creates an implicit constraint on the solution called a \"natural boundary condition\". Continuous Operator Weak Operator Natural BC $-\\div(\\lambda\\grad u)$ $(\\lambda\\grad u,\\grad v)$ $\\hat{n}\\cdot(\\lambda\\grad u)=0$ on $\\Gamma_0$ $\\curl(\\lambda\\curl\\vec{u})$ $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\hat{n}\\cross(\\lambda\\curl\\vec{u})=0$ on $\\Gamma_0$ $-\\grad(\\lambda\\div\\vec{u})$ $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $\\lambda\\div\\vec{u}=0$ on $\\Gamma_0$ $\\div(\\vec{\\lambda}u)$ $(-\\vec{\\lambda}u,\\grad v)$ $\\hat{n}\\cdot\\vec{\\lambda}u = 0$ on $\\Gamma_0$ $\\curl(\\lambda\\vec{u})$ $(\\lambda\\vec{u},\\curl\\vec{v})$ $\\hat{n}\\cross(\\lambda\\vec{u})=0$ on $\\Gamma_0$ $-\\div(\\lambda\\grad u) + \\div(\\vec{\\beta}u)$ $(\\lambda\\grad u - \\vec{\\beta}u,\\grad v)$ $\\hat{n}\\cdot(\\lambda\\grad u-\\vec{\\beta}u)=0$ on $\\Gamma_0$ No additional implementation is necessary to impose natural boundary conditions. Any portion of the boundary where a Dirichlet, Neumann, or Robin boundary condition has not been applied will receive a natural boundary condition by default.", "title": "Natural Boundary Conditions"}, {"location": "fem_bc/#neumann-boundary-conditions", "text": "Neumann boundary conditions are closely related to natural boundary conditions. Rather than ignoring the boundary integral we integrate a known function on the boundary which approximates the desired value of the boundary condition (often a involving a derivative of the field). The following table shows a variety of common operators and their related Neumann boundary condition. Operator Continuous Operator Neumann BC $(\\lambda\\grad u,\\grad v)$ $-\\div(\\lambda\\grad u)$ $\\hat{n}\\cdot(\\lambda\\grad u)=f$ on $\\Gamma_N$ $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ $\\hat{n}\\cross(\\lambda\\curl\\vec{u})=\\hat{n}\\cross\\vec{f}$ on $\\Gamma_N$ $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $-\\grad(\\lambda\\div\\vec{u})$ $\\lambda\\div\\vec{u}=\\hat{n}\\cdot\\vec{f}$ on $\\Gamma_N$ $(-\\vec{\\lambda}u,\\grad v)$ $\\div(\\vec{\\lambda}u)$ $\\hat{n}\\cdot\\vec{\\lambda}u = f$ on $\\Gamma_N$ $(\\lambda\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\vec{u})$ $\\hat{n}\\cross(\\lambda\\vec{u})=\\hat{n}\\cross\\vec{f}$ on $\\Gamma_N$ $(\\lambda\\grad u - \\vec{\\beta}u,\\grad v)$ $-\\div(\\lambda\\grad u) + \\div(\\vec{\\beta}u)$ $\\hat{n}\\cdot(\\lambda\\grad u-\\vec{\\beta}u)=f$ on $\\Gamma_N$ To impose these boundary conditions in MFEM simply modify the right-hand side of your linear system by adding the appropriate boundary integral of either $f$ or $\\vec{f}$. 
For $H^1$ or $L^2$ fields this can be accomplished by adding the BoundaryLFIntegrator with an appropriate coefficient for $f$ to a [Par]LinearForm object. Neumann boundary conditions can be added to the above example code by adding the following line before the call to b.Assemble() . // Add Neumann BCs n.(matCoef Grad u) = nbcCoef on the boundary marked in // the nbc_marker array. b.AddBoundaryIntegrator(new BoundaryLFIntegrator(nbcCoef), nbc_marker); For H(Curl) fields this can be accomplished by adding the VectorFEBoundaryTangentLFIntegrator with an appropriate vector coefficient for $\\vec{f}$ to a [Par]LinearForm object. And finally, for H(Div) fields this can be accomplished by adding the VectorFEBoundaryFluxLFIntegrator with an appropriate scalar coefficient for $f = \\hat{n}\\cdot\\vec{f}$ to a [Par]LinearForm object. Other integrators may be appropriate if it is desirable to express the functions $\\,f$ or $\\vec{f}$ in other ways.", "title": "Neumann Boundary Conditions"}, {"location": "fem_bc/#robin-boundary-conditions", "text": "Robin boundary conditions typically involve a linear function of the field and its normal derivative. As such they also arise from weak derivatives and the boundary integrals they introduce to the system of equations. Typical forms of the Robin boundary condition include the following. Operator Continuous Operator Robin BC $(\\lambda\\grad u,\\grad v)$ $-\\div(\\lambda\\grad u)$ $\\hat{n}\\cdot(\\lambda\\grad u)+\\gamma\\,u=f$ on $\\Gamma_R$ $(\\lambda\\curl\\vec{u},\\curl\\vec{v})$ $\\curl(\\lambda\\curl\\vec{u})$ $\\hat{n}\\cross(\\lambda\\curl\\vec{u}+\\gamma\\,\\hat{n}\\cross\\vec{u})=\\hat{n}\\cross\\vec{f}$ on $\\Gamma_R$ $(\\lambda\\div\\vec{u},\\div\\vec{v})$ $-\\grad(\\lambda\\div\\vec{u})$ $\\lambda\\div\\vec{u}+\\gamma\\,\\hat{n}\\cdot\\vec{u}=\\hat{n}\\cdot\\vec{f}$ on $\\Gamma_R$ $(\\lambda\\grad u - \\vec{\\beta}u,\\grad v)$ $-\\div(\\lambda\\grad u) + \\div(\\vec{\\beta}u)$ $\\hat{n}\\cdot(\\lambda\\grad u-\\vec{\\beta}u)+\\gamma\\,u=f$ on $\\Gamma_R$ Robin boundary conditions are applied in the same manner as Neumann boundary conditions except that one must also add a boundary integral to the [Par]BilinearForm object to account for the term involving $\\gamma$. For example, when solving for an $H^1$ field one should add a MassIntegrator with an appropriate scalar coefficient for $\\gamma$. The implementation of a Robin boundary condition requires precisely the same change to the right-hand-side as the Neumann boundary condition as well as a new term in the bilinear form before a.Assemble() : // Add Robin BCs n.(matCoef Grad u) + rbcACoef u = rbcBCoef on the boundary // marked in the rbc_marker array. b.AddBoundaryIntegrator(new BoundaryLFIntegrator(rbcBCoef), rbc_marker); ... // Add Robin BCs n.(matCoef grad u) + rbcACoef u = rbcBCoef on the boundary // marked in the rbc_marker array. a.AddBoundaryIntegrator(new MassIntegrator(rbcACoef), rbc_marker);", "title": "Robin Boundary Conditions"}, {"location": "fem_bc/#discontinuous-galerkin-formulations", "text": "In the Discontinuous Galerkin (DG) formulation the Natural , Neumann , and Robin can be implemented in a similar the same manner as in the continuous case (adding the appropriate LinearFormIntegrator as a boundary face integrator instead of a boundary integrator ). However, since DG basis functions have no degrees of freedom associated with the boundary, Dirichlet boundary conditions must be handled differently. 
// Add the desired value for the Dirichlet constraint on the boundary // marked in the dbc_marker array. b.AddBdrFaceIntegrator(new DGDirichletLFIntegrator(dbcCoef, matCoef, sigma, kappa), dbc_marker); ... // Add the n.Grad(u) boundary integral on the Dirichlet portion of the // boundary marked in the dbc_marker array. a.AddBdrFaceIntegrator(new DGDiffusionIntegrator(matCoef, sigma, kappa), dbc_marker); Where sigma and kappa are parameters controlling the symmetry and interior penalty used in the DG diffusion formulation. These two integrators work together to balance the natural boundary condition associated with the DiffusionIntegrator and to penalize solutions which differ from the desired Dirichlet value near the boundary. Similar pairs of integrators can be implemented to accommodate other PDEs. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Discontinuous Galerkin Formulations"}, {"location": "fem_weak_form/", "text": "Weak Formulations $ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} \\newcommand{\\abs}[1]{|#1|} \\newcommand{\\dO}{{\\partial\\Omega}} $ Spaces of finite element basis functions are rarely rich enough to contain exact solutions to partial differential equations (PDEs) of interest. This is particularly true when we consider the irregular domains that often arise in practical simulations. One consequence of this is that finite element solutions often don't precisely satisfy the continuous PDEs being modeled. The goal is to build a finite element solution which approximates the true solution and satisfies the PDE in a weaker sense. Consider a general linear differential operator $L(u)$ and the partial differential equation: $$L(u) = f\\mbox{ on }\\Omega$$ We approximate the solution using a linear combination of finite element basis functions which we'll call $\\varphi_i$. $$u\\approx u_h\\equiv\\sum_i\\alpha_i\\varphi_i(\\vec{x})$$ The basis functions $\\varphi_i$ are known but we need to find the degrees of freedom, $\\alpha_i$, which produce a reasonable approximation of $u$. In Galerkin finite element methods this is done by multiplying the PDE by each of the basis functions and integrating over the problem domain. If we have a total of $N$ finite element basis functions, this leads to a set of $N$ equations for the $N$ unknowns. The resulting system of equations for the $\\alpha_i$ is called the \"weak formulation\" of the PDE. The weak formulation of this problem can be written as: $$\\sum_j\\alpha_j\\int_\\Omega L(\\varphi_j)\\varphi_i\\dO = \\int_\\Omega f\\varphi_i\\dO$$ or by the matrix equation: $$M\\vec{\\alpha}=\\vec{f}$$ Where the matrix entries $M_{ij}\\equiv\\int_\\Omega L(\\varphi_j)\\varphi_i\\dO$ and the entries of $\\,\\vec{f}$ are given by $\\,f_i\\equiv\\int_\\Omega f\\varphi_i\\dO$. However, it is much more common to write these integrals using inner product notation: $$(L(u),v)_\\Omega=(f, v)_\\Omega\\,\\forall v\\in V$$ Where $V$ is space spanned by the basis functions $\\varphi_i$. The next step is to examine the linear operator $L(u)$ and determine how to compute the integral $(L(u),v)_\\Omega$ in the most accurate manner possible which leads us to \"weak derivatives\". 
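Before derivatives even enter the picture, consider the simplest possible case, $L(u) = u$, which amounts to an $L^2$ projection of $f$ onto $V$: the weak formulation is $$(u_h, v)_\\Omega = (f, v)_\\Omega \\quad \\forall v\\in V,$$ and the matrix $M$ is just the mass matrix with entries $M_{ij} = (\\varphi_j,\\varphi_i)_\\Omega$. Whenever $L$ does contain derivatives, however, the integrals above must be handled more carefully.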
Weak Derivatives A \"weak derivative\" is a generalization of the notion of a derivative for integrable functions whose derivatives do not exist in the strong sense. When using the finite element method weak derivatives are required whenever terms in a PDE require derivatives of discontinuous or otherwise non-differentiable quantities. Finite element basis functions are typically not smooth functions. Even if they happen to be continuous their derivatives are often at least partially discontinuous. Also, coefficient functions can be discontinuous but, more importantly, their derivatives are often not known. For these reasons PDE terms similar to $\\grad(\\lambda u)$ or $\\div\\grad u$ cannot be accurately computed using finite element basis functions without employing weak derivatives. Consider the following discontinuous approximation to the function $\\cos(2\\pi x)e^{-2x}$. Piecewise linear, discontinuous basis functions can approximate this function rather well on this coarse 4 element mesh. If we simply ignore the discontinuities and compute the piecewise derivatives of the basis functions we obtain the following approximation of the continuous function's derivative. This is a reasonable, albeit quite crude, approximation of the derivative. Expending a little more effort to compute the weak derivative using continuous 2nd order basis functions produces a far superior approximation. Clearly we will benefit from using weak derivatives to handle derivatives of discontinuous functions which arise in our linear operators. Weak Divergence Consider a linear operator of the form $L(u)=-\\div\\vec{\\alpha}(u)$ with $\\vec{\\alpha}\\equiv\\vec{\\beta}u+\\gamma\\grad u$, where $\\vec{\\beta}$ is a vector-valued function and $\\gamma$ is a scalar or tensor-valued function. The function $\\vec{\\alpha}$ is a general linear function of $u$ and its gradient. The weak divergence of this quantity would be calculated by multiplying $\\div\\vec{\\alpha}$ by a test function, $v$, and integrating over the domain $\\Omega$. $$(-\\div\\vec{\\alpha},v)_\\Omega \\equiv-\\int_\\Omega(\\div\\vec{\\alpha})v\\,d\\Omega$$ The negative sign in this expression is only a matter of convention. Using the vector calculus identity, $\\div(\\vec{\\alpha}v) = (\\div\\vec{\\alpha})v + \\vec{\\alpha}\\cdot\\grad v$, we find: $$(-\\div\\vec{\\alpha}, v)_\\Omega = (\\vec{\\alpha}, \\grad v)_\\Omega - \\int_\\Omega\\div(\\vec{\\alpha}v)\\,d\\Omega$$ We then use the Divergence theorem to obtain: $$(-\\div\\vec{\\alpha}, v)_\\Omega = (\\vec{\\alpha}, \\grad v)_\\Omega - \\int_\\dO(\\hat{n}\\cdot\\vec{\\alpha})v\\,d\\Gamma = (\\vec{\\alpha}, \\grad v)_\\Omega - (\\hat{n}\\cdot\\vec{\\alpha},v)_\\dO $$ Where $d\\Gamma$ is the area element on the boundary of $\\Omega$. For linear operators of this type the bilinear form $\\,(\\vec{\\alpha}, \\grad v)_\\Omega$ can be much more accurately approximated than the original bilinear form $\\,(-\\div\\vec{\\alpha}, v)_\\Omega$ provided we can accurately manage the boundary integral $\\,(\\hat{n}\\cdot\\vec{\\alpha},v)_\\dO$. Boundary integrals such as this can be used to incorporate Neumann boundary conditions into a PDE. See the Boundary Conditions page for more information on this. Weak Curl For the next example consider the weak curl of a vector operator. Let $L(u)=\\curl\\vec{\\alpha}(u)$ with $\\vec{\\alpha} \\equiv \\beta\\vec{u}+\\gamma\\curl\\vec{u}$, where $\\beta$ and $\\gamma$ are either scalar or tensor-valued functions. 
The function $\\vec{\\alpha}$ is a general linear function of $\\vec{u}$ and its curl. The weak curl of this quantity would be calculated by multiplying $\\curl\\vec{\\alpha}$ by a test function, $\\vec{v}$, and integrating over the domain $\\Omega$. $$(\\curl\\vec{\\alpha},\\vec{v})_\\Omega \\equiv \\int_\\Omega(\\curl\\vec{\\alpha})\\cdot\\vec{v}\\,d\\Omega$$ Using the vector calculus identity, $\\div(\\vec{\\alpha}\\cross\\vec{v}) = (\\curl\\vec{\\alpha})\\cdot\\vec{v} - \\vec{\\alpha}\\cdot(\\curl\\vec{v})$, we find: $$(\\curl\\vec{\\alpha},\\vec{v})_\\Omega = (\\vec{\\alpha},\\curl\\vec{v})_\\Omega + \\int_\\Omega\\div(\\vec{\\alpha}\\times\\vec{v})\\,d\\Omega$$ We again use the Divergence theorem to obtain: $$(\\curl\\vec{\\alpha},\\vec{v})_\\Omega = (\\vec{\\alpha},\\curl\\vec{v})_\\Omega + \\int_\\dO\\hat{n}\\cdot(\\vec{\\alpha}\\times\\vec{v})\\,d\\Gamma = (\\vec{\\alpha},\\curl\\vec{v})_\\Omega + (\\hat{n}\\cross\\vec{\\alpha},\\vec{v})_\\dO$$ Where we also made use of the scalar triple product, $\\hat{n}\\cdot(\\vec{\\alpha}\\cross\\vec{v}) = \\vec{v}\\cdot(\\hat{n}\\cross\\vec{\\alpha})$, in the last equality. Again it will be more accurate to use the bilinear form $(\\vec{\\alpha},\\curl\\vec{v})_\\Omega$ and a Neumann boundary condition will arise from the boundary integral. Weak Gradient For the last example consider the weak gradient of a scalar operator. Let $L(u)=-\\grad\\alpha(u)$ with $\\alpha\\equiv\\vec{\\beta}\\cdot\\vec{u}+\\gamma\\div\\vec{u}$, where $\\vec{\\beta}$ is a vector-valued function and $\\gamma$ is a scalar-valued function. The function $\\alpha$ is a general linear function of $\\vec{u}$ and its divergence. The weak gradient of this quantity would be calculated by multiplying $\\grad\\alpha$ by a test function, $\\vec{v}$, and integrating over the domain $\\Omega$. $$(-\\grad\\alpha,\\vec{v})_\\Omega \\equiv -\\int_\\Omega(\\grad\\alpha)\\cdot\\vec{v}\\,d\\Omega$$ The negative sign in this expression is again only a matter of convention. Using the vector calculus identity, $\\div(\\alpha\\vec{v}) = (\\grad\\alpha)\\cdot\\vec{v} + \\alpha\\div\\vec{v}$, we find: $$(-\\grad\\alpha,\\vec{v})_\\Omega = (\\alpha,\\div\\vec{v})_\\Omega - \\int_\\Omega\\div(\\alpha\\vec{v})\\,d\\Omega$$ We again use the Divergence theorem to obtain: $$(-\\grad\\alpha,\\vec{v})_\\Omega = (\\alpha,\\div\\vec{v})_\\Omega - \\int_\\dO\\hat{n}\\cdot(\\alpha\\vec{v})\\,d\\Gamma = (\\alpha,\\div\\vec{v})_\\Omega - (\\alpha\\hat{n},\\vec{v})_\\dO$$ Once again we find a complimentary bilinear form in $(\\alpha,\\div\\vec{v})_\\Omega$ and a boundary integral leading to a Neumann boundary condition. Other Types of Terms Partial differential equations with other types of terms such as spatial derivatives of order three or higher (e.g. $\\nabla^4u$) or coefficients in inconvenient locations (e.g. $\\alpha\\div(\\beta\\grad u)$) will often require the introduction of auxiliary variables unless algebraic manipulations can remove the inconvenient factors. For example, $$\\nabla^4 u=f$$ can be split into a pair of coupled equations: $$ \\begin{align*} \\nabla^2u &= \\psi\\\\ \\nabla^2\\psi &= f \\end{align*} $$ and $$\\alpha\\div(\\beta\\grad u)=f$$ can be split into: $$ \\begin{align*} \\beta\\grad u &= \\psi\\\\ \\alpha\\div\\psi &= f \\end{align*} $$ Careful examination of the required derivatives will often suggest the most appropriate choice for the basis functions to be used for such auxiliary fields. 
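As a brief illustration of how such a split is then discretized, testing each equation of the biharmonic split above and integrating by parts (assuming, for simplicity, homogeneous natural boundary conditions so that the boundary integrals vanish) gives the coupled weak statements $$ \\begin{align*} -(\\grad u,\\grad v)_\\Omega &= (\\psi, v)_\\Omega\\\\ -(\\grad \\psi,\\grad w)_\\Omega &= (f, w)_\\Omega \\end{align*} $$ so that only first derivatives of the basis functions appear and standard $H^1$ spaces can be used for both $u$ and the auxiliary field $\\psi$.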
MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Weak Formulations"}, {"location": "fem_weak_form/#weak-formulations", "text": "$ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} \\newcommand{\\abs}[1]{|#1|} \\newcommand{\\dO}{{\\partial\\Omega}} $ Spaces of finite element basis functions are rarely rich enough to contain exact solutions to partial differential equations (PDEs) of interest. This is particularly true when we consider the irregular domains that often arise in practical simulations. One consequence of this is that finite element solutions often don't precisely satisfy the continuous PDEs being modeled. The goal is to build a finite element solution which approximates the true solution and satisfies the PDE in a weaker sense. Consider a general linear differential operator $L(u)$ and the partial differential equation: $$L(u) = f\\mbox{ on }\\Omega$$ We approximate the solution using a linear combination of finite element basis functions which we'll call $\\varphi_i$. $$u\\approx u_h\\equiv\\sum_i\\alpha_i\\varphi_i(\\vec{x})$$ The basis functions $\\varphi_i$ are known but we need to find the degrees of freedom, $\\alpha_i$, which produce a reasonable approximation of $u$. In Galerkin finite element methods this is done by multiplying the PDE by each of the basis functions and integrating over the problem domain. If we have a total of $N$ finite element basis functions, this leads to a set of $N$ equations for the $N$ unknowns. The resulting system of equations for the $\\alpha_i$ is called the \"weak formulation\" of the PDE. The weak formulation of this problem can be written as: $$\\sum_j\\alpha_j\\int_\\Omega L(\\varphi_j)\\varphi_i\\dO = \\int_\\Omega f\\varphi_i\\dO$$ or by the matrix equation: $$M\\vec{\\alpha}=\\vec{f}$$ Where the matrix entries $M_{ij}\\equiv\\int_\\Omega L(\\varphi_j)\\varphi_i\\dO$ and the entries of $\\,\\vec{f}$ are given by $\\,f_i\\equiv\\int_\\Omega f\\varphi_i\\dO$. However, it is much more common to write these integrals using inner product notation: $$(L(u),v)_\\Omega=(f, v)_\\Omega\\,\\forall v\\in V$$ Where $V$ is space spanned by the basis functions $\\varphi_i$. The next step is to examine the linear operator $L(u)$ and determine how to compute the integral $(L(u),v)_\\Omega$ in the most accurate manner possible which leads us to \"weak derivatives\".", "title": "Weak Formulations"}, {"location": "fem_weak_form/#weak-derivatives", "text": "A \"weak derivative\" is a generalization of the notion of a derivative for integrable functions whose derivatives do not exist in the strong sense. When using the finite element method weak derivatives are required whenever terms in a PDE require derivatives of discontinuous or otherwise non-differentiable quantities. Finite element basis functions are typically not smooth functions. Even if they happen to be continuous their derivatives are often at least partially discontinuous. Also, coefficient functions can be discontinuous but, more importantly, their derivatives are often not known. For these reasons PDE terms similar to $\\grad(\\lambda u)$ or $\\div\\grad u$ cannot be accurately computed using finite element basis functions without employing weak derivatives. Consider the following discontinuous approximation to the function $\\cos(2\\pi x)e^{-2x}$. 
Piecewise linear, discontinuous basis functions can approximate this function rather well on this coarse 4 element mesh. If we simply ignore the discontinuities and compute the piecewise derivatives of the basis functions we obtain the following approximation of the continuous function's derivative. This is a reasonable, albeit quite crude, approximation of the derivative. Expending a little more effort to compute the weak derivative using continuous 2nd order basis functions produces a far superior approximation. Clearly we will benefit from using weak derivatives to handle derivatives of discontinuous functions which arise in our linear operators.", "title": "Weak Derivatives"}, {"location": "fem_weak_form/#weak-divergence", "text": "Consider a linear operator of the form $L(u)=-\\div\\vec{\\alpha}(u)$ with $\\vec{\\alpha}\\equiv\\vec{\\beta}u+\\gamma\\grad u$, where $\\vec{\\beta}$ is a vector-valued function and $\\gamma$ is a scalar or tensor-valued function. The function $\\vec{\\alpha}$ is a general linear function of $u$ and its gradient. The weak divergence of this quantity would be calculated by multiplying $\\div\\vec{\\alpha}$ by a test function, $v$, and integrating over the domain $\\Omega$. $$(-\\div\\vec{\\alpha},v)_\\Omega \\equiv-\\int_\\Omega(\\div\\vec{\\alpha})v\\,d\\Omega$$ The negative sign in this expression is only a matter of convention. Using the vector calculus identity, $\\div(\\vec{\\alpha}v) = (\\div\\vec{\\alpha})v + \\vec{\\alpha}\\cdot\\grad v$, we find: $$(-\\div\\vec{\\alpha}, v)_\\Omega = (\\vec{\\alpha}, \\grad v)_\\Omega - \\int_\\Omega\\div(\\vec{\\alpha}v)\\,d\\Omega$$ We then use the Divergence theorem to obtain: $$(-\\div\\vec{\\alpha}, v)_\\Omega = (\\vec{\\alpha}, \\grad v)_\\Omega - \\int_\\dO(\\hat{n}\\cdot\\vec{\\alpha})v\\,d\\Gamma = (\\vec{\\alpha}, \\grad v)_\\Omega - (\\hat{n}\\cdot\\vec{\\alpha},v)_\\dO $$ Where $d\\Gamma$ is the area element on the boundary of $\\Omega$. For linear operators of this type the bilinear form $\\,(\\vec{\\alpha}, \\grad v)_\\Omega$ can be much more accurately approximated than the original bilinear form $\\,(-\\div\\vec{\\alpha}, v)_\\Omega$ provided we can accurately manage the boundary integral $\\,(\\hat{n}\\cdot\\vec{\\alpha},v)_\\dO$. Boundary integrals such as this can be used to incorporate Neumann boundary conditions into a PDE. See the Boundary Conditions page for more information on this.", "title": "Weak Divergence"}, {"location": "fem_weak_form/#weak-curl", "text": "For the next example consider the weak curl of a vector operator. Let $L(u)=\\curl\\vec{\\alpha}(u)$ with $\\vec{\\alpha} \\equiv \\beta\\vec{u}+\\gamma\\curl\\vec{u}$, where $\\beta$ and $\\gamma$ are either scalar or tensor-valued functions. The function $\\vec{\\alpha}$ is a general linear function of $\\vec{u}$ and its curl. The weak curl of this quantity would be calculated by multiplying $\\curl\\vec{\\alpha}$ by a test function, $\\vec{v}$, and integrating over the domain $\\Omega$. 
$$(\\curl\\vec{\\alpha},\\vec{v})_\\Omega \\equiv \\int_\\Omega(\\curl\\vec{\\alpha})\\cdot\\vec{v}\\,d\\Omega$$ Using the vector calculus identity, $\\div(\\vec{\\alpha}\\cross\\vec{v}) = (\\curl\\vec{\\alpha})\\cdot\\vec{v} - \\vec{\\alpha}\\cdot(\\curl\\vec{v})$, we find: $$(\\curl\\vec{\\alpha},\\vec{v})_\\Omega = (\\vec{\\alpha},\\curl\\vec{v})_\\Omega + \\int_\\Omega\\div(\\vec{\\alpha}\\times\\vec{v})\\,d\\Omega$$ We again use the Divergence theorem to obtain: $$(\\curl\\vec{\\alpha},\\vec{v})_\\Omega = (\\vec{\\alpha},\\curl\\vec{v})_\\Omega + \\int_\\dO\\hat{n}\\cdot(\\vec{\\alpha}\\times\\vec{v})\\,d\\Gamma = (\\vec{\\alpha},\\curl\\vec{v})_\\Omega + (\\hat{n}\\cross\\vec{\\alpha},\\vec{v})_\\dO$$ Where we also made use of the scalar triple product, $\\hat{n}\\cdot(\\vec{\\alpha}\\cross\\vec{v}) = \\vec{v}\\cdot(\\hat{n}\\cross\\vec{\\alpha})$, in the last equality. Again it will be more accurate to use the bilinear form $(\\vec{\\alpha},\\curl\\vec{v})_\\Omega$ and a Neumann boundary condition will arise from the boundary integral.", "title": "Weak Curl"}, {"location": "fem_weak_form/#weak-gradient", "text": "For the last example consider the weak gradient of a scalar operator. Let $L(u)=-\\grad\\alpha(u)$ with $\\alpha\\equiv\\vec{\\beta}\\cdot\\vec{u}+\\gamma\\div\\vec{u}$, where $\\vec{\\beta}$ is a vector-valued function and $\\gamma$ is a scalar-valued function. The function $\\alpha$ is a general linear function of $\\vec{u}$ and its divergence. The weak gradient of this quantity would be calculated by multiplying $\\grad\\alpha$ by a test function, $\\vec{v}$, and integrating over the domain $\\Omega$. $$(-\\grad\\alpha,\\vec{v})_\\Omega \\equiv -\\int_\\Omega(\\grad\\alpha)\\cdot\\vec{v}\\,d\\Omega$$ The negative sign in this expression is again only a matter of convention. Using the vector calculus identity, $\\div(\\alpha\\vec{v}) = (\\grad\\alpha)\\cdot\\vec{v} + \\alpha\\div\\vec{v}$, we find: $$(-\\grad\\alpha,\\vec{v})_\\Omega = (\\alpha,\\div\\vec{v})_\\Omega - \\int_\\Omega\\div(\\alpha\\vec{v})\\,d\\Omega$$ We again use the Divergence theorem to obtain: $$(-\\grad\\alpha,\\vec{v})_\\Omega = (\\alpha,\\div\\vec{v})_\\Omega - \\int_\\dO\\hat{n}\\cdot(\\alpha\\vec{v})\\,d\\Gamma = (\\alpha,\\div\\vec{v})_\\Omega - (\\alpha\\hat{n},\\vec{v})_\\dO$$ Once again we find a complimentary bilinear form in $(\\alpha,\\div\\vec{v})_\\Omega$ and a boundary integral leading to a Neumann boundary condition.", "title": "Weak Gradient"}, {"location": "fem_weak_form/#other-types-of-terms", "text": "Partial differential equations with other types of terms such as spatial derivatives of order three or higher (e.g. $\\nabla^4u$) or coefficients in inconvenient locations (e.g. $\\alpha\\div(\\beta\\grad u)$) will often require the introduction of auxiliary variables unless algebraic manipulations can remove the inconvenient factors. For example, $$\\nabla^4 u=f$$ can be split into a pair of coupled equations: $$ \\begin{align*} \\nabla^2u &= \\psi\\\\ \\nabla^2\\psi &= f \\end{align*} $$ and $$\\alpha\\div(\\beta\\grad u)=f$$ can be split into: $$ \\begin{align*} \\beta\\grad u &= \\psi\\\\ \\alpha\\div\\psi &= f \\end{align*} $$ Careful examination of the required derivatives will often suggest the most appropriate choice for the basis functions to be used for such auxiliary fields. 
MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Other Types of Terms"}, {"location": "fluids/", "text": "Navier-Stokes Mini Application The solver implemented in this miniapp solves the transient incompressible Navier-Stokes equations. Theory The equations are given in the non-dimensionalized form \\begin{align} \\frac{\\partial u}{\\partial t} + (u \\cdot \\nabla) u - \\frac{1}{Re} \\nabla^2 u + \\nabla p &= f & \\quad \\text{in } \\Omega\\\\ \\nabla \\cdot u &= 0 & \\quad \\text{in } \\Omega \\end{align} where $Re$ represents the Reynolds number. In order to solve these equations, the method presented in Tomboulides (1997) 1 is used, which is based on an equal order finite element discretization on quadrilateral or hexahedral elements of high polynomial order. The method describes an implicit-explicit time-integration scheme for the viscous and convective terms respectively. Introducing the following notation the nonlinear term $N(u) = -(u \\cdot \\nabla) u$ and the time-extrapolated form \\begin{align} \\label{eq:Next} N^*(u^{n+1}) = \\sum_{j=1}^k a_j N(u^{n+1-j}) \\end{align} where $a_j$ are coefficients from the corresponding explicit time integration method. Applying a BDF method with coefficients $b_j$ to the initial equation using the introduced forms yields \\begin{align} \\sum_{j=0}^k \\frac{b_j}{\\Delta t} u^{n+1-j} = -\\nabla p^{n+1} + L(u^{n+1}) + N^*(u^{n+1}) + f^{n+1}. \\end{align} Collecting all known quantities at a given time with \\begin{align} F^*(u^{n+1}) = -\\sum_{j=1}^k \\frac{b_j}{\\Delta t} u^{n+1-j} + N^*(u^{n+1}) + f^{n+1} \\end{align} the BDF expression reduces to \\begin{align} \\label{eq:bdf_short} \\frac{b_0}{\\Delta t} u^{n+1} = -\\nabla p^{n+1} + L(u^{n+1}) + F^*(u^{n+1}). \\end{align} To achieve a high order convergence in space, the linear term $L(u)$ is replaced by \\begin{align} L_{\\times}(u) = \\nu \\nabla(\\nabla \\cdot u) - \\nu \\nabla \\times \\nabla \\times u \\end{align} which is used to weakly enforce incompressibility by setting the first term to zero. Like in \\eqref{eq:Next} we introduce the time extrapolated term \\begin{align} L^*_{\\times}(u^{n+1}) = \\sum_{j=1}^k a_j L_{\\times}(u^{n+1-j}). \\end{align} To compute the pressure we rearrange \\eqref{eq:bdf_short} and take the divergence on both sides \\begin{align} \\label{eq:prespois} \\nabla^2 p^{n+1} = \\nabla \\cdot (L_{\\times}^*(u^{n+1}) + F^*(u^{n+1})), \\end{align} which is closed by the Neumann type boundary condition \\begin{align} \\nabla p^{n+1} \\cdot \\hat{n} = -\\frac{b_0}{\\Delta t} u^{n+1} \\cdot \\hat{n} + (L_{\\times}^*(u^{n+1}) + F^*(u^{n+1})) \\cdot \\hat{n}. \\end{align} We will refer to this as the pressure Poisson equation in the following. The last step is a Helmholtz type equation to solve for the implicit (viscous) velocity part which is also derived from \\eqref{eq:bdf_short}. Consider \\begin{align} \\label{eq:hlm} \\frac{b_0}{\\Delta t} u^{n+1} - L(u^{n+1}) = -\\nabla p^{n+1} + F^*(u^{n+1}) \\end{align} with the Dirichlet (essential type) boundary condition \\begin{align} u^{n+1} = g_D^{n+1}. \\end{align} A detailed walk through can also be found in Franco et al (2020) 2 . Note The notation is very similar to what is used in the code to make it easy to follow the theoretical explanation and understand what is done in the implementation. Boundary Conditions Inflow and no-slip walls For inflow or no-slip wall boundary conditions one should use the method NavierSolver::AddVelDirichletBC . 
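For example, a minimal sketch of registering such a condition could look as follows (the names flowsolver and pmesh, the boundary attribute number, and the inflow profile are illustrative assumptions; the exact signatures are defined in the navier miniapp sources):
void inflow_velocity(const Vector &x, double t, Vector &u)
{
   u = 0.0;
   u(0) = 1.0; // hypothetical uniform inflow along the x-direction
}
// ...
Array<int> inflow_attr(pmesh->bdr_attributes.Max());
inflow_attr = 0;
inflow_attr[0] = 1; // assume boundary attribute 1 marks the inlet
flowsolver.AddVelDirichletBC(inflow_velocity, inflow_attr);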
This enforces the value on $u^{n+1}$ in \\eqref{eq:hlm}. It is valid to call this method multiple times on different boundary attributes of the mesh. The NavierSolver instance keeps track of the associated Coefficient and accompanying boundary attribute. The passed attribute array can be modified, deleted or reused, since a copy is created. Pressure outlet If an outlet of a domain is supposed to represent a pressure outlet (e.g. zero-pressure), one should use the method NavierSolver::AddPresDirichletBC . This enforces the pressure value $p^{n+1}$ in \\eqref{eq:prespois}. Zero-stress This boundary condition is used to represent an outflow attribute. Due to the nature of the $H^1$ finite-element discretization, the terms arise naturally in \\eqref{eq:prespois} and \\eqref{eq:hlm} resulting in \\begin{align} \\nu \\nabla u \\cdot \\hat{n} - p \\mathbb{I} \\cdot \\hat{n} = 0, \\end{align} where $\\mathbb{I}$ represents the identity tensor. If there is no other boundary condition applied to a certain attribute, this boundary condition is applied automatically (not through modification but rather through the formulation). Solvers and preconditioners The choice of solvers and preconditioners for \\eqref{eq:prespois} and \\eqref{eq:hlm} is essential for the performance and robustness of the simulation. The pressure Poisson equation \\eqref{eq:prespois} is solved using the CG Krylov method in combination with the low-order refined preconditioning technique coupled with AMG (cf. Franco et al (2020) 2 ). Due to the nature of the explicit time discretization of the nonlinear term, the method used is CFL (and therefore time step) bound. As a result, the time derivative term in \\eqref{eq:hlm} is dominating and a CG Krylov method preconditioned with Jacobi is sufficient. Depending on the problem, this results in the majority of time per time step being spent in the pressure Poisson solve. At the moment there is no interface to change the default options for the solvers, but a user can easily modify them in the code itself. FAQ You are using the spectral element method, why is the mass matrix not a vector representing the condensed diagonal? This is a design choice. It is possible to use the \"numerical integration\" option, which produces a diagonal mass matrix with one nonzero value per row. This leaves freedom to experiment. Do you support simulations using real parameters? No, right now you have to non-dimensionalize your problem. Not doing this impacts the performance a lot. I want to implement turbulence model X, how do I do that? This is another design choice to make and should be discussed, preferably in a GitHub issue. Why doesn't it have adaptive time stepping? While it is possible and there exists a branch that works with varying step sizes (variable order/variable step size IMEX), I have not found a reliable and robust method to determine the step size (CFL based error estimators are very squishy here or have to use a very conservative limit). How do I compute steady state solutions with this? There is no acceleration to steady state algorithm implemented right now. Your only option is to run the transient case until you reach a steady state criterion. (See adaptive time stepping FAQ above). MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}}); A. G. Tomboulides, J. C. Y. Lee & S. A. 
Orszag (1997) Numerical Simulation of Low Mach Number Reactive Flows \u21a9 Michael Franco, Jean-Sylvain Camier, Julian Andrej, Will Pazner (2020) High-order matrix-free incompressible flow solvers with GPU acceleration and low-order refined preconditioners (https://arxiv.org/abs/1910.03032) \u21a9 \u21a9", "title": "Fluid Dynamics"}, {"location": "fluids/#navier-stokes-mini-application", "text": "The solver implemented in this miniapp solves the transient incompressible Navier-Stokes equations.", "title": "Navier-Stokes Mini Application"}, {"location": "fluids/#theory", "text": "The equations are given in the non-dimensionalized form \\begin{align} \\frac{\\partial u}{\\partial t} + (u \\cdot \\nabla) u - \\frac{1}{Re} \\nabla^2 u + \\nabla p &= f & \\quad \\text{in } \\Omega\\\\ \\nabla \\cdot u &= 0 & \\quad \\text{in } \\Omega \\end{align} where $Re$ represents the Reynolds number. In order to solve these equations, the method presented in Tomboulides (1997) 1 is used, which is based on an equal order finite element discretization on quadrilateral or hexahedral elements of high polynomial order. The method describes an implicit-explicit time-integration scheme for the viscous and convective terms respectively. Introducing the following notation the nonlinear term $N(u) = -(u \\cdot \\nabla) u$ and the time-extrapolated form \\begin{align} \\label{eq:Next} N^*(u^{n+1}) = \\sum_{j=1}^k a_j N(u^{n+1-j}) \\end{align} where $a_j$ are coefficients from the corresponding explicit time integration method. Applying a BDF method with coefficients $b_j$ to the initial equation using the introduced forms yields \\begin{align} \\sum_{j=0}^k \\frac{b_j}{\\Delta t} u^{n+1-j} = -\\nabla p^{n+1} + L(u^{n+1}) + N^*(u^{n+1}) + f^{n+1}. \\end{align} Collecting all known quantities at a given time with \\begin{align} F^*(u^{n+1}) = -\\sum_{j=1}^k \\frac{b_j}{\\Delta t} u^{n+1-j} + N^*(u^{n+1}) + f^{n+1} \\end{align} the BDF expression reduces to \\begin{align} \\label{eq:bdf_short} \\frac{b_0}{\\Delta t} u^{n+1} = -\\nabla p^{n+1} + L(u^{n+1}) + F^*(u^{n+1}). \\end{align} To achieve a high order convergence in space, the linear term $L(u)$ is replaced by \\begin{align} L_{\\times}(u) = \\nu \\nabla(\\nabla \\cdot u) - \\nu \\nabla \\times \\nabla \\times u \\end{align} which is used to weakly enforce incompressibility by setting the first term to zero. Like in \\eqref{eq:Next} we introduce the time extrapolated term \\begin{align} L^*_{\\times}(u^{n+1}) = \\sum_{j=1}^k a_j L_{\\times}(u^{n+1-j}). \\end{align} To compute the pressure we rearrange \\eqref{eq:bdf_short} and take the divergence on both sides \\begin{align} \\label{eq:prespois} \\nabla^2 p^{n+1} = \\nabla \\cdot (L_{\\times}^*(u^{n+1}) + F^*(u^{n+1})), \\end{align} which is closed by the Neumann type boundary condition \\begin{align} \\nabla p^{n+1} \\cdot \\hat{n} = -\\frac{b_0}{\\Delta t} u^{n+1} \\cdot \\hat{n} + (L_{\\times}^*(u^{n+1}) + F^*(u^{n+1})) \\cdot \\hat{n}. \\end{align} We will refer to this as the pressure Poisson equation in the following. The last step is a Helmholtz type equation to solve for the implicit (viscous) velocity part which is also derived from \\eqref{eq:bdf_short}. Consider \\begin{align} \\label{eq:hlm} \\frac{b_0}{\\Delta t} u^{n+1} - L(u^{n+1}) = -\\nabla p^{n+1} + F^*(u^{n+1}) \\end{align} with the Dirichlet (essential type) boundary condition \\begin{align} u^{n+1} = g_D^{n+1}. \\end{align} A detailed walk through can also be found in Franco et al (2020) 2 . 
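As a concrete example of the time discretization above, for $k=2$ a standard second-order pairing (a hedged illustration; the coefficient tables actually used by the solver should be checked in the miniapp source) is $b_0=3/2$, $b_1=-2$, $b_2=1/2$ for the BDF part and $a_1=2$, $a_2=-1$ for the extrapolation, so that \\begin{align} \\sum_{j=0}^{2} \\frac{b_j}{\\Delta t} u^{n+1-j} = \\frac{3u^{n+1}-4u^{n}+u^{n-1}}{2\\Delta t} \\qquad \\text{and} \\qquad N^*(u^{n+1}) = 2N(u^{n}) - N(u^{n-1}). \\end{align}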
Note The notation is very similar to what is used in the code to make it easy to follow the theoretical explanation and understand what is done in the implementation.", "title": "Theory"}, {"location": "fluids/#boundary-conditions", "text": "", "title": "Boundary Conditions"}, {"location": "fluids/#inflow-and-no-slip-walls", "text": "For inflow or no-slip wall boundary conditions one should use the method NavierSolver::AddVelDirichletBC . This enforces the value on $u^{n+1}$ in \\eqref{eq:hlm}. It is valid to call this method multiple times on different boundary attributes of the mesh. The NavierSolver instance keeps track of the associated Coefficient and accompanying boundary attribute. The passed attribute array can be modified, deleted or reused, since a copy is created.", "title": "Inflow and no-slip walls"}, {"location": "fluids/#pressure-outlet", "text": "If an outlet of a domain is supposed to represent a pressure outlet (e.g. zero-pressure), one should use the method NavierSolver::AddPresDirichletBC . This enforces the pressure value $p^{n+1}$ in \\eqref{eq:prespois}.", "title": "Pressure outlet"}, {"location": "fluids/#zero-stress", "text": "This boundary condition is used to represent an outflow attribute. Due to the nature of the $H^1$ finite-element discretization, the terms arise naturally in \\eqref{eq:prespois} and \\eqref{eq:hlm} resulting in \\begin{align} \\nu \\nabla u \\cdot \\hat{n} - p \\mathbb{I} \\cdot \\hat{n} = 0, \\end{align} where $\\mathbb{I}$ represents the identity tensor. If there is no other boundary condition applied to a certain attribute, this boundary condition is applied automatically (not through modification but rather through the formulation).", "title": "Zero-stress"}, {"location": "fluids/#solvers-and-preconditioners", "text": "The choice of solvers and preconditioners for \\eqref{eq:prespois} and \\eqref{eq:hlm} is essential for the performance and robustness of the simulation. The pressure Poisson equation \\eqref{eq:prespois} is solved using the CG Krylov method in combination with the low-order refined preconditioning technique coupled with AMG (cf. Franco et al (2020) 2 ). Due to the nature of the explicit time discretization of the nonlinear term, the method used is CFL (and therefore time step) bound. As a result, the time derivative term in \\eqref{eq:hlm} is dominating and a CG Krylov method preconditioned with Jacobi is sufficient. Depending on the problem, this results in the majority of time per time step being spent in the pressure Poisson solve. At the moment there is no interface to change the default options for the solvers, but a user can easily modify them in the code itself.", "title": "Solvers and preconditioners"}, {"location": "fluids/#faq", "text": "You are using the spectral element method, why is the mass matrix not a vector representing the condensed diagonal? This is a design choice. It is possible to use the \"numerical integration\" option, which produces a diagonal mass matrix with one nonzero value per row. This leaves freedom to experiment. Do you support simulations using real parameters? No, right now you have to non-dimensionalize your problem. Not doing this impacts the performance a lot. I want to implement turbulence model X, how do I do that? This is another design choice to make and should be discussed, preferably in a GitHub issue. Why doesn't it have adaptive time stepping? 
While it is possible and there exists a branch that works with varying step sizes (variable order/variable step size IMEX), I have not found a reliable and robust method to determine the step size (CFL based error estimators are very squishy here or have to use a very conservative limit). How do I compute steady state solutions with this? There is no acceleration to steady state algorithm implemented right now. Your only option is to run the transient case until you reach a steady state criterion. (See adaptive time stepping FAQ above). MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}}); A. G. Tomboulides, J. C. Y. Lee & S. A. Orszag (1997) Numerical Simulation of Low Mach Number Reactive Flows \u21a9 Michael Franco, Jean-Sylvain Camier, Julian Andrej, Will Pazner (2020) High-order matrix-free incompressible flow solvers with GPU acceleration and low-order refined preconditioners (https://arxiv.org/abs/1910.03032) \u21a9 \u21a9", "title": "FAQ"}, {"location": "gallery/", "text": "Gallery This page collects screenshots from various simulations based on MFEM. Image captions with \ud83c\udfac link to simulation videos. Additional images can be found in the GLVis gallery . A version of the MFEM logo demonstrating curvilinear elements, adaptive mesh refinement and (idealized) parallel partitioning. Visualization with GLVis . Incompressible Taylor-Green vortex simulation with high-order finite elements. Visualization with ParaView . Fibers generated by LDRB approach based on 4 Laplacian solves in the Cardioid project. Solution of a Maxwell problem on a Klein bottle. Mesh generated with the klein-bottle miniapp. Solution with Example 3 . Comparisons of equipotential surfaces and force lines from Maxwell's Treatise on Electricity and Magnetism with results from MFEM's Volta miniapp . Level surfaces in the interior of the solution from Example 1 on escher.mesh . Visualization with GLVis . 3D Arbitrary Lagrangian-Eulerian (ALE) simulation of a shock-triple point interaction with Q2-Q1 elements in the MFEM-based BLAST shock hydrodynamics code. Volume visualization with VisIt . Modeling elastic-plastic flow in the 3D Taylor high-velocity impact problem using 4th order mixed elements in the MFEM-based BLAST shock hydrodynamics code. Visualization with VisIt . Poisson problem on a \"Breather\" surface. Mesh generated with the Mesh Explorer miniapp. Solution with Example 1 . Triple point shock interaction on 4 elements of order 12. Note the element curvature and the high variation of the field inside the lower right element. Visualization of the electric field generated by the electrical wave on rabbit heart ventricles during depolarization of the heart. Image courtesy of Dennis Ogiermann, winner of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Incompressible fluid flow around a rotating turbine using a space-time embedded-hybridized discontinuous Galerkin discretization. Image courtesy of Tamas Horvath, winner of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Magnetic diffusion problem solved to compute the magnetic field induced by current running through copper wire in air. Image courtesy of Will Pazner, winner of the 2022 MFEM Workshop Visualization Contest. \ud83c\udfac Shock-bubble-interaction using a Property-preserving discontinuous Galerkin scheme, see book . Image courtesy of Hennes Hajduk, as part of the 2023 MFEM Workshop Visualization Contest. 
\ud83c\udfac Re=50,000 incompressible Navier-Stokes wall-resolved LES of a NACA 0012 airfoil in stall regime using MFEM's Navier miniapp. Image courtesy of \u00c9tienne Spieser, as part of the 2023 MFEM Workshop Visualization Contest. \ud83c\udfac Plane wave scattering from a cube using a DPG Ultraweak formulation in MFEM to solve the time-harmonic linear acoustics equations. Image courtesy of Socratis Petrides, as part of the 2023 MFEM Workshop Visualization Contest. Density-based Topology Optimization for Cantilever beam with SiMPL method . Image courtesy of Dohyun Kim, as part of the 2024 MFEM Workshop Visualization Contest. \ud83c\udfac Shape interpolation between a torus and a bunny by computing their generalized Wasserstein barycenter. This barycenter is obtained by solving a mean-field optimal control problem. Image courtesy of Arjun Vijaywargiya, as part of the 2024 MFEM Workshop Visualization Contest. Streamlines of the magnetic field from a parallel computation of the magnetostatic interaction of two magnetic orbs. Visualization with VTK . Test of the propagation of a spherical shock wave through a random non-conforming mesh in the MFEM-based BLAST shock hydrodynamics code. Visualization with GLVis . Slice image of the high harmonic fast wave propagation in the NSTX-U magnetic fusion device. Computed using MFEM's 4th order H(curl) elements by the RF-SciDAC project . An electromagnetic eigenmode of a star-shaped domain computed with 3rd order finite elements computed with Example 13 . High-order multi-material inertial confinement fusion (ICF)-like implosion in the MFEM-based BLAST shock hydrodynamics code. Visualization with VisIt . Two-region AMR mesh generated by the Shaper miniapp from successive adaptation to the outlines of Australia. Radiating Kelvin-Helmholtz modeled with the MFEM-based BLAST shock hydrodynamics code. Volume visualization with VisIt . \ud83c\udfac Simulation-driven r-adaptivity using TMOP for a three-material high-velocity gas impact in BLAST . Visualization with VisIt . The Shaper miniapp applied to a multi-material input functions described by the iterates of the Mandelbrot set. Visualization with GLVis . Topology optimization of a drone body using LLNL's LiDO project , based on MFEM. Compressible Euler equations, Mach 3 flow around a cylinder in 2D, stabilized DG-P1 spacial discretization. Image courtesy of Hennes Hajduk, as part of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Axisymmetric computation of an air flow in a tube with continuous Galerkin discretization. Image courtesy of Raphael Zanella, as part of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Inviscid Kelvin-Helmholtz instability using high-order invariant domain preserving discontinuous Galerkin methods with convex limiting. Image courtesy of Will Pazner, as part of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Compressible Euler in Lagrangian frame using the Laghos miniapp. Image courtesy of Vladimir Tomov, as part of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Adaptive, implicit resistive MHD solver (from TDS-SciDAC ) resolves multi-scale features of plasmoid instability. \ud83c\udfac Topology-optimized heat sink obtained by minimizing the thermal energy in a domain with constant internal heating. Image courtesy of Tobias Duswald, winner of the 2022 MFEM Workshop Visualization Contest. \ud83c\udfac Leapfrogging vortex rings using an incompressible Schr\u00f6dinger fluid solver in MFEM. 
Image courtesy of John Camier, winner of the 2023 MFEM Workshop Visualization Contest. Displacement distribution of a loaded excavator arm under static equilibrium using MFEM's API in an external library. Image courtesy of Mehran Ebrahimi, winner of the 2023 MFEM Workshop Visualization Contest. \ud83c\udfac Multi-component topology optimization with conformal meshes. Image courtesy of Mathias Schmidt, winner of the 2024 MFEM Workshop Visualization Contest. Electric field induced by an MRI gradient coil in a human body. Simulation by the Magnetic Resonance Physics and Instrumentation Group at Harvard Medical School. Multi-mode Rayleigh-Taylor instability simulation using 4th order mixed elements in the MFEM-based BLAST shock hydrodynamics code. Visualization with VisIt . Purely Lagrangian Rayleigh-Taylor instability simulation using 8th order mixed elements in the MFEM-based BLAST shock hydrodynamics code. Visualization with GLVis . Anisotropic refinement in a 2D shock-like AMR test problem. Visualization with GLVis . Parallel version of Example 1 on 100 processors with a relatively coarse version of square-disc.mesh . Visualization with GLVis . Anisotropic refinement in a 3D version of the AMR test. Portion of the spherical domain is cut away in GLVis . Structural topology optimization with MFEM in LLNL's Center for Design and Optimization . Test of the anisotropic refinement feature on a random mesh. A slightly modified version of Example 1 . Visualization with GLVis . Level lines in a cutting plane of the solution from the parallel version of Example 1 on 64 processors with fichera.mesh . Visualization with GLVis . Cut image of the solution from Example 1 on a sharply twisted, high order toroidal mesh. The mesh was generated with the toroid miniapp. Cut image of an induction coil mesh and three sub-meshes created with the Trimmer miniapp. Visualization with VisIt . Viscoelastic flow of blood through an artery with aneurysm modeled by the Hookean dumbbell model discretized with BCF-method (Navier-Stokes+SUPG). Image courtesy of Andreas Meier, as part of the 2021 MFEM Workshop Visualization Contest. Visualization of time-averaged mean flow from a compressible, DG Navier-Stokes solver using MFEM modeling a plasma torch. Image courtesy of Karl W. Schulz, as part of the 2021 MFEM Workshop Visualization Contest. Streamlines of the electric field generated by a current dipole source located in the temporal lobe of an epilepsy patient. Image courtesy of Ben Zwick, winner of the 2022 MFEM Workshop Visualization Contest. Flow through periodic Gyroid micro-cell, MFEM Navier mini-app with additional Brinkman penalization. Image courtesy of Mathias Schmidt, as part of the 2022 MFEM Workshop Visualization Contest. \ud83c\udfac Turbulence effect of the Kelvin-Helmholtz instability in tokamak edge plasma using an MHDeX code developed at LLNL. Image courtesy of Milan Holec, as part of the 2023 MFEM Workshop Visualization Contest. \ud83c\udfac Topology optimization with conformal meshes to maximize beam stiffness under a downward force on the right wall. Image courtesy of Ketan Mittal and Mathias Schmidt, as part of the 2023 MFEM Workshop Visualization Contest. Penrose unilluminable room appears rather illuminable in 3D (at least when constructed as a solid of revolution). Image courtesy of Amit Rotem, as part of the 2023 MFEM Workshop Visualization Contest. Heat flux magnitude in a convection - (anisotropic) diffusion simulation with MFEM text as the initial temperature profile. 
A single implicit step of the HDG scheme was used. Image courtesy of Jan Nikl, winner of the 2024 MFEM Workshop Visualization Contest.", "title": "Gallery"}, {"location": "gallery/#gallery", "text": "This page collects screenshots from various simulations based on MFEM. Image captions with \ud83c\udfac link to simulation videos. Additional images can be found in the GLVis gallery . A version of the MFEM logo demonstrating curvilinear elements, adaptive mesh refinement and (idealized) parallel partitioning. Visualization with GLVis . Incompressible Taylor-Green vortex simulation with high-order finite elements. Visualization with ParaView . Fibers generated by LDRB approach based on 4 Laplacian solves in the Cardioid project. Solution of a Maxwell problem on a Klein bottle. Mesh generated with the klein-bottle miniapp. Solution with Example 3 . Comparisons of equipotential surfaces and force lines from Maxwell's Treatise on Electricity and Magnetism with results from MFEM's Volta miniapp . Level surfaces in the interior of the solution from Example 1 on escher.mesh . Visualization with GLVis . 3D Arbitrary Lagrangian-Eulerian (ALE) simulation of a shock-triple point interaction with Q2-Q1 elements in the MFEM-based BLAST shock hydrodynamics code. Volume visualization with VisIt . Modeling elastic-plastic flow in the 3D Taylor high-velocity impact problem using 4th order mixed elements in the MFEM-based BLAST shock hydrodynamics code. Visualization with VisIt . Poisson problem on a \"Breather\" surface. Mesh generated with the Mesh Explorer miniapp. Solution with Example 1 . Triple point shock interaction on 4 elements of order 12. Note the element curvature and the high variation of the field inside the lower right element. Visualization of the electric field generated by the electrical wave on rabbit heart ventricles during depolarization of the heart. Image courtesy of Dennis Ogiermann, winner of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Incompressible fluid flow around a rotating turbine using a space-time embedded-hybridized discontinuous Galerkin discretization. Image courtesy of Tamas Horvath, winner of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Magnetic diffusion problem solved to compute the magnetic field induced by current running through copper wire in air. Image courtesy of Will Pazner, winner of the 2022 MFEM Workshop Visualization Contest. \ud83c\udfac Shock-bubble-interaction using a Property-preserving discontinuous Galerkin scheme, see book . Image courtesy of Hennes Hajduk, as part of the 2023 MFEM Workshop Visualization Contest. \ud83c\udfac Re=50,000 incompressible Navier-Stokes wall-resolved LES of a NACA 0012 airfoil in stall regime using MFEM's Navier miniapp. Image courtesy of \u00c9tienne Spieser, as part of the 2023 MFEM Workshop Visualization Contest. \ud83c\udfac Plane wave scattering from a cube using a DPG Ultraweak formulation in MFEM to solve the time-harmonic linear acoustics equations. Image courtesy of Socratis Petrides, as part of the 2023 MFEM Workshop Visualization Contest. Density-based Topology Optimization for Cantilever beam with SiMPL method . Image courtesy of Dohyun Kim, as part of the 2024 MFEM Workshop Visualization Contest. \ud83c\udfac Shape interpolation between a torus and a bunny by computing their generalized Wasserstein barycenter. This barycenter is obtained by solving a mean-field optimal control problem. Image courtesy of Arjun Vijaywargiya, as part of the 2024 MFEM Workshop Visualization Contest. 
Streamlines of the magnetic field from a parallel computation of the magnetostatic interaction of two magnetic orbs. Visualization with VTK . Test of the propagation of a spherical shock wave through a random non-conforming mesh in the MFEM-based BLAST shock hydrodynamics code. Visualization with GLVis . Slice image of the high harmonic fast wave propagation in the NSTX-U magnetic fusion device. Computed using MFEM's 4th order H(curl) elements by the RF-SciDAC project . An electromagnetic eigenmode of a star-shaped domain computed with 3rd order finite elements computed with Example 13 . High-order multi-material inertial confinement fusion (ICF)-like implosion in the MFEM-based BLAST shock hydrodynamics code. Visualization with VisIt . Two-region AMR mesh generated by the Shaper miniapp from successive adaptation to the outlines of Australia. Radiating Kelvin-Helmholtz modeled with the MFEM-based BLAST shock hydrodynamics code. Volume visualization with VisIt . \ud83c\udfac Simulation-driven r-adaptivity using TMOP for a three-material high-velocity gas impact in BLAST . Visualization with VisIt . The Shaper miniapp applied to a multi-material input functions described by the iterates of the Mandelbrot set. Visualization with GLVis . Topology optimization of a drone body using LLNL's LiDO project , based on MFEM. Compressible Euler equations, Mach 3 flow around a cylinder in 2D, stabilized DG-P1 spacial discretization. Image courtesy of Hennes Hajduk, as part of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Axisymmetric computation of an air flow in a tube with continuous Galerkin discretization. Image courtesy of Raphael Zanella, as part of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Inviscid Kelvin-Helmholtz instability using high-order invariant domain preserving discontinuous Galerkin methods with convex limiting. Image courtesy of Will Pazner, as part of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Compressible Euler in Lagrangian frame using the Laghos miniapp. Image courtesy of Vladimir Tomov, as part of the 2021 MFEM Workshop Visualization Contest. \ud83c\udfac Adaptive, implicit resistive MHD solver (from TDS-SciDAC ) resolves multi-scale features of plasmoid instability. \ud83c\udfac Topology-optimized heat sink obtained by minimizing the thermal energy in a domain with constant internal heating. Image courtesy of Tobias Duswald, winner of the 2022 MFEM Workshop Visualization Contest. \ud83c\udfac Leapfrogging vortex rings using an incompressible Schr\u00f6dinger fluid solver in MFEM. Image courtesy of John Camier, winner of the 2023 MFEM Workshop Visualization Contest. Displacement distribution of a loaded excavator arm under static equilibrium using MFEM's API in an external library. Image courtesy of Mehran Ebrahimi, winner of the 2023 MFEM Workshop Visualization Contest. \ud83c\udfac Multi-component topology optimization with conformal meshes. Image courtesy of Mathias Schmidt, winner of the 2024 MFEM Workshop Visualization Contest. Electric field induced by an MRI gradient coil in a human body. Simulation by the Magnetic Resonance Physics and Instrumentation Group at Harvard Medical School. Multi-mode Rayleigh-Taylor instability simulation using 4th order mixed elements in the MFEM-based BLAST shock hydrodynamics code. Visualization with VisIt . Purely Lagrangian Rayleigh-Taylor instability simulation using 8th order mixed elements in the MFEM-based BLAST shock hydrodynamics code. Visualization with GLVis . 
Anisotropic refinement in a 2D shock-like AMR test problem. Visualization with GLVis . Parallel version of Example 1 on 100 processors with a relatively coarse version of square-disc.mesh . Visualization with GLVis . Anisotropic refinement in a 3D version of the AMR test. Portion of the spherical domain is cut away in GLVis . Structural topology optimization with MFEM in LLNL's Center for Design and Optimization . Test of the anisotropic refinement feature on a random mesh. A slightly modified version of Example 1 . Visualization with GLVis . Level lines in a cutting plane of the solution from the parallel version of Example 1 on 64 processors with fichera.mesh . Visualization with GLVis . Cut image of the solution from Example 1 on a sharply twisted, high order toroidal mesh. The mesh was generated with the toroid miniapp. Cut image of an induction coil mesh and three sub-meshes created with the Trimmer miniapp. Visualization with VisIt . Viscoelastic flow of blood through an artery with aneurysm modeled by the Hookean dumbbell model discretized with BCF-method (Navier-Stokes+SUPG). Image courtesy of Andreas Meier, as part of the 2021 MFEM Workshop Visualization Contest. Visualization of time-averaged mean flow from a compressible, DG Navier-Stokes solver using MFEM modeling a plasma torch. Image courtesy of Karl W. Schulz, as part of the 2021 MFEM Workshop Visualization Contest. Streamlines of the electric field generated by a current dipole source located in the temporal lobe of an epilepsy patient. Image courtesy of Ben Zwick, winner of the 2022 MFEM Workshop Visualization Contest. Flow through periodic Gyroid micro-cell, MFEM Navier mini-app with additional Brinkman penalization. Image courtesy of Mathias Schmidt, as part of the 2022 MFEM Workshop Visualization Contest. \ud83c\udfac Turbulence effect of the Kelvin-Helmholtz instability in tokamak edge plasma using an MHDeX code developed at LLNL. Image courtesy of Milan Holec, as part of the 2023 MFEM Workshop Visualization Contest. \ud83c\udfac Topology optimization with conformal meshes to maximize beam stiffness under a downward force on the right wall. Image courtesy of Ketan Mittal and Mathias Schmidt, as part of the 2023 MFEM Workshop Visualization Contest. Penrose unilluminable room appears rather illuminable in 3D (at least when constructed as a solid of revolution). Image courtesy of Amit Rotem, as part of the 2023 MFEM Workshop Visualization Contest. Heat flux magnitude in a convection - (anisotropic) diffusion simulation with MFEM text as the initial temperature profile. A single implicit step of the HDG scheme was used. 
Image courtesy of Jan Nikl, winner of the 2024 MFEM Workshop Visualization Contest.", "title": "Gallery"}, {"location": "getting-started/", "text": "Getting Started We recommend that new users start with these articles: Building and Running Examples Building MFEM Serial Tutorial Parallel Tutorial Browse Example Codes Code Documentation Code Overview Finite Element Classes and Concepts Doxygen Documentation HowTo Articles More Advanced Topics GPU Support Performance and Partial Assembly Example Mini Applications Electromagnetics Miniapps Fluid Dynamics Miniapp Meshing Miniapps AD Miniapps Mini Application Theory Notes Tesla Miniapp Theory Maxwell Miniapp Theory", "title": "Getting Started"}, {"location": "getting-started/#getting-started", "text": "We recommend that new users start with these articles:", "title": "Getting Started"}, {"location": "getting-started/#building-and-running-examples", "text": "Building MFEM Serial Tutorial Parallel Tutorial Browse Example Codes", "title": "Building and Running Examples"}, {"location": "getting-started/#code-documentation", "text": "Code Overview Finite Element Classes and Concepts Doxygen Documentation HowTo Articles", "title": "Code Documentation"}, {"location": "getting-started/#more-advanced-topics", "text": "GPU Support Performance and Partial Assembly", "title": "More Advanced Topics"}, {"location": "getting-started/#example-mini-applications", "text": "Electromagnetics Miniapps Fluid Dynamics Miniapp Meshing Miniapps AD Miniapps", "title": "Example Mini Applications"}, {"location": "getting-started/#mini-application-theory-notes", "text": "Tesla Miniapp Theory Maxwell Miniapp Theory", "title": "Mini Application Theory Notes"}, {"location": "gpu-support/", "text": "GPU support in MFEM MFEM relies mainly on two features for running algorithms on devices such as GPUs: The memory manager transparently handles the movement of data between the host (CPU) and the device (e.g. GPU), The mfem::forall function to abstract for loops to parallelize the execution on an arbitrary device. Vector u; Vector v; // ... const auto u_data = u.Read(); // Express the intent to read u auto v_data = v.ReadWrite(); // Express the intent to read and write v // Abstract the loop: for(int i=0; i < N; i++) mfem::forall(N, [=] MFEM_HOST_DEVICE (int i) { v_data[i] *= u_data[i]; }); Memory manager The data of Vector and Array objects is stored in Memory objects. The Memory objects handle host and device pointers, memory allocations, and data synchronizations between host and device. To get the pointer T* from a Memory object, one has to use the Read() , Write() , or ReadWrite() methods. Read() returns a const T* pointer, and should be used when the data will only be read, Write() returns a T* pointer, and should be used when writing data without using any previously contained data, ReadWrite() returns a T* pointer, and should be used when read and write access to the pointer are required. Read() , Write() , and ReadWrite() automatically handle data movement between the host and device. They can optimize data transfer, since e.g. data that is declared as Write() on the host/device need not be updated from the device/host. The method void UseDevice(bool) specifies if a Memory object is intended for computation on host or on device. The Read() , Write() , and ReadWrite() methods will return a device pointer if using the device has been set to true with UseDevice ; by default it is false and they will return a host pointer. Sometimes, it is necessary to access the data specifically on the host. In this case the HostRead() , HostWrite() , and HostReadWrite() methods should be used. 
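The same intents can be expressed through the Vector front end discussed next; a small sketch (the size and the computation are illustrative) is:
const int N = 256;
Vector x(N), y(N);
x.UseDevice(true); y.UseDevice(true);
double *h_x = x.HostWrite();      // fill x on the host
for (int i = 0; i < N; i++) { h_x[i] = i; }
const double *d_x = x.Read();     // host-to-device copy happens here, if needed
double *d_y = y.Write();          // no copy: the previous contents of y are not needed
mfem::forall(N, [=] MFEM_HOST_DEVICE (int i) { d_y[i] = 2.0 * d_x[i]; });
const double *h_y = y.HostRead(); // device-to-host copy to inspect the result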
In practice, developers rarely have to manipulate Memory objects, instead objects data can be stored using Vector and Array . Vector and Array data pointers can be accessed with the same methods as Memory . Vector v; v.UseDevice(true); const double *device_ptr = v.Read(); const double *host_ptr = v.HostRead(); mfem::forall The idea behind the mfem::forall function is to have the same behavior as a for loop and hide all device-specific code in order to enable performance portability. Example: for (int i = 0; i < N; i++) { ... } becomes mfem::forall(N, [=] MFEM_HOST_DEVICE (int i) { ... }); One class that is convenient to use in combination with the memory manager and mfem::forall is DeviceTensor : an N dimensional array containing elements of type T , which by default is double . The Reshape function reshapes its input into such an N dimensional array: Vector a; a.UseDevice(true); const int p = ...; const int q = ...; const int r = ...; const int N = ...; auto A = Reshape(a.Write(), p, q, r, N); // returns a DeviceTensor<4,double> mfem::forall(N, [=] MFEM_HOST_DEVICE (int n) { for (int k = 0; k < r; k++) for (int j = 0; j < q; j++) for (int i = 0; i < p; i++) A(i,j,k,n) = ...; }); Several variants of mfem::forall exist, such as mfem::forall_2D and mfem::forall_3D , to help map 2D or 3D blocks of threads to the hardware more efficiently. In the case of a GPU, mfem::forall_3D(N, X,Y,Z, [=] MFEM_HOST_DEVICE (int n){...}) will declare N block of threads each of size X x Y x Z threads, whereas mfem::forall uses N/MFEM_CUDA_BLOCKS block of threads each of size MFEM_CUDA_BLOCKS = 256 threads. Using mfem::forall_3D (and mfem::forall_2D ) over mfem::forall results in a higher level of parallelism, the former using N x X x Y x Z software threads and the latter only N software threads. In order to exploit 2D or 3D blocks of threads, it is convenient to use the macro MFEM_FOREACH_THREAD(i,x,p) to use threads as a for loop. The first variable i is the name of the \"loop\" variable, x is the threadId (it can take the values x , y , or z ), and p is the loop upper bound. If we rewrite the previous example using mfem::forall_3D and MFEM_FOREACH_THREAD , we get: Vector a; a.UseDevice(true); const int p = ...; const int q = ...; const int r = ...; const int N = ...; auto A = Reshape(a.Write(), p, q, r, N); // returns a DeviceTensor<4,double> mfem::forall_3D(N, p, q, r, [=] MFEM_HOST_DEVICE (int n) { MFEM_FOREACH_THREAD(k,z,r) MFEM_FOREACH_THREAD(j,y,q) MFEM_FOREACH_THREAD(i,x,p) A(i,j,k,n) = ...; }); The reasons for this more complex syntax is to better utilize the hardware, GPUs in particular. Using mfem::forall_3D and MFEM_FOREACH_THREAD allows to use more concurrency N x X x Y x Z threads instead of only N threads with mfem::forall , but more importantly the memory accesses on A(i,j,k,n) are much better with mfem::forall_3D . With mfem::forall_3D , threads access consecutive memory (i.e. coalesced memory access). Because most applied math algorithms are memory bound, having coalesced memory accesses is critical to achieve high performance. Achieving high performance on GPUs Finite element algorithms are usually memory bound on GPUs, and therefore in order to achieve peak performance one has to maximize the utilization of the different memory bandwidths . In particular, the main memory, or device memory, is the memory that has to be maximally used (i.e. saturated ) in order to achieve peak performance. 
It is important not to saturate any memory bandwidth other than the main memory bandwidth; doing so will decrease the main memory throughput by creating memory bandwidth bottlenecks. Maximizing the main memory bandwidth is achieved by issuing enough memory transactions and using the transferred data efficiently. The more computationally light a kernel is, the more frequently memory transactions are issued, and if there is no memory bandwidth saturated other than the main memory bandwidth, e.g. shared or L1 memory, then the first condition to achieve peak performance is fulfilled. Memory is transferred by contiguous blocks, called cache-line , which are typically the size of 32 float , or 16 double . Since each cache-line is a block of contiguous memory, it is common to over-fetch data when accessing non-contiguous memory addresses (because not all the data is used in each cache-line). In the worst case, only one float of each cache-line is used, resulting in only 1/32 of the data transferred being used; such a kernel is potentially 32 times slower than a kernel that fully utilizes the data in each cache line. When a kernel is carefully written to use all the data from each cache-line, the memory accesses are often referred to as coalesced memory accesses. Having coalesced memory access kernels is critical to achieving peak performance. In terms of parallelization, when seeing GPUs as having only one level of parallelism over threads, severe constraints are imposed on the kernels in order to achieve high performance. Each thread is limited to 255 float registers; using more registers results in what is known as register spilling, which significantly impacts performance. This is why this type of parallelization strategy should only be used for the simplest kernels. Therefore, it is usually a good strategy to see GPUs as having two levels of parallelism: the coarse parallelism level among blocks of threads, and the fine parallelism level among threads in a block of threads. Threads in different blocks of threads can only exchange data through the main memory; therefore, data exchange between blocks of threads should be kept to the absolute minimum. Threads inside a block of threads can exchange data efficiently by using the shared memory . Shared memory can also be used to store data common between threads, but stored data should be carefully managed due to the very limited storage capacity of the shared memory. Due to their low arithmetic intensity, finite element algorithms often require a significant amount of shared memory bandwidth to exchange information between threads in a block. High shared memory bandwidth usage is a common bottleneck to achieving high performance. In order to be used efficiently, shared memory also requires specific memory access patterns to prevent bank conflicts . When bank conflicts occur, memory accesses are serialized instead of being parallel. Each cache line in the shared memory is linearly spread over the shared memory banks; if the threads in a block of threads access different data in the same bank, then a bank conflict occurs. However, if the threads in a block access the same data in a bank, or different data in different banks, then the memory accesses can occur optimally in parallel.", "title": "Achieving high performance on GPUs"}, {"location": "gpu-support/#profiling-on-nvidia-gpus", "text": "When profiling to improve the performance of a memory bound kernel, we recommend the following steps: Measure the main memory bandwidth and efficiency: this tells us how far from peak throughput we are. 
Ensure that no register spills are occurring: most kernels can be written without any register spilling. Measure the shared memory bandwidth and efficiency: try to prevent the shared memory from being the performance bottleneck.", "title": "Profiling on NVIDIA GPUs"}, {"location": "gpu-support/#optimizing-the-main-memory-usage", "text": "The first thing we need to know is how far we are from peak throughput and how efficiently the main memory is accessed. For instance, with nvprof the following command nvprof --metrics gld_throughput,gst_throughput,gld_efficiency,gst_efficiency gives us the desired information. The sum of the load throughput ( gld_throughput ) and store throughput ( gst_throughput ) should be as close as possible to the main memory maximum bandwidth. gld_efficiency and gst_efficiency inform us of the ratio of requested global memory load/store throughput to required global memory load/store throughput, expressed as a percentage. As mentioned above, efficiency issues are critical to achieving peak performance and are solved by coalescing memory accesses. Once we know how far we are from peak throughput, one should look at the main stall reasons to get an idea of what might be slowing down the kernels: Instruction Fetch \u2014 The next assembly instruction has not yet been fetched. Memory Throttle \u2014 A large number of pending memory operations prevent further forward progress. These can be reduced by combining several memory transactions into one. Memory Dependency \u2014 A load/store cannot be made because the required resources are not available or are fully utilized, or too many requests of a given type are outstanding. Memory dependency stalls can potentially be reduced by optimizing memory alignment and access patterns. Synchronization \u2014 The warp is blocked at a __syncthreads() call. Execution Dependency \u2014 An input required by the instruction is not yet available. Execution dependency stalls can potentially be reduced by increasing instruction-level parallelism. You can use nvprof --metrics with: stall_inst_fetch for the percentage of stalls occurring because of instruction fetch, stall_exec_dependency for the percentage of stalls occurring because of execution dependency, stall_memory_dependency for the percentage of stalls occurring because of a memory dependency, stall_memory_throttle for the percentage of stalls occurring because of memory throttle, stall_sync for the percentage of stalls occurring because the warp is blocked at a __syncthreads() call. Optimizing the register usage Register spilling can be detected in two ways: Compile for CUDA with -Xptxas=\"-v\" which reports at compilation the register usage and spills for each kernel. Measure the local memory transfers with a profiler to check if there are register spills. nvprof --metrics local_load_transactions,local_store_transactions --kernels myKernel should be 0 . Register spills happen for two main reasons: Each thread uses too many registers, Array indices are not known at compilation time. When each thread uses too many registers, it is often useful to redesign the kernel to use more threads per block to perform the computation; this lowers the number of registers used per thread but usually increases the shared memory usage due to more distributed data. Indices that are not known at compilation time can often be resolved by simply unrolling loops with MFEM_UNROLL and making sure that all the necessary information to compute the indices is known at compilation time. Roofline model A roofline model helps predict the peak performance achievable by a specific algorithm. 
The arithmetic intensity is the ratio of the total number of operations to the amount of data moved to and from the main memory. By dividing the maximum FLOPs by the maximum bandwidth, we get an arithmetic intensity threshold value between the two main regimes of a GPU. A kernel with an arithmetic intensity below or above the threshold value will be memory bound or computation bound, respectively. For in-depth performance analysis we recommend looking at efficiency issues. The list of all the possible metrics for nvprof is available here . Tips & Tricks Compile in debug mode when developing for devices The memory manager performs checks that catch most of the misuse of the memory on host or device. When using device debug, if your code fails you can run gdb or lldb , and set a breakpoint at b mfem::mfem_error . The code will break as soon as it reaches this point and then you can backtrace bt from here to see what went wrong and where. Forcing synchronization with the host or the device It is sometimes necessary to force synchronization between host and device data. In order to make sure that the host data is synchronized one should use HostRead() ; similarly, to ensure synchronized data on the device one should use Read() . Do not use GetData Do not use GetData() to access a pointer for device work since this will always return the host pointer without synchronizing the data with the device. Tracking data movements and allocations Compiling MFEM with MFEM_TRACK_CUDA_MEM can help by printing when data is transferred, allocated, etc. Large amounts of data movement between host and device should be avoided at all costs. Pinpoint where this is occurring and see if you can refactor your code so the data stays mainly on the device. Avoid allocating GPU memory too frequently; CUDA malloc calls are slow and can hinder performance. If you really need to allocate GPU memory frequently, consider using a memory pool (e.g. Umpire ); that way the mallocs are much cheaper on the GPU. The UseDevice function It is a good practice to call UseDevice(true) on any Vector intended to go on device right after constructing it. Vector v; v.UseDevice(true); Be aware that UseDevice() is not the same as UseDevice(true) ; the first one just returns a boolean that tells you whether the object is intended for computation on the device or not. Using constexpr inside mfem::forall constexpr P = ...; // Results in an error on MSVC mfem::forall(N, [=] MFEM_HOST_DEVICE (int n) { double my_data[P]; }); The mfem::forall macro relies on lambda capturing in C++. One issue that comes up with compilers such as MSVC is the capturing of constexpr variables inside mfem::forall . According to the C++ standard, constexpr variables do not need to be captured, and should not lose their const-ness in a lambda. However, on MSVC (e.g. in the MFEM AppVeyor CI checks), this can result in errors like: error C2131: expression did not evaluate to a constant A simple fix for this error is to declare the constexpr variable as static constexpr . static constexpr P = ...; // Omitting the static results in an error on MSVC mfem::forall(N, [=] MFEM_HOST_DEVICE (int n) { double my_data[P]; }); Similar problems and workarounds are discussed here . Error: \"alias not found\" This error message indicates that you are trying to move an \"alias\" Vector to GPU while its \"base\" Vector did not have a GPU allocation (valid or not) when the alias was created (and may still not have a GPU allocation when the move of the \"alias\" was attempted). 
This is another case where we cannot update the \"base\" Vector because we do not have access to it (and even if we did, there are complications). This can be avoided if one follows the following rule: if you are creating an \"alias\" that will be used on device, you need to ensure that the \"base\" is allocated on that device. Depending on the context, one can use different methods to do that. For example, if the \"base\" is initialized (on host, otherwise there will be no issue) in the same function that will create the alias, one can call base.Write() to create the device allocation followed by base.HostWrite() and then initialize \"base\" on host -- this sequence avoids any unnecessary host-device transfers. Another example: if the \"base\" was initialized outside of the function where the \"alias\" is created, then the most appropriate choice probably is to call base.Read() before creating the \"alias\". Since the alias will need the data on device, the incurred host-to-device transfer is (at least partially) necessary anyway. Ideally, \"base\" Vectors that will be modified/accessed on device through aliases should be allocated on device to begin with, e.g. using Vector::SetSize(int s, MemoryType mt) typically with mt = Device::GetDeviceMemoryType() . MakeRef vectors do not see the same valid host/device data as their base vector Consider the following code snippet where the vector w is defined from v using the MakeRef() method: const int vSize = 10; Vector v; v.UseDevice(true); v.SetSize(vSize); v = 0.0; cout << \"IsHost(v) = \" << IsHostMemory(v.GetMemory().GetMemoryType()) << endl; Vector w; w.MakeRef(v, 0, vSize); cout << \"IsHost(w) = \" << IsHostMemory(w.GetMemory().GetMemoryType()) << endl; auto hv = v.HostWrite(); for (int j = 0; j < vSize; j++) { hv[j] = 1.0; } cout << \"IsHost(v) = \" << IsHostMemory(v.GetMemory().GetMemoryType()) << endl; cout << \"IsHost(w) = \" << IsHostMemory(w.GetMemory().GetMemoryType()) << endl; Vector z; z.UseDevice(true); z.SetSize(vSize); auto dz = z.Write(); auto dw = w.Read(); mfem::forall(vSize, [=] MFEM_HOST_DEVICE (int i) { dz[i] = dw[i]; }); z.HostRead(); cout << \"norm(z) = \" << z.Norml2() << endl; dz = z.Write(); auto dv = v.Read(); mfem::forall(vSize, [=] MFEM_HOST_DEVICE (int i) { dz[i] = dv[i]; }); z.HostRead(); cout << \"norm(z) = \" << z.Norml2() << endl; The resulting output may be unexpected: IsHost(v) = 0 IsHost(w) = 0 IsHost(v) = 1 IsHost(w) = 0 norm(z) = 0 norm(z) = 3.16228 Basically the issue is that the Memory objects (inside the Vector s) do not know about the other version, so they cannot update the validity flags (the host and device validity flags indicate which of the pointers has valid data) of the other Vector . Also such update may not make sense if you just moved the subvector. There is no easy way to keep the big \"base\" Vector v and the \"alias\" subvector w synchronized when they are being moved/copied between host and device. Therefore such synchronizations need to be done \"manually\" using the methods Vector::SyncMemory and Vector::SyncAliasMemory . In the example above, after you move the \"base\" Vector v to host, you need to \"inform\" the \"alias\" w that the validity flags of its base have been changed. This is done by calling w.SyncMemory(v) which simply copies the validity flags from v to w , there are no host-device memory transfers involved. 
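In code, the fix for the example above looks roughly like this (a sketch continuing the MakeRef snippet; the values written are illustrative):
auto hv = v.HostWrite();     // v's data is now valid on the host only
for (int j = 0; j < vSize; j++) { hv[j] = 1.0; }
w.SyncMemory(v);             // copy v's validity flags to w; no data is transferred
const double *dw = w.Read(); // the host-to-device copy happens here, when w is next used on device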
On the other hand, if in the example you moved w to host and modified it there, and then you want to access the data through the base Vector v (you can think of the more general case here, when w is smaller than v ) then you need to call w.SyncAliasMemory(v) . In this particular case, the call will move the subvector described by w from host to device and update the validity flags of w to be the same as the ones of v . This way the whole Vector v gets the real data in one location -- before the call part of it was on device and the part described by w was on host. Both w.SyncMemory(v) and w.SyncAliasMemory(v) ensure that w gets the validity flags of v , the difference is where the real data is before the call -- in the first case the real data is in v and in the second, it is in w .", "title": "GPU Support"}, {"location": "gpu-support/#gpu-support-in-mfem", "text": "MFEM relies mainly on two features for running algorithms on devices such as GPUs: The memory manager handles transparently the moving of data between the host (CPU) and the device (e.g. GPU), The mfem::forall function to abstract for loops to parallelize the execution on an arbitrary device. Vector u; Vector v; // ... const auto u_data = u.Read(); // Express the intent to read u auto v_data = v.ReadWrite(); // Express the intent to read and write v // Abstract the loop: for(int i=0; i objects. The Memory objects handle host and device pointers, memory allocations, and data synchronizations between host and device. To get the pointer T* from a Memory object, one has to use the Read() , Write() , or ReadWrite() methods. Read() returns a const T* pointer, and should be used when the data will only be read, Write() returns a T* pointer, and should be used when writing data without using any previously contained data, ReadWrite() returns T* pointer, and should be used when read and write access to the pointer are required. Read() , Write() , and ReadWrite() automatically handle data movement between the host and device. They can optimize data transfer, since e.g. data that is declared as Write() on the host/device need not be updated from the device/host. The method void UseDevice(bool) specifies if a Memory object is intended for computation on host or on device. The Read() , Write() , and ReadWrite() methods will return device pointer if using the device has been set to true with UseDevice , by default it is false and will return a host pointer. Sometimes, it is necessary to access the data specifically on the host. In this case the HostRead() , HostWrite() , and HostReadWrite() methods should be used. In practice, developers rarely have to manipulate Memory objects, instead objects data can be stored using Vector and Array . Vector and Array data pointers can be accessed with the same methods as Memory . Vector v; v.UseDevice(true); const double *device_ptr = v.Read(); const double *host_ptr = v.HostRead();", "title": "Memory manager"}, {"location": "gpu-support/#mfemforall", "text": "The idea behind the mfem::forall function is to have the same behavior as a for loop and hide all device-specific code in order to enable performance portability. Example: for (int i = 0; i < N; i++) { ... } becomes mfem::forall(N, [=] MFEM_HOST_DEVICE (int i) { ... }); One class that is convenient to use in combination with the memory manager and mfem::forall is DeviceTensor : an N dimensional array containing elements of type T , which by default is double . 
The Reshape function reshapes its input into such an N dimensional array: Vector a; a.UseDevice(true); const int p = ...; const int q = ...; const int r = ...; const int N = ...; auto A = Reshape(a.Write(), p, q, r, N); // returns a DeviceTensor<4,double> mfem::forall(N, [=] MFEM_HOST_DEVICE (int n) { for (int k = 0; k < r; k++) for (int j = 0; j < q; j++) for (int i = 0; i < p; i++) A(i,j,k,n) = ...; }); Several variants of mfem::forall exist, such as mfem::forall_2D and mfem::forall_3D , to help map 2D or 3D blocks of threads to the hardware more efficiently. In the case of a GPU, mfem::forall_3D(N, X,Y,Z, [=] MFEM_HOST_DEVICE (int n){...}) will declare N block of threads each of size X x Y x Z threads, whereas mfem::forall uses N/MFEM_CUDA_BLOCKS block of threads each of size MFEM_CUDA_BLOCKS = 256 threads. Using mfem::forall_3D (and mfem::forall_2D ) over mfem::forall results in a higher level of parallelism, the former using N x X x Y x Z software threads and the latter only N software threads. In order to exploit 2D or 3D blocks of threads, it is convenient to use the macro MFEM_FOREACH_THREAD(i,x,p) to use threads as a for loop. The first variable i is the name of the \"loop\" variable, x is the threadId (it can take the values x , y , or z ), and p is the loop upper bound. If we rewrite the previous example using mfem::forall_3D and MFEM_FOREACH_THREAD , we get: Vector a; a.UseDevice(true); const int p = ...; const int q = ...; const int r = ...; const int N = ...; auto A = Reshape(a.Write(), p, q, r, N); // returns a DeviceTensor<4,double> mfem::forall_3D(N, p, q, r, [=] MFEM_HOST_DEVICE (int n) { MFEM_FOREACH_THREAD(k,z,r) MFEM_FOREACH_THREAD(j,y,q) MFEM_FOREACH_THREAD(i,x,p) A(i,j,k,n) = ...; }); The reasons for this more complex syntax is to better utilize the hardware, GPUs in particular. Using mfem::forall_3D and MFEM_FOREACH_THREAD allows to use more concurrency N x X x Y x Z threads instead of only N threads with mfem::forall , but more importantly the memory accesses on A(i,j,k,n) are much better with mfem::forall_3D . With mfem::forall_3D , threads access consecutive memory (i.e. coalesced memory access). Because most applied math algorithms are memory bound, having coalesced memory accesses is critical to achieve high performance.", "title": "mfem::forall"}, {"location": "gpu-support/#achieving-high-performance-on-gpus", "text": "Finite element algorithms are usually memory bound on GPUs, and therefore in order to achieve peak performance one has to maximize the utilization of the different memory bandwidths . In particular, the main memory, or device memory, is the memory that has to be maximally used (i.e. saturated ) in order to achieve peak performance. It is important to not saturate memory bandwidth other than the main memory bandwidth, failing to do so will decrease the main memory throughput by creating memory bandwidth bottlenecks. Maximizing the main memory bandwidth is achieved by issuing enough memory transactions and using efficiently the transferred data. The more computationally light a kernel is the more frequently memory transactions are issued, and if there is no memory bandwidth saturated other than the main memory bandwidth, e.g: shared or L1 memory, then the first condition to achieve peak performance is fulfilled. Memory is transferred by contiguous blocks, called cache-line , which are typically the size of 32 float , or 16 double . 
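As a concrete illustration of how access patterns interact with cache lines, consider the following hedged sketch; the kernel names and the stride are illustrative only, and x is assumed large enough for the strided reads:

#include "mfem.hpp"
using namespace mfem;

// Unit-stride copy: consecutive threads read consecutive entries, so every
// element of a fetched cache line is used (coalesced access).
void CopyUnitStride(const Vector &x, Vector &y, int n)
{
   const double *dx = x.Read();
   double *dy = y.Write();
   mfem::forall(n, [=] MFEM_HOST_DEVICE (int i) { dy[i] = dx[i]; });
}

// Strided copy: each thread touches a different cache line and uses only one
// element of it, so most of the transferred data is wasted.
void CopyStrided(const Vector &x, Vector &y, int n, int stride)
{
   const double *dx = x.Read();
   double *dy = y.Write();
   mfem::forall(n, [=] MFEM_HOST_DEVICE (int i) { dy[i] = dx[i*stride]; });
}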
Since each cache-line is a block of contiguous memory, it is common to over-fetch data when accessing non-contiguous memory addresses (because not all the data is used in each cache-line). In the worst case, only one float of each cache-line is used, resulting in only 1/32 of the transferred data being used; such a kernel is potentially 32 times slower than a kernel that fully utilizes the data in each cache line. When a kernel is carefully written to use all the data from each cache-line, the memory accesses are often referred to as coalesced memory accesses. Having coalesced memory access kernels is critical to achieving peak performance. In terms of parallelization, when GPUs are seen as having only one level of parallelism over threads, severe constraints are imposed on the kernels in order to achieve high performance. Each thread is limited to 255 float registers; using more registers results in what is known as register spilling, which significantly impacts performance. This is why this type of parallelization strategy should only be used for the simplest kernels. Therefore, it is usually a good strategy to see GPUs as having two levels of parallelism: the coarse parallelism level among blocks of threads, and the fine parallelism level among threads in a block of threads. Threads in different blocks of threads can only exchange data through the main memory; therefore, data exchange between blocks of threads should be kept to the absolute minimum. Threads inside a block of threads can exchange data efficiently by using the shared memory . Shared memory can also be used to store data common between threads, but stored data should be carefully managed due to the very limited storage capacity of the shared memory. Due to their low arithmetic intensity, finite element algorithms often require a significant amount of shared memory bandwidth to exchange information between threads in a block. High shared memory bandwidth usage is a common bottleneck to achieving high performance. In order to be used efficiently, shared memory also requires specific memory access patterns to prevent bank conflicts . When bank conflicts occur, memory accesses are serialized instead of being performed in parallel. Each cache line in the shared memory is linearly spread over the shared memory banks; if the threads in a block of threads access different data in the same bank then a bank conflict occurs. However, if the threads in a block access the same data in a bank, or different data in different banks, then the memory accesses can occur optimally in parallel.", "title": "Achieving high performance on GPUs"}, {"location": "gpu-support/#profiling-on-nvidia-gpus", "text": "When profiling to improve the performance of a memory bound kernel, we recommend the following steps: Measure the main memory bandwidth and efficiency: this tells us how far from peak throughput we are. Ensure that no register spills are occurring: most kernels can be written without any register spilling. Measure the shared memory bandwidth and efficiency: try to prevent the shared memory from being the performance bottleneck.", "title": "Profiling on NVIDIA GPUs"}, {"location": "gpu-support/#optimizing-the-main-memory-usage", "text": "The first thing we need to know is how far we are from peak throughput and how efficiently the main memory is accessed. For instance, with nvprof the following command nvprof --metrics gld_throughput,gst_throughput,gld_efficiency,gst_efficiency gives us the desired information.
The sum of the load throughput ( gld_throughput ) and store throughput ( gst_throughput ) should be as close as possible to the main memory maximum bandwidth. gld_efficiency and gst_efficiency inform us of the ratio of requested global memory load/store throughput to required global memory load/store throughput, expressed as a percentage. As mentioned above, efficiency issues are critical to achieving peak performance and are solved by coalescing memory accesses. Once we know how far we are from peak throughput, one should look at the main stall reasons to get an idea of what might be slowing down the kernels: Instruction Fetch \u2014 The next assembly instruction has not yet been fetched. Memory Throttle \u2014 A large number of pending memory operations prevent further forward progress. These can be reduced by combining several memory transactions into one. Memory Dependency \u2014 A load/store cannot be made because the required resources are not available or are fully utilized, or too many requests of a given type are outstanding. Memory dependency stalls can potentially be reduced by optimizing memory alignment and access patterns. Synchronization \u2014 The warp is blocked at a __syncthreads() call. Execution Dependency \u2014 An input required by the instruction is not yet available. Execution dependency stalls can potentially be reduced by increasing instruction-level parallelism. You can use nvprof --metrics with: stall_inst_fetch for the percentage of stalls occurring because of instruction fetch, stall_exec_dependency for the percentage of stalls occurring because of execution dependency, stall_memory_dependency for the percentage of stalls occurring because of a memory dependency, stall_memory_throttle for the percentage of stalls occurring because of memory throttle, stall_sync for the percentage of stalls occurring because the warp is blocked at a __syncthreads() call.", "title": "Optimizing the main memory usage"}, {"location": "gpu-support/#optimizing-the-register-usage", "text": "Register spilling can be detected in two ways: Compile for CUDA with -Xptxas=\"-v\" , which reports at compile time the register usage and spills for each kernel. Measure the local memory transfers with a profiler to check if there are register spills: nvprof --metrics local_load_transactions,local_store_transactions --kernels myKernel should be 0 . Register spills happen for two main reasons: Each thread uses too many registers, Array indices are not known at compilation time. When each thread uses too many registers it is often useful to redesign the kernel to use more threads per block to perform the computation; this lowers the number of registers used per thread but usually increases the shared memory usage due to more distributed data. Computing indices at compile time can often be achieved by simply unrolling loops with MFEM_UNROLL and making sure that all the necessary information to compute the indices is known at compilation time.", "title": "Optimizing the register usage"}, {"location": "gpu-support/#roofline-model", "text": "A roofline model helps predict the peak performance achievable by a specific algorithm. The arithmetic intensity is the ratio of the total number of operations to the amount of data moved from and to the main memory. By dividing the maximum FLOP rate by the maximum bandwidth, we get an arithmetic intensity threshold value between the two main regimes of a GPU.
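As a quick illustration with round, hypothetical numbers (not tied to any particular device): for a GPU with a peak of $10^{13}$ FLOP/s and a main memory bandwidth of $10^{12}$ B/s, the threshold is $$\\text{AI}_{\\text{threshold}} = \\frac{10^{13}\\ \\text{FLOP/s}}{10^{12}\\ \\text{B/s}} = 10\\ \\text{FLOP/byte}.$$ A double-precision vector update $y_i \\leftarrow y_i + a\\,x_i$ performs 2 FLOPs per entry while moving 24 bytes (two loads and one store), so its arithmetic intensity is about $2/24 \\approx 0.08$ FLOP/byte, far below such a threshold.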
A kernel with an arithmetic intensity below or above the threshold value will be memory bound or computation bound, respectively. For in-depth performance analysis, we recommend looking at efficiency issues. The list of all the possible metrics for nvprof is available here .", "title": "Roofline model"}, {"location": "gpu-support/#tips-tricks", "text": "", "title": "Tips & Tricks"}, {"location": "gpu-support/#compile-in-debug-mode-when-developing-for-devices", "text": "The memory manager performs checks that catch most misuses of the memory on host or device. When using device debug, if your code fails you can run gdb or lldb , and set a breakpoint at b mfem::mfem_error . The code will break as soon as it reaches this point and then you can backtrace bt from here to see what went wrong and where.", "title": "Compile in debug mode when developing for devices"}, {"location": "gpu-support/#forcing-synchronization-with-the-host-or-the-device", "text": "It is sometimes needed to force synchronization between host and device data. In order to make sure that the host data is synchronized one should use HostRead() ; similarly, to ensure synchronized data on the device one should use Read() .", "title": "Forcing synchronization with the host or the device"}, {"location": "gpu-support/#do-not-use-getdata", "text": "Do not use GetData() to access a pointer for device work since this will always return the host pointer without synchronizing the data with the device.", "title": "Do not use GetData"}, {"location": "gpu-support/#tracking-data-movements-and-allocations", "text": "Compiling MFEM with MFEM_TRACK_CUDA_MEM can help by printing when data is transferred, allocated, etc. Large amounts of data movement between host and device should be avoided at all costs. Pinpoint where this is occurring and see if you can refactor your code so the data stays mainly on the device. Avoid allocating GPU memory too frequently; CUDA malloc calls are slow and can hinder performance. If you really need to allocate GPU memory frequently, consider using a memory pool (e.g. Umpire ); that way the mallocs are much cheaper on the GPU.", "title": "Tracking data movements and allocations"}, {"location": "gpu-support/#the-usedevice-function", "text": "It is a good practice to call UseDevice(true) on any Vector intended to go on device right after constructing it. Vector v; v.UseDevice(true); Be aware that UseDevice() is not the same as UseDevice(true) ; the first one just returns a boolean that tells you whether the object is intended for computation on the device or not.", "title": "The UseDevice function"}, {"location": "gpu-support/#using-constexpr-inside-mfemforall", "text": "constexpr P = ...; // Results in an error on MSVC mfem::forall(N, [=] MFEM_HOST_DEVICE (int n) { double my_data[P]; }); The mfem::forall macro relies on lambda capturing in C++. One issue that comes up with compilers such as MSVC is the capturing of constexpr variables inside mfem::forall . According to the C++ standard, constexpr variables do not need to be captured, and should not lose their const-ness in a lambda. However, on MSVC (e.g. in the MFEM AppVeyor CI checks), this can result in errors like: error C2131: expression did not evaluate to a constant A simple fix for this error is to declare the constexpr variable as static constexpr .
static constexpr P = ...; // Omitting the static results in an error on MSVC mfem::forall(N, [=] MFEM_HOST_DEVICE (int n) { double my_data[P]; }); Similar problems and workarounds are discussed here .", "title": "Using constexpr inside mfem::forall"}, {"location": "gpu-support/#error-alias-not-found", "text": "This error message indicates that you are trying to move an \"alias\" Vector to GPU while its \"base\" Vector did not have a GPU allocation (valid or not) when the alias was created (and may still not have GPU allocation when the move of the \"alias\" was attempted). This is another case where we cannot update the \"base\" Vector because we do not have access to it (and even if we did, there are complications). This can be avoided if one follows the following rule: if you are creating an \"alias\" that will be used on device, you need to ensure that the \"base\" is allocated on that device. Depending on the context, one can use different methods to do that. For example, if the \"base\" is initialized (on host, otherwise there will be no issue) in the same function that will create the alias, one can call base.Write() to create the device allocation followed by base.HostWrite() and then initialize \"base\" on host -- this sequence avoids any unnecessary host-device transfers. Another example: if the \"base\" was initialized outside of the function where the \"alias\" is created, then the most appropriate choice probably is to call base.Read() before creating the \"alias\". Since the alias will need the data on device, the incurred host-to-device transfer is (at least partially) necessary anyway. Ideally, \"base\" Vectors that will be modified/accessed on device through aliases should be allocated on device to begin with, e.g. using Vector::SetSize(int s, MemoryType mt) typically with mt = Device::GetDeviceMemoryType() .", "title": "Error: \"alias not found\""}, {"location": "gpu-support/#makeref-vectors-do-not-see-the-same-valid-hostdevice-data-as-their-base-vector", "text": "Consider the following code snippet where the vector w is defined from v using the MakeRef() method: const int vSize = 10; Vector v; v.UseDevice(true); v.SetSize(vSize); v = 0.0; cout << \"IsHost(v) = \" << IsHostMemory(v.GetMemory().GetMemoryType()) << endl; Vector w; w.MakeRef(v, 0, vSize); cout << \"IsHost(w) = \" << IsHostMemory(w.GetMemory().GetMemoryType()) << endl; auto hv = v.HostWrite(); for (int j = 0; j < vSize; j++) { hv[j] = 1.0; } cout << \"IsHost(v) = \" << IsHostMemory(v.GetMemory().GetMemoryType()) << endl; cout << \"IsHost(w) = \" << IsHostMemory(w.GetMemory().GetMemoryType()) << endl; Vector z; z.UseDevice(true); z.SetSize(vSize); auto dz = z.Write(); auto dw = w.Read(); mfem::forall(vSize, [=] MFEM_HOST_DEVICE (int i) { dz[i] = dw[i]; }); z.HostRead(); cout << \"norm(z) = \" << z.Norml2() << endl; dz = z.Write(); auto dv = v.Read(); mfem::forall(vSize, [=] MFEM_HOST_DEVICE (int i) { dz[i] = dv[i]; }); z.HostRead(); cout << \"norm(z) = \" << z.Norml2() << endl; The resulting output may be unexpected: IsHost(v) = 0 IsHost(w) = 0 IsHost(v) = 1 IsHost(w) = 0 norm(z) = 0 norm(z) = 3.16228 Basically the issue is that the Memory objects (inside the Vector s) do not know about the other version, so they cannot update the validity flags (the host and device validity flags indicate which of the pointers has valid data) of the other Vector . Also such update may not make sense if you just moved the subvector. 
There is no easy way to keep the big \"base\" Vector v and the \"alias\" subvector w synchronized when they are being moved/copied between host and device. Therefore such synchronizations need to be done \"manually\" using the methods Vector::SyncMemory and Vector::SyncAliasMemory . In the example above, after you move the \"base\" Vector v to host, you need to \"inform\" the \"alias\" w that the validity flags of its base have been changed. This is done by calling w.SyncMemory(v) which simply copies the validity flags from v to w , there are no host-device memory transfers involved. On the other hand, if in the example you moved w to host and modified it there, and then you want to access the data through the base Vector v (you can think of the more general case here, when w is smaller than v ) then you need to call w.SyncAliasMemory(v) . In this particular case, the call will move the subvector described by w from host to device and update the validity flags of w to be the same as the ones of v . This way the whole Vector v gets the real data in one location -- before the call part of it was on device and the part described by w was on host. Both w.SyncMemory(v) and w.SyncAliasMemory(v) ensure that w gets the validity flags of v , the difference is where the real data is before the call -- in the first case the real data is in v and in the second, it is in w .", "title": "MakeRef vectors do not see the same valid host/device data as their base vector"}, {"location": "integration/", "text": "Integration MFEM's spatial integrations are performed in the usual finite element manner by first splitting the spatial domain into a collection of non-overlapping \"elements\" which cover the domain. This is usually referred to as the \"mesh\". An integral can then be computed separately in each element and the results added together: $$ \\int_\\Omega f(x)\\,d\\Omega = \\sum_i\\int_{\\Omega_i}f(x)\\,d\\Omega $$ Where $\\Omega$ is the full domain and $\\Omega_i$ is the domain of the i-th element. In MFEM this sum over elements is performed in classes such as the BilinearForm or LinearForm and their parallel counterparts. Elements come in a variety of shapes and they may be flat-sided or curved. For this reason it is much simpler to perform the element-wise integrations on reference elements which have relatively simple shapes. For example in 2D we might integrate over a unit square rather than an arbitrary quadrilateral. Finite element methods typically make the assumption that the functions to be integrated are non-singular and at least reasonably smooth. This enables us to employ families of relatively simple quadrature rules which are designed for accurately integrating polynomials. This is in contrast to boundary element methods which require more specialized rules which can accurately integrate singularities. Our rules take the form: $$\\int_{\\Omega_i} f(x)\\,d\\Omega \\approx \\sum_j w_j\\,f(x(u_j))\\,|J_i(u_j)|\\label{eq:quad_rule}$$ Where $w_j$ are the quadrature weights, $u_j$ are the quadrature points within the reference element, and $|J_i(u_j)|$ is the Jacobian determinant for element $i$ at the location $u_j$. Integrals at this level are typically computed by classes derived from BilinearFormIntegrator or LinearFormIntegrator , see Bilinear Form Integrators or Linear Form Integrators for numerous examples. Integration Rules The basic building block of an integration rule is the IntegrationPoint . 
This is a minimal object with member data 'x', 'y', 'z', and 'weight' (and an integer 'index' which indicates the point's place in an integration rule). These store the coordinates of the integration point in the reference coordinate system, $u_j$ from equation $\\ref{eq:quad_rule}$ is defined as $u_j\\equiv(x,y,z)$ , along with the quadrature weight, $w$ also from equation $\\ref{eq:quad_rule}$. Integration points can be collected together into an IntegrationRule object. IntegrationRule is little more than a container for the set of IntegrationPoint objects associated with an integration rule for a given order of accuracy within the domain of a specific reference element. IntegrationRule objects are in turn collected together into the IntRules global object. This object constructs and caches all IntegrationRule objects requested by the calling program. On one hand the IntRules global object is a container class which categorizes IntegrationRule objects by element type and order of accuracy but more importantly it is responsible for allocating IntegrationRule objects and populating them with appropriate IntegrationPoint objects. It is also possible to sidestep the IntRules global object and setup custom IntegrationRule objects. These custom integration rules can then be passed to BilinearFormIntegrator or LinearFormIntegrator objects (using custom integration rules with mixed meshes currently requires specialized handling). Coordinate Transformations The coordinate transformation from the reference element to an individual mesh element is performed by the ElementTransformation class. Objects of this class are prepared by the Mesh object and retrieved in various ways depending on context. For standard mesh elements for (int e = 0; e < mesh->GetNE(); e++) { ElementTransformation *Trans = mesh->GetElementTransformation(e); ... } or for boundary elements for (int be = 0; be < mesh->GetNBE(); be++) { ElementTransformation *Trans = mesh->GetBdrElementTransformation(be); ... } or for faces (usually in a Discontinuous Galerkin (DG) context) for (int f = 0; f < mesh->GetNumFaces(); f++) { FaceElementTransformation *FETrans = mesh->GetFaceElementTransformation(f); ... } or, finally, for boundary faces in a DG context for (int bf = 0; bf < mesh->GetNBE(); bf++) { FaceElementTransformation *FETrans = mesh->GetBdrFaceElementTransformation(bf); ... } A FaceElementTransformation object is a convenience object for easily accessing the three ElementTransformation objects associated with a mesh face and its two neighboring elements. In the case of boundary faces one of the neighboring element transformation objects is not present. In addition to transforming coordinates between the reference and global coordinate systems an ElementTransformation object can be used to compute the following quantities related to the Jacobian matrix: Name C++ Expression Formula Jacobian Matrix const DenseMatrix &J = Trans.Jacobian() ${\\bf J}_{ij} = \\frac{\\partial x_i}{\\partial u_j}$ Jacobian Determinant double detJ = Trans.Weight() $\\det({\\bf J})$ Inverse Jacobian const DenseMatrix &InvJ = Trans.InverseJacobian() ${\\bf J}^{-1}$ Adjugate Jacobian const DenseMatrix &AdjJ = Trans.AdjugateJacobian() $\\det({\\bf J})\\,{\\bf J}^{-1}$ Since these quantities can be expensive to compute the ElementTransformation object will avoid recomputing values whenever possible. 
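To make this concrete, the following hedged sketch (the function name and the fixed integration order are ours) sums $w_j\\,|J_i(u_j)|$ over all elements, which recovers the total volume (or area) of the mesh:

#include "mfem.hpp"
using namespace mfem;

double ComputeMeshVolume(Mesh &mesh)
{
   double vol = 0.0;
   for (int e = 0; e < mesh.GetNE(); e++)
   {
      ElementTransformation *Trans = mesh.GetElementTransformation(e);
      // Low-order rule; curved elements would need a higher order.
      const IntegrationRule &ir = IntRules.Get(Trans->GetGeometryType(), 2);
      for (int j = 0; j < ir.GetNPoints(); j++)
      {
         const IntegrationPoint &ip = ir.IntPoint(j);
         Trans->SetIntPoint(&ip);            // make Weight() valid at this point
         vol += ip.weight * Trans->Weight(); // w_j * |J_i(u_j)|
      }
   }
   return vol;
}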
However, once a new quadrature point is set, using ElementTransformation::SetIntPoint() , any cached values will be overwritten by subsequent calls to the above functions. Writing Custom Integrators Element-wise integration arises in various places in the finite element method. A few of the most common occurrences are square and rectangular bilinear form operators, linear functionals, and the calculation of norms from field data. Type Primary Function Needing Implementation Square Operators BilinearFormIntegrator::AssembleElementMatrix Rectangular Operators BilinearFormIntegrator::AssembleElementMatrix2 Linear Functionals LinearFormIntegrator::AssembleRHSElementVect Development of a new norm or another custom integral might follow the code found in GridFunction::ComputeElementLpErrors . The pieces that are common to each of these include: Determination of the appropriate quadrature order Obtaining the quadrature rule for the appropriate element type Working with the ElementTransformation object Evaluating the function to be integrated An appropriate quadrature order depends on many variables. If we could restrict ourselves to integrating polynomials then a specific order would produce an exact result and a higher order would only incur additional effort. However, skewed or curved elements can introduce a rational polynomial factor through the inverse Jacobian of the element transformation. Furthermore, non-trivial material coefficients can introduce factors with arbitrary functional forms. Useful rules of thumb for linear and bilinear form integration orders are: (linear form order) = (basis function order) + (geometry order) (bilinear form order) = (domain basis function order) + (range basis function order) + (geometry order) It can be appropriate to lower the basis function order by one if a derivative of the basis function is being used. It might be appropriate to increase the order if the coefficient is expected to vary more rapidly but, in such a case, it would probably be more appropriate to further refine the mesh. Appropriate orders for computing norms should probably follow the guidance for bilinear forms since most common norms tend to be quadratic. For example a custom integrator for a rectangular operator might start with the following lines: void CustomIntegrator::AssembleElementMatrix2(const FiniteElement &trial_fe, const FiniteElement &test_fe, ElementTransformation &Trans, DenseMatrix &elmat) { // Determine an appropriate integration rule order int order = trial_fe.GetOrder() // Polynomial order of domain space + test_fe.GetOrder() // Polynomial order of range space + Trans.OrderW(); // Polynomial order of the geometry // Determine the element type: triangle, quadrilateral, tetrahedron, etc. Geometry::Type geom = Trans.GetGeometryType(); // Construct or retrieve an integration rule for the appropriate // reference element with the desired order of accuracy const IntegrationRule * ir = &IntRules.Get(Trans.GetGeometryType(), order); ... } This example uses the IntRules global object but custom integration rules could be provided through the use of a similar global object or by some other means. The next piece is to loop over the integration points and, in most cases, make use of the ElementTransformation object. ... 
// Loop over each quadrature point in the reference element for (int i = 0; i < ir->GetNPoints(); i++) { // Extract the current quadrature point from the integration rule const IntegrationPoint &ip = ir->IntPoint(i); // Prepare to evaluate the coordinate transformation at the current // quadrature point Trans.SetIntPoint(&ip); // Compute the Jacobian determinant at the current integration point double detJ = Trans.Weight(); ... } The final piece is to evaluate the function to be integrated. This often involves evaluation of a Coefficient object as well as one or two sets of basis functions or their derivatives. The coefficient should be straightforward, simply call its Eval method with the ElementTransformation and IntegrationPoint objects and perhaps a Vector or DenseMatrix to hold the resulting coefficient value when appropriate. Basis function evaluation can be a bit more complicated. Basis Function Evaluation Some basis functions, particularly vector-valued basis functions, partially depend upon the geometry of the physical element in addition to their dependence on the reference element. The scalar basis functions provided by the H1_FECollection are straightforward. Simply call FiniteElement::CalcShape with the current quadrature point to retrieve a vector containing the values of each basis function evaluated at the given point in reference space. ... // Retrieve the number of basis functions int tr_dof = trial_fe.GetDof(); // Allocate a vector to hold the values of each basis function Vector tr_shape(tr_dof); // Loop over each quadrature point in the reference element for (int i = 0; i < ir->GetNPoints(); i++) { // Extract the current quadrature point from the integration rule const IntegrationPoint &ip = ir->IntPoint(i); ... // Evaluate the basis functions at the point ip trial_fe.CalcShape(ip, tr_shape); ... } For other types of basis functions it can be simpler to call CalcPhysShape or CalcPhysVShape . These, and similar evaluation functions with \"Phys\" in the name, internally perform the geometric transformation of the basis functions when necessary. This is clearly a convenience feature but it can lead to unnecessary computations when certain optimizations are possible. In the following table subscripts on the derivative operators indicate which coordinate system is being used to compute the derivative; 'x' for the physical coordinates and 'u' for the reference coordinates. Quantities with a caret above them indicate functions computed in the reference coordinate system. Family Evaluation Transformation H1 Basis None H1 Gradient of Basis $\\nabla_x\\varphi_i = (J^{-1})^T\\nabla_u\\hat{\\varphi}_i$ ND Basis $\\vec{W}_i = (J^{-1})^T\\hat{W}_i$ ND Curl of Basis $\\nabla_x\\times\\vec{W}_i = \\frac{1}{\\det(J)}J\\,\\nabla_u\\times\\hat{W}_i$ RT Basis $\\vec{F}_i = \\frac{1}{\\det(J)}J\\,\\hat{F}_i$ RT Divergence of Basis $\\nabla_x\\cdot\\vec{F}_i = \\frac{1}{\\det(J)}\\nabla_u\\cdot\\hat{F}_i$ L2 (INTEGRAL) Basis $\\psi_i = \\frac{1}{\\det(J)}\\hat{\\psi}_i$ L2 (VALUE) Basis None Use of these \"CalcPhys\" functions enable integrators to be used with a wider variety of basis function families without the need to explicitly handle these transformations within the integrator. This leads to more general implementations but at the possible cost of added computational expense. For example, a LinearFormIntegrator involving an L2 basis function using the INTEGRAL map type would both multiply and divide by the Jacobian determinant at each integration point. 
Clearly this is unnecessary and could significantly increase the computational effort needed to compute the integrals. Working with the MixedScalarIntegrator The MixedScalarIntegrator is designed to help construct BilinearFormIntegrators which build an integrand from two sets of scalar-valued basis function evaluations. Such integrands will involve combinations of the following quantities: Scalar-valued basis functions obtained from CalcPhysShape Divergence of vector-valued basis functions obtained from CalcPhysDivShape Curl of vector-valued basis functions in 2D obtained from CalcPhysCurlShape Gradient of scalar-valued basis functions in 1D obtained from CalcPhysDShape An optional scalar coefficient To derive a custom integrator from MixedScalarIntegrator a developer need only define constructors for the custom integrator. Only one constructor is necessary but support of various coefficient types is often useful. class MixedScalarMassIntegrator : public MixedScalarIntegrator { public: MixedScalarMassIntegrator() { same_calc_shape = true; } MixedScalarMassIntegrator(Coefficient &q) : MixedScalarIntegrator(q) { same_calc_shape = true; } }; By default this integrator will compute the operator: $$a_{ij} = \\int_{\\Omega_e}q(x)\\,f_j(x)\\,g_i(x)\\,d\\Omega$$ Where $f_j$ and $g_i$ are two sets of scalar-valued basis functions which produces a \"mass\" matrix. The MixedScalarIntegrator has two public methods and five protected methods which can be overridden to customize the integrator. The public methods are AssembleElementMatrix for use with the BilinearForm class of square bilinear forms and AssembleElementMatrix2 for use with the MixedBilinearForm class of rectangular bilinear forms. Typically only one of these is necessary and the default implementations will often suffice. However, one or both of these methods may be overridden by a derived class if some customization is desired. For example, to implement optimizations related to coordinate transformations or custom integration rules, etc.. More commonly a derived class will need to override one or both of the CalcTestShape and CalcTrialShape methods which compute the necessary basis function values. For example the four types of scalar basis function evaluations supported by MixedScalarIntegrator could be obtained by these overrides of the trial (domain) finite element basis functions: /// Evaluate the scalar-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, Vector & shape) { trial_fe.CalcPhysShape(Trans, shape); } or /// Evaluate the divergence of the vector-valued basis functions virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, Vector & shape) { trial_fe.CalcPhysDivShape(Trans, shape); } or /// Evaluate the 2D curl of the vector-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, Vector & shape) { DenseMatrix dshape(shape.GetData(), shape.Size(), 1); trial_fe.CalcPhysCurlShape(Trans, dshape); } or /// Evaluate the 1D gradient of the scalar-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, Vector & shape) { DenseMatrix dshape(shape.GetData(), shape.Size(), 1); trial_fe.CalcPhysDShape(Trans, dshape); } Similar overrides could be implemented for the test (range) space. Of course other overrides are possible and may be quite useful for other custom integrators. 
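Putting the constructor pattern and one of the overrides above together, a complete (if minimal) custom integrator might look like the following sketch; the class name is ours and the example assumes the 1D gradient variant:

class MyScalarDerivativeIntegrator : public MixedScalarIntegrator
{
public:
   MyScalarDerivativeIntegrator() {}
   MyScalarDerivativeIntegrator(Coefficient &q) : MixedScalarIntegrator(q) {}

protected:
   /// Evaluate the 1D gradient of the trial basis; the test basis keeps the
   /// default scalar evaluation inherited from MixedScalarIntegrator.
   inline virtual void CalcTrialShape(const FiniteElement & trial_fe,
                                      ElementTransformation &Trans,
                                      Vector & shape)
   {
      DenseMatrix dshape(shape.GetData(), shape.Size(), 1);
      trial_fe.CalcPhysDShape(Trans, dshape);
   }
};

With the coefficient constructor, the assembled element matrices correspond to $a_{ij} = \\int_{\\Omega_e}q(x)\\,\\frac{df_j}{dx}(x)\\,g_i(x)\\,d\\Omega$ in 1D.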
The next override that is often advisable is VerifyFiniteElementTypes which provides a means of testing the FiniteElement objects passed by the BilinearForm class to make sure they support the evaluations needed by the CalcTestShape and CalcTrialShape methods. This override is optional but highly recommended. As an example the following override verifies that the geometry is one dimensional and that the trial (domain) space supports evaluation of the gradient of the basis functions. inline virtual bool VerifyFiniteElementTypes(const FiniteElement & trial_fe, const FiniteElement & test_fe ) const { return (trial_fe.GetDim() == 1 && test_fe.GetDim() == 1 && trial_fe.GetDerivType() == mfem::FiniteElement::GRAD && test_fe.GetRangeType() == mfem::FiniteElement::SCALAR ); } A related optional method can be used to output an appropriate error message in the event that unsuitable basis functions have been provided. For example the following error message might be appropriate in conjunction with the previous VerifyFiniteElementTypes implementation: inline virtual const char * FiniteElementTypeFailureMessage() const { return \"Trial and test spaces must both be scalar fields in 1D \" \"and the trial space must implement CalcDShape.\"; } The last optional protected method allows a certain flexibility in the choice of quadrature order. The default implementation is shown below but other choices may be suitable. inline virtual int GetIntegrationOrder(const FiniteElement & trial_fe, const FiniteElement & test_fe, ElementTransformation &Trans) { return trial_fe.GetOrder() + test_fe.GetOrder() + Trans.OrderW(); } A wide variety of bilinear forms can be easily implemented using the MixedScalarIntegrator . Most of these are probably already included in MFEM, see Bilinear Form Integrators for a listing, but other options may be useful. Working with the MixedVectorIntegrator The MixedVectorIntegrator is very similar in spirit to the MixedScalarIntegrator but the integrand in this case is computed as the inner product of two vectors. Such integrands will involve combinations of the following quantities: Vector-valued basis functions obtained from CalcVShape Gradient of scalar-valued basis functions obtained from CalcPhysDShape Curl of vector-valued basis functions in 3D obtained from CalcPhysCurlShape Optional scalar, vector, or matrix-valued coefficients By default this integrator will compute different operators based on coefficient type: Coefficient Type Default Integral Scalar $a_{ij} = \\int_{\\Omega_e}q(x)\\,\\vec{F}_j(x)\\cdot\\,\\vec{G}_i(x)\\,d\\Omega$ Matrix $a_{ij} = \\int_{\\Omega_e}\\left(Q(x)\\,\\vec{F}_j(x)\\right)\\cdot\\,\\vec{G}_i(x)\\,d\\Omega$ Vector $a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\times\\vec{F}_j(x)\\right)\\cdot\\,\\vec{G}_i(x)\\,d\\Omega$ Where $\\vec{F}_j$ and $\\vec{G}_i$ are two sets of vector-valued basis functions which produces a \"mass\" matrix. The MixedVectorIntegrator also has public and protected methods which may be overridden in an analogous manner to those in MixedScalarIntegrator to implement an even wider variety of custom integrators. Note that the default implementation of the assembly methods do assume a square matrix coefficient but this assumption could be removed if necessary. 
The CalcTestShape and CalcTrialShape methods which compute the necessary vector-valued basis function values might be overridden as follows: /// Evaluate the vector-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, DenseMatrix & shape) { trial_fe.CalcVShape(Trans, shape); } or /// Evaluate the gradient of the scalar-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, DenseMatrix & shape) { trial_fe.CalcPhysDShape(Trans, shape); } or /// Evaluate the 3D curl of the vector-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, DenseMatrix & shape) { trial_fe.CalcPhysCurlShape(Trans, shape); } Many of the possible MixedVectorIntegrator customizations are already included in MFEM. See Bilinear Form Integrators for a listing. Working with the MixedScalarVectorIntegrator The MixedScalarVectorIntegrator follows naturally from the MixedScalarIntegrator and the MixedVectorIntegrator . The integrand in this case is computed as the product of a scalar basis function with a vector basis function. However, since the integrand must be scalar valued, a vector-valued coefficient will always be required. The types of scalar-valued basis functions will include: Scalar-valued basis functions obtained from CalcPhysShape Divergence of vector-valued basis functions obtained from CalcPhysDivShape Curl of vector-valued basis functions in 2D obtained from CalcPhysCurlShape Gradient of scalar-valued basis functions in 1D obtained from CalcPhysDShape The types of vector-valued basis functions will include: Vector-valued basis functions obtained from CalcVShape Gradient of scalar-valued basis functions obtained from CalcPhysDShape Curl of vector-valued basis functions in 3D obtained from CalcPhysCurlShape By default this integrator will compute different operators based on the choice of the trial and test spaces and, in 2D, how the vector coefficient should be employed: $$a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\,f_j(x)\\right)\\cdot\\vec{G}_i(x)\\,d\\Omega\\label{msv_def}$$ or $$a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\cdot\\vec{F}_j(x)\\right)\\,g_i(x)\\,d\\Omega\\label{msv_trans}$$ or in 2D there is an option to compute $$a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\,f_j(x)\\right)\\times\\vec{G}_i(x)\\,d\\Omega\\label{msv_2d_def}$$ or (again optionally in 2D) $$a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\times\\vec{F}_j(x)\\right)\\,g_i(x)\\,d\\Omega\\label{msv_2d_trans}$$ The methods that a developer may choose to override are again quite similar to those in MixedScalarIntegrator and MixedVectorIntegrator . The main difference is the basis function overrides which have been renamed to CalcShape for the scalar-valued basis and CalcVShape for the vector-valued basis. By default it is assumed that the trial (domain) space is scalar-valued and the test (range) space is vector-valued as in equations \\ref{msv_def} and \\ref{msv_2d_def}. The choice of trial and test spaces is here controlled by a transpose option in the MixedScalarVectorIntegrator constructor. If transpose == true then equations \\ref{msv_trans} and \\ref{msv_2d_trans} are assumed. The choice between equations \\ref{msv_def} and \\ref{msv_trans} on the one hand and equations \\ref{msv_2d_def} and \\ref{msv_2d_trans} on the other is made with the cross_2d optional constructor argument. 
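Whichever of these helper classes a custom integrator derives from, it is used like any other BilinearFormIntegrator. A hedged usage sketch (mesh, orders, and coefficient are placeholders) with the MixedScalarMassIntegrator defined earlier:

#include "mfem.hpp"
using namespace mfem;

void AssembleMixedMass(Mesh &mesh)
{
   const int dim = mesh.Dimension();
   H1_FECollection trial_fec(1, dim), test_fec(2, dim);
   FiniteElementSpace trial_fes(&mesh, &trial_fec);
   FiniteElementSpace test_fes(&mesh, &test_fec);

   ConstantCoefficient q(1.0);
   MixedBilinearForm a(&trial_fes, &test_fes);
   a.AddDomainIntegrator(new MixedScalarMassIntegrator(q));
   a.Assemble();
   a.Finalize();
   // a.SpMat() now holds the rectangular "mass" matrix coupling the two spaces.
}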
There are several customizations of this integrator included in MFEM but others are possible. See Bilinear Form Integrators for a listing. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Integration"}, {"location": "integration/#integration", "text": "MFEM's spatial integrations are performed in the usual finite element manner by first splitting the spatial domain into a collection of non-overlapping \"elements\" which cover the domain. This is usually referred to as the \"mesh\". An integral can then be computed separately in each element and the results added together: $$ \\int_\\Omega f(x)\\,d\\Omega = \\sum_i\\int_{\\Omega_i}f(x)\\,d\\Omega $$ Where $\\Omega$ is the full domain and $\\Omega_i$ is the domain of the i-th element. In MFEM this sum over elements is performed in classes such as the BilinearForm or LinearForm and their parallel counterparts. Elements come in a variety of shapes and they may be flat-sided or curved. For this reason it is much simpler to perform the element-wise integrations on reference elements which have relatively simple shapes. For example in 2D we might integrate over a unit square rather than an arbitrary quadrilateral. Finite element methods typically make the assumption that the functions to be integrated are non-singular and at least reasonably smooth. This enables us to employ families of relatively simple quadrature rules which are designed for accurately integrating polynomials. This is in contrast to boundary element methods which require more specialized rules which can accurately integrate singularities. Our rules take the form: $$\\int_{\\Omega_i} f(x)\\,d\\Omega \\approx \\sum_j w_j\\,f(x(u_j))\\,|J_i(u_j)|\\label{eq:quad_rule}$$ Where $w_j$ are the quadrature weights, $u_j$ are the quadrature points within the reference element, and $|J_i(u_j)|$ is the Jacobian determinant for element $i$ at the location $u_j$. Integrals at this level are typically computed by classes derived from BilinearFormIntegrator or LinearFormIntegrator , see Bilinear Form Integrators or Linear Form Integrators for numerous examples.", "title": "Integration"}, {"location": "integration/#integration-rules", "text": "The basic building block of an integration rule is the IntegrationPoint . This is a minimal object with member data 'x', 'y', 'z', and 'weight' (and an integer 'index' which indicates the point's place in an integration rule). These store the coordinates of the integration point in the reference coordinate system, $u_j$ from equation $\\ref{eq:quad_rule}$ is defined as $u_j\\equiv(x,y,z)$ , along with the quadrature weight, $w$ also from equation $\\ref{eq:quad_rule}$. Integration points can be collected together into an IntegrationRule object. IntegrationRule is little more than a container for the set of IntegrationPoint objects associated with an integration rule for a given order of accuracy within the domain of a specific reference element. IntegrationRule objects are in turn collected together into the IntRules global object. This object constructs and caches all IntegrationRule objects requested by the calling program. On one hand the IntRules global object is a container class which categorizes IntegrationRule objects by element type and order of accuracy but more importantly it is responsible for allocating IntegrationRule objects and populating them with appropriate IntegrationPoint objects. 
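A small, hedged example of requesting a rule from IntRules (the order and element type are arbitrary choices); in MFEM the weights of a rule sum to the measure of the reference element, so for the reference triangle the sum should be 1/2:

#include "mfem.hpp"
#include <iostream>
using namespace mfem;

int main()
{
   const int order = 4; // exact for polynomials up to degree 4
   const IntegrationRule &ir = IntRules.Get(Geometry::TRIANGLE, order);

   double wsum = 0.0;
   for (int j = 0; j < ir.GetNPoints(); j++)
   {
      const IntegrationPoint &ip = ir.IntPoint(j);
      wsum += ip.weight; // reference coordinates are available as ip.x, ip.y
   }
   std::cout << "points: " << ir.GetNPoints()
             << ", weight sum: " << wsum << std::endl;
   return 0;
}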
It is also possible to sidestep the IntRules global object and setup custom IntegrationRule objects. These custom integration rules can then be passed to BilinearFormIntegrator or LinearFormIntegrator objects (using custom integration rules with mixed meshes currently requires specialized handling).", "title": "Integration Rules"}, {"location": "integration/#coordinate-transformations", "text": "The coordinate transformation from the reference element to an individual mesh element is performed by the ElementTransformation class. Objects of this class are prepared by the Mesh object and retrieved in various ways depending on context. For standard mesh elements for (int e = 0; e < mesh->GetNE(); e++) { ElementTransformation *Trans = mesh->GetElementTransformation(e); ... } or for boundary elements for (int be = 0; be < mesh->GetNBE(); be++) { ElementTransformation *Trans = mesh->GetBdrElementTransformation(be); ... } or for faces (usually in a Discontinuous Galerkin (DG) context) for (int f = 0; f < mesh->GetNumFaces(); f++) { FaceElementTransformation *FETrans = mesh->GetFaceElementTransformation(f); ... } or, finally, for boundary faces in a DG context for (int bf = 0; bf < mesh->GetNBE(); bf++) { FaceElementTransformation *FETrans = mesh->GetBdrFaceElementTransformation(bf); ... } A FaceElementTransformation object is a convenience object for easily accessing the three ElementTransformation objects associated with a mesh face and its two neighboring elements. In the case of boundary faces one of the neighboring element transformation objects is not present. In addition to transforming coordinates between the reference and global coordinate systems an ElementTransformation object can be used to compute the following quantities related to the Jacobian matrix: Name C++ Expression Formula Jacobian Matrix const DenseMatrix &J = Trans.Jacobian() ${\\bf J}_{ij} = \\frac{\\partial x_i}{\\partial u_j}$ Jacobian Determinant double detJ = Trans.Weight() $\\det({\\bf J})$ Inverse Jacobian const DenseMatrix &InvJ = Trans.InverseJacobian() ${\\bf J}^{-1}$ Adjugate Jacobian const DenseMatrix &AdjJ = Trans.AdjugateJacobian() $\\det({\\bf J})\\,{\\bf J}^{-1}$ Since these quantities can be expensive to compute the ElementTransformation object will avoid recomputing values whenever possible. However, once a new quadrature point is set, using ElementTransformation::SetIntPoint() , any cached values will be overwritten by subsequent calls to the above functions.", "title": "Coordinate Transformations"}, {"location": "integration/#writing-custom-integrators", "text": "Element-wise integration arises in various places in the finite element method. A few of the most common occurrences are square and rectangular bilinear form operators, linear functionals, and the calculation of norms from field data. Type Primary Function Needing Implementation Square Operators BilinearFormIntegrator::AssembleElementMatrix Rectangular Operators BilinearFormIntegrator::AssembleElementMatrix2 Linear Functionals LinearFormIntegrator::AssembleRHSElementVect Development of a new norm or another custom integral might follow the code found in GridFunction::ComputeElementLpErrors . The pieces that are common to each of these include: Determination of the appropriate quadrature order Obtaining the quadrature rule for the appropriate element type Working with the ElementTransformation object Evaluating the function to be integrated An appropriate quadrature order depends on many variables. 
If we could restrict ourselves to integrating polynomials then a specific order would produce an exact result and a higher order would only incur additional effort. However, skewed or curved elements can introduce a rational polynomial factor through the inverse Jacobian of the element transformation. Furthermore, non-trivial material coefficients can introduce factors with arbitrary functional forms. Useful rules of thumb for linear and bilinear form integration orders are: (linear form order) = (basis function order) + (geometry order) (bilinear form order) = (domain basis function order) + (range basis function order) + (geometry order) It can be appropriate to lower the basis function order by one if a derivative of the basis function is being used. It might be appropriate to increase the order if the coefficient is expected to vary more rapidly but, in such a case, it would probably be more appropriate to further refine the mesh. Appropriate orders for computing norms should probably follow the guidance for bilinear forms since most common norms tend to be quadratic. For example a custom integrator for a rectangular operator might start with the following lines: void CustomIntegrator::AssembleElementMatrix2(const FiniteElement &trial_fe, const FiniteElement &test_fe, ElementTransformation &Trans, DenseMatrix &elmat) { // Determine an appropriate integration rule order int order = trial_fe.GetOrder() // Polynomial order of domain space + test_fe.GetOrder() // Polynomial order of range space + Trans.OrderW(); // Polynomial order of the geometry // Determine the element type: triangle, quadrilateral, tetrahedron, etc. Geometry::Type geom = Trans.GetGeometryType(); // Construct or retrieve an integration rule for the appropriate // reference element with the desired order of accuracy const IntegrationRule * ir = &IntRules.Get(Trans.GetGeometryType(), order); ... } This example uses the IntRules global object but custom integration rules could be provided through the use of a similar global object or by some other means. The next piece is to loop over the integration points and, in most cases, make use of the ElementTransformation object. ... // Loop over each quadrature point in the reference element for (int i = 0; i < ir->GetNPoints(); i++) { // Extract the current quadrature point from the integration rule const IntegrationPoint &ip = ir->IntPoint(i); // Prepare to evaluate the coordinate transformation at the current // quadrature point Trans.SetIntPoint(&ip); // Compute the Jacobian determinant at the current integration point double detJ = Trans.Weight(); ... } The final piece is to evaluate the function to be integrated. This often involves evaluation of a Coefficient object as well as one or two sets of basis functions or their derivatives. The coefficient should be straightforward, simply call its Eval method with the ElementTransformation and IntegrationPoint objects and perhaps a Vector or DenseMatrix to hold the resulting coefficient value when appropriate. Basis function evaluation can be a bit more complicated.", "title": "Writing Custom Integrators"}, {"location": "integration/#basis-function-evaluation", "text": "Some basis functions, particularly vector-valued basis functions, partially depend upon the geometry of the physical element in addition to their dependence on the reference element. The scalar basis functions provided by the H1_FECollection are straightforward. 
Simply call FiniteElement::CalcShape with the current quadrature point to retrieve a vector containing the values of each basis function evaluated at the given point in reference space. ... // Retrieve the number of basis functions int tr_dof = trial_fe.GetDof(); // Allocate a vector to hold the values of each basis function Vector tr_shape(tr_dof); // Loop over each quadrature point in the reference element for (int i = 0; i < ir->GetNPoints(); i++) { // Extract the current quadrature point from the integration rule const IntegrationPoint &ip = ir->IntPoint(i); ... // Evaluate the basis functions at the point ip trial_fe.CalcShape(ip, tr_shape); ... } For other types of basis functions it can be simpler to call CalcPhysShape or CalcPhysVShape . These, and similar evaluation functions with \"Phys\" in the name, internally perform the geometric transformation of the basis functions when necessary. This is clearly a convenience feature but it can lead to unnecessary computations when certain optimizations are possible. In the following table subscripts on the derivative operators indicate which coordinate system is being used to compute the derivative; 'x' for the physical coordinates and 'u' for the reference coordinates. Quantities with a caret above them indicate functions computed in the reference coordinate system. Family Evaluation Transformation H1 Basis None H1 Gradient of Basis $\\nabla_x\\varphi_i = (J^{-1})^T\\nabla_u\\hat{\\varphi}_i$ ND Basis $\\vec{W}_i = (J^{-1})^T\\hat{W}_i$ ND Curl of Basis $\\nabla_x\\times\\vec{W}_i = \\frac{1}{\\det(J)}J\\,\\nabla_u\\times\\hat{W}_i$ RT Basis $\\vec{F}_i = \\frac{1}{\\det(J)}J\\,\\hat{F}_i$ RT Divergence of Basis $\\nabla_x\\cdot\\vec{F}_i = \\frac{1}{\\det(J)}\\nabla_u\\cdot\\hat{F}_i$ L2 (INTEGRAL) Basis $\\psi_i = \\frac{1}{\\det(J)}\\hat{\\psi}_i$ L2 (VALUE) Basis None Use of these \"CalcPhys\" functions enable integrators to be used with a wider variety of basis function families without the need to explicitly handle these transformations within the integrator. This leads to more general implementations but at the possible cost of added computational expense. For example, a LinearFormIntegrator involving an L2 basis function using the INTEGRAL map type would both multiply and divide by the Jacobian determinant at each integration point. Clearly this is unnecessary and could significantly increase the computational effort needed to compute the integrals.", "title": "Basis Function Evaluation"}, {"location": "integration/#working-with-the-mixedscalarintegrator", "text": "The MixedScalarIntegrator is designed to help construct BilinearFormIntegrators which build an integrand from two sets of scalar-valued basis function evaluations. Such integrands will involve combinations of the following quantities: Scalar-valued basis functions obtained from CalcPhysShape Divergence of vector-valued basis functions obtained from CalcPhysDivShape Curl of vector-valued basis functions in 2D obtained from CalcPhysCurlShape Gradient of scalar-valued basis functions in 1D obtained from CalcPhysDShape An optional scalar coefficient To derive a custom integrator from MixedScalarIntegrator a developer need only define constructors for the custom integrator. Only one constructor is necessary but support of various coefficient types is often useful. 
class MixedScalarMassIntegrator : public MixedScalarIntegrator { public: MixedScalarMassIntegrator() { same_calc_shape = true; } MixedScalarMassIntegrator(Coefficient &q) : MixedScalarIntegrator(q) { same_calc_shape = true; } }; By default this integrator will compute the operator: $$a_{ij} = \\int_{\\Omega_e}q(x)\\,f_j(x)\\,g_i(x)\\,d\\Omega$$ Where $f_j$ and $g_i$ are two sets of scalar-valued basis functions which produces a \"mass\" matrix. The MixedScalarIntegrator has two public methods and five protected methods which can be overridden to customize the integrator. The public methods are AssembleElementMatrix for use with the BilinearForm class of square bilinear forms and AssembleElementMatrix2 for use with the MixedBilinearForm class of rectangular bilinear forms. Typically only one of these is necessary and the default implementations will often suffice. However, one or both of these methods may be overridden by a derived class if some customization is desired. For example, to implement optimizations related to coordinate transformations or custom integration rules, etc.. More commonly a derived class will need to override one or both of the CalcTestShape and CalcTrialShape methods which compute the necessary basis function values. For example the four types of scalar basis function evaluations supported by MixedScalarIntegrator could be obtained by these overrides of the trial (domain) finite element basis functions: /// Evaluate the scalar-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, Vector & shape) { trial_fe.CalcPhysShape(Trans, shape); } or /// Evaluate the divergence of the vector-valued basis functions virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, Vector & shape) { trial_fe.CalcPhysDivShape(Trans, shape); } or /// Evaluate the 2D curl of the vector-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, Vector & shape) { DenseMatrix dshape(shape.GetData(), shape.Size(), 1); trial_fe.CalcPhysCurlShape(Trans, dshape); } or /// Evaluate the 1D gradient of the scalar-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, Vector & shape) { DenseMatrix dshape(shape.GetData(), shape.Size(), 1); trial_fe.CalcPhysDShape(Trans, dshape); } Similar overrides could be implemented for the test (range) space. Of course other overrides are possible and may be quite useful for other custom integrators. The next override that is often advisable is VerifyFiniteElementTypes which provides a means of testing the FiniteElement objects passed by the BilinearForm class to make sure they support the evaluations needed by the CalcTestShape and CalcTrialShape methods. This override is optional but highly recommended. As an example the following override verifies that the geometry is one dimensional and that the trial (domain) space supports evaluation of the gradient of the basis functions. inline virtual bool VerifyFiniteElementTypes(const FiniteElement & trial_fe, const FiniteElement & test_fe ) const { return (trial_fe.GetDim() == 1 && test_fe.GetDim() == 1 && trial_fe.GetDerivType() == mfem::FiniteElement::GRAD && test_fe.GetRangeType() == mfem::FiniteElement::SCALAR ); } A related optional method can be used to output an appropriate error message in the event that unsuitable basis functions have been provided. 
For example the following error message might be appropriate in conjunction with the previous VerifyFiniteElementTypes implementation: inline virtual const char * FiniteElementTypeFailureMessage() const { return \"Trial and test spaces must both be scalar fields in 1D \" \"and the trial space must implement CalcDShape.\"; } The last optional protected method allows a certain flexibility in the choice of quadrature order. The default implementation is shown below but other choices may be suitable. inline virtual int GetIntegrationOrder(const FiniteElement & trial_fe, const FiniteElement & test_fe, ElementTransformation &Trans) { return trial_fe.GetOrder() + test_fe.GetOrder() + Trans.OrderW(); } A wide variety of bilinear forms can be easily implemented using the MixedScalarIntegrator . Most of these are probably already included in MFEM, see Bilinear Form Integrators for a listing, but other options may be useful.", "title": "Working with the MixedScalarIntegrator"}, {"location": "integration/#working-with-the-mixedvectorintegrator", "text": "The MixedVectorIntegrator is very similar in spirit to the MixedScalarIntegrator but the integrand in this case is computed as the inner product of two vectors. Such integrands will involve combinations of the following quantities: Vector-valued basis functions obtained from CalcVShape Gradient of scalar-valued basis functions obtained from CalcPhysDShape Curl of vector-valued basis functions in 3D obtained from CalcPhysCurlShape Optional scalar, vector, or matrix-valued coefficients By default this integrator will compute different operators based on coefficient type: Coefficient Type Default Integral Scalar $a_{ij} = \\int_{\\Omega_e}q(x)\\,\\vec{F}_j(x)\\cdot\\,\\vec{G}_i(x)\\,d\\Omega$ Matrix $a_{ij} = \\int_{\\Omega_e}\\left(Q(x)\\,\\vec{F}_j(x)\\right)\\cdot\\,\\vec{G}_i(x)\\,d\\Omega$ Vector $a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\times\\vec{F}_j(x)\\right)\\cdot\\,\\vec{G}_i(x)\\,d\\Omega$ Where $\\vec{F}_j$ and $\\vec{G}_i$ are two sets of vector-valued basis functions which produces a \"mass\" matrix. The MixedVectorIntegrator also has public and protected methods which may be overridden in an analogous manner to those in MixedScalarIntegrator to implement an even wider variety of custom integrators. Note that the default implementation of the assembly methods do assume a square matrix coefficient but this assumption could be removed if necessary. The CalcTestShape and CalcTrialShape methods which compute the necessary vector-valued basis function values might be overridden as follows: /// Evaluate the vector-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, DenseMatrix & shape) { trial_fe.CalcVShape(Trans, shape); } or /// Evaluate the gradient of the scalar-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, DenseMatrix & shape) { trial_fe.CalcPhysDShape(Trans, shape); } or /// Evaluate the 3D curl of the vector-valued basis functions inline virtual void CalcTrialShape(const FiniteElement & trial_fe, ElementTransformation &Trans, DenseMatrix & shape) { trial_fe.CalcPhysCurlShape(Trans, shape); } Many of the possible MixedVectorIntegrator customizations are already included in MFEM. 
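To see how these pieces fit together, here is a minimal sketch of a derived integrator that pairs the gradient of a scalar-valued trial basis with a vector-valued test basis. MFEM already provides MixedVectorGradientIntegrator for this operator, so the class below is purely illustrative and its name is invented for this example.

class ExampleGradVectorIntegrator : public MixedVectorIntegrator
{
public:
   ExampleGradVectorIntegrator() {}
   ExampleGradVectorIntegrator(Coefficient &q) : MixedVectorIntegrator(q) {}

protected:
   /// Require a trial space with gradients and a vector-valued test space
   inline virtual bool VerifyFiniteElementTypes(const FiniteElement & trial_fe,
                                                const FiniteElement & test_fe) const
   {
      return (trial_fe.GetDerivType() == mfem::FiniteElement::GRAD &&
              test_fe.GetRangeType() == mfem::FiniteElement::VECTOR);
   }

   inline virtual const char * FiniteElementTypeFailureMessage() const
   {
      return \"Trial space must implement CalcDShape and \"
             \"the test space must be a vector field.\";
   }

   /// Use the gradient of the scalar-valued trial basis functions;
   /// the default CalcTestShape, which calls CalcVShape, is left unchanged.
   inline virtual void CalcTrialShape(const FiniteElement & trial_fe,
                                      ElementTransformation &Trans,
                                      DenseMatrix & shape)
   {
      trial_fe.CalcPhysDShape(Trans, shape);
   }
};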
See Bilinear Form Integrators for a listing.", "title": "Working with the MixedVectorIntegrator"}, {"location": "integration/#working-with-the-mixedscalarvectorintegrator", "text": "The MixedScalarVectorIntegrator follows naturally from the MixedScalarIntegrator and the MixedVectorIntegrator . The integrand in this case is computed as the product of a scalar basis function with a vector basis function. However, since the integrand must be scalar valued, a vector-valued coefficient will always be required. The types of scalar-valued basis functions will include: Scalar-valued basis functions obtained from CalcPhysShape Divergence of vector-valued basis functions obtained from CalcPhysDivShape Curl of vector-valued basis functions in 2D obtained from CalcPhysCurlShape Gradient of scalar-valued basis functions in 1D obtained from CalcPhysDShape The types of vector-valued basis functions will include: Vector-valued basis functions obtained from CalcVShape Gradient of scalar-valued basis functions obtained from CalcPhysDShape Curl of vector-valued basis functions in 3D obtained from CalcPhysCurlShape By default this integrator will compute different operators based on the choice of the trial and test spaces and, in 2D, how the vector coefficient should be employed: $$a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\,f_j(x)\\right)\\cdot\\vec{G}_i(x)\\,d\\Omega\\label{msv_def}$$ or $$a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\cdot\\vec{F}_j(x)\\right)\\,g_i(x)\\,d\\Omega\\label{msv_trans}$$ or in 2D there is an option to compute $$a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\,f_j(x)\\right)\\times\\vec{G}_i(x)\\,d\\Omega\\label{msv_2d_def}$$ or (again optionally in 2D) $$a_{ij} = \\int_{\\Omega_e}\\left(\\vec{q}(x)\\times\\vec{F}_j(x)\\right)\\,g_i(x)\\,d\\Omega\\label{msv_2d_trans}$$ The methods that a developer may choose to override are again quite similar to those in MixedScalarIntegrator and MixedVectorIntegrator . The main difference is the basis function overrides which have been renamed to CalcShape for the scalar-valued basis and CalcVShape for the vector-valued basis. By default it is assumed that the trial (domain) space is scalar-valued and the test (range) space is vector-valued as in equations \\ref{msv_def} and \\ref{msv_2d_def}. The choice of trial and test spaces is here controlled by a transpose option in the MixedScalarVectorIntegrator constructor. If transpose == true then equations \\ref{msv_trans} and \\ref{msv_2d_trans} are assumed. The choice between equations \\ref{msv_def} and \\ref{msv_trans} on the one hand and equations \\ref{msv_2d_def} and \\ref{msv_2d_trans} on the other is made with the cross_2d optional constructor argument. There are several customizations of this integrator included in MFEM but others are possible. See Bilinear Form Integrators for a listing. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Working with the MixedScalarVectorIntegrator"}, {"location": "lininteg/", "text": "Linear Form Integrators $ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} $ Linear form integrators are the right-hand side companion to Bilinear Form Integrators that compute the integrals of products of a basis function and a given \"right-hand side\" function (coefficient) $\\,f$ over individual mesh elements (or sometimes over edges or faces). 
Typically each element is contained in the support of several basis functions, therefore linear integrators simultaneously compute the integrals of all combinations of the relevant basis functions with the given input function $\\,f$. This produces a one dimensional array of results that is arranged into a small vector of integral (dual) values called a local element (load) vector . To put this another way, the LinearForm class builds a global vector, glb_vec , by performing the outer loop in the following pseudocode snippet whereas the LinearFormIntegrator class performs the nested inner loops to compute the local vector, loc_vec . for each elem in elements loc_vec = 0.0 for each pt in quadrature_points for each v_i in elem loc_vec(i) += w(pt) * rhs(pt) v_i(pt) end end glb_vec += loc_vec end There are three types of integrals that typically arise although many other, more exotic, forms are possible: Integrals involving Scalar rhs $\\,f$ and basis functions: $\\int_\\Omega\\, f v$ Integrals involving Vector rhs $\\,\\vec{f}$ and basis functions: $\\int_\\Omega\\, \\vec{f}\\cdot\\vec{v}$ Integrals involving mix of Scalar and Vector rhs $\\,\\vec{f}$ and basis functions: $\\int_\\Omega f\\,\\vec{\\lambda}\\cdot\\vec{v}$ and $\\int_\\Omega v\\,\\vec{\\lambda}\\cdot\\vec{f}$ The LinearFormIntegrator classes allow MFEM to produce a wide variety of local element vectors without modifying the LinearForm class. Many of the possible operators are collected below into tables that briefly describe their action and requirements. In the tables below the Space column refers to finite element spaces which implement the following methods: Space Operator Derivative Operator H1 CalcShape CalcDShape ND CalcVShape CalcCurlShape RT CalcVShape CalcDivShape L2 CalcShape None Notation: $$\\{(f, v)\\}_i\\equiv \\int_\\Omega f v_i$$ $$\\{(\\vec{F}, \\vec{v})\\}_i\\equiv \\int_\\Omega \\lambda \\vec{F}\\cdot\\vec{v}_i$$ For boundary integrators, the integrals are over $\\partial \\Omega$. Face integrators integrate over the interior and boundary faces of mesh elements and are denoted with $\\left<\\cdot,\\cdot\\right>$. Scalar Field Operators Domain Integrators Class Name Space Operator Continuous Op. Dimension DomainLFIntegrator H1, L2 $(f, v)$ $f$ 1D, 2D, 3D DomainLFGradIntegrator H1 $(\\vec{f}, \\nabla v)$ $-\\nabla \\cdot \\vec{f}$ 1D, 2D, 3D Boundary Integrators Class Name Space Operator Continuous Op. Dimension BoundaryLFIntegrator H1, L2 $(f, v)$ $f$ 1D, 2D, 3D BoundaryNormalLFIntegrator H1, L2 $(\\vec{f} \\cdot \\hat{n}, v)$ $\\vec{f} \\cdot \\hat{n}$ 1D, 2D, 3D BoundaryTangentialLFIntegrator H1, L2 $(\\vec{f} \\cdot \\hat{\\tau}, v)$ $\\vec{f} \\cdot \\hat{\\tau}$ 2D BoundaryFlowIntegrator H1, L2 $\\frac{\\alpha}{2}\\, \\left< (\\vec{u} \\cdot \\hat{n})\\, f, v \\right> - \\beta\\, \\left<\\mid \\vec{u} \\cdot \\hat{n} \\mid f, v \\right>$ $\\frac{\\alpha}{2} (\\vec{u} \\cdot \\hat{n})\\, f - \\beta \\mid \\vec{u} \\cdot \\hat{n} \\mid f$ 1D, 2D, 3D Face Integrators Class Name Space Operator Continuous Op. Dimension DGDirichletLFIntegrator L2 $\\sigma \\left< u_D, Q \\nabla v \\cdot \\hat{n} \\right> + \\kappa \\left< \\{h^{-1} Q\\} u_D, v \\right>$ DG essential BCs for $u_D$ 1D, 2D, 3D Vector Field Operators Domain Integrators Class Name Space Operator Continuous Op. 
Dimension VectorDomainLFIntegrator H1, L2 $(\\vec{f}, \\vec{v})$ $\\vec{f}$ 1D, 2D, 3D VectorFEDomainLFIntegrator ND, RT $(\\vec{f}, \\vec{v})$ $\\vec{f}$ 2D, 3D VectorFEDomainLFCurlIntegrator ND $(\\vec{f}, \\nabla \\times \\vec{v})$ $\\nabla \\times \\vec{f}$ 2D, 3D VectorFEDomainLFDivIntegrator RT $(f, \\nabla \\cdot \\vec{v})$ $ - \\nabla f$ 2D, 3D Boundary Integrators Class Name Space Operator Continuous Op. Dimension VectorBoundaryLFIntegrator H1, L2 $( \\vec{f}, \\vec{v} )$ $\\vec{f}$ 1D, 2D, 3D VectorBoundaryFluxLFIntegrator H1, L2 $( f, \\vec{v} \\cdot \\hat{n} )$ $f \\hat{n}$ 1D, 2D, 3D VectorFEBoundaryFluxLFIntegrator RT $( f, \\vec{v} \\cdot \\hat{n} )$ $f \\hat{n}$ 2D, 3D VectorFEBoundaryTangentLFIntegrator ND $( \\hat{n} \\times \\vec{f}, \\vec{v} )$ $\\hat{n} \\times \\vec{f}$ 2D, 3D Face Integrators Class Name Space Operator Continuous Op. Dimension DGElasticityDirichletLFIntegrator L2 $\\alpha\\left<\\vec{u_D}, \\left(\\lambda \\left(\\div \\vec{v}\\right) I + \\mu \\left(\\nabla\\vec{v} + \\nabla\\vec{v}^T\\right)\\right) \\cdot \\hat{n}\\right> \\\\ + \\kappa\\left< h^{-1} (\\lambda + 2 \\mu) \\vec{u_D}, \\vec{v} \\right>$ DG essential BCs for $\\vec{u_D}$ 1D, 2D, 3D MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Linear Form Integrators"}, {"location": "lininteg/#linear-form-integrators", "text": "$ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} $ Linear form integrators are the right-hand side companion to Bilinear Form Integrators that compute the integrals of products of a basis function and a given \"right-hand side\" function (coefficient) $\\,f$ over individual mesh elements (or sometimes over edges or faces). Typically each element is contained in the support of several basis functions, therefore linear integrators simultaneously compute the integrals of all combinations of the relevant basis functions with the given input function $\\,f$. This produces a one dimensional array of results that is arranged into a small vector of integral (dual) values called a local element (load) vector . To put this another way, the LinearForm class builds a global vector, glb_vec , by performing the outer loop in the following pseudocode snippet whereas the LinearFormIntegrator class performs the nested inner loops to compute the local vector, loc_vec . for each elem in elements loc_vec = 0.0 for each pt in quadrature_points for each v_i in elem loc_vec(i) += w(pt) * rhs(pt) v_i(pt) end end glb_vec += loc_vec end There are three types of integrals that typically arise although many other, more exotic, forms are possible: Integrals involving Scalar rhs $\\,f$ and basis functions: $\\int_\\Omega\\, f v$ Integrals involving Vector rhs $\\,\\vec{f}$ and basis functions: $\\int_\\Omega\\, \\vec{f}\\cdot\\vec{v}$ Integrals involving mix of Scalar and Vector rhs $\\,\\vec{f}$ and basis functions: $\\int_\\Omega f\\,\\vec{\\lambda}\\cdot\\vec{v}$ and $\\int_\\Omega v\\,\\vec{\\lambda}\\cdot\\vec{f}$ The LinearFormIntegrator classes allow MFEM to produce a wide variety of local element vectors without modifying the LinearForm class. Many of the possible operators are collected below into tables that briefly describe their action and requirements. 
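In practice an integrator from the tables below is attached to a LinearForm (or ParLinearForm) object, which then performs the element loop shown in the pseudocode above. A minimal sketch, assuming fespace is an existing FiniteElementSpace and using a constant right-hand side:

// Assemble b(i) = (f, v_i) over all elements of the mesh.
ConstantCoefficient f(1.0);                        // the right-hand side function
LinearForm b(&fespace);                            // fespace is assumed to exist
b.AddDomainIntegrator(new DomainLFIntegrator(f));  // takes ownership of the integrator
b.Assemble();                                      // loops over elements, summing local vectors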
In the tables below the Space column refers to finite element spaces which implement the following methods: Space Operator Derivative Operator H1 CalcShape CalcDShape ND CalcVShape CalcCurlShape RT CalcVShape CalcDivShape L2 CalcShape None Notation: $$\\{(f, v)\\}_i\\equiv \\int_\\Omega f v_i$$ $$\\{(\\vec{F}, \\vec{v})\\}_i\\equiv \\int_\\Omega \\lambda \\vec{F}\\cdot\\vec{v}_i$$ For boundary integrators, the integrals are over $\\partial \\Omega$. Face integrators integrate over the interior and boundary faces of mesh elements and are denoted with $\\left<\\cdot,\\cdot\\right>$.", "title": "Linear Form Integrators"}, {"location": "lininteg/#scalar-field-operators", "text": "", "title": "Scalar Field Operators"}, {"location": "lininteg/#domain-integrators", "text": "Class Name Space Operator Continuous Op. Dimension DomainLFIntegrator H1, L2 $(f, v)$ $f$ 1D, 2D, 3D DomainLFGradIntegrator H1 $(\\vec{f}, \\nabla v)$ $-\\nabla \\cdot \\vec{f}$ 1D, 2D, 3D", "title": "Domain Integrators"}, {"location": "lininteg/#boundary-integrators", "text": "Class Name Space Operator Continuous Op. Dimension BoundaryLFIntegrator H1, L2 $(f, v)$ $f$ 1D, 2D, 3D BoundaryNormalLFIntegrator H1, L2 $(\\vec{f} \\cdot \\hat{n}, v)$ $\\vec{f} \\cdot \\hat{n}$ 1D, 2D, 3D BoundaryTangentialLFIntegrator H1, L2 $(\\vec{f} \\cdot \\hat{\\tau}, v)$ $\\vec{f} \\cdot \\hat{\\tau}$ 2D BoundaryFlowIntegrator H1, L2 $\\frac{\\alpha}{2}\\, \\left< (\\vec{u} \\cdot \\hat{n})\\, f, v \\right> - \\beta\\, \\left<\\mid \\vec{u} \\cdot \\hat{n} \\mid f, v \\right>$ $\\frac{\\alpha}{2} (\\vec{u} \\cdot \\hat{n})\\, f - \\beta \\mid \\vec{u} \\cdot \\hat{n} \\mid f$ 1D, 2D, 3D", "title": "Boundary Integrators"}, {"location": "lininteg/#face-integrators", "text": "Class Name Space Operator Continuous Op. Dimension DGDirichletLFIntegrator L2 $\\sigma \\left< u_D, Q \\nabla v \\cdot \\hat{n} \\right> + \\kappa \\left< \\{h^{-1} Q\\} u_D, v \\right>$ DG essential BCs for $u_D$ 1D, 2D, 3D", "title": "Face Integrators"}, {"location": "lininteg/#vector-field-operators", "text": "", "title": "Vector Field Operators"}, {"location": "lininteg/#domain-integrators_1", "text": "Class Name Space Operator Continuous Op. Dimension VectorDomainLFIntegrator H1, L2 $(\\vec{f}, \\vec{v})$ $\\vec{f}$ 1D, 2D, 3D VectorFEDomainLFIntegrator ND, RT $(\\vec{f}, \\vec{v})$ $\\vec{f}$ 2D, 3D VectorFEDomainLFCurlIntegrator ND $(\\vec{f}, \\nabla \\times \\vec{v})$ $\\nabla \\times \\vec{f}$ 2D, 3D VectorFEDomainLFDivIntegrator RT $(f, \\nabla \\cdot \\vec{v})$ $ - \\nabla f$ 2D, 3D", "title": "Domain Integrators"}, {"location": "lininteg/#boundary-integrators_1", "text": "Class Name Space Operator Continuous Op. Dimension VectorBoundaryLFIntegrator H1, L2 $( \\vec{f}, \\vec{v} )$ $\\vec{f}$ 1D, 2D, 3D VectorBoundaryFluxLFIntegrator H1, L2 $( f, \\vec{v} \\cdot \\hat{n} )$ $f \\hat{n}$ 1D, 2D, 3D VectorFEBoundaryFluxLFIntegrator RT $( f, \\vec{v} \\cdot \\hat{n} )$ $f \\hat{n}$ 2D, 3D VectorFEBoundaryTangentLFIntegrator ND $( \\hat{n} \\times \\vec{f}, \\vec{v} )$ $\\hat{n} \\times \\vec{f}$ 2D, 3D", "title": "Boundary Integrators"}, {"location": "lininteg/#face-integrators_1", "text": "Class Name Space Operator Continuous Op. 
Dimension DGElasticityDirichletLFIntegrator L2 $\\alpha\\left<\\vec{u_D}, \\left(\\lambda \\left(\\div \\vec{v}\\right) I + \\mu \\left(\\nabla\\vec{v} + \\nabla\\vec{v}^T\\right)\\right) \\cdot \\hat{n}\\right> \\\\ + \\kappa\\left< h^{-1} (\\lambda + 2 \\mu) \\vec{u_D}, \\vec{v} \\right>$ DG essential BCs for $\\vec{u_D}$ 1D, 2D, 3D MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Face Integrators"}, {"location": "lininterp/", "text": "Linear Interpolators $ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} \\newcommand{\\abs}[1]{|#1|} $ Linear interpolators can be very useful for interpolating one discrete representation of a field onto another set of basis functions to produce another representation. However, this must be done with care because different discrete representations are not completely interchangeable. As an example consider a scalar field projected onto either piece-wise linear ($H_1$) or piece-wise constant ($L_2$) basis functions. Interpolating from an $H_1$ representation to an $L_2$ representation should produce a reasonable result because the constant value needed in each element can be computed as a weighted sum of the $H_1$ basis functions in that element. On the other hand, if we try to interpolate from the $L_2$ representation to an $H_1$ representation we don't have enough information to determine reasonable values for the degrees of freedom which are shared between neighboring elements because linear interpolators can only access one element at a time. To accurately compute an $H_1$ representation from an $L_2$ representation requires the type of weighted average of values from neighboring elements that bilinear forms provide but this requires a linear solve and often suitable boundary conditions. The operators produced by the BilinearForm classes involve integrations and therefore they sum the various contributions from neighboring elements to compute a full integral. The DiscreteLinearOperator classes are not performing integrals but rather interpolations and as such they do not combine contributions from different elements in any way. Consequently if the LinearInterpolator s produce different results for entities that are shared between neighboring elements then the resulting representation will depend on the order in which the elements are processed. Such operators are not good candidates for DiscreteLinearOperator s. The sections below will offer some guidance on the appropriate use of these operators. In the tables below the Space column refers to finite element spaces which implement the following methods: Space Operator Derivative Operator H1 CalcShape CalcDShape ND CalcVShape CalcCurlShape RT CalcVShape CalcDivShape L2 CalcShape None The Coef. column refers to the types of coefficients that are available. A boldface coefficient type is required whereas most coefficients are optional. Coef. Type S Scalar Valued Function V Vector Valued Function D Diagonal Matrix Function M General Matrix Function Derivative Interpolators The $H(Curl)$ and $H(Div)$ spaces are specifically designed to support these derivative operators by having the necessary inter-element continuity. Other possible derivative operators would not possess the correct continuity and must therefore be implemented in a weak sense. 
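For example, the discrete gradient in the first row of the table below can be assembled and applied as follows. This is a minimal sketch: the H1 and ND spaces, h1_fespace and nd_fespace, are assumed to already exist and the variable names are illustrative.

// Build the discrete gradient mapping an H1 field to an ND (H(Curl)) field.
DiscreteLinearOperator grad(&h1_fespace, &nd_fespace);
grad.AddDomainInterpolator(new GradientInterpolator);
grad.Assemble();
grad.Finalize();

GridFunction u(&h1_fespace);   // scalar potential in H1
GridFunction e(&nd_fespace);   // will hold its gradient in H(Curl)
grad.Mult(u, e);               // e = grad u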
Class Name Domain Range Operator GradientInterpolator H1 ND $\\grad u$ CurlInterpolator ND in 3D RT $\\curl\\vec{u}$ CurlInterpolator ND in 2D L2 $\\hat{z}\\cdot(\\curl\\vec{u})$ DivergenceInterpolator RT L2 $\\div\\vec{u}$ Product Interpolators These operators require a bit more care than the previous set. In order for these operators to produce valid results the product of the coefficient with the domain space must be uniquely representable within the desired range space. Additionally, it may sometimes be desirable for the range space to have a higher order than the domain space if the coefficient is not constant. For example if the domain space and the coefficient are both linear it might be desirable, though not necessary, for the range space to be quadratic. Class Name Domain Range Coef. Operator ScalarProductInterpolator H1,L2 H1,L2 S $\\lambda u$ ScalarVectorProductInterpolator ND,RT ND,RT S $\\lambda\\vec{u}$ VectorScalarProductInterpolator H1,L2 ND,RT V $\\vec{\\lambda}u$ VectorCrossProductInterpolator ND,RT in 3D ND,RT V $\\vec{\\lambda}\\times\\vec{u}$ ScalarCrossProductInterpolator ND,RT in 2D H1,L2 V $\\hat{z}\\cdot(\\vec{\\lambda}\\times\\vec{u})$ VectorInnerProductInterpolator ND,RT H1,L2 V $\\vec{\\lambda}\\cdot\\vec{u}$ Special Purpose Interpolators Class Name Domain Range Operator IdentityInterpolator H1,L2 H1,L2 $u$ NormalInterpolator H1$^d$ RT_Trace $\\hat{n}\\cdot\\vec{u}$ MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Linear Interpolators"}, {"location": "lininterp/#linear-interpolators", "text": "$ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} \\newcommand{\\abs}[1]{|#1|} $ Linear interpolators can be very useful for interpolating one discrete representation of a field onto another set of basis functions to produce another representation. However, this must be done with care because different discrete representations are not completely interchangeable. As an example consider a scalar field projected onto either piece-wise linear ($H_1$) or piece-wise constant ($L_2$) basis functions. Interpolating from an $H_1$ representation to an $L_2$ representation should produce a reasonable result because the constant value needed in each element can be computed as a weighted sum of the $H_1$ basis functions in that element. On the other hand, if we try to interpolate from the $L_2$ representation to an $H_1$ representation we don't have enough information to determine reasonable values for the degrees of freedom which are shared between neighboring elements because linear interpolators can only access one element at a time. To accurately compute an $H_1$ representation from an $L_2$ representation requires the type of weighted average of values from neighboring elements that bilinear forms provide but this requires a linear solve and often suitable boundary conditions. The operators produced by the BilinearForm classes involve integrations and therefore they sum the various contributions from neighboring elements to compute a full integral. The DiscreteLinearOperator classes are not performing integrals but rather interpolations and as such they do not combine contributions from different elements in any way. 
Consequently if the LinearInterpolator s produce different results for entities that are shared between neighboring elements then the resulting representation will depend on the order in which the elements are processed. Such operators are not good candidates for DiscreteLinearOperator s. The sections below will offer some guidance on the appropriate use of these operators. In the tables below the Space column refers to finite element spaces which implement the following methods: Space Operator Derivative Operator H1 CalcShape CalcDShape ND CalcVShape CalcCurlShape RT CalcVShape CalcDivShape L2 CalcShape None The Coef. column refers to the types of coefficients that are available. A boldface coefficient type is required whereas most coefficients are optional. Coef. Type S Scalar Valued Function V Vector Valued Function D Diagonal Matrix Function M General Matrix Function", "title": "Linear Interpolators"}, {"location": "lininterp/#derivative-interpolators", "text": "The $H(Curl)$ and $H(Div)$ spaces are specifically designed to support these derivative operators by having the necessary inter-element continuity. Other possible derivative operators would not possess the correct continuity and must therefore be implemented in a weak sense. Class Name Domain Range Operator GradientInterpolator H1 ND $\\grad u$ CurlInterpolator ND in 3D RT $\\curl\\vec{u}$ CurlInterpolator ND in 2D L2 $\\hat{z}\\cdot(\\curl\\vec{u})$ DivergenceInterpolator RT L2 $\\div\\vec{u}$", "title": "Derivative Interpolators"}, {"location": "lininterp/#product-interpolators", "text": "These operators require a bit more care than the previous set. In order for these operators to produce valid results the product of the coefficient with the domain space must be uniquely representable within the desired range space. Additionally, it may sometimes be desirable for the range space to have a higher order than the domain space if the coefficient is not constant. For example if the domain space and the coefficient are both linear it might be desirable, though not necessary, for the range space to be quadratic. Class Name Domain Range Coef. 
Operator ScalarProductInterpolator H1,L2 H1,L2 S $\\lambda u$ ScalarVectorProductInterpolator ND,RT ND,RT S $\\lambda\\vec{u}$ VectorScalarProductInterpolator H1,L2 ND,RT V $\\vec{\\lambda}u$ VectorCrossProductInterpolator ND,RT in 3D ND,RT V $\\vec{\\lambda}\\times\\vec{u}$ ScalarCrossProductInterpolator ND,RT in 2D H1,L2 V $\\hat{z}\\cdot(\\vec{\\lambda}\\times\\vec{u})$ VectorInnerProductInterpolator ND,RT H1,L2 V $\\vec{\\lambda}\\cdot\\vec{u}$", "title": "Product Interpolators"}, {"location": "lininterp/#special-purpose-interpolators", "text": "Class Name Domain Range Operator IdentityInterpolator H1,L2 H1,L2 $u$ NormalInterpolator H1$^d$ RT_Trace $\\hat{n}\\cdot\\vec{u}$ MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Special Purpose Interpolators"}, {"location": "maxwell-notes/", "text": "Maxwell's Equations $$\\begin{align} \\nabla\\times{\\bf H}& = & \\frac{\\partial{\\bf D}}{\\partial t} + {\\bf J}+ \\overline{\\sigma}{\\bf E}\\label{ampere} \\\\ \\nabla\\times{\\bf E}& = & -\\frac{\\partial{\\bf B}}{\\partial t} - {\\bf M}- \\overline{\\sigma}_M{\\bf H}\\label{faraday} \\\\ \\nabla\\cdot{\\bf D}& = & \\rho\\label{gauss} \\\\ \\nabla\\cdot{\\bf B}& = & 0\\label{trans} \\end{align}$$ With electric current density, ${\\bf J}$, magnetic current density, ${\\bf M}$, electric conductivity, $\\overline{\\sigma}$, magnetic conductivity, $\\overline{\\sigma}_M$, and electric charge density, $\\rho$. We will sometimes refer to these equations by the names Amp\u00e8re's Law, Faraday's Law, Gauss's Law, and the Transversality Condition respectively. It is also necessary to define the constitutive relations ${\\bf D}\\equiv\\epsilon{\\bf E}$ and ${\\bf B}\\equiv\\mu{\\bf H}$. It is also common to combine equations \\eqref{ampere} and \\eqref{faraday} into a single second order PDE. $$\\begin{align} \\frac{\\partial^2\\left(\\epsilon{\\bf E}\\right)}{\\partial t^2} + \\frac{\\partial\\left(\\overline{\\sigma}{\\bf E}\\right)}{\\partial t} + \\nabla\\times\\left(\\mu^{-1}\\nabla\\times{\\bf E}\\right) & \\nonumber \\\\ + \\nabla\\times\\left(\\mu^{-1}\\overline{\\sigma}_M{\\bf H}\\right) & = -\\frac{\\partial{\\bf J}}{\\partial t} - \\nabla\\times\\left(\\mu^{-1}{\\bf M}\\right) \\label{curlcurle} %\\end{align} {or}&\\\\ %\\begin{equation} \\frac{\\partial^2\\left(\\mu{\\bf H}\\right)}{\\partial t^2} + \\frac{\\partial\\left(\\overline{\\sigma}_M{\\bf H}\\right)}{\\partial t} + \\nabla\\times\\left(\\epsilon^{-1}\\nabla\\times{\\bf H}\\right) & \\nonumber \\\\ - \\nabla\\times\\left(\\epsilon^{-1}\\overline{\\sigma}{\\bf E}\\right) & = -\\frac{\\partial{\\bf M}}{\\partial t} +\\nabla\\times\\left(\\epsilon^{-1}{\\bf J}\\right) \\label{curlcurlh} \\end{align}$$ One drawback of these formulations is the appearance of ${\\bf H}$ in equation \\eqref{curlcurle} or ${\\bf E}$ in equation \\eqref{curlcurlh}. The only way to formulate these equations entirely in terms of ${\\bf E}$ or ${\\bf H}$ is to make assumptions about the spatial variation of $\\epsilon^{-1}\\overline{\\sigma}$ or $\\mu^{-1}\\overline{\\sigma}_M$. For this reason these second order formulations should be avoided unless $\\overline{\\sigma}_M=0$ or $\\overline{\\sigma}=0$. Discretization Basis Functions There are two sets of basis functions particularly well suited for electromagnetics; Nedelec and Raviart-Thomas. The Nedelec basis functions guarantee tangential continuity of their approximations across element interfaces. 
This makes them well suited for the fields ${\\bf E}$ and ${\\bf H}$ which share this constraint on material interfaces. The Raviart-Thomas basis functions guarantee continuity of the normal component of their approximations across element interfaces. This makes them well suited for the fields ${\\bf B}$ and ${\\bf D}$ which share this constraint on material interfaces. The Nedelec basis functions which discretize the H(Curl) space are indispensable due to the presence of the Curl operators in equations \\eqref{ampere}, \\eqref{faraday}, \\eqref{curlcurle}, and \\eqref{curlcurlh}. The Raviart-Thomas basis functions which discretize the H(Div) space are convenient and reduce the computational cost but are optional, strictly speaking. Discretization of the primary fields There are three choices for discretizing the set of coupled first order partial differential equations: ${\\bf E}\\in$ H(Curl) and ${\\bf B},{\\bf J},{\\bf M}\\in$ H(Div) ${\\bf H}\\in$ H(Curl) and ${\\bf D},{\\bf J},{\\bf M}\\in$ H(Div) ${\\bf E}\\in$ H(Curl), ${\\bf H}\\in$ H(Curl), and ${\\bf J},{\\bf M}\\in$ H(Curl) (grudgingly) There is only one choice for discretizing the second order equations i.e. ${\\bf E}$ or ${\\bf H}$ in H(Curl). These basis function choices merely ensure that the approximate fields maintain the proper interface constraints at material boundaries. The choice of formulation can be made based on the required sources, boundary conditions, and/or post-processing requirements. Hence, different physical requirements can lead to different choices of formulation i.e. there is no single best choice for all problems. Discretization of ${\\bf J}$ and ${\\bf M}$ The electric and magnetic current source densities are both flux vectors and as such they are best represented using the H(Div) space. This is most apparent when modeling the eddy current equation but H(Div) can be important in wave equations as well. Imagine modeling a current carrying conductor surrounded by some insulating material. The current density ${\\bf J}$ may be non-zero inside the conductor but it should be identically zero outside of it. Assuming the computational mesh conforms to the surface of this conductor, an H(Div) field can accurately represent such a current flow as long as the current at the surface of the conductor remains parallel to that surface. In other words the current will not \"leak\" out of the conductor as long as the normal component of the current is zero at the surface. On the other hand, if H(Curl) basis functions were used for ${\\bf J}$ its tangential components would need to be continuous across the surface of the conductor. This produces a non-physical current within the first layer of elements surrounding the conductor. Non-physical currents leaking out of conductors when using H(Curl) basis functions for the current density ${\\bf J}$ can lead to inaccurate eddy current simulations either by producing a larger than expected magnetic field outside the conductor or a reduced thermal heat load within the conductor. Similarly, in wave simulations the total power emanating from an antenna can be either over- or under-estimated depending upon how ${\\bf J}$ is computed on the surface of the antenna. Such matters can be eliminated by simply representing ${\\bf J}$ as an H(Div) function. I'm sure similar arguments can be made for the magnetization ${\\bf M}$ although I have less experience with that. 
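For concreteness, here is a minimal sketch of placing a prescribed current density in H(Div) by interpolating it onto a Raviart-Thomas space. The mesh, the polynomial order, and the current_source function are assumed to already exist; the names are illustrative.

// Represent a prescribed current density J in H(Div) so that its normal
// component is continuous across element faces and can be exactly zero
// on the surface of a conductor.
RT_FECollection rt_fec(order, mesh.Dimension());   // lowest-order RT is order 0
FiniteElementSpace rt_fespace(&mesh, &rt_fec);

// current_source(x, J) is a user-supplied function returning J at the point x.
VectorFunctionCoefficient J_coef(mesh.Dimension(), current_source);
GridFunction j(&rt_fespace);
j.ProjectCoefficient(J_coef);                      // H(Div) representation of J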
The maxwell Miniapp The maxwell Miniapp uses the EB formulation with $\\overline{\\sigma}_M$ and ${\\bf M}$ assumed to be zero. It evolves the first order coupled system of equations using a symplectic time integration algorithm by Candy and Rozmus described in \"A Symplectic Integration Algorithm for Separable Hamiltonian Functions\", Journal of Computational Physics, Vol. 92, pages 230-256 (1991). The main advantage of this algorithm is that it conserves energy. Another advantage is that the approximations of ${\\bf E}$ and ${\\bf B}$ correspond to the same simulation time rather than being staggered as in other methods. The variable order symplectic integration class in MFEM called SIAVSolver requires that we implement our coupled set of PDEs as a pair of operators. The first is an Operator which can be used to update the magnetic field, ${\\bf B}$, using Faraday's Law by computing $-\\nabla\\times{\\bf E}$. The second is a TimeDependentOperator which can be used to update the electric field, ${\\bf E}$, using Amp\u00e8re's Law by computing $\\nabla\\times\\left(\\mu^{-1}{\\bf B}\\right)-{\\bf J}-\\overline{\\sigma}{\\bf E}$. We choose to implement both of these operators in a single class which we call MaxwellSolver . The first operator, $-\\nabla\\times{\\bf E}$, acts on ${\\bf E}\\in$ H(Curl) to produce a result $\\frac{\\partial{\\bf B}}{\\partial t}\\in$ H(Div). By design our discrete representation of H(Div) contains the curl of any field in our discrete representation of H(curl). Consequently we can compute this operator by simply evaluating the curl of our H(Curl) basis functions in terms of our H(Div) basis functions. This evaluation is handled by a DiscreteInterpolator called CurlInterpolator . The process of looping over each element to compute these interpolations is conducted by the ParDiscreteLinearOperator . In the MaxwellSolver this curl operator is simply named Curl_ and its negative, needed by the SIAVSolver , is named NegCurl_ . These operators are setup between lines 227 and 236 of the file maxwell_solver.cpp . The second operator, $\\nabla\\times\\left(\\mu^{-1}{\\bf B}\\right)-{\\bf J}-\\overline{\\sigma}{\\bf E}$, requires a bit more effort. The first thing to notice is that we cannot compute the curl of $\\mu^{-1}{\\bf B}$ precisely. Primarily this is due to the fact that ${\\bf B}\\in$ H(Div) rather than H(Curl) but, in general, the presence of $\\mu^{-1}$ is also a problem since we don't know its derivatives at all. These complications require that we compute the curl operator in a weak sense. Setup of the TimeDependentOperator Weak curl of $\\mu^{-1}{\\bf B}$ Often in wave propagation $\\mu$ is assumed to be constant but we will not make this assumption. In principle $\\mu$ could be anisotropic and inhomogeneous although we do assume it is constant in time. The magnetic field ${\\bf B}$ will be written as a linear combination of basis functions in H(Div) which we will label as ${\\bf F}_i$ e.g. ${\\bf B}(\\vec{x})\\approx\\sum_i b_i(t){\\bf F}_i(\\vec{x})$. Our goal is to compute $\\frac{\\partial{\\bf E}}{\\partial t}$ where ${\\bf E}\\in$ H(Curl) so we need to represent $\\nabla\\times\\mu^{-1}{\\bf B}$ also in H(Curl). The basis functions of H(Curl) will be labeled as ${\\bf W}_i$. To compute the weak form of this term we multiply the operator of interest by each of our H(Curl) basis functions and integrate over the entire problem domain to obtain an equation corresponding to each basis function in H(Curl). 
For example $$\\begin{align} \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}{\\bf B})] d\\Omega &=& \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}\\sum_j b_j{\\bf F}_j(\\vec{x}))] d\\Omega \\\\ &=& \\sum_j b_j\\{\\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}{\\bf F}_j(\\vec{x}))] d\\Omega \\} \\end{align}$$ The expression in curly braces depends only on our material coefficient and our basis functions so we can precompute this if we assume $\\mu$ does not change in time. This particular integral requires a little more manipulation to move the curl operator onto the H(Curl) basis function. $$\\begin{align} \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot\\left[\\nabla\\times\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\right]\\,d\\Omega &=& \\int_\\Omega\\left(\\nabla\\times{\\bf W}_i(\\vec{x})\\right)\\cdot\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\,d\\Omega \\\\ &-& \\int_\\Omega\\nabla\\cdot\\left[{\\bf W}_i(\\vec{x})\\times\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\right]\\,d\\Omega \\\\ &=& \\int_\\Omega\\left(\\mu^{-T}\\nabla\\times{\\bf W}_i(\\vec{x})\\right)\\cdot{\\bf F}_j(\\vec{x})\\,d\\Omega \\\\ &-& \\int_\\Gamma\\hat{n}\\cdot\\left[{\\bf W}_i(\\vec{x})\\times\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\right]\\,d\\Gamma \\\\ &=& \\int_\\Omega\\left(\\mu^{-T}\\nabla\\times{\\bf W}_i(\\vec{x})\\right)\\cdot{\\bf F}_j(\\vec{x})\\,d\\Omega \\\\ &+& \\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot\\left[\\hat{n}\\times\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\right]\\,d\\Gamma \\end{align}$$ Where $\\mu^{-T}$ is the transpose of the inverse of $\\mu$ and $\\Gamma=\\partial\\Omega$ i.e. the boundary of the domain. The first integral remaining on the right hand side is the weak curl operator which is implemented in MFEM as a BilinearFormIntegrator named MixedVectorWeakCurlIntegrator 1 . This operator is setup between lines 178 and 184 of the file maxwell_solver.cpp . The boundary integral term shown above is ignored in the maxwell miniapp which implies that it is assumed to be zero. This gives rise to a so-called natural boundary condition which in this case implies that $\\hat{n}\\times{\\bf H}=0$. Any portion of the boundary where an essential (a.k.a. Dirichlet) boundary condition is set will override this implicit boundary condition. Alternatively an inhomogeneous Neumann boundary condition can be applied by providing a nonzero function in place of $\\hat{n}\\times{\\bf H}$ in this integral. This would be accomplished by passing a known vector function to the LinearFormIntegrator named VectorFEDomainLFIntegrator and using this as a boundary integrator in ParLinearForm . Unfortunately we don't seem to have an example of this usage in either of the tesla or maxwell miniapps. Loss term $\\overline{\\sigma}{\\bf E}$ This would seem to be a simple term but, of course, there is a complication. According to the Candy and Rozmus paper this piece of the Hamiltonian should not depend on ${\\bf E}$. Furthermore, to properly model such a loss term it is best to handle it implicitly. To accomplish this the MaxwellSolver stores the current value of the electric field internally since the SIAVSolver will not provide this data to the update method (which is called ImplicitSolve ). The integral needed to model this term simply computes the product of the H(Curl) basis functions against each other along with the material coefficient, $\\overline{\\sigma}$ in this case. This integrator is called VectorFEMassIntegrator . 
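A rough sketch of assembling such a term is shown below; the H(Curl) space and the conductivity coefficient are assumed to already exist, and the names are illustrative rather than the ones used in maxwell_solver.cpp.

// sigma-weighted mass matrix on the H(Curl) space used for the loss term.
ParBilinearForm lossForm(&hcurl_fespace);          // hcurl_fespace: a ParFiniteElementSpace
lossForm.AddDomainIntegrator(new VectorFEMassIntegrator(sigmaCoef));
lossForm.Assemble();
lossForm.Finalize();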
The portion of this operator which will be used with the current value of the electric field is setup between lines 195 and 208 of the file maxwell_solver.cpp . The implicit portion is setup between lines 399 and 407 using the same integrator. Current density ${\\bf J}$ The maxwell miniapp does not place ${\\bf J}$ in H(Div) despite the comments in Section J and M . The reason for this is that the maxwell miniapp does not use a GridFunction representation of ${\\bf J}$ in any computations. It does, however, write ${\\bf J}$ to its data files for visualization and this really should be done using an H(Div) field. The way the current density enters the wave equation is a source term which is computed using the following integral: $$\\int_\\Omega{\\bf W}_i\\cdot{\\bf J}\\,d\\Omega$$ This is accomplished by using the LinearFormIntegrator named VectorFEDomainLFIntegrator and a ParLinearForm object. The setup of this object can be found between lines 264 and 266 of the file maxwell_solver.cpp . Integrals such as this, which directly evaluate a c-style function, avoid the continuity concerns raised in Section J and M . Setting up the solver The time derivative in Amp\u00e8re's Law is of the form: $$\\frac{\\partial\\epsilon{\\bf E}}{\\partial t} \\approx \\frac{\\partial}{\\partial t}(\\epsilon\\sum_ie(t){\\bf W}_i) = \\epsilon\\sum_i\\dot{e}(t){\\bf W}_i$$ Where we have assumed that $\\epsilon$ is constant in time. For the weak form of Amp\u00e8re's Law we need to again multiply by the H(Curl) basis functions and integrate over the problem domain. $$\\int_\\Omega{\\bf W}_i\\cdot(\\epsilon\\sum_j\\dot{e}(t){\\bf W}_j)d\\Omega = \\sum_j\\dot{e}(t)\\{\\int_\\Omega{\\bf W}_i\\cdot(\\epsilon{\\bf W}_j)d\\Omega \\}$$ The integral in the curly braces is a mass matrix which is again computed using the BilinearFormIntegrator named VectorFEMassIntegrator . This is setup between lines 388 and 395 of the file maxwell_solver.cpp . The more unusual part of this operator comes from the implicit handling of the loss term and the absorbing boundary condition. The latter is a simple Sommerfeld first order radiation boundary condition. Each of these implicit terms multiplies the electric field which we approximate at the time $t+\\Delta t/2$. Each of these bilinear forms which multiply the time derivative are mass matrices so a conjugate gradient iterative solver with a diagonal scaling preconditioner should work quite well. These are setup between lines 423 and 428 of the file maxwell_solver.cpp . One odd thing does appear in this setupSolver member function (and a few other places) and that is the variable idt . This is an integer related to the double precision time step dt . The reason for this is that our variable order symplectic time integrator breaks up a time step into a handful of smaller time steps which are generally not the same size. If we need to handle loss terms implicitly this variable time step will appear in the matrix passed to our solver. Of course we don't want to rebuild this matrix every time the time step changes so we build and cache the matrices in a container. The integer idt is simply the key used to access these cached matrices and the solvers that were setup to work with them. Putting it all together The only remaining thing to discuss is the way in which we use a combination of primal and dual vectors within the simulation code. However, it's hard to know what level of detail will be useful here. 
At this point I would recommend referring to our online documentation which can be found at Primal and Dual Vectors for an overview of this concept. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}}); A list of the various BilinearFormIntegrators can be found at Bilinear Form Integrators . More detailed descriptions can be found in the files fem/biliniteg.[ch]pp . \u21a9", "title": "_Maxwell Notes"}, {"location": "maxwell-notes/#maxwells-equations", "text": "$$\\begin{align} \\nabla\\times{\\bf H}& = & \\frac{\\partial{\\bf D}}{\\partial t} + {\\bf J}+ \\overline{\\sigma}{\\bf E}\\label{ampere} \\\\ \\nabla\\times{\\bf E}& = & -\\frac{\\partial{\\bf B}}{\\partial t} - {\\bf M}- \\overline{\\sigma}_M{\\bf H}\\label{faraday} \\\\ \\nabla\\cdot{\\bf D}& = & \\rho\\label{gauss} \\\\ \\nabla\\cdot{\\bf B}& = & 0\\label{trans} \\end{align}$$ With electric current density, ${\\bf J}$, magnetic current density, ${\\bf M}$, electric conductivity, $\\overline{\\sigma}$, magnetic conductivity, $\\overline{\\sigma}_M$, and electric charge density, $\\rho$. We will sometimes refer to these equations by the names Amp\u00e8re's Law, Faraday's Law, Gauss's Law, and the Transversality Condition respectively. It is also necessary to define the constitutive relations ${\\bf D}\\equiv\\epsilon{\\bf E}$ and ${\\bf B}\\equiv\\mu{\\bf H}$. It is also common to combine equations \\eqref{ampere} and \\eqref{faraday} into a single second order PDE. $$\\begin{align} \\frac{\\partial^2\\left(\\epsilon{\\bf E}\\right)}{\\partial t^2} + \\frac{\\partial\\left(\\overline{\\sigma}{\\bf E}\\right)}{\\partial t} + \\nabla\\times\\left(\\mu^{-1}\\nabla\\times{\\bf E}\\right) & \\nonumber \\\\ + \\nabla\\times\\left(\\mu^{-1}\\overline{\\sigma}_M{\\bf H}\\right) & = -\\frac{\\partial{\\bf J}}{\\partial t} - \\nabla\\times\\left(\\mu^{-1}{\\bf M}\\right) \\label{curlcurle} %\\end{align} {or}&\\\\ %\\begin{equation} \\frac{\\partial^2\\left(\\mu{\\bf H}\\right)}{\\partial t^2} + \\frac{\\partial\\left(\\overline{\\sigma}_M{\\bf H}\\right)}{\\partial t} + \\nabla\\times\\left(\\epsilon^{-1}\\nabla\\times{\\bf H}\\right) & \\nonumber \\\\ - \\nabla\\times\\left(\\epsilon^{-1}\\overline{\\sigma}{\\bf E}\\right) & = -\\frac{\\partial{\\bf M}}{\\partial t} +\\nabla\\times\\left(\\epsilon^{-1}{\\bf J}\\right) \\label{curlcurlh} \\end{align}$$ One drawback of these formulations is the appearance of ${\\bf H}$ in equation \\eqref{curlcurle} or ${\\bf E}$ in equation \\eqref{curlcurlh}. The only way to formulate these equations entirely in terms of ${\\bf E}$ or ${\\bf H}$ is to make assumptions about the spatial variation of $\\epsilon^{-1}\\overline{\\sigma}$ or $\\mu^{-1}\\overline{\\sigma}_M$. For this reason these second order formulations should be avoided unless $\\overline{\\sigma}_M=0$ or $\\overline{\\sigma}=0$.", "title": "Maxwell's Equations"}, {"location": "maxwell-notes/#discretization", "text": "", "title": "Discretization"}, {"location": "maxwell-notes/#basis-functions", "text": "There are two sets of basis functions particularly well suited for electromagnetics; Nedelec and Raviart-Thomas. The Nedelec basis functions guarantee tangential continuity of their approximations across element interfaces. This makes them well suited for the fields ${\\bf E}$ and ${\\bf H}$ which share this constraint on material interfaces. The Raviart-Thomas basis functions guarantee continuity of the normal component of their approximations across element interfaces. 
This makes them well suited for the fields ${\\bf B}$ and ${\\bf D}$ which share this constraint on material interfaces. The Nedelec basis functions which discretize the H(Curl) space are indispensable due to the presence of the Curl operators in equations \\eqref{ampere}, \\eqref{faraday}, \\eqref{curlcurle}, and \\eqref{curlcurlh}. The Raviart-Thomas basis functions which discretize the H(Div) space are convenient and reduce the computational cost but are optional, strictly speaking.", "title": "Basis Functions"}, {"location": "maxwell-notes/#discretization-of-the-primary-fields", "text": "There are three choices for discretizing the set of coupled first order partial differential equations: ${\\bf E}\\in$ H(Curl) and ${\\bf B},{\\bf J},{\\bf M}\\in$ H(Div) ${\\bf H}\\in$ H(Curl) and ${\\bf D},{\\bf J},{\\bf M}\\in$ H(Div) ${\\bf E}\\in$ H(Curl), ${\\bf H}\\in$ H(Curl), and ${\\bf J},{\\bf M}\\in$ H(Curl) (grudgingly) There is only one choice for discretizing the second order equations i.e. ${\\bf E}$ or ${\\bf H}$ in H(Curl). These basis function choices merely ensure that the approximate fields maintain the proper interface constraints at material boundaries. The choice of formulation can be made based on the required sources, boundary conditions, and/or post-processing requirements. Hence, different physical requirements can lead to different choices of formulation i.e. there is no single best choice for all problems.", "title": "Discretization of the primary fields"}, {"location": "maxwell-notes/#sec:JM", "text": "The electric and magnetic current source densities are both flux vectors and as such they are best represented using the H(Div) space. This is most apparent when modeling the eddy current equation but H(Div) can be important in wave equations as well. Imagine modeling a current carrying conductor surrounded by some insulating material. The current density ${\\bf J}$ may be non-zero inside the conductor but it should be identically zero outside of it. Assuming the computational mesh conforms to the surface of this conductor, an H(Div) field can accurately represent such a current flow as long as the current at the surface of the conductor remains parallel to that surface. In other words the current will not \"leak\" out of the conductor as long as the normal component of the current is zero at the surface. On the other hand, if H(Curl) basis functions were used for ${\\bf J}$ its tangential components would need to be continuous across the surface of the conductor. This produces a non-physical current within the first layer of elements surrounding the conductor. Non-physical currents leaking out of conductors when using H(Curl) basis functions for the current density ${\\bf J}$ can lead to inaccurate eddy current simulations either by producing a larger than expected magnetic field outside the conductor or a reduced thermal heat load within the conductor. Similarly, in wave simulations the total power emanating from an antenna can be either over- or under-estimated depending upon how ${\\bf J}$ is computed on the surface of the antenna. Such matters can be eliminated by simply representing ${\\bf J}$ as an H(Div) function. I'm sure similar arguments can be made for the magnetization ${\\bf M}$ although I have less experience with that.", "title": "Discretization of ${\\bf J}$ and ${\\bf M}$"}, {"location": "maxwell-notes/#the-maxwell-miniapp", "text": "The maxwell Miniapp uses the EB formulation with $\\overline{\\sigma}_M$ and ${\\bf M}$ assumed to be zero. 
It evolves the first order coupled system of equations using a symplectic time integration algorithm by Candy and Rozmus described in \"A Symplectic Integration Algorithm for Separable Hamiltonian Functions\", Journal of Computational Physics, Vol. 92, pages 230-256 (1991). The main advantage of this algorithm is that it conserves energy. Another advantage is that the approximations of ${\\bf E}$ and ${\\bf B}$ correspond to the same simulation time rather than being staggered as in other methods. The variable order symplectic integration class in MFEM called SIAVSolver requires that we implement our coupled set of PDEs as a pair of operators. The first is an Operator which can be used to update the magnetic field, ${\\bf B}$, using Faraday's Law by computing $-\\nabla\\times{\\bf E}$. The second is a TimeDependentOperator which can be used to update the electric field, ${\\bf E}$, using Amp\u00e8re's Law by computing $\\nabla\\times\\left(\\mu^{-1}{\\bf B}\\right)-{\\bf J}-\\overline{\\sigma}{\\bf E}$. We choose to implement both of these operators in a single class which we call MaxwellSolver . The first operator, $-\\nabla\\times{\\bf E}$, acts on ${\\bf E}\\in$ H(Curl) to produce a result $\\frac{\\partial{\\bf B}}{\\partial t}\\in$ H(Div). By design our discrete representation of H(Div) contains the curl of any field in our discrete representation of H(curl). Consequently we can compute this operator by simply evaluating the curl of our H(Curl) basis functions in terms of our H(Div) basis functions. This evaluation is handled by a DiscreteInterpolator called CurlInterpolator . The process of looping over each element to compute these interpolations is conducted by the ParDiscreteLinearOperator . In the MaxwellSolver this curl operator is simply named Curl_ and its negative, needed by the SIAVSolver , is named NegCurl_ . These operators are setup between lines 227 and 236 of the file maxwell_solver.cpp . The second operator, $\\nabla\\times\\left(\\mu^{-1}{\\bf B}\\right)-{\\bf J}-\\overline{\\sigma}{\\bf E}$, requires a bit more effort. The first thing to notice is that we cannot compute the curl of $\\mu^{-1}{\\bf B}$ precisely. Primarily this is due to the fact that ${\\bf B}\\in$ H(Div) rather than H(Curl) but, in general, the presence of $\\mu^{-1}$ is also a problem since we don't know its derivatives at all. These complications require that we compute the curl operator in a weak sense.", "title": "The maxwell Miniapp"}, {"location": "maxwell-notes/#setup-of-the-timedependentoperator", "text": "", "title": "Setup of the TimeDependentOperator"}, {"location": "maxwell-notes/#weak-curl-of-mu-1bf-b", "text": "Often in wave propagation $\\mu$ is assumed to be constant but we will not make this assumption. In principle $\\mu$ could be anisotropic and inhomogeneous although we do assume it is constant in time. The magnetic field ${\\bf B}$ will be written as a linear combination of basis functions in H(Div) which we will label as ${\\bf F}_i$ e.g. ${\\bf B}(\\vec{x})\\approx\\sum_i b_i(t){\\bf F}_i(\\vec{x})$. Our goal is to compute $\\frac{\\partial{\\bf E}}{\\partial t}$ where ${\\bf E}\\in$ H(Curl) so we need to represent $\\nabla\\times\\mu^{-1}{\\bf B}$ also in H(Curl). The basis functions of H(Curl) will be labeled as ${\\bf W}_i$. To compute the weak form of this term we multiply the operator of interest by each of our H(Curl) basis functions and integrate over the entire problem domain to obtain an equation corresponding to each basis function in H(Curl). 
For example $$\\begin{align} \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}{\\bf B})] d\\Omega &=& \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}\\sum_j b_j{\\bf F}_j(\\vec{x}))] d\\Omega \\\\ &=& \\sum_j b_j\\{\\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}{\\bf F}_j(\\vec{x}))] d\\Omega \\} \\end{align}$$ The expression in curly braces depends only on our material coefficient and our basis functions so we can precompute this if we assume $\\mu$ does not change in time. This particular integral requires a little more manipulation to move the curl operator onto the H(Curl) basis function. $$\\begin{align} \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot\\left[\\nabla\\times\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\right]\\,d\\Omega &=& \\int_\\Omega\\left(\\nabla\\times{\\bf W}_i(\\vec{x})\\right)\\cdot\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\,d\\Omega \\\\ &-& \\int_\\Omega\\nabla\\cdot\\left[{\\bf W}_i(\\vec{x})\\times\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\right]\\,d\\Omega \\\\ &=& \\int_\\Omega\\left(\\mu^{-T}\\nabla\\times{\\bf W}_i(\\vec{x})\\right)\\cdot{\\bf F}_j(\\vec{x})\\,d\\Omega \\\\ &-& \\int_\\Gamma\\hat{n}\\cdot\\left[{\\bf W}_i(\\vec{x})\\times\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\right]\\,d\\Gamma \\\\ &=& \\int_\\Omega\\left(\\mu^{-T}\\nabla\\times{\\bf W}_i(\\vec{x})\\right)\\cdot{\\bf F}_j(\\vec{x})\\,d\\Omega \\\\ &+& \\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot\\left[\\hat{n}\\times\\left(\\mu^{-1}{\\bf F}_j(\\vec{x})\\right)\\right]\\,d\\Gamma \\end{align}$$ Where $\\mu^{-T}$ is the transpose of the inverse of $\\mu$ and $\\Gamma=\\partial\\Omega$ i.e. the boundary of the domain. The first integral remaining on the right hand side is the weak curl operator which is implemented in MFEM as a BilinearFormIntegrator named MixedVectorWeakCurlIntegrator 1 . This operator is setup between lines 178 and 184 of the file maxwell_solver.cpp . The boundary integral term shown above is ignored in the maxwell miniapp which implies that it is assumed to be zero. This gives rise to a so-called natural boundary condition which in this case implies that $\\hat{n}\\times{\\bf H}=0$. Any portion of the boundary where an essential (a.k.a. Dirichlet) boundary condition is set will override this implicit boundary condition. Alternatively an inhomogeneous Neumann boundary condition can be applied by providing a nonzero function in place of $\\hat{n}\\times{\\bf H}$ in this integral. This would be accomplished by passing a known vector function to the LinearFormIntegrator named VectorFEDomainLFIntegrator and using this as a boundary integrator in ParLinearForm . Unfortunately we don't seem to have an example of this usage in either of the tesla or maxwell miniapps.", "title": "Weak curl of $\\mu^{-1}{\\bf B}$"}, {"location": "maxwell-notes/#loss-term-overlinesigmabf-e", "text": "This would seem to be a simple term but, of course, there is a complication. According to the Candy and Rozmus paper this piece of the Hamiltonian should not depend on ${\\bf E}$. Furthermore, to properly model such a loss term it is best to handle it implicitly. To accomplish this the MaxwellSolver stores the current value of the electric field internally since the SIAVSolver will not provide this data to the update method (which is called ImplicitSolve ). The integral needed to model this term simply computes the product of the H(Curl) basis functions against each other along with the material coefficient, $\\overline{\\sigma}$ in this case. 
This integrator is called VectorFEMassIntegrator . The portion of this operator which will be used with the current value of the electric field is set up between lines 195 and 208 of the file maxwell_solver.cpp . The implicit portion is set up between lines 399 and 407 using the same integrator.", "title": "Loss term $\\overline{\\sigma}{\\bf E}$"}, {"location": "maxwell-notes/#current-density-bf-j", "text": "The maxwell miniapp does not place ${\\bf J}$ in H(Div) despite the comments in Section J and M . The reason for this is that the maxwell miniapp does not use a GridFunction representation of ${\\bf J}$ in any computations. It does, however, write ${\\bf J}$ to its data files for visualization, and this really should be done using an H(Div) field. The current density enters the wave equation as a source term which is computed using the following integral: $$\\int_\\Omega{\\bf W}_i\\cdot{\\bf J}\\,d\\Omega$$ This is accomplished by using the LinearFormIntegrator named VectorFEDomainLFIntegrator and a ParLinearForm object. The setup of this object can be found between lines 264 and 266 of the file maxwell_solver.cpp . Integrals such as this, which directly evaluate a C-style function, avoid the continuity concerns raised in Section J and M .", "title": "Current density ${\\bf J}$"}, {"location": "maxwell-notes/#setting-up-the-solver", "text": "The time derivative in Amp\u00e8re's Law is of the form: $$\\frac{\\partial\\epsilon{\\bf E}}{\\partial t} \\approx \\frac{\\partial}{\\partial t}(\\epsilon\\sum_ie(t){\\bf W}_i) = \\epsilon\\sum_i\\dot{e}(t){\\bf W}_i$$ where we have assumed that $\\epsilon$ is constant in time. For the weak form of Amp\u00e8re's Law we again multiply by the H(Curl) basis functions and integrate over the problem domain: $$\\int_\\Omega{\\bf W}_i\\cdot(\\epsilon\\sum_j\\dot{e}(t){\\bf W}_j)d\\Omega = \\sum_j\\dot{e}(t)\\{\\int_\\Omega{\\bf W}_i\\cdot(\\epsilon{\\bf W}_j)d\\Omega \\}$$ The integral in the curly braces is a mass matrix which is again computed using the BilinearFormIntegrator named VectorFEMassIntegrator . This is set up between lines 388 and 395 of the file maxwell_solver.cpp . The more unusual part of this operator comes from the implicit handling of the loss term and the absorbing boundary condition. The latter is a simple first-order Sommerfeld radiation boundary condition. Each of these implicit terms multiplies the electric field, which we approximate at the time $t+\\Delta t/2$. All of the bilinear forms which multiply the time derivative are mass matrices, so a conjugate gradient iterative solver with a diagonal scaling preconditioner should work quite well. These are set up between lines 423 and 428 of the file maxwell_solver.cpp . One odd thing does appear in this setupSolver member function (and a few other places): the variable idt . This is an integer related to the double precision time step dt . The reason for it is that our variable order symplectic time integrator breaks a time step into a handful of smaller steps which are generally not the same size. If we need to handle loss terms implicitly, this variable time step will appear in the matrix passed to our solver. Of course we don't want to rebuild this matrix every time the step size changes, so we build and cache the matrices in a container.
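The caching pattern might look roughly like the sketch below. The coefficient names, the helper function, and the integer key construction are all illustrative rather than the miniapp's exact code, but the MFEM classes used (VectorFEMassIntegrator, HyprePCG, HypreDiagScale) are the ones discussed above.

```cpp
#include "mfem.hpp"
#include <cmath>
#include <map>
using namespace mfem;

// Sketch: build a (eps + dt*sigma) mass matrix on the H(Curl) space once per
// distinct sub-step size, cache it with its diagonally scaled PCG solver, and
// reuse the pair whenever that dt recurs. Names are illustrative.
struct CachedMassSolver
{
   HypreParMatrix *M;
   HypreDiagScale *prec;
   HyprePCG       *pcg;
};
static std::map<int, CachedMassSolver> solver_cache;   // keyed by an integer form of dt

void SolveForEDot(ParFiniteElementSpace &HCurlFESpace,
                  Coefficient &epsCoef, Coefficient &sigmaCoef,
                  double dt, const Vector &rhs, Vector &dEdt)
{
   const int idt = (int)std::lround(1.0 / dt);          // illustrative key for this dt

   if (solver_cache.find(idt) == solver_cache.end())
   {
      ProductCoefficient dtSigma(dt, sigmaCoef);
      SumCoefficient lossyEps(epsCoef, dtSigma);        // eps + dt*sigma

      ParBilinearForm m(&HCurlFESpace);
      m.AddDomainIntegrator(new VectorFEMassIntegrator(lossyEps));
      m.Assemble();
      m.Finalize();

      CachedMassSolver cs;
      cs.M    = m.ParallelAssemble();
      cs.prec = new HypreDiagScale(*cs.M);              // diagonal scaling
      cs.pcg  = new HyprePCG(*cs.M);
      cs.pcg->SetTol(1e-12);
      cs.pcg->SetMaxIter(200);
      cs.pcg->SetPreconditioner(*cs.prec);
      solver_cache[idt] = cs;
   }

   // rhs and dEdt are true-dof vectors on the H(Curl) space.
   solver_cache[idt].pcg->Mult(rhs, dEdt);
}
```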
The integer idt is simply the key used to access these cached matrices and the solvers that were setup to work with them.", "title": "Setting up the solver"}, {"location": "maxwell-notes/#putting-it-all-together", "text": "The only remaining thing to discuss is the way in which we use a combination of primal and dual vectors within the simulation code. However, it's hard to know what level of detail will be useful here. At this point I would recommend referring to our online documentation which can be found at Primal and Dual Vectors for an overview of this concept. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}}); A list of the various BilinearFormIntegrators can be found at Bilinear Form Integrators . More detailed descriptions can be found in the files fem/biliniteg.[ch]pp . \u21a9", "title": "Putting it all together"}, {"location": "mesh-format-v1.0/", "text": "Mesh Formats MFEM mesh v1.0 This is the default format in GLVis. It can be used to describe simple (triangular, quadrilateral, tetrahedral and hexahedral meshes with straight edges) or complicated (curvilinear and more general) meshes. Straight meshes In the simple case of a mesh with straight edges the format looks as follows MFEM mesh v1.0 # Dimension of the mesh: 1, 2 or 3 (e.g. 2 for a 2D surface mesh in 3D space) dimension # Mesh elements, e.g. tetrahedrons (4) elements ... ... # Mesh faces/edges on the boundary, e.g. triangles (2) boundary ... ... # Vertex coordinates vertices ... > ... Lines starting with \"#\" denote comments. The supported geometry types are: POINT = 0 SEGMENT = 1 TRIANGLE = 2 SQUARE = 3 TETRAHEDRON = 4 CUBE = 5 PRISM = 6 see the comments in this source file for more details. For example, the beam-quad.mesh file from the data directory looks like this: MFEM mesh v1.0 dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 1 0 3 1 2 1 3 1 3 2 3 1 4 3 3 1 5 4 3 1 6 5 3 1 7 6 3 1 8 7 3 1 9 10 3 1 10 11 3 1 11 12 3 1 12 13 3 1 13 14 3 1 14 15 3 1 15 16 3 1 16 17 1 1 0 9 2 1 17 8 vertices 18 2 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 which corresponds to the mesh visualized with glvis -m beam-quad.mesh -k \"Ame****\" Curvilinear and more general meshes The MFEM mesh v1.0 format also support the general description of meshes based on a vector finite element grid function with degrees of freedom in the \"nodes\" of the mesh. This general format is described briefly below, and in more details on the General Mesh Format page . MFEM mesh v1.0 # Dimension of the mesh: 1, 2 or 3 (e.g. 2 for a 2D surface mesh in 3D space) dimension # Mesh elements, e.g. tetrahedrons (4) elements ... ... # Mesh faces/edges on the boundary, e.g. triangles (2) boundary ... ... # Number of vertices (no coordinates) vertices # Mesh nodes as degrees of freedom of a finite element grid function nodes FiniteElementSpace FiniteElementCollection: VDim: Ordering: 0 ... ... ... Some possible finite element collection choices are: Linear , Quadratic and Cubic corresponding to curvilinear P1/Q1, P2/Q2 and P3/Q3 meshes. The algorithm for the numbering of the degrees of freedom can be found in MFEM's source code . For example, the escher-p3.mesh from MFEM's data directory describes a tetrahedral mesh with nodes given by a P3 vector Lagrangian finite element function. 
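Any of these files, whether straight like beam-quad.mesh or curvilinear like the escher-p3.mesh just mentioned, can also be loaded, queried and re-written programmatically. A minimal sketch, with file names taken from the data directory:

```cpp
#include "mfem.hpp"
#include <fstream>
#include <iostream>
using namespace mfem;

int main()
{
   // Read a mesh in any of the formats described here; the reader is chosen
   // from the file contents.
   Mesh mesh("beam-quad.mesh", 1, 1);

   std::cout << "dimension: " << mesh.Dimension()
             << "  elements: " << mesh.GetNE()
             << "  boundary elements: " << mesh.GetNBE()
             << "  vertices: " << mesh.GetNV() << std::endl;

   mesh.UniformRefinement();                 // optional refinement

   std::ofstream ofs("beam-quad-refined.mesh");
   ofs.precision(8);
   mesh.Print(ofs);                          // written in the MFEM mesh v1.0 format
   return 0;
}
```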
Visualizing this mesh with glvis -m escher-p3.mesh -k \"Aaaoooooooooo**************tt\" we get: Topologically periodic meshes can also be described in this format, see for example the periodic-segment , periodic-square , and periodic-cube meshes in the data directory, as well as Example 9 . MFEM NC mesh v1.0 The MFEM NC mesh v1.0 is a format for nonconforming meshes in MFEM. It is similar in style to the default (conforming) MFEM mesh v1.0 format, but is in fact independent and supports advanced AMR features such as storing refined elements and the refinement hierarchy, anisotropic element refinement, hanging nodes (vertices), parallel partitioning. The file starts with a signature and the mesh dimension: MFEM NC mesh v1.0 # NCMesh supported geometry types: # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # PRISM = 6 # mesh dimension 1, 2 or 3 dimension # optional rank for parallel files, defaults to 0 rank The rank section defines the MPI rank of the process that saved the file. This section can be omitted in serial meshes. Similarly to the conforming format, the next section lists all elements. This time however, we recognize two kinds of elements: Regular, active elements ( refinement type == 0 ). These elements participate in the computation (are listed in the Mesh class) and reference vertex indices. Inactive, previously refined elements ( refinement type > 0 ). Instead of vertices, these elements contain links to their child elements, and are not visible in the Mesh class. All elements also have their geometry type and user attribute defined, as well as the MPI rank of their owner process (only used in parallel meshes). # mesh elements, both regular and refined elements 0 ... Storing the complete refinement hierarchy allows MFEM to coarsen some of the fine elements if necessary, and also to naturally define an ordering of the fine elements that can be used for fast parallel partitioning of the mesh (a depth-first traversal of all refinement trees defines a space-filling curve (SFC) that can be easily partitioned among parallel processes). The following picture illustrates the refinement hierarchy of a mesh that started as two quadrilaterals and then underwent two anisotropic refinements (blue numbers are vertex indices): The corresponding elements section of the mesh file could look like this: elements 6 0 1 3 2 2 3 # element 0: refinement 2 (Y), children 2, 3 0 1 3 0 1 2 5 4 # element 1: no refinement, vertices 1, 2, 5, 4 0 1 3 1 4 5 # element 2: refinement 1 (X), children 4, 5 0 1 3 0 6 7 4 3 # element 3: no refinement, vertices 6, 7, 4, 3 0 1 3 0 0 8 9 6 # element 4: no refinement, vertices 0, 8, 9, 6 0 1 3 0 8 1 7 9 # element 5: no refinement, vertices 8, 1, 7, 9 The refinement types are numbered as follows: Note that the type is encoded as a 3-bit number, where bits 0, 1, 2 correspond to the X, Y, Z axes, respectively. Other element geometries allow fewer but similar refinement types: triangle (3), square (1, 2, 3), tetrahedron (7), prism (3, 4, 7). The next section is the boundary section, which is exactly the same as in the conforming format: boundary ... The nonconforming mesh however needs to identify hanging vertices, which may occur in the middle of edges or faces as elements are refined. In fact, any vertex that was created as a result of refinement always has two \"parent\" vertices and needs to be listed in the vertex_parents section: vertex_parents ... 
In our example above, vertices 6, 7, 8, 9 have these parents: vertex_parents 4 6 0 3 7 1 4 8 0 1 9 6 7 Vertices can appear in any order in this section. The only limitation is that the first N vertex indices (not listed in this section) be reserved for top-level vertices (those with no parents, typically the vertices of the coarse mesh). The next section is optional and can be safely omitted when creating the mesh file manually. The root_state section affects leaf ordering when traversing the refinement trees and is used to optimize the SFC-based partitioning. There is one number per root element. The default state for all root elements is zero. root_state ... Finally, we have the coordinates section which assigns physical positions to the N top-level vertices. Note that the positions of hanging vertices are always derived from their parent vertices and are not listed in the mesh file. coordinates ... > ... If the mesh is curvilinear, the coordinates section can be replaced with an alternative section called nodes . The nodes keyword is then followed by a serialized GridFunction representing a vector-valued finite element function defining the curvature of the elements, similarly as in the conforming case. The end of the mesh file is marked with the line mfem_mesh_end . For examples of meshes using the NC mesh v1.0 format, see amr-quad.mesh , amr-hex.mesh and fichera-amr.mesh (visualized below) in the data directory of MFEM. MFEM mesh v1.3 Version 1.3 of the MFEM mesh file format adds support for named attribute sets. This is a convenience feature which allows application users (or developers) to refer to a set of attribute numbers or boundary attribute numbers using a text string as a shorthand. Domain attribute numbers and boundary attribute numbers cannot coexist in the same set. Attribute numbers can appear in more than one set so that a given region may be referenced for different purposes in different parts of an application. Domain attribute sets are listed after the elements section of the mesh file in a new section titled attribute_sets . Similarly, boundary attribute sets follow boundary in a new section titled bdr_attribute_sets . MFEM mesh v1.3 ... elements ... attribute_sets \"\" ... ... boundary ... bdr_attribute_sets \"\" ... ... vertices ... mfem_mesh_end A specific example of a v1.3 mesh file can be seen in compass.mesh , shown above, which includes names based on compass directions for illustration. NURBS meshes MFEM provides full support for meshes and discretization spaces based on Non-uniform Rational B-Splines (NURBS). These are treated similarly to general curvilinear meshes where the NURBS nodes are specified as a grid function at the end of the mesh file. For example, here is a simple quadratic NURBS mesh for a square domain with a (perfectly) circular hole in the middle. (The exact representation of conical sections is a major advantage of the NURBS approach over high-order finite elements.) 
MFEM NURBS mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # SEGMENT = 1 # SQUARE = 3 # CUBE = 5 # dimension 2 elements 4 1 3 0 1 5 4 1 3 1 2 6 5 1 3 2 3 7 6 1 3 3 0 4 7 boundary 8 1 1 0 1 1 1 1 2 1 1 2 3 1 1 3 0 1 1 5 4 1 1 6 5 1 1 7 6 1 1 4 7 edges 12 0 0 1 0 4 5 1 1 2 1 5 6 2 2 3 2 6 7 3 3 0 3 7 4 4 0 4 4 1 5 4 2 6 4 3 7 vertices 8 knotvectors 5 2 3 0 0 0 1 1 1 2 3 0 0 0 1 1 1 2 3 0 0 0 1 1 1 2 3 0 0 0 1 1 1 2 3 0 0 0 1 1 1 weights 1 1 1 1 1 1 1 1 1 0.707106781 1 0.707106781 1 0.707106781 1 0.707106781 1 1 1 1 0.853553391 0.853553391 0.853553391 0.853553391 FiniteElementSpace FiniteElementCollection: NURBS2 VDim: 2 Ordering: 1 0 0 1 0 1 1 0 1 0.358578644 0.358578644 0.641421356 0.358578644 0.641421356 0.641421356 0.358578644 0.641421356 0.5 0 0.5 0.217157288 1 0.5 0.782842712 0.5 0.5 1 0.5 0.782842712 0 0.5 0.217157288 0.5 0.15 0.15 0.85 0.15 0.85 0.85 0.15 0.85 0.5 0.108578644 0.891421356 0.5 0.5 0.891421356 0.108578644 0.5 This above file, as well as other examples of NURBS meshes, can be found in MFEM's data directory . It can be visualized directly with glvis -m square-disc-nurbs.mesh which after several refinements with the \" i \" key looks like To explain MFEM's NURBS mesh file format, we first note that the topological part of the mesh (the elements and boundary sections) describe the 4 NURBS patches visible above. We use the vertex numbers as labels, so we only need the number of vertices. In the NURBS case we need to also provide description of the edges on the patch boundaries and associate a knot vector with each of them. This is done in the edges section where the first index in each row refers to the knot vector id (from the following knotvectors section), while the remaining two indexes are the edge vertex numbers. The position of the NURBS nodes (control points) is given as a NURBS grid function at the end of the file, while the associated weights are listed in the preceding weights section. Some examples of VTK meshes can be found in MFEM's data directory . Here is one of the 3D NURBS meshes The image above was produced with some refinement (key \" o \") and mouse manipulations from glvis -m pipe-nurbs.mesh Solutions from NURBS discretization spaces are also natively supported. For example here is the approximation for the solution of a simple Poisson problem on a refined version of the above mesh. glvis -m square-disc-nurbs.mesh -g sol.gf Curvilinear VTK meshes MFEM also supports quadratic triangular, quadrilaterals, tetrahedral and hexahedral curvilinear meshes in VTK format. This format is described in the VTK file format documentation . The local numbering of degrees of freedom for the biquadratic quads and triquadratic hexes can be found in the Doxygen reference of the vtkBiQuadraticQuad and vtkTriQuadraticHexahedron classes. Currently VTK does not support cubic, and higher-order meshes. As an example, consider a simple curved quadrilateral saved in a file quad.vtk : # vtk DataFile Version 3.0 Generated by MFEM ASCII DATASET UNSTRUCTURED_GRID POINTS 9 double 0 0 0 1 0 0 1 1 0 0.1 0.9 0 0.5 -0.05 0 0.9 0.5 0 0.5 1 0 0 0.5 0 0.45 0.55 0 CELLS 1 10 9 0 1 2 3 4 5 6 7 8 CELL_TYPES 1 28 CELL_DATA 1 SCALARS material int LOOKUP_TABLE default 1 Visualizing it with \" glvis -m quad.vtk \" and typing \" Aemiii \" in the GLVis window we get: The \" i \" key increases the reference element subdivision which gives an increasingly better approximation of the actual curvature of the element. 
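Such a VTK file can also be read directly by MFEM and re-saved in the native mesh format. A minimal sketch using the quad.vtk example above (the output file name is illustrative):

```cpp
#include "mfem.hpp"
#include <fstream>
#include <iostream>
using namespace mfem;

int main()
{
   Mesh mesh("quad.vtk", 1, 1);      // the format is detected from the file contents

   // The "material" cell data from the VTK file becomes the element attribute.
   std::cout << "attribute of element 0: " << mesh.GetAttribute(0) << std::endl;

   std::ofstream ofs("quad.mesh");
   ofs.precision(8);
   mesh.Print(ofs);                  // curvature is kept via the nodes grid function
   return 0;
}
```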
To view the curvature of the mapping inside the element we can use the \"I\" key, e.g., glvis -m quad.vtk -k \"AemIIiii\" Here is a slightly more complicated quadratic quadrilateral mesh example (the different colors in the GLVis window are used to distinguish neighboring elements): glvis -m star-q2.vtk -k \"Am\" MFEM and GLVis can also handle quadratic triangular meshes: glvis -m square-disc-p2.vtk -k \"Am\" As well as quadratic tetrahedral and quadratic hexahedral VTK meshes: glvis -m escher-p2.vtk -k \"Aaaooooo**************\" glvis -m fichera-q2.vtk -k \"Aaaooooo******\"", "title": "_Mesh Format v1.0"}, {"location": "mesh-format-v1.0/#mesh-formats", "text": "", "title": "Mesh Formats"}, {"location": "mesh-format-v1.0/#mfem-mesh-v10", "text": "This is the default format in GLVis. It can be used to describe simple (triangular, quadrilateral, tetrahedral and hexahedral meshes with straight edges) or complicated (curvilinear and more general) meshes.", "title": "MFEM mesh v1.0"}, {"location": "mesh-format-v1.0/#straight-meshes", "text": "In the simple case of a mesh with straight edges the format looks as follows MFEM mesh v1.0 # Dimension of the mesh: 1, 2 or 3 (e.g. 2 for a 2D surface mesh in 3D space) dimension # Mesh elements, e.g. tetrahedrons (4) elements ... ... # Mesh faces/edges on the boundary, e.g. triangles (2) boundary ... ... # Vertex coordinates vertices ... > ... Lines starting with \"#\" denote comments. The supported geometry types are: POINT = 0 SEGMENT = 1 TRIANGLE = 2 SQUARE = 3 TETRAHEDRON = 4 CUBE = 5 PRISM = 6 see the comments in this source file for more details. For example, the beam-quad.mesh file from the data directory looks like this: MFEM mesh v1.0 dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 1 0 3 1 2 1 3 1 3 2 3 1 4 3 3 1 5 4 3 1 6 5 3 1 7 6 3 1 8 7 3 1 9 10 3 1 10 11 3 1 11 12 3 1 12 13 3 1 13 14 3 1 14 15 3 1 15 16 3 1 16 17 1 1 0 9 2 1 17 8 vertices 18 2 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 which corresponds to the mesh visualized with glvis -m beam-quad.mesh -k \"Ame****\"", "title": "Straight meshes"}, {"location": "mesh-format-v1.0/#curvilinear-and-more-general-meshes", "text": "The MFEM mesh v1.0 format also support the general description of meshes based on a vector finite element grid function with degrees of freedom in the \"nodes\" of the mesh. This general format is described briefly below, and in more details on the General Mesh Format page . MFEM mesh v1.0 # Dimension of the mesh: 1, 2 or 3 (e.g. 2 for a 2D surface mesh in 3D space) dimension # Mesh elements, e.g. tetrahedrons (4) elements ... ... # Mesh faces/edges on the boundary, e.g. triangles (2) boundary ... ... # Number of vertices (no coordinates) vertices # Mesh nodes as degrees of freedom of a finite element grid function nodes FiniteElementSpace FiniteElementCollection: VDim: Ordering: 0 ... ... ... Some possible finite element collection choices are: Linear , Quadratic and Cubic corresponding to curvilinear P1/Q1, P2/Q2 and P3/Q3 meshes. The algorithm for the numbering of the degrees of freedom can be found in MFEM's source code . For example, the escher-p3.mesh from MFEM's data directory describes a tetrahedral mesh with nodes given by a P3 vector Lagrangian finite element function. 
Visualizing this mesh with glvis -m escher-p3.mesh -k \"Aaaoooooooooo**************tt\" we get: Topologically periodic meshes can also be described in this format, see for example the periodic-segment , periodic-square , and periodic-cube meshes in the data directory, as well as Example 9 .", "title": "Curvilinear and more general meshes"}, {"location": "mesh-format-v1.0/#mfem-nc-mesh-v10", "text": "The MFEM NC mesh v1.0 is a format for nonconforming meshes in MFEM. It is similar in style to the default (conforming) MFEM mesh v1.0 format, but is in fact independent and supports advanced AMR features such as storing refined elements and the refinement hierarchy, anisotropic element refinement, hanging nodes (vertices), parallel partitioning. The file starts with a signature and the mesh dimension: MFEM NC mesh v1.0 # NCMesh supported geometry types: # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # PRISM = 6 # mesh dimension 1, 2 or 3 dimension # optional rank for parallel files, defaults to 0 rank The rank section defines the MPI rank of the process that saved the file. This section can be omitted in serial meshes. Similarly to the conforming format, the next section lists all elements. This time however, we recognize two kinds of elements: Regular, active elements ( refinement type == 0 ). These elements participate in the computation (are listed in the Mesh class) and reference vertex indices. Inactive, previously refined elements ( refinement type > 0 ). Instead of vertices, these elements contain links to their child elements, and are not visible in the Mesh class. All elements also have their geometry type and user attribute defined, as well as the MPI rank of their owner process (only used in parallel meshes). # mesh elements, both regular and refined elements 0 ... Storing the complete refinement hierarchy allows MFEM to coarsen some of the fine elements if necessary, and also to naturally define an ordering of the fine elements that can be used for fast parallel partitioning of the mesh (a depth-first traversal of all refinement trees defines a space-filling curve (SFC) that can be easily partitioned among parallel processes). The following picture illustrates the refinement hierarchy of a mesh that started as two quadrilaterals and then underwent two anisotropic refinements (blue numbers are vertex indices): The corresponding elements section of the mesh file could look like this: elements 6 0 1 3 2 2 3 # element 0: refinement 2 (Y), children 2, 3 0 1 3 0 1 2 5 4 # element 1: no refinement, vertices 1, 2, 5, 4 0 1 3 1 4 5 # element 2: refinement 1 (X), children 4, 5 0 1 3 0 6 7 4 3 # element 3: no refinement, vertices 6, 7, 4, 3 0 1 3 0 0 8 9 6 # element 4: no refinement, vertices 0, 8, 9, 6 0 1 3 0 8 1 7 9 # element 5: no refinement, vertices 8, 1, 7, 9 The refinement types are numbered as follows: Note that the type is encoded as a 3-bit number, where bits 0, 1, 2 correspond to the X, Y, Z axes, respectively. Other element geometries allow fewer but similar refinement types: triangle (3), square (1, 2, 3), tetrahedron (7), prism (3, 4, 7). The next section is the boundary section, which is exactly the same as in the conforming format: boundary ... The nonconforming mesh however needs to identify hanging vertices, which may occur in the middle of edges or faces as elements are refined. In fact, any vertex that was created as a result of refinement always has two \"parent\" vertices and needs to be listed in the vertex_parents section: vertex_parents ... 
In our example above, vertices 6, 7, 8, 9 have these parents: vertex_parents 4 6 0 3 7 1 4 8 0 1 9 6 7 Vertices can appear in any order in this section. The only limitation is that the first N vertex indices (not listed in this section) be reserved for top-level vertices (those with no parents, typically the vertices of the coarse mesh). The next section is optional and can be safely omitted when creating the mesh file manually. The root_state section affects leaf ordering when traversing the refinement trees and is used to optimize the SFC-based partitioning. There is one number per root element. The default state for all root elements is zero. root_state ... Finally, we have the coordinates section which assigns physical positions to the N top-level vertices. Note that the positions of hanging vertices are always derived from their parent vertices and are not listed in the mesh file. coordinates ... > ... If the mesh is curvilinear, the coordinates section can be replaced with an alternative section called nodes . The nodes keyword is then followed by a serialized GridFunction representing a vector-valued finite element function defining the curvature of the elements, similarly as in the conforming case. The end of the mesh file is marked with the line mfem_mesh_end . For examples of meshes using the NC mesh v1.0 format, see amr-quad.mesh , amr-hex.mesh and fichera-amr.mesh (visualized below) in the data directory of MFEM.", "title": "MFEM NC mesh v1.0"}, {"location": "mesh-format-v1.0/#mfem-mesh-v13", "text": "Version 1.3 of the MFEM mesh file format adds support for named attribute sets. This is a convenience feature which allows application users (or developers) to refer to a set of attribute numbers or boundary attribute numbers using a text string as a shorthand. Domain attribute numbers and boundary attribute numbers cannot coexist in the same set. Attribute numbers can appear in more than one set so that a given region may be referenced for different purposes in different parts of an application. Domain attribute sets are listed after the elements section of the mesh file in a new section titled attribute_sets . Similarly, boundary attribute sets follow boundary in a new section titled bdr_attribute_sets . MFEM mesh v1.3 ... elements ... attribute_sets \"\" ... ... boundary ... bdr_attribute_sets \"\" ... ... vertices ... mfem_mesh_end A specific example of a v1.3 mesh file can be seen in compass.mesh , shown above, which includes names based on compass directions for illustration.", "title": "MFEM mesh v1.3"}, {"location": "mesh-format-v1.0/#nurbs-meshes", "text": "MFEM provides full support for meshes and discretization spaces based on Non-uniform Rational B-Splines (NURBS). These are treated similarly to general curvilinear meshes where the NURBS nodes are specified as a grid function at the end of the mesh file. For example, here is a simple quadratic NURBS mesh for a square domain with a (perfectly) circular hole in the middle. (The exact representation of conical sections is a major advantage of the NURBS approach over high-order finite elements.) 
MFEM NURBS mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # SEGMENT = 1 # SQUARE = 3 # CUBE = 5 # dimension 2 elements 4 1 3 0 1 5 4 1 3 1 2 6 5 1 3 2 3 7 6 1 3 3 0 4 7 boundary 8 1 1 0 1 1 1 1 2 1 1 2 3 1 1 3 0 1 1 5 4 1 1 6 5 1 1 7 6 1 1 4 7 edges 12 0 0 1 0 4 5 1 1 2 1 5 6 2 2 3 2 6 7 3 3 0 3 7 4 4 0 4 4 1 5 4 2 6 4 3 7 vertices 8 knotvectors 5 2 3 0 0 0 1 1 1 2 3 0 0 0 1 1 1 2 3 0 0 0 1 1 1 2 3 0 0 0 1 1 1 2 3 0 0 0 1 1 1 weights 1 1 1 1 1 1 1 1 1 0.707106781 1 0.707106781 1 0.707106781 1 0.707106781 1 1 1 1 0.853553391 0.853553391 0.853553391 0.853553391 FiniteElementSpace FiniteElementCollection: NURBS2 VDim: 2 Ordering: 1 0 0 1 0 1 1 0 1 0.358578644 0.358578644 0.641421356 0.358578644 0.641421356 0.641421356 0.358578644 0.641421356 0.5 0 0.5 0.217157288 1 0.5 0.782842712 0.5 0.5 1 0.5 0.782842712 0 0.5 0.217157288 0.5 0.15 0.15 0.85 0.15 0.85 0.85 0.15 0.85 0.5 0.108578644 0.891421356 0.5 0.5 0.891421356 0.108578644 0.5 This above file, as well as other examples of NURBS meshes, can be found in MFEM's data directory . It can be visualized directly with glvis -m square-disc-nurbs.mesh which after several refinements with the \" i \" key looks like To explain MFEM's NURBS mesh file format, we first note that the topological part of the mesh (the elements and boundary sections) describe the 4 NURBS patches visible above. We use the vertex numbers as labels, so we only need the number of vertices. In the NURBS case we need to also provide description of the edges on the patch boundaries and associate a knot vector with each of them. This is done in the edges section where the first index in each row refers to the knot vector id (from the following knotvectors section), while the remaining two indexes are the edge vertex numbers. The position of the NURBS nodes (control points) is given as a NURBS grid function at the end of the file, while the associated weights are listed in the preceding weights section. Some examples of VTK meshes can be found in MFEM's data directory . Here is one of the 3D NURBS meshes The image above was produced with some refinement (key \" o \") and mouse manipulations from glvis -m pipe-nurbs.mesh Solutions from NURBS discretization spaces are also natively supported. For example here is the approximation for the solution of a simple Poisson problem on a refined version of the above mesh. glvis -m square-disc-nurbs.mesh -g sol.gf", "title": "NURBS meshes"}, {"location": "mesh-format-v1.0/#curvilinear-vtk-meshes", "text": "MFEM also supports quadratic triangular, quadrilaterals, tetrahedral and hexahedral curvilinear meshes in VTK format. This format is described in the VTK file format documentation . The local numbering of degrees of freedom for the biquadratic quads and triquadratic hexes can be found in the Doxygen reference of the vtkBiQuadraticQuad and vtkTriQuadraticHexahedron classes. Currently VTK does not support cubic, and higher-order meshes. As an example, consider a simple curved quadrilateral saved in a file quad.vtk : # vtk DataFile Version 3.0 Generated by MFEM ASCII DATASET UNSTRUCTURED_GRID POINTS 9 double 0 0 0 1 0 0 1 1 0 0.1 0.9 0 0.5 -0.05 0 0.9 0.5 0 0.5 1 0 0 0.5 0 0.45 0.55 0 CELLS 1 10 9 0 1 2 3 4 5 6 7 8 CELL_TYPES 1 28 CELL_DATA 1 SCALARS material int LOOKUP_TABLE default 1 Visualizing it with \" glvis -m quad.vtk \" and typing \" Aemiii \" in the GLVis window we get: The \" i \" key increases the reference element subdivision which gives an increasingly better approximation of the actual curvature of the element. 
To view the curvature of the mapping inside the element we can use the \"I\" key, e.g., glvis -m quad.vtk -k \"AemIIiii\" Here is a slightly more complicated quadratic quadrilateral mesh example (the different colors in the GLVis window are used to distinguish neighboring elements): glvis -m star-q2.vtk -k \"Am\" MFEM and GLVis can also handle quadratic triangular meshes: glvis -m square-disc-p2.vtk -k \"Am\" As well as quadratic tetrahedral and quadratic hexahedral VTK meshes: glvis -m escher-p2.vtk -k \"Aaaooooo**************\" glvis -m fichera-q2.vtk -k \"Aaaooooo******\"", "title": "Curvilinear VTK meshes"}, {"location": "mesh-format-v1.x/", "text": "General MFEM Mesh Format The MFEM mesh v1.x format supports the general description of meshes based on a vector finite element grid function with degrees of freedom in the nodes of the mesh. For simplicity, in this document we refer to this version of the format as MFEM mesh v1.x . The legacy version for meshes with straight edges we will call MFEM linear mesh format. A mesh in the MFEM mesh v1.x format consists of two parts: Topology and Geometry. We illustrate these concepts by comparing with the beam-quad.mesh from MFEM's data/ directory. This is just a simple quadrilateral beam mesh with 8 elements, 18 vertices (numbered 0 to 17) and 18 boundary segments: The original linear mesh version of this file is given in Listing 1 . Topology The topological part of the mesh describes the relations between the elements in the mesh, in terms of neighborhood implied by shared vertices. Actual coordinates do not play a role in this part, so the vertices are just labels used to imply which elements share a vertex, an edge or a face. Some examples: General version of data/beam-quad.mesh Below is the annotated topological part of the MFEM mesh v1.x format for the beam mesh. The complete file is given in Listing 2 . ... # BEGIN Topology Part dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary # Skipping the 18 boundary segments for simplicity vertices 18 # END Topology Part ... The element format above is: ... . Type 3 is quadrilateral, which requires 4 vertex indices. The attribute identify e.g. material sub-domains (2 in this case). NOTE: The topology part of this mesh will be the same, irrespective of the order. Compare e.g. Listing 2 , Listing 3 and Listing 4 . WARNING: The vertices are used only to imply topology, and so there coordinates are not important. The mesh coordinates are implied by the mesh nodes not vertices . In particular, while the Mesh object can return vertex coordinates, they are not used an may be incorrect for high-order mesh. Periodic version of data/beam-quad.mesh The topology part can be used to describe more complicated mesh relations. For example we can identify the two vertical lines of the beam mesh, turning it topologically into a cylinder. The complete file is given in Listing 5 . ... # BEGIN Topology Part dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 0 9 16 # Last element uses vertices 0 and 9 # two vertical boundary have been removed boundary 16 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 0 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 9 16 vertices 18 # END Topology Part ... Compared to the non-periodic version, e.g. 
Listing 2 , the main difference above is that we have fused vertices 8 and 0 and vertices 17 and 9. The difference between the two topologies can be illustrated by solving a simple Laplace problem with homogeneous essential boundary conditions on the resulting mesh. In the periodic case we get: while the solution on the non-periodic mesh looks like: NOTE: Meshes with periodic topology allow us to solve problems with periodic boundary conditions without modifying the application to impose them -- we simply run on a different mesh. Geometry The geometry of the mesh, i.e. the actual position of mesh elements in physical space is described by specifying the mesh nodes as a general finite element (vector) function. In MFEM, finite element functions are objects of type GridFunction which belong to discrete finite element spaces specified by objects FiniteElementSpace and FiniteElementCollection . The actual geometry of each element is obtained by extracting the local degrees of freedom from the global nodes , expanding them in the corresponding (reference element) finite element basis, and using the resulting polynomial vector field to map the reference element. An example of a first order geometry is given in Listing 2 : ... # BEGIN Geometry Part nodes FiniteElementSpace FiniteElementCollection: H1_2D_P1 VDim: 2 Ordering: 1 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 # END Geometry Part Here VDim: 2 means that the nodes grid function is a vector field with two components (i.e. the mesh is embedded in R^2); H1_2D_P1 describes the finite element space (H1/continuous finite elements in 2D of order 1); Ordering refers to how the vector field values are serialized (in this case x,y,x,y,...); and the rest is just the global degrees of freedom representing in this case the vertex coordinates. Compare the above with the linear mesh vertex coordinates from Listing 1 : vertices 18 2 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 In the MFEM mesh v1.x format, the nodes are a regular grid function, just like an other discretized field in a simulation, which has several advantages: The nodes can be part of the discretization, and be evolved directly e.g. in a Lagrangian/ALE simulation. Mesh optimization problems can be posed directly for the nodes variable. Since the nodes can be any finite element function, a wide variety of meshes are easily supported. As an illustration of the last point, consider the geometry of the periodic version of the mesh in Listing 5 ... # BEGIN Geometry Part nodes FiniteElementSpace FiniteElementCollection: L2_T1_2D_P1 VDim: 2 Ordering: 1 0 0 1 0 0 1 1 1 1 0 2 0 1 1 2 1 2 0 3 0 2 1 3 1 3 0 4 0 3 1 4 1 4 0 5 0 4 1 5 1 5 0 6 0 5 1 6 1 6 0 7 0 6 1 7 1 7 0 8 0 7 1 8 1 8 0 9 0 8 1 9 1 9 0 10 0 9 1 10 1 # END Geometry Part ... Note that the space here is L2 , which means a discontinuous linear vector field, where four vertex coordinates are specified on each element. This allows us to plot the periodic mesh as a regular beam, which is what you'd expect for periodic boundary conditions. Finite Element Spaces To fully specify the MFEM mesh v1.x format, we need to describe the degrees of freedom of the nodes finite element space and their global numbering. This is something that the MFEM team is very interested to discuss and standardize with other high-order projects and applications. Below is a description of our current approach... 
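As a brief programmatic aside before that description: the nodes grid function can be inspected, or created for a straight mesh, directly from the Mesh object. A minimal sketch, with the file name and the chosen order as illustrative values:

```cpp
#include "mfem.hpp"
#include <iostream>
using namespace mfem;

int main()
{
   Mesh mesh("beam-quad.mesh", 1, 1);

   if (mesh.GetNodes() == NULL)
   {
      // A linear-format mesh has no nodes grid function yet; promote it to a
      // curvilinear mesh with quadratic H1 nodes (pass discont = true for an
      // L2/discontinuous nodal space, as in the periodic example above).
      mesh.SetCurvature(2 /* order */, false /* discont */, -1, Ordering::byVDIM);
   }

   const GridFunction &nodes = *mesh.GetNodes();
   std::cout << "node dofs: " << nodes.Size()
             << "  vdim: "    << nodes.FESpace()->GetVDim() << std::endl;
   return 0;
}
```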
Finite element spaces have degrees of freedom (dofs) that are associated with the (interiors of the) mesh vertices, edges, faces and elements. There may be multiple dofs associated with the same geometric entity (e.g. vector fields), and different spaces have different sets of degrees of freedom. For example H1/continuous spaces can have degrees of freedom associated with the Gauss-Lobatto points in a quadrilateral, while L2/discontinuous spaces can have degrees of freedom associated with the Gauss-Legendre points. These are just examples, many choices for the basis are actually possible to be encoded in the FiniteElementCollection string above. In general, based just on the mesh topology and the type of the space, the FiniteElementSpace object can determine a global set of dofs, that will be the values listed for the mesh nodes . The algorithm starts with the given numbering of the elements and the vertices, from which a numbering of the edges and the faces is derived as follows: loop over elements loop over edges and faces inside each element (see below) number currently the edges and faces that have not been numbered yet The ordering of edges/faces within each element is defined by the arrays Edges and FaceVert in the classes Geometry::Constants which are defined in the file fem/geom.cpp , e.g. search for ::Edges or ::FaceVert . Here is the result of this numbering for the beam mesh In addition to a number, each edges and face is also given a global orientation. In 2D and 3D, an edge is oriented from the vertex with the lower vertex id to the vertex with the higher vertex id. In 3D, a face is oriented according to the face-to-vertex mappings in the first element in which the face is enumerated. See the FaceVert arrays in fem/geom.cpp mentioned above, as well as the Mesh::GenerateFaces method in mesh/mesh.cpp . In particular, the normal of the face between two elements points from the element with lower number to the element with higher number. Face orientation however includes not just the normal direction, but also any rotation of the vertices compared to the base, i.e. orientation here means permutation of vertices. The global numbering of degrees of freedom is now performed as follows: loop over vertices list the dofs associated with each vertex loop over edges list the dofs associated with the interior of the edge, lexicographically with respect to the edge orientation loop over faces list the dofs associated with the interior of the face, lexicographically with respect to the face orientation loop over elements list the dofs associated with the interior of the element An example of this is the quadratic mesh in Listing 3 ... # BEGIN Geometry Part nodes FiniteElementSpace FiniteElementCollection: H1_2D_P2 VDim: 2 Ordering: 1 # 18 vertex dofs 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 # 25 edge dofs 0.5 0 1 0.5 0.5 1 0 0.5 1.5 0 2 0.5 1.5 1 2.5 0 3 0.5 2.5 1 3.5 0 4 0.5 3.5 1 4.5 0 5 0.5 4.5 1 5.5 0 6 0.5 5.5 1 6.5 0 7 0.5 6.5 1 7.5 0 8 0.5 7.5 1 # 8 element dofs 0.5 0.5 1.5 0.5 2.5 0.5 3.5 0.5 4.5 0.5 5.5 0.5 6.5 0.5 7.5 0.5 # END Geometry Part ... Listings Listing 1 This is the original version of the beam-quad.mesh using the linear mesh format. 
MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 8 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 17 16 1 1 9 0 2 1 8 17 vertices 18 2 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 Listing 2 This is a MFEM mesh v1.x version of the beam-quad.mesh which is first order. The mesh is identical to the one of Listing 1 , it is just described in a different format. MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 8 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 17 16 1 1 9 0 2 1 8 17 vertices 18 nodes FiniteElementSpace FiniteElementCollection: H1_2D_P1 VDim: 2 Ordering: 1 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 Listing 3 This is a second order version of the beam-quad.mesh . MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 8 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 17 16 1 1 9 0 2 1 8 17 vertices 18 nodes FiniteElementSpace FiniteElementCollection: H1_2D_P2 VDim: 2 Ordering: 1 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 0.5 0 1 0.5 0.5 1 0 0.5 1.5 0 2 0.5 1.5 1 2.5 0 3 0.5 2.5 1 3.5 0 4 0.5 3.5 1 4.5 0 5 0.5 4.5 1 5.5 0 6 0.5 5.5 1 6.5 0 7 0.5 6.5 1 7.5 0 8 0.5 7.5 1 0.5 0.5 1.5 0.5 2.5 0.5 3.5 0.5 4.5 0.5 5.5 0.5 6.5 0.5 7.5 0.5 Listing 4 This is a third order version of the beam-quad.mesh . 
MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 8 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 17 16 1 1 9 0 2 1 8 17 vertices 18 nodes FiniteElementSpace FiniteElementCollection: H1_2D_P3 VDim: 2 Ordering: 1 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 0.27639320225002 0 0.72360679774998 0 1 0.27639320225002 1 0.72360679774998 0.27639320225002 1 0.72360679774998 1 0 0.27639320225002 0 0.72360679774998 1.27639320225 0 1.72360679775 0 2 0.27639320225002 2 0.72360679774998 1.27639320225 1 1.72360679775 1 2.27639320225 0 2.72360679775 0 3 0.27639320225002 3 0.72360679774998 2.27639320225 1 2.72360679775 1 3.27639320225 0 3.72360679775 0 4 0.27639320225002 4 0.72360679774998 3.27639320225 1 3.72360679775 1 4.27639320225 0 4.72360679775 0 5 0.27639320225002 5 0.72360679774998 4.27639320225 1 4.72360679775 1 5.27639320225 0 5.72360679775 0 6 0.27639320225002 6 0.72360679774998 5.27639320225 1 5.72360679775 1 6.27639320225 0 6.72360679775 0 7 0.27639320225002 7 0.72360679774998 6.27639320225 1 6.72360679775 1 7.27639320225 0 7.72360679775 0 8 0.27639320225002 8 0.72360679774998 7.27639320225 1 7.72360679775 1 0.27639320225002 0.27639320225002 0.72360679774998 0.27639320225002 0.27639320225002 0.72360679774998 0.72360679774998 0.72360679774998 1.27639320225 0.27639320225002 1.72360679775 0.27639320225002 1.27639320225 0.72360679774998 1.72360679775 0.72360679774998 2.27639320225 0.27639320225002 2.72360679775 0.27639320225002 2.27639320225 0.72360679774998 2.72360679775 0.72360679774998 3.27639320225 0.27639320225002 3.72360679775 0.27639320225002 3.27639320225 0.72360679774998 3.72360679775 0.72360679774998 4.27639320225 0.27639320225002 4.72360679775 0.27639320225002 4.27639320225 0.72360679774998 4.72360679775 0.72360679774998 5.27639320225 0.27639320225002 5.72360679775 0.27639320225002 5.27639320225 0.72360679774998 5.72360679775 0.72360679774998 6.27639320225 0.27639320225002 6.72360679775 0.27639320225002 6.27639320225 0.72360679774998 6.72360679775 0.72360679774998 7.27639320225 0.27639320225002 7.72360679775 0.27639320225002 7.27639320225 0.72360679774998 7.72360679775 0.72360679774998 Listing 5 Periodic version of the first-order mesh from Listing 1 . MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 0 9 16 boundary 16 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 0 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 9 16 vertices 18 nodes FiniteElementSpace FiniteElementCollection: L2_T1_2D_P1 VDim: 2 Ordering: 1 0 0 1 0 0 1 1 1 1 0 2 0 1 1 2 1 2 0 3 0 2 1 3 1 3 0 4 0 3 1 4 1 4 0 5 0 4 1 5 1 5 0 6 0 5 1 6 1 6 0 7 0 6 1 7 1 7 0 8 0 7 1 8 1 8 0 9 0 8 1 9 1 9 0 10 0 9 1 10 1", "title": "_Mesh Format v1.x"}, {"location": "mesh-format-v1.x/#general-mfem-mesh-format", "text": "The MFEM mesh v1.x format supports the general description of meshes based on a vector finite element grid function with degrees of freedom in the nodes of the mesh. 
For simplicity, in this document we refer to this version of the format as MFEM mesh v1.x . The legacy version for meshes with straight edges we will call MFEM linear mesh format. A mesh in the MFEM mesh v1.x format consists of two parts: Topology and Geometry. We illustrate these concepts by comparing with the beam-quad.mesh from MFEM's data/ directory. This is just a simple quadrilateral beam mesh with 8 elements, 18 vertices (numbered 0 to 17) and 18 boundary segments: The original linear mesh version of this file is given in Listing 1 .", "title": "General MFEM Mesh Format"}, {"location": "mesh-format-v1.x/#topology", "text": "The topological part of the mesh describes the relations between the elements in the mesh, in terms of neighborhood implied by shared vertices. Actual coordinates do not play a role in this part, so the vertices are just labels used to imply which elements share a vertex, an edge or a face. Some examples:", "title": "Topology"}, {"location": "mesh-format-v1.x/#general-version-of-databeam-quadmesh", "text": "Below is the annotated topological part of the MFEM mesh v1.x format for the beam mesh. The complete file is given in Listing 2 . ... # BEGIN Topology Part dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary # Skipping the 18 boundary segments for simplicity vertices 18 # END Topology Part ... The element format above is: ... . Type 3 is quadrilateral, which requires 4 vertex indices. The attribute identify e.g. material sub-domains (2 in this case). NOTE: The topology part of this mesh will be the same, irrespective of the order. Compare e.g. Listing 2 , Listing 3 and Listing 4 . WARNING: The vertices are used only to imply topology, and so there coordinates are not important. The mesh coordinates are implied by the mesh nodes not vertices . In particular, while the Mesh object can return vertex coordinates, they are not used an may be incorrect for high-order mesh.", "title": "General version of data/beam-quad.mesh"}, {"location": "mesh-format-v1.x/#periodic-version-of-databeam-quadmesh", "text": "The topology part can be used to describe more complicated mesh relations. For example we can identify the two vertical lines of the beam mesh, turning it topologically into a cylinder. The complete file is given in Listing 5 . ... # BEGIN Topology Part dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 0 9 16 # Last element uses vertices 0 and 9 # two vertical boundary have been removed boundary 16 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 0 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 9 16 vertices 18 # END Topology Part ... Compared to the non-periodic version, e.g. Listing 2 , the main difference above is that we have fused vertices 8 and 0 and vertices 17 and 9. The difference between the two topologies can be illustrated by solving a simple Laplace problem with homogeneous essential boundary conditions on the resulting mesh. In the periodic case we get: while the solution on the non-periodic mesh looks like: NOTE: Meshes with periodic topology allow us to solve problems with periodic boundary conditions without modifying the application to impose them -- we simply run on a different mesh.", "title": "Periodic version of data/beam-quad.mesh"}, {"location": "mesh-format-v1.x/#geometry", "text": "The geometry of the mesh, i.e. 
the actual position of mesh elements in physical space is described by specifying the mesh nodes as a general finite element (vector) function. In MFEM, finite element functions are objects of type GridFunction which belong to discrete finite element spaces specified by objects FiniteElementSpace and FiniteElementCollection . The actual geometry of each element is obtained by extracting the local degrees of freedom from the global nodes , expanding them in the corresponding (reference element) finite element basis, and using the resulting polynomial vector field to map the reference element. An example of a first order geometry is given in Listing 2 : ... # BEGIN Geometry Part nodes FiniteElementSpace FiniteElementCollection: H1_2D_P1 VDim: 2 Ordering: 1 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 # END Geometry Part Here VDim: 2 means that the nodes grid function is a vector field with two components (i.e. the mesh is embedded in R^2); H1_2D_P1 describes the finite element space (H1/continuous finite elements in 2D of order 1); Ordering refers to how the vector field values are serialized (in this case x,y,x,y,...); and the rest is just the global degrees of freedom representing in this case the vertex coordinates. Compare the above with the linear mesh vertex coordinates from Listing 1 : vertices 18 2 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 In the MFEM mesh v1.x format, the nodes are a regular grid function, just like an other discretized field in a simulation, which has several advantages: The nodes can be part of the discretization, and be evolved directly e.g. in a Lagrangian/ALE simulation. Mesh optimization problems can be posed directly for the nodes variable. Since the nodes can be any finite element function, a wide variety of meshes are easily supported. As an illustration of the last point, consider the geometry of the periodic version of the mesh in Listing 5 ... # BEGIN Geometry Part nodes FiniteElementSpace FiniteElementCollection: L2_T1_2D_P1 VDim: 2 Ordering: 1 0 0 1 0 0 1 1 1 1 0 2 0 1 1 2 1 2 0 3 0 2 1 3 1 3 0 4 0 3 1 4 1 4 0 5 0 4 1 5 1 5 0 6 0 5 1 6 1 6 0 7 0 6 1 7 1 7 0 8 0 7 1 8 1 8 0 9 0 8 1 9 1 9 0 10 0 9 1 10 1 # END Geometry Part ... Note that the space here is L2 , which means a discontinuous linear vector field, where four vertex coordinates are specified on each element. This allows us to plot the periodic mesh as a regular beam, which is what you'd expect for periodic boundary conditions.", "title": "Geometry"}, {"location": "mesh-format-v1.x/#finite-element-spaces", "text": "To fully specify the MFEM mesh v1.x format, we need to describe the degrees of freedom of the nodes finite element space and their global numbering. This is something that the MFEM team is very interested to discuss and standardize with other high-order projects and applications. Below is a description of our current approach... Finite element spaces have degrees of freedom (dofs) that are associated with the (interiors of the) mesh vertices, edges, faces and elements. There may be multiple dofs associated with the same geometric entity (e.g. vector fields), and different spaces have different sets of degrees of freedom. For example H1/continuous spaces can have degrees of freedom associated with the Gauss-Lobatto points in a quadrilateral, while L2/discontinuous spaces can have degrees of freedom associated with the Gauss-Legendre points. 
These are just examples, many choices for the basis are actually possible to be encoded in the FiniteElementCollection string above. In general, based just on the mesh topology and the type of the space, the FiniteElementSpace object can determine a global set of dofs, that will be the values listed for the mesh nodes . The algorithm starts with the given numbering of the elements and the vertices, from which a numbering of the edges and the faces is derived as follows: loop over elements loop over edges and faces inside each element (see below) number currently the edges and faces that have not been numbered yet The ordering of edges/faces within each element is defined by the arrays Edges and FaceVert in the classes Geometry::Constants which are defined in the file fem/geom.cpp , e.g. search for ::Edges or ::FaceVert . Here is the result of this numbering for the beam mesh In addition to a number, each edges and face is also given a global orientation. In 2D and 3D, an edge is oriented from the vertex with the lower vertex id to the vertex with the higher vertex id. In 3D, a face is oriented according to the face-to-vertex mappings in the first element in which the face is enumerated. See the FaceVert arrays in fem/geom.cpp mentioned above, as well as the Mesh::GenerateFaces method in mesh/mesh.cpp . In particular, the normal of the face between two elements points from the element with lower number to the element with higher number. Face orientation however includes not just the normal direction, but also any rotation of the vertices compared to the base, i.e. orientation here means permutation of vertices. The global numbering of degrees of freedom is now performed as follows: loop over vertices list the dofs associated with each vertex loop over edges list the dofs associated with the interior of the edge, lexicographically with respect to the edge orientation loop over faces list the dofs associated with the interior of the face, lexicographically with respect to the face orientation loop over elements list the dofs associated with the interior of the element An example of this is the quadratic mesh in Listing 3 ... # BEGIN Geometry Part nodes FiniteElementSpace FiniteElementCollection: H1_2D_P2 VDim: 2 Ordering: 1 # 18 vertex dofs 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 # 25 edge dofs 0.5 0 1 0.5 0.5 1 0 0.5 1.5 0 2 0.5 1.5 1 2.5 0 3 0.5 2.5 1 3.5 0 4 0.5 3.5 1 4.5 0 5 0.5 4.5 1 5.5 0 6 0.5 5.5 1 6.5 0 7 0.5 6.5 1 7.5 0 8 0.5 7.5 1 # 8 element dofs 0.5 0.5 1.5 0.5 2.5 0.5 3.5 0.5 4.5 0.5 5.5 0.5 6.5 0.5 7.5 0.5 # END Geometry Part ...", "title": "Finite Element Spaces"}, {"location": "mesh-format-v1.x/#listings", "text": "", "title": "Listings"}, {"location": "mesh-format-v1.x/#listing-1", "text": "This is the original version of the beam-quad.mesh using the linear mesh format. 
MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 8 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 17 16 1 1 9 0 2 1 8 17 vertices 18 2 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1", "title": "Listing 1"}, {"location": "mesh-format-v1.x/#listing-2", "text": "This is a MFEM mesh v1.x version of the beam-quad.mesh which is first order. The mesh is identical to the one of Listing 1 , it is just described in a different format. MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 8 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 17 16 1 1 9 0 2 1 8 17 vertices 18 nodes FiniteElementSpace FiniteElementCollection: H1_2D_P1 VDim: 2 Ordering: 1 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1", "title": "Listing 2"}, {"location": "mesh-format-v1.x/#listing-3", "text": "This is a second order version of the beam-quad.mesh . MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 8 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 17 16 1 1 9 0 2 1 8 17 vertices 18 nodes FiniteElementSpace FiniteElementCollection: H1_2D_P2 VDim: 2 Ordering: 1 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 0.5 0 1 0.5 0.5 1 0 0.5 1.5 0 2 0.5 1.5 1 2.5 0 3 0.5 2.5 1 3.5 0 4 0.5 3.5 1 4.5 0 5 0.5 4.5 1 5.5 0 6 0.5 5.5 1 6.5 0 7 0.5 6.5 1 7.5 0 8 0.5 7.5 1 0.5 0.5 1.5 0.5 2.5 0.5 3.5 0.5 4.5 0.5 5.5 0.5 6.5 0.5 7.5 0.5", "title": "Listing 3"}, {"location": "mesh-format-v1.x/#listing-4", "text": "This is a third order version of the beam-quad.mesh . 
MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 8 17 16 boundary 18 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 8 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 17 16 1 1 9 0 2 1 8 17 vertices 18 nodes FiniteElementSpace FiniteElementCollection: H1_2D_P3 VDim: 2 Ordering: 1 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 0.27639320225002 0 0.72360679774998 0 1 0.27639320225002 1 0.72360679774998 0.27639320225002 1 0.72360679774998 1 0 0.27639320225002 0 0.72360679774998 1.27639320225 0 1.72360679775 0 2 0.27639320225002 2 0.72360679774998 1.27639320225 1 1.72360679775 1 2.27639320225 0 2.72360679775 0 3 0.27639320225002 3 0.72360679774998 2.27639320225 1 2.72360679775 1 3.27639320225 0 3.72360679775 0 4 0.27639320225002 4 0.72360679774998 3.27639320225 1 3.72360679775 1 4.27639320225 0 4.72360679775 0 5 0.27639320225002 5 0.72360679774998 4.27639320225 1 4.72360679775 1 5.27639320225 0 5.72360679775 0 6 0.27639320225002 6 0.72360679774998 5.27639320225 1 5.72360679775 1 6.27639320225 0 6.72360679775 0 7 0.27639320225002 7 0.72360679774998 6.27639320225 1 6.72360679775 1 7.27639320225 0 7.72360679775 0 8 0.27639320225002 8 0.72360679774998 7.27639320225 1 7.72360679775 1 0.27639320225002 0.27639320225002 0.72360679774998 0.27639320225002 0.27639320225002 0.72360679774998 0.72360679774998 0.72360679774998 1.27639320225 0.27639320225002 1.72360679775 0.27639320225002 1.27639320225 0.72360679774998 1.72360679775 0.72360679774998 2.27639320225 0.27639320225002 2.72360679775 0.27639320225002 2.27639320225 0.72360679774998 2.72360679775 0.72360679774998 3.27639320225 0.27639320225002 3.72360679775 0.27639320225002 3.27639320225 0.72360679774998 3.72360679775 0.72360679774998 4.27639320225 0.27639320225002 4.72360679775 0.27639320225002 4.27639320225 0.72360679774998 4.72360679775 0.72360679774998 5.27639320225 0.27639320225002 5.72360679775 0.27639320225002 5.27639320225 0.72360679774998 5.72360679775 0.72360679774998 6.27639320225 0.27639320225002 6.72360679775 0.27639320225002 6.27639320225 0.72360679774998 6.72360679775 0.72360679774998 7.27639320225 0.27639320225002 7.72360679775 0.27639320225002 7.27639320225 0.72360679774998 7.72360679775 0.72360679774998", "title": "Listing 4"}, {"location": "mesh-format-v1.x/#listing-5", "text": "Periodic version of the first-order mesh from Listing 1 . 
MFEM mesh v1.0 # # MFEM Geometry Types (see mesh/geom.hpp): # # POINT = 0 # SEGMENT = 1 # TRIANGLE = 2 # SQUARE = 3 # TETRAHEDRON = 4 # CUBE = 5 # dimension 2 elements 8 1 3 0 1 10 9 1 3 1 2 11 10 1 3 2 3 12 11 1 3 3 4 13 12 2 3 4 5 14 13 2 3 5 6 15 14 2 3 6 7 16 15 2 3 7 0 9 16 boundary 16 3 1 0 1 3 1 1 2 3 1 2 3 3 1 3 4 3 1 4 5 3 1 5 6 3 1 6 7 3 1 7 0 3 1 10 9 3 1 11 10 3 1 12 11 3 1 13 12 3 1 14 13 3 1 15 14 3 1 16 15 3 1 9 16 vertices 18 nodes FiniteElementSpace FiniteElementCollection: L2_T1_2D_P1 VDim: 2 Ordering: 1 0 0 1 0 0 1 1 1 1 0 2 0 1 1 2 1 2 0 3 0 2 1 3 1 3 0 4 0 3 1 4 1 4 0 5 0 4 1 5 1 5 0 6 0 5 1 6 1 6 0 7 0 6 1 7 1 7 0 8 0 7 1 8 1 8 0 9 0 8 1 9 1 9 0 10 0 9 1 10 1", "title": "Listing 5"}, {"location": "mesh-formats/", "text": "Supported Mesh Formats MFEM supports a number of mesh formats, including: MFEM's built-in formats, including arbitrary high-order curvilinear meshes and non-conforming (AMR) meshes. VTK format (XML VTU format and legacy ASCII format). The CUBIT meshes through the Genesis (NetCDF) binary format. The NETGEN triangular and tetrahedral mesh formats. The TrueGrid hexahedral mesh format. See below for more details and information on the specific formats that are supported. All of these mesh formats are also supported by MFEM's native visualization tool, GLVis . MFEM Mesh Formats Detailed description of these formats can be found on MFEM's mesh formats page. MFEM supports: MFEM's mesh v1.0 format for straight meshes. MFEM's mesh v1.x format for arbitrary high-order curvilinear and more general meshes. MFEM's mesh v1.2 format, which adds support for parallel meshes. MFEM's mesh v1.3 format , which adds support for named attribute sets. MFEM's NC mesh v1.0 format , supporting non-conforming (AMR) meshes. MFEM's format for NURBS meshes. VTK Mesh Formats MFEM supports reading VTK (ASCII) and VTU (XML) unstructured meshes. For more details on these formats, see the VTK User's Guide and the VTK Wiki . Specifically, MFEM supports: Meshes with high-order Lagrange elements . Mixed meshes with all element types. XML format with inline or appended binary data, including zlib compression. If the VTK or VTU file has a cell data array named \"material\" or \"attribute\", this cell data will be used for MFEM's element attribute numbers. If both data arrays are present, the one named \"material\" will take precedence. Gmsh Mesh Formats MFEM supports reading version 2.2 of the Gmsh ASCII and binary formats for 2D and 3D meshes. High-order elements (up to order 9) are supported, as are periodic meshes. Note that newer versions of Gmsh output files in version 4.1 of the Gmsh format, which is not compatible with MFEM. Users should either specify Mesh.MshFileVersion = 2.2; in their geometry file or run Gmsh with -format msh22 from the command line. Elements' physical tags in Gmsh correspond to their attribute numbers in MFEM. MFEM only supports strictly positive (\u2265 1) attributes, so users should be sure to define all physical groups with strictly positive tag numbers. The one exception to this is in cases where all elements have physical tag zero (which happens by default in Gmsh when no physical groups are defined). In this case, MFEM will reassign all the elements to have attribute number 1 instead of failing to read the mesh.", "title": "Mesh Formats"}, {"location": "mesh-formats/#supported-mesh-formats", "text": "MFEM supports a number of mesh formats, including: MFEM's built-in formats, including arbitrary high-order curvilinear meshes and non-conforming (AMR) meshes. 
VTK format (XML VTU format and legacy ASCII format). The CUBIT meshes through the Genesis (NetCDF) binary format. The NETGEN triangular and tetrahedral mesh formats. The TrueGrid hexahedral mesh format. See below for more details and information on the specific formats that are supported. All of these mesh formats are also supported by MFEM's native visualization tool, GLVis .", "title": "Supported Mesh Formats"}, {"location": "mesh-formats/#mfem-mesh-formats", "text": "Detailed description of these formats can be found on MFEM's mesh formats page. MFEM supports: MFEM's mesh v1.0 format for straight meshes. MFEM's mesh v1.x format for arbitrary high-order curvilinear and more general meshes. MFEM's mesh v1.2 format, which adds support for parallel meshes. MFEM's mesh v1.3 format , which adds support for named attribute sets. MFEM's NC mesh v1.0 format , supporting non-conforming (AMR) meshes. MFEM's format for NURBS meshes.", "title": "MFEM Mesh Formats"}, {"location": "mesh-formats/#vtk-mesh-formats", "text": "MFEM supports reading VTK (ASCII) and VTU (XML) unstructured meshes. For more details on these formats, see the VTK User's Guide and the VTK Wiki . Specifically, MFEM supports: Meshes with high-order Lagrange elements . Mixed meshes with all element types. XML format with inline or appended binary data, including zlib compression. If the VTK or VTU file has a cell data array named \"material\" or \"attribute\", this cell data will be used for MFEM's element attribute numbers. If both data arrays are present, the one named \"material\" will take precedence.", "title": "VTK Mesh Formats"}, {"location": "mesh-formats/#gmsh-mesh-formats", "text": "MFEM supports reading version 2.2 of the Gmsh ASCII and binary formats for 2D and 3D meshes. High-order elements (up to order 9) are supported, as are periodic meshes. Note that newer versions of Gmsh output files in version 4.1 of the Gmsh format, which is not compatible with MFEM. Users should either specify Mesh.MshFileVersion = 2.2; in their geometry file or run Gmsh with -format msh22 from the command line. Elements' physical tags in Gmsh correspond to their attribute numbers in MFEM. MFEM only supports strictly positive (\u2265 1) attributes, so users should be sure to define all physical groups with strictly positive tag numbers. The one exception to this is in cases where all elements have physical tag zero (which happens by default in Gmsh when no physical groups are defined). In this case, MFEM will reassign all the elements to have attribute number 1 instead of failing to read the mesh.", "title": "Gmsh Mesh Formats"}, {"location": "meshing-miniapps/", "text": "Meshing Miniapps The miniapps/meshing directory contains a collection of meshing-related miniapps based on MFEM. Compared to the example codes , the miniapps are more complex, demonstrating more advanced usage of the library. They are intended to be more representative of MFEM-based application codes. We recommend that new users start with the example codes before moving to the miniapps. The current meshing miniapps are described below. Mobius Strip This miniapp generates various Mobius strip-like surface meshes. It is a good way to generate complex surface meshes. Manipulating the mesh topology and performing mesh transformation are demonstrated. The mobius-strip mesh in the data directory was generated with this miniapp. Klein Bottle This miniapp generates three types of Klein bottle surfaces. It is similar to the mobius-strip miniapp. 
The klein-bottle and klein-donut meshes in the data directory were generated with this miniapp. Toroid This miniapp generates two types of toroidal volume meshes; one with triangular cross sections and one with square cross sections. A wide variety of toroidal meshes can be generated by varying the amount of twist as well as the major and minor radii and other variables. The toroid-wedge and toroid-hex meshes in the data directory were generated with this miniapp. Twist This miniapp generates simple periodic meshes made from different types of elements. A wide variety of twisted meshes can be generated by varying the amount of twist as well as the dimensions, element types, and other variables. Extruder This miniapp creates higher dimensional meshes from lower dimensional meshes by extrusion. Simple coordinate transformations can also be applied if desired. The initial mesh can be 1D or 2D. 1D meshes can be extruded in the y-direction first and then in the z-direction. 2D meshes can be triangular, quadrilateral, or contain both element types. Trimmer This miniapp creates a new mesh file from an existing mesh by trimming away elements with selected attributes. High order and/or periodic meshes are supported although NURBS meshes are not. By default newly exposed boundaries will be assigned unique boundary attributes. The new boundary attributes are determined by adding the volume attribute of the exposing elements to the maximum boundary attribute in the original mesh. Alternatively the user can specify new boundary attributes to be associated with each volume attribute being trimmed away. In the later case the new attributes need not be unique. Polar-NC This miniapp generates a circular sector mesh that consist of quadrilaterals and triangles of similar sizes. The 3D version of the mesh is made of prisms and tetrahedra: The mesh is non-conforming by design, and can optionally be made curvilinear. The elements are ordered along a space-filling curve by default, which makes the mesh ready for parallel non-conforming AMR in MFEM. Shaper This miniapp performs multiple levels of adaptive mesh refinement to resolve the interfaces between different \"materials\" in the mesh, as specified by a given material() function. It can be used as a simple initial mesh generator, for example in the case when the interface is too complex to describe without local refinement. Both conforming and non-conforming refinements are supported. Mesh Explorer This miniapp is a handy tool to examine, visualize and manipulate a given mesh. Some of its features are: visualizing of mesh materials and individual mesh elements mesh scaling, randomization, and general transformation manipulation of the mesh curvature the ability to simulate parallel partitioning quantitative and visual reports of mesh quality Mesh Optimizer This miniapp performs mesh optimization using the Target-Matrix Optimization Paradigm (TMOP) by P.Knupp et al., and a global variational minimization approach. It minimizes the quantity $\\sum_T \\int_T \\mu(J(x))$, where $T$ are the target (ideal) elements, $J$ is the Jacobian of the transformation from the target to the physical element, and $\\mu$ is the mesh quality metric. This metric can measure shape, size or alignment of the region around each quadrature point. The combination of targets and quality metrics is used to optimize the physical node positions, i.e., they must be as close as possible to the shape / size / alignment of their targets. 
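For orientation, below is a minimal serial sketch of how such a TMOP objective could be set up with the classes mentioned above (TMOP_Integrator, TargetConstructor, TMOP_QualityMetric). The mesh file name, the metric, and the target type are illustrative assumptions, not the miniapp's actual defaults; consult miniapps/meshing/mesh-optimizer.cpp for the full implementation.

#include "mfem.hpp"
#include <iostream>
using namespace mfem;

// Minimal sketch of a TMOP objective; names marked "hypothetical" are placeholders.
int main()
{
   Mesh mesh("some-2d.mesh");           // hypothetical input mesh file
   mesh.SetCurvature(2);                // use a quadratic nodal space
   GridFunction &x = *mesh.GetNodes();  // node positions to be optimized
   FiniteElementSpace &fes = *x.FESpace();

   TMOP_Metric_002 metric;              // one possible 2D shape metric mu(J)
   TargetConstructor target(TargetConstructor::IDEAL_SHAPE_UNIT_SIZE);
   target.SetNodes(x);

   NonlinearForm objective(&fes);
   objective.AddDomainIntegrator(new TMOP_Integrator(&metric, &target));

   // objective.GetEnergy(x) evaluates sum_T int_T mu(J(x)); objective.Mult()
   // and objective.GetGradient() provide the data needed by a Newton-type solver.
   std::cout << "Initial TMOP energy: " << objective.GetEnergy(x) << std::endl;
   return 0;
}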
Minimal Surface This miniapp solves Plateau's nonlinear elliptic problem: the Dirichlet problem for the minimal surface equation. The weak form of the equation, with prescribed boundary conditions, is given by: $$\\int_\\Omega\\frac{\\nabla{u}\\cdot\\nabla{v}}{\\sqrt{1+|\\nabla{u}|^2}}dx = 0$$ Two problems can be run: Problem 0 solves the minimal surface equation of parametric surfaces . The command line options allow the selection of different parametrization: Catenoid, Helicoid, Enneper, Hold, Costa, Shell, Scherk or simply one from an input mesh file. Problem 1 solves the minimal surface equation for surfaces restricted to be graphs of the form $z=f(x,y)$ . This problem is solved using the Picard iterations: $$\\int_\\Omega\\frac{\\nabla{u_{n+1}}\\cdot\\nabla{v}}{\\sqrt{1+|\\nabla{u_n}|^2}}dx = 0$$ MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Meshing"}, {"location": "meshing-miniapps/#meshing-miniapps", "text": "The miniapps/meshing directory contains a collection of meshing-related miniapps based on MFEM. Compared to the example codes , the miniapps are more complex, demonstrating more advanced usage of the library. They are intended to be more representative of MFEM-based application codes. We recommend that new users start with the example codes before moving to the miniapps. The current meshing miniapps are described below.", "title": "Meshing Miniapps"}, {"location": "meshing-miniapps/#mobius-strip", "text": "This miniapp generates various Mobius strip-like surface meshes. It is a good way to generate complex surface meshes. Manipulating the mesh topology and performing mesh transformation are demonstrated. The mobius-strip mesh in the data directory was generated with this miniapp.", "title": "Mobius Strip"}, {"location": "meshing-miniapps/#klein-bottle", "text": "This miniapp generates three types of Klein bottle surfaces. It is similar to the mobius-strip miniapp. The klein-bottle and klein-donut meshes in the data directory were generated with this miniapp.", "title": "Klein Bottle"}, {"location": "meshing-miniapps/#toroid", "text": "This miniapp generates two types of toroidal volume meshes; one with triangular cross sections and one with square cross sections. A wide variety of toroidal meshes can be generated by varying the amount of twist as well as the major and minor radii and other variables. The toroid-wedge and toroid-hex meshes in the data directory were generated with this miniapp.", "title": "Toroid"}, {"location": "meshing-miniapps/#twist", "text": "This miniapp generates simple periodic meshes made from different types of elements. A wide variety of twisted meshes can be generated by varying the amount of twist as well as the dimensions, element types, and other variables.", "title": "Twist"}, {"location": "meshing-miniapps/#extruder", "text": "This miniapp creates higher dimensional meshes from lower dimensional meshes by extrusion. Simple coordinate transformations can also be applied if desired. The initial mesh can be 1D or 2D. 1D meshes can be extruded in the y-direction first and then in the z-direction. 2D meshes can be triangular, quadrilateral, or contain both element types.", "title": "Extruder"}, {"location": "meshing-miniapps/#trimmer", "text": "This miniapp creates a new mesh file from an existing mesh by trimming away elements with selected attributes. High order and/or periodic meshes are supported although NURBS meshes are not. 
By default newly exposed boundaries will be assigned unique boundary attributes. The new boundary attributes are determined by adding the volume attribute of the exposing elements to the maximum boundary attribute in the original mesh. Alternatively the user can specify new boundary attributes to be associated with each volume attribute being trimmed away. In the later case the new attributes need not be unique.", "title": "Trimmer"}, {"location": "meshing-miniapps/#polar-nc", "text": "This miniapp generates a circular sector mesh that consist of quadrilaterals and triangles of similar sizes. The 3D version of the mesh is made of prisms and tetrahedra: The mesh is non-conforming by design, and can optionally be made curvilinear. The elements are ordered along a space-filling curve by default, which makes the mesh ready for parallel non-conforming AMR in MFEM.", "title": "Polar-NC"}, {"location": "meshing-miniapps/#shaper", "text": "This miniapp performs multiple levels of adaptive mesh refinement to resolve the interfaces between different \"materials\" in the mesh, as specified by a given material() function. It can be used as a simple initial mesh generator, for example in the case when the interface is too complex to describe without local refinement. Both conforming and non-conforming refinements are supported.", "title": "Shaper"}, {"location": "meshing-miniapps/#mesh-explorer", "text": "This miniapp is a handy tool to examine, visualize and manipulate a given mesh. Some of its features are: visualizing of mesh materials and individual mesh elements mesh scaling, randomization, and general transformation manipulation of the mesh curvature the ability to simulate parallel partitioning quantitative and visual reports of mesh quality", "title": "Mesh Explorer"}, {"location": "meshing-miniapps/#mesh-optimizer", "text": "This miniapp performs mesh optimization using the Target-Matrix Optimization Paradigm (TMOP) by P.Knupp et al., and a global variational minimization approach. It minimizes the quantity $\\sum_T \\int_T \\mu(J(x))$, where $T$ are the target (ideal) elements, $J$ is the Jacobian of the transformation from the target to the physical element, and $\\mu$ is the mesh quality metric. This metric can measure shape, size or alignment of the region around each quadrature point. The combination of targets and quality metrics is used to optimize the physical node positions, i.e., they must be as close as possible to the shape / size / alignment of their targets.", "title": "Mesh Optimizer"}, {"location": "meshing-miniapps/#minimal-surface", "text": "This miniapp solves Plateau's nonlinear elliptic problem: the Dirichlet problem for the minimal surface equation. The weak form of the equation, with prescribed boundary conditions, is given by: $$\\int_\\Omega\\frac{\\nabla{u}\\cdot\\nabla{v}}{\\sqrt{1+|\\nabla{u}|^2}}dx = 0$$ Two problems can be run: Problem 0 solves the minimal surface equation of parametric surfaces . The command line options allow the selection of different parametrization: Catenoid, Helicoid, Enneper, Hold, Costa, Shell, Scherk or simply one from an input mesh file. Problem 1 solves the minimal surface equation for surfaces restricted to be graphs of the form $z=f(x,y)$ . 
This problem is solved using the Picard iterations: $$\\int_\\Omega\\frac{\\nabla{u_{n+1}}\\cdot\\nabla{v}}{\\sqrt{1+|\\nabla{u_n}|^2}}dx = 0$$ MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Minimal Surface"}, {"location": "news/", "text": "MFEM News Nov 25, 2024 Recap of the 2024 MFEM Community Workshop . Oct 28, 2024 Postdoc position on the MFEM team at LLNL. Oct 22, 2024 2024 MFEM community workshop . Jun 5, 2024 MFEM in the cloud tutorial as part of the HPCIC Tutorial series. May 7, 2024 Version 4.7 released . May 2, 2024 New MFEM paper to appear in the International Journal of High Performance Computing Application. Nov 13, 2023 Recap of the 2023 Workshop , held on October 26. Oct 26, 2023 2023 MFEM community workshop . Sep 27, 2023 Version 4.6 released . Sep 11, 2023 MFEM now available in Homebrew . Jul 17, 2023 The third MFEM Community Workshop will take place on October 26th, 2023. Jul 11, 2023 MFEM in the cloud tutorial as part of the RADIUSS AWS Tutorial series. Apr 11, 2023 GitHub ReadME project article on open-source software for fusion mentions MFEM. Mar 23, 2023 Version 4.5.2 released . Feb 22, 2023 AWS releases the Palace code for cloud-based electromagnetics simulations of quantum computing hardware based on MFEM Jan 6, 2023 Complete YouTube playlist of 2022 Workshop videos now available. Nov 16, 2022 Recap of the 2022 Workshop , held on October 25. Oct 22, 2022 Version 4.5 released . Oct 11, 2022 New Enzyme + MFEM project to efficiently differentiate large-scale finite element applications. Aug 18, 2022 The second MFEM Community Workshop will take place on October 25th, 2022. Aug 15, 2022 MFEM in the cloud tutorial as part of the RADIUSS AWS Tutorial series. Mar 21, 2022 Version 4.4 released . Jan 20, 2022 FEM@LLNL seminar series starting. Nov 30, 2021 New page with recorded talks + videos. Nov 12, 2021 Article summarizing the October 20th, 2021, community workshop . Jul 29, 2021 Version 4.3 released . Jul 10, 2021 The inaugural MFEM Community Workshop will take place on October 20th, 2021. Apr 22, 2021 MFEM featured on S&TR magazine cover . Mar 1, 2021 Logo featured throughout LLNL 2020 annual report . Feb 16, 2021 New documentation page on GPU performance . Dec 19, 2020 PyMFEM available with pip install mfem . Oct 30, 2020 Version 4.2 released . Jul 11, 2020 MFEM paper in Computers & Mathematics with Applications. Jun 24, 2020 MFEM video available on YouTube. Jun 8, 2020 ECP podcast about mfem-4.1. Jun 8, 2020 Matrix-free high-order solvers research highlighted in CASC Newsletter #9. Mar 30, 2020 Remhos a new MFEM-based miniapp for high-order DG remap released. Mar 29, 2020 CEED v3.0 and libCEED v0.6 released with updated MFEM support. Mar 27, 2020 Laghos v3.0 released with direct device support based on MFEM-4.1. Mar 10, 2020 Version 4.1 released . Nov 20, 2019 MFEM overview paper available on arXiv. May 24, 2019 Version 4.0 released with initial GPU support. May 10, 2019 AMR and TMOP papers available on arXiv. Mar 30, 2019 CEED v2.0 and libCEED v0.4 released with MFEM support. Mar 22, 2019 A version of the Laghos miniapp released for use in the second edition of the Commodity Technology Systems procurement process. Nov 19, 2018 Laghos v2.0 released with CUDA, RAJA, OCCA and AMR versions. Nov 9, 2018 MFEM part of the first release of the Extreme-Scale Scientific Software Stack (E4S) by the Software Technologies focus area of the ECP. Aug 6, 2018 Unstructured technologies presentation at ATPESC18 . 
May 29, 2018 Version 3.4 released . Apr 2, 2018 MFEM part of OpenHPC , a Linux Foundation project for software components required to deploy and manage HPC Linux clusters. Mar 30, 2018 CEED v1.0 and libCEED v0.2 released with MFEM support. Mar 1, 2018 MFEM highlighted in LLNL's Science & Technology Review magazine, including on the cover . Dec 30, 2017 Initial version of libCEED , the low-level CEED API, released. Nov 10, 2017 Version 3.3.2 released . Nov 7, 2017 ECP article: Co-Design Center Develops Next-Generation Simulation Tools , also in HPCwire . Oct 30, 2017 Laghos part of the ECP Proxy App Suite 1.0 , CORAL-2 Benchmarks and ASC co-design miniapps . Oct 16, 2017 Postdoc position available for electromagnetic simulations with MFEM. Sep 22, 2017 LLNL Newsline: LLNL gears up for next generation of computer-aided design and engineering . Jun 15, 2017 Laghos miniapp and CEED benchmarks released. May 8, 2017 News highlight: Accelerating Simulation Software with Graphics Processing Units . Feb 16, 2017 Moved main development to GitHub. Jan 28, 2017 Version 3.3 released . Dec 15, 2016 Postdoc position for exascale computing with MFEM. Nov 11, 2016 MFEM part of the new ECP co-design Center for Efficient Exascale Discretizations (CEED) . Nov 11, 2016 LLNL Newsline: Lawrence Livermore tapped to lead co-design center for exascale computing ecosystem . Oct 6, 2016 Science & Technology Review article: Laying the Groundwork for Extreme-Scale Computing , see also the YouTube preview . Sep 19, 2016 PyMFEM - a Python wrapper for MFEM by Syun'ichi Shiraiwa from MIT's Plasma Science and Fusion Center released. Jun 30, 2016 Version 3.2 released . May 6, 2016 MFEM packages available in homebrew and spack . Mar 9, 2016 VisIt 2.10.1 released with MFEM 3.1 support. Mar 4, 2016 New LLNL open-source software Blog and Twitter . Feb 16, 2016 Version 3.1 released . Feb 5, 2016 MFEM simulation images part of the Art of Science exhibition at the Livermore public library. Jan 6, 2016 News highlight: High-order finite element library provides scientists with access to cutting-edge algorithms . Aug 18, 2015 Moved to GitHub and mfem.org . Jan 26, 2015 Version 3.0 released .", "title": "News"}, {"location": "news/#mfem-news", "text": "Nov 25, 2024 Recap of the 2024 MFEM Community Workshop . Oct 28, 2024 Postdoc position on the MFEM team at LLNL. Oct 22, 2024 2024 MFEM community workshop . Jun 5, 2024 MFEM in the cloud tutorial as part of the HPCIC Tutorial series. May 7, 2024 Version 4.7 released . May 2, 2024 New MFEM paper to appear in the International Journal of High Performance Computing Application. Nov 13, 2023 Recap of the 2023 Workshop , held on October 26. Oct 26, 2023 2023 MFEM community workshop . Sep 27, 2023 Version 4.6 released . Sep 11, 2023 MFEM now available in Homebrew . Jul 17, 2023 The third MFEM Community Workshop will take place on October 26th, 2023. Jul 11, 2023 MFEM in the cloud tutorial as part of the RADIUSS AWS Tutorial series. Apr 11, 2023 GitHub ReadME project article on open-source software for fusion mentions MFEM. Mar 23, 2023 Version 4.5.2 released . Feb 22, 2023 AWS releases the Palace code for cloud-based electromagnetics simulations of quantum computing hardware based on MFEM Jan 6, 2023 Complete YouTube playlist of 2022 Workshop videos now available. Nov 16, 2022 Recap of the 2022 Workshop , held on October 25. Oct 22, 2022 Version 4.5 released . Oct 11, 2022 New Enzyme + MFEM project to efficiently differentiate large-scale finite element applications. 
Aug 18, 2022 The second MFEM Community Workshop will take place on October 25th, 2022. Aug 15, 2022 MFEM in the cloud tutorial as part of the RADIUSS AWS Tutorial series. Mar 21, 2022 Version 4.4 released . Jan 20, 2022 FEM@LLNL seminar series starting. Nov 30, 2021 New page with recorded talks + videos. Nov 12, 2021 Article summarizing the October 20th, 2021, community workshop . Jul 29, 2021 Version 4.3 released . Jul 10, 2021 The inaugural MFEM Community Workshop will take place on October 20th, 2021. Apr 22, 2021 MFEM featured on S&TR magazine cover . Mar 1, 2021 Logo featured throughout LLNL 2020 annual report . Feb 16, 2021 New documentation page on GPU performance . Dec 19, 2020 PyMFEM available with pip install mfem . Oct 30, 2020 Version 4.2 released . Jul 11, 2020 MFEM paper in Computers & Mathematics with Applications. Jun 24, 2020 MFEM video available on YouTube. Jun 8, 2020 ECP podcast about mfem-4.1. Jun 8, 2020 Matrix-free high-order solvers research highlighted in CASC Newsletter #9. Mar 30, 2020 Remhos a new MFEM-based miniapp for high-order DG remap released. Mar 29, 2020 CEED v3.0 and libCEED v0.6 released with updated MFEM support. Mar 27, 2020 Laghos v3.0 released with direct device support based on MFEM-4.1. Mar 10, 2020 Version 4.1 released . Nov 20, 2019 MFEM overview paper available on arXiv. May 24, 2019 Version 4.0 released with initial GPU support. May 10, 2019 AMR and TMOP papers available on arXiv. Mar 30, 2019 CEED v2.0 and libCEED v0.4 released with MFEM support. Mar 22, 2019 A version of the Laghos miniapp released for use in the second edition of the Commodity Technology Systems procurement process. Nov 19, 2018 Laghos v2.0 released with CUDA, RAJA, OCCA and AMR versions. Nov 9, 2018 MFEM part of the first release of the Extreme-Scale Scientific Software Stack (E4S) by the Software Technologies focus area of the ECP. Aug 6, 2018 Unstructured technologies presentation at ATPESC18 . May 29, 2018 Version 3.4 released . Apr 2, 2018 MFEM part of OpenHPC , a Linux Foundation project for software components required to deploy and manage HPC Linux clusters. Mar 30, 2018 CEED v1.0 and libCEED v0.2 released with MFEM support. Mar 1, 2018 MFEM highlighted in LLNL's Science & Technology Review magazine, including on the cover . Dec 30, 2017 Initial version of libCEED , the low-level CEED API, released. Nov 10, 2017 Version 3.3.2 released . Nov 7, 2017 ECP article: Co-Design Center Develops Next-Generation Simulation Tools , also in HPCwire . Oct 30, 2017 Laghos part of the ECP Proxy App Suite 1.0 , CORAL-2 Benchmarks and ASC co-design miniapps . Oct 16, 2017 Postdoc position available for electromagnetic simulations with MFEM. Sep 22, 2017 LLNL Newsline: LLNL gears up for next generation of computer-aided design and engineering . Jun 15, 2017 Laghos miniapp and CEED benchmarks released. May 8, 2017 News highlight: Accelerating Simulation Software with Graphics Processing Units . Feb 16, 2017 Moved main development to GitHub. Jan 28, 2017 Version 3.3 released . Dec 15, 2016 Postdoc position for exascale computing with MFEM. Nov 11, 2016 MFEM part of the new ECP co-design Center for Efficient Exascale Discretizations (CEED) . Nov 11, 2016 LLNL Newsline: Lawrence Livermore tapped to lead co-design center for exascale computing ecosystem . Oct 6, 2016 Science & Technology Review article: Laying the Groundwork for Extreme-Scale Computing , see also the YouTube preview . 
Sep 19, 2016 PyMFEM - a Python wrapper for MFEM by Syun'ichi Shiraiwa from MIT's Plasma Science and Fusion Center released. Jun 30, 2016 Version 3.2 released . May 6, 2016 MFEM packages available in homebrew and spack . Mar 9, 2016 VisIt 2.10.1 released with MFEM 3.1 support. Mar 4, 2016 New LLNL open-source software Blog and Twitter . Feb 16, 2016 Version 3.1 released . Feb 5, 2016 MFEM simulation images part of the Art of Science exhibition at the Livermore public library. Jan 6, 2016 News highlight: High-order finite element library provides scientists with access to cutting-edge algorithms . Aug 18, 2015 Moved to GitHub and mfem.org . Jan 26, 2015 Version 3.0 released .", "title": "MFEM News"}, {"location": "nonlininteg/", "text": "Nonlinear Form Integrators $ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} $ Nonlinear form integrators are used to express the local action of a general nonlinear finite element operator. Depending on the implementation they can also provide the capability to assemble the local gradient operator or to compute the local energy. TMOP integrator for variational minimization The TMOP_Integrator is used for mesh optimization by node movement. It represents the nonlinear objective function that arises in the Target-Matrix Optimization Paradigm (TMOP), as described in this publication . The local action and gradient, for an element $E_p$ in physical space, of the integrator compute \\begin{equation} F(x) = \\int_{E_t} \\frac{\\partial \\mu(J_{pt})}{\\partial x} ~ d x_t \\,, \\quad \\partial F(x) = \\int_{E_t} \\frac{\\partial^2 \\mu(J_{pt})}{\\partial{x^2}} ~ d x_t \\,, \\end{equation} where $x$ is the vector of positions for the mesh nodes of $E_p$; $x_t$ are positions in the target element $E_t$, which corresponds to $E_p$ (see class TargetConstructor ), and $J_{pt}$ is the Jacobian of the transformation from $E_t$ to $E_p$; and $\\mu$ is a mesh quality metric that is evaluated at quadrature points (see class TMOP_QualityMetric ). The local energy of the integrator represents the integral of $\\mu$ over the target element. Convective acceleration The VectorConvectionNLFIntegrator implements the local action of $(u \\cdot \\grad u, v)$, where $u, v \\in H_1^d$ for $d = 2, 3$. This term arises e.g. in the weak form of the Navier-Stokes equations. It also allows to assemble the local gradient which is represented by the linearization of the local action around $\\delta u$. Using the definition of the Gateaux derivative for functions \\begin{equation} F'(u, \\delta u) = \\lim_{\\epsilon \\to \\infty} \\frac{F(u + \\epsilon \\delta u) - F(u)}{\\epsilon} \\end{equation} with $F(u) = u \\cdot \\grad u$, we arrive at \\begin{equation} F'(u, \\delta u) = u \\cdot \\grad \\delta u + \\delta u \\cdot \\grad u. \\end{equation} The local gradient $(F'(u, \\delta u), v)$ can be computed by calling the GetGradient method of NonlinearForm . 
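As a minimal illustration (not the library's own example code), the following sketch assembles this convective term with VectorConvectionNLFIntegrator on an assumed small Cartesian mesh and queries both the local action and its linearization; the mesh size and polynomial order are arbitrary choices.

#include "mfem.hpp"
#include <iostream>
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(8, 8, Element::QUADRILATERAL);
   H1_FECollection fec(2, mesh.Dimension());
   FiniteElementSpace vfes(&mesh, &fec, mesh.Dimension()); // u, v in (H^1)^d

   NonlinearForm N(&vfes);
   N.AddDomainIntegrator(new VectorConvectionNLFIntegrator());

   GridFunction u(&vfes);
   u.Randomize(1);                  // some velocity field u

   Vector y(vfes.GetVSize());
   N.Mult(u, y);                    // local action (u . grad u, v)

   Operator &J = N.GetGradient(u);  // linearization du -> u.grad du + du.grad u
   std::cout << "Gradient operator size: " << J.Height() << std::endl;
   return 0;
}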
MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Nonlinear Form Integrators"}, {"location": "nonlininteg/#nonlinear-form-integrators", "text": "$ \\newcommand{\\cross}{\\times} \\newcommand{\\inner}{\\cdot} \\newcommand{\\div}{\\nabla\\cdot} \\newcommand{\\curl}{\\nabla\\times} \\newcommand{\\grad}{\\nabla} \\newcommand{\\ddx}[1]{\\frac{d#1}{dx}} $ Nonlinear form integrators are used to express the local action of a general nonlinear finite element operator. Depending on the implementation they can also provide the capability to assemble the local gradient operator or to compute the local energy.", "title": "Nonlinear Form Integrators"}, {"location": "nonlininteg/#tmop-integrator-for-variational-minimization", "text": "The TMOP_Integrator is used for mesh optimization by node movement. It represents the nonlinear objective function that arises in the Target-Matrix Optimization Paradigm (TMOP), as described in this publication . The local action and gradient, for an element $E_p$ in physical space, of the integrator compute \\begin{equation} F(x) = \\int_{E_t} \\frac{\\partial \\mu(J_{pt})}{\\partial x} ~ d x_t \\,, \\quad \\partial F(x) = \\int_{E_t} \\frac{\\partial^2 \\mu(J_{pt})}{\\partial{x^2}} ~ d x_t \\,, \\end{equation} where $x$ is the vector of positions for the mesh nodes of $E_p$; $x_t$ are positions in the target element $E_t$, which corresponds to $E_p$ (see class TargetConstructor ), and $J_{pt}$ is the Jacobian of the transformation from $E_t$ to $E_p$; and $\\mu$ is a mesh quality metric that is evaluated at quadrature points (see class TMOP_QualityMetric ). The local energy of the integrator represents the integral of $\\mu$ over the target element.", "title": "TMOP integrator for variational minimization"}, {"location": "nonlininteg/#convective-acceleration", "text": "The VectorConvectionNLFIntegrator implements the local action of $(u \\cdot \\grad u, v)$, where $u, v \\in H_1^d$ for $d = 2, 3$. This term arises e.g. in the weak form of the Navier-Stokes equations. It also allows to assemble the local gradient which is represented by the linearization of the local action around $\\delta u$. Using the definition of the Gateaux derivative for functions \\begin{equation} F'(u, \\delta u) = \\lim_{\\epsilon \\to \\infty} \\frac{F(u + \\epsilon \\delta u) - F(u)}{\\epsilon} \\end{equation} with $F(u) = u \\cdot \\grad u$, we arrive at \\begin{equation} F'(u, \\delta u) = u \\cdot \\grad \\delta u + \\delta u \\cdot \\grad u. \\end{equation} The local gradient $(F'(u, \\delta u), v)$ can be computed by calling the GetGradient method of NonlinearForm . MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Convective acceleration"}, {"location": "nurbs/", "text": "NURBS Miniapps These miniapps demonstrate the use of NURBS-based Isogeometric analysis 1 , 2 . NURBS Ex 1: Laplace problem This example code solves a simple Laplace problem \\begin{align} -\\Delta u = 1 \\end{align} with homogeneous Dirichlet boundary conditions. For implementation see miniapps/nurbs/nurbs__ex1 . NURBS Ex 3: Maxwell problem This example code solves a simple 3D electromagnetic diffusion problem corresponding to the second order definite Maxwell equation \\begin{align} \\nabla\\times\\nabla\\times\\, E + E = f \\end{align} with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. 
We discretize with Nedelec finite elements in 2D or 3D. The example demonstrates the use of $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. Static condensation is also illustrated. For implementation see miniapps/nurbs/nurbs__ex1 . NURBS Ex 5: Darcy problem This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system \\begin{align} \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} \\end{align} with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize with Raviart-Thomas finite elements (velocity $\\bf u$) and piecewise discontinuous polynomials (pressure $p$). For implementation see miniapps/nurbs/nurbs__ex5 . MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}}); T.J.R. Hughes, J.A. Cottrell, Y. Bazilevs: \"Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement\", Computer Methods in Applied Mechanics and Engineering, Elsevier, 2005, 194 (39-41), pp.4135-4195. \u21a9 T.J.R. Hughes, J.A. Cottrell, Y. Bazilevs: \"Isogeometric analysis: toward integration of CAD and FEA\", Wiley&Sons 2009 \u21a9", "title": "NURBS Discretization"}, {"location": "nurbs/#nurbs-miniapps", "text": "These miniapps demonstrate the use of NURBS-based Isogeometric analysis 1 , 2 .", "title": "NURBS Miniapps"}, {"location": "nurbs/#nurbs-ex-1-laplace-problem", "text": "This example code solves a simple Laplace problem \\begin{align} -\\Delta u = 1 \\end{align} with homogeneous Dirichlet boundary conditions. For implementation see miniapps/nurbs/nurbs__ex1 .", "title": "NURBS Ex 1: Laplace problem"}, {"location": "nurbs/#nurbs-ex-3-maxwell-problem", "text": "This example code solves a simple 3D electromagnetic diffusion problem corresponding to the second order definite Maxwell equation \\begin{align} \\nabla\\times\\nabla\\times\\, E + E = f \\end{align} with boundary condition $ E \\times n $ = \"given tangential field\". Here, we use a given exact solution $E$ and compute the corresponding r.h.s. $f$. We discretize with Nedelec finite elements in 2D or 3D. The example demonstrates the use of $H(curl)$ finite element spaces with the curl-curl and the (vector finite element) mass bilinear form, as well as the computation of discretization error when the exact solution is known. Static condensation is also illustrated. For implementation see miniapps/nurbs/nurbs__ex1 .", "title": "NURBS Ex 3: Maxwell problem"}, {"location": "nurbs/#nurbs-ex-5-darcy-problem", "text": "This example code solves a simple 2D/3D mixed Darcy problem corresponding to the saddle point system \\begin{align} \\begin{array}{rcl} k\\,{\\bf u} + {\\rm grad}\\,p &=& f \\\\ -{\\rm div}\\,{\\bf u} &=& g \\end{array} \\end{align} with natural boundary condition $-p = $ \"given pressure\". Here we use a given exact solution $({\\bf u},p)$ and compute the corresponding right hand side $(f, g)$. We discretize with Raviart-Thomas finite elements (velocity $\\bf u$) and piecewise discontinuous polynomials (pressure $p$). For implementation see miniapps/nurbs/nurbs__ex5 . MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}}); T.J.R. Hughes, J.A. Cottrell, Y. 
Bazilevs: \"Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement\", Computer Methods in Applied Mechanics and Engineering, Elsevier, 2005, 194 (39-41), pp.4135-4195. \u21a9 T.J.R. Hughes, J.A. Cottrell, Y. Bazilevs: \"Isogeometric analysis: toward integration of CAD and FEA\", Wiley&Sons 2009 \u21a9", "title": "NURBS Ex 5: Darcy problem"}, {"location": "parallel-tutorial/", "text": "Parallel Tutorial Summary This tutorial illustrates the building and sample use of the following MFEM parallel example codes: Example 1p Example 2p Example 3p An interactive documentation of all example codes is available here . Building Follow the building instructions to build the parallel MFEM library and to start a GLVis server. The latter is the recommended visualization software for MFEM (though its use is optional). To build the parallel example codes, type make in MFEM's examples directory: ~/mfem/examples> make mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex1p.cpp -o ex1p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex2p.cpp -o ex2p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex3p.cpp -o ex3p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex4p.cpp -o ex4p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex5p.cpp -o ex5p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex7p.cpp -o ex7p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex8p.cpp -o ex8p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex9p.cpp -o ex9p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex10p.cpp -o ex10p ... Example 1p This is a parallel version of Example 1 using hypre 's BoomerAMG preconditioner. Run this example as follows: ~/mfem/examples> mpirun -np 16 ex1p -m ../data/square-disc.mesh ... PCG Iterations = 26 Final PCG Relative Residual Norm = 4.30922e-13 If a GLVis server is running, the computed finite element solution combined from all processors , will appear in an interactive window: You can examine the solution using the mouse and the GLVis command keystrokes . To view the parallel partitioning, for example, press the following keys in the GLVis window: \" RAjlmm \" followed by F11/F12 and zooming with the right mouse button. To examine the solution only in one, or a few parallel subdomains, one can use the F9/F10 and the F8 keys. In 2D, one can also use press \" b \" to draw the only the boundaries between the subdomains. For example was produced by glvis -np 16 -m mesh -g sol -k \"RAjlb\" followed by F9 and scaling/position adjustment with the mouse. Three-dimensional and curvilinear meshes are also supported in parallel: ~/mfem/examples> mpirun -np 16 ex1p -m ../data/escher-p3.mesh ... PCG Iterations = 24 Final PCG Relative Residual Norm = 3.59964e-13 ~/mfem/examples> glvis -np 16 -m mesh -g sol -k \"Aooogtt\" The continuity of the solution across the inter-processor interfaces can be seen by using a cutting plane (keys \" AoooiMMtmm \" followed by \" z \" and \" Y \" adjustments): Example 2p This is a parallel version of Example 2 using the systems version of hypre 's BoomerAMG preconditioner, which can be run analogous to the serial case: ~/mfem/examples> mpirun -np 16 ex2p -m ../data/beam-hex.mesh -o 1 ... 
PCG Iterations = 39 Final PCG Relative Residual Norm = 2.91528e-09 To view the parallel partitioning with the magnitude of the computed displacement field, type \" Atttaa \" in the GLVis window followed by subdomain shrinking with F11 and scaling adjustments with the mouse: Example 3p This is a parallel version of Example 3 using hypre 's AMS preconditioner. Its use is analogous to the serial case: /mfem/examples> mpirun -np 16 ex3p -m ../data/fichera-q3.mesh ... PCG Iterations = 17 Final PCG Relative Residual Norm = 7.61595e-13 || E_h - E ||_{L^2} = 0.0821685 Note that AMS leads to much fewer iterations than the Gauss-Seidel preconditioner used in the serial code. The parallel subdomain partitioning can be seen with \" ooogt \" and F11/F12: One can also visualize individual components of the Nedelec solution and remove the elements in a cutting plane to see the surfaces corresponding to inter-processor boundaries: glvis -np 16 -m mesh -g sol -k \"ooottmiEF\"", "title": "_Parallel Tutorial"}, {"location": "parallel-tutorial/#parallel-tutorial", "text": "", "title": "Parallel Tutorial"}, {"location": "parallel-tutorial/#summary", "text": "This tutorial illustrates the building and sample use of the following MFEM parallel example codes: Example 1p Example 2p Example 3p An interactive documentation of all example codes is available here .", "title": "Summary"}, {"location": "parallel-tutorial/#building", "text": "Follow the building instructions to build the parallel MFEM library and to start a GLVis server. The latter is the recommended visualization software for MFEM (though its use is optional). To build the parallel example codes, type make in MFEM's examples directory: ~/mfem/examples> make mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex1p.cpp -o ex1p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex2p.cpp -o ex2p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex3p.cpp -o ex3p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex4p.cpp -o ex4p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex5p.cpp -o ex5p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex7p.cpp -o ex7p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex8p.cpp -o ex8p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex9p.cpp -o ex9p ... mpicxx -O3 -I.. -I../../hypre/src/hypre/include ex10p.cpp -o ex10p ...", "title": "Building"}, {"location": "parallel-tutorial/#example-1p", "text": "This is a parallel version of Example 1 using hypre 's BoomerAMG preconditioner. Run this example as follows: ~/mfem/examples> mpirun -np 16 ex1p -m ../data/square-disc.mesh ... PCG Iterations = 26 Final PCG Relative Residual Norm = 4.30922e-13 If a GLVis server is running, the computed finite element solution combined from all processors , will appear in an interactive window: You can examine the solution using the mouse and the GLVis command keystrokes . To view the parallel partitioning, for example, press the following keys in the GLVis window: \" RAjlmm \" followed by F11/F12 and zooming with the right mouse button. To examine the solution only in one, or a few parallel subdomains, one can use the F9/F10 and the F8 keys. In 2D, one can also use press \" b \" to draw the only the boundaries between the subdomains. For example was produced by glvis -np 16 -m mesh -g sol -k \"RAjlb\" followed by F9 and scaling/position adjustment with the mouse. Three-dimensional and curvilinear meshes are also supported in parallel: ~/mfem/examples> mpirun -np 16 ex1p -m ../data/escher-p3.mesh ... 
PCG Iterations = 24 Final PCG Relative Residual Norm = 3.59964e-13 ~/mfem/examples> glvis -np 16 -m mesh -g sol -k \"Aooogtt\" The continuity of the solution across the inter-processor interfaces can be seen by using a cutting plane (keys \" AoooiMMtmm \" followed by \" z \" and \" Y \" adjustments):", "title": "Example 1p"}, {"location": "parallel-tutorial/#example-2p", "text": "This is a parallel version of Example 2 using the systems version of hypre 's BoomerAMG preconditioner, which can be run analogous to the serial case: ~/mfem/examples> mpirun -np 16 ex2p -m ../data/beam-hex.mesh -o 1 ... PCG Iterations = 39 Final PCG Relative Residual Norm = 2.91528e-09 To view the parallel partitioning with the magnitude of the computed displacement field, type \" Atttaa \" in the GLVis window followed by subdomain shrinking with F11 and scaling adjustments with the mouse:", "title": "Example 2p"}, {"location": "parallel-tutorial/#example-3p", "text": "This is a parallel version of Example 3 using hypre 's AMS preconditioner. Its use is analogous to the serial case: /mfem/examples> mpirun -np 16 ex3p -m ../data/fichera-q3.mesh ... PCG Iterations = 17 Final PCG Relative Residual Norm = 7.61595e-13 || E_h - E ||_{L^2} = 0.0821685 Note that AMS leads to much fewer iterations than the Gauss-Seidel preconditioner used in the serial code. The parallel subdomain partitioning can be seen with \" ooogt \" and F11/F12: One can also visualize individual components of the Nedelec solution and remove the elements in a cutting plane to see the surfaces corresponding to inter-processor boundaries: glvis -np 16 -m mesh -g sol -k \"ooottmiEF\"", "title": "Example 3p"}, {"location": "performance/", "text": "Performance and Partial Assembly This document provides a brief overview of the tensor-based high-performance and partial assembly features in MFEM. In the traditional finite element setting, the operator is assembled in the form of a matrix. The action of the operator is computed by multiplying with this matrix. At high orders this requires both a large amount of memory to store the matrix, as well as many floating point operations to compute and apply it. Partial assembly is a technique that allows for efficiently applying the action of finite element operators without forming the corresponding matrix. This is particularly important when running on GPUs . Partial assembly is enabled at the level of the BilinearForm by setting the assembly level: a->SetAssemblyLevel(AssemblyLevel::PARTIAL); Once partial assembly is enabled, subsequent calls to member functions such as FormLinearSystem will result in an Operator that represents the action of the bilinear form a , without assembling a matrix. This functionality is illustrated in several MFEM examples , including examples 1, 3, 4, 5, 6, 9, 24, and 26. Note that partial assembly is currently implemented for tensor-product elements (i.e. quadrilaterals and hexahedra). Partial assembly for simplex elements (triangles and tetrahedra) is planned. Preconditioning with Partial Assembly When using partial assembly, the system matrix is no longer available for constructing preconditioners. This means that some of the standard preconditioners in MFEM such as HypreBoomerAMG and GSSmoother cannot be used. MFEM allows for the efficient construction of diagonal (Jacobi) smoothers for partially assembled operators on quad and hex meshes using the class OperatorJacobiSmoother . 
This class efficiently assembles the diagonal of the corresponding matrix, exploiting the tensor-product structure for efficient evaluation. MFEM also allows for Chebyshev smoothing with partial assembly using the class OperatorChebyshevSmoother . This smoother uses estimates of the eigenvalues of the operator computed using the power method , and is built upon the functionality of OperatorJacobiSmoother . Very efficient partially assembled h-multigrid and p-multigrid preconditioners can be constructed by leveraging a hierarchy of discretizations and the smoothers described above. This functionality is illustrated in Example 26 . Finite Element Operator Decomposition The partial assembly functionality in MFEM is based on decomposing the finite element operator into a nested sequence of operations that act on different levels of the discretization. Finite element operators are typically defined through weak formulations of partial differential equations that involve integration over a computational mesh. The required integrals are computed by splitting them as a sum over the mesh elements, mapping each element to a simple reference element (e.g. the unit square) and applying a quadrature rule in reference space. This sequence of operations highlights an inherent hierarchical structure present in all finite element operators where the evaluation starts on global (trial) degrees of freedom (dofs) on the whole mesh , restricts to degrees of freedom on subdomains (groups of elements), then moves to independent degrees of freedom on each element , transitions to independent quadrature points in reference space, performs the integration, and then goes back in reverse order to global (test) degrees of freedom on the whole mesh. This is illustrated below for the case of a symmetric linear operator. We use the notions T-vector , L-vector , E-vector and Q-vector to represent the sets corresponding to the (true) degrees of freedom on the global mesh, the split local degrees of freedom on the subdomains, the split degrees of freedom on the mesh elements, and the values at quadrature points, respectively. We refer to the operators that connect the different types of vectors as: Subdomain restriction P Element restriction G Basis (Dofs-to-Qpts) evaluator B Operator at quadrature points D More generally, if the operator is nonsymmetric or the test and trial space differ, then the operators mapping back from quadrature points to test spaces may not be transposes of P , G and B , but they still have the same basic structure and interpretation. Note that in the case of adaptive mesh refinement (AMR), the prolongation operator P involves not only extracting sub-vectors, but evaluating values at constrained degrees of freedom through the AMR interpolation. There can also be several levels of subdomains ( P1 , P2 , etc.), and it may be convenient to split D as the product of several operators ( D1 , D2 , etc.). Partial Assembly in MFEM Since the global operator A is just a series of variational restrictions with B , G and P , starting from its point-wise kernel D , a \"matrix-vector product\" with A can be performed by evaluating and storing some of the innermost variational restriction matrices, and applying the rest of the operators \"on-the-fly\". For example, one can compute and store a global matrix on T-vector level. Alternatively, one can compute and store only the subdomain (L-vector) or element (E-vector) matrices and perform the action of A using matvecs with P or P and G . 
While these options are natural for low-order discretizations, they are not a good fit for high-order methods due to the amount of FLOPs needed for their evaluation, as well as the memory transfer needed for a matvec. MFEM's partial assembly functionality computes and stores only D (or portions of it) and evaluates the actions of P , G and B on-the-fly. Critically for performance, MFEM takes advantage of the tensor-product structure of the degrees of freedom and quadrature points on quadrilateral and hexahedral elements to perform the action of B without storing it as a matrix. Note that the action of B is performed element-wise (it corresponds to a block-diagonal matrix), and the blocks depend only on the element order and reference geometry. Currently, only fixed order and geometry is supported, meaning that all the blocks of B are identical. The partial assembly algorithm requires the optimal amount of memory transfers (with respect to the polynomial order) and near-optimal FLOPs for operator evaluation. It consists of an operator setup phase, that evaluates and stores D and an operator apply (evaluation) phase that computes the action of A on an input vector. When desired, the setup phase may be done as a side-effect of evaluating a different operator, such as a nonlinear residual. The relative costs of the setup and apply phases are different depending on the physics being expressed and the representation of D . Parallel Decomposition After the application of each of the first three transition operators, P , G and B , the operator evaluation is decoupled on their ranges, so P , G and B allow us to \"zoom-in\" to subdomain, element and quadrature point level, ignoring the coupling at higher levels. Thus, a natural mapping of A on a parallel computer is to split the T-vector over MPI ranks (a non-overlapping decomposition, as is typically used for sparse matrices), and then split the rest of the vector types over computational devices (CPUs, GPUs, etc.) as indicated by the shaded regions in the diagram above. One of the advantages of the decomposition perspective in these settings is that the operators P , G , B and D clearly separate the MPI parallelism in the operator ( P ) from the unstructured mesh topology ( G ), the choice of the finite element space/basis ( B ) and the geometry and point-wise physics D . These components also naturally fall in different classes of numerical algorithms: parallel (multi-device) linear algebra for P , sparse (on-device) linear algebra for G , dense/structured linear algebra (tensor contractions) for B and parallel point-wise evaluations for D . Essential Boundary Conditions Essential boundary conditions for partially assembled operators are enforced using the class ConstrainedOperator (or, for rectangular systems, RectangularConstrainedOperator ). These operators represent the action of the partially assembled operator, together with specified constraints on essential degrees of freedom. The Operator returned from, for example, BilinearForm::FormLinearSystem or BilinearForm::FormSystemMatrix will in fact be a ConstrainedOperator . The Operator returned from MixedBilinearForm::FormRectangularSystemMatrix will be a RectangularConstrainedOperator . These classes perform the matrix-free equivalent of eliminating the rows and columns of the system matrix corresponding to the essential degrees of freedom. 
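For concreteness, here is a short sketch in the spirit of Example 1 showing how these pieces fit together for a partially assembled diffusion operator; the mesh, polynomial order, and solver tolerances are illustrative assumptions.

#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(16, 16, Element::QUADRILATERAL);
   H1_FECollection fec(4, mesh.Dimension());
   FiniteElementSpace fes(&mesh, &fec);

   Array<int> ess_tdof_list, ess_bdr(mesh.bdr_attributes.Max());
   ess_bdr = 1;                                   // Dirichlet on all boundaries
   fes.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

   ConstantCoefficient one(1.0);

   BilinearForm a(&fes);
   a.SetAssemblyLevel(AssemblyLevel::PARTIAL);    // no global sparse matrix
   a.AddDomainIntegrator(new DiffusionIntegrator(one));
   a.Assemble();

   LinearForm b(&fes);
   b.AddDomainIntegrator(new DomainLFIntegrator(one));
   b.Assemble();

   GridFunction x(&fes);
   x = 0.0;

   OperatorPtr A;
   Vector B, X;
   a.FormLinearSystem(ess_tdof_list, x, b, A, X, B); // A is a ConstrainedOperator

   OperatorJacobiSmoother M(a, ess_tdof_list);    // matrix-free diagonal smoother
   CGSolver cg;
   cg.SetRelTol(1e-12);
   cg.SetMaxIter(2000);
   cg.SetOperator(*A);
   cg.SetPreconditioner(M);
   cg.Mult(B, X);

   a.RecoverFEMSolution(X, b, x);                 // recover the FE solution
   return 0;
}

Because FormLinearSystem returns a ConstrainedOperator in this setting, the essential degrees of freedom are handled matrix-free, exactly as described above.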
Partial Assembly for Discontinuous Galerkin methods A complementary partial assembly decomposition is used for Discontinuous Galerkin methods to handle face terms, where a similar sequence of operators is applied on the faces to compute the numerical fluxes. However, since elements are decoupled, the element restriction G is the identity, and a face restriction G F is used instead to compute the numerical fluxes and couple elements together. This face restriction G F goes from element degrees of freedom to face degrees of freedom. Then a B F operator can be applied on the faces. An analogous D F operator is then applied at the face quadrature points. Currently, we support partial assembly only for Gauss-Lobatto and Bernstein bases, with integrators that don't require derivatives on the faces. High-Performance Templated Operators MFEM also offers a set of templated classes to evaluate finite element operators on tensor-product (quadrilateral and hexahedral) meshes, described in further detail here .", "title": "Performance"}, {"location": "performance/#performance-and-partial-assembly", "text": "This document provides a brief overview of the tensor-based high-performance and partial assembly features in MFEM. In the traditional finite element setting, the operator is assembled in the form of a matrix. The action of the operator is computed by multiplying with this matrix. At high orders this requires both a large amount of memory to store the matrix, as well as many floating point operations to compute and apply it. Partial assembly is a technique that allows for efficiently applying the action of finite element operators without forming the corresponding matrix. This is particularly important when running on GPUs . Partial assembly is enabled at the level of the BilinearForm by setting the assembly level: a->SetAssemblyLevel(AssemblyLevel::PARTIAL); Once partial assembly is enabled, subsequent calls to member functions such as FormLinearSystem will result in an Operator that represents the action of the bilinear form a , without assembling a matrix. This functionality is illustrated in several MFEM examples , including examples 1, 3, 4, 5, 6, 9, 24, and 26. Note that partial assembly is currently implemented for tensor-product elements (i.e. quadrilaterals and hexahedra). Partial assembly for simplex elements (triangles and tetrahedra) is planned.", "title": "Performance and Partial Assembly"}, {"location": "performance/#preconditioning-with-partial-assembly", "text": "When using partial assembly, the system matrix is no longer available for constructing preconditioners. This means that some of the standard preconditioners in MFEM such as HypreBoomerAMG and GSSmoother cannot be used. MFEM allows for the efficient construction of diagonal (Jacobi) smoothers for partially assembled operators on quad and hex meshes using the class OperatorJacobiSmoother . This class efficiently assembles the diagonal of the corresponding matrix, exploiting the tensor-product structure for efficient evaluation. MFEM also allows for Chebyshev smoothing with partial assembly using the class OperatorChebyshevSmoother . This smoother uses estimates of the eigenvalues of the operator computed using the power method , and is built upon the functionality of OperatorJacobiSmoother . Very efficient partially assembled h-multigrid and p-multigrid preconditioners can be constructed by leveraging a hierarchy of discretizations and the smoothers described above. 
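As a rough sketch (assuming a partially assembled BilinearForm a, its essential true-dof list ess_tdof_list, and the A, X, B objects produced by FormLinearSystem), the Jacobi smoother can be used directly as a CG preconditioner:
// Diagonal (Jacobi) preconditioner built from the partially assembled form:
OperatorJacobiSmoother M(a, ess_tdof_list);
CGSolver cg;
cg.SetRelTol(1e-12);
cg.SetMaxIter(2000);
cg.SetPreconditioner(M);
cg.SetOperator(*A);
cg.Mult(B, X);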
This functionality is illustrated in Example 26 .", "title": "Preconditioning with Partial Assembly"}, {"location": "performance/#finite-element-operator-decomposition", "text": "The partial assembly functionality in MFEM is based on decomposing the finite element operator into a nested sequence of operations that act on different levels of the discretization. Finite element operators are typically defined through weak formulations of partial differential equations that involve integration over a computational mesh. The required integrals are computed by splitting them as a sum over the mesh elements, mapping each element to a simple reference element (e.g. the unit square) and applying a quadrature rule in reference space. This sequence of operations highlights an inherent hierarchical structure present in all finite element operators where the evaluation starts on global (trial) degrees of freedom (dofs) on the whole mesh , restricts to degrees of freedom on subdomains (groups of elements), then moves to independent degrees of freedom on each element , transitions to independent quadrature points in reference space, performs the integration, and then goes back in reverse order to global (test) degrees of freedom on the whole mesh. This is illustrated below for the case of a symmetric linear operator. We use the notions T-vector , L-vector , E-vector and Q-vector to represent the sets corresponding to the (true) degrees of freedom on the global mesh, the split local degrees of freedom on the subdomains, the split degrees of freedom on the mesh elements, and the values at quadrature points, respectively. We refer to the operators that connect the different types of vectors as: Subdomain restriction P Element restriction G Basis (Dofs-to-Qpts) evaluator B Operator at quadrature points D More generally, if the operator is nonsymmetric or the test and trial space differ, then the operators mapping back from quadrature points to test spaces may not be transposes of P , G and B , but they still have the same basic structure and interpretation. Note that in the case of adaptive mesh refinement (AMR), the prolongation operator P involves not only extracting sub-vectors, but evaluating values at constrained degrees of freedom through the AMR interpolation. There can also be several levels of subdomains ( P1 , P2 , etc.), and it may be convenient to split D as the product of several operators ( D1 , D2 , etc.).", "title": "Finite Element Operator Decomposition"}, {"location": "performance/#partial-assembly-in-mfem", "text": "Since the global operator A is just a series of variational restrictions with B , G and P , starting from its point-wise kernel D , a \"matrix-vector product\" with A can be performed by evaluating and storing some of the innermost variational restriction matrices, and applying the rest of the operators \"on-the-fly\". For example, one can compute and store a global matrix on T-vector level. Alternatively, one can compute and store only the subdomain (L-vector) or element (E-vector) matrices and perform the action of A using matvecs with P or P and G . While these options are natural for low-order discretizations, they are not a good fit for high-order methods due to the amount of FLOPs needed for their evaluation, as well as the memory transfer needed for a matvec. MFEM's partial assembly functionality computes and stores only D (or portions of it) and evaluates the actions of P , G and B on-the-fly. 
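These operators are also exposed programmatically. As a small sketch (assuming a FiniteElementSpace named fespace), the operator connecting T-vectors and L-vectors (P above) and the element restriction (G above) can be queried as follows:
// P (prolongation): maps T-vectors of true dofs to L-vectors of local dofs.
const Operator *P = fespace.GetProlongationMatrix();
// G: maps L-vectors to E-vectors of element-wise dofs.
const Operator *G = fespace.GetElementRestriction(ElementDofOrdering::LEXICOGRAPHIC);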
Critically for performance, MFEM takes advantage of the tensor-product structure of the degrees of freedom and quadrature points on quadrilateral and hexahedral elements to perform the action of B without storing it as a matrix. Note that the action of B is performed element-wise (it corresponds to a block-diagonal matrix), and the blocks depend only on the element order and reference geometry. Currently, only fixed order and geometry is supported, meaning that all the blocks of B are identical. The partial assembly algorithm requires the optimal amount of memory transfers (with respect to the polynomial order) and near-optimal FLOPs for operator evaluation. It consists of an operator setup phase, that evaluates and stores D and an operator apply (evaluation) phase that computes the action of A on an input vector. When desired, the setup phase may be done as a side-effect of evaluating a different operator, such as a nonlinear residual. The relative costs of the setup and apply phases are different depending on the physics being expressed and the representation of D .", "title": "Partial Assembly in MFEM"}, {"location": "performance/#parallel-decomposition", "text": "After the application of each of the first three transition operators, P , G and B , the operator evaluation is decoupled on their ranges, so P , G and B allow us to \"zoom-in\" to subdomain, element and quadrature point level, ignoring the coupling at higher levels. Thus, a natural mapping of A on a parallel computer is to split the T-vector over MPI ranks (a non-overlapping decomposition, as is typically used for sparse matrices), and then split the rest of the vector types over computational devices (CPUs, GPUs, etc.) as indicated by the shaded regions in the diagram above. One of the advantages of the decomposition perspective in these settings is that the operators P , G , B and D clearly separate the MPI parallelism in the operator ( P ) from the unstructured mesh topology ( G ), the choice of the finite element space/basis ( B ) and the geometry and point-wise physics D . These components also naturally fall in different classes of numerical algorithms: parallel (multi-device) linear algebra for P , sparse (on-device) linear algebra for G , dense/structured linear algebra (tensor contractions) for B and parallel point-wise evaluations for D .", "title": "Parallel Decomposition"}, {"location": "performance/#essential-boundary-conditions", "text": "Essential boundary conditions for partially assembled operators are enforced using the class ConstrainedOperator (or, for rectangular systems, RectangularConstrainedOperator ). These operators represent the action of the partially assembled operator, together with specified constraints on essential degrees of freedom. The Operator returned from, for example, BilinearForm::FormLinearSystem or BilinearForm::FormSystemMatrix will in fact be a ConstrainedOperator . The Operator returned from MixedBilinearForm::FormRectangularSystemMatrix will be a RectangularConstrainedOperator . These classes perform the matrix-free equivalent of eliminating the rows and columns of the system matrix corresponding to the essential degrees of freedom.", "title": "Essential Boundary Conditions"}, {"location": "performance/#partial-assembly-for-discontinuous-galerkin-methods", "text": "A complementary partial assembly decomposition is used for Discontinuous Galerkin methods to handle face terms, where a similar sequence of operators is applied on the faces to compute the numerical fluxes. 
However, since elements are decoupled, the element restriction G is the identity, and a face restriction G F is used instead to compute the numerical fluxes and couple elements together. This face restriction G F goes from element degrees of freedom to face degrees of freedom. Then a B F operator can be applied on the faces. An analogous D F operator is then applied at the face quadrature points. Currently, we support partial assembly only for Gauss-Lobatto and Bernstein bases, with integrators that don't require derivatives on the faces.", "title": "Partial Assembly for Discontinuous Galerkin methods"}, {"location": "performance/#high-performance-templated-operators", "text": "MFEM also offers a set of templated classes to evaluate finite element operators on tensor-product (quadrilateral and hexahedral) meshes, described in further detail here .", "title": "High-Performance Templated Operators"}, {"location": "pri-dual-vec/", "text": "Primal and Dual Vectors The finite element method uses vectors of data in a variety of ways and the differences can be subtle. MFEM defines GridFunction , LinearForm , and Vector classes which help to distinguish the different roles that vectors of data can play. Graphical summary of Primal, Dual, DoF (dofs), and True DoF (tdofs) vectors Primal Vectors The finite element method is based on the notion that a smooth function can be approximated by a sum of piece-wise smooth functions (typically piece-wise polynomials) called basis functions : $$f(\\vec{x})\\approx\\sum_i f_i \\phi_i(\\vec{x}) \\label{expan}$$ The support of an individual basis function, $\\;\\phi_i(\\vec{x})$, will either be a single zone or a collection of zones that share a common vertex, edge, or face. The expansion coefficients, $\\;f_i$, are linear functionals of the field being approximated, $\\;f(\\vec{x})$ in this case. The $\\;f_i$ could be as simple as values of the function at particular points, called interpolation points, e.g. $\\;f_i=f(\\vec{x}_i)$, or they could be integrals of the field over submanifolds of the domain, e.g. $\\;f_i = \\int_{\\Omega_i}f(\\vec{x})d\\vec{x}$. There are many possibilities but the expansion coefficients must be linear functionals of $\\;f(\\vec{x})$. The expansion coefficients are often called degrees of freedom , or DoFs for short, though in certain cases they may not be actually independent because of some problem specific constraints. We'll discuss this more in a later section on True DoFs . Once the basis functions are defined, with some unique ordering, the expansion coefficients can be stored in a vector using the same order. Such a vector of coefficients is called a primal vector . The original function, $\\;f(\\vec{x})$, can then be approximated using \\eqref{expan}. In practice this requires not only the primal vector of coefficients but also knowledge of the mesh and the basis functions for each element of the mesh. In MFEM these collections of information are combined into GridFunction objects (or ParGridFunction objects when used in parallel) which represent piece-wise functions belonging to a finite element approximation space. The GridFunction class contains many Get methods which can compute the expansion \\eqref{expan} at particular locations within an element. The primal vector of expansion coefficients can be computed by solving a linear system or by using any of the various Project methods provided by the GridFunction class. 
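As a minimal sketch of such a projection (assuming a FiniteElementSpace named fespace and a user-supplied C++ function f_exact standing in for the field being approximated):
// A user-defined function standing in for f(x):
double f_exact(const Vector &x) { return sin(x(0)) * cos(x(1)); }

// ... given a FiniteElementSpace 'fespace':
FunctionCoefficient f_coef(f_exact);
GridFunction f(&fespace);
f.ProjectCoefficient(f_coef); // fills the primal vector of expansion coefficients f_i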
These methods compute the degrees of freedom, $\\;f_i$, or some subset of them, from a Coefficient object representing $\\;f(\\vec{x})$. Other methods in this class can be used to compute various measures of the error in the finite element approximation of $\\;f(\\vec{x})$. Dual Vectors Any vector space, such as the space of primal vectors , has a dual space containing co-vectors a.k.a. dual vectors . In this context a dual vector is a linear functional of a primal vector meaning that the action of a dual vector upon a primal vector is a real number. For example, the integral of a field over a domain, $\\;\\alpha=\\int_\\Omega g(\\vec{x})d\\vec{x}$, is a linear functional because the integral is linear with respect to the function being integrated and the result is a real number. Indeed we can derive similar linear functionals using compatible functions, $\\;f(\\vec{x})$, in a variety of ways, for example $G(f)=\\int_\\Omega g(\\vec{x})f(\\vec{x})d\\vec{x}$. If we compute the action of our functional on the finite element basis functions, $$G_i=G(\\phi_i(\\vec{x})) = \\int_\\Omega g(\\vec{x})\\phi_i(\\vec{x})d\\vec{x}\\label{dualvec},$$ and we collect the results into a vector with entries $\\;G_i$, we call this a dual vector of $\\;g(\\vec{x})$. Integrals such as this often arise when enforcing energy balance in physical systems. For example, if $\\vec{J}$ is a current density describing a flow of charged particles and $\\vec{E}$ is an electric field acting upon those particles, then $\\int_\\Omega\\vec{J}\\cdot\\vec{E}\\,d\\vec{x}$ is the rate at which work is being done by the field on the charged particles. MFEM provides LinearForm objects (or ParLinearForm objects in parallel) which can compute dual vectors from a given function, $\\;g(\\vec{x})$, described by a Coefficient object. (Par)LinearForm objects require not only the mesh, basis functions, and the field $\\;g(\\vec{x})$ but also a LinearFormIntegrator which defines precisely what type of linear functional is being computed. See Linear Form Integrators for more information about MFEM's linear form integrators. If, instead of a Coefficient object, you have a primal vector , $\\;g_j$, representing $\\;g(\\vec{x})$ you can form a dual vector by multiplying $\\;g_j$ by a bilinear form, see Bilinear Form Integrators for more information on bilinear forms. To understand why this is so, consider inserting the expansion \\eqref{expan} into \\eqref{dualvec}. $$ G_i=\\int_\\Omega \\left(\\sum_j g_j \\phi_j(\\vec{x})\\right)\\phi_i(\\vec{x})d\\vec{x} = \\sum_j \\left(\\int_\\Omega \\phi_j(\\vec{x})\\phi_i(\\vec{x})d\\vec{x}\\right)g_j \\label{dualvecprod}$$ The last integral contains two indices and can therefore be viewed as an entry in a square matrix. Furthermore, each dual vector entry, $\\;G_i$, is equivalent to one row of a matrix-vector product between this matrix of basis function integrals and the primal vector $\\;g_j$. This particular matrix, involving only the product of basis functions, is traditionally called a mass matrix . However, the action of any matrix, resulting from a bilinear form, upon a primal vector will produce a dual vector . In general, such dual vectors will have more complicated definitions than \\eqref{dualvec} or \\eqref{dualvecprod} but they will still be linear functionals of primal vectors . True Degree-of-Freedom Vectors Primal vectors contain all of the expansion coefficients needed to compute the finite element approximation of a function in each element of a mesh. 
When run in parallel, the local portion of a primal vector only contains data for the locally owned elements. Regardless of whether or not the simulation is being run in parallel, some of these coefficients may in fact be redundant or interdependent. Sources of redundancy: In parallel some coefficients must be shared between processors. When using static condensation or hybridization many coefficients will depend upon the coefficients which are associated with the skeleton of the mesh as well as upon other data. When using non-conforming meshes some of the coefficients on the finer side of a non-conforming interface between elements will depend upon those on the coarser side of the interface. For any or all of these reasons primal vectors may not contain the true degrees-of-freedom for describing a finite element approximation of a field. The true set of degrees-of-freedom may in fact be much smaller than the size of the primal vector. When setting up and solving a linear system to determine the finite element approximation of a field, the size of the linear system is determined by the number of true degrees-of-freedom . The details of creating this linear system are mostly hidden within the BilinearForm object. To convert individual bilinear form objects the user can call the BilinearForm::FormSystemMatrix() method, however, the more common task is to form the entire linear system with BilinearForm::FormLinearSystem() . As input, this method requires a primal vector , a dual vector , and an array of Dirichlet boundary degree-of-freedom indices. The degree-of-freedom array contains the true degrees-of-freedom, as obtained from a FiniteElementSpace object, which coincide with the Dirichlet, a.k.a. essential , boundaries. // Given a bilinear form 'a', a primal vector 'x', a dual vector 'b', // and an array of essential boundary true dof indices... SparseMatrix A; Vector B, X; a.FormLinearSystem(ess_tdof_list, x, b, A, X, B); // Solve X = A^{-1}B ... a.RecoverFEMSolution(X, b, x); The primal vector must contain the appropriate values for the solution on the essential boundaries. The interior of the primal vector is ignored by default although it can be used to supply an initial guess when using certain solvers. The dual vector should be an assembled LinearForm object or the product of a GridFunction and a BilinearForm . As output, BilinearForm::FormLinearSystem() produces the objects $A$, $X$, and $B$ in the linear system $A X=B$. Where $A$ is ready to be passed to the appropriate MFEM solver, $X$ is properly initialized, and $B$ has been modified to incorporate the essential boundary conditions. After the linear system has been solved the primal vector representing the solution must be built from $X$ and the original dual vector by calling BilinearForm::RecoverFEMSolution() . Technical Details Constructing Dual Vectors It was mentioned above, in the section on Dual Vectors , that you can create a dual vector by multiplying a primal vector by a bilinear form. But of course if you have a primal vector you can also use a GridFunctionCoefficient to create a dual vector using a LinearForm and an appropriate LinearFormIntegrator . These two choices should produce nearly identical results if the BilinearFormIntegrator and the LinearFormIntegrator use the same integration rule order. The order of the summation might differ between BilinearFormIntegrator and LinearFormIntegrator , potentially resulting in round-off error differences. 
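A rough sketch of the two approaches (assuming a FiniteElementSpace named fespace and a primal GridFunction g defined on it; all names are placeholders):
// Option 1: integrate a GridFunctionCoefficient with a LinearForm.
GridFunctionCoefficient g_coef(&g);
LinearForm G1(&fespace);
G1.AddDomainIntegrator(new DomainLFIntegrator(g_coef));
G1.Assemble(); // G1_i is the integral of g(x) phi_i(x) over the domain

// Option 2: apply an assembled mass matrix to the primal vector g.
BilinearForm m(&fespace);
m.AddDomainIntegrator(new MassIntegrator);
m.Assemble();
m.Finalize();
Vector G2(fespace.GetVSize());
m.Mult(g, G2); // the same dual vector, up to integration order and round-off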
When considering to use a BilinearForm or a LinearForm, one must be aware of their different computational and memory costs. A bilinear form must create a sparse matrix which can require a great deal of memory. Integrating a GridFunctionCoefficient in a LinearForm object will require very little memory. On the other hand, computing the integrals inside a LinearForm object can be computationally expensive even in comparison to assembling the bilinear form. Which is the better option? As always, there are trade-offs. The answer depends on many variables; the complexities of the BilinearFormIntegrator and the LinearFormIntegrator , the complexity of other coefficients that may be present, the order of the basis functions, can the bilinear form be reused or is this a one-time calculation, whether the code runs on a CPU or GPU , etc. On some architectures the motion of data through memory during a matrix-vector multiplication may be expensive enough that using a LinearForm and recomputing the integrals is more efficient. Often the construction of dual vectors is a small portion of the overall compute time so this choice may not be critical. The best choice is to test your application and determine which method is more appropriate for your algorithm on your hardware. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Primal and Dual Vectors"}, {"location": "pri-dual-vec/#primal-and-dual-vectors", "text": "The finite element method uses vectors of data in a variety of ways and the differences can be subtle. MFEM defines GridFunction , LinearForm , and Vector classes which help to distinguish the different roles that vectors of data can play. Graphical summary of Primal, Dual, DoF (dofs), and True DoF (tdofs) vectors", "title": "Primal and Dual Vectors"}, {"location": "pri-dual-vec/#primal-vectors", "text": "The finite element method is based on the notion that a smooth function can be approximated by a sum of piece-wise smooth functions (typically piece-wise polynomials) called basis functions : $$f(\\vec{x})\\approx\\sum_i f_i \\phi_i(\\vec{x}) \\label{expan}$$ The support of an individual basis function, $\\;\\phi_i(\\vec{x})$, will either be a single zone or a collection of zones that share a common vertex, edge, or face. The expansion coefficients, $\\;f_i$, are linear functionals of the field being approximated, $\\;f(\\vec{x})$ in this case. The $\\;f_i$ could be as simple as values of the function at particular points, called interpolation points, e.g. $\\;f_i=f(\\vec{x}_i)$, or they could be integrals of the field over submanifolds of the domain, e.g. $\\;f_i = \\int_{\\Omega_i}f(\\vec{x})d\\vec{x}$. There are many possibilities but the expansion coefficients must be linear functionals of $\\;f(\\vec{x})$. The expansion coefficients are often called degrees of freedom , or DoFs for short, though in certain cases they may not be actually independent because of some problem specific constraints. We'll discuss this more in a later section on True DoFs . Once the basis functions are defined, with some unique ordering, the expansion coefficients can be stored in a vector using the same order. Such a vector of coefficients is called a primal vector . The original function, $\\;f(\\vec{x})$, can then be approximated using \\eqref{expan}. In practice this requires not only the primal vector of coefficients but also knowledge of the mesh and the basis functions for each element of the mesh. 
In MFEM these collections of information are combined into GridFunction objects (or ParGridFunction objects when used in parallel) which represent piece-wise functions belonging to a finite element approximation space. The GridFunction class contains many Get methods which can compute the expansion \\eqref{expan} at particular locations within an element. The primal vector of expansion coefficients can be computed by solving a linear system or by using any of the various Project methods provided by the GridFunction class. These methods compute the degrees of freedom, $\\;f_i$, or some subset of them, from a Coefficient object representing $\\;f(\\vec{x})$. Other methods in this class can be used to compute various measures of the error in the finite element approximation of $\\;f(\\vec{x})$.", "title": "Primal Vectors"}, {"location": "pri-dual-vec/#dual-vectors", "text": "Any vector space, such as the space of primal vectors , has a dual space containing co-vectors a.k.a. dual vectors . In this context a dual vector is a linear functional of a primal vector meaning that the action of a dual vector upon a primal vector is a real number. For example, the integral of a field over a domain, $\\;\\alpha=\\int_\\Omega g(\\vec{x})d\\vec{x}$, is a linear functional because the integral is linear with respect to the function being integrated and the result is a real number. Indeed we can derive similar linear functionals using compatible functions, $\\;f(\\vec{x})$, in a variety of ways, for example $G(f)=\\int_\\Omega g(\\vec{x})f(\\vec{x})d\\vec{x}$. If we compute the action of our functional on the finite element basis functions, $$G_i=G(\\phi_i(\\vec{x})) = \\int_\\Omega g(\\vec{x})\\phi_i(\\vec{x})d\\vec{x}\\label{dualvec},$$ and we collect the results into a vector with entries $\\;G_i$, we call this a dual vector of $\\;g(\\vec{x})$. Integrals such as this often arise when enforcing energy balance in physical systems. For example, if $\\vec{J}$ is a current density describing a flow of charged particles and $\\vec{E}$ is an electric field acting upon those particles, then $\\int_\\Omega\\vec{J}\\cdot\\vec{E}\\,d\\vec{x}$ is the rate at which work is being done by the field on the charged particles. MFEM provides LinearForm objects (or ParLinearForm objects in parallel) which can compute dual vectors from a given function, $\\;g(\\vec{x})$, described by a Coefficient object. (Par)LinearForm objects require not only the mesh, basis functions, and the field $\\;g(\\vec{x})$ but also a LinearFormIntegrator which defines precisely what type of linear functional is being computed. See Linear Form Integrators for more information about MFEM's linear form integrators. If, instead of a Coefficient object, you have a primal vector , $\\;g_j$, representing $\\;g(\\vec{x})$ you can form a dual vector by multiplying $\\;g_j$ by a bilinear form, see Bilinear Form Integrators for more information on bilinear forms. To understand why this is so, consider inserting the expansion \\eqref{expan} into \\eqref{dualvec}. $$ G_i=\\int_\\Omega \\left(\\sum_j g_j \\phi_j(\\vec{x})\\right)\\phi_i(\\vec{x})d\\vec{x} = \\sum_j \\left(\\int_\\Omega \\phi_j(\\vec{x})\\phi_i(\\vec{x})d\\vec{x}\\right)g_j \\label{dualvecprod}$$ The last integral contains two indices and can therefore be viewed as an entry in a square matrix. Furthermore, each dual vector entry, $\\;G_i$, is equivalent to one row of a matrix-vector product between this matrix of basis function integrals and the primal vector $\\;g_j$. 
This particular matrix, involving only the product of basis functions, is traditionally called a mass matrix . However, the action of any matrix, resulting from a bilinear form, upon a primal vector will produce a dual vector . In general, such dual vectors will have more complicated definitions than \\eqref{dualvec} or \\eqref{dualvecprod} but they will still be linear functionals of primal vectors .", "title": "Dual Vectors"}, {"location": "pri-dual-vec/#true-degree-of-freedom-vectors", "text": "Primal vectors contain all of the expansion coefficients needed to compute the finite element approximation of a function in each element of a mesh. When run in parallel, the local portion of a primal vector only contains data for the locally owned elements. Regardless of whether or not the simulation is being run in parallel, some of these coefficients may in fact be redundant or interdependent. Sources of redundancy: In parallel some coefficients must be shared between processors. When using static condensation or hybridization many coefficients will depend upon the coefficients which are associated with the skeleton of the mesh as well as upon other data. When using non-conforming meshes some of the coefficients on the finer side of a non-conforming interface between elements will depend upon those on the coarser side of the interface. For any or all of these reasons primal vectors may not contain the true degrees-of-freedom for describing a finite element approximation of a field. The true set of degrees-of-freedom may in fact be much smaller than the size of the primal vector. When setting up and solving a linear system to determine the finite element approximation of a field, the size of the linear system is determined by the number of true degrees-of-freedom . The details of creating this linear system are mostly hidden within the BilinearForm object. To convert individual bilinear form objects the user can call the BilinearForm::FormSystemMatrix() method, however, the more common task is to form the entire linear system with BilinearForm::FormLinearSystem() . As input, this method requires a primal vector , a dual vector , and an array of Dirichlet boundary degree-of-freedom indices. The degree-of-freedom array contains the true degrees-of-freedom, as obtained from a FiniteElementSpace object, which coincide with the Dirichlet, a.k.a. essential , boundaries. // Given a bilinear form 'a', a primal vector 'x', a dual vector 'b', // and an array of essential boundary true dof indices... SparseMatrix A; Vector B, X; a.FormLinearSystem(ess_tdof_list, x, b, A, X, B); // Solve X = A^{-1}B ... a.RecoverFEMSolution(X, b, x); The primal vector must contain the appropriate values for the solution on the essential boundaries. The interior of the primal vector is ignored by default although it can be used to supply an initial guess when using certain solvers. The dual vector should be an assembled LinearForm object or the product of a GridFunction and a BilinearForm . As output, BilinearForm::FormLinearSystem() produces the objects $A$, $X$, and $B$ in the linear system $A X=B$. Where $A$ is ready to be passed to the appropriate MFEM solver, $X$ is properly initialized, and $B$ has been modified to incorporate the essential boundary conditions. 
After the linear system has been solved the primal vector representing the solution must be built from $X$ and the original dual vector by calling BilinearForm::RecoverFEMSolution() .", "title": "True Degree-of-Freedom Vectors"}, {"location": "pri-dual-vec/#technical-details", "text": "", "title": "Technical Details"}, {"location": "pri-dual-vec/#constructing-dual-vectors", "text": "It was mentioned above, in the section on Dual Vectors , that you can create a dual vector by multiplying a primal vector by a bilinear form. But of course if you have a primal vector you can also use a GridFunctionCoefficient to create a dual vector using a LinearForm and an appropriate LinearFormIntegrator . These two choices should produce nearly identical results if the BilinearFormIntegrator and the LinearFormIntegrator use the same integration rule order. The order of the summation might differ between BilinearFormIntegrator and LinearFormIntegrator , potentially resulting in round-off error differences. When considering to use a BilinearForm or a LinearForm, one must be aware of their different computational and memory costs. A bilinear form must create a sparse matrix which can require a great deal of memory. Integrating a GridFunctionCoefficient in a LinearForm object will require very little memory. On the other hand, computing the integrals inside a LinearForm object can be computationally expensive even in comparison to assembling the bilinear form. Which is the better option? As always, there are trade-offs. The answer depends on many variables; the complexities of the BilinearFormIntegrator and the LinearFormIntegrator , the complexity of other coefficients that may be present, the order of the basis functions, can the bilinear form be reused or is this a one-time calculation, whether the code runs on a CPU or GPU , etc. On some architectures the motion of data through memory during a matrix-vector multiplication may be expensive enough that using a LinearForm and recomputing the integrals is more efficient. Often the construction of dual vectors is a small portion of the overall compute time so this choice may not be critical. The best choice is to test your application and determine which method is more appropriate for your algorithm on your hardware. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Constructing Dual Vectors"}, {"location": "publications/", "text": "Publications Google Scholar Citations Recent All time Selected Publications 2024 T. Dzanic, K. Mittal, D. Kim, J. Yang, S. Petrides, B. Keith, R. Anderson, DynAMO: Multi-agent reinforcement learning for dynamic anticipatory mesh optimization with applications to hyperbolic conservation laws , Journal of Computational Physics , 506, 112924, 2024 K. Mittal, V. Dobrev, P. Knupp, T. Kolev, F. Ledoux, C. Roche, V. Tomov, Mixed-Order Meshes through rp-adaptivity for Surface Fitting to Implicit Geometries , Proceedings of the 2024 SIAM International Meshing Roundtable (IMR) . 2024 . T. Stitt, K. Belcher, A. Campos, T. Kolev, P. Mocz, R. Rieben, A. Skinner, V. Tomov, A. Vargas, K. Weiss, Performance portable GPU acceleration of a high-order finite element multiphysics application , Journal of Fluids Engineering , 146(4):041102, 2024 . V. Dobrev, P. Knupp, T. Kolev, K. Mittal, R. Rieben, M. Stees, V. Tomov, Asymptotic Analysis of Compound Volume+ Shape Metrics for Mesh Optimization , Proceedings of the 2024 SIAM International Meshing Roundtable (IMR) . 2024 . W. Pazner, Tz. 
Kolev, P. Vassilevski, Matrix-free high-performance saddle-point solvers for high-order problems in H(div) , SIAM Journal on Scientific Computing , 2024 . Also available as arXiv:2304.12387 . G. Fu, S. Osher, W. Pazner, and W. Li. Generalized optimal transport and mean field control problems for reaction-diffusion systems with high-order finite element computation , Journal of Computational Physics , 2024 . Also available as arXiv:2306.06287 . J. Andrej, N. Atallah, J.-P. B\u00e4cker, J. Camier, D. Copeland, V. Dobrev, Y. Dudouit, T. Duswald, B. Keith, D. Kim, Tz. Kolev, B. Lazarov, K. Mittal, W. Pazner, S. Petrides, S. Shiraiwa, M. Stowell, V. Tomov. High-performance finite elements with MFEM , accepted for publication in the International Journal of High Performance Computing Applications, 2024 . Also available as arXiv:2402.15940 . A. Gillette, B. Keith, S. Petrides, Learning robust marking policies for adaptive mesh refinement , SIAM Journal on Scientific Computing , 2024 . Also available as arXiv:2207.06339 . T. Duswald, B. Keith, B. Lazarov, S. Petrides, B. Wohlmuth, Finite elements for Mat\u00e9rn-type random fields: Uncertainty in computational mechanics and design optimization (in-review). Also available as arXiv:2403.03658 2023 J. Vedral, Dissipative WENO stabilization of high-order discontinuous Galerkin methods for hyperbolic problems , in review . D. Kuzmin, H. Hajduk, Property-Preserving Numerical Schemes for Conservation Laws , World Scientific , 2023 D. Kuzmin, J. Vedral, Dissipation-based WENO stabilization of high-order finite element methods for scalar conservation laws , Journal of Computational Physics , 487, 112153, 2023 B. Keith, T.M. Surowiec, Proximal Galerkin: A structure-preserving finite element method for pointwise bound constraints , 2023 . R. Bollapragada, C. Karamanli, B. Keith, B. Lazarov, S. Petrides, J. Wang, An Adaptive Sampling Augmented Lagrangian Method for Stochastic Optimization with Deterministic Constraints , Computers & Mathematics with Applications , 2023 . Also available as arXiv:2305.01018 . J. Yang, K. Mittal, T. Dzanic, S. Petrides, B. Keith, B. Petersen, D. Faissol, R. Anderson, Multi-Agent Reinforcement Learning for Adaptive Mesh Refinement , Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems , 2023 . J. Yang, T. Dzanic, B. Petersen, J. Kudo, K. Mittal, V. Tomov, J.S. Camier, T. Zhao, H. Zha, T. Kolev, R. Anderson, Reinforcement learning for adaptive mesh refinement , Proceedings of the International Conference on Artificial Intelligence and Statistics , 2023 . W. Pazner, Tz. Kolev, and J. Camier, End-to-end GPU acceleration of low-order-refined preconditioning for high-order finite element discretizations , The International Journal of High Performance Computing Applications , 2023 . Also available as arXiv:2210.12253 . W. Pazner, Tz. Kolev, and C. Dohrmann, Low-order preconditioning for the high-order finite element de Rham complex , SIAM Journal on Scientific Computing , 2023 . Also available as arXiv:2203.02465 . J. Barrera, Tz. Kolev, K. Mittal, and V. Tomov, High-Order Mesh Morphing for Boundary and Interface Fitting to Implicit Geometries , Computer-Aided Design , 158, 103499, 2023 . Also available as arXiv:2208.05062 . J. Camier, V. Dobrev, P. Knupp, Tz. Kolev, K. Mittal, R. Rieben, and V. Tomov, Accelerating high-order mesh optimization using finite element partial assembly on GPUs , Journal of Computational Physics , 474, 111808, 2023 . Also available as arXiv:2205.12721 . F. 
G\u00f3mez-Lozada, C. Andr\u00e9s del Valle, J. D. Jim\u00e9nez-Paz, B. S. Lazarov and J. Galvis, Modelling and simulation of brinicle formation , Royal Society Open Science , 10, 10, 230268, 2023 . 2022 D. Kuzmin, J.-P. B\u00e4cker, An unfitted finite element method using level set functions for extrapolation into deformable diffuse interfaces , Journal of Computational Physics , 461, 111218, 2022 A. Vargas, T. Stitt, K. Weiss, V. Tomov, J. Camier, Tz. Kolev, and R. Rieben, Matrix-free approaches for GPU acceleration of a high-order finite element hydrodynamics application using MFEM, Umpire, and RAJA , The International Journal of High Performance Computing Applications , 36(4):492-509, 2022 . Also available as arXiv:2112.07075 . J. Nikl, M. Kucha\u0159\u00edk, and S. Weber, High-Order Curvilinear Finite Element Magneto-Hydrodynamics I: A Conservative Lagrangian Scheme , Journal of Computational Physics , 464, 111158, 2022 . Also available as arXiv:2110.11669 . T. L. Horvath and S. Rhebergen, A conforming sliding mesh technique for an embedded-hybridized discontinuous Galerkin discretization for fluid-rigid body interaction , in review , 2022 . N. Yavich, N. Koshev, M. Malovichko, A. Razorenova and M. Fedorov, Conservative Finite Element Modeling of EEG and MEG on Unstructured Grids , IEEE Transactions on Medical Imaging , 41(3):647-656, 2022 . Q. Tang, L. Chacon, Tz. Kolev, J. N. Shadid and X.-Z. Tang, An adaptive scalable fully implicit algorithm based on stabilized finite element for reduced visco-resistive MHD , Journal of Computational Physics , (454) 110967, 2022 . Also available as arXiv:2106.00260 . J. A. Turner, J. Belak, N. Barton, M. Bement, N. Carlson, R. Carson, S. DeWitt, J.-L. Fattebert, N. Hodge, Z. Jibben, W. King, L. Levine, C. Newman, A. Plotkowski, B. Radhakrishnan, S. T. Reeve, M. Rolchigo, A. Sabau, S. Slattery, and B. Stump. ExaAM: Metal additive manufacturing simulation at the fidelity of the microstructure. The International Journal of High Performance Computing Applications , 36(1):13-39, 2022 . Tz. Kolev and W. Pazner, Conservative and accurate solution transfer between high-order and low-order refined finite element spaces , SIAM Journal on Scientific Computing , 44(1), A1-A27, 2022 . Also available as arXiv:2103.05283 . 2021 A. Abdelfattah, V. Barra, N. Beams, R. Bleile, J. Brown, J. Camier, R. Carson, N. Chalmers, V. Dobrev, Y. Dudouit, P. Fischer, A. Karakus, S. Kerkemeier, Tz. Kolev, Y. Lan, E. Merzari, M. Min, M. Phillips, T. Rathnayake, R. Rieben, T. Stitt, A. Tomboulides, S. Tomov, V. Tomov, A. Vargas, T. Warburton, K. Weiss, GPU Algorithms for Efficient Exascale Discretizations , Parallel Computing , 108, 102841, 2021 . W. Pazner and Tz. Kolev, Uniform subspace correction preconditioners for discontinuous Galerkin methods with hp -refinement , Communications on Applied Mathematics and Computation , 2021 . Also available as arXiv:2009.01287 . Tz. Kolev, P. Fischer, J. Brown, V. Dobrev, J. Dongarra, M. Min, M. Shephard, S. Tomov, T. Warburton, A. Abdelfattah, V. Barra, N. Beams, J.-S. Camier, N. Chalmers, Y. Dudouit, W. Pazner, C. Smith, K. Swirydowicz, J. Thompson and V. Tomov, Efficient Exascale Discretizations: High Order Finite Element Methods , The International Journal on High Performance Computing Applications , 35(6), 527-552, 2021 . V. Dobrev, P. Knupp, Tz. Kolev, K. Mittal, and V. Tomov, hr -adaptivity for nonconforming high-order meshes with the target matrix optimization paradigm , Engineering with Computers , 2021 . 
Also available as arXiv:2010.02166 . W. Pazner, Sparse invariant domain preserving discontinuous Galerkin methods with subcell convex limiting , Computer Methods in Applied Mechanics and Engineering , 382, 113876, 2021 . Also available as arXiv:2004.08503 . J. Yang, T. Dzanic, B. Petersen, J. Kudo, K. Mittal, V. Tomov, J.-S. Camier, T. Zhao, H. Zha, Tz. Kolev, R. Anderson, D. Faissol, Reinforcement Learning for Adaptive Mesh Refinement , in review , 2021 . D. Kalchev, P. Vassilevski, and U. Villa, Parallel Element-based Algebraic Multigrid for H(curl) and H(div) Problems Using the ParELAG Library , in review , 2021 . N. Whitman, T. Palmer, P. Greaney, S. Hosseini, D. Burkes, and D. Senor, Gray Phonon Transport Prediction of Thermal Conductivity in Lithium Aluminate with Higher-Order Finite Elements on Meshes with Curved Surfaces , Journal of Computational and Theoretical Transport , 2021 . H. Hajduk, Monolithic convex limiting in discontinuous Galerkin discretizations of hyperbolic conservation laws , Computers & Mathematics with Applications , (87) 120-138, 2021 . Also available as arXiv:2007.01212 . J. Nikl, I. G\u00f6thel, M. Kucha\u0159\u00edk, S. Weber, and M. Bussmann, Implicit reduced Vlasov-Fokker-Planck-Maxwell model based on high-order mixed elements , Journal of Computational Physics , (434) 110214, 2021 . D. Kalchev, P. Vassilevski, and U. Villa, On ParELAG's Parallel Element-based Algebraic Multigrid and its MFEM Miniapps for H(curl) and H(div) Problems: a report including lowest and next to the lowest order numerical results , LLNL Tech. Report , LLNL-TR-824455, 2021 . J. Brown, A. Abdelfattah, V. Barra, N. Beams, J. Camier, V. Dobrev, Y. Dudouit, L. Ghaffari, Tz. Kolev, D. Medina, W. Pazner, T. Ratnayaka, J. Thompson and S. Tomov, libCEED: Fast algebra for high-order element-based discretizations , The Journal of Open Source Software , 2021 . P. Knupp, Tz. Kolev, K. Mittal, V. Tomov, Adaptive Surface Fitting and Tangential Relaxation for High-Order Mesh Optimization . International Meshing Roundtable , 2021 . 2020 N. Beams, A. Abdelfattah, S. Tomov, J. Dongarra, T. Kolev, and Y. Dudouit, High-Order Finite Element Method using Standard and Device-Level Batch GEMM on GPUs , IEEE/ACM 11th ScalA Workshop , 53-60, 2020 . A. Barker and Tz. Kolev, Matrix-free preconditioning for high-order H(curl) discretizations , Numerical Linear Algebra with Applications , 28(2) e2348, 2020 . D. Kuzmin and M. Quezada de Luna, Entropy conservation property and entropy stabilization of high-order continuous Galerkin approximations to scalar conservation laws , Computers & Fluids , (213) 104742, 2020 . A. Sandu, V. Tomov, L. Cervena, and Tz. Kolev, Conservative High-Order Time Integration for Lagrangian Hydrodynamics , SIAM Journal on Scientific Computing , 43(1), A221-A241, 2020 . B. S. Southworth, M. Holec, and T. Haut. Diffusion synthetic acceleration for heterogeneous domains, compatible with voids , Nuclear Science and Engineering , 195(2), 119-136, 2020 . T. Haut, B. Southworth, P. Maginot, V. Tomov, Diffusion Synthetic Acceleration Preconditioning for Discontinuous Galerkin Discretizations of SN Transport on High-Order Curved Meshes , SIAM Journal on Scientific Computing , 42(5), B1271-B1301, 2020 . R. Anderson, J. Andrej, A. Barker, J. Bramwell, J.-S. Camier, J. Cerveny V. Dobrev, Y. Dudouit, A. Fisher, Tz. Kolev, W. Pazner, M. Stowell, V. Tomov, I. Akkerman, J. Dahm, D. Medina, and S. 
Zampini, MFEM: A Modular Finite Element Library , Computers & Mathematics with Applications , (81) 42-74, 2020 . Also available as arXiv:1911.09220 . R. Li and C. Zhang, Efficient Parallel Implementations of Sparse Triangular Solves for GPU Architectures , Proceedings of the 2020 SIAM Conference on Parallel Processing for Scientific Computing , 2020 . W. Pazner, Efficient low-order refined preconditioners for high-order matrix-free continuous and discontinuous Galerkin methods , SIAM Journal on Scientific Computing , 42(5), pp. A3055-A3083, 2020 . B. Yee, S. Olivier, T. Haut, M. Holec, V. Tomov, P. Maginot, A Quadratic Programming Flux Correction Method for High-Order DG Discretizations of SN Transport , Journal of Computational Physics , (419) 109696, 2020 . T. L. Horvath and S. Rhebergen, An exactly mass conserving space-time embedded-hybridized discontinuous Galerkin method for the Navier-Stokes equations on moving domains , Journal of Computational Physics , (417) 109577, 2020 . S. Rhebergen and G. N. Wells, An embedded-hybridized discontinuous Galerkin finite element method for the Stokes equations , Computer Methods in Applied Mechanics and Engineering , (358) 112619, 2020 . P. Bello-Maldonado, Tz. Kolev, R. Rieben, and V. Tomov, A Matrix-Free Hyperviscosity Formulation for High-Order ALE Hydrodynamics , Computers & Fluids , (205) 104577, 2020 . V. Dobrev, P. Knupp, Tz. Kolev, K. Mittal, R. Rieben, and V. Tomov, Simulation-Driven Optimization of High-Order Meshes in ALE Hydrodynamics , Computers & Fluids , (208) 104602, 2020 . H. Hajduk, D. Kuzmin, Tz. Kolev, V. Tomov, I. Tomas, and J. Shadid, Matrix-free subcell residual distribution for Bernstein finite elements: Monolithic limiting , Computers & Fluids , (200) 104451, 2020 . M. Franco, J.-S. Camier, J. Andrej, and W. Pazner, High-order matrix-free incompressible flow solvers with GPU acceleration and low-order refined preconditioners , Computers & Fluids , (203) 104541, 2020 . S. Friedhoff and B. S. Southworth, On \"Optimal\" h-independent convergence of Parareal and multigrid-reduction-in-time using Runge-Kutta time integration , Numerical Linear Algebra with Applications , e2301, 2020 . B. S. Southworth, A. A. Sivas, and S. Rhebergen, On fixed-point, Krylov, and 2x2 block preconditioners for nonsymmetric problems , SIAM Journal on Matrix Analysis and Applications , 41(2), pp. 871-900, 2020 . P. Fischer, M. Min, T. Rathnayake, S. Dutta, Tz. Kolev, V. Dobrev, J.S. Camier, M. Kronbichler, T. Warburton, K. Swirydowicz, and J. Brown, Scalability of High-Performance PDE Solvers , The International Journal on High Performance Computing Applications , 34(5), pp. 562-586, 2020 . G. Sosa Jones, J. J. Lee, and S. Rhebergen, A space-time hybridizable discontinuous Galerkin method for linear free-surface waves , Journal of Scientific Computing , (85) 61, 2020 . Also available as arXiv:1910.07315 Z. Peng, Q. Tang and X.-Z. Tang. An adaptive discontinuous Petrov-Galerkin method for the Grad-Shafranov equation , SIAM Journal on Scientific Computing , 42(5):B1227-B1249, 2020 . 2019 H. Hajduk, D. Kuzmin, Tz. Kolev, and R. Abgrall, Matrix-free subcell residual distribution for Bernstein finite elements: Low-order schemes and FCT , Comp. Meth. Appl. Mech. Eng. , (359) 112658, 2019 . K. Suzuki, M. Fujisawa, and M. Mikawa, Simulation Controlling Method for Generating Desired Water Caustics , 2019 International Conference on Cyberworlds (CW) , Kyoto, Japan, pp. 163-170, 2019 . D. White, Y. Choit, and J. 
Kudo, A dual mesh method with adaptivity for stress constrained topology optimization , Structural and Multidisciplinary Optimization , 61, pp. 749-762, 2019 . S. Watts, W. Arrighi, J. Kudo, D. A. Tortorelli, and D. A. White, Simple, accurate surrogate models of the elastic response of three-dimensional open truss micro-architectures with applications to multiscale topology design , Structural and Multidisciplinary Optimization , 60, pp. 1887-1920, 2019 . V. Dobrev, P. Knupp, Tz. Kolev, and V. Tomov, Towards Simulation-Driven Optimization of High-Order Meshes by the Target-Matrix Optimization Paradigm , 27th International Meshing Roundtable, Oct 1-8, 2018, Albuquerque , Lecture Notes in Computational Science and Engineering, 127, pp. 285-302, 2019 . J. Cerveny, V. Dobrev, and Tz. Kolev, Non-Conforming Mesh Refinement For High-Order Finite Elements , SIAM Journal on Scientific Computing , 41(4):C367-C392, 2019 . D. White, W. Arrighi, J. Kudo, and S. Watts, Multiscale topology optimization using neural network surrogate models , Comp. Meth. Appl. Mech. Eng. , 346, pp. 1118-1135, 2019 . V. A. Dobrev, T. V. Kolev, C. S. Lee, V. Z. Tomov, and P. S. Vassilevski, Algebraic Hybridization and Static Condensation with Application to Scalable H(div) Preconditioning , SIAM Journal on Scientific Computing , 41(3):B425-B447, 2019 . D. White, and A. Voronin, A computational study of symmetry and well-posedness of structural topology optimization , Structural and Multidisciplinary Optimization , 59(3), pp. 759-766, 2019 . T. Haut, P. Maginot, V. Tomov, B. Southworth, T. Brunner and T. Bailey, An Efficient Sweep-Based Solver for the SN Equations on High-Order Meshes , Nuclear Science and Engineering , 193(7):746-759, 2019 . V. Dobrev, P. Knupp, Tz. Kolev, K. Mittal, and V. Tomov, The Target-Matrix Optimization Paradigm For High-Order Meshes , SIAM Journal on Scientific Computing , 41(1):B50-B68, 2019 . K. L. A. Kirk, T. L. Horvath, A. Cesmelioglu, and S. Rhebergen, Analysis of a space-time hybridizable discontinuous Galerkin method for the advection-diffusion problem on time-dependent domains , SIAM Journal on Numerical Analysis , 57(4), pp. 1677-1696, 2019 . T. L. Horvath and S. Rhebergen, A locally conservative and energy-stable finite element method for the Navier-Stokes problem on time-dependent domains , International Journal for Numerical Methods in Fluids , 89(12):519-532, 2019 . R. Li, Y. Xi, L. Erlandson, and Y. Saad, The Eigenvalues Slicing Library (EVSL): Algorithms, Implementation, and Software , SIAM Journal on Scientific Computing , 41(4), pp. C393-C415, 2019 . 2018 H. Auten, The High Value of Open Source Software , Science & Technology Review , January/February 2018, pp. 5-11, 2018 . R. W. Anderson, V. A. Dobrev, Tz. V. Kolev, R. N. Rieben, and V. Z. Tomov, High-Order Multi-Material ALE Hydrodynamics , SIAM Journal on Scientific Computing , 40(1), pp. B32-B58, 2018 . A. T. Barker, V. Dobrev, J. Gopalakrishnan, and Tz. Kolev, A scalable preconditioner for a primal discontinuous Petrov-Galerkin method , SIAM Journal on Scientific Computing , 40(2), pp. A1187-A1203, 2018 . V. Dobrev, T. Kolev, D. Kuzmin, R. Rieben, and V. Tomov, Sequential limiting in continuous and discontinuous Galerkin methods for the Euler equations , Journal of Computational Physics , 356, pp. 372-390, 2018 . M. Reberol and B. L\u00e9vy, Computing the Distance between Two Finite Element Solutions Defined on Different 3D Meshes on a GPU , SIAM Journal on Scientific Computing , 40(1), pp. C131-C155, 2018 . A. Mazuyer, P. 
Cupillard, R. Giot, M. Conin, Y. Leroy, and P. Thore, Stress estimation in reservoirs using an integrated inverse method , Computers & Geosciences , 114, pp. 30-40, 2018 . J. Gopalakrishnan, M. Neum\u00fcller, and P. Vassilevski, The auxiliary space preconditioner for the de Rham complex , SIAM Journal on Numerical Analysis , 56(6), pp. 3196-3218, 2018 . D. A. White, M. Stowell, and D. A. Tortorelli, Topological optimization of structures using Fourier representations , Structural and Multidisciplinary Optimization , pp. 1-16, 2018 . S. Rhebergen and G. N. Wells, Preconditioning of a hybridized discontinuous Galerkin finite element method for the Stokes equations , Journal of Scientific Computing , 77(3), pp. 1936-1501, 2018 . T. S. Haut, P. G. Maginot, V. Z. Tomov, T. A. Brunner, and T. S. Bailey, An Efficient Sweep-based Solver for the $S_N$ Equations on High-Order Meshes , American Nuclear Society 2018 Annual Meeting, June 14-21, Philadelphia, PA , 2018 . A. S\u00e1nchez-Villar and M. Merino, Advances in Wave-Plasma Modelling in ECR Thrusters , 2018 Space Propulsion Conference, May 14-18, Seville, Spain , 2018 . 2017 S. Osborn, P. S. Vassilevski, and U. Villa, A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields , SIAM Journal on Scientific Computing , 39(5), pp. S543-S562, 2017 . R. D. Falgout, T. A. Manteuffel, B. O'Neill, and J. B. Schroder, Multigrid Reduction In Time For Nonlinear Parabolic Problems: A Case Study , SIAM Journal on Scientific Computing , 39(5), pp. S298-S322, 2017 . T. A. Manteuffel, L. N. Olson, J. B. Schroder, and B. S. Southworth, A Root-Node Based Algebraic Multigrid Method , SIAM Journal on Scientific Computing , 39(5), pp. S723-S756, 2017 . A. T. Barker, C. S. Lee, and P. S. Vassilevski, Spectral Upscaling for Graph Laplacian Problems with Application to Reservoir Simulation , SIAM Journal on Scientific Computing , 39(5), pp. S323-S346, 2017 . V. A. Dobrev, Tz. Kolev, N. A. Peterson, and J. B. Schroder, Two-level Convergence Theory For Multigrid Reduction In Time (MGRIT) , SIAM Journal on Scientific Computing , 39(5), pp. S501-S527, 2017 . R. E. Bank, P. S. Vassilevski, and L. T. Zikatanov, Arbitrary Dimension Convection-Diffusion Schemes For Space-Time Discretizations , Journal of Computational and Applied Mathematics , 310, pp. 19-31, 2017 . S. Osborn, P. Zulian, T. Benson, U. Villa, R. Krause, and P. S. Vassilevski, Scalable hierarchical PDE sampler for generating spatially correlated random fields using non-matching meshes , Numerical Linear Algebra with Applications , 25, pp. e2146, 2017 . J. H. Adler, I. Lashuk, and S. P. MacLachlan, Composite-grid multigrid for diffusion on the sphere , Numerical Linear Algebra with Applications , 25(1), pp. e2115, 2017 . S. Zampini, P. S. Vassilevski, V. Dobrev, and T. Kolev, Balancing Domain Decomposition by Constraints Algorithms for Curl-conforming Spaces of Arbitrary Order , Domain Decomposition Methods in Science and Engineering XXIV , 2017 . M. Larsen, J. Ahrens, U. Ayachit, E. Brugger, H. Childs, B. Geveci, and C. Harrison, The ALPINE In Situ Infrastructure: Ascending from the Ashes of Strawman , ISAV 2017: In Situ Infrastructures for Enabling Extreme-scale Analysis and Visualization , 2017 . J. Wright and S. Shiraiwa, Antenna to Core: A New Approach to RF Modelling , 22 Topical Conference on Radio-Frequency Power in Plasmas , 2017 . S. Shiraiwa, J. C. Wright, P. T. Bonoli, Tz. Kolev, and M. 
Stowell, RF wave simulation for cold edge plasmas using the MFEM library , 22 Topical Conference on Radio-Frequency Power in Plasmas , 2017 . C. Hofer, U. Langer, M. Neum\u00fcller, and I. Toulopoulos, Time-Multipatch Discontinuous Galerkin Space-Time Isogeometric Analysis of Parabolic Evolution Problems , RICAM-Report 2017-26 , 2017 . J. Billings, A. McCaskey, G. Vallee, and G. Watson, Will humans even write code in 2040 and what would that mean for extreme heterogeneity in computing? , arXiv:1712.00676 , 2017 . M. L. C. Christensen, U. Villa, A. Engsig-Karup, and P. S. Vassilevski, Numerical Multilevel Upscaling For Incompressible Flow in Reservoir Simulation: An Element-Based Algebraic Multigrid (AMGe) Approach , SIAM Journal on Scientific Computing , 39(1), pp. B102-B137, 2017 . R. Anderson, V. Dobrev, Tz. Kolev, D. Kuzmin, M. Q. de Luna, R. Rieben, and V. Tomov, High-order local maximum principle preserving (MPP) discontinuous Galerkin finite element method for the transport equation , Journal of Computational Physics , 334, pp. 102-124, 2017 . R. Li and Y. Saad, Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners , SIAM Journal on Matrix Analysis and Applications , 38(3), pp. 807-828, 2017 . 2016 D. Z. Kalchev, C. S. Lee, U. Villa, Y. Efendiev, and P. S. Vassilevski, Upscaling of Mixed Finite Element Discretization Problems by the Spectral AMGe Method , SIAM Journal on Scientific Computing , 38(5), pp. A2912-A2933, 2016 . V. A. Dobrev, Tz. V. Kolev, R. N. Rieben, and V. Z. Tomov, Multi-material closure model for high-order finite element Lagrangian hydrodynamics , International Journal for Numerical Methods in Fluids , 82(10), pp. 689-706, 2016 . J. Guermond, B. Popov, and V. Tomov, Entropy-viscosity method for the single material Euler equations in Lagrangian frame , Computer Methods in Applied Mechanics and Engineering , 300, pp. 402-426, 2016 . M. Holec, J. Limpouch, R. Liska, and S. Weber, High-order discontinuous Galerkin nonlocal transport and energy equations scheme for radiation hydrodynamics , International Journal for Numerical Methods in Fluids , 83(10), pp. 779-797, 2016 . Tz. V. Kolev, J. Xu, and Y. Zhu, Multilevel Preconditioners for Reaction-Diffusion Problems with Discontinuous Coefficients , Journal of Scientific Computing , 67(1), pp. 324-350, 2016 . M. Reberol and B. L\u00e9vy, Low-order continuous finite element spaces on hybrid non-conforming hexahedral-tetrahedral meshes , CoRR , abs/1605.02626, 2016 . O. Marques, A. Druinsky, X. S. Li, A. T. Barker, P. Vassilevski, and D. Kalchev, Tuning the Coarse Space Construction in a Spectral AMG Solver , Procedia Computer Science , 80, pp. 212-221, International Conference on Computational Science 2016, ICCS 2016, 6-8 June 2016, San Diego, California, USA, 2016 . J. S. Yeom, J. J. Thiagarajan, A. Bhatele, G. Bronevetsky, and T. Kolev, Data-Driven Performance Modeling of Linear Solvers for Sparse Matrices , 2016 7th International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS) , 2016 . 2015 and earlier D. Osei-Kuffuor, R. Li, and Y. Saad, Matrix Reordering Using Multilevel Graph Coarsening for ILU Preconditioning , SIAM Journal on Scientific Computing , 37(1), pp. A391-A419, 2015 . R. Anderson, V. Dobrev, Tz. Kolev, and R. Rieben, Monotonicity in high-order curvilinear finite element ALE remap , Int. J. Numer. Meth. Fluids , 77(5), pp. 249-273, 2014 . V. Dobrev, Tz. Kolev, and R. 
Rieben, High-order curvilinear finite element methods for elastic-plastic Lagrangian dynamics , J. Comp. Phys. , (257B), pp. 1062-1080, 2014 . P. Vassilevski and U. Villa, A mixed formulation for the Brinkman problem , SIAM Journal on Numerical Analysis , 52-1, pp. 258-281, 2014 . J. H. Adler and P. S. Vassilevski, Error Analysis for Constrained First-Order System Least-Squares Finite-Element Methods , SIAM Journal on Scientific Computing , 36(3), pp. A1071-A1088, 2014 . A. Aposporidis, P. S. Vassilevski, and A. Veneziani, Multigrid preconditioning of the non-regularized augmented Bingham fluid problem , ETNA. Electronic Transactions on Numerical Analysis , 41, 2014 . P. S. Vassilevski and U. M. Yang, Reducing communication in algebraic multigrid using additive variants , Numerical Linear Algebra with Applications , 21(2), pp. 275-296, 2014 . T. Dong, V. Dobrev, T. Kolev, R. Rieben, S. Tomov, and J. Dongarra, A Step towards Energy Efficient Computing: Redesigning a Hydrodynamic Application on CPU-GPU , 2014 IEEE 28th International Parallel and Distributed Processing Symposium , May 2014 . P. Vassilevski and U. Villa, A block-diagonal algebraic multigrid preconditioner for the Brinkman problem , SIAM Journal on Scientific Computing , 35-5, pp. S3-S17, 2013 . V. Dobrev, T. Ellis, Tz. Kolev, and R. Rieben, High-order curvilinear finite elements for axisymmetric Lagrangian hydrodynamics , Computers & Fluids , pp. 58-69, 2013 . D. Kalchev, C. Ketelsen, and P. S. Vassilevski, Two-level adaptive algebraic multigrid for sequence of problems with slowly varying random coefficients , SIAM Journal on Scientific Computing , 35(6), pp. B1215-B1234, 2013 . P. D'Ambra and P. S. Vassilevski, Adaptive AMG with coarsening based on compatible weighted matching , Computing and Visualization in Science , 16(2), pp. 59-76, 2013 . T. A. Brunner, T. V. Kolev, T. S. Bailey, and A. T. Till, Preserving Spherical Symmetry in Axisymmetric Coordinates for Diffusion , International Conference on Mathematics and Computational Methods Applied to Nuclear Science & Engineering , 2013 . Tz. Kolev and P. Vassilevski, Parallel auxiliary space AMG solver for H(div) problems , SIAM Journal on Scientific Computing , 34, pp. A3079-A3098, 2012 . V. Dobrev, Tz. Kolev, and R. Rieben, High-order curvilinear finite element methods for Lagrangian hydrodynamics , SIAM Journal on Scientific Computing , 34, pp. B606-B641, 2012 . I. Lashuk and P. Vassilevski, Element agglomeration coarse Raviart-Thomas spaces with improved approximation properties , Numerical Linear Algebra with Applications , 19, pp. 414-426, 2012 . D. Kalchev, Adaptive algebraic multigrid for finite element elliptic equations with random coefficients , LLNL Tech. Report , LLNL-TR-553254, 2012 . A. Aposporidis, P. Vassilevski, and A. Veneziani, A geometric nonlinear AMLI preconditioner for the Bingham fluid flow in mixed variables , LLNL Tech. Report , LLNL-JRNL-600372, 2012 . P. Knupp, Introducing the target-matrix paradigm for mesh optimization by node movement , Engineering with Computers , 28(4), pp. 419-429, 2012 . T. A. Brunner, Mulard: A Multigroup Thermal Radiation Diffusion Mini-Application , DOE Exascale Research Conference, Portland, Oregon , 2012 . A. Baker, R. Falgout, T. Kolev, and U. Yang, Multigrid smoothers for ultra-parallel computing , SIAM Journal on Scientific Computing , 33(5), pp. 2864-2887, 2011 . V. Dobrev, T. Ellis, Tz. Kolev, and R. Rieben, Curvilinear finite elements for Lagrangian hydrodynamics , Int. J. Numer. Meth. Fluids , 65, pp. 
1295-1310, 2011 . V. Dobrev, J.-L. Guermond, and B. Popov, Surface reconstruction and image enhancement via L1-minimization , SIAM Journal on Scientific Computing , 32 (3), pp. 1591-1616, 2010 . J. Brannick and R. Falgout, Compatible relaxation and coarsening in algebraic multigrid , SIAM Journal on Scientific Computing , 32, pp. 1393-1416, 2010 . A. Baker, Tz. Kolev, and U. M. Yang, Improving algebraic multigrid interpolation operators for linear elasticity problems , Numerical Linear Algebra with Applications , 17, pp. 495-517, 2010 . U. M. Yang, On long-range interpolation operators for aggressive coarsening , Numerical Linear Algebra with Applications , 17, pp. 453-472, 2010 . Tz. Kolev and P. Vassilevski, Parallel auxiliary space AMG for H(curl) problems , Journal of Computational Mathematics , 27, pp. 604-623, 2009 . Tz. V. Kolev and R. N. Rieben, A tensor artificial viscosity using a finite element approach , Journal of Computational Physics , 228(22), pp. 8336 - 8366, 2009 . A. Baker, E. Jessup, and Tz. Kolev, A simple strategy for varying the restart parameter in GMRES(m) , J. Comp. Appl. Math. , 230, pp. 751-761, 2009 . Tz. Kolev, J. Pasciak, and P. Vassilevski, H(curl) auxiliary mesh preconditioning , Numerical Linear Algebra with Applications , 15, pp. 455-471, 2008 . H. De Sterck, R. Falgout, J. Nolting, and U. M. Yang, Distance-two interpolation for parallel algebraic multigrid , Numerical Linear Algebra with Applications , 15, pp. 115-139, 2008 . V. Dobrev, R. Lazarov, and L. Zikatanov, Preconditioning of symmetric interior penalty discontinuous Galerkin FEM for second order elliptic problems , in Domain Decomposition Methods in Science and Engineering XVII, Lecture Notes in Computational Science and Engineering, vol. 60, U. Langer et al. eds, Springer-Verlag, Berlin, Heidelberg, pp. 33-44, 2008 . D. Alber and L. Olson, Parallel coarse grid selection , Numerical Linear Algebra with Applications , 14, pp. 611-643, 2007 . V. Dobrev, R. Lazarov, P. Vassilevski, and L. Zikatanov, Two-level preconditioning of discontinuous Galerkin approximations of second-order elliptic equations , Numerical Linear Algebra with Applications , 13 (9), pp. 753-770, 2006 . Tz. Kolev and P. Vassilevski, AMG by element agglomeration and constrained energy minimization interpolation , Numerical Linear Algebra with Applications , 13, pp. 771-788, 2006 . J. Bramble, Tz. Kolev, and J. Pasciak, A least-squares approximation method for the time-harmonic Maxwell equations , Journal of Numerical Mathematics , 13(4), pp. 237-263, 2005 . P. Vassilevski, Sparse matrix element topology with application to AMG(e) and preconditioning , Numerical Linear Algebra with Applications , 9, pp. 429-444, 2002 . MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Publications"}, {"location": "publications/#publications", "text": "", "title": "Publications"}, {"location": "publications/#google-scholar-citations", "text": "Recent All time", "title": "Google Scholar Citations"}, {"location": "publications/#selected-publications", "text": "", "title": "Selected Publications"}, {"location": "publications/#2024", "text": "T. Dzanic, K. Mittal, D. Kim, J. Yang, S. Petrides, B. Keith, R. Anderson, DynAMO: Multi-agent reinforcement learning for dynamic anticipatory mesh optimization with applications to hyperbolic conservation laws , Journal of Computational Physics , 506, 112924, 2024 K. Mittal, V. Dobrev, P. Knupp, T. Kolev, F. Ledoux, C. Roche, V. 
Tomov, Mixed-Order Meshes through rp-adaptivity for Surface Fitting to Implicit Geometries , Proceedings of the 2024 SIAM International Meshing Roundtable (IMR) . 2024 . T. Stitt, K. Belcher, A. Campos, T. Kolev, P. Mocz, R. Rieben, A. Skinner, V. Tomov, A. Vargas, K. Weiss, Performance portable GPU acceleration of a high-order finite element multiphysics application , Journal of Fluids Engineering , 146(4):041102, 2024 . V. Dobrev, P. Knupp, T. Kolev, K. Mittal, R. Rieben, M. Stees, V. Tomov, Asymptotic Analysis of Compound Volume+ Shape Metrics for Mesh Optimization , Proceedings of the 2024 SIAM International Meshing Roundtable (IMR) . 2024 . W. Pazner, Tz. Kolev, P. Vassilevski, Matrix-free high-performance saddle-point solvers for high-order problems in H(div) , SIAM Journal on Scientific Computing , 2024 . Also available as arXiv:2304.12387 . G. Fu, S. Osher, W. Pazner, and W. Li. Generalized optimal transport and mean field control problems for reaction-diffusion systems with high-order finite element computation , Journal of Computational Physics , 2024 . Also available as arXiv:2306.06287 . J. Andrej, N. Atallah, J.-P. B\u00e4cker, J. Camier, D. Copeland, V. Dobrev, Y. Dudouit, T. Duswald, B. Keith, D. Kim, Tz. Kolev, B. Lazarov, K. Mittal, W. Pazner, S. Petrides, S. Shiraiwa, M. Stowell, V. Tomov. High-performance finite elements with MFEM , accepted for publication in the International Journal of High Performance Computing Applications, 2024 . Also available as arXiv:2402.15940 . A. Gillette, B. Keith, S. Petrides, Learning robust marking policies for adaptive mesh refinement , SIAM Journal on Scientific Computing , 2024 . Also available as arXiv:2207.06339 . T. Duswald, B. Keith, B. Lazarov, S. Petrides, B. Wohlmuth, Finite elements for Mat\u00e9rn-type random fields: Uncertainty in computational mechanics and design optimization (in-review). Also available as arXiv:2403.03658", "title": "2024"}, {"location": "publications/#2023", "text": "J. Vedral, Dissipative WENO stabilization of high-order discontinuous Galerkin methods for hyperbolic problems , in review . D. Kuzmin, H. Hajduk, Property-Preserving Numerical Schemes for Conservation Laws , World Scientific , 2023 D. Kuzmin, J. Vedral, Dissipation-based WENO stabilization of high-order finite element methods for scalar conservation laws , Journal of Computational Physics , 487, 112153, 2023 B. Keith, T.M. Surowiec, Proximal Galerkin: A structure-preserving finite element method for pointwise bound constraints , 2023 . R. Bollapragada, C. Karamanli, B. Keith, B. Lazarov, S. Petrides, J. Wang, An Adaptive Sampling Augmented Lagrangian Method for Stochastic Optimization with Deterministic Constraints , Computers & Mathematics with Applications , 2023 . Also available as arXiv:2305.01018 . J. Yang, K. Mittal, T. Dzanic, S. Petrides, B. Keith, B. Petersen, D. Faissol, R. Anderson, Multi-Agent Reinforcement Learning for Adaptive Mesh Refinement , Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems , 2023 . J. Yang, T. Dzanic, B. Petersen, J. Kudo, K. Mittal, V. Tomov, J.S. Camier, T. Zhao, H. Zha, T. Kolev, R. Anderson, Reinforcement learning for adaptive mesh refinement , Proceedings of the International Conference on Artificial Intelligence and Statistics , 2023 . W. Pazner, Tz. Kolev, and J. Camier, End-to-end GPU acceleration of low-order-refined preconditioning for high-order finite element discretizations , The International Journal of High Performance Computing Applications , 2023 . 
Also available as arXiv:2210.12253 . W. Pazner, Tz. Kolev, and C. Dohrmann, Low-order preconditioning for the high-order finite element de Rham complex , SIAM Journal on Scientific Computing , 2023 . Also available as arXiv:2203.02465 . J. Barrera, Tz. Kolev, K. Mittal, and V. Tomov, High-Order Mesh Morphing for Boundary and Interface Fitting to Implicit Geometries , Computer-Aided Design , 158, 103499, 2023 . Also available as arXiv:2208.05062 . J. Camier, V. Dobrev, P. Knupp, Tz. Kolev, K. Mittal, R. Rieben, and V. Tomov, Accelerating high-order mesh optimization using finite element partial assembly on GPUs , Journal of Computational Physics , 474, 111808, 2023 . Also available as arXiv:2205.12721 . F. G\u00f3mez-Lozada, C. Andr\u00e9s del Valle, J. D. Jim\u00e9nez-Paz, B. S. Lazarov and J. Galvis, Modelling and simulation of brinicle formation , Royal Society Open Science , 10, 10, 230268, 2023 .", "title": "2023"}, {"location": "publications/#2022", "text": "D. Kuzmin, J.-P. B\u00e4cker, An unfitted finite element method using level set functions for extrapolation into deformable diffuse interfaces , Journal of Computational Physics , 461, 111218, 2022 A. Vargas, T. Stitt, K. Weiss, V. Tomov, J. Camier, Tz. Kolev, and R. Rieben, Matrix-free approaches for GPU acceleration of a high-order finite element hydrodynamics application using MFEM, Umpire, and RAJA , The International Journal of High Performance Computing Applications , 36(4):492-509, 2022 . Also available as arXiv:2112.07075 . J. Nikl, M. Kucha\u0159\u00edk, and S. Weber, High-Order Curvilinear Finite Element Magneto-Hydrodynamics I: A Conservative Lagrangian Scheme , Journal of Computational Physics , 464, 111158, 2022 . Also available as arXiv:2110.11669 . T. L. Horvath and S. Rhebergen, A conforming sliding mesh technique for an embedded-hybridized discontinuous Galerkin discretization for fluid-rigid body interaction , in review , 2022 . N. Yavich, N. Koshev, M. Malovichko, A. Razorenova and M. Fedorov, Conservative Finite Element Modeling of EEG and MEG on Unstructured Grids , IEEE Transactions on Medical Imaging , 41(3):647-656, 2022 . Q. Tang, L. Chacon, Tz. Kolev, J. N. Shadid and X.-Z. Tang, An adaptive scalable fully implicit algorithm based on stabilized finite element for reduced visco-resistive MHD , Journal of Computational Physics , (454) 110967, 2022 . Also available as arXiv:2106.00260 . J. A. Turner, J. Belak, N. Barton, M. Bement, N. Carlson, R. Carson, S. DeWitt, J.-L. Fattebert, N. Hodge, Z. Jibben, W. King, L. Levine, C. Newman, A. Plotkowski, B. Radhakrishnan, S. T. Reeve, M. Rolchigo, A. Sabau, S. Slattery, and B. Stump. ExaAM: Metal additive manufacturing simulation at the fidelity of the microstructure. The International Journal of High Performance Computing Applications , 36(1):13-39, 2022 . Tz. Kolev and W. Pazner, Conservative and accurate solution transfer between high-order and low-order refined finite element spaces , SIAM Journal on Scientific Computing , 44(1), A1-A27, 2022 . Also available as arXiv:2103.05283 .", "title": "2022"}, {"location": "publications/#2021", "text": "A. Abdelfattah, V. Barra, N. Beams, R. Bleile, J. Brown, J. Camier, R. Carson, N. Chalmers, V. Dobrev, Y. Dudouit, P. Fischer, A. Karakus, S. Kerkemeier, Tz. Kolev, Y. Lan, E. Merzari, M. Min, M. Phillips, T. Rathnayake, R. Rieben, T. Stitt, A. Tomboulides, S. Tomov, V. Tomov, A. Vargas, T. Warburton, K. Weiss, GPU Algorithms for Efficient Exascale Discretizations , Parallel Computing , 108, 102841, 2021 . W. 
Pazner and Tz. Kolev, Uniform subspace correction preconditioners for discontinuous Galerkin methods with hp -refinement , Communications on Applied Mathematics and Computation , 2021 . Also available as arXiv:2009.01287 . Tz. Kolev, P. Fischer, J. Brown, V. Dobrev, J. Dongarra, M. Min, M. Shephard, S. Tomov, T. Warburton, A. Abdelfattah, V. Barra, N. Beams, J.-S. Camier, N. Chalmers, Y. Dudouit, W. Pazner, C. Smith, K. Swirydowicz, J. Thompson and V. Tomov, Efficient Exascale Discretizations: High Order Finite Element Methods , The International Journal on High Performance Computing Applications , 35(6), 527-552, 2021 . V. Dobrev, P. Knupp, Tz. Kolev, K. Mittal, and V. Tomov, hr -adaptivity for nonconforming high-order meshes with the target matrix optimization paradigm , Engineering with Computers , 2021 . Also available as arXiv:2010.02166 . W. Pazner, Sparse invariant domain preserving discontinuous Galerkin methods with subcell convex limiting , Computer Methods in Applied Mechanics and Engineering , 382, 113876, 2021 . Also available as arXiv:2004.08503 . J. Yang, T. Dzanic, B. Petersen, J. Kudo, K. Mittal, V. Tomov, J.-S. Camier, T. Zhao, H. Zha, Tz. Kolev, R. Anderson, D. Faissol, Reinforcement Learning for Adaptive Mesh Refinement , in review , 2021 . D. Kalchev, P. Vassilevski, and U. Villa, Parallel Element-based Algebraic Multigrid for H(curl) and H(div) Problems Using the ParELAG Library , in review , 2021 . N. Whitman, T. Palmer, P. Greaney, S. Hosseini, D. Burkes, and D. Senor, Gray Phonon Transport Prediction of Thermal Conductivity in Lithium Aluminate with Higher-Order Finite Elements on Meshes with Curved Surfaces , Journal of Computational and Theoretical Transport , 2021 . H. Hajduk, Monolithic convex limiting in discontinuous Galerkin discretizations of hyperbolic conservation laws , Computers & Mathematics with Applications , (87) 120-138, 2021 . Also available as arXiv:2007.01212 . J. Nikl, I. G\u00f6thel, M. Kucha\u0159\u00edk, S. Weber, and M. Bussmann, Implicit reduced Vlasov-Fokker-Planck-Maxwell model based on high-order mixed elements , Journal of Computational Physics , (434) 110214, 2021 . D. Kalchev, P. Vassilevski, and U. Villa, On ParELAG's Parallel Element-based Algebraic Multigrid and its MFEM Miniapps for H(curl) and H(div) Problems: a report including lowest and next to the lowest order numerical results , LLNL Tech. Report , LLNL-TR-824455, 2021 . J. Brown, A. Abdelfattah, V. Barra, N. Beams, J. Camier, V. Dobrev, Y. Dudouit, L. Ghaffari, Tz. Kolev, D. Medina, W. Pazner, T. Ratnayaka, J. Thompson and S. Tomov, libCEED: Fast algebra for high-order element-based discretizations , The Journal of Open Source Software , 2021 . P. Knupp, Tz. Kolev, K. Mittal, V. Tomov, Adaptive Surface Fitting and Tangential Relaxation for High-Order Mesh Optimization . International Meshing Roundtable , 2021 .", "title": "2021"}, {"location": "publications/#2020", "text": "N. Beams, A. Abdelfattah, S. Tomov, J. Dongarra, T. Kolev, and Y. Dudouit, High-Order Finite Element Method using Standard and Device-Level Batch GEMM on GPUs , IEEE/ACM 11th ScalA Workshop , 53-60, 2020 . A. Barker and Tz. Kolev, Matrix-free preconditioning for high-order H(curl) discretizations , Numerical Linear Algebra with Applications , 28(2) e2348, 2020 . D. Kuzmin and M. Quezada de Luna, Entropy conservation property and entropy stabilization of high-order continuous Galerkin approximations to scalar conservation laws , Computers & Fluids , (213) 104742, 2020 . A. Sandu, V. Tomov, L. 
Cervena, and Tz. Kolev, Conservative High-Order Time Integration for Lagrangian Hydrodynamics , SIAM Journal on Scientific Computing , 43(1), A221-A241, 2020 . B. S. Southworth, M. Holec, and T. Haut. Diffusion synthetic acceleration for heterogeneous domains, compatible with voids , Nuclear Science and Engineering , 195(2), 119-136, 2020 . T. Haut, B. Southworth, P. Maginot, V. Tomov, Diffusion Synthetic Acceleration Preconditioning for Discontinuous Galerkin Discretizations of SN Transport on High-Order Curved Meshes , SIAM Journal on Scientific Computing , 42(5), B1271-B1301, 2020 . R. Anderson, J. Andrej, A. Barker, J. Bramwell, J.-S. Camier, J. Cerveny V. Dobrev, Y. Dudouit, A. Fisher, Tz. Kolev, W. Pazner, M. Stowell, V. Tomov, I. Akkerman, J. Dahm, D. Medina, and S. Zampini, MFEM: A Modular Finite Element Library , Computers & Mathematics with Applications , (81) 42-74, 2020 . Also available as arXiv:1911.09220 . R. Li and C. Zhang, Efficient Parallel Implementations of Sparse Triangular Solves for GPU Architectures , Proceedings of the 2020 SIAM Conference on Parallel Processing for Scientific Computing , 2020 . W. Pazner, Efficient low-order refined preconditioners for high-order matrix-free continuous and discontinuous Galerkin methods , SIAM Journal on Scientific Computing , 42(5), pp. A3055-A3083, 2020 . B. Yee, S. Olivier, T. Haut, M. Holec, V. Tomov, P. Maginot, A Quadratic Programming Flux Correction Method for High-Order DG Discretizations of SN Transport , Journal of Computational Physics , (419) 109696, 2020 . T. L. Horvath and S. Rhebergen, An exactly mass conserving space-time embedded-hybridized discontinuous Galerkin method for the Navier-Stokes equations on moving domains , Journal of Computational Physics , (417) 109577, 2020 . S. Rhebergen and G. N. Wells, An embedded-hybridized discontinuous Galerkin finite element method for the Stokes equations , Computer Methods in Applied Mechanics and Engineering , (358) 112619, 2020 . P. Bello-Maldonado, Tz. Kolev, R. Rieben, and V. Tomov, A Matrix-Free Hyperviscosity Formulation for High-Order ALE Hydrodynamics , Computers & Fluids , (205) 104577, 2020 . V. Dobrev, P. Knupp, Tz. Kolev, K. Mittal, R. Rieben, and V. Tomov, Simulation-Driven Optimization of High-Order Meshes in ALE Hydrodynamics , Computers & Fluids , (208) 104602, 2020 . H. Hajduk, D. Kuzmin, Tz. Kolev, V. Tomov, I. Tomas, and J. Shadid, Matrix-free subcell residual distribution for Bernstein finite elements: Monolithic limiting , Computers & Fluids , (200) 104451, 2020 . M. Franco, J.-S. Camier, J. Andrej, and W. Pazner, High-order matrix-free incompressible flow solvers with GPU acceleration and low-order refined preconditioners , Computers & Fluids , (203) 104541, 2020 . S. Friedhoff and B. S. Southworth, On \"Optimal\" h-independent convergence of Parareal and multigrid-reduction-in-time using Runge-Kutta time integration , Numerical Linear Algebra with Applications , e2301, 2020 . B. S. Southworth, A. A. Sivas, and S. Rhebergen, On fixed-point, Krylov, and 2x2 block preconditioners for nonsymmetric problems , SIAM Journal on Matrix Analysis and Applications , 41(2), pp. 871-900, 2020 . P. Fischer, M. Min, T. Rathnayake, S. Dutta, Tz. Kolev, V. Dobrev, J.S. Camier, M. Kronbichler, T. Warburton, K. Swirydowicz, and J. Brown, Scalability of High-Performance PDE Solvers , The International Journal on High Performance Computing Applications , 34(5), pp. 562-586, 2020 . G. Sosa Jones, J. J. Lee, and S. 
Rhebergen, A space-time hybridizable discontinuous Galerkin method for linear free-surface waves , Journal of Scientific Computing , (85) 61, 2020 . Also available as arXiv:1910.07315 Z. Peng, Q. Tang and X.-Z. Tang. An adaptive discontinuous Petrov-Galerkin method for the Grad-Shafranov equation , SIAM Journal on Scientific Computing , 42(5):B1227-B1249, 2020 .", "title": "2020"}, {"location": "publications/#2019", "text": "H. Hajduk, D. Kuzmin, Tz. Kolev, and R. Abgrall, Matrix-free subcell residual distribution for Bernstein finite elements: Low-order schemes and FCT , Comp. Meth. Appl. Mech. Eng. , (359) 112658, 2019 . K. Suzuki, M. Fujisawa, and M. Mikawa, Simulation Controlling Method for Generating Desired Water Caustics , 2019 International Conference on Cyberworlds (CW) , Kyoto, Japan, pp. 163-170, 2019 . D. White, Y. Choit, and J. Kudo, A dual mesh method with adaptivity for stress constrained topology optimization , Structural and Multidisciplinary Optimization , 61, pp. 749-762, 2019 . S. Watts, W. Arrighi, J. Kudo, D. A. Tortorelli, and D. A. White, Simple, accurate surrogate models of the elastic response of three-dimensional open truss micro-architectures with applications to multiscale topology design , Structural and Multidisciplinary Optimization , 60, pp. 1887-1920, 2019 . V. Dobrev, P. Knupp, Tz. Kolev, and V. Tomov, Towards Simulation-Driven Optimization of High-Order Meshes by the Target-Matrix Optimization Paradigm , 27th International Meshing Roundtable, Oct 1-8, 2018, Albuquerque , Lecture Notes in Computational Science and Engineering, 127, pp. 285-302, 2019 . J. Cerveny, V. Dobrev, and Tz. Kolev, Non-Conforming Mesh Refinement For High-Order Finite Elements , SIAM Journal on Scientific Computing , 41(4):C367-C392, 2019 . D. White, W. Arrighi, J. Kudo, and S. Watts, Multiscale topology optimization using neural network surrogate models , Comp. Meth. Appl. Mech. Eng. , 346, pp. 1118-1135, 2019 . V. A. Dobrev, T. V. Kolev, C. S. Lee, V. Z. Tomov, and P. S. Vassilevski, Algebraic Hybridization and Static Condensation with Application to Scalable H(div) Preconditioning , SIAM Journal on Scientific Computing , 41(3):B425-B447, 2019 . D. White, and A. Voronin, A computational study of symmetry and well-posedness of structural topology optimization , Structural and Multidisciplinary Optimization , 59(3), pp. 759-766, 2019 . T. Haut, P. Maginot, V. Tomov, B. Southworth, T. Brunner and T. Bailey, An Efficient Sweep-Based Solver for the SN Equations on High-Order Meshes , Nuclear Science and Engineering , 193(7):746-759, 2019 . V. Dobrev, P. Knupp, Tz. Kolev, K. Mittal, and V. Tomov, The Target-Matrix Optimization Paradigm For High-Order Meshes , SIAM Journal on Scientific Computing , 41(1):B50-B68, 2019 . K. L. A. Kirk, T. L. Horvath, A. Cesmelioglu, and S. Rhebergen, Analysis of a space-time hybridizable discontinuous Galerkin method for the advection-diffusion problem on time-dependent domains , SIAM Journal on Numerical Analysis , 57(4), pp. 1677-1696, 2019 . T. L. Horvath and S. Rhebergen, A locally conservative and energy-stable finite element method for the Navier-Stokes problem on time-dependent domains , International Journal for Numerical Methods in Fluids , 89(12):519-532, 2019 . R. Li, Y. Xi, L. Erlandson, and Y. Saad, The Eigenvalues Slicing Library (EVSL): Algorithms, Implementation, and Software , SIAM Journal on Scientific Computing , 41(4), pp. C393-C415, 2019 .", "title": "2019"}, {"location": "publications/#2018", "text": "H. 
Auten, The High Value of Open Source Software , Science & Technology Review , January/February 2018, pp. 5-11, 2018 . R. W. Anderson, V. A. Dobrev, Tz. V. Kolev, R. N. Rieben, and V. Z. Tomov, High-Order Multi-Material ALE Hydrodynamics , SIAM Journal on Scientific Computing , 40(1), pp. B32-B58, 2018 . A. T. Barker, V. Dobrev, J. Gopalakrishnan, and Tz. Kolev, A scalable preconditioner for a primal discontinuous Petrov-Galerkin method , SIAM Journal on Scientific Computing , 40(2), pp. A1187-A1203, 2018 . V. Dobrev, T. Kolev, D. Kuzmin, R. Rieben, and V. Tomov, Sequential limiting in continuous and discontinuous Galerkin methods for the Euler equations , Journal of Computational Physics , 356, pp. 372-390, 2018 . M. Reberol and B. L\u00e9vy, Computing the Distance between Two Finite Element Solutions Defined on Different 3D Meshes on a GPU , SIAM Journal on Scientific Computing , 40(1), pp. C131-C155, 2018 . A. Mazuyer, P. Cupillard, R. Giot, M. Conin, Y. Leroy, and P. Thore, Stress estimation in reservoirs using an integrated inverse method , Computers & Geosciences , 114, pp. 30-40, 2018 . J. Gopalakrishnan, M. Neum\u00fcller, and P. Vassilevski, The auxiliary space preconditioner for the de Rham complex , SIAM Journal on Numerical Analysis , 56(6), pp. 3196-3218, 2018 . D. A. White, M. Stowell, and D. A. Tortorelli, Topological optimization of structures using Fourier representations , Structural and Multidisciplinary Optimization , pp. 1-16, 2018 . S. Rhebergen and G. N. Wells, Preconditioning of a hybridized discontinuous Galerkin finite element method for the Stokes equations , Journal of Scientific Computing , 77(3), pp. 1936-1501, 2018 . T. S. Haut, P. G. Maginot, V. Z. Tomov, T. A. Brunner, and T. S. Bailey, An Efficient Sweep-based Solver for the $S_N$ Equations on High-Order Meshes , American Nuclear Society 2018 Annual Meeting, June 14-21, Philadelphia, PA , 2018 . A. S\u00e1nchez-Villar and M. Merino, Advances in Wave-Plasma Modelling in ECR Thrusters , 2018 Space Propulsion Conference, May 14-18, Seville, Spain , 2018 .", "title": "2018"}, {"location": "publications/#2017", "text": "S. Osborn, P. S. Vassilevski, and U. Villa, A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields , SIAM Journal on Scientific Computing , 39(5), pp. S543-S562, 2017 . R. D. Falgout, T. A. Manteuffel, B. O'Neill, and J. B. Schroder, Multigrid Reduction In Time For Nonlinear Parabolic Problems: A Case Study , SIAM Journal on Scientific Computing , 39(5), pp. S298-S322, 2017 . T. A. Manteuffel, L. N. Olson, J. B. Schroder, and B. S. Southworth, A Root-Node Based Algebraic Multigrid Method , SIAM Journal on Scientific Computing , 39(5), pp. S723-S756, 2017 . A. T. Barker, C. S. Lee, and P. S. Vassilevski, Spectral Upscaling for Graph Laplacian Problems with Application to Reservoir Simulation , SIAM Journal on Scientific Computing , 39(5), pp. S323-S346, 2017 . V. A. Dobrev, Tz. Kolev, N. A. Peterson, and J. B. Schroder, Two-level Convergence Theory For Multigrid Reduction In Time (MGRIT) , SIAM Journal on Scientific Computing , 39(5), pp. S501-S527, 2017 . R. E. Bank, P. S. Vassilevski, and L. T. Zikatanov, Arbitrary Dimension Convection-Diffusion Schemes For Space-Time Discretizations , Journal of Computational and Applied Mathematics , 310, pp. 19-31, 2017 . S. Osborn, P. Zulian, T. Benson, U. Villa, R. Krause, and P. S. 
Vassilevski, Scalable hierarchical PDE sampler for generating spatially correlated random fields using non-matching meshes , Numerical Linear Algebra with Applications , 25, pp. e2146, 2017 . J. H. Adler, I. Lashuk, and S. P. MacLachlan, Composite-grid multigrid for diffusion on the sphere , Numerical Linear Algebra with Applications , 25(1), pp. e2115, 2017 . S. Zampini, P. S. Vassilevski, V. Dobrev, and T. Kolev, Balancing Domain Decomposition by Constraints Algorithms for Curl-conforming Spaces of Arbitrary Order , Domain Decomposition Methods in Science and Engineering XXIV , 2017 . M. Larsen, J. Ahrens, U. Ayachit, E. Brugger, H. Childs, B. Geveci, and C. Harrison, The ALPINE In Situ Infrastructure: Ascending from the Ashes of Strawman , ISAV 2017: In Situ Infrastructures for Enabling Extreme-scale Analysis and Visualization , 2017 . J. Wright and S. Shiraiwa, Antenna to Core: A New Approach to RF Modelling , 22 Topical Conference on Radio-Frequency Power in Plasmas , 2017 . S. Shiraiwa, J. C. Wright, P. T. Bonoli, Tz. Kolev, and M. Stowell, RF wave simulation for cold edge plasmas using the MFEM library , 22 Topical Conference on Radio-Frequency Power in Plasmas , 2017 . C. Hofer, U. Langer, M. Neum\u00fcller, and I. Toulopoulos, Time-Multipatch Discontinuous Galerkin Space-Time Isogeometric Analysis of Parabolic Evolution Problems , RICAM-Report 2017-26 , 2017 . J. Billings, A. McCaskey, G. Vallee, and G. Watson, Will humans even write code in 2040 and what would that mean for extreme heterogeneity in computing? , arXiv:1712.00676 , 2017 . M. L. C. Christensen, U. Villa, A. Engsig-Karup, and P. S. Vassilevski, Numerical Multilevel Upscaling For Incompressible Flow in Reservoir Simulation: An Element-Based Algebraic Multigrid (AMGe) Approach , SIAM Journal on Scientific Computing , 39(1), pp. B102-B137, 2017 . R. Anderson, V. Dobrev, Tz. Kolev, D. Kuzmin, M. Q. de Luna, R. Rieben, and V. Tomov, High-order local maximum principle preserving (MPP) discontinuous Galerkin finite element method for the transport equation , Journal of Computational Physics , 334, pp. 102-124, 2017 . R. Li and Y. Saad, Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners , SIAM Journal on Matrix Analysis and Applications , 38(3), pp. 807-828, 2017 .", "title": "2017"}, {"location": "publications/#2016", "text": "D. Z. Kalchev, C. S. Lee, U. Villa, Y. Efendiev, and P. S. Vassilevski, Upscaling of Mixed Finite Element Discretization Problems by the Spectral AMGe Method , SIAM Journal on Scientific Computing , 38(5), pp. A2912-A2933, 2016 . V. A. Dobrev, Tz. V. Kolev, R. N. Rieben, and V. Z. Tomov, Multi-material closure model for high-order finite element Lagrangian hydrodynamics , International Journal for Numerical Methods in Fluids , 82(10), pp. 689-706, 2016 . J. Guermond, B. Popov, and V. Tomov, Entropy-viscosity method for the single material Euler equations in Lagrangian frame , Computer Methods in Applied Mechanics and Engineering , 300, pp. 402-426, 2016 . M. Holec, J. Limpouch, R. Liska, and S. Weber, High-order discontinuous Galerkin nonlocal transport and energy equations scheme for radiation hydrodynamics , International Journal for Numerical Methods in Fluids , 83(10), pp. 779-797, 2016 . Tz. V. Kolev, J. Xu, and Y. Zhu, Multilevel Preconditioners for Reaction-Diffusion Problems with Discontinuous Coefficients , Journal of Scientific Computing , 67(1), pp. 324-350, 2016 . M. Reberol and B. 
L\u00e9vy, Low-order continuous finite element spaces on hybrid non-conforming hexahedral-tetrahedral meshes , CoRR , abs/1605.02626, 2016 . O. Marques, A. Druinsky, X. S. Li, A. T. Barker, P. Vassilevski, and D. Kalchev, Tuning the Coarse Space Construction in a Spectral AMG Solver , Procedia Computer Science , 80, pp. 212-221, International Conference on Computational Science 2016, ICCS 2016, 6-8 June 2016, San Diego, California, USA, 2016 . J. S. Yeom, J. J. Thiagarajan, A. Bhatele, G. Bronevetsky, and T. Kolev, Data-Driven Performance Modeling of Linear Solvers for Sparse Matrices , 2016 7th International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS) , 2016 .", "title": "2016"}, {"location": "publications/#2015-and-earlier", "text": "D. Osei-Kuffuor, R. Li, and Y. Saad, Matrix Reordering Using Multilevel Graph Coarsening for ILU Preconditioning , SIAM Journal on Scientific Computing , 37(1), pp. A391-A419, 2015 . R. Anderson, V. Dobrev, Tz. Kolev, and R. Rieben, Monotonicity in high-order curvilinear finite element ALE remap , Int. J. Numer. Meth. Fluids , 77(5), pp. 249-273, 2014 . V. Dobrev, Tz. Kolev, and R. Rieben, High-order curvilinear finite element methods for elastic-plastic Lagrangian dynamics , J. Comp. Phys. , (257B), pp. 1062-1080, 2014 . P. Vassilevski and U. Villa, A mixed formulation for the Brinkman problem , SIAM Journal on Numerical Analysis , 52-1, pp. 258-281, 2014 . J. H. Adler and P. S. Vassilevski, Error Analysis for Constrained First-Order System Least-Squares Finite-Element Methods , SIAM Journal on Scientific Computing , 36(3), pp. A1071-A1088, 2014 . A. Aposporidis, P. S. Vassilevski, and A. Veneziani, Multigrid preconditioning of the non-regularized augmented Bingham fluid problem , ETNA. Electronic Transactions on Numerical Analysis , 41, 2014 . P. S. Vassilevski and U. M. Yang, Reducing communication in algebraic multigrid using additive variants , Numerical Linear Algebra with Applications , 21(2), pp. 275-296, 2014 . T. Dong, V. Dobrev, T. Kolev, R. Rieben, S. Tomov, and J. Dongarra, A Step towards Energy Efficient Computing: Redesigning a Hydrodynamic Application on CPU-GPU , 2014 IEEE 28th International Parallel and Distributed Processing Symposium , May 2014 . P. Vassilevski and U. Villa, A block-diagonal algebraic multigrid preconditioner for the Brinkman problem , SIAM Journal on Scientific Computing , 35-5, pp. S3-S17, 2013 . V. Dobrev, T. Ellis, Tz. Kolev, and R. Rieben, High-order curvilinear finite elements for axisymmetric Lagrangian hydrodynamics , Computers & Fluids , pp. 58-69, 2013 . D. Kalchev, C. Ketelsen, and P. S. Vassilevski, Two-level adaptive algebraic multigrid for sequence of problems with slowly varying random coefficients , SIAM Journal on Scientific Computing , 35(6), pp. B1215-B1234, 2013 . P. D'Ambra and P. S. Vassilevski, Adaptive AMG with coarsening based on compatible weighted matching , Computing and Visualization in Science , 16(2), pp. 59-76, 2013 . T. A. Brunner, T. V. Kolev, T. S. Bailey, and A. T. Till, Preserving Spherical Symmetry in Axisymmetric Coordinates for Diffusion , International Conference on Mathematics and Computational Methods Applied to Nuclear Science & Engineering , 2013 . Tz. Kolev and P. Vassilevski, Parallel auxiliary space AMG solver for H(div) problems , SIAM Journal on Scientific Computing , 34, pp. A3079-A3098, 2012 . V. Dobrev, Tz. Kolev, and R. 
Rieben, High-order curvilinear finite element methods for Lagrangian hydrodynamics , SIAM Journal on Scientific Computing , 34, pp. B606-B641, 2012 . I. Lashuk and P. Vassilevski, Element agglomeration coarse Raviart-Thomas spaces with improved approximation properties , Numerical Linear Algebra with Applications , 19, pp. 414-426, 2012 . D. Kalchev, Adaptive algebraic multigrid for finite element elliptic equations with random coefficients , LLNL Tech. Report , LLNL-TR-553254, 2012 . A. Aposporidis, P. Vassilevski, and A. Veneziani, A geometric nonlinear AMLI preconditioner for the Bingham fluid flow in mixed variables , LLNL Tech. Report , LLNL-JRNL-600372, 2012 . P. Knupp, Introducing the target-matrix paradigm for mesh optimization by node movement , Engineering with Computers , 28(4), pp. 419-429, 2012 . T. A. Brunner, Mulard: A Multigroup Thermal Radiation Diffusion Mini-Application , DOE Exascale Research Conference, Portland, Oregon , 2012 . A. Baker, R. Falgout, T. Kolev, and U. Yang, Multigrid smoothers for ultra-parallel computing , SIAM Journal on Scientific Computing , 33(5), pp. 2864-2887, 2011 . V. Dobrev, T. Ellis, Tz. Kolev, and R. Rieben, Curvilinear finite elements for Lagrangian hydrodynamics , Int. J. Numer. Meth. Fluids , 65, pp. 1295-1310, 2011 . V. Dobrev, J.-L. Guermond, and B. Popov, Surface reconstruction and image enhancement via L1-minimization , SIAM Journal on Scientific Computing , 32 (3), pp. 1591-1616, 2010 . J. Brannick and R. Falgout, Compatible relaxation and coarsening in algebraic multigrid , SIAM Journal on Scientific Computing , 32, pp. 1393-1416, 2010 . A. Baker, Tz. Kolev, and U. M. Yang, Improving algebraic multigrid interpolation operators for linear elasticity problems , Numerical Linear Algebra with Applications , 17, pp. 495-517, 2010 . U. M. Yang, On long-range interpolation operators for aggressive coarsening , Numerical Linear Algebra with Applications , 17, pp. 453-472, 2010 . Tz. Kolev and P. Vassilevski, Parallel auxiliary space AMG for H(curl) problems , Journal of Computational Mathematics , 27, pp. 604-623, 2009 . Tz. V. Kolev and R. N. Rieben, A tensor artificial viscosity using a finite element approach , Journal of Computational Physics , 228(22), pp. 8336 - 8366, 2009 . A. Baker, E. Jessup, and Tz. Kolev, A simple strategy for varying the restart parameter in GMRES(m) , J. Comp. Appl. Math. , 230, pp. 751-761, 2009 . Tz. Kolev, J. Pasciak, and P. Vassilevski, H(curl) auxiliary mesh preconditioning , Numerical Linear Algebra with Applications , 15, pp. 455-471, 2008 . H. De Sterck, R. Falgout, J. Nolting, and U. M. Yang, Distance-two interpolation for parallel algebraic multigrid , Numerical Linear Algebra with Applications , 15, pp. 115-139, 2008 . V. Dobrev, R. Lazarov, and L. Zikatanov, Preconditioning of symmetric interior penalty discontinuous Galerkin FEM for second order elliptic problems , in Domain Decomposition Methods in Science and Engineering XVII, Lecture Notes in Computational Science and Engineering, vol. 60, U. Langer et al. eds, Springer-Verlag, Berlin, Heidelberg, pp. 33-44, 2008 . D. Alber and L. Olson, Parallel coarse grid selection , Numerical Linear Algebra with Applications , 14, pp. 611-643, 2007 . V. Dobrev, R. Lazarov, P. Vassilevski, and L. Zikatanov, Two-level preconditioning of discontinuous Galerkin approximations of second-order elliptic equations , Numerical Linear Algebra with Applications , 13 (9), pp. 753-770, 2006 . Tz. Kolev and P. 
Vassilevski, AMG by element agglomeration and constrained energy minimization interpolation , Numerical Linear Algebra with Applications , 13, pp. 771-788, 2006 . J. Bramble, Tz. Kolev, and J. Pasciak, A least-squares approximation method for the time-harmonic Maxwell equations , Journal of Numerical Mathematics , 13(4), pp. 237-263, 2005 . P. Vassilevski, Sparse matrix element topology with application to AMG(e) and preconditioning , Numerical Linear Algebra with Applications , 9, pp. 429-444, 2002 . MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "2015 and earlier"}, {"location": "seminar/", "text": "FEM@LLNL Seminar Series The FEM@LLNL seminar series is focused on finite element research and applications talks of interest to the MFEM community. Videos will be added to a YouTube playlist as well as this site's videos page . Sign-Up Fill in this form to sign-up for future FEM@LLNL seminar announcements. Next Talk TBD TBD 9am PDT Webex Abstract: TBD Previous Talks Pablo Brubeck (University of Oxford) FIAT: from basis functions to efficient finite element solvers November 12, 2024 Slides Talk Recording Abstract: The FInite element Automatic Tabulator (FIAT) is a powerful Python library for tabulating basis functions. In this talk, we present two major recent developments in FIAT. First, we have extended the FIAT abstraction to natively support macroelements. Macroelements offer conforming discretizations with highly desirable properties, such as divergence-free vector fields, and divergence-conforming symmetric tensors with low-order polynomial degrees. Elements implemented include the Hsieh-Clough-Tocher macroelement for biharmonic problems, the divergence-free, H1-conforming, inf-sup stable Guzm\u00e1n-Neilan macroelement for Stokes, and the Johnson-Mercier macroelement for strongly-symmetric, H(div)-conforming stresses in solid mechanics. We also improved the performance of tabulation and quadrature for simplicial high-order elements, and introduced novel basis functions, leading to solvers with better complexity in polynomial degree. Inspired by the fast diagonalization method, we define new degrees of freedom on simplices as moments against a numerically-computed orthogonal polynomial basis to decouple element interiors in the stiffness matrix. We exploit this decoupling in a domain decomposition method with vertex or edge subdomains on the interface degrees of freedom, and Jacobi relaxation for the interior degrees of freedom. This enables fast solvers for high-order discretizations of the Riesz maps of the spaces of the de Rham complex (Lagrange, N\u00e9d\u00e9lec, Raviart-Thomas, and Brezzi-Douglas-Marini). For each case, we illustrate the performance gains with numerical examples in Firedrake. Denis Ridzal (Sandia National Laboratories) R-Adaptive Mesh Optimization to Enhance Finite Element Basis Compression October 15, 2024 Slides Talk Recording Abstract: Modern computing systems are capable of exascale calculations. While these systems continue to grow in processing power, the available system memory has not increased commensurately. A predominant approach to limit the memory usage in large-scale applications is to exploit the abundant processing power and continually recompute many low-level simulation quantities, rather than storing them. However, this approach can adversely impact the throughput of the simulation and diminish the benefits of modern computing architectures. 
We present two novel contributions to reduce the memory burden while maintaining performance in simulations based on finite element discretizations. The first contribution develops dictionary-based data compression schemes that detect and exploit the structure of the discretization, due to redundancies across the finite element mesh. These schemes are shown to reduce the memory requirements of key computational kernels by more than 99% on meshes with large numbers of nearly identical mesh cells. For applications where this structure does not exist, our second contribution leverages a recently developed augmented Lagrangian sequential quadratic programming algorithm to enable r-adaptive mesh optimization, with the goal of enhancing redundancies in the mesh. Numerical results demonstrate the effectiveness of the proposed methods to detect, exploit and enhance mesh structure on examples inspired by large-scale applications. Daniele Panozzo (Courant Institute, NYU) Geometric Predicates for Unconditionally Robust Elastodynamics Simulation October 1, 2024 Slides Talk Recording Abstract: The numerical solution of partial differential equations (PDE) is ubiquitously used for physical simulation in scientific computing and engineering. Ideally, a PDE solver should be opaque: the user provides as input the domain boundary, boundary conditions, and the governing equations, and the code returns an evaluator that can compute the value of the solution at any point of the input domain. This is surprisingly far from being the case for all existing open-source or commercial software, despite the research efforts in this direction and the large academic and industrial interest. To a large extent, this is due to lack of robustness in geometric algorithms used to create the discretization, detect collisions, and evaluate element validity. I will present the incremental potential contact simulation paradigm, which provides strong robustness guarantees in simulation codes, ensuring, for the first time, validity of the trajectories accounting for floating point rounding errors over an entire elastodynamic simulation with contact. A core part of this approach is the use of a conservative line-search to check for collisions between geometric primitives and for ensuring validity of the deforming elements over linear trajectories. I will discuss both problems in depth, showing that SOTA approaches favor numerical efficiency but are unfortunately not robust to floating point rounding, leading to major failures in simulation. I will then present an alternative approach based on judiciously using rational and interval types to ensure provable correctness, while keeping a running time comparable with non-conservative methods. To conclude, I will discuss a set of applications enabled by this approach in microscopy and biomechanics, including traction force estimation on a live zebrafish and efficient modeling and simulation of fibrous materials. Rub\u00e9n Sevilla (Swansea University) Mesh Generation and Adaptation using Green AI September 17, 2024 Slides Talk Recording Abstract: Most methods used to solve partial differential equations require creating a mesh that represents the model's geometry. Today, unstructured mesh technology is widely used, allowing three-dimensional meshes with hundreds of millions of elements to be generated in just a few minutes. However, when optimising a design, many simulations are needed for different operating conditions and geometric configurations. 
Creating the best mesh for each setup becomes time-consuming due to the requirement of excessive human intervention and expertise. This talk will cover our recent work on using artificial intelligence to predict near-optimal meshes suitable for simulations. The main idea is to take advantage of the large amount of data that already exists in the industry to improve the selection of a suitable spacing function, including anisotropic spacing. The proposed approach aims to use knowledge from previous simulations to guide the mesh generation process. I will assess the proposed method based on the accuracy of the predictions, efficiency, and environmental impact. This includes considering the carbon footprint and energy consumption of the computations required for a parametric CFD analysis under different flow conditions and angles of attack. For transient problems, the use of high order methods provides several advantages due to the low dissipation and dispersion errors associated to these schemes. An attractive approach to simulate these problems is to incorporate degree adaptive schemes to enhance the approximation only where needed. In this talk I will also present our recent work on using artificial intelligence to aid a degree adaptive process. Esteban Ferrer and David Huergo (Universidad Polit\u00e9cnica de Madrid) New Avenues in High Order Fluid Dynamics September 3, 2024 Slides Talk Recording Abstract: We present the latest developments of our High-Order Spectral Element Solver (HORSES3D), an open source high-order discontinuous Galerkin framework capable of solving a variety of flow applications, including compressible flows (with or without shocks), incompressible flows, various RANS and LES turbulence models, particle dynamics, multiphase flows, and aeroacoustics [ 1 ]. Recent developments allow us to simulate challenging multiphysics including turbulent flows, multiphase and moving bodies, using local h and p-adaption. In addition, we present recent work that couples Machine Learning and reinforcement learning techniques with high order simulations. Patrick Farrell (University of Oxford) Designing conservative and accurately dissipative numerical integrators in time July 30, 2024 Slides Talk Recording Abstract: Numerical methods for the simulation of transient systems with structure-preserving properties are known to exhibit greater accuracy and physical reliability, in particular over long durations. These schemes are often built on powerful geometric ideas for broad classes of problems, such as Hamiltonian or reversible systems. However, there remain difficulties in devising higher-order- in-time structure-preserving discretizations for nonlinear problems, and in conserving non-polynomial invariants. In this work we propose a new, general framework for the construction of structure-preserving timesteppers via finite elements in time and the systematic introduction of auxiliary variables. The framework reduces to Gauss methods where those are structure-preserving, but extends to generate arbitrary-order structure-preserving schemes for nonlinear problems, and allows for the construction of schemes that conserve multiple higher-order invariants. 
We demonstrate the ideas by devising novel schemes that exactly conserve all known invariants of the Kepler and Kovalevskaya problems, arbitrary-order schemes for the compressible Navier\u2013Stokes equations that conserve mass, momentum, and energy, and provably dissipate entropy, and multi-conservative schemes for the Benjamin-Bona-Mahony equation. Gonzalo de Diego (Courant Institute) Numerical Solvers for Viscous Contact Problems in Glaciology May 6, 2024 Slides Talk Recording Abstract: Viscous contact problems are time-dependent viscous flow problems where the fluid is in contact with a solid surface from which it can detach and reattach. Over sufficiently long timescales, ice is assumed to flow like a viscous fluid with a nonlinear rheology. Therefore, certain phenomena in glaciology, like the formation of subglacial cavities in the base of an ice sheet or the dynamics of marine ice sheets (continental ice sheets that slide into the ocean and go afloat at a grounding line, detaching from the bedrock), can be modelled as viscous contact problems. In particular, these problems can be described by coupling the Stokes equations with contact boundary conditions to free boundary equations that evolve the ice domain in time. In this talk, I will describe the difficulties that arise when attempting to solve this system numerically and I will introduce a method that is capable of overcoming them. Nat Trask (University of Pennsylvania) A Data Driven Finite Element Exterior Calculus April 2, 2024 Slides Talk Recording Abstract: Despite the recent flurry of work employing machine learning to develop surrogate models to accelerate scientific computation, the \"black-box\" underpinnings of current techniques fail to provide the verification and validation guarantees provided by modern finite element methods. In this talk we present a data-driven finite element exterior calculus for developing reduced-order models of multiphysics systems when the governing equations are either unknown or require closure. The framework employs deep learning architectures typically used for logistic classification to construct a trainable partition of unity which provides notions of control volumes with associated boundary operators. This alternative to a traditional finite element mesh is fully differentiable and allows construction of a discrete de Rham complex with a corresponding Hodge theory. We demonstrate how models may be obtained with the same robustness guarantees as traditional mixed finite element discretization, with deep connections to contemporary techniques in graph neural networks. For applications developing digital twins where surrogates are intended to support real time data assimilation and optimal control, we further develop the framework to support Bayesian optimization of unknown physics on the underlying adjacency matrices of the chain complex. By framing the learning of fluxes via an optimal recovery problem with a computationally tractable posterior distribution, we are able to develop models with intrinsic representations of epistemic uncertainty. William Moses (University of Illinois Urbana-Champaign) Supercharging Programming Through Compiler Technology March 14, 2024 Slides Talk Recording Abstract: The decline of Moore's law and an increasing reliance on computation has led to an explosion of specialized software packages and hardware architectures. 
While this diversity enables unprecedented flexibility, it also requires domain-experts to learn how to customize programs to efficiently leverage the latest platform-specific API's and data structures, instead of working on their intended problem. Rather than forcing each user to bear this burden, I propose building high-level abstractions within general-purpose compilers that enable fast, portable, and composable programs to be automatically generated. This talk will demonstrate this approach through compilers that I built for two domains: automatic differentiation and parallelism. These domains are critical to both scientific computing and machine learning, forming the basis of neural network training, uncertainty quantification, and high-performance computing. For example, a researcher hoping to incorporate their climate simulation into a machine learning model must also provide a corresponding derivative simulation. My compiler, Enzyme, automatically generates these derivatives from existing computer programs, without modifying the original application. Moreover, operating within the compiler enables Enzyme to combine differentiation with program optimization, resulting in asymptotically and empirically faster code. Looking forward, this talk will also touch on how this domain-agnostic compiler approach can be applied to new directions, including probabilistic programming. Sungho Lee (University of Memphis) LAGHOST: Development of Lagrangian High-Order Solver for Tectonics March 5, 2024 Slides Talk Recording Abstract: Long-term geological and tectonic processes associated with large deformation highlight the importance of using a moving Lagrangian frame. However, modern advancements in the finite element method, such as MPI parallelization, GPU acceleration, high-order elements, and adaptive grid refinement for tectonics based on this frame, have not been updated. Moreover, the existing solvers available in open access suffer from limited tutorials, a poor user manual, and several dependencies that make model building complex. These limitations can discourage both new users and developers from utilizing and improving these models. As a result, we are motivated to develop a user-friendly, Lagrangian thermo-mechanical numerical model that incorporates visco-elastoplastic rheology to simulate long-term tectonic processes like mountain building, mantle convection and so on. We introduce an ongoing project called LAGHOST (Lagrangian High-Order Solver for Tectonics), which is an MFEM-based tectonic solver. LAGHOST expands the capabilities of MFEM's LAGHOS mini-app. Currently, our solver incorporates constitutive equation, body force, mass scaling, dynamic relaxation, Mohr-Coulomb plasticity, plastic softening, Winkler foundation, remeshing, and remapping. To evaluate LAGHOST, we conducted four benchmark tests. The first test involved compressing an elastic box at a constant velocity, while the second test focused on the compaction of a self-weighted elastic column. To enable larger time-step sizes and achieve quasi-static solutions in the benchmarks, we introduced a fictitious density and implemented dynamic relaxation. This involved scaling the density factor and introducing a portion of force component opposing the previous velocity direction at nodal points. Our results exhibited good agreement with analytical solutions. Subsequently, we incorporated Mohr-Coulomb plasticity, a reliable model for predicting rock failure, into LAGHOST. 
We revisited the elastic box benchmark and considered plastic materials. By considering stress correction arising from plastic yielding, we confirmed that the updated solution from elastic guess aligned with the analytical solution. Furthermore, we applied LAGHOST to simulate the evolution of a normal fault, a significant tectonic phenomenon. To model normal fault evolution, we introduced strain softening on cohesion as the dominant factor based on geological evidence. Our simulations successfully captured the normal fault's evolution, with plastic strain localizing at shallow depths before propagating deeper. The fault angle reached approximately 60 degrees, in line with the Mohr-Coulomb failure theory. Kevin Chung (LLNL) Data-Driven DG FEM Via Reduced Order Modeling and Domain Decomposition February 6, 2024 Slides Talk Recording Abstract: Numerous cutting-edge scientific technologies originate at the laboratory scale, but transitioning them to practical industry applications can be a formidable challenge. Traditional pilot projects at intermediate scales are costly and time-consuming. Alternatives such as E-pilots can rely on high-fidelity numerical simulations, but even these simulations can be computationally prohibitive at larger scales. To overcome these limitations, we propose a scalable, component reduced order model (CROM) method. We employ Discontinuous Galerkin Domain Decomposition (DG-DD) to decompose the physics governing equation for a large-scale system into repeated small-scale unit components. Critical physics modes are identified via proper orthogonal decomposition (POD) from small-scale unit component samples. The computationally expensive, high-fidelity discretization of the physics governing equation is then projected onto these modes to create a reduced order model (ROM) that retains essential physics details. The combination of DG-DD and POD enables ROMs to be used as building blocks comprised of unit components and interfaces, which can then be used to construct a global large-scale ROM without data at such large scales. This method is demonstrated on the Poisson and Stokes flow equations, showing that it can solve equations about 15\u221240 times faster with only \u223c 1% relative error, even at scales 1000 times larger than the unit components. This research is ongoing, with efforts to apply these methods to more complex physics such as Navier-Stokes equation, highlighting their potential for transitioning laboratory-scale technologies to practical industrial use. Brian Young A Full-Wave Electromagnetic Simulator for Frequency-Domain S-Parameter Calculations January 9, 2024 Slides Talk Recording Abstract: An open-source and free full-wave electromagnetic simulator is presented that addresses the engineering community\u2019s need for the calculation of frequency-domain S-parameters. Two-dimensional port simulations are used to excite the 3D space and to extract S-parameters using modal projections. Matrix solutions are performed using complex computations. Features enabled by the MFEM library include adaptive mesh refinement, arbitrary order finite elements, and parallel processing using MPI. Implementation details are presented along with sample results and accuracy demonstrations. 
Jesse Chan (Rice University) High order positivity-preserving entropy stable discontinuous Galerkin discretizations December 5, 2023 Slides Talk Recording Abstract: High order discontinuous Galerkin (DG) methods provide high order accuracy and geometric flexibility, but are known to be unstable when applied to nonlinear conservation laws whose solutions exhibit shocks and under-resolved solution features. Entropy stable schemes improve robustness by ensuring that physically relevant solutions satisfy a semi-discrete cell entropy inequality independently of numerical resolution and solution regularization while retaining formal high order accuracy. In this talk, we will review the construction of entropy stable high order discontinuous Galerkin methods and describe approaches for enforcing that solutions are \"physically relevant\" (i.e., the thermodynamic variables remain positive). Youngsoo Choi (Lawrence Livermore National Laboratory) Physics-guided interpretable data-driven simulations November 14, 2023 Slides Talk Recording Abstract: A computationally demanding physical simulation often presents a significant impediment to scientific and technological progress. Fortunately, recent advancements in machine learning (ML) and artificial intelligence have given rise to data-driven methods that can expedite these simulations. For instance, a well-trained 2D convolutional deep neural network can provide a 100,000-fold acceleration in solving complex problems like Richtmyer-Meshkov instability [ 1 ]. However, conventional black-box ML models lack the integration of fundamental physics principles, such as the conservation of mass, momentum, and energy. Consequently, they often run afoul of critical physical laws, raising concerns among physicists. These models attempt to compensate for the absence of physics information by relying on vast amounts of data. Additionally, they suffer from various drawbacks, including a lack of structure-preservation, computationally intensive training phases, reduced interpretability, and susceptibility to extrapolation issues. To address these shortcomings, we propose an approach that incorporates physics into the data-driven framework. This integration occurs at different stages of the modeling process, including the sampling and model-building phases. A physics-informed greedy sampling procedure minimizes the necessary training data while maintaining target accuracy [ 2 ]. A physics-guided data-driven model not only preserves the underlying physical structure more effectively but also demonstrates greater robustness in extrapolation compared to traditional black-box ML models. We will showcase numerical results in areas such as hydrodynamics [ 3 , 4 ], particle transport [ 5 ], plasma physics, pore-collapse, and 3D printing to highlight the efficacy of these data-driven approaches. The advantages of these methods will also become apparent in multi-query decision-making applications, such as design optimization [ 6 , 7 ]. Ben Southworth (Los Alamos National Laboratory) Superior discretizations and AMG solvers for extremely anisotropic diffusion via hyperbolic operators October 17, 2023 Slides Talk Recording Abstract: Heat conduction in magnetic confinement fusion can reach anisotropy ratios of 10^9-10^10, and in complex problems the direction of anisotropy may not be aligned with (or is impossible to align with) the spatial mesh. Such problems pose major challenges for both discretization accuracy and efficient implicit linear solvers. 
Although the underlying problem is elliptic or parabolic in nature, we argue that the problem is better approached from the perspective of hyperbolic operators. The problem is posed in a directional gradient first order formulation, introducing a directional heat flux along magnetic field lines as an auxiliary variable. We then develop novel continuous and discontinuous discretizations of the mixed system, using stabilization techniques developed for hyperbolic problems. The resulting block matrix system is then reordered so that the advective operators are on the diagonal, and the system is solved using AMG based on approximate ideal restriction (AIR), which is particularly efficient for upwind discretizations of advection. Compared with traditional discretizations and AMG solvers, we achieve orders of magnitude reduction in error and AMG iterations in the extremely anisotropic regime. Natasha Sharma (University of Texas at El Paso) A Continuous Interior Penalty Method Framework for Sixth Order Cahn-Hilliard-type Equations with applications to microstructure evolution and microemulsions July 18, 2023 Slides Talk Recording Abstract: The focus of this talk is on presenting unconditionally stable, uniquely solvable, and convergent numerical methods to solve two classes of the sixth-order Cahn-Hilliard-type equations. The first class arises as the so-called phase field crystal atomistic model of crystal growth, which has been employed to simulate a number of physical phenomena such as crystal growth in a supercooled liquid, crack propagation in ductile material, dendritic and eutectic solidification. The second class, henceforth referred to as Microemulsion systems (ME systems) appears as a model capturing the dynamics of phase transitions in ternary oil-water-surfactant systems in which three phases namely a microemulsion, almost pure oil, and almost pure water can coexist in equilibrium. ME systems have several applications ranging from enhanced oil recovery to the development of environmentally friendly solvents and drug delivery systems. Despite the widespread applications of these models, the major challenge impeding their use has been and continues to be a lack of understanding of the complex systems which they model. Thus, building computational models for these systems is crucial to the understanding of these systems. The presence of the higher order derivative in combination with a time-dependent process poses many challenges to the creation of stable, convergent, and efficient numerical methods approximating solutions to these equations. In this talk, we present a continuous interior penalty Galerkin framework for solving these equations and theoretically establish the desirable properties of stability, unique solvability, and first-order convergence. We close the talk by presenting the numerical results of some benchmark problems to verify the practical performance of the proposed approach and discuss some exciting current and future applications. Freddie Witherden (Texas A&M University) FSSpMDM \u2014 Accelerating Small Sparse Matrix Multiplications by Run-Time Code Generation June 20, 2023 Slides Talk Recording Abstract: Small matrix multiplications are a key building block of modern high-order finite element method solvers. Such multiplications describe the act of applying a specific finite element operator onto a set of state vectors. The small and irregular size of these multiplications makes them poor candidates for generic matrix multiplication routines. 
Moreover, for elements with a tensor product construction, the operators themselves can exhibit a significant degree of sparsity. In this talk, I will describe the code generation strategies employed by our Fixed Size Sparse Matrix-Dense Matrix (FSSpMDM) routine in libxsmm and show how these result in performant operator kernels for prismatic and hexahedral elements. Strategies will be described for both x86-64 (AVX2/AVX-512) and AARCH64 (NEON/SVE) instruction sets. Results will be presented on recent Intel and Apple CPUs and compared against the well-known GiMMiK C code generation library. Frank Giraldo (Naval Postgraduate School) Using High-Order Element-Based Galerkin Methods to Capture Hurricane Intensification May 16, 2023 Slides Talk Recording Abstract: Properly capturing hurricane rapid intensification (where the winds increase by 30 knots in the first 24 hours) remains challenging for atmospheric models. The reason is that we need LES-type scales \ud835\udcaa(100m), which are still elusive due to computational cost. In this talk, I describe the work that we are doing in this area and how element-based Galerkin Methods are being used to approximate spatial derivatives. I will also discuss the time-integration strategy that we are exploring for this class of problems. In particular, we are exploring process Multirate methods whereby each process in a system of nonlinear partial differential equations (PDEs) uses a time-integrator and time-step commensurate with the wave-speed of that process. We have constructed Multirate methods of any order using extrapolation methods. Along this same idea, we have also developed a multi-modeling framework (MMF) designed to replace the physical parameterizations used in weather/climate models. Our approach is to view the coarse-scale and fine-scale models through the lens of Variational Multi-Scale (VMS) methods in order to give MMF a more rigorous mathematical foundation. Our end goal is to use MMF in order to better resolve the inner core of hurricanes. In addition, I will show some results using flux differencing discontinuous Galerkin Methods for constructing both Kinetic Energy Preserving and Entropy Stable methods and discuss why we need scalable models in order to achieve our goals. Our model, NUMA, is a 3D nonhydrostatic atmospheric model that runs on large CPU clusters and on GPUs. Leszek F. Demkowicz (University of Texas at Austin) Full Envelope DPG Approximation for Electromagnetic Waveguides. Stability and Convergence Analysis April 25, 2023 Slides Talk Recording Abstract: The presented work started with a convergence and stability analysis for the so-called full envelope approximation used in analyzing optical amplifiers (lasers). The specific problem of interest was the simulation of Transverse Mode Instabilities (TMI). The problem translates into the solution of a system of two nonlinear time-harmonic Maxwell equations coupled with a transient heat equation. Simulation of a 1 m long fiber involves the resolution of 10 M wavelengths. A superefficient MPI/openMP hp FE code run on a supercomputer gets you to the range of ten thousand wavelengths. The resolution of the additional thousand wavelengths is done using an exponential ansatz e^{ikz} in the z-coordinate. This results in a non-standard Maxwell problem. The stability and convergence analysis for the problem has been restricted to the linear case only.
It turns out that the modified Maxwell problem is stable if and only if the original waveguide problem is stable and the boundedness below stability constants are identical. We have converged to the task of determining the boundedness below constant. The stability analysis started with an easier, acoustic waveguide problem. Separation of variables leads to an eigenproblem for a self-adjoint operator in the transverse plane (in x,y). Expansion of the solution in terms of the corresponding eigenvectors leads then to a decoupled system of ODEs, and a stability analysis for a two-point BVP for an ODE parametrized with the corresponding eigenvalues. The L^2-orthogonality of the eigenmodes and the stability result for a single mode, lead then to the final result: the inverse boundedness below constant depends inversely linearly upon the length L of the waveguide. The corresponding stability for the Maxwell waveguide turned out to be unexpectedly difficult. The equation is vector-valued so a direct separation of variables is out to begin with. An exponential ansatz in z leads to a non-standard eigenproblem involving an operator that is non-self adjoint even for the easiest, homogeneous case. The answer to the problem came from a tricky analysis of the eigenproblem combined with the perturbation technique for perturbed self-adjoint operators. The use of perturbation theory requires an assumption on the smallness of perturbation of the dielectric constant (around a constant value) but with no additional assumptions on its differentiability (discontinuities are allowed). In the end, the final result is similar to that for the acoustic waveguide - the boundedness below constant depends inversely linearly on L. Joachim Sch\u00f6berl (Vienna University of Technology) The Netgen/NGSolve Finite Element Software March 28, 2023 Slides Talk Recording Abstract: In this talk we give an overview of the open source finite element software Netgen/NGSolve, available from www.ngsolve.org . We show how to setup various physical models using FEniCS-like Python scripting. We discuss how we use NGSolve for teaching finite element methods, and how recent research projects have contributed to the further development of the NGSolve software. Some recent highlights are matrix-valued finite elements with applications in elasticity, fluid dynamics, and numerical relativity. We show how the recently extended framework of linear operators allows the utilization of GPUs for linear solvers, as well as time-dependent problems. Vikram Gavini (University of Michigan) Fast, Accurate and Large-scale Ab-initio Calculations for Materials Modeling March 7, 2023 Slides Talk Recording Abstract: Electronic structure calculations, especially those using density functional theory (DFT), have been very useful in understanding and predicting a wide range of materials properties. The importance of DFT calculations to engineering and physical sciences is evident from the fact that ~20% of computational resources on some of the world's largest public supercomputers are devoted to DFT calculations. Despite the wide adoption of DFT, the state-of-the-art implementations of DFT suffer from cell-size and geometry limitations, with the widely used codes in solid state physics being limited to periodic geometries and typical simulation domains containing a few hundred atoms. 
This talk will present our recent advances towards the development of computational methods and numerical algorithms for conducting fast and accurate large-scale DFT calculations using adaptive finite-element discretization, which form the basis for the recently released DFT-FE open-source code . Details of the implementation, including mixed precision algorithms and asynchronous computing, will be presented. The computational efficiency, scalability and performance of DFT-FE will be presented, demonstrating that it significantly outperforms widely used plane-wave DFT codes. Som Dutta (Utah State University) Quantifying the Potential of Covid-19 Transmission Across Scales: Using SEM based Navier-Stokes solver and the CEAT February 7, 2023 Slides Talk Recording Abstract: The ongoing Covid-19 pandemic has redefined our understanding of respiratory infectious disease transmission. The primary modes of transmission of the SARS-CoV-2 virus have been identified as airborne, with human-generated respiratory aerosols being the main carriers of the virus. Understanding the dispersion of these aerosols/droplets generated during speaking and coughing has helped quantify the potential for transmission and design effective mitigation strategies. Through my talk I will present how models at two ends of the spatio-temporal resolution spectrum helped quantify the physics and aided NASA Ames administrators in designing mitigation strategies. For the higher spatio-temporal resolution, I will illustrate how the high-order SEM-based Navier-Stokes solver Nek5000/NekRS was utilized to develop the models, including algorithms developed through CEED. I will present the two main modes of respiratory aerosol/droplet dispersal indoors, first at a shorter time-scale through expiratory events like coughing, and second at a longer time-scale through the flow and turbulence induced by the room ventilation system. At the other end of the spatio-temporal resolution, I will talk briefly about the Covid-19 Exposure Assessment Tool (CEAT), a novel tool developed to account for multiple factors that affect transmission. I will end my talk by briefly discussing how we can bridge the scales and heterogeneities in the problem with the aid of cutting-edge computing and data-driven methods, so that we are fully prepared for the next pandemic. The work presented here has been facilitated by funding through DOE's National Virtual Biotechnology Laboratory (NVBL). Stefan Henneking (University of Texas at Austin) Bayesian Inversion of an Acoustic-Gravity Model for Predictive Tsunami Simulation January 10, 2023 Slides Talk Recording Abstract: To improve tsunami preparedness, early-alert systems and real-time monitoring are essential. We use a novel approach for predictive tsunami modeling within the Bayesian inversion framework. This effort focuses on informing the immediate response to an occurring tsunami event using near-field data observation. Our forward model is based on a coupled acoustic-gravity model (e.g., Lotto and Dunham, Comput Geosci (2015) 19:327-340). Similar to other tsunami models, our forward model relies on transient boundary data describing the location and magnitude of the seafloor deformation. In a real-time scenario, these parameter fields must be inferred from a variety of measurements, including observations from pressure gauges mounted on the seafloor.
One particular difficulty of this inference problem lies in the accurate inversion from sparse pressure data recorded in the near-field where strong hydroacoustic waves propagate in the compressible ocean; these acoustic waves complicate the task of estimating the hydrostatic pressure changes related to the forming surface gravity wave. Our space-time model is discretized with finite elements in space and finite differences in time. The forward model incurs a high computational complexity, since the pressure waves must be resolved in the 3D compressible ocean over a sufficiently long time span. Due to the infeasibility of rapidly solving the corresponding inverse problem for the fully discretized space-time operator, we discuss approaches for using compact representations of the parameter-to-observable map. Lin Mu (University of Georgia) An Efficient and Effective FEM Solver for Diffusion Equation with Strong Anisotropy December 13, 2022 Slides Talk Recording Abstract: The diffusion equation with strong anisotropy has broad applications. In this project, we discuss the numerical solution of diffusion equations with strong anisotropy on meshes not aligned with the anisotropic vector field, focusing on applications to magnetic confinement fusion. In order to resolve the numerical pollution for simulations on a non-anisotropy-aligned mesh and reduce the associated high computational cost, we developed a high-order discontinuous Galerkin scheme with an efficient preconditioner. The auxiliary space preconditioning framework is designed by employing a continuous finite element space as the auxiliary space for the discontinuous finite element space. An effective line smoother that can mitigate the high-frequency error perpendicular to the magnetic field has been designed via a graph-based approach that picks lines approximately perpendicular to the vector fields when the mesh does not align with the anisotropy. Numerical experiments for several benchmark problems are presented to validate the effectiveness and robustness. Garth Wells (University of Cambridge) FEniCSx: design of the next generation FEniCS libraries for finite element methods November 8, 2022 Slides Talk Recording Abstract: The FEniCS Project provides libraries for solving partial differential equations using the finite element method. An aim of the FEniCS Project has been to provide high-performance solver environments that closely mirror mathematical syntax, with the hypothesis that high-level representations mean that solvers are faster to write, easier to debug, and can deliver faster runtime performance than is reasonably possible by hand. Using domain-specific languages and code generation techniques, arguably the FEniCS libraries delivered on these goals for a set of problems. However, over time, limitations, including performance and extensibility, became clear, and maintainability/sustainability became an issue. Building on experiences from the FEniCS libraries, I will present and discuss the design of the next generation of tools, FEniCSx. The new design retains strengths of the past approach, and addresses limitations using new designs and new tools. Solvers can be written in C++ or Python, and a number of design changes allow the creation of flexible, fast solvers in Python.
In the second part of my presentation, I will discuss high-performance finite element kernels (limited to CPUs on this occasion), motivated by the Center for Efficient Exascale Discretizations 'bake-off' problems, and which would not have been possible in the original FEniCS libraries. Double, single and half-precision kernels are considered, and results include (i) the observation that kernels with vector intrinsics can be slower than auto-vectorised kernels for common cases, and (ii) a cache-aware performance model which is remarkably accurate in predicting performance across architectures. Dennis Ogiermann (University of Bochum) Computing Meets Cardiology: Making Heart Simulations Fast and Accurate September 13, 2022 Slides Talk Recording Abstract: Heart diseases are a ubiquitous societal burden responsible for a majority of deaths worldwide. A central problem in developing effective treatments for heart diseases is the inherent complexity of the heart as an organ. From a modeling perspective, the heart can be interpreted as a biological pump involving multiple physical fields, namely fluid and solid mechanics, as well as chemistry and electricity, all interacting on different time scales. This multiphysics and multiscale aspect makes simulations inherently expensive, especially when approached with naive numerical techniques. However, computational models can be extraordinarily useful in helping us understand how the healthy heart functions and especially how malfunctions influence different diseases. In this context, information about possible weaknesses of therapies can also be obtained to ultimately improve clinical treatment and decision support. In this talk, we will focus primarily on two important model classes in computational cardiology and their respective efficient numerical treatment without significantly compromising accuracy. The first class is the problem of computing electrocardiograms (ECG) from electrical heart simulations. Since ECG measurements can give insights into a wide range of heart diseases, they offer suitable data to validate our electrophysiological models and verify our numerical schemes at the organ scale. Known numerical issues arising in the context of electrophysiological models will be reviewed. The second class addresses bidirectionally coupled electromechanical models and their efficient numerical treatment. The focus will be on a unified space-time adaptive operator splitting framework developed on top of MFEM, which has so far proven highly efficient for the investigated model classes while still preserving high accuracy. Ricardo Vinuesa (KTH) Modeling and Controlling Turbulent Flows through Deep Learning August 23, 2022 Slides Talk Recording Abstract: The advent of new powerful deep neural networks (DNNs) has fostered their application in a wide range of research areas, including more recently in fluid mechanics. In this presentation, we will cover some of the fundamentals of deep learning applied to computational fluid dynamics (CFD). Furthermore, we explore the capabilities of DNNs to perform various predictions in turbulent flows: we will use convolutional neural networks (CNNs) for non-intrusive sensing, i.e. to predict the flow in a turbulent open channel based on quantities measured at the wall. We show that it is possible to obtain very good flow predictions, outperforming traditional linear models, and we showcase the potential of transfer learning between friction Reynolds numbers of 180 and 550.
We also discuss other modelling methods based on autoencoders (AEs) and generative adversarial networks (GANs), and we present results of deep-reinforcement-learning-based flow control. Jeffrey Banks (RPI) Efficient Techniques for Fluid Structure Interaction: Compatibility Coupling and Galerkin Differences July 26, 2022 Slides Talk Recording Abstract: Predictive simulation increasingly involves the dynamics of complex systems with multiple interacting physical processes. In designing simulation tools for these problems, both the formulation of individual constituent solvers and the coupling of such solvers into a cohesive simulation tool must be addressed. In this talk, I discuss both of these aspects in the context of fluid-structure interaction, where we have recently developed a new class of stable and accurate partitioned solvers that overcome added-mass instability through the use of so-called compatibility boundary conditions. Here I will present partitioned coupling strategies for incompressible FSI. One interesting aspect of CBC-based coupling is the occurrence of nonstandard and/or high-derivative operators, which can make adoption of the techniques challenging, e.g. in the context of finite element methods. To address this, I will also discuss our newly developed Galerkin Difference approximations, which may provide a natural pathway for CBCs in an FEM context. Although GD is fundamentally a finite element approximation based on a Galerkin projection, the underlying GD space is nonstandard and is derived using profitable ideas from the finite difference literature. The resulting schemes possess remarkable properties including nodal superconvergence and the ability to use large CFL-one time steps. I will also present preliminary results for GD discretizations on unstructured grids using MFEM. Paul Fischer (UIUC/ANL) Outlook for Exascale Fluid Dynamics Simulations June 21, 2022 Slides Talk Recording Abstract: We consider design, development, and use of simulation software for exascale computing, with a particular emphasis on fluid dynamics simulation. Our perspective is through the lens of the high-order code Nek5000/RS, which has been developed under DOE's Center for Efficient Exascale Discretizations (CEED). Nek5000/RS is an open source thermal fluids simulation code with a long development history on leadership computing platforms--it was the first commercial software on distributed memory platforms and a Gordon Bell Prize winner on Intel's ASCI Red. There are a myriad of objectives that drive software design choices in HPC, such as scalability, low-memory, portability, and maintainability. Throughout, our design objective has been to address the needs of the user, including facilitating data analysis and ensuring flexibility with respect to platform and number of processors that can be used. When running on large-scale HPC platforms, three of the most common user questions are: How long will my job take? How many nodes will be required? Is there anything I can do to make my job run faster? Additionally, one might have concerns about storage, post-processing (Will I be able to analyze the results? Where?), and queue times. This talk will seek to answer several of these questions over a broad range of fluid-thermal problems from the perspective of a Nek5000/RS user. We specifically address performance with data for NekRS on several of the DOE's pre-exascale architectures, which feature AMD MI250X or NVIDIA V100 or A100 GPUs.
Mike Puso (LLNL) Topics in Immersed Boundary and Contact Methods: Current LLNL Projects and Research May 24, 2022 Slides Talk Recording Abstract: Many of the most interesting phenomena in solid mechanics occur at material interfaces. This can be in the form of fluid-structure interaction, cracks, material discontinuities, impact, etc. Solutions to these problems often require some form of immersed/embedded boundary method or contact or a combination of both. This talk will provide a brief overview of different lab efforts in these areas and present some of the current research aspects and results from LLNL production codes. Technically speaking, the methods discussed here all require Lagrange multipliers to satisfy the constraints on the interface of overlapping or dissimilar meshes, which complicates the solution. Stability and consistency of Lagrange multiplier approaches can be hard to achieve both in space and time. For example, the wrong choice of multiplier space will either over-constrain the problem and/or cause oscillations at the material interfaces for simple statics problems. For dynamics, many of the basic time integration schemes such as Newmark's method are known to be unstable due to gaps opening and closing. Here we introduce some (non-Nitsche) stabilized multiplier spaces for immersed boundary and contact problems and a structure-preserving time integration scheme for long-time dynamic contact problems. Finally, I will describe some ongoing efforts extending this work. Robert Chiodi (UIUC) CHyPS: An MFEM-Based Material Response Solver for Hypersonic Thermal Protection Systems April 26, 2022 Slides Talk Recording Abstract: The University of Illinois at Urbana-Champaign's Center for Hypersonics and Entry Systems Studies has developed a material response solver, named CHyPS, to predict the behavior of thermal protection systems for hypersonic flight. CHyPS uses MFEM to provide the underlying discontinuous Galerkin spatial discretization and linear solvers used to solve the equations. In this talk, we will briefly present the physics and corresponding equations governing material response in hypersonic environments. We will also include a discussion on the implementation of a direct Arbitrary Lagrangian-Eulerian approach to handle mesh movement resulting from the ablation of the material surface. Results for standard community test cases developed at a series of Ablation Workshop meetings over the past decade will be presented and compared to other material response solvers. We will also show the potential of high-order solutions for simulating thermal protection system material response. Tamas Horvath (Oakland University) Space-Time Hybridizable Discontinuous Galerkin with MFEM March 29, 2022 Slides Talk Recording Abstract: Unsteady partial differential equations on deforming domains appear in many real-life scenarios, such as wind turbines, helicopter rotors, car wheels, free-surface flows, etc. We will focus on the space-time finite element method, which is an excellent approach to discretize problems on evolving domains. This method uses discontinuous Galerkin to discretize in both the spatial and temporal directions, allowing for an arbitrarily high-order approximation in space and time. Furthermore, this method automatically satisfies the geometric conservation law, which is essential for accurate solutions on time-dependent domains. The biggest criticism is that the application of space-time discretization increases the computational complexity significantly.
To overcome this, we can use the high-order accurate Hybridizable or Embedded discontinuous Galerkin method. Numerical results will be presented to illustrate the applicability of the method for fluid flow around rigid bodies. Tobin Isaac (Georgia Tech) Unifying the Analysis of Geometric Decomposition in FEEC March 22, 2022 Slides Talk Recording Abstract: Two operations take function spaces and make them suitable for finite element computations. The first is the construction of trace-free subspaces (which creates \"bubble\" functions) and the second is the extension of functions from cell boundaries into cell interiors (which creates edge functions with the correct continuity): together these operations define the geometric decomposition of a function space. In finite element exterior calculus (FEEC), these two operations have been treated separately for the two main families of finite elements: full polynomial elements and trimmed polynomial elements. In this talk we will see how one constructor of trace-free functions and one extension operator can be used for both families, and indeed for all differential forms. We will also examine the practicality of these two operators as tools for implementing geometric decompositions in actual finite element codes. Rapha\u00ebl Zanella (UT Austin) Axisymmetric MFEM-Based Solvers for the Compressible Navier-Stokes Equations and Other Problems March 1, 2022 Slides Talk Recording Abstract: An axisymmetric model leads, when suitable, to a substantial cut in the computational cost with respect to a 3D model. Although not as accurate, the axisymmetric model allows one to quickly obtain a result that can be satisfactory. Simple modifications to a 2D finite element solver allow one to obtain an axisymmetric solver. We present MFEM-based parallel axisymmetric solvers for different problems. We first present simple axisymmetric solvers for the Laplacian problem and the heat equation. We then present an axisymmetric solver for the compressible Navier-Stokes equations. All solvers are based on H^1-conforming finite element spaces. The correctness of the implementation is verified with convergence tests on manufactured solutions. The Navier-Stokes solver is used to simulate axisymmetric flows with an analytical solution (Poiseuille and Taylor-Couette) and an air flow in a plasma torch geometry. Robert Carson (LLNL) An Overview of ExaConstit and Its Use in the ExaAM Project February 1, 2022 Slides Talk Recording Abstract: As additively manufactured (AM) parts become increasingly popular in industry, a growing need exists to help expedite the certification process for parts. The ExaAM project seeks to help this process by producing a workflow to model the AM process from the melt pool all the way up to the part scale response by leveraging multiple physics codes run on upcoming exascale computing platforms. As part of this workflow, ExaConstit is a next-generation quasi-static, solid mechanics FEM code built upon MFEM that is used to connect local microstructures and local properties within the part scale response. Within this talk, we will first provide an overview of ExaConstit, how we have ported it over to the GPU, and some performance numbers on a number of different platforms. Next, we will discuss how we have leveraged MFEM and the FLUX workflow to run hundreds of high-fidelity simulations on Summit in order to generate the local properties needed to drive the part scale simulation in the ExaAM workflow.
Finally, we will show case a few other areas ExaConstit has been used in. Guglielmo Scovazzi (Duke University) The Shifted Boundary Method: An Immersed Approach for Computational Mechanics January 20, 2022 Slides Talk Recording Abstract: Immersed/embedded/unfitted boundary methods obviate the need for continual re-meshing in many applications involving rapid prototyping and design. Unfortunately, many finite element embedded boundary methods are also difficult to implement due to the need to perform complex cell cutting operations at boundaries, and the consequences that these operations may have on the overall conditioning of the ensuing algebraic problems. We present a new, stable, and simple embedded boundary method, named \"shifted boundary method\" (SBM), which eliminates the need to perform cell cutting. Boundary conditions are imposed on a surrogate discrete boundary, lying on the interior of the true boundary interface. We then construct appropriate field extension operators, by way of Taylor expansions, with the purpose of preserving accuracy when imposing the boundary conditions. We demonstrate the SBM on large-scale solid and fracture mechanics problems; thermomechanics problems; porous media flow problems; incompressible flow problems governed by the Navier-Stokes equations (also including free surfaces); and problems governed by hyperbolic conservation laws. Future Talks Martin Kronbichler (Ruhr University Bochum) December 17, 2024 Svetlana Tokareva (Los Alamos National Laboratory) January 14, 2025 Patrick Zulian (Universit\u00e0 della Svizzera italiana / UniDistance Suisse) February 18, 2025 Stefan Turek (Technical University Dortmund) March 11, 2025 \u0141ukasz Kaczmarczyk (University of Glasgow) April 8, 2025", "title": "Seminar"}, {"location": "seminar/#femllnl-seminar-series", "text": "The FEM@LLNL seminar series is focused on finite element research and applications talks of interest to the MFEM community. Videos will be added to a YouTube playlist as well as this site's videos page .", "title": "FEM@LLNL Seminar Series"}, {"location": "seminar/#sign-up", "text": "Fill in this form to sign-up for future FEM@LLNL seminar announcements.", "title": " Sign-Up"}, {"location": "seminar/#next-talk", "text": "", "title": " Next Talk"}, {"location": "seminar/#tbd", "text": "", "title": "TBD"}, {"location": "seminar/#tbd_1", "text": "", "title": "TBD"}, {"location": "seminar/#9am-pdt", "text": "Webex Abstract: TBD", "title": "9am PDT"}, {"location": "seminar/#previous-talks", "text": "", "title": " Previous Talks"}, {"location": "seminar/#pablo-brubeck-university-of-oxford", "text": "", "title": "Pablo Brubeck (University of Oxford)"}, {"location": "seminar/#fiat-from-basis-functions-to-efficient-finite-element-solvers", "text": "", "title": "FIAT: from basis functions to efficient finite element solvers"}, {"location": "seminar/#november-12-2024", "text": "Slides Talk Recording Abstract: The FInite element Automatic Tabulator (FIAT) is a powerful Python library for tabulating basis functions. In this talk, we present two major recent developments in FIAT. First, we have extended the FIAT abstraction to natively support macroelements. Macroelements offer conforming discretizations with highly desirable properties, such as divergence-free vector fields, and divergence-conforming symmetric tensors with low-order polynomial degrees. 
Elements implemented include the Hsieh-Clough-Tocher macroelement for biharmonic problems, the divergence-free, H1-conforming, inf-sup stable Guzm\u00e1n-Neilan macroelement for Stokes, and the Johnson-Mercier macroelement for strongly-symmetric, H(div)-conforming stresses in solid mechanics. We also improved the performance of tabulation and quadrature for simplicial high-order elements, and introduced novel basis functions, leading to solvers with better complexity in polynomial degree. Inspired by the fast diagonalization method, we define new degrees of freedom on simplices as moments against a numerically-computed orthogonal polynomial basis to decouple element interiors in the stiffness matrix. We exploit this decoupling in a domain decomposition method with vertex or edge subdomains on the interface degrees of freedom, and Jacobi relaxation for the interior degrees of freedom. This enables fast solvers for high-order discretizations of the Riesz maps of the spaces of the de Rham complex (Lagrange, N\u00e9d\u00e9lec, Raviart-Thomas, and Brezzi-Douglas-Marini). For each case, we illustrate the performance gains with numerical examples in Firedrake.", "title": "November 12, 2024"}, {"location": "seminar/#denis-ridzal-sandia-national-laboratories", "text": "", "title": "Denis Ridzal (Sandia National Laboratories)"}, {"location": "seminar/#r-adaptive-mesh-optimization-to-enhance-finite-element-basis-compression", "text": "", "title": "R-Adaptive Mesh Optimization to Enhance Finite Element Basis Compression"}, {"location": "seminar/#october-15-2024", "text": "Slides Talk Recording Abstract: Modern computing systems are capable of exascale calculations. While these systems continue to grow in processing power, the available system memory has not increased commensurately. A predominant approach to limit the memory usage in large-scale applications is to exploit the abundant processing power and continually recompute many low-level simulation quantities, rather than storing them. However, this approach can adversely impact the throughput of the simulation and diminish the benefits of modern computing architectures. We present two novel contributions to reduce the memory burden while maintaining performance in simulations based on finite element discretizations. The first contribution develops dictionary-based data compression schemes that detect and exploit the structure of the discretization, due to redundancies across the finite element mesh. These schemes are shown to reduce the memory requirements of key computational kernels by more than 99% on meshes with large numbers of nearly identical mesh cells. For applications where this structure does not exist, our second contribution leverages a recently developed augmented Lagrangian sequential quadratic programming algorithm to enable r-adaptive mesh optimization, with the goal of enhancing redundancies in the mesh. 
Numerical results demonstrate the effectiveness of the proposed methods to detect, exploit and enhance mesh structure on examples inspired by large-scale applications.", "title": "October 15, 2024"}, {"location": "seminar/#daniele-panozzo-courant-institute-nyu", "text": "", "title": "Daniele Panozzo (Courant Institute, NYU)"}, {"location": "seminar/#geometric-predicates-for-unconditionally-robust-elastodynamics-simulation", "text": "", "title": "Geometric Predicates for Unconditionally Robust Elastodynamics Simulation"}, {"location": "seminar/#october-1-2024", "text": "Slides Talk Recording Abstract: The numerical solution of partial differential equations (PDE) is ubiquitously used for physical simulation in scientific computing and engineering. Ideally, a PDE solver should be opaque: the user provides as input the domain boundary, boundary conditions, and the governing equations, and the code returns an evaluator that can compute the value of the solution at any point of the input domain. This is surprisingly far from being the case for all existing open-source or commercial software, despite the research efforts in this direction and the large academic and industrial interest. To a large extent, this is due to lack of robustness in geometric algorithms used to create the discretization, detect collisions, and evaluate element validity. I will present the incremental potential contact simulation paradigm, which provides strong robustness guarantees in simulation codes, ensuring, for the first time, validity of the trajectories accounting for floating point rounding errors over an entire elastodynamic simulation with contact. A core part of this approach is the use of a conservative line-search to check for collisions between geometric primitives and for ensuring validity of the deforming elements over linear trajectories. I will discuss both problems in depth, showing that SOTA approaches favor numerical efficiency but are unfortunately not robust to floating point rounding, leading to major failures in simulation. I will then present an alternative approach based on judiciously using rational and interval types to ensure provable correctness, while keeping a running time comparable with non-conservative methods. To conclude, I will discuss a set of applications enabled by this approach in microscopy and biomechanics, including traction force estimation on a live zebrafish and efficient modeling and simulation of fibrous materials.", "title": "October 1, 2024"}, {"location": "seminar/#ruben-sevilla-swansea-university", "text": "", "title": "Rub\u00e9n Sevilla (Swansea University)"}, {"location": "seminar/#mesh-generation-and-adaptation-using-green-ai", "text": "", "title": "Mesh Generation and Adaptation using Green AI"}, {"location": "seminar/#september-17-2024", "text": "Slides Talk Recording Abstract: Most methods used to solve partial differential equations require creating a mesh that represents the model's geometry. Today, unstructured mesh technology is widely used, allowing three-dimensional meshes with hundreds of millions of elements to be generated in just a few minutes. However, when optimising a design, many simulations are needed for different operating conditions and geometric configurations. Creating the best mesh for each setup becomes time-consuming due to the requirement of excessive human intervention and expertise. This talk will cover our recent work on using artificial intelligence to predict near-optimal meshes suitable for simulations. 
The main idea is to take advantage of the large amount of data that already exists in the industry to improve the selection of a suitable spacing function, including anisotropic spacing. The proposed approach aims to use knowledge from previous simulations to guide the mesh generation process. I will assess the proposed method based on the accuracy of the predictions, efficiency, and environmental impact. This includes considering the carbon footprint and energy consumption of the computations required for a parametric CFD analysis under different flow conditions and angles of attack. For transient problems, the use of high order methods provides several advantages due to the low dissipation and dispersion errors associated with these schemes. An attractive approach to simulate these problems is to incorporate degree adaptive schemes to enhance the approximation only where needed. In this talk I will also present our recent work on using artificial intelligence to aid a degree adaptive process.", "title": "September 17, 2024"}, {"location": "seminar/#esteban-ferrer-and-david-huergo-universidad-politecnica-de-madrid", "text": "", "title": "Esteban Ferrer and David Huergo (Universidad Polit\u00e9cnica de Madrid)"}, {"location": "seminar/#new-avenues-in-high-order-fluid-dynamics", "text": "", "title": "New Avenues in High Order Fluid Dynamics"}, {"location": "seminar/#september-3-2024", "text": "Slides Talk Recording Abstract: We present the latest developments of our High-Order Spectral Element Solver (HORSES3D), an open source high-order discontinuous Galerkin framework capable of solving a variety of flow applications, including compressible flows (with or without shocks), incompressible flows, various RANS and LES turbulence models, particle dynamics, multiphase flows, and aeroacoustics [ 1 ]. Recent developments allow us to simulate challenging multiphysics including turbulent flows, multiphase flows, and moving bodies, using local h and p-adaption. In addition, we present recent work that couples Machine Learning and reinforcement learning techniques with high order simulations.", "title": "September 3, 2024"}, {"location": "seminar/#patrick-farrell-university-of-oxford", "text": "", "title": "Patrick Farrell (University of Oxford)"}, {"location": "seminar/#designing-conservative-and-accurately-dissipative-numerical-integrators-in-time", "text": "", "title": "Designing conservative and accurately dissipative numerical integrators in time"}, {"location": "seminar/#july-30-2024", "text": "Slides Talk Recording Abstract: Numerical methods for the simulation of transient systems with structure-preserving properties are known to exhibit greater accuracy and physical reliability, in particular over long durations. These schemes are often built on powerful geometric ideas for broad classes of problems, such as Hamiltonian or reversible systems. However, there remain difficulties in devising higher-order-in-time structure-preserving discretizations for nonlinear problems, and in conserving non-polynomial invariants. In this work we propose a new, general framework for the construction of structure-preserving timesteppers via finite elements in time and the systematic introduction of auxiliary variables. The framework reduces to Gauss methods where those are structure-preserving, but extends to generate arbitrary-order structure-preserving schemes for nonlinear problems, and allows for the construction of schemes that conserve multiple higher-order invariants.
We demonstrate the ideas by devising novel schemes that exactly conserve all known invariants of the Kepler and Kovalevskaya problems, arbitrary-order schemes for the compressible Navier\u2013Stokes equations that conserve mass, momentum, and energy, and provably dissipate entropy, and multi-conservative schemes for the Benjamin-Bona-Mahony equation.", "title": "July 30, 2024"}, {"location": "seminar/#gonzalo-de-diego-courant-institute", "text": "", "title": "Gonzalo de Diego (Courant Institute)"}, {"location": "seminar/#numerical-solvers-for-viscous-contact-problems-in-glaciology", "text": "", "title": "Numerical Solvers for Viscous Contact Problems in Glaciology"}, {"location": "seminar/#may-6-2024", "text": "Slides Talk Recording Abstract: Viscous contact problems are time-dependent viscous flow problems where the fluid is in contact with a solid surface from which it can detach and reattach. Over sufficiently long timescales, ice is assumed to flow like a viscous fluid with a nonlinear rheology. Therefore, certain phenomena in glaciology, like the formation of subglacial cavities in the base of an ice sheet or the dynamics of marine ice sheets (continental ice sheets that slide into the ocean and go afloat at a grounding line, detaching from the bedrock), can be modelled as viscous contact problems. In particular, these problems can be described by coupling the Stokes equations with contact boundary conditions to free boundary equations that evolve the ice domain in time. In this talk, I will describe the difficulties that arise when attempting to solve this system numerically and I will introduce a method that is capable of overcoming them.", "title": "May 6, 2024"}, {"location": "seminar/#nat-trask-university-of-pennsylvania", "text": "", "title": "Nat Trask (University of Pennsylvania)"}, {"location": "seminar/#a-data-driven-finite-element-exterior-calculus", "text": "", "title": "A Data Driven Finite Element Exterior Calculus"}, {"location": "seminar/#april-2-2024", "text": "Slides Talk Recording Abstract: Despite the recent flurry of work employing machine learning to develop surrogate models to accelerate scientific computation, the \"black-box\" underpinnings of current techniques fail to provide the verification and validation guarantees provided by modern finite element methods. In this talk we present a data-driven finite element exterior calculus for developing reduced-order models of multiphysics systems when the governing equations are either unknown or require closure. The framework employs deep learning architectures typically used for logistic classification to construct a trainable partition of unity which provides notions of control volumes with associated boundary operators. This alternative to a traditional finite element mesh is fully differentiable and allows construction of a discrete de Rham complex with a corresponding Hodge theory. We demonstrate how models may be obtained with the same robustness guarantees as traditional mixed finite element discretization, with deep connections to contemporary techniques in graph neural networks. For applications developing digital twins where surrogates are intended to support real time data assimilation and optimal control, we further develop the framework to support Bayesian optimization of unknown physics on the underlying adjacency matrices of the chain complex. 
By framing the learning of fluxes via an optimal recovery problem with a computationally tractable posterior distribution, we are able to develop models with intrinsic representations of epistemic uncertainty.", "title": "April 2, 2024"}, {"location": "seminar/#william-moses-university-of-illinois-urbana-champaign", "text": "", "title": "William Moses (University of Illinois Urbana-Champaign)"}, {"location": "seminar/#supercharging-programming-through-compiler-technology", "text": "", "title": "Supercharging Programming Through Compiler Technology"}, {"location": "seminar/#march-14-2024", "text": "Slides Talk Recording Abstract: The decline of Moore's law and an increasing reliance on computation has led to an explosion of specialized software packages and hardware architectures. While this diversity enables unprecedented flexibility, it also requires domain-experts to learn how to customize programs to efficiently leverage the latest platform-specific API's and data structures, instead of working on their intended problem. Rather than forcing each user to bear this burden, I propose building high-level abstractions within general-purpose compilers that enable fast, portable, and composable programs to be automatically generated. This talk will demonstrate this approach through compilers that I built for two domains: automatic differentiation and parallelism. These domains are critical to both scientific computing and machine learning, forming the basis of neural network training, uncertainty quantification, and high-performance computing. For example, a researcher hoping to incorporate their climate simulation into a machine learning model must also provide a corresponding derivative simulation. My compiler, Enzyme, automatically generates these derivatives from existing computer programs, without modifying the original application. Moreover, operating within the compiler enables Enzyme to combine differentiation with program optimization, resulting in asymptotically and empirically faster code. Looking forward, this talk will also touch on how this domain-agnostic compiler approach can be applied to new directions, including probabilistic programming.", "title": "March 14, 2024"}, {"location": "seminar/#sungho-lee-university-of-memphis", "text": "", "title": "Sungho Lee (University of Memphis)"}, {"location": "seminar/#laghost-development-of-lagrangian-high-order-solver-for-tectonics", "text": "", "title": "LAGHOST: Development of Lagrangian High-Order Solver for Tectonics"}, {"location": "seminar/#march-5-2024", "text": "Slides Talk Recording Abstract: Long-term geological and tectonic processes associated with large deformation highlight the importance of using a moving Lagrangian frame. However, modern advancements in the finite element method, such as MPI parallelization, GPU acceleration, high-order elements, and adaptive grid refinement for tectonics based on this frame, have not been updated. Moreover, the existing solvers available in open access suffer from limited tutorials, a poor user manual, and several dependencies that make model building complex. These limitations can discourage both new users and developers from utilizing and improving these models. As a result, we are motivated to develop a user-friendly, Lagrangian thermo-mechanical numerical model that incorporates visco-elastoplastic rheology to simulate long-term tectonic processes like mountain building, mantle convection and so on. 
We introduce an ongoing project called LAGHOST (Lagrangian High-Order Solver for Tectonics), which is an MFEM-based tectonic solver. LAGHOST expands the capabilities of MFEM's LAGHOS mini-app. Currently, our solver incorporates constitutive equation, body force, mass scaling, dynamic relaxation, Mohr-Coulomb plasticity, plastic softening, Winkler foundation, remeshing, and remapping. To evaluate LAGHOST, we conducted four benchmark tests. The first test involved compressing an elastic box at a constant velocity, while the second test focused on the compaction of a self-weighted elastic column. To enable larger time-step sizes and achieve quasi-static solutions in the benchmarks, we introduced a fictitious density and implemented dynamic relaxation. This involved scaling the density factor and introducing a portion of force component opposing the previous velocity direction at nodal points. Our results exhibited good agreement with analytical solutions. Subsequently, we incorporated Mohr-Coulomb plasticity, a reliable model for predicting rock failure, into LAGHOST. We revisited the elastic box benchmark and considered plastic materials. By considering stress correction arising from plastic yielding, we confirmed that the updated solution from elastic guess aligned with the analytical solution. Furthermore, we applied LAGHOST to simulate the evolution of a normal fault, a significant tectonic phenomenon. To model normal fault evolution, we introduced strain softening on cohesion as the dominant factor based on geological evidence. Our simulations successfully captured the normal fault's evolution, with plastic strain localizing at shallow depths before propagating deeper. The fault angle reached approximately 60 degrees, in line with the Mohr-Coulomb failure theory.", "title": "March 5, 2024"}, {"location": "seminar/#kevin-chung-llnl", "text": "", "title": "Kevin Chung (LLNL)"}, {"location": "seminar/#data-driven-dg-fem-via-reduced-order-modeling-and-domain-decomposition", "text": "", "title": "Data-Driven DG FEM Via Reduced Order Modeling and Domain Decomposition"}, {"location": "seminar/#february-6-2024", "text": "Slides Talk Recording Abstract: Numerous cutting-edge scientific technologies originate at the laboratory scale, but transitioning them to practical industry applications can be a formidable challenge. Traditional pilot projects at intermediate scales are costly and time-consuming. Alternatives such as E-pilots can rely on high-fidelity numerical simulations, but even these simulations can be computationally prohibitive at larger scales. To overcome these limitations, we propose a scalable, component reduced order model (CROM) method. We employ Discontinuous Galerkin Domain Decomposition (DG-DD) to decompose the physics governing equation for a large-scale system into repeated small-scale unit components. Critical physics modes are identified via proper orthogonal decomposition (POD) from small-scale unit component samples. The computationally expensive, high-fidelity discretization of the physics governing equation is then projected onto these modes to create a reduced order model (ROM) that retains essential physics details. The combination of DG-DD and POD enables ROMs to be used as building blocks comprised of unit components and interfaces, which can then be used to construct a global large-scale ROM without data at such large scales. 
This method is demonstrated on the Poisson and Stokes flow equations, showing that it can solve equations about 15\u221240 times faster with only \u223c 1% relative error, even at scales 1000 times larger than the unit components. This research is ongoing, with efforts to apply these methods to more complex physics such as Navier-Stokes equation, highlighting their potential for transitioning laboratory-scale technologies to practical industrial use.", "title": "February 6, 2024"}, {"location": "seminar/#brian-young", "text": "", "title": "Brian Young"}, {"location": "seminar/#a-full-wave-electromagnetic-simulator-for-frequency-domain-s-parameter-calculations", "text": "", "title": "A Full-Wave Electromagnetic Simulator for Frequency-Domain S-Parameter Calculations"}, {"location": "seminar/#january-9-2024", "text": "Slides Talk Recording Abstract: An open-source and free full-wave electromagnetic simulator is presented that addresses the engineering community\u2019s need for the calculation of frequency-domain S-parameters. Two-dimensional port simulations are used to excite the 3D space and to extract S-parameters using modal projections. Matrix solutions are performed using complex computations. Features enabled by the MFEM library include adaptive mesh refinement, arbitrary order finite elements, and parallel processing using MPI. Implementation details are presented along with sample results and accuracy demonstrations.", "title": "January 9, 2024"}, {"location": "seminar/#jesse-chan-rice-university", "text": "", "title": "Jesse Chan (Rice University)"}, {"location": "seminar/#high-order-positivity-preserving-entropy-stable-discontinuous-galerkin-discretizations", "text": "", "title": "High order positivity-preserving entropy stable discontinuous Galerkin discretizations"}, {"location": "seminar/#december-5-2023", "text": "Slides Talk Recording Abstract: High order discontinuous Galerkin (DG) methods provide high order accuracy and geometric flexibility, but are known to be unstable when applied to nonlinear conservation laws whose solutions exhibit shocks and under-resolved solution features. Entropy stable schemes improve robustness by ensuring that physically relevant solutions satisfy a semi-discrete cell entropy inequality independently of numerical resolution and solution regularization while retaining formal high order accuracy. In this talk, we will review the construction of entropy stable high order discontinuous Galerkin methods and describe approaches for enforcing that solutions are \"physically relevant\" (i.e., the thermodynamic variables remain positive).", "title": "December 5, 2023"}, {"location": "seminar/#youngsoo-choi-lawrence-livermore-national-laboratory", "text": "", "title": "Youngsoo Choi (Lawrence Livermore National Laboratory)"}, {"location": "seminar/#physics-guided-interpretable-data-driven-simulations", "text": "", "title": "Physics-guided interpretable data-driven simulations"}, {"location": "seminar/#november-14-2023", "text": "Slides Talk Recording Abstract: A computationally demanding physical simulation often presents a significant impediment to scientific and technological progress. Fortunately, recent advancements in machine learning (ML) and artificial intelligence have given rise to data-driven methods that can expedite these simulations. For instance, a well-trained 2D convolutional deep neural network can provide a 100,000-fold acceleration in solving complex problems like Richtmyer-Meshkov instability [ 1 ]. 
However, conventional black-box ML models lack the integration of fundamental physics principles, such as the conservation of mass, momentum, and energy. Consequently, they often run afoul of critical physical laws, raising concerns among physicists. These models attempt to compensate for the absence of physics information by relying on vast amounts of data. Additionally, they suffer from various drawbacks, including a lack of structure-preservation, computationally intensive training phases, reduced interpretability, and susceptibility to extrapolation issues. To address these shortcomings, we propose an approach that incorporates physics into the data-driven framework. This integration occurs at different stages of the modeling process, including the sampling and model-building phases. A physics-informed greedy sampling procedure minimizes the necessary training data while maintaining target accuracy [ 2 ]. A physics-guided data-driven model not only preserves the underlying physical structure more effectively but also demonstrates greater robustness in extrapolation compared to traditional black-box ML models. We will showcase numerical results in areas such as hydrodynamics [ 3 , 4 ], particle transport [ 5 ], plasma physics, pore-collapse, and 3D printing to highlight the efficacy of these data-driven approaches. The advantages of these methods will also become apparent in multi-query decision-making applications, such as design optimization [ 6 , 7 ].", "title": "November 14, 2023"}, {"location": "seminar/#ben-southworth-los-alamos-national-laboratory", "text": "", "title": "Ben Southworth (Los Alamos National Laboratory)"}, {"location": "seminar/#superior-discretizations-and-amg-solvers-for-extremely-anisotropic-diffusion-via-hyperbolic-operators", "text": "", "title": "Superior discretizations and AMG solvers for extremely anisotropic diffusion via hyperbolic operators"}, {"location": "seminar/#october-17-2023", "text": "Slides Talk Recording Abstract: Heat conduction in magnetic confinement fusion can reach anisotropy ratios of 10^9-10^10, and in complex problems the direction of anisotropy may not be aligned with (or is impossible to align with) the spatial mesh. Such problems pose major challenges for both discretization accuracy and efficient implicit linear solvers. Although the underlying problem is elliptic or parabolic in nature, we argue that the problem is better approached from the perspective of hyperbolic operators. The problem is posed in a directional gradient first order formulation, introducing a directional heat flux along magnetic field lines as an auxiliary variable. We then develop novel continuous and discontinuous discretizations of the mixed system, using stabilization techniques developed for hyperbolic problems. The resulting block matrix system is then reordered so that the advective operators are on the diagonal, and the system is solved using AMG based on approximate ideal restriction (AIR), which is particularly efficient for upwind discretizations of advection. 
Compared with traditional discretizations and AMG solvers, we achieve orders of magnitude reduction in error and AMG iterations in the extremely anisotropic regime.", "title": "October 17, 2023"}, {"location": "seminar/#natasha-sharma-university-of-texas-at-el-paso", "text": "", "title": "Natasha Sharma (University of Texas at El Paso)"}, {"location": "seminar/#a-continuous-interior-penalty-method-framework-for-sixth-order-cahn-hilliard-type-equations-with-applications-to-microstructure-evolution-and-microemulsions", "text": "", "title": "A Continuous Interior Penalty Method Framework for Sixth Order Cahn-Hilliard-type Equations with applications to microstructure evolution and microemulsions"}, {"location": "seminar/#july-18-2023", "text": "Slides Talk Recording Abstract: The focus of this talk is on presenting unconditionally stable, uniquely solvable, and convergent numerical methods to solve two classes of the sixth-order Cahn-Hilliard-type equations. The first class arises as the so-called phase field crystal atomistic model of crystal growth, which has been employed to simulate a number of physical phenomena such as crystal growth in a supercooled liquid, crack propagation in ductile material, dendritic and eutectic solidification. The second class, henceforth referred to as Microemulsion systems (ME systems) appears as a model capturing the dynamics of phase transitions in ternary oil-water-surfactant systems in which three phases namely a microemulsion, almost pure oil, and almost pure water can coexist in equilibrium. ME systems have several applications ranging from enhanced oil recovery to the development of environmentally friendly solvents and drug delivery systems. Despite the widespread applications of these models, the major challenge impeding their use has been and continues to be a lack of understanding of the complex systems which they model. Thus, building computational models for these systems is crucial to the understanding of these systems. The presence of the higher order derivative in combination with a time-dependent process poses many challenges to the creation of stable, convergent, and efficient numerical methods approximating solutions to these equations. In this talk, we present a continuous interior penalty Galerkin framework for solving these equations and theoretically establish the desirable properties of stability, unique solvability, and first-order convergence. We close the talk by presenting the numerical results of some benchmark problems to verify the practical performance of the proposed approach and discuss some exciting current and future applications.", "title": "July 18, 2023"}, {"location": "seminar/#freddie-witherden-texas-am-university", "text": "", "title": "Freddie Witherden (Texas A&M University)"}, {"location": "seminar/#fsspmdm-accelerating-small-sparse-matrix-multiplications-by-run-time-code-generation", "text": "", "title": "FSSpMDM \u2014 Accelerating Small Sparse Matrix Multiplications by Run-Time Code Generation"}, {"location": "seminar/#june-20-2023", "text": "Slides Talk Recording Abstract: Small matrix multiplications are a key building block of modern high-order finite element method solvers. Such multiplications describe the act of applying a specific finite element operator onto a set of state vectors. The small and irregular size of these multiplications makes them poor candidates for generic matrix multiplication routines. 
Moreover, for elements with a tensor product construction, the operators themselves can exhibit a significant degree of sparsity. In this talk, I will describe the code generation strategies employed by our Fixed Size Sparse Matrix-Dense Matrix (FSSpMDM) routine in libxsmm and show how these result in performant operator kernels for prismatic and hexahedral elements. Strategies will be described for both x86-64 (AVX2/AVX-512) and AARCH64 (NEON/SVE) instruction sets. Results will be presented on recent Intel and Apple CPUs and compared against the well-known GiMMiK C code generation library.", "title": "June 20, 2023"}, {"location": "seminar/#frank-giraldo-naval-postgraduate-school", "text": "", "title": "Frank Giraldo (Naval Postgraduate School)"}, {"location": "seminar/#using-high-order-element-based-galerkin-methods-to-capture-hurricane-intensification", "text": "", "title": "Using High-Order Element-Based Galerkin Methods to Capture Hurricane Intensification"}, {"location": "seminar/#may-16-2023", "text": "Slides Talk Recording Abstract: Properly capturing hurricane rapid intensification (where the winds increase by 30 knots in the first 24 hours) remains challenging for atmospheric models. The reason is that we need LES-type scales \ud835\udcaa(100m) which is still elusive due to computational cost. In this talk, I describe the work that we are doing in this area and how element-based Galerkin Methods are being used to approximate spatial derivatives. I will also discuss the time-integration strategy that we are exploring for this class of problems. In particular, we are exploring process Multirate methods whereby each process in a system of nonlinear partial differential equations (PDEs) uses a time-integrator and time-step commensurate with the wave-speed of that process. We have constructed Multirate methods of any order using extrapolation methods. Along this same idea, we have also developed a multi-modeling framework (MMF) designed to replace the physical parameterizations used in weather/climate models. Our approach is to view the coarse-scale and fine-scale models through the lens of Variational Multi-Scale (VMS) methods in order to give MMF a more rigorous mathematical foundation. Our end goal is to use MMF in order to better resolve the inner core of hurricanes. In addition, I will show some results using flux differencing discontinuous Galerkin Methods for constructing both Kinetic Energy Preserving and Entropy Stable methods and discuss why we need scalable models in order to achieve our goals. Our model, NUMA, is a 3D nonhydrostatic atmospheric model that runs on large CPU clusters and on GPUs.", "title": "May 16, 2023"}, {"location": "seminar/#leszek-f-demkowicz-university-of-texas-at-austin", "text": "", "title": "Leszek F. Demkowicz (University of Texas at Austin)"}, {"location": "seminar/#full-envelope-dpg-approximation-for-electromagnetic-waveguides-stability-and-convergence-analysis", "text": "", "title": "Full Envelope DPG Approximation for Electromagnetic Waveguides. Stability and Convergence Analysis"}, {"location": "seminar/#april-25-2023", "text": "Slides Talk Recording Abstract: The presented work started with a convergence and stability analysis for the so-called full envelope approximation used in analyzing optical amplifiers (lasers). The specific problem of interest was the simulation of Transverse Mode Instabilities (TMI). 
The problem translates into the solution of a system of two nonlinear time-harmonic Maxwell equations coupled with a transient heat equation. Simulation of a 1 m long fiber involves the resolution of 10 M wavelengths. A superefficient MPI/openMP hp FE code run on a supercomputer gets you to the range of ten thousand wavelengths. The resolution of the additional thousand wavelengths is done using an exponential ansatz e^{ikz} in the z-coordinate. This results in a non-standard Maxwell problem. The stability and convergence analysis for the problem has been restricted to the linear case only. It turns out that the modified Maxwell problem is stable if and only if the original waveguide problem is stable and the boundedness below stability constants are identical. We have converged to the task of determining the boundedness below constant. The stability analysis started with an easier, acoustic waveguide problem. Separation of variables leads to an eigenproblem for a self-adjoint operator in the transverse plane (in x,y). Expansion of the solution in terms of the corresponding eigenvectors leads then to a decoupled system of ODEs, and a stability analysis for a two-point BVP for an ODE parametrized with the corresponding eigenvalues. The L^2-orthogonality of the eigenmodes and the stability result for a single mode, lead then to the final result: the inverse boundedness below constant depends inversely linearly upon the length L of the waveguide. The corresponding stability for the Maxwell waveguide turned out to be unexpectedly difficult. The equation is vector-valued so a direct separation of variables is out to begin with. An exponential ansatz in z leads to a non-standard eigenproblem involving an operator that is non-self adjoint even for the easiest, homogeneous case. The answer to the problem came from a tricky analysis of the eigenproblem combined with the perturbation technique for perturbed self-adjoint operators. The use of perturbation theory requires an assumption on the smallness of perturbation of the dielectric constant (around a constant value) but with no additional assumptions on its differentiability (discontinuities are allowed). In the end, the final result is similar to that for the acoustic waveguide - the boundedness below constant depends inversely linearly on L.", "title": "April 25, 2023"}, {"location": "seminar/#joachim-schoberl-vienna-university-of-technology", "text": "", "title": "Joachim Sch\u00f6berl (Vienna University of Technology)"}, {"location": "seminar/#the-netgenngsolve-finite-element-software", "text": "", "title": "The Netgen/NGSolve Finite Element Software"}, {"location": "seminar/#march-28-2023", "text": "Slides Talk Recording Abstract: In this talk we give an overview of the open source finite element software Netgen/NGSolve, available from www.ngsolve.org . We show how to setup various physical models using FEniCS-like Python scripting. We discuss how we use NGSolve for teaching finite element methods, and how recent research projects have contributed to the further development of the NGSolve software. Some recent highlights are matrix-valued finite elements with applications in elasticity, fluid dynamics, and numerical relativity. 
We show how the recently extended framework of linear operators allows the utilization of GPUs for linear solvers, as well as time-dependent problems.", "title": "March 28, 2023"}, {"location": "seminar/#vikram-gavini-university-of-michigan", "text": "", "title": "Vikram Gavini (University of Michigan)"}, {"location": "seminar/#fast-accurate-and-large-scale-ab-initio-calculations-for-materials-modeling", "text": "", "title": "Fast, Accurate and Large-scale Ab-initio Calculations for Materials Modeling"}, {"location": "seminar/#march-7-2023", "text": "Slides Talk Recording Abstract: Electronic structure calculations, especially those using density functional theory (DFT), have been very useful in understanding and predicting a wide range of materials properties. The importance of DFT calculations to engineering and physical sciences is evident from the fact that ~20% of computational resources on some of the world's largest public supercomputers are devoted to DFT calculations. Despite the wide adoption of DFT, the state-of-the-art implementations of DFT suffer from cell-size and geometry limitations, with the widely used codes in solid state physics being limited to periodic geometries and typical simulation domains containing a few hundred atoms. This talk will present our recent advances towards the development of computational methods and numerical algorithms for conducting fast and accurate large-scale DFT calculations using adaptive finite-element discretization, which form the basis for the recently released DFT-FE open-source code . Details of the implementation, including mixed precision algorithms and asynchronous computing, will be presented. The computational efficiency, scalability and performance of DFT-FE will be presented, which demonstrates a significant outperformance of widely used plane-wave DFT codes.", "title": "March 7, 2023"}, {"location": "seminar/#som-dutta-utah-state-university", "text": "", "title": "Som Dutta (Utah State University)"}, {"location": "seminar/#quantifying-the-potential-of-covid-19-transmission-across-scales-using-sem-based-navier-stokes-solver-and-the-ceat", "text": "", "title": "Quantifying the Potential of Covid-19 Transmission Across Scales: Using SEM based Navier-Stokes solver and the CEAT"}, {"location": "seminar/#february-7-2023", "text": "Slides Talk Recording Abstract: The ongoing Covid-19 pandemic has redefined our understanding of respiratory infectious disease transmission. The primary modes of transmission of the SARS-CoV-2 virus have been identified to be airborne, with human generated respiratory aerosols being the main carrier of the virus. Understanding the dispersion of these aerosols/droplets generated during speaking and coughing has helped quantify potential for transmission and design effective mitigation strategies. Through my talk I will present how models at two ends of the spatio-temporal resolution spectrum helped quantify the physics and aid NASA Ames administrators design mitigation strategies. For the higher spatio-temporal resolution I will illustrate how the high-order SEM based Navier-Stokes solver Nek5000/NekRS was utilized to develop the models, including algorithms developed through CEED. I will present the two main modes of respiratory aerosol/droplet dispersal indoors, first at a shorter time-scale through expiratory events like coughing, and second at a longer time-scale through the room ventilation system induced flow and turbulence. 
At the other end of the spatio-temporal resolution, I will talk briefly about Covid-19 Exposure Assessment Tool (CEAT), a novel tool developed to account for multiple factors that affect transmission. I will end my talk by briefly discussing how we can bridge the scales and heterogeneities in the problem with the aid of cutting edge computing and data-driven methods, so that we are fully prepared for the next pandemic. The work presented here has been facilitated by funding through DOE's National Virtual Biotechnology Laboratory (NVBL).", "title": "February 7, 2023"}, {"location": "seminar/#stefan-henneking-university-of-texas-at-austin", "text": "", "title": "Stefan Henneking (University of Texas at Austin)"}, {"location": "seminar/#bayesian-inversion-of-an-acoustic-gravity-model-for-predictive-tsunami-simulation", "text": "", "title": "Bayesian Inversion of an Acoustic-Gravity Model for Predictive Tsunami Simulation"}, {"location": "seminar/#january-10-2023", "text": "Slides Talk Recording Abstract: To improve tsunami preparedness, early-alert systems and real-time monitoring are essential. We use a novel approach for predictive tsunami modeling within the Bayesian inversion framework. This effort focuses on informing the immediate response to an occurring tsunami event using near-field data observation. Our forward model is based on a coupled acoustic-gravity model (e.g., Lotto and Dunham, Comput Geosci (2015) 19:327-340). Similar to other tsunami models, our forward model relies on transient boundary data describing the location and magnitude of the seafloor deformation. In a real-time scenario, these parameter fields must be inferred from a variety of measurements, including observations from pressure gauges mounted on the seafloor. One particular difficulty of this inference problem lies in the accurate inversion from sparse pressure data recorded in the near-field where strong hydroacoustic waves propagate in the compressible ocean; these acoustic waves complicate the task of estimating the hydrostatic pressure changes related to the forming surface gravity wave. Our space-time model is discretized with finite elements in space and finite differences in time. The forward model incurs a high computational complexity, since the pressure waves must be resolved in the 3D compressible ocean over a sufficiently long time span. Due to the infeasibility of rapidly solving the corresponding inverse problem for the fully discretized space-time operator, we discuss approaches for using compact representations of the parameter-to-observable map.", "title": "January 10, 2023"}, {"location": "seminar/#lin-mu-university-of-georgia", "text": "", "title": "Lin Mu (University of Georgia)"}, {"location": "seminar/#an-efficient-and-effective-fem-solver-for-diffusion-equation-with-strong-anisotropy", "text": "", "title": "An Efficient and Effective FEM Solver for Diffusion Equation with Strong Anisotropy"}, {"location": "seminar/#december-13-2022", "text": "Slides Talk Recording Abstract: The Diffusion equation with strong anisotropy has broad applications. In this project, we discuss numerical solution of diffusion equations with strong anisotropy on meshes not aligned with the anisotropic vector field, focusing on application to magnetic confinement fusion. In order to resolve the numerical pollution for simulations on a non-anisotropy-aligned mesh and reduce the associated high computational cost, we developed a high-order discontinuous Galerkin scheme with an efficient preconditioner. 
The auxiliary space preconditioning framework is designed by employing a continuous finite element space as the auxiliary space for the discontinuous finite element space. An effective line smoother that can mitigate the high-frequency error perpendicular to the magnetic field has been designed by a graph-based approach to pick the line smoother that is approximately perpendicular to the vector fields when the mesh does not align with anisotropy. Numerical experiments for several benchmark problems are presented to validate the effectiveness and robustness.", "title": "December 13, 2022"}, {"location": "seminar/#garth-wells-university-of-cambridge", "text": "", "title": "Garth Wells (University of Cambridge)"}, {"location": "seminar/#fenicsx-design-of-the-next-generation-fenics-libraries-for-finite-element-methods", "text": "", "title": "FEniCSx: design of the next generation FEniCS libraries for finite element methods"}, {"location": "seminar/#november-8-2022", "text": "Slides Talk Recording Abstract: The FEniCS Project provides libraries for solving partial differential equations using the finite element method. An aim of the FEniCS Project has been to provide high-performance solver environments that closely mirror mathematical syntax, with the hypothesis that high-level representations mean that solvers are faster to write, easier to debug, and can deliver faster runtime performance than is reasonably possible by hand. Using domain-specific languages and code generation techniques, arguably the FEniCS libraries delivered on these goals for a set of problems. However, over time limitations, including performance and extensibility, became clear and maintainability/sustainability became an issue. Building on experiences from the FEniCS libraries, I will present and discuss the design of the next generation of tools, FEniCSx. The new design retains strengths of the past approach, and addresses limitations using new designs and new tools. Solvers can be written in C++ or Python, and a number of design changes allow the creation of flexible, fast solvers in Python. In the second part of my presentation, I will discuss high-performance finite element kernels (limited to CPUs on this occasion), motivated by the Center for Efficient Exascale Discretizations 'bake-off' problems, and which would not have been possible in the original FEniCS libraries. Double, single and half-precision kernels are considered, and results include (i) the observation that kernels with vector intrinsics can be slower than auto-vectorised kernels for common cases, and (ii) a cache-aware performance model which is remarkably accurate in predicting performance across architectures.", "title": "November 8, 2022"}, {"location": "seminar/#dennis-ogiermann-university-of-bochum", "text": "", "title": "Dennis Ogiermann (University of Bochum)"}, {"location": "seminar/#computing-meets-cardiology-making-heart-simulations-fast-and-accurate", "text": "", "title": "Computing Meets Cardiology: Making Heart Simulations Fast and Accurate"}, {"location": "seminar/#september-13-2022", "text": "Slides Talk Recording Abstract: Heart diseases are a ubiquitous societal burden responsible for a majority of deaths worldwide. A central problem in developing effective treatments for heart diseases is the inherent complexity of the heart as an organ. 
From a modeling perspective, the heart can be interpreted as a biological pump involving multiple physical fields, namely fluid and solid mechanics, as well as chemistry and electricity, all interacting on different time scales. This multiphysics and multiscale aspect makes simulations inherently expensive, especially when approached with naive numerical techniques. However, computational models can be extraordinarily useful in helping us understand how the healthy heart functions and especially how malfunctions influence different diseases. In this context, information about possible weaknesses of therapies can also be obtained to ultimately improve clinical treatment and decision support. In this talk, we will focus primarily on two important model classes in computational cardiology and their respective efficient numerical treatment without compromising significant accuracy. The first class is the problem of computing electrocardiograms (ECG) from electrical heart simulations. Since ECG measurements can give a wide range of insights about a wide range of heart diseases, they offer suitable data to validate our electrophysiological models and verify our numerical schemes on organ-scale. Known numerical issues, arising in the context of electrophysiological models, will be reviewed. The second class addresses bidirectionally coupled electromechanical models and their efficient numerical treatment. Focus will be on a unified space-time adaptive operator splitting framework developed on top of MFEM which proves highly efficient so far for the investigated model classes while still preserving high accuracy.", "title": "September 13, 2022"}, {"location": "seminar/#ricardo-vinuesa-kth", "text": "", "title": "Ricardo Vinuesa (KTH)"}, {"location": "seminar/#modeling-and-controlling-turbulent-flows-through-deep-learning", "text": "", "title": "Modeling and Controlling Turbulent Flows through Deep Learning"}, {"location": "seminar/#august-23-2022", "text": "Slides Talk Recording Abstract: The advent of new powerful deep neural networks (DNNs) has fostered their application in a wide range of research areas, including more recently in fluid mechanics. In this presentation, we will cover some of the fundamentals of deep learning applied to computational fluid dynamics (CFD). Furthermore, we explore the capabilities of DNNs to perform various predictions in turbulent flows: we will use convolutional neural networks (CNNs) for non-intrusive sensing, i.e. to predict the flow in a turbulent open channel based on quantities measured at the wall. We show that it is possible to obtain very good flow predictions, outperforming traditional linear models, and we showcase the potential of transfer learning between friction Reynolds numbers of 180 and 550. We also discuss other modelling methods based on autoencoders (AEs) and generative adversarial networks (GANs), and we present results of deep-reinforcement-learning-based flow control.", "title": "August 23, 2022"}, {"location": "seminar/#jeffrey-banks-rpi", "text": "", "title": "Jeffrey Banks (RPI)"}, {"location": "seminar/#efficient-techniques-for-fluid-structure-interaction-compatibility-coupling-and-galerkin-differences", "text": "", "title": "Efficient Techniques for Fluid Structure Interaction: Compatibility Coupling and Galerkin Differences"}, {"location": "seminar/#july-26-2022", "text": "Slides Talk Recording Abstract: Predictive simulation increasingly involves the dynamics of complex systems with multiple interacting physical processes. 
In designing simulation tools for these problems, both the formulation of individual constituent solvers, as well as coupling of such solvers into a cohesive simulation tool must be addressed. In this talk, I discuss both of these aspects in the context of fluid-structure interaction, where we have recently developed a new class of stable and accurate partitioned solvers that overcome added-mass instability through the use of so-called compatibility boundary conditions. Here I will present partitioned coupling strategies for incompressible FSI. One interesting aspect of CBC-based coupling is the occurrence of nonstandard and/or high-derivative operators, which can make adoption of the techniques challenging, e.g. in the context of FEM methods. To address this, I will also discuss our newly developed Galerkin Difference approximations, which may provide a natural pathway for CBCs in an FEM context. Although GD is fundamentally a finite element approximation based on a Galerkin projection, the underlying GD space is nonstandard and is derived using profitable ideas from the finite difference literature. The resulting schemes possess remarkable properties including nodal superconvergence and the ability to use large CFL-one time steps. I will also present preliminary results for GD discretizations on unstructured grids using MFEM.", "title": "July 26, 2022"}, {"location": "seminar/#paul-fischer-uiucanl", "text": "", "title": "Paul Fischer (UIUC/ANL)"}, {"location": "seminar/#outlook-for-exascale-fluid-dynamics-simulations", "text": "", "title": "Outlook for Exascale Fluid Dynamics Simulations"}, {"location": "seminar/#june-21-2022", "text": "Slides Talk Recording Abstract: We consider design, development, and use of simulation software for exascale computing, with a particular emphasis on fluid dynamics simulation. Our perspective is through the lens of the high-order code Nek5000/RS, which has been developed under DOE's Center for Efficient Exascale Discretizations (CEED). Nek5000/RS is an open source thermal fluids simulation code with a long development history on leadership computing platforms--it was the first commercial software on distributed memory platforms and a Gordon Bell Prize winner on Intel's ASCII Red. There are a myriad of objectives that drive software design choices in HPC, such as scalability, low-memory, portability, and maintainability. Throughout, our design objective has been to address the needs of the user, including facilitating data analysis and ensuring flexibility with respect to platform and number of processors that can be used. When running on large-scale HPC platforms, three of the most common user questions are How long will my job take? How many nodes will be required? Is there anything I can do to make my job run faster? Additionally, one might have concerns about storage, post-processing (Will I be able to analyze the results? Where?), and queue times. This talk will seek to answer several of these questions over a broad range of fluid-thermal problems from the perspective of a Nek5000/RS user. 
We specifically address performance with data for NekRS on several of the DOE's pre-exascale architectures, which feature AMD MI250X or NVIDIA V100 or A100 GPUs.", "title": "June 21, 2022"}, {"location": "seminar/#mike-puso-llnl", "text": "", "title": "Mike Puso (LLNL)"}, {"location": "seminar/#topics-in-immersed-boundary-and-contact-methods-current-llnl-projects-and-research", "text": "", "title": "Topics in Immersed Boundary and Contact Methods: Current LLNL Projects and Research"}, {"location": "seminar/#may-24-2022", "text": "Slides Talk Recording Abstract: Many of the most interesting phenomena in solid mechanics occur at material interfaces. This can be in the form of fluid structure interaction, cracks, material discontinuities, impact, etc. Solutions to these problems often require some form of immersed/embedded boundary method or contact, or a combination of both. This talk will provide a brief overview of different lab efforts in these areas and present some of the current research aspects and results from LLNL production codes. Technically speaking, the methods discussed here all require Lagrange multipliers to satisfy the constraints on the interface of overlapping or dissimilar meshes which complicates the solution. Stability and consistency of Lagrange multiplier approaches can be hard to achieve both in space and time. For example, the wrong choice of multiplier space will either be over-constrained and/or cause oscillations at the material interfaces for simple statics problems. For dynamics, many of the basic time integration schemes such as Newmark's method are known to be unstable due to gaps opening and closing. Here we introduce some (non-Nitsche) stabilized multiplier spaces for immersed boundary and contact problems and a structure preserving time integration scheme for long time dynamic contact problems. Finally, I will describe some on-going efforts extending this work.", "title": "May 24, 2022"}, {"location": "seminar/#robert-chiodi-uiuc", "text": "", "title": "Robert Chiodi (UIUC)"}, {"location": "seminar/#chyps-an-mfem-based-material-response-solver-for-hypersonic-thermal-protection-systems", "text": "", "title": "CHyPS: An MFEM-Based Material Response Solver for Hypersonic Thermal Protection Systems"}, {"location": "seminar/#april-26-2022", "text": "Slides Talk Recording Abstract: The University of Illinois at Urbana-Champaign's Center for Hypersonics and Entry Systems Studies has developed a material response solver, named CHyPS, to predict the behavior of thermal protection systems for hypersonic flight. CHyPS uses MFEM to provide the underlying discontinuous Galerkin spatial discretization and linear solvers used to solve the equations. In this talk, we will briefly present the physics and corresponding equations governing material response in hypersonic environments. We will also include a discussion on the implementation of a direct Arbitrary Lagrangian-Eulerian approach to handle mesh movement resulting from the ablation of the material surface. Results for standard community test cases developed at a series of Ablation Workshop meetings over the past decade will be presented and compared to other material response solvers. 
We will also show the potential of high-order solutions for simulating thermal protection system material response.", "title": "April 26, 2022"}, {"location": "seminar/#tamas-horvath-oakland-university", "text": "", "title": "Tamas Horvath (Oakland University)"}, {"location": "seminar/#space-time-hybridizable-discontinuous-galerkin-with-mfem", "text": "", "title": "Space-Time Hybridizable Discontinuous Galerkin with MFEM"}, {"location": "seminar/#march-29-2022", "text": "Slides Talk Recording Abstract: Unsteady partial differential equations on deforming domains appear in many real-life scenarios, such as wind turbines, helicopter rotors, car wheels, free-surface flows, etc. We will focus on the space-time finite element method, which is an excellent approach to discretize problems on evolving domains. This method uses discontinuous Galerkin to discretize both in the spatial and temporal directions, allowing for an arbitrarily high-order approximation in space and time. Furthermore, this method automatically satisfies the geometric conservation law, which is essential for accurate solutions on time-dependent domains. The biggest criticism is that the application of space-time discretization increases the computational complexity significantly. To overcome this, we can use the high-order accurate Hybridizable or Embedded discontinuous Galerkin method. Numerical results will be presented to illustrate the applicability of the method for fluid flow around rigid bodies.", "title": "March 29, 2022"}, {"location": "seminar/#tobin-isaac-georgia-tech", "text": "", "title": "Tobin Isaac (Georgia Tech)"}, {"location": "seminar/#unifying-the-analysis-of-geometric-decomposition-in-feec", "text": "", "title": "Unifying the Analysis of Geometric Decomposition in FEEC"}, {"location": "seminar/#march-22-2022", "text": "Slides Talk Recording Abstract: Two operations take function spaces and make them suitable for finite element computations. The first is the construction of trace-free subspaces (which creates \"bubble\" functions) and the second is the extension of functions from cell boundaries into cell interiors (which creates edge functions with the correct continuity): together these operations define the geometric decomposition of a function space. In finite element exterior calculus (FEEC), these two operations have been treated separately for the two main families of finite elements: full polynomial elements and trimmed polynomial elements. In this talk we will see how one constructor of trace-free functions and one extension operator can be used for both families, and indeed for all differential forms. We will also examine the practicality of these two operators as tools for implementing geometric decompositions in actual finite element codes.", "title": "March 22, 2022"}, {"location": "seminar/#raphael-zanella-ut-austin", "text": "", "title": "Rapha\u00ebl Zanella (UT Austin)"}, {"location": "seminar/#axisymmetric-mfem-based-solvers-for-the-compressible-navier-stokes-equations-and-other-problems", "text": "", "title": "Axisymmetric MFEM-Based Solvers for the Compressible Navier-Stokes Equations and Other Problems"}, {"location": "seminar/#march-1-2022", "text": "Slides Talk Recording Abstract: An axisymmetric model leads, when suitable, to a substantial cut in the computational cost with respect to a 3D model. Although not as accurate, the axisymmetric model allows one to quickly obtain a result which can be satisfying. 
Simple modifications to a 2D finite element solver allow one to obtain an axisymmetric solver. We present MFEM-based parallel axisymmetric solvers for different problems. We first present simple axisymmetric solvers for the Laplacian problem and the heat equation. We then present an axisymmetric solver for the compressible Navier-Stokes equations. All solvers are based on H^1-conforming finite element spaces. The correctness of the implementation is verified with convergence tests on manufactured solutions. The Navier-Stokes solver is used to simulate axisymmetric flows with an analytical solution (Poiseuille and Taylor-Couette) and an air flow in a plasma torch geometry.", "title": "March 1, 2022"}, {"location": "seminar/#robert-carson-llnl", "text": "", "title": "Robert Carson (LLNL)"}, {"location": "seminar/#an-overview-of-exaconstit-and-its-use-in-the-exaam-project", "text": "", "title": "An Overview of ExaConstit and Its Use in the ExaAM Project"}, {"location": "seminar/#february-1-2022", "text": "Slides Talk Recording Abstract: As additively manufactured (AM) parts become increasingly popular in industry, a growing need exists to help expedite the certifying process of parts. The ExaAM project seeks to help this process by producing a workflow to model the AM process from the melt pool process all the way up to the part scale response by leveraging multiple physics codes run on upcoming exascale computing platforms. As part of this workflow, ExaConstit is a next-generation quasi-static, solid mechanics FEM code built upon MFEM used to connect local microstructures and local properties within the part scale response. Within this talk, we will first provide an overview of ExaConstit, how we have ported it over to the GPU, and some performance numbers on a number of different platforms. Next, we will discuss how we have leveraged MFEM and the FLUX workflow to run hundreds of high-fidelity simulations on Summit in order to generate the local properties needed to drive the part scale simulation in the ExaAM workflow. Finally, we will showcase a few other areas ExaConstit has been used in.", "title": "February 1, 2022"}, {"location": "seminar/#guglielmo-scovazzi-duke-university", "text": "", "title": "Guglielmo Scovazzi (Duke University)"}, {"location": "seminar/#the-shifted-boundary-method-an-immersed-approach-for-computational-mechanics", "text": "", "title": "The Shifted Boundary Method: An Immersed Approach for Computational Mechanics"}, {"location": "seminar/#january-20-2022", "text": "Slides Talk Recording Abstract: Immersed/embedded/unfitted boundary methods obviate the need for continual re-meshing in many applications involving rapid prototyping and design. Unfortunately, many finite element embedded boundary methods are also difficult to implement due to the need to perform complex cell cutting operations at boundaries, and the consequences that these operations may have on the overall conditioning of the ensuing algebraic problems. We present a new, stable, and simple embedded boundary method, named \"shifted boundary method\" (SBM), which eliminates the need to perform cell cutting. Boundary conditions are imposed on a surrogate discrete boundary, lying on the interior of the true boundary interface. We then construct appropriate field extension operators, by way of Taylor expansions, with the purpose of preserving accuracy when imposing the boundary conditions. 
We demonstrate the SBM on large-scale solid and fracture mechanics problems; thermomechanics problems; porous media flow problems; incompressible flow problems governed by the Navier-Stokes equations (also including free surfaces); and problems governed by hyperbolic conservation laws.", "title": "January 20, 2022"}, {"location": "seminar/#future-talks", "text": "", "title": " Future Talks"}, {"location": "seminar/#martin-kronbichler-ruhr-university-bochum", "text": "", "title": "Martin Kronbichler (Ruhr University Bochum)"}, {"location": "seminar/#december-17-2024", "text": "", "title": "December 17, 2024"}, {"location": "seminar/#svetlana-tokareva-los-alamos-national-laboratory", "text": "", "title": "Svetlana Tokareva (Los Alamos National Laboratory)"}, {"location": "seminar/#january-14-2025", "text": "", "title": "January 14, 2025"}, {"location": "seminar/#patrick-zulian-universita-della-svizzera-italiana-unidistance-suisse", "text": "", "title": "Patrick Zulian (Universit\u00e0 della Svizzera italiana / UniDistance Suisse)"}, {"location": "seminar/#february-18-2025", "text": "", "title": "February 18, 2025"}, {"location": "seminar/#stefan-turek-technical-university-dortmund", "text": "", "title": "Stefan Turek (Technical University Dortmund)"}, {"location": "seminar/#march-11-2025", "text": "", "title": "March 11, 2025"}, {"location": "seminar/#ukasz-kaczmarczyk-university-of-glasgow", "text": "", "title": "\u0141ukasz Kaczmarczyk (University of Glasgow)"}, {"location": "seminar/#april-8-2025", "text": "", "title": "April 8, 2025"}, {"location": "serial-tutorial/", "text": "MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$']]}}); Serial Tutorial Summary This tutorial illustrates the building and sample use of the following MFEM serial example codes: Example 1 Example 2 Example 3 An interactive documentation of all example codes is available here . Building Follow the serial instructions to build the MFEM library and to start a GLVis server. The latter is the recommended visualization software for MFEM (though its use is optional). To build the serial example codes, type make in MFEM's examples directory: ~/mfem/examples> make g++ -O3 -I.. ex1.cpp -o ex1 -L.. -lmfem g++ -O3 -I.. ex2.cpp -o ex2 -L.. -lmfem g++ -O3 -I.. ex3.cpp -o ex3 -L.. -lmfem g++ -O3 -I.. ex4.cpp -o ex4 -L.. -lmfem g++ -O3 -I.. ex5.cpp -o ex5 -L.. -lmfem g++ -O3 -I.. ex6.cpp -o ex6 -L.. -lmfem g++ -O3 -I.. ex7.cpp -o ex7 -L.. -lmfem g++ -O3 -I.. ex8.cpp -o ex8 -L.. -lmfem g++ -O3 -I.. ex9.cpp -o ex9 -L.. -lmfem g++ -O3 -I.. ex10.cpp -o ex10 -L.. -lmfem Example 1 This example code demonstrates the use of MFEM to define a simple linear finite element discretization of the Laplace problem $-\\Delta u = 1$ with homogeneous Dirichlet boundary conditions. To run it, simply specify the input mesh file (which will be refined to a final mesh with no more than 50,000 elements): ~/mfem/examples> ex1 -m ../data/star.mesh Iteration : 0 (B r, r) = 0.00111712 Iteration : 1 (B r, r) = 0.00674088 Iteration : 2 (B r, r) = 0.0123008 ... Iteration : 88 (B r, r) = 5.28955e-15 Iteration : 89 (B r, r) = 1.99155e-15 Iteration : 90 (B r, r) = 9.91309e-16 Average reduction factor = 0.857127 If a GLVis server is running, the computed finite element solution will appear in an interactive window: You can examine the solution using the mouse and the GLVis command keystrokes . 
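For reference, the main steps that ex1 performs internally can be sketched as follows. This is a condensed outline assuming the standard serial MFEM API (load a mesh, build a linear H1 space, assemble the diffusion bilinear form and the right hand side, solve with preconditioned CG, and save the results); the full ex1.cpp in the examples directory, which also handles command-line options and picks the refinement level automatically, is the authoritative version.

// Condensed sketch of the serial ex1 workflow: -Delta u = 1 with
// homogeneous Dirichlet boundary conditions on a piecewise-linear H1 space.
#include \"mfem.hpp\"
#include <fstream>
using namespace mfem;

int main()
{
   // Load the mesh (any MFEM-supported format) and refine it uniformly.
   Mesh mesh(\"../data/star.mesh\", 1, 1);
   for (int l = 0; l < 2; l++) { mesh.UniformRefinement(); }

   // Continuous piecewise-linear H1 finite element space.
   H1_FECollection fec(1, mesh.Dimension());
   FiniteElementSpace fespace(&mesh, &fec);

   // Treat all boundary attributes as essential (Dirichlet).
   Array<int> ess_bdr(mesh.bdr_attributes.Max());
   ess_bdr = 1;
   Array<int> ess_tdof_list;
   fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

   // Right hand side (f = 1) and the diffusion bilinear form.
   ConstantCoefficient one(1.0);
   LinearForm b(&fespace);
   b.AddDomainIntegrator(new DomainLFIntegrator(one));
   b.Assemble();
   BilinearForm a(&fespace);
   a.AddDomainIntegrator(new DiffusionIntegrator(one));
   a.Assemble();

   // Form the linear system A X = B with the Dirichlet dofs eliminated.
   GridFunction x(&fespace);
   x = 0.0;
   SparseMatrix A;
   Vector B, X;
   a.FormLinearSystem(ess_tdof_list, x, b, A, X, B);

   // CG preconditioned by a Gauss-Seidel smoother; this solve produces the
   // Iteration : N (B r, r) = ... lines shown in the output above.
   GSSmoother M(A);
   PCG(A, M, B, X, 1, 200, 1e-12, 0.0);
   a.RecoverFEMSolution(X, b, x);

   // Save the refined mesh and the solution for GLVis.
   std::ofstream mesh_ofs(\"refined.mesh\");
   mesh_ofs.precision(8);
   mesh.Print(mesh_ofs);
   std::ofstream sol_ofs(\"sol.gf\");
   sol_ofs.precision(8);
   x.Save(sol_ofs);
   return 0;
}

Returning to the interactive GLVis window: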
Pressing \" RAfjlmm \", for example, will give us a 2D view without light or perspective showing the computed level lines: This example saves two files called refined.mesh and sol.gf , which represent the refined mesh and the computed solution as a grid function. These can be visualized with glvis -m refined.mesh -g sol.gf as discussed here . Example 1 can be run on any mesh that is supported by MFEM, including 3D, curvilinear and VTK meshes, e.g., ~/mfem/examples> ex1 -m ../data/fichera-q2.vtk Iteration : 0 (B r, r) = 0.0235996 Iteration : 1 (B r, r) = 0.0476694 Iteration : 2 (B r, r) = 0.0200109 ... Iteration : 27 (B r, r) = 7.77888e-14 Iteration : 28 (B r, r) = 2.36255e-14 Iteration : 29 (B r, r) = 8.56679e-15 Average reduction factor = 0.610261 The picture above shows the solution with level lines plotted in normal direction of a cutting plane, and was produced by typing \" AaafmIMMooo \" followed by cutting plane adjustments with \" z \", \" y \" and \" w \". Example 2 This example code solves a simple linear elasticity problem describing a multi-material Cantilever beam. Note that the input mesh should have at least two materials and two boundary attributes as shown below: +----------+----------+ boundary --->| material | material |<--- boundary attribute 1 | 1 | 2 | attribute 2 (fixed) +----------+----------+ (pull down) The example demonstrates the use of (high-order) vector finite element spaces by supporting several different discretization options: ~/mfem/examples> ex2 -m ../data/beam-quad.mesh -o 2 Assembling: r.h.s. ... matrix ... done. Iteration : 0 (B r, r) = 1.88755e-06 Iteration : 1 (B r, r) = 8.2357e-07 Iteration : 2 (B r, r) = 9.9098e-07 ... Iteration : 498 (B r, r) = 2.78279e-11 Iteration : 499 (B r, r) = 3.75298e-11 Iteration : 500 (B r, r) = 4.95682e-11 PCG: No convergence! (B r_0, r_0) = 1.88755e-06 (B r_N, r_N) = 4.95682e-11 Number of PCG iterations: 500 Average reduction factor = 0.989508 The output shows the (curved) displaced mesh together with the inverse displacement vector field: The above plot can be alternatively produced with: glvis -m displaced.mesh -g sol.gf -k \"RfjliiiiimmAbb\" Example 2 also works in 3D: ~/mfem/examples> ex2 -m ../data/beam-tet.mesh -o 3 Assembling: r.h.s. ... matrix ... done. Iteration : 0 (B r, r) = 2.7147e-06 Iteration : 1 (B r, r) = 1.95756e-06 Iteration : 2 (B r, r) = 2.24159e-06 ... Iteration : 426 (B r, r) = 3.37563e-14 Iteration : 427 (B r, r) = 3.06198e-14 Iteration : 428 (B r, r) = 2.5706e-14 Average reduction factor = 0.978648 One can visualize the vector field, e.g., by pressing \" dbAfmeoooovvaa \" followed by scale and position adjustments with the mouse: Example 3 This example code solves a simple 3D electromagnetic diffusion problem corresponding to the second order definite Maxwell equation ${\\rm curl\\, curl}\\, E + E = f$ discretized with the lowest order Nedelec finite elements. It computes the approximation error with a know exact solution, and requires a 3D input mesh: ~/mfem/examples> ex3 -m ../data/fichera.mesh Iteration : 0 (B r, r) = 121.209 Iteration : 1 (B r, r) = 21.1137 Iteration : 2 (B r, r) = 12.6503 ... 
Iteration : 149 (B r, r) = 2.40571e-10 Iteration : 150 (B r, r) = 1.39788e-10 Iteration : 151 (B r, r) = 9.43635e-11 Average reduction factor = 0.911811 || E_h - E ||_{L^2} = 0.00976655 To visualize the magnitude of the solution with the proportionally-sized vector field shown only on the boundary of the domain, type \" Vfooogt \" in the GLVis window (or run glvis -m refined.mesh -g sol.gf -k \"Vfooogt\" ): Curved meshes are also supported: ~/mfem/examples> ex3 -m ../data/fichera-q3.mesh Iteration : 0 (B r, r) = 135.613 Iteration : 1 (B r, r) = 22.3785 Iteration : 2 (B r, r) = 12.5215 ... Iteration : 168 (B r, r) = 4.95911e-10 Iteration : 169 (B r, r) = 2.23499e-10 Iteration : 170 (B r, r) = 1.25714e-10 Average reduction factor = 0.921741 || E_h - E ||_{L^2} = 0.0821686 To visualize the entire vector field, type \" fooogtevv \" instead, which will use uniform sized arrows colored according to their magnitude. Here is the corresponding plot from \" ex3 -m ../data/beam-hex.mesh \": Since entire vector fields in 3D might be difficult to see, a good alternative might be to plot the separate components of the field as scalar functions. For example: ~/mfem/examples> ex3 -m ../data/escher.mesh Iteration : 0 (B r, r) = 348.797 Iteration : 1 (B r, r) = 32.0699 Iteration : 2 (B r, r) = 14.902 ... Iteration : 159 (B r, r) = 4.16076e-10 Iteration : 160 (B r, r) = 3.50907e-10 Iteration : 161 (B r, r) = 3.22923e-10 Average reduction factor = 0.917548 || E_h - E ||_{L^2} = 0.36541 ~/mfem/examples> glvis -m refined.mesh -g sol.gf -gc 0 -k \"gooottF\" The discontinuity of the Nedelec functions is clearly seen in the above plot.", "title": "_Serial Tutorial"}, {"location": "serial-tutorial/#serial-tutorial", "text": "", "title": "Serial Tutorial"}, {"location": "serial-tutorial/#summary", "text": "This tutorial illustrates the building and sample use of the following MFEM serial example codes: Example 1 Example 2 Example 3 An interactive documentation of all example codes is available here .", "title": "Summary"}, {"location": "serial-tutorial/#building", "text": "Follow the serial instructions to build the MFEM library and to start a GLVis server. The latter is the recommended visualization software for MFEM (though its use is optional). To build the serial example codes, type make in MFEM's examples directory: ~/mfem/examples> make g++ -O3 -I.. ex1.cpp -o ex1 -L.. -lmfem g++ -O3 -I.. ex2.cpp -o ex2 -L.. -lmfem g++ -O3 -I.. ex3.cpp -o ex3 -L.. -lmfem g++ -O3 -I.. ex4.cpp -o ex4 -L.. -lmfem g++ -O3 -I.. ex5.cpp -o ex5 -L.. -lmfem g++ -O3 -I.. ex6.cpp -o ex6 -L.. -lmfem g++ -O3 -I.. ex7.cpp -o ex7 -L.. -lmfem g++ -O3 -I.. ex8.cpp -o ex8 -L.. -lmfem g++ -O3 -I.. ex9.cpp -o ex9 -L.. -lmfem g++ -O3 -I.. ex10.cpp -o ex10 -L.. -lmfem", "title": "Building"}, {"location": "serial-tutorial/#example-1", "text": "This example code demonstrates the use of MFEM to define a simple linear finite element discretization of the Laplace problem $-\\Delta u = 1$ with homogeneous Dirichlet boundary conditions. To run it, simply specify the input mesh file (which will be refined to a final mesh with no more than 50,000 elements): ~/mfem/examples> ex1 -m ../data/star.mesh Iteration : 0 (B r, r) = 0.00111712 Iteration : 1 (B r, r) = 0.00674088 Iteration : 2 (B r, r) = 0.0123008 ... 
Iteration : 88 (B r, r) = 5.28955e-15 Iteration : 89 (B r, r) = 1.99155e-15 Iteration : 90 (B r, r) = 9.91309e-16 Average reduction factor = 0.857127 If a GLVis server is running, the computed finite element solution will appear in an interactive window: You can examine the solution using the mouse and the GLVis command keystrokes . Pressing \" RAfjlmm \", for example, will give us a 2D view without light or perspective showing the computed level lines: This example saves two files called refined.mesh and sol.gf , which represent the refined mesh and the computed solution as a grid function. These can be visualized with glvis -m refined.mesh -g sol.gf as discussed here . Example 1 can be run on any mesh that is supported by MFEM, including 3D, curvilinear and VTK meshes, e.g., ~/mfem/examples> ex1 -m ../data/fichera-q2.vtk Iteration : 0 (B r, r) = 0.0235996 Iteration : 1 (B r, r) = 0.0476694 Iteration : 2 (B r, r) = 0.0200109 ... Iteration : 27 (B r, r) = 7.77888e-14 Iteration : 28 (B r, r) = 2.36255e-14 Iteration : 29 (B r, r) = 8.56679e-15 Average reduction factor = 0.610261 The picture above shows the solution with level lines plotted in normal direction of a cutting plane, and was produced by typing \" AaafmIMMooo \" followed by cutting plane adjustments with \" z \", \" y \" and \" w \".", "title": "Example 1"}, {"location": "serial-tutorial/#example-2", "text": "This example code solves a simple linear elasticity problem describing a multi-material Cantilever beam. Note that the input mesh should have at least two materials and two boundary attributes as shown below: +----------+----------+ boundary --->| material | material |<--- boundary attribute 1 | 1 | 2 | attribute 2 (fixed) +----------+----------+ (pull down) The example demonstrates the use of (high-order) vector finite element spaces by supporting several different discretization options: ~/mfem/examples> ex2 -m ../data/beam-quad.mesh -o 2 Assembling: r.h.s. ... matrix ... done. Iteration : 0 (B r, r) = 1.88755e-06 Iteration : 1 (B r, r) = 8.2357e-07 Iteration : 2 (B r, r) = 9.9098e-07 ... Iteration : 498 (B r, r) = 2.78279e-11 Iteration : 499 (B r, r) = 3.75298e-11 Iteration : 500 (B r, r) = 4.95682e-11 PCG: No convergence! (B r_0, r_0) = 1.88755e-06 (B r_N, r_N) = 4.95682e-11 Number of PCG iterations: 500 Average reduction factor = 0.989508 The output shows the (curved) displaced mesh together with the inverse displacement vector field: The above plot can be alternatively produced with: glvis -m displaced.mesh -g sol.gf -k \"RfjliiiiimmAbb\" Example 2 also works in 3D: ~/mfem/examples> ex2 -m ../data/beam-tet.mesh -o 3 Assembling: r.h.s. ... matrix ... done. Iteration : 0 (B r, r) = 2.7147e-06 Iteration : 1 (B r, r) = 1.95756e-06 Iteration : 2 (B r, r) = 2.24159e-06 ... Iteration : 426 (B r, r) = 3.37563e-14 Iteration : 427 (B r, r) = 3.06198e-14 Iteration : 428 (B r, r) = 2.5706e-14 Average reduction factor = 0.978648 One can visualize the vector field, e.g., by pressing \" dbAfmeoooovvaa \" followed by scale and position adjustments with the mouse:", "title": "Example 2"}, {"location": "serial-tutorial/#example-3", "text": "This example code solves a simple 3D electromagnetic diffusion problem corresponding to the second order definite Maxwell equation ${\\rm curl\\, curl}\\, E + E = f$ discretized with the lowest order Nedelec finite elements. 
It computes the approximation error with a known exact solution, and requires a 3D input mesh: ~/mfem/examples> ex3 -m ../data/fichera.mesh Iteration : 0 (B r, r) = 121.209 Iteration : 1 (B r, r) = 21.1137 Iteration : 2 (B r, r) = 12.6503 ... Iteration : 149 (B r, r) = 2.40571e-10 Iteration : 150 (B r, r) = 1.39788e-10 Iteration : 151 (B r, r) = 9.43635e-11 Average reduction factor = 0.911811 || E_h - E ||_{L^2} = 0.00976655 To visualize the magnitude of the solution with the proportionally-sized vector field shown only on the boundary of the domain, type \" Vfooogt \" in the GLVis window (or run glvis -m refined.mesh -g sol.gf -k \"Vfooogt\" ): Curved meshes are also supported: ~/mfem/examples> ex3 -m ../data/fichera-q3.mesh Iteration : 0 (B r, r) = 135.613 Iteration : 1 (B r, r) = 22.3785 Iteration : 2 (B r, r) = 12.5215 ... Iteration : 168 (B r, r) = 4.95911e-10 Iteration : 169 (B r, r) = 2.23499e-10 Iteration : 170 (B r, r) = 1.25714e-10 Average reduction factor = 0.921741 || E_h - E ||_{L^2} = 0.0821686 To visualize the entire vector field, type \" fooogtevv \" instead, which will use uniform sized arrows colored according to their magnitude. Here is the corresponding plot from \" ex3 -m ../data/beam-hex.mesh \": Since entire vector fields in 3D might be difficult to see, a good alternative might be to plot the separate components of the field as scalar functions. For example: ~/mfem/examples> ex3 -m ../data/escher.mesh Iteration : 0 (B r, r) = 348.797 Iteration : 1 (B r, r) = 32.0699 Iteration : 2 (B r, r) = 14.902 ... Iteration : 159 (B r, r) = 4.16076e-10 Iteration : 160 (B r, r) = 3.50907e-10 Iteration : 161 (B r, r) = 3.22923e-10 Average reduction factor = 0.917548 || E_h - E ||_{L^2} = 0.36541 ~/mfem/examples> glvis -m refined.mesh -g sol.gf -gc 0 -k \"gooottF\" The discontinuity of the Nedelec functions is clearly seen in the above plot.", "title": "Example 3"}, {"location": "tesla-notes/", "text": "Magnetostatic Equations The magnetostatic equations that we start from are the following: $$\\nabla\\times\\bf H = \\bf J \\label{ampere}$$ $$\\nabla\\cdot{\\bf B}= 0 \\label{mag_gauss}$$ $${\\bf B} = \\mu{\\bf H}+\\mu_0{\\bf M} \\label{const}$$ Where \\eqref{ampere} is Amp\u00e8re's Law, \\eqref{mag_gauss} is Gauss's Law for Magnetism, and \\eqref{const} is a somewhat atypical way to write the Constitutive Relation between ${\\bf B}$ and ${\\bf H}$. The constitutive relation used here follows \"Classical Electrodynamics\" 3rd edition by J.D. Jackson and uses ${\\bf M}$, measured in A/m, to represent the magnetization of a permanent magnet. Some sources would instead use ${\\bf B}_r=\\mu_0{\\bf M}$ to represent a residual magnetization, measured in tesla. These conventions are, of course, mathematically equivalent but the choice made in this miniapp does seem a bit odd as I look at it now. These equations can be combined if we make use of the fact that $\\nabla\\cdot{\\bf B}=0$ implies ${\\bf B}=\\nabla\\times{\\bf A}$ for some vector potential ${\\bf A}$. This leads to: $$\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf A}) = {\\bf J}+ \\nabla\\times(\\mu^{-1}\\mu_0{\\bf M})$$ This equation supports a current source density, a permanent magnetization, surface current boundary conditions, and fixed ${\\bf A}$ boundary condition which can be used to apply an external magnetic field. There also exists a special case in magnetostatics when the current density is equal to zero.
In this case $\\nabla\\times{\\bf H}=0$ which implies that the magnetic field can be computed as ${\\bf H}=-\\nabla\\Phi_M$. This leads to the scalar potential formulation which we will not consider further except to say that the electrostatic solver, named volta , can be adapted to model such situations. The tesla Miniapp The tesla miniapp models the magnetostatic equation for the magnetic vector potential ${\\bf A}$. It includes source terms derived from a volumetric current source ${\\bf J}$, magnetization vector ${\\bf M}$, or surface currents ${\\bf K}$. $$\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf A}) = {\\bf J}+\\nabla\\times(\\mu^{-1}\\mu_0{\\bf M})$$ $$\\hat{n}\\times(\\mu^{-1}\\nabla\\times{\\bf A}) = \\hat{n}\\times{\\bf K}$$ The magnetic vector potential will be approximated in H(Curl) so that the left hand side operator is well defined. $${\\bf A} \\approx \\sum_i a_i {\\bf W}_i (\\vec{x})$$ Inserting this into the left hand side of the equation and integrating the resulting equation against each H(Curl) basis function leads to the following weak form: $$\\begin{align} \\int_{\\Omega}{\\bf W}_{i}(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf A})]d\\Omega & \\approx \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot\\{\\nabla\\times[\\mu^{-1}\\nabla\\times(\\sum_j a_j{\\bf W}_j(\\vec{x}))]\\}d\\Omega \\\\ & = \\sum_j a_j\\{\\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]d\\Omega\\} \\end{align}$$ The expression in curly braces depends only on our material coefficient and our basis functions. This particular integral requires a little more manipulation to move the outermost curl operator onto the H(Curl) basis function. $$\\begin{aligned} \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Omega &=& \\int_\\Omega(\\nabla\\times{\\bf W}_i(\\vec{x}))\\cdot(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))\\,d\\Omega \\\\ &-& \\int_\\Omega\\nabla\\cdot[{\\bf W}_i(\\vec{x})\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Omega \\\\ &=& \\int_\\Omega(\\nabla\\times{\\bf W}_i(\\vec{x}))\\cdot(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))\\,d\\Omega \\\\ &-& \\int_\\Gamma\\hat{n}\\cdot[{\\bf W}_i(\\vec{x})\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Gamma \\\\ &=& \\int_\\Omega(\\nabla\\times{\\bf W}_i(\\vec{x}))\\cdot(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))\\,d\\Omega \\\\ &+& \\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot[\\hat{n}\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Gamma \\\\ \\end{aligned}$$ The first integral remaining on the right hand side is implemented in MFEM as a BilinearFormIntegrator named CurlCurlIntegrator . The second integral, the boundary integral, gives rise to a Neumann boundary condition which will be discussed further in Section 2.1.3 . Source Terms Current Density ${\\bf J}$ The current density ${\\bf J}$ requires special care. In order for the magnetostatic equations to possess a solution ${\\bf J}$ must be in the range of the curl operator. Another way to say this is that the divergence of ${\\bf J}$ must be zero. If $\\nabla\\cdot{\\bf J}\\neq 0$ we can correct this by adding the gradient of a scalar field. 
If we start with some initial estimate of the current density which we call ${\\bf J}_0$, $$\\begin{aligned} \\nabla\\cdot({\\bf J}_0-\\nabla\\Psi) &=& 0 \\\\ \\nabla\\cdot\\nabla\\Psi &=& \\nabla\\cdot{\\bf J}_0 \\\\ {\\bf J}& = & {\\bf J}_0 - \\nabla\\Psi \\end{aligned}$$ The current density ${\\bf J}$ computed in this manner will be divergence free and therefore it will be in the range of the curl operator. Normally, in the continuous world, we simply define ${\\bf J}$ directly, however, in the discrete world we can only approximate ${\\bf J}$ so we must always perform this divergence cleaning procedure on our approximations of ${\\bf J}$. Failure to do so can lead to lack of convergence or complete failure of the solve. In MFEM the divergence cleaning procedure is handled by a class called DivergenceFreeProjector which is not a part of the MFEM library itself. It is provided as part of a collection of convenience classes in the miniapps/common subdirectory. Magnetization ${\\bf M}$ The magnetization ${\\bf M}$ is intended to represent permanent magnets or other regions of prescribed magnetization. In the Tesla miniapp ${\\bf M}$ is discretized using H(Div) basis functions which allow its tangential components to be discontinuous. Its curl appears in the magnetostatic equations as a source term and this curl operation ensures that this source lies in the range of the curl operator so no divergence cleaning operation is needed for this portion of the source. In the Tesla miniapp this source is computed and applied on lines 338-343 in the TeslaSolver::Solve() function. The weak curl operator is configured on lines 168-175 in the TeslaSolver constructor. Surface Current ${\\bf K}$ The integration by parts needed to create the weak form of the curl-curl operators also leads to a boundary integral: $$\\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot[\\hat{n}\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Gamma$$ This means that our weak curl-curl operator applied to ${\\bf A}$ differs from the continuous curl-curl operator by a surface integral of the form: $$\\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot[\\hat{n}\\times(\\mu^{-1}\\nabla\\times{\\bf A})]\\,d\\Gamma = \\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot(\\hat{n}\\times{\\bf H})\\,d\\Gamma$$ If we do nothing to account for this boundary integral we are implicitly setting it equal to zero which amounts to a boundary condition on the tangential part of the magnetic field i.e. $\\hat{n}\\times{\\bf H}=0$. Another possibility is to set a surface current boundary condition i.e. $\\hat{n}\\times{\\bf H}=\\hat{n}\\times{\\bf K}$. This could be done by using a ParLinearForm object to integrate $\\hat{n}\\times{\\bf K}$ over the portion of the boundary where ${\\bf K}$ is non-zero and adding the resulting vector to the right hand side of the linear system. However, this is not the approach used in the Tesla miniapp. In Tesla we employ a trick based on Stokes' theorem. A surface current leads to a discontinuity in the tangential part of ${\\bf H}$ on the boundary. Similarly, a discontinuity in ${\\bf H}$ leads to a discontinuity in ${\\bf A}$ on the boundary. Therefore we can set the tangential part of ${\\bf A}$ to equal ${\\bf K}$ and we get the correct behavior as long as we set the tangential part of ${\\bf A}=0$ elsewhere on the boundary. To be honest I'm not sure how valid this approach is but it does seem to work and it can improve solver convergence. I would recommend confirming this approach before relying on it.
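To make the preceding sections concrete, here is a minimal sketch of how the weak curl-curl operator and the tangential boundary condition on ${\\bf A}$ might be assembled and solved with standard MFEM classes. This is not the actual tesla miniapp code; the names pmesh, order, muInvCoef and the ParLinearForm b holding the (divergence-cleaned) source terms are assumed to exist for illustration. // Nedelec (H(Curl)) space for the vector potential A.
ND_FECollection fec(order, pmesh->Dimension());
ParFiniteElementSpace fespace(pmesh, &fec);

// Essential boundary: the tangential part of A is prescribed on all attributes here.
Array<int> ess_bdr(pmesh->bdr_attributes.Max());
ess_bdr = 1;
Array<int> ess_tdof_list;
fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

ParGridFunction a_gf(&fespace);
a_gf = 0.0; // or project nonzero tangential values of A (e.g. the surface current trick above)

// Weak curl-curl operator (mu^{-1} curl W_j, curl W_i).
ParBilinearForm a(&fespace);
a.AddDomainIntegrator(new CurlCurlIntegrator(muInvCoef));
a.Assemble();

HypreParMatrix A;
Vector B, X;
a.FormLinearSystem(ess_tdof_list, a_gf, b, A, X, B);

// AMS-preconditioned CG; the pure curl-curl operator is singular, which AMS can handle.
HypreAMS ams(A, &fespace);
ams.SetSingularProblem();
HyprePCG pcg(A);
pcg.SetTol(1e-12);
pcg.SetMaxIter(500);
pcg.SetPreconditioner(ams);
pcg.Mult(B, X);

a.RecoverFEMSolution(X, b, a_gf); // a_gf now holds the vector potential A
The tesla miniapp itself organizes these steps differently, inside the TeslaSolver class, but the same ingredients appear there: a CurlCurlIntegrator for the operator and an AMS-preconditioned Krylov solve.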
Post-Processing Computation of ${\\bf H}$ The magnetic field ${\\bf H}$ needs to have tangential continuity so we approximate it using the H(Curl) basis: $${\\bf H}\\approx\\sum_i h_i{\\bf W}_i(\\vec{x})$$ Recall that the magnetic flux ${\\bf B}$ is approximated using the H(Div) basis due to the continuity of its normal component. $${\\bf B}\\approx\\sum_i b_i{\\bf F}_i(\\vec{x})$$ To compute ${\\bf H}$ from ${\\bf B}$ we make use of the constitutive equation ${\\bf B}=\\mu{\\bf H}$. Inserting our approximations and integrating this equation against each H(Curl) basis function we obtain the following: $$\\sum_j h_j\\int_\\Omega\\mu{\\bf W}_i\\cdot{\\bf W}_j\\,d\\Omega = \\sum_k b_k\\int_\\Omega{\\bf W}_i\\cdot{\\bf F}_k\\,d\\Omega$$ This set of linear equations is equivalent to the matrix equation: $$M_1(\\mu)h = M_{21}b$$ Where $M_1(\\mu)$ is an H(Curl) mass matrix incorporating the material coefficient $\\mu$ which is implemented in MFEM as a BilinearFormIntegrator named VectorFEMassIntegrator . The $M_{21}$ operator is a rectangular matrix which maps H(Div) to H(Curl) and is also built using the VectorFEMassIntegrator but with the default material coefficient which is equal to 1. The solution of this linear system is usually obtained with a conjugate gradient iterative solver along with a diagonal scaling preconditioner. Since the matrix to be inverted is a mass matrix this solution is usually very efficient, involving fewer than thirty solver iterations. It is important to point out that an H(Curl) approximation usually has more degrees of freedom than a comparable H(Div) approximation. In the interior of the domain the densities of degrees of freedom are approximately equal, but H(Curl) approximations tend to have more degrees of freedom on the boundary. Consequently, this type of conversion can produce H(Curl) approximations with poor accuracy near the boundary. If the tangential components of ${\\bf B}$ are nearly constant within the elements adjacent to the boundary the conversion can produce a good approximation. However, if these tangential components vary too rapidly non-physical oscillations can occur in ${\\bf H}$. To alleviate these oscillations Dirichlet boundary conditions can be applied during the solution of ${\\bf H}$ provided that reasonable values for $(\\hat{n}\\times{\\bf H})\\times\\hat{n}$ can be determined. In the present magnetostatics context we can reuse any Neumann boundary conditions used during the solution of ${\\bf A}$ since these were equivalent to setting $\\hat{n}\\times{\\bf H}$ on the boundary. Magnetic Energy in a Region The tesla miniapp does not compute the energy in the magnetic field but such a computation should be easy to add. There are two basic procedures for computing energy in MFEM. One involves a bilinear form and the other a linear form. The bilinear form approach makes sense when the energies of multiple fields will be computed with the same operator so that the cost of building the bilinear form can be amortized. In a magnetostatic problem the linear form approach is likely to be more efficient. The usual formula for magnetic energy is $u = \\frac{1}{2}\\int_\\Omega{\\bf H}\\cdot{\\bf B}\\,d\\Omega$. There are many ways to compute this quantity in MFEM but perhaps the most convenient is to make use of a VectorCoefficient and a ParLinearForm .
For example, let's assume we have a coefficient for $\\mu^{-1}$ and a GridFunction for ${\\bf B}$ called Bgf : { VectorGridFunctionCoefficient BCoef(&Bgf); ScalarVectorProductCoefficient HCoef(muInvCoef, BCoef); ParLinearForm Hlf(&HDivFESpace); Hlf.AddDomainIntegrator(new VectorFEDomainLFIntegrator(HCoef)); Hlf.Assemble(); double energy = 0.5 * Hlf(Bgf); } This integral can be restricted to some region, defined by a set of element attributes, by incorporating a VectorRestrictedCoefficient . Other forms of energy such as $\\frac{1}{2}\\int_\\Omega{\\bf J}\\cdot{\\bf A}\\,d\\Omega$ or perhaps $\\int_\\Omega{\\bf M}\\cdot{\\bf B}\\,d\\Omega$ could be computed in a similar manner. Torque on a Current Density Torque can also be defined as a volume integral so we can employ a technique similar to the one used for the energy computation. The important difference is that torque is a vector quantity so we will need to integrate each of its vector components separately. This will likely require custom coefficients but the procedure should be straightforward. The existing vector coefficient classes ScalarVectorProductCoefficient and VectorCrossProductCoefficient should serve as guides for how this can be accomplished. Torque on a Permanent Magnet Torque on a Surface Current In theory a surface integral can be computed in a very similar manner to a volume integral. However, discontinuous finite element spaces such as H(Curl), H(Div), or L2 create a complication. Approximations made with these discontinuous fields do not possess well defined values on surfaces. Consequently such an integral could lack precision or even be multi-valued. To overcome this limitation it may be necessary to compute different contributions to the torque in different manners and combine the results. For example the normal component of ${\\bf B}$ is well defined on surfaces. Therefore the force ${\\bf K}\\times{\\bf B}$ may be inaccurate but the quantity $(\\hat{n}\\cdot{\\bf B}){\\bf K}\\times\\hat{n}$ will be more reliable. To obtain another contribution to the torque we can use the tangential components of ${\\bf H}$ as $\\mu{\\bf K}\\times[(\\hat{n}\\times{\\bf H})\\times\\hat{n}]$. This of course assumes that we have an accurate representation of ${\\bf H}$ on this surface which may not be the case if the surface is an outer boundary (see Section Computation of H ).
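The appendices below collect longer code sketches for energy and torque. As a smaller complement, the conversion from ${\\bf B}$ to ${\\bf H}$ described in the Computation of ${\\bf H}$ section can be sketched compactly as follows. This is not the tesla miniapp implementation; the names HCurlFESpace, HDivFESpace, muCoef, b_gf and h_gf (the ND and RT spaces, a coefficient for $\\mu$, and grid functions for ${\\bf B}$ and ${\\bf H}$) are assumed for illustration. // Solve M_1(mu) h = M_21 b for the true dofs of H.
ParBilinearForm m1(&HCurlFESpace);
m1.AddDomainIntegrator(new VectorFEMassIntegrator(muCoef));
m1.Assemble();
m1.Finalize();
HypreParMatrix *M1 = m1.ParallelAssemble();

ParMixedBilinearForm m21(&HDivFESpace, &HCurlFESpace);
m21.AddDomainIntegrator(new VectorFEMassIntegrator); // default unit coefficient
m21.Assemble();
m21.Finalize();
HypreParMatrix *M21 = m21.ParallelAssemble();

Vector b_tdof, rhs(M1->Height()), h_tdof(M1->Height());
b_gf.GetTrueDofs(b_tdof);
M21->Mult(b_tdof, rhs);

// Diagonally scaled CG: cheap and typically converges quickly for mass matrices.
HypreDiagScale jacobi(*M1);
HyprePCG pcg(*M1);
pcg.SetTol(1e-12);
pcg.SetMaxIter(100);
pcg.SetPrintLevel(0);
pcg.SetPreconditioner(jacobi);
h_tdof = 0.0;
pcg.Mult(rhs, h_tdof);
h_gf.SetFromTrueDofs(h_tdof);

delete M21;
delete M1;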
Appendix A: Magnetic Energy class MagneticEnergy { private: const ParGridFunction & b_; const ParGridFunction & h_; public: MagneticEnergy(const ParGridFunction & b, const ParGridFunction & h) : b_(b), h_(h) {} double ComputeEnergy() { VectorGridFunctionCoefficient h_coef(&h_); ParLinearForm h_lf(b_.ParFESpace()); h_lf.AddDomainIntegrator(new VectorFEDomainLFIntegrator(h_coef)); h_lf.Assemble(); return 0.5 * h_lf(b_); } double ComputeEnergy(const Array & elem_attr_marker) { VectorGridFunctionCoefficient h_coef(&h_); ParLinearForm h_lf(b_.ParFESpace()); h_lf.AddDomainIntegrator(new VectorFEDomainLFIntegrator(h_coef), const_cast&>(elem_attr_marker)); h_lf.Assemble(); return 0.5 * h_lf(b_); } }; Appendix B: Torque class Torque { private: const ParGridFunction & b_; const ParGridFunction & h_; const ParGridFunction & j_; public: Torque(const ParGridFunction & b, const ParGridFunction & h, const ParGridFunction & j) : b_(b), h_(h), j_(j) {} void ComputeTorqueOnSurface(const Array &bdr_attr_marker, const Vector ¢, Vector &T); void ComputeTorqueOnVolume(const Array &vol_attr_marker, const Vector ¢, Vector &T); }; void Torque::ComputeTorqueOnSurface(const Array &bdr_attr_marker, const Vector ¢, Vector &trq) { trq = 0.0; ParFiniteElementSpace * fes = b_.ParFESpace(); ParMesh *mesh = b_.ParFESpace()->GetParMesh(); ElementTransformation *eltrans = NULL; Vector b, h, ht(3), nor(3), x(3), f(3), loc_trq(3); loc_trq = 0.0; for (int i=0; iGetNBE(); i++) { const int bdr_attr = mesh->GetBdrAttribute(i); if (bdr_attr_marker[bdr_attr-1] == 0) { continue; } eltrans = fes->GetBdrElementTransformation(i); const FiniteElement &el = *fes->GetBE(i); const IntegrationRule *ir = NULL; if (ir == NULL) { const int order = 2*el.GetOrder() + eltrans->OrderW(); // <----- ir = &IntRules.Get(eltrans->GetGeometryType(), order); } for (int pi = 0; pi < ir->GetNPoints(); ++pi) { const IntegrationPoint &ip = ir->IntPoint(pi); eltrans->SetIntPoint(&ip); CalcOrtho(eltrans->Jacobian(), nor); double a = nor.Norml2(); eltrans->Transform(ip, x); b_.GetVectorValue(*eltrans, ip, b); h_.GetVectorValue(*eltrans, ip, h); double bn = b * nor / a; double hn = h * nor / a; add(h, -hn / a, nor, ht); f.Set(ip.weight * bn * bn / mu0_, nor); f.Add(ip.weight * a * bn, ht); f.Add(-0.5 * ip.weight * (mu0_ * (ht * ht) + bn * bn / mu0_), nor); loc_trq[0] += (x[1]-cent[1]) * f[2] - (x[2]-cent[2]) * f[1]; loc_trq[1] += (x[2]-cent[2]) * f[0] - (x[0]-cent[0]) * f[2]; loc_trq[2] += (x[0]-cent[0]) * f[1] - (x[1]-cent[1]) * f[0]; } } MPI_Allreduce(loc_trq, trq, 3, MPI_DOUBLE, MPI_SUM, fes->GetComm()); } void Torque::ComputeTorqueOnVolume(const Array &attr_marker, const Vector ¢, Vector &trq) { trq = 0.0; ParFiniteElementSpace * fes = b_.ParFESpace(); ParMesh *mesh = b_.ParFESpace()->GetParMesh(); ElementTransformation *eltrans = NULL; Vector b, j, x(3), f(3), t(3), loc_trq(3); loc_trq = 0.0; for (int i=0; iGetNE(); i++) { const int attr = mesh->GetAttribute(i); if (attr_marker[attr-1] == 0) { continue; } eltrans = fes->GetElementTransformation(i); const FiniteElement &el = *fes->GetFE(i); const IntegrationRule *ir = NULL; if (ir == NULL) { const int order = 2*el.GetOrder() + eltrans->OrderW(); // <----- ir = &IntRules.Get(eltrans->GetGeometryType(), order); } for (int pi = 0; pi < ir->GetNPoints(); ++pi) { const IntegrationPoint &ip = ir->IntPoint(pi); eltrans->SetIntPoint(&ip); eltrans->Transform(ip, x); b_.GetVectorValue(*eltrans, ip, b); j_.GetVectorValue(*eltrans, ip, j); f[0] = j[1] * b[2] - j[2] * b[1]; f[1] = j[2] * b[0] - j[0] * b[2]; f[2] = j[0] * 
b[1] - j[1] * b[0]; t[0] = (x[1]-cent[1]) * f[2] - (x[2]-cent[2]) * f[1]; t[1] = (x[2]-cent[2]) * f[0] - (x[0]-cent[0]) * f[2]; t[2] = (x[0]-cent[0]) * f[1] - (x[1]-cent[1]) * f[0]; loc_trq.Add(ip.weight * eltrans->Weight(), t); } } MPI_Allreduce(loc_trq, trq, 3, MPI_DOUBLE, MPI_SUM, fes->GetComm()); } MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "_Tesla Notes"}, {"location": "tesla-notes/#magnetostatic-equations", "text": "The magnetostatic equations that we start from are the following: $$\\nabla\\times\\bf H = \\bf J \\label{ampere}$$ $$\\nabla\\cdot{\\bf B}= 0 \\label{mag_gauss}$$ $${\\bf B} = \\mu{\\bf H}+\\mu_0{\\bf M} \\label{const}$$ Where \\eqref{ampere} is Amp\u00e8re's Law, \\eqref{mag_gauss} is Gauss's Law for Magnetism, and \\eqref{const} is a somewhat atypical way to write the Constitutive Relation between ${\\bf B}$ and ${\\bf H}$. The constitutive relation used here follows \"Classical Electrodynamics\" 3rd edition by J.D. Jackson and uses ${\\bf M}$, measured in A/m, to represent the magnetization of a permanent magnet. Some sources would instead use ${\\bf B}_r=\\mu_0{\\bf M}$ to represent a residual magnetization, measured in tesla. These conventions are, of course, mathematically equivalent but the choice made in this miniapp does seem a bit odd as I look at it now. These equations can be combined if we make use of the fact that $\\nabla\\cdot{\\bf B}=0$ implies ${\\bf B}=\\nabla\\times{\\bf A}$ for some vector potential ${\\bf A}$. This leads to: $$\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf A}) = {\\bf J}+ \\nabla\\times(\\mu^{-1}\\mu_0{\\bf M})$$ This equation supports a current source density, a permanent magnetization, surface current boundary conditions, and fixed ${\\bf A}$ boundary condition which can be used to apply an external magnetic field. There also exists a special case in magnetostatics when the current density is equal to zero. In this case $\\nabla\\times{\\bf H}=0$ which implies that the magnetic field can be computed as ${\\bf H}=-\\nabla\\Phi_M$. This leads to the scalar potential formulation which we will not consider further except to say that the electrostatic solver, named volta , can be adapted to model such situations.", "title": "Magnetostatic Equations"}, {"location": "tesla-notes/#the-tesla-miniapp", "text": "The tesla miniapp models the magnetostatic equation for the magnetic vector potential ${\\bf A}$. It includes source terms derived from a volumetric current source ${\\bf J}$, magnetization vector ${\\bf M}$, or surface currents ${\\bf K}$. $$\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf A}) = {\\bf J}+\\nabla\\times(\\mu^{-1}\\mu_0{\\bf M})$$ $$\\hat{n}\\times(\\mu^{-1}\\nabla\\times{\\bf A}) = \\hat{n}\\times{\\bf K}$$ The magnetic vector potential will be approximated in H(Curl) so that the left hand side operator is well defined. 
$${\\bf A} \\approx \\sum_i a_i {\\bf W}_i (\\vec{x})$$ Inserting this into the left hand side of the equation and integrating the resulting equation against each H(Curl) basis function leads to the following weak form: $$\\begin{align} \\int_{\\Omega}{\\bf W}_{i}(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf A})]d\\Omega & \\approx \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot\\{\\nabla\\times[\\mu^{-1}\\nabla\\times(\\sum_j a_j{\\bf W}_j(\\vec{x}))]\\}d\\Omega \\\\ & = \\sum_j a_j\\{\\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]d\\Omega\\} \\end{align}$$ The expression in curly braces depends only on our material coefficient and our basis functions. This particular integral requires a little more manipulation to move the outermost curl operator onto the H(Curl) basis function. $$\\begin{aligned} \\int_\\Omega{\\bf W}_i(\\vec{x})\\cdot[\\nabla\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Omega &=& \\int_\\Omega(\\nabla\\times{\\bf W}_i(\\vec{x}))\\cdot(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))\\,d\\Omega \\\\ &-& \\int_\\Omega\\nabla\\cdot[{\\bf W}_i(\\vec{x})\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Omega \\\\ &=& \\int_\\Omega(\\nabla\\times{\\bf W}_i(\\vec{x}))\\cdot(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))\\,d\\Omega \\\\ &-& \\int_\\Gamma\\hat{n}\\cdot[{\\bf W}_i(\\vec{x})\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Gamma \\\\ &=& \\int_\\Omega(\\nabla\\times{\\bf W}_i(\\vec{x}))\\cdot(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))\\,d\\Omega \\\\ &+& \\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot[\\hat{n}\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Gamma \\\\ \\end{aligned}$$ The first integral remaining on the right hand side is implemented in MFEM as a BilinearFormIntegrator named CurlCurlIntegrator . The second integral, the boundary integral, gives rise to a Neumann boundary condition which will be discussed further in Section 2.1.3 .", "title": "The tesla Miniapp"}, {"location": "tesla-notes/#source-terms", "text": "", "title": "Source Terms"}, {"location": "tesla-notes/#current-density-bf-j", "text": "The current density ${\\bf J}$ requires special care. In order for the magnetostatic equations to possess a solution ${\\bf J}$ must be in the range of the curl operator. Another way to say this is that the divergence of ${\\bf J}$ must be zero. If $\\nabla\\cdot{\\bf J}\\neq 0$ we can correct this by adding the gradient of a scalar field. If we start with some initial estimate of the current density which we call ${\\bf J}_0$, $$\\begin{aligned} \\nabla\\cdot({\\bf J}_0-\\nabla\\Psi) &=& 0 \\\\ \\nabla\\cdot\\nabla\\Psi &=& \\nabla\\cdot{\\bf J}_0 \\\\ {\\bf J}& = & {\\bf J}_0 - \\nabla\\Psi \\end{aligned}$$ The current density ${\\bf J}$ computed in this manner will be divergence free and therefore it will be in the range of the curl operator. Normally, in the continuous world, we simply define ${\\bf J}$ directly, however, in the discrete world we can only approximate ${\\bf J}$ so we must always perform this divergence cleaning procedure on our approximations of ${\\bf J}$. Failure to do so can lead to lack of convergence or complete failure of the solve. In MFEM the divergence cleaning procedure is handled by a class called DivergenceFreeProjector which is not a part of the MFEM library itself. 
It is provided as part of a collection of convenience classes in the miniapps/common subdirectory.", "title": "Current Density ${\\bf J}$"}, {"location": "tesla-notes/#magnetization-bf-m", "text": "The magnetization ${\\bf M}$ is intended to represent permanent magnets or other regions of prescribed magnetization. In the Tesla miniapp ${\\bf M}$ is discretized using H(Div) basis functions which allow its tangential components to be discontinuous. Its curl appears in the magnetostatic equations as a source term and this curl operation ensures that this source lies in the range of the curl operator so no divergence cleaning operation is needed for this portion of the source. In the Tesla miniapp this source is computed and applied on lines 338-343 in the TeslaSolver::Solve() function. The weak curl operator is configured on lines 168-175 in the TeslaSolver constructor.", "title": "Magnetization ${\\bf M}$"}, {"location": "tesla-notes/#sec:surf_current", "text": "The integration by parts needed to create the weak form of the curl-curl operators also leads to a boundary integral: $$\\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot[\\hat{n}\\times(\\mu^{-1}\\nabla\\times{\\bf W}_j(\\vec{x}))]\\,d\\Gamma$$ This means that our weak curl-curl operator applied to ${\\bf A}$ differs from the continuous curl-curl operator by a surface integral of the form: $$\\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot[\\hat{n}\\times(\\mu^{-1}\\nabla\\times{\\bf A})]\\,d\\Gamma = \\int_\\Gamma{\\bf W}_i(\\vec{x})\\cdot(\\hat{n}\\times{\\bf H})\\,d\\Gamma$$ If we do nothing to account for this boundary integral we are implicitly setting it equal to zero which amounts to a boundary condition on the tangential part of the magnetic field i.e. $\\hat{n}\\times{\\bf H}=0$. Another possibility is to set a surface current boundary condition i.e. $\\hat{n}\\times{\\bf H}=\\hat{n}\\times{\\bf K}$. This could be done by using a ParLinearForm object to integrate $\\hat{n}\\times{\\bf K}$ over the portion of the boundary where ${\\bf K}$ is non-zero and adding the resulting vector to the right hand side of the linear system. However, this is not the approach used in the Tesla miniapp. In Tesla we employ a trick based on Stokes' theorem. A surface current leads to a discontinuity in the tangential part of ${\\bf H}$ on the boundary. Similarly, a discontinuity in ${\\bf H}$ leads to a discontinuity in ${\\bf A}$ on the boundary. Therefore we can set the tangential part of ${\\bf A}$ to equal ${\\bf K}$ and we get the correct behavior as long as we set the tangential part of ${\\bf A}=0$ elsewhere on the boundary. To be honest I'm not sure how valid this approach is but it does seem to work and it can improve solver convergence. I would recommend confirming this approach before relying on it.", "title": "Surface Current ${\\bf K}$"}, {"location": "tesla-notes/#post-processing", "text": "", "title": "Post-Processing"}, {"location": "tesla-notes/#sec:h_comp", "text": "The magnetic field ${\\bf H}$ needs to have tangential continuity so we approximate it using the H(Curl) basis: $${\\bf H}\\approx\\sum_i h_i{\\bf W}_i(\\vec{x})$$ Recall that the magnetic flux ${\\bf B}$ is approximated using the H(Div) basis due to the continuity of its normal component. $${\\bf B}\\approx\\sum_i b_i{\\bf F}_i(\\vec{x})$$ To compute ${\\bf H}$ from ${\\bf B}$ we make use of the constitutive equation ${\\bf B}=\\mu{\\bf H}$.
Inserting our approximations and integrating this equation against each H(Curl) basis function we obtain the following: $$\\sum_j h_j\\int_\\Omega\\mu{\\bf W}_i\\cdot{\\bf W}_j\\,d\\Omega = \\sum_k b_k\\int_\\Omega{\\bf W}_i\\cdot{\\bf F}_k\\,d\\Omega$$ This set of linear equations is equivalent to the matrix equation: $$M_1(\\mu)h = M_{21}b$$ Where $M_1(\\mu)$ is an H(Curl) mass matrix incorporating the material coefficient $\\mu$ which is implemented in MFEM as a BilinearFormIntegrator named VectorFEMassIntegrator . The $M_{21}$ operator is a rectangular matrix which maps H(Div) to H(Curl) and is also built using the VectorFEMassIntegrator but with the default material coefficient which is equal to 1. The solution of this linear system is usually obtained with a conjugate gradient iterative solver along with a diagonal scaling preconditioner. Since the matrix to be inverted is a mass matrix this solution is usually very efficient, involving fewer than thirty solver iterations. It is important to point out that an H(Curl) approximation usually has more degrees of freedom than a comparable H(Div) approximation. In the interior of the domain the densities of degrees of freedom are approximately equal, but H(Curl) approximations tend to have more degrees of freedom on the boundary. Consequently, this type of conversion can produce H(Curl) approximations with poor accuracy near the boundary. If the tangential components of ${\\bf B}$ are nearly constant within the elements adjacent to the boundary the conversion can produce a good approximation. However, if these tangential components vary too rapidly non-physical oscillations can occur in ${\\bf H}$. To alleviate these oscillations Dirichlet boundary conditions can be applied during the solution of ${\\bf H}$ provided that reasonable values for $(\\hat{n}\\times{\\bf H})\\times\\hat{n}$ can be determined. In the present magnetostatics context we can reuse any Neumann boundary conditions used during the solution of ${\\bf A}$ since these were equivalent to setting $\\hat{n}\\times{\\bf H}$ on the boundary.", "title": "Computation of ${\\bf H}$"}, {"location": "tesla-notes/#magnetic-energy-in-a-region", "text": "The tesla miniapp does not compute the energy in the magnetic field but such a computation should be easy to add. There are two basic procedures for computing energy in MFEM. One involves a bilinear form and the other a linear form. The bilinear form approach makes sense when the energies of multiple fields will be computed with the same operator so that the cost of building the bilinear form can be amortized. In a magnetostatic problem the linear form approach is likely to be more efficient. The usual formula for magnetic energy is $u = \\frac{1}{2}\\int_\\Omega{\\bf H}\\cdot{\\bf B}\\,d\\Omega$. There are many ways to compute this quantity in MFEM but perhaps the most convenient is to make use of a VectorCoefficient and a ParLinearForm . For example, let's assume we have a coefficient for $\\mu^{-1}$ and a GridFunction for ${\\bf B}$ called Bgf : { VectorGridFunctionCoefficient BCoef(&Bgf); ScalarVectorProductCoefficient HCoef(muInvCoef, BCoef); ParLinearForm Hlf(&HDivFESpace); Hlf.AddDomainIntegrator(new VectorFEDomainLFIntegrator(HCoef)); Hlf.Assemble(); double energy = 0.5 * Hlf(Bgf); } This integral can be restricted to some region, defined by a set of element attributes, by incorporating a VectorRestrictedCoefficient .
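For instance, a minimal sketch of such a restriction, assuming the BCoef, HCoef, Bgf and HDivFESpace objects from the snippet above plus a ParMesh pmesh, and using element attribute 1 purely for illustration, might read: // Marker array selecting the element attributes of the region of interest.
Array<int> region_marker(pmesh->attributes.Max());
region_marker = 0;
region_marker[0] = 1; // attribute 1 (illustrative choice)

// Zero out the integrand outside the selected attributes.
VectorRestrictedCoefficient HRegionCoef(HCoef, region_marker);

ParLinearForm Hlf_region(&HDivFESpace);
Hlf_region.AddDomainIntegrator(new VectorFEDomainLFIntegrator(HRegionCoef));
Hlf_region.Assemble();
double region_energy = 0.5 * Hlf_region(Bgf);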
Other forms of energy such as $\\frac{1}{2}\\int_\\Omega{\\bf J}\\cdot{\\bf A}\\,d\\Omega$ or perhaps $\\int_\\Omega{\\bf M}\\cdot{\\bf B}\\,d\\Omega$ could be computed in a similar manner.", "title": "Magnetic Energy in a Region"}, {"location": "tesla-notes/#torque-on-a-current-density", "text": "Torque can also be defined as a volume integral so we can employ a technique similar to the one used for the energy computation. The important difference is that torque is a vector quantity so we will need to integrate each of its vector components separately. This will likely require custom coefficients but the procedure should be straightforward. The existing vector coefficient classes ScalarVectorProductCoefficient and VectorCrossProductCoefficient should serve as guides for how this can be accomplished.", "title": "Torque on a Current Density"}, {"location": "tesla-notes/#torque-on-a-permanent-magnet", "text": "", "title": "Torque on a Permanent Magnet"}, {"location": "tesla-notes/#torque-on-a-surface-current", "text": "In theory a surface integral can be computed in a very similar manner to a volume integral. However, discontinuous finite element spaces such as H(Curl), H(Div), or L2 create a complication. Approximations made with these discontinuous fields do not possess well defined values on surfaces. Consequently such an integral could lack precision or even be multi-valued. To overcome this limitation it may be necessary to compute different contributions to the torque in different manners and combine the results. For example the normal component of ${\\bf B}$ is well defined on surfaces. Therefore the force ${\\bf K}\\times{\\bf B}$ may be inaccurate but the quantity $(\\hat{n}\\cdot{\\bf B}){\\bf K}\\times\\hat{n}$ will be more reliable. To obtain another contribution to the torque we can use the tangential components of ${\\bf H}$ as $\\mu{\\bf K}\\times[(\\hat{n}\\times{\\bf H})\\times\\hat{n}]$. 
This of course assumes that we have an accurate representation of ${\\bf H}$ on this surface which may not be the case if the surface is an outer boundary (see Section Computation of H ).", "title": "Torque on a Surface Current"}, {"location": "tesla-notes/#appendix-a-magnetic-energy", "text": "class MagneticEnergy { private: const ParGridFunction & b_; const ParGridFunction & h_; public: MagneticEnergy(const ParGridFunction & b, const ParGridFunction & h) : b_(b), h_(h) {} double ComputeEnergy() { VectorGridFunctionCoefficient h_coef(&h_); ParLinearForm h_lf(b_.ParFESpace()); h_lf.AddDomainIntegrator(new VectorFEDomainLFIntegrator(h_coef)); h_lf.Assemble(); return 0.5 * h_lf(b_); } double ComputeEnergy(const Array & elem_attr_marker) { VectorGridFunctionCoefficient h_coef(&h_); ParLinearForm h_lf(b_.ParFESpace()); h_lf.AddDomainIntegrator(new VectorFEDomainLFIntegrator(h_coef), const_cast&>(elem_attr_marker)); h_lf.Assemble(); return 0.5 * h_lf(b_); } };", "title": "Appendix A: Magnetic Energy"}, {"location": "tesla-notes/#appendix-b-torque", "text": "class Torque { private: const ParGridFunction & b_; const ParGridFunction & h_; const ParGridFunction & j_; public: Torque(const ParGridFunction & b, const ParGridFunction & h, const ParGridFunction & j) : b_(b), h_(h), j_(j) {} void ComputeTorqueOnSurface(const Array &bdr_attr_marker, const Vector ¢, Vector &T); void ComputeTorqueOnVolume(const Array &vol_attr_marker, const Vector ¢, Vector &T); }; void Torque::ComputeTorqueOnSurface(const Array &bdr_attr_marker, const Vector ¢, Vector &trq) { trq = 0.0; ParFiniteElementSpace * fes = b_.ParFESpace(); ParMesh *mesh = b_.ParFESpace()->GetParMesh(); ElementTransformation *eltrans = NULL; Vector b, h, ht(3), nor(3), x(3), f(3), loc_trq(3); loc_trq = 0.0; for (int i=0; iGetNBE(); i++) { const int bdr_attr = mesh->GetBdrAttribute(i); if (bdr_attr_marker[bdr_attr-1] == 0) { continue; } eltrans = fes->GetBdrElementTransformation(i); const FiniteElement &el = *fes->GetBE(i); const IntegrationRule *ir = NULL; if (ir == NULL) { const int order = 2*el.GetOrder() + eltrans->OrderW(); // <----- ir = &IntRules.Get(eltrans->GetGeometryType(), order); } for (int pi = 0; pi < ir->GetNPoints(); ++pi) { const IntegrationPoint &ip = ir->IntPoint(pi); eltrans->SetIntPoint(&ip); CalcOrtho(eltrans->Jacobian(), nor); double a = nor.Norml2(); eltrans->Transform(ip, x); b_.GetVectorValue(*eltrans, ip, b); h_.GetVectorValue(*eltrans, ip, h); double bn = b * nor / a; double hn = h * nor / a; add(h, -hn / a, nor, ht); f.Set(ip.weight * bn * bn / mu0_, nor); f.Add(ip.weight * a * bn, ht); f.Add(-0.5 * ip.weight * (mu0_ * (ht * ht) + bn * bn / mu0_), nor); loc_trq[0] += (x[1]-cent[1]) * f[2] - (x[2]-cent[2]) * f[1]; loc_trq[1] += (x[2]-cent[2]) * f[0] - (x[0]-cent[0]) * f[2]; loc_trq[2] += (x[0]-cent[0]) * f[1] - (x[1]-cent[1]) * f[0]; } } MPI_Allreduce(loc_trq, trq, 3, MPI_DOUBLE, MPI_SUM, fes->GetComm()); } void Torque::ComputeTorqueOnVolume(const Array &attr_marker, const Vector ¢, Vector &trq) { trq = 0.0; ParFiniteElementSpace * fes = b_.ParFESpace(); ParMesh *mesh = b_.ParFESpace()->GetParMesh(); ElementTransformation *eltrans = NULL; Vector b, j, x(3), f(3), t(3), loc_trq(3); loc_trq = 0.0; for (int i=0; iGetNE(); i++) { const int attr = mesh->GetAttribute(i); if (attr_marker[attr-1] == 0) { continue; } eltrans = fes->GetElementTransformation(i); const FiniteElement &el = *fes->GetFE(i); const IntegrationRule *ir = NULL; if (ir == NULL) { const int order = 2*el.GetOrder() + eltrans->OrderW(); // <----- ir = 
&IntRules.Get(eltrans->GetGeometryType(), order); } for (int pi = 0; pi < ir->GetNPoints(); ++pi) { const IntegrationPoint &ip = ir->IntPoint(pi); eltrans->SetIntPoint(&ip); eltrans->Transform(ip, x); b_.GetVectorValue(*eltrans, ip, b); j_.GetVectorValue(*eltrans, ip, j); f[0] = j[1] * b[2] - j[2] * b[1]; f[1] = j[2] * b[0] - j[0] * b[2]; f[2] = j[0] * b[1] - j[1] * b[0]; t[0] = (x[1]-cent[1]) * f[2] - (x[2]-cent[2]) * f[1]; t[1] = (x[2]-cent[2]) * f[0] - (x[0]-cent[0]) * f[2]; t[2] = (x[0]-cent[0]) * f[1] - (x[1]-cent[1]) * f[0]; loc_trq.Add(ip.weight * eltrans->Weight(), t); } } MPI_Allreduce(loc_trq, trq, 3, MPI_DOUBLE, MPI_SUM, fes->GetComm()); } MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Appendix B: Torque"}, {"location": "tools/", "text": "Tools This page provides a brief description of several useful tool programs that are distributed in the MFEM's miniapps/tools directory. General Tools Display Basis The display-basis miniapp, found under miniapps/tools , visualizes various types of finite element basis functions on a single mesh element in 1D, 2D, and 3D. The element type, basis type and order can be changed interactively. The mesh element is either the reference element, or a simple transformation of it. Low-Order Refined Transfer The lor-transfer miniapp, found under miniapps/tools demonstrates the capability to generate a low-order refined mesh from a high-order mesh, and to transfer solutions between these meshes. Grid functions can be transferred between the coarse, high-order mesh and the low-order refined mesh using either $L^2$ projection or pointwise evaluation. These transfer operators can be designed to discretely conserve mass and to recover the original high-order solution when transferring a low-order grid function that was obtained by restricting a high-order grid function to the low-order refined space. DataCollection Tools Convert DC This tool, named convert-dc in the miniapps/tools subdirectory, demonstrates how to convert between MFEM's different concrete DataCollection options. Currently supported data collection type options: Nickname Full Class Name visit VisItDataCollection (default) sidre or sidre_hdf5 SidreDataCollection json ConduitDataCollection w/ protocol json conduit_json ConduitDataCollection w/ protocol conduit_json conduit_bin ConduitDataCollection w/ protocol conduit_bin hdf5 ConduitDataCollection w/ protocol hdf5 Load DC The load-dc miniapp, found in the miniapps/tools subdirectory, loads and visualizes (in GLVis) previously saved data using DataCollection sub-classes, see e.g. Example 5/5p. Currently, only the VisItDataCollection class is supported. Get Values The get-values miniapp, found in miniapps/tools , loads previously saved data using DataCollection sub-classes and outputs field values at a set of points. Currently, only the VisItDataCollection class is supported. # Number of fields 3 # Legend # \"Index\" \"Location\":2 \"pressure\":1 \"velocity\":2 2 1 2 # Number of points 6 0 0.0 0.8 0.717336 -0.716172 -0.696674 1 0.2 0.8 0.876045 -0.875874 -0.852278 2 0.4 0.8 1.06999 -1.07106 -1.03923 3 0.6 0.8 1.30719 -1.30931 -1.26903 4 0.8 0.8 1.59678 -1.59601 -1.54949 5 1.0 0.8 1.94995 -1.94853 -1.89371 Point locations can be specified on the command line using -p or within a data file whose name can be given with option -pf . The data file format is: number_of_points space_dimension x_0 y_0 ... x_1 y_1 ... etc. By default all available fields are evaluated. 
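For example, to evaluate the fields at the six points shown in the sample output above, one could place the following in a point file (the coordinates are taken from that output; the file name points.txt is only illustrative) and pass it with -pf points.txt : 6 2
0.0 0.8
0.2 0.8
0.4 0.8
0.6 0.8
0.8 0.8
1.0 0.8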
The list of fields can be reduced by specifying the desired field names with -fn . The -fn option takes a space separated list of field names surrounded by quotes. Field names containing spaces, such as \"Field 1\" and \"Field 2\", can be entered as: get-values -fn \"Field\\ 1 Field\\ 2\" By default the data is written to standard out. This can be overwritten with the -o [filename] option. The output format contains comments as well as sizing information to aid in subsequent processing. The bulk of the data consists of one line per point with a 0-based integer index followed by the point coordinates and then the field data. A legend, appearing before the bulk data, shows the order of the fields along with the number of values per field (for vector data). MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Tools"}, {"location": "tools/#tools", "text": "This page provides a brief description of several useful tool programs that are distributed in the MFEM's miniapps/tools directory.", "title": "Tools"}, {"location": "tools/#general-tools", "text": "", "title": "General Tools"}, {"location": "tools/#display-basis", "text": "The display-basis miniapp, found under miniapps/tools , visualizes various types of finite element basis functions on a single mesh element in 1D, 2D, and 3D. The element type, basis type and order can be changed interactively. The mesh element is either the reference element, or a simple transformation of it.", "title": "Display Basis"}, {"location": "tools/#low-order-refined-transfer", "text": "The lor-transfer miniapp, found under miniapps/tools demonstrates the capability to generate a low-order refined mesh from a high-order mesh, and to transfer solutions between these meshes. Grid functions can be transferred between the coarse, high-order mesh and the low-order refined mesh using either $L^2$ projection or pointwise evaluation. These transfer operators can be designed to discretely conserve mass and to recover the original high-order solution when transferring a low-order grid function that was obtained by restricting a high-order grid function to the low-order refined space.", "title": "Low-Order Refined Transfer"}, {"location": "tools/#datacollection-tools", "text": "", "title": "DataCollection Tools"}, {"location": "tools/#convert-dc", "text": "This tool, named convert-dc in the miniapps/tools subdirectory, demonstrates how to convert between MFEM's different concrete DataCollection options. Currently supported data collection type options: Nickname Full Class Name visit VisItDataCollection (default) sidre or sidre_hdf5 SidreDataCollection json ConduitDataCollection w/ protocol json conduit_json ConduitDataCollection w/ protocol conduit_json conduit_bin ConduitDataCollection w/ protocol conduit_bin hdf5 ConduitDataCollection w/ protocol hdf5", "title": "Convert DC"}, {"location": "tools/#load-dc", "text": "The load-dc miniapp, found in the miniapps/tools subdirectory, loads and visualizes (in GLVis) previously saved data using DataCollection sub-classes, see e.g. Example 5/5p. Currently, only the VisItDataCollection class is supported.", "title": "Load DC"}, {"location": "tools/#get-values", "text": "The get-values miniapp, found in miniapps/tools , loads previously saved data using DataCollection sub-classes and outputs field values at a set of points. Currently, only the VisItDataCollection class is supported. 
# Number of fields 3 # Legend # \"Index\" \"Location\":2 \"pressure\":1 \"velocity\":2 2 1 2 # Number of points 6 0 0.0 0.8 0.717336 -0.716172 -0.696674 1 0.2 0.8 0.876045 -0.875874 -0.852278 2 0.4 0.8 1.06999 -1.07106 -1.03923 3 0.6 0.8 1.30719 -1.30931 -1.26903 4 0.8 0.8 1.59678 -1.59601 -1.54949 5 1.0 0.8 1.94995 -1.94853 -1.89371 Point locations can be specified on the command line using -p or within a data file whose name can be given with option -pf . The data file format is: number_of_points space_dimension x_0 y_0 ... x_1 y_1 ... etc. By default all available fields are evaluated. The list of fields can be reduced by specifying the desired field names with -fn . The -fn option takes a space separated list of field names surrounded by quotes. Field names containing spaces, such as \"Field 1\" and \"Field 2\", can be entered as: get-values -fn \"Field\\ 1 Field\\ 2\" By default the data is written to standard out. This can be overwritten with the -o [filename] option. The output format contains comments as well as sizing information to aid in subsequent processing. The bulk of the data consists of one line per point with a 0-based integer index followed by the point coordinates and then the field data. A legend, appearing before the bulk data, shows the order of the fields along with the number of values per field (for vector data). MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Get Values"}, {"location": "toys/", "text": "Toys A handful of \"toy\" miniapps of less serious nature demonstrate the flexibility of MFEM (and provide a bit of fun): Automata The automata miniapp implements a one dimensional elementary cellular automata as described in: Wolfram MathWorld . This miniapp shows a completely unnecessary use of the finite element method to simply display binary data (but it's fun to play with). The automata miniapp has only three options; -vis or -no-vis to enable or disable visualization, -ns which defines the number of steps to evolve the cellular automata, and -r to select the rule which is applied at each step. Rules for this type of cellular automata consist of a sequence of 8 bits which are normally passed as an integer 0-255. The rule defines how to update each cell based on the current values of that cell and its two nearest neighbors. Life The life miniapp implements Conway's Game of Life. A few simple starting positions are available as well as a random initial state. The game will terminate only if two successive iterations are identical. Users can control the size of the domain and the initial placement of simple objects like blinkers and gliders . Arbitrary patterns can be supplied through the --sketch-pad or -sp option. The sketch pad was used to produce the above image with the command line: life -nx 30 -sp '11 11 1 1 1 1 1 1 1 1 2 1 0 1 1 1 1 0 1 2 1 1 1 1 1 1 1 1' The values following -sp are the starting coordinates of the pattern followed by zeros or ones to indicate pixels that should be off or on, any twos indicate new lines in the pattern. Lissajous The lissajous miniapp generates two different Lissajous curves in 3D which appear to spin vertically and/or horizontally, even though the net motion is the same. Vertical Rotation Horizontal Rotation Based on the 2019 Illusion of the year \"Dual Axis Illusion\" by Frank Force, see Dual Axis Illusion . Mandel The mandel miniapp is a specialized version of the shaper miniapp which adapts a mesh to the Mandelbrot set. 
Both planar and surface meshes are supported. Mondrian The mondrian miniapp is a specialized version of the shaper miniapp that converts an input image to an AMR mesh. It allows the fast approximate meshing of any domain for which there is an image. The input image should be in 8-bit grayscale PGM format. You can use a number of image manipulation tools, such as GIMP (gimp.org) and ImageMagick's convert utility (imagemagick.org/script/convert.php) to convert your image to this format as a pre-processing step, e.g.: /usr/bin/convert australia.svg -compress none -depth 8 australia.pgm Rubik The rubik miniapp implements an interactive model of a Rubik's Cube\u2122 puzzle. The basic interactive command is of the form [xyz][1,2,3][0-3] which rotates, about the x, y, or z axis, a single tier, indicated by the first integer, by a number of increments, indicated by the final integer. Any manipulation of the cube can be accomplished with a sequence of these simple three character commands. Common commands: Command Action R Resets or re-paints the cube S or s Solve the cube starting from the top and working down r[0-9]+ Specific number of random moves p Print the current state of the cube to the screen q Quit Other commands: Command Action T Solve the top tier only M Solve the middle tier assuming the top has already been solved B Solve the bottom tier assuming the top and middle are done c Swap bottom tier corners in positions 0 and 1 t[0,1] Twist, in place, three of the bottom tier corners e[0,1] Permute three of the bottom tier edges f[2,4] Flip, in place, 2 or 4 of the bottom tier edges Snake The snake miniapp provides a light-hearted example of mesh manipulation and GLVis integration. The Rubik's Snake\u2122 a.k.a. Twist is a simple tool for experimenting with geometric shapes in 3D. It consists of 24 triangular prisms attached in a row so that neighboring wedges can rotate against each other but cannot be separated. An astonishing variety of different configurations can be reached. Thirteen pre-programmed configurations are available via the -c [0-12] command line option. Other configurations can be reached with the -u option. Each configuration must be 23 integers long corresponding to the 23 joints making up the Snake\u2122 puzzle. The values can be 0-3 indicating how far to rotate the joint in the clockwise direction when looking along the snake from the starting (lower) end. The values 0, 1, 2, and 3 correspond to angles of 0, 90, 180, and 270 degrees respectively.", "title": "Toys"}, {"location": "toys/#toys", "text": "A handful of \"toy\" miniapps of less serious nature demonstrate the flexibility of MFEM (and provide a bit of fun):", "title": "Toys"}, {"location": "toys/#automata", "text": "The automata miniapp implements a one dimensional elementary cellular automata as described in: Wolfram MathWorld . This miniapp shows a completely unnecessary use of the finite element method to simply display binary data (but it's fun to play with). The automata miniapp has only three options; -vis or -no-vis to enable or disable visualization, -ns which defines the number of steps to evolve the cellular automata, and -r to select the rule which is applied at each step. Rules for this type of cellular automata consist of a sequence of 8 bits which are normally passed as an integer 0-255. 
The rule defines how to update each cell based on the current values of that cell and its two nearest neighbors.", "title": "Automata"}, {"location": "toys/#life", "text": "The life miniapp implements Conway's Game of Life. A few simple starting positions are available as well as a random initial state. The game will terminate only if two successive iterations are identical. Users can control the size of the domain and the initial placement of simple objects like blinkers and gliders . Arbitrary patterns can be supplied through the --sketch-pad or -sp option. The sketch pad was used to produce the above image with the command line: life -nx 30 -sp '11 11 1 1 1 1 1 1 1 1 2 1 0 1 1 1 1 0 1 2 1 1 1 1 1 1 1 1' The values following -sp are the starting coordinates of the pattern followed by zeros or ones to indicate pixels that should be off or on, any twos indicate new lines in the pattern.", "title": "Life"}, {"location": "toys/#lissajous", "text": "The lissajous miniapp generates two different Lissajous curves in 3D which appear to spin vertically and/or horizontally, even though the net motion is the same. Vertical Rotation Horizontal Rotation Based on the 2019 Illusion of the year \"Dual Axis Illusion\" by Frank Force, see Dual Axis Illusion .", "title": "Lissajous"}, {"location": "toys/#mandel", "text": "The mandel miniapp is a specialized version of the shaper miniapp which adapts a mesh to the Mandelbrot set. Both planar and surface meshes are supported.", "title": "Mandel"}, {"location": "toys/#mondrian", "text": "The mondrian miniapp is a specialized version of the shaper miniapp that converts an input image to an AMR mesh. It allows the fast approximate meshing of any domain for which there is an image. The input image should be in 8-bit grayscale PGM format. You can use a number of image manipulation tools, such as GIMP (gimp.org) and ImageMagick's convert utility (imagemagick.org/script/convert.php) to convert your image to this format as a pre-processing step, e.g.: /usr/bin/convert australia.svg -compress none -depth 8 australia.pgm", "title": "Mondrian"}, {"location": "toys/#rubik", "text": "The rubik miniapp implements an interactive model of a Rubik's Cube\u2122 puzzle. The basic interactive command is of the form [xyz][1,2,3][0-3] which rotates, about the x, y, or z axis, a single tier, indicated by the first integer, by a number of increments, indicated by the final integer. Any manipulation of the cube can be accomplished with a sequence of these simple three character commands. Common commands: Command Action R Resets or re-paints the cube S or s Solve the cube starting from the top and working down r[0-9]+ Specific number of random moves p Print the current state of the cube to the screen q Quit Other commands: Command Action T Solve the top tier only M Solve the middle tier assuming the top has already been solved B Solve the bottom tier assuming the top and middle are done c Swap bottom tier corners in positions 0 and 1 t[0,1] Twist, in place, three of the bottom tier corners e[0,1] Permute three of the bottom tier edges f[2,4] Flip, in place, 2 or 4 of the bottom tier edges", "title": "Rubik"}, {"location": "toys/#snake", "text": "The snake miniapp provides a light-hearted example of mesh manipulation and GLVis integration. The Rubik's Snake\u2122 a.k.a. Twist is a simple tool for experimenting with geometric shapes in 3D. It consists of 24 triangular prisms attached in a row so that neighboring wedges can rotate against each other but cannot be separated. 
An astonishing variety of different configurations can be reached. Thirteen pre-programmed configurations are available via the -c [0-12] command line option. Other configurations can be reached with the -u option. Each configuration must be 23 integers long corresponding to the 23 joints making up the Snake\u2122 puzzle. The values can be 0-3 indicating how far to rotate the joint in the clockwise direction when looking along the snake from the starting (lower) end. The values 0, 1, 2, and 3 correspond to angles of 0, 90, 180, and 270 degrees respectively.", "title": "Snake"}, {"location": "videos/", "text": "MFEM Videos A collection of MFEM-related videos, including recorded talks from the MFEM workshops and conference presentations. MFEM Workshop 2024 Aaron Fisher (LLNL) Welcome and Overview October 22-24, 2024 | MFEM Workshop 2024 Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community resources. Tzanio Kolev (LLNL) The State of MFEM October 22-24, 2024 | MFEM Workshop 2024 MFEM project lead Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities, examples, and mini-apps. Kolev also highlighted the growth of the global community as well as features developed during 2024. Veselin Dobrev (LLNL) Recent Developments October 22-24, 2024 | MFEM Workshop 2024 Veselin Dobrev of LLNL detailed the project\u2019s recent developments including meshing and discretization improvements, GPU acceleration and partial/full assembly support, new examples and mini-apps, and more. He also highlighted functionality such as anisotropic refinement, conforming H1 spaces, square pyramid shaped elements, and hybridized discontinuous Galerkin solutions. Ketan Mittal (LLNL) Interpolation at Arbitrary Points in High-Order Meshes on GPUs October 22-24, 2024 | MFEM Workshop 2024 Robust and scalable arbitrary point interpolation is required in the finite element method and spectral element method for querying the partial differential equation solution at points of interest in the domain, comparison of solution between different meshes, and Lagrangian particle tracking. This is a challenging problem, particularly for high-order unstructured meshes partitioned in parallel with MPI, as it requires identifying the element that overlaps a given point and computing the reference space coordinates inside the element corresponding to the point. We present a robust and efficient way to address this problem for large-scale high-order meshes. First, a combination of globally partitioned and processor-local maps are used to determine a list of candidate MPI ranks and element pairs that could contain the point. Next, element-wise bounding boxes are used to further narrow down the list of candidate elements. Finally, Newton's method with trust region-based approach is used to invert the affine map for the candidate elements and determine the reference space coordinates corresponding to the point. Since GPU-based architectures have demonstrated to accelerate computational analyses using meshes with tensor-product elements, specialized kernel have been developed to effect the arbitrary point search and interpolation on GPUs. We demonstrate the effectiveness of this approach using various high-order meshes. 
Michael Tupek (LLNL) Automatic Parameter Sensitivities in Serac for Engineering Applications October 22-24, 2024 | MFEM Workshop 2024 We present a framework for automatically calculating sensitivities for both topology and shape design optimization workflows. Building on MFEM infrastructure, we provide abstractions for quickly specifying, solving, coupling, and differentiating new PDEs for engineering applications. Recent developments in Serac include: highly robust nonlinear solvers, integration of the Tribol library for contact enforcement, coupled thermal-mechanics, differentiable material model library, and checkpointing for transient adjoint calculations. Jan Nikl (LLNL) Hybridization of Convection-Diffusion Systems in MFEM October 22-24, 2024 | MFEM Workshop 2024 Convection-diffusion systems are likely the most common class of partial differential equations appearing in practically all different applications. However, their mixed formulation typically suffers from prohibitively high computational costs and difficult preconditioning, especially close to the steady state where the system becomes a saddle point problem. The hybridization technique offers an appealing answer to these issues. The new framework for mixed systems enables single-line hybridization, reducing the problem to face traces of the total flux only. Solution of such system is then inexpensive, and preconditioning becomes nearly trivial. Non-linear convection is also supported with the action-based regime of operation. Description of the mechanism as well as code examples to show ease of usage are presented. Vladimir Tomov (LLNL) Miniapps for Shock Hydro, Field Remap, and Mesh Optimization October 22-24, 2024 | MFEM Workshop 2024 This presentation discusses recent advancements, research, and exploratory work in the MFEM miniapps for shock hydrodynamics (Laghos), field remap (Remhos), and mesh optimization. For shock hydro, we present the implementation of slip wall boundary conditions for curved domains, along with research involving material interfaces using the shifted interface method or cut-element integration through Algoim and moments-based integration. In the field remap miniapp, we cover developments in stabilized remap for continuous fields, interface sharpening techniques, and matrix-free methods for GPU execution. Lastly, we explore recent progress in mesh optimization, including surface fitting and its GPU implementation, tangential relaxation, automatic differentiation (AD) for complex objective functionals, enhanced metric theory and quality metrics, and hpr-adaptivity for the mesh representation. While some of these advancements are public, general methods that can be applied across various practical miniapps, others are exploratory, demonstrating how the miniapps can serve as a starting point for research in specific areas. Dylan Copeland (LLNL) Sparse, Approximate Quadrature for Acceleration of Isogeometric Analysis & ROMs October 22-24, 2024 | MFEM Workshop 2024 Numerical integration for assembly of FEM systems typically employs quadrature rules selected for the polynomial order of basis functions in each element. In some cases, a much sparser rule can maintain accuracy. We present an algebraic method for constructing sparse rules, by formulating a constraint system of states required to be integrated accurately. A nonnegative least squares solver finds a sparse, approximate solution to this constraint system, yielding a quadrature rule with fewer points. 
One application we demonstrate is isogeometric analysis, where a NURBS FEM space is defined on patches consisting of many elements. Setup times are greatly accelerated, by using patch-wise integration with sum factorization and reduced quadrature rules constructed on patches. Another area of application is reduced order models (ROM), where the FEM system is restricted to a reduced POD basis formed from training data. Instead of hyper-reduction methods such as DEIM, the empirical quadrature procedure (EQP) can be used to accelerate ROM simulations with a sparse quadrature rule in the reduced subspace. We demonstrate this on several benchmark problems in the Laghos miniapp and show that energy conservation is maintained. Jacob Spainhour (CU Boulder) Robust Containment Queries over Collections of Parametric Curves via Generalized Winding Numbers October 22-24, 2024 | MFEM Workshop 2024 The containment query is an important geometric primitive in many multiphysics applications. For example, when initializing multimaterial Arbitrary Lagrangian-Eulerian (ALE) simulations, we often need to determine whether arbitrary quadrature points from the background mesh are inside or outside the regions associated with each material. However, existing methods require expensive refinement to accurately capture curved regions. At the same time, many methods are wholly incompatible with user-defined geometries that contain geometric and numeric gaps and/or self-intersections. In this work, we develop a containment query for 2D regions defined by rational Bezier curves that operates directly on curved objects. Our method relies on the generalized winding number (GWN), a mathematical construction that can be evaluated for each curve independently, making the derived containment query robust to non-watertightness. We use an adaptive algorithm to compute the GWN field exactly, which permits fast evaluation for points considered \"distant\" to the curve while being numerically stable for points that are arbitrarily close. Overall, this classification scheme greatly expands the types of bounding geometry that can be used directly in shaping applications without the need for otherwise expensive repair techniques. If time permits, we will also discuss our extensions of this idea to 3D shapes defined by parametric surfaces. Mathias Schmidt (LLNL) Level-Set Topology Optimization with PDE Generated Conformal Meshes October 22-24, 2024 | MFEM Workshop 2024 The promise of topology optimization (TO) is to provide engineers with a systematic computational tool to support the development of optimal designs. A shortcoming of classic density based multi-material TO designs is the nebulous interphase region between materials, which leads to inaccurate response predictions in these very regions. In contrast, designs based on boundary and interface regions, rather than interphase regions, yield accurate response predictions. Level-set based TO is an example of such; however, the analysis of the response often requires repeated mesh generation or non-standard finite element computations. We present a solely PDE-based, level-set topology optimization approach in which geometries are described through the iso-contour of one or multiple level-set fields which are discretized over a mesh. The nodal heights serve as the design parameters. The governing field equations are discretized by a conformal discretization over a separate \u201canalysis\u201d mesh. 
In the optimization, the \u201canalysis\u201d mesh is morphed such that its boundary and interfaces conform with the isocontours of the LS fields. The mesh morphing is performed using the Target-Matrix Optimization Paradigm (TMOP) approach. Our TMOP formulation is a PDE-based mesh morphing operation which aims to improve the interface conformity while preserving mesh quality. Design sensitivities of the optimization cost and constraint functions with respect to all design level-set fields are computed through an adjoint approach which accounts for the mesh morphing process. The proposed analysis and optimization framework is based on MFEM, a free, lightweight, scalable C++ library for finite element methods which supports the optimization of large-scale problems. We investigate the robustness of the proposed optimization methodology by solving two- and three-dimensional multi-material optimization problems involving linear diffusion and elasticity. We discuss the advantages and challenges of our approach with regards to the mesh morphing process. LS regularization techniques are employed to produce a well-behaved mesh morphing problem throughout the optimization. Finally, select aspects and challenges of our approach with respect to parallel computing and processor decomposition are discussed. Yohann Dudouit (LLNL) Mitigating Rays-Effect in Phase-Space Advection with Matrix-Free HD DG Methods October 22-24, 2024 | MFEM Workshop 2024 The mitigation of the rays-effect in phase-space advection problems is a critical challenge in deterministic transport simulations, particularly when using traditional methods that struggle with numerical artifacts. In this work, we propose a novel high-dimensional matrix-free discontinuous Galerkin (DG) approach designed to address the rays-effect by fully discretizing phase space, including velocity components, up to six dimensions. This methodology avoids the excessive computational cost associated with Monte Carlo simulations while offering a deterministic alternative that preserves accuracy and scalability. A key component of our approach is the use of advanced coordinate transformations, which optimize the coordinate system to minimize the rays-effect by aligning the coordinate system with the net flux. Our matrix-free formulation minimizes memory usage and improves computational efficiency by avoiding the assembly of large sparse matrices, a critical factor when scaling to high-dimensional problems. Numerical experiments demonstrate the effectiveness of this approach in reducing rays-effect artifacts, providing a robust and scalable solution for high-dimensional transport problems. FEM@LLNL Seminars Denis Ridzal (Sandia National Laboratories) R-Adaptive Mesh Optimization to Enhance Finite Element Basis Compression October 15, 2024 | FEM@LLNL Seminar Series Modern computing systems are capable of exascale calculations. While these systems continue to grow in processing power, the available system memory has not increased commensurately. A predominant approach to limit the memory usage in large-scale applications is to exploit the abundant processing power and continually recompute many low-level simulation quantities, rather than storing them. However, this approach can adversely impact the throughput of the simulation and diminish the benefits of modern computing architectures. We present two novel contributions to reduce the memory burden while maintaining performance in simulations based on finite element discretizations. 
The first contribution develops dictionary-based data compression schemes that detect and exploit the structure of the discretization, due to redundancies across the finite element mesh. These schemes are shown to reduce the memory requirements of key computational kernels by more than 99% on meshes with large numbers of nearly identical mesh cells. For applications where this structure does not exist, our second contribution leverages a recently developed augmented Lagrangian sequential quadratic programming algorithm to enable r-adaptive mesh optimization, with the goal of enhancing redundancies in the mesh. Numerical results demonstrate the effectiveness of the proposed methods to detect, exploit and enhance mesh structure on examples inspired by large-scale applications. Rub\u00e9n Sevilla (Swansea University) Mesh Generation and Adaptation using Green AI September 17, 2024 | FEM@LLNL Seminar Series Most methods used to solve partial differential equations require creating a mesh that represents the model's geometry. Today, unstructured mesh technology is widely used, allowing three-dimensional meshes with hundreds of millions of elements to be generated in just a few minutes. However, when optimising a design, many simulations are needed for different operating conditions and geometric configurations. Creating the best mesh for each setup becomes time-consuming due to the requirement of excessive human intervention and expertise. This talk will cover our recent work on using artificial intelligence to predict near-optimal meshes suitable for simulations. The main idea is to take advantage of the large amount of data that already exists in the industry to improve the selection of a suitable spacing function, including anisotropic spacing. The proposed approach aims to use knowledge from previous simulations to guide the mesh generation process. I will assess the proposed method based on the accuracy of the predictions, efficiency, and environmental impact. This includes considering the carbon footprint and energy consumption of the computations required for a parametric CFD analysis under different flow conditions and angles of attack. For transient problems, the use of high order methods provides several advantages due to the low dissipation and dispersion errors associated to these schemes. An attractive approach to simulate these problems is to incorporate degree adaptive schemes to enhance the approximation only where needed. In this talk I will also present our recent work on using artificial intelligence to aid a degree adaptive process. Esteban Ferrer and David Huergo (Universidad Polit\u00e9cnica de Madrid) New Avenues in High Order Fluid Dynamics September 3, 2024 | FEM@LLNL Seminar Series We present the latest developments of our High-Order Spectral Element Solver (HORSES3D), an open source high-order discontinuous Galerkin framework capable of solving a variety of flow applications, including compressible flows (with or without shocks), incompressible flows, various RANS and LES turbulence models, particle dynamics, multiphase flows, and aeroacoustics [ 1 ]. Recent developments allow us to simulate challenging multiphysics including turbulent flows, multiphase and moving bodies, using local h and p-adaption. In addition, we present recent work that couples Machine Learning and reinforcement learning techniques with high order simulations. 
Patrick Farrell (University of Oxford) Designing conservative and accurately dissipative numerical integrators in time July 30, 2024 | FEM@LLNL Seminar Series Numerical methods for the simulation of transient systems with structure-preserving properties are known to exhibit greater accuracy and physical reliability, in particular over long durations. These schemes are often built on powerful geometric ideas for broad classes of problems, such as Hamiltonian or reversible systems. However, there remain difficulties in devising higher-order-in-time structure-preserving discretizations for nonlinear problems, and in conserving non-polynomial invariants. In this work we propose a new, general framework for the construction of structure-preserving timesteppers via finite elements in time and the systematic introduction of auxiliary variables. The framework reduces to Gauss methods where those are structure-preserving, but extends to generate arbitrary-order structure-preserving schemes for nonlinear problems, and allows for the construction of schemes that conserve multiple higher-order invariants. We demonstrate the ideas by devising novel schemes that exactly conserve all known invariants of the Kepler and Kovalevskaya problems, arbitrary-order schemes for the compressible Navier\u2013Stokes equations that conserve mass, momentum, and energy, and provably dissipate entropy, and multi-conservative schemes for the Benjamin-Bona-Mahony equation. Gonzalo de Diego (Courant Institute) Numerical Solvers for Viscous Contact Problems in Glaciology May 6, 2024 | FEM@LLNL Seminar Series Viscous contact problems are time-dependent viscous flow problems where the fluid is in contact with a solid surface from which it can detach and reattach. Over sufficiently long timescales, ice is assumed to flow like a viscous fluid with a nonlinear rheology. Therefore, certain phenomena in glaciology, like the formation of subglacial cavities in the base of an ice sheet or the dynamics of marine ice sheets (continental ice sheets that slide into the ocean and go afloat at a grounding line, detaching from the bedrock), can be modelled as viscous contact problems. In particular, these problems can be described by coupling the Stokes equations with contact boundary conditions to free boundary equations that evolve the ice domain in time. In this talk, I will describe the difficulties that arise when attempting to solve this system numerically and I will introduce a method that is capable of overcoming them. Nat Trask (University of Pennsylvania) A Data Driven Finite Element Exterior Calculus April 2, 2024 | FEM@LLNL Seminar Series Despite the recent flurry of work employing machine learning to develop surrogate models to accelerate scientific computation, the \"black-box\" underpinnings of current techniques fail to provide the verification and validation guarantees provided by modern finite element methods. In this talk we present a data-driven finite element exterior calculus for developing reduced-order models of multiphysics systems when the governing equations are either unknown or require closure. The framework employs deep learning architectures typically used for logistic classification to construct a trainable partition of unity which provides notions of control volumes with associated boundary operators. This alternative to a traditional finite element mesh is fully differentiable and allows construction of a discrete de Rham complex with a corresponding Hodge theory.
We demonstrate how models may be obtained with the same robustness guarantees as traditional mixed finite element discretization, with deep connections to contemporary techniques in graph neural networks. For applications developing digital twins where surrogates are intended to support real time data assimilation and optimal control, we further develop the framework to support Bayesian optimization of unknown physics on the underlying adjacency matrices of the chain complex. By framing the learning of fluxes via an optimal recovery problem with a computationally tractable posterior distribution, we are able to develop models with intrinsic representations of epistemic uncertainty. William Moses (University of Illinois Urbana-Champaign) Supercharging Programming Through Compiler Technology March 14, 2024 | FEM@LLNL Seminar Series The decline of Moore's law and an increasing reliance on computation has led to an explosion of specialized software packages and hardware architectures. While this diversity enables unprecedented flexibility, it also requires domain-experts to learn how to customize programs to efficiently leverage the latest platform-specific API's and data structures, instead of working on their intended problem. Rather than forcing each user to bear this burden, I propose building high-level abstractions within general-purpose compilers that enable fast, portable, and composable programs to be automatically generated. This talk will demonstrate this approach through compilers that I built for two domains: automatic differentiation and parallelism. These domains are critical to both scientific computing and machine learning, forming the basis of neural network training, uncertainty quantification, and high-performance computing. For example, a researcher hoping to incorporate their climate simulation into a machine learning model must also provide a corresponding derivative simulation. My compiler, Enzyme, automatically generates these derivatives from existing computer programs, without modifying the original application. Moreover, operating within the compiler enables Enzyme to combine differentiation with program optimization, resulting in asymptotically and empirically faster code. Looking forward, this talk will also touch on how this domain-agnostic compiler approach can be applied to new directions, including probabilistic programming. Sungho Lee (University of Memphis) LAGHOST: Development of Lagrangian High-Order Solver for Tectonics March 5, 2024 | FEM@LLNL Seminar Series Long-term geological and tectonic processes associated with large deformation highlight the importance of using a moving Lagrangian frame. However, modern advancements in the finite element method, such as MPI parallelization, GPU acceleration, high-order elements, and adaptive grid refinement for tectonics based on this frame, have not been updated. Moreover, the existing solvers available in open access suffer from limited tutorials, a poor user manual, and several dependencies that make model building complex. These limitations can discourage both new users and developers from utilizing and improving these models. As a result, we are motivated to develop a user-friendly, Lagrangian thermo-mechanical numerical model that incorporates visco-elastoplastic rheology to simulate long-term tectonic processes like mountain building, mantle convection and so on. We introduce an ongoing project called LAGHOST (Lagrangian High-Order Solver for Tectonics), which is an MFEM-based tectonic solver. 
LAGHOST expands the capabilities of MFEM's LAGHOS mini-app. Currently, our solver incorporates constitutive equation, body force, mass scaling, dynamic relaxation, Mohr-Coulomb plasticity, plastic softening, Winkler foundation, remeshing, and remapping. To evaluate LAGHOST, we conducted four benchmark tests. The first test involved compressing an elastic box at a constant velocity, while the second test focused on the compaction of a self-weighted elastic column. To enable larger time-step sizes and achieve quasi-static solutions in the benchmarks, we introduced a fictitious density and implemented dynamic relaxation. This involved scaling the density factor and introducing a portion of force component opposing the previous velocity direction at nodal points. Our results exhibited good agreement with analytical solutions. Subsequently, we incorporated Mohr-Coulomb plasticity, a reliable model for predicting rock failure, into LAGHOST. We revisited the elastic box benchmark and considered plastic materials. By considering stress correction arising from plastic yielding, we confirmed that the updated solution from elastic guess aligned with the analytical solution. Furthermore, we applied LAGHOST to simulate the evolution of a normal fault, a significant tectonic phenomenon. To model normal fault evolution, we introduced strain softening on cohesion as the dominant factor based on geological evidence. Our simulations successfully captured the normal fault's evolution, with plastic strain localizing at shallow depths before propagating deeper. The fault angle reached approximately 60 degrees, in line with the Mohr-Coulomb failure theory. Kevin Chung (LLNL) Data-Driven DG FEM Via Reduced Order Modeling and Domain Decomposition February 6, 2024 | FEM@LLNL Seminar Series Numerous cutting-edge scientific technologies originate at the laboratory scale, but transitioning them to practical industry applications can be a formidable challenge. Traditional pilot projects at intermediate scales are costly and time-consuming. Alternatives such as E-pilots can rely on high-fidelity numerical simulations, but even these simulations can be computationally prohibitive at larger scales. To overcome these limitations, we propose a scalable, component reduced order model (CROM) method. We employ Discontinuous Galerkin Domain Decomposition (DG-DD) to decompose the physics governing equation for a large-scale system into repeated small-scale unit components. Critical physics modes are identified via proper orthogonal decomposition (POD) from small-scale unit component samples. The computationally expensive, high-fidelity discretization of the physics governing equation is then projected onto these modes to create a reduced order model (ROM) that retains essential physics details. The combination of DG-DD and POD enables ROMs to be used as building blocks comprised of unit components and interfaces, which can then be used to construct a global large-scale ROM without data at such large scales. This method is demonstrated on the Poisson and Stokes flow equations, showing that it can solve equations about 15\u221240 times faster with only \u223c 1% relative error, even at scales 1000 times larger than the unit components. This research is ongoing, with efforts to apply these methods to more complex physics such as Navier-Stokes equation, highlighting their potential for transitioning laboratory-scale technologies to practical industrial use. 
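As a generic illustration of the projection step described in the CROM abstract above (standard POD-Galerkin, not the specific CROM/DG-DD implementation): given a snapshot matrix $S = [u^{(1)}, \ldots, u^{(m)}]$ assembled from unit-component samples, the leading $r$ left singular vectors of the SVD $S = \Phi \Sigma V^T$ form the POD basis $\Phi_r \in \mathbb{R}^{N \times r}$. A high-fidelity linear system $A u = b$ is then replaced by the Galerkin-projected system $(\Phi_r^T A \Phi_r)\,\hat{u} = \Phi_r^T b$ with $u \approx \Phi_r \hat{u}$, so the online solve has size $r \ll N$. In the component ROM setting, such reduced blocks are formed per unit component and coupled through the DG interface terms to assemble the global large-scale ROM.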
Brian Young A Full-Wave Electromagnetic Simulator for Frequency-Domain S-Parameter Calculations January 9, 2024 | FEM@LLNL Seminar Series An open-source and free full-wave electromagnetic simulator is presented that addresses the engineering community\u2019s need for the calculation of frequency-domain S-parameters. Two-dimensional port simulations are used to excite the 3D space and to extract S-parameters using modal projections. Matrix solutions are performed using complex computations. Features enabled by the MFEM library include adaptive mesh refinement, arbitrary order finite elements, and parallel processing using MPI. Implementation details are presented along with sample results and accuracy demonstrations. Jesse Chan (Rice University) High order positivity-preserving entropy stable discontinuous Galerkin discretizations December 5, 2023 | FEM@LLNL Seminar Series High order discontinuous Galerkin (DG) methods provide high order accuracy and geometric flexibility, but are known to be unstable when applied to nonlinear conservation laws whose solutions exhibit shocks and under-resolved solution features. Entropy stable schemes improve robustness by ensuring that physically relevant solutions satisfy a semi-discrete cell entropy inequality independently of numerical resolution and solution regularization while retaining formal high order accuracy. In this talk, we will review the construction of entropy stable high order discontinuous Galerkin methods and describe approaches for enforcing that solutions are \"physically relevant\" (i.e., the thermodynamic variables remain positive). Youngsoo Choi (LLNL) Physics-guided interpretable data-driven simulations November 14, 2023 | FEM@LLNL Seminar Series A computationally demanding physical simulation often presents a significant impediment to scientific and technological progress. Fortunately, recent advancements in machine learning (ML) and artificial intelligence have given rise to data-driven methods that can expedite these simulations. For instance, a well-trained 2D convolutional deep neural network can provide a 100,000-fold acceleration in solving complex problems like Richtmyer-Meshkov instability [ 1 ]. However, conventional black-box ML models lack the integration of fundamental physics principles, such as the conservation of mass, momentum, and energy. Consequently, they often run afoul of critical physical laws, raising concerns among physicists. These models attempt to compensate for the absence of physics information by relying on vast amounts of data. Additionally, they suffer from various drawbacks, including a lack of structure-preservation, computationally intensive training phases, reduced interpretability, and susceptibility to extrapolation issues. To address these shortcomings, we propose an approach that incorporates physics into the data-driven framework. This integration occurs at different stages of the modeling process, including the sampling and model-building phases. A physics-informed greedy sampling procedure minimizes the necessary training data while maintaining target accuracy [ 2 ]. A physics-guided data-driven model not only preserves the underlying physical structure more effectively but also demonstrates greater robustness in extrapolation compared to traditional black-box ML models. We will showcase numerical results in areas such as hydrodynamics [ 3 , 4 ], particle transport [ 5 ], plasma physics, pore-collapse, and 3D printing to highlight the efficacy of these data-driven approaches. 
The advantages of these methods will also become apparent in multi-query decision-making applications, such as design optimization [ 6 , 7 ]. Ben Southworth (Los Alamos National Laboratory) Superior discretizations and AMG solvers for extremely anisotropic diffusion via hyperbolic operators October 17, 2023 | FEM@LLNL Seminar Series Heat conduction in magnetic confinement fusion can reach anisotropy ratios of 10^9-10^10, and in complex problems the direction of anisotropy may not be aligned with (or is impossible to align with) the spatial mesh. Such problems pose major challenges for both discretization accuracy and efficient implicit linear solvers. Although the underlying problem is elliptic or parabolic in nature, we argue that the problem is better approached from the perspective of hyperbolic operators. The problem is posed in a directional gradient first order formulation, introducing a directional heat flux along magnetic field lines as an auxiliary variable. We then develop novel continuous and discontinuous discretizations of the mixed system, using stabilization techniques developed for hyperbolic problems. The resulting block matrix system is then reordered so that the advective operators are on the diagonal, and the system is solved using AMG based on approximate ideal restriction (AIR), which is particularly efficient for upwind discretizations of advection. Compared with traditional discretizations and AMG solvers, we achieve orders of magnitude reduction in error and AMG iterations in the extremely anisotropic regime. Natasha Sharma (University of Texas at El Paso) A Continuous Interior Penalty Method Framework for Sixth Order Cahn-Hilliard-type Equations with applications to microstructure evolution and microemulsions July 18, 2023 | FEM@LLNL Seminar Series The focus of this talk is on presenting unconditionally stable, uniquely solvable, and convergent numerical methods to solve two classes of the sixth-order Cahn-Hilliard-type equations. The first class arises as the so-called phase field crystal atomistic model of crystal growth, which has been employed to simulate a number of physical phenomena such as crystal growth in a supercooled liquid, crack propagation in ductile material, dendritic and eutectic solidification. The second class, henceforth referred to as Microemulsion systems (ME systems) appears as a model capturing the dynamics of phase transitions in ternary oil-water-surfactant systems in which three phases namely a microemulsion, almost pure oil, and almost pure water can coexist in equilibrium. ME systems have several applications ranging from enhanced oil recovery to the development of environmentally friendly solvents and drug delivery systems. Despite the widespread applications of these models, the major challenge impeding their use has been and continues to be a lack of understanding of the complex systems which they model. Thus, building computational models for these systems is crucial to the understanding of these systems. The presence of the higher order derivative in combination with a time-dependent process poses many challenges to the creation of stable, convergent, and efficient numerical methods approximating solutions to these equations. In this talk, we present a continuous interior penalty Galerkin framework for solving these equations and theoretically establish the desirable properties of stability, unique solvability, and first-order convergence. 
We close the talk by presenting the numerical results of some benchmark problems to verify the practical performance of the proposed approach and discuss some exciting current and future applications. Freddie Witherden (Texas A&M University) FSSpMDM \u2014 Accelerating Small Sparse Matrix Multiplications by Run-Time Code Generation June 20, 2023 | FEM@LLNL Seminar Series Small matrix multiplications are a key building block of modern high-order finite element method solvers. Such multiplications describe the act of applying a specific finite element operator onto a set of state vectors. The small and irregular size of these multiplications makes them poor candidates for generic matrix multiplication routines. Moreover, for elements with a tensor product construction, the operators themselves can exhibit a significant degree of sparsity. In this talk, I will describe the code generation strategies employed by our Fixed Size Sparse Matrix-Dense Matrix (FSSpMDM) routine in libxsmm and show how these result in performant operator kernels for prismatic and hexahedral elements. Strategies will be described for both x86-64 (AVX2/AVX-512) and AARCH64 (NEON/SVE) instruction sets. Results will be presented on recent Intel and Apple CPUs and compared against the well-known GiMMiK C code generation library. Frank Giraldo (Naval Postgraduate School) Using High-Order Element-Based Galerkin Methods to Capture Hurricane Intensification May 16, 2023 | FEM@LLNL Seminar Series Properly capturing hurricane rapid intensification (where the winds increase by 30 knots in the first 24 hours) remains challenging for atmospheric models. The reason is that we need LES-type scales \ud835\udcaa(100m), which are still elusive due to computational cost. In this talk, I describe the work that we are doing in this area and how element-based Galerkin Methods are being used to approximate spatial derivatives. I will also discuss the time-integration strategy that we are exploring for this class of problems. In particular, we are exploring process Multirate methods whereby each process in a system of nonlinear partial differential equations (PDEs) uses a time-integrator and time-step commensurate with the wave-speed of that process. We have constructed Multirate methods of any order using extrapolation methods. Along this same idea, we have also developed a multi-modeling framework (MMF) designed to replace the physical parameterizations used in weather/climate models. Our approach is to view the coarse-scale and fine-scale models through the lens of Variational Multi-Scale (VMS) methods in order to give MMF a more rigorous mathematical foundation. Our end goal is to use MMF in order to better resolve the inner core of hurricanes. In addition, I will show some results using flux differencing discontinuous Galerkin Methods for constructing both Kinetic Energy Preserving and Entropy Stable methods and discuss why we need scalable models in order to achieve our goals. Our model, NUMA, is a 3D nonhydrostatic atmospheric model that runs on large CPU clusters and on GPUs. Leszek F. Demkowicz (University of Texas at Austin) Full Envelope DPG Approximation for Electromagnetic Waveguides. Stability and Convergence Analysis April 25, 2023 | FEM@LLNL Seminar Series The presented work started with a convergence and stability analysis for the so-called full envelope approximation used in analyzing optical amplifiers (lasers). The specific problem of interest was the simulation of Transverse Mode Instabilities (TMI).
The problem translates into the solution of a system of two nonlinear time-harmonic Maxwell equations coupled with a transient heat equation. Simulation of a 1 m long fiber involves the resolution of 10 M wavelengths. A superefficient MPI/openMP hp FE code run on a supercomputer gets you to the range of ten thousand wavelengths. The resolution of the additional thousand wavelengths is done using an exponential ansatz e^{ikz} in the z-coordinate. This results in a non-standard Maxwell problem. The stability and convergence analysis for the problem has been restricted to the linear case only. It turns out that the modified Maxwell problem is stable if and only if the original waveguide problem is stable and the boundedness below stability constants are identical. We have converged to the task of determining the boundedness below constant. The stability analysis started with an easier, acoustic waveguide problem. Separation of variables leads to an eigenproblem for a self-adjoint operator in the transverse plane (in x,y). Expansion of the solution in terms of the corresponding eigenvectors leads then to a decoupled system of ODEs, and a stability analysis for a two-point BVP for an ODE parametrized with the corresponding eigenvalues. The L^2-orthogonality of the eigenmodes and the stability result for a single mode, lead then to the final result: the inverse boundedness below constant depends inversely linearly upon the length L of the waveguide. The corresponding stability for the Maxwell waveguide turned out to be unexpectedly difficult. The equation is vector-valued so a direct separation of variables is out to begin with. An exponential ansatz in z leads to a non-standard eigenproblem involving an operator that is non-self adjoint even for the easiest, homogeneous case. The answer to the problem came from a tricky analysis of the eigenproblem combined with the perturbation technique for perturbed self-adjoint operators. The use of perturbation theory requires an assumption on the smallness of perturbation of the dielectric constant (around a constant value) but with no additional assumptions on its differentiability (discontinuities are allowed). In the end, the final result is similar to that for the acoustic waveguide - the boundedness below constant depends inversely linearly on L. Joachim Sch\u00f6berl (Vienna University of Technology) The Netgen/NGSolve Finite Element Software March 28, 2023 | FEM@LLNL Seminar Series In this talk we give an overview of the open source finite element software Netgen/NGSolve, available from www.ngsolve.org . We show how to setup various physical models using FEniCS-like Python scripting. We discuss how we use NGSolve for teaching finite element methods, and how recent research projects have contributed to the further development of the NGSolve software. Some recent highlights are matrix-valued finite elements with applications in elasticity, fluid dynamics, and numerical relativity. We show how the recently extended framework of linear operators allows the utilization of GPUs for linear solvers, as well as time-dependent problems. Vikram Gavini (University of Michigan) Fast, Accurate and Large-scale Ab-initio Calculations for Materials Modeling March 7, 2023 | FEM@LLNL Seminar Series Electronic structure calculations, especially those using density functional theory (DFT), have been very useful in understanding and predicting a wide range of materials properties. 
The importance of DFT calculations to engineering and physical sciences is evident from the fact that ~20% of computational resources on some of the world's largest public supercomputers are devoted to DFT calculations. Despite the wide adoption of DFT, the state-of-the-art implementations of DFT suffer from cell-size and geometry limitations, with the widely used codes in solid state physics being limited to periodic geometries and typical simulation domains containing a few hundred atoms. This talk will present our recent advances towards the development of computational methods and numerical algorithms for conducting fast and accurate large-scale DFT calculations using adaptive finite-element discretization, which form the basis for the recently released DFT-FE open-source code . Details of the implementation, including mixed precision algorithms and asynchronous computing, will be presented. The computational efficiency, scalability and performance of DFT-FE will be presented, which demonstrates a significant outperformance of widely used plane-wave DFT codes. Stefan Henneking (University of Texas at Austin) Bayesian Inversion of an Acoustic-Gravity Model for Predictive Tsunami Simulation January 10, 2023 | FEM@LLNL Seminar Series To improve tsunami preparedness, early-alert systems and real-time monitoring are essential. We use a novel approach for predictive tsunami modeling within the Bayesian inversion framework. This effort focuses on informing the immediate response to an occurring tsunami event using near-field data observation. Our forward model is based on a coupled acoustic-gravity model (e.g., Lotto and Dunham, Comput Geosci (2015) 19:327\u2014340). Similar to other tsunami models, our forward model relies on transient boundary data describing the location and magnitude of the seafloor deformation. In a real-time scenario, these parameter fields must be inferred from a variety of measurements, including observations from pressure gauges mounted on the seafloor. One particular difficulty of this inference problem lies in the accurate inversion from sparse pressure data recorded in the near-field where strong hydroacoustic waves propagate in the compressible ocean; these acoustic waves complicate the task of estimating the hydrostatic pressure changes related to the forming surface gravity wave. Our space-time model is discretized with finite elements in space and finite differences in time. The forward model incurs a high computational complexity, since the pressure waves must be resolved in the 3D compressible ocean over a sufficiently long time span. Due to the infeasibility of rapidly solving the corresponding inverse problem for the fully discretized space-time operator, we discuss approaches for using compact representations of the parameter-to-observable map. Lin Mu (University of Georgia) An Efficient and Effective FEM Solver for Diffusion Equation with Strong Anisotropy December 13, 2022 | FEM@LLNL Seminar Series The Diffusion equation with strong anisotropy has broad applications. In this project, we discuss numerical solution of diffusion equations with strong anisotropy on meshes not aligned with the anisotropic vector field, focusing on application to magnetic confinement fusion. In order to resolve the numerical pollution for simulations on a non-anisotropy-aligned mesh and reduce the associated high computational cost, we developed a high-order discontinuous Galerkin scheme with an efficient preconditioner. 
The auxiliary space preconditioning framework is designed by employing a continuous finite element space as the auxiliary space for the discontinuous finite element space. An effective line smoother that can mitigate the high-frequency error perpendicular to the magnetic field has been designed using a graph-based approach that picks lines approximately perpendicular to the vector field when the mesh does not align with the anisotropy. Numerical experiments for several benchmark problems are presented to validate its effectiveness and robustness. Garth Wells (University of Cambridge) FEniCSx: design of the next generation FEniCS libraries for finite element methods November 8, 2022 | FEM@LLNL Seminar Series The FEniCS Project provides libraries for solving partial differential equations using the finite element method. An aim of the FEniCS Project has been to provide high-performance solver environments that closely mirror mathematical syntax, with the hypothesis that high-level representations mean that solvers are faster to write, easier to debug, and can deliver faster runtime performance than is reasonably possible by hand. Using domain-specific languages and code generation techniques, arguably the FEniCS libraries delivered on these goals for a set of problems. However, over time limitations, including performance and extensibility, became clear, and maintainability/sustainability became an issue. Building on experiences from the FEniCS libraries, I will present and discuss the design of the next generation of tools, FEniCSx. The new design retains strengths of the past approach, and addresses limitations using new designs and new tools. Solvers can be written in C++ or Python, and a number of design changes allow the creation of flexible, fast solvers in Python. In the second part of my presentation, I will discuss high-performance finite element kernels (limited to CPUs on this occasion), motivated by the Center for Efficient Exascale Discretizations 'bake-off' problems, and which would not have been possible in the original FEniCS libraries. Double, single and half-precision kernels are considered, and results include (i) the observation that kernels with vector intrinsics can be slower than auto-vectorised kernels for common cases, and (ii) a cache-aware performance model which is remarkably accurate in predicting performance across architectures. Dennis Ogiermann (University of Bochum) Computing Meets Cardiology: Making Heart Simulations Fast and Accurate September 13, 2022 | FEM@LLNL Seminar Series Heart diseases are a ubiquitous societal burden responsible for a majority of deaths worldwide. A central problem in developing effective treatments for heart diseases is the inherent complexity of the heart as an organ. From a modeling perspective, the heart can be interpreted as a biological pump involving multiple physical fields, namely fluid and solid mechanics, as well as chemistry and electricity, all interacting on different time scales. This multiphysics and multiscale aspect makes simulations inherently expensive, especially when approached with naive numerical techniques. However, computational models can be extraordinarily useful in helping us understand how the healthy heart functions and especially how malfunctions influence different diseases. In this context, information about possible weaknesses of therapies can also be obtained to ultimately improve clinical treatment and decision support.
In this talk, we will focus primarily on two important model classes in computational cardiology and their respective efficient numerical treatment without compromising significant accuracy. The first class is the problem of computing electrocardiograms (ECG) from electrical heart simulations. Since ECG measurements can give a wide range of insights about a wide range of heart diseases they offer suitable data to validate our electrophysiological models and verify our numerical schemes on organ-scale. Known numerical issues, arising in the context of electrophysiological models, will be reviewed. The second class addresses bidirectionally coupled electromechanical models and their efficient numerical treatment. Focus will be on a unified space-time adaptive operator splitting framework developed on top of MFEM which proves highly efficient so far for the investigated model classes while still preserving high accuracy. Ricardo Vinuesa (KTH) Modeling and Controlling Turbulent Flows through Deep Learning August 23, 2022 | FEM@LLNL Seminar Series The advent of new powerful deep neural networks (DNNs) has fostered their application in a wide range of research areas, including more recently in fluid mechanics. In this presentation, we will cover some of the fundamentals of deep learning applied to computational fluid dynamics (CFD). Furthermore, we explore the capabilities of DNNs to perform various predictions in turbulent flows: we will use convolutional neural networks (CNNs) for non-intrusive sensing, i.e. to predict the flow in a turbulent open channel based on quantities measured at the wall. We show that it is possible to obtain very good flow predictions, outperforming traditional linear models, and we showcase the potential of transfer learning between friction Reynolds numbers of 180 and 550. We also discuss other modelling methods based on autoencoders (AEs) and generative adversarial networks (GANs), and we present results of deep-reinforcement-learning-based flow control. Jeffrey Banks (RPI) Efficient Techniques for Fluid Structure Interaction: Compatibility Coupling and Galerkin Differences July 26, 2022 | FEM@LLNL Seminar Series Predictive simulation increasingly involves the dynamics of complex systems with multiple interacting physical processes. In designing simulation tools for these problems, both the formulation of individual constituent solvers, as well as coupling of such solvers into a cohesive simulation tool must be addressed. In this talk, I discuss both of these aspects in the context of fluid-structure interaction, where we have recently developed a new class of stable and accurate partitioned solvers that overcome added-mass instability through the use of so-called compatibility boundary conditions. Here I will present partitioned coupling strategies for incompressible FSI. One interesting aspect of CBC-based coupling is the occurrence of nonstandard and/or high-derivative operators, which can make adoption of the techniques challenging, e.g. in the context of FEM methods. To address this, I will also discuss our newly developed Galerkin Difference approximations, which may provide a natural pathway for CBCs in an FEM context. Although GD is fundamentally a finite element approximation based on a Galerkin projection, the underlying GD space is nonstandard and is derived using profitable ideas from the finite difference literature. The resulting schemes possess remarkable properties including nodal superconvergence and the ability to use large CFL-one time steps. 
I will also present preliminary results for GD discretizations on unstructured grids using MFEM. Paul Fischer (UIUC/ANL) Outlook for Exascale Fluid Dynamics Simulations June 21, 2022 | FEM@LLNL Seminar Series We consider design, development, and use of simulation software for exascale computing, with a particular emphasis on fluid dynamics simulation. Our perspective is through the lens of the high-order code Nek5000/RS, which has been developed under DOE's Center for Efficient Exascale Discretizations (CEED). Nek5000/RS is an open source thermal fluids simulation code with a long development history on leadership computing platforms\u2014it was the first commercial software on distributed memory platforms and a Gordon Bell Prize winner on Intel's ASCI Red. There are myriad objectives that drive software design choices in HPC, such as scalability, low memory footprint, portability, and maintainability. Throughout, our design objective has been to address the needs of the user, including facilitating data analysis and ensuring flexibility with respect to platform and number of processors that can be used. When running on large-scale HPC platforms, three of the most common user questions are: How long will my job take? How many nodes will be required? Is there anything I can do to make my job run faster? Additionally, one might have concerns about storage, post-processing (Will I be able to analyze the results? Where?), and queue times. This talk will seek to answer several of these questions over a broad range of fluid-thermal problems from the perspective of a Nek5000/RS user. We specifically address performance with data for NekRS on several of the DOE's pre-exascale architectures, which feature AMD MI250X or NVIDIA V100 or A100 GPUs. Mike Puso (LLNL) Topics in Immersed Boundary and Contact Methods: Current LLNL Projects and Research May 24, 2022 | FEM@LLNL Seminar Series Many of the most interesting phenomena in solid mechanics occur at material interfaces. This can be in the form of fluid structure interaction, cracks, material discontinuities, impact, etc. Solutions to these problems often require some form of immersed/embedded boundary method or contact or combination of both. This talk will provide a brief overview of different lab efforts in these areas and present some of the current research aspects and results from LLNL production codes. Technically speaking, the methods discussed here all require Lagrange multipliers to satisfy the constraints on the interface of overlapping or dissimilar meshes, which complicates the solution. Stability and consistency of Lagrange multiplier approaches can be hard to achieve both in space and time. For example, the wrong choice of multiplier space will either over-constrain the problem and/or cause oscillations at the material interfaces for simple statics problems. For dynamics, many of the basic time integration schemes such as Newmark's method are known to be unstable due to gaps opening and closing. Here we introduce some (non-Nitsche) stabilized multiplier spaces for immersed boundary and contact problems and a structure-preserving time integration scheme for long time dynamic contact problems. Finally, I will describe some ongoing efforts extending this work.
Robert Chiodi (UIUC) CHyPS: An MFEM-Based Material Response Solver for Hypersonic Thermal Protection Systems April 16, 2022 | FEM@LLNL Seminar Series The University of Illinois at Urbana-Champaign\u2019s Center for Hypersonics and Entry Systems Studies has developed a material response solver, named CHyPS, to predict the behavior of thermal protection systems for hypersonic flight. CHyPS uses MFEM to provide the underlying discontinuous Galerkin spatial discretization and linear solvers used to solve the equations. In this talk, we will briefly present the physics and corresponding equations governing material response in hypersonic environments. We will also include a discussion on the implementation of a direct Arbitrary Lagrangian-Eulerian approach to handle mesh movement resulting from the ablation of the material surface. Results for standard community test cases developed at a series of Ablation Workshop meetings over the past decade will be presented and compared to other material response solvers. We will also show the potential of high-order solutions for simulating thermal protection system material response. Tamas Horvath (Oakland University) Space-Time Hybridizable Discontinuous Galerkin with MFEM March 29, 2022 | FEM@LLNL Seminar Series Unsteady partial differential equations on deforming domains appear in many real-life scenarios, such as wind turbines, helicopter rotors, car wheels, free-surface flows, etc. We will focus on the space-time finite element method, which is an excellent approach to discretize problems on evolving domains. This method uses discontinuous Galerkin to discretize both in the spatial and temporal directions, allowing for an arbitrarily high-order approximation in space and time. Furthermore, this method automatically satisfies the geometric conservation law, which is essential for accurate solutions on time-dependent domains. The biggest criticism is that the application of space-time discretization increases the computational complexity significantly. To overcome this, we can use the high-order accurate Hybridizable or Embedded discontinuous Galerkin method. Numerical results will be presented to illustrate the applicability of the method for fluid flow around rigid bodies. Tobin Isaac (Georgia Tech) Unifying the Analysis of Geometric Decomposition in FEEC March 22, 2022 | FEM@LLNL Seminar Series Two operations take function spaces and make them suitable for finite element computations. The first is the construction of trace-free subspaces (which creates \"bubble\" functions) and the second is the extension of functions from cell boundaries into cell interiors (which create edge functions with the correct continuity): together these operations define the geometric decomposition of a function space. In finite element exterior calculus (FEEC), these two operations have been treated separately for the two main families of finite elements: full polynomial elements and trimmed polynomial elements. In this talk we will see how one constructor of trace-free functions and one extension operator can be used for both families, and indeed for all differential forms. We will also examine the practicality of these two operators as tools for implementing geometric decompositions in actual finite element codes. 
Rapha\u00ebl Zanella (UT Austin) Axisymmetric MFEM-Based Solvers for the Compressible Navier-Stokes Equations and Other Problems March 1, 2022 | FEM@LLNL Seminar Series An axisymmetric model leads, when suitable, to a substantial cut in the computational cost with respect to a 3D model. Although not as accurate, the axisymmetric model allows one to quickly obtain a result which can be satisfactory. Simple modifications to a 2D finite element solver allow one to obtain an axisymmetric solver. We present MFEM-based parallel axisymmetric solvers for different problems. We first present simple axisymmetric solvers for the Laplacian problem and the heat equation. We then present an axisymmetric solver for the compressible Navier-Stokes equations. All solvers are based on H^1-conforming finite element spaces. The correctness of the implementation is verified with convergence tests on manufactured solutions. The Navier-Stokes solver is used to simulate axisymmetric flows with an analytical solution (Poiseuille and Taylor-Couette) and an air flow in a plasma torch geometry. Robert Carson (LLNL) An Overview of ExaConstit and Its Use in the ExaAM Project February 1, 2022 | FEM@LLNL Seminar Series As additively manufactured (AM) parts become increasingly popular in industry, a growing need exists to help expedite the certification process for parts. The ExaAM project seeks to help this process by producing a workflow to model the AM process from the melt pool process all the way up to the part scale response by leveraging multiple physics codes run on upcoming exascale computing platforms. As part of this workflow, ExaConstit is a next-generation quasi-static, solid mechanics FEM code built upon MFEM and used to connect local microstructures and local properties within the part scale response. Within this talk, we will first provide an overview of ExaConstit, how we have ported it over to the GPU, and some performance numbers on a number of different platforms. Next, we will discuss how we have leveraged MFEM and the FLUX workflow to run hundreds of high-fidelity simulations on Summit in order to generate the local properties needed to drive the part scale simulation in the ExaAM workflow. Finally, we will showcase a few other areas in which ExaConstit has been used. Guglielmo Scovazzi (Duke) The Shifted Boundary Method: An Immersed Approach for Computational Mechanics January 20, 2022 | FEM@LLNL Seminar Series Immersed/embedded/unfitted boundary methods obviate the need for continual re-meshing in many applications involving rapid prototyping and design. Unfortunately, many finite element embedded boundary methods are also difficult to implement due to the need to perform complex cell cutting operations at boundaries, and the consequences that these operations may have on the overall conditioning of the ensuing algebraic problems. We present a new, stable, and simple embedded boundary method, named \u201cshifted boundary method\u201d (SBM), which eliminates the need to perform cell cutting. Boundary conditions are imposed on a surrogate discrete boundary, lying on the interior of the true boundary interface. We then construct appropriate field extension operators, by way of Taylor expansions, with the purpose of preserving accuracy when imposing the boundary conditions.
We demonstrate the SBM on large-scale solid and fracture mechanics problems; thermomechanics problems; porous media flow problems; incompressible flow problems governed by the Navier-Stokes equations (also including free surfaces); and problems governed by hyperbolic conservation laws. MFEM Workshop 2023 Aaron Fisher (LLNL) Welcome and Overview October 26, 2023 | MFEM Workshop 2023 Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community resources. Tzanio Kolev (LLNL) The State of MFEM October 26, 2023 | MFEM Workshop 2023 MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities (including adaptive mesh refinement, GPU support, and FEM operator decomposition and partial assembly), examples, and mini-apps. Kolev also highlighted the growth of the global community as well as features included in the recent v4.5 software release. Veselin Dobrev (LLNL) Recent Developments October 26, 2023 | MFEM Workshop 2023 Veselin Dobrev of LLNL detailed the project\u2019s recent developments including sub-mesh extraction, linear form assembly on GPUs, coefficient evaluation on GPUs, new mini-apps and examples, Windows 2022 CI testing on GitHub, and more. He also summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Extreme-scale Scientific Software Development Kit, SciDAC, and the FASTMath Institute. Sebastian Grimberg (AWS) Palace: PArallel LArge-scale Computational Electromagnetics October 26, 2023 | MFEM Workshop 2023 Palace is a parallel finite element code for full-wave electromagnetics simulations based on the MFEM library. Palace is used at the AWS Center for Quantum Computing to perform large-scale 3D simulations of complex electromagnetics models and enable the design of quantum computing hardware. Grimberg provided an overview of the simulation capabilities of Palace as well as some recent developments for conforming and nonconforming adaptive mesh refinement, operator partial assembly, and GPU support. Jacob Lotz (Delft University of Technology) Computation and Reduced Order Modelling of Periodic Flows October 26, 2023 | MFEM Workshop 2023 Many types of periodic flows can be found in nature and industrial applications and their computation is expensive due to lengthy time simulations. His work aims to reduce the cost of these computations. His team solves periodic flows in a space-time domain in which both ends in time are periodic such that they only have to model one period. MFEM is used to discretize the space-time domain and solve our discretized system of equations. Lotz applies a hyper-reduced Proper Orthogonal Decomposition Galerkin reduced order model to speed up our computations. During the presentation he showed (results of) their full order model and their advances in reduced order modelling. Boyan Lazarov (LLNL) Scalable Design and Optimization with MFEM October 26, 2023 | MFEM Workshop 2023 Lazarov discussed recently added and ongoing code development facilitating the solution of shape and topology optimization problems. Both topology and shape optimization are gradient-based iterative algorithms aiming to find a material distribution that minimizes an objective and fulfills a set of constraints. 
Every optimization step includes a solution of the forward problem, an evaluation of the objective and constraints, a solution of an adjoint problem associated with every objective or constraint, an evaluation of gradients, and an update of the design based on mathematical programming techniques. All these steps can be easily implemented and executed by using MFEM in a scalable manner, allowing the design and optimization of large-scale realistic industrial problems. Thus, the goal is to exemplify these features, highlight the techniques that simplify the implementation of new problems, and provide a glimpse into the future. Student Lightning Talks Part 1 October 26, 2023 | MFEM Workshop 2023 The following four students presented in this video: Shani Martinez Weissberg (Tel Aviv University): \u201c\u00b5FEA of a Rabbit Femur\u201d Paul Moujaes (TU-Dortmund): \u201cDissipation-Based Entropy Stabilization for Slope-Limited Discontinuous Galerkin Approximations of Hyperbolic Problems\u201d Alejandro Mu\u00f1oz (Universidad de Granada): \u201cDiscontinuous Galerkin in the Time Domain for Maxwell\u2019s Equations\u201d Bill Ellis (UK Atomic Energy Authority): \u201cComparing Thermo-Mechanical Solves in MOOSE and MFEM\u201d Student Lightning Talks Part 2 October 26, 2023 | MFEM Workshop 2023 The following four students presented in this video: Alexander Mote (Oregon State University): \u201cA Neural Network Surrogate Model for Nonlocal Thermal Flux Calculations\u201d (LLNL-PRES-854134) Amit Rotem (Virginia Tech): \u201cGPU Acceleration of IPDG in MFEM\u201d Josiah Brown (Relogic Research): \u201cProject Minerva\u201d Mike Pozulp (UC Berkeley): \u201cAn Implicit Monte Carlo Acceleration Scheme\u201d Syun'ichi Shiraiwa (PPPL) Radio-Frequency Wave Simulation in Hot Magnetized Plasma using Differential Operator for Non-Local Conductivity Response October 26, 2023 | MFEM Workshop 2023 In high-temperature plasmas, the dielectric response to the RF fields is caused by freely moving charged particles, which naturally makes such a response non-local; correspondingly, the Maxwell wave problem becomes an integro-differential equation. A differential form of the dielectric operator, based on the small k\u22a5\u03c1 expansion, is widely used. However, such operators typically include only terms up to second order, and thus their use is limited to waves that satisfy k\u22a5\u03c1 < 1. We propose an alternative approach to construct a dielectric operator, which includes all-order finite Larmor radius effects without explicitly containing higher order derivatives. We use a rational approximation of the plasma dielectric tensor in the wave number space, in order to yield a differential operator acting on the dielectric current (J). The 1D O-X-B mode-conversion of the electron Bernstein wave in a non-relativistic Maxwellian plasma was modeled using this approach. Agreement with analytic calculations and conservation of the wave energy carried by the Poynting flux and electron thermal motion (\u201csloshing\u201d) are found. The connection between our construction method and the superposition of Green\u2019s functions for the resulting screened Poisson equations is presented. An approach to extend the operator to a multi-dimensional setting will also be discussed.
Tamas Horvath (Oakland University) Implementation of Hybridizable Discontinuous Galerkin Methods via the HDG Branch October 26, 2023 | MFEM Workshop 2023 Horvath presented the HDG branch, which was initially developed for HDG discretizations of advection-diffusion problems. Recent updates have made the branch highly adaptable for various applications, allowing a flexible implementation of HDG for many different PDEs. He showcased these enhancements and provide insights into their versatile usage across different problems. Yohann Dudouit (LLNL) Empowering MFEM Using libCEED October 26, 2023 | MFEM Workshop 2023 Dudouit began with an overview of the features introduced to MFEM through the integration of libCEED. He emphasized capabilities that are distinct from native MFEM functionalities, marking an enhancement in the software\u2019s suite of tools, such as support for simplices, handling of mixed meshes, and support for p-adaptivity. The presentation concluded by showcasing benchmarks for various problems executed on different HPC architectures, illustrating the performance gains and efficiencies achieved through the libCEED integration. Zhang Chunyu (Sun Yat-Sen University) Homogenized Energy Theory for Solution of Elasticity Problems with Consideration of Higher-Order Microscopic Deformations October 26, 2023 | MFEM Workshop 2023 The classical continuum mechanics faces difficulties in solving problems involving highly inhomogeneous deformations. The proposed theory investigates the impact of high-order microscopic deformation on modeling of material behaviors and provides a refined interpretation of strain gradients through the averaged strain energy density. Only one scale parameter, i.e., the size of the Representative Volume Element(RVE), is required by the proposed theory. By employing the variational approach and the Augmented Lagrangian Method(ALM), the governing equations for deformation as well as the numerical solution procedure are derived. It is demonstrated that the homogenized energy theory offers plausible explanations and reasonable predictions for the problems yet unsolved by the classical theory such as the size effect of deformation and the stress singularity at the crack tip. The concept of averaged strain energy proves to be more suitable for describing the intricate mechanical behavior of materials. And high order partial differential equations can be effectively solved by the ALM by introducing supplementary variables to lower the highest order of the equations. Eric Chin (LLNL) Contact Constraint Enforcement Using the Tribol Interface Physics Library October 26, 2023 | MFEM Workshop 2023 Chin discussed recent additions to the Tribol interface physics library to simplify MPI parallel contact constraint enforcement in large deformation, implicit and explicit continuum solid mechanics simulations using MFEM. Tribol is an open-source software package available on GitHub and includes tools for contact detection, state-of-the-art Lagrangian contact methods such as common plane and mortar, and various enforcement techniques such as penalty and Lagrange multiplier. Additionally, Tribol recently added a domain redecomposer for coalescing proximal contact pairs on a single rank. Tribol\u2019s features are designed to interact seamlessly with MFEM and other codes that use MFEM, with native support for MFEM data structures such as ParMesh, ParGridFunction, and HypreParMatrix. 
Chin highlighted the simplicity of adding Tribol features to an MFEM-based code by looking at integration with Serac , an open-source implicit nonlinear thermal-structural simulation code. Milan Holec (LLNL) Deterministic Transport MFEM-Miniapp October 26, 2023 | MFEM Workshop 2023 Holec introduced a new multidimensional discretization in MFEM enabling efficient high-order phase-space simulations of various types of Boltzmann transport. In terms of a generalized form of the standard discrete ordinate SN method for the phase-space, his team carefully designs discrete analogs obeying important continuous properties such as conservation of energy, preservation of positivity, preservation of the diffusion limit of transport, preservation of symmetry leading to rays-effect mitigation, and other laws of physics. Finally, Holec showed how to apply this new phase-space MFEM feature to increase the fidelity of modeling of fusion energy experiments. Aaron Fisher (LLNL) Wrap-Up and Visualization Contest Winners October 26, 2023 | MFEM Workshop 2023 The workshop concluded with the announcement of winners of the simulation and visualization contest: (1) displacement distribution of a loaded excavator arm under static equilibrium, rendered by Mehran Ebrahimi from Autodesk Research; and (2) leapfrogging vortex rings based on an MFEM incompressible Schr\u00f6dinger fluid solver, rendered by John Camier from LLNL. Contest winners are featured in the gallery . Conferences in 2023 Tzanio Kolev (LLNL) PDE Simulations on Unstructured Grids with Finite Element Discretizations March 15, 2023 | IPAM at UCLA LLNL computational mathematician Tzanio Kolev presented an overview of MFEM as part of the long program on New Mathematics for the Exascale: Applications to Materials Science at the Institute for Pure and Applied Mathematics. MFEM Workshop 2022 Aaron Fisher (LLNL) Welcome and Overview October 25, 2022 | MFEM Workshop 2022 Held on October 25, 2022, the second annual MFEM community workshop brought together users and developers for a review of software features and the development roadmap, a showcase of technical talks and applications, an interactive Q&A session, and a visualization contest. Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community survey results. Tzanio Kolev (LLNL) The State of MFEM October 25, 2022 | MFEM Workshop 2022 MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities (including adaptive mesh refinement, GPU support, and FEM operator decomposition and partial assembly), examples, and mini-apps. Kolev also highlighted the growth of the global community as well as features included in the recent v4.5 software release. Veselin Dobrev (LLNL) Recent Developments in MFEM October 25, 2022 | MFEM Workshop 2022 Veselin Dobrev of LLNL detailed the project\u2019s recent developments including sub-mesh extraction, linear form assembly on GPUs, coefficient evaluation on GPUs, new mini-apps and examples, Windows 2022 CI testing on GitHub, and more. He also summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Extreme-scale Scientific Software Development Kit, SciDAC, and the FASTMath Institute. 
Ben Zwick (University of Western Australia) Solution of the Electroencephalography (EEG) Forward Problem October 25, 2022 | MFEM Workshop 2022 Ben Zwick of the University of Western Australia presented \"Solution of the Electroencephalography (EEG) Forward Problem.\" The brain's electrical activity can be measured using EEG with electrodes attached to the scalp, or electrocorticography (ECoG), also known as intracranial EEG (iEEG), with electrodes implanted on the brain's surface. EEG source localization combines measurements from EEG or iEEG with data from medical imaging to estimate the location and strengths of the current sources that generated the measured electric potential at the electrodes. Source localization can be used to locate the epileptic zone in pharmaco-resistant focal epilepsies and study evoked related potentials. Accurate source localization requires fast and accurate solutions of the EEG forward problem, which involves calculating the electric potential within the brain volume given a predefined source. This presentation demonstrates how MFEM can be used to solve the EEG forward problem using patient-specific geometry and tissue conductivity obtained from medical images. Carlos Brito Pacheco (Universit\u00e9 Grenoble Alpes) Rodin: Density and Topology Optimization Framework October 25, 2022 | MFEM Workshop 2022 Carlos Brito Pacheco of Universit\u00e9 Grenoble Alpes presented \"Rodin: Lightweight and Modern C++17 Shape, Density and Topology Optimization Framework.\" He introduced the shape optimization library Rodin; a lightweight and modular shape optimization framework which provides many of the associated functionalities that are needed when implementing shape and topology optimization algorithms. These functionalities range from refining and remeshing the underlying shape, to providing elegant mechanisms to specify and solve variational problems. Learn more about Rodin on GitHub . Tobias Duswald (CERN/TUM) Stochastic Fractional PDEs: Random Field Generation & Topology Optimization October 25, 2022 | MFEM Workshop 2022 Tobias Duswald of CERN/Technical University of Munich presented \"Stochastic Fractional PDEs with MFEM with Applications to Random Field Generation and Topology Optimization.\" Over the last several centuries, engineers, physicists, and mathematicians have learned how to describe their problems accurately with partial differential equations (PDEs). PDEs govern the laws of continuum mechanics, quantum mechanics, heat transfer, and many other phenomena. More recently, fractional PDEs have gained popularity in the scientific community because they allow for a more general description of complicated systems (e.g., multiphysics) by leveraging a real-valued exponent for the operators. Besides fractional operators, stochastic PDEs have also sparked the community's interest because they generalize the PDE framework to account for randomness appearing in many disciplines. This talk addresses the numerical solution of stochastic, fractional PDEs with MFEM. To deal with these two flavors of PDEs, Duswald introduced MFEM\u2019s WhiteNoiseIntegrator to treat a stochastic linear form and adopt a rational approximation for the fractional operator. He presented results for three different use cases. First, he showed numerical results for the fractional Laplace problem with homogeneous Dirichlet boundary conditions. 
Second, he generated Mat\u00e9rn-type Gaussian random fields (GRFs) by solving a specific stochastic, fractional PDE using an approach commonly referred to as the SPDE method in the spatial statistics literature. Third, he used GRFs to model geometric uncertainties in additive manufacturing processes and applied the model to topology optimization under uncertainty. Alvaro S\u00e1nchez Villar (Princeton Plasma Physics Laboratory) MFEM Application to EM-Wave Simulation in ECR Space Plasma Thrusters October 25, 2022 | MFEM Workshop 2022 Alvaro S\u00e1nchez Villar of the Princeton Plasma Physics Laboratory presented \"MFEM Application to EM-Wave Simulation in ECR Space Plasma Thrusters.\" The solution of the Maxwell equations using the cold-plasma approximation is shown in the context of the design of electron cyclotron resonance plasma thrusters for space propulsion applications. This thruster class utilizes the electron cyclotron resonance to energize the plasma constituents and to sustain the plasma discharge. MFEM finite element discretization is used to solve for the time-harmonic electromagnetic waves. The shape and magnitude of the electromagnetic power density absorbed by the plasma are coupled to the plasma transport variables, and therefore determine the thruster operation performance parameters. Coupled simulations of the electromagnetic-wave and plasma transport problems are used to interpret thruster operational principles and to understand the thruster's sensitivity to operational and design parameters, and they are compared to experimental measurements both to assess the accuracy of the current numerical model and to highlight its main limitations. Brian Young OpenParEM2D: A 2D Simulator for Guided Waves October 25, 2022 | MFEM Workshop 2022 Independent software developer Brian Young presented \"OpenParEM2D: A Free, Open-Source Electromagnetic Simulator for 2D Waveguides and Transmission Lines.\" An overview is provided of OpenParEM2D, a 2D electromagnetic simulator for guided waves. It is a free, open-source project licensed under GPLv3 or later and released at its website . Capabilities and methodology are presented. Christina Migliore (MIT) The Development of the EM RF-Edge Interactions Mini-app \u201cStix\u201d Using MFEM October 25, 2022 | MFEM Workshop 2022 Christina Migliore of MIT presented \"The Development of the EM RF-Edge Interactions Mini-App Stix Using MFEM.\" Ion cyclotron radio frequency range (ICRF) power plays an important role in heating and current drive in fusion devices. However, experiments show that in the ICRF regime there is a formation of a radio frequency (RF) sheath at the material and antenna boundaries that influences sputtering and power dissipation. Given the size of the sheath relative to the scale of the device, it can be approximated as a boundary condition (BC). Electromagnetic field solvers in the ICRF regime typically treat material boundaries as perfectly conducting, thus ignoring the effect of the RF sheath. Here, progress is described on implementing a model for the RF sheath based on a finite-impedance sheath BC formulated by J. Myra and D. A. D\u2019Ippolito, Physics of Plasmas 22 (2015), which provides a representation of the RF rectified sheath including capacitive and resistive effects. This talk discusses results from the development of Stix, a parallelized cold-plasma wave equation solver that implements this nonlinear sheath impedance BC through the method of finite elements in pseudo-1D and pseudo-2D using the MFEM library.
Will Pazner (Portland State University) High-Order Solvers + GPU Acceleration October 25, 2022 | MFEM Workshop 2022 Will Pazner of Portland State University presented \"High-Order Solvers + GPU Acceleration.\" He discussed the benefits of high-order (HO) methods in modeling under-resolved physics and on modern computing architectures, acknowledging that solving HO finite element problems remains challenging. His talk included details about how MFEM supports matrix-free solvers for HO methods, HO operator setup and application, low-order-refined (LOR) preconditioning and matrix assembly, LOR assembly throughput on GPUs (including CPU and GPU comparisons and parallel scalability), and LOR adaptive mesh refinement preconditioning. Jorge-Luis Barrera (LLNL) Shape and Topology Optimization Powered by MFEM October 25, 2022 | MFEM Workshop 2022 Jorge-Luis Barrera of LLNL presented \"Shape and Topology Optimization Powered by MFEM.\" He discussed the Livermore Design Optimization (LiDO) code, which solves optimization problems for a wide range of Lab-relevant engineering applications. Leveraging MFEM and the LLNL-developed engineering simulation code Serac, LiDO delivers a powerful suite of design tools that run on HPC systems. The talk highlighted several design examples that benefit from LiDO\u2019s integration with MFEM, including multi-material geometries, octet truss lattices, and a concrete dam under stress. LiDO\u2019s graph architecture that seamlessly integrates MFEM features ensures robust topology optimization, as well as shape optimization using nodal coordinates and level set fields as optimization variables. Siu Wun Cheung (LLNL) Reduced Order Modeling for FE Simulations with MFEM & libROM October 25, 2022 | MFEM Workshop 2022 Siu Wun Cheung of LLNL presented \"Reduced Order Modeling for Finite Element Simulations Through the Partnership of MFEM and libROM.\" MFEM provides a wide variety of mesh types and high-order finite element discretizations. However, subject to the model complexity and fine resolution of the discretization, the computational cost can be high, requiring a long time to complete a single forward simulation. In this talk, we will introduce various reduced order modeling techniques, which aim to lower the computational complexity and maintain good accuracy, including intrusive projection-based model reduction and non-intrusive approaches. We will demonstrate the use of reduced order modeling techniques in libROM (www.librom.net), which can be applied to various MFEM examples, including the Poisson problem, linear elasticity, linear advection, mixed nonlinear diffusion, nonlinear elasticity, nonlinear heat conduction, Euler equation, and optimal control problems. Devlin Hayduke (ReLogic Research) Accelerated Deployment of MFEM Based Solvers in Large Scale Industrial Problems October 25, 2022 | MFEM Workshop 2022 Devlin Hayduke of ReLogic Research presented \"Project Minerva: Accelerated Deployment of MFEM Based Solvers in Large Scale Industrial Problems.\" While many Advanced Scientific Computing Research (ASCR) supported software packages are open source, they are often complicated to use, distributed primarily in source-code form targeting HPC systems, and potential adopters lack options for purchasing commercial support, training, and custom-development services. In response to this need, ReLogic Research, Inc., in collaboration with LLNL, is developing a secure, cloud deployable platform based on the MFEM software termed Minerva. 
Minerva will feature an integration layer allowing users of commercially available finite element pre/post-processing software (e.g., Abaqus/CAE, Hypermesh, Femap), typically used with the Abaqus solver, to run simulation studies with the MFEM discretization library, and it will further strengthen solvers implemented with MFEM to make them applicable to large-scale industrial design and optimization problems. Synthetik Applied Technologies blastFEM: GPU-Accelerated, High-Performance, Energy-Efficient Solver October 25, 2022 | MFEM Workshop 2022 Tim Brewer, Ben Shields, Peter Vonk, Jeff Heylmun, and Barlev Raymond of Synthetik Applied Technologies presented \"blastFEM: A GPU-Accelerated, Very High-Performance and Energy-Efficient Solver for Highly Compressible Flows.\" Highly compressible multiphase and reactive flows are important and manifest across a myriad of practical applications: novel energy production and propulsion methods, building design, safety and energy efficiency, material discovery, and maintenance of our nuclear arsenal. There are, however, few tools available to industry capable of simulating these flows at a resolution and scale suitable to make predictions of adequate detail (at least within reasonable timeframes and budgetary constraints) to inform engineers and designers. A next-generation, highly efficient simulation code is needed that can deliver results within useful timeframes, with sufficient detail to support simulation-driven design, discovery, and optimization. Furthermore, the code must be designed to run on modern and emerging heterogeneous architectures and must leverage these architectures efficiently through the use of numerical schemes designed to maximize computational efficiency. Adolfo Rodriguez (OpenSim Technology) Using MFEM for Wellbore Stability Analysis October 25, 2022 | MFEM Workshop 2022 Adolfo Rodriguez of OpenSim Technology presented \"Using MFEM for Wellbore Stability Analysis.\" He discussed the results from a Department of Energy Small Business Innovation Research project regarding the implementation of wellbore stability analysis for hydrocarbon-producing wells. Julian Andrej (LLNL) AWS Tutorial October 25, 2022 | MFEM Workshop 2022 In this tutorial, Julian Andrej of LLNL demonstrated how to use MFEM in the cloud (e.g., an Amazon Web Services instance) for scalable finite element discretization application development. Step-by-step instructions for the tutorial can be found on the tutorial page . Aaron Fisher (LLNL) Wrap-Up and Simulation Contest Winners October 25, 2022 | MFEM Workshop 2022 Aaron Fisher of LLNL concluded the workshop by announcing the winners of the simulation and visualization contest: (1) streamlines of the electric field generated by a current dipole source located in the temporal lobe of an epilepsy patient, rendered by Ben Zwick of the University of Western Australia; (2) a topology-optimized heat sink, rendered by Tobias Duswald of CERN/Technical University of Munich; (3) the magnetic field induced by current running through copper wire in air, rendered by Will Pazner of Portland State University. Contest winners are featured in the MFEM gallery .
Conferences in 2022 Vladimir Tomov (LLNL) Finite Element Algorithms and Research Topics in ALE Hydrodynamics November 17, 2022 | Texas A&M University-Corpus Christi Department of Math & Statistics LLNL computational mathematician Vladimir Tomov discussed high-order finite element methods research, development, and application in the context of shock hydrodynamics simulations. The method is based on an Arbitrary Lagrangian-Eulerian (ALE) formulation consisting of separate Lagrangian, mesh optimization, and remap phases. The presentation addressed the following topics: Lagrangian shock hydrodynamics on curved meshes; multi-material closure models; coupling to multigroup radiation diffusion; optimization, r-adaptivity, and surface fitting of high-order meshes; advection-based remap with nonlinear sharpening of material interfaces; synchronization between the max/min bounds of primal and conservative fields during remap; computationally efficient finite element kernels based on partial assembly and sum factorization. The talk also covered the existing methods followed by a discussion about the outstanding research challenges and ongoing work to address them. John Camier (LLNL) All-Out Kernel Fusion: Reaching Peak Performance Faster in High-Order Finite Element Simulations March 21\u201324, 2022 | NVIDIA GTC22 LLNL research scientist John Camier described recent improvements of high-order finite element CUDA kernels that can reduce the time-to-solution by a factor of 10. Augmenting traditional compiler representations with a general mathematical description enables a sustainable way to generate optimized kernels, matching the peak performance of hand-tuned CUDA code. Such intermediate graph-based representation provides significant potential for optimization, both in terms of minimizing the number of kernel launches and in reducing the memory bandwidth. Camier also presented results on single and multiple GPUs that demonstrate significant reduction in the local problem size required to reach peak performance, leading to faster time-to-solution in finite element applications. MFEM Workshop 2021 Aaron Fisher (LLNL) Wrap-Up and Simulation Contest Winners October 20, 2021 | MFEM Workshop 2021 MFEM\u2019s first community workshop was held virtually on October 20, 2021, with participants around the world. Aaron Fisher of LLNL concluded the workshop by announcing the results of the simulation and visualization contest. The winners represent two very different research applications using MFEM: (1) the electric field generated by electrocardiogram waves of a rabbit\u2019s heart ventricles, rendered by Dennis Ogiermann of Ruhr-University Bochum (Germany); (2) incompressible fluid flow around a rotating turbine, animated by Tamas Horvath of Oakland University (Michigan). Contest submissions are featured in the gallery . Will Pazner (LLNL) High-Order Matrix-Free Solvers October 20, 2021 | MFEM Workshop 2021 For users unfamiliar with MFEM\u2019s solver library, Will Pazner of LLNL demonstrated a few ways\u2014in some cases adding just a single line of code\u2014to run scalable solvers for differential equations. These solvers execute hierarchical finite element discretizations for both low- and high-order problems. Vladimir Tomov (LLNL) MFEM Capabilities for High-Order Mesh Optimization October 20, 2021 | MFEM Workshop 2021 Vladimir Tomov of LLNL described MFEM\u2019s mesh optimization strategies including ways the user can define target elements. 
He demonstrated optimizing a mesh\u2019s shape by limiting displacements to preserve a boundary layer and by changing the size of a uniform mesh in a specific region. MFEM\u2019s mesh-optimizing miniapps are available online . William Dawn (NCSU) Unstructured Finite Element Neutron Transport using MFEM October 20, 2021 | MFEM Workshop 2021 William Dawn from North Carolina State University described his work on unstructured finite element neutron transport. His team models microreactors, a new class of compact reactor with relatively small electrical output. As part of the Exascale Computing Project, Dawn\u2019s team is modeling the MARVEL reactor, which is planned for construction at Idaho National Laboratory. MFEM satisfies their need for a finite element framework with GPU support and rapid prototyping. With MFEM, the team discretizes a neutron transport equation with six independent variables in space, direction, and energy. Traditional neutron transport methods use a \u201csweeping\u201d method to transport particles through a problem, but this is not feasible for generally unstructured meshes. In Dawn\u2019s models, the Self-Adjoint Angular Flux (SAAF) form of the neutron transport equation is used to transform the neutron transport equation from a first-order hyperbolic form to a second-order elliptic form. Then, the SAAF equations are discretized with the finite element method and solved using MFEM. Due to the dependence of the neutron flux on angle and direction, these problems have a high vector-dimension with hundreds to thousands of degrees of freedom (DOF) per mesh vertex. Also, due to the second-order nature of these equations, highly refined meshes are required to sufficiently resolve reactor geometries, with millions of vertices in a mesh. Results have been prepared for problems with billions of DOF using the Summit supercomputer at Oak Ridge National Laboratory. Syun\u2019ichi Shiraiwa (PPPL) Development of PyMFEM Python Wrapper for MFEM & Scalable RF Wave Simulation for Nuclear Fusion October 20, 2021 | MFEM Workshop 2021 Syun\u2019ichi Shiraiwa of the Princeton Plasma Physics Laboratory introduced PyMFEM, a Python wrapper for MFEM that his team uses in radiofrequency (RF) wave simulations for the RF-SciDAC project. RF waves can be used to heat plasma in a nuclear fusion reaction. Simulations of this process present multiple challenges when incorporating a variety of antenna structures at different frequencies, waves with different wavelengths in the same or spatially separate regions, and RF wave effects on the background plasma. To integrate MFEM, a C++ software library, into their multiphysics platform, Shiraiwa\u2019s team created a code \u201cwrapper\u201d that binds MFEM to the external Python components of RF wave simulations, ultimately extending MFEM\u2019s features to Python users. Shiraiwa described how the PyMFEM module works in serial and parallel and invited the audience to contribute to the open-source code. Qi Tang (LANL) An Adaptive, Scalable Fully Implicit Resistive MHD Solver October 20, 2021 | MFEM Workshop 2021 Qi Tang of Los Alamos National Laboratory described his team\u2019s development of an efficient, scalable solver for tokamak plasma simulations. The magnetohydrodynamics (MHD) equations are important for studying plasma systems, but efficient numerical solution of MHD is extremely challenging due to disparate time and length scales, strong hyperbolic phenomena, and nonlinearity.
Tang\u2019s team has developed a high-order stabilized finite element algorithm for the incompressible resistive MHD equations based on MFEM, which provides physics-based preconditioners, adaptive mesh refinement, parallelization, and load balancing. Tang showed animated examples of the model\u2019s scalable and efficient results. Jan Nikl (ELI Beamlines) Laser Plasma Modeling with High-Order Finite Elements October 20, 2021 | MFEM Workshop 2021 Jan Nikl outlined how his team at the ELI Beamlines Centre uses MFEM for laser plasma modeling. Lasers have found applications in many scientific disciplines, where the generation of plasma, the fourth state of matter, holds great potential for the future. A detailed description of laser-produced plasmas is thus essential for many applications, such as (pre)pulses of ultra-intense lasers and ion acceleration beamlines, laboratory astrophysics, inertial confinement fusion, and many others. All of these are investigated at ELI Beamlines in the Czech Republic, a European laser facility aiming to operate the most intense laser system in the world. In this context, Nikl described the numerical construction based on the finite element method. This effort concentrates mainly on the Lagrangian hydrodynamics and Vlasov\u2013Fokker\u2013Planck\u2013Maxwell kinetic description of plasma, utilizing the MFEM library for its flexibility and scalability. Mathias Davids (Harvard) Modeling Peripheral Nerve Stimulations (PNS) in Magnetic Resonance Imaging (MRI) October 20, 2021 | MFEM Workshop 2021 Mathias Davids from Harvard Medical School presented MFEM\u2019s use in a medical setting. Peripheral nerve stimulation (PNS) limits the usable image encoding performance in the latest generation of magnetic resonance imaging (MRI) scanners. The rapid switching of the MRI gradient coils\u2019 magnetic fields induces electric fields in the human body strong enough to evoke unwanted action potentials in peripheral nerves, leading to muscle contractions or touch perceptions. Despite their limiting role in MRI, PNS effects are not directly included during the coil design phase. Davids\u2019 team developed a modeling tool to predict PNS thresholds and locations in the human body, allowing them to directly incorporate PNS metrics in the numeric coil winding optimization to design PNS-optimized coil layouts. This modeling tool relies on electromagnetic field simulations in heterogeneous finite element body models coupled to neurodynamic models of myelinated nerve fibers. This tool enables researchers to develop strategies that mitigate PNS effects without building expensive prototype MRI systems, maximizing the usable image encoding performance. Marc Bolinches (UT) Development of DG Compressible Navier-Stokes Solver with MFEM October 20, 2021 | MFEM Workshop 2021 Marc Bolinches from the University of Texas at Austin described a compressible Navier-Stokes solver developed with MFEM v4.2, which did not yet include full GPU support. The solver uses the discontinuous Galerkin (DG) method as the spatial discretization and an explicit Runge-Kutta time-integration scheme. An effort was made to fully support GPU computation by taking over some of the loops internal to the NonlinearForm class, which also allowed the team to overlap computation and communication. The team hopes their open-source code will help other researchers create high-fidelity simulations of compressible flows.
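To make the DG-plus-explicit-Runge-Kutta pattern Bolinches describes concrete, the following is a minimal editorial sketch using standard MFEM classes; it is not the UT Austin solver, and the mesh file, the empty NonlinearForm, and the FE_Evolution wrapper are illustrative placeholders.

#include <mfem.hpp>
using namespace mfem;

// Wraps a NonlinearForm holding the DG volume/face integrators (not shown)
// so that an explicit ODE solver can advance du/dt = N(u).
class FE_Evolution : public TimeDependentOperator
{
   NonlinearForm &N;
public:
   FE_Evolution(NonlinearForm &n) : TimeDependentOperator(n.Height()), N(n) {}
   virtual void Mult(const Vector &u, Vector &dudt) const
   {
      N.Mult(u, dudt);  // in a real solver a block-diagonal DG mass inverse
                        // would also be applied here
   }
};

int main()
{
   Mesh mesh(\"periodic-square.mesh\");          // illustrative mesh file
   DG_FECollection fec(3, mesh.Dimension());    // 3rd-order DG space
   FiniteElementSpace fes(&mesh, &fec);

   GridFunction u(&fes);
   u = 0.0;                                     // placeholder initial condition

   NonlinearForm N(&fes);                       // DG integrators would be added here
   FE_Evolution evol(N);

   RK4Solver ode;                               // explicit Runge-Kutta integrator
   ode.Init(evol);

   double t = 0.0, dt = 1e-3;
   for (int step = 0; step < 100; step++) { ode.Step(u, t, dt); }
   return 0;
}

In the GPU effort described above, the element loops performed inside NonlinearForm::Mult are the ones a solver would take over or customize; the skeleton itself is independent of that choice.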
Robert Rieben (LLNL) The Multiphysics on Advanced Platforms Project: Performance, Portability and Scaling October 20, 2021 | MFEM Workshop 2021 High-energy-density physics (HEDP) experiments performed at LLNL and other Department of Energy laboratories require multiphysics simulations to predict the behavior of complex physical systems for applications including inertial confinement fusion, pulsed power, and material strength/equations-of-state studies. Robert Rieben described the variety of mathematical algorithms needed for these simulations, including ALE methods, unstructured adaptive mesh refinement, and high-order discretizations. LLNL\u2019s Multiphysics on Advanced Platforms Project (MAPP) is developing a next-generation multiphysics code, called MARBL, based on high-order numerical methods and modular infrastructure for deployment on advanced HPC architectures. MARBL\u2019s use of high-order methods produces better throughput on GPUs. MARBL uses MFEM for finite elements and mesh/field/operator abstractions while leveraging its support for efficient memory management. Rieben explained that co-design efforts among the MARBL, MFEM, and RAJA (portability software) teams led to better device utilization and improved performance for the MARBL code. Felipe G\u00f3mez, Carlos del Valle, & Juli\u00e1n Jim\u00e9nez (National University of Colombia) Phase Change Heat and Mass Transfer Simulation with MFEM October 20, 2021 | MFEM Workshop 2021 Three undergraduate students\u2014Felipe G\u00f3mez, Carlos del Valle, and Juli\u00e1n Jim\u00e9nez\u2014from the National University of Colombia presented their work using MFEM in an oceanographic model. Below the Arctic sea ice, and under the right conditions, a flux of icy brine flows down into the sea. The icy brine has a much lower freezing point and a higher density than normal seawater. As a result, it sinks while freezing everything around it, forming an ice channel called a brinicle (also known as an ice stalactite). The team shared their simulations of this phenomenon assuming cylindrical symmetry. The fluid is considered viscous and quasi-stationary, and the problem is simulated taking advantage of the setup symmetries. The heat and salt transport are weakly coupled to the fluid motion and are modeled with the corresponding conservation equations, taking into account diffusive and convective effects. The coupled system of partial differential equations is discretized and solved with the help of the MFEM finite element library. Thomas Helfer (CEA) MFEM-MGIS-MFront, an MFEM-Based Library for Nonlinear Solid Thermomechanics October 20, 2021 | MFEM Workshop 2021 Thomas Helfer from the French Atomic Energy Commission (CEA) introduced the MFEM-MGIS-MFront library (MMM), which aims for efficient use of supercomputers in the field of implicit nonlinear thermomechanics. His team\u2019s primary focus is to develop advanced nuclear fuel element simulations where the evolution of materials under irradiation is influenced by multiple phenomena (e.g., viscoplasticity, damage, phase transitions, swelling due to solid and gaseous fission products). MFEM provides this project with finite element abstractions, adaptive mesh refinement, and a parallel API. However, as applications dedicated to solid mechanics in MFEM are mostly limited to a few constitutive equations such as elasticity and hyperelasticity, Helfer explained that his team extended the software\u2019s functionality to cover a broader spectrum of mechanics.
Thus, this MMM project combines MFEM with the MFrontGenericInterfaceSupport (MGIS), an open-source C++ library that provides data structures to support arbitrarily complex nonlinear constitutive equations generated by the MFront code generator. MMM is developed within the scope of CEA\u2019s PLEIADES project. Helfer\u2019s presentation provided (1) an introduction to MMM goals; (2) a tutorial of MMM usage with a focus on the high-level user interface; (3) an overview of the core design choices of MMM and how MFEM was extended to support a range of scenarios; and (4) feedback on the two main issues encountered during MMM development. Jamie Bramwell (LLNL) Serac: User-Friendly Abstractions for MFEM-Based Engineering Applications October 20, 2021 | MFEM Workshop 2021 Jamie Bramwell of LLNL presented an overview of the open-source Serac project , whose goal is to provide user-friendly abstractions and modules that enable rapid development of complex nonlinear multiphysics simulation codes. She provided an overview of both the high-level physics modules (thermal conduction, solid mechanics, incompressible flow, electromagnetics) as well as the serac::Functional framework for quickly developing nonlinear GPU-enabled finite element method kernels. Veselin Dobrev (LLNL) Recent Developments in MFEM October 20, 2021 | MFEM Workshop 2021 Veselin Dobrev of LLNL detailed the project\u2019s recent developments including memory manager improvements; serial support for p- and hp-refinement; high-order/low-order refined solution transfer; GLVis visualization via Jupyter Notebooks; and additional GPU support regarding HYPRE preconditioners, PETSc tools, and mesh optimization. MFEM now also integrates with various new libraries (AmgX, Gingko, FMS, and others), and continuous integration testing has been conducted on LLNL\u2019s Quartz, Lassen, and Corona machines. Additionally, Dobrev summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Exascale Computing Project, SciDAC, the FASTMath Institute, and other projects. Tzanio Kolev (LLNL) The State of MFEM October 20, 2021 | MFEM Workshop 2021 MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities of discretization algorithms, built-in solvers, parallel scalability, adaptive mesh refinement, and support for a range of computing architectures. Kolev also highlighted the global community\u2019s contributions as well as features included in the recent v4.3 software release. Aaron Fisher (LLNL) Welcome and Overview October 20, 2021 | MFEM Workshop 2021 The MFEM community workshop held virtually on October 20, 2021, brought together users and developers for a review of software features and the development roadmap, a showcase of technical talks and applications, collaborative breakout sessions, and a simulation contest. Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community survey results. Conferences in 2021 Tzanio Kolev (LLNL) Efficient Finite Element Discretizations for Exascale Applications February 25, 2021 | ExCALIBUR SLE 3 workshop ATPESC 2017, 2018 Tzanio Kolev (LLNL), Mark Shephard (RPI) and Cameron Smith (RPI) Unstructured Meshing Technologies August 6, 2018 | ATPESC 2018 Presented at the Argonne Training Program on Extreme-Scale Computing 2018. Slides for this presentation are available here . 
Tzanio Kolev (LLNL) and Mark Shephard (RPI) Unstructured Meshing Technologies August 7, 2017 | ATPESC 2017 Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here . Tzanio Kolev (LLNL) and Mark Shephard (RPI) Conforming & Nonconforming Adaptivity for Unstructured Meshes August 7, 2017 | ATPESC 2017 Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here . Other Videos LLNL HPC Software Tutorials: MFEM Aug 22, 2024 Instructions for a self-paced overview of MFEM. MFEM: Advanced Simulation Algorithms for HPC Applications Jun 24, 2020 Overview of MFEM 4.0 featuring some of its developers. Center for Applied Scientific Computing Jul 12, 2019 Overview of the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory, including a highlight of MFEM. S&TR Preview: Exascale Computing October 6, 2016 Some early MFEM results in the BLAST project.", "title": "Videos"}, {"location": "videos/#mfem-videos", "text": "A collection of MFEM-related videos, including recorded talks from the MFEM workshops and conference presentations.", "title": "MFEM Videos"}, {"location": "videos/#mfem-workshop-2024", "text": "", "title": "MFEM Workshop 2024"}, {"location": "videos/#aaron-fisher-llnl", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos/#welcome-and-overview", "text": "", "title": "Welcome and Overview"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024", "text": "Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community resources.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#tzanio-kolev-llnl", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos/#the-state-of-mfem", "text": "", "title": "The State of MFEM"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_1", "text": "MFEM project lead Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities, examples, and mini-apps. Kolev also highlighted the growth of the global community as well as features developed during 2024.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#veselin-dobrev-llnl", "text": "", "title": "Veselin Dobrev (LLNL)"}, {"location": "videos/#recent-developments", "text": "", "title": "Recent Developments"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_2", "text": "Veselin Dobrev of LLNL detailed the project\u2019s recent developments including meshing and discretization improvements, GPU acceleration and partial/full assembly support, new examples and mini-apps, and more. 
He also highlighted functionality such as anisotropic refinement, conforming H1 spaces, square pyramid shaped elements, and hybridized discontinuous Galerkin solutions.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#ketan-mittal-llnl", "text": "", "title": "Ketan Mittal (LLNL)"}, {"location": "videos/#interpolation-at-arbitrary-points-in-high-order-meshes-on-gpus", "text": "", "title": "Interpolation at Arbitrary Points in High-Order Meshes on GPUs"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_3", "text": "Robust and scalable arbitrary point interpolation is required in the finite element method and spectral element method for querying the partial differential equation solution at points of interest in the domain, comparison of solutions between different meshes, and Lagrangian particle tracking. This is a challenging problem, particularly for high-order unstructured meshes partitioned in parallel with MPI, as it requires identifying the element that overlaps a given point and computing the reference space coordinates inside the element corresponding to the point. We present a robust and efficient way to address this problem for large-scale high-order meshes. First, a combination of globally partitioned and processor-local maps is used to determine a list of candidate MPI ranks and element pairs that could contain the point. Next, element-wise bounding boxes are used to further narrow down the list of candidate elements. Finally, Newton's method with a trust-region-based approach is used to invert the affine map for the candidate elements and determine the reference space coordinates corresponding to the point. Since GPU-based architectures have been demonstrated to accelerate computational analyses using meshes with tensor-product elements, specialized kernels have been developed to effect the arbitrary point search and interpolation on GPUs. We demonstrate the effectiveness of this approach using various high-order meshes.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#michael-tupek-llnl", "text": "", "title": "Michael Tupek (LLNL)"}, {"location": "videos/#automatic-parameter-sensitivities-in-serac-for-engineering-applications", "text": "", "title": "Automatic Parameter Sensitivities in Serac for Engineering Applications"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_4", "text": "We present a framework for automatically calculating sensitivities for both topology and shape design optimization workflows. Building on MFEM infrastructure, we provide abstractions for quickly specifying, solving, coupling, and differentiating new PDEs for engineering applications. Recent developments in Serac include highly robust nonlinear solvers, integration of the Tribol library for contact enforcement, coupled thermal-mechanics, a differentiable material model library, and checkpointing for transient adjoint calculations.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#jan-nikl-llnl", "text": "", "title": "Jan Nikl (LLNL)"}, {"location": "videos/#hybridization-of-convection-diffusion-systems-in-mfem", "text": "", "title": "Hybridization of Convection-Diffusion Systems in MFEM"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_5", "text": "Convection-diffusion systems are likely the most common class of partial differential equations, appearing in practically all application areas.
However, their mixed formulation typically suffers from prohibitively high computational costs and difficult preconditioning, especially close to the steady state where the system becomes a saddle point problem. The hybridization technique offers an appealing answer to these issues. The new framework for mixed systems enables single-line hybridization, reducing the problem to face traces of the total flux only. Solution of such system is then inexpensive, and preconditioning becomes nearly trivial. Non-linear convection is also supported with the action-based regime of operation. Description of the mechanism as well as code examples to show ease of usage are presented.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#vladimir-tomov-llnl", "text": "", "title": "Vladimir Tomov (LLNL)"}, {"location": "videos/#miniapps-for-shock-hydro-field-remap-and-mesh-optimization", "text": "", "title": "Miniapps for Shock Hydro, Field Remap, and Mesh Optimization"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_6", "text": "This presentation discusses recent advancements, research, and exploratory work in the MFEM miniapps for shock hydrodynamics (Laghos), field remap (Remhos), and mesh optimization. For shock hydro, we present the implementation of slip wall boundary conditions for curved domains, along with research involving material interfaces using the shifted interface method or cut-element integration through Algoim and moments-based integration. In the field remap miniapp, we cover developments in stabilized remap for continuous fields, interface sharpening techniques, and matrix-free methods for GPU execution. Lastly, we explore recent progress in mesh optimization, including surface fitting and its GPU implementation, tangential relaxation, automatic differentiation (AD) for complex objective functionals, enhanced metric theory and quality metrics, and hpr-adaptivity for the mesh representation. While some of these advancements are public, general methods that can be applied across various practical miniapps, others are exploratory, demonstrating how the miniapps can serve as a starting point for research in specific areas.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#dylan-copeland-llnl", "text": "", "title": "Dylan Copeland (LLNL)"}, {"location": "videos/#sparse-approximate-quadrature-for-acceleration-of-isogeometric-analysis-roms", "text": "", "title": "Sparse, Approximate Quadrature for Acceleration of Isogeometric Analysis & ROMs"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_7", "text": "Numerical integration for assembly of FEM systems typically employs quadrature rules selected for the polynomial order of basis functions in each element. In some cases, a much sparser rule can maintain accuracy. We present an algebraic method for constructing sparse rules, by formulating a constraint system of states required to be integrated accurately. A nonnegative least squares solver finds a sparse, approximate solution to this constraint system, yielding a quadrature rule with fewer points. One application we demonstrate is isogeometric analysis, where a NURBS FEM space is defined on patches consisting of many elements. Setup times are greatly accelerated, by using patch-wise integration with sum factorization and reduced quadrature rules constructed on patches. Another area of application is reduced order models (ROM), where the FEM system is restricted to a reduced POD basis formed from training data. 
Instead of hyper-reduction methods such as DEIM, the empirical quadrature procedure (EQP) can be used to accelerate ROM simulations with a sparse quadrature rule in the reduced subspace. We demonstrate this on several benchmark problems in the Laghos miniapp and show that energy conservation is maintained.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#jacob-spainhour-cu-boulder", "text": "", "title": "Jacob Spainhour (CU Boulder)"}, {"location": "videos/#robust-containment-queries-over-collections-of-parametric-curves-via-generalized-winding-numbers", "text": "", "title": "Robust Containment Queries over Collections of Parametric Curves via Generalized Winding Numbers"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_8", "text": "The containment query is an important geometric primitive in many multiphysics applications. For example, when initializing multimaterial Arbitrary Lagrangian-Eulerian (ALE) simulations, we often need to determine whether arbitrary quadrature points from the background mesh are inside or outside the regions associated with each material. However, existing methods require expensive refinement to accurately capture curved regions. At the same time, many methods are wholly incompatible with user-defined geometries that contain geometric and numeric gaps and/or self-intersections. In this work, we develop a containment query for 2D regions defined by rational Bezier curves that operates directly on curved objects. Our method relies on the generalized winding number (GWN), a mathematical construction that can be evaluated for each curve independently, making the derived containment query robust to non-watertightness. We use an adaptive algorithm to compute the GWN field exactly, which permits fast evaluation for points considered \"distant\" to the curve while being numerically stable for points that are arbitrarily close. Overall, this classification scheme greatly expands the types of bounding geometry that can be used directly in shaping applications without the need for otherwise expensive repair techniques. If time permits, we will also discuss our extensions of this idea to 3D shapes defined by parametric surfaces.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#mathias-schmidt-llnl", "text": "", "title": "Mathias Schmidt (LLNL)"}, {"location": "videos/#level-set-topology-optimization-with-pde-generated-conformal-meshes", "text": "", "title": "Level-Set Topology Optimization with PDE Generated Conformal Meshes"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_9", "text": "The promise of topology optimization (TO) is to provide engineers with a systematic computational tool to support the development of optimal designs. A shortcoming of classic density based multi-material TO designs is the nebulous interphase region between materials, which leads to inaccurate response predictions in these very regions. In contrast, designs based on boundary and interface regions, rather than interphase regions, yield accurate response predictions. Level-set based TO is an example of such; however, the analysis of the response often requires repeated mesh generation or non-standard finite element computations. We present a solely PDE-based, level-set topology optimization approach in which geometries are described through the iso-contour of one or multiple level-set fields which are discretized over a mesh. The nodal heights serve as the design parameters. 
The governing field equations are discretized by a conformal discretization over a separate \u201canalysis\u201d mesh. In the optimization, the \u201canalysis\u201d mesh is morphed such that its boundary and interfaces conform with the isocontours of the LS fields. The mesh morphing is performed using the Target-Matrix Optimization Paradigm (TMOP) approach. Our TMOP formulation is a PDE-based mesh morphing operation which aims to improve the interface conformity while preserving mesh quality. Design sensitivities of the optimization cost and constraint functions with respect to all design level-set fields are computed through an adjoint approach which accounts for the mesh morphing process. The proposed analysis and optimization framework is based on MFEM, a free, lightweight, scalable C++ library for finite element methods which supports the optimization of large-scale problems. We investigate the robustness of the proposed optimization methodology by solving two- and three-dimensional multi-material optimization problems involving linear diffusion and elasticity. We discuss the advantages and challenges of our approach with regards to the mesh morphing process. LS regularization techniques are employed to produce a well-behaved mesh morphing problem throughout the optimization. Finally, select aspects and challenges of our approach with respect to parallel computing and processor decomposition are discussed.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#yohann-dudouit-llnl", "text": "", "title": "Yohann Dudouit (LLNL)"}, {"location": "videos/#mitigating-rays-effect-in-phase-space-advection-with-matrix-free-hd-dg-methods", "text": "", "title": "Mitigating Rays-Effect in Phase-Space Advection with Matrix-Free HD DG Methods"}, {"location": "videos/#october-22-24-2024-mfem-workshop-2024_10", "text": "The mitigation of the rays-effect in phase-space advection problems is a critical challenge in deterministic transport simulations, particularly when using traditional methods that struggle with numerical artifacts. In this work, we propose a novel high-dimensional matrix-free discontinuous Galerkin (DG) approach designed to address the rays-effect by fully discretizing phase space, including velocity components, up to six dimensions. This methodology avoids the excessive computational cost associated with Monte Carlo simulations while offering a deterministic alternative that preserves accuracy and scalability. A key component of our approach is the use of advanced coordinate transformations, which optimize the coordinate system to minimize the rays-effect by aligning the coordinate system with the net flux. Our matrix-free formulation minimizes memory usage and improves computational efficiency by avoiding the assembly of large sparse matrices, a critical factor when scaling to high-dimensional problems. 
Numerical experiments demonstrate the effectiveness of this approach in reducing rays-effect artifacts, providing a robust and scalable solution for high-dimensional transport problems.", "title": "October 22-24, 2024 | MFEM Workshop 2024"}, {"location": "videos/#femllnl-seminars", "text": "", "title": "FEM@LLNL Seminars"}, {"location": "videos/#denis-ridzal-sandia-national-laboratories", "text": "", "title": "Denis Ridzal (Sandia National Laboratories)"}, {"location": "videos/#r-adaptive-mesh-optimization-to-enhance-finite-element-basis-compression", "text": "", "title": "R-Adaptive Mesh Optimization to Enhance Finite Element Basis Compression"}, {"location": "videos/#october-15-2024-femllnl-seminar-series", "text": "Modern computing systems are capable of exascale calculations. While these systems continue to grow in processing power, the available system memory has not increased commensurately. A predominant approach to limit the memory usage in large-scale applications is to exploit the abundant processing power and continually recompute many low-level simulation quantities, rather than storing them. However, this approach can adversely impact the throughput of the simulation and diminish the benefits of modern computing architectures. We present two novel contributions to reduce the memory burden while maintaining performance in simulations based on finite element discretizations. The first contribution develops dictionary-based data compression schemes that detect and exploit the structure of the discretization, due to redundancies across the finite element mesh. These schemes are shown to reduce the memory requirements of key computational kernels by more than 99% on meshes with large numbers of nearly identical mesh cells. For applications where this structure does not exist, our second contribution leverages a recently developed augmented Lagrangian sequential quadratic programming algorithm to enable r-adaptive mesh optimization, with the goal of enhancing redundancies in the mesh. Numerical results demonstrate the effectiveness of the proposed methods to detect, exploit and enhance mesh structure on examples inspired by large-scale applications.", "title": "October 15, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#ruben-sevilla-swansea-university", "text": "", "title": "Rub\u00e9n Sevilla (Swansea University)"}, {"location": "videos/#mesh-generation-and-adaptation-using-green-ai", "text": "", "title": "Mesh Generation and Adaptation using Green AI"}, {"location": "videos/#september-17-2024-femllnl-seminar-series", "text": "Most methods used to solve partial differential equations require creating a mesh that represents the model's geometry. Today, unstructured mesh technology is widely used, allowing three-dimensional meshes with hundreds of millions of elements to be generated in just a few minutes. However, when optimising a design, many simulations are needed for different operating conditions and geometric configurations. Creating the best mesh for each setup becomes time-consuming due to the requirement of excessive human intervention and expertise. This talk will cover our recent work on using artificial intelligence to predict near-optimal meshes suitable for simulations. The main idea is to take advantage of the large amount of data that already exists in the industry to improve the selection of a suitable spacing function, including anisotropic spacing. The proposed approach aims to use knowledge from previous simulations to guide the mesh generation process. 
I will assess the proposed method based on the accuracy of the predictions, efficiency, and environmental impact. This includes considering the carbon footprint and energy consumption of the computations required for a parametric CFD analysis under different flow conditions and angles of attack. For transient problems, the use of high order methods provides several advantages due to the low dissipation and dispersion errors associated with these schemes. An attractive approach to simulate these problems is to incorporate degree adaptive schemes to enhance the approximation only where needed. In this talk I will also present our recent work on using artificial intelligence to aid a degree adaptive process.", "title": "September 17, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#esteban-ferrer-and-david-huergo-universidad-politecnica-de-madrid", "text": "", "title": "Esteban Ferrer and David Huergo (Universidad Polit\u00e9cnica de Madrid)"}, {"location": "videos/#new-avenues-in-high-order-fluid-dynamics", "text": "", "title": "New Avenues in High Order Fluid Dynamics"}, {"location": "videos/#september-3-2024-femllnl-seminar-series", "text": "We present the latest developments of our High-Order Spectral Element Solver (HORSES3D), an open source high-order discontinuous Galerkin framework capable of solving a variety of flow applications, including compressible flows (with or without shocks), incompressible flows, various RANS and LES turbulence models, particle dynamics, multiphase flows, and aeroacoustics [ 1 ]. Recent developments allow us to simulate challenging multiphysics including turbulent flows, multiphase flows, and moving bodies, using local h- and p-adaptation. In addition, we present recent work that couples Machine Learning and reinforcement learning techniques with high order simulations.", "title": "September 3, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#patrick-farrell-university-of-oxford", "text": "", "title": "Patrick Farrell (University of Oxford)"}, {"location": "videos/#designing-conservative-and-accurately-dissipative-numerical-integrators-in-time", "text": "", "title": "Designing conservative and accurately dissipative numerical integrators in time"}, {"location": "videos/#july-30-2024-femllnl-seminar-series", "text": "Numerical methods for the simulation of transient systems with structure-preserving properties are known to exhibit greater accuracy and physical reliability, in particular over long durations. These schemes are often built on powerful geometric ideas for broad classes of problems, such as Hamiltonian or reversible systems. However, there remain difficulties in devising higher-order-in-time structure-preserving discretizations for nonlinear problems, and in conserving non-polynomial invariants. In this work we propose a new, general framework for the construction of structure-preserving timesteppers via finite elements in time and the systematic introduction of auxiliary variables. The framework reduces to Gauss methods where those are structure-preserving, but extends to generate arbitrary-order structure-preserving schemes for nonlinear problems, and allows for the construction of schemes that conserve multiple higher-order invariants.
We demonstrate the ideas by devising novel schemes that exactly conserve all known invariants of the Kepler and Kovalevskaya problems, arbitrary-order schemes for the compressible Navier\u2013Stokes equations that conserve mass, momentum, and energy, and provably dissipate entropy, and multi-conservative schemes for the Benjamin-Bona-Mahony equation.", "title": "July 30, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#gonzalo-de-diego-courant-institute", "text": "", "title": "Gonzalo de Diego (Courant Institute)"}, {"location": "videos/#numerical-solvers-for-viscous-contact-problems-in-glaciology", "text": "", "title": "Numerical Solvers for Viscous Contact Problems in Glaciology"}, {"location": "videos/#may-6-2024-femllnl-seminar-series", "text": "Viscous contact problems are time-dependent viscous flow problems where the fluid is in contact with a solid surface from which it can detach and reattach. Over sufficiently long timescales, ice is assumed to flow like a viscous fluid with a nonlinear rheology. Therefore, certain phenomena in glaciology, like the formation of subglacial cavities in the base of an ice sheet or the dynamics of marine ice sheets (continental ice sheets that slide into the ocean and go afloat at a grounding line, detaching from the bedrock), can be modelled as viscous contact problems. In particular, these problems can be described by coupling the Stokes equations with contact boundary conditions to free boundary equations that evolve the ice domain in time. In this talk, I will describe the difficulties that arise when attempting to solve this system numerically and I will introduce a method that is capable of overcoming them.", "title": "May 6, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#nat-trask-university-of-pennsylvania", "text": "", "title": "Nat Trask (University of Pennsylvania)"}, {"location": "videos/#a-data-driven-finite-element-exterior-calculus", "text": "", "title": "A Data Driven Finite Element Exterior Calculus"}, {"location": "videos/#april-2-2024-femllnl-seminar-series", "text": "Despite the recent flurry of work employing machine learning to develop surrogate models to accelerate scientific computation, the \"black-box\" underpinnings of current techniques fail to provide the verification and validation guarantees provided by modern finite element methods. In this talk we present a data-driven finite element exterior calculus for developing reduced-order models of multiphysics systems when the governing equations are either unknown or require closure. The framework employs deep learning architectures typically used for logistic classification to construct a trainable partition of unity which provides notions of control volumes with associated boundary operators. This alternative to a traditional finite element mesh is fully differentiable and allows construction of a discrete de Rham complex with a corresponding Hodge theory. We demonstrate how models may be obtained with the same robustness guarantees as traditional mixed finite element discretization, with deep connections to contemporary techniques in graph neural networks. For applications developing digital twins where surrogates are intended to support real time data assimilation and optimal control, we further develop the framework to support Bayesian optimization of unknown physics on the underlying adjacency matrices of the chain complex. 
By framing the learning of fluxes via an optimal recovery problem with a computationally tractable posterior distribution, we are able to develop models with intrinsic representations of epistemic uncertainty.", "title": "April 2, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#william-moses-university-of-illinois-urbana-champaign", "text": "", "title": "William Moses (University of Illinois Urbana-Champaign)"}, {"location": "videos/#supercharging-programming-through-compiler-technology", "text": "", "title": "Supercharging Programming Through Compiler Technology"}, {"location": "videos/#march-14-2024-femllnl-seminar-series", "text": "The decline of Moore's law and an increasing reliance on computation has led to an explosion of specialized software packages and hardware architectures. While this diversity enables unprecedented flexibility, it also requires domain-experts to learn how to customize programs to efficiently leverage the latest platform-specific API's and data structures, instead of working on their intended problem. Rather than forcing each user to bear this burden, I propose building high-level abstractions within general-purpose compilers that enable fast, portable, and composable programs to be automatically generated. This talk will demonstrate this approach through compilers that I built for two domains: automatic differentiation and parallelism. These domains are critical to both scientific computing and machine learning, forming the basis of neural network training, uncertainty quantification, and high-performance computing. For example, a researcher hoping to incorporate their climate simulation into a machine learning model must also provide a corresponding derivative simulation. My compiler, Enzyme, automatically generates these derivatives from existing computer programs, without modifying the original application. Moreover, operating within the compiler enables Enzyme to combine differentiation with program optimization, resulting in asymptotically and empirically faster code. Looking forward, this talk will also touch on how this domain-agnostic compiler approach can be applied to new directions, including probabilistic programming.", "title": "March 14, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#sungho-lee-university-of-memphis", "text": "", "title": "Sungho Lee (University of Memphis)"}, {"location": "videos/#laghost-development-of-lagrangian-high-order-solver-for-tectonics", "text": "", "title": "LAGHOST: Development of Lagrangian High-Order Solver for Tectonics"}, {"location": "videos/#march-5-2024-femllnl-seminar-series", "text": "Long-term geological and tectonic processes associated with large deformation highlight the importance of using a moving Lagrangian frame. However, modern advancements in the finite element method, such as MPI parallelization, GPU acceleration, high-order elements, and adaptive grid refinement for tectonics based on this frame, have not been updated. Moreover, the existing solvers available in open access suffer from limited tutorials, a poor user manual, and several dependencies that make model building complex. These limitations can discourage both new users and developers from utilizing and improving these models. As a result, we are motivated to develop a user-friendly, Lagrangian thermo-mechanical numerical model that incorporates visco-elastoplastic rheology to simulate long-term tectonic processes like mountain building, mantle convection and so on. 
We introduce an ongoing project called LAGHOST (Lagrangian High-Order Solver for Tectonics), which is an MFEM-based tectonic solver. LAGHOST expands the capabilities of MFEM's LAGHOS mini-app. Currently, our solver incorporates constitutive equation, body force, mass scaling, dynamic relaxation, Mohr-Coulomb plasticity, plastic softening, Winkler foundation, remeshing, and remapping. To evaluate LAGHOST, we conducted four benchmark tests. The first test involved compressing an elastic box at a constant velocity, while the second test focused on the compaction of a self-weighted elastic column. To enable larger time-step sizes and achieve quasi-static solutions in the benchmarks, we introduced a fictitious density and implemented dynamic relaxation. This involved scaling the density factor and introducing a portion of force component opposing the previous velocity direction at nodal points. Our results exhibited good agreement with analytical solutions. Subsequently, we incorporated Mohr-Coulomb plasticity, a reliable model for predicting rock failure, into LAGHOST. We revisited the elastic box benchmark and considered plastic materials. By considering stress correction arising from plastic yielding, we confirmed that the updated solution from elastic guess aligned with the analytical solution. Furthermore, we applied LAGHOST to simulate the evolution of a normal fault, a significant tectonic phenomenon. To model normal fault evolution, we introduced strain softening on cohesion as the dominant factor based on geological evidence. Our simulations successfully captured the normal fault's evolution, with plastic strain localizing at shallow depths before propagating deeper. The fault angle reached approximately 60 degrees, in line with the Mohr-Coulomb failure theory.", "title": "March 5, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#kevin-chung-llnl", "text": "", "title": "Kevin Chung (LLNL)"}, {"location": "videos/#data-driven-dg-fem-via-reduced-order-modeling-and-domain-decomposition", "text": "", "title": "Data-Driven DG FEM Via Reduced Order Modeling and Domain Decomposition"}, {"location": "videos/#february-6-2024-femllnl-seminar-series", "text": "Numerous cutting-edge scientific technologies originate at the laboratory scale, but transitioning them to practical industry applications can be a formidable challenge. Traditional pilot projects at intermediate scales are costly and time-consuming. Alternatives such as E-pilots can rely on high-fidelity numerical simulations, but even these simulations can be computationally prohibitive at larger scales. To overcome these limitations, we propose a scalable, component reduced order model (CROM) method. We employ Discontinuous Galerkin Domain Decomposition (DG-DD) to decompose the physics governing equation for a large-scale system into repeated small-scale unit components. Critical physics modes are identified via proper orthogonal decomposition (POD) from small-scale unit component samples. The computationally expensive, high-fidelity discretization of the physics governing equation is then projected onto these modes to create a reduced order model (ROM) that retains essential physics details. The combination of DG-DD and POD enables ROMs to be used as building blocks comprised of unit components and interfaces, which can then be used to construct a global large-scale ROM without data at such large scales. 
This method is demonstrated on the Poisson and Stokes flow equations, showing that it can solve equations about 15\u221240 times faster with only \u223c 1% relative error, even at scales 1000 times larger than the unit components. This research is ongoing, with efforts to apply these methods to more complex physics such as Navier-Stokes equation, highlighting their potential for transitioning laboratory-scale technologies to practical industrial use.", "title": "February 6, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#brian-young", "text": "", "title": "Brian Young"}, {"location": "videos/#a-full-wave-electromagnetic-simulator-for-frequency-domain-s-parameter-calculations", "text": "", "title": "A Full-Wave Electromagnetic Simulator for Frequency-Domain S-Parameter Calculations"}, {"location": "videos/#january-9-2024-femllnl-seminar-series", "text": "An open-source and free full-wave electromagnetic simulator is presented that addresses the engineering community\u2019s need for the calculation of frequency-domain S-parameters. Two-dimensional port simulations are used to excite the 3D space and to extract S-parameters using modal projections. Matrix solutions are performed using complex computations. Features enabled by the MFEM library include adaptive mesh refinement, arbitrary order finite elements, and parallel processing using MPI. Implementation details are presented along with sample results and accuracy demonstrations.", "title": "January 9, 2024 | FEM@LLNL Seminar Series"}, {"location": "videos/#jesse-chan-rice-university", "text": "", "title": "Jesse Chan (Rice University)"}, {"location": "videos/#high-order-positivity-preserving-entropy-stable-discontinuous-galerkin-discretizations", "text": "", "title": "High order positivity-preserving entropy stable discontinuous Galerkin discretizations"}, {"location": "videos/#december-5-2023-femllnl-seminar-series", "text": "High order discontinuous Galerkin (DG) methods provide high order accuracy and geometric flexibility, but are known to be unstable when applied to nonlinear conservation laws whose solutions exhibit shocks and under-resolved solution features. Entropy stable schemes improve robustness by ensuring that physically relevant solutions satisfy a semi-discrete cell entropy inequality independently of numerical resolution and solution regularization while retaining formal high order accuracy. In this talk, we will review the construction of entropy stable high order discontinuous Galerkin methods and describe approaches for enforcing that solutions are \"physically relevant\" (i.e., the thermodynamic variables remain positive).", "title": "December 5, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#youngsoo-choi-llnl", "text": "", "title": "Youngsoo Choi (LLNL)"}, {"location": "videos/#physics-guided-interpretable-data-driven-simulations", "text": "", "title": "Physics-guided interpretable data-driven simulations"}, {"location": "videos/#november-14-2023-femllnl-seminar-series", "text": "A computationally demanding physical simulation often presents a significant impediment to scientific and technological progress. Fortunately, recent advancements in machine learning (ML) and artificial intelligence have given rise to data-driven methods that can expedite these simulations. For instance, a well-trained 2D convolutional deep neural network can provide a 100,000-fold acceleration in solving complex problems like Richtmyer-Meshkov instability [ 1 ]. 
However, conventional black-box ML models lack the integration of fundamental physics principles, such as the conservation of mass, momentum, and energy. Consequently, they often run afoul of critical physical laws, raising concerns among physicists. These models attempt to compensate for the absence of physics information by relying on vast amounts of data. Additionally, they suffer from various drawbacks, including a lack of structure-preservation, computationally intensive training phases, reduced interpretability, and susceptibility to extrapolation issues. To address these shortcomings, we propose an approach that incorporates physics into the data-driven framework. This integration occurs at different stages of the modeling process, including the sampling and model-building phases. A physics-informed greedy sampling procedure minimizes the necessary training data while maintaining target accuracy [ 2 ]. A physics-guided data-driven model not only preserves the underlying physical structure more effectively but also demonstrates greater robustness in extrapolation compared to traditional black-box ML models. We will showcase numerical results in areas such as hydrodynamics [ 3 , 4 ], particle transport [ 5 ], plasma physics, pore-collapse, and 3D printing to highlight the efficacy of these data-driven approaches. The advantages of these methods will also become apparent in multi-query decision-making applications, such as design optimization [ 6 , 7 ].", "title": "November 14, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#ben-southworth-los-alamos-national-laboratory", "text": "", "title": "Ben Southworth (Los Alamos National Laboratory)"}, {"location": "videos/#superior-discretizations-and-amg-solvers-for-extremely-anisotropic-diffusion-via-hyperbolic-operators", "text": "", "title": "Superior discretizations and AMG solvers for extremely anisotropic diffusion via hyperbolic operators"}, {"location": "videos/#october-17-2023-femllnl-seminar-series", "text": "Heat conduction in magnetic confinement fusion can reach anisotropy ratios of 10^9-10^10, and in complex problems the direction of anisotropy may not be aligned with (or is impossible to align with) the spatial mesh. Such problems pose major challenges for both discretization accuracy and efficient implicit linear solvers. Although the underlying problem is elliptic or parabolic in nature, we argue that the problem is better approached from the perspective of hyperbolic operators. The problem is posed in a directional gradient first order formulation, introducing a directional heat flux along magnetic field lines as an auxiliary variable. We then develop novel continuous and discontinuous discretizations of the mixed system, using stabilization techniques developed for hyperbolic problems. The resulting block matrix system is then reordered so that the advective operators are on the diagonal, and the system is solved using AMG based on approximate ideal restriction (AIR), which is particularly efficient for upwind discretizations of advection. 
Compared with traditional discretizations and AMG solvers, we achieve orders of magnitude reduction in error and AMG iterations in the extremely anisotropic regime.", "title": "October 17, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#natasha-sharma-university-of-texas-at-el-paso", "text": "", "title": "Natasha Sharma (University of Texas at El Paso)"}, {"location": "videos/#a-continuous-interior-penalty-method-framework-for-sixth-order-cahn-hilliard-type-equations-with-applications-to-microstructure-evolution-and-microemulsions", "text": "", "title": "A Continuous Interior Penalty Method Framework for Sixth Order Cahn-Hilliard-type Equations with applications to microstructure evolution and microemulsions"}, {"location": "videos/#july-18-2023-femllnl-seminar-series", "text": "The focus of this talk is on presenting unconditionally stable, uniquely solvable, and convergent numerical methods to solve two classes of the sixth-order Cahn-Hilliard-type equations. The first class arises as the so-called phase field crystal atomistic model of crystal growth, which has been employed to simulate a number of physical phenomena such as crystal growth in a supercooled liquid, crack propagation in ductile material, dendritic and eutectic solidification. The second class, henceforth referred to as Microemulsion systems (ME systems) appears as a model capturing the dynamics of phase transitions in ternary oil-water-surfactant systems in which three phases namely a microemulsion, almost pure oil, and almost pure water can coexist in equilibrium. ME systems have several applications ranging from enhanced oil recovery to the development of environmentally friendly solvents and drug delivery systems. Despite the widespread applications of these models, the major challenge impeding their use has been and continues to be a lack of understanding of the complex systems which they model. Thus, building computational models for these systems is crucial to the understanding of these systems. The presence of the higher order derivative in combination with a time-dependent process poses many challenges to the creation of stable, convergent, and efficient numerical methods approximating solutions to these equations. In this talk, we present a continuous interior penalty Galerkin framework for solving these equations and theoretically establish the desirable properties of stability, unique solvability, and first-order convergence. We close the talk by presenting the numerical results of some benchmark problems to verify the practical performance of the proposed approach and discuss some exciting current and future applications.", "title": "July 18, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#freddie-witherden-texas-am-university", "text": "", "title": "Freddie Witherden (Texas A&M University)"}, {"location": "videos/#fsspmdm-accelerating-small-sparse-matrix-multiplications-by-run-time-code-generation", "text": "", "title": "FSSpMDM \u2014 Accelerating Small Sparse Matrix Multiplications by Run-Time Code Generation"}, {"location": "videos/#june-20-2023-femllnl-seminar-series", "text": "Small matrix multiplications are a key building block of modern high-order finite element method solvers. Such multiplications describe the act of applying a specific finite element operator onto a set of state vectors. The small and irregular size of these multiplications makes them poor candidates for generic matrix multiplication routines. 
Moreover, for elements with a tensor product construction, the operators themselves can exhibit a significant degree of sparsity. In this talk, I will describe the code generation strategies employed by our Fixed Size Sparse Matrix-Dense Matrix (FSSpMDM) routine in libxsmm and show how these result in performant operator kernels for prismatic and hexahedral elements. Strategies will be described for both x86-64 (AVX2/AVX-512) and AARCH64 (NEON/SVE) instruction sets. Results will be presented on recent Intel and Apple CPUs and compared against the well-known GiMMiK C code generation library.", "title": "June 20, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#frank-giraldo-naval-postgraduate-school", "text": "", "title": "Frank Giraldo (Naval Postgraduate School)"}, {"location": "videos/#using-high-order-element-based-galerkin-methods-to-capture-hurricane-intensification", "text": "", "title": "Using High-Order Element-Based Galerkin Methods to Capture Hurricane Intensification"}, {"location": "videos/#may-16-2023-femllnl-seminar-series", "text": "Properly capturing hurricane rapid intensification (where the winds increase by 30 knots in the first 24 hours) remains challenging for atmospheric models. The reason is that we need LES-type scales \ud835\udcaa(100m), which are still elusive due to computational cost. In this talk, I describe the work that we are doing in this area and how element-based Galerkin Methods are being used to approximate spatial derivatives. I will also discuss the time-integration strategy that we are exploring for this class of problems. In particular, we are exploring process Multirate methods whereby each process in a system of nonlinear partial differential equations (PDEs) uses a time-integrator and time-step commensurate with the wave-speed of that process. We have constructed Multirate methods of any order using extrapolation methods. Along this same idea, we have also developed a multi-modeling framework (MMF) designed to replace the physical parameterizations used in weather/climate models. Our approach is to view the coarse-scale and fine-scale models through the lens of Variational Multi-Scale (VMS) methods in order to give MMF a more rigorous mathematical foundation. Our end goal is to use MMF in order to better resolve the inner core of hurricanes. In addition, I will show some results using flux differencing discontinuous Galerkin Methods for constructing both Kinetic Energy Preserving and Entropy Stable methods and discuss why we need scalable models in order to achieve our goals. Our model, NUMA, is a 3D nonhydrostatic atmospheric model that runs on large CPU clusters and on GPUs.", "title": "May 16, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#leszek-f-demkowicz-university-of-texas-at-austin", "text": "", "title": "Leszek F. Demkowicz (University of Texas at Austin)"}, {"location": "videos/#full-envelope-dpg-approximation-for-electromagnetic-waveguides-stability-and-convergence-analysis", "text": "", "title": "Full Envelope DPG Approximation for Electromagnetic Waveguides. Stability and Convergence Analysis"}, {"location": "videos/#april-25-2023-femllnl-seminar-series", "text": "The presented work started with a convergence and stability analysis for the so-called full envelope approximation used in analyzing optical amplifiers (lasers). The specific problem of interest was the simulation of Transverse Mode Instabilities (TMI).
The problem translates into the solution of a system of two nonlinear time-harmonic Maxwell equations coupled with a transient heat equation. Simulation of a 1 m long fiber involves the resolution of 10 M wavelengths. A superefficient MPI/openMP hp FE code run on a supercomputer gets you to the range of ten thousand wavelengths. The resolution of the additional thousand wavelengths is done using an exponential ansatz e^{ikz} in the z-coordinate. This results in a non-standard Maxwell problem. The stability and convergence analysis for the problem has been restricted to the linear case only. It turns out that the modified Maxwell problem is stable if and only if the original waveguide problem is stable and the boundedness below stability constants are identical. We have converged to the task of determining the boundedness below constant. The stability analysis started with an easier, acoustic waveguide problem. Separation of variables leads to an eigenproblem for a self-adjoint operator in the transverse plane (in x,y). Expansion of the solution in terms of the corresponding eigenvectors leads then to a decoupled system of ODEs, and a stability analysis for a two-point BVP for an ODE parametrized with the corresponding eigenvalues. The L^2-orthogonality of the eigenmodes and the stability result for a single mode, lead then to the final result: the inverse boundedness below constant depends inversely linearly upon the length L of the waveguide. The corresponding stability for the Maxwell waveguide turned out to be unexpectedly difficult. The equation is vector-valued so a direct separation of variables is out to begin with. An exponential ansatz in z leads to a non-standard eigenproblem involving an operator that is non-self adjoint even for the easiest, homogeneous case. The answer to the problem came from a tricky analysis of the eigenproblem combined with the perturbation technique for perturbed self-adjoint operators. The use of perturbation theory requires an assumption on the smallness of perturbation of the dielectric constant (around a constant value) but with no additional assumptions on its differentiability (discontinuities are allowed). In the end, the final result is similar to that for the acoustic waveguide - the boundedness below constant depends inversely linearly on L.", "title": "April 25, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#joachim-schoberl-vienna-university-of-technology", "text": "", "title": "Joachim Sch\u00f6berl (Vienna University of Technology)"}, {"location": "videos/#the-netgenngsolve-finite-element-software", "text": "", "title": "The Netgen/NGSolve Finite Element Software"}, {"location": "videos/#march-28-2023-femllnl-seminar-series", "text": "In this talk we give an overview of the open source finite element software Netgen/NGSolve, available from www.ngsolve.org . We show how to setup various physical models using FEniCS-like Python scripting. We discuss how we use NGSolve for teaching finite element methods, and how recent research projects have contributed to the further development of the NGSolve software. Some recent highlights are matrix-valued finite elements with applications in elasticity, fluid dynamics, and numerical relativity. 
We show how the recently extended framework of linear operators allows the utilization of GPUs for linear solvers, as well as time-dependent problems.", "title": "March 28, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#vikram-gavini-university-of-michigan", "text": "", "title": "Vikram Gavini (University of Michigan)"}, {"location": "videos/#fast-accurate-and-large-scale-ab-initio-calculations-for-materials-modeling", "text": "", "title": "Fast, Accurate and Large-scale Ab-initio Calculations for Materials Modeling"}, {"location": "videos/#march-7-2023-femllnl-seminar-series", "text": "Electronic structure calculations, especially those using density functional theory (DFT), have been very useful in understanding and predicting a wide range of materials properties. The importance of DFT calculations to engineering and physical sciences is evident from the fact that ~20% of computational resources on some of the world's largest public supercomputers are devoted to DFT calculations. Despite the wide adoption of DFT, the state-of-the-art implementations of DFT suffer from cell-size and geometry limitations, with the widely used codes in solid state physics being limited to periodic geometries and typical simulation domains containing a few hundred atoms. This talk will present our recent advances towards the development of computational methods and numerical algorithms for conducting fast and accurate large-scale DFT calculations using adaptive finite-element discretization, which form the basis for the recently released DFT-FE open-source code . Details of the implementation, including mixed precision algorithms and asynchronous computing, will be presented. The computational efficiency, scalability and performance of DFT-FE will be presented, which demonstrates a significant outperformance of widely used plane-wave DFT codes.", "title": "March 7, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#stefan-henneking-university-of-texas-at-austin", "text": "", "title": "Stefan Henneking (University of Texas at Austin)"}, {"location": "videos/#bayesian-inversion-of-an-acoustic-gravity-model-for-predictive-tsunami-simulation", "text": "", "title": "Bayesian Inversion of an Acoustic-Gravity Model for Predictive Tsunami Simulation"}, {"location": "videos/#january-10-2023-femllnl-seminar-series", "text": "To improve tsunami preparedness, early-alert systems and real-time monitoring are essential. We use a novel approach for predictive tsunami modeling within the Bayesian inversion framework. This effort focuses on informing the immediate response to an occurring tsunami event using near-field data observation. Our forward model is based on a coupled acoustic-gravity model (e.g., Lotto and Dunham, Comput Geosci (2015) 19:327\u2014340). Similar to other tsunami models, our forward model relies on transient boundary data describing the location and magnitude of the seafloor deformation. In a real-time scenario, these parameter fields must be inferred from a variety of measurements, including observations from pressure gauges mounted on the seafloor. One particular difficulty of this inference problem lies in the accurate inversion from sparse pressure data recorded in the near-field where strong hydroacoustic waves propagate in the compressible ocean; these acoustic waves complicate the task of estimating the hydrostatic pressure changes related to the forming surface gravity wave. Our space-time model is discretized with finite elements in space and finite differences in time. 
The forward model incurs a high computational complexity, since the pressure waves must be resolved in the 3D compressible ocean over a sufficiently long time span. Due to the infeasibility of rapidly solving the corresponding inverse problem for the fully discretized space-time operator, we discuss approaches for using compact representations of the parameter-to-observable map.", "title": "January 10, 2023 | FEM@LLNL Seminar Series"}, {"location": "videos/#lin-mu-university-of-georgia", "text": "", "title": "Lin Mu (University of Georgia)"}, {"location": "videos/#an-efficient-and-effective-fem-solver-for-diffusion-equation-with-strong-anisotropy", "text": "", "title": "An Efficient and Effective FEM Solver for Diffusion Equation with Strong Anisotropy"}, {"location": "videos/#december-13-2022-femllnl-seminar-series", "text": "The diffusion equation with strong anisotropy has broad applications. In this project, we discuss the numerical solution of diffusion equations with strong anisotropy on meshes not aligned with the anisotropic vector field, focusing on application to magnetic confinement fusion. In order to resolve the numerical pollution for simulations on a non-anisotropy-aligned mesh and reduce the associated high computational cost, we developed a high-order discontinuous Galerkin scheme with an efficient preconditioner. The auxiliary space preconditioning framework is designed by employing a continuous finite element space as the auxiliary space for the discontinuous finite element space. An effective line smoother that can mitigate the high-frequency error perpendicular to the magnetic field has been designed by a graph-based approach to pick the line smoother that is approximately perpendicular to the vector fields when the mesh does not align with anisotropy. Numerical experiments for several benchmark problems are presented to validate the effectiveness and robustness.", "title": "December 13, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#garth-wells-university-of-cambridge", "text": "", "title": "Garth Wells (University of Cambridge)"}, {"location": "videos/#fenicsx-design-of-the-next-generation-fenics-libraries-for-finite-element-methods", "text": "", "title": "FEniCSx: design of the next generation FEniCS libraries for finite element methods"}, {"location": "videos/#november-8-2022-femllnl-seminar-series", "text": "The FEniCS Project provides libraries for solving partial differential equations using the finite element method. An aim of the FEniCS Project has been to provide high-performance solver environments that closely mirror mathematical syntax, with the hypothesis that high-level representations mean that solvers are faster to write, easier to debug, and can deliver faster runtime performance than is reasonably possible by hand. Using domain-specific languages and code generation techniques, arguably the FEniCS libraries delivered on these goals for a set of problems. However, over time, limitations, including performance and extensibility, became clear and maintainability/sustainability became an issue. Building on experiences from the FEniCS libraries, I will present and discuss the design of the next generation of tools, FEniCSx. The new design retains strengths of the past approach, and addresses limitations using new designs and new tools. Solvers can be written in C++ or Python, and a number of design changes allow the creation of flexible, fast solvers in Python.
In the second part of my presentation, I will discuss high-performance finite element kernels (limited to CPUs on this occasion), motivated by the Center for Efficient Exascale Discretizations 'bake-off' problems, and which would not have been possible in the original FEniCS libraries. Double, single and half-precision kernels are considered, and results include (i) the observation that kernels with vector intrinsics can be slower than auto-vectorised kernels for common cases, and (ii) a cache-aware performance model which is remarkably accurate in predicting performance across architectures.", "title": "November 8, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#dennis-ogiermann-university-of-bochum", "text": "", "title": "Dennis Ogiermann (University of Bochum)"}, {"location": "videos/#computing-meets-cardiology-making-heart-simulations-fast-and-accurate", "text": "", "title": "Computing Meets Cardiology: Making Heart Simulations Fast and Accurate"}, {"location": "videos/#september-13-2022-femllnl-seminar-series", "text": "Heart diseases are a ubiquitous societal burden responsible for a majority of deaths worldwide. A central problem in developing effective treatments for heart diseases is the inherent complexity of the heart as an organ. From a modeling perspective, the heart can be interpreted as a biological pump involving multiple physical fields, namely fluid and solid mechanics, as well as chemistry and electricity, all interacting on different time scales. This multiphysics and multiscale aspect makes simulations inherently expensive, especially when approached with naive numerical techniques. However, computational models can be extraordinarily useful in helping us understand how the healthy heart functions and especially how malfunctions influence different diseases. In this context, information about possible weaknesses of therapies can also be obtained to ultimately improve clinical treatment and decision support. In this talk, we will focus primarily on two important model classes in computational cardiology and their respective efficient numerical treatment without significantly compromising accuracy. The first class is the problem of computing electrocardiograms (ECG) from electrical heart simulations. Since ECG measurements can give insights into a wide range of heart diseases, they offer suitable data to validate our electrophysiological models and verify our numerical schemes at the organ scale. Known numerical issues, arising in the context of electrophysiological models, will be reviewed. The second class addresses bidirectionally coupled electromechanical models and their efficient numerical treatment. Focus will be on a unified space-time adaptive operator splitting framework developed on top of MFEM, which has so far proven highly efficient for the investigated model classes while still preserving high accuracy.", "title": "September 13, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#ricardo-vinuesa-kth", "text": "", "title": "Ricardo Vinuesa (KTH)"}, {"location": "videos/#modeling-and-controlling-turbulent-flows-through-deep-learning", "text": "", "title": "Modeling and Controlling Turbulent Flows through Deep Learning"}, {"location": "videos/#august-23-2022-femllnl-seminar-series", "text": "The advent of new powerful deep neural networks (DNNs) has fostered their application in a wide range of research areas, including more recently in fluid mechanics.
In this presentation, we will cover some of the fundamentals of deep learning applied to computational fluid dynamics (CFD). Furthermore, we explore the capabilities of DNNs to perform various predictions in turbulent flows: we will use convolutional neural networks (CNNs) for non-intrusive sensing, i.e. to predict the flow in a turbulent open channel based on quantities measured at the wall. We show that it is possible to obtain very good flow predictions, outperforming traditional linear models, and we showcase the potential of transfer learning between friction Reynolds numbers of 180 and 550. We also discuss other modelling methods based on autoencoders (AEs) and generative adversarial networks (GANs), and we present results of deep-reinforcement-learning-based flow control.", "title": "August 23, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#jeffrey-banks-rpi", "text": "", "title": "Jeffrey Banks (RPI)"}, {"location": "videos/#efficient-techniques-for-fluid-structure-interaction-compatibility-coupling-and-galerkin-differences", "text": "", "title": "Efficient Techniques for Fluid Structure Interaction: Compatibility Coupling and Galerkin Differences"}, {"location": "videos/#july-26-2022-femllnl-seminar-series", "text": "Predictive simulation increasingly involves the dynamics of complex systems with multiple interacting physical processes. In designing simulation tools for these problems, both the formulation of individual constituent solvers, as well as coupling of such solvers into a cohesive simulation tool must be addressed. In this talk, I discuss both of these aspects in the context of fluid-structure interaction, where we have recently developed a new class of stable and accurate partitioned solvers that overcome added-mass instability through the use of so-called compatibility boundary conditions. Here I will present partitioned coupling strategies for incompressible FSI. One interesting aspect of CBC-based coupling is the occurrence of nonstandard and/or high-derivative operators, which can make adoption of the techniques challenging, e.g. in the context of FEM methods. To address this, I will also discuss our newly developed Galerkin Difference approximations, which may provide a natural pathway for CBCs in an FEM context. Although GD is fundamentally a finite element approximation based on a Galerkin projection, the underlying GD space is nonstandard and is derived using profitable ideas from the finite difference literature. The resulting schemes possess remarkable properties including nodal superconvergence and the ability to use large CFL-one time steps. I will also present preliminary results for GD discretizations on unstructured grids using MFEM.", "title": "July 26, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#paul-fischer-uiucanl", "text": "", "title": "Paul Fischer (UIUC/ANL)"}, {"location": "videos/#outlook-for-exascale-fluid-dynamics-simulations", "text": "", "title": "Outlook for Exascale Fluid Dynamics Simulations"}, {"location": "videos/#june-21-2022-femllnl-seminar-series", "text": "We consider design, development, and use of simulation software for exascale computing, with a particular emphasis on fluid dynamics simulation. Our perspective is through the lens of the high-order code Nek5000/RS, which has been developed under DOE's Center for Efficient Exascale Discretizations (CEED). 
Nek5000/RS is an open source thermal fluids simulation code with a long development history on leadership computing platforms\u2014it was the first commercial software on distributed memory platforms and a Gordon Bell Prize winner on Intel's ASCI Red. There are a myriad of objectives that drive software design choices in HPC, such as scalability, low memory footprint, portability, and maintainability. Throughout, our design objective has been to address the needs of the user, including facilitating data analysis and ensuring flexibility with respect to platform and number of processors that can be used. When running on large-scale HPC platforms, three of the most common user questions are: How long will my job take? How many nodes will be required? Is there anything I can do to make my job run faster? Additionally, one might have concerns about storage, post-processing (Will I be able to analyze the results? Where?), and queue times. This talk will seek to answer several of these questions over a broad range of fluid-thermal problems from the perspective of a Nek5000/RS user. We specifically address performance with data for NekRS on several of the DOE's pre-exascale architectures, which feature AMD MI250X or NVIDIA V100 or A100 GPUs.", "title": "June 21, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#mike-puso-llnl", "text": "", "title": "Mike Puso (LLNL)"}, {"location": "videos/#topics-in-immersed-boundary-and-contact-methods-current-llnl-projects-and-research", "text": "", "title": "Topics in Immersed Boundary and Contact Methods: Current LLNL Projects and Research"}, {"location": "videos/#may-24-2022-femllnl-seminar-series", "text": "Many of the most interesting phenomena in solid mechanics occur at material interfaces. This can be in the form of fluid structure interaction, cracks, material discontinuities, impact, etc. Solutions to these problems often require some form of immersed/embedded boundary method or contact, or a combination of both. This talk will provide a brief overview of different lab efforts in these areas and present some of the current research aspects and results from LLNL production codes. Technically speaking, the methods discussed here all require Lagrange multipliers to satisfy the constraints on the interface of overlapping or dissimilar meshes, which complicates the solution. Stability and consistency of Lagrange multiplier approaches can be hard to achieve both in space and time. For example, the wrong choice of multiplier space will either over-constrain the system and/or cause oscillations at the material interfaces for simple statics problems. For dynamics, many of the basic time integration schemes such as Newmark's method are known to be unstable due to gaps opening and closing. Here we introduce some (non-Nitsche) stabilized multiplier spaces for immersed boundary and contact problems and a structure-preserving time integration scheme for long-time dynamic contact problems.
Finally, I will describe some on-going efforts extending this work.", "title": "May 24, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#robert-chiodi-uiuc", "text": "", "title": "Robert Chiodi (UIUC)"}, {"location": "videos/#chyps-an-mfem-based-material-response-solver-for-hypersonic-thermal-protection-systems", "text": "", "title": "CHyPS: An MFEM-Based Material Response Solver for Hypersonic Thermal Protection Systems"}, {"location": "videos/#april-16-2022-femllnl-seminar-series", "text": "The University of Illinois at Urbana-Champaign\u2019s Center for Hypersonics and Entry Systems Studies has developed a material response solver, named CHyPS, to predict the behavior of thermal protection systems for hypersonic flight. CHyPS uses MFEM to provide the underlying discontinuous Galerkin spatial discretization and linear solvers used to solve the equations. In this talk, we will briefly present the physics and corresponding equations governing material response in hypersonic environments. We will also include a discussion on the implementation of a direct Arbitrary Lagrangian-Eulerian approach to handle mesh movement resulting from the ablation of the material surface. Results for standard community test cases developed at a series of Ablation Workshop meetings over the past decade will be presented and compared to other material response solvers. We will also show the potential of high-order solutions for simulating thermal protection system material response.", "title": "April 16, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#tamas-horvath-oakland-university", "text": "", "title": "Tamas Horvath (Oakland University)"}, {"location": "videos/#space-time-hybridizable-discontinuous-galerkin-with-mfem", "text": "", "title": "Space-Time Hybridizable Discontinuous Galerkin with MFEM"}, {"location": "videos/#march-29-2022-femllnl-seminar-series", "text": "Unsteady partial differential equations on deforming domains appear in many real-life scenarios, such as wind turbines, helicopter rotors, car wheels, free-surface flows, etc. We will focus on the space-time finite element method, which is an excellent approach to discretize problems on evolving domains. This method uses discontinuous Galerkin to discretize both in the spatial and temporal directions, allowing for an arbitrarily high-order approximation in space and time. Furthermore, this method automatically satisfies the geometric conservation law, which is essential for accurate solutions on time-dependent domains. The biggest criticism is that the application of space-time discretization increases the computational complexity significantly. To overcome this, we can use the high-order accurate Hybridizable or Embedded discontinuous Galerkin method. Numerical results will be presented to illustrate the applicability of the method for fluid flow around rigid bodies.", "title": "March 29, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#tobin-isaac-georgia-tech", "text": "", "title": "Tobin Isaac (Georgia Tech)"}, {"location": "videos/#unifying-the-analysis-of-geometric-decomposition-in-feec", "text": "", "title": "Unifying the Analysis of Geometric Decomposition in FEEC"}, {"location": "videos/#march-22-2022-femllnl-seminar-series", "text": "Two operations take function spaces and make them suitable for finite element computations. 
The first is the construction of trace-free subspaces (which creates \"bubble\" functions) and the second is the extension of functions from cell boundaries into cell interiors (which creates edge functions with the correct continuity): together these operations define the geometric decomposition of a function space. In finite element exterior calculus (FEEC), these two operations have been treated separately for the two main families of finite elements: full polynomial elements and trimmed polynomial elements. In this talk we will see how one constructor of trace-free functions and one extension operator can be used for both families, and indeed for all differential forms. We will also examine the practicality of these two operators as tools for implementing geometric decompositions in actual finite element codes.", "title": "March 22, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#raphael-zanella-ut-austin", "text": "", "title": "Rapha\u00ebl Zanella (UT Austin)"}, {"location": "videos/#axisymmetric-mfem-based-solvers-for-the-compressible-navier-stokes-equations-and-other-problems", "text": "", "title": "Axisymmetric MFEM-Based Solvers for the Compressible Navier-Stokes Equations and Other Problems"}, {"location": "videos/#march-1-2022-femllnl-seminar-series", "text": "An axisymmetric model leads, when suitable, to a substantial cut in the computational cost with respect to a 3D model. Although not as accurate, the axisymmetric model makes it possible to quickly obtain a result which can be satisfactory. Simple modifications to a 2D finite element solver are sufficient to obtain an axisymmetric solver. We present MFEM-based parallel axisymmetric solvers for different problems. We first present simple axisymmetric solvers for the Laplacian problem and the heat equation. We then present an axisymmetric solver for the compressible Navier-Stokes equations. All solvers are based on H^1-conforming finite element spaces. The correctness of the implementation is verified with convergence tests on manufactured solutions. The Navier-Stokes solver is used to simulate axisymmetric flows with an analytical solution (Poiseuille and Taylor-Couette) and an air flow in a plasma torch geometry.", "title": "March 1, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#robert-carson-llnl", "text": "", "title": "Robert Carson (LLNL)"}, {"location": "videos/#an-overview-of-exaconstit-and-its-use-in-the-exaam-project", "text": "", "title": "An Overview of ExaConstit and Its Use in the ExaAM Project"}, {"location": "videos/#february-1-2022-femllnl-seminar-series", "text": "As additively manufactured (AM) parts become increasingly popular in industry, a growing need exists to help expedite the certification process for parts. The ExaAM project seeks to help this process by producing a workflow to model the AM process from the melt pool all the way up to the part-scale response by leveraging multiple physics codes run on upcoming exascale computing platforms. As part of this workflow, ExaConstit is a next-generation quasi-static, solid mechanics FEM code built upon MFEM and used to connect local microstructures and local properties within the part-scale response. Within this talk, we will first provide an overview of ExaConstit, how we have ported it over to the GPU, and some performance numbers on a number of different platforms. 
Next, we will discuss how we have leveraged MFEM and the FLUX workflow to run hundreds of high-fidelity simulations on Summit in order to generate the local properties needed to drive the part-scale simulation in the ExaAM workflow. Finally, we will showcase a few other areas in which ExaConstit has been used.", "title": "February 1, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#guglielmo-scovazzi-duke", "text": "", "title": "Guglielmo Scovazzi (Duke)"}, {"location": "videos/#the-shifted-boundary-method-an-immersed-approach-for-computational-mechanics", "text": "", "title": "The Shifted Boundary Method: An Immersed Approach for Computational Mechanics"}, {"location": "videos/#january-20-2022-femllnl-seminar-series", "text": "Immersed/embedded/unfitted boundary methods obviate the need for continual re-meshing in many applications involving rapid prototyping and design. Unfortunately, many finite element embedded boundary methods are also difficult to implement due to the need to perform complex cell cutting operations at boundaries, and the consequences that these operations may have on the overall conditioning of the ensuing algebraic problems. We present a new, stable, and simple embedded boundary method, named \u201cshifted boundary method\u201d (SBM), which eliminates the need to perform cell cutting. Boundary conditions are imposed on a surrogate discrete boundary, lying on the interior of the true boundary interface. We then construct appropriate field extension operators, by way of Taylor expansions, with the purpose of preserving accuracy when imposing the boundary conditions. We demonstrate the SBM on large-scale solid and fracture mechanics problems; thermomechanics problems; porous media flow problems; incompressible flow problems governed by the Navier-Stokes equations (also including free surfaces); and problems governed by hyperbolic conservation laws.", "title": "January 20, 2022 | FEM@LLNL Seminar Series"}, {"location": "videos/#mfem-workshop-2023", "text": "", "title": "MFEM Workshop 2023"}, {"location": "videos/#aaron-fisher-llnl_1", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos/#welcome-and-overview_1", "text": "", "title": "Welcome and Overview"}, {"location": "videos/#october-26-2023-mfem-workshop-2023", "text": "Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community resources.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#tzanio-kolev-llnl_1", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos/#the-state-of-mfem_1", "text": "", "title": "The State of MFEM"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_1", "text": "MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities (including adaptive mesh refinement, GPU support, and FEM operator decomposition and partial assembly), examples, and mini-apps. 
Kolev also highlighted the growth of the global community as well as features included in the recent v4.5 software release.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#veselin-dobrev-llnl_1", "text": "", "title": "Veselin Dobrev (LLNL)"}, {"location": "videos/#recent-developments_1", "text": "", "title": "Recent Developments"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_2", "text": "Veselin Dobrev of LLNL detailed the project\u2019s recent developments including sub-mesh extraction, linear form assembly on GPUs, coefficient evaluation on GPUs, new mini-apps and examples, Windows 2022 CI testing on GitHub, and more. He also summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Extreme-scale Scientific Software Development Kit, SciDAC, and the FASTMath Institute.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#sebastian-grimberg-aws", "text": "", "title": "Sebastian Grimberg (AWS)"}, {"location": "videos/#palace-parallel-large-scale-computational-electromagnetics", "text": "", "title": "Palace: PArallel LArge-scale Computational Electromagnetics"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_3", "text": "Palace is a parallel finite element code for full-wave electromagnetics simulations based on the MFEM library. Palace is used at the AWS Center for Quantum Computing to perform large-scale 3D simulations of complex electromagnetics models and enable the design of quantum computing hardware. Grimberg provided an overview of the simulation capabilities of Palace as well as some recent developments for conforming and nonconforming adaptive mesh refinement, operator partial assembly, and GPU support.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#jacob-lotz-delft-university-of-technology", "text": "", "title": "Jacob Lotz (Delft University of Technology)"}, {"location": "videos/#computation-and-reduced-order-modelling-of-periodic-flows", "text": "", "title": "Computation and Reduced Order Modelling of Periodic Flows"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_4", "text": "Many types of periodic flows can be found in nature and industrial applications, and their computation is expensive due to lengthy time simulations. Lotz\u2019s work aims to reduce the cost of these computations. His team solves periodic flows in a space-time domain in which both ends in time are periodic such that they only have to model one period. MFEM is used to discretize the space-time domain and solve the discretized system of equations. Lotz applies a hyper-reduced Proper Orthogonal Decomposition Galerkin reduced order model to speed up the computations. During the presentation he showed results of their full order model and their advances in reduced order modelling.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#boyan-lazarov-llnl", "text": "", "title": "Boyan Lazarov (LLNL)"}, {"location": "videos/#scalable-design-and-optimization-with-mfem", "text": "", "title": "Scalable Design and Optimization with MFEM"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_5", "text": "Lazarov discussed recently added and ongoing code development facilitating the solution of shape and topology optimization problems. Both topology and shape optimization are gradient-based iterative algorithms aiming to find a material distribution that minimizes an objective and fulfills a set of constraints. 
Every optimization step includes a solution to a forward problem, an evaluation of the objective and constraints, a solution to an adjoint problem associated with every objective or constraint, an evaluation of gradients, and an update of the design based on mathematical programming techniques. All these steps can be easily implemented and executed by using MFEM in a scalable manner, allowing the design and optimization of large-scale realistic industrial problems. Thus, the goal is to exemplify these features, highlight the techniques that simplify the implementation of new problems, and provide a glimpse into the future.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#student-lightning-talks", "text": "", "title": "Student Lightning Talks"}, {"location": "videos/#part-1", "text": "", "title": "Part 1"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_6", "text": "The following four students presented in this video: Shani Martinez Weissberg (Tel Aviv University): \u201c\u00b5FEA of a Rabbit Femur\u201d Paul Moujaes (TU-Dortmund): \u201cDissipation-Based Entropy Stabilization for Slope-Limited Discontinuous Galerkin Approximations of Hyperbolic Problems\u201d Alejandro Mu\u00f1oz (Universidad de Granada): \u201cDiscontinuous Galerkin in the Time Domain for Maxwell\u2019s Equations\u201d Bill Ellis (UK Atomic Energy Authority): \u201cComparing Thermo-Mechanical Solves in MOOSE and MFEM\u201d", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#student-lightning-talks_1", "text": "", "title": "Student Lightning Talks"}, {"location": "videos/#part-2", "text": "", "title": "Part 2"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_7", "text": "The following four students presented in this video: Alexander Mote (Oregon State University): \u201cA Neural Network Surrogate Model for Nonlocal Thermal Flux Calculations\u201d (LLNL-PRES-854134) Amit Rotem (Virginia Tech): \u201cGPU Acceleration of IPDG in MFEM\u201d Josiah Brown (Relogic Research): \u201cProject Minerva\u201d Mike Pozulp (UC Berkeley): \u201cAn Implicit Monte Carlo Acceleration Scheme\u201d", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#syunichi-shiraiwa-pppl", "text": "", "title": "Syun'ichi Shiraiwa (PPPL)"}, {"location": "videos/#radio-frequency-wave-simulation-in-hot-magnetized-plasma-using-differential-operator-for-non-local-conductivity-response", "text": "", "title": "Radio-Frequency Wave Simulation in Hot Magnetized Plasma using Differential Operator for Non-Local Conductivity Response"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_8", "text": "In high-temperature plasmas, the dielectric response to the RF fields is caused by freely moving charged particles, which naturally makes such a response non-local; correspondingly, the Maxwell wave problem becomes an integro-differential equation. A differential form of the dielectric operator, based on the small k\u22a5\u03c1 expansion, is widely used. However, it typically includes terms only up to second order, and thus the use of such an operator is limited to waves that satisfy k\u22a5\u03c1 < 1. We propose an alternative approach to construct a dielectric operator, which includes all-order finite Larmor radius effects without explicitly containing higher-order derivatives. We use a rational approximation of the plasma dielectric tensor in the wave number space, in order to yield a differential operator acting on the dielectric current (J). 
The 1D O-X-B mode-conversion of the electron Bernstein wave in the non-relativistic Maxwellian plasma was modeled using this approach. Agreement with analytic calculation and conservation of the wave energy carried by the Poynting flux and electron thermal motion (\u201csloshing\u201d) are found. The connection between our construction method and the superposition of Green\u2019s functions for the screened Poisson equations is presented. An approach to extend the operator to a multi-dimensional setting will also be discussed.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#tamas-horvath-oakland-university_1", "text": "", "title": "Tamas Horvath (Oakland University)"}, {"location": "videos/#implementation-of-hybridizable-discontinuous-galerkin-methods-via-the-hdg-branch", "text": "", "title": "Implementation of Hybridizable Discontinuous Galerkin Methods via the HDG Branch"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_9", "text": "Horvath presented the HDG branch, which was initially developed for HDG discretizations of advection-diffusion problems. Recent updates have made the branch highly adaptable for various applications, allowing a flexible implementation of HDG for many different PDEs. He showcased these enhancements and provided insights into their versatile usage across different problems.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#yohann-dudouit-llnl_1", "text": "", "title": "Yohann Dudouit (LLNL)"}, {"location": "videos/#empowering-mfem-using-libceed", "text": "", "title": "Empowering MFEM Using libCEED"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_10", "text": "Dudouit began with an overview of the features introduced to MFEM through the integration of libCEED. He emphasized capabilities that are distinct from native MFEM functionalities, marking an enhancement in the software\u2019s suite of tools, such as support for simplices, handling of mixed meshes, and support for p-adaptivity. The presentation concluded by showcasing benchmarks for various problems executed on different HPC architectures, illustrating the performance gains and efficiencies achieved through the libCEED integration.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#zhang-chunyu-sun-yat-sen-university", "text": "", "title": "Zhang Chunyu (Sun Yat-Sen University)"}, {"location": "videos/#homogenized-energy-theory-for-solution-of-elasticity-problems-with-consideration-of-higher-order-microscopic-deformations", "text": "", "title": "Homogenized Energy Theory for Solution of Elasticity Problems with Consideration of Higher-Order Microscopic Deformations"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_11", "text": "Classical continuum mechanics faces difficulties in solving problems involving highly inhomogeneous deformations. The proposed theory investigates the impact of high-order microscopic deformation on modeling of material behaviors and provides a refined interpretation of strain gradients through the averaged strain energy density. Only one scale parameter, i.e., the size of the Representative Volume Element (RVE), is required by the proposed theory. By employing the variational approach and the Augmented Lagrangian Method (ALM), the governing equations for deformation as well as the numerical solution procedure are derived. 
It is demonstrated that the homogenized energy theory offers plausible explanations and reasonable predictions for problems not yet solved by the classical theory, such as the size effect of deformation and the stress singularity at the crack tip. The concept of averaged strain energy proves to be more suitable for describing the intricate mechanical behavior of materials. High-order partial differential equations can also be effectively solved by the ALM by introducing supplementary variables to lower the highest order of the equations.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#eric-chin-llnl", "text": "", "title": "Eric Chin (LLNL)"}, {"location": "videos/#contact-constraint-enforcement-using-the-tribol-interface-physics-library", "text": "", "title": "Contact Constraint Enforcement Using the Tribol Interface Physics Library"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_12", "text": "Chin discussed recent additions to the Tribol interface physics library to simplify MPI parallel contact constraint enforcement in large deformation, implicit and explicit continuum solid mechanics simulations using MFEM. Tribol is an open-source software package available on GitHub and includes tools for contact detection, state-of-the-art Lagrangian contact methods such as common plane and mortar, and various enforcement techniques such as penalty and Lagrange multiplier. Additionally, Tribol recently added a domain redecomposer for coalescing proximal contact pairs on a single rank. Tribol\u2019s features are designed to interact seamlessly with MFEM and other codes that use MFEM, with native support for MFEM data structures such as ParMesh, ParGridFunction, and HypreParMatrix. Chin highlighted the simplicity of adding Tribol features to an MFEM-based code by looking at integration with Serac , an open-source implicit nonlinear thermal-structural simulation code.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#milan-holec-llnl", "text": "", "title": "Milan Holec (LLNL)"}, {"location": "videos/#deterministic-transport-mfem-miniapp", "text": "", "title": "Deterministic Transport MFEM-Miniapp"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_13", "text": "Holec introduced a new multidimensional discretization in MFEM enabling efficient high-order phase-space simulations of various types of Boltzmann transport. Based on a generalized form of the standard discrete ordinates SN method for the phase space, his team carefully designs discrete analogs obeying important continuous properties such as conservation of energy, preservation of positivity, preservation of the diffusion limit of transport, preservation of symmetry leading to rays-effect mitigation, and other laws of physics. 
Finally, Holec showed how to apply this new phase-space MFEM feature to increase the fidelity of modeling of fusion energy experiments.", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#aaron-fisher-llnl_2", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos/#wrap-up-and-visualization-contest-winners", "text": "", "title": "Wrap-Up and Visualization Contest Winners"}, {"location": "videos/#october-26-2023-mfem-workshop-2023_14", "text": "The workshop concluded with the announcement of winners of the simulation and visualization contest: (1) displacement distribution of a loaded excavator arm under static equilibrium, rendered by Mehran Ebrahimi from Autodesk Research; and (2) leapfrogging vortex rings based on an MFEM incompressible Schr\u00f6dinger fluid solver, rendered by John Camier from LLNL. Contest winners are featured in the gallery .", "title": "October 26, 2023 | MFEM Workshop 2023"}, {"location": "videos/#conferences-in-2023", "text": "", "title": "Conferences in 2023"}, {"location": "videos/#tzanio-kolev-llnl_2", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos/#pde-simulations-on-unstructured-grids-with-finite-element-discretizations", "text": "", "title": "PDE Simulations on Unstructured Grids with Finite Element Discretizations"}, {"location": "videos/#march-15-2023-ipam-at-ucla", "text": "LLNL computational mathematician Tzanio Kolev presented an overview of MFEM as part of the long program on New Mathematics for the Exascale: Applications to Materials Science at the Institute for Pure and Applied Mathematics.", "title": "March 15, 2023 | IPAM at UCLA"}, {"location": "videos/#mfem-workshop-2022", "text": "", "title": "MFEM Workshop 2022"}, {"location": "videos/#aaron-fisher-llnl_3", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos/#welcome-and-overview_2", "text": "", "title": "Welcome and Overview"}, {"location": "videos/#october-25-2022-mfem-workshop-2022", "text": "Held on October 25, 2022, the second annual MFEM community workshop brought together users and developers for a review of software features and the development roadmap, a showcase of technical talks and applications, an interactive Q&A session, and a visualization contest. Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community survey results.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#tzanio-kolev-llnl_3", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos/#the-state-of-mfem_2", "text": "", "title": "The State of MFEM"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_1", "text": "MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities (including adaptive mesh refinement, GPU support, and FEM operator decomposition and partial assembly), examples, and mini-apps. 
Kolev also highlighted the growth of the global community as well as features included in the recent v4.5 software release.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#veselin-dobrev-llnl_2", "text": "", "title": "Veselin Dobrev (LLNL)"}, {"location": "videos/#recent-developments-in-mfem", "text": "", "title": "Recent Developments in MFEM"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_2", "text": "Veselin Dobrev of LLNL detailed the project\u2019s recent developments including sub-mesh extraction, linear form assembly on GPUs, coefficient evaluation on GPUs, new mini-apps and examples, Windows 2022 CI testing on GitHub, and more. He also summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Extreme-scale Scientific Software Development Kit, SciDAC, and the FASTMath Institute.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#ben-zwick-university-of-western-australia", "text": "", "title": "Ben Zwick (University of Western Australia)"}, {"location": "videos/#solution-of-the-electroencephalography-eeg-forward-problem", "text": "", "title": "Solution of the Electroencephalography (EEG) Forward Problem"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_3", "text": "Ben Zwick of the University of Western Australia presented \"Solution of the Electroencephalography (EEG) Forward Problem.\" The brain's electrical activity can be measured using EEG with electrodes attached to the scalp, or electrocorticography (ECoG), also known as intracranial EEG (iEEG), with electrodes implanted on the brain's surface. EEG source localization combines measurements from EEG or iEEG with data from medical imaging to estimate the location and strengths of the current sources that generated the measured electric potential at the electrodes. Source localization can be used to locate the epileptic zone in pharmaco-resistant focal epilepsies and study evoked related potentials. Accurate source localization requires fast and accurate solutions of the EEG forward problem, which involves calculating the electric potential within the brain volume given a predefined source. This presentation demonstrates how MFEM can be used to solve the EEG forward problem using patient-specific geometry and tissue conductivity obtained from medical images.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#carlos-brito-pacheco-universite-grenoble-alpes", "text": "", "title": "Carlos Brito Pacheco (Universit\u00e9 Grenoble Alpes)"}, {"location": "videos/#rodin-density-and-topology-optimization-framework", "text": "", "title": "Rodin: Density and Topology Optimization Framework"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_4", "text": "Carlos Brito Pacheco of Universit\u00e9 Grenoble Alpes presented \"Rodin: Lightweight and Modern C++17 Shape, Density and Topology Optimization Framework.\" He introduced the shape optimization library Rodin; a lightweight and modular shape optimization framework which provides many of the associated functionalities that are needed when implementing shape and topology optimization algorithms. These functionalities range from refining and remeshing the underlying shape, to providing elegant mechanisms to specify and solve variational problems. 
Learn more about Rodin on GitHub .", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#tobias-duswald-cerntum", "text": "", "title": "Tobias Duswald (CERN/TUM)"}, {"location": "videos/#stochastic-fractional-pdes-random-field-generation-topology-optimization", "text": "", "title": "Stochastic Fractional PDEs: Random Field Generation & Topology Optimization"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_5", "text": "Tobias Duswald of CERN/Technical University of Munich presented \"Stochastic Fractional PDEs with MFEM with Applications to Random Field Generation and Topology Optimization.\" Over the last several centuries, engineers, physicists, and mathematicians have learned how to describe their problems accurately with partial differential equations (PDEs). PDEs govern the laws of continuum mechanics, quantum mechanics, heat transfer, and many other phenomena. More recently, fractional PDEs have gained popularity in the scientific community because they allow for a more general description of complicated systems (e.g., multiphysics) by leveraging a real-valued exponent for the operators. Besides fractional operators, stochastic PDEs have also sparked the community's interest because they generalize the PDE framework to account for randomness appearing in many disciplines. This talk addresses the numerical solution of stochastic, fractional PDEs with MFEM. To deal with these two flavors of PDEs, Duswald introduced MFEM\u2019s WhiteNoiseIntegrator to treat a stochastic linear form and adopt a rational approximation for the fractional operator. He presented results for three different use cases. First, he showed numerical results for the fractional Laplace problem with homogeneous Dirichlet boundary conditions. Second, he generated Mat\u00e9rn-type Gaussian random fields (GRFs) by solving a specific stochastic, fractional PDE using an approach commonly referred to as SPDE method in the spatial statistics literature. Thirdly, he used GRFs to model geometric uncertainties in additive manufacturing processes and apply the model for topology optimization under uncertainty.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#alvaro-sanchez-villar-princeton-plasma-physics-laboratory", "text": "", "title": "Alvaro S\u00e1nchez Villar (Princeton Plasma Physics Laboratory)"}, {"location": "videos/#mfem-application-to-em-wave-simulation-in-ecr-space-plasma-thrusters", "text": "", "title": "MFEM Application to EM-Wave Simulation in ECR Space Plasma Thrusters"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_6", "text": "Alvaro S\u00e1nchez Villar of the Princeton Plasma Physics Laboratory presented \"MFEM Application to EM-Wave Simulation in ECR Space Plasma Thrusters.\" The solution of Maxwell equations using the cold-plasma approximation is shown in the context of the design of electron cyclotron resonance plasma thrusters for space propulsion applications. This thruster class utilizes the electron cyclotron resonance to energize the plasma constituents and to sustain the plasma discharge. MFEM finite element discretization is used to solve for the time-harmonic electromagnetic waves. The shape and magnitude of the electromagnetic power density absorbed by the plasma is coupled to the plasma transport variables, and therefore determines the thruster operation performance parameters. 
Coupled simulations of the electromagnetic-wave and the plasma transport problems are used to interpret thruster operational principles, to understand its sensitivity to operational and design parameters, and are compared to experimental measurements both to assess the accuracy of the current numerical model and to highlight its main limitations.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#brian-young_1", "text": "", "title": "Brian Young"}, {"location": "videos/#openparem2d-a-2d-simulator-for-guided-waves", "text": "", "title": "OpenParEM2D: A 2D Simulator for Guided Waves"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_7", "text": "Independent software developer Brian Young presented \"OpenParEM2D: A Free, Open-Source Electromagnetic Simulator for 2D Waveguides and Transmission Lines.\" An overview is provided of a 2D electromagnetic simulator for guided waves called OpenParEM2D. It is an open-source and free project licensed under GPLv3 or later and released at its website . Capabilities and methodology are presented.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#christina-migliore-mit", "text": "", "title": "Christina Migliore (MIT)"}, {"location": "videos/#the-development-of-the-em-rf-edge-interactions-mini-app-stix-using-mfem", "text": "", "title": "The Development of the EM RF-Edge Interactions Mini-app \u201cStix\u201d Using MFEM"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_8", "text": "Christina Migliore of MIT presented \"The Development of the EM RF-Edge Interactions Mini-App Stix Using MFEM.\" Ion cyclotron radio frequency range (ICRF) power plays an important role in heating and current drive in fusion devices. However, experiments show that in the ICRF regime there is a formation of a radio frequency (RF) sheath at the material and antenna boundaries that influences sputtering and power dissipation. Given the size of the sheath relative to the scale of the device, it can be approximated as a boundary condition (BC). Electromagnetic field solvers in the ICRF regime typically treat material boundaries as perfectly conducting, thus ignoring the effect of the RF sheath. Here, progress is described on implementing a model for the RF sheath based on a finite impedance sheath BC formulated by J. Myra and D. A. D\u2019Ippolito, Physics of Plasmas 22 (2015), which provides a representation of the RF rectified sheath including capacitive and resistive effects. This research will discuss the results from the development of a parallelized cold-plasma wave equation solver, Stix, that implements this non-linear sheath impedance BC through the method of finite elements in pseudo-1D and pseudo-2D using the MFEM library.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#will-pazner-portland-state-university", "text": "", "title": "Will Pazner (Portland State University)"}, {"location": "videos/#high-order-solvers-gpu-acceleration", "text": "", "title": "High-Order Solvers + GPU Acceleration"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_9", "text": "Will Pazner of Portland State University presented \"High-Order Solvers + GPU Acceleration.\" He discussed the benefits of high-order (HO) methods in modeling under-resolved physics and on modern computing architectures, acknowledging that solving HO finite element problems remains challenging. 
His talk included details about how MFEM supports matrix-free solvers for HO methods, HO operator setup and application, low-order-refined (LOR) preconditioning and matrix assembly, LOR assembly throughput on GPUs (including CPU and GPU comparisons and parallel scalability), and LOR adaptive mesh refinement preconditioning.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#jorge-luis-barrera-llnl", "text": "", "title": "Jorge-Luis Barrera (LLNL)"}, {"location": "videos/#shape-and-topology-optimization-powered-by-mfem", "text": "", "title": "Shape and Topology Optimization Powered by MFEM"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_10", "text": "Jorge-Luis Barrera of LLNL presented \"Shape and Topology Optimization Powered by MFEM.\" He discussed the Livermore Design Optimization (LiDO) code, which solves optimization problems for a wide range of Lab-relevant engineering applications. Leveraging MFEM and the LLNL-developed engineering simulation code Serac, LiDO delivers a powerful suite of design tools that run on HPC systems. The talk highlighted several design examples that benefit from LiDO\u2019s integration with MFEM, including multi-material geometries, octet truss lattices, and a concrete dam under stress. LiDO\u2019s graph architecture that seamlessly integrates MFEM features ensures robust topology optimization, as well as shape optimization using nodal coordinates and level set fields as optimization variables.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#siu-wun-cheung-llnl", "text": "", "title": "Siu Wun Cheung (LLNL)"}, {"location": "videos/#reduced-order-modeling-for-fe-simulations-with-mfem-librom", "text": "", "title": "Reduced Order Modeling for FE Simulations with MFEM & libROM"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_11", "text": "Siu Wun Cheung of LLNL presented \"Reduced Order Modeling for Finite Element Simulations Through the Partnership of MFEM and libROM.\" MFEM provides a wide variety of mesh types and high-order finite element discretizations. However, subject to the model complexity and fine resolution of the discretization, the computational cost can be high, requiring a long time to complete a single forward simulation. In this talk, we will introduce various reduced order modeling techniques, which aim to lower the computational complexity and maintain good accuracy, including intrusive projection-based model reduction and non-intrusive approaches. 
We will demonstrate the use of reduced order modeling techniques in libROM (www.librom.net), which can be applied to various MFEM examples, including the Poisson problem, linear elasticity, linear advection, mixed nonlinear diffusion, nonlinear elasticity, nonlinear heat conduction, the Euler equations, and optimal control problems.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#devlin-hayduke-relogic-research", "text": "", "title": "Devlin Hayduke (ReLogic Research)"}, {"location": "videos/#accelerated-deployment-of-mfem-based-solvers-in-large-scale-industrial-problems", "text": "", "title": "Accelerated Deployment of MFEM Based Solvers in Large Scale Industrial Problems"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_12", "text": "Devlin Hayduke of ReLogic Research presented \"Project Minerva: Accelerated Deployment of MFEM Based Solvers in Large Scale Industrial Problems.\" While many Advanced Scientific Computing Research (ASCR) supported software packages are open source, they are often complicated to use, distributed primarily in source-code form targeting HPC systems, and potential adopters lack options for purchasing commercial support, training, and custom-development services. In response to this need, ReLogic Research, Inc., in collaboration with LLNL, is developing a secure, cloud-deployable platform based on the MFEM software, termed Minerva. Minerva will feature an integration layer allowing users of commercially available finite element pre/post-processing software (e.g., Abaqus/CAE, Hypermesh, Femap) for use with the Abaqus solver to run simulation studies with the MFEM discretization library, and will further strengthen MFEM-implemented solvers to make them applicable for solving large-scale industrial design and optimization problems.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#synthetik-applied-technologies", "text": "", "title": "Synthetik Applied Technologies"}, {"location": "videos/#blastfem-gpu-accelerated-high-performance-energy-efficient-solver", "text": "", "title": "blastFEM: GPU-Accelerated, High-Performance, Energy-Efficient Solver"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_13", "text": "Tim Brewer, Ben Shields, Peter Vonk, Jeff Heylmun, and Barlev Raymond of Synthetik Applied Technologies presented \"blastFEM: A GPU-Accelerated, Very High-Performance and Energy-Efficient Solver for Highly Compressible Flows.\" Highly compressible multiphase and reactive flows are important, and manifest across a myriad of practical applications: novel energy production and propulsion methods, building design, safety and energy efficiency, material discovery, and maintenance of our nuclear arsenal. There are, however, few tools available to industry capable of simulating these flows at a resolution and scale suitable to make predictions of adequate detail\u2014at least within reasonable timeframes and budgetary constraints\u2014to inform engineers and designers. A next-generation, highly efficient simulation code is needed that can deliver results within useful timeframes, with sufficient detail to be useful to support simulation-driven design, discovery, and optimization. 
Furthermore, the code must be designed to run on modern and emerging heterogeneous architectures, and it must efficiently leverage these architectures through the use of numerical schemes designed to maximize computational efficiency.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#adolfo-rodriguez-opensim-technology", "text": "", "title": "Adolfo Rodriguez (OpenSim Technology)"}, {"location": "videos/#using-mfem-for-wellbore-stability-analysis", "text": "", "title": "Using MFEM for Wellbore Stability Analysis"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_14", "text": "Adolfo Rodriguez of OpenSim Technology presented \"Using MFEM for Wellbore Stability Analysis.\" He discussed the results from a Department of Energy Small Business Innovation Research project regarding the implementation of wellbore stability analysis for hydrocarbon-producing wells.", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#julian-andrej-llnl", "text": "", "title": "Julian Andrej (LLNL)"}, {"location": "videos/#aws-tutorial", "text": "", "title": "AWS Tutorial"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_15", "text": "In this tutorial, Julian Andrej of LLNL demonstrated how to use MFEM in the cloud (e.g., an Amazon Web Services instance) for scalable finite element discretization application development. Step-by-step instructions for the tutorial can be found on the tutorial page .", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#aaron-fisher-llnl_4", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos/#wrap-up-and-simulation-contest-winners", "text": "", "title": "Wrap-Up and Simulation Contest Winners"}, {"location": "videos/#october-25-2022-mfem-workshop-2022_16", "text": "Aaron Fisher of LLNL concluded the workshop by announcing the winners of the simulation and visualization contest: (1) streamlines of the electric field generated by a current dipole source located in the temporal lobe of an epilepsy patient, rendered by Ben Zwick of the University of Western Australia; (2) a topology-optimized heat sink, rendered by Tobias Duswald of CERN/Technical University of Munich; (3) the magnetic field induced by current running through copper wire in air, rendered by Will Pazner of Portland State University. Contest winners are featured in the MFEM gallery .", "title": "October 25, 2022 | MFEM Workshop 2022"}, {"location": "videos/#conferences-in-2022", "text": "", "title": "Conferences in 2022"}, {"location": "videos/#vladimir-tomov-llnl_1", "text": "", "title": "Vladimir Tomov (LLNL)"}, {"location": "videos/#finite-element-algorithms-and-research-topics-in-ale-hydrodynamics", "text": "", "title": "Finite Element Algorithms and Research Topics in ALE Hydrodynamics"}, {"location": "videos/#november-17-2022-texas-am-university-corpus-christi-department-of-math-statistics", "text": "LLNL computational mathematician Vladimir Tomov discussed high-order finite element methods research, development, and application in the context of shock hydrodynamics simulations. The method is based on an Arbitrary Lagrangian-Eulerian (ALE) formulation consisting of separate Lagrangian, mesh optimization, and remap phases. 
The presentation addressed the following topics: Lagrangian shock hydrodynamics on curved meshes; multi-material closure models; coupling to multigroup radiation diffusion; optimization, r-adaptivity, and surface fitting of high-order meshes; advection-based remap with nonlinear sharpening of material interfaces; synchronization between the max/min bounds of primal and conservative fields during remap; computationally efficient finite element kernels based on partial assembly and sum factorization. The talk also covered the existing methods followed by a discussion about the outstanding research challenges and ongoing work to address them.", "title": "November 17, 2022 | Texas A&M University-Corpus Christi Department of Math & Statistics"}, {"location": "videos/#john-camier-llnl", "text": "", "title": "John Camier (LLNL)"}, {"location": "videos/#all-out-kernel-fusion-reaching-peak-performance-faster-in-high-order-finite-element-simulations", "text": "", "title": "All-Out Kernel Fusion: Reaching Peak Performance Faster in High-Order Finite Element Simulations"}, {"location": "videos/#march-2124-2022-nvidia-gtc22", "text": "LLNL research scientist John Camier described recent improvements of high-order finite element CUDA kernels that can reduce the time-to-solution by a factor of 10. Augmenting traditional compiler representations with a general mathematical description enables a sustainable way to generate optimized kernels, matching the peak performance of hand-tuned CUDA code. Such intermediate graph-based representation provides significant potential for optimization, both in terms of minimizing the number of kernel launches and in reducing the memory bandwidth. Camier also presented results on single and multiple GPUs that demonstrate significant reduction in the local problem size required to reach peak performance, leading to faster time-to-solution in finite element applications.", "title": "March 21\u201324, 2022 | NVIDIA GTC22"}, {"location": "videos/#mfem-workshop-2021", "text": "", "title": "MFEM Workshop 2021"}, {"location": "videos/#aaron-fisher-llnl_5", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos/#wrap-up-and-simulation-contest-winners_1", "text": "", "title": "Wrap-Up and Simulation Contest Winners"}, {"location": "videos/#october-20-2021-mfem-workshop-2021", "text": "MFEM\u2019s first community workshop was held virtually on October 20, 2021, with participants around the world. Aaron Fisher of LLNL concluded the workshop by announcing the results of the simulation and visualization contest. The winners represent two very different research applications using MFEM: (1) the electric field generated by electrocardiogram waves of a rabbit\u2019s heart ventricles, rendered by Dennis Ogiermann of Ruhr-University Bochum (Germany); (2) incompressible fluid flow around a rotating turbine, animated by Tamas Horvath of Oakland University (Michigan). Contest submissions are featured in the gallery .", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#will-pazner-llnl", "text": "", "title": "Will Pazner (LLNL)"}, {"location": "videos/#high-order-matrix-free-solvers", "text": "", "title": "High-Order Matrix-Free Solvers"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_1", "text": "For users unfamiliar with MFEM\u2019s solver library, Will Pazner of LLNL demonstrated a few ways\u2014in some cases adding just a single line of code\u2014to run scalable solvers for differential equations. 
These solvers execute hierarchical finite element discretizations for both low- and high-order problems.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#vladimir-tomov-llnl_2", "text": "", "title": "Vladimir Tomov (LLNL)"}, {"location": "videos/#mfem-capabilities-for-high-order-mesh-optimization", "text": "", "title": "MFEM Capabilities for High-Order Mesh Optimization"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_2", "text": "Vladimir Tomov of LLNL described MFEM\u2019s mesh optimization strategies including ways the user can define target elements. He demonstrated optimizing a mesh\u2019s shape by limiting displacements to preserve a boundary layer and by changing the size of a uniform mesh in a specific region. MFEM\u2019s mesh-optimizing miniapps are available online .", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#william-dawn-ncsu", "text": "", "title": "William Dawn (NCSU)"}, {"location": "videos/#unstructured-finite-element-neutron-transport-using-mfem", "text": "", "title": "Unstructured Finite Element Neutron Transport using MFEM"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_3", "text": "William Dawn from North Carolina State University described his work on unstructured finite element neutron transport. His team models microreactors, a new class of compact reactor with relatively small electrical output. As part of the Exascale Computing Project, Dawn\u2019s team is modeling the MARVEL reactor, which is planned for construction at Idaho National Laboratory. MFEM satisfies their need for a finite element framework with GPU support and rapid prototyping. With MFEM, the team discretizes a neutron transport equation with six independent variables in space, direction, and energy. Traditional neutron transport methods use a \u201csweeping\u201d method to transport particles through a problem, but this is not feasible for generally unstructured meshes. In Dawn\u2019s models, the Self-Adjoint Angular Flux (SAAF) form of the neutron transport equation is used to transform the neutron transport equation from a first-order hyperbolic form to a second-order elliptic form. Then, the SAAF equations are discretized with the finite element method and solved using MFEM. Due to the dependence of the neutron flux on angle and direction, these problems have a high vector-dimension with hundreds to thousands of degrees of freedom (DOF) per mesh vertex. Also, due to the second-order nature of these equations, highly refined meshes are required to sufficiently resolve reactor geometries with millions of vertices in a mesh. Results have been prepared for problems with billions of DOF using the Summit supercomputer at Oak Ridge National Laboratory.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#syunichi-shiraiwa-pppl_1", "text": "", "title": "Syun\u2019ichi Shiraiwa (PPPL)"}, {"location": "videos/#development-of-pymfem-python-wrapper-for-mfem-scalable-rf-wave-simulation-for-nuclear-fusion", "text": "", "title": "Development of PyMFEM Python Wrapper for MFEM & Scalable RF Wave Simulation for Nuclear Fusion"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_4", "text": "Syun\u2019ichi Shiraiwa of the Princeton Plasma Physics Laboratory (PPPL) introduced PyMFEM, a Python wrapper for MFEM that his team uses in radiofrequency (RF) wave simulations for the RF-SciDAC project. RF waves can be used to heat plasma in a nuclear fusion reaction. 
Simulations of this process present multiple challenges when incorporating a variety of antenna structures at different frequencies, waves with different wavelengths in the same space or spatially separated, and RF wave effects on background plasma. To integrate MFEM, a C++ software library, into their multiphysics platform, Shiraiwa\u2019s team created a code \u201cwrapper\u201d that binds MFEM to the external Python components of RF wave simulations, ultimately extending MFEM\u2019s features to Python users. Shiraiwa described how the PyMFEM module works in serial and parallel and invited the audience to contribute to the open-source code.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#qi-tang-lanl", "text": "", "title": "Qi Tang (LANL)"}, {"location": "videos/#an-adaptive-scalable-fully-implicit-resistive-mhd-solver", "text": "", "title": "An Adaptive, Scalable Fully Implicit Resistive MHD Solver"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_5", "text": "Qi Tang of Los Alamos National Laboratory described his team\u2019s development of an efficient, scalable solver for tokamak plasma simulations. Magnetohydrodynamics (MHD) equations are important for studying plasma systems, but efficient numerical solutions for MHD are extremely challenging due to disparate time and length scales, strong hyperbolic phenomena, and nonlinearity. Tang\u2019s team has developed a high-order stabilized finite element algorithm for incompressible resistive MHD equations based on MFEM, which provides physics-based preconditioners, adaptive mesh refinement, parallelization, and load balancing. Tang showed animated examples of the model\u2019s scalable and efficient results.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#jan-nikl-eli-beamlines", "text": "", "title": "Jan Nikl (ELI Beamlines)"}, {"location": "videos/#laser-plasma-modeling-with-high-order-finite-elements", "text": "", "title": "Laser Plasma Modeling with High-Order Finite Elements"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_6", "text": "Jan Nikl outlined how his team at the ELI Beamlines Centre uses MFEM for laser plasma modeling. Lasers have found their application in many scientific disciplines, where generation of plasma, the fourth state of matter, holds great potential for the future. A detailed description of laser-produced plasmas is then essential for many applications, like (pre)pulses of ultra-intense lasers and ion acceleration beamlines, laboratory astrophysics, inertial confinement fusion, and many others. All of these are investigated at ELI Beamlines in the Czech Republic, a European laser facility aiming to operate the most intense laser system in the world. In this context, Nikl described the numerical construction based on the finite element method. 
This effort concentrates mainly on the Lagrangian hydrodynamics and Vlasov\u2013Fokker\u2013Planck\u2013Maxwell kinetic description of plasma, utilizing the MFEM library for its flexibility and scalability.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#mathias-davids-harvard", "text": "", "title": "Mathias Davids (Harvard)"}, {"location": "videos/#modeling-peripheral-nerve-stimulations-pns-in-magnetic-resonance-imaging-mri", "text": "", "title": "Modeling Peripheral Nerve Stimulations (PNS) in Magnetic Resonance Imaging (MRI)"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_7", "text": "Mathias Davids from Harvard Medical School presented MFEM\u2019s use in a medical setting. Peripheral nerve stimulation (PNS) limits the usable image encoding performance in the latest generation of magnetic resonance imaging (MRI) scanners. The rapid switching of the MRI gradient coils\u2019 magnetic fields induces electric fields in the human body strong enough to evoke unwanted action potential in peripheral nerves, leading to muscle contractions or touch perceptions. Despite its limiting role in MRI, PNS effects are not directly included during the coil design phase. Davids\u2019 team developed a modeling tool to predict PNS thresholds and locations in the human body, allowing them to directly incorporate PNS metrics in the numeric coil winding optimization to design PNS-optimized coil layouts. This modeling tool relies on electromagnetic field simulations in heterogeneous finite element body models coupled to neurodynamic models of myelinated nerve fibers. This tool enables researchers to develop strategies that mitigate PNS effects without building expensive prototype MRI systems, maximizing the usable image encoding performance.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#marc-bolinches-ut", "text": "", "title": "Marc Bolinches (UT)"}, {"location": "videos/#development-of-dg-compressible-navier-stokes-solver-with-mfem", "text": "", "title": "Development of DG Compressible Navier-Stokes Solver with MFEM"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_8", "text": "Marc Bolinches from the University of Texas at Austin described a compressible Navier-Stokes solver using MFEM v4.2 which did not include full support for GPUs. The solver uses the discontinuous Galerkin (DG) method as a space discretization and an explicit Runge-Kutta time-integration scheme. An effort has been made to fully support GPU computation by taking over some of the loops internal to the NonLinearForm class. This has also allowed us to implement overlap between computation and communication. The team hopes their open-source code will help other researchers in creating high-fidelity simulations of compressible flows.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#robert-rieben-llnl", "text": "", "title": "Robert Rieben (LLNL)"}, {"location": "videos/#the-multiphysics-on-advanced-platforms-project-performance-portability-and-scaling", "text": "", "title": "The Multiphysics on Advanced Platforms Project: Performance, Portability and Scaling"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_9", "text": "High-energy-density physics (HEDP) experiments performed at LLNL and other Department of Energy laboratories require multiphysics simulations to predict the behavior of complex physical systems for applications including inertial confinement fusion, pulsed power, and material strength/equations-of-state studies. 
Robert Rieben described the variety of mathematical algorithms needed for these simulations, including ALE methods, unstructured adaptive mesh refinement, and high-order discretizations. LLNL\u2019s Multiphysics on Advanced Platforms Project (MAPP) is developing a next-generation multiphysics code, called MARBL, based on high-order numerical methods and modular infrastructure for deployment on advanced HPC architectures. MARBL\u2019s use of high-order methods produces better throughput on GPUs. MARBL uses MFEM for finite elements and mesh/field/operator abstractions while leveraging its support for efficient memory management. Rieben explained that co-design efforts among the MARBL, MFEM, and RAJA (portability software) teams led to better device utilization and improved performance for the MARBL code.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#felipe-gomez-carlos-del-valle-julian-jimenez-national-university-of-colombia", "text": "", "title": "Felipe G\u00f3mez, Carlos del Valle, & Juli\u00e1n Jim\u00e9nez (National University of Colombia)"}, {"location": "videos/#phase-change-heat-and-mass-transfer-simulation-with-mfem", "text": "", "title": "Phase Change Heat and Mass Transfer Simulation with MFEM"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_10", "text": "Three undergraduate students\u2014Felipe G\u00f3mez, Carlos del Valle, and Juli\u00e1n Jim\u00e9nez\u2014from the National University of Colombia presented their work using MFEM in an oceanographic model. Below the Arctic sea ice, and under the right conditions, a flux of icy brine flows down into the sea. The icy brine has a much lower freezing point and a higher density than normal seawater. As a result, it sinks while freezing everything around it, forming an ice channel called a brinicle (also known as an ice stalactite). The team shared their simulations of this phenomenon assuming cylindrical symmetry. The fluid is considered viscous and quasi-stationary, and the problem is simulated taking advantage of the setup symmetries. The heat and salt transport are weakly coupled to the fluid motion and are modeled with the corresponding conservation equations, taking into account diffusive and convective effects. The coupled system of partial differential equations is discretized and solved with the help of the MFEM finite element library.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#thomas-helfer-cea", "text": "", "title": "Thomas Helfer (CEA)"}, {"location": "videos/#mfem-mgis-mfront-a-mfem-based-library-for-nonlinear-solid-thermomechanic", "text": "", "title": "MFEM-MGIS-MFront, a MFEM-Based Library for Nonlinear Solid Thermomechanic"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_11", "text": "Thomas Helfer from the French Atomic Energy Commission (CEA) introduced the MFEM-MGIS-MFront library (MMM), which aims for efficient use of supercomputers in the field of implicit nonlinear thermomechanics. His team\u2019s primary focus is to develop advanced nuclear fuel element simulations where the evolution of materials under irradiation is influenced by multiple phenomena (e.g., viscoplasticity, damage, phase transitions, swelling due to solid and gaseous fission products). MFEM provides this project with finite element abstractions, adaptive mesh refinement, and a parallel API. 
However, as applications dedicated to solid mechanics in MFEM are mostly limited to a few constitutive equations such as elasticity and hyperelasticity, Helfer explained that his team extended the software\u2019s functionality to cover a broader spectrum of mechanics. Thus, this MMM project combines MFEM with the MFrontGenericInterfaceSupport (MGIS), an open-source C++ library that provides data structures to support arbitrarily complex nonlinear constitutive equations generated by the MFront code generator. MMM is developed within the scope of CEA\u2019s PLEIADES project. Helfer\u2019s presentation provided (1) an introduction to MMM goals; (2) a tutorial of MMM usage with a focus on the high-level user interface; (3) an overview of the core design choices of MMM and how MFEM was extended to support a range of scenarios; and (4) feedback on the two main issues encountered during MMM development.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#jamie-bramwell-llnl", "text": "", "title": "Jamie Bramwell (LLNL)"}, {"location": "videos/#serac-user-friendly-abstractions-for-mfem-based-engineering-applications", "text": "", "title": "Serac: User-Friendly Abstractions for MFEM-Based Engineering Applications"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_12", "text": "Jamie Bramwell of LLNL presented an overview of the open-source Serac project , whose goal is to provide user-friendly abstractions and modules that enable rapid development of complex nonlinear multiphysics simulation codes. She provided an overview of both the high-level physics modules (thermal conduction, solid mechanics, incompressible flow, electromagnetics) as well as the serac::Functional framework for quickly developing nonlinear GPU-enabled finite element method kernels.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#veselin-dobrev-llnl_3", "text": "", "title": "Veselin Dobrev (LLNL)"}, {"location": "videos/#recent-developments-in-mfem_1", "text": "", "title": "Recent Developments in MFEM"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_13", "text": "Veselin Dobrev of LLNL detailed the project\u2019s recent developments including memory manager improvements; serial support for p- and hp-refinement; high-order/low-order refined solution transfer; GLVis visualization via Jupyter Notebooks; and additional GPU support regarding HYPRE preconditioners, PETSc tools, and mesh optimization. MFEM now also integrates with various new libraries (AmgX, Ginkgo, FMS, and others), and continuous integration testing has been conducted on LLNL\u2019s Quartz, Lassen, and Corona machines. Additionally, Dobrev summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Exascale Computing Project, SciDAC, the FASTMath Institute, and other projects.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#tzanio-kolev-llnl_4", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos/#the-state-of-mfem_3", "text": "", "title": "The State of MFEM"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_14", "text": "MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities of discretization algorithms, built-in solvers, parallel scalability, adaptive mesh refinement, and support for a range of computing architectures. 
Kolev also highlighted the global community\u2019s contributions as well as features included in the recent v4.3 software release.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#aaron-fisher-llnl_6", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos/#welcome-and-overview_3", "text": "", "title": "Welcome and Overview"}, {"location": "videos/#october-20-2021-mfem-workshop-2021_15", "text": "The MFEM community workshop held virtually on October 20, 2021, brought together users and developers for a review of software features and the development roadmap, a showcase of technical talks and applications, collaborative breakout sessions, and a simulation contest. Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community survey results.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos/#conferences-in-2021", "text": "", "title": "Conferences in 2021"}, {"location": "videos/#tzanio-kolev-llnl_5", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos/#efficient-finite-element-discretizations-for-exascale-applications", "text": "", "title": "Efficient Finite Element Discretizations for Exascale Applications"}, {"location": "videos/#february-25-2021-excalibur-sle-3-workshop", "text": "", "title": "February 25, 2021 | ExCALIBUR SLE 3 workshop"}, {"location": "videos/#atpesc-2017-2018", "text": "", "title": "ATPESC 2017, 2018"}, {"location": "videos/#tzanio-kolev-llnl-mark-shephard-rpi-and-cameron-smith-rpi", "text": "", "title": "Tzanio Kolev (LLNL), Mark Shephard (RPI) and Cameron Smith (RPI)"}, {"location": "videos/#unstructured-meshing-technologies", "text": "", "title": "Unstructured Meshing Technologies"}, {"location": "videos/#august-6-2018-atpesc-2018", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2018. Slides for this presentation are available here .", "title": "August 6, 2018 | ATPESC 2018"}, {"location": "videos/#tzanio-kolev-llnl-and-mark-shephard-rpi", "text": "", "title": "Tzanio Kolev (LLNL) and Mark Shephard (RPI)"}, {"location": "videos/#unstructured-meshing-technologies_1", "text": "", "title": "Unstructured Meshing Technologies"}, {"location": "videos/#august-7-2017-atpesc-2017", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here .", "title": "August 7, 2017 | ATPESC 2017"}, {"location": "videos/#tzanio-kolev-llnl-and-mark-shephard-rpi_1", "text": "", "title": "Tzanio Kolev (LLNL) and Mark Shephard (RPI)"}, {"location": "videos/#conforming-nonconforming-adaptivity-for-unstructured-meshes", "text": "", "title": "Conforming & Nonconforming Adaptivity for Unstructured Meshes"}, {"location": "videos/#august-7-2017-atpesc-2017_1", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2017. 
Slides for this presentation are available here .", "title": "August 7, 2017 | ATPESC 2017"}, {"location": "videos/#other-videos", "text": "", "title": "Other Videos"}, {"location": "videos/#llnl-hpc-software-tutorials-mfem", "text": "", "title": "LLNL HPC Software Tutorials: MFEM"}, {"location": "videos/#aug-22-2024", "text": "Instructions for a self-paced overview of MFEM.", "title": "Aug 22, 2024"}, {"location": "videos/#mfem-advanced-simulation-algorithms-for-hpc-applications", "text": "", "title": "MFEM: Advanced Simulation Algorithms for HPC Applications"}, {"location": "videos/#jun-24-2020", "text": "Overview of MFEM 4.0 featuring some of its developers.", "title": "Jun 24, 2020"}, {"location": "videos/#center-for-applied-scientific-computing", "text": "", "title": "Center for Applied Scientific Computing"}, {"location": "videos/#jul-12-2019", "text": "Overview of the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory, including a highlight of MFEM.", "title": "Jul 12, 2019"}, {"location": "videos/#str-preview-exascale-computing", "text": "", "title": "S&TR Preview: Exascale Computing"}, {"location": "videos/#october-6-2016", "text": "Some early MFEM results in the BLAST project.", "title": "October 6, 2016"}, {"location": "videos2/", "text": "MFEM Videos A collection of MFEM-related videos, including recorded talks from the MFEM workshops and conference presentations. 2021 Aaron Fisher (LLNL) Wrap-Up and Simulation Contest Winners October 20, 2021 | MFEM Workshop 2021 MFEM\u2019s first community workshop was held virtually on October 20, 2021, with participants around the world. Aaron Fisher of LLNL concluded the workshop by announcing the results of the simulation and visualization contest. The winners represent two very different research applications using MFEM: (1) the electric field generated by electrocardiogram waves of a rabbit\u2019s heart ventricles, rendered by Dennis Ogiermann of Ruhr-University Bochum (Germany); (2) incompressible fluid flow around a rotating turbine, animated by Tamas Horvath of Oakland University (Michigan). Contest submissions are featured at https://mfem.org/gallery . Will Pazner (LLNL) High-Order Matrix-Free Solvers October 20, 2021 | MFEM Workshop 2021 For users unfamiliar with MFEM\u2019s solver library, Will Pazner of LLNL demonstrated a few ways\u2014in some cases adding just a single line of code\u2014to run scalable solvers for differential equations. These solvers execute hierarchical finite element discretizations for both low- and high-order problems. Vladimir Tomov (LLNL) MFEM Capabilities for High-Order Mesh Optimization October 20, 2021 | MFEM Workshop 2021 Vladimir Tomov of LLNL described MFEM\u2019s mesh optimization strategies including ways the user can define target elements. He demonstrated optimizing a mesh\u2019s shape by limiting displacements to preserve a boundary layer and by changing the size of a uniform mesh in a specific region. MFEM\u2019s mesh-optimizing miniapps are available online at https://mfem.org/meshing-miniapps . William Dawn (NCSU) Unstructured Finite Element Neutron Transport using MFEM October 20, 2021 | MFEM Workshop 2021 William Dawn from North Carolina State University described his work with unstructured neutron transport. His team models microreactors, a new class of compact reactor with relatively small electrical output. 
As part of the Exascale Computing Project, Dawn\u2019s team is modeling the MARVEL reactor, which is planned for construction at Idaho National Laboratory. MFEM satisfies their need for a finite element framework with GPU support and rapid prototyping. With MFEM, the team discretizes a neutron transport equation with six independent variables in space, direction, and energy. Traditional neutron transport methods use a \u201csweeping\u201d method to transport particles through a problem, but this is not feasible for generally unstructured meshes. In Dawn\u2019s models, the Self-Adjoint Angular Flux (SAAF) form of the neutron transport equation is used to transform the neutron transport equation from a first-order hyperbolic form to a second-order elliptic form. Then, the SAAF equations are discretized with the finite element method and solved using MFEM. Due to the dependence of the neutron flux on angle and direction, these problems have a high vector-dimension with hundreds to thousands of degrees of freedom (DOF) per mesh vertex. Also, due to the second-order nature of these equations, highly refined meshes are required to sufficiently resolve reactor geometries with millions of vertices in a mesh. Results have been prepared for problems with billions of DOF using the Summit supercomputer at Oak Ridge National Laboratory. Syun\u2019ichi Shiraiwa (PPPL) Development of PyMFEM Python Wrapper for MFEM & Scalable RF Wave Simulation for Nuclear Fusion October 20, 2021 | MFEM Workshop 2021 Syun\u2019ichi Shiraiwa of Pacific Northwest National Laboratory introduced PyMFEM, a Python wrapper for MFEM that his team uses in radiofrequency (RF) wave simulations for the RF-SciDAC project. RF waves can be used to heat plasma in a nuclear fusion reaction. Simulations of this process present multiple challenges when incorporating a variety of antenna structures at different frequencies, waves with different wave lengths in the same space or spatially diverse, and RF wave effects on background plasma. To integrate MFEM, a C++ software library, into their multiphysics platform, Shiraiwa\u2019s team created a code \u201cwrapper\u201d that binds MFEM to the external Python components of RF wave simulations, ultimately extending MFEM\u2019s features to Python users. Shiraiwa described how the PyMFEM module works in serial and parallel and invited the audience to contribute to the open-source code. Qi Tang (LANL) An Adaptive, Scalable Fully Implicit Resistive MHD Solver October 20, 2021 | MFEM Workshop 2021 Qi Tang of Los Alamos National Laboratory described his team\u2019s development of an efficient, scalable solver for tokamak plasma simulations. Magnetohydrodynamics (MHD) equations are important for studying plasma systems, but efficient numerical solutions for MHD are extremely challenging due to disparate time and length scales, strong hyperbolic phenomena, and nonlinearity. Tang\u2019s team has developed a high-order stabilized finite element algorithm for incompressible resistive MHD equations based on MFEM, which provides physics-based preconditioners, adaptive mesh refinement, parallelization, and load balancing. Tang showed animated examples of the model\u2019s scalable and efficient results. Jan Nikl (ELI Beamlines) Laser Plasma Modeling with High-Order Finite Elements October 20, 2021 | MFEM Workshop 2021 Jan Nikl outlined how his team at the ELI Beamlines Centre uses MFEM for laser plasma modeling. 
Lasers have found their application in many scientific disciplines, where generation of plasma, the fourth state of matter, holds great potential for the future. A detailed description of laser produced plasmas is then essential for many applications, like (pre)pulses of ultra-intense lasers and ion acceleration beamlines, laboratory astrophysics, inertial confinement fusion, and many others. All of the mentioned are investigated at ELI Beamlines in the Czech Republic, a European laser facility aiming to operate the most intense laser system in the world. In this context, Nikl described the numerical construction based on the finite element method. This effort concentrates mainly on the Lagrangian hydrodynamics and Vlasov\u2013Fokker\u2013Planck\u2013Maxwell kinetic description of plasma, utilizing the MFEM library for its flexibility and scalability. Mathias Davids (Harvard) Modeling Peripheral Nerve Stimulations (PNS) in Magnetic Resonance Imaging (MRI) October 20, 2021 | MFEM Workshop 2021 Mathias Davids from Harvard Medical School presented MFEM\u2019s use in a medical setting. Peripheral nerve stimulation (PNS) limits the usable image encoding performance in the latest generation of magnetic resonance imaging (MRI) scanners. The rapid switching of the MRI gradient coils\u2019 magnetic fields induces electric fields in the human body strong enough to evoke unwanted action potential in peripheral nerves, leading to muscle contractions or touch perceptions. Despite its limiting role in MRI, PNS effects are not directly included during the coil design phase. Davids\u2019 team developed a modeling tool to predict PNS thresholds and locations in the human body, allowing them to directly incorporate PNS metrics in the numeric coil winding optimization to design PNS-optimized coil layouts. This modeling tool relies on electromagnetic field simulations in heterogeneous finite element body models coupled to neurodynamic models of myelinated nerve fibers. This tool enables researchers to develop strategies that mitigate PNS effects without building expensive prototype MRI systems, maximizing the usable image encoding performance. Marc Bolinches (UT) Development of DG Compressible Navier-Stokes Solver with MFEM October 20, 2021 | MFEM Workshop 2021 Marc Bolinches from the University of Texas at Austin described a compressible Navier-Stokes solver using MFEM v4.2 which did not include full support for GPUs. The solver uses the discontinuous Galerkin (DG) method as a space discretization and an explicit Runge-Kutta time-integration scheme. An effort has been made to fully support GPU computation by taking over some of the loops internal to the NonLinearForm class. This has also allowed us to implement overlap between computation and communication. The team hopes their open-source code will help other researchers in creating high-fidelity simulations of compressible flows. Robert Rieben (LLNL) The Multiphysics on Advanced Platforms Project: Performance, Portability and Scaling October 20, 2021 | MFEM Workshop 2021 High-energy-density physics (HEDP) experiments performed at LLNL and other Department of Energy laboratories require multiphysics simulations to predict the behavior of complex physical systems for applications including inertial confinement fusion, pulsed power, and material strength/equations-of-state studies. 
Robert Rieben described the variety of mathematical algorithms needed for these simulations, including ALE methods, unstructured adaptive mesh refinement, and high-order discretizations. LLNL\u2019s Multiphysics on Advanced Platforms Project (MAPP) is developing a next-generation multiphysics code, called MARBL, based on high-order numerical methods and modular infrastructure for deployment on advanced HPC architectures. MARBL\u2019s use of high-order methods produces better throughput on GPUs. MARBL uses MFEM for finite elements and mesh/field/operator abstractions while leveraging its support for efficient memory management. Rieben explained that co-design efforts among the MARBL, MFEM, and RAJA (portability software) teams led to better device utilization and improved performance for the MARBL code. Felipe G\u00f3mez, Carlos del Valle, & Juli\u00e1n Jim\u00e9nez (National University of Colombia) Phase Change Heat and Mass Transfer Simulation with MFEM October 20, 2021 | MFEM Workshop 2021 Three undergraduate students\u2014Felipe G\u00f3mez, Carlos del Valle, and Juli\u00e1n Jim\u00e9nez\u2014from the National University of Colombia presented their work using MFEM in an oceanographic model. Below the Arctic sea ice, and under the right conditions, a flux of icy brine flows down into the sea. The icy brine has a much lower fusion point and a higher density than normal seawater. As a result, it sinks while freezing everything around it, forming an ice channel called a brinicle (also known as ice stalactite). The team shared their simulations of this phenomenon assuming cylindrical symmetry. The fluid is considered viscous and quasi-stationary, and the problem is simulated taking advantage of the setup symmetries. The heat and salt transport are weakly coupled to the fluid motion and are modeled with the corresponding conservation equations, taking into account diffusive and convective effects. The coupled system of partial differential equations is discretized and solved with the help of the MFEM finite element library. Thomas Helfer (CEA) MFEM-MGIS-MFront, a MFEM-Based Library for Nonlinear Solid Thermomechanic October 20, 2021 | MFEM Workshop 2021 Thomas Helfer from the French Atomic Energy Commission (CEA) introduced the MFEM-MGIS-MFront library (MMM), which aims for efficient use of supercomputers in the field of implicit nonlinear thermomechanics. His team\u2019s primary focus is to develop advanced nuclear fuel element simulations where the evolution of materials under irradiation is influenced by multiple phenomena (e.g., viscoplasticity, damage, phase transitions, swelling due to solid and gaseous fission products). MFEM provides this project with finite element abstractions, adaptive mesh refinement, and a parallel API. However, as applications dedicated to solid mechanics in MFEM are mostly limited to a few constitutive equations such as elasticity and hyperelasticity, Helfer explained that his team extended the software\u2019s functionality to cover a broader spectrum of mechanics. Thus, this MMM project combines MFEM with the MFrontGenericInterfaceSupport (MGIS), an open-source C++ library that provides data structures to support arbitrarily complex nonlinear constitutive equations generated by the MFront code generator. MMM is developed within the scope of CEA\u2019s PLEIADES project. 
Helfer\u2019s presentation provided (1) an introduction to MMM goals; (2) a tutorial of MMM usage with a focus on the high-level user interface; (3) an overview of the core design choices of MMM and how MFEM was extended to support a range of scenarios; and (4) feedback on the two main issues encountered during MMM development. Jamie Bramwell (LLNL) Serac: User-Friendly Abstractions for MFEM-Based Engineering Applications October 20, 2021 | MFEM Workshop 2021 Jamie Bramwell of LLNL presented an overview of the open-source Serac project ( https://serac.readthedocs.io/en/latest ), whose goal is to provide user-friendly abstractions and modules that enable rapid development of complex nonlinear multiphysics simulation codes. She provided an overview of both the high-level physics modules (thermal conduction, solid mechanics, incompressible flow, electromagnetics) as well as the serac::Functional framework for quickly developing nonlinear GPU-enabled finite element method kernels. Veselin Dobrev (LLNL) Recent Developments in MFEM October 20, 2021 | MFEM Workshop 2021 Veselin Dobrev of LLNL detailed the project\u2019s recent developments including memory manager improvements; serial support for p- and hp-refinement; high-order/low-order refined solution transfer; GLVis visualization via Jupyter Notebooks; and additional GPU support regarding HYPRE preconditioners, PETSc tools, and mesh optimization. MFEM now also integrates with various new libraries (AmgX, Ginkgo, FMS, and others), and continuous integration testing has been conducted on LLNL\u2019s Quartz, Lassen, and Corona machines. Additionally, Dobrev summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Exascale Computing Project, SciDAC, the FASTMath Institute, and other projects. Tzanio Kolev (LLNL) The State of MFEM October 20, 2021 | MFEM Workshop 2021 MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities of discretization algorithms, built-in solvers, parallel scalability, adaptive mesh refinement, and support for a range of computing architectures. Kolev also highlighted the global community\u2019s contributions as well as features included in the recent v4.3 software release. Aaron Fisher (LLNL) Welcome and Overview October 20, 2021 | MFEM Workshop 2021 The MFEM community workshop held virtually on October 20, 2021, brought together users and developers for a review of software features and the development roadmap, a showcase of technical talks and applications, collaborative breakout sessions, and a simulation contest. Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community survey results. Tzanio Kolev (LLNL) Efficient Finite Element Discretizations for Exascale Applications February 25, 2021 | ExCALIBUR SLE 3 workshop 2020 MFEM: Advanced Simulation Algorithms for HPC Applications Jun 24, 2020 | YouTube Overview of MFEM 4.0 featuring some of its developers. 2019 Center for Applied Scientific Computing Jul 12, 2019 | YouTube Overview of the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory, including a highlight of MFEM. 2018 Tzanio Kolev (LLNL), Mark Shephard (RPI) and Cameron Smith (RPI) Unstructured Meshing Technologies August 6, 2018 | ATPESC 2018 Presented at the Argonne Training Program on Extreme-Scale Computing 2018. Slides for this presentation are available here . 
2017 Tzanio Kolev (LLNL) and Mark Shephard (RPI) Unstructured Meshing Technologies August 7, 2017 | ATPESC 2017 Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here . Tzanio Kolev (LLNL) and Mark Shephard (RPI) Conforming & Nonconforming Adaptivity for Unstructured Meshes August 7, 2017 | ATPESC 2017 Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here . 2016 S&TR Preview: Exascale Computing October 6, 2016 | YouTube Some early MFEM results in the BLAST project.", "title": "MFEM Videos"}, {"location": "videos2/#mfem-videos", "text": "A collection of MFEM-related videos, including recorded talks from the MFEM workshops and conference presentations.", "title": "MFEM Videos"}, {"location": "videos2/#2021", "text": "", "title": "2021"}, {"location": "videos2/#aaron-fisher-llnl", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos2/#wrap-up-and-simulation-contest-winners", "text": "", "title": "Wrap-Up and Simulation Contest Winners"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021", "text": "MFEM\u2019s first community workshop was held virtually on October 20, 2021, with participants around the world. Aaron Fisher of LLNL concluded the workshop by announcing the results of the simulation and visualization contest. The winners represent two very different research applications using MFEM: (1) the electric field generated by electrocardiogram waves of a rabbit\u2019s heart ventricles, rendered by Dennis Ogiermann of Ruhr-University Bochum (Germany); (2) incompressible fluid flow around a rotating turbine, animated by Tamas Horvath of Oakland University (Michigan). Contest submissions are featured at https://mfem.org/gallery .", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#will-pazner-llnl", "text": "", "title": "Will Pazner (LLNL)"}, {"location": "videos2/#high-order-matrix-free-solvers", "text": "", "title": "High-Order Matrix-Free Solvers"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_1", "text": "For users unfamiliar with MFEM\u2019s solver library, Will Pazner of LLNL demonstrated a few ways\u2014in some cases adding just a single line of code\u2014to run scalable solvers for differential equations. These solvers execute hierarchical finite element discretizations for both low- and high-order problems.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#vladimir-tomov-llnl", "text": "", "title": "Vladimir Tomov (LLNL)"}, {"location": "videos2/#mfem-capabilities-for-high-order-mesh-optimization", "text": "", "title": "MFEM Capabilities for High-Order Mesh Optimization"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_2", "text": "Vladimir Tomov of LLNL described MFEM\u2019s mesh optimization strategies including ways the user can define target elements. He demonstrated optimizing a mesh\u2019s shape by limiting displacements to preserve a boundary layer and by changing the size of a uniform mesh in a specific region. 
MFEM\u2019s mesh-optimizing miniapps are available online at https://mfem.org/meshing-miniapps .", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#william-dawn-ncsu", "text": "", "title": "William Dawn (NCSU)"}, {"location": "videos2/#unstructured-finite-element-neutron-transport-using-mfem", "text": "", "title": "Unstructured Finite Element Neutron Transport using MFEM"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_3", "text": "William Dawn from North Carolina State University described his work with unstructured neutron transport. His team models microreactors, a new class of compact reactor with relatively small electrical output. As part of the Exascale Computing Project, Dawn\u2019s team is modeling the MARVEL reactor, which is planned for construction at Idaho National Laboratory. MFEM satisfies their need for a finite element framework with GPU support and rapid prototyping. With MFEM, the team discretizes a neutron transport equation with six independent variables in space, direction, and energy. Traditional neutron transport methods use a \u201csweeping\u201d method to transport particles through a problem, but this is not feasible for generally unstructured meshes. In Dawn\u2019s models, the Self-Adjoint Angular Flux (SAAF) form of the neutron transport equation is used to transform the neutron transport equation from a first-order hyperbolic form to a second-order elliptic form. Then, the SAAF equations are discretized with the finite element method and solved using MFEM. Due to the dependence of the neutron flux on angle and direction, these problems have a high vector-dimension with hundreds to thousands of degrees of freedom (DOF) per mesh vertex. Also, due to the second-order nature of these equations, highly refined meshes are required to sufficiently resolve reactor geometries with millions of vertices in a mesh. Results have been prepared for problems with billions of DOF using the Summit supercomputer at Oak Ridge National Laboratory.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#syunichi-shiraiwa-pppl", "text": "", "title": "Syun\u2019ichi Shiraiwa (PPPL)"}, {"location": "videos2/#development-of-pymfem-python-wrapper-for-mfem-scalable-rf-wave-simulation-for-nuclear-fusion", "text": "", "title": "Development of PyMFEM Python Wrapper for MFEM & Scalable RF Wave Simulation for Nuclear Fusion"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_4", "text": "Syun\u2019ichi Shiraiwa of Pacific Northwest National Laboratory introduced PyMFEM, a Python wrapper for MFEM that his team uses in radiofrequency (RF) wave simulations for the RF-SciDAC project. RF waves can be used to heat plasma in a nuclear fusion reaction. Simulations of this process present multiple challenges when incorporating a variety of antenna structures at different frequencies, waves with different wave lengths in the same space or spatially diverse, and RF wave effects on background plasma. To integrate MFEM, a C++ software library, into their multiphysics platform, Shiraiwa\u2019s team created a code \u201cwrapper\u201d that binds MFEM to the external Python components of RF wave simulations, ultimately extending MFEM\u2019s features to Python users. 
Shiraiwa described how the PyMFEM module works in serial and parallel and invited the audience to contribute to the open-source code.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#qi-tang-lanl", "text": "", "title": "Qi Tang (LANL)"}, {"location": "videos2/#an-adaptive-scalable-fully-implicit-resistive-mhd-solver", "text": "", "title": "An Adaptive, Scalable Fully Implicit Resistive MHD Solver"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_5", "text": "Qi Tang of Los Alamos National Laboratory described his team\u2019s development of an efficient, scalable solver for tokamak plasma simulations. Magnetohydrodynamics (MHD) equations are important for studying plasma systems, but efficient numerical solutions for MHD are extremely challenging due to disparate time and length scales, strong hyperbolic phenomena, and nonlinearity. Tang\u2019s team has developed a high-order stabilized finite element algorithm for incompressible resistive MHD equations based on MFEM, which provides physics-based preconditioners, adaptive mesh refinement, parallelization, and load balancing. Tang showed animated examples of the model\u2019s scalable and efficient results.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#jan-nikl-eli-beamlines", "text": "", "title": "Jan Nikl (ELI Beamlines)"}, {"location": "videos2/#laser-plasma-modeling-with-high-order-finite-elements", "text": "", "title": "Laser Plasma Modeling with High-Order Finite Elements"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_6", "text": "Jan Nikl outlined how his team at the ELI Beamlines Centre uses MFEM for laser plasma modeling. Lasers have found their application in many scientific disciplines, where generation of plasma, the fourth state of matter, holds great potential for the future. A detailed description of laser produced plasmas is then essential for many applications, like (pre)pulses of ultra-intense lasers and ion acceleration beamlines, laboratory astrophysics, inertial confinement fusion, and many others. All of the mentioned are investigated at ELI Beamlines in the Czech Republic, a European laser facility aiming to operate the most intense laser system in the world. In this context, Nikl described the numerical construction based on the finite element method. This effort concentrates mainly on the Lagrangian hydrodynamics and Vlasov\u2013Fokker\u2013Planck\u2013Maxwell kinetic description of plasma, utilizing the MFEM library for its flexibility and scalability.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#mathias-davids-harvard", "text": "", "title": "Mathias Davids (Harvard)"}, {"location": "videos2/#modeling-peripheral-nerve-stimulations-pns-in-magnetic-resonance-imaging-mri", "text": "", "title": "Modeling Peripheral Nerve Stimulations (PNS) in Magnetic Resonance Imaging (MRI)"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_7", "text": "Mathias Davids from Harvard Medical School presented MFEM\u2019s use in a medical setting. Peripheral nerve stimulation (PNS) limits the usable image encoding performance in the latest generation of magnetic resonance imaging (MRI) scanners. The rapid switching of the MRI gradient coils\u2019 magnetic fields induces electric fields in the human body strong enough to evoke unwanted action potential in peripheral nerves, leading to muscle contractions or touch perceptions. 
Despite its limiting role in MRI, PNS effects are not directly included during the coil design phase. Davids\u2019 team developed a modeling tool to predict PNS thresholds and locations in the human body, allowing them to directly incorporate PNS metrics in the numeric coil winding optimization to design PNS-optimized coil layouts. This modeling tool relies on electromagnetic field simulations in heterogeneous finite element body models coupled to neurodynamic models of myelinated nerve fibers. This tool enables researchers to develop strategies that mitigate PNS effects without building expensive prototype MRI systems, maximizing the usable image encoding performance.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#marc-bolinches-ut", "text": "", "title": "Marc Bolinches (UT)"}, {"location": "videos2/#development-of-dg-compressible-navier-stokes-solver-with-mfem", "text": "", "title": "Development of DG Compressible Navier-Stokes Solver with MFEM"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_8", "text": "Marc Bolinches from the University of Texas at Austin described a compressible Navier-Stokes solver using MFEM v4.2 which did not include full support for GPUs. The solver uses the discontinuous Galerkin (DG) method as a space discretization and an explicit Runge-Kutta time-integration scheme. An effort has been made to fully support GPU computation by taking over some of the loops internal to the NonLinearForm class. This has also allowed us to implement overlap between computation and communication. The team hopes their open-source code will help other researchers in creating high-fidelity simulations of compressible flows.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#robert-rieben-llnl", "text": "", "title": "Robert Rieben (LLNL)"}, {"location": "videos2/#the-multiphysics-on-advanced-platforms-project-performance-portability-and-scaling", "text": "", "title": "The Multiphysics on Advanced Platforms Project: Performance, Portability and Scaling"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_9", "text": "High-energy-density physics (HEDP) experiments performed at LLNL and other Department of Energy laboratories require multiphysics simulations to predict the behavior of complex physical systems for applications including inertial confinement fusion, pulsed power, and material strength/equations-of-state studies. Robert Rieben described the variety of mathematical algorithms needed for these simulations, including ALE methods, unstructured adaptive mesh refinement, and high-order discretizations. LLNL\u2019s Multiphysics on Advanced Platforms Project (MAPP) is developing a next-generation multiphysics code, called MARBL, based on high-order numerical methods and modular infrastructure for deployment on advanced HPC architectures. MARBL\u2019s use of high-order methods produces better throughput on GPUs. MARBL uses MFEM for finite elements and mesh/field/operator abstractions while leveraging its support for efficient memory management. 
Rieben explained that co-design efforts among the MARBL, MFEM, and RAJA (portability software) teams led to better device utilization and improved performance for the MARBL code.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#felipe-gomez-carlos-del-valle-julian-jimenez-national-university-of-colombia", "text": "", "title": "Felipe G\u00f3mez, Carlos del Valle, & Juli\u00e1n Jim\u00e9nez (National University of Colombia)"}, {"location": "videos2/#phase-change-heat-and-mass-transfer-simulation-with-mfem", "text": "", "title": "Phase Change Heat and Mass Transfer Simulation with MFEM"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_10", "text": "Three undergraduate students\u2014Felipe G\u00f3mez, Carlos del Valle, and Juli\u00e1n Jim\u00e9nez\u2014from the National University of Colombia presented their work using MFEM in an oceanographic model. Below the Arctic sea ice, and under the right conditions, a flux of icy brine flows down into the sea. The icy brine has a much lower fusion point and a higher density than normal seawater. As a result, it sinks while freezing everything around it, forming an ice channel called a brinicle (also known as ice stalactite). The team shared their simulations of this phenomenon assuming cylindrical symmetry. The fluid is considered viscous and quasi-stationary, and the problem is simulated taking advantage of the setup symmetries. The heat and salt transport are weakly coupled to the fluid motion and are modeled with the corresponding conservation equations, taking into account diffusive and convective effects. The coupled system of partial differential equations is discretized and solved with the help of the MFEM finite element library.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#thomas-helfer-cea", "text": "", "title": "Thomas Helfer (CEA)"}, {"location": "videos2/#mfem-mgis-mfront-a-mfem-based-library-for-nonlinear-solid-thermomechanic", "text": "", "title": "MFEM-MGIS-MFront, a MFEM-Based Library for Nonlinear Solid Thermomechanic"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_11", "text": "Thomas Helfer from the French Atomic Energy Commission (CEA) introduced the MFEM-MGIS-MFront library (MMM), which aims for efficient use of supercomputers in the field of implicit nonlinear thermomechanics. His team\u2019s primary focus is to develop advanced nuclear fuel element simulations where the evolution of materials under irradiation is influenced by multiple phenomena (e.g., viscoplasticity, damage, phase transitions, swelling due to solid and gaseous fission products). MFEM provides this project with finite element abstractions, adaptive mesh refinement, and a parallel API. However, as applications dedicated to solid mechanics in MFEM are mostly limited to a few constitutive equations such as elasticity and hyperelasticity, Helfer explained that his team extended the software\u2019s functionality to cover a broader spectrum of mechanics. Thus, this MMM project combines MFEM with the MFrontGenericInterfaceSupport (MGIS), an open-source C++ library that provides data structures to support arbitrarily complex nonlinear constitutive equations generated by the MFront code generator. MMM is developed within the scope of CEA\u2019s PLEIADES project. 
Helfer\u2019s presentation provided (1) an introduction to MMM goals; (2) a tutorial of MMM usage with a focus on the high-level user interface; (3) an overview of the core design choices of MMM and how MFEM was extended to support a range of scenarios; and (4) feedback on the two main issues encountered during MMM development.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#jamie-bramwell-llnl", "text": "", "title": "Jamie Bramwell (LLNL)"}, {"location": "videos2/#serac-user-friendly-abstractions-for-mfem-based-engineering-applications", "text": "", "title": "Serac: User-Friendly Abstractions for MFEM-Based Engineering Applications"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_12", "text": "Jamie Bramwell of LLNL presented an overview of the open-source Serac project ( https://serac.readthedocs.io/en/latest ), whose goal is to provide user-friendly abstractions and modules that enable rapid development of complex nonlinear multiphysics simulation codes. She provided an overview of both the high-level physics modules (thermal conduction, solid mechanics, incompressible flow, electromagnetics) as well as the serac::Functional framework for quickly developing nonlinear GPU-enabled finite element method kernels.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#veselin-dobrev-llnl", "text": "", "title": "Veselin Dobrev (LLNL)"}, {"location": "videos2/#recent-developments-in-mfem", "text": "", "title": "Recent Developments in MFEM"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_13", "text": "Veselin Dobrev of LLNL detailed the project\u2019s recent developments including memory manager improvements; serial support for p- and hp-refinement; high-order/low-order refined solution transfer; GLVis visualization via Jupyter Notebooks; and additional GPU support regarding HYPRE preconditioners, PETSc tools, and mesh optimization. MFEM now also integrates with various new libraries (AmgX, Ginkgo, FMS, and others), and continuous integration testing has been conducted on LLNL\u2019s Quartz, Lassen, and Corona machines. Additionally, Dobrev summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Exascale Computing Project, SciDAC, the FASTMath Institute, and other projects.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#tzanio-kolev-llnl", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos2/#the-state-of-mfem", "text": "", "title": "The State of MFEM"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_14", "text": "MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities of discretization algorithms, built-in solvers, parallel scalability, adaptive mesh refinement, and support for a range of computing architectures. 
Kolev also highlighted the global community\u2019s contributions as well as features included in the recent v4.3 software release.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#aaron-fisher-llnl_1", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos2/#welcome-and-overview", "text": "", "title": "Welcome and Overview"}, {"location": "videos2/#october-20-2021-mfem-workshop-2021_15", "text": "The MFEM community workshop held virtually on October 20, 2021, brought together users and developers for a review of software features and the development roadmap, a showcase of technical talks and applications, collaborative breakout sessions, and a simulation contest. Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community survey results.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos2/#tzanio-kolev-llnl_1", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos2/#efficient-finite-element-discretizations-for-exascale-applications", "text": "", "title": "Efficient Finite Element Discretizations for Exascale Applications"}, {"location": "videos2/#february-25-2021-excalibur-sle-3-workshop", "text": "", "title": "February 25, 2021 | ExCALIBUR SLE 3 workshop"}, {"location": "videos2/#2020", "text": "", "title": "2020"}, {"location": "videos2/#mfem-advanced-simulation-algorithms-for-hpc-applications", "text": "", "title": "MFEM: Advanced Simulation Algorithms for HPC Applications"}, {"location": "videos2/#jun-24-2020-youtube", "text": "Overview of MFEM 4.0 featuring some of its developers.", "title": "Jun 24, 2020 | YouTube"}, {"location": "videos2/#2019", "text": "", "title": "2019"}, {"location": "videos2/#center-for-applied-scientific-computing", "text": "", "title": "Center for Applied Scientific Computing"}, {"location": "videos2/#jul-12-2019-youtube", "text": "Overview of the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory, including a highlight of MFEM.", "title": "Jul 12, 2019 | YouTube"}, {"location": "videos2/#2018", "text": "", "title": "2018"}, {"location": "videos2/#tzanio-kolev-llnl-mark-shephard-rpi-and-cameron-smith-rpi", "text": "", "title": "Tzanio Kolev (LLNL), Mark Shephard (RPI) and Cameron Smith (RPI)"}, {"location": "videos2/#unstructured-meshing-technologies", "text": "", "title": "Unstructured Meshing Technologies"}, {"location": "videos2/#august-6-2018-atpesc-2018", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2018. Slides for this presentation are available here .", "title": "August 6, 2018 | ATPESC 2018"}, {"location": "videos2/#2017", "text": "", "title": "2017"}, {"location": "videos2/#tzanio-kolev-llnl-and-mark-shephard-rpi", "text": "", "title": "Tzanio Kolev (LLNL) and Mark Shephard (RPI)"}, {"location": "videos2/#unstructured-meshing-technologies_1", "text": "", "title": "Unstructured Meshing Technologies"}, {"location": "videos2/#august-7-2017-atpesc-2017", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2017. 
Slides for this presentation are available here .", "title": "August 7, 2017 | ATPESC 2017"}, {"location": "videos2/#tzanio-kolev-llnl-and-mark-shephard-rpi_1", "text": "", "title": "Tzanio Kolev (LLNL) and Mark Shephard (RPI)"}, {"location": "videos2/#conforming-nonconforming-adaptivity-for-unstructured-meshes", "text": "", "title": "Conforming & Nonconforming Adaptivity for Unstructured Meshes"}, {"location": "videos2/#august-7-2017-atpesc-2017_1", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here .", "title": "August 7, 2017 | ATPESC 2017"}, {"location": "videos2/#2016", "text": "", "title": "2016"}, {"location": "videos2/#str-preview-exascale-computing", "text": "", "title": "S&TR Preview: Exascale Computing"}, {"location": "videos2/#october-6-2016-youtube", "text": "Some early MFEM results in the BLAST project.", "title": "October 6, 2016 | YouTube"}, {"location": "videos3/", "text": "MFEM Videos A collection of MFEM-related videos, including recorded talks from the MFEM workshops and conference presentations. MFEM Workshop 2021 Aaron Fisher (LLNL) Wrap-Up and Simulation Contest Winners October 20, 2021 | MFEM Workshop 2021 MFEM\u2019s first community workshop was held virtually on October 20, 2021, with participants around the world. Aaron Fisher of LLNL concluded the workshop by announcing the results of the simulation and visualization contest. The winners represent two very different research applications using MFEM: (1) the electric field generated by electrocardiogram waves of a rabbit\u2019s heart ventricles, rendered by Dennis Ogiermann of Ruhr-University Bochum (Germany); (2) incompressible fluid flow around a rotating turbine, animated by Tamas Horvath of Oakland University (Michigan). Contest submissions are featured in the gallery . Will Pazner (LLNL) High-Order Matrix-Free Solvers October 20, 2021 | MFEM Workshop 2021 For users unfamiliar with MFEM\u2019s solver library, Will Pazner of LLNL demonstrated a few ways\u2014in some cases adding just a single line of code\u2014to run scalable solvers for differential equations. These solvers execute hierarchical finite element discretizations for both low- and high-order problems. Vladimir Tomov (LLNL) MFEM Capabilities for High-Order Mesh Optimization October 20, 2021 | MFEM Workshop 2021 Vladimir Tomov of LLNL described MFEM\u2019s mesh optimization strategies including ways the user can define target elements. He demonstrated optimizing a mesh\u2019s shape by limiting displacements to preserve a boundary layer and by changing the size of a uniform mesh in a specific region. MFEM\u2019s mesh-optimizing miniapps are available online . William Dawn (NCSU) Unstructured Finite Element Neutron Transport using MFEM October 20, 2021 | MFEM Workshop 2021 William Dawn from North Carolina State University described his work with unstructured neutron transport. His team models microreactors, a new class of compact reactor with relatively small electrical output. As part of the Exascale Computing Project, Dawn\u2019s team is modeling the MARVEL reactor, which is planned for construction at Idaho National Laboratory. MFEM satisfies their need for a finite element framework with GPU support and rapid prototyping. With MFEM, the team discretizes a neutron transport equation with six independent variables in space, direction, and energy. 
Traditional neutron transport methods use a \u201csweeping\u201d method to transport particles through a problem, but this is not feasible for generally unstructured meshes. In Dawn\u2019s models, the Self-Adjoint Angular Flux (SAAF) form of the neutron transport equation is used to transform the neutron transport equation from a first-order hyperbolic form to a second-order elliptic form. Then, the SAAF equations are discretized with the finite element method and solved using MFEM. Due to the dependence of the neutron flux on angle and direction, these problems have a high vector-dimension with hundreds to thousands of degrees of freedom (DOF) per mesh vertex. Also, due to the second-order nature of these equations, highly refined meshes are required to sufficiently resolve reactor geometries with millions of vertices in a mesh. Results have been prepared for problems with billions of DOF using the Summit supercomputer at Oak Ridge National Laboratory. Syun\u2019ichi Shiraiwa (PPPL) Development of PyMFEM Python Wrapper for MFEM & Scalable RF Wave Simulation for Nuclear Fusion October 20, 2021 | MFEM Workshop 2021 Syun\u2019ichi Shiraiwa of Pacific Northwest National Laboratory introduced PyMFEM, a Python wrapper for MFEM that his team uses in radiofrequency (RF) wave simulations for the RF-SciDAC project. RF waves can be used to heat plasma in a nuclear fusion reaction. Simulations of this process present multiple challenges when incorporating a variety of antenna structures at different frequencies, waves with different wave lengths in the same space or spatially diverse, and RF wave effects on background plasma. To integrate MFEM, a C++ software library, into their multiphysics platform, Shiraiwa\u2019s team created a code \u201cwrapper\u201d that binds MFEM to the external Python components of RF wave simulations, ultimately extending MFEM\u2019s features to Python users. Shiraiwa described how the PyMFEM module works in serial and parallel and invited the audience to contribute to the open-source code. Qi Tang (LANL) An Adaptive, Scalable Fully Implicit Resistive MHD Solver October 20, 2021 | MFEM Workshop 2021 Qi Tang of Los Alamos National Laboratory described his team\u2019s development of an efficient, scalable solver for tokamak plasma simulations. Magnetohydrodynamics (MHD) equations are important for studying plasma systems, but efficient numerical solutions for MHD are extremely challenging due to disparate time and length scales, strong hyperbolic phenomena, and nonlinearity. Tang\u2019s team has developed a high-order stabilized finite element algorithm for incompressible resistive MHD equations based on MFEM, which provides physics-based preconditioners, adaptive mesh refinement, parallelization, and load balancing. Tang showed animated examples of the model\u2019s scalable and efficient results. Jan Nikl (ELI Beamlines) Laser Plasma Modeling with High-Order Finite Elements October 20, 2021 | MFEM Workshop 2021 Jan Nikl outlined how his team at the ELI Beamlines Centre uses MFEM for laser plasma modeling. Lasers have found their application in many scientific disciplines, where generation of plasma, the fourth state of matter, holds great potential for the future. A detailed description of laser produced plasmas is then essential for many applications, like (pre)pulses of ultra-intense lasers and ion acceleration beamlines, laboratory astrophysics, inertial confinement fusion, and many others. 
All of the mentioned are investigated at ELI Beamlines in the Czech Republic, a European laser facility aiming to operate the most intense laser system in the world. In this context, Nikl described the numerical construction based on the finite element method. This effort concentrates mainly on the Lagrangian hydrodynamics and Vlasov\u2013Fokker\u2013Planck\u2013Maxwell kinetic description of plasma, utilizing the MFEM library for its flexibility and scalability. Mathias Davids (Harvard) Modeling Peripheral Nerve Stimulations (PNS) in Magnetic Resonance Imaging (MRI) October 20, 2021 | MFEM Workshop 2021 Mathias Davids from Harvard Medical School presented MFEM\u2019s use in a medical setting. Peripheral nerve stimulation (PNS) limits the usable image encoding performance in the latest generation of magnetic resonance imaging (MRI) scanners. The rapid switching of the MRI gradient coils\u2019 magnetic fields induces electric fields in the human body strong enough to evoke unwanted action potential in peripheral nerves, leading to muscle contractions or touch perceptions. Despite its limiting role in MRI, PNS effects are not directly included during the coil design phase. Davids\u2019 team developed a modeling tool to predict PNS thresholds and locations in the human body, allowing them to directly incorporate PNS metrics in the numeric coil winding optimization to design PNS-optimized coil layouts. This modeling tool relies on electromagnetic field simulations in heterogeneous finite element body models coupled to neurodynamic models of myelinated nerve fibers. This tool enables researchers to develop strategies that mitigate PNS effects without building expensive prototype MRI systems, maximizing the usable image encoding performance. Marc Bolinches (UT) Development of DG Compressible Navier-Stokes Solver with MFEM October 20, 2021 | MFEM Workshop 2021 Marc Bolinches from the University of Texas at Austin described a compressible Navier-Stokes solver using MFEM v4.2 which did not include full support for GPUs. The solver uses the discontinuous Galerkin (DG) method as a space discretization and an explicit Runge-Kutta time-integration scheme. An effort has been made to fully support GPU computation by taking over some of the loops internal to the NonLinearForm class. This has also allowed us to implement overlap between computation and communication. The team hopes their open-source code will help other researchers in creating high-fidelity simulations of compressible flows. Robert Rieben (LLNL) The Multiphysics on Advanced Platforms Project: Performance, Portability and Scaling October 20, 2021 | MFEM Workshop 2021 High-energy-density physics (HEDP) experiments performed at LLNL and other Department of Energy laboratories require multiphysics simulations to predict the behavior of complex physical systems for applications including inertial confinement fusion, pulsed power, and material strength/equations-of-state studies. Robert Rieben described the variety of mathematical algorithms needed for these simulations, including ALE methods, unstructured adaptive mesh refinement, and high-order discretizations. LLNL\u2019s Multiphysics on Advanced Platforms Project (MAPP) is developing a next-generation multiphysics code, called MARBL, based on high-order numerical methods and modular infrastructure for deployment on advanced HPC architectures. MARBL\u2019s use of high-order methods produces better throughput on GPUs. 
MARBL uses MFEM for finite elements and mesh/field/operator abstractions while leveraging its support for efficient memory management. Rieben explained that co-design efforts among the MARBL, MFEM, and RAJA (portability software) teams led to better device utilization and improved performance for the MARBL code. Felipe G\u00f3mez, Carlos del Valle, & Juli\u00e1n Jim\u00e9nez (National University of Colombia) Phase Change Heat and Mass Transfer Simulation with MFEM October 20, 2021 | MFEM Workshop 2021 Three undergraduate students\u2014Felipe G\u00f3mez, Carlos del Valle, and Juli\u00e1n Jim\u00e9nez\u2014from the National University of Colombia presented their work using MFEM in an oceanographic model. Below the Arctic sea ice, and under the right conditions, a flux of icy brine flows down into the sea. The icy brine has a much lower fusion point and a higher density than normal seawater. As a result, it sinks while freezing everything around it, forming an ice channel called a brinicle (also known as ice stalactite). The team shared their simulations of this phenomenon assuming cylindrical symmetry. The fluid is considered viscous and quasi-stationary, and the problem is simulated taking advantage of the setup symmetries. The heat and salt transport are weakly coupled to the fluid motion and are modeled with the corresponding conservation equations, taking into account diffusive and convective effects. The coupled system of partial differential equations is discretized and solved with the help of the MFEM finite element library. Thomas Helfer (CEA) MFEM-MGIS-MFront, a MFEM-Based Library for Nonlinear Solid Thermomechanic October 20, 2021 | MFEM Workshop 2021 Thomas Helfer from the French Atomic Energy Commission (CEA) introduced the MFEM-MGIS-MFront library (MMM), which aims for efficient use of supercomputers in the field of implicit nonlinear thermomechanics. His team\u2019s primary focus is to develop advanced nuclear fuel element simulations where the evolution of materials under irradiation is influenced by multiple phenomena (e.g., viscoplasticity, damage, phase transitions, swelling due to solid and gaseous fission products). MFEM provides this project with finite element abstractions, adaptive mesh refinement, and a parallel API. However, as applications dedicated to solid mechanics in MFEM are mostly limited to a few constitutive equations such as elasticity and hyperelasticity, Helfer explained that his team extended the software\u2019s functionality to cover a broader spectrum of mechanics. Thus, this MMM project combines MFEM with the MFrontGenericInterfaceSupport (MGIS), an open-source C++ library that provides data structures to support arbitrarily complex nonlinear constitutive equations generated by the MFront code generator. MMM is developed within the scope of CEA\u2019s PLEIADES project. Helfer\u2019s presentation provided (1) an introduction to MMM goals; (2) a tutorial of MMM usage with a focus on the high-level user interface; (3) an overview of the core design choices of MMM and how MFEM was extended to support a range of scenarios; and (4) feedback on the two main issues encountered during MMM development. Jamie Bramwell (LLNL) Serac: User-Friendly Abstractions for MFEM-Based Engineering Applications October 20, 2021 | MFEM Workshop 2021 Jamie Bramwell of LLNL presented an overview of the open-source Serac project , whose goal is to provide user-friendly abstractions and modules that enable rapid development of complex nonlinear multiphysics simulation codes. 
She provided an overview of both the high-level physics modules (thermal conduction, solid mechanics, incompressible flow, electromagnetics) as well as the serac::Functional framework for quickly developing nonlinear GPU-enabled finite element method kernels. Veselin Dobrev (LLNL) Recent Developments in MFEM October 20, 2021 | MFEM Workshop 2021 Veselin Dobrev of LLNL detailed the project\u2019s recent developments including memory manager improvements; serial support for p- and hp-refinement; high-order/low-order refined solution transfer; GLVis visualization via Jupyter Notebooks; and additional GPU support regarding HYPRE preconditioners, PETSc tools, and mesh optimization. MFEM now also integrates with various new libraries (AmgX, Ginkgo, FMS, and others), and continuous integration testing has been conducted on LLNL\u2019s Quartz, Lassen, and Corona machines. Additionally, Dobrev summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Exascale Computing Project, SciDAC, the FASTMath Institute, and other projects. Tzanio Kolev (LLNL) The State of MFEM October 20, 2021 | MFEM Workshop 2021 MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities of discretization algorithms, built-in solvers, parallel scalability, adaptive mesh refinement, and support for a range of computing architectures. Kolev also highlighted the global community\u2019s contributions as well as features included in the recent v4.3 software release. Aaron Fisher (LLNL) Welcome and Overview October 20, 2021 | MFEM Workshop 2021 The MFEM community workshop held virtually on October 20, 2021, brought together users and developers for a review of software features and the development roadmap, a showcase of technical talks and applications, collaborative breakout sessions, and a simulation contest. Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community survey results. Conferences in 2021 Tzanio Kolev (LLNL) Efficient Finite Element Discretizations for Exascale Applications February 25, 2021 | ExCALIBUR SLE 3 workshop ATPESC 2017, 2018 Tzanio Kolev (LLNL), Mark Shephard (RPI) and Cameron Smith (RPI) Unstructured Meshing Technologies August 6, 2018 | ATPESC 2018 Presented at the Argonne Training Program on Extreme-Scale Computing 2018. Slides for this presentation are available here . Tzanio Kolev (LLNL) and Mark Shephard (RPI) Unstructured Meshing Technologies August 7, 2017 | ATPESC 2017 Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here . Tzanio Kolev (LLNL) and Mark Shephard (RPI) Conforming & Nonconforming Adaptivity for Unstructured Meshes August 7, 2017 | ATPESC 2017 Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here . Other Videos MFEM: Advanced Simulation Algorithms for HPC Applications Jun 24, 2020 | YouTube Overview of MFEM 4.0 featuring some of its developers. Center for Applied Scientific Computing Jul 12, 2019 | YouTube Overview of the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory, including a highlight of MFEM. 
S&TR Preview: Exascale Computing October 6, 2016 | YouTube Some early MFEM results in the BLAST project.", "title": "MFEM Videos"}, {"location": "videos3/#mfem-videos", "text": "A collection of MFEM-related videos, including recorded talks from the MFEM workshops and conference presentations.", "title": "MFEM Videos"}, {"location": "videos3/#mfem-workshop-2021", "text": "", "title": "MFEM Workshop 2021"}, {"location": "videos3/#aaron-fisher-llnl", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos3/#wrap-up-and-simulation-contest-winners", "text": "", "title": "Wrap-Up and Simulation Contest Winners"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021", "text": "MFEM\u2019s first community workshop was held virtually on October 20, 2021, with participants around the world. Aaron Fisher of LLNL concluded the workshop by announcing the results of the simulation and visualization contest. The winners represent two very different research applications using MFEM: (1) the electric field generated by electrocardiogram waves of a rabbit\u2019s heart ventricles, rendered by Dennis Ogiermann of Ruhr-University Bochum (Germany); (2) incompressible fluid flow around a rotating turbine, animated by Tamas Horvath of Oakland University (Michigan). Contest submissions are featured in the gallery .", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#will-pazner-llnl", "text": "", "title": "Will Pazner (LLNL)"}, {"location": "videos3/#high-order-matrix-free-solvers", "text": "", "title": "High-Order Matrix-Free Solvers"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_1", "text": "For users unfamiliar with MFEM\u2019s solver library, Will Pazner of LLNL demonstrated a few ways\u2014in some cases adding just a single line of code\u2014to run scalable solvers for differential equations. These solvers execute hierarchical finite element discretizations for both low- and high-order problems.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#vladimir-tomov-llnl", "text": "", "title": "Vladimir Tomov (LLNL)"}, {"location": "videos3/#mfem-capabilities-for-high-order-mesh-optimization", "text": "", "title": "MFEM Capabilities for High-Order Mesh Optimization"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_2", "text": "Vladimir Tomov of LLNL described MFEM\u2019s mesh optimization strategies including ways the user can define target elements. He demonstrated optimizing a mesh\u2019s shape by limiting displacements to preserve a boundary layer and by changing the size of a uniform mesh in a specific region. MFEM\u2019s mesh-optimizing miniapps are available online .", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#william-dawn-ncsu", "text": "", "title": "William Dawn (NCSU)"}, {"location": "videos3/#unstructured-finite-element-neutron-transport-using-mfem", "text": "", "title": "Unstructured Finite Element Neutron Transport using MFEM"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_3", "text": "William Dawn from North Carolina State University described his work with unstructured neutron transport. His team models microreactors, a new class of compact reactor with relatively small electrical output. As part of the Exascale Computing Project, Dawn\u2019s team is modeling the MARVEL reactor, which is planned for construction at Idaho National Laboratory. MFEM satisfies their need for a finite element framework with GPU support and rapid prototyping. 
With MFEM, the team discretizes a neutron transport equation with six independent variables in space, direction, and energy. Traditional neutron transport methods use a \u201csweeping\u201d method to transport particles through a problem, but this is not feasible for generally unstructured meshes. In Dawn\u2019s models, the Self-Adjoint Angular Flux (SAAF) form of the neutron transport equation is used to transform the neutron transport equation from a first-order hyperbolic form to a second-order elliptic form. Then, the SAAF equations are discretized with the finite element method and solved using MFEM. Due to the dependence of the neutron flux on angle and direction, these problems have a high vector-dimension with hundreds to thousands of degrees of freedom (DOF) per mesh vertex. Also, due to the second-order nature of these equations, highly refined meshes are required to sufficiently resolve reactor geometries with millions of vertices in a mesh. Results have been prepared for problems with billions of DOF using the Summit supercomputer at Oak Ridge National Laboratory.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#syunichi-shiraiwa-pppl", "text": "", "title": "Syun\u2019ichi Shiraiwa (PPPL)"}, {"location": "videos3/#development-of-pymfem-python-wrapper-for-mfem-scalable-rf-wave-simulation-for-nuclear-fusion", "text": "", "title": "Development of PyMFEM Python Wrapper for MFEM & Scalable RF Wave Simulation for Nuclear Fusion"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_4", "text": "Syun\u2019ichi Shiraiwa of Pacific Northwest National Laboratory introduced PyMFEM, a Python wrapper for MFEM that his team uses in radiofrequency (RF) wave simulations for the RF-SciDAC project. RF waves can be used to heat plasma in a nuclear fusion reaction. Simulations of this process present multiple challenges when incorporating a variety of antenna structures at different frequencies, waves with different wave lengths in the same space or spatially diverse, and RF wave effects on background plasma. To integrate MFEM, a C++ software library, into their multiphysics platform, Shiraiwa\u2019s team created a code \u201cwrapper\u201d that binds MFEM to the external Python components of RF wave simulations, ultimately extending MFEM\u2019s features to Python users. Shiraiwa described how the PyMFEM module works in serial and parallel and invited the audience to contribute to the open-source code.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#qi-tang-lanl", "text": "", "title": "Qi Tang (LANL)"}, {"location": "videos3/#an-adaptive-scalable-fully-implicit-resistive-mhd-solver", "text": "", "title": "An Adaptive, Scalable Fully Implicit Resistive MHD Solver"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_5", "text": "Qi Tang of Los Alamos National Laboratory described his team\u2019s development of an efficient, scalable solver for tokamak plasma simulations. Magnetohydrodynamics (MHD) equations are important for studying plasma systems, but efficient numerical solutions for MHD are extremely challenging due to disparate time and length scales, strong hyperbolic phenomena, and nonlinearity. Tang\u2019s team has developed a high-order stabilized finite element algorithm for incompressible resistive MHD equations based on MFEM, which provides physics-based preconditioners, adaptive mesh refinement, parallelization, and load balancing. 
Tang showed animated examples of the model\u2019s scalable and efficient results.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#jan-nikl-eli-beamlines", "text": "", "title": "Jan Nikl (ELI Beamlines)"}, {"location": "videos3/#laser-plasma-modeling-with-high-order-finite-elements", "text": "", "title": "Laser Plasma Modeling with High-Order Finite Elements"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_6", "text": "Jan Nikl outlined how his team at the ELI Beamlines Centre uses MFEM for laser plasma modeling. Lasers have found their application in many scientific disciplines, where generation of plasma, the fourth state of matter, holds great potential for the future. A detailed description of laser produced plasmas is then essential for many applications, like (pre)pulses of ultra-intense lasers and ion acceleration beamlines, laboratory astrophysics, inertial confinement fusion, and many others. All of the mentioned are investigated at ELI Beamlines in the Czech Republic, a European laser facility aiming to operate the most intense laser system in the world. In this context, Nikl described the numerical construction based on the finite element method. This effort concentrates mainly on the Lagrangian hydrodynamics and Vlasov\u2013Fokker\u2013Planck\u2013Maxwell kinetic description of plasma, utilizing the MFEM library for its flexibility and scalability.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#mathias-davids-harvard", "text": "", "title": "Mathias Davids (Harvard)"}, {"location": "videos3/#modeling-peripheral-nerve-stimulations-pns-in-magnetic-resonance-imaging-mri", "text": "", "title": "Modeling Peripheral Nerve Stimulations (PNS) in Magnetic Resonance Imaging (MRI)"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_7", "text": "Mathias Davids from Harvard Medical School presented MFEM\u2019s use in a medical setting. Peripheral nerve stimulation (PNS) limits the usable image encoding performance in the latest generation of magnetic resonance imaging (MRI) scanners. The rapid switching of the MRI gradient coils\u2019 magnetic fields induces electric fields in the human body strong enough to evoke unwanted action potentials in peripheral nerves, leading to muscle contractions or touch perceptions. Despite its limiting role in MRI, PNS effects are not directly included during the coil design phase. Davids\u2019 team developed a modeling tool to predict PNS thresholds and locations in the human body, allowing them to directly incorporate PNS metrics in the numeric coil winding optimization to design PNS-optimized coil layouts. This modeling tool relies on electromagnetic field simulations in heterogeneous finite element body models coupled to neurodynamic models of myelinated nerve fibers. This tool enables researchers to develop strategies that mitigate PNS effects without building expensive prototype MRI systems, maximizing the usable image encoding performance.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#marc-bolinches-ut", "text": "", "title": "Marc Bolinches (UT)"}, {"location": "videos3/#development-of-dg-compressible-navier-stokes-solver-with-mfem", "text": "", "title": "Development of DG Compressible Navier-Stokes Solver with MFEM"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_8", "text": "Marc Bolinches from the University of Texas at Austin described a compressible Navier-Stokes solver using MFEM v4.2, which did not include full support for GPUs. 
The solver uses the discontinuous Galerkin (DG) method as a space discretization and an explicit Runge-Kutta time-integration scheme. An effort has been made to fully support GPU computation by taking over some of the loops internal to the NonLinearForm class. This has also allowed the team to implement overlap between computation and communication. The team hopes their open-source code will help other researchers in creating high-fidelity simulations of compressible flows.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#robert-rieben-llnl", "text": "", "title": "Robert Rieben (LLNL)"}, {"location": "videos3/#the-multiphysics-on-advanced-platforms-project-performance-portability-and-scaling", "text": "", "title": "The Multiphysics on Advanced Platforms Project: Performance, Portability and Scaling"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_9", "text": "High-energy-density physics (HEDP) experiments performed at LLNL and other Department of Energy laboratories require multiphysics simulations to predict the behavior of complex physical systems for applications including inertial confinement fusion, pulsed power, and material strength/equations-of-state studies. Robert Rieben described the variety of mathematical algorithms needed for these simulations, including ALE methods, unstructured adaptive mesh refinement, and high-order discretizations. LLNL\u2019s Multiphysics on Advanced Platforms Project (MAPP) is developing a next-generation multiphysics code, called MARBL, based on high-order numerical methods and modular infrastructure for deployment on advanced HPC architectures. MARBL\u2019s use of high-order methods produces better throughput on GPUs. MARBL uses MFEM for finite elements and mesh/field/operator abstractions while leveraging its support for efficient memory management. Rieben explained that co-design efforts among the MARBL, MFEM, and RAJA (portability software) teams led to better device utilization and improved performance for the MARBL code.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#felipe-gomez-carlos-del-valle-julian-jimenez-national-university-of-colombia", "text": "", "title": "Felipe G\u00f3mez, Carlos del Valle, & Juli\u00e1n Jim\u00e9nez (National University of Colombia)"}, {"location": "videos3/#phase-change-heat-and-mass-transfer-simulation-with-mfem", "text": "", "title": "Phase Change Heat and Mass Transfer Simulation with MFEM"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_10", "text": "Three undergraduate students\u2014Felipe G\u00f3mez, Carlos del Valle, and Juli\u00e1n Jim\u00e9nez\u2014from the National University of Colombia presented their work using MFEM in an oceanographic model. Below the Arctic sea ice, and under the right conditions, a flux of icy brine flows down into the sea. The icy brine has a much lower fusion point and a higher density than normal seawater. As a result, it sinks while freezing everything around it, forming an ice channel called a brinicle (also known as an ice stalactite). The team shared their simulations of this phenomenon assuming cylindrical symmetry. The fluid is considered viscous and quasi-stationary, and the problem is simulated taking advantage of the setup symmetries. The heat and salt transport are weakly coupled to the fluid motion and are modeled with the corresponding conservation equations, taking into account diffusive and convective effects. 
The coupled system of partial differential equations is discretized and solved with the help of the MFEM finite element library.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#thomas-helfer-cea", "text": "", "title": "Thomas Helfer (CEA)"}, {"location": "videos3/#mfem-mgis-mfront-a-mfem-based-library-for-nonlinear-solid-thermomechanic", "text": "", "title": "MFEM-MGIS-MFront, a MFEM-Based Library for Nonlinear Solid Thermomechanic"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_11", "text": "Thomas Helfer from the French Atomic Energy Commission (CEA) introduced the MFEM-MGIS-MFront library (MMM), which aims for efficient use of supercomputers in the field of implicit nonlinear thermomechanics. His team\u2019s primary focus is to develop advanced nuclear fuel element simulations where the evolution of materials under irradiation is influenced by multiple phenomena (e.g., viscoplasticity, damage, phase transitions, swelling due to solid and gaseous fission products). MFEM provides this project with finite element abstractions, adaptive mesh refinement, and a parallel API. However, as applications dedicated to solid mechanics in MFEM are mostly limited to a few constitutive equations such as elasticity and hyperelasticity, Helfer explained that his team extended the software\u2019s functionality to cover a broader spectrum of mechanics. Thus, this MMM project combines MFEM with the MFrontGenericInterfaceSupport (MGIS), an open-source C++ library that provides data structures to support arbitrarily complex nonlinear constitutive equations generated by the MFront code generator. MMM is developed within the scope of CEA\u2019s PLEIADES project. Helfer\u2019s presentation provided (1) an introduction to MMM goals; (2) a tutorial of MMM usage with a focus on the high-level user interface; (3) an overview of the core design choices of MMM and how MFEM was extended to support a range of scenarios; and (4) feedback on the two main issues encountered during MMM development.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#jamie-bramwell-llnl", "text": "", "title": "Jamie Bramwell (LLNL)"}, {"location": "videos3/#serac-user-friendly-abstractions-for-mfem-based-engineering-applications", "text": "", "title": "Serac: User-Friendly Abstractions for MFEM-Based Engineering Applications"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_12", "text": "Jamie Bramwell of LLNL presented an overview of the open-source Serac project , whose goal is to provide user-friendly abstractions and modules that enable rapid development of complex nonlinear multiphysics simulation codes. 
She provided an overview of both the high-level physics modules (thermal conduction, solid mechanics, incompressible flow, electromagnetics) as well as the serac::Functional framework for quickly developing nonlinear GPU-enabled finite element method kernels.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#veselin-dobrev-llnl", "text": "", "title": "Veselin Dobrev (LLNL)"}, {"location": "videos3/#recent-developments-in-mfem", "text": "", "title": "Recent Developments in MFEM"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_13", "text": "Veselin Dobrev of LLNL detailed the project\u2019s recent developments including memory manager improvements; serial support for p- and hp-refinement; high-order/low-order refined solution transfer; GLVis visualization via Jupyter Notebooks; and additional GPU support regarding HYPRE preconditioners, PETSc tools, and mesh optimization. MFEM now also integrates with various new libraries (AmgX, Ginkgo, FMS, and others), and continuous integration testing has been conducted on LLNL\u2019s Quartz, Lassen, and Corona machines. Additionally, Dobrev summarized MFEM\u2019s integrations with other software libraries and the team\u2019s engagements with the Exascale Computing Project, SciDAC, the FASTMath Institute, and other projects.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#tzanio-kolev-llnl", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos3/#the-state-of-mfem", "text": "", "title": "The State of MFEM"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_14", "text": "MFEM principal investigator Tzanio Kolev described the project\u2019s past, present, and future with an emphasis on its key capabilities of discretization algorithms, built-in solvers, parallel scalability, adaptive mesh refinement, and support for a range of computing architectures. Kolev also highlighted the global community\u2019s contributions as well as features included in the recent v4.3 software release.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#aaron-fisher-llnl_1", "text": "", "title": "Aaron Fisher (LLNL)"}, {"location": "videos3/#welcome-and-overview", "text": "", "title": "Welcome and Overview"}, {"location": "videos3/#october-20-2021-mfem-workshop-2021_15", "text": "The MFEM community workshop held virtually on October 20, 2021, brought together users and developers for a review of software features and the development roadmap, a showcase of technical talks and applications, collaborative breakout sessions, and a simulation contest. 
Aaron Fisher of LLNL kicked off the event with an overview of the workshop agenda, participant demographics, and community survey results.", "title": "October 20, 2021 | MFEM Workshop 2021"}, {"location": "videos3/#conferences-in-2021", "text": "", "title": "Conferences in 2021"}, {"location": "videos3/#tzanio-kolev-llnl_1", "text": "", "title": "Tzanio Kolev (LLNL)"}, {"location": "videos3/#efficient-finite-element-discretizations-for-exascale-applications", "text": "", "title": "Efficient Finite Element Discretizations for Exascale Applications"}, {"location": "videos3/#february-25-2021-excalibur-sle-3-workshop", "text": "", "title": "February 25, 2021 | ExCALIBUR SLE 3 workshop"}, {"location": "videos3/#atpesc-2017-2018", "text": "", "title": "ATPESC 2017, 2018"}, {"location": "videos3/#tzanio-kolev-llnl-mark-shephard-rpi-and-cameron-smith-rpi", "text": "", "title": "Tzanio Kolev (LLNL), Mark Shephard (RPI) and Cameron Smith (RPI)"}, {"location": "videos3/#unstructured-meshing-technologies", "text": "", "title": "Unstructured Meshing Technologies"}, {"location": "videos3/#august-6-2018-atpesc-2018", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2018. Slides for this presentation are available here .", "title": "August 6, 2018 | ATPESC 2018"}, {"location": "videos3/#tzanio-kolev-llnl-and-mark-shephard-rpi", "text": "", "title": "Tzanio Kolev (LLNL) and Mark Shephard (RPI)"}, {"location": "videos3/#unstructured-meshing-technologies_1", "text": "", "title": "Unstructured Meshing Technologies"}, {"location": "videos3/#august-7-2017-atpesc-2017", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here .", "title": "August 7, 2017 | ATPESC 2017"}, {"location": "videos3/#tzanio-kolev-llnl-and-mark-shephard-rpi_1", "text": "", "title": "Tzanio Kolev (LLNL) and Mark Shephard (RPI)"}, {"location": "videos3/#conforming-nonconforming-adaptivity-for-unstructured-meshes", "text": "", "title": "Conforming & Nonconforming Adaptivity for Unstructured Meshes"}, {"location": "videos3/#august-7-2017-atpesc-2017_1", "text": "Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here .", "title": "August 7, 2017 | ATPESC 2017"}, {"location": "videos3/#other-videos", "text": "", "title": "Other Videos"}, {"location": "videos3/#mfem-advanced-simulation-algorithms-for-hpc-applications", "text": "", "title": "MFEM: Advanced Simulation Algorithms for HPC Applications"}, {"location": "videos3/#jun-24-2020-youtube", "text": "Overview of MFEM 4.0 featuring some of its developers.", "title": "Jun 24, 2020 | YouTube"}, {"location": "videos3/#center-for-applied-scientific-computing", "text": "", "title": "Center for Applied Scientific Computing"}, {"location": "videos3/#jul-12-2019-youtube", "text": "Overview of the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory, including a highlight of MFEM.", "title": "Jul 12, 2019 | YouTube"}, {"location": "videos3/#str-preview-exascale-computing", "text": "", "title": "S&TR Preview: Exascale Computing"}, {"location": "videos3/#october-6-2016-youtube", "text": "Some early MFEM results in the BLAST project.", "title": "October 6, 2016 | YouTube"}, {"location": "workshop/", "text": "MFEM Community Workshop October 22-24, 2024 LLNL + Virtual Speakers' slides are linked in the agenda below. 
Watch the video playlist of workshop presentations (linked individually below and available on the videos page ), and view contest submissions in the gallery . Overview The MFEM team is happy to invite you to the 2024 MFEM Community Workshop, which will take place on October 22-24, 2024 in a hybrid format: in-person at Lawrence Livermore National Laboratory (LLNL) + virtually on Zoom. The goal of the workshop is to foster collaboration among all MFEM users and developers, share the latest MFEM features with the broader community, deepen application engagements, and solicit feedback to guide future development directions for the project. We encourage you to join us in person if you can! For questions, please contact the meeting organizers at mfem@llnl.gov . Registration Registration closed on October 15th . Venue The meeting will take place at the University of California Livermore Collaboration Center (UCLCC) which is just outside of LLNL's East Gate. Lodging Options There are many hotels in Livermore, and others are available in Pleasanton and nearby cities. See LLNL's recommended list of area hotels or this Google Maps search . If you stay outside of Livermore, we recommend staying west of the city to have a reverse commute to the Lab. Meeting Format This will be the first hybrid edition of the MFEM community workshop that will include the following elements: Project news and development updates from the MFEM team An overview of the latest features in MFEM-4.7 and future roadmap Contributed talks from application developers utilizing MFEM Student lightning talks and visualization contest Office hours on the last day See also the agenda for the previous 2023 , 2022 and 2021 MFEM workshops. Workshop participants are encouraged to join the MFEM Community Slack workspace to communicate with other MFEM users and developers before, during and after the MFEM workshop. Agenda Tuesday, October 22 Time Activity Presenter 8:00-8:30 Breakfast + Registration on site at UCLCC 8:30-9:00 Welcome & Overview ( PDF , video ) Aaron Fisher (LLNL) 9:00-9:30 The State of MFEM ( PDF , video ) Tzanio Kolev (LLNL) 9:30-10:00 Recent Developments ( video ) Veselin Dobrev (LLNL) 10:00-10:30 Coffee Break discussions on Slack 10:30-12:00 Presentations (30 mins each) Chair: Will Pazner M\u00e1t\u00e9 Kov\u00e1cs (Braid Technologies) Rust Wrapper for MFEM ( PDF ) Rust is quickly emerging as a modern alternative to C++ for systems and performance-critical programming. With a user-centered design, \"batteries included\" philosophy around tooling, and principled approach to correctness, Rust holds a lot of potential to make complex libraries easier to use. Building a Rust wrapper for MFEM would achieve most of the benefits of a rewrite at a fraction of the effort. By showcasing this prototype, I hope to convince you that creating and maintaining a Rust wrapper for MFEM is a worthy goal. I will further argue that the small modifications to the C++ API that may be necessary to reach optimal integration with Rust would also improve the usability for C++. Adrian Butscher (Autodesk Research) Geometrically Constrained Level Set Topology Optimization Using a Novel Hilbert Space Extension Method ( PDF ) We propose an approach for level-set based topology optimization which pairs conventional free-form shape updates with highly constrained shape updates along a user-specified part of the shape boundary. 
It is intended for the optimal design of shapes where certain parts of the shape boundary are required to preserve their geometry, up to well-defined parametric variations such as translations, rotations, and scalings. For instance, our approach could be used to optimize a shape that must include a circular aperture of optimal radius to accommodate a pin joint to another shape. Our approach allows us to optimize both the free-form geometry of the shape, as well as the position, orientation, and scale of the circular aperture. To generate the shape updates we construct a velocity field over the entire design space and transport the level-set function defining the shape along the field at each iteration. We construct this velocity field using a novel constrained Hilbert space extension (C-HSE) method that expands upon existing Hilbert space extension methods by incorporating the affine motion constraints into the variational problem. As a result, the C-HSE method generates a velocity field for the entire design domain that constitutes a descent direction for a user-specified optimization objective function, while ensuring that all constraints are met. The C-HSE allows multiple distinct regions to have different constraints, with many possible constraint types such as translation, rotation and scaling (or all three simultaneously). We show results on a variety of geometrically constrained boundary conditions on some canonical problems. Ketan Mittal (LLNL) Interpolation at Arbitrary Points in High-Order Meshes on GPUs ( PDF , video ) Robust and scalable arbitrary point interpolation is required in the finite element method and spectral element method for querying the partial differential equation solution at points of interest in the domain, comparison of solution between different meshes, and Lagrangian particle tracking. This is a challenging problem, particularly for high-order unstructured meshes partitioned in parallel with MPI, as it requires identifying the element that overlaps a given point and computing the reference space coordinates inside the element corresponding to the point. We present a robust and efficient way to address this problem for large-scale high-order meshes. First, a combination of globally partitioned and processor-local maps is used to determine a list of candidate MPI ranks and element pairs that could contain the point. Next, element-wise bounding boxes are used to further narrow down the list of candidate elements. Finally, Newton's method with a trust-region-based approach is used to invert the affine map for the candidate elements and determine the reference space coordinates corresponding to the point. Since GPU-based architectures have been demonstrated to accelerate computational analyses using meshes with tensor-product elements, specialized kernels have been developed to effect the arbitrary point search and interpolation on GPUs. We demonstrate the effectiveness of this approach using various high-order meshes. 12:00-1:00 Lunch on site at UCLCC 1:00-2:00 Student Session 1 (10 mins each) Chair: Ketan Mittal Nanna Berre (Norwegian University of Science and Technology) High-Order CutFEM Solvers in MFEM Creating conforming meshes for complex, realistic problems can be challenging and consume a significant portion of the total simulation time. 
The cut finite element method (CutFEM) allows the geometry to be represented independently of the computational domain, thus circumventing the mesh generation while maintaining the accuracy and robustness of the standard finite element method. In this talk, we present recent implementations of CutFEM solvers in MFEM, along with numerical convergence studies. Julian L\u00fcken (University of Antwerp) Simulating Atom Probe Tomography Using MFEM ( PDF ) In atom probe tomography (APT), spatial reconstruction enables volumetric insight into a specimen's nanostructure. To this day, a fast reconstruction method which utilizes the true potential of APT in terms of resolution does not exist. A model of its effective inverse, the field evaporation, which provides a physically accurate description of the ion trajectories, is a crucial component in reconstruction. The simulation of each individual evaporation, however, has been time-inefficient. We introduce AdAPTS, an adaptive atom probe tomography simulation library based on MFEM. AdAPTS is capable of generating accurate detector hit maps of various specimens, efficiently representing and simulating the experimental domain from specimen to detector. Using AdAPTS, we are able to accurately simulate the field evaporation of various specimens, revealing realistic poles and zone lines. Aditya Parik (Utah State University) Arbitrary Point Search and Interpolation on Surface Meshes ( PDF ) Scalable high-order interpolation at arbitrary locations on finite element meshes is essential in applications such as Lagrangian particle tracking coupled to Eulerian fields, coupled overlapping grids, and grid-to-grid interpolation. This is currently achieved in MFEM for volume meshes using FindPointsGSLIB, which is based on the high-order interpolation library findpts. Therein, global and local hash maps are constructed to rapidly narrow down the search space to determine first the correct rank, and then the candidate elements on that rank that may contain a given point in physical space. Next, element-wise bounding boxes help further narrow down the list of candidate elements. Finally, a Newton's method based approach is used to determine if the point overlaps with the element, and the corresponding reference coordinates. Through this work, we extend FindPointsGSLIB to surface meshes where we encounter interesting implementation challenges in the construction of the global and local maps, bounding boxes, and the convergence criterion for the Newton search. The effectiveness of this approach is tested by searching for a large number of points on various 2D and 3D meshes and then obtaining the accuracy of interpolation of a test field at the found coordinates. We also test the GPU scaling characteristics of this approach with respect to the number of points for both search and interpolation operations. Gabriel Pinochet-Soto (Portland State University) Exploring Generalized Jacobi Preconditioners and Smoothers in MFEM ( PDF ) This talk will present a new type of smoother called the L(p,q)-Jacobi family of smoothers, which is a generalization of the L(1)-Jacobi smoother. We will discuss how these smoothers are implemented in MFEM and compare the performance of the solvers. Additionally, we will delve into a specific case of the L(1)-Jacobi preconditioner for partially assembled operators and explain its implementation and effectiveness. 
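For context, the following is a brief, illustrative C++ sketch (not code from the talk) of how a Jacobi-type preconditioner is typically wired up in MFEM today: the library's existing OperatorJacobiSmoother is applied to a partially assembled diffusion operator, in the spirit of MFEM's ex1 example. The generalized L(p,q)-Jacobi smoothers discussed above would slot into the same preconditioner role; the mesh file and solver settings below are placeholders.

```cpp
// Minimal sketch: Jacobi-preconditioned CG for a partially assembled
// (matrix-free) diffusion operator in MFEM. Mesh file is a placeholder.
#include "mfem.hpp"
using namespace mfem;

int main()
{
   Mesh mesh("star.mesh");                        // any 2D/3D mesh file
   int order = 3;
   H1_FECollection fec(order, mesh.Dimension());
   FiniteElementSpace fespace(&mesh, &fec);

   // Homogeneous Dirichlet boundary conditions on all boundary attributes.
   Array<int> ess_tdof_list, ess_bdr(mesh.bdr_attributes.Max());
   ess_bdr = 1;
   fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

   ConstantCoefficient one(1.0);
   LinearForm b(&fespace);
   b.AddDomainIntegrator(new DomainLFIntegrator(one));
   b.Assemble();

   GridFunction x(&fespace);
   x = 0.0;

   BilinearForm a(&fespace);
   a.SetAssemblyLevel(AssemblyLevel::PARTIAL);    // no global sparse matrix
   a.AddDomainIntegrator(new DiffusionIntegrator(one));
   a.Assemble();

   OperatorPtr A;
   Vector B, X;
   a.FormLinearSystem(ess_tdof_list, x, b, A, X, B);

   // Jacobi preconditioner built from the operator diagonal; works with
   // partial assembly, so it is suitable for GPU execution as well.
   OperatorJacobiSmoother M(a, ess_tdof_list);
   PCG(*A, M, B, X, 1, 400, 1e-12, 0.0);

   a.RecoverFEMSolution(X, b, x);
   return 0;
}
```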
2:00-3:00 Student Session 2 (10 mins each) Chair: Ketan Mittal Matthew Blomquist (University of California Merced) Semi-Lagrangian Characteristic Reconstruction and Projection for Transport under Incompressible Velocity Fields ( PDF ) We present a novel semi-Lagrangian characteristic reconstruction method that leverages a volume preserving projection to advect quantities under incompressible velocity fields. A key advantage of this framework is to see the traditional semi-Lagrangian scheme as the construction of a diffeomorphism between the deformed and original geometry (reference map). This representation allows us to use the local deformation of the geometry to design a projection for the reference map onto the space of volume preserving diffeomorphisms. In the context of the advection of an implicit surface representation (level set method), this results in significant improvements to the interface precision and mass conservation. In this short talk, I will demonstrate our new method with a variety of canonical two-dimensional examples and compare this new approach to traditional schemes. Paul Moujaes (Technical University Dortmund) Clip and Scale Limiting for Remapping H1 Velocity Fields in Lagrangian Hydrodynamics Simulations ( PDF ) The mesh quality in Lagrangian hydrodynamics simulations can worsen drastically over time. Therefore, pausing the simulation and remapping the quantities is needed at some point. The remapping process can be written as a linear advection equation. In this talk, we present the application of the Clip and Scale limiter for remapping the velocity field which is discretized with continuous finite elements. Arjun Vijaywargiya (University of Notre Dame) High Order Computation of MFC Barycenters with MFEM ( PDF ) We develop a class of barycenter problems based on mean field control problems in three dimensions with associated reactive-diffusion systems of unnormalized multi-species densities. The primary objective is to present a comprehensive framework for efficiently computing the proposed variational problem: generalized Benamou-Brenier formulas with multiple input density vectors as boundary conditions. Our approach involves the utilization of high-order finite element discretizations of the spacetime domain to achieve improved accuracy. The discrete optimization problem is then solved using the primal-dual hybrid gradient (PDHG) algorithm, a first-order optimization method for effectively addressing a wide range of constrained optimization problems. The efficacy and robustness of our proposed framework are illustrated through several numerical examples in three dimensions, such as the computation of the barycenter of multi-density systems consisting of Gaussian distributions and reactive-diffusive multi-density systems involving 3D voxel densities. Additional examples highlighting computations on 2D embedded surfaces are also provided. Yi Zong (Tsinghua University) FP16 Acceleration in Structured Multigrid Preconditioner for Real-World Problems ( PDF ) Half-precision hardware support is now almost ubiquitous. In contrast to its active use in AI, half-precision is less commonly employed in scientific and engineering computing. The valuable proposition of accelerating scientific computing applications using half-precision prompted this study. Focusing on solving sparse linear systems in scientific computing, we explore the technique of utilizing FP16 in multigrid preconditioners. 
Based on observations of sparse matrix formats, numerical features of scientific applications, and the performance characteristics of multigrid, this study formulates four guidelines for FP16 utilization in multigrid. The proposed algorithm demonstrates how to avoid FP16 overflow through scaling. A setup-then-scale strategy prevents FP16\u2019s limited accuracy and narrow range from interfering with the multigrid\u2019s numerical properties. Another strategy, recover-and-rescale on the fly, reduces the memory footprint of hotspot kernels. The extra precision-conversion overhead in mix-precision kernels is addressed by the transformation of storage formats and SIMD implementation. Two ablation experiments validate the effectiveness of our algorithm and parallel kernel implementation on ARM and X86 architectures. We further evaluate three idealized and five real-world problems to demonstrate the advantage of utilizing FP16 in a multigrid preconditioner. The average speedups are approximately 2.75x and 1.95x in preconditioner and end-to-end workflow, respectively. 3:00-3:30 Coffee Break & Group Photo download a virtual background below 3:30-5:00 Presentations (30 mins each) Chair: Tzanio Kolev Yu Leng (Los Alamos National Laboratory) Arbitrary Order Virtual Element Methods for High-Order Phase-Field Modeling of Dynamic Fracture ( PDF ) Accurate modeling of fracture nucleation and propagation in brittle and ductile materials subjected to dynamic loading is important in predicting material damage and failure under extreme conditions. Phase-field fracture models have garnered a lot of attention in recent years due to their success in representing damage and fracture processes in a wide class of materials and under a variety of loading conditions. Second-order phase-field fracture models are by far the most popular among researchers (and increasingly, among practitioners), but fourth-order models have started to gain broader acceptance since their more recent introduction. The exact solution corresponding to these high-order phase-field fracture models has higher regularity. Thus, numerical solutions of the model equations can achieve improved accuracy and higher spatial convergence rates. In this work, we develop a virtual element framework for the high-order phase-field model of dynamic fracture. The virtual element method (VEM) can be regarded as a generalization of the classical finite element method. In addition to many other desirable characteristics, the VEM allows computing on polytopal meshes. Here, we use H1-conforming virtual elements and the generalized-\u03b1 time integration method for the momentum balance equation, and adopt H2-conforming virtual elements for the high-order phase-field equation. We verify our virtual element framework using classical quasi-static benchmark problems and demonstrate its capabilities with the aid of numerical simulations of dynamic fracture in brittle materials. Michael Tupek (LLNL) Automatic Parameter Sensitivities in Serac for Engineering Applications ( PDF , video ) We present a framework for automatically calculating sensitivities for both topology and shape design optimization workflows. Building on MFEM infrastructure, we provide abstractions for quickly specifying, solving, coupling, and differentiating new PDEs for engineering applications. 
Recent developments in Serac include: highly robust nonlinear solvers, integration of the Tribol library for contact enforcement, coupled thermal-mechanics, differentiable material model library, and checkpointing for transient adjoint calculations. Jan Nikl (LLNL) Hybridization of Convection-Diffusion Systems in MFEM ( PDF , video ) Convection-diffusion systems are likely the most common class of partial differential equations appearing in practically all different applications. However, their mixed formulation typically suffers from prohibitively high computational costs and difficult preconditioning, especially close to the steady state where the system becomes a saddle point problem. The hybridization technique offers an appealing answer to these issues. The new framework for mixed systems enables single-line hybridization, reducing the problem to face traces of the total flux only. Solution of such a system is then inexpensive, and preconditioning becomes nearly trivial. Non-linear convection is also supported with the action-based regime of operation. Description of the mechanism as well as code examples to show ease of usage are presented. 5:00 Day 1 Wrap-up MFEM team 5:30-8:00 Workshop Dinner First Street Alehouse Wednesday, October 23 Time Activity Presenter 8:00-8:30 Breakfast on site at UCLCC 8:30-9:00 Visualization Contest Winners Will Pazner (Portland State University) 9:00-10:00 Presentations (30 mins each) Chair: Sohail Reddy Gourab Panigrahi (Indian Institute of Science) Hardware Aware Matrix-Free Approach for Accelerating FE Discretized Eigenvalue Problems: Application to Large-Scale Kohn-Sham Density Functional Theory ( PDF ) The finite-element (FE) discretization of a partial differential equation usually involves construction of an FE discretized operator, and computing its action on trial FE discretized fields for the solution of a linear system of equations or eigenvalue problems using iterative solvers. This is traditionally computed using global sparse-vector multiplication algorithms. However, recent hardware-aware algorithms for evaluating such higher-order FE discretized matrix-vector multiplications suggest that on-the-fly matrix-vector products without building and storing the cell-level dense matrices (cell-matrix approach) reduce both arithmetic complexity and memory footprint and are referred to as matrix-free approaches. These approaches exploit the tensor-structured nature of the FE polynomial basis for evaluating the underlying integrals, and the current state-of-the-art matrix-free implementations deal with the action of an FE discretized matrix on a single vector. These are neither optimal nor readily applicable for matrix multi-vector products involving a large number of vectors (>1000). We discuss a computationally efficient and scalable matrix-free algorithm and implementation strategies to compute the FE discretized matrix multi-vector products on multi-node GPU architectures. We use batched evaluation strategies, with the batch size tailored to underlying hardware architectures, leading to better data locality and allowing for parallelization over multiple batches. We devise an algorithm to overlap compute and data movement in conjunction with GPU shared memory, constant memory, and kernel fusion to reduce data accesses to and from device memory and registers to reduce bank conflicts. Further, we propose a strategy where the memory of both the registers and shared memory is utilized to mitigate the memory constraints. 
We benchmark the performance of our implementation using a representative FE discretized matrix acting on multivectors of various sizes on multi-node GPU architectures and compare the performance against the cell-matrix approach and matrix-free approaches implemented in MFEM and deal.II. Further, the usefulness of the proposed approach is demonstrated in accelerating large-scale eigenvalue problems arising in FE discretized Density Functional Theory calculations, a quantum mechanical theory used for first-principles material modeling. Julian Andrej (LLNL) Differentiating Large-Scale Finite Element Applications with MFEM This presentation will go over the details of dFEM by explaining how MFEM leverages the Finite Element Operator Decomposition to introduce an automatic differentiation interface. We discuss advantages of this approach over traditional AD techniques and our integration with Enzyme. The talk is concluded with examples and a live demo. 10:00-10:30 Coffee Break discussions on Slack 10:30-12:00 Presentations (30 mins each) Chair: Tzanio Kolev Vladimir Tomov (LLNL) Recent Work in the MFEM Miniapps for Shock Hydro, Field Remap, and Mesh Optimization ( PDF , video ) This presentation discusses recent advancements, research, and exploratory work in the MFEM miniapps for shock hydrodynamics (Laghos), field remap (Remhos), and mesh optimization. For shock hydro, we present the implementation of slip wall boundary conditions for curved domains, along with research involving material interfaces using the shifted interface method or cut-element integration through Algoim and moments-based integration. In the field remap miniapp, we cover developments in stabilized remap for continuous fields, interface sharpening techniques, and matrix-free methods for GPU execution. Lastly, we explore recent progress in mesh optimization, including surface fitting and its GPU implementation, tangential relaxation, automatic differentiation (AD) for complex objective functionals, enhanced metric theory and quality metrics, and hpr-adaptivity for the mesh representation. While some of these advancements are public, general methods that can be applied across various practical miniapps, others are exploratory, demonstrating how the miniapps can serve as a starting point for research in specific areas. Hui-Chia Yu (Michigan State University) Battery Electrode Simulation Toolkit using MFEM (BESFEM) ( PDF ) Conventional sharp-interface simulations require mesh systems conformal to the domain of interest for solving governing equations. Our research team employs an alternative approach, the smoothed boundary method (SBM), that utilizes a continuous domain function to describe geometries and reformulate governing equations. This formulation enables solving governing equations on a regular Cartesian grid, eliminating the need for body-conforming meshes. We have been developing an Open-Source Battery Electrode Simulation Toolkit using MFEM (BESFEM). This toolkit integrates the SBM approach with the MFEM solver library (a product of the DOE's Exascale Computing Project). To enhance accuracy and computational efficiency, our team leverages MFEM's built-in adaptive mesh refinement (AMR) functionality, where elements near SBM diffuse interfaces are refined across multiple levels. BESFEM will be made fully available as a research and education tool for the battery science and materials science communities. 
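As a rough illustration of the kind of interface-localized refinement described above (a generic sketch under stated assumptions, not BESFEM code), the C++ fragment below marks and nonconformingly refines MFEM mesh elements whose centers lie near a diffuse interface given by a placeholder domain function phi; the threshold and number of refinement passes are arbitrary choices for the example.

```cpp
// Minimal sketch: multilevel nonconforming refinement near a diffuse
// interface phi(x) = 0 on a Cartesian MFEM mesh. phi is a toy function.
#include "mfem.hpp"
#include <cmath>
using namespace mfem;

double phi(const Vector &x) { return x(0) - 0.5; }   // toy interface at x = 0.5

int main()
{
   Mesh mesh = Mesh::MakeCartesian2D(16, 16, Element::QUADRILATERAL, true);
   mesh.EnsureNCMesh();                      // allow nonconforming refinement

   for (int level = 0; level < 3; level++)   // multilevel refinement passes
   {
      Array<Refinement> to_refine;
      for (int e = 0; e < mesh.GetNE(); e++)
      {
         Vector center;
         mesh.GetElementCenter(e, center);
         if (std::abs(phi(center)) < 0.1)    // element is near the interface
         {
            to_refine.Append(Refinement(e));
         }
      }
      mesh.GeneralRefinement(to_refine);
   }
   mesh.Save("refined.mesh");
   return 0;
}
```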
Dylan Copeland (LLNL) Sparse, Approximate Quadrature for Acceleration of Isogeometric Analysis and Reduced Order Models ( PDF , video ) Numerical integration for assembly of FEM systems typically employs quadrature rules selected for the polynomial order of basis functions in each element. In some cases, a much sparser rule can maintain accuracy. We present an algebraic method for constructing sparse rules, by formulating a constraint system of states required to be integrated accurately. A nonnegative least squares solver finds a sparse, approximate solution to this constraint system, yielding a quadrature rule with fewer points. One application we demonstrate is isogeometric analysis, where a NURBS FEM space is defined on patches consisting of many elements. Setup times are greatly accelerated, by using patch-wise integration with sum factorization and reduced quadrature rules constructed on patches. Another area of application is reduced order models (ROM), where the FEM system is restricted to a reduced POD basis formed from training data. Instead of hyper-reduction methods such as DEIM, the empirical quadrature procedure (EQP) can be used to accelerate ROM simulations with a sparse quadrature rule in the reduced subspace. We demonstrate this on several benchmark problems in the Laghos miniapp and show that energy conservation is maintained. 12:00-1:00 Lunch on site at UCLCC 1:00-3:00 Presentations (30 mins each) Chair: Aaron Fisher Jacob Spainhour (CU Boulder) Robust Containment Queries over Collections of Parametric Curves via Generalized Winding Numbers ( PDF , video ) The containment query is an important geometric primitive in many multiphysics applications. For example, when initializing multimaterial Arbitrary Lagrangian-Eulerian (ALE) simulations, we often need to determine whether arbitrary quadrature points from the background mesh are inside or outside the regions associated with each material. However, existing methods require expensive refinement to accurately capture curved regions. At the same time, many methods are wholly incompatible with user-defined geometries that contain geometric and numeric gaps and/or self-intersections. In this work, we develop a containment query for 2D regions defined by rational Bezier curves that operates directly on curved objects. Our method relies on the generalized winding number (GWN), a mathematical construction that can be evaluated for each curve independently, making the derived containment query robust to non-watertightness. We use an adaptive algorithm to compute the GWN field exactly, which permits fast evaluation for points considered \"distant\" to the curve while being numerically stable for points that are arbitrarily close. Overall, this classification scheme greatly expands the types of bounding geometry that can be used directly in shaping applications without the need for otherwise expensive repair techniques. If time permits, we will also discuss our extensions of this idea to 3D shapes defined by parametric surfaces. Alexander Blair (UK Atomic Energy Authority) Platypus: An Open-Source Application for MFEM Problem Set-Up and Assembly in the MOOSE Framework ( PDF ) The large-scale open-source finite element simulation framework MOOSE has built an extensive user community around its capabilities in solving large-scale FE problems across a wide range of physics domains whilst maintaining a simple interface for users. 
However, it currently lacks support for problem set-up and solution on GPU architectures, due in part to its default finite element library backend libMesh, restricting the range of facilities that it may effectively leverage. Here we present Platypus, an open-source MOOSE application under development for the massively parallel multiphysics simulations of finite element problems using the MFEM finite element library, supporting problem assembly and solves on both CPU and GPU architectures. We shall show some initial results on simple thermal and electromagnetic test problems and outline our development plans for supporting upcoming experiments at UKAEA at the HIVE and CHIMERA facilities. Qi Tang (Georgia Institute of Technology) An Adaptive Newton-Based Free-Boundary Grad-Shafranov Solver ( PDF ) Equilibriums in magnetic confinement devices result from force balancing between the Lorentz force and the plasma pressure gradient. In an axisymmetric configuration like a tokamak, such an equilibrium is described by an elliptic equation for the poloidal magnetic flux, commonly known as the Grad-Shafranov equation. It is challenging to develop a scalable and accurate free-boundary Grad-Shafranov solver, since it is a fully nonlinear optimization problem that simultaneously solves for the magnetic field coil current outside the plasma to control the plasma shape. In this work, we develop a Newton-based free-boundary Grad-Shafranov solver using adaptive finite elements and preconditioning strategies. The free-boundary interaction leads to the evaluation of a domain-dependent nonlinear form of which its contribution to the Jacobian matrix is achieved through shape calculus. The optimization problem aims to minimize the distance between the plasma boundary and specified control points while satisfying two non-trivial constraints, which correspond to the nonlinear finite element discretization of the Grad-Shafranov equation and a constraint on the total plasma current involving a nonlocal coupling term. The linear system is solved by a block factorization, and AMG is called for sub-block elliptic operators. The unique contributions of this work include the treatment of a global constraint, preconditioning strategies, nonlocal reformulation, and the implementation of adaptive finite elements. It is found that the resulting Newton solver is robust, successfully reducing the nonlinear residual to 1e-6 and lower in a small handful of iterations while addressing the challenging case to find a Taylor state equilibrium where conventional Picard-based solvers fail to converge. Dohyun Kim (Brown University) SiMPL Method: A Fast and Simple Method for Density-Based Topology Optimization ( PDF ) This talk will present a new first-order method for density-based topology optimization called SiMPL: Sigmoidal Mirror descent with Projected Lagrangian. This method delivers point-wise bound preserving density fields at every iteration. The design updates are based only on the first-order derivative information of the objective function, significantly simplifying practical implementations. We accelerate this method with adaptive step size and back-tracking line search. We numerically verified the mesh-independent behavior of the SiMPL method and observed significantly faster convergence compared to other popular first-order optimization algorithms for topology optimization. 
To outline the general applicability of the technique, we also include examples with (self-load) compliance minimization and compliant mechanism problems. 3:00-3:30 Coffee Break discussions on Slack 3:30-5:00 Presentations (30 mins each) Chair: Justin Laughlin Mathias Schmidt (LLNL) Level-Set Topology Optimization with PDE Generated Conformal Meshes ( PDF , video ) The promise of Topology Optimization (TO) is to provide engineers with a systematic computational tool to support the development of optimal designs. A shortcoming of classic density based multi-material TO designs is the nebulous interphase region between materials, which leads to inaccurate response predictions in these very regions. In contrast, designs based on boundary and interface regions, rather than interphase regions, yield accurate response predictions. Level-set based TO is an example of such; however, the analysis of the response often requires repeated mesh generation or non-standard finite element computations. We present a solely PDE-based, level-set topology optimization approach in which geometries are described through the iso-contour of one or multiple level-set fields which are discretized over a mesh. The nodal heights serve as the design parameters. The governing field equations are discretized by a conformal discretization over a separate \u201canalysis\u201d mesh. In the optimization, the \u201canalysis\u201d mesh is morphed such that its boundary and interfaces conform with the isocontours of the LS fields. The mesh morphing is performed using the Target-Matrix Optimization Paradigm (TMOP) approach. Our TMOP formulation is a PDE based mesh morphing operation which aims to improve the interface conformity while preserving mesh quality. Design sensitivities of the optimization cost and constraint functions with respect to all design level-set fields are computed through an adjoint approach which accounts for the mesh morphing process. The proposed analysis and optimization framework is based on MFEM, a free, lightweight, scalable C++ library for finite element methods which supports the optimization of large-scale problems. We investigate the robustness of the proposed optimization methodology by solving two- and three-dimensional multi-material optimization problems involving linear diffusion and elasticity. We discuss the advantages and challenges of our approach with regards to the mesh morphing process. LS regularization techniques are employed to produce a well-behaved mesh morphing problem throughout the optimization. Finally, select aspects and challenges of our approach with respect to parallel computing and processor decomposition are discussed. Milan Holec (Xcimer Energy) Towards Predictive Modeling of the World's Most Powerful Fusion Laser at Xcimer ( PDF ) According to the techno-economic studies, the ultra-violet excimer lasers offer the most straightforward path to the commercial fusion given the lowest J/$ price and their capacity to withstand MJ laser pulses, a fluence when the traditional solid state lasers break. We present our vision on how to model the future laser system spanning the micro-scales at 248nm laser wavelength and macro-scales at tens of meters of the actual laser beamline, where MFEM allows us to design a computationally efficient and accurate discretization based on mathematical details which we will describe in the presentation. 
Yohann Dudouit (LLNL) Mitigating Rays-Effect in Phase-Space Advection with Matrix-Free High-Dimensional DG Methods ( PDF , video ) The mitigation of the rays-effect in phase-space advection problems is a critical challenge in deterministic transport simulations, particularly when using traditional methods that struggle with numerical artifacts. In this work, we propose a novel high-dimensional matrix-free discontinuous Galerkin (DG) approach designed to address the rays-effect by fully discretizing phase space, including velocity components, up to six dimensions. This methodology avoids the excessive computational cost associated with Monte Carlo simulations while offering a deterministic alternative that preserves accuracy and scalability. A key component of our approach is the use of advanced coordinate transformations, which optimize the coordinate system to minimize the rays-effect by aligning the coordinate system with the net flux. Our matrix-free formulation minimizes memory usage and improves computational efficiency by avoiding the assembly of large sparse matrices, a critical factor when scaling to high-dimensional problems. Numerical experiments demonstrate the effectiveness of this approach in reducing rays-effect artifacts, providing a robust and scalable solution for high-dimensional transport problems. 5:00 Day 2 Wrap-up MFEM team Thursday, October 24 Time Activity Presenter 8:00-8:30 Breakfast on site at UCLCC 8:30-12:00 Office Hours Q&A with MFEM team 12:00-1:00 Lunch on site at UCLCC 1:00-5:00 Additional Meetings and Discussions Simulation and Visualization Contest We will be holding a simulation and visualization contest open to all attendees. Participants can submit visualizations (images or videos) from MFEM-related simulations. The winner of the competition (selected by the organizing committee) will receive an MFEM T-shirt. We will also feature the images in the gallery . Here are the winners from the 2023 workshop: Mehran Ebrahimi : Displacement distribution of a loaded excavator arm under static equilibrium John Camier : Leapfrogging vortex rings using an incompressible Schr\u00f6dinger fluid solver To submit an entry in the contest, please fill out the Google form . Alternatively, you may email your submission to mfem@llnl.gov , including your name, institution, a short description of the simulation (the underlying physics, discretization, application details, etc.), and visualization software used (GLVis, ParaView, VisIt, etc.). Virtual Backgrounds We invite workshop participants to use the virtual backgrounds designed for this event. Click each image to enlarge, then right-click to save locally. About Livermore and LLNL Founded in 1869, Livermore is California's oldest wine region, framed by award-winning wineries, farmlands, and ranches that mirror the valley's western heritage. As home to renowned science and technology centers, Lawrence Livermore and Sandia national labs, Livermore is a technological hub and an academically engaged community. It has become an integral part of the Bay Area, successfully competing in the global market powered by its wealth of research, technology, and innovation. For more than 70 years, LLNL has applied science and technology to make the world a safer place. World-class facilities include the National Ignition Facility, the Advanced Manufacturing Laboratory, and the Livermore Computing Center hosting the Sierra supercomputer and home of the future exascale machine, El Capitan. 
Organizing Committee Holly Auten \u250a Aaron Fisher \u250a Tzanio Kolev \u250a Justin Laughlin \u250a Ketan Mittal \u250a Will Pazner \u250a Sohail Reddy \u250a Haley Shuey Previous Workshops MFEM Community Workshop 2023 MFEM Community Workshop 2022 MFEM Community Workshop 2021", "title": "Workshop"}, {"location": "workshop/#mfem-community-workshop", "text": "", "title": "MFEM Community Workshop"}, {"location": "workshop/#overview", "text": "The MFEM team is happy to invite you to the 2024 MFEM Community Workshop, which will take place on October 22-24, 2024 in a hybrid format: in-person at Lawrence Livermore National Laboratory (LLNL) + virtually on Zoom. The goal of the workshop is to foster collaboration among all MFEM users and developers, share the latest MFEM features with the broader community, deepen application engagements, and solicit feedback to guide future development directions for the project. We encourage you to join us in person if you can! For questions, please contact the meeting organizers at mfem@llnl.gov .", "title": "Overview"}, {"location": "workshop/#registration", "text": "Registration closed on October 15th .", "title": "Registration"}, {"location": "workshop/#venue", "text": "The meeting will take place at the University of California Livermore Collaboration Center (UCLCC) which is just outside of LLNL's East Gate.", "title": "Venue"}, {"location": "workshop/#lodging-options", "text": "There are many hotels in Livermore, and others are available in Pleasanton and nearby cities. See LLNL's recommended list of area hotels or this Google Maps search . If you stay outside of Livermore, we recommend staying west of the city to have a reverse commute to the Lab.", "title": "Lodging Options"}, {"location": "workshop/#meeting-format", "text": "This will be the first hybrid edition of the MFEM community workshop that will include the following elements: Project news and development updates from the MFEM team An overview of the latest features in MFEM-4.7 and future roadmap Contributed talks from application developers utilizing MFEM Student lightning talks and visualization contest Office hours on the last day See also the agenda for the previous 2023 , 2022 and 2021 MFEM workshops. 
Workshop participants are encouraged to join the MFEM Community Slack workspace to communicate with other MFEM users and developers before, during and after the MFEM workshop.", "title": "Meeting Format"}, {"location": "workshop/#agenda", "text": "", "title": "Agenda"}, {"location": "workshop/#tuesday-october-22", "text": "Time Activity Presenter 8:00-8:30 Breakfast + Registration on site at UCLCC 8:30-9:00 Welcome & Overview ( PDF , video ) Aaron Fisher (LLNL) 9:00-9:30 The State of MFEM ( PDF , video ) Tzanio Kolev (LLNL) 9:30-10:00 Recent Developments ( video ) Veselin Dobrev (LLNL) 10:00-10:30 Coffee Break discussions on Slack 10:30-12:00 Presentations (30 mins each) Chair: Will Pazner M\u00e1t\u00e9 Kov\u00e1cs (Braid Technologies) Rust Wrapper for MFEM ( PDF )", "title": "Tuesday, October 22"}, {"location": "workshop/#wednesday-october-23", "text": "Time Activity Presenter 8:00-8:30 Breakfast on site at UCLCC 8:30-9:00 Visualization Contest Winners Will Pazner (Portland State University) 9:00-10:00 Presentations (30 mins each) Chair: Sohail Reddy Gourab Panigrahi (Indian Institute of Science) Hardware Aware Matrix-Free Approach for Accelerating FE Discretized Eigenvalue Problems: Application to Large-Scale Kohn-Sham Density Functional Theory ( PDF )", "title": "Wednesday, October 23"}, {"location": "workshop/#thursday-october-24", "text": "Time Activity Presenter 8:00-8:30 Breakfast on site at UCLCC 8:30-12:00 Office Hours Q&A with MFEM team 12:00-1:00 Lunch on site at UCLCC 1:00-5:00 Additional Meetings and Discussions", "title": "Thursday, October 24"}, {"location": "workshop/#simulation-and-visualization-contest", "text": "We will be holding a simulation and visualization contest open to all attendees. Participants can submit visualizations (images or videos) from MFEM-related simulations. The winner of the competition (selected by the organizing committee) will receive an MFEM T-shirt. We will also feature the images in the gallery . Here are the winners from the 2023 workshop: Mehran Ebrahimi : Displacement distribution of a loaded excavator arm under static equilibrium John Camier : Leapfrogging vortex rings using an incompressible Schr\u00f6dinger fluid solver To submit an entry in the contest, please fill out the Google form . Alternatively, you may email your submission to mfem@llnl.gov , including your name, institution, a short description of the simulation (the underlying physics, discretization, application details, etc.), and visualization software used (GLVis, ParaView, VisIt, etc.).", "title": "Simulation and Visualization Contest"}, {"location": "workshop/#virtual-backgrounds", "text": "We invite workshop participants to use the virtual backgrounds designed for this event. Click each image to enlarge, then right-click to save locally.", "title": "Virtual Backgrounds"}, {"location": "workshop/#about-livermore-and-llnl", "text": "Founded in 1869, Livermore is California's oldest wine region, framed by award-winning wineries, farmlands, and ranches that mirror the valley's western heritage. As home to renowned science and technology centers, Lawrence Livermore and Sandia national labs, Livermore is a technological hub and an academically engaged community. It has become an integral part of the Bay Area, successfully competing in the global market powered by its wealth of research, technology, and innovation. For more than 70 years, LLNL has applied science and technology to make the world a safer place. 
World-class facilities include the National Ignition Facility, the Advanced Manufacturing Laboratory, and the Livermore Computing Center hosting the Sierra supercomputer and home of the future exascale machine, El Capitan.", "title": "About Livermore and LLNL"}, {"location": "workshop/#organizing-committee", "text": "Holly Auten \u250a Aaron Fisher \u250a Tzanio Kolev \u250a Justin Laughlin \u250a Ketan Mittal \u250a Will Pazner \u250a Sohail Reddy \u250a Haley Shuey", "title": "Organizing Committee"}, {"location": "workshop/#previous-workshops", "text": "MFEM Community Workshop 2023 MFEM Community Workshop 2022 MFEM Community Workshop 2021", "title": "Previous Workshops"}, {"location": "workshop21/", "text": "MFEM Community Workshop October 20, 2021 Virtual Meeting Speakers' slides are linked in the agenda below. Read the article about the workshop on LLNL's Computing website. Watch the video playlist of workshop presentations (linked individually below and available on the videos page ), and view contest submissions in the gallery . Overview The MFEM team is happy to announce the first MFEM Community Workshop, which will take place on October 20, 2021, virtually, using WebEx for videoconferencing. The goal of the workshop is to foster collaboration among all MFEM users and developers, share the latest MFEM features with the broader community, deepen application engagements, and solicit feedback to guide future development directions for the project. For questions, please contact the meeting organizers at mfem@llnl.gov . Registration Registration closed on October 18th. Meeting format Depending on the interest and user feedback, the meeting will include the following elements: Project news and development updates from the MFEM team An overview of the latest features in MFEM-4.3 and GLVis-4.1 Contributed talks from application developers utilizing MFEM Roadmap discussion for future development Technical discussions in breakout rooms for Electromagnetics, Fluids, and Structural Mechanics applications Workshop participants are encouraged to join the MFEM Community Slack workspace to communicate with other MFEM users and developers before, during and after the MFEM workshop. The meeting activities will take place 7:45am-2:45pm Pacific Daylight Time (GMT-7): Wednesday, October 20 PDFs and videos are linked below. 
Time (PDT, GMT-7) Activity Presenter 7:45-8:00 Welcome and Overview ( PDF , video ) Aaron Fisher 8:00-8:30 The State of MFEM ( PDF , video ) Tzanio Kolev 8:30-9:00 Recent Developments in MFEM ( PDF , video ) Veselin Dobrev 9:00-10:00 Talks, Session I (20 mins each) \u2022 Jamie Bramwell (LLNL), Serac: User-Friendly Abstractions for MFEM-Based Engineering Applications ( PDF , video ) \u2022 Thomas Helfer (CEA), MFEM-MGIS-MFront, a MFEM-Based Library for Nonlinear Solid Thermomechanic ( PDF , video ) \u2022 Felipe G\u00f3mez, Carlos del Valle, & Juli\u00e1n Jim\u00e9nez (National University of Colombia), Phase Change Heat and Mass Transfer Simulation with MFEM ( PDF , video ) 10:00-10:30 Break & Group Photo All Download a virtual background below 10:30-12:30 Talks, Session II (20 mins each) \u2022 Robert Rieben (LLNL), The Multiphysics on Advanced Platforms Project: Performance, Portability and Scaling ( video ) \u2022 Marc Bolinches (UT), Development of DG Compressible Navier-Stokes Solver with MFEM ( PDF , video ) \u2022 Mathias Davids (Harvard), Modeling Peripheral Nerve Stimulations (PNS) in Magnetic Resonance Imaging (MRI) ( PDF , video ) \u2022 Jan Nikl (ELI Beamlines), Laser Plasma Modeling with High-Order Finite Element ( PDF , video ) \u2022 Qi Tang (LANL), An Adaptive, Scalable Fully Implicit Resistive MHD Solver ( video ) \u2022 Syun\u2019ichi Shiraiwa (PPPL), Development of PyMFEM Python Wrapper for MFEM & Scalable RF Wave Simulation for Nuclear Fusion ( PDF , video ) 12:30-1:00 Break All 1:00-2:00 Talks, Session III (20 mins each) \u2022 William Dawn (NCSU), Unstructured Finite Element Neutron Transport using MFEM ( PDF , video ) \u2022 Vladimir Tomov (LLNL), MFEM Capabilities for High-Order Mesh Optimization ( PDF , video ) \u2022 Will Pazner (LLNL), High-Order Matrix-Free Solvers ( PDF , video ) 2:00-2:30 Wrap-Up and Simulation Contest Winners ( PDF , video ) Aaron Fisher Simulation and Visualization Contest The 2021 MFEM Workshop featured a simulation and visualization contest. The submitted entries can be viewed in the gallery . Virtual Backgrounds We invite workshop participants to use the virtual backgrounds designed for this event. Click each image to enlarge, then right-click to save locally. Organizing Committee Holly Auten \u250a Aaron Fisher \u250a Tzanio Kolev \u250a Will Pazner \u250a Mark Stowell", "title": "_Workshop21"}, {"location": "workshop21/#mfem-community-workshop", "text": "", "title": "MFEM Community Workshop"}, {"location": "workshop21/#october-20-2021", "text": "", "title": "October 20, 2021"}, {"location": "workshop21/#virtual-meeting", "text": "Speakers' slides are linked in the agenda below. Read the article about the workshop on LLNL's Computing website. Watch the video playlist of workshop presentations (linked individually below and available on the videos page ), and view contest submissions in the gallery .", "title": "Virtual Meeting"}, {"location": "workshop21/#overview", "text": "The MFEM team is happy to announce the first MFEM Community Workshop, which will take place on October 20, 2021, virtually, using WebEx for videoconferencing. The goal of the workshop is to foster collaboration among all MFEM users and developers, share the latest MFEM features with the broader community, deepen application engagements, and solicit feedback to guide future development directions for the project. 
For questions, please contact the meeting organizers at mfem@llnl.gov .", "title": "Overview"}, {"location": "workshop21/#registration", "text": "Registration closed on October 18th.", "title": "Registration"}, {"location": "workshop21/#meeting-format", "text": "Depending on the interest and user feedback, the meeting will include the following elements: Project news and development updates from the MFEM team An overview of the latest features in MFEM-4.3 and GLVis-4.1 Contributed talks from application developers utilizing MFEM Roadmap discussion for future development Technical discussions in breakout rooms for Electromagnetics, Fluids, and Structural Mechanics applications Workshop participants are encouraged to join the MFEM Community Slack workspace to communicate with other MFEM users and developers before, during and after the MFEM workshop. The meeting activities will take place 7:45am-2:45pm Pacific Daylight Time (GMT-7):", "title": "Meeting format"}, {"location": "workshop21/#wednesday-october-20", "text": "PDFs and videos are linked below. Time (PDT, GMT-7) Activity Presenter 7:45-8:00 Welcome and Overview ( PDF , video ) Aaron Fisher 8:00-8:30 The State of MFEM ( PDF , video ) Tzanio Kolev 8:30-9:00 Recent Developments in MFEM ( PDF , video ) Veselin Dobrev 9:00-10:00 Talks, Session I (20 mins each) \u2022 Jamie Bramwell (LLNL), Serac: User-Friendly Abstractions for MFEM-Based Engineering Applications ( PDF , video ) \u2022 Thomas Helfer (CEA), MFEM-MGIS-MFront, a MFEM-Based Library for Nonlinear Solid Thermomechanic ( PDF , video ) \u2022 Felipe G\u00f3mez, Carlos del Valle, & Juli\u00e1n Jim\u00e9nez (National University of Colombia), Phase Change Heat and Mass Transfer Simulation with MFEM ( PDF , video ) 10:00-10:30 Break & Group Photo All Download a virtual background below 10:30-12:30 Talks, Session II (20 mins each) \u2022 Robert Rieben (LLNL), The Multiphysics on Advanced Platforms Project: Performance, Portability and Scaling ( video ) \u2022 Marc Bolinches (UT), Development of DG Compressible Navier-Stokes Solver with MFEM ( PDF , video ) \u2022 Mathias Davids (Harvard), Modeling Peripheral Nerve Stimulations (PNS) in Magnetic Resonance Imaging (MRI) ( PDF , video ) \u2022 Jan Nikl (ELI Beamlines), Laser Plasma Modeling with High-Order Finite Element ( PDF , video ) \u2022 Qi Tang (LANL), An Adaptive, Scalable Fully Implicit Resistive MHD Solver ( video ) \u2022 Syun\u2019ichi Shiraiwa (PPPL), Development of PyMFEM Python Wrapper for MFEM & Scalable RF Wave Simulation for Nuclear Fusion ( PDF , video ) 12:30-1:00 Break All 1:00-2:00 Talks, Session III (20 mins each) \u2022 William Dawn (NCSU), Unstructured Finite Element Neutron Transport using MFEM ( PDF , video ) \u2022 Vladimir Tomov (LLNL), MFEM Capabilities for High-Order Mesh Optimization ( PDF , video ) \u2022 Will Pazner (LLNL), High-Order Matrix-Free Solvers ( PDF , video ) 2:00-2:30 Wrap-Up and Simulation Contest Winners ( PDF , video ) Aaron Fisher", "title": "Wednesday, October 20"}, {"location": "workshop21/#simulation-and-visualization-contest", "text": "The 2021 MFEM Workshop featured a simulation and visualization contest. The submitted entries can be viewed in the gallery .", "title": "Simulation and Visualization Contest"}, {"location": "workshop21/#virtual-backgrounds", "text": "We invite workshop participants to use the virtual backgrounds designed for this event. 
Click each image to enlarge, then right-click to save locally.", "title": "Virtual Backgrounds"}, {"location": "workshop21/#organizing-committee", "text": "Holly Auten \u250a Aaron Fisher \u250a Tzanio Kolev \u250a Will Pazner \u250a Mark Stowell", "title": "Organizing Committee"}, {"location": "workshop22/", "text": "MFEM Community Workshop October 25, 2022 Virtual Meeting Speakers' slides are linked in the agenda below. Read the article about the workshop on LLNL's Computing website. Watch the video playlist of workshop presentations (linked individually below and available on the videos page ), and view contest submissions in the gallery . Overview The MFEM team is happy to announce the second MFEM Community Workshop, which will take place on October 25, 2022, virtually, using Zoom for video conferencing. The goal of the workshop is to foster collaboration among all MFEM users and developers, share the latest MFEM features with the broader community, deepen application engagements, and solicit feedback to guide future development directions for the project. For questions, please contact the meeting organizers at mfem@llnl.gov . Registration Registration closed on October 11th. Meeting format Depending on the interest and user feedback, the meeting will include the following elements: Project news and development updates from the MFEM team An overview of the latest features in MFEM-4.4 and GLVis-4.2 Contributed talks from application developers utilizing MFEM Roadmap discussion for future development Technical discussions in breakout rooms for Electromagnetics, Fluids, and Structural Mechanics applications See also the agenda for the previous 2021 MFEM workshop. Meeting format Workshop participants are encouraged to join the MFEM Community Slack workspace to communicate with other MFEM users and developers before, during and after the MFEM workshop.
The meeting activities will take place 7:40am-4:00pm Pacific Daylight Time (GMT-7): Tuesday, October 25 Time (PDT, GMT-7) Activity Presenter 7:40-8:00 Welcome & Overview ( PDF , video ) Aaron Fisher (LLNL) 8:00-8:20 The State of MFEM ( PDF , video ) Tzanio Kolev (LLNL) 8:20-8:40 Recent Developments ( PDF , video ) Veselin Dobrev (LLNL) 8:40-9:00 Break All 9:00-10:00 Talks, Session I (20 mins each) Chair: Will Pazner Ben Zwick (University of Western Australia) Solution of the Electroencephalography Forward Problem Using MFEM ( PDF , video ) Carlos Brito Pacheco (Universit\u00e9 Grenoble Alpes) Rodin: Lightweight and Modern C++17 Shape, Density and Topology Optimization Framework ( PDF , video ) Tobias Duswald (CERN | TUM) Solving Stochastic, Fractional PDEs with MFEM with Applications to Random Field Generation and Topology Optimization ( PDF , video ) 10:00-10:20 Break & Group Photo All Download a virtual background below 10:20-11:20 Talks, Session II (20 mins each) Chair: Socratis Petrides Alvaro Sanchez Villar (PPPL) MFEM Application to EM-wave Simulation in ECR Space Plasma Thrusters ( PDF , video ) Brian Young OpenParEM2D: A 2D Simulator for Guided Waves ( PDF , video ) Christina Migliore (MIT) The Development of the EM RF Edge Interactions Miniapp \u201cStix\u201d Using MFEM ( PDF , video ) 11:20-11:40 Break All 11:40-12:40 Talks, Session III (20 mins each) Chair: Aaron Fisher Will Pazner (PDX) High-Order Solvers + GPU Acceleration ( PDF , video ) Jorge-Luis Barrera (LLNL) Shape and Topology Optimization Powered by MFEM ( PDF , video ) Siu Wun Cheung (LLNL) Reduced Order Modeling for Finite Element Simulations through the Partnership of MFEM and libROM ( PDF , video ) 1:00-2:00 Talks, Session IV (20 mins each) Chair: Tzanio Kolev Devlin Hayduke (ReLogic) Project Minerva: Accelerated Deployment of MFEM Based Solvers in Large Scale Industrial Problems ( PDF , video ) Tim Brewer (Synthetik) blastFEM: A GPU-accelerated, Very High-performance and Energy-efficient Solver for Highly Compressible Flows ( PDF , video ) Adolfo Rodriguez (OpenSim) Using MFEM for Wellbore Stability Analysis ( PDF , video ) 2:00-2:20 Break All 2:20-2:40 MFEM AWS tutorial ( Instructions , video ) Julian Andrej (LLNL) 2:40-3:00 Wrap-up & Contest Winners ( PDF , video ) Aaron Fisher (LLNL) 3:00-4:00 Q&A Session MFEM team available on Zoom + Slack Simulation and Visualization Contest We will be holding a simulation and visualization contest open to all attendees. Participants can submit visualizations (images or videos) from MFEM-related simulations. The winner of the competition (selected by the organizing committee) will receive an MFEM T-shirt. We will also feature the images in the gallery . Here are the winners from the 2021 workshop: Dennis Ogiermann : Electric field in rabbit heart Tamas Horvath : Incompressible flow around rotating turbine To submit an entry in the contest, please fill out the Google form . Alternatively, you may email your submission to mfem@llnl.gov , including your name, institution, a short description of the simulation (the underlying physics, discretization, application details, etc.), and visualization software used (GLVis, ParaView, VisIt, etc.). Virtual Backgrounds We invite workshop participants to use the virtual backgrounds designed for this event. Click each image to enlarge, then right-click to save locally. 
Organizing Committee Holly Auten \u250a Aaron Fisher \u250a Tzanio Kolev \u250a Ketan Mittal \u250a Will Pazner \u250a Socratis Petrides Previous Workshops MFEM Community Workshop 2021", "title": "_Workshop22"}, {"location": "workshop22/#mfem-community-workshop", "text": "", "title": "MFEM Community Workshop"}, {"location": "workshop22/#october-25-2022", "text": "", "title": "October 25, 2022"}, {"location": "workshop22/#virtual-meeting", "text": "Speakers' slides are linked in the agenda below. Read the article about the workshop on LLNL's Computing website. Watch the video playlist of workshop presentations (linked individually below and available on the videos page ), and view contest submissions in the gallery .", "title": "Virtual Meeting"}, {"location": "workshop22/#overview", "text": "The MFEM team is happy to announce the second MFEM Community Workshop, which will take place on October 25, 2022, virtually, using Zoom for video conferencing. The goal of the workshop is to foster collaboration among all MFEM users and developers, share the latest MFEM features with the broader community, deepen application engagements, and solicit feedback to guide future development directions for the project. For questions, please contact the meeting organizers at mfem@llnl.gov .", "title": "Overview"}, {"location": "workshop22/#registration", "text": "Registration closed on October 11th.", "title": "Registration"}, {"location": "workshop22/#meeting-format", "text": "Depending on the interest and user feedback, the meeting will include the following elements: Project news and development updates from the MFEM team An overview of the latest features in MFEM-4.4 and GLVis-4.2 Contributed talks from application developers utilizing MFEM Roadmap discussion for future development Technical discussions in breakout rooms for Electromagnetics, Fluids, and Structural Mechanics applications See also the agenda for the previous 2021 MFEM workshop.", "title": "Meeting format"}, {"location": "workshop22/#meeting-format_1", "text": "Workshop participants are encouraged to join the MFEM Community Slack workspace to communicate with other MFEM users and developers before, during and after the MFEM workshop.
The meeting activities will take place 7:40am-4:00pm Pacific Daylight Time (GMT-7):", "title": "Meeting format"}, {"location": "workshop22/#tuesday-october-25", "text": "Time (PDT, GMT-7) Activity Presenter 7:40-8:00 Welcome & Overview ( PDF , video ) Aaron Fisher (LLNL) 8:00-8:20 The State of MFEM ( PDF , video ) Tzanio Kolev (LLNL) 8:20-8:40 Recent Developments ( PDF , video ) Veselin Dobrev (LLNL) 8:40-9:00 Break All 9:00-10:00 Talks, Session I (20 mins each) Chair: Will Pazner Ben Zwick (University of Western Australia) Solution of the Electroencephalography Forward Problem Using MFEM ( PDF , video ) Carlos Brito Pacheco (Universit\u00e9 Grenoble Alpes) Rodin: Lightweight and Modern C++17 Shape, Density and Topology Optimization Framework ( PDF , video ) Tobias Duswald (CERN | TUM) Solving Stochastic, Fractional PDEs with MFEM with Applications to Random Field Generation and Topology Optimization ( PDF , video ) 10:00-10:20 Break & Group Photo All Download a virtual background below 10:20-11:20 Talks, Session II (20 mins each) Chair: Socratis Petrides Alvaro Sanchez Villar (PPPL) MFEM Application to EM-wave Simulation in ECR Space Plasma Thrusters ( PDF , video ) Brian Young OpenParEM2D: A 2D Simulator for Guided Waves ( PDF , video ) Christina Migliore (MIT) The Development of the EM RF Edge Interactions Miniapp \u201cStix\u201d Using MFEM ( PDF , video ) 11:20-11:40 Break All 11:40-12:40 Talks, Session III (20 mins each) Chair: Aaron Fisher Will Pazner (PDX) High-Order Solvers + GPU Acceleration ( PDF , video ) Jorge-Luis Barrera (LLNL) Shape and Topology Optimization Powered by MFEM ( PDF , video ) Siu Wun Cheung (LLNL) Reduced Order Modeling for Finite Element Simulations through the Partnership of MFEM and libROM ( PDF , video ) 1:00-2:00 Talks, Session IV (20 mins each) Chair: Tzanio Kolev Devlin Hayduke (ReLogic) Project Minerva: Accelerated Deployment of MFEM Based Solvers in Large Scale Industrial Problems ( PDF , video ) Tim Brewer (Synthetik) blastFEM: A GPU-accelerated, Very High-performance and Energy-efficient Solver for Highly Compressible Flows ( PDF , video ) Adolfo Rodriguez (OpenSim) Using MFEM for Wellbore Stability Analysis ( PDF , video ) 2:00-2:20 Break All 2:20-2:40 MFEM AWS tutorial ( Instructions , video ) Julian Andrej (LLNL) 2:40-3:00 Wrap-up & Contest Winners ( PDF , video ) Aaron Fisher (LLNL) 3:00-4:00 Q&A Session MFEM team available on Zoom + Slack", "title": "Tuesday, October 25"}, {"location": "workshop22/#simulation-and-visualization-contest", "text": "We will be holding a simulation and visualization contest open to all attendees. Participants can submit visualizations (images or videos) from MFEM-related simulations. The winner of the competition (selected by the organizing committee) will receive an MFEM T-shirt. We will also feature the images in the gallery . Here are the winners from the 2021 workshop: Dennis Ogiermann : Electric field in rabbit heart Tamas Horvath : Incompressible flow around rotating turbine To submit an entry in the contest, please fill out the Google form . Alternatively, you may email your submission to mfem@llnl.gov , including your name, institution, a short description of the simulation (the underlying physics, discretization, application details, etc.), and visualization software used (GLVis, ParaView, VisIt, etc.).", "title": "Simulation and Visualization Contest"}, {"location": "workshop22/#virtual-backgrounds", "text": "We invite workshop participants to use the virtual backgrounds designed for this event. 
Click each image to enlarge, then right-click to save locally.", "title": "Virtual Backgrounds"}, {"location": "workshop22/#organizing-committee", "text": "Holly Auten \u250a Aaron Fisher \u250a Tzanio Kolev \u250a Ketan Mittal \u250a Will Pazner \u250a Socratis Petrides", "title": "Organizing Committee"}, {"location": "workshop22/#previous-workshops", "text": "MFEM Community Workshop 2021", "title": "Previous Workshops"}, {"location": "workshop23/", "text": "MFEM Community Workshop October 26, 2023 Virtual Meeting Speakers' slides are linked in the agenda below. Read the article about the workshop on LLNL's Computing website. Watch the video playlist of workshop presentations (linked individually below and available on the videos page ), and view contest submissions in the gallery . Overview The MFEM team is happy to announce the third MFEM Community Workshop, which will take place on October 26, 2023, virtually, using Zoom for videoconferencing. The goal of the workshop is to foster collaboration among all MFEM users and developers, share the latest MFEM features with the broader community, deepen application engagements, and solicit feedback to guide future development directions for the project. For questions, please contact the meeting organizers at mfem@llnl.gov . Registration Registration closed on October 19th. Meeting format Depending on the interest and user feedback, the meeting will include the following elements: Project news and development updates from the MFEM team An overview of the latest features in MFEM-4.5, MFEM-4.5.2 and MFEM-4.6 Contributed talks from application developers utilizing MFEM Roadmap discussion for future development See also the agenda for the previous 2022 and 2021 MFEM workshops. Workshop participants are encouraged to join the MFEM Community Slack workspace to communicate with other MFEM users and developers before, during and after the MFEM workshop. Agenda The meeting activities will take place 8:00am-4:00pm Pacific Daylight Time (GMT-7): Thursday, October 26 Time Activity Presenter 8:00-8:20 Welcome & Overview ( PDF , video ) Aaron Fisher (LLNL) 8:20-8:40 The State of MFEM ( PDF , video ) Tzanio Kolev (LLNL) 8:40-9:00 Recent Developments ( PDF , video ) Veselin Dobrev (LLNL) 9:00-9:20 Break Discussions on Slack 9:20-10:20 Session I (20 mins each) Chair: Will Pazner Sebastian Grimberg (Amazon Web Services) Palace: PArallel LArge-scale Computational Electromagnetics ( PDF , video ) Palace, for PArallel, LArge-scale Computational Electromagnetics, is a parallel finite element code for full-wave electromagnetics simulations based on the MFEM library. Palace is used at the AWS Center for Quantum Computing to perform large-scale 3D simulations of complex electromagnetics models and enable the design of quantum computing hardware. In this talk we will give an overview of the simulation capabilities of Palace as well as some recent developments for conforming and nonconforming adaptive mesh refinement, operator partial assembly, and GPU support. Jacob Lotz (Delft University of Technology) Computation and Reduced Order Modelling of Periodic Flows ( PDF , video ) Many types of periodic flows can be found in nature and industrial applications and their computation is expensive due to lengthy time simulations. Our work aims to reduce the cost of these computations. We solve periodic flows in a space-time domain in which both ends in time are periodic such that we only have to model one period. 
MFEM is used to discretise the space-time domain and solve our discretised system of equations. We apply a hyper-reduced Proper Orthogonal Decomposition Galerkin reduced order model to speed up our computations. During the presentation we show (results of) our full order model and our advances in the reduced order modelling. Boyan Lazarov (LLNL) Scalable Design and Optimization with MFEM ( PDF , video ) The talk aims to present recently added and ongoing code development facilitating the solution of shape and topology optimization problems. Both topology and shape optimization are gradient-based iterative algorithms aiming to find a material distribution that minimizes an objective and fulfills a set of constraints. Every optimization step includes a solution to a forward optimization problem, an evaluation of the objective and constraints, a solution to an adjoint problem associated with every objective or constraint, an evaluation of gradients, and an update of the design based on mathematical programming techniques. All these steps can be easily implemented and executed by using MFEM in a scalable manner, allowing the design and optimization of large-scale realistic industrial problems. Thus, the goal is to exemplify these features, highlight the techniques that simplify the implementation of new problems, and provide a glimpse into the future. 10:20-10:40 Break & Group Photo Download a virtual background below 10:40-11:40 Session II (5 mins each) Chair: Milan Holec Student Lightning Talks Part 1 ( video ) Shani Martinez Weissberg (Tel Aviv University) \u00b5FEA of a Rabbit Femur ( PDF ) Given the ethical and practical limitations of conducting preliminary medical studies on humans, New Zealand White (NZW) rabbits serve as a common model for treatment validation. An important such medical study is the prediction of the risk of fracture in femurs with metastatic bone tumors following radiation therapy and image-based treatment. For such studies, micro-computed tomography (\u00b5CT) scans of NZW rabbit femurs are essential for capturing the detailed bone architecture. These \u00b5CT scans are used to construct micro finite element models (\u00b5FEMs) of the femurs that are being virtually loaded to predict the mechanical response required for validation of the \u00b5FEMs via experiments on fresh frozen rabbit femurs. This presentation outlines the step-by-step process of creating patient-specific \u00b5FEMs of rabbit femurs using MFEM. The workflow spans from \u00b5CT imaging to segmentation and 3D reconstruction, culminating in the MFEM solution of a linear elastic problem with over 125 million degrees of freedom. Paul Moujaes (TU-Dortmund) Dissipation-Based Entropy Stabilization for Slope-Limited Discontinuous Galerkin Approximations of Hyperbolic Problems ( PDF ) Dissipation-based entropy stabilization for slope-limited DG-approximations of hyperbolic problems with a focus on the Euler equations. Alejandro Mu\u00f1oz (Universidad de Granada) Discontinuous Galerkin in the Time Domain for Maxwell\u2019s Equations ( PDF ) The Discontinuous Galerkin method is a type of finite element method which uses discontinuous basis functions, almost always piecewise polynomials. Through the use of MFEM, we aim to implement an explicit-scheme solver for Maxwell's equations, capable of solving 1D, 2D and 3D problems.
Thanks to the library's capabilities, we can focus on the implementation of operators and integrators while retaining the capacity to use multiple types of meshes with various element types and subsequent visualization through GLVis or ParaView. Bill Ellis (UKAEA) Comparing Thermo-Mechanical Solves in MOOSE and MFEM ( PDF ) Fusion energy requires confinement of a very hot plasma. Given these high temperatures, it is necessary to model how materials and components react in these environments. The Multiphysics Object-Oriented Simulation Environment (MOOSE) offers functionality to model the mechanical effects of these temperature fields. As MFEM is increasingly utilised for electromagnetic modelling in fusion, interest in the benefits of a purely MFEM workflow has arisen. This short talk aims to offer a comparison of the performance and stability of some thermal expansion problems in MFEM and MOOSE by modelling some fusion-relevant components. Student Lightning Talks Part 2 ( video ) Alexander Mote (Oregon State University) A Neural Network Surrogate Model for Nonlocal Thermal Flux Calculations ( PDF ) Mathematically, a neural network can produce a prediction of thermal flux in a plasma physics simulation as much as 1,000,000 times faster than it could be calculated computationally. Using a dataset of MFEM simulations, we were able to train a neural network to predict nonlocal thermal flux within a 1D2V ICF simulation with 99.3% accuracy. This model was then used to evolve temperature over time in a similar simulation setup, demonstrating accurate nonlocal heat transport properties useful to experimenters. Amit Rotem (Virginia Tech) GPU Acceleration of IPDG in MFEM ( PDF ) This talk will present the new partial assembly implementation of the DGDiffusion bilinear form integrator. The partial assembly implementation uses sum factorization and can be compiled with CUDA to gain a substantial speedup. In the second half of the talk, an example solving the Wave Equation will be presented. Josiah Brown (Relogic Research) Project Minerva ( PDF ) MFEM is a very fast solver for structural problems thanks to its efficient implementation and parallel capability, but because it is for the most part strictly a C++ library driven by user-written C++ code, it can be difficult to create a structural mesh. A material solver like Abaqus, though slow in solving for a solution, has many visual aids for creating a structural mesh, making it very user friendly. Relogic has created a C++ code that takes Abaqus input data, parses it, generates a mesh file, and then runs MFEM on this data. This program allows one to create a structural mesh in Abaqus and solve it in MFEM; this was done in hopes of making MFEM more user friendly and accessible. Mike Pozulp (UC Berkeley) An Implicit Monte Carlo Acceleration Scheme ( PDF ) This is a joint research project with Terry Haut to use Monte Carlo to compute a linear form arising in one of Sam Olivier's DG discretizations of radiation diffusion that Olivier described in his PhD thesis and implemented using MFEM. We are investigating the impact of the Monte Carlo noise on the radiation diffusion solution quality.
11:40-12:00 Break Discussions on Slack 12:00-1:00 Session III (20 mins each) Chair: Tzanio Kolev Syun'ichi Shiraiwa (PPPL) Radio-Frequency Wave Simulation in Hot Magnetized Plasma using Differential Operator for Non-Local Conductivity Response ( PDF , video ) In high-temperature plasmas, the dielectric response to the RF fields is caused by freely moving charged particles, which naturally makes such a response non-local and, correspondingly, the Maxwell wave problem becomes an integro-differential equation. A differential form of the dielectric operator, based on the small k\u22a5\u03c1 expansion, is widely used. However, such operators typically include only up to second-order terms, and thus their use is limited to waves that satisfy k\u22a5\u03c1 < 1. We propose an alternative approach to construct a dielectric operator, which includes all-order finite Larmor radius effects without explicitly containing higher order derivatives. We use a rational approximation of the plasma dielectric tensor in the wave number space, in order to yield a differential operator acting on the dielectric current (J). The 1D O-X-B mode-conversion of the electron Bernstein wave in the non-relativistic Maxwellian plasma was modeled using this approach. Agreement with analytic calculations is found, along with conservation of the wave energy carried by the Poynting flux and electron thermal motion (\u201csloshing\u201d). The connection between our construction method and the superposition of Green\u2019s functions for these screened Poisson equations is presented. An approach to extending the operator to a multi-dimensional setting will also be discussed. Tamas Horvath (Oakland University) Implementation of Hybridizable Discontinuous Galerkin Methods via the HDG Branch ( PDF , video ) In this talk, we present the HDG branch, which was initially developed for HDG discretizations of advection-diffusion problems. Recent updates have made the branch highly adaptable for various applications, allowing a flexible implementation of HDG for many different PDEs. We showcase these enhancements and provide insights into their versatile usage across different problems. Yohann Dudouit (LLNL) Empowering MFEM Using libCEED: Features and Performance Analysis ( PDF , video ) This presentation will begin with an overview of the features introduced to MFEM through the integration of libCEED. We will particularly emphasize capabilities that are distinct from native MFEM functionalities, marking an enhancement in the software's suite of tools, such as support for simplices, handling of mixed meshes, and support for p-adaptivity. The presentation will conclude by showcasing benchmarks for various problems executed on different HPC architectures, illustrating the performance gains and efficiencies achieved through the libCEED integration. 1:00-1:20 Break Discussions on Slack 1:20-2:20 Session IV (20 mins each) Chair: Ketan Mittal Zhang Chunyu (Sun Yat-Sen University) Homogenized Energy Theory for Solution of Elasticity Problems with Consideration of Higher-Order Microscopic Deformations ( PDF , video ) Classical continuum mechanics faces difficulties in solving problems involving highly inhomogeneous deformations. The proposed theory investigates the impact of high-order microscopic deformation on the modeling of material behaviors and provides a refined interpretation of strain gradients through the averaged strain energy density.
Only one scale parameter, i.e., the size of the Representative Volume Element(RVE), is required by the proposed theory. By employing the variational approach and the Augmented Lagrangian Method(ALM), the governing equations for deformation as well as the numerical solution procedure are derived. It is demonstrated that the homogenized energy theory offers plausible explanations and reasonable predictions for the problems yet unsolved by the classical theory such as the size effect of deformation and the stress singularity at the crack tip. The concept of averaged strain energy proves to be more suitable for describing the intricate mechanical behavior of materials. And high order partial differential equations can be effectively solved by the ALM by introducing supplementary variables to lower the highest order of the equations. Eric Chin (LLNL) Contact Constraint Enforcement Using the Tribol Interface Physics Library ( PDF , video ) In this talk, we will discuss recent additions to the Tribol interface physics library to simplify MPI parallel contact constraint enforcement in large deformation, implicit and explicit continuum solid mechanics simulations using MFEM. Tribol is an open-source software package available on GitHub (https://github.com/LLNL/Tribol) and includes tools for contact detection, state-of-the-art Lagrangian contact methods such as common plane and mortar, and various enforcement techniques such as penalty and Lagrange multiplier. Additionally, Tribol recently added a domain redecomposer for coalescing proximal contact pairs on a single rank. Tribol\u2019s features are designed to interact seamlessly with MFEM, and other codes that use MFEM, with native support for MFEM data structures such as ParMesh, ParGridFunction, and HypreParMatrix. We highlight the simplicity of adding Tribol features to an MFEM-based code by looking at integration with Serac: an open-source implicit nonlinear thermal-structural simulation code (https://github.com/LLNL/serac). Milan Holec (LLNL) Deterministic Transport MFEM-Miniapp: Advancing Fidelity of Fusion Energy Simulations ( PDF , video ) We introduce a new multi-dimensional discretization in MFEM enabling efficient high-order phase-space simulations of various types of Boltzmann transport. In terms of a generalized form of the standard discrete ordinate SN method for the phase-space, we carefully design discrete analogs obeying important continuous properties such as conservation of energy, preservation of positivity, preservation of the diffusion limit of transport, preservation of symmetry leading to rays-effect mitigation, and other laws of physics. Finally, we show how to apply this new phase-space MFEM feature to increase the fidelity of modeling of fusion energy experiments. 2:20-2:40 Break Discussions on Slack 2:40-3:00 Wrap-up & Contest Winners ( PDF , video ) Aaron Fisher (LLNL) 3:00-4:00 Q&A Session MFEM team available on Zoom + Slack Simulation and Visualization Contest We will be holding a simulation and visualization contest open to all attendees. Participants can submit visualizations (images or videos) from MFEM-related simulations. The winner of the competition (selected by the organizing committee) will receive an MFEM T-shirt. We will also feature the images in the gallery . 
Here are the winners from the 2022 workshop: Ben Zwick : Electric field generated by a current dipole source in epilepsy patient Tobias Duswald : Topology-optimized heat sink Will Pazner : Magnetic field computed with GPU-accelerated LOR solvers To submit an entry in the contest, please fill out the Google form . Alternatively, you may email your submission to mfem@llnl.gov , including your name, institution, a short description of the simulation (the underlying physics, discretization, application details, etc.), and visualization software used (GLVis, ParaView, VisIt, etc.). Virtual Backgrounds We invite workshop participants to use the virtual backgrounds designed for this event. Click each image to enlarge, then right-click to save locally. Organizing Committee Holly Auten \u250a Aaron Fisher \u250a Milan Holec \u250a Tzanio Kolev \u250a Ketan Mittal \u250a Will Pazner \u250a Socratis Petrides \u250a Vladimir Tomov Previous Workshops MFEM Community Workshop 2022 MFEM Community Workshop 2021", "title": "_Workshop23"}, {"location": "workshop23/#mfem-community-workshop", "text": "", "title": "MFEM Community Workshop"}, {"location": "workshop23/#october-26-2023", "text": "", "title": "October 26, 2023"}, {"location": "workshop23/#virtual-meeting", "text": "Speakers' slides are linked in the agenda below. Read the article about the workshop on LLNL's Computing website. Watch the video playlist of workshop presentations (linked individually below and available on the videos page ), and view contest submissions in the gallery .", "title": "Virtual Meeting"}, {"location": "workshop23/#overview", "text": "The MFEM team is happy to announce the third MFEM Community Workshop, which will take place on October 26, 2023, virtually, using Zoom for videoconferencing. The goal of the workshop is to foster collaboration among all MFEM users and developers, share the latest MFEM features with the broader community, deepen application engagements, and solicit feedback to guide future development directions for the project. For questions, please contact the meeting organizers at mfem@llnl.gov .", "title": "Overview"}, {"location": "workshop23/#registration", "text": "Registration closed on October 19th.", "title": "Registration"}, {"location": "workshop23/#meeting-format", "text": "Depending on the interest and user feedback, the meeting will include the following elements: Project news and development updates from the MFEM team An overview of the latest features in MFEM-4.5, MFEM-4.5.2 and MFEM-4.6 Contributed talks from application developers utilizing MFEM Roadmap discussion for future development See also the agenda for the previous 2022 and 2021 MFEM workshops. 
Workshop participants are encouraged to join the MFEM Community Slack workspace to communicate with other MFEM users and developers before, during and after the MFEM workshop.", "title": "Meeting format"}, {"location": "workshop23/#agenda", "text": "The meeting activities will take place 8:00am-4:00pm Pacific Daylight Time (GMT-7):", "title": "Agenda"}, {"location": "workshop23/#thursday-october-26", "text": "Time Activity Presenter 8:00-8:20 Welcome & Overview ( PDF , video ) Aaron Fisher (LLNL) 8:20-8:40 The State of MFEM ( PDF , video ) Tzanio Kolev (LLNL) 8:40-9:00 Recent Developments ( PDF , video ) Veselin Dobrev (LLNL) 9:00-9:20 Break Discussions on Slack 9:20-10:20 Session I (20 mins each) Chair: Will Pazner Sebastian Grimberg (Amazon Web Services) Palace: PArallel LArge-scale Computational Electromagnetics ( PDF , video )", "title": "Thursday, October 26"}, {"location": "workshop23/#simulation-and-visualization-contest", "text": "We will be holding a simulation and visualization contest open to all attendees. Participants can submit visualizations (images or videos) from MFEM-related simulations. The winner of the competition (selected by the organizing committee) will receive an MFEM T-shirt. We will also feature the images in the gallery . Here are the winners from the 2022 workshop: Ben Zwick : Electric field generated by a current dipole source in epilepsy patient Tobias Duswald : Topology-optimized heat sink Will Pazner : Magnetic field computed with GPU-accelerated LOR solvers To submit an entry in the contest, please fill out the Google form . Alternatively, you may email your submission to mfem@llnl.gov , including your name, institution, a short description of the simulation (the underlying physics, discretization, application details, etc.), and visualization software used (GLVis, ParaView, VisIt, etc.).", "title": "Simulation and Visualization Contest"}, {"location": "workshop23/#virtual-backgrounds", "text": "We invite workshop participants to use the virtual backgrounds designed for this event. Click each image to enlarge, then right-click to save locally.", "title": "Virtual Backgrounds"}, {"location": "workshop23/#organizing-committee", "text": "Holly Auten \u250a Aaron Fisher \u250a Milan Holec \u250a Tzanio Kolev \u250a Ketan Mittal \u250a Will Pazner \u250a Socratis Petrides \u250a Vladimir Tomov", "title": "Organizing Committee"}, {"location": "workshop23/#previous-workshops", "text": "MFEM Community Workshop 2022 MFEM Community Workshop 2021", "title": "Previous Workshops"}, {"location": "howto/assembly_levels/", "text": "HowTo: Use partial assembly and matrix-free assembly MFEM provides different levels of assembly for mfem::BilinearForm , mfem::MixedBilinearForm , mfem::DiscreteLinearOperator , and mfem::NonlinearForm based on the operator decomposition: These different levels of assembly are: LEGACY, in the case of a mfem::BilinearForm LEGACY corresponds to a fully assembled form, i.e. a global sparse matrix in MFEM, Hypre or PETSc format. In the case of a mfem::NonlinearForm LEGACY corresponds to an operator that is fully evaluated on the fly. The LEGACY assembly level is ALWAYS performed on the host. FULL, fully assembled form, i.e. a global sparse matrix in MFEM format. This assembly is compatible with device execution, and therefore the sparse matrix is assembled on device if available. This corresponds to storing the whole A = G^T B^T D B G operator as a sparse matrix.
ELEMENT, Form assembled at element level, which computes and stores dense element matrices. This corresponds to storing the element-local dense matrices A_E = B^T D B. This format allows some access to the matrix entries, while also providing a data format that is more friendly with GPU architectures. PARTIAL, Partially-assembled form, which computes and stores data only at quadrature points. This corresponds to storing only the quadrature point values D; this format results in significantly faster computations and less storage usage compared to formats that store matrices. Only the diagonal entries of the operator are accessible. NONE, \"Matrix-free\" form that computes all of its action on-the-fly without any substantial storage. In this case D is computed on the fly; this format is also significantly faster than the matrix formats, but is currently slower than partial assembly due to the increased number of computations. However, in the case of operators that need to be reassembled frequently this assembly level might be faster than partial assembly by skipping any reassembly steps. The different assembly levels are accessed through the following unified interface: AssemblyLevel assembly_level = ...; a->SetAssemblyLevel(assembly_level); where a is either an mfem::BilinearForm , mfem::MixedBilinearForm , mfem::DiscreteLinearOperator , or mfem::NonlinearForm . Assembly levels and backend device configuration MFEM integrates three backends that interact with the assembly levels, namely the RAJA backend, the OCCA backend, and the libCEED backend. Backends are accessible by configuring the mfem::Device accordingly. Device Configuration cpu Default CPU backend: sequential execution on each MPI rank. omp OpenMP backend. Enabled when MFEM_USE_OPENMP = YES. cuda CUDA backend. Enabled when MFEM_USE_CUDA = YES. hip HIP backend. Enabled when MFEM_USE_HIP = YES. raja-cpu RAJA CPU backend: sequential execution on each MPI rank. Enabled when MFEM_USE_RAJA = YES. raja-omp RAJA OpenMP backend. Enabled when MFEM_USE_RAJA = YES and MFEM_USE_OPENMP = YES. raja-cuda RAJA CUDA backend. Enabled when MFEM_USE_RAJA = YES and MFEM_USE_CUDA = YES. raja-hip RAJA HIP backend. Enabled when MFEM_USE_RAJA = YES and MFEM_USE_HIP = YES. occa-cpu OCCA CPU backend: sequential execution on each MPI rank. Enabled when MFEM_USE_OCCA = YES. occa-omp OCCA OpenMP backend. Enabled when MFEM_USE_OCCA = YES. occa-cuda OCCA CUDA backend. Enabled when MFEM_USE_OCCA = YES and MFEM_USE_CUDA = YES. ceed-cpu CEED CPU backend. GPU backends can still be used, but with expensive memory transfers. Enabled when MFEM_USE_CEED = YES. ceed-cuda CEED CUDA backend working together with the CUDA backend. Enabled when MFEM_USE_CEED = YES and MFEM_USE_CUDA = YES. NOTE: The current default libCEED CUDA backend is non-deterministic! ceed-hip CEED HIP backend working together with the HIP backend. Enabled when MFEM_USE_CEED = YES and MFEM_USE_HIP = YES. debug Debug backend: host memory is READ/WRITE protected while a device is in use. It allows to test the \"device\" code-path (using separate host/device memory pools and host <-> device transfers) without any GPU hardware. As 'DEBUG' is sometimes used as a macro, _DEVICE has been added to avoid conflicts.
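As an illustration of how the configuration strings in the table above are typically used, here is a minimal sketch in the style of the MFEM examples, where the device string is taken from a command-line option. The option name, default value, and the bare program structure are illustrative assumptions rather than part of a documented interface.

```cpp
#include "mfem.hpp"
#include <iostream>
using namespace mfem;

int main(int argc, char *argv[])
{
   // Illustrative default; any string from the table above is valid,
   // e.g. "cpu", "cuda", "raja-omp", "occa-cuda", "ceed-cpu", "debug".
   const char *device_config = "cpu";

   OptionsParser args(argc, argv);
   args.AddOption(&device_config, "-d", "--device",
                  "Device configuration string, see the table above.");
   args.Parse();
   if (!args.Good()) { args.PrintUsage(std::cout); return 1; }

   // Configure the device once, before creating meshes, vectors, or forms.
   Device device(device_config);
   device.Print(); // report which backends were actually enabled

   return 0;
}
```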
It is also possible to request the backend of a backend, for instance if we want to use the /gpu/cuda/shared backend of libCEED, one can specify this with the following syntax: mfem::Device device(\"ceed-cuda:/gpu/cuda/shared\"); Device support The native MFEM backend and the RAJA backend support the same features and Integrators. However, the OCCA backend, and the libCEED backend each offer different features, and support different Integrators with different performance characteristics. Supported Integrators native MFEM OCCA backend libCEED backend Mass Integrator \u2705 \u2705 \u2705 Vector Mass Integrator \u2705 \u274c \u2705 Vector FE Mass Integrator \u2705 \u274c \u274c Convection Integrator \u2705 \u274c \u2705 Non-linear Convection Integrator \u2705 \u274c \u2705 Diffusion Integrator \u2705 \u2705 \u2705 Vector Diffusion Integrator \u2705 \u274c \u2705 DGTrace Integrator \u2705 \u274c \u274c Mixed Vector Gradient Integrator \u2705 \u274c \u274c Mixed Vector Curl Integrator \u2705 \u274c \u274c Mixed Vector Weak Curl Integrator \u2705 \u274c \u274c Gradient Integrator \u2705 \u274c \u274c Vector Divergence Integrator \u2705 \u274c \u274c Vector FE Divergence Integrator \u2705 \u274c \u274c Curl Curl Integrator \u2705 \u274c \u274c Div Div Integrator \u2705 \u274c \u274c Features native MFEM OCCA backend libCEED backend Tensor elements support \u2705 \u2705 \u2705 Simplices support \u274c \u274c \u2705 Mixed elements support \u274c \u274c \u2705 Assembly: None \u274c \u274c \u2705 Assembly: Partial \u2705 \u2705 \u2705 Assembly: Element \u2705 \u274c \u274c Assembly: Full \u2705 \u274c \u274c", "title": "HowTo: Use partial assembly and matrix-free assembly"}, {"location": "howto/assembly_levels/#howto-use-partial-assembly-and-matrix-free-assembly", "text": "MFEM provides different levels of assembly for mfem::BilinearForm , mfem::MixedBilinearForm , mfem::DiscreteLinearOperator , and mfem::NonlinearForm based on the operator decomposition: These different levels of assembly are: LEGACY, in the case of a mfem::BilinearForm LEGACY corresponds to a fully assembled form, i.e. a global sparse matrix in MFEM, Hypre or PETSc format. In the case of a mfem::NonlinearForm LEGACY corresponds to an operator that is fully evaluated on the fly. The LEGACY assembly level is ALWAYS performed on the host. FULL, fully assembled form, i.e. a global sparse matrix in MFEM format. This assembly is compatible with device execution, and therefore the sparse matrix is assembled on device if available. This corresponds to storing the whole A = G^T B^T D B G operator as a sparse matrix. ELEMENT, Form assembled at element level, which computes and stores dense element matrices. This corresponds to storing the element-local dense matrices A_E = B^T D B. This format allows some access to the matrix entries, while also providing a data format that is more friendly with GPU architectures. PARTIAL, Partially-assembled form, which computes and stores data only at quadrature points. This corresponds to storing only the quadrature point values D; this format results in significantly faster computations and less storage usage compared to formats that store matrices. Only the diagonal entries of the operator are accessible. NONE, \"Matrix-free\" form that computes all of its action on-the-fly without any substantial storage. In this case D is computed on the fly; this format is also significantly faster than the matrix formats, but is currently slower than partial assembly due to the increased number of computations.
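To make the assembly-level interface concrete, below is a condensed sketch modeled on MFEM's ex1p example: a Poisson problem assembled with AssemblyLevel::PARTIAL and solved with CG plus an OperatorJacobiSmoother, which only needs the operator diagonal that partial assembly exposes. The mesh size, polynomial order, device string, and solver tolerances are illustrative assumptions.

```cpp
#include "mfem.hpp"
using namespace mfem;

int main(int argc, char *argv[])
{
   Mpi::Init(argc, argv);
   Hypre::Init();
   Device device("cpu");                     // e.g. "cuda" if MFEM was built with CUDA

   Mesh serial_mesh = Mesh::MakeCartesian3D(8, 8, 8, Element::HEXAHEDRON);
   ParMesh mesh(MPI_COMM_WORLD, serial_mesh);
   H1_FECollection fec(3, mesh.Dimension());  // third-order H1 space
   ParFiniteElementSpace fes(&mesh, &fec);

   Array<int> ess_tdof_list;
   Array<int> ess_bdr(mesh.bdr_attributes.Max());
   ess_bdr = 1;
   fes.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

   ConstantCoefficient one(1.0);
   ParLinearForm b(&fes);
   b.AddDomainIntegrator(new DomainLFIntegrator(one));
   b.Assemble();

   ParGridFunction x(&fes);
   x = 0.0;

   ParBilinearForm a(&fes);
   a.AddDomainIntegrator(new DiffusionIntegrator(one));
   a.SetAssemblyLevel(AssemblyLevel::PARTIAL);  // only quadrature-point data is stored
   a.Assemble();

   OperatorPtr A;
   Vector B, X;
   a.FormLinearSystem(ess_tdof_list, x, b, A, X, B);

   // Only the operator action and its diagonal are available at this level,
   // so use a matrix-free-friendly preconditioner such as Jacobi.
   OperatorJacobiSmoother prec(a, ess_tdof_list);
   CGSolver cg(MPI_COMM_WORLD);
   cg.SetRelTol(1e-12);
   cg.SetMaxIter(2000);
   cg.SetPreconditioner(prec);
   cg.SetOperator(*A);
   cg.Mult(B, X);

   a.RecoverFEMSolution(X, b, x);
   return 0;
}
```

Switching the assembly level (or the device string) leaves the rest of the code unchanged, which is the point of the unified SetAssemblyLevel interface.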
However, in the case of operators that need to be reassembled frequently this assembly level might be faster than partial assembly by skipping any reassembly steps. The different assembly levels are accessed through the following unified interface: AssemblyLevel assembly_level = ...; a->SetAssemblyLevel(assembly_level); where a is either an mfem::BilinearForm , mfem::MixedBilinearForm , mfem::DiscreteLinearOperator , or mfem::NonlinearForm .", "title": "HowTo: Use partial assembly and matrix-free assembly"}, {"location": "howto/assembly_levels/#assembly-levels-and-backend-device-configuration", "text": "MFEM integrates three backends that interact with the assembly levels, namely the RAJA backend, the OCCA backend, and the libCEED backend. Backends are accessible by configuring the mfem::Device accordingly. Device Configuration cpu Default CPU backend: sequential execution on each MPI rank. omp OpenMP backend. Enabled when MFEM_USE_OPENMP = YES. cuda CUDA backend. Enabled when MFEM_USE_CUDA = YES. hip HIP backend. Enabled when MFEM_USE_HIP = YES. raja-cpu RAJA CPU backend: sequential execution on each MPI rank. Enabled when MFEM_USE_RAJA = YES. raja-omp RAJA OpenMP backend. Enabled when MFEM_USE_RAJA = YES and MFEM_USE_OPENMP = YES. raja-cuda RAJA CUDA backend. Enabled when MFEM_USE_RAJA = YES and MFEM_USE_CUDA = YES. raja-hip RAJA HIP backend. Enabled when MFEM_USE_RAJA = YES and MFEM_USE_HIP = YES. occa-cpu OCCA CPU backend: sequential execution on each MPI rank. Enabled when MFEM_USE_OCCA = YES. occa-omp OCCA OpenMP backend. Enabled when MFEM_USE_OCCA = YES. occa-cuda OCCA CUDA backend. Enabled when MFEM_USE_OCCA = YES and MFEM_USE_CUDA = YES. ceed-cpu CEED CPU backend. GPU backends can still be used, but with expensive memory transfers. Enabled when MFEM_USE_CEED = YES. ceed-cuda CEED CUDA backend working together with the CUDA backend. Enabled when MFEM_USE_CEED = YES and MFEM_USE_CUDA = YES. NOTE: The current default libCEED CUDA backend is non-deterministic! ceed-hip CEED HIP backend working together with the HIP backend. Enabled when MFEM_USE_CEED = YES and MFEM_USE_HIP = YES. debug Debug backend: host memory is READ/WRITE protected while a device is in use. It allows to test the \"device\" code-path (using separate host/device memory pools and host <-> device transfers) without any GPU hardware. As 'DEBUG' is sometimes used as a macro, _DEVICE has been added to avoid conflicts. It is also possible to request the backend of a backend, for instance if we want to use the /gpu/cuda/shared backend of libCEED, one can specify this with the following syntax: mfem::Device device(\"ceed-cuda:/gpu/cuda/shared\");", "title": "Assembly levels and backend device configuration"}, {"location": "howto/assembly_levels/#device-support", "text": "The native MFEM backend and the RAJA backend support the same features and Integrators. However, the OCCA backend, and the libCEED backend each offer different features, and support different Integrators with different performance characteristics. 
Supported Integrators native MFEM OCCA backend libCEED backend Mass Integrator \u2705 \u2705 \u2705 Vector Mass Integrator \u2705 \u274c \u2705 Vector FE Mass Integrator \u2705 \u274c \u274c Convection Integrator \u2705 \u274c \u2705 Non-linear Convection Integrator \u2705 \u274c \u2705 Diffusion Integrator \u2705 \u2705 \u2705 Vector Diffusion Integrator \u2705 \u274c \u2705 DGTrace Integrator \u2705 \u274c \u274c Mixed Vector Gradient Integrator \u2705 \u274c \u274c Mixed Vector Curl Integrator \u2705 \u274c \u274c Mixed Vector Weak Curl Integrator \u2705 \u274c \u274c Gradient Integrator \u2705 \u274c \u274c Vector Divergence Integrator \u2705 \u274c \u274c Vector FE Divergence Integrator \u2705 \u274c \u274c Curl Curl Integrator \u2705 \u274c \u274c Div Div Integrator \u2705 \u274c \u274c Features native MFEM OCCA backend libCEED backend Tensor elements support \u2705 \u2705 \u2705 Simplices support \u274c \u274c \u2705 Mixed elements support \u274c \u274c \u2705 Assembly: None \u274c \u274c \u2705 Assembly: Partial \u2705 \u2705 \u2705 Assembly: Element \u2705 \u274c \u274c Assembly: Full \u2705 \u274c \u274c", "title": "Device support"}, {"location": "howto/block_operators_matrices/", "text": "HowTo: Use Block Operators and Matrices Some problem formulations are defined in block form and need to be implemented in terms of block operators. Examples include saddle point problems ( ex5.cpp ), DPG discretization ( ex8.cpp ), and problems with multiple variables ( ex19.cpp ). The resulting discretized system is expressed in terms of block operators and vectors, which may be distributed in parallel. This article gives an overview of working with block operators and their matrix representations. It should be noted in general that operators and matrices are appropriate in different situations, regardless of whether they are in block form. Generally, it is preferable to have an operator and not its matrix representation when only its action is needed and can be computed faster than matrix assembly, or when matrix storage requires too much memory. For example, this is the case for high-order FEM, when partial assembly (PA) is used for fast operator multiplication on GPUs without storing matrices. Also, matrix storage becomes increasingly expensive (more nonzeros per row) as FEM order increases, which is another reason to avoid matrix assembly and matrix-based preconditioners for very high order. On the other hand, for low-order FEM, matrices are necessary for example in order to use AMG preconditioning (e.g. with hypre). Thus there are cases where operators or matrices are preferable, in general and in block form. First, it is important to understand how a single, monolithic operator or matrix is distributed in parallel in MFEM. Vectors, matrices, and operators are distributed consistently with hypre, which decomposes the rows of a parallel matrix ( HypreParMatrix , see mfem/hypre.hpp ) but stores all columns of the locally owned rows on each MPI rank. On each process, a Vector or HypreParVector is of size equal to the number of locally owned rows, and a HypreParMatrix stores the local rows. The parallel communication necessary for matrix-vector multiplication is performed in hypre. Similarly, an Operator should act on a Vector of local entries, perform any necessary communication, and compute a Vector of local entries. In the case of block operators and vectors, a Vector stores the local entries for each block contiguously in its data. Offsets define where each block begins and ends. 
For example, in ex5.cpp , there are two blocks for spaces R_space and W_space , and block_offsets is of size three, storing offsets 0 , R_space->GetVSize() , and R_space->GetVSize() + W_space->GetVSize() . The class BlockOperator (see mfem/linalg/blockoperator.hpp ) can be used to form one operator from operators defining the blocks. It operates on vectors of local entries, stored block-wise. Similarly, a monolithic HypreParMatrix can be constructed, using the function HypreParMatrixFromBlocks (see hypre.hpp ), from blocks defined as HypreParMatrix pointers or null pointers for empty blocks. The blocks may be rectangular, but their sizes must be consistent. Scalar coefficients can optionally be used. The monolithic matrix will have copies of the entries from the blocks, so it can be modified or destroyed independently of the blocks. The unit test mfem/tests/unit/linalg/test_matrix_rectangular.cpp provides an example that compares a BlockOperator and a monolithic HypreParMatrix . As noted above, it is not practical to have both an operator and a matrix, but this test illustrates the equivalence of the two approaches. The capability to form a monolithic matrix is available only for HypreParMatrix , not for the serial class SparseMatrix .", "title": "HowTo: Use Block Operators and Matrices"}, {"location": "howto/block_operators_matrices/#howto-use-block-operators-and-matrices", "text": "Some problem formulations are defined in block form and need to be implemented in terms of block operators. Examples include saddle point problems ( ex5.cpp ), DPG discretization ( ex8.cpp ), and problems with multiple variables ( ex19.cpp ). The resulting discretized system is expressed in terms of block operators and vectors, which may be distributed in parallel. This article gives an overview of working with block operators and their matrix representations. It should be noted in general that operators and matrices are appropriate in different situations, regardless of whether they are in block form. Generally, it is preferable to have an operator and not its matrix representation when only its action is needed and can be computed faster than matrix assembly, or when matrix storage requires too much memory. For example, this is the case for high-order FEM, when partial assembly (PA) is used for fast operator multiplication on GPUs without storing matrices. Also, matrix storage becomes increasingly expensive (more nonzeros per row) as FEM order increases, which is another reason to avoid matrix assembly and matrix-based preconditioners for very high order. On the other hand, for low-order FEM, matrices are necessary for example in order to use AMG preconditioning (e.g. with hypre). Thus there are cases where operators or matrices are preferable, in general and in block form. First, it is important to understand how a single, monolithic operator or matrix is distributed in parallel in MFEM. Vectors, matrices, and operators are distributed consistently with hypre, which decomposes the rows of a parallel matrix ( HypreParMatrix , see mfem/hypre.hpp ) but stores all columns of the locally owned rows on each MPI rank. On each process, a Vector or HypreParVector is of size equal to the number of locally owned rows, and a HypreParMatrix stores the local rows. The parallel communication necessary for matrix-vector multiplication is performed in hypre. Similarly, an Operator should act on a Vector of local entries, perform any necessary communication, and compute a Vector of local entries. 
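As a hedged sketch of the ex5.cpp-style layout described above (R_space, W_space, and the operator pointers M, Bt, B, Mmat, Btmat, Bmat are assumed to already exist; they are not defined in this article):

```cpp
// Sketch of the block layout described above. R_space and W_space are
// existing finite element spaces; M, Bt, B are pointers to operators that
// have already been assembled for the corresponding blocks (assumed names).
Array<int> block_offsets(3);
block_offsets[0] = 0;
block_offsets[1] = R_space->GetVSize();
block_offsets[2] = W_space->GetVSize();
block_offsets.PartialSum();            // offsets become 0, n_R, n_R + n_W

BlockVector x(block_offsets), rhs(block_offsets);

BlockOperator darcyOp(block_offsets);
darcyOp.SetBlock(0, 0, M);             // each block is any mfem::Operator*
darcyOp.SetBlock(0, 1, Bt);
darcyOp.SetBlock(1, 0, B);
darcyOp.Mult(x, rhs);                  // block action on locally owned entries

// A monolithic parallel matrix can be built from HypreParMatrix blocks, with
// empty blocks passed as null pointers (Mmat, Btmat, Bmat are assumed to be
// HypreParMatrix pointers; this requires a parallel, hypre-enabled build).
Array2D<HypreParMatrix*> blocks(2, 2);
blocks(0, 0) = Mmat;  blocks(0, 1) = Btmat;
blocks(1, 0) = Bmat;  blocks(1, 1) = nullptr;
HypreParMatrix *monolithic = HypreParMatrixFromBlocks(blocks);
```

The monolithic matrix copies the block entries, so it can be used (for example, to build an AMG preconditioner) independently of the BlockOperator.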
In the case of block operators and vectors, a Vector stores the local entries for each block contiguously in its data. Offsets define where each block begins and ends. For example, in ex5.cpp , there are two blocks for spaces R_space and W_space , and block_offsets is of size three, storing offsets 0 , R_space->GetVSize() , and R_space->GetVSize() + W_space->GetVSize() . The class BlockOperator (see mfem/linalg/blockoperator.hpp ) can be used to form one operator from operators defining the blocks. It operates on vectors of local entries, stored block-wise. Similarly, a monolithic HypreParMatrix can be constructed, using the function HypreParMatrixFromBlocks (see hypre.hpp ), from blocks defined as HypreParMatrix pointers or null pointers for empty blocks. The blocks may be rectangular, but their sizes must be consistent. Scalar coefficients can optionally be used. The monolithic matrix will have copies of the entries from the blocks, so it can be modified or destroyed independently of the blocks. The unit test mfem/tests/unit/linalg/test_matrix_rectangular.cpp provides an example that compares a BlockOperator and a monolithic HypreParMatrix . As noted above, it is not practical to have both an operator and a matrix, but this test illustrates the equivalence of the two approaches. The capability to form a monolithic matrix is available only for HypreParMatrix , not for the serial class SparseMatrix .", "title": "HowTo: Use Block Operators and Matrices"}, {"location": "howto/build-systems/", "text": "HowTo: Build and test MFEM, syntax for each build-system MFEM has two build systems: - Makefile. We will refer to it as \"original Makefile\" - CMake, an out-of-source build system generator, that will generate a build-system in Makefile or another language like Ninja . The most important difference between the two is that CMake being an out-of-source build system, it will require the creation of a build directory, and all commands will be run from there. The original Makefile system will build the code in source from the root directory. The original Makefile cd make config [...options...] make all -j 8 # Build everything make test # Run the tests CMake + Makefile (option 1: explicit makefile) cd mkdir build cd build cmake [...options...] .. # Note the \"..\" make -j 8 # Build MFEM make tests -j 8 # Build unit-tests make examples -j 8 # Build examples make miniapps -j 8 # Build miniapps make test # Run the tests CMake + Makefile (option 2: generic build, cmake wrappers) cd mkdir build cd build cmake [...options...] .. # Note the \"..\" cmake --build . -j 8 # Build MFEM cmake --build . --target tests -j 8 # Build unit-tests cmake --build . --target examples -j 8 # Build examples cmake --build . --target miniapps -j 8 # Build miniapps ctest --output-on-failure -T test # Run the tests CMake + Ninja (this is not what we are used to doing, but it works) cd mkdir build cd build cmake [...options...] -GNinja .. # Note the \"..\" cmake --build . -j 8 # Build MFEM cmake --build . --target tests -j 8 # Build unit-tests cmake --build . --target examples -j 8 # Build examples cmake --build . --target miniapps -j 8 # Build miniapps ctest --output-on-failure -T test # Run the tests", "title": "HowTo: Build and test MFEM, syntax for each build-system"}, {"location": "howto/build-systems/#howto-build-and-test-mfem-syntax-for-each-build-system", "text": "MFEM has two build systems: - Makefile. 
We will refer to it as the \"original Makefile\". - CMake, an out-of-source build-system generator that can generate a build system for Makefile or another tool such as Ninja . The most important difference between the two is that CMake, being an out-of-source build system, requires the creation of a build directory, and all commands are run from there. The original Makefile system builds the code in source from the root directory. The original Makefile cd make config [...options...] make all -j 8 # Build everything make test # Run the tests CMake + Makefile (option 1: explicit makefile) cd mkdir build cd build cmake [...options...] .. # Note the \"..\" make -j 8 # Build MFEM make tests -j 8 # Build unit-tests make examples -j 8 # Build examples make miniapps -j 8 # Build miniapps make test # Run the tests CMake + Makefile (option 2: generic build, cmake wrappers) cd mkdir build cd build cmake [...options...] .. # Note the \"..\" cmake --build . -j 8 # Build MFEM cmake --build . --target tests -j 8 # Build unit-tests cmake --build . --target examples -j 8 # Build examples cmake --build . --target miniapps -j 8 # Build miniapps ctest --output-on-failure -T test # Run the tests CMake + Ninja (this is not what we are used to doing, but it works) cd mkdir build cd build cmake [...options...] -GNinja .. # Note the \"..\" cmake --build . -j 8 # Build MFEM cmake --build . --target tests -j 8 # Build unit-tests cmake --build . --target examples -j 8 # Build examples cmake --build . --target miniapps -j 8 # Build miniapps ctest --output-on-failure -T test # Run the tests", "title": "HowTo: Build and test MFEM, syntax for each build-system"}, {"location": "howto/custom_precond/", "text": "HowTo: Create a custom preconditioner using only matrix actions For many problems of interest, off-the-shelf preconditioners are insufficient and something more tailored to the equations of interest is required. MFEM has a flexible approach to defining preconditioners enabled by deriving from the existing Solver class and overriding the necessary methods to define the action. See the following example: // Define a custom solver class that can be used as a preconditioner inside a larger solver // Here we will define the example preconditioner: P x = M x + Ainv x class SumSolver : public mfem::Solver { private: const mfem::Operator *M; //Since these are Operators only their const mfem::Operator *Ainv; //actions need to be defined public: SumSolver(const mfem::Operator *M_, const mfem::Operator *Ainv_) : mfem::Solver(M_->Height(), M_->Width(), false) { MFEM_VERIFY(M_->Height() == Ainv_->Height(), \"operator heights must match\"); MFEM_VERIFY(M_->Width() == Ainv_->Width(), \"operator widths must match\"); M = M_; Ainv = Ainv_; }; // Define the action of the Solver // y = P x = M x + Ainv x void Mult(const mfem::Vector &x, mfem::Vector &y) const { y = 0.0; mfem::Vector M_x(M->Height()); mfem::Vector Ainv_x(Ainv->Height()); M->Mult(x, M_x); // M_x = M x Ainv->Mult(x, Ainv_x); // Ainv_x = Ainv x y.Add(1.0, M_x); // y += M_x y.Add(1.0, Ainv_x); // y += Ainv_x }; void SetOperator(const Operator &op) { M = &op;}; }; In this example we defined a new MFEM solver that can be applied as a preconditioner within a larger solve. In this case we demonstrated an example where we have a matrix M, the action of the inverse of a matrix A, and we want to define the action of a preconditioner that is the sum of the two. In this case we cannot simply sum the matrices to form the new preconditioner because we don't have access to the elements of Ainv.
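As a usage illustration (hedged: the operator A, the operators Mop and Ainv, the vectors b and x, and the choice of GMRES are assumptions for this sketch, not part of the article), the custom solver above can be handed to any MFEM Krylov solver as its preconditioner:

```cpp
// Assumed setup: 'A' is the system operator, 'Mop' and 'Ainv' are the two
// operators whose actions are summed by SumSolver, and 'b', 'x' are vectors
// of compatible size. SumSolver is the class defined above (note that it
// must derive publicly from mfem::Solver to be passed as a Solver).
SumSolver sum_prec(&Mop, &Ainv);

mfem::GMRESSolver gmres;            // or mfem::CGSolver for an SPD system
gmres.SetOperator(A);
gmres.SetPreconditioner(sum_prec);  // each iteration applies P = M + Ainv
gmres.SetRelTol(1e-8);
gmres.SetMaxIter(500);
gmres.SetPrintLevel(1);
gmres.Mult(b, x);
```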
As you can see this approach is quite flexible and can be utilized to create custom preconditioners of arbitrary complexity.", "title": "HowTo: Create a custom preconditioner using only matrix actions"}, {"location": "howto/custom_precond/#howto-create-a-custom-preconditioner-using-only-matrix-actions", "text": "For many problems of interest the off the shelf preconditioners are insufficient and something more tailored to the equations of interest is required. MFEM has a flexible approach to defining preconditioners enabled by deriving from the existing Solver class and overriding the necessary methods to define the action. See the following example: // Define a custom solver class that can be used as the preconditioner for a broader problem solvers // Here we will define the example preconditioner: P x = M x + Ainv x class SumSolver : mfem::Solver { private: const mfem::Operator *M; //Since these are Operators only their const mfem::Operator *Ainv; //actions need to be defined public: SumSolver(const mfem::Operator *M_, const mfem::Operator *Ainv_) : mfem::Solver(M_->Height(), M_->Width(), false) { MFEM_VERIFY(M_->Height() == Ainv_->Height()); MFEM_VERIFY(M_->Width() == Ainv_->Width()); M = M_; Ainv = Ainv_; }; // Define the action of the Solver // y = P x = M x + Ainv x void Mult(const mfem::Vector &x, mfem::Vector &y) const { y = 0.0; mfem::Vector M_x(M->Height()); mfem::Vector Ainv_x(Ainv->Height()); M->Mult(x, M_x); // M_x = A x Ainv->Mult(x, Ainv_x); // Ainv_x = Ainv x y.Add(1.0, M_x); // y += M_x y.Add(1.0, Ainv_x); // y += Ainv_x }; void SetOperator(const Operator &op) { M = &op;}; }; In this example we defined a new MFEM solver that can be applied as a preconditioner for a broader solution. In this case we demonstrated an example where we have a matrix M, the action of the inverse of a matrix A, and we want to define the action of a preconditioner that is the sum of the two. In this case we cannot simply sum the matrices to form the new preconditioner because we don't have access to the elements of Ainv. As you can see this approach is quite flexible and can be utilized to create custom preconditioners of arbitrary complexity.", "title": "HowTo: Create a custom preconditioner using only matrix actions"}, {"location": "howto/element-local-global-numbering/", "text": "HowTo: Map between local element numbering and parallel global element numbering With MPI parallelization, a distributed mesh is represented by the ParMesh class. On each MPI rank, ParMesh stores data about the local elements owned by the rank. The parallel partitioning of elements is non-overlapping. The local elements have local indexing from 0 to Mesh::GetNE() - 1 . Globally, the elements are numbered sequentially with respect to the MPI ranks and in their local order, starting from 0, so that the global index of an element is the local index plus an offset for its owning rank. The ParMesh class provides functions for mapping between local and global element indices, as described below. These functions support conforming or AMR meshes. Getting the global index corresponding to a local index For a local index local_element_num of an element owned by the current MPI rank, the global index is returned by ParMesh::GetGlobalElementNum(local_element_num) . Getting the local index corresponding to a global index For a global index global_element_num of an element owned by the current MPI rank, the local index is returned by ParMesh::GetLocalElementNum(global_element_num) . 
The return value is -1 if the element is owned by a different MPI rank. Getting all global indices of locally owned elements ParMesh::GetGlobalElementIndices sets an Array of the global indices of all the locally owned elements on the current MPI rank. The indices set here could alternatively be obtained by calling ParMesh::GetGlobalElementNum(i) for all i from 0 to GetNE() - 1 . Getting global indices of other mesh entities A related topic is how to get global indices for other mesh entities, meaning vertices, edges, or faces. We use the convention that in 1D, edges and faces are actually vertices, and in 2D, faces are actually edges. Whereas elements have local and global indices that are used by ParFiniteElementSpace to determine ordering of local and global finite element degrees of freedom, there are no global indices for the other mesh entities (vertices, edges, and faces). That is, the other mesh entities only have local indices in MFEM, defined in the Mesh class. Although there is no definition or meaning to global indices for the other mesh entities, the user may wish to have global indices for the user's own purposes, and the capability to generate them is provided by the following functions in the ParMesh class: GetGlobalVertexIndices GetGlobalEdgeIndices GetGlobalFaceIndices It should be noted that AMR meshes are currently not supported by these functions (only conforming meshes). Also, since these global indices are meaningless to the MFEM library, their definition is arbitrary and based on lowest-order finite element spaces (H1 for vertices, Nedelec for edges, Raviart-Thomas for faces). There is no implementation of maps between local and global indices for these other mesh entities.", "title": "HowTo: Map between local element numbering and parallel global element numbering"}, {"location": "howto/element-local-global-numbering/#howto-map-between-local-element-numbering-and-parallel-global-element-numbering", "text": "With MPI parallelization, a distributed mesh is represented by the ParMesh class. On each MPI rank, ParMesh stores data about the local elements owned by the rank. The parallel partitioning of elements is non-overlapping. The local elements have local indexing from 0 to Mesh::GetNE() - 1 . Globally, the elements are numbered sequentially with respect to the MPI ranks and in their local order, starting from 0, so that the global index of an element is the local index plus an offset for its owning rank. The ParMesh class provides functions for mapping between local and global element indices, as described below. These functions support conforming or AMR meshes.", "title": "HowTo: Map between local element numbering and parallel global element numbering"}, {"location": "howto/element-local-global-numbering/#getting-the-global-index-corresponding-to-a-local-index", "text": "For a local index local_element_num of an element owned by the current MPI rank, the global index is returned by ParMesh::GetGlobalElementNum(local_element_num) .", "title": "Getting the global index corresponding to a local index"}, {"location": "howto/element-local-global-numbering/#getting-the-local-index-corresponding-to-a-global-index", "text": "For a global index global_element_num of an element owned by the current MPI rank, the local index is returned by ParMesh::GetLocalElementNum(global_element_num) . 
The return value is -1 if the element is owned by a different MPI rank.", "title": "Getting the local index corresponding to a global index"}, {"location": "howto/element-local-global-numbering/#getting-all-global-indices-of-locally-owned-elements", "text": "ParMesh::GetGlobalElementIndices sets an Array of the global indices of all the locally owned elements on the current MPI rank. The indices set here could alternatively be obtained by calling ParMesh::GetGlobalElementNum(i) for all i from 0 to GetNE() - 1 .", "title": "Getting all global indices of locally owned elements"}, {"location": "howto/element-local-global-numbering/#getting-global-indices-of-other-mesh-entities", "text": "A related topic is how to get global indices for other mesh entities, meaning vertices, edges, or faces. We use the convention that in 1D, edges and faces are actually vertices, and in 2D, faces are actually edges. Whereas elements have local and global indices that are used by ParFiniteElementSpace to determine ordering of local and global finite element degrees of freedom, there are no global indices for the other mesh entities (vertices, edges, and faces). That is, the other mesh entities only have local indices in MFEM, defined in the Mesh class. Although there is no definition or meaning to global indices for the other mesh entities, the user may wish to have global indices for the user's own purposes, and the capability to generate them is provided by the following functions in the ParMesh class: GetGlobalVertexIndices GetGlobalEdgeIndices GetGlobalFaceIndices It should be noted that AMR meshes are currently not supported by these functions (only conforming meshes). Also, since these global indices are meaningless to the MFEM library, their definition is arbitrary and based on lowest-order finite element spaces (H1 for vertices, Nedelec for edges, Raviart-Thomas for faces). There is no implementation of maps between local and global indices for these other mesh entities.", "title": "Getting global indices of other mesh entities"}, {"location": "howto/findpts/", "text": "HowTo: Use FindPointsGSLIB for high-order interpolation FindPointsGSLIB provides a wrapper for high-order interpolation via findpts , a set of routines that were developed as a part of the gather-scatter library, gslib . While findpts was originally developed for interpolation of grid functions in H1 for meshes with quadrilateral or hexahedron elements, FindPointsGSLIB also enables interpolation of functions in L2, H(div), H(curl) on meshes with triangle and tetrahedral elements. The key steps of using FindPointsGSLIB , as demonstrated in the gslib miniapps are: First, setup the internal data structures required by the gslib library for the mesh of interest. This is done by using the FindPointsGSLIB::Setup(mesh) method with the desired mfem::Mesh or mfem::ParMesh . Next, use the FindPointsGSLIB::FindPoints(xyz) method with the mfem::Vector xyz of physical-space coordinates where we seek to interpolate the desired grid function. At this step, findpts determines the computational coordinates ( q j = {e j , r j , p j }) for each point. These computational coordinates include the element number (e j in mfem::Array gsl_elem ) in which the point is found, the reference-space coordinates ( r j in mfem::Vector gsl_ref ) inside e j , and the MPI rank that the element is partitioned on (p j in mfem::Array gsl_proc ). 
FindPoints also returns a code ( mfem::Array gsl_code ) to indicate whether the point was found inside an element ( gsl_code[j] = 0 ), on the edge/face of an element ( gsl_code[j] = 1 ), or not found at all ( gsl_code[j] = 2 ) for the case when the point is located outside the mesh. Note that if a point ( x j ) is located outside the mesh within a certain tolerance, findpts tries to find the closest location on the mesh surface (i.e. gsl_code[j] = 1 ) and returns the distance ( mfem::Vector gsl_dist ) between the sought point and the point found on the mesh surface. Finally, use FindPointsGSLIB::Interpolate(u, ui) to interpolate the desired mfem::(Par)GridFunction u at the physical-space coordinates given by xyz and return the interpolated values in mfem::Vector ui . If u is in H1 , we use findpts for interpolation. Otherwise, we use findpts only for communicating computational coordinates of each point across MPI ranks, followed by MFEM's internal methods ( mfem::GridFunction::GetValues ) for interpolation. Note: the FindPointsGSLIB::FreeData() method must be used before the program is terminated to free up the memory set up internally by findpts during the setup phase. For convenience, the FindPointsGSLIB class provides methods such as FindPointsGSLIB::Interpolate(mesh, xyz, u, ui) which combine the three steps described above (setup, finding the computational coordinates of the sought points, and interpolation) into a single method. Please see the class definition for more details. Application of FindPointsGSLIB The gslib miniapps demonstrate several applications of FindPointsGSLIB : findpts/pfindpts miniapps demonstrate high-order interpolation of a function in H1 , L2 , H(div) , or H(curl) at an arbitrary set of points in physical space. field-diff miniapp demonstrates comparison of grid functions defined on two different meshes. field-interp miniapp demonstrates transfer of a grid function from one mesh onto another mesh. schwarz_ex1/ex1p miniapp demonstrates the use of an overlapping Schwarz method to solve the Poisson problem on overlapping meshes. Here, we use FindPointsGSLIB to transfer the solution between overlapping meshes to enforce Dirichlet conditions at the inter-domain boundaries. cht Navier miniapp demonstrates how a conjugate heat transfer problem can be solved with the incompressible Navier-Stokes equations and the unsteady heat equation solved on different grids. Here, FindPointsGSLIB is used to transfer the solution from one mesh to another to couple the two PDEs.", "title": "HowTo: Use FindPointsGSLIB for high-order interpolation"}, {"location": "howto/findpts/#howto-use-findpointsgslib-for-high-order-interpolation", "text": "FindPointsGSLIB provides a wrapper for high-order interpolation via findpts , a set of routines that were developed as a part of the gather-scatter library, gslib . While findpts was originally developed for interpolation of grid functions in H1 for meshes with quadrilateral or hexahedron elements, FindPointsGSLIB also enables interpolation of functions in L2, H(div), H(curl) on meshes with triangular and tetrahedral elements. The key steps of using FindPointsGSLIB , as demonstrated in the gslib miniapps, are: First, set up the internal data structures required by the gslib library for the mesh of interest. This is done by using the FindPointsGSLIB::Setup(mesh) method with the desired mfem::Mesh or mfem::ParMesh .
Next, use the FindPointsGSLIB::FindPoints(xyz) method with the mfem::Vector xyz of physical-space coordinates where we seek to interpolate the desired grid function. At this step, findpts determines the computational coordinates ( q j = {e j , r j , p j }) for each point. These computational coordinates include the element number (e j in mfem::Array gsl_elem ) in which the point is found, the reference-space coordinates ( r j in mfem::Vector gsl_ref ) inside e j , and the MPI rank that the element is partitioned on (p j in mfem::Array gsl_proc ). FindPoints also returns a code ( mfem::Array gsl_code ) to indicate weather the point was found inside an element ( gsl_code[j] = 0 ), on the edge/face of an element ( gsl_code[j] = 1 ), or not found at all ( gsl_code[j] = 2 ) for the case when the point is located outside the mesh. Note that if a point ( x j ) is located outside the mesh within a certain tolerance, findpts tries to find the closest location on the mesh surface (i.e. gsl_code[j] = 1 ) and returns the distance ( mfem::Vector gsl_dist ) between the sought point and the point found on the mesh surface. Finally, use FindPointsGSLIB::Interpolate(u, ui) to interpolate the desired mfem::(Par)GridFunction u at the physical-space coordinates given by xyz and return the interpolated values in mfem::Vector ui . If u is in H1 , we use findpts for interpolation. Otherwise, we use findpts only for communicating computational coordinates of each point across MPI ranks, followed by MFEM's internal methods ( mfem::GridFunction::GetValues ) for interpolation. Note , the FindPointsGSLIB::FreeData() method must be used before the program is terminated to free up the memory setup internally by findpts during the setup phase. For convenience, FindPointsGSLIB class provides methods such as FindPointsGSLIB::Interpolate(mesh, xyz, u, ui) which combines the three steps described above (setup, finding the computational coordinates of the sought points, and interpolation) into a single method. Please see the class definition for more details.", "title": "HowTo: Use FindPointsGSLIB for high-order interpolation"}, {"location": "howto/findpts/#application-of-findpointsgslib", "text": "The gslib miniapps demonstrate several application of FindPointsGSLIB : findpts/pfindpts miniapps demonstrate high-order interpolation of a function in H1 , L2 , H(div) , or H(curl) at an arbitrary set of points in physical space. field-diff miniapp demonstrates comparison of grid functions defined on two different meshes. field-interp miniapp demonstrates transfer of a grid function from one mesh on to another mesh. schwarz_ex1/ex1p miniapp demonstrates use of overlapping Schwarz method to solve the Poisson problem in overlapping meshes. Here, we use FindPointsGSLIB to transfer solution between overlapping meshes to enforce Dirichlet conditions at the inter-domain boundaries. cht Navier miniapp demonstrates how a conjugate heat transfer problem can be solve with the incompressible Navier-Stokes equations and the unsteady heat equation solved on different grids. Here, FindPointsGSLIB is used to transfer the solution from one mesh to another to couple the two PDEs.", "title": "Application of FindPointsGSLIB"}, {"location": "howto/howto-index/", "text": "HowTo Articles This is a growing collection of \"how-to\" articles on topics encountered by our users in practice. Please feel free to suggest a missing topic! \ud83d\udd0e Search the articles... 
Build, Install, and Test Overview of the MFEM Build and Test System Install MFEM with Spack Finite Elements Using Partial and Matrix-free Assembly Meshes Navigating Mesh Connectivity Parallel Element Numbering Finding Local Element Coordinates of Physical Points Working with Nonconforming Meshes for AMR Linear Algebra Using Block Operators and Matrices Solvers Using a Custom Preconditioner Boundaries Compute Outer Normals of Boundary Elements Using Periodic Boundaries", "title": "HowTo Articles"}, {"location": "howto/howto-index/#howto-articles", "text": "This is a growing collection of \"how-to\" articles on topics encountered by our users in practice. Please feel free to suggest a missing topic! \ud83d\udd0e Search the articles...", "title": "HowTo Articles"}, {"location": "howto/howto-index/#build-install-and-test", "text": "Overview of the MFEM Build and Test System Install MFEM with Spack", "title": "Build, Install, and Test"}, {"location": "howto/howto-index/#finite-elements", "text": "Using Partial and Matrix-free Assembly", "title": "Finite Elements"}, {"location": "howto/howto-index/#meshes", "text": "Navigating Mesh Connectivity Parallel Element Numbering Finding Local Element Coordinates of Physical Points Working with Nonconforming Meshes for AMR", "title": "Meshes"}, {"location": "howto/howto-index/#linear-algebra", "text": "Using Block Operators and Matrices", "title": "Linear Algebra"}, {"location": "howto/howto-index/#solvers", "text": "Using a Custom Preconditioner", "title": "Solvers"}, {"location": "howto/howto-index/#boundaries", "text": "Compute Outer Normals of Boundary Elements Using Periodic Boundaries", "title": "Boundaries"}, {"location": "howto/install-with-spack/", "text": "HowTo: Use Spack to install MFEM. MFEM can be built with make or CMake . But MFEM has also been packaged to be built with Spack . What does it mean to use Spack, and why use it? Packaging vs. Build-System In concrete terms, packaging with Spack here means that: Spack will interface with the build system: no make or CMake command required. Build options are specified as \"variants\". There may not be a variant for every option or combination of options allowed by building from source \"manually\". Spack will also install the dependencies, which may also be activated using \"variants\". (Note that so far, the MFEM Spack package interfaces with MFEM makefile build system, not CMake.) The first takeaway is that using Spack may not allow as much configuration as possible manually but will manage the installation of dependencies. When to use Spack? Spack is a from source package manager. So Spack will allow you to build mfem from source using the underlying makefile build system. To manage your libraries for development Spack is typically used to deploy software. You may use it to install MFEM among other libraries in a shared location for developers using MFEM as a dependency: all will have access to the same configuration and you will be able to reproduce this installation at will. But you will be limited to a predefined set of versions. Typically the releases and the latest state of master branch. In that sense Spack is not meant to be use to develop in MFEM a priori . (For those looking to use Spack to develop in MFEM, see Spack workflow feature ) To install dependencies automatically Spack will automatically build the dependencies, which can be especially valuable to get started quickly with an advanced configuration of MFEM. 
This is a great way to get students started quickly with a configuration that would require much too many steps otherwise. To reproduce a vetted configuration Spack is used in GitLab CI context to automate the build on dependencies, easily update those, and improve reproducibility. For more details about this, explore MFEM Uberenv configuration , and the documentation mentioned in the README. How to use Spack to install MFEM. Using Spack is easy to start with, complex when it comes to getting exactly what you want, and can be tedious to maintain on the long term. Best practices for a long-term sane relationship with Spack Unless you want to develop in Spack, those rules will help keeping things under control: Use a single Spack instance. Spack has environments that mimic the way python environments work to allow you to partition things so that all the packages installed do not show up in a big mess. Stick to a release of Spack. Packages evolve along with Spack source code. It means that updating Spack will likely affect reproducing the build of specs already installed. Expect to reinstall everything when you update Spack. Using Spack to install MFEM on LLNL's Lassen and Quartz systems Those machines are used to test MFEM. The tests running in GitLab CI use Spack to manage MFEM dependencies. The configuration used for those tests can be reproduced exactly. This guarantees to get a working installation through Spack. Unfortunately, only a handful of configurations are being tested. But this is a good starting point to explore further. See MFEM Uberenv configuration for more details.", "title": "HowTo: Use Spack to install MFEM."}, {"location": "howto/install-with-spack/#howto-use-spack-to-install-mfem", "text": "MFEM can be built with make or CMake . But MFEM has also been packaged to be built with Spack .", "title": "HowTo: Use Spack to install MFEM."}, {"location": "howto/install-with-spack/#what-does-it-mean-to-use-spack-and-why-use-it", "text": "", "title": "What does it mean to use Spack, and why use it?"}, {"location": "howto/install-with-spack/#packaging-vs-build-system", "text": "In concrete terms, packaging with Spack here means that: Spack will interface with the build system: no make or CMake command required. Build options are specified as \"variants\". There may not be a variant for every option or combination of options allowed by building from source \"manually\". Spack will also install the dependencies, which may also be activated using \"variants\". (Note that so far, the MFEM Spack package interfaces with MFEM makefile build system, not CMake.) The first takeaway is that using Spack may not allow as much configuration as possible manually but will manage the installation of dependencies.", "title": "Packaging vs. Build-System"}, {"location": "howto/install-with-spack/#when-to-use-spack", "text": "Spack is a from source package manager. So Spack will allow you to build mfem from source using the underlying makefile build system.", "title": "When to use Spack?"}, {"location": "howto/install-with-spack/#to-manage-your-libraries-for-development", "text": "Spack is typically used to deploy software. You may use it to install MFEM among other libraries in a shared location for developers using MFEM as a dependency: all will have access to the same configuration and you will be able to reproduce this installation at will. But you will be limited to a predefined set of versions. Typically the releases and the latest state of master branch. 
In that sense Spack is not meant to be used to develop in MFEM a priori . (For those looking to use Spack to develop in MFEM, see Spack workflow feature )", "title": "To manage your libraries for development"}, {"location": "howto/install-with-spack/#to-install-dependencies-automatically", "text": "Spack will automatically build the dependencies, which can be especially valuable to get started quickly with an advanced configuration of MFEM. This is a great way to get students started quickly with a configuration that would otherwise require many more steps.", "title": "To install dependencies automatically"}, {"location": "howto/install-with-spack/#to-reproduce-a-vetted-configuration", "text": "Spack is used in a GitLab CI context to automate the build of dependencies, easily update those, and improve reproducibility. For more details about this, explore MFEM Uberenv configuration , and the documentation mentioned in the README.", "title": "To reproduce a vetted configuration"}, {"location": "howto/install-with-spack/#how-to-use-spack-to-install-mfem", "text": "Using Spack is easy to start with, complex when it comes to getting exactly what you want, and can be tedious to maintain over the long term.", "title": "How to use Spack to install MFEM."}, {"location": "howto/install-with-spack/#best-practices-for-a-long-term-sane-relationship-with-spack", "text": "Unless you want to develop in Spack, these rules will help keep things under control: Use a single Spack instance. Spack has environments that mimic the way Python environments work to allow you to partition things so that all the packages installed do not show up in a big mess. Stick to a release of Spack. Packages evolve along with Spack source code. It means that updating Spack will likely affect reproducing the build of specs already installed. Expect to reinstall everything when you update Spack.", "title": "Best practices for a long-term sane relationship with Spack"}, {"location": "howto/install-with-spack/#using-spack-to-install-mfem-on-llnls-lassen-and-quartz-systems", "text": "Those machines are used to test MFEM. The tests running in GitLab CI use Spack to manage MFEM dependencies. The configuration used for those tests can be reproduced exactly. This guarantees a working installation through Spack. Unfortunately, only a handful of configurations are being tested. But this is a good starting point to explore further. See MFEM Uberenv configuration for more details.", "title": "Using Spack to install MFEM on LLNL's Lassen and Quartz systems"}, {"location": "howto/nav-mesh-connectivity/", "text": "HowTo: Navigate the connections between mesh primitives with Table objects Elements, faces, edges, and vertices are all connected to each other to form a cohesive mesh. In some lower-level applications it may be necessary to navigate the MFEM mesh through these connections to find the mesh primitives you need. Each of the mesh primitives has its own numbering and MFEM represents the connections between these primitives in Table objects ( general/table.hpp ) that are stored in the Mesh object ( mesh/mesh.hpp ).
You can access these Table objects through seven different accessor methods of the Mesh class: Mesh Method Dimension Mesh object owns data const Table &ElementToElementTable() 1D, 2D, 3D Yes const Table &ElementToFaceTable() 1D, 2D, 3D Yes const Table &ElementToEdgeTable() 1D, 2D, 3D Yes Table *GetFaceEdgeTable() 3D Yes Table *GetEdgeVertexTable() 1D, 2D, 3D Yes Table *GetVertexToElementTable() 1D, 2D, 3D No Table *GetFaceToElementTable() 1D, 2D, 3D No The interfaces for these accessors are unfortunately not uniform, and care must be taken to use them properly. For example, the Mesh object owns the data for most, but not all of them, so care must be taken to delete the Table objects returned by the last two. In addition, two of the methods are only defined in 3D because they use the strict definitions of faces and edges, while the others use the looser definition by letting the faces be edges in 2D and the edges be vertices in 1D. Once you have the table with the information you want, you can access it through the table methods as in the following example: const Table &elem_edge = mesh.ElementToEdgeTable(); int num_elems = mesh.GetNE(); for (int elem_id = 0; elem_id < num_elems; elem_id++) { int num_edges = elem_edge.RowSize(elem_id); const int *edges = elem_edge.GetRow(elem_id); for (int edgei = 0; edgei < num_edges; edgei++) { int edge_id = edges[edgei]; .... Do something with the edge ID .... } } Another useful method related to navigating mesh connections with these Table objects is the Transpose method. This method takes an A_to_B table and transposes it into a B_to_A table. Usage is as follows: Table &face_edge = *mesh.GetFaceEdgeTable(); Table edge_face; Transpose(face_edge, edge_face); int num_edges = mesh.GetNEdges(); for (int edge_id = 0; edge_id < num_edges; edge_id++) { .... }", "title": "HowTo: Navigate the connections between mesh primitives with Table objects"}, {"location": "howto/nav-mesh-connectivity/#howto-navigate-the-connections-between-mesh-primitives-with-table-objects", "text": "Elements, faces, edges, and vertices are all connected to each other to form a cohesive mesh. In some lower-level applications it may be necessary to navigate the MFEM mesh through these connections to find the mesh primitives you need. Each of the mesh primitives has its own numbering and MFEM represents the connections between these primitives in Table objects ( general/table.hpp ) that are stored in the Mesh object ( mesh/mesh.hpp ). You can access these Table objects through seven different accessor methods of the Mesh class: Mesh Method Dimension Mesh object owns data const Table &ElementToElementTable() 1D, 2D, 3D Yes const Table &ElementToFaceTable() 1D, 2D, 3D Yes const Table &ElementToEdgeTable() 1D, 2D, 3D Yes Table *GetFaceEdgeTable() 3D Yes Table *GetEdgeVertexTable() 1D, 2D, 3D Yes Table *GetVertexToElementTable() 1D, 2D, 3D No Table *GetFaceToElementTable() 1D, 2D, 3D No The interfaces for these accessors are unfortunately not uniform, and care must be taken to use them properly. For example, the Mesh object owns the data for most, but not all of them, so care must be taken to delete the Table objects returned by the last two. In addition, two of the methods are only defined in 3D because they use the strict definitions of faces and edges, while the others use the looser definition by letting the faces be edges in 2D and the edges be vertices in 1D.
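As a small, hedged illustration of the ownership caveat above (the surrounding mesh object and the loop body are assumptions for this sketch):

```cpp
// The last two accessors return tables that the Mesh does NOT own, so the
// caller is responsible for deleting them.
Table *vert_to_elem = mesh.GetVertexToElementTable();   // caller-owned
for (int v = 0; v < mesh.GetNV(); v++)
{
   const int *elems = vert_to_elem->GetRow(v);
   for (int i = 0; i < vert_to_elem->RowSize(v); i++)
   {
      int elem_id = elems[i];
      // ... do something with the element adjacent to vertex v ...
      (void)elem_id;
   }
}
delete vert_to_elem;   // required: the Mesh keeps no reference to this table

// By contrast, tables returned by (const) reference are owned by the Mesh:
const Table &elem_face = mesh.ElementToFaceTable();     // do not delete
```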
Once you have the table with the information you want you can access it through the table methods as in the following example: const Table &elem_edge = mesh.ElementToEdgeTable(); int num_elems = mesh.GetNE(); for (int elem_id = 0; ei < num_elems; elem_id++) { int num_edges = elem_edge.RowSize(elem_id); const int *edges = elem_edge.GetRow(elem_id); for (int edgei = 0; edgei < num_edges; edgei ++) { int edge_id = edges[edgei]; .... Do something with the edge ID .... } } Another useful method related to navigating mesh connections with these Table objects is the Transpose method. This method takes an A_to_B table and transposes it into a B_to_A table. Usage is as follows: Table &face_edge = *mesh.GetFaceEdgeTable(); Table edge_face; Transpose(face_edge, edge_face); int num_edges = mesh.GetNEdges(); for (int edge_id = 0; ei < num_edges; edge_id++) { .... }", "title": "HowTo: Navigate the connections between mesh primitives with Table objects"}, {"location": "howto/ncmesh/", "text": "HowTo: Nonconforming and AMR meshes The Mesh class provides basic element refinement capabilities: All elements may be refined uniformly with Mesh::UniformRefinement . Local refinement is supported, but only for simplex elements. The method Mesh::GeneralRefinement uses recursive bisection in this case. These basic refinement methods preserve mesh conformity, i.e., no hanging nodes are created. This also means that quadrilaterals and hexahedra cannot be refined locally by the Mesh class. For more advanced AMR, MFEM has the class NCMesh : Tensor product element refinement (quad, hex, prism) is supported, including anisotropic refinement. Hanging nodes are created and handled transparently. Triangles and tetrahedra use \"red\" (isotropic) refinement, also producing hanging nodes in this mode. Derefinement (coarsening) of previously refined elements is possible. In parallel, the mesh can be load balanced. The user does not interact directly with the NCMesh class \u2014 it is created behind the scenes, and the Mesh class in nonconforming mode, continually updated to contain the finest elements of the refinement hierarchy, still serves as an interface for the user and other MFEM classes. To switch to the nonconforming mode (or convert and existing conforming Mesh ), you need to call EnsureNCMesh , typically at the beginning after loading the mesh: Mesh *mesh = new Mesh(mesh_file, 1, 1); mesh->EnsureNCMesh(true); The boolean parameter, if true , forces simplex meshes to use nonconforming refinement (the default is false ). Nonconforming refinement Once the Mesh is in nonconforming mode, you can simply call Mesh::GeneralRefinement to locally refine a subset of elements: Array refinement_list; for (int i = 0; i < mesh->GetNE(); i++) { if (/*element i refinement condition*/) { refinement_list.Append(i); } } mesh->GeneralRefinement(refinement_list); The resulting hanging nodes will be treated transparently by the FiniteElementSpace and BilinearForm classes: FiniteElementSpace will internally construct a conforming interpolation matrix $P$, that when applied to a vector of unconstrained (\"true\") DOFs, will augment the vector with interpolated constrained DOFs. Once the linear system $Ax = b$ is assembled, BilinearForm::FormLinearSystem will eliminate constrained nodes by transforming the linear system to $P^TAPx = P^Tb$ (see ex1.cpp ). After the reduced system is solved, the conforming solution on all nodes is recovered as $y = P x$ with BilinearForm::RecoverFEMSolution . 
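Putting the pieces above together, here is a minimal sketch of one refine-and-resolve pass, loosely in the spirit of ex15.cpp; the error indicator ElementNeedsRefinement, the ess_bdr array, and the object names mesh, fespace, a, b, x are assumptions standing in for an existing setup:

```cpp
// Sketch: refine marked elements of a nonconforming mesh and re-solve,
// letting FiniteElementSpace and BilinearForm handle the hanging-node
// constraints through the conforming interpolation matrix P.
Array<int> marked;
for (int i = 0; i < mesh.GetNE(); i++)
{
   if (ElementNeedsRefinement(i))   // user-supplied indicator (assumption)
   {
      marked.Append(i);
   }
}
mesh.GeneralRefinement(marked);     // hanging nodes are created internally

// After refinement, the space, the forms, and the BC list must be updated.
fespace.Update();
x.Update();                         // GridFunction: interpolated to new space
a.Update();                         // BilinearForm
b.Update();                         // LinearForm
fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);
a.Assemble();
b.Assemble();

OperatorPtr A;
Vector B, X;
a.FormLinearSystem(ess_tdof_list, x, b, A, X, B);   // forms P^T A P and P^T b
// ... solve A X = B with any solver ...
a.RecoverFEMSolution(X, b, x);                      // recover x = P X
```

The same cycle works unchanged in parallel with ParMesh, ParFiniteElementSpace, and ParBilinearForm, as in ex6p and ex15p.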
Limiting the level of hanging nodes By default, MFEM does not limit the sizes of adjacent elements in nonconforming meshes. For some applications, it may be necessary to ensure that the refinement level of neighboring elements differs by at most one, for example. The optional parameter nc_limit of Mesh::GeneralRefinement can be used to control the maximum level of nonconformity. If nc_limit is greater than zero, the method will automatically perform additional refinements to make sure the difference of refinement levels of adjacent elements is at most nc_limit . Anisotropic refinement Uniquely, MFEM offers the capability to perform anisotropic refinement of tensor product elements in both 2D and 3D. The method Mesh::GeneralRefinement has two overloads, one taking a simple list of elements to refine (as seen above), and the other taking a list of struct Refinement { int index; char ref_type; } , where one can specify a refinement type for each element in the list: Array refinement_list; refinement_list.Append(Refinement(0, 2)); refinement_list.Append(Refinement(1, 4)); mesh->GeneralRefinement(refinement_list); This code will refine the first element (index 0) of the mesh in the Y direction only (provided it is a quad or hex element) and the second element (index 1) in the Z direction only. The directions are assumed in the element reference coordinates and are encoded as follows: Note that the refinement type is encoded as a 3-bit number, where bits 0, 1, 2 correspond to the X, Y, Z directions, respectively. Other element geometries allow fewer but similar refinement types: triangle (3), quadrilateral (1, 2, 3), tetrahedron (7), prism (3, 4, 7). In 3D meshes with anisotropic refinements it is easy to arrive at conflicting situations, where the refined faces of adjacent elements are not subsets of each other. For example, running the above code on a mesh with two hexahedra adjacent in the X direction will create an interface that cannot be constrained correctly. In such cases, MFEM will automatically adjust one side of the interface with additional refinements (called forced refinements) to ensure that the mesh remains a valid FEM mesh. In pathological cases the forced refinements may propagate. Using a reasonable nc_limit may reduce this effect. Nevertheless, a valid mesh is produced in all cases. Derefinement To coarsen elements, use the method Mesh::DerefineByError . The interface is different from refinement, because it is not possible to coarsen arbitrary groups of fine elements: it is only possible to reintroduce previously existing coarse elements by undoing their refinement (hence the term \"derefinement\"). Since one cannot supply the indices of elements that no longer exist in the Mesh class (the refinement trees are kept internal to NCMesh ), the method DerefineByError works indirectly by taking an array of \"error\" values corresponding to each element of the current Mesh . If the sum of error values of the children of some coarse element is below a supplied threshold, the children are removed and the coarse element is restored in Mesh . If the user specifies a nonzero nc_limit , care is taken not to derefine elements that are needed to keep the required level of nonconformity. Note: derefinement is not yet supported for meshes containing 3D anisotropic refinements. Parallel nonconforming meshes Just as the Mesh class has a parallel counterpart ( ParMesh ), so does the NCMesh class have a parallel descendant: ParNCMesh . 
The parallel class is again kept internal and the user can continue to interact with the standard ParMesh class (see examples ex1p , ex6p and ex15p ). The refinement hierarchy in parallel NC mode is fully distributed and scales to billions of elements and hundreds of thousands of MPI tasks. Ghost elements are automatically tracked by the ParNCMesh class, so that a parallel conforming interpolation matrix can be constructed by ParFiniteElementSpace . Depending on the assembly level, ParBilinearForm will either explicitly assemble the parallel $P^TAP$ system using the Hypre library, or the action of the $P$ matrix will be applied during solver iterations. Parallel refinement is still done through Mesh::GeneralRefinement inherited by the ParMesh class. The method takes local element indices and works the same as in serial. All parallel concerns such as keeping the ghost layers synchronized are handled internally in ParNCMesh . Note: parallel anisotropic refinement of 3D meshes is not supported yet. After each mesh operation (refinement, derefinement, load balancing) the ParMesh is updated to reflect the current parallel mesh state (minus the ghost elements, which are not exported to ParMesh ). Communication groups, used in conforming mode for reductions/broadcasts over parallel solution vectors, are approximated in the NC mode as if the mesh was cut along the nonconforming interfaces. Load balancing In conforming mode, a serial Mesh can only be partitioned statically (with METIS) when constructing a ParMesh . In nonconforming mode, the internal ParNCMesh class is capable of load balancing the distributed mesh at any time. This functionality is available to the user through ParMesh::Rebalance (see ex6p and ex15p ). The dynamic load balancing algorithm is based on partitioning a space-filling curve (SFC) that naturally arises when traversing the distributed refinement trees. Compared to spectral partitioners like METIS the partitions are not as high quality but the process is extremely fast and scales to hundreds of thousands of processors. For best results with SFC-based partitioning, one condition has to be met: the elements of the coarse Mesh from which the ParMesh is constructed need to be ordered, ideally as a sequence of face-neighbors. This makes it possible for ParNCMesh to order the leaves of all refinement trees into a global linear sequence, which when equipartitioned should produce compact (albeit not minimal surface) mesh partitions. Take for example a coarse mesh produced by the polar-nc miniapp. Except for two discontinuities, the elements are mostly ordered as a sequence of face-neighbors: When we start refining elements (in both serial and parallel), MFEM will try to keep the space-filling curve continuous by inserting local Hilbert curves in the refined areas (press Ctrl+O in GLVis to visualize the ordering curve): In a parallel computation, the global curve is then used for fast assignment of elements to MPI ranks. In the following run of ex15p , each processor is assigned the same number of elements (+/- one element). Note that the last partition is discontinuous due to a jump in ordering in the coarse mesh. This only affects the efficiency of MPI communication \u2014 the numerical results will be the same regardless of the partitioning. MFEM provides several methods to help with mesh ordering: Procedurally generated rectangular grids ( Mesh::MakeCartesian2D , Mesh::MakeCartesian3D and also MFEM INLINE mesh v1.0 files) are by default ordered along a pseudo-Hilbert curve. 
Note that even grid dimensions are recommended, as explained here . General unstructured meshes may be ordered by a spatial sort algorithm ( Mesh::GetHilbertElementOrdering ). This is a fast method that will leave a number of jumps in complex meshes, but it is still highly recommended over not ordering the mesh at all. High-quality orderings of general meshes can be obtained with the Gecko library, now included directly in MFEM and available as Mesh::GetGeckoElementOrdering . The optimization algorithm used is more costly than a simple spatial sort, but it should produce better orderings for meshes with complex geometries. Beware the exponential cost of increasing the window parameter. Large meshes should probably be ordered in a preprocessing step (you may use the mesh-explorer miniapp for that). Nonconforming mesh I/O Nonconforming meshes have their own file format MFEM NC mesh v1.0 , which supports all the additional internal structures (refinement trees, hanging nodes, etc.) and works for both serial and parallel NC meshes. The method ParMesh::ParPrint will automatically choose the right format and can be used to save and restart an AMR computation, as demonstrated in example ex6p . ParMesh::ParPrint should not be confused with the method ParMesh::Print , an analog of Mesh::Print , which is only suitable for visualization, as it uses the serial MFEM mesh v1.0 format and only adds the parallel shared faces to the output. MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "HowTo: Nonconforming and AMR meshes"}, {"location": "howto/ncmesh/#howto-nonconforming-and-amr-meshes", "text": "The Mesh class provides basic element refinement capabilities: All elements may be refined uniformly with Mesh::UniformRefinement . Local refinement is supported, but only for simplex elements. The method Mesh::GeneralRefinement uses recursive bisection in this case. These basic refinement methods preserve mesh conformity, i.e., no hanging nodes are created. This also means that quadrilaterals and hexahedra cannot be refined locally by the Mesh class. For more advanced AMR, MFEM has the class NCMesh : Tensor product element refinement (quad, hex, prism) is supported, including anisotropic refinement. Hanging nodes are created and handled transparently. Triangles and tetrahedra use \"red\" (isotropic) refinement, also producing hanging nodes in this mode. Derefinement (coarsening) of previously refined elements is possible. In parallel, the mesh can be load balanced. The user does not interact directly with the NCMesh class \u2014 it is created behind the scenes, and the Mesh class in nonconforming mode, continually updated to contain the finest elements of the refinement hierarchy, still serves as an interface for the user and other MFEM classes. 
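Pulling the calls described on this page together, here is a minimal sketch of a local refinement loop driven entirely through the Mesh interface. It is not code from the library or its examples: the estimate_error indicator, threshold, and max_amr_iterations are hypothetical placeholders standing in for whatever error estimation a given application uses.
Mesh *mesh = new Mesh(mesh_file, 1, 1);
mesh->EnsureNCMesh();                        // switch to nonconforming mode

for (int it = 0; it < max_amr_iterations; it++)
{
   Array<int> refine_list;
   for (int e = 0; e < mesh->GetNE(); e++)
   {
      if (estimate_error(e) > threshold)     // hypothetical per-element error indicator
      {
         refine_list.Append(e);
      }
   }
   if (refine_list.Size() == 0) { break; }   // nothing left to refine

   // 2nd argument: -1 lets MFEM choose conforming vs. nonconforming refinement;
   // 3rd argument: nc_limit = 1 keeps refinement levels of neighbors within one.
   mesh->GeneralRefinement(refine_list, -1, 1);

   // ... update the FiniteElementSpace and GridFunctions, re-solve, re-estimate ...
}
The nc_limit argument passed here is the same optional parameter discussed on this page under limiting the level of hanging nodes.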
To switch to the nonconforming mode (or convert an existing conforming Mesh ), you need to call EnsureNCMesh , typically at the beginning after loading the mesh: Mesh *mesh = new Mesh(mesh_file, 1, 1); mesh->EnsureNCMesh(true); The boolean parameter, if true , forces simplex meshes to use nonconforming refinement (the default is false ).", "title": "HowTo: Nonconforming and AMR meshes"}, {"location": "howto/ncmesh/#nonconforming-refinement", "text": "Once the Mesh is in nonconforming mode, you can simply call Mesh::GeneralRefinement to locally refine a subset of elements: Array<int> refinement_list; for (int i = 0; i < mesh->GetNE(); i++) { if (/*element i refinement condition*/) { refinement_list.Append(i); } } mesh->GeneralRefinement(refinement_list); The resulting hanging nodes will be treated transparently by the FiniteElementSpace and BilinearForm classes: FiniteElementSpace will internally construct a conforming interpolation matrix $P$ that, when applied to a vector of unconstrained (\"true\") DOFs, augments the vector with interpolated constrained DOFs. Once the linear system $Ax = b$ is assembled, BilinearForm::FormLinearSystem will eliminate constrained nodes by transforming the linear system to $P^TAPx = P^Tb$ (see ex1.cpp ). After the reduced system is solved, the conforming solution on all nodes is recovered as $y = P x$ with BilinearForm::RecoverFEMSolution .", "title": "Nonconforming refinement"}, {"location": "howto/ncmesh/#limiting-the-level-of-hanging-nodes", "text": "By default, MFEM does not limit the sizes of adjacent elements in nonconforming meshes. For some applications, it may be necessary to ensure that the refinement level of neighboring elements differs by at most one, for example. The optional parameter nc_limit of Mesh::GeneralRefinement can be used to control the maximum level of nonconformity. If nc_limit is greater than zero, the method will automatically perform additional refinements to make sure the difference of refinement levels of adjacent elements is at most nc_limit .", "title": "Limiting the level of hanging nodes"}, {"location": "howto/ncmesh/#anisotropic-refinement", "text": "Uniquely, MFEM offers the capability to perform anisotropic refinement of tensor product elements in both 2D and 3D. The method Mesh::GeneralRefinement has two overloads, one taking a simple list of elements to refine (as seen above), and the other taking a list of struct Refinement { int index; char ref_type; } , where one can specify a refinement type for each element in the list: Array<Refinement> refinement_list; refinement_list.Append(Refinement(0, 2)); refinement_list.Append(Refinement(1, 4)); mesh->GeneralRefinement(refinement_list); This code will refine the first element (index 0) of the mesh in the Y direction only (provided it is a quad or hex element) and the second element (index 1) in the Z direction only. The directions are assumed in the element reference coordinates and are encoded as follows: Note that the refinement type is encoded as a 3-bit number, where bits 0, 1, 2 correspond to the X, Y, Z directions, respectively. Other element geometries allow fewer but similar refinement types: triangle (3), quadrilateral (1, 2, 3), tetrahedron (7), prism (3, 4, 7). In 3D meshes with anisotropic refinements it is easy to arrive at conflicting situations, where the refined faces of adjacent elements are not subsets of each other. 
For example, running the above code on a mesh with two hexahedra adjacent in the X direction will create an interface that cannot be constrained correctly. In such cases, MFEM will automatically adjust one side of the interface with additional refinements (called forced refinements) to ensure that the mesh remains a valid FEM mesh. In pathological cases the forced refinements may propagate. Using a reasonable nc_limit may reduce this effect. Nevertheless, a valid mesh is produced in all cases.", "title": "Anisotropic refinement"}, {"location": "howto/ncmesh/#derefinement", "text": "To coarsen elements, use the method Mesh::DerefineByError . The interface is different from refinement, because it is not possible to coarsen arbitrary groups of fine elements: it is only possible to reintroduce previously existing coarse elements by undoing their refinement (hence the term \"derefinement\"). Since one cannot supply the indices of elements that no longer exist in the Mesh class (the refinement trees are kept internal to NCMesh ), the method DerefineByError works indirectly by taking an array of \"error\" values corresponding to each element of the current Mesh . If the sum of error values of the children of some coarse element is below a supplied threshold, the children are removed and the coarse element is restored in Mesh . If the user specifies a nonzero nc_limit , care is taken not to derefine elements that are needed to keep the required level of nonconformity. Note: derefinement is not yet supported for meshes containing 3D anisotropic refinements.", "title": "Derefinement"}, {"location": "howto/ncmesh/#parallel-nonconforming-meshes", "text": "Just as the Mesh class has a parallel counterpart ( ParMesh ), so does the NCMesh class have a parallel descendant: ParNCMesh . The parallel class is again kept internal and the user can continue to interact with the standard ParMesh class (see examples ex1p , ex6p and ex15p ). The refinement hierarchy in parallel NC mode is fully distributed and scales to billions of elements and hundreds of thousands of MPI tasks. Ghost elements are automatically tracked by the ParNCMesh class, so that a parallel conforming interpolation matrix can be constructed by ParFiniteElementSpace . Depending on the assembly level, ParBilinearForm will either explicitly assemble the parallel $P^TAP$ system using the Hypre library, or the action of the $P$ matrix will be applied during solver iterations. Parallel refinement is still done through Mesh::GeneralRefinement inherited by the ParMesh class. The method takes local element indices and works the same as in serial. All parallel concerns such as keeping the ghost layers synchronized are handled internally in ParNCMesh . Note: parallel anisotropic refinement of 3D meshes is not supported yet. After each mesh operation (refinement, derefinement, load balancing) the ParMesh is updated to reflect the current parallel mesh state (minus the ghost elements, which are not exported to ParMesh ). Communication groups, used in conforming mode for reductions/broadcasts over parallel solution vectors, are approximated in the NC mode as if the mesh was cut along the nonconforming interfaces.", "title": "Parallel nonconforming meshes"}, {"location": "howto/ncmesh/#load-balancing", "text": "In conforming mode, a serial Mesh can only be partitioned statically (with METIS) when constructing a ParMesh . In nonconforming mode, the internal ParNCMesh class is capable of load balancing the distributed mesh at any time. 
This functionality is available to the user through ParMesh::Rebalance (see ex6p and ex15p ). The dynamic load balancing algorithm is based on partitioning a space-filling curve (SFC) that naturally arises when traversing the distributed refinement trees. Compared to spectral partitioners like METIS the partitions are not as high quality but the process is extremely fast and scales to hundreds of thousands of processors. For best results with SFC-based partitioning, one condition has to be met: the elements of the coarse Mesh from which the ParMesh is constructed need to be ordered, ideally as a sequence of face-neighbors. This makes it possible for ParNCMesh to order the leaves of all refinement trees into a global linear sequence, which when equipartitioned should produce compact (albeit not minimal surface) mesh partitions. Take for example a coarse mesh produced by the polar-nc miniapp. Except for two discontinuities, the elements are mostly ordered as a sequence of face-neighbors: When we start refining elements (in both serial and parallel), MFEM will try to keep the space-filling curve continuous by inserting local Hilbert curves in the refined areas (press Ctrl+O in GLVis to visualize the ordering curve): In a parallel computation, the global curve is then used for fast assignment of elements to MPI ranks. In the following run of ex15p , each processor is assigned the same number of elements (+/- one element). Note that the last partition is discontinuous due to a jump in ordering in the coarse mesh. This only affects the efficiency of MPI communication \u2014 the numerical results will be the same regardless of the partitioning. MFEM provides several methods to help with mesh ordering: Procedurally generated rectangular grids ( Mesh::MakeCartesian2D , Mesh::MakeCartesian3D and also MFEM INLINE mesh v1.0 files) are by default ordered along a pseudo-Hilbert curve. Note that even grid dimensions are recommended, as explained here . General unstructured meshes may be ordered by a spatial sort algorithm ( Mesh::GetHilbertElementOrdering ). This is a fast method that will leave a number of jumps in complex meshes, but it is still highly recommended over not ordering the mesh at all. High-quality orderings of general meshes can be obtained with the Gecko library, now included directly in MFEM and available as Mesh::GetGeckoElementOrdering . The optimization algorithm used is more costly than a simple spatial sort, but it should produce better orderings for meshes with complex geometries. Beware the exponential cost of increasing the window parameter. Large meshes should probably be ordered in a preprocessing step (you may use the mesh-explorer miniapp for that).", "title": "Load balancing"}, {"location": "howto/ncmesh/#nonconforming-mesh-io", "text": "Nonconforming meshes have their own file format MFEM NC mesh v1.0 , which supports all the additional internal structures (refinement trees, hanging nodes, etc.) and works for both serial and parallel NC meshes. The method ParMesh::ParPrint will automatically choose the right format and can be used to save and restart an AMR computation, as demonstrated in example ex6p . ParMesh::ParPrint should not be confused with the method ParMesh::Print , an analog of Mesh::Print , which is only suitable for visualization, as it uses the serial MFEM mesh v1.0 format and only adds the parallel shared faces to the output. 
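As a rough sketch of such a save/restart cycle (the file naming scheme, the myid rank variable, and the use of the stream-based ParMesh constructor are assumptions of this sketch rather than code copied from ex6p; the fstream, sstream and iomanip headers plus the mfem and std namespaces are assumed):
// Save: each MPI rank writes its own piece with ParPrint, which keeps the
// nonconforming refinement data needed for a restart.
{
   ostringstream fname;
   fname << \"amr-checkpoint.\" << setfill('0') << setw(6) << myid;
   ofstream ofs(fname.str().c_str());
   ofs.precision(16);
   pmesh->ParPrint(ofs);
}

// Restart: each rank reads its piece back through the ParMesh constructor
// that takes an input stream.
{
   ostringstream fname;
   fname << \"amr-checkpoint.\" << setfill('0') << setw(6) << myid;
   ifstream ifs(fname.str().c_str());
   ParMesh pmesh_restored(MPI_COMM_WORLD, ifs);
}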
MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Nonconforming mesh I/O"}, {"location": "howto/outer_normals/", "text": "HowTo: Compute the outer normals of the boundary elements of a mesh In numerous applications it is important to obtain the outer normals to the boundary of your mesh. In 2D this will simply be the vector normal to the vector tangent to the boundary point of interest, and in 3D this will be the vector normal to the plane tangent to the boundary point of interest. An easy way to obtain these vector/plane tangents is to use the Jacobian of the element transformation at the point of interest as in the following example: // Loop through the boundary elements and compute the normals at the centers of those elements for (int it = 0; it < fespace->GetNBE(); it++) { Vector normal(dim); ElementTransformation *Trans = fespace->GetBdrElementTransformation(it); Trans->SetIntPoint(&Geometries.GetCenter(Trans->GetGeometryType())); CalcOrtho(Trans->Jacobian(), normal); ... Do something of interest with the normals } The ElementTransformation object handles transformations between the elements and their corresponding reference elements. We start by getting the ElementTransformation object for the boundary element we are interested in. In order to move forward we then need to set the point in the element that we are interested in with the SetIntPoint method. In this case we are setting it to the geometric center of the boundary element. Finally, we can get the Jacobian of the boundary element and use the tangent vector/plane that it defines to compute the boundary element normal at the boundary element center. The CalcOrtho method simply takes a 2x1 or 3x2 matrix and computes the normal to the column vectors of that matrix. It should be noted that the vectors computed in this process are not necessarily of unit length.", "title": "HowTo: Compute the outer normals of the boundary elements of a mesh"}, {"location": "howto/outer_normals/#howto-compute-the-outer-normals-of-the-boundary-elements-of-a-mesh", "text": "In numerous applications it is important to obtain the outer normals to the boundary of your mesh. In 2D this will simply be the vector normal to the vector tangent to the boundary point of interest, and in 3D this will be the vector normal to the plane tangent to the boundary point of interest. An easy way to obtain these vector/plane tangents is to use the Jacobian of the element transformation at the point of interest as in the following example: // Loop through the boundary elements and compute the normals at the centers of those elements for (int it = 0; it < fespace->GetNBE(); it++) { Vector normal(dim); ElementTransformation *Trans = fespace->GetBdrElementTransformation(it); Trans->SetIntPoint(&Geometries.GetCenter(Trans->GetGeometryType())); CalcOrtho(Trans->Jacobian(), normal); ... Do something of interest with the normals } The ElementTransformation object handles transformations between the elements and their corresponding reference elements. We start by getting the ElementTransformation object for the boundary element we are interested in. In order to move forward we then need to set the point in the element that we are interested in with the SetIntPoint method. In this case we are setting it to the geometric center of the boundary element. 
Finally, we can get the Jacobian of the boundary element and use the tangent vector/plane that it defines to compute the boundary element normal at the boundary element center. The CalcOrtho method simply takes a 2x1 or 3x2 matrix and computes the normal to the column vectors of that matrix. It should be noted that the vectors computed in this process are not necessarily of unit length.", "title": "HowTo: Compute the outer normals of the boundary elements of a mesh"}, {"location": "howto/periodic-boundaries/", "text": "HowTo: Use periodic meshes and enforce periodic boundary conditions In order to solve a problem with periodic boundary conditions, the Mesh object should have a periodic topology. This can be achieved in one of two ways: By reading a periodic mesh from disk. By identifying periodic vertices (e.g. through a translation vector), and then creating a new periodic mesh. Reading a periodic mesh from disk MFEM supports reading periodic meshes from a variety of mesh file formats . Several periodic sample meshes are included with MFEM in the data directory: MFEM format: periodic-square.mesh : a 3x3 Cartesian mesh of the (periodic) square [-1,1]^2 periodic-hexagon.mesh : a quad mesh of a periodic hexagonal domain with 12 elements periodic-cube.mesh : a 3x3x3 Cartesian mesh of the (periodic) cube [-1,1]^3 Gmsh format (the corresponding .geo files are also included): periodic-square.msh : a 4x4 Cartesian mesh of the (periodic) unit square periodic-cube.msh : a 4x4x4 Cartesian mesh of the (periodic) unit cube periodic-annulus-sector.msh : a 2D mesh of an annular sector with periodic boundaries defined by a rotation periodic-torus-sector.msh : a 3D mesh of a torus sector with periodic boundaries defined by a rotation Any of these meshes can be loaded as usual using MFEM (e.g. using the -m flag in the MFEM examples ), and the periodic topology will be automatically handled. (Note that some periodic boundaries (such as periodic-cube.mesh ) contain so-called \"internal boundary elements\", which may result in boundary conditions being enforced for some examples.) Example 0 on Periodic Annulus Example 0 on Periodic Torus Creating a periodic mesh by identifying vertices MFEM can also create periodic meshes from non-periodic meshes by identifying periodic vertices. The function Mesh::MakePeriodic creates a periodic mesh from a non-periodic mesh given such a vertex identification. For example, if we wish to create a periodic line segment, then we would like to identify the two endpoints of the line segment since they represent the same point in the periodic topology. An example of creating this vertex mapping in the case of a line segment is described here . It is often more convenient to describe the periodicity constraints in terms of translation vectors . Any two vertices that are coincident under any of the given translation vectors will be considered topologically identical. MFEM can generate a vertex mapping from these translation vectors using the Mesh::CreatePeriodicVertexMapping . An example using this functionality to create a mesh of the periodic square is shown here . (Note that periodic meshes use a discontinuous nodal function for mapping the reference space to the physical one (see Mesh::SetCurvature ). The vertex coordinates are no longer meaningful after calling Mesh::MakePeriodic . You should refrain from accessing them and use the nodal grid function returned by Mesh::GetNodes or single nodes through Mesh::GetNode instead.) 
Example: creating a periodic line segment with a vertex map Mesh mesh = Mesh::MakeCartesian1D(10); // Make a mesh of the unit interval with 10 elements // Create the vertex mapping. To begin, create the identity mapping. std::vector<int> v2v(mesh.GetNV()); for (int i = 0; i < mesh.GetNV(); ++i) { v2v[i] = i; } // Modify the mapping so that the last vertex gets mapped to the first vertex. v2v.back() = 0; Mesh periodic_mesh = Mesh::MakePeriodic(mesh, v2v); // Create the periodic mesh Example: creating a periodic square with translation vectors // Create a 10x10 quad mesh of the unit square Mesh mesh = Mesh::MakeCartesian2D(10, 10, Element::QUADRILATERAL); // Create translation vectors defining the periodicity Vector x_translation({1.0, 0.0}); Vector y_translation({0.0, 1.0}); std::vector<Vector> translations = {x_translation, y_translation}; // Create the periodic mesh using the vertex mapping defined by the translation vectors Mesh periodic_mesh = Mesh::MakePeriodic(mesh, mesh.CreatePeriodicVertexMapping(translations));", "title": "HowTo: Use periodic meshes and enforce periodic boundary conditions"}, {"location": "howto/periodic-boundaries/#howto-use-periodic-meshes-and-enforce-periodic-boundary-conditions", "text": "In order to solve a problem with periodic boundary conditions, the Mesh object should have a periodic topology. This can be achieved in one of two ways: By reading a periodic mesh from disk. By identifying periodic vertices (e.g. through a translation vector), and then creating a new periodic mesh.", "title": "HowTo: Use periodic meshes and enforce periodic boundary conditions"}, {"location": "howto/periodic-boundaries/#reading-a-periodic-mesh-from-disk", "text": "MFEM supports reading periodic meshes from a variety of mesh file formats . Several periodic sample meshes are included with MFEM in the data directory: MFEM format: periodic-square.mesh : a 3x3 Cartesian mesh of the (periodic) square [-1,1]^2 periodic-hexagon.mesh : a quad mesh of a periodic hexagonal domain with 12 elements periodic-cube.mesh : a 3x3x3 Cartesian mesh of the (periodic) cube [-1,1]^3 Gmsh format (the corresponding .geo files are also included): periodic-square.msh : a 4x4 Cartesian mesh of the (periodic) unit square periodic-cube.msh : a 4x4x4 Cartesian mesh of the (periodic) unit cube periodic-annulus-sector.msh : a 2D mesh of an annular sector with periodic boundaries defined by a rotation periodic-torus-sector.msh : a 3D mesh of a torus sector with periodic boundaries defined by a rotation Any of these meshes can be loaded as usual using MFEM (e.g. using the -m flag in the MFEM examples ), and the periodic topology will be automatically handled. (Note that some periodic boundaries (such as periodic-cube.mesh ) contain so-called \"internal boundary elements\", which may result in boundary conditions being enforced for some examples.) Example 0 on Periodic Annulus Example 0 on Periodic Torus", "title": "Reading a periodic mesh from disk"}, {"location": "howto/periodic-boundaries/#creating-a-periodic-mesh-by-identifying-vertices", "text": "MFEM can also create periodic meshes from non-periodic meshes by identifying periodic vertices. The function Mesh::MakePeriodic creates a periodic mesh from a non-periodic mesh given such a vertex identification. For example, if we wish to create a periodic line segment, then we would like to identify the two endpoints of the line segment since they represent the same point in the periodic topology. 
An example of creating this vertex mapping in the case of a line segment is described here . It is often more convenient to describe the periodicity constraints in terms of translation vectors . Any two vertices that are coincident under any of the given translation vectors will be considered topologically identical. MFEM can generate a vertex mapping from these translation vectors using the Mesh::CreatePeriodicVertexMapping . An example using this functionality to create a mesh of the periodic square is shown here . (Note that periodic meshes use a discontinuous nodal function for mapping the reference space to the physical one (see Mesh::SetCurvature ). The vertex coordinates are no longer meaningful after calling Mesh::MakePeriodic . You should refrain from accessing them and use the nodal grid function returned by Mesh::GetNodes or single nodes through Mesh::GetNode instead.)", "title": "Creating a periodic mesh by identifying vertices"}, {"location": "tutorial/", "text": "MFEM Tutorial on AWS August 22, 2024 Welcome to the MFEM tutorial, part of the LLNL HPC Software Tutorials Series in collaboration with AWS . MFEM is a modular parallel C++ library for finite element methods developed at CASC , LLNL with the help of the MFEM community worldwide. The pages below provide a self-paced overview of MFEM and its use for scalable finite element discretizations and application development. You can follow along in your own Amazon EC2 instance or in a Local Docker Container . No previous experience is necessary. Watch the video import mermaid from 'https://cdn.jsdelivr.net/npm/mermaid@9/dist/mermaid.esm.min.mjs'; mermaid.initialize({ startOnLoad: true }); %%{init: { 'theme': 'base', 'themeVariables': { 'primaryColor': '#deebf7', 'primaryBorderColor': '#3182bd' }}}%% graph LR; A[fa:fa-play-circle Getting Started]; B[fa:fa-book Finite Element Basics]; C[fa:fa-gears Tour of MFEM Examples]; D[fa:fa-picture-o Meshing and Visualization]; E[fa:fa-tasks Solvers and Scalability]; F[fa:fa-rocket Further Steps]; A-->B; B-->C; B-->D; B-->E; C-->F; D-->F; E-->F; click A \"start\" click B \"fem\" click C \"examples\" click D \"meshvis\" click E \"solvers\" click F \"further\" We recommend that you start with the Getting Started and Finite Element Basics lessons, and then, depending on your interests, pick some of the next 3 lessons: Tour of MFEM Examples , Meshing and Visualization , and Solvers and Scalability . The tutorial concludes with additional suggestions in the Further Steps page. Getting Started This is the first page you should visit to setup your tutorial environment. You will learn about: Setting up Visual Studio Code editor and terminal Setting up GLVis for visualization Testing the setup with a simple MFEM example Finite Element Basics Once you have the tutorial environment working, visit this page to learn about the basics of the finite element method and its implementation in MFEM. 
The lesson covers: Annotated Example 1 Serial and parallel runs GLVis keys/web interface Tour of MFEM Examples This is an optional lesson where you can learn about MFEM's main features: support for high-order methods, adaptive mesh refinement, $H^1$, $H(curl)$, $H(div)$ and $L^2$ discretizations, through several of the examples included with the library: High-order methods for the full de Rham complex (Examples 1, 2, 3, 4) Discontinuous Galerkin (Example 9) Nonlinear elasticity (Example 10) Adaptive mesh refinement (Example 15) Complex methods, PML (Examples 22, 25) Meshing and Visualization This is an optional lesson that illustrates MFEM's support for external mesh generators, internal meshing tools, and external visualization tools. You will learn about: Importing meshes from Gmsh and Cubit MFEM's meshing tools: Mesh Explorer, Mesh Optimizer, and Shaper Visualizing results in VisIt and ParaView Solvers and Scalability This is an optional lesson that showcases MFEM's parallel scalability and support for efficient solvers and preconditioners. The lesson covers: Scalable algebraic multigrid preconditioners from hypre (Examples 1, 2, 3, 4) MFEM's native Multigrid solver (Example 26) Low-order refined methods (Solvers and Transfer miniapps) Additional solver integrations via PETSc, SuperLU, and STRUMPACK Further Steps This is the final lesson with further activities, including: Explore additional examples and miniapps Write your own simple simulation starting from one of the MFEM examples Learn about integrations with other libraries and MFEM's GPU capabilities Visit the MFEM website, watch MFEM-related videos and seminar talks Join the MFEM organization on GitHub to contribute to the project MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Tutorial"}, {"location": "tutorial/#mfem-tutorial-on-aws", "text": "", "title": "MFEM Tutorial on AWS"}, {"location": "tutorial/#getting-started", "text": "This is the first page you should visit to setup your tutorial environment. You will learn about: Setting up Visual Studio Code editor and terminal Setting up GLVis for visualization Testing the setup with a simple MFEM example", "title": " Getting Started"}, {"location": "tutorial/#finite-element-basics", "text": "Once you have the tutorial environment working, visit this page to learn about the basics of the finite element method and its implementation in MFEM. The lesson covers: Annotated Example 1 Serial and parallel runs GLVis keys/web interface", "title": " Finite Element Basics"}, {"location": "tutorial/#tour-of-mfem-examples", "text": "This is an optional lesson where you can learn about MFEM's main features: support for high-order methods, adaptive mesh refinement, $H^1$, $H(curl)$, $H(div)$ and $L^2$ discretizations, through several of the examples included with the library: High-order methods for the full de Rham complex (Examples 1, 2, 3, 4) Discontinuous Galerkin (Example 9) Nonlinear elasticity (Example 10) Adaptive mesh refinement (Example 15) Complex methods, PML (Examples 22, 25)", "title": " Tour of MFEM Examples"}, {"location": "tutorial/#meshing-and-visualization", "text": "This is an optional lesson that illustrates MFEM's support for external mesh generators, internal meshing tools, and external visualization tools. 
You will learn about: Importing meshes from Gmsh and Cubit MFEM's meshing tools: Mesh Explorer, Mesh Optimizer, and Shaper Visualizing results in VisIt and ParaView", "title": " Meshing and Visualization"}, {"location": "tutorial/#solvers-and-scalability", "text": "This is an optional lesson that showcases MFEM's parallel scalability and support for efficient solvers and preconditioners. The lesson covers: Scalable algebraic multigrid preconditioners from hypre (Examples 1, 2, 3, 4) MFEM's native Multigrid solver (Example 26) Low-order refined methods (Solvers and Transfer miniapps) Additional solver integrations via PETSc, SuperLU, and STRUMPACK", "title": " Solvers and Scalability"}, {"location": "tutorial/#further-steps", "text": "This is the final lesson with further activities, including: Explore additional examples and miniapps Write your own simple simulation starting from one of the MFEM examples Learn about integrations with other libraries and MFEM's GPU capabilities Visit the MFEM website, watch MFEM-related videos and seminar talks Join the MFEM organization on GitHub to contribute to the project MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": " Further Steps"}, {"location": "tutorial/docker/", "text": "Local Docker Container 15 minutes basic You don't need a cloud instance to run the MFEM tutorial. Instead, you can directly run the MFEM Docker container on a computer available to you. The mfem/developer containers has been specifically created to kickstart the exploration of MFEM and its capabilities in a variety of computing environments: from the cloud (like AWS), to HPC clusters, and your own laptop. There are CPU and GPU variations of the image, we will refer to it generically as mfem/developer during the tutorial. Below are instructions on how to start the container on Linux and macOS , and how to use it to run the tutorial locally . You can also use the container (and similar commands) to setup your own cloud instance. See for example this AWS script . Linux Depending on your Linux distribution, you have to first install Docker . See the official instructions for e.g. Ubuntu . Once the installation is complete and the docker command is in your path, pull the prebuilt mfem/developer-cpu container with: docker pull ghcr.io/mfem/containers/developer-cpu:latest Depending on your connection, this may take a while to download and extract (the image is about 2GB). To start the container, run: docker run --cap-add=SYS_PTRACE -p 3000:3000 -p 8000:8000 -p 8080:8080 ghcr.io/mfem/containers/developer-cpu:latest You can later stop this by pressing Ctrl-C . See the docker documentation for more details. We provide two variations of our containers that are configured with CPU or CPU and GPU capabilities. If you have an NVIDIA supported CUDA GPU you have to install the NVIDIA Container Toolkit . Our CUDA images are built with the sm_70 compute capability by default. 
If your GPU is an sm_70 you can use the prebuilt mfem/developer-cuda-sm70 image with: docker pull ghcr.io/mfem/containers/developer-cuda-sm70:latest To start the container use docker run --gpus all --cap-add=SYS_PTRACE -p 3000:3000 -p 8000:8000 -p 8080:8080 ghcr.io/mfem/containers/developer-cuda-sm70:latest If you need a different compute capability, you can clone the mfem/containers repository and build an image e.g., for sm_80 , with git clone git@github.com:mfem/containers.git cd containers docker-compose build --build-arg cuda_arch_sm=80 cuda && docker image tag cuda:latest cuda-sm80:latest docker-compose build --build-arg cuda_arch_sm=80 cuda-tpls && docker image tag cuda-tpls:latest cuda-tpls-sm80:latest This automatically builds all libraries with the correctly supported CUDA compute capability. Note The forwarding of ports 3000 , 8000 and 8080 is needed for VS Code , GLVis and the websocket connection between them. The --cap-add=SYS_PTRACE option is added to resolve MPI warnings. macOS On macOS we recommend using Podman . See the official installation instructions here . After installing it, use the following commands to create a Podman machine and pull the mfem/developer container: podman machine init podman pull ghcr.io/mfem/containers/developer-cpu:latest Both of these can take a while, depending on your hardware and network connection. To start the virtual machine and the container in it, run: podman machine start podman run --cap-add=SYS_PTRACE -p 3000:3000 -p 8000:8000 -p 8080:8080 ghcr.io/mfem/containers/developer-cpu:latest You can later stop these by pressing Ctrl-C and typing podman machine stop . Note One can also use Docker Desktop on macOS and follow the Linux instructions above. Running the tutorial locally Once the mfem/developer container is running, you can proceed with the Getting Started page using the following IP : 127.0.0.1 . You can alternatively use localhost for the IP . In particular, the VS Code and GLVis windows can be accessed at localhost:3000 and localhost:8000/live respectively. Furthermore, you can use the above pages from any other devices (tablets, phones) that are connected to the same network as the machine running the container. For example you can run an example from the VS Code terminal on your laptop and visualize the results on a GLVis window on your phone. To connect other devices, first run hostname -s to get the local host name and then use that {hostname} for the IP in the rest of the tutorial. Questions? Ask for help in the tutorial Slack channel . Next Steps Go to the Getting Started page. Back to the MFEM tutorial page MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Docker"}, {"location": "tutorial/docker/#local-docker-container", "text": "15 minutes basic You don't need a cloud instance to run the MFEM tutorial. Instead, you can directly run the MFEM Docker container on a computer available to you. The mfem/developer containers has been specifically created to kickstart the exploration of MFEM and its capabilities in a variety of computing environments: from the cloud (like AWS), to HPC clusters, and your own laptop. There are CPU and GPU variations of the image, we will refer to it generically as mfem/developer during the tutorial. Below are instructions on how to start the container on Linux and macOS , and how to use it to run the tutorial locally . You can also use the container (and similar commands) to setup your own cloud instance. 
See for example this AWS script .", "title": "  Local Docker Container"}, {"location": "tutorial/docker/#linux", "text": "Depending on your Linux distribution, you have to first install Docker . See the official instructions for e.g. Ubuntu . Once the installation is complete and the docker command is in your path, pull the prebuilt mfem/developer-cpu container with: docker pull ghcr.io/mfem/containers/developer-cpu:latest Depending on your connection, this may take a while to download and extract (the image is about 2GB). To start the container, run: docker run --cap-add=SYS_PTRACE -p 3000:3000 -p 8000:8000 -p 8080:8080 ghcr.io/mfem/containers/developer-cpu:latest You can later stop this by pressing Ctrl-C . See the docker documentation for more details. We provide two variations of our containers that are configured with CPU or CPU and GPU capabilities. If you have an NVIDIA supported CUDA GPU you have to install the NVIDIA Container Toolkit . Our CUDA images are built with the sm_70 compute capability by default. If your GPU is an sm_70 you can use the prebuilt mfem/developer-cuda-sm70 image with: docker pull ghcr.io/mfem/containers/developer-cuda-sm70:latest To start the container use docker run --gpus all --cap-add=SYS_PTRACE -p 3000:3000 -p 8000:8000 -p 8080:8080 ghcr.io/mfem/containers/developer-cuda-sm70:latest If you need a different compute capability, you can clone the mfem/containers repository and build an image e.g., for sm_80 , with git clone git@github.com:mfem/containers.git cd containers docker-compose build --build-arg cuda_arch_sm=80 cuda && docker image tag cuda:latest cuda-sm80:latest docker-compose build --build-arg cuda_arch_sm=80 cuda-tpls && docker image tag cuda-tpls:latest cuda-tpls-sm80:latest This automatically builds all libraries with the correctly supported CUDA compute capability.", "title": "  Linux"}, {"location": "tutorial/docker/#macos", "text": "On macOS we recommend using Podman . See the official installation instructions here . After installing it, use the following commands to create a Podman machine and pull the mfem/developer container: podman machine init podman pull ghcr.io/mfem/containers/developer-cpu:latest Both of these can take a while, depending on your hardware and network connection. To start the virtual machine and the container in it, run: podman machine start podman run --cap-add=SYS_PTRACE -p 3000:3000 -p 8000:8000 -p 8080:8080 ghcr.io/mfem/containers/developer-cpu:latest You can later stop these by pressing Ctrl-C and typing podman machine stop .", "title": "  macOS"}, {"location": "tutorial/docker/#running-the-tutorial-locally", "text": "Once the mfem/developer container is running, you can proceed with the Getting Started page using the following IP : 127.0.0.1 . You can alternatively use localhost for the IP . In particular, the VS Code and GLVis windows can be accessed at localhost:3000 and localhost:8000/live respectively. Furthermore, you can use the above pages from any other devices (tablets, phones) that are connected to the same network as the machine running the container. For example you can run an example from the VS Code terminal on your laptop and visualize the results on a GLVis window on your phone. 
To connect other devices, first run hostname -s to get the local host name and then use that {hostname} for the IP in the rest of the tutorial.", "title": "  Running the tutorial locally"}, {"location": "tutorial/examples/", "text": "Tour of MFEM Examples 45 minutes intermediate Lesson Objectives Learn about MFEM's main features through several of the examples included with the library. Note Please complete the Getting Started and Finite Element Basics pages before this lesson. High-order methods MFEM includes support for the full de Rham complex , $H^1-$conforming (continuous), $H(curl)-$conforming (continuous tangential component), $H(div)-$conforming (continuous normal component), and $L^2-$conforming (discontinuous) finite element discretization spaces in 2D and 3D. A compatible high-order de Rham complex on the discrete level can be constructed using the *_FECollection classes with * replaced by H1 , ND , RT , and L2 , respectively. Note that MFEM supports arbitrary discretization order for the full de Rham complex. For example, here is an illustration of the FEM degrees of freedom on quadrilaterals for orders 1\u20143: The first four MFEM examples serve as an introduction on how to construct and use these discrete spaces for the solution of various PDEs. All of them have the -o / --order command line parameter to specify the finite element space order at runtime. Before building the example codes, make sure you are in the examples directory: cd ~/mfem/examples . Note Remember to compile each numbered example before executing its sample runs: make ex* for the serial version or make ex*p for the parallel version. You can build multiple examples in the same command: make ex3 ex4 ex3p ex4p . Example 1 ( ex1.cpp and ex1p.cpp ) solves a simple Poisson problem using a scalar $H^1$ space. More specifically, it solves the problem $$-\\Delta u = 1$$ with homogeneous Dirichlet boundary conditions. Try the following sample runs: ./ex1 -m ../data/square-disc.mesh ./ex1 -m ../data/fichera.mesh mpirun -np 4 ex1p -m ../data/star-surf.mesh mpirun -np 4 ex1p -m ../data/mobius-strip.mesh The plot on the right corresponds to the 2nd sample run with i , Z and m pressed in the GLVis window, followed by rotation with the mouse Left button. Example 2 ( ex2.cpp and ex2p.cpp ) solves a linear elasticity problem using a vector $H^1$ space. The problem describes a multi-material cantilever beam. The weak form is $$-{\\rm div}({\\sigma}({\\bf u})) = 0$$ where $${\\sigma}({\\bf u}) = \\lambda\\, {\\rm div}({\\bf u})\\,I + \\mu\\,(\\nabla{\\bf u} + \\nabla{\\bf u}^T)$$ is the stress tensor corresponding to displacement field ${\\bf u}$, and $\\lambda$ and $\\mu$ are the material Lame constants. The boundary conditions are ${\\bf u}=0$ on the fixed part of the boundary with attribute 1, and ${\\sigma}({\\bf u})\\cdot n = f$ on the remainder with $f$ being a constant pull down vector on boundary elements with attribute 2, and zero otherwise. Try the following sample runs: ./ex2 -m ../data/beam-tri.mesh ./ex2 -m ../data/beam-hex.mesh mpirun -np 4 ex2p -m ../data/beam-wedge.mesh mpirun -np 4 ex2p -m ../data/beam-quad.mesh -o 3 -elast The plot on the right corresponds to the 2nd sample run with m pressed in the GLVis window. Example 3 ( ex3.cpp and ex3p.cpp ) solves a 3D electromagnetic diffusion problem (definite Maxwell) using an $H(curl)$ finite element space. It solves the equation $$\\nabla\\times\\nabla\\times\\, E + E = f$$ with boundary condition $ E \\times n $ = \"given tangential field\". Here, the r.h.s. 
$f$ and the boundary condition data are computed using a given exact solution $E$. Try the following sample runs: ./ex3 -m ../data/star.mesh ./ex3 -m ../data/beam-tri.mesh -o 2 mpirun -np 4 ex3p -m ../data/fichera.mesh mpirun -np 4 ex3p -m ../data/escher.mesh -o 2 The plot on the right corresponds to the 3rd sample run with m and A pressed in the GLVis window. Example 4 ( ex4.cpp and ex4p.cpp ) solves a 2D/3D $H(div)$ diffusion problem using an $H(div)$ finite element space. The $H(div)$ diffusion problem corresponds to the second-order definite equation $$-{\\rm grad}(\\alpha\\,{\\rm div}(F)) + \\beta F = f$$ with boundary condition $F \\cdot n$ = \"given normal field\". Here, the r.h.s. $f$ and the boundary condition data are computed using a given exact solution $F$. Try the following sample runs: ./ex4 -m ../data/square-disc.mesh ./ex4 -m ../data/periodic-square.mesh -no-bc mpirun -np 4 ex4p -m ../data/fichera-q2.vtk mpirun -np 4 ex4p -m ../data/amr-quad.mesh The plot on the right is similar to the 1st sample run with R , j and l pressed in the GLVis window. Discontinuous Galerkin MFEM supports high-order Discontinuous Galerkin (DG) discretizations through various face integrators. Additionally, it includes numerous explicit and implicit ODE time integrators which are used for the solution of time-dependent PDEs. Example 9 ( ex9.cpp and ex9p.cpp ) solves the time-dependent advection equation $$\\frac{\\partial u}{\\partial t} + v \\cdot \\nabla u = 0,$$ where $v$ is a given fluid velocity, and $u_0(x)=u(0,x)$ is a given initial condition. The example demonstrates the use of DG bilinear forms, the use of explicit and implicit (with block ILU preconditioning) ODE time integrators, the definition of periodic boundary conditions through periodic meshes, as well as the use of GLVis for persistent visualization of a time-evolving solution. Try the following sample runs: ./ex9 -m ../data/periodic-square.mesh -p 3 -r 4 -dt 0.0025 -tf 9 -vs 20 ./ex9 -m ../data/disc-nurbs.mesh -p 1 -r 3 -dt 0.005 -tf 9 mpirun -np 4 ex9p -m ../data/star-q3.mesh -p 1 -rp 1 -dt 0.004 -tf 9 mpirun -np 16 ex9p -m ../data/amr-hex.mesh -p 1 -rs 1 -rp 0 -dt 0.005 -tf 0.5 The plot on the right corresponds to the 1st sample run with R , j and l pressed in the GLVis window. Note In time-dependent simulations, the GLVis window will be automatically updated with the solutions at the new time steps as they are computed (how frequently this is done is governed by the -vs command line parameter above). To start/pause these updates press space in the GLVis window, or click the icon in the upper center portion of the window. Nonlinear elasticity Example 10 ( ex10.cpp and ex10p.cpp ) solves a time dependent nonlinear elasticity problem of the form $$ \\frac{dv}{dt} = H(x) + S v\\,,\\qquad \\frac{dx}{dt} = v\\,, $$ where $H$ is a hyperelastic model and $S$ is a viscosity operator of Laplacian type. The geometry of the domain is assumed to be as follows: The example demonstrates the use of nonlinear operators, as well as their implicit time integration using a Newton method for solving an associated reduced backward-Euler type nonlinear equation. Each Newton step requires the inversion of a Jacobian matrix, which is done through a (preconditioned) inner solver. 
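To see where that reduced equation comes from (a standard manipulation sketched here for orientation, not a transcription of the ex10 source), write one backward Euler step of size $dt$ for the system above: $$ v_{n+1} = v_n + dt\\,\\big(H(x_{n+1}) + S\\,v_{n+1}\\big), \\qquad x_{n+1} = x_n + dt\\,v_{n+1}. $$ Substituting the second relation into the first eliminates $x_{n+1}$ and leaves a single nonlinear equation for $k = v_{n+1}$, $$ k - v_n - dt\\,\\big(H(x_n + dt\\,k) + S\\,k\\big) = 0, $$ which is what Newton's method solves at every time step; the Jacobian of this residual, roughly $I - dt^2\\,H'(x_n + dt\\,k) - dt\\,S$, is the matrix inverted by the preconditioned inner solver.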
Before trying this example, modify the source code of ex10.cpp to disable the second visualization stream as follows: @@ -298,7 +298,7 @@ int main(int argc, char *argv[]) vis_v.precision(8); v.SetFromTrueVector(); x.SetFromTrueVector(); visualize(vis_v, mesh, &x, &v, \"Velocity\", true); - vis_w.open(vishost, visport); + // vis_w.open(vishost, visport); if (vis_w) { oper.GetElasticEnergyDensity(x, w); Make identical change in ex10p.cpp , line 347. Now rebuild both examples: make ex10 ex10p , and try the following sample runs: ./ex10 -m ../data/beam-hex.mesh -s 2 -r 1 -o 2 -dt 3 ./ex10 -m ../data/beam-tri.mesh -s 3 -r 2 -o 2 -dt 3 mpirun -np 4 ex10p -m ../data/beam-wedge.mesh -s 2 -rs 1 -dt 3 mpirun -np 4 ex10p -m ../data/beam-tet.mesh -s 2 -rs 1 -dt 3 The plot on the right corresponds to the 1st sample run. Adaptive mesh refinement MFEM provides support for local conforming and non-conforming adaptive mesh refinement (AMR) with arbitrary-order hanging nodes, anisotropic refinement, derefinement, and parallel load balancing. The AMR support covers the full de Rham complex, i.e., the energy spaces $H^1$, $H(curl)$, $H(div)$ and $L^2$. You can choose from several error estimators, such as the Zienkiewicz-Zhu (ZZ) or the Kelly estimator, to drive the AMRs. We recommend looking at examples 6, 15, 21, and 30 for some simulations with AMR. Example 15 ( ex15.cpp and ex15p.cpp ) demonstrates MFEM's capability to refine, derefine, and load balance non-conforming meshes in 2D and 3D as well as on linear, curved, and surface meshes. In this example the mesh is adapted to a time-dependent solution. At each time step the problem is solved on a sequence of adaptive meshes that are refined based on a simple ZZ estimator. At the end of the refinement process, the error estimates are used to identify elements that are over-refined, and a single derefinement step is performed. Finally, in the parallel case, a load-balancing step is executed. Try the following sample runs: ./ex15 -n 3 ./ex15 -m ../data/square-disc.mesh ./ex15 -est 1 -e 0.0001 mpirun -np 4 ex15p -m ../data/mobius-strip.mesh mpirun -np 4 ex15p -m ../data/fichera.mesh -tf 0.5 The plot on the right is related to the parallel version of the 1st sample run with R , j , l and m pressed in the GLVis window. Complex-valued problems MFEM provides a user-friendly interface for solving complex valued systems. These kinds of problems can be formulated using the classes ComplexOperator , ComplexLinearForm , SesquilinearForm , ComplexGridFunction , and their parallel counterparts. You can define the weak formulation by providing the integrators of real and imaginary parts independently and then use similar methods as in the real problems (such us Assemble , FormLinearSystem , and RecoverFEMSolution ) to recover the solution. Currently, there are two examples demonstrating the use of complex-valued systems. Example 22 ( ex22.cpp and ex22p.cpp ) implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ In each case the field is driven by a forced oscillation, with angular frequency $\\omega$ imposed at the boundary or a portion of the boundary. 
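For orientation, here is a heavily abridged sketch of how the scalar $H^1$ variant is put together with these classes. It is not the literal ex22 source: the coefficient names are illustrative, fespace and ess_tdof_list are assumed to be set up as in Example 1, and boundary conditions and the linear solver are omitted.
// Assemble a(u,v) = (a grad u, grad v) - omega^2 (b u, v) + i omega (c u, v).
// stiffnessCoef ~ a, massCoef ~ -omega^2 b, lossCoef ~ omega c
// (assumed to be ConstantCoefficient objects defined earlier).
ComplexGridFunction u(fespace);
u = 0.0;

ComplexLinearForm b(fespace, ComplexOperator::HERMITIAN);
b.Assemble();

SesquilinearForm a(fespace, ComplexOperator::HERMITIAN);
a.AddDomainIntegrator(new DiffusionIntegrator(stiffnessCoef), NULL);  // real part only
a.AddDomainIntegrator(new MassIntegrator(massCoef),
                      new MassIntegrator(lossCoef));                  // real + imaginary parts
a.Assemble();

OperatorHandle A;
Vector B, U;
a.FormLinearSystem(ess_tdof_list, u, b, A, U, B);
// ... solve A U = B with a suitable complex-aware solver ...
a.RecoverFEMSolution(U, b, u);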
Before trying this example, modify the source code of ex22.cpp to disable the additional visualization streams as follows: @@ -272,8 +272,8 @@ int main(int argc, char *argv[]) { char vishost[] = \"localhost\"; int visport = 19916; - socketstream sol_sock_r(vishost, visport); - socketstream sol_sock_i(vishost, visport); + socketstream sol_sock_r(vishost, visport+1); + socketstream sol_sock_i(vishost, visport+2); sol_sock_r.precision(8); sol_sock_i.precision(8); sol_sock_r < < \"solution\\n\" < < *mesh < < u_exact->real() @@ -482,8 +482,8 @@ int main(int argc, char *argv[]) { char vishost[] = \"localhost\"; int visport = 19916; - socketstream sol_sock_r(vishost, visport); - socketstream sol_sock_i(vishost, visport); + socketstream sol_sock_r(vishost, visport+3); + socketstream sol_sock_i(vishost, visport+4); sol_sock_r.precision(8); sol_sock_i.precision(8); sol_sock_r < < \"solution\\n\" < < *mesh < < u.real() @@ -497,8 +497,8 @@ int main(int argc, char *argv[]) char vishost[] = \"localhost\"; int visport = 19916; - socketstream sol_sock_r(vishost, visport); - socketstream sol_sock_i(vishost, visport); + socketstream sol_sock_r(vishost, visport+5); + socketstream sol_sock_i(vishost, visport+6); sol_sock_r.precision(8); sol_sock_i.precision(8); sol_sock_r < < \"solution\\n\" < < *mesh < < u_exact->real() @@ -522,7 +522,7 @@ int main(int argc, char *argv[]) < < \" Press space (in the GLVis window) to resume it.\\n\"; int num_frames = 32; int i = 0; - while (sol_sock) + while (sol_sock && i < 3*num_frames) { double t = (double)(i % num_frames) / num_frames; ostringstream oss; Make identical changes in ex22p.cpp , lines 304-305, 532-533, 549-550 and 577. Now rebuild both examples: make ex22 ex22p , and try the following sample runs: ./ex22 -m ../data/inline-quad.mesh -o 3 -p 1 ./ex22 -m ../data/inline-hex.mesh -o 2 -p 2 -pa mpirun -np 1 ex22p -m ../data/star.mesh -o 2 -sigma 10.0 mpirun -np 16 ex22p -m ../data/star.mesh -o 2 -sigma 10.0 -rs 4 -rp 3 -no-vis mpirun -np 1 ex22p -m ../data/inline-pyramid.mesh -o 1 mpirun -np 16 ex22p -m ../data/inline-pyramid.mesh -o 1 -rs 2 -rp 2 -no-vis The plot on the right corresponds to the 3rd and 4th sample runs with R , j and l pressed in the GLVis window. Example 25 ( ex25.cpp and ex25p.cpp ) illustrates the use of a Perfectly Matched Layer (PML) for the simulation of time-harmonic electromagnetic waves propagating in unbounded domains. The implementation involves the introduction of an artificial absorbing layer that minimizes undesired reflections. Inside this layer a complex coordinate stretching map forces the wave modes to decay exponentially. The example solves the indefinite Maxwell equations $$ \\nabla \\times (a \\nabla \\times E) - \\omega^2 b E = f $$ where $a = \\mu^{-1} |J|^{-1} J^T J$, $b= \\epsilon |J| J^{-1} J^{-T}$ and $J$ is the Jacobian matrix of the coordinate transformation. 
Before trying this example, modify the source code of ex25.cpp to disable the additional visualization streams as follows: @@ -570,13 +570,13 @@ int main(int argc, char *argv[]) char vishost[] = \"localhost\"; int visport = 19916; - socketstream sol_sock_re(vishost, visport); + socketstream sol_sock_re(vishost, visport+1); sol_sock_re.precision(8); sol_sock_re < < \"solution\\n\" < < *mesh < < x.real() < < keys < < \"window_title 'Solution real part'\" < < flush; - socketstream sol_sock_im(vishost, visport); + socketstream sol_sock_im(vishost, visport+2); sol_sock_im.precision(8); sol_sock_im < < \"solution\\n\" < < *mesh < < x.imag() < < keys @@ -594,7 +594,7 @@ int main(int argc, char *argv[]) < < \" Press space (in the GLVis window) to resume it.\\n\"; int num_frames = 32; int i = 0; - while (sol_sock) + while (sol_sock && i < 3*num_frames) { double t = (double)(i % num_frames) / num_frames; ostringstream oss; Make identical changes in ex25p.cpp , lines 638, 647 and 674. Now rebuild both examples: make ex25 ex25p , and try the following sample runs: ./ex25 -o 2 -f 5.0 -ref 4 -prob 2 ./ex25 -o 2 -f 1.0 -ref 2 -prob 3 mpirun -np 1 ex25p -o 2 -f 8.0 -rs 2 -rp 2 -prob 4 -m ../data/inline-quad.mesh mpirun -np 32 ex25p -o 2 -f 8.0 -rs 3 -rp 3 -prob 4 -m ../data/inline-quad.mesh -no-vis mpirun -np 1 ex25p -o 2 -f 1.0 -rs 2 -rp 2 -prob 0 -m ../data/beam-quad.mesh mpirun -np 48 ex25p -o 2 -f 1.0 -rs 4 -rp 4 -prob 0 -m ../data/beam-quad.mesh -no-vis The plot on the right corresponds to the 1st sample run with aaa , mm , c and several p pressed in the GLVis window. Questions? Ask for help in the tutorial Slack channel . Next Steps Depending on your interests pick one of the following lessons: Meshing and Visualization Solvers and Scalability Further Steps Back to the MFEM tutorial page MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Examples"}, {"location": "tutorial/examples/#tour-of-mfem-examples", "text": "45 minutes intermediate", "title": "  Tour of MFEM Examples"}, {"location": "tutorial/examples/#high-order-methods", "text": "MFEM includes support for the full de Rham complex , $H^1-$conforming (continuous), $H(curl)-$conforming (continuous tangential component), $H(div)-$conforming (continuous normal component), and $L^2-$conforming (discontinuous) finite element discretization spaces in 2D and 3D. A compatible high-order de Rham complex on the discrete level can be constructed using the *_FECollection classes with * replaced by H1 , ND , RT , and L2 , respectively. Note that MFEM supports arbitrary discretization order for the full de Rham complex. For example, here is an illustration of the FEM degrees of freedom on quadrilaterals for orders 1\u20143: The first four MFEM examples serve as an introduction on how to construct and use these discrete spaces for the solution of various PDEs. All of them have the -o / --order command line parameter to specify the finite element space order at runtime. Before building the example codes, make sure you are in the examples directory: cd ~/mfem/examples .", "title": "  High-order methods"}, {"location": "tutorial/examples/#discontinuous-galerkin", "text": "MFEM supports high-order Discontinuous Galerkin (DG) discretizations through various face integrators. Additionally, it includes numerous explicit and implicit ODE time integrators which are used for the solution of time-dependent PDEs. 
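The ODE integrators all share one small interface; schematically (a fragment, not complete code: adv stands for a user-defined TimeDependentOperator such as the FE_Evolution class built in ex9 around its DG forms, and u is assumed to be the vector of true degrees of freedom):
// adv : a TimeDependentOperator subclass; u : Vector of true DOFs (assumed)
RK4Solver ode_solver;                  // explicit; implicit integrators are also available
ode_solver.Init(adv);

double t = 0.0, dt = 0.0025, t_final = 9.0;
while (t < t_final - 1e-8)
{
   double dt_real = std::min(dt, t_final - t);
   ode_solver.Step(u, t, dt_real);     // advances u and t in place
}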
Example 9 ( ex9.cpp and ex9p.cpp ) solves the time-dependent advection equation $$\\frac{\\partial u}{\\partial t} + v \\cdot \\nabla u = 0,$$ where $v$ is a given fluid velocity, and $u_0(x)=u(0,x)$ is a given initial condition. The example demonstrates the use of DG bilinear forms, the use of explicit and implicit (with block ILU preconditioning) ODE time integrators, the definition of periodic boundary conditions through periodic meshes, as well as the use of GLVis for persistent visualization of a time-evolving solution. Try the following sample runs: ./ex9 -m ../data/periodic-square.mesh -p 3 -r 4 -dt 0.0025 -tf 9 -vs 20 ./ex9 -m ../data/disc-nurbs.mesh -p 1 -r 3 -dt 0.005 -tf 9 mpirun -np 4 ex9p -m ../data/star-q3.mesh -p 1 -rp 1 -dt 0.004 -tf 9 mpirun -np 16 ex9p -m ../data/amr-hex.mesh -p 1 -rs 1 -rp 0 -dt 0.005 -tf 0.5 The plot on the right corresponds to the 1st sample run with R , j and l pressed in the GLVis window.", "title": "  Discontinuous Galerkin"}, {"location": "tutorial/examples/#nonlinear-elasticity", "text": "Example 10 ( ex10.cpp and ex10p.cpp ) solves a time dependent nonlinear elasticity problem of the form $$ \\frac{dv}{dt} = H(x) + S v\\,,\\qquad \\frac{dx}{dt} = v\\,, $$ where $H$ is a hyperelastic model and $S$ is a viscosity operator of Laplacian type. The geometry of the domain is assumed to be as follows: The example demonstrates the use of nonlinear operators, as well as their implicit time integration using a Newton method for solving an associated reduced backward-Euler type nonlinear equation. Each Newton step requires the inversion of a Jacobian matrix, which is done through a (preconditioned) inner solver. Before trying this example, modify the source code of ex10.cpp to disable the second visualization stream as follows: @@ -298,7 +298,7 @@ int main(int argc, char *argv[]) vis_v.precision(8); v.SetFromTrueVector(); x.SetFromTrueVector(); visualize(vis_v, mesh, &x, &v, \"Velocity\", true); - vis_w.open(vishost, visport); + // vis_w.open(vishost, visport); if (vis_w) { oper.GetElasticEnergyDensity(x, w); Make identical change in ex10p.cpp , line 347. Now rebuild both examples: make ex10 ex10p , and try the following sample runs: ./ex10 -m ../data/beam-hex.mesh -s 2 -r 1 -o 2 -dt 3 ./ex10 -m ../data/beam-tri.mesh -s 3 -r 2 -o 2 -dt 3 mpirun -np 4 ex10p -m ../data/beam-wedge.mesh -s 2 -rs 1 -dt 3 mpirun -np 4 ex10p -m ../data/beam-tet.mesh -s 2 -rs 1 -dt 3 The plot on the right corresponds to the 1st sample run.", "title": "  Nonlinear elasticity"}, {"location": "tutorial/examples/#adaptive-mesh-refinement", "text": "MFEM provides support for local conforming and non-conforming adaptive mesh refinement (AMR) with arbitrary-order hanging nodes, anisotropic refinement, derefinement, and parallel load balancing. The AMR support covers the full de Rham complex, i.e., the energy spaces $H^1$, $H(curl)$, $H(div)$ and $L^2$. You can choose from several error estimators, such as the Zienkiewicz-Zhu (ZZ) or the Kelly estimator, to drive the AMRs. We recommend looking at examples 6, 15, 21, and 30 for some simulations with AMR. Example 15 ( ex15.cpp and ex15p.cpp ) demonstrates MFEM's capability to refine, derefine, and load balance non-conforming meshes in 2D and 3D as well as on linear, curved, and surface meshes. In this example the mesh is adapted to a time-dependent solution. At each time step the problem is solved on a sequence of adaptive meshes that are refined based on a simple ZZ estimator. 
At the end of the refinement process, the error estimates are used to identify elements that are over-refined, and a single derefinement step is performed. Finally, in the parallel case, a load-balancing step is executed. Try the following sample runs: ./ex15 -n 3 ./ex15 -m ../data/square-disc.mesh ./ex15 -est 1 -e 0.0001 mpirun -np 4 ex15p -m ../data/mobius-strip.mesh mpirun -np 4 ex15p -m ../data/fichera.mesh -tf 0.5 The plot on the right is related to the parallel version of the 1st sample run with R , j , l and m pressed in the GLVis window.", "title": "  Adaptive mesh refinement"}, {"location": "tutorial/examples/#complex-valued-problems", "text": "MFEM provides a user-friendly interface for solving complex valued systems. These kinds of problems can be formulated using the classes ComplexOperator , ComplexLinearForm , SesquilinearForm , ComplexGridFunction , and their parallel counterparts. You can define the weak formulation by providing the integrators of real and imaginary parts independently and then use similar methods as in the real problems (such us Assemble , FormLinearSystem , and RecoverFEMSolution ) to recover the solution. Currently, there are two examples demonstrating the use of complex-valued systems. Example 22 ( ex22.cpp and ex22p.cpp ) implements three variants of a damped harmonic oscillator: A scalar $H^1$ field: $$-\\nabla\\cdot\\left(a \\nabla u\\right) - \\omega^2 b\\,u + i\\,\\omega\\,c\\,u = 0$$ A vector $H(curl)$ field: $$\\nabla\\times\\left(a\\nabla\\times\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ A vector $H(div)$ field: $$-\\nabla\\left(a \\nabla\\cdot\\vec{u}\\right) - \\omega^2 b\\,\\vec{u} + i\\,\\omega\\,c\\,\\vec{u} = 0$$ In each case the field is driven by a forced oscillation, with angular frequency $\\omega$ imposed at the boundary or a portion of the boundary. Before trying this example, modify the source code of ex22.cpp to disable the additional visualization streams as follows: @@ -272,8 +272,8 @@ int main(int argc, char *argv[]) { char vishost[] = \"localhost\"; int visport = 19916; - socketstream sol_sock_r(vishost, visport); - socketstream sol_sock_i(vishost, visport); + socketstream sol_sock_r(vishost, visport+1); + socketstream sol_sock_i(vishost, visport+2); sol_sock_r.precision(8); sol_sock_i.precision(8); sol_sock_r < < \"solution\\n\" < < *mesh < < u_exact->real() @@ -482,8 +482,8 @@ int main(int argc, char *argv[]) { char vishost[] = \"localhost\"; int visport = 19916; - socketstream sol_sock_r(vishost, visport); - socketstream sol_sock_i(vishost, visport); + socketstream sol_sock_r(vishost, visport+3); + socketstream sol_sock_i(vishost, visport+4); sol_sock_r.precision(8); sol_sock_i.precision(8); sol_sock_r < < \"solution\\n\" < < *mesh < < u.real() @@ -497,8 +497,8 @@ int main(int argc, char *argv[]) char vishost[] = \"localhost\"; int visport = 19916; - socketstream sol_sock_r(vishost, visport); - socketstream sol_sock_i(vishost, visport); + socketstream sol_sock_r(vishost, visport+5); + socketstream sol_sock_i(vishost, visport+6); sol_sock_r.precision(8); sol_sock_i.precision(8); sol_sock_r < < \"solution\\n\" < < *mesh < < u_exact->real() @@ -522,7 +522,7 @@ int main(int argc, char *argv[]) < < \" Press space (in the GLVis window) to resume it.\\n\"; int num_frames = 32; int i = 0; - while (sol_sock) + while (sol_sock && i < 3*num_frames) { double t = (double)(i % num_frames) / num_frames; ostringstream oss; Make identical changes in ex22p.cpp , lines 304-305, 532-533, 549-550 and 577. 
Now rebuild both examples: make ex22 ex22p , and try the following sample runs: ./ex22 -m ../data/inline-quad.mesh -o 3 -p 1 ./ex22 -m ../data/inline-hex.mesh -o 2 -p 2 -pa mpirun -np 1 ex22p -m ../data/star.mesh -o 2 -sigma 10.0 mpirun -np 16 ex22p -m ../data/star.mesh -o 2 -sigma 10.0 -rs 4 -rp 3 -no-vis mpirun -np 1 ex22p -m ../data/inline-pyramid.mesh -o 1 mpirun -np 16 ex22p -m ../data/inline-pyramid.mesh -o 1 -rs 2 -rp 2 -no-vis The plot on the right corresponds to the 3rd and 4th sample runs with R , j and l pressed in the GLVis window. Example 25 ( ex25.cpp and ex25p.cpp ) illustrates the use of a Perfectly Matched Layer (PML) for the simulation of time-harmonic electromagnetic waves propagating in unbounded domains. The implementation involves the introduction of an artificial absorbing layer that minimizes undesired reflections. Inside this layer a complex coordinate stretching map forces the wave modes to decay exponentially. The example solves the indefinite Maxwell equations $$ \\nabla \\times (a \\nabla \\times E) - \\omega^2 b E = f $$ where $a = \\mu^{-1} |J|^{-1} J^T J$, $b= \\epsilon |J| J^{-1} J^{-T}$ and $J$ is the Jacobian matrix of the coordinate transformation. Before trying this example, modify the source code of ex25.cpp to disable the additional visualization streams as follows: @@ -570,13 +570,13 @@ int main(int argc, char *argv[]) char vishost[] = \"localhost\"; int visport = 19916; - socketstream sol_sock_re(vishost, visport); + socketstream sol_sock_re(vishost, visport+1); sol_sock_re.precision(8); sol_sock_re < < \"solution\\n\" < < *mesh < < x.real() < < keys < < \"window_title 'Solution real part'\" < < flush; - socketstream sol_sock_im(vishost, visport); + socketstream sol_sock_im(vishost, visport+2); sol_sock_im.precision(8); sol_sock_im < < \"solution\\n\" < < *mesh < < x.imag() < < keys @@ -594,7 +594,7 @@ int main(int argc, char *argv[]) < < \" Press space (in the GLVis window) to resume it.\\n\"; int num_frames = 32; int i = 0; - while (sol_sock) + while (sol_sock && i < 3*num_frames) { double t = (double)(i % num_frames) / num_frames; ostringstream oss; Make identical changes in ex25p.cpp , lines 638, 647 and 674. Now rebuild both examples: make ex25 ex25p , and try the following sample runs: ./ex25 -o 2 -f 5.0 -ref 4 -prob 2 ./ex25 -o 2 -f 1.0 -ref 2 -prob 3 mpirun -np 1 ex25p -o 2 -f 8.0 -rs 2 -rp 2 -prob 4 -m ../data/inline-quad.mesh mpirun -np 32 ex25p -o 2 -f 8.0 -rs 3 -rp 3 -prob 4 -m ../data/inline-quad.mesh -no-vis mpirun -np 1 ex25p -o 2 -f 1.0 -rs 2 -rp 2 -prob 0 -m ../data/beam-quad.mesh mpirun -np 48 ex25p -o 2 -f 1.0 -rs 4 -rp 4 -prob 0 -m ../data/beam-quad.mesh -no-vis The plot on the right corresponds to the 1st sample run with aaa , mm , c and several p pressed in the GLVis window.", "title": "  Complex-valued problems"}, {"location": "tutorial/fem/", "text": "Finite Element Basics 45 minutes basic Lesson Objectives Understand a basic finite element discretization of the Poisson equation in MFEM. Learn how to launch serial and parallel runs of MFEM examples. Learn how to visualize the results of MFEM simulations. Note Please complete the Getting Started page before this lesson. Poisson equation The Poisson Equation is a partial differential equation (PDE) that can be used to model steady-state heat conduction, electric potentials, and gravitational fields. In mathematical terms $$ -\\Delta u = f $$ where u is the potential field and f is the source function. This PDE is a generalization of the Laplace Equation . 
To approximately solve the above continuous equation on computers, we need to discretize it by introducing a finite (discrete) number of unknowns to compute for. In the Finite Element Method (FEM), this is done using the concept of basis functions . Instead of calculating the exact analytic solution u , we approximate it $$ u \\approx u_h := \\sum_{j=1}^n c_j \\varphi_j $$ where $u_h$ is the finite element approximation with degrees of freedom (unknown coefficients) $c_j$, and $\\varphi_j$ are known basis functions . The FEM basis functions are typically piecewise-polynomial functions on a given computational mesh, which are only non-zero on small portions of the mesh. With finite elements, the mesh can be totally unstructured, curved, and non-conforming: To solve for the unknown coefficients in (2), we consider the weak (or variational) form of the Poisson equation. This is obtained by first multiplying with another (test) basis function $\\varphi_i$: $$-\\sum_{j=1}^n c_j \\int_\\Omega \\Delta \\varphi_j \\varphi_i = \\int_\\Omega f \\varphi_i$$ and then integrating by parts using the divergence theorem : $$\\sum_{j=1}^n c_j \\int_\\Omega \\nabla \\varphi_j \\cdot \\nabla \\varphi_i = \\int_\\Omega f \\varphi_i$$ Here we are assuming that the boundary term vanishes due to homogeneous Dirichlet boundary conditions corresponding, for example, to zero temperature on the whole boundary. Since the basis functions are known, we can rewrite (4) as $$ A x = b $$ where $$ A_{ij} = \\int_\\Omega \\nabla \\varphi_j \\cdot \\nabla \\varphi_i $$ $$ b_i = \\int_\\Omega f \\varphi_i $$ $$ x_j = c_j $$ This is a $n \\times n$ linear system that can be solved directly or iteratively for the unknown coefficients. Note that we are free to choose the computational mesh and the basis functions $\\varphi_i$, and therefore the finite space, as we see fit. Note The above is a basic introduction to finite elements in the simplest possible settings. To learn more, you can visit MFEM's Finite Element Method page. Annotated Example 1 MFEM's Example 1 implements the above simple FEM for the Poisson problem in the source file examples/ex1.cpp . We set $f=1$ in (1) and enforce homogeneous Dirichlet boundary conditions on the whole boundary. Below we highlight selected portions of the example code and connect them with the description in the previous section. You can follow along by browsing ex1.cpp in your VS Code browser window. In the settings of this tutorial, the visualization will automatically update in the GLVis browser window. The computational mesh is provided as input (option -m ) that could be 3D, 2D, surface, hex/tet, etc. (It defaults to star.mesh in line 77 .) The code in lines 120-124 loads the mesh from the given file, mesh_file and creates the corresponding MFEM object mesh of class Mesh . Mesh mesh(mesh_file, 1, 1); int dim = mesh.Dimension(); The following code (lines 126-137 ) refines the mesh uniformly to about 50,000 elements. You can easily modify the refinement by changing the definition of ref_levels . int ref_levels = (int)floor(log(50000./mesh.GetNE())/log(2.)/dim); for (int l = 0; l < ref_levels; l++) { mesh.UniformRefinement(); } In the next section we create the finite element space, i.e., specify the finite element basis functions $\\varphi_j$ on the mesh. This involves the MFEM classes FiniteElementCollection , which specifies the space (including its order , provided as input via -o ), and FiniteElementSpace , which connects the space and the mesh. 
Focusing on the common case order > 0 , the code in lines 139-162 is essentially: FiniteElementCollection *fec = new H1_FECollection(order, dim); FiniteElementSpace fespace(&mesh, fec); cout << \"Number of finite element unknowns: \" << fespace.GetTrueVSize() << endl; The printed number of finite element unknowns (typically) corresponds to the size of the linear system $n$ from the previous section. The finite element degrees of freedom that are on the domain boundary are then extracted in lines 164-174 . We need those to impose the Dirichlet boundary conditions. Array ess_tdof_list; if (mesh.bdr_attributes.Size()) { Array ess_bdr(mesh.bdr_attributes.Max()); ess_bdr = 1; fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list); } The method GetEssentialTrueDofs takes a marker array of Mesh boundary attributes and returns the FiniteElementSpace degrees of freedom that belong to the marked attributes (the non-zero entries of ess_bdr ). The right-hand side $b$ is constructed in lines 176-182 . In MFEM terminology, integrals of the form (7) are implemented in the class LinearForm . The Coefficient object corresponds to $f$ from the previous section, which here is set to $1$. You can easily specify more general $f$ with other coefficient classes, e.g., FunctionCoefficient . LinearForm b(&fespace); ConstantCoefficient one(1.0); b.AddDomainIntegrator(new DomainLFIntegrator(one)); b.Assemble(); The finite element approximation $u_h$ is described in MFEM as a GridFunction belonging to the FiniteElementSpace . Note that a GridFunction object can be viewed both as the function $u_h$ in (2) as well as the vector of degrees of freedom $x$ in (8). See lines 184-188 . GridFunction x(&fespace); x = 0.0; We need to initialize x with the boundary values we want to impose as Dirichlet boundary conditions (for simplicity, here we just set x=0 in the whole domain). The matrix $A$ is represented as a BilinearForm object, with a specific DiffusionIntegrator corresponding to the weak form (6). See lines 190-210 . BilinearForm a(&fespace); if (pa) { a.SetAssemblyLevel(AssemblyLevel::PARTIAL); } if (fa) { a.SetAssemblyLevel(AssemblyLevel::FULL); } a.AddDomainIntegrator(new DiffusionIntegrator(one)); a.Assemble(); MFEM supports different assembly levels for $A$ (from global matrix to matrix-free) and many different integrators . You can also provide a variety of coefficients to the integrator, for example, PWConstCoefficient to specify different material properties in different portions of the domain. The linear system (5) is formed in lines 212-216 and solved with a variety of options in lines 218-252 . One simple case is: OperatorPtr A; Vector B, X; a.FormLinearSystem(ess_tdof_list, x, b, A, X, B); cout << \"Size of linear system: \" << A->Height() << endl; GSSmoother M((SparseMatrix&)(*A)); PCG(*A, M, B, X, 1, 200, 1e-12, 0.0); The method FormLinearSystem takes the BilinearForm , LinearForm , GridFunction , and boundary conditions (i.e., a , b , x , and ess_tdof_list ); applies any necessary transformations such as eliminating boundary conditions (specified by the boundary values of x , applying conforming constraints for non-conforming AMR, static condensation, etc.); and produces the corresponding matrix $A$, right-hand side vector $B$, and unknown vector $X$. In the above example, we then solve A X = B with conjugate gradient iterations, using a simple Gauss-Seidel preconditioner. 
We set the maximum number of iterations to 200 and a convergence criteria of residual norm reduction by 6 orders of magnitude ( 1e-12 is the square of that relative tolerance). Solving the linear system is one of the main computational bottlenecks in the FEM. It can take many preconditioned conjugate gradient (PCG) iterations depending on the problem size, the difficulty of the problem, and the choice of the preconditioner. Once the linear system is solved, we recover the solution as a finite element grid function, and then visualize and save the final results to disk (files refined.mesh and sol.gf ). See lines 254-274 . a.RecoverFEMSolution(X, b, x); ofstream mesh_ofs(\"refined.mesh\"); mesh.Print(mesh_ofs); ofstream sol_ofs(\"sol.gf\"); x.Save(sol_ofs); socketstream sol_sock(\"localhost\", 19916); sol_sock << \"solution\\n\" << mesh << x << flush; Parallel Example 1p Like most MFEM examples, Example 1 has also a parallel version in the source file examples/ex1p.cpp , which illustrates the ease of transition between sequential and MPI-parallel code. The parallel version supports all options of the serial example, and can be executed on varying numbers of MPI ranks, e.g., with mpirun -np . Besides MPI, in parallel we also depend on METIS for mesh partitioning and hypre for solvers. The differences between the two versions are small, and you can compare them for yourself by opening both files in your VS Code window. The main additions in ex1p.cpp compared to ex1.cpp are: Initializing MPI and hypre Mpi::Init(); Hypre::Init(); Splitting the serial mesh in parallel with additional parallel refinement ParMesh pmesh(MPI_COMM_WORLD, mesh); int par_ref_levels = 2; for (int l = 0; l < par_ref_levels; l++) { pmesh.UniformRefinement(); } Using the Par -prefixed versions of the classes ParFiniteElementSpace fespace(&pmesh, fec); ParLinearForm b(&fespace); ParGridFunction x(&fespace); ParBilinearForm a(&fespace); Parallel PCG with hypre's algebraic multigrid BoomerAMG preconditioner Solver *prec = new HypreBoomerAMG; CGSolver cg(MPI_COMM_WORLD); cg.SetRelTol(1e-12); cg.SetMaxIter(2000); cg.SetPrintLevel(1); cg.SetPreconditioner(*prec); cg.SetOperator(*A); cg.Mult(B, X); Note Unlike in the serial version, we expect the number of PCG iterations to remain relatively bounded with the BoomerAMG preconditioner independent of the mesh size, coefficient jumps, and number of MPI ranks. Note, however, that algebraic multigrid has a non-trivial setup phase, which can be comparable in terms of time with the PCG solve phase. For more details, see the Solvers and Scalability page. Serial and parallel runs Both ex1 and ex1p come pre-built in the tutorial environment. You can see a number of sample runs at the beginning of their corresponding source files when you open them in VS Code. To get a feel for how these examples work, you can copy and paste some of these runs from the source to the terminal in VS Code. Try this! Specify a couple different meshes with -m in the VS Code terminal to see how the image rendered by GLVis changes. Run ./ex1 -m ../data/escher.mesh ./ex1 -m ../data/l-shape.mesh ./ex1 -m ../data/mobius-strip.mesh Warning The current directory is not in the VS Code PATH so make sure to add ./ before the executable, e.g., ./ex1 -m ../data/pipe-nurbs.mesh not ex1 -m ../data/pipe-nurbs.mesh . Note The GLVis visualization is local to your browser, so it may take a while to update after a sample run. Once the data arrives, interaction with the visualization window should be fast. Try this! 
Now try out some sample parallel runs: mpirun -np 16 ex1p mpirun -np 16 ex1p -m ../data/pipe-nurbs.mesh mpirun -np 48 ex1p -m ../data/escher-p2.mesh Warning If you are getting errors from mpirun that there are \"not enough slots available in the system\" , try adding the --oversubscribe option. For example: mpirun --oversubscribe -np 16 ex1p The above plot shows the parallel decomposition in the first sample run, with the following manipulations in the GLVis window: pressing keys R , j , b , g , F11 twice, p a number of times, and zooming in with the Right mouse button. GPU runs If your container supports CUDA you can explore GPU computations with: mpirun -np 4 ex1p -pa -d cuda Additionally you can try out AmgX by changing your directory to examples/amgx and building: cd amgx && make ex1p After that you can run the example with mpirun -np 4 ex1p -d cuda --amgx-file amg_pcg.json GLVis interface GLVis is a lightweight tool for accurate and flexible finite element visualization based on MFEM. In this tutorial we use its web version, which should work on any machine with a modern browser, including mobile touch devices such as tablets and phones. Note The GLVis and VS Code browser windows do not need to be on the same device. For example, you can run VS Code on a computer, while GLVis shows the results on your phone/tablet. GLVis natively understands finite element data and can manipulate it in various ways through the web interface or by typing (case sensitive) keystrokes in the GLVis window. To access the web interface, move to the top right of the GLVis window and press the Visualization controls icon . This will open a number of buttons for controlling the mesh, colors, and position of the plot: You can perform additional operations with the GLVis key commands and mouse functions. Most of them are described in the Help window that appears when clicking the icon in the upper left corner, or by pressing the h key. Some of the more useful key commands and mouse functions are: A \u2014 Turn on/off the use of anti-aliasing/multi-sampling b \u2014 Toggle the boundary in 2D scalar mode c \u2014 Show/hide color bar F11 / F12 \u2014 Shrink/Zoom parallel subdomains g \u2014 Toggle background color (white/black) i \u2014 Toggle cutting plane j \u2014 Turn on/off perspective Left \u2014 Rotate the plot Left + Shift \u2014 Spin the plot (according to the dragging vector) m \u2014 Toggle the mesh state. p / P \u2014 Cycle through color palettes (lots of options) r \u2014 Reset the plot to 3D view R \u2014 Cycle through 2D projections (looking above/below in x / y / z directions) Right \u2014 Zoom in/out S \u2014 Take an image snapshot space \u2014 Pause solution update in time-dependent simulations t \u2014 Cycle materials and lights x / X \u2014 Rotate cutting plane ( \\phi ) in 3D y / Y \u2014 Rotate cutting plane ( \\theta ) in 3D z / Z \u2014 Translate cutting plane in 3D Note that you may need to press fn and/or Ctrl to escape some of the function keys. Try this! After running Example 1, experiment with the key command m in the GLVis window to change the appearance of the mesh. Use i to make a cut through the visual and y to change the position of the cutting plane. For more details, see the full list of key commands and mouse functions in the GLVis README . Warning If the GLVis window becomes unresponsive, try disconnecting and connecting again. 
If this doesn't help, run the following in the VS Code terminal: pkill -f glvis-browser-server , then force-reload the GLVis browser window and connect again. Questions? Ask for help in the tutorial Slack channel . Next Steps Depending on your interests pick one of the following lessons: Tour of MFEM Examples Meshing and Visualization Solvers and Scalability Back to the MFEM tutorial page MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Fem"}, {"location": "tutorial/fem/#finite-element-basics", "text": "45 minutes basic", "title": "  Finite Element Basics"}, {"location": "tutorial/fem/#poisson-equation", "text": "The Poisson Equation is a partial differential equation (PDE) that can be used to model steady-state heat conduction, electric potentials, and gravitational fields. In mathematical terms $$ -\\Delta u = f $$ where u is the potential field and f is the source function. This PDE is a generalization of the Laplace Equation . To approximately solve the above continuous equation on computers, we need to discretize it by introducing a finite (discrete) number of unknowns to compute for. In the Finite Element Method (FEM), this is done using the concept of basis functions . Instead of calculating the exact analytic solution u , we approximate it $$ u \\approx u_h := \\sum_{j=1}^n c_j \\varphi_j $$ where $u_h$ is the finite element approximation with degrees of freedom (unknown coefficients) $c_j$, and $\\varphi_j$ are known basis functions . The FEM basis functions are typically piecewise-polynomial functions on a given computational mesh, which are only non-zero on small portions of the mesh. With finite elements, the mesh can be totally unstructured, curved, and non-conforming: To solve for the unknown coefficients in (2), we consider the weak (or variational) form of the Poisson equation. This is obtained by first multiplying with another (test) basis function $\\varphi_i$: $$-\\sum_{j=1}^n c_j \\int_\\Omega \\Delta \\varphi_j \\varphi_i = \\int_\\Omega f \\varphi_i$$ and then integrating by parts using the divergence theorem : $$\\sum_{j=1}^n c_j \\int_\\Omega \\nabla \\varphi_j \\cdot \\nabla \\varphi_i = \\int_\\Omega f \\varphi_i$$ Here we are assuming that the boundary term vanishes due to homogeneous Dirichlet boundary conditions corresponding, for example, to zero temperature on the whole boundary. Since the basis functions are known, we can rewrite (4) as $$ A x = b $$ where $$ A_{ij} = \\int_\\Omega \\nabla \\varphi_j \\cdot \\nabla \\varphi_i $$ $$ b_i = \\int_\\Omega f \\varphi_i $$ $$ x_j = c_j $$ This is a $n \\times n$ linear system that can be solved directly or iteratively for the unknown coefficients. Note that we are free to choose the computational mesh and the basis functions $\\varphi_i$, and therefore the finite space, as we see fit.", "title": "  Poisson equation"}, {"location": "tutorial/fem/#annotated-example-1", "text": "MFEM's Example 1 implements the above simple FEM for the Poisson problem in the source file examples/ex1.cpp . We set $f=1$ in (1) and enforce homogeneous Dirichlet boundary conditions on the whole boundary. Below we highlight selected portions of the example code and connect them with the description in the previous section. You can follow along by browsing ex1.cpp in your VS Code browser window. In the settings of this tutorial, the visualization will automatically update in the GLVis browser window. 
The computational mesh is provided as input (option -m ) that could be 3D, 2D, surface, hex/tet, etc. (It defaults to star.mesh in line 77 .) The code in lines 120-124 loads the mesh from the given file, mesh_file and creates the corresponding MFEM object mesh of class Mesh . Mesh mesh(mesh_file, 1, 1); int dim = mesh.Dimension(); The following code (lines 126-137 ) refines the mesh uniformly to about 50,000 elements. You can easily modify the refinement by changing the definition of ref_levels . int ref_levels = (int)floor(log(50000./mesh.GetNE())/log(2.)/dim); for (int l = 0; l < ref_levels; l++) { mesh.UniformRefinement(); } In the next section we create the finite element space, i.e., specify the finite element basis functions $\\varphi_j$ on the mesh. This involves the MFEM classes FiniteElementCollection , which specifies the space (including its order , provided as input via -o ), and FiniteElementSpace , which connects the space and the mesh. Focusing on the common case order > 0 , the code in lines 139-162 is essentially: FiniteElementCollection *fec = new H1_FECollection(order, dim); FiniteElementSpace fespace(&mesh, fec); cout << \"Number of finite element unknowns: \" << fespace.GetTrueVSize() << endl; The printed number of finite element unknowns (typically) corresponds to the size of the linear system $n$ from the previous section. The finite element degrees of freedom that are on the domain boundary are then extracted in lines 164-174 . We need those to impose the Dirichlet boundary conditions. Array ess_tdof_list; if (mesh.bdr_attributes.Size()) { Array ess_bdr(mesh.bdr_attributes.Max()); ess_bdr = 1; fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list); } The method GetEssentialTrueDofs takes a marker array of Mesh boundary attributes and returns the FiniteElementSpace degrees of freedom that belong to the marked attributes (the non-zero entries of ess_bdr ). The right-hand side $b$ is constructed in lines 176-182 . In MFEM terminology, integrals of the form (7) are implemented in the class LinearForm . The Coefficient object corresponds to $f$ from the previous section, which here is set to $1$. You can easily specify more general $f$ with other coefficient classes, e.g., FunctionCoefficient . LinearForm b(&fespace); ConstantCoefficient one(1.0); b.AddDomainIntegrator(new DomainLFIntegrator(one)); b.Assemble(); The finite element approximation $u_h$ is described in MFEM as a GridFunction belonging to the FiniteElementSpace . Note that a GridFunction object can be viewed both as the function $u_h$ in (2) as well as the vector of degrees of freedom $x$ in (8). See lines 184-188 . GridFunction x(&fespace); x = 0.0; We need to initialize x with the boundary values we want to impose as Dirichlet boundary conditions (for simplicity, here we just set x=0 in the whole domain). The matrix $A$ is represented as a BilinearForm object, with a specific DiffusionIntegrator corresponding to the weak form (6). See lines 190-210 . BilinearForm a(&fespace); if (pa) { a.SetAssemblyLevel(AssemblyLevel::PARTIAL); } if (fa) { a.SetAssemblyLevel(AssemblyLevel::FULL); } a.AddDomainIntegrator(new DiffusionIntegrator(one)); a.Assemble(); MFEM supports different assembly levels for $A$ (from global matrix to matrix-free) and many different integrators . You can also provide a variety of coefficients to the integrator, for example, PWConstCoefficient to specify different material properties in different portions of the domain. 
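For instance, reusing the mesh and fespace objects from the excerpts above, a hypothetical two-material variant of this setup might look roughly as follows (the attribute numbering and coefficient values are invented for illustration):

// Sketch: one diffusion coefficient value per mesh attribute (values invented).
Vector sigma(mesh.attributes.Max());
sigma = 1.0;       // default value for all attributes
sigma(0) = 10.0;   // elements with attribute 1 get a 10x larger coefficient
PWConstCoefficient sigma_coef(sigma);

BilinearForm a(&fespace);
a.AddDomainIntegrator(new DiffusionIntegrator(sigma_coef));
a.Assemble();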
The linear system (5) is formed in lines 212-216 and solved with a variety of options in lines 218-252 . One simple case is: OperatorPtr A; Vector B, X; a.FormLinearSystem(ess_tdof_list, x, b, A, X, B); cout << \"Size of linear system: \" << A->Height() << endl; GSSmoother M((SparseMatrix&)(*A)); PCG(*A, M, B, X, 1, 200, 1e-12, 0.0); The method FormLinearSystem takes the BilinearForm , LinearForm , GridFunction , and boundary conditions (i.e., a , b , x , and ess_tdof_list ); applies any necessary transformations such as eliminating boundary conditions (specified by the boundary values of x , applying conforming constraints for non-conforming AMR, static condensation, etc.); and produces the corresponding matrix $A$, right-hand side vector $B$, and unknown vector $X$. In the above example, we then solve A X = B with conjugate gradient iterations, using a simple Gauss-Seidel preconditioner. We set the maximum number of iterations to 200 and a convergence criteria of residual norm reduction by 6 orders of magnitude ( 1e-12 is the square of that relative tolerance). Solving the linear system is one of the main computational bottlenecks in the FEM. It can take many preconditioned conjugate gradient (PCG) iterations depending on the problem size, the difficulty of the problem, and the choice of the preconditioner. Once the linear system is solved, we recover the solution as a finite element grid function, and then visualize and save the final results to disk (files refined.mesh and sol.gf ). See lines 254-274 . a.RecoverFEMSolution(X, b, x); ofstream mesh_ofs(\"refined.mesh\"); mesh.Print(mesh_ofs); ofstream sol_ofs(\"sol.gf\"); x.Save(sol_ofs); socketstream sol_sock(\"localhost\", 19916); sol_sock << \"solution\\n\" << mesh << x << flush;", "title": "  Annotated Example 1"}, {"location": "tutorial/fem/#parallel-example-1p", "text": "Like most MFEM examples, Example 1 has also a parallel version in the source file examples/ex1p.cpp , which illustrates the ease of transition between sequential and MPI-parallel code. The parallel version supports all options of the serial example, and can be executed on varying numbers of MPI ranks, e.g., with mpirun -np . Besides MPI, in parallel we also depend on METIS for mesh partitioning and hypre for solvers. The differences between the two versions are small, and you can compare them for yourself by opening both files in your VS Code window. The main additions in ex1p.cpp compared to ex1.cpp are: Initializing MPI and hypre Mpi::Init(); Hypre::Init(); Splitting the serial mesh in parallel with additional parallel refinement ParMesh pmesh(MPI_COMM_WORLD, mesh); int par_ref_levels = 2; for (int l = 0; l < par_ref_levels; l++) { pmesh.UniformRefinement(); } Using the Par -prefixed versions of the classes ParFiniteElementSpace fespace(&pmesh, fec); ParLinearForm b(&fespace); ParGridFunction x(&fespace); ParBilinearForm a(&fespace); Parallel PCG with hypre's algebraic multigrid BoomerAMG preconditioner Solver *prec = new HypreBoomerAMG; CGSolver cg(MPI_COMM_WORLD); cg.SetRelTol(1e-12); cg.SetMaxIter(2000); cg.SetPrintLevel(1); cg.SetPreconditioner(*prec); cg.SetOperator(*A); cg.Mult(B, X);", "title": "  Parallel Example 1p"}, {"location": "tutorial/fem/#serial-and-parallel-runs", "text": "Both ex1 and ex1p come pre-built in the tutorial environment. You can see a number of sample runs at the beginning of their corresponding source files when you open them in VS Code. 
To get a feel for how these examples work, you can copy and paste some of these runs from the source to the terminal in VS Code.", "title": "  Serial and parallel runs"}, {"location": "tutorial/fem/#gpu-runs", "text": "If your container supports CUDA you can explore GPU computations with: mpirun -np 4 ex1p -pa -d cuda Additionally you can try out AmgX by changing your directory to examples/amgx and building: cd amgx && make ex1p After that you can run the example with mpirun -np 4 ex1p -d cuda --amgx-file amg_pcg.json", "title": "  GPU runs"}, {"location": "tutorial/fem/#glvis-interface", "text": "GLVis is a lightweight tool for accurate and flexible finite element visualization based on MFEM. In this tutorial we use its web version, which should work on any machine with a modern browser, including mobile touch devices such as tablets and phones.", "title": "  GLVis interface"}, {"location": "tutorial/further/", "text": "Further Steps 30 minutes advanced Lesson Objectives Explore additional examples and miniapps. Write a simple simulation by extending existing examples. Learn more about MFEM and join the community. Note Please complete Getting Started , Finite Element Basics and at least one of the Tour of MFEM Examples , Meshing and Visualization , or Solvers and Scalability pages before this lesson. Explore additional examples and miniapps MFEM includes a number of well-documented example codes and miniapps that can be used as tutorials, as well as simple starting points for user applications. These examples and miniapps are available in the mfem/examples and mfem/miniapps subdirectories of your VS Code terminal. The full list of examples is below. Feel free to explore any of them depending on your interests, but we recommend starting with the ones marked with a \u2b50. Example 0 \u2014 Simplest MFEM example, good starting point for new users (nodal H1 FEM for the Laplace problem). \u2b50 Example 1 \u2014 Nodal H1 FEM for the Laplace problem. \u2b50 Example 2 \u2014 Vector FEM for linear elasticity. Example 3 \u2014 Nedelec H(curl) FEM for the definite Maxwell problem. Example 4 \u2014 Raviart-Thomas H(div) FEM for the grad-div problem. Example 5 \u2014 Mixed pressure-velocity FEM for the Darcy problem. Example 6 \u2014 Non-conforming adaptive mesh refinement (AMR) for the Laplace problem. Example 7 \u2014 Laplace problem on a surface (the unit sphere). \u2b50 Example 8 \u2014 Discontinuous Petrov-Galerkin (DPG) for the Laplace problem. Example 9 \u2014 Discontinuous Galerkin (DG) time-dependent advection. \u2b50 Example 10 \u2014 Time-dependent implicit nonlinear elasticity. \u2b50 Example 11 \u2014 Parallel Laplace eigensolver. Example 12 \u2014 Parallel linear elasticity eigensolver. Example 13 \u2014 Parallel Maxwell eigensolver. Example 14 \u2014 DG for the Laplace problem. Example 15 \u2014 Dynamic AMR for Laplace with prescribed time-dependent source. \u2b50 Example 16 \u2014 Time-dependent nonlinear heat equation. Example 17 \u2014 DG for linear elasticity. Example 18 \u2014 DG for the Euler equations. Example 19 \u2014 Incompressible nonlinear elasticity. Example 20 \u2014 Symplectic ODE integration. Example 21 \u2014 AMR for linear elasticity. Example 22 \u2014 Complex-valued linear systems. \u2b50 Example 23 \u2014 Second-order in time wave equation. \u2b50 Example 24 \u2014 Mixed finite element spaces and interpolators. Example 25 \u2014 Perfectly Matched Layer (PML) for Maxwell equations. Example 26 \u2014 Multigrid preconditioner for the Laplace problem. 
\u2b50 Example 27 \u2014 Boundary conditions for the Laplace problem. Example 28 \u2014 Constraints and sliding boundary conditions. Example 29 \u2014 Solving PDEs on embedded surfaces. Example 30 \u2014 Mesh preprocessing, resolving problem data. Example 31 \u2014 Nedelec H(curl) FEM for the anisotropic definite Maxwell problem. Example 32 \u2014 Parallel Nedelec Maxwell eigensolver with anisotropic permittivity. Example 33 \u2014 Nodal C0 FEM for the fractional Laplacian problem. Example 34 \u2014 Source function from SubMesh. Example 35 \u2014 Port boundary condition from SubMesh. Example 36 \u2014 High-order FEM for the obstacle problem. Example 37 \u2014 Topology optimization. Example 38 \u2014 Cut-surface and cut-volume integration. Example 39 \u2014 Named mesh attributes. Most of these examples have a serial and a parallel version, illustrating the ease of transition and the minimal code changes between the two. Many examples also have modifications that take advantage of optional third-party libraries such as PETSc , SLEPc , SUNDIALS , PUMI , Ginkgo , and HiOp . Beyond the examples, a number of miniapps are available that are more representative of the advanced usage of the library in physics/application codes. Some of the included miniapps are: Volta \u2014 Simple electrostatics simulation code. Tesla \u2014 Simple magnetostatics simulation code. Maxwell \u2014 Transient electromagnetics simulation code. Joule \u2014 Transient magnetics and Joule heating miniapp. Navier \u2014 Solver for the incompressible time-dependent Navier-Stokes equations. Mesh Explorer \u2014 Visualize and manipulate meshes. Mesh Optimizer \u2014 Optimize high-order meshes. Shaper \u2014 Resolve material interfaces by mesh refinement. Interpolation \u2014 Evaluation of high-order finite element functions in physical space. Overlapping Grids \u2014 Schwarz coupling of single- and multi-physics problems. Extrapolation \u2014 Finite element extrapolation solver. Distance \u2014 Finite element distance solver. Shifted Diffusion \u2014 High-Order shifted boundary method for non body-fitted meshes. Minimal Surface \u2014 Compute the minimal surface of a given mesh. Display Basis \u2014 Visualize finite element basis functions. LOR Transfer \u2014 Map functions between high-order and low-order-refined spaces. SPDE \u2014 Generate a Gaussian random field via the SPDE method; i.e., by solving a fractional PDE with random load. Contact \u2014 Mortar contact patch test for elasticity using the Tribol library. Multidomain \u2014 Multidomain and SubMesh demonstration Miniapp. DPG \u2014 Discontinuous Petrov-Galerkin (DPG) for various examples. In addition, the sources for several external benchmark/proxy-apps built on top of MFEM are available: Laghos \u2014 High-Order Lagrangian hydrodynamics miniapp. Remhos \u2014 High-Order advection remap miniapp. Mulard \u2014 Multigroup thermal radiation diffusion miniapp. A handful of \"toy\" miniapps of a less serious nature demonstrate the flexibility of MFEM (and provide a bit of fun): Automata \u2014 Model of a simple cellular automata. Life \u2014 Model of Conway's game of life. Lissajous \u2014 Spinning optical illusion. Mandel \u2014 Fractal visualization with AMR. Mondrian \u2014 Convert any image to an AMR mesh. Rubik \u2014 Interactive Rubik's Cube\u2122 puzzle. Snake \u2014 Model of the Rubik's Snake\u2122 puzzle. Write a simple simulation Modify the miniapps and example codes to create a simple simulation of your own. 
You can edit the source code and rebuild the binary simply with make . For example, you can solve a steady-state heat conduction problem in 2D and 3D using the shaper miniapp (modified for the cable shape) to define the mesh and ex1 or ex1p to solve it (modified to include separate coefficients for air and cable). Please consult the MFEM code documentation and don't hesitate to ask if you have any implementation questions. We want to see your creativity! Post your visualization images in the Slack channel for a chance to be featured on MFEM's gallery page ! Install MFEM + GLVis on your own machine Download MFEM from mfem.org/download or clone it from GitHub and follow the building instructions here: mfem.org/building . You should be able to download and install the serial version in 10 minutes. The parallel version of MFEM requires installing hypre and METIS (see the building instructions ). Alternatively, if you already have Spack, you can build with spack install mfem glvis . With your own installation, you can explore additional topics not covered in this tutorial such as: Partial Assembly and the Finite Element Operator Decomposition . GPU Support on NVIDIA and AMD hardware. Integrations with PETSc , SUNDIALS , SuperLU , libCEED , PUMI , Ginkgo , HiOp , and more. Python support with the PyMFEM wrapper and Jupyter notebooks . Visit the MFEM website For more information about MFEM, visit the website, mfem.org , including the Features , Examples , Publications , and Finite Elements , pages. Review the Videos for recordings from MFEM seminars , workshops , and conference presentations: You may also be interested in visiting the websites of the related GLVis , CEED , and BLAST projects. Join the community If MFEM looks exciting to you, please join the community on GitHub and help us make it better! \ud83d\ude80 We welcome contributions and feedback at all levels: bugfixes; code improvements; simplifications; new mesh, discretization, or solver capabilities; improved documentation; new examples and miniapps; HPC performance improvements; etc. See CONTRIBUTING.md for more details. You can contact the MFEM team by posting to the GitHub issue tracker or at mfem-dev@llnl.gov . Thank you! Thank you for participating in the MFEM tutorial. Please let us know if you have any questions in the Slack channel . Back to the MFEM tutorial page MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Further"}, {"location": "tutorial/further/#further-steps", "text": "30 minutes advanced", "title": "  Further Steps"}, {"location": "tutorial/further/#explore-additional-examples-and-miniapps", "text": "MFEM includes a number of well-documented example codes and miniapps that can be used as tutorials, as well as simple starting points for user applications. These examples and miniapps are available in the mfem/examples and mfem/miniapps subdirectories of your VS Code terminal. The full list of examples is below. Feel free to explore any of them depending on your interests, but we recommend starting with the ones marked with a \u2b50. Example 0 \u2014 Simplest MFEM example, good starting point for new users (nodal H1 FEM for the Laplace problem). \u2b50 Example 1 \u2014 Nodal H1 FEM for the Laplace problem. \u2b50 Example 2 \u2014 Vector FEM for linear elasticity. Example 3 \u2014 Nedelec H(curl) FEM for the definite Maxwell problem. Example 4 \u2014 Raviart-Thomas H(div) FEM for the grad-div problem. 
Example 5 \u2014 Mixed pressure-velocity FEM for the Darcy problem. Example 6 \u2014 Non-conforming adaptive mesh refinement (AMR) for the Laplace problem. Example 7 \u2014 Laplace problem on a surface (the unit sphere). \u2b50 Example 8 \u2014 Discontinuous Petrov-Galerkin (DPG) for the Laplace problem. Example 9 \u2014 Discontinuous Galerkin (DG) time-dependent advection. \u2b50 Example 10 \u2014 Time-dependent implicit nonlinear elasticity. \u2b50 Example 11 \u2014 Parallel Laplace eigensolver. Example 12 \u2014 Parallel linear elasticity eigensolver. Example 13 \u2014 Parallel Maxwell eigensolver. Example 14 \u2014 DG for the Laplace problem. Example 15 \u2014 Dynamic AMR for Laplace with prescribed time-dependent source. \u2b50 Example 16 \u2014 Time-dependent nonlinear heat equation. Example 17 \u2014 DG for linear elasticity. Example 18 \u2014 DG for the Euler equations. Example 19 \u2014 Incompressible nonlinear elasticity. Example 20 \u2014 Symplectic ODE integration. Example 21 \u2014 AMR for linear elasticity. Example 22 \u2014 Complex-valued linear systems. \u2b50 Example 23 \u2014 Second-order in time wave equation. \u2b50 Example 24 \u2014 Mixed finite element spaces and interpolators. Example 25 \u2014 Perfectly Matched Layer (PML) for Maxwell equations. Example 26 \u2014 Multigrid preconditioner for the Laplace problem. \u2b50 Example 27 \u2014 Boundary conditions for the Laplace problem. Example 28 \u2014 Constraints and sliding boundary conditions. Example 29 \u2014 Solving PDEs on embedded surfaces. Example 30 \u2014 Mesh preprocessing, resolving problem data. Example 31 \u2014 Nedelec H(curl) FEM for the anisotropic definite Maxwell problem. Example 32 \u2014 Parallel Nedelec Maxwell eigensolver with anisotropic permittivity. Example 33 \u2014 Nodal C0 FEM for the fractional Laplacian problem. Example 34 \u2014 Source function from SubMesh. Example 35 \u2014 Port boundary condition from SubMesh. Example 36 \u2014 High-order FEM for the obstacle problem. Example 37 \u2014 Topology optimization. Example 38 \u2014 Cut-surface and cut-volume integration. Example 39 \u2014 Named mesh attributes. Most of these examples have a serial and a parallel version, illustrating the ease of transition and the minimal code changes between the two. Many examples also have modifications that take advantage of optional third-party libraries such as PETSc , SLEPc , SUNDIALS , PUMI , Ginkgo , and HiOp . Beyond the examples, a number of miniapps are available that are more representative of the advanced usage of the library in physics/application codes. Some of the included miniapps are: Volta \u2014 Simple electrostatics simulation code. Tesla \u2014 Simple magnetostatics simulation code. Maxwell \u2014 Transient electromagnetics simulation code. Joule \u2014 Transient magnetics and Joule heating miniapp. Navier \u2014 Solver for the incompressible time-dependent Navier-Stokes equations. Mesh Explorer \u2014 Visualize and manipulate meshes. Mesh Optimizer \u2014 Optimize high-order meshes. Shaper \u2014 Resolve material interfaces by mesh refinement. Interpolation \u2014 Evaluation of high-order finite element functions in physical space. Overlapping Grids \u2014 Schwarz coupling of single- and multi-physics problems. Extrapolation \u2014 Finite element extrapolation solver. Distance \u2014 Finite element distance solver. Shifted Diffusion \u2014 High-Order shifted boundary method for non body-fitted meshes. Minimal Surface \u2014 Compute the minimal surface of a given mesh. 
Display Basis \u2014 Visualize finite element basis functions. LOR Transfer \u2014 Map functions between high-order and low-order-refined spaces. SPDE \u2014 Generate a Gaussian random field via the SPDE method; i.e., by solving a fractional PDE with random load. Contact \u2014 Mortar contact patch test for elasticity using the Tribol library. Multidomain \u2014 Multidomain and SubMesh demonstration Miniapp. DPG \u2014 Discontinuous Petrov-Galerkin (DPG) for various examples. In addition, the sources for several external benchmark/proxy-apps built on top of MFEM are available: Laghos \u2014 High-Order Lagrangian hydrodynamics miniapp. Remhos \u2014 High-Order advection remap miniapp. Mulard \u2014 Multigroup thermal radiation diffusion miniapp. A handful of \"toy\" miniapps of a less serious nature demonstrate the flexibility of MFEM (and provide a bit of fun): Automata \u2014 Model of a simple cellular automata. Life \u2014 Model of Conway's game of life. Lissajous \u2014 Spinning optical illusion. Mandel \u2014 Fractal visualization with AMR. Mondrian \u2014 Convert any image to an AMR mesh. Rubik \u2014 Interactive Rubik's Cube\u2122 puzzle. Snake \u2014 Model of the Rubik's Snake\u2122 puzzle.", "title": "  Explore additional examples and miniapps"}, {"location": "tutorial/further/#write-a-simple-simulation", "text": "Modify the miniapps and example codes to create a simple simulation of your own. You can edit the source code and rebuild the binary simply with make . For example, you can solve a steady-state heat conduction problem in 2D and 3D using the shaper miniapp (modified for the cable shape) to define the mesh and ex1 or ex1p to solve it (modified to include separate coefficients for air and cable). Please consult the MFEM code documentation and don't hesitate to ask if you have any implementation questions.", "title": "  Write a simple simulation"}, {"location": "tutorial/further/#install-mfem-glvis-on-your-own-machine", "text": "Download MFEM from mfem.org/download or clone it from GitHub and follow the building instructions here: mfem.org/building . You should be able to download and install the serial version in 10 minutes. The parallel version of MFEM requires installing hypre and METIS (see the building instructions ). Alternatively, if you already have Spack, you can build with spack install mfem glvis . With your own installation, you can explore additional topics not covered in this tutorial such as: Partial Assembly and the Finite Element Operator Decomposition . GPU Support on NVIDIA and AMD hardware. Integrations with PETSc , SUNDIALS , SuperLU , libCEED , PUMI , Ginkgo , HiOp , and more. Python support with the PyMFEM wrapper and Jupyter notebooks .", "title": "  Install MFEM + GLVis on your own machine"}, {"location": "tutorial/further/#visit-the-mfem-website", "text": "For more information about MFEM, visit the website, mfem.org , including the Features , Examples , Publications , and Finite Elements , pages. Review the Videos for recordings from MFEM seminars , workshops , and conference presentations: You may also be interested in visiting the websites of the related GLVis , CEED , and BLAST projects.", "title": "  Visit the MFEM website"}, {"location": "tutorial/further/#join-the-community", "text": "If MFEM looks exciting to you, please join the community on GitHub and help us make it better! 
\ud83d\ude80 We welcome contributions and feedback at all levels: bugfixes; code improvements; simplifications; new mesh, discretization, or solver capabilities; improved documentation; new examples and miniapps; HPC performance improvements; etc. See CONTRIBUTING.md for more details. You can contact the MFEM team by posting to the GitHub issue tracker or at mfem-dev@llnl.gov .", "title": "  Join the community"}, {"location": "tutorial/meshvis/", "text": "Meshing and Visualization 45 minutes intermediate Lesson Objectives Learn about external mesh generators that can be used with MFEM. Learn about MFEM's internal meshing tools. Learn about external visualization tools that can be used with MFEM. Note Please complete the Getting Started and Finite Element Basics pages before this lesson. Importing meshes from Gmsh and Cubit In this section we demonstrate the common steps necessary for generating high-quality meshes in Gmsh and Cubit and how to use them in finite element simulations with MFEM. Gmsh is an open-source, freely available mesh generation tool with built-in computer-aided design (CAD) functionality and a postprocessor. The input to Gmsh can be a simple text file that provides a description of the geometry of the finite element model. The geometry can be generated using the Gmsh graphical user interface (GUI), simple text editors such as Vi/Vim/Emacs, or using more sophisticated CAD tools such as SolidWorks or Autocad. CAD models in IGES or STEP formats can be imported by the CAD engine of Gmsh, meshed, and prepared as inputs to the MFEM examples. Here, however, we focus on simpler examples showing the process of generating meshes suitable for MFEM and not on the actual geometry. Many examples together with documentation on the input syntax can be found at the Gmsh website . Users familiar with Gmsh can skip the first steps and download already prepared geometries for meshing. If Gmsh is not installed on your local machine, please download it and follow the installation instructions . We will start with the definitions of a cube with edge length L=1 and two cylinders with a radius L/10 and heights equal to L. The following snippet defines these objects: SetFactory(\"OpenCASCADE\"); Mesh.Algorithm = 6; Mesh.CharacteristicLengthMin = 0.1; Mesh.CharacteristicLengthMax = 0.1; L=1.0; Box(1) = {0,0,0,L,L,L}; Rc=L/10; Cylinder(2) = {L/8, 0.0, L/4, 0, L, 0, Rc , 2*Pi}; Cylinder(3) = {4*L/8, 0.0, L/4, 0, L, 0, Rc , 2*Pi}; Here is a screenshot of the GUI of Gmsh with the generated objects: The first line in the Gmsh input file defines the geometric engine. Here it is assumed that Gmsh is compiled with CAD support. Such precompiled binaries for Windows, Mac, and Linux can be downloaded from the Gmsh website . The next three lines define the mesh algorithm, which will be used later for generating the mesh and the associated characteristic length scale. Finer or coarser meshes can be obtained by adjusting these numbers. The following line defines a parameter L which is utilized in the definition of the cube. A parameter R defines the radius of the base of the two cylinders. The final geometry, which will be used for simulations, is obtained by subtracting the two cylinders from the cube as: BooleanDifference(50) = { Volume{1}; Delete; }{ Volume{2,3}; Delete; }; Gmsh uses the obtained geometry for generating the mesh. However, without additional specifications, we cannot impose boundary conditions without any attributes assigned to the boundaries. 
Different attributes can be assigned to the volumetric part of the mesh for using different material coefficients within the domain. Here, however, we use only a single attribute, as the first example uses only a single diffusion coefficient. Physical Volume(1) = {50}; Physical Surface(1) = {1,6,8}; Mesh.MshFileVersion = 2.2; The first line from the above snippet defines physical volume 1 to coincide with the geometry volume 50, which is the final volume obtained by the Boolean operation. The second line defines physical surface 1 to include geometric surfaces {1,6,8}. Finally, the last line specifies the file format. Note that MFEM can only read ASCII Gmsh format version 2.2. The generated mesh is shown in the figures above. Careful inspection reveals that the cylindrical surface is not represented well by the linear elements. We can improve the representation by refining the mesh. We encourage you to play with the mesh and to generate finer discretizations for the simulations. You can download the Gmsh input file here and the resulting mesh file here . For users without access to the Gmsh GUI, a mesh can be generated in your local terminal with the following command: gmsh -3 cross_heat.geo To run simulations with the generated mesh, drag-and-drop the mesh file from your computer to the AWS browser window in the MFEM examples directory: To run Example 1 with the newly prepared mesh, be sure you are in the examples directory and then run the following command: mpirun -np 24 ./ex1p -m cross_heat.msh -no-vis The solution of the diffusion equation for the generated mesh is shown in the following two pictures. The figures are generated with ParaView, and the process of visualization is explained at the end of this tutorial session. If we want to enforce Dirichlet boundary conditions different than zero on some other surface, we must export it as a physical surface. For example, to enforce value one on the other cylindrical surface, add the following line to the cross_heat.geo file: Physical Surface(2) = {7}; The line should be inserted in any place after the definition of geometrical surface 7, e.g., after the boolean operation defining the final geometry. If we run ex1.cpp without modifications, a zero value will be assigned to the newly defined surface. Thus, in order to set it to one, modify section 10 in ex1p.cpp : // 10. Define the solution vector x as a parallel finite element grid // function corresponding to fespace. Initialize x with initial guess of // zero, which satisfies the boundary conditions. ParGridFunction x(&fespace); x = 0.0; { Array ess_bdr(pmesh.bdr_attributes.Max()); ess_bdr = 0; ess_bdr[1] = 1; ConstantCoefficient zero(0.0); Coefficient* coeff[1]; coeff[0]=&one; x.ProjectBdrCoefficient(coeff,ess_bdr); } In the above snippet, we project coefficient one on the degrees of freedom associated with physical surface 2 (the indexing starts at zero). Executing the modified code with the newly created mesh will result in the following solution: The results can be seen in the GLVis windows as well. However, the users will see only the defined physical surfaces (1,2) and the boundaries between the parallel partitions. Any 2D cuts will work as usual. MFEM can import meshes saved in Exodus II format generated with Cubit . However, this feature requires compilation of the library with HDF5, NetCDF, and Exodus, which is not available in the AWS tutorial image. MFEM's meshing tools MFEM provides many tools, routines, and examples for mesh manipulation. 
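Many basic operations are also available programmatically through the Mesh class itself. The following is a minimal sketch (not part of the tutorial miniapps; the file names and curvature order are placeholders) of loading, refining, curving, and saving a mesh:

#include \"mfem.hpp\"
#include <fstream>
using namespace mfem;

int main()
{
   Mesh mesh(\"../data/star.mesh\", 1, 1);   // any mesh format MFEM can read
   mesh.UniformRefinement();                 // refine all elements once
   mesh.SetCurvature(2);                     // represent the geometry with quadratic nodes
   std::ofstream ofs(\"refined.mesh\");
   ofs.precision(8);
   mesh.Print(ofs);                          // save in the native MFEM format
   return 0;
}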
The miniapp examples illustrate a large part of the MFEM functionality in the miniapps/meshing subdirectory. Below we provide more details about only two of these miniapps. However, users are encouraged to also explore the other meshing miniapps . Mesh Explorer The mesh explorer miniapp is a handy tool to examine, visualize and manipulate a given mesh. Users have to compile it in the miniapps/meshing subdirectory: cd ~/mfem/miniapps/meshing make mesh-explorer Once compiled, it can be executed in the same directory by typing in the terminal: ./mesh-explorer Before executing it, users should ensure that the GLVis window is open and connected to the AWS machine. Once started, many options will appear in the terminal window. An example screenshot is provided below. By pressing the corresponding keys, a number of operations can be performed on the input mesh files, including: Visualizing mesh materials with m , and individual mesh elements with e . Mesh refinement with r , scaling with s , randomization with j , and transformation with t . Manipulation of the mesh curvature with c . The ability to simulate parallel partitioning with p . Quantitative and visual reports of mesh quality with x , h and J . Saving the resulting mesh in MFEM or VTK format with S and V . For example, selecting v in the prompt and pressing enter will display the default mesh of a hex-meshed beam in the GLVis window. To operate on a different mesh, users should exit the miniapp with q and start it again with the following line: ./mesh-explorer -m new_mesh_file.msh Here new_mesh_file.msh is the mesh file selected by the user. The input mesh can be in any format supported by MFEM. In addition, the miniapp can save the loaded mesh in native MFEM and VTK formats. Shaper Shaper is a miniapp that performs multiple levels of adaptive mesh refinement to resolve the interfaces between different \"materials\" in the mesh, as specified by a given material function. It can be used as a simple initial mesh generator, for example in the case when the interface is too complex to describe without local refinement. Both conforming and non-conforming refinements are supported. To experiment with it, go to the miniapps/meshing subdirectory and type: cd ~/mfem/miniapps/meshing make shaper ./shaper The result of the execution with five levels of refinement and default settings can be seen in the following screenshot. Users can specify different material distributions by modifying the function int material(Vector &x, Vector &xmin, Vector &xmax) at the beginning of shaper.cpp . The current function returns an integer value of 1 if a point is located within a simple annulus/shell with a relative inner radius of 0.4 and an outer radius of 0.6, and 2 otherwise. The coordinates of a point within the mesh are mapped to values between minus one and one. Users are encouraged to modify the material distribution function and use different meshes as input. The refinement level is controlled in the terminal by pressing y for further refinement or n to complete the run. The resulting mesh is written to the file shaper.mesh . Once the mesh is written, users can use it as an input to other examples or miniapps. Note See also the related Mandel and Mondrian miniapps in the miniapps/toys subdirectory. Visualizing results in ParaView and VisIt To save the simulation results from the parallel version of Example 1 ( ex1p.cpp ) in ParaView format, add the following lines just before step 17 in the file.
{ ParaViewDataCollection *pd = NULL; pd = new ParaViewDataCollection(\"Example1P\", &pmesh); pd->SetPrefixPath(\"ParaView\"); pd->RegisterField(\"solution\", &x); pd->SetLevelsOfDetail(order); pd->SetDataFormat(VTKFormat::BINARY); pd->SetHighOrderOutput(true); pd->SetCycle(0); pd->SetTime(0.0); pd->Save(); delete pd; } The first line defines a ParaViewDataCollection for saving data in ParaView data format. The following two lines define the name of the data collection and the prefix path, which is set to ParaView. Thus, the data set will be written to the directory ParaView relative to the current execution path. The following line registers the ParGridFunction x in the data collection. The remaining lines set different parameters for the format and the data set, and finally, the set is saved and deleted. See the MFEM documentation for more detailed information about ParaView. Compile and execute the modified example. To download the results saved in ParaView format to your local machine, gather and compress all files into a single archive with the following command: tar cvfz paraview.tgz ParaView/ which will generate the file paraview.tgz in the current directory. Download the file to your local machine by dragging it from the Explorer window: Then go to the download location and extract the archive with tar vxfz paraview.tgz ParaView/ The above assumes a UNIX-type environment. Windows users can use the GUI or the WSL/WSL2 engines. ParaView can be freely downloaded both as source code and as precompiled binaries. The precompiled binaries are available for Linux, macOS, and Windows. Please follow the installation instructions for the corresponding operating system. To visualize the downloaded simulation data, run ParaView and open the file Example1P.pvd in the ParaView/Example1P directory, where the path is relative to the directory where the archive was downloaded. Next, click on the Apply button and select Solution in the drop-down menu in the second row of buttons. The geometry, together with the solution, can be rotated on the screen by holding and dragging the mouse. Replacing ParaViewDataCollection with VisItDataCollection allows you to write data in VisIt data format. VisIt can be freely downloaded and installed on Linux, macOS, and Windows and provides another alternative to ParaView. The steps for downloading and visualizing the simulation data are the same as those outlined above for ParaView. Questions? Ask for help in the tutorial Slack channel . Next Steps Depending on your interests, pick one of the following lessons: Tour of MFEM Examples Solvers and Scalability Further Steps Back to the MFEM tutorial page MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Meshvis"}, {"location": "tutorial/meshvis/#meshing-and-visualization", "text": "45 minutes intermediate", "title": "  Meshing and Visualization"}, {"location": "tutorial/meshvis/#importing-meshes-from-gmsh-and-cubit", "text": "In this section we demonstrate the common steps necessary for generating high-quality meshes in Gmsh and Cubit and how to use them in finite element simulations with MFEM. Gmsh is an open-source, freely available mesh generation tool with built-in computer-aided design (CAD) functionality and a postprocessor. The input to Gmsh can be a simple text file that provides a description of the geometry of the finite element model.
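Once such a text-file geometry has been meshed, the resulting .msh file can be read directly by MFEM's Mesh constructor, and the physical volume and surface tags become the mesh's attributes and bdr_attributes arrays that the examples use for material coefficients and boundary conditions. As a quick sanity check, a small program along the following lines prints what MFEM actually sees (this is our own sketch, not part of the tutorial; the file name cross_heat.msh is assumed):
#include \"mfem.hpp\"
#include <iostream>
using namespace mfem;

int main()
{
   // Read the ASCII Gmsh (version 2.2) file; the extra arguments ask MFEM
   // to generate edges and finalize the mesh.
   Mesh mesh(\"cross_heat.msh\", 1, 1);
   std::cout << \"dimension: \" << mesh.Dimension() << std::endl;
   std::cout << \"volume attributes:\";
   for (int i = 0; i < mesh.attributes.Size(); i++)
   {
      std::cout << \" \" << mesh.attributes[i];
   }
   std::cout << std::endl << \"boundary attributes:\";
   for (int i = 0; i < mesh.bdr_attributes.Size(); i++)
   {
      std::cout << \" \" << mesh.bdr_attributes[i];
   }
   std::cout << std::endl;
   return 0;
}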
The geometry can be generated using the Gmsh graphical user interface (GUI), simple text editors such as Vi/Vim/Emacs, or more sophisticated CAD tools such as SolidWorks or AutoCAD. CAD models in IGES or STEP formats can be imported by the CAD engine of Gmsh, meshed, and prepared as inputs to the MFEM examples. Here, however, we focus on simpler examples showing the process of generating meshes suitable for MFEM and not on the actual geometry. Many examples, together with documentation on the input syntax, can be found at the Gmsh website . Users familiar with Gmsh can skip the first steps and download already prepared geometries for meshing. If Gmsh is not installed on your local machine, please download it and follow the installation instructions . We will start with the definitions of a cube with edge length L=1 and two cylinders with a radius of L/10 and heights equal to L. The following snippet defines these objects: SetFactory(\"OpenCASCADE\"); Mesh.Algorithm = 6; Mesh.CharacteristicLengthMin = 0.1; Mesh.CharacteristicLengthMax = 0.1; L=1.0; Box(1) = {0,0,0,L,L,L}; Rc=L/10; Cylinder(2) = {L/8, 0.0, L/4, 0, L, 0, Rc , 2*Pi}; Cylinder(3) = {4*L/8, 0.0, L/4, 0, L, 0, Rc , 2*Pi}; Here is a screenshot of the Gmsh GUI with the generated objects: The first line in the Gmsh input file defines the geometric engine. Here it is assumed that Gmsh is compiled with CAD support. Such precompiled binaries for Windows, macOS, and Linux can be downloaded from the Gmsh website . The next three lines define the mesh algorithm, which will be used later for generating the mesh, and the associated characteristic length scales. Finer or coarser meshes can be obtained by adjusting these numbers. The following line defines a parameter L , which is used in the definition of the cube. A parameter Rc defines the radius of the base of the two cylinders. The final geometry, which will be used for simulations, is obtained by subtracting the two cylinders from the cube as: BooleanDifference(50) = { Volume{1}; Delete; }{ Volume{2,3}; Delete; }; Gmsh uses the obtained geometry for generating the mesh. However, without additional specifications, no attributes are assigned to the boundaries, and we cannot impose boundary conditions. Different attributes can be assigned to the volumetric part of the mesh for using different material coefficients within the domain. Here, however, we use only a single attribute, as the first example uses only a single diffusion coefficient. Physical Volume(1) = {50}; Physical Surface(1) = {1,6,8}; Mesh.MshFileVersion = 2.2; The first line from the above snippet defines physical volume 1 to coincide with the geometry volume 50, which is the final volume obtained by the Boolean operation. The second line defines physical surface 1 to include geometric surfaces {1,6,8}. Finally, the last line specifies the file format. Note that MFEM can only read ASCII Gmsh format version 2.2. The generated mesh is shown in the figures above. Careful inspection reveals that the cylindrical surface is not represented well by the linear elements. We can improve the representation by refining the mesh. We encourage you to play with the mesh and to generate finer discretizations for the simulations. You can download the Gmsh input file here and the resulting mesh file here .
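If regenerating the mesh in Gmsh is inconvenient, the refinement can also be performed on the MFEM side. The sketch below is our own illustration (it assumes the cross_heat.msh file from above is in the working directory): it loads the Gmsh mesh, refines it uniformly once, and writes the result in MFEM's native format so it can be passed to the examples with -m .
#include \"mfem.hpp\"
#include <fstream>
#include <iostream>
using namespace mfem;

int main()
{
   Mesh mesh(\"cross_heat.msh\", 1, 1);
   std::cout << \"elements before refinement: \" << mesh.GetNE() << std::endl;
   mesh.UniformRefinement();   // each tetrahedron is split into 8 smaller ones
   std::cout << \"elements after refinement:  \" << mesh.GetNE() << std::endl;
   std::ofstream out(\"cross_heat_refined.mesh\");
   out.precision(8);
   mesh.Print(out);
   return 0;
}
Keep in mind that this refines the existing faceted geometry and does not add new points on the exact cylindrical surfaces, so for the best boundary representation the mesh should be regenerated in Gmsh with a smaller characteristic length.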
For users without access to the Gmsh GUI, a mesh can be generated in your local terminal with the following command: gmsh -3 cross_heat.geo To run simulations with the generated mesh, drag-and-drop the mesh file from your computer to the AWS browser window in the MFEM examples directory: To run Example 1 with the newly prepared mesh, be sure you are in the examples directory and then run the following command: mpirun -np 24 ./ex1p -m cross_heat.msh -no-vis The solution of the diffusion equation for the generated mesh is shown in the following two pictures. The figures are generated with ParaView, and the process of visualization is explained at the end of this tutorial session. If we want to enforce Dirichlet boundary conditions different than zero on some other surface, we must export it as a physical surface. For example, to enforce value one on the other cylindrical surface, add the following line to the cross_heat.geo file: Physical Surface(2) = {7}; The line should be inserted in any place after the definition of geometrical surface 7, e.g., after the boolean operation defining the final geometry. If we run ex1.cpp without modifications, a zero value will be assigned to the newly defined surface. Thus, in order to set it to one, modify section 10 in ex1p.cpp : // 10. Define the solution vector x as a parallel finite element grid // function corresponding to fespace. Initialize x with initial guess of // zero, which satisfies the boundary conditions. ParGridFunction x(&fespace); x = 0.0; { Array ess_bdr(pmesh.bdr_attributes.Max()); ess_bdr = 0; ess_bdr[1] = 1; ConstantCoefficient zero(0.0); Coefficient* coeff[1]; coeff[0]=&one; x.ProjectBdrCoefficient(coeff,ess_bdr); } In the above snippet, we project coefficient one on the degrees of freedom associated with physical surface 2 (the indexing starts at zero). Executing the modified code with the newly created mesh will result in the following solution: The results can be seen in the GLVis windows as well. However, the users will see only the defined physical surfaces (1,2) and the boundaries between the parallel partitions. Any 2D cuts will work as usual. MFEM can import meshes saved in Exodus II format generated with Cubit . However, this feature requires compilation of the library with HDF5, NetCDF, and Exodus, which is not available in the AWS tutorial image.", "title": "  Importing meshes from Gmsh and Cubit"}, {"location": "tutorial/meshvis/#mfems-meshing-tools", "text": "MFEM provides many tools, routines, and examples for mesh manipulation. The miniapp examples illustrate a large part of the MFEM functionality in the miniapps/meshing subdirectory. Below we provide more details about only two of these miniapps. However, users are encouraged to also explore the other meshing miniapps .", "title": "  MFEM's meshing tools"}, {"location": "tutorial/meshvis/#mesh-explorer", "text": "The mesh explorer miniapp is a handy tool to examine, visualize and manipulate a given mesh. Users have to compile it in the miniapps/meshing subdirectory: cd ~/mfem/miniapps/meshing make mesh-explorer Once compiled, it can be executed in the same directory by typing in the terminal ./mesh-explorer Before executing it, users should ensure that the GLVis window is open and connected to the AWS machine. Once started, many options will appear in the terminal window. 
An example screenshot of provided below By pressing the corresponding keys, a number of operations can be performed on the input mesh files, including: Visualizing of mesh materials with m , and individual mesh elements with e . Mesh refinement with r , scaling with s , randomization with j , and transformation with t . Manipulation of the mesh curvature with c . The ability to simulate parallel partitioning with p . Quantitative and visual reports of mesh quality with x , h and J . Saving the resulting mesh with in MFEM or VTK format with S and V . For example, selecting v in the prompt and pressing enter will display the default mesh of a hex-meshed beam in the GLVis window. To operate on a different mesh, users should exit the miniapp with q and start it again with the following line ./mesh-explorer -m new_mesh_file.msh Here new_mesh_file.msh is the mesh file selected by the user. The input mesh can be in any format supported by MFEM. In addition, the miniapp can save the loaded mesh in native MFEM and VTK formats.", "title": "  Mesh Explorer"}, {"location": "tutorial/meshvis/#shaper", "text": "Shaper is a miniapp that performs multiple levels of adaptive mesh refinement to resolve the interfaces between different \"materials\" in the mesh, as specified by a given material function. It can be used as a simple initial mesh generator, for example in the case when the interface is too complex to describe without local refinement. Both conforming and non-conforming refinements are supported. To experiment with it, go to the miniapps/meshing subdirectory and type: cd ~/mfem/miniapps/meshing make shaper ./shaper The result of the execution with five levels of refinement and default setting can be seen in the following screenshot. Users can specify different material distributions by modifying the function int material(Vector &x, Vector &xmin, Vector &xmax) in the begging of shaper.cpp . The current function returns integer values of 1 if a point is located within a simple annulus/shell with a relative inner radius of 0.4 and outer radius of 0.6 and 2 otherwise. The coordinates of a point within the mesh are mapped to values between minus one and one. Users are encouraged to modify the material distribution function and use different meshes as input. The refinement level is controlled in the terminal by pressing y for further refinement or n for completing the run. The resulting mesh is written in a file shaper.mesh . Once the mesh is written, users can use it as an input to other examples or miniapps.", "title": "  Shaper"}, {"location": "tutorial/meshvis/#visualizing-results-in-paraview-and-visit", "text": "To save the simulation results from the parallel version of Example 1 ( ex1p.cpp ) in ParaView format, add the following lines just before step 17 in the file. { ParaViewDataCollection *pd = NULL; pd = new ParaViewDataCollection(\"Example1P\", &pmesh); pd->SetPrefixPath(\"ParaView\"); pd->RegisterField(\"solution\", &x); pd->SetLevelsOfDetail(order); pd->SetDataFormat(VTKFormat::BINARY); pd->SetHighOrderOutput(true); pd->SetCycle(0); pd->SetTime(0.0); pd->Save(); delete pd; } The first line defines a ParaViewDataCollection for saving data in ParaView data format. The following two lines define the name of the data collection and the prefix path, which is set to ParaView. Thus, the data set will be written in the directory ParaView relative to the current execution path. The following line registers the ParGridFunction x in the data collection. 
The remaining lines set different parameters for the format and the data set, and finally, the set is saved and deleted. See MFEM documentation for more detailed information about ParaView. Compile and execute the modified example. To download the results saved in ParaView format to your local machine, compress and gather all files in a single archive with the following command: tar cvfz paraview.tgz ParaView/ which will generate the file paraview.tgz in the current directory. Download the file to your local machine by dragging it from the Explorer window: Then go to the download location and extract the archive with tar vxfz paraview.tgz ParaView/ The above assumes a UNIX type of environment. Windows users could use the GUI or WSL/WSL2 engines. ParaView can be freely downloaded both as a source code or precompiled binaries. The precompiled binaries are available for Linux, macOS, and Windows. Please follow the instructions for the corresponding operating system for installation instructions. To visualize the downloaded simulation data, run ParaView and open the file Example1P.pvd in the ParaView/Example1P directory, where the path is relative to the directory where the archive was downloaded. Next, click on the Apply button and select Solution in the drop-down menu in the second row of buttons. The geometry, together with the solution, can be rotated on the screen by holding and dragging the mouse. Replacing ParaviewDataCollection with VisItDataCollection allows you to write data in VisIt data format. VisIt can be freely downloaded and installed on Linux, macOS, and Windows and provides another alternative to ParaView. The steps for downloading and the simulation data are the same as the steps outlined above for ParaView.", "title": "  Visualizing results in ParaView and VisIt"}, {"location": "tutorial/solvers/", "text": "Solvers and Scalability 45 minutes intermediate Lesson Objectives Learn about MFEM's parallel scalability. Learn about MFEM's support for efficient solvers and preconditioners. Note Please complete the Getting Started and Finite Element Basics pages before this lesson. MFEM is designed to be highly scalable and efficient on a wide variety of platforms: from laptops to GPU-accelerated supercomputers . The solvers described in this lesson play a critical role in this parallel scalability. Scalable algebraic multigrid preconditioners from hypre MFEM comes with a large number of example codes that demonstrate different physical applications, finite element discretizations, and linear solvers: Example 1 solves a Poisson problem, Example 2 solves a linear elasticity problem, Example 3 solves a definite Maxwell (electromagnetics) problem, and Example 4 solves grad-div diffusion problem. The parallel versions of these examples ( ex1p , ex2p , ex3p , and ex4p ) each use suitable algebraic multigrid (AMG) preconditioners from the hypre solvers library. We describe sample runs with each of these examples in more details below. Example 1: Poisson problem and AMG First, make sure you are in the examples subdirectory: cd ~/mfem/examples Build the parallel version of Example 1: make ex1p Run the parallel version of Example 1, solving a Poisson problem: ./ex1p After forming the linear system, MFEM uses hypre to construct and apply an AMG preconditioner. Details of the AMG preconditioner are provided in the example output under the headers BoomerAMG SETUP PARAMETERS and BoomerAMG SOLVER PARAMETERS . 
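For reference, the solver portion of ex1p.cpp boils down to a few lines like the following. This is a condensed sketch rather than a verbatim copy; A , B , and X denote the parallel operator and vectors produced by FormLinearSystem in the example.
// Condensed sketch of the AMG-preconditioned CG solve in ex1p.cpp.
HypreBoomerAMG prec;            // BoomerAMG preconditioner from hypre
prec.SetPrintLevel(1);          // prints the BoomerAMG setup/solver parameters
CGSolver cg(MPI_COMM_WORLD);    // conjugate gradient Krylov solver
cg.SetRelTol(1e-12);
cg.SetMaxIter(2000);
cg.SetPrintLevel(1);            // prints the residual norm at each iteration
cg.SetPreconditioner(prec);
cg.SetOperator(*A);
cg.Mult(B, X);                  // solve A X = B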
Click here to view the terminal output A key feature of AMG methods is their scalability: with default options, convergence is achieved in only 18 conjugate gradient iterations. Let's see what happens if we increase the mesh refinement. Edit ex1p.cpp changing line 153 as follows: @@ -150,7 +150,7 @@ int main(int argc, char *argv[]) ParMesh pmesh(MPI_COMM_WORLD, mesh); mesh.Clear(); { - int par_ref_levels = 2; + int par_ref_levels = 3; for (int l = 0; l < par_ref_levels; l++) { pmesh.UniformRefinement(); This adds one additional level of refinement, making the problem roughly 4 times as large in 2D, or 8 times as large in 3D. Rebuild the example ( make ex1p ) and re-run it: ./ex1p Although the number of unknowns for this problem has increased by roughly 4x, the iteration count remains at 18 due to the scalability of the AMG preconditioner. Let's now try a 3D problem. For that, we just need to choose a 3D mesh using the -m or --mesh command line argument. Because these problems are more computationally expensive, let's first reduce the refinement level, setting int par_ref_levels = 1; in the ex1p.cpp source code. Rebuild the example ( make ex1p ) and re-run it using the three-dimensional Fichera mesh: ./ex1p -m ../data/fichera.mesh . Convergence is attained in only 16 iterations. Finally, let's take a look at the parallel scalability of the solvers: Increase the refinement level: int par_ref_levels = 2; Recompile: make ex1p Now run the 3D example on 8 cores: mpirun -np 8 ./ex1p -m ../data/fichera.mesh This is an example of a weak scaling test : the problem size and the number of processors are both increased by a factor of 8. Because the PCG iteration counts remain roughly constant, the total time to solution should remain roughly fixed (minus some overhead and communication cost), even though we are solving a problem that is 8 times larger. Example 2: Linear Elasticity This example demonstrates solving a linear elasticity cantilever beam problem with different materials. This example is designed to work with any of the \"beam\" meshes provided by MFEM. Run ls ../data | grep beam to list the available 2D and 3D meshes: beam-hex-nurbs.mesh , beam-hex.mesh , beam-hex.vtk , beam-quad-amr.mesh , beam-quad-nurbs.mesh , beam-quad.mesh , beam-quad.vtk , beam-tet.mesh , beam-tet.vtk , beam-tri.mesh , beam-tri.vtk , beam-wedge.mesh , and beam-wedge.vtk . The elements and boundaries of these meshes are assigned attributes/materials suitable for the cantilever problem: +----------+----------+ boundary --->| material | material |<--- boundary attribute 1 | 1 | 2 | attribute 2 (fixed) +----------+----------+ (pull down) Build the example with make ex2p . Try running ./ex2p in the terminal to run a 2D elasticity problem. As in Example 1, the linear system is solved using AMG. For this example, two types of AMG solvers can be used: A special version of AMG designed specifically for elasticity ( see this paper ). AMG for systems. To enable the special elasticity AMG, add the flag -elast to the command line, otherwise, AMG for systems will be used. For example: ./ex2p -elast . The polynomial degree (order) can be changed with the --order command line argument ( -o for short). For example: ./ex2p -o 2 . By default, low-order $(p=1)$ elements are used. Warning Using higher-order elements can quickly become computationally expensive. See the section below on Low-order-refined methods for a more efficient approach. 
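Returning to the choice between the two AMG variants above: in the source code the switch is a single call on the HypreBoomerAMG object. The fragment below sketches the logic in the spirit of ex2p.cpp (the names amg_elast , fespace , dim , and the assembled HypreParMatrix A are assumed from that example).
HypreBoomerAMG *amg = new HypreBoomerAMG(A);
if (amg_elast)                           // the -elast command line option
{
   amg->SetElasticityOptions(&fespace);  // AMG tailored to elasticity
}
else
{
   amg->SetSystemsOptions(dim);          // generic AMG for systems of PDEs
}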
Additionally, static condensation can be used to eliminate interior high-order degrees of freedom and obtain a smaller system. For --order 1 , this has no effect. For higher-order problems, static condensation can improve efficiency. In this example, as before, the mesh refinement level can be controlled in the source code through par_ref_levels . Note Remember to recompile the example after editing the source code ( make ex2p ). Running with more than one MPI rank will partition the mesh and run the problem in parallel. Here is a sample 3D run: mpirun -np 8 ./ex2p -m ../data/beam-hex.mesh Try experimenting with different discretization, solver, and parallelization options. Examples 3 and 4: the de Rham Complex The next two examples demonstrate the use of vector finite element spaces . Example 3 solves an electromagnetics problem using $H(\\mathrm{curl})$ finite elements. Example 4 solves a grad-div problem using $H(\\mathrm{div})$ finite elements. Standard multigrid methods don't always work well for these problems, so we need specialized solvers! (See here for a paper on this topic.) For $H(\\mathrm{curl})$ problems, we use the AMS solver from hypre. For $H(\\mathrm{div})$ problems, we either use the ADS solver from hypre or a special hybridization solver . A recent saddle-point $H(\\mathrm{div})$ solver is also available in the miniapps/hdiv-linear-solver directory . See this paper for more details. Try experimenting with different options to get a feel for the performance of the discretizations and solvers: Change the mesh (2D or 3D) using the --mesh ( -m ) command line argument. For example: mpirun -np 16 ex3p -m ../data/beam-hex.mesh . Change the polynomial degree using the --order ( -o ) command line argument. For example: mpirun -np 32 ex4p -m ../data/square-disc-nurbs.mesh -o 3 . Run problems in parallel using mpirun . For ex4p , enable hybridization using the -hb flag. For example: mpirun -np 48 ex4p -m ../data/star-surf.mesh -o 3 -hb . Note Remember to build the examples first: make ex3 ex4 ex3p ex4p MFEM's native Multigrid solver The previous examples ( ex1p , ex2p , ex3p , and ex4p ) all used algebraic multigrid methods. MFEM also supports geometric ($h$- and $p$-multigrid) methods. These solvers are illustrated in Example 26 (and its parallel variant); see the ex26.cpp and ex26p.cpp source files. Mesh refinement can be set using the --geometric-refinements ( -gr ) command line argument. The finite element order can be controlled using the --order-refinements ( -or ) command line argument. Warning Each additional order refinement increases the order by a factor of 2. This quickly becomes computationally expensive, so be careful when increasing the order refinements. This example runs matrix-free using MFEM's partial assembly algorithms . Matrix-free methods are much more efficient for high-order problems and also work better on GPU architectures. Try comparing the performance of ex1p and ex26p for higher-order problems. For example, compare the run time of the following two runs: mpirun -np 32 ./ex26p -m ../data/fichera.mesh -or 2 mpirun -np 32 ./ex1p -m ../data/fichera.mesh -o 1 Both examples solve a degree-4 Poisson problem with 1,884,545 degrees of freedom, but one of them is significantly faster. Explore how the number of CG iterations changes as -or and -gr are increased. (For large problems, it may be worth running ex26p in parallel with mpirun .) 
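The matrix-free path mentioned above is enabled by a single assembly-level switch on the bilinear form, after which the solver works with operator actions instead of an assembled sparse matrix. A minimal fragment, assuming a is the ParBilinearForm and one is the diffusion coefficient as in the examples:
a.SetAssemblyLevel(AssemblyLevel::PARTIAL);           // partial assembly (matrix-free)
a.AddDomainIntegrator(new DiffusionIntegrator(one));
a.Assemble();                                         // no global sparse matrix is formed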
Low-order-refined methods Examples 1, 2, 3, and 4 used algebraic methods applied to the discretization matrix for each of the problems. Example 26 showed how to use geometric multigrid together with matrix-free methods. Low-order-refined (LOR) is an alternative matrix-free methodology for solving these problems. The LOR solvers miniapp provides matrix-free solvers for the same problems solved in Examples 1, 3, and 4. Go to the LOR solvers miniapp directory: cd ~/mfem/miniapps/solvers Run make plor_solvers to build the parallel LOR solvers miniapp. The --fe-type (or -fe ) command line argument can be used to choose the problem type. -fe h solves an $H^1$ problem (Poisson, equivalent to ex1 ). -fe n solves a Nedelec problem (Maxwell in $H(\\mathrm{curl})$, equivalent to ex3 ). -fe r solves a Raviart-Thomas problem (grad-div in $H(\\mathrm{div})$, equivalent to ex4 ). As usual, the --mesh ( -m ) argument can be used to choose the mesh file. (Keep in mind that MFEM's meshes in the data directory are now found in ../../data relative to the miniapp directory.) The number of mesh refinements in serial and parallel can be controlled with the --refine-serial and --refine-parallel ( -rs and -rp ) command line arguments The polynomial degree can be controlled with the --order ( -o ) argument. Compare the performance of high-order problems with plor_solvers to that of Examples 1, 3, and 4. Here are some sample runs to compare: // 2D, 5th order, 256,800 DOFs mpirun -np 8 ./plor_solvers -fe n -m ../../data/star.mesh -rs 2 -rp 2 -o 5 -no-vis mpirun -np 8 ../../examples/ex3p -m ../../data/star.mesh -o 5 // 3D, 2nd order, 2,378,016 DOFs mpirun -np 24 ./plor_solvers -fe n -m ../../data/fichera.mesh -rs 2 -rp 2 -o 3 -no-vis mpirun -np 24 ../../examples/ex3p -m ../../data/fichera.mesh -o 3 For more details on how LOR solvers work in MFEM, see the High-Order Matrix-Free Solvers talk ( PDF , video ) from the 2021 MFEM community workshop . Additional solver integrations In addition to the hypre AMG solvers and MFEM's built-in solvers illustrated above, MFEM also integrates with a number of third-party solver libraries, including: PETSc \u2014 see the ~/mfem/examples/petsc directory SuperLU \u2014 see the ~/mfem/examples/superlu directory STRUMPACK \u2014 see ~/mfem/examples/ex11p.cpp Ginkgo \u2014 see the ~/mfem/examples/ginkgo directory AmgX \u2014 see the ~/mfem/examples/amgx directory Most third-party libraries are not pre-installed in the AWS image, but you can still peruse the example source code to see the capabilities of the various integrations. You can check the containers repository to see which third-party libraries are available for the image you chose. As of December 2023, we pre-install PETSc and SuperLU for the CPU images and AmgX for the CUDA images. Note If you install MFEM locally , you can enable these third-party solver library integrations with the MFEM_USE_* configuration variables, e.g., by specifying MFEM_USE_PETSC=YES . Questions? Ask for help in the tutorial Slack channel . 
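The key ingredient behind these solvers is a low-order mesh whose vertices are placed at the Gauss-Lobatto points of the high-order space, so that the order-1 problem on the refined mesh is spectrally equivalent to the high-order one. A rough sketch of that construction using library calls (our own illustration, not the miniapp's actual code; mesh and order are assumed):
// Build the low-order-refined (LOR) mesh and the associated spaces.
Mesh mesh_lor = Mesh::MakeRefined(mesh, order, BasisType::GaussLobatto);
H1_FECollection fec_ho(order, mesh.Dimension());
H1_FECollection fec_lor(1, mesh_lor.Dimension());
FiniteElementSpace fes_ho(&mesh, &fec_ho);        // high-order space
FiniteElementSpace fes_lor(&mesh_lor, &fec_lor);  // order-1 space on the LOR mesh
A preconditioner assembled from the order-1 matrix on fes_lor can then be applied to the matrix-free high-order operator, which is essentially what plor_solvers automates.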
Next Steps Depending on your interests pick one of the following lessons: Tour of MFEM Examples Meshing and Visualization Further Steps Back to the MFEM tutorial page MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Solvers"}, {"location": "tutorial/solvers/#solvers-and-scalability", "text": "45 minutes intermediate", "title": "  Solvers and Scalability"}, {"location": "tutorial/solvers/#scalable-algebraic-multigrid-preconditioners-from-hypre", "text": "MFEM comes with a large number of example codes that demonstrate different physical applications, finite element discretizations, and linear solvers: Example 1 solves a Poisson problem, Example 2 solves a linear elasticity problem, Example 3 solves a definite Maxwell (electromagnetics) problem, and Example 4 solves grad-div diffusion problem. The parallel versions of these examples ( ex1p , ex2p , ex3p , and ex4p ) each use suitable algebraic multigrid (AMG) preconditioners from the hypre solvers library. We describe sample runs with each of these examples in more details below.", "title": "  Scalable algebraic multigrid preconditioners from hypre"}, {"location": "tutorial/solvers/#example-1-poisson-problem-and-amg", "text": "First, make sure you are in the examples subdirectory: cd ~/mfem/examples Build the parallel version of Example 1: make ex1p Run the parallel version of Example 1, solving a Poisson problem: ./ex1p After forming the linear system, MFEM uses hypre to construct and apply an AMG preconditioner. Details of the AMG preconditioner are provided in the example output under the headers BoomerAMG SETUP PARAMETERS and BoomerAMG SOLVER PARAMETERS . Click here to view the terminal output A key feature of AMG methods is their scalability: with default options, convergence is achieved in only 18 conjugate gradient iterations. Let's see what happens if we increase the mesh refinement. Edit ex1p.cpp changing line 153 as follows: @@ -150,7 +150,7 @@ int main(int argc, char *argv[]) ParMesh pmesh(MPI_COMM_WORLD, mesh); mesh.Clear(); { - int par_ref_levels = 2; + int par_ref_levels = 3; for (int l = 0; l < par_ref_levels; l++) { pmesh.UniformRefinement(); This adds one additional level of refinement, making the problem roughly 4 times as large in 2D, or 8 times as large in 3D. Rebuild the example ( make ex1p ) and re-run it: ./ex1p Although the number of unknowns for this problem has increased by roughly 4x, the iteration count remains at 18 due to the scalability of the AMG preconditioner. Let's now try a 3D problem. For that, we just need to choose a 3D mesh using the -m or --mesh command line argument. Because these problems are more computationally expensive, let's first reduce the refinement level, setting int par_ref_levels = 1; in the ex1p.cpp source code. Rebuild the example ( make ex1p ) and re-run it using the three-dimensional Fichera mesh: ./ex1p -m ../data/fichera.mesh . Convergence is attained in only 16 iterations. Finally, let's take a look at the parallel scalability of the solvers: Increase the refinement level: int par_ref_levels = 2; Recompile: make ex1p Now run the 3D example on 8 cores: mpirun -np 8 ./ex1p -m ../data/fichera.mesh This is an example of a weak scaling test : the problem size and the number of processors are both increased by a factor of 8. 
Because the PCG iteration counts remain roughly constant, the total time to solution should remain roughly fixed (minus some overhead and communication cost), even though we are solving a problem that is 8 times larger.", "title": "  Example 1: Poisson problem and AMG"}, {"location": "tutorial/solvers/#example-2-linear-elasticity", "text": "This example demonstrates solving a linear elasticity cantilever beam problem with different materials. This example is designed to work with any of the \"beam\" meshes provided by MFEM. Run ls ../data | grep beam to list the available 2D and 3D meshes: beam-hex-nurbs.mesh , beam-hex.mesh , beam-hex.vtk , beam-quad-amr.mesh , beam-quad-nurbs.mesh , beam-quad.mesh , beam-quad.vtk , beam-tet.mesh , beam-tet.vtk , beam-tri.mesh , beam-tri.vtk , beam-wedge.mesh , and beam-wedge.vtk . The elements and boundaries of these meshes are assigned attributes/materials suitable for the cantilever problem: +----------+----------+ boundary --->| material | material |<--- boundary attribute 1 | 1 | 2 | attribute 2 (fixed) +----------+----------+ (pull down) Build the example with make ex2p . Try running ./ex2p in the terminal to run a 2D elasticity problem. As in Example 1, the linear system is solved using AMG. For this example, two types of AMG solvers can be used: A special version of AMG designed specifically for elasticity ( see this paper ). AMG for systems. To enable the special elasticity AMG, add the flag -elast to the command line, otherwise, AMG for systems will be used. For example: ./ex2p -elast . The polynomial degree (order) can be changed with the --order command line argument ( -o for short). For example: ./ex2p -o 2 . By default, low-order $(p=1)$ elements are used.", "title": "  Example 2: Linear Elasticity"}, {"location": "tutorial/solvers/#examples-3-and-4-the-de-rham-complex", "text": "The next two examples demonstrate the use of vector finite element spaces . Example 3 solves an electromagnetics problem using $H(\\mathrm{curl})$ finite elements. Example 4 solves a grad-div problem using $H(\\mathrm{div})$ finite elements. Standard multigrid methods don't always work well for these problems, so we need specialized solvers! (See here for a paper on this topic.) For $H(\\mathrm{curl})$ problems, we use the AMS solver from hypre. For $H(\\mathrm{div})$ problems, we either use the ADS solver from hypre or a special hybridization solver . A recent saddle-point $H(\\mathrm{div})$ solver is also available in the miniapps/hdiv-linear-solver directory . See this paper for more details. Try experimenting with different options to get a feel for the performance of the discretizations and solvers: Change the mesh (2D or 3D) using the --mesh ( -m ) command line argument. For example: mpirun -np 16 ex3p -m ../data/beam-hex.mesh . Change the polynomial degree using the --order ( -o ) command line argument. For example: mpirun -np 32 ex4p -m ../data/square-disc-nurbs.mesh -o 3 . Run problems in parallel using mpirun . For ex4p , enable hybridization using the -hb flag. For example: mpirun -np 48 ex4p -m ../data/star-surf.mesh -o 3 -hb .", "title": "  Examples 3 and 4: the de Rham Complex"}, {"location": "tutorial/solvers/#mfems-native-multigrid-solver", "text": "The previous examples ( ex1p , ex2p , ex3p , and ex4p ) all used algebraic multigrid methods. MFEM also supports geometric ($h$- and $p$-multigrid) methods. These solvers are illustrated in Example 26 (and its parallel variant); see the ex26.cpp and ex26p.cpp source files. 
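At the heart of ex26 is a hierarchy of meshes and finite element spaces from which the geometric multigrid solver is built, level by level. The fragment below is a condensed sketch that follows ex26.cpp in spirit, with details omitted; mesh , fespace , fec , dim , and the two refinement counts are assumed from that example, and the coarse order is taken to be 1.
Array<FiniteElementCollection*> collections;
collections.Append(fec);                                  // coarse-level H1 collection
FiniteElementSpaceHierarchy hierarchy(mesh, fespace, false, false);
for (int level = 0; level < geometric_refinements; ++level)
{
   hierarchy.AddUniformlyRefinedLevel();                  // h-refined levels
}
for (int level = 0; level < order_refinements; ++level)
{
   int p = 1 << (level + 1);                              // the order doubles on each level
   collections.Append(new H1_FECollection(p, dim));
   hierarchy.AddOrderRefinedLevel(collections.Last());    // p-refined levels
}
// A GeometricMultigrid-derived solver (DiffusionMultigrid in ex26.cpp) is
// then built on this hierarchy.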
Mesh refinement can be set using the --geometric-refinements ( -gr ) command line argument. The finite element order can be controlled using the --order-refinements ( -or ) command line argument.", "title": "  MFEM's native Multigrid solver"}, {"location": "tutorial/solvers/#low-order-refined-methods", "text": "Examples 1, 2, 3, and 4 used algebraic methods applied to the discretization matrix for each of the problems. Example 26 showed how to use geometric multigrid together with matrix-free methods. Low-order-refined (LOR) is an alternative matrix-free methodology for solving these problems. The LOR solvers miniapp provides matrix-free solvers for the same problems solved in Examples 1, 3, and 4. Go to the LOR solvers miniapp directory: cd ~/mfem/miniapps/solvers Run make plor_solvers to build the parallel LOR solvers miniapp. The --fe-type (or -fe ) command line argument can be used to choose the problem type. -fe h solves an $H^1$ problem (Poisson, equivalent to ex1 ). -fe n solves a Nedelec problem (Maxwell in $H(\\mathrm{curl})$, equivalent to ex3 ). -fe r solves a Raviart-Thomas problem (grad-div in $H(\\mathrm{div})$, equivalent to ex4 ). As usual, the --mesh ( -m ) argument can be used to choose the mesh file. (Keep in mind that MFEM's meshes in the data directory are now found in ../../data relative to the miniapp directory.) The number of mesh refinements in serial and parallel can be controlled with the --refine-serial and --refine-parallel ( -rs and -rp ) command line arguments The polynomial degree can be controlled with the --order ( -o ) argument. Compare the performance of high-order problems with plor_solvers to that of Examples 1, 3, and 4. Here are some sample runs to compare: // 2D, 5th order, 256,800 DOFs mpirun -np 8 ./plor_solvers -fe n -m ../../data/star.mesh -rs 2 -rp 2 -o 5 -no-vis mpirun -np 8 ../../examples/ex3p -m ../../data/star.mesh -o 5 // 3D, 2nd order, 2,378,016 DOFs mpirun -np 24 ./plor_solvers -fe n -m ../../data/fichera.mesh -rs 2 -rp 2 -o 3 -no-vis mpirun -np 24 ../../examples/ex3p -m ../../data/fichera.mesh -o 3 For more details on how LOR solvers work in MFEM, see the High-Order Matrix-Free Solvers talk ( PDF , video ) from the 2021 MFEM community workshop .", "title": "  Low-order-refined methods"}, {"location": "tutorial/solvers/#additional-solver-integrations", "text": "In addition to the hypre AMG solvers and MFEM's built-in solvers illustrated above, MFEM also integrates with a number of third-party solver libraries, including: PETSc \u2014 see the ~/mfem/examples/petsc directory SuperLU \u2014 see the ~/mfem/examples/superlu directory STRUMPACK \u2014 see ~/mfem/examples/ex11p.cpp Ginkgo \u2014 see the ~/mfem/examples/ginkgo directory AmgX \u2014 see the ~/mfem/examples/amgx directory Most third-party libraries are not pre-installed in the AWS image, but you can still peruse the example source code to see the capabilities of the various integrations. You can check the containers repository to see which third-party libraries are available for the image you chose. As of December 2023, we pre-install PETSc and SuperLU for the CPU images and AmgX for the CUDA images.", "title": "  Additional solver integrations"}, {"location": "tutorial/start/", "text": "Getting Started 15 minutes basic Lesson Objectives Setup a browser-based MFEM development environment. Run a simple MFEM code to test the environment. Note You need an IP address to follow the steps described below. 
If you are part of the HPC software tutorial series , you should have received an email with the AWS instance IP address allocated to you. Use that in place of IP in the instructions below. If you are running a Docker container locally, as described in the Local Docker Container page, use localhost in place of IP in the instructions below. If you setup your own cloud instance with the Docker container, you should use the cloud instance IP address. Warning If you use VPN, make sure to turn it off before following the instructions below. Set up VS Code Open a new browser window and load http://IP:3000 . You should see the Visual Studio Code (VS Code) interface. Click on Mark Done to continue. Click on open a folder (under Recent ), then select mfem , then click OK . In the left pane, open examples and select ex1.cpp . Open a new terminal by clicking on in the upper left corner, then Terminal , and then New Terminal . Alternatively you can open a new terminal by pressing Ctrl + Shift + ` . You should now see the MFEM source tree and a terminal in the ~/mfem directory. Note The browser window contains a fully functioning copy of Visual Studio Code. You can customize it further, and adjust it similarly to the desktop version. Set up GLVis In this tutorial we use GLVis for finite element visualization based on MFEM. Open a new browser window and load http://IP:8000/live . When you move the mouse to the top of the window you should see the GLVis interface: Click on the Connect to socket icon in the upper left corner, then click CONNECT . Note The Host field in the Connect to socket dialog should match your IP . When the button switches to DISCONNECT , click outside of the Connect to socket dialog to close it. Your environment should now look like: Simple test To test your environment, run ex1 , which together with the MFEM library itself, comes pre-build in the AWS image. In the VS Code terminal, type cd examples ./ex1 You should see 111 iterations printed in the terminal and the image in the GLVis window should change: To test the visualization, click in the GLVis window, and make sure you can rotate the plot with the Left mouse button and zoom in/out with the Right mouse button. Questions? Ask for help in the tutorial Slack channel . Next Steps Go to the Finite Element Basics page. Back to the MFEM tutorial page MathJax.Hub.Config({TeX: {equationNumbers: {autoNumber: \"all\"}}, tex2jax: {inlineMath: [['$','$']]}});", "title": "Start"}, {"location": "tutorial/start/#getting-started", "text": "15 minutes basic", "title": "  Getting Started"}, {"location": "tutorial/start/#set-up-vs-code", "text": "Open a new browser window and load http://IP:3000 . You should see the Visual Studio Code (VS Code) interface. Click on Mark Done to continue. Click on open a folder (under Recent ), then select mfem , then click OK . In the left pane, open examples and select ex1.cpp . Open a new terminal by clicking on in the upper left corner, then Terminal , and then New Terminal . Alternatively you can open a new terminal by pressing Ctrl + Shift + ` . You should now see the MFEM source tree and a terminal in the ~/mfem directory.", "title": "  Set up VS Code"}, {"location": "tutorial/start/#set-up-glvis", "text": "In this tutorial we use GLVis for finite element visualization based on MFEM. Open a new browser window and load http://IP:8000/live . 
When you move the mouse to the top of the window, you should see the GLVis interface: Click on the Connect to socket icon in the upper left corner, then click CONNECT .", "title": "  Set up GLVis"}, {"location": "tutorial/start/#simple-test", "text": "To test your environment, run ex1 , which, together with the MFEM library itself, comes pre-built in the AWS image. In the VS Code terminal, type cd examples ./ex1 You should see 111 iterations printed in the terminal and the image in the GLVis window should change: To test the visualization, click in the GLVis window, and make sure you can rotate the plot with the Left mouse button and zoom in/out with the Right mouse button.", "title": "  Simple test"}]} \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index f4828bb360921e3ceb92d2a87d6845a0088d18f1..cd40236dd20c5772ff57bde9c917ebc9ea4781a0 100644 GIT binary patch delta 15 WcmZo+X<=cL@8;mpKfjUfA0q%CvIK