---

created_at: '2019-02-21T02:46:25Z'
tags:
- molecular dynamics
{% include "partials/app_header.html" %}
[//]: <> (APPS PAGE BOILERPLATE END)

GROMACS (a proper name, not an acronym) is a versatile
package to perform molecular dynamics, i.e. simulate the Newtonian
equations of motion for systems with hundreds to millions of particles.

It is primarily designed for biochemical molecules like proteins, lipids
and nucleic acids that have a lot of complicated bonded interactions,
but since GROMACS is extremely fast at calculating the nonbonded
interactions (that usually dominate simulations), many groups are also
using it for research on non-biological systems, e.g. polymers.

GROMACS is available to anyone at no cost under the terms of
[the GNU Lesser General Public Licence](http://www.gnu.org/licenses/lgpl-2.1.html).
GROMACS is a joint effort, with contributions from developers around the world: users agree
to acknowledge use of GROMACS in any reports or publications of results
obtained with the Software.


## Examples

=== "Serial"
For when only one CPU is required, generally as part of
a [job array](../../Getting_Started/Next_Steps/Parallel_Execution.md#job-arrays).

```sl
#!/bin/bash -e

#SBATCH --job-name GROMACS-serial
#SBATCH --time 00:05:00 # Walltime
#SBATCH --account nesi99991 # Your project ID
#SBATCH --mem 1500 # How much memory.

module load GROMACS/{{app.default}}

# Note: In version 2021.5 and older use `gmx-serial` instead of `gmx`
srun gmx mdrun -s input.tpr -o trajectory.trr -c struct.gro -e energies.edr
```

=== "Shared Memory"
Uses a node's shared memory for communication.

```sl
#!/bin/bash -e

#SBATCH --job-name GROMACS-shared-mem
#SBATCH --time 00:05:00 # Walltime
#SBATCH --account nesi99991 # Your project ID
#SBATCH --cpus-per-task 8 # Will use 8 CPUs
#SBATCH --mem 1500 # How much memory.

module load GROMACS/{{app.default}}

# Note: In version 2021.5 and older use `gmx-serial` instead of `gmx`
srun gmx mdrun -ntomp ${SLURM_CPUS_PER_TASK} -s input.tpr -o trajectory.trr -c struct.gro -e energies.edr
```
=== "Multi Node (Hybrid)"
Should only be used in the case you need more CPUs than available on a single node.

```sl
#!/bin/bash -e

#SBATCH --job-name GROMACS-multi-node
#SBATCH --time 00:05:00 # Walltime
#SBATCH --account nesi99991 # Your project ID
#SBATCH --nodes 2 # wiLL use 2 nodes.
#SBATCH --mem 1500 # How much memory.

module load GROMACS/{{app.default}}
srun gmx-mpi mdrun-mpi -ntomp ${SLURM_CPUS_PER_TASK} -nomp ${SLURM_NNODES) -s input.tpr -o trajectory.trr -c struct.gro -e energies.edr

=== "GPU"
For more information on using GPUs see [GPU use on NeSI](../Batch_Jobs/GPU_use_on_NeSI.md)
```sl
#!/bin/bash -e

#SBATCH --job-name GROMACS-multi-node
#SBATCH --time 00:05:00 # Walltime
#SBATCH --account nesi99991 # Your project ID
#SBATCH --gpus-per-node 1
#SBATCH --cpus-per-task 8
#SBATCH --mem 1500 # How much memory.

module load GROMACS/{{app.default}}
# Note: In version 2021.5 and older use `gmx-serial` instead of `gmx`
srun gmx mdrun -ntomp ${SLURM_CPUS_PER_TASK} -s input.tpr -o trajectory.trr -c struct.gro -e energies.edr
```
`



## Performance

GROMACS performance depends on several factors, such as usage (or lack
For a complete set of GROMACS options, please refer to the GROMACS
documentation.

Each GROMACS environment module contains two executables: `gmx`, built with
shared-memory parallelism (thread-MPI), and `gmx-mpi`, built with
distributed-memory parallelism (MPI), which can run across multiple nodes,
i.e. with `--ntasks` > 1.

!!! warning
In versions of GROMACS older than `GROMACS/2025.2-foss-2023a-cuda-12.5.0-hybrid`
the `gmx` executable is instead called `gmx-serial`.
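
If you are not sure which names the module you have loaded provides, a quick check from a login or interactive session (not a batch script) is to ask each executable for its version:

```sh
module load GROMACS/{{app.default}}
gmx --version      # shared-memory (thread-MPI) build
gmx-mpi --version  # MPI build, normally launched through srun
```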

## CUDA

GROMACS is built with CUDA support, but using it is optional: it will run without a GPU.
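
By default `mdrun` decides at run time where the short-range nonbonded work runs (`-nb auto`); you can override this explicitly with `-nb cpu` or `-nb gpu`. For example, to force CPU-only nonbonded kernels even under a CUDA build:

```sl
srun gmx mdrun -nb cpu -s input.tpr
```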

## MPI

If you do elect to use `gmx-mpi`, note that hybrid parallelisation (i.e. with `--cpus-per-task` > `1`) can be
more efficient than MPI-only parallelisation. With hybrid parallelisation, it is important to run
`gmx-mpi mdrun` with the `-ntomp <number>` option, where `<number>` should
be the number of CPUs per task. You can make sure the value is correct
by using `-ntomp ${SLURM_CPUS_PER_TASK}`.
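
As a sketch of how the numbers relate (the values here are illustrative, not a recommendation):

```sl
# With --nodes 2, --ntasks-per-node 4 and --cpus-per-task 2, Slurm starts
# 2 x 4 = 8 MPI ranks, and -ntomp ${SLURM_CPUS_PER_TASK} gives each rank
# 2 OpenMP threads, for 2 x 4 x 2 = 16 CPUs in total.
srun gmx-mpi mdrun -ntomp ${SLURM_CPUS_PER_TASK} -s input.tpr
```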

## Checkpointing

The `-cpt 30` option instructs GROMACS to
write a full checkpoint file every 30 minutes.

We recommend including this flag for long-running jobs.

You can restart from a
checkpoint file using the `-cpi` flag, thus: `-cpi state.cpt`.
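
A minimal sketch of how the two flags fit together (`state.cpt` is `mdrun`'s default checkpoint file name, so the same command works for the first run and for any restart):

```sl
# Write a checkpoint every 30 minutes; if state.cpt already exists,
# continue from it rather than starting again.
srun gmx mdrun -cpt 30 -cpi state.cpt -s input.tpr
```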

## Further Documentation