diff --git a/docs/user_guide/examples/explanation_kernelloop.md b/docs/user_guide/examples/explanation_kernelloop.md
new file mode 100644
index 0000000000..26f7607dd2
--- /dev/null
+++ b/docs/user_guide/examples/explanation_kernelloop.md
@@ -0,0 +1,144 @@
+---
+file_format: mystnb
+kernelspec:
+  name: python3
+---
+
+# The Parcels Kernel loop
+
+On this page we discuss Parcels' execution loop, and what happens under the hood when you combine multiple Kernels.
+
+This is not very relevant when you only use the built-in Advection kernels, but can be important when you are writing and combining your own Kernels!
+
+## Background
+
+When you run a Parcels simulation (i.e. a call to `pset.execute()`), the Kernel loop is the main part of the code that is executed. It loops through time and executes the Kernels for all particles.
+
+In order to make sure that the displacements of a particle computed in the different Kernels can be summed, all Kernels add to a _change_ in position (`particles.dlon`, `particles.dlat`, and `particles.dz`). This is important, because there are situations where movement kernels would otherwise not commute. Take the example of advecting particles by currents _and_ winds. If a particle were first moved by the currents and then by the winds, the result could differ from first moving it by the winds and then by the currents. Because the _changes_ in position are summed instead, the ordering of the Kernels has no effect on the particle displacement.
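+
+As a toy illustration of why summing the changes makes the result order-independent, consider the sketch below. It is plain Python with made-up velocity functions (`u_current` and `u_wind` are hypothetical, not Parcels fields).
+
+```python
+# Toy example: two spatially varying "velocities", applied either sequentially
+# (moving the particle in between) or by summing the displacements first,
+# as the Parcels Kernel loop does.
+def u_current(lon):  # made-up current velocity
+    return 0.1 * lon
+
+
+def u_wind(lon):  # made-up wind-driven velocity
+    return 0.05 * lon + 1.0
+
+
+dt = 1.0
+lon0 = 10.0
+
+# Sequential updates: the second "kernel" sees an already-moved position,
+# so the result depends on the order.
+lon_currents_first = lon0 + u_current(lon0) * dt
+lon_currents_first += u_wind(lon_currents_first) * dt  # 12.55
+
+lon_wind_first = lon0 + u_wind(lon0) * dt
+lon_wind_first += u_current(lon_wind_first) * dt  # 12.65
+
+# Summed changes: both "kernels" see the same position, so the order does not matter.
+dlon = u_current(lon0) * dt + u_wind(lon0) * dt
+lon_summed = lon0 + dlon  # 12.5
+```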
+
+## Basic implementation
+
+Below is a structured overview of how the Kernel loop is implemented, followed by a schematic sketch in code. Note that only `time` and `lon` are shown; the same process as for `lon` is also applied to `lat` and `z`.
+
+1. Initialise an extra Variable `particles.dlon=0`
+
+2. Within the Kernel loop, for each particle:
+
+   1. Update `particles.lon += particles.dlon`
+
+   2. Update `particles.time += particles.dt` (except on the first iteration of the Kernel loop)
+
+   3. Set variable `particles.dlon = 0`
+
+   4. For each Kernel in the list of Kernels:
+
+      1. Execute the Kernel
+
+      2. Update `particles.dlon` by adding the change in longitude, if needed
+
+   5. If `particles.time` is a multiple of `outputdt`, write `particles.lon` and `particles.time` to the zarr output file
+
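+The same loop can be sketched in simplified Python-style pseudocode. This is not the actual Parcels source; `simulation_is_running`, `first_iteration`, `kernels`, and `write_output` are placeholders, and only `lon` is shown.
+
+```python
+# Simplified pseudocode of the Kernel loop (lon only); the real implementation
+# also handles lat, z, particle states and further bookkeeping.
+particles.dlon = 0.0                        # 1. initialise the change in longitude
+
+while simulation_is_running:                # 2. the Kernel loop
+    particles.lon += particles.dlon         # 2.1 apply the summed displacement
+    if not first_iteration:
+        particles.time += particles.dt      # 2.2 advance particle time
+    particles.dlon = 0.0                    # 2.3 reset the change in longitude
+
+    for kernel in kernels:                  # 2.4 each Kernel adds to particles.dlon
+        kernel(particles, fieldset)
+
+    if particles.time % outputdt == 0:      # 2.5 write output when time is a multiple of outputdt
+        write_output(particles.lon, particles.time)
+```
+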
+Besides making Kernels commute, a key advantage of this implementation is that, when using Field Sampling with e.g. `particles.temp = fieldset.Temp[particles.time, particles.z, particles.lat, particles.lon]`, the particle location stays the same throughout the entire Kernel loop. It also ensures that the particle location written to the output file is the same as the location at which the field was sampled.
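+
+For illustration, a minimal sampling-kernel sketch is shown below. It assumes the fieldset contains a `Temp` field and that the particle class defines a `temp` variable; neither is set up on this page, so treat it as a pattern rather than a runnable cell.
+
+```python
+# Sketch of a sampling kernel (assumes fieldset.Temp and a "temp" particle
+# variable exist). Because particles.lon/lat/z only change at the start of each
+# loop iteration, the sampled value corresponds to the same position that is
+# written to the output file.
+def SampleTemp(particles, fieldset):
+    particles.temp = fieldset.Temp[
+        particles.time, particles.z, particles.lat, particles.lon
+    ]
+```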
+
+## Example with currents and winds
+
+Below is a simple example of some particles at the surface of the ocean. We create an idealised zonal wind field that "pushes" particles that are already advected by the surface currents. The Kernel loop ensures that these two velocity contributions act at the same time and location.
+
+```{code-cell}
+import matplotlib.pyplot as plt
+import numpy as np
+import xarray as xr
+
+import parcels
+
+# Load the CopernicusMarine data in the Agulhas region from the example_datasets
+example_dataset_folder = parcels.download_example_dataset(
+ "CopernicusMarine_data_for_Argo_tutorial"
+)
+
+ds_fields = xr.open_mfdataset(f"{example_dataset_folder}/*.nc", combine="by_coords")
+ds_fields.load() # load the dataset into memory
+
+# Create an idealised wind field and add it to the dataset
+tdim, ydim, xdim = (len(ds_fields.time), len(ds_fields.latitude), len(ds_fields.longitude))
+ds_fields["UWind"] = xr.DataArray(
+    data=np.ones((tdim, ydim, xdim)) * np.sin(ds_fields.latitude.values)[None, :, None],
+    coords=[ds_fields.time, ds_fields.latitude, ds_fields.longitude])
+
+ds_fields["VWind"] = xr.DataArray(
+    data=np.zeros((tdim, ydim, xdim)),
+    coords=[ds_fields.time, ds_fields.latitude, ds_fields.longitude])
+
+fieldset = parcels.FieldSet.from_copernicusmarine(ds_fields)
+
+# Set unit converters for custom wind fields
+fieldset.UWind.units = parcels.GeographicPolar()
+fieldset.VWind.units = parcels.Geographic()
+```
+
+Now we define a wind kernel that uses a forward Euler method to apply the wind forcing. Note that we update the `particles.dlon` and `particles.dlat` variables, rather than `particles.lon` and `particles.lat` directly.
+
+```{code-cell}
+def wind_kernel(particles, fieldset):
+    # Convert dt (a np.timedelta64) to a float number of seconds
+    dt_float = particles.dt / np.timedelta64(1, "s")
+    # Add the wind-driven displacement to the change in position
+    particles.dlon += fieldset.UWind[particles] * dt_float
+    particles.dlat += fieldset.VWind[particles] * dt_float
+```
+
+First, run a simulation where we apply the kernels in the order `[AdvectionRK4, wind_kernel]`
+
+```{code-cell}
+:tags: [hide-output]
+npart = 10
+z = np.repeat(ds_fields.depth[0].values, npart)
+lons = np.repeat(31, npart)
+lats = np.linspace(-32.5, -30.5, npart)
+
+pset = parcels.ParticleSet(fieldset, pclass=parcels.Particle, z=z, lat=lats, lon=lons)
+output_file = parcels.ParticleFile(
+    store="advection_then_wind.zarr", outputdt=np.timedelta64(6, "h")
+)
+pset.execute(
+    [parcels.kernels.AdvectionRK4, wind_kernel],
+    runtime=np.timedelta64(5, "D"),
+    dt=np.timedelta64(1, "h"),
+    output_file=output_file,
+)
+```
+
+Then run a simulation where we apply the kernels in the reverse order, `[wind_kernel, AdvectionRK4]`
+
+```{code-cell}
+:tags: [hide-output]
+pset_reverse = parcels.ParticleSet(
+    fieldset, pclass=parcels.Particle, z=z, lat=lats, lon=lons
+)
+output_file_reverse = parcels.ParticleFile(
+    store="wind_then_advection.zarr", outputdt=np.timedelta64(6, "h")
+)
+pset_reverse.execute(
+    [wind_kernel, parcels.kernels.AdvectionRK4],
+    runtime=np.timedelta64(5, "D"),
+    dt=np.timedelta64(1, "h"),
+    output_file=output_file_reverse,
+)
+```
+
+Finally, plot the trajectories to show that they are identical in the two simulations.
+
+```{code-cell}
+# Plot the resulting particle trajectories overlapped for both cases
+advection_then_wind = xr.open_zarr("advection_then_wind.zarr")
+wind_then_advection = xr.open_zarr("wind_then_advection.zarr")
+plt.plot(wind_then_advection.lon.T, wind_then_advection.lat.T, "-")
+plt.plot(advection_then_wind.lon.T, advection_then_wind.lat.T, "--", c="k", alpha=0.7)
+plt.show()
+```
+
+```{warning}
+Avoid updating `particles.lon` directly in a Kernel, as it interferes with the Kernel loop described above. Assigning a value to `particles.lon` in a Kernel will raise a warning.
+
+Instead, update the change in position `particles.dlon`.
+```
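+
+As a hedged illustration of this warning (the `shift_east` kernels below are hypothetical examples, not part of Parcels), compare a Kernel that modifies the position directly with one that adds to the change in position:
+
+```python
+# Incorrect: modifying particles.lon directly bypasses the summing of
+# displacements in the Kernel loop and will raise a warning.
+def shift_east_wrong(particles, fieldset):
+    particles.lon += 0.1
+
+
+# Correct: add to particles.dlon; the Kernel loop applies it together with
+# the contributions from all other Kernels.
+def shift_east(particles, fieldset):
+    particles.dlon += 0.1
+```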
diff --git a/docs/user_guide/examples_v3/tutorial_kernelloop.ipynb b/docs/user_guide/examples_v3/tutorial_kernelloop.ipynb
deleted file mode 100644
index e10d2c4101..0000000000
--- a/docs/user_guide/examples_v3/tutorial_kernelloop.ipynb
+++ /dev/null
@@ -1,377 +0,0 @@
-{
- "cells": [
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# The Parcels Kernel loop\n"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "This tutorial explains how Parcels executes multiple Kernels, and what happens under the hood when you combine Kernels. \n",
- "\n",
- "This is probably not very relevant when you only use the built-in Advection kernels, but can be important when you are writing and combining your own Kernels!"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Background\n",
- "\n",
- "When you run a Parcels simulation (i.e. a call to `pset.execute()`), the Kernel loop is the main part of the code that is executed. This part of the code loops through all particles and executes the Kernels that are defined for each particle.\n",
- "\n",
- "In order to make sure that the displacements of a particle in the different Kernels can be summed, all Kernels add to a _change_ in position (`particles.dlon`, `particles.dlat`, and `particles.dz`). This is important, because there are situations where movement kernels would otherwise not commute. Take the example of advecting particles by currents _and_ winds. If the particle would first be moved by the currents and then by the winds, the result could be different from first moving by the winds and then by the currents. Instead, by adding the changes in position, the ordering of the Kernels has no consequence on the particle displacement."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Basic implementation"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Below is a structured overview of the Kernel loop is implemented. Note that this is for `lon` only, but the same process is applied for `lat` and `z`.\n",
- "\n",
- "1. Initialise an extra Variable `particles.dlon=0`\n",
- "\n",
- "2. Within the Kernel loop, for each particle:
\n",
- "\n",
- " 1. Update `particles.lon += particles.dlon`
\n",
- "\n",
- " 2. Update `particles.time += particles.dt` (except for on the first iteration of the Kernel loop)
\n",
- "\n",
- " 3. Set variable `particles.dlon = 0`
\n",
- "\n",
- " 4. For each Kernel in the list of Kernels:\n",
- " \n",
- " 1. Execute the Kernel\n",
- " \n",
- " 2. Update `particles.dlon` by adding the change in longitude, if needed
\n",
- "\n",
- " 5. If `outputdt` is a multiple of `particle.time`, write `particle.lon` and `particle.time` to zarr output file
\n",
- "\n",
- "Besides having commutable Kernels, the main advantage of this implementation is that, when using Field Sampling with e.g. `particle.temp = fieldset.Temp[particle.time, particle.z, particle.lat, particle.lon]`, the particle location stays the same throughout the entire Kernel loop. Additionally, this implementation ensures that the particle location is the same as the location of the sampled field in the output file."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Example use"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Below is a simple example of some particles at the surface of the ocean. We create an idealised zonal wind flow that will \"push\" a particle that is already affected by the surface currents."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from datetime import timedelta\n",
- "\n",
- "import matplotlib.pyplot as plt\n",
- "import numpy as np\n",
- "import xarray as xr\n",
- "\n",
- "import parcels\n",
- "\n",
- "# Load the GlobCurrent data in the Agulhas region from the example_data\n",
- "example_dataset_folder = parcels.download_example_dataset(\"GlobCurrent_example_data\")\n",
- "filenames = {\n",
- " \"U\": f\"{example_dataset_folder}/20*.nc\",\n",
- " \"V\": f\"{example_dataset_folder}/20*.nc\",\n",
- "}\n",
- "variables = {\n",
- " \"U\": \"eastward_eulerian_current_velocity\",\n",
- " \"V\": \"northward_eulerian_current_velocity\",\n",
- "}\n",
- "dimensions = {\"lat\": \"lat\", \"lon\": \"lon\", \"time\": \"time\"}\n",
- "fieldset = parcels.FieldSet.from_netcdf(\n",
- " filenames, variables, dimensions, allow_time_extrapolation=True\n",
- ")\n",
- "# uppermost layer in the hydrodynamic data\n",
- "fieldset.mindepth = fieldset.U.depth[0]"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Create an idealised wind field and add it to the fieldset\n",
- "xdim, ydim = (len(fieldset.U.lon), len(fieldset.U.lat))\n",
- "UWind = parcels.Field(\n",
- " \"UWind\",\n",
- " np.ones((ydim, xdim), dtype=np.float32) * np.sin(fieldset.U.lat)[:, None],\n",
- " lon=fieldset.U.lon,\n",
- " lat=fieldset.U.lat,\n",
- " mesh=\"spherical\",\n",
- ")\n",
- "VWind = parcels.Field(\n",
- " \"VWind\",\n",
- " np.zeros((ydim, xdim), dtype=np.float32),\n",
- " grid=UWind.grid,\n",
- ")\n",
- "UWind.units = parcels.tools.converters.GeographicPolar()\n",
- "VWind.units = parcels.tools.converters.Geographic()\n",
- "\n",
- "fieldset_wind = parcels.FieldSet(UWind, VWind)\n",
- "\n",
- "fieldset.add_field(fieldset_wind.U, name=\"UWind\")\n",
- "fieldset.add_field(fieldset_wind.V, name=\"VWind\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Now define a wind kernel that uses a forward Euler method to apply the wind forcing. Note that we update the `particle_dlon` and `particle_dlat` variables, rather than `particle.lon` and `particle.lat` directly."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def wind_kernel(particle, fieldset, time):\n",
- " particle_dlon += (\n",
- " fieldset.UWind[time, particle.z, particle.lat, particle.lon] * particle.dt\n",
- " )\n",
- " particle_dlat += (\n",
- " fieldset.VWind[time, particle.z, particle.lat, particle.lon] * particle.dt\n",
- " )"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Run a simulation where we apply first kernels as `[AdvectionRK4, wind_kernel]`"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "lons = 26.0 * np.ones(10)\n",
- "lats = np.linspace(-37.5, -34.5, 10)\n",
- "\n",
- "pset = parcels.ParticleSet(fieldset, pclass=parcels.Particle, lon=lons, lat=lats)\n",
- "output_file = pset.ParticleFile(\n",
- " name=\"advection_then_wind.zarr\", outputdt=timedelta(hours=6)\n",
- ")\n",
- "pset.execute(\n",
- " [parcels.kernels.AdvectionRK4, wind_kernel],\n",
- " runtime=timedelta(days=5),\n",
- " dt=timedelta(hours=1),\n",
- " output_file=output_file,\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "And also run a simulation where we apply the kernels in the reverse order as `[wind_kernel, AdvectionRK4]`"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "pset_reverse = parcels.ParticleSet(\n",
- " fieldset, pclass=parcels.Particle, lon=lons, lat=lats\n",
- ")\n",
- "output_file_reverse = pset_reverse.ParticleFile(\n",
- " name=\"wind_then_advection.zarr\", outputdt=timedelta(hours=6)\n",
- ")\n",
- "pset_reverse.execute(\n",
- " [wind_kernel, parcels.kernels.AdvectionRK4],\n",
- " runtime=timedelta(days=5),\n",
- " dt=timedelta(hours=1),\n",
- " output_file=output_file_reverse,\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Finally, plot the trajectories to show that they are identical in the two simulations."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Plot the resulting particle trajectories overlapped for both cases\n",
- "advection_then_wind = xr.open_zarr(\"advection_then_wind.zarr\")\n",
- "wind_then_advection = xr.open_zarr(\"wind_then_advection.zarr\")\n",
- "plt.plot(wind_then_advection.lon.T, wind_then_advection.lat.T, \"-\")\n",
- "plt.plot(advection_then_wind.lon.T, advection_then_wind.lat.T, \"--\", c=\"k\", alpha=0.7)\n",
- "plt.show()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Caveat: Avoid updating particle locations directly in Kernels"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "It is better not to update `particle.lon` directly in a Kernel, as it can interfere with the loop above. Assigning a value to `particle.lon` in a Kernel will throw a warning. \n",
- "\n",
- "Instead, update the local variable `particle.dlon`."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Working with Status Codes"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "In order to capture errors in the Kernel loop, Parcels uses a Status Code system. There are several Status Codes, listed below."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from parcels import StatusCode\n",
- "\n",
- "for statuscode, val in StatusCode.__dict__.items():\n",
- " if statuscode.startswith(\"__\"):\n",
- " continue\n",
- " print(f\"{statuscode} = {val}\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Once an error is thrown (for example, a Field Interpolation error), then the `particle.state` is updated to the corresponding status code. This gives you the flexibility to write a Kernel that checks for a status code and does something with it. \n",
- "\n",
- "For example, you can write a Kernel that checks for `particle.state == StatusCode.ErrorOutOfBounds` and deletes the particle, and then append this to the Kernel list in `pset.execute()`."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def CheckOutOfBounds(particle, fieldset, time):\n",
- " if particle.state == StatusCode.ErrorOutOfBounds:\n",
- " particle.delete()\n",
- "\n",
- "\n",
- "def CheckError(particle, fieldset, time):\n",
- " if particle.state >= 50: # This captures all Errors\n",
- " particle.delete()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "But of course, you can also write code for more sophisticated behaviour than just deleting the particle. It's up to you! Note that if you don't delete the particle, you will have to update the `particle.state = StatusCode.Success` yourself. For example:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def Move1DegreeWest(particle, fieldset, time):\n",
- " if particle.state == StatusCode.ErrorOutOfBounds:\n",
- " particle_dlon -= 1.0\n",
- " particle.state = StatusCode.Success"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Or, if you want to make sure that particles don't escape through the water surface"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def KeepInOcean(particle, fieldset, time):\n",
- " if particle.state == StatusCode.ErrorThroughSurface:\n",
- " particle_dz = 0.0\n",
- " particle.state = StatusCode.Success"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Kernel functions such as the ones above can then be added to the list of kernels in `pset.execute()`. \n",
- "\n",
- "Note that these Kernels that control what to do with `particle.state` should typically be added at the _end_ of the Kernel list, because otherwise later Kernels may overwrite the `particle.state` or the `particle_dlon` variables."
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "parcels",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.12.3"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 1
-}
diff --git a/docs/user_guide/index.md b/docs/user_guide/index.md
index c3fe1a3cb8..cf58501f75 100644
--- a/docs/user_guide/index.md
+++ b/docs/user_guide/index.md
@@ -44,6 +44,7 @@ examples/tutorial_delaystart.ipynb
:caption: Write Kernels
:titlesonly:
+examples/explanation_kernelloop.md
examples/tutorial_sampling.ipynb
examples/tutorial_gsw_density.ipynb
examples/tutorial_Argofloats.ipynb