Closed
Changes from all commits
Commits
62 commits
12f11db
update to post-#776 main state without soa
CKehl Apr 3, 2020
715768a
post parcels-2.1.5 main update (with chunking fixes and sgrid fixes)
CKehl Apr 9, 2020
cb1c899
perlin-noise benchmark update to avoid plotting error-aborts in MPI e…
CKehl Apr 9, 2020
f804492
changed create_progressbar() name to make it inheritable. Introduced …
CKehl May 11, 2020
f83aacb
included fixes on the error handling and ErrorCode numbers
CKehl May 11, 2020
99faea1
applied changes to the perlin-noise sample
CKehl May 11, 2020
63567e9
added full suite of benchmarks - synthetic: stommel, CMEMS; realistic…
CKehl May 11, 2020
b27e0b1
fixed tests
CKehl May 11, 2020
06f1891
benchmarking - generally fixing performance logging issues with stomm…
CKehl Jul 16, 2020
e621340
benchmarking - split compute time measurement in side the kernel into…
CKehl Jul 17, 2020
2bb94f8
benchmarking - updated performance logging scripts
CKehl Jul 17, 2020
caabe66
benchmarking - added manual-run testing functions to check MPI behavi…
CKehl Jul 17, 2020
54e4c18
benchmarking - added perlin noise scripts.
CKehl Jul 20, 2020
44a7fa9
benchmarking - added temporal perlin field generators
CKehl Oct 23, 2020
629b5a7
benchmarking - changed the single-core non-MPI timing to a process_ti…
CKehl Oct 23, 2020
52785a6
benchmarking - applying the new plotting and logging interface to the…
CKehl Oct 23, 2020
a9563af
benchmarking - fixed incompatibility issues
CKehl Oct 23, 2020
ef87617
benchmarking - fixed bug in particleset benchmark plotting function
CKehl Oct 23, 2020
f0a519d
benchmarking - fixed bugs in particleset benchmark plotting and loggi…
CKehl Oct 23, 2020
0ca15ca
benchmarking - fixed bugs in particleset benchmark plotting and loggi…
CKehl Oct 23, 2020
c146825
benchmarking - split up IO costs in IO (external) and memory-IO (inte…
CKehl Nov 2, 2020
dada671
benchmarking - fixed the _kernel error in ParticleSet_Benchmark
CKehl Nov 5, 2020
ed978f1
benchmarking - reduced the perlin dataset size to reduce memory cunsu…
CKehl Nov 5, 2020
837b888
benchmarking - fixed IndexError in performance logging
CKehl Nov 6, 2020
11cb02d
benchmarking - fixed MPI-manycore getter of particle numbers (or, gen…
CKehl Nov 9, 2020
900b3b7
benchmarking - fix new folder name of CARTESIUS
CKehl Nov 9, 2020
a13ac67
benchmarking - fixing the cluster-check of CARTESIUS as naming has ch…
CKehl Nov 11, 2020
a611e78
benchmarking - fixing output folders for cartesius
CKehl Nov 13, 2020
aaeef35
benchmarking - added measuring the deletion-of-particles time in the …
CKehl Nov 17, 2020
1cb4d50
benchmarking - added the possibility to adapt plotting ranges for x-y…
CKehl Nov 17, 2020
c4ea4c1
benchmarking - added a toString() function for printing errors
CKehl Nov 17, 2020
e0fd3aa
benchmarking - changed the life_expectancy to make the deletion-bench…
CKehl Nov 17, 2020
6e7ac23
benchmarking - fixed missing x-y axes plotting limits to the syntheti…
CKehl Nov 17, 2020
4586923
benchmarking - balanced compute load on particle numbers (i.e. all sc…
CKehl Nov 20, 2020
d8d5b8f
benchmarking - rebalanced the deletion and addition time to get a pre…
CKehl Nov 21, 2020
39aee93
benchmarking - removed the measurement of write-back of particles to …
CKehl Nov 28, 2020
f9e724a
benchmarking - added measuring the garbage-collection time, fixed the…
CKehl Dec 8, 2020
d34747b
benchmarking - integrated the improved async memory tracker
CKehl Dec 14, 2020
34c6c03
benchmarking - changed the curve-plotting so that the memory measures…
CKehl Dec 14, 2020
7ef5444
benchmarking - after seeing that memory trends between sync and async…
CKehl Dec 28, 2020
9f37436
benchmarking - moved the measuring of some code snippet from compute-…
CKehl Jan 14, 2021
d119a11
benchmarking - added experiments for double-gyre and bickleyjet
CKehl Feb 1, 2021
3cbc0b9
benchmarking - recent modifications for data-writeout and speed modul…
CKehl Feb 1, 2021
23d2fe3
benchmarking - added computer- and branch variables and print stateme…
CKehl Feb 4, 2021
ad1a170
benchmarking - changed the Cartesius addressing for palaeo-parcels, g…
CKehl Feb 8, 2021
bd8c8cc
benchmarking - try to fix periodicity issue in galapagos-case
CKehl Feb 8, 2021
53bb2ed
benchmarking - cleaned up and fixed imports for deep-migration case
CKehl Feb 8, 2021
ed4fa4d
benchmarking - deep migration fix the CARTESIUS domain file location
CKehl Feb 8, 2021
1be5eba
benchmarking - changed the periodicity of the galapagos case
CKehl Feb 9, 2021
11aec69
benchmarking - extended the palaeo-plankton script to include periodi…
CKehl Feb 9, 2021
cfc89a2
benchmarking - changed the time-field composition and the chunksize t…
CKehl Feb 10, 2021
d75d814
benchmarking - modified the palaeo-parcels script to detail the diffe…
CKehl Feb 11, 2021
82fafc8
benchmarking - adapted chunksize again to make the script work
CKehl Feb 11, 2021
bb25273
benchmarking - fixed the chunksize problem with the old chunksize def…
CKehl Feb 11, 2021
7496d6d
benchmarking - fixed periodicity of the bathymetry-field, which has a…
CKehl Feb 12, 2021
ebd7a14
benchmarking - changed the time-length computation to fix the time_or…
CKehl Feb 12, 2021
43ed6bd
benchmarking - try to track time_origin error with improve error output.
CKehl Feb 12, 2021
15c77d7
benchmarking - added time dimension to the bathymetry field to attemp…
CKehl Feb 12, 2021
a0d534d
benchmarking - changed bdimension name in palaeo-parcels
CKehl Feb 12, 2021
ec426af
benchmarking - disabling periodic loading for bathymetry entirely.
CKehl Feb 12, 2021
e2865be
benchmarking - fixing 'time(s)' variable naming.
CKehl Feb 12, 2021
4b78b8f
benchmarking - fixing galapagos script
CKehl Apr 6, 2021
2 changes: 1 addition & 1 deletion environment_py3_win.yml
@@ -4,7 +4,7 @@ channels:
dependencies:
- python=3.6
- cachetools>=1.0.0
- cgen
- cgen>=2020.1
- coverage
- ffmpeg>=3.2.3,<3.2.6
- flake8>=2.1.0
1 change: 1 addition & 0 deletions parcels/__init__.py
@@ -4,6 +4,7 @@
from parcels.fieldset import * # noqa
from parcels.particle import * # noqa
from parcels.particleset import * # noqa
+ from parcels.particleset_benchmark import * # noqa
from parcels.field import * # noqa
from parcels.kernel import * # noqa
import parcels.rng as random # noqa
4 changes: 2 additions & 2 deletions parcels/codegenerator.py
@@ -1,11 +1,11 @@
import ast
import collections
import math
- import numpy as np
import random
from copy import copy

import cgen as c
+ import numpy as np

from parcels.field import Field
from parcels.field import NestedField
@@ -163,7 +163,7 @@ def __getattr__(self, attr):


class ErrorCodeNode(IntrinsicNode):
- symbol_map = {'Success': 'SUCCESS', 'Evaluate': 'EVALUATE', 'Repeat': 'REPEAT', 'Delete': 'DELETE',
+ symbol_map = {'Success': 'SUCCESS', 'Evaluate': 'EVALUATE', 'Repeat': 'REPEAT', 'Delete': 'DELETE', 'StopExecution': 'STOP_EXECUTION',
'Error': 'ERROR', 'ErrorInterpolation': 'ERROR_INTERPOLATION',
'ErrorOutOfBounds': 'ERROR_OUT_OF_BOUNDS', 'ErrorThroughSurface': 'ERROR_THROUGH_SURFACE'}
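The `symbol_map` above translates Python-side `ErrorCode` attribute names into the C macro names that the code generator emits; the added `StopExecution` entry is what lets JIT kernels raise that code. A minimal illustrative sketch of the same name-translation idea (plain Python, not the PR's actual implementation):

```python
# Hypothetical mini translator mirroring ErrorCodeNode.symbol_map
symbol_map = {
    'Success': 'SUCCESS',
    'StopExecution': 'STOP_EXECUTION',
    'ErrorOutOfBounds': 'ERROR_OUT_OF_BOUNDS',
}

def to_c_symbol(name):
    """Map a Python ErrorCode attribute name to its C macro name."""
    return symbol_map[name]

print(to_c_symbol('StopExecution'))  # STOP_EXECUTION
```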

494 changes: 494 additions & 0 deletions parcels/examples/example_dask_chunk_OCMs.py

Large diffs are not rendered by default.

101 changes: 100 additions & 1 deletion parcels/examples/example_nemo_curvilinear.py
@@ -1,17 +1,20 @@
import math
from argparse import ArgumentParser
from datetime import timedelta as delta
from glob import glob
from os import path

import numpy as np
import pytest
import dask

from parcels import AdvectionRK4
from parcels import FieldSet
from parcels import JITParticle
from parcels import ParticleFile
from parcels import ParticleSet
from parcels import ScipyParticle
from parcels import ErrorCode

ptype = {'scipy': ScipyParticle, 'jit': JITParticle}

@@ -28,7 +31,7 @@ def run_nemo_curvilinear(mode, outfile):
'data': data_path + 'V_purely_zonal-ORCA025_grid_V.nc4'}}
variables = {'U': 'U', 'V': 'V'}
dimensions = {'lon': 'glamf', 'lat': 'gphif'}
- field_chunksize = {'lon': 2, 'lat': 2}
+ field_chunksize = {'y': 2, 'x': 2}
field_set = FieldSet.from_nemo(filenames, variables, dimensions, field_chunksize=field_chunksize)
assert field_set.U.field_chunksize == field_chunksize

@@ -101,6 +104,102 @@ def test_nemo_3D_samegrid():
assert fieldset.U.dataFiles is not fieldset.W.dataFiles


def fieldset_nemo_setup():
data_path = path.join(path.dirname(__file__), 'NemoNorthSeaORCA025-N006_data/')
ufiles = sorted(glob(data_path + 'ORCA*U.nc'))
vfiles = sorted(glob(data_path + 'ORCA*V.nc'))
wfiles = sorted(glob(data_path + 'ORCA*W.nc'))
mesh_mask = data_path + 'coordinates.nc'

filenames = {'U': {'lon': mesh_mask, 'lat': mesh_mask, 'depth': wfiles[0], 'data': ufiles},
'V': {'lon': mesh_mask, 'lat': mesh_mask, 'depth': wfiles[0], 'data': vfiles},
'W': {'lon': mesh_mask, 'lat': mesh_mask, 'depth': wfiles[0], 'data': wfiles}}
variables = {'U': 'uo',
'V': 'vo',
'W': 'wo'}
dimensions = {'U': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw', 'time': 'time_counter'},
'V': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw', 'time': 'time_counter'},
'W': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw', 'time': 'time_counter'}}

return filenames, variables, dimensions


def compute_particle_advection(field_set, mode, lonp, latp):

def periodicBC(particle, fieldSet, time):
if particle.lon > 15.0:
particle.lon -= 15.0
if particle.lon < 0:
particle.lon += 15.0
if particle.lat > 60.0:
particle.lat -= 11.0
if particle.lat < 49.0:
particle.lat += 11.0

def OutOfBounds_reinitialisation(particle, fieldset, time):
particle.lon = 2.5
particle.lat = 52.0 + (-1e-3 + np.random.rand() * 2.0 * 1e-3)

pset = ParticleSet.from_list(field_set, ptype[mode], lon=lonp, lat=latp)
pfile = ParticleFile("nemo_particles", pset, outputdt=delta(days=1))
kernels = pset.Kernel(AdvectionRK4) + periodicBC
pset.execute(kernels, runtime=delta(days=4), dt=delta(hours=6),
output_file=pfile, recovery={ErrorCode.ErrorOutOfBounds: OutOfBounds_reinitialisation})
return pset
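The `periodicBC` kernel above uses single-step wraps, which assumes a particle never overshoots the domain by more than one domain width per time step. The same wrapping logic can be sketched standalone with a modulo form that also handles larger overshoots (plain Python, illustrative only, not part of the PR):

```python
def wrap_periodic(value, lower, upper):
    """Wrap value into the half-open interval [lower, upper)."""
    width = upper - lower
    return lower + (value - lower) % width

# Longitude domain [0, 15) and latitude domain [49, 60), as in periodicBC
print(wrap_periodic(15.5, 0.0, 15.0))   # 0.5
print(wrap_periodic(-0.5, 0.0, 15.0))   # 14.5
print(wrap_periodic(48.0, 49.0, 60.0))  # 59.0
```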


@pytest.mark.parametrize('mode', ['jit']) # Only testing jit as scipy is very slow
def test_nemo_curvilinear_auto_chunking(mode):
dask.config.set({'array.chunk-size': '2MiB'})
filenames, variables, dimensions = fieldset_nemo_setup()
field_set = FieldSet.from_nemo(filenames, variables, dimensions, field_chunksize='auto')
assert field_set.U.dataFiles is not field_set.W.dataFiles
npart = 20
lonp = 2.5 * np.ones(npart)
latp = [i for i in 52.0+(-1e-3+np.random.rand(npart)*2.0*1e-3)]
compute_particle_advection(field_set, mode, lonp, latp)
# Nemo sample file dimensions: depthu=75, y=201, x=151
assert (len(field_set.U.grid.load_chunk) == len(field_set.V.grid.load_chunk))
assert (len(field_set.U.grid.load_chunk) == len(field_set.W.grid.load_chunk))
assert (len(field_set.U.grid.load_chunk) != 1)


@pytest.mark.parametrize('mode', ['jit']) # Only testing jit as scipy is very slow
def test_nemo_curvilinear_no_chunking(mode):
dask.config.set({'array.chunk-size': '128MiB'})
filenames, variables, dimensions = fieldset_nemo_setup()
field_set = FieldSet.from_nemo(filenames, variables, dimensions, field_chunksize=False)
assert field_set.U.dataFiles is not field_set.W.dataFiles
npart = 20
lonp = 2.5 * np.ones(npart)
latp = [i for i in 52.0+(-1e-3+np.random.rand(npart)*2.0*1e-3)]
compute_particle_advection(field_set, mode, lonp, latp)
# Nemo sample file dimensions: depthu=75, y=201, x=151
assert (len(field_set.U.grid.load_chunk) == len(field_set.V.grid.load_chunk))
assert (len(field_set.U.grid.load_chunk) == len(field_set.W.grid.load_chunk))
assert (len(field_set.U.grid.load_chunk) == 1)


@pytest.mark.parametrize('mode', ['jit']) # Only testing jit as scipy is very slow
def test_nemo_curvilinear_specific_chunking(mode):
dask.config.set({'array.chunk-size': '128MiB'})
filenames, variables, dimensions = fieldset_nemo_setup()
chs = {'U': {'depthu': 75, 'y': 16, 'x': 16},
'V': {'depthv': 75, 'y': 16, 'x': 16},
'W': {'depthw': 75, 'y': 16, 'x': 16}}

field_set = FieldSet.from_nemo(filenames, variables, dimensions, field_chunksize=chs)
assert field_set.U.dataFiles is not field_set.W.dataFiles
npart = 20
lonp = 2.5 * np.ones(npart)
latp = [i for i in 52.0+(-1e-3+np.random.rand(npart)*2.0*1e-3)]
compute_particle_advection(field_set, mode, lonp, latp)
# Nemo sample file dimensions: depthu=75, y=201, x=151
assert (len(field_set.U.grid.load_chunk) == len(field_set.V.grid.load_chunk))
assert (len(field_set.U.grid.load_chunk) == len(field_set.W.grid.load_chunk))
assert (len(field_set.U.grid.load_chunk) == (1 * int(math.ceil(201.0/16.0)) * int(math.ceil(151.0/16.0))))
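The final assertion counts grid chunks: with depth kept as one slab (75 of 75 levels) and a 16×16 horizontal chunksize over the 201×151 sample grid, the expected chunk count is the product of per-dimension ceilings. A quick stdlib check of that arithmetic:

```python
import math

def n_chunks(dim_size, chunk_size):
    """Number of chunks needed to cover one dimension."""
    return int(math.ceil(dim_size / chunk_size))

# Nemo sample file dimensions: depth=75, y=201, x=151; chunks of 75 x 16 x 16
expected = n_chunks(75, 75) * n_chunks(201, 16) * n_chunks(151, 16)
print(expected)  # 1 * 13 * 10 = 130
```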


if __name__ == "__main__":
p = ArgumentParser(description="""Chose the mode using mode option""")
p.add_argument('--mode', choices=('scipy', 'jit'), nargs='?', default='jit',
2 changes: 1 addition & 1 deletion parcels/examples/example_recursive_errorhandling.py
@@ -13,7 +13,7 @@

@pytest.mark.parametrize('mode', ['scipy', 'jit'])
def test_recursive_errorhandling(mode, xdim=2, ydim=2):
- """Example script to show how recursaive error handling can work.
+ """Example script to show how recursive error handling can work.

In this example, a set of Particles is started at Longitude 0.5.
These are run through a Kernel that throws an error if the
167 changes: 167 additions & 0 deletions parcels/examples/tutorial_timevaryingdepthdimensions.ipynb

Large diffs are not rendered by default.
