[CMIP6] multiple statistics from one download #24

Closed
tdcwilliams opened this issue Mar 16, 2023 · 85 comments
Labels: notebook, question (Further information is requested), wp4

Comments

@tdcwilliams

Hi Mattia (@malmans2),

I would like to get daily time series of different quantities calculated from the sea ice concentration, using one download if possible.
The statistics I want for each day, for each model, and for both the Arctic and Antarctic, are:

  • sea_ice_extent = area_of_grid_cells[region_mask] * sea_ice_mask[region_mask], with sea_ice_mask = 1 if sea_ice_concentration > 0.3 else 0, and e.g. region_mask_arctic = lat > 40 or region_mask_antarctic = lat < -40
  • sea_ice_area = area_of_grid_cells[region_mask] * sea_ice_concentration[region_mask]
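For concreteness, the two statistics could be sketched in plain NumPy like this (a rough illustration; the function names and SIC_THRESH are made up here, not toolbox functions):

```python
import numpy as np

SIC_THRESH = 0.3  # concentration above this counts as "ice-covered"

def sea_ice_extent(cell_areas, sic, region_mask):
    # Total area of regional cells whose concentration exceeds the threshold
    ice_mask = sic > SIC_THRESH
    return np.sum(cell_areas[region_mask & ice_mask])

def sea_ice_area(cell_areas, sic, region_mask):
    # Concentration-weighted area summed over the region
    return np.sum(cell_areas[region_mask] * sic[region_mask])
```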

Are there already or could there be some functions implemented that:

  • calculate an array with the area of all the grid cells?
  • give a list of all the CMIP6 models?

Also, could the transform_func argument of download_and_transform take a list or even a dict of different transform functions (e.g. transform_func=[get_arctic_extent, get_arctic_area, get_antarctic_extent, get_antarctic_area]) and output a list/dict/pandas.DataFrame, so we don't have to download again every time we want to calculate a different statistic?

@malmans2
Member

Hi @tdcwilliams,

None of the diagnostics/functions that you are looking for have been implemented yet.

It is not possible to pass multiple transform functions at the moment, but the downloading step is cached separately under the hood. If you re-run download_and_transform changing only the transform_func, no data is actually downloaded - the cached data is reused. So you should be able to easily build the list yourself:

out = []
for transform_func in transform_funcs:
    out.append(download.download_and_transform(..., transform_func=transform_func))

If it's of use, we just added a much more advanced notebook: https://github.com/bopen/c3s-eqc-toolbox-template/blob/main/notebooks/wp4/clima_and_bias_pr_cmip6_regionalised.ipynb

@tdcwilliams
Author

Hi @malmans2,
I'll take a look at that one and try out your suggestion.
Regards, Tim

@tdcwilliams
Author

Hi Mattia @malmans2,

I've made a bit of progress with this
cmip6_sea_ice_diagnostics.ipynb.zip

However, I am not so used to xarray and was not sure what to return from get_sea_ice_extent and get_sea_ice_area.
I just want time series, but don't know how to do this in xarray. I would also like to delete the sea ice concentration array, which I don't need after the extent/area calculations.

Regards,
Tim

@malmans2
Member

malmans2 commented Mar 21, 2023

Hi Tim,
You always have to return a Dataset.
If you want to cache them separately, you can just convert the DataArray to a Dataset (and add the model dimension if you want to concatenate different models).
For example, you can do:

ds = sie.to_dataset(name="extent")
return ds.expand_dims(model=[model])

If you compute them in the same function (and therefore you cache a single dataset with both variables), you can do:

ds = xr.merge([sie.rename("extent"), sia.rename("area")])
return ds.expand_dims(model=[model])

That way you don't have to drop any variables.
(But if you need to drop variables, you are looking for the drop_vars method of xarray).
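For what it's worth, the pattern above can be tried on toy data (assuming xarray and pandas are available; the model name and values are made up):

```python
import pandas as pd
import xarray as xr

# Toy daily "extent" and "area" time series as DataArrays
time = pd.date_range("2000-01-01", periods=3)
sie = xr.DataArray([10.0, 11.0, 12.0], coords={"time": time}, dims="time")
sia = xr.DataArray([8.0, 8.5, 9.0], coords={"time": time}, dims="time")

# Cache a single variable: DataArray -> Dataset, tagged with the model name
ds_extent = sie.to_dataset(name="extent").expand_dims(model=["MODEL-X"])

# Or cache both variables in one dataset
ds_both = xr.merge([sie.rename("extent"), sia.rename("area")]).expand_dims(model=["MODEL-X"])
```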

Hope it helps!

@tdcwilliams
Author

Thanks @malmans2 - it did help. I am still unclear about sie in my get_sea_ice_extent function, though. So far I just turn it into a numpy array - should I be using some kind of xarray data type instead?

def get_sea_ice_extent(ds, sic_name, model, **kwargs):
    grid_cell_areas = get_grid_cell_areas(ds, **kwargs)
    sie = np.array([np.sum(grid_cell_areas[sic > SIC_THRESH])
                   for sic in ds[sic_name][:]])
    return ds.expand_dims(model=[model])

@malmans2
Member

Yes, sie should be a DataArray, which is an xarray type.
Looking at your code, it looks like you don't need to convert to numpy.
I can show you how to do it if you send me a minimal reproducible example.

I.e., send me a notebook or a python script that does this:

  1. Defines a small request just for testing purposes and uses ds = download.download_and_transform(collection_id, request)
  2. Applies get_sea_ice_* to ds (please define all args and kwargs as well). Don't worry about using xarray or returning xarray objects - I will show you how to do with xarray the same thing you are doing with numpy.

@tdcwilliams
Author

Hi @malmans2 - I have just got something downloaded and am playing around to get to know xarray a bit.
Unfortunately, it is not a regular lon-lat grid as I thought so the get_grid_cell_areas function is not quite right.

@malmans2 malmans2 added the question Further information is requested label Mar 24, 2023
@tdcwilliams
Author

Hi @malmans2,
Could you take a look at this notebook please?
cmip6_sea_ice_diagnostics.ipynb.zip

I have the downloading and transforming working, giving time series of sea ice extent and area.
They are datasets with dimensions model, region (Arctic or Antarctic), and time.
I would like to add some time series plots at the end, showing the mean and spread of the models.
If you could show me how to do this for the Arctic and one experiment, that would be great.

Also, the calculation of the grid areas is maybe a bit complicated, and it doesn't work for either ERA5 or one CMIP6 model (they don't provide the corners of the grid cells), so maybe it would be simpler to regrid onto a 100 km equal-area grid or something. Is regridding very slow?

Ciao and happy easter,
Tim

@malmans2
Member

malmans2 commented Apr 5, 2023

Ciao @tdcwilliams !

I should be able to take a look this week.

Also, the calculation of the grid areas is maybe a bit complicated, and it doesn't work for either ERA5 or one CMIP6 model (they don't provide the corners of the grid cells), so maybe it would be simpler to regrid onto a 100 km equal-area grid or something. Is regridding very slow?

We actually already have a function that is optimised for interpolations. It's called diagnostics.regrid (see here).

Sorry, it was originally meant to be a private method (and I might move it under utils in the future), so I forgot to add docstrings. But it's quite simple and we use https://xesmf.readthedocs.io/en/latest/ under the hood. Here is the docstring:

  • obj is the xarray object you want to regrid
  • grid_out is the output grid
  • method is the xesmf interpolation you want to use (e.g., bilinear, conservative, ...)
  • **kwargs is any other keyword argument for xesmf

For example, it's used in a couple of notebooks in wp4 to interpolate CMIP to ERA5 (see here).

Interpolating shouldn't be too slow at all, and if you use our regrid function it's optimised: we compute the weights only once and cache them on disk as a netCDF file, so any time you need to interpolate data from/to the same grids the weights are reused. It's very much up to you - if you think it's scientifically OK to interpolate, you can probably do that.

@tdcwilliams
Author

Thanks @malmans2, I might use that regrid method instead of trying to code up areas for yet another type of grid. Great to hear it is optimised. The model SICs are pretty smooth so it should probably be fine to interpolate them.
Ciao!

@malmans2
Member

Hi @tdcwilliams,

I'm looking at your notebook, sorry about the delay.
Quick question: why are you computing the cell areas rather than using the model output?
Can we use the variable "Grid-cell area for ocean variables" that is available on the CDS?
See: https://cds.climate.copernicus.eu/cdsapp#!/dataset/projections-cmip6?tab=form

@tdcwilliams
Author

Hi @malmans2 - I hadn't noticed we could output the grid-cell areas - that is certainly a much easier way!

@malmans2
Member

OK! I'm making a notebook template for your use case.

Looks like we still need to estimate the grid cell areas for ERA5, but I've asked ECMWF if that variable is available somewhere.
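For a regular lat/lon grid like ERA5's, cell areas can be estimated analytically as R^2 * Δλ * (sin φ_north − sin φ_south). A rough sketch of the idea (illustrative only, not the toolbox diagnostic itself):

```python
import numpy as np

EARTH_RADIUS = 6_371_000.0  # metres (spherical Earth approximation)

def lat_band_cell_areas(lat_edges_deg, lon_step_deg):
    """Area (m^2) of one cell in each latitude band of a regular lat/lon grid."""
    phi = np.deg2rad(np.asarray(lat_edges_deg, dtype=float))
    dlam = np.deg2rad(lon_step_deg)
    return EARTH_RADIUS**2 * dlam * np.abs(np.diff(np.sin(phi)))

# Sanity check: 1x1-degree cells over the whole sphere add up to Earth's surface
areas = lat_band_cell_areas(np.linspace(-90.0, 90.0, 181), 1.0)
total = areas.sum() * 360.0  # 360 cells per latitude band
```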

@malmans2
Member

malmans2 commented Apr 12, 2023

Hi @tdcwilliams,

I've added a template for your use case. You can find it here.
It's just a starting point, please make sure the analysis is done correctly. We will update the template as you make progress.

A couple of comments:

  1. I've executed it using WP4 on the VM, so you should find everything already cached there.
  2. Unfortunately, we have to compute the cell areas for ERA5. I've added a diagnostic to do that, you can check the code here.
  3. I think the cell area might be missing for some CMIP models. Let me know if that's the case for any of the models you need. If we are lucky and they have regular grids, we can just use the same diagnostic used for ERA5. Otherwise, we will have to regrid or implement something similar to what you did in your notebook.
  4. When you start analysing large time periods, you might want to explore different chunking (e.g., 10 years).

The template notebook should produce these figures:
[figures omitted]

@tdcwilliams
Author

Hi @malmans2 - thanks for the help with this.

I had a closer look at the area_of_ocean_grid_cell variable, but unfortunately very few models provide it,
so maybe the diagnostics.regrid option might be the best "one-size-fits-all" approach?

Relating to the chunking, I have been trying to process more data but am getting many crashes:

  • most seem to be lost connections to the CDS during download - would having larger chunks help with this?
  • another crash (the kernel just restarts) is with an HR dataset, CMCC-CM2-HR4 (I guess it is a high-resolution one?), maybe from running out of memory.
    • I tried transform_chunks=True but got an error; after that I did my own manual chunking (download_and_transform followed by xr.merge) and I was able to process this dataset.
  • I wonder whether, for large datasets like CMIP6, most users would even try to download and process on the fly (for me it is taking hours if not days) - probably they would download first and then do their processing? That is, perhaps a download script plus a notebook might be a better solution?

Ciao, Tim

@malmans2
Member

malmans2 commented Apr 13, 2023

Hi Tim,

maybe the diagnostics.regrid option might be the best "one-size-fits-all" approach?

I'll implement this in the template. I just need to know which grid to use for the interpolation. Maybe ERA5? I think that's what CMCC does in a template for WP4.

most seem to be lost connection to CDS for download - would having larger chunks help this?

Not sure; unfortunately these issues are very hard to debug, as we don't maintain the VM or cdsapi. They are also hard to reproduce - often they only show up when the system is busy. I think CMCC has been working OK with 10-year chunking (see this notebook)

another crash (kernel just restarts) is with an HR dataset CMCC-CM2-HR4 (I guess it is a high-resolution one?) maybe from running out of memory.
I tried transform_chunks=True but got an error; after that I did my own manual chunking (download_and_transform followed by xr.merge) and I was able to process this dataset.

I'll look into this. But I don't understand why something changed with transform_chunks=True. Are you referring to the template I uploaded yesterday? That argument is not specified, so it should be True by default.

I wonder for such large datasets like CMIP6 maybe most users wouldn't try to download and process on the fly (for me it is taking hours if not days) - probably they would download first and then do their processing? That is, perhaps a download script plus a notebook might be a better solution?

It's not really processed on the fly.
This is what's happening with transform_chunks=True:

  1. download and save a raw chunk to disk
  2. open the raw chunk
  3. transform the raw chunk
  4. save the transformed chunk to disk (one file per chunk is cached)

If you want to download all data first, then transform, you can just run download_and_transform with transform_func=None. E.g.:

for transform_func in (None, my_transform_func):
    ds = download.download_and_transform(collection_id, request, transform_func=transform_func, **kwargs)

If it's not clear, with transform_chunk=False this is the workflow:

  1. download and save ALL raw chunks to disk
  2. open and merge ALL raw chunks
  3. transform the merged dataset
  4. save the transformed dataset to disk (only one file is cached)

I would only use transform_chunk=False for operations that perform reductions along the same dimension as the chunks (e.g., the chunked dimension is time and you need to compute the climatology).

@malmans2
Member

malmans2 commented Apr 13, 2023

Hi @tdcwilliams,

I've explored the issue you mentioned with CMCC-CM2-HR4 a bit. The problem is that siconc and areacello have inconsistent sizes (I believe this is due to NEMO's halo points).

I've added this code in the template:

        # Remove extra-points
        isel_dict = {}
        for dim, size in areacello.sizes.items():
            match size - siconc.sizes[dim]:
                case 1:
                    isel_dict[dim] = slice(None, -1)
                case 2:
                    isel_dict[dim] = slice(1, -1)
        if isel_dict:
            areacello = areacello.isel(**isel_dict).drop(list(isel_dict))

CMCC-CM2-HR4 is now successfully plotted:
[figures omitted]

@malmans2
Member

Update: I tried all models with sea-ice variables (listed in your original notebook).
Out of 37 models, 11 are causing issues (mostly because they do not provide areacello and do not have a regular lat/lon grid).
Here are the errors raised, which I'm exploring:

{'cams_csm1_0': 'Exception("an internal error occurred processing your request. No matching data for request {\'experiment\': \'historical\', \'model\': \'CAMS-CSM1-0\', \'temporal_resolution\': \'fx\', \'variable\': \'areacello\'}.")',
 'fgoals_f3_l': 'Exception("an internal error occurred processing your request. No matching data for request {\'experiment\': \'historical\', \'model\': \'FGOALS-f3-L\', \'temporal_resolution\': \'fx\', \'variable\': \'areacello\'}.")',
 'hadgem3_gc31_ll': 'Exception("an internal error occurred processing your request. No matching data for request {\'experiment\': \'historical\', \'model\': \'HadGEM3-GC31-LL\', \'temporal_resolution\': \'fx\', \'variable\': \'areacello\'}.")',
 'hadgem3_gc31_mm': 'Exception("an internal error occurred processing your request. No matching data for request {\'experiment\': \'historical\', \'model\': \'HadGEM3-GC31-MM\', \'temporal_resolution\': \'fx\', \'variable\': \'areacello\'}.")',
 'ipsl_cm5a2_inca': 'Exception("an internal error occurred processing your request. No matching data for request {\'experiment\': \'historical\', \'model\': \'IPSL-CM5A2-INCA\', \'temporal_resolution\': \'fx\', \'variable\': \'areacello\'}.")',
 'ipsl_cm6a_lr': 'Exception("an internal error occurred processing your request. No matching data for request {\'experiment\': \'historical\', \'model\': \'IPSL-CM6A-LR\', \'temporal_resolution\': \'fx\', \'variable\': \'areacello\'}.")',
 'kiost_esm': 'ValueError("cannot align objects with join=\'exact\' where index/labels/sizes are not equal along these coordinates (dimensions): \'latitude\' (\'latitude\',)")',
 'miroc_es2l': "KeyError('y')",
 'nesm3': 'Exception("an internal error occurred processing your request. No matching data for request {\'experiment\': \'historical\', \'model\': \'NESM3\', \'temporal_resolution\': \'fx\', \'variable\': \'areacello\'}.")',
 'taiesm1': 'Exception("an internal error occurred processing your request. No matching data for request {\'experiment\': \'historical\', \'model\': \'TaiESM1\', \'temporal_resolution\': \'fx\', \'variable\': \'areacello\'}.")',
 'ukesm1_0_ll': 'Exception("an internal error occurred processing your request. No matching data for request {\'experiment\': \'historical\', \'model\': \'UKESM1-0-LL\', \'temporal_resolution\': \'fx\', \'variable\': \'areacello\'}.")'}

@malmans2
Member

Hi @tdcwilliams,

I made some progress.

  1. I've generalised the function to compute areas, so it works OK with both 1D and 2D lat/lon (see here). The areas are now cached, so they are computed only once when the lat/lon bounds are the same.
  2. I've changed the notebook template. Now the areas are always computed and never downloaded. It looks like it's working OK, except for one model, fio_esm_2_0 (I believe the issue is the order of the vertices provided, but I need to investigate).
  3. I've tried a couple of years for all models, and there are a couple of models that are quite different (next week I'll make sure there's no bug in the code that computes the areas).

[figures omitted]

Let's catch up next week. You now have a few options:

  1. Regrid all models to the same grid, then compute areas
  2. Download areas when available, compute otherwise
  3. Always compute areas on the native grid

Have a good weekend!

@tdcwilliams
Author

Ciao @malmans2 - thanks a lot - that looks great!
Sorry to be slow in getting back to you - I was a bit sick last week.

I think I am leaning towards (1) the regridding approach (simplicity of being able to treat all models the same and not needing many different cases).

A convenient target grid would be the ones (Arctic or Antarctic) used by https://cds.climate.copernicus.eu/cdsapp#!/dataset/satellite-sea-ice-concentration?tab=overview (25 km equal-area grids, so each grid cell is just 625 km^2). This would also let us reuse the approach to add comparisons with the satellite observations later on.

Ciao, Tim

@malmans2
Member

Hi Tim,
I'll update the template using (1).
What kind of interpolation do you think is the best for sea ice concentration variables?
These are the interpolations available: https://xesmf.readthedocs.io/en/latest/notebooks/Compare_algorithms.html

@tdcwilliams
Author

Thanks Mattia. Probably conservative is the best interpolation method.

@malmans2
Member

malmans2 commented Apr 17, 2023

Hi @tdcwilliams,

The template notebook is ready: https://github.com/bopen/c3s-eqc-toolbox-template/blob/main/notebooks/wp4/cmip6_sea_ice_diagnostics.ipynb

I think I had the units wrong in my previous figures; they should be OK now.
I only tested a couple of models. I'm now running the same notebook for all CMIP models, but the CDS queue is quite long at the moment.

[figures omitted]

@malmans2
Member

Hi there,
I've updated the template notebook. It now works with all CMIP6 models, and it's already cached on the VM (user WP4).

Note that the conservative interpolation also needs ignore_degenerate=True for several models.
The reason why ignore_degenerate is needed is explained here: JiaweiZhuang/xESMF#60.
I'm not sure whether this is ideal; if not, you should use a bilinear interpolation instead (you can just change the method argument).

Here is a quick-and-dirty plot of the results:
[figures omitted]

@tdcwilliams
Author

Thanks @malmans2 - I'll try it out soon. Will try changing the method to bilinear; nearest_s2d could be another option, but maybe it doesn't matter too much.
Ciao, Tim

@malmans2 malmans2 added the wp4 label Jun 26, 2023
@malmans2
Member

Hi @tdcwilliams,

What's the status of this issue? Are you still working on this and/or are there any standing issues?

@malmans2 malmans2 added the stale This will not be worked on label Aug 23, 2023
@malmans2
Member

malmans2 commented Sep 22, 2023

Hi Tim,

I'm scraping the CDS forms because most of the experiments have some inconsistencies.
Which experiments would you like to analyse?
All of these, more, or fewer? {"historical", "ssp1_2_6", "ssp2_4_5", "ssp3_7_0", "ssp5_8_5"}

@malmans2
Member

malmans2 commented Sep 22, 2023

Here is ERA5 with interpolation method="bilinear" and periodic=True
[figures omitted]

@tdcwilliams
Author

Hi @malmans2 - I added another issue for evaluation of sea ice thickness: #103.
I don't think we can manage to get these notebooks finished before Monday so I've asked for an extension.

@malmans2
Member

Hi @tdcwilliams,

OK. The extension is good, although we might be able to get it done before the deadline.

Hi Tim,

I'm scraping the CDS forms because most of the experiments have some inconsistency. Which experiments would you like to analyse? All of these, more, or less? {"historical", "ssp1_2_6", "ssp2_4_5", "ssp3_7_0", "ssp5_8_5"}

Are these the experiments that you need to analyse?

@tdcwilliams
Author

Hi @malmans2 - that's right, those are the experiments with a significant number of models providing sea ice concentration.

@tdcwilliams
Author

Hi @malmans2 - the new deadline for the task is 30 November

@malmans2
Member

OK. BTW, it's now caching the last experiment.

@tdcwilliams
Author

great!

@malmans2
Member

Hi @tdcwilliams,

The notebook is ready.
Here is the template: https://github.com/bopen/c3s-eqc-toolbox-template/blob/main/notebooks/wp4/cmip6_sea_ice_diagnostics.ipynb
Here is the notebook executed: https://gist.github.com/malmans2/07518e17ad464a49e26cceeedf47298d

I resampled the time series (yearly means); otherwise the seasonal variability makes the plot too messy.
ERA5 in the Antarctic looks odd - let me know if this is unexpected and there's something wrong in the template.

@malmans2
Member

PS for the plotting - with the full range the seasonal cycle might be too squashed to be visible, so maybe we could have the current plots for present time +/- 10 or 20 years. For the full period, we could plot the yearly maximum and minimum of each of the diagnostics. Thanks a lot for the help.

Oops, I forgot about this. Implementing it now.

@tdcwilliams
Author

Thanks a lot @malmans2, I'll play around a bit with the plotting.
Ciao, Tim

@tdcwilliams
Author

PS: ERA5 does look weird in Antarctica. It looks a bit like there are missing years in the time series that just got joined up with straight lines.

@malmans2
Member

I'll run some checks! (probably tomorrow)

@malmans2
Member

malmans2 commented Sep 27, 2023

I invalidated the cache and recomputed the diagnostics for ERA5, but Antarctica still looks weird.
Have you seen this page? https://climate.copernicus.eu/ESOTC/2019/sea-ice
If you look at the bottom under "Processing steps", it looks like some additional filtering is done (e.g., removing lakes and/or spurious points).

Do you think that could be the cause of the issues?

@tdcwilliams
Author

Hi @malmans2, I have looked at it more closely myself, and there is data in those flat periods. Maybe they did some artificial constraining of the sea ice (e.g. towards a climatology) when there was no data to assimilate (pre-1979), which could be why there is so little variability at times. I can't find anything about it in the documentation, though. Perhaps we just stick to after 1980 in our plots.

Here's my latest notebook.
I've changed the plotting a bit to make things clearer, e.g. just plotting the interquartile range instead of tertiles and full range (on the advice of Lorenzo).

I kind of prefer a couple of extra functions get_sic and get_output_grid to make the cache function clearer so I added them.

I've also added extent and area for satellites.
However the missing data messes up the yearly max and min - do you know how to skip years with missing data?
(I guess somehow getting xarray to use max instead of nanmax would do it?)
cmip6_sea_ice_diagnostics.ipynb.zip

@malmans2
Member

malmans2 commented Sep 28, 2023

Hi @tdcwilliams,

Perhaps we just stick to after 1980 in our plots.

OK

I've changed the plotting a bit to make things a bit clearer eg just plot interquartile range instead of tertiles and full range (on the advice of Lorenzo).

OK, I'll implement the same in the template

I kind of prefer a couple of extra functions get_sic and get_output_grid to make the cache function clearer so I added them.

Fine by me. But to do so, we need to invalidate the cache and re-run the transform functions. If you are OK with it, I'll go ahead (but please don't run this notebook until everything is cached again)

BTW, are you sure that all models/satellites have the proper CF attribute "standard_name"? If yes, we can replace your code to get sic with this: da_sic = ds.cf["sea_ice_area_fraction"]

I've also added extent and area for satellites.
However the missing data messes up the yearly max and min - do you know how to skip years with missing data?
(I guess somehow getting xarray to use max instead of nanmax would do it?)

OK, I'll add satellite data in the template

@tdcwilliams
Author

Hi @malmans2,
OK, it seems like get_sic works when I use it on models and satellite data, but your way is quicker if it also works.
If you are rerunning, can you change the units to "$10^6$km$^2$" (so km is not in italics)?

@tdcwilliams
Author

BTW, when I said to stick to after 1980, I only meant for ERA5, not CMIP6.

@tdcwilliams
Author

Hi @malmans2, is it OK for me to run this notebook again?
Ciao, Tim

@malmans2
Member

malmans2 commented Oct 3, 2023

Hi @tdcwilliams,

Not yet - I'm caching everything right now. Hopefully it will be ready in the afternoon.
Could you please check whether the way the functions are coded is clearer?
https://github.com/bopen/c3s-eqc-toolbox-template/blob/main/notebooks/wp4/cmip6_sea_ice_diagnostics.ipynb

@tdcwilliams
Author

Thanks @malmans2. I think the code is very nice now. The satellite code is certainly much simpler with yearly chunking - is that working OK? Also, how does filling missing months with 0 avoid the max/min problem for the observations?

@malmans2
Member

malmans2 commented Oct 3, 2023

The satellite code is certainly much simpler with yearly chunking - is that working OK?

It works OK, except for EUMETSAT April/May 1986 (no data, which results in 0s). This is why I'm getting rid of those months: ds = ds.where(ds != 0).dropna("time")

Also, how does filling missing months with 0 avoid the max/min problem for the observations?

That's a separate issue, which is addressed in this function:

def full_year_only_resample(ds, reduction):
    mask = ds["time"].resample(time="Y").count() == 12
    return getattr(ds.resample(time="Y"), reduction)().where(mask, drop=True)
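The same complete-year idea can be illustrated with a plain pandas series (toy data: 12 months of one year, then only 10 of the next):

```python
import pandas as pd

# 22 monthly values: all of 2000, then Jan-Oct 2001
time = pd.date_range("2000-01-01", periods=22, freq="MS")
s = pd.Series(range(22), index=time, dtype=float)

counts = s.resample("YS").count()  # months present per year
complete = counts == 12            # keep only full years
yearly_max = s.resample("YS").max()[complete]
```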

@tdcwilliams
Author

great, sounds good

@malmans2
Member

malmans2 commented Oct 4, 2023

Hi @tdcwilliams,

I'm planning to work on #102 now.
Let me know if this notebook is OK as we'll probably re-use most of the code.

@tdcwilliams
Author

Hi @malmans2, it seems good.
Cheers,
Tim

@malmans2 malmans2 removed the wip This issue or pull request already exists label Oct 4, 2023
@malmans2 malmans2 closed this as completed Oct 4, 2023