TODO:
enable cmems authentication on github and re-enable the cmems source in test_ssh_catalog_subset() and test_ssh_retrieve_data()
conversion to generic netcdf format, maybe based on DCSM slev files: "p:\archivedprojects\11205259-004-dcsm-fm\waterlevel_data\all\ncFiles\A12.nc". All datasets already have a waterlevel variable. Also enrich with global attrs from metadata and variable attributes like units/standard_name if not present yet. Prevent adding unnecessary dims; in that case improve tidal_plot_tools. >> Conversion to hydrotools-compliant ds was done in add slev retrieval from ddl #794
cmems "my" dataset has LATITUDE/LONGITUDE/POSITION dimensions; these can be flattened, but it might not be an issue anymore after moving from ftp to api >> remove these vars (also for nrt) since lat/lon attrs are already present and station_x_coordinate (and station_y_coordinate) are added anyway.
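The two items above can be sketched in one cleanup step: drop the redundant position variables and only fill in variable attributes when they are missing. This is a minimal sketch, not the dfm_tools implementation; the variable name `waterlevel` matches the notes above, while the chosen `units`/`standard_name` values are assumptions.

```python
import xarray as xr

def clean_cmems_ds(ds: xr.Dataset) -> xr.Dataset:
    """Sketch: drop redundant CMEMS position variables and fill missing attrs."""
    # LATITUDE/LONGITUDE/POSITION duplicate information that is already
    # present as lat/lon attrs and as station_x/y_coordinate variables.
    ds = ds.drop_vars(["LATITUDE", "LONGITUDE", "POSITION"], errors="ignore")
    # only add units/standard_name when not present yet (values are assumed)
    if "waterlevel" in ds:
        ds["waterlevel"].attrs.setdefault("units", "m")
        ds["waterlevel"].attrs.setdefault("standard_name", "sea_surface_height")
    return ds

# tiny demo dataset: one data variable plus a redundant scalar LATITUDE var
ds = xr.Dataset({"waterlevel": ("time", [0.1, 0.2]), "LATITUDE": 52.0})
ds = clean_cmems_ds(ds)
```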
solve code smells (including converting TODOs to issues)
disable_progress_bar=True sometimes hangs for insitu data #893
cleanup ds.attrs, for instance latitude vs geospatial_lat_min/geospatial_lat_max; probably more duplicate data is present.
are station id/name from the hydrotools-compliant dataset aligned with the attrs in read_catalog? Probably not. Consider adding all catalog values as attrs to the ds. Make clear which attrs are always available in the ds and check their presence in the retrieve testcase.
cannot do multiple cmems runs at the same time, since the temporary raw filename is always the same and we can get hdf5 errors. Hopefully solved once we can access the data via copernicusmarine
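Until the copernicusmarine route is available, the filename clash could be avoided by making the raw path unique per call. A minimal stdlib sketch (the function name and prefix are hypothetical, not existing dfm_tools API):

```python
import os
import tempfile
import uuid

def unique_raw_path(prefix: str = "cmems_raw", suffix: str = ".nc") -> str:
    """Return a per-call unique path for the temporary raw download,
    so parallel CMEMS retrievals no longer collide on one shared file."""
    fname = f"{prefix}_{uuid.uuid4().hex}{suffix}"
    return os.path.join(tempfile.gettempdir(), fname)

path_a = unique_raw_path()
path_b = unique_raw_path()
```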
merge based on distance from ssc list? (including standardized station names)
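Matching against the ssc list by distance could look like the sketch below: a haversine distance plus a nearest-station lookup with a cutoff. Station names, coordinates and the 5 km threshold are illustrative assumptions.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points (degrees)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def match_to_ssc(lat, lon, ssc_stations, max_km=5.0):
    """Return the standardized SSC name of the closest station within max_km,
    or None when no SSC station is close enough."""
    best = min(ssc_stations, key=lambda s: haversine_km(lat, lon, s["lat"], s["lon"]))
    if haversine_km(lat, lon, best["lat"], best["lon"]) <= max_km:
        return best["name"]
    return None

# hypothetical standardized SSC entries
ssc = [{"name": "HOEK_VAN_HOLLAND", "lat": 51.977, "lon": 4.120},
       {"name": "VLISSINGEN", "lat": 51.442, "lon": 3.596}]
```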
request gesla direct download of meta and zip?
consider dropping all preselection of the ds (like gesla coastal and ioc no-dart) to make it simpler. Simplify the ioc no_uhslc selection by checking the len of these two columns and checking that both are >0.
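One possible reading of that simplification, as a sketch: build a boolean mask from the string lengths of the two columns and keep rows where both are non-empty. The column names `uhslc_id` and `dart_id` are hypothetical stand-ins for "these two columns".

```python
import pandas as pd

def select_both_nonempty(df: pd.DataFrame) -> pd.DataFrame:
    """Keep rows where both (hypothetical) id columns are non-empty strings."""
    mask = (df["uhslc_id"].str.len() > 0) & (df["dart_id"].str.len() > 0)
    return df.loc[mask]

df = pd.DataFrame({"uhslc_id": ["123", "", "77"],
                   "dart_id": ["a", "b", ""]})
sel = select_both_nonempty(df)
```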
add velocities and water quality measurements? Or more variables
add observation points (SSC or other sources?) to modelbuilder notebook and example script (including xyn file)
consider a clip argument instead of 4x lat/lon, then also polygon support
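A clip argument like that could dispatch on the argument type: bbox tuple vs polygon vertex list. In practice shapely would be the natural choice; the sketch below uses a plain ray-casting point-in-polygon test to stay dependency-free, and all function and argument names are hypothetical.

```python
def point_in_polygon(x, y, poly):
    """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # edge crosses the horizontal line through y; toggle when crossing right of x
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def clip_stations(stations, clip):
    """stations: list of (lon, lat); clip is either a bbox tuple
    (lon_min, lon_max, lat_min, lat_max) or a list of (lon, lat) vertices."""
    if isinstance(clip, tuple):
        lon_min, lon_max, lat_min, lat_max = clip
        return [s for s in stations
                if lon_min <= s[0] <= lon_max and lat_min <= s[1] <= lat_max]
    return [s for s in stations if point_in_polygon(s[0], s[1], clip)]
```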
add to notebook: note that cmems is public but requires credentials; be mindful of licenses. Add information about quality and a url for each dataset. Add dfmt.references() with per source a url, license, nrt/historic availability, quality and more information.
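A dfmt.references() helper could be a simple per-source metadata lookup. The layout below is a hypothetical sketch, not existing dfm_tools API; the license/coverage strings are placeholder values to be replaced with the real terms per source.

```python
# hypothetical layout: one metadata record per source, so the notebook
# can render urls/licenses/quality notes in one place
SOURCE_REFERENCES = {
    "cmems": {
        "url": "https://marine.copernicus.eu",
        "license": "free, registration required",  # placeholder wording
        "coverage": "nrt + multiyear",
    },
    "uhslc": {
        "url": "https://uhslc.soest.hawaii.edu",
        "license": "public",  # placeholder wording
        "coverage": "fast delivery + research quality",
    },
}

def references(source=None):
    """Return metadata for one source, or for all sources when source is None."""
    if source is None:
        return SOURCE_REFERENCES
    return SOURCE_REFERENCES[source]
```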
maybe remove disabled ssc code, or see if still useful
ask UHSLC to include some gesla/ioc stations. Gesla has high coverage in Canada, Chile(?), Japan and Australia. IOC has high coverage in Chile(?) and India. Probably more countries. An overview of gesla data providers is available. Also check this for cmems in europe, although coverage has already increased drastically.
NOAA, NHS (Norway) and Marine Institute (Ireland) have data via API. CMEMS probably covers most of it.
align the country/country_code column/attribute? Sometimes it is a name, sometimes a 3-letter/2-letter code and sometimes a numeric code
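Aligning those could mean normalizing everything to ISO 3166-1 alpha-2. A minimal sketch with an assumption-driven mapping (only one country shown; a full implementation would use a complete ISO table or a library such as pycountry):

```python
# maps lowercase names, alpha-3 codes, alpha-2 codes and ISO numeric codes
# to alpha-2; only the Netherlands is filled in as an example
COUNTRY_TO_ALPHA2 = {
    "netherlands": "NL",
    "nld": "NL",
    "528": "NL",   # ISO 3166-1 numeric code for the Netherlands
    "nl": "NL",
}

def normalize_country(value) -> str:
    """Map a country name, alpha-3, alpha-2 or numeric code to alpha-2;
    unknown values are passed through uppercased."""
    key = str(value).strip().lower()
    return COUNTRY_TO_ALPHA2.get(key, key.upper())
```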
add option to subset "all" sources at once. It might already be possible to pass a catalog dataframe of mixed sources, which would simplify the user code a bit. We would need time-subsetting for rwsddl in this case, and also suppress the progress bar for rwsddl.
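The mixed-source idea could be sketched as: concatenate the per-source catalogs (each carrying a `source` column, a hypothetical layout) and spatially subset them in one pass. Column names `lon`/`lat` are assumptions, not the actual dfm_tools catalog schema.

```python
import pandas as pd

def ssh_catalog_subset_all(catalogs, lon_min, lon_max, lat_min, lat_max):
    """Sketch: concatenate per-source catalogs and subset them in one go."""
    cat = pd.concat(catalogs, ignore_index=True)
    in_box = (cat["lon"].between(lon_min, lon_max)
              & cat["lat"].between(lat_min, lat_max))
    return cat.loc[in_box]

cat_cmems = pd.DataFrame({"source": ["cmems", "cmems"],
                          "lon": [3.0, 30.0], "lat": [52.0, 10.0]})
cat_ioc = pd.DataFrame({"source": ["ioc"], "lon": [4.0], "lat": [51.5]})
subset = ssh_catalog_subset_all([cat_cmems, cat_ioc], 0, 10, 50, 55)
```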
Sources and connections:
CMEMS: script from dec2023 subsets and downloads directly from FTP (no temporal subsetting possible).
CMEMS multiyear: cmems_obs-ins_glo_phy-ssh_my_na_PT1H is probably the best source. It contains validated hourly data with 6-18 months delay. Not yet subsettable via API
CMEMS multiyear irregular: hourly is nice, but maybe cmems_obs-ins_glo_phy-ssh_my_na_irr is better for model surge validation (if it is also validated)? However, this dataset contains multiple time frequencies for French stations
GESLA3: gesla metadata and data have no direct download link, so retrieval cannot be automated. Otherwise, we could consider downloading to cachedir and unzipping. The dataset is also static and a collection, so consider retrieving data from the original data providers instead. Currently a local p-drive link.
EMODNET: script from feb2023 can download from ERDDAP; how to quickly subset stations? Data is collected, so it is not a unique dataset: "For the Sea Level we are getting the data from CMEMS, IOC, TAD, UHSLC and PSMSL". Not necessary to add.
TAD is the only source in EMODNET that was not already included. The table seems not easy to read with python; is a json format available? The data is available via json. There is an overview of providers. Could add coverage in India and potentially elsewhere. All stations in India were red/offline in march 2024, so first check if there is added value. Is historic data also available? Or would this be collected in EMODNET?
PSMSL: scripts on gtsm repos, status unknown. GLOSS delayed mode redirects to BODC and PSMSL high freq but much of that is UHSLC data. Potentially add monthly/yearly means and vertical reference data.
Also tidal water level from components:
compatibility fix for iho.nc
hatyan predict tide from ticon-3 in bes project: p:\11209231-003-bes-modellering\hydrodynamica\preprocessing\modelbuilder\modelbuilder_parts_for_waterleveldata_and_xynFile_v2.py