Publish Hazard Datasets calculated by ZAMG as Open Data #9
For the Data Management Plan it is at the moment only relevant to consider how the datasets can be made publicly available for re-use by other interested parties (this is also a dissemination issue). Here we concentrate first on releasing the original datasets produced by ZAMG as open data, and address derived datasets (those considering the local effects) in a separate issue. Since we are talking about 3325 datasets, the publication process (Example) must be automated:
Both Zenodo and CKAN offer APIs, so we can develop some simple scripts that automate this process. Theoretically it would also be possible to configure CKAN to automatically harvest the metadata from Zenodo. Questions to @clarity-h2020/science-support-team
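As a rough illustration of what such a script could look like, here is a minimal, untested sketch against the Zenodo deposit REST API (create a deposition, upload the NetCDF file, attach metadata, publish). The `publish_to_zenodo` helper name, the token handling and all metadata values are placeholders of mine, not taken from this issue:

```python
"""Minimal, untested sketch of automating dataset publication via the
Zenodo deposit REST API. Helper name, token handling and metadata
values are placeholders."""
from pathlib import Path

import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "..."  # personal access token; in practice read from an env var


def publish_to_zenodo(netcdf_path: str, title: str, description: str) -> str:
    params = {"access_token": TOKEN}

    # 1. Create an empty deposition.
    r = requests.post(ZENODO_API, params=params, json={})
    r.raise_for_status()
    deposition = r.json()

    # 2. Upload the NetCDF file into the deposition's file bucket.
    bucket_url = deposition["links"]["bucket"]
    with open(netcdf_path, "rb") as fp:
        r = requests.put(f"{bucket_url}/{Path(netcdf_path).name}",
                         data=fp, params=params)
    r.raise_for_status()

    # 3. Attach the descriptive metadata.
    metadata = {"metadata": {
        "title": title,
        "upload_type": "dataset",
        "description": description,
        "creators": [{"name": "ZAMG"}],
    }}
    r = requests.put(f"{ZENODO_API}/{deposition['id']}",
                     params=params, json=metadata)
    r.raise_for_status()

    # 4. Publish: the record becomes public and gets a DOI.
    r = requests.post(f"{ZENODO_API}/{deposition['id']}/actions/publish",
                      params=params)
    r.raise_for_status()
    return r.json()["doi"]
```

On the CKAN side (data.ccca.ac.at), the analogous call would be the `package_create` action of the CKAN action API. Alternatively, Zenodo exposes an OAI-PMH endpoint (https://zenodo.org/oai2d) that a CKAN harvester could be pointed at, which would cover the harvesting option mentioned above.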
The original data sets produced by ZAMG will be stored on a server of the CCCA and, after all licenses have been checked, will be released on data.ccca.ac.at.
In today's telco, a decision was made that Louis (Meteogrid) will draft a letter requesting permission from the data owners to use their data. This is needed mainly for the EURO-CORDEX data, as far as I understand.
O.K. In practical terms that means that we
In the Data Management Plan we can then directly refer to data.ccca.ac.at. Perfect. Where and how to publish the derived hazard datasets (+ local effects), in terms of Data Management, not CSIS WMS/WCS publication, is another story and has to be discussed with @clarity-h2020/data-processing-team
OK, so the implications are
Any progress to be reported here?
The data sets are not yet published on CCCA. Regarding the license issue: according to the following list, Lena has directed us to:
This isn't valid any more, right?
All datasets will be made available on Zenodo?
Yes, this is correct. When that statement was originally made, I was not aware of Zenodo. When I then compared the two upload routes, I found it much easier to upload the data via Zenodo than via CCCA.
All datasets are now available on Zenodo, right? So we can close this issue.
Yes.
Thanks, Robert!
According to the status presentation, ZAMG calculates datasets for
= 3325 unique datasets. By the way, why 3325 datasets and not 4800 (25 × 16 × 4 × 3)?
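For what it's worth: 3325 = 25 × 133, while 16 × 4 × 3 = 192, so it looks like only 133 of the 192 possible model/scenario/period combinations were actually computed per index; presumably not every model run is available for every scenario and time period.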
An example of a Heatwave Duration hazard NetCDF file can be found in this issue.
Note: this data has to be "rasterised" to a GeoTIFF on the 500 m grid (example for the same dataset here), and then the local effects are taken into account to generate the derived datasets; a sketch of the rasterisation step follows below. The complete process chain will eventually be documented here. So in the end, we would possibly calculate 3 × 3325 datasets that have to be published as open data according to the H2020 Open Access Guidelines. However, it is up to the @clarity-h2020/data-processing-team and @clarity-h2020/mathematical-models-implementation-team to discuss and decide whether we really need that amount of derived datasets. But this is better addressed in this issue and in other HC, HC-LE and EE related questions I'm going to ask soon.
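For the rasterisation step, a minimal, untested sketch using GDAL's Python bindings could look as follows; the variable name, file names and target projection are assumptions on my part, not taken from this issue:

```python
# Untested sketch of the NetCDF -> GeoTIFF step with GDAL. The
# subdataset/variable name, file names and the EPSG code are
# illustrative assumptions.
from osgeo import gdal

gdal.UseExceptions()

# NetCDF variables are addressed as GDAL subdatasets: NETCDF:<file>:<variable>.
src = "NETCDF:heat_wave_duration.nc:heat_wave_duration"

# Warp onto the 500 m target grid; ETRS89-LAEA (EPSG:3035) is a common
# choice for pan-European grids and is assumed here.
gdal.Warp(
    "heat_wave_duration.tif",
    src,
    dstSRS="EPSG:3035",
    xRes=500,
    yRes=500,
    resampleAlg="bilinear",
)
```

The same step can also be scripted on the command line with gdalwarp, which may be easier to batch over 3325 files.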