Especially for fields with vast amounts of data and/or non-file-based representations, I think there is a requirement to reference globally unique IDs in an external repository instead of importing the data into Dataverse. It would be up to the owner of the Dataverse to determine which external repositories are eligible to fulfil the contract required by Dataverse to maintain data integrity.
An example is @openmicroscopy OMERO, which is for managing microscopy data. This data is very large and very complex and requires advanced tools to manage and visualise. It would be great to take advantage of those external systems to manage that data, while still publishing it with versioning through Dataverse.
In addition, it would be great if it were possible to build modules to allow these external tools to be used through APIs from Dataverse. For example, in the OMERO case I can imagine that a microscopy metadata query could use the OMERO API to run a query against the OMERO server which could then return some results to Dataverse.
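As a minimal sketch of what such a pass-through module might look like: OMERO.web exposes a JSON API, and a Dataverse-side helper could simply construct a query URL against it and render the returned JSON. The base URL, endpoint path, and filter parameters below are assumptions for illustration, not an existing Dataverse feature.

```python
from urllib.parse import urlencode, urljoin

# Hypothetical external OMERO server registered with this Dataverse
OMERO_BASE = "https://omero.example.org/"

def build_omero_query_url(object_type: str, **filters) -> str:
    """Build a query URL against an OMERO JSON API endpoint.

    The '/api/v0/m/<type>/' path follows the OMERO.web JSON API
    convention; the filter names are whatever that API accepts.
    """
    path = f"api/v0/m/{object_type}/"
    query = urlencode(filters)
    return urljoin(OMERO_BASE, path) + (f"?{query}" if query else "")

# A Dataverse module could issue this request and display the results:
url = build_omero_query_url("images", owner=52, limit=10)
```

The key design point is that Dataverse never stores the images themselves; it only brokers the query and presents the results alongside the dataset metadata.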
The other obvious use case I could think of would be for source code. Instead of uploading a zip file of source code, a reference to a specific GitHub commit/tag could be specified. As I said before, this relies on the owner of the Dataverse trusting these external sources.
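To make the source-code case concrete, here is a hedged sketch of what an "external file reference" record might look like if Dataverse stored a pinned GitHub commit instead of a zip upload. The class and field names are illustrative only, not part of any existing Dataverse schema; the integrity contract here is that a full commit SHA identifies an immutable snapshot.

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class ExternalCodeReference:
    """Hypothetical record pointing at an external, immutable code snapshot."""
    owner: str
    repo: str
    commit: str  # full 40-character SHA, never a branch name

    def __post_init__(self):
        # A branch or tag can move; only a full SHA pins the content.
        if not re.fullmatch(r"[0-9a-f]{40}", self.commit):
            raise ValueError("pin to a full commit SHA, not a branch or tag")

    @property
    def archive_url(self) -> str:
        # GitHub serves a stable tarball for any commit SHA
        return (f"https://github.com/{self.owner}/{self.repo}"
                f"/archive/{self.commit}.tar.gz")

ref = ExternalCodeReference(
    "octocat", "Hello-World",
    "7fd1a60b01f91b314f59955a4e4d4e80d8edf11d",
)
```

Rejecting anything but a full SHA is one way the Dataverse owner could enforce the "trusted external source" contract mentioned above.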
Thanks