API endpoint as new object type in Dataverse #2245
Comments
@4tikhonov as we discussed, perhaps we could think about defining a different kind of dataset, perhaps similar to a "harvested" dataset. It sounds like you're going to do some more thinking and prototyping. Also, perhaps the functionality you're looking for could be built into a separate "microservice" that runs alongside Dataverse, rather than being part of the Dataverse code itself.
Well, I've just attended the Breakout Session on Metadata Curation and have a beautiful idea to extend the Metadata section with an extra field called API. What is the best way to create new metadata fields in Dataverse?
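For anyone landing on this later: custom metadata blocks are typically defined in a TSV file and loaded through the admin API (see the Metadata Customization guide). A minimal sketch in Python, assuming a local installation at `localhost:8080` and a hypothetical `api-block.tsv`:

```python
# Sketch: loading a custom metadata block TSV into Dataverse via the admin API.
# The server URL and TSV file name are placeholders for this example.
import requests

DATAVERSE_URL = "http://localhost:8080"  # assumed local installation
TSV_PATH = "api-block.tsv"               # hypothetical custom block definition

with open(TSV_PATH, "rb") as tsv:
    response = requests.post(
        f"{DATAVERSE_URL}/api/admin/datasetfield/load",
        data=tsv,
        headers={"Content-type": "text/tab-separated-values"},
    )

# The endpoint reports which blocks and fields were added or updated.
print(response.status_code, response.text)
```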
@4tikhonov would all the recent work on support for Swift and rsync help? What is your latest thinking on this?
@4tikhonov are you still interested in this? I sure hope that by now (three years later) you got an answer about how to create a custom metadata block! It was nice seeing you again last month at the community meeting!
Sure, I think we're moving fast in this direction now and are able to create any metadata fields. And a lot of new opportunities are coming with Docker.
@4tikhonov I'm glad you're able to create custom metadata blocks. See also #3168. This issue has been open for over three years and there are about 800 open issues. Should we consider closing it if there is no active development? We could open it again and add it to the new "Community Dev" column at https://waffle.io/IQSS/dataverse if you or someone else starts working on a pull request.
Sure, please close it.
@4tikhonov thanks! Now we're at 799. 😄
Problem statement
During the community meeting there was a big discussion about how to describe, store, and process Big Data (petabytes of data) in Dataverse.
Suggestion
The Dataverse data model should be extended with a new object called "API" that keeps information about all API endpoints and allows adding a technical specification of how to use each specific API.
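One way to picture the suggestion: the "API" object would carry a handful of fields describing the endpoint and how to query it. A hypothetical sketch (none of these field names exist in Dataverse today; they are purely illustrative):

```python
# Hypothetical shape of the proposed "API" object. All field names and
# values below are illustrative, not an existing Dataverse schema.
api_object = {
    "apiEndpointUrl": "https://bigdata.example.org/api/v1/query",   # where the data is served
    "apiProtocol": "REST/JSON",                                      # e.g. REST, SPARQL, OAI-PMH
    "apiDocumentationUrl": "https://bigdata.example.org/docs",       # technical specification / user guide
    "apiExampleQuery": "GET /api/v1/query?collection=census&year=1900",
    "servingSystem": "Hadoop/Spark/MongoDB",                         # external system answering the queries
}
```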
Solution
Prototype
I used our project's API endpoint to create a file containing a link to the API, and uploaded a user guide on how to query it, with some examples. This adds a metadata layer and will allow people to find our API-served datasets in Dataverse, while the API service itself is provided by another system (Hadoop, Spark, MongoDB, etc.). A rough sketch of this step follows the link below.
http://dv.sandbox.socialhistoryservices.org/dataset.xhtml?persistentId=hdl%3A10622%2FDXZAZM&version=DRAFT
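The prototype step above amounts to attaching a file that points at the external API (plus a usage guide) to an ordinary dataset via the native API. A sketch in Python; the server and handle come from the link above, while the token and file name are placeholders:

```python
# Sketch: attach an API-description file to an existing dataset through
# the Dataverse native API. API_TOKEN and the file name are placeholders.
import json
import requests

SERVER = "http://dv.sandbox.socialhistoryservices.org"  # prototype installation above
PERSISTENT_ID = "hdl:10622/DXZAZM"                      # dataset handle from the prototype
API_TOKEN = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"      # placeholder API token

metadata = {
    "description": "Link to the project API endpoint and a guide on how to query it",
    "categories": ["Documentation"],
}

with open("api-usage-guide.pdf", "rb") as f:            # hypothetical user guide file
    r = requests.post(
        f"{SERVER}/api/datasets/:persistentId/add",
        params={"persistentId": PERSISTENT_ID},
        headers={"X-Dataverse-key": API_TOKEN},
        files={"file": f},
        data={"jsonData": json.dumps(metadata)},
    )

print(r.status_code, r.json())
```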
The concept
We can let different Big Data systems communicate with each other. The only problem is harmonization, but that should be handled on the research side.
Is it difficult to implement a feature like that?