It would be good to have support for creating APIs from SPARQL queries stored in nanopublications.
Perhaps the easiest way to do this is via the specUrl option, using the nanopub URI as the query locator. If dereferencing the URI returns a plain grlc-decorated SPARQL query, that should be enough to build the spec.
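For reference, this is roughly what a grlc-decorated SPARQL query looks like; if dereferencing a nanopub URI returned a plain document of this shape, the existing spec-building machinery should be able to consume it. The endpoint and query below are placeholders, not taken from an actual nanopub:

```sparql
#+ summary: List resources of a given type
#+ endpoint: https://example.org/sparql
#+ tags:
#+   - nanopub-demo

# ?_type_iri is exposed as an API parameter by grlc's naming convention
SELECT ?resource WHERE {
  ?resource a ?_type_iri .
}
LIMIT 100
```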
Another option would be to have people write their own template describing how the nanopubs compose the API, so they could build their APIs just by publishing nanopublications. For this we would need new grlc code, but I think it would be rather minimal, possibly in the fileLoader.
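As a very rough sketch of that second option (hypothetical code, not part of grlc; the class name, the template format, and the plain-text parsing are all assumptions for illustration), a loader could dereference a template nanopub that lists query nanopubs and hand each decorated query to grlc's spec builder:

```python
# Hypothetical sketch, not existing grlc code: a loader that dereferences
# nanopub URIs listed in a user-published "API template" nanopub and returns
# the decorated SPARQL texts for grlc's existing spec-building machinery.
import requests

class NanopubLoader:
    def __init__(self, template_uri):
        # The template nanopub is assumed to list the URIs of query nanopubs.
        self.template_uri = template_uri

    def fetch_query_uris(self):
        # Placeholder: a real implementation would parse the RDF of the
        # template nanopub; here we assume one URI per line for brevity.
        resp = requests.get(self.template_uri, headers={"Accept": "text/plain"})
        resp.raise_for_status()
        return [line.strip() for line in resp.text.splitlines() if line.strip()]

    def get_query_texts(self):
        # Each query nanopub is assumed to dereference to a plain,
        # grlc-decorated SPARQL query (see the example above).
        texts = []
        for uri in self.fetch_query_uris():
            resp = requests.get(uri)
            resp.raise_for_status()
            texts.append(resp.text)
        return texts
```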
But ideally grlc could check several places in the nanopub network, where the nanopub is available at multiple locations, so that the downtime of a single server would not affect the retrieval of the content. The above nanopub is available at these places, for example:
Thanks for summarizing our discussion here, and sorry for taking so long to react.
I think doing it via specUrl is indeed the best first step. I will build this into the next generation of nanopub services that I am currently working on, so they can produce the spec as grlc expects it, and no change is needed on the grlc side for the time being.
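Concretely, such a service could serve a spec in roughly the YAML shape grlc already accepts via specUrl, with the query entries pointing at nanopub URIs. All names and URIs below are placeholders:

```yaml
title: Example nanopub-backed API
description: Queries published as nanopublications
contact:
  name: Example Maintainer
  url: https://example.org
licence: https://creativecommons.org/licenses/by/4.0/
queries:
  - https://w3id.org/np/RAexampleArtifactCode1
  - https://w3id.org/np/RAexampleArtifactCode2
```

grlc could then be pointed at that file with something like http://grlc.io/api-url?specUrl=&lt;location-of-that-yaml&gt;.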
Later we could investigate how grlc could work with nanopubs directly and exploit the decentralization/redundancy.
So ideally grlc could check several servers. This could be done via a hard-coded list of servers, or later by querying the network itself for an up-to-date list. It could also be a URL argument, so the grlc request would include "?nanopub-servers=https://np.petapico.org/+https://np.knowledgepixels.com/+https://server.np.trustyuri.net/".