This repository contains the scripts to generate the IDPcentral Knowledge Graph based on data harvested from DisProt, MobiDB, and ProteinEnsemble (PED).
The starting point for this repository was the set of files developed during the ELIXIR-sponsored BioHackathon-Europe 2020; that work is reported in BioHackrXiv preprint v3jct. This repository updates the scripts for the revised deployments and scales the work to the entire content of the three sites.
Authors:
- Alasdair J G Gray (@AlasdairGray)
- Petros Papadopoulos (@petrospaps)
- Imran Asif (@ImranAsif48)
- Ivan Mičetić (@ivanmicetic)
- Andras Hatos
Citing IDP-KG: If you use IDP-KG in your work, please cite the SWAT4HCLS paper:
```bibtex
@inproceedings{GrayEtal:bioschemas-idpkg:swat4hcls2022,
  author    = {Gray, Alasdair J. G. and Papadopoulos, Petros and Asif, Imran and Micetic, Ivan and Hatos, Andr{\'{a}}s},
  title     = {Creating and Exploiting the Intrinsically Disordered Protein Knowledge Graph {(IDP-KG)}},
  booktitle = {13th International Conference on Semantic Web Applications and Tools for Health Care and Life Sciences, {SWAT4HCLS} 2022, Virtual Event, Leiden, The Netherlands, January 10th to 14th, 2022},
  series    = {{CEUR} Workshop Proceedings},
  volume    = {3127},
  pages     = {1--10},
  publisher = {CEUR-WS.org},
  year      = {2022},
  url       = {http://ceur-ws.org/Vol-3127/paper-1.pdf}
}
```
Terminology:
- The term 'source' distinguishes the page that was scraped.
- The term 'dataset' identifies the collection of data to which a particular record page (e.g. disprot:DP000003) belongs.
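As a concrete illustration of the source/dataset distinction, here is a minimal stand-in in plain Python. The real knowledge graph is RDF built with Bioschemas/schema.org terms; the URIs and property choices below are illustrative assumptions, not the repository's actual modelling.

```python
# Minimal sketch of the source/dataset distinction using plain
# (subject, predicate, object) tuples. The real knowledge graph is RDF
# built with Bioschemas terms; these URIs are illustrative only.
SCHEMA = "https://schema.org/"

def record_triples(source_url, record_id, dataset_url):
    """Triples contributed by one scraped record page (the 'source')."""
    return {
        (source_url, SCHEMA + "identifier", record_id),
        # Links the record to the collection (the 'dataset') it belongs to.
        (source_url, SCHEMA + "includedInDataset", dataset_url),
    }

kg = set()
kg |= record_triples("https://disprot.org/DP000003", "DP000003",
                     "https://disprot.org/#dataset")
```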
The repository contains two Jupyter notebooks in the notebooks directory:
- ETLProcess converts the harvested data into a semantic knowledge graph represented in RDF using the Bioschemas terms;
- AnalysisQueries runs some queries over the resulting knowledge graph.
Full instructions for running the notebooks are contained within the notebooks. In both notebooks, all cells should be run and then the GUI used to generate the desired outputs.
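To give a flavour of the analysis, an aggregate such as "records per dataset" can be sketched in plain Python. The notebook itself issues SPARQL queries over the RDF graph; the triples and identifiers below are illustrative only.

```python
from collections import Counter

# Toy stand-in for the knowledge graph: the AnalysisQueries notebook runs
# SPARQL over the RDF graph, but the same "count records per dataset"
# aggregate can be shown with plain tuples. IDs and URIs are illustrative.
INCLUDED_IN = "https://schema.org/includedInDataset"
triples = [
    ("disprot:DP000003", INCLUDED_IN, "https://disprot.org/#dataset"),
    ("disprot:DP000004", INCLUDED_IN, "https://disprot.org/#dataset"),
    ("ped:PED00001", INCLUDED_IN, "https://proteinensemble.org/#dataset"),
]

# Analogue of SPARQL's SELECT ?dataset (COUNT(?record)) ... GROUP BY ?dataset
records_per_dataset = Counter(o for s, p, o in triples if p == INCLUDED_IN)
print(records_per_dataset.most_common())
```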
To install the dependencies that the notebooks rely on, run the following from the command line (or a Jupyter terminal):

```shell
pip install -r requirements.txt
```
The notebook for exploring the generated knowledge graph can be run in the cloud using the mybinder service; click on the logo below to get started.
A Linked Data REST API is provided using the grlc service.
- Swagger docs: https://grlc.io/api-url?specUrl=https://raw.githubusercontent.com/AlasdairGray/IDP-KG/main/idpcentral-api.yml#/
- Configuration file: idpcentral-api.yml