ProtTrans



ProtTrans provides state-of-the-art pre-trained models for proteins. ProtTrans was trained on thousands of GPUs from Summit and hundreds of Google TPUs using various Transformer models.

Have a look at our paper ProtTrans: Cracking the Language of Life's Code Through Self-Supervised Deep Learning and High Performance Computing for more information about our work.


ProtTrans Attention Visualization


This repository will be updated regularly with new pre-trained models for proteins, as part of supporting the bioinformatics community in general, and Covid-19 research specifically, through our Accelerate SARS-CoV-2 research with transfer learning using pre-trained language modeling models project.

Table of Contents

  • ⌛️  Models Availability
  • 🚀  Usage
  • 📊  Expected Results
  • ❤️  Community and Contributions
  • 📫  Have a question?
  • 🤝  Found a bug?
  • ✅  Requirements
  • 🤵  Team
  • 💰  Sponsors
  • 📘  License
  • ✏️  Citation

⌛️  Models Availability

| Model | Availability |
| --- | --- |
| ProtBert-BFD | coming soon |
| ProtBert | Public |
| ProtAlbert | Public |
| ProtXLNet | Public |
| ProtElectra-Generator | coming soon |
| ProtElectra-Discriminator | coming soon |
| ProtTXL | coming soon |
| ProtTXL-BFD | coming soon |
| ProtT5 | Training |

🚀  Usage

How to use ProtTrans:

  • 🧬  Feature Extraction:
    Please check the Embedding section; a minimal sketch is also shown below. More information coming soon.
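A minimal sketch of per-residue embedding extraction, assuming the public ProtBert checkpoint is available as Rostlab/prot_bert on the Hugging Face model hub and using the standard Transformers API; the notebooks in the Embedding section remain the reference implementation.

```python
# Minimal sketch: per-residue embeddings with the public ProtBert checkpoint.
# The model name "Rostlab/prot_bert" is an assumption based on the public
# release; adapt it to the checkpoint you actually use.
import re
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert")
model.eval()

# ProtTrans tokenizers expect space-separated amino acids;
# rare residues (U, Z, O, B) are mapped to X.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
sequence = " ".join(re.sub(r"[UZOB]", "X", sequence))

inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, length + 2, hidden)

# Drop the [CLS] and [SEP] tokens to keep one vector per residue.
per_residue_embeddings = hidden_states[0, 1:-1]
print(per_residue_embeddings.shape)
```

Mean-pooling the per-residue vectors is a common way to obtain a single fixed-size embedding per protein for downstream tasks.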

📊  Expected Results

  • 🧬  Secondary Structure Prediction (Q3):
| Model | CASP12 | TS115 | CB513 |
| --- | --- | --- | --- |
| ProtBert-BFD | 76 | 84 | 83 |
| ProtBert | 75 | 83 | 81 |
| ProtAlbert | 74 | 82 | 79 |
| ProtXLNet | 73 | 81 | 78 |
| ProtElectra-Generator | 73 | 78 | 76 |
| ProtElectra-Discriminator | 74 | 81 | 79 |
| ProtTXL | 71 | 76 | 74 |
| ProtTXL-BFD | 72 | 75 | 77 |

  • 🧬  Secondary Structure Prediction (Q8):
| Model | CASP12 | TS115 | CB513 |
| --- | --- | --- | --- |
| ProtBert-BFD | 65 | 73 | 70 |
| ProtBert | 63 | 72 | 66 |
| ProtAlbert | 62 | 70 | 65 |
| ProtXLNet | 62 | 69 | 63 |
| ProtElectra-Generator | coming soon | coming soon | coming soon |
| ProtElectra-Discriminator | coming soon | coming soon | coming soon |
| ProtTXL | 59 | 64 | 59 |
| ProtTXL-BFD | 60 | 65 | 60 |

  • 🧬  Membrane-bound vs Water-soluble (Q2):
| Model | DeepLoc |
| --- | --- |
| ProtBert-BFD | 89 |
| ProtBert | 89 |
| ProtAlbert | 88 |
| ProtXLNet | 87 |
| ProtElectra-Generator | coming soon |
| ProtElectra-Discriminator | coming soon |
| ProtTXL | 85 |
| ProtTXL-BFD | 86 |

  • 🧬  Subcellular Localization (Q10):
| Model | DeepLoc |
| --- | --- |
| ProtBert-BFD | 74 |
| ProtBert | 74 |
| ProtAlbert | 74 |
| ProtXLNet | 68 |
| ProtElectra-Generator | coming soon |
| ProtElectra-Discriminator | coming soon |
| ProtTXL | 66 |
| ProtTXL-BFD | 65 |

❤️  Community and Contributions

The ProtTrans project is an open-source project supported by various partner companies and research institutions. We are committed to sharing all our pre-trained models and knowledge. We are more than happy if you can help us by sharing new pre-trained models, fixing bugs, proposing new features, improving our documentation, spreading the word, or supporting our project.

📫  Have a question?

We are happy to hear your questions on the ProtTrans issues page! If you have a private question or want to cooperate with us, you can always reach out to us directly via our RostLab email.

🤝  Found a bug?

Feel free to file a new issue with a respective title and description in the ProtTrans repository. If you have already found a solution to your problem, we would love to review your pull request!

✅  Requirements

For protein feature extraction or fine-tuning our pre-trained models, PyTorch and the Transformers library from Hugging Face are needed. For model visualization, you need to install the BertViz library. A hedged setup sketch follows below.
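As a rough sketch, the stack can be installed from PyPI (assumed package names: torch, transformers, bertviz) and wired together for attention visualization as below; the checkpoint name and the head_view call are assumptions for illustration, not the project's canonical visualization notebook.

```python
# Rough sketch of the required stack (assumed PyPI package names):
#   pip install torch transformers bertviz
# Visualizing ProtBert self-attention with BertViz inside a Jupyter notebook.
from transformers import BertModel, BertTokenizer
from bertviz import head_view

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert", output_attentions=True)
model.eval()

sequence = "M K T A Y I A K Q R"  # space-separated amino acids
inputs = tokenizer(sequence, return_tensors="pt")
outputs = model(**inputs)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
head_view(outputs.attentions, tokens)  # renders an interactive attention-head view
```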

🤵  Team

  • Technical University of Munich:
Ahmed Elnaggar, Michael Heinzinger, Christian Dallago, Ghalia Rihawi, Burkhard Rost
  • Med AI Technology:
Yu Wang
  • Google:
Llion Jones
  • Nvidia:
Tom Gibbs, Tamas Feher, Christoph Angerer
  • ORNL:
Debsindhu Bhowmik

💰  Sponsors

Nvidia, Google, ORNL, Software Campus

📘  License

The ProtTrans pretrained models are released under the terms of the MIT License.

✏️  Citation

If you use this code or our pretrained models for your publication, please cite the original paper:

@article {Elnaggar2020.07.12.199554,
	author = {Elnaggar, Ahmed and Heinzinger, Michael and Dallago, Christian and Rihawi, Ghalia and Wang, Yu and Jones, Llion and Gibbs, Tom and Feher, Tamas and Angerer, Christoph and Bhowmik, Debsindhu and Rost, Burkhard},
	title = {ProtTrans: Towards Cracking the Language of Life{\textquoteright}s Code Through Self-Supervised Deep Learning and High Performance Computing},
	elocation-id = {2020.07.12.199554},
	year = {2020},
	doi = {10.1101/2020.07.12.199554},
	publisher = {Cold Spring Harbor Laboratory},
	abstract = {Motivation: Natural Language Processing (NLP) continues improving substantially through auto-regressive (AR) and auto-encoding (AE) Language Models (LMs). These LMs require expensive computing resources for self-supervised or un-supervised learning from huge unlabelled text corpora. The information learned is transferred through so-called embeddings to downstream prediction tasks. Computational biology and bioinformatics provide vast gold-mines of structured and sequentially ordered text data leading to extraordinarily successful protein sequence LMs that promise new frontiers for generative and predictive tasks at low inference cost. As recent NLP advances link corpus size to model size and accuracy, we addressed two questions: (1) To which extent can High-Performance Computing (HPC) up-scale protein LMs to larger databases and larger models? (2) To which extent can LMs extract features from single proteins to get closer to the performance of methods using evolutionary information? Methodology: Here, we trained two auto-regressive language models (Transformer-XL and XLNet) and two auto-encoder models (BERT and Albert) on 80 billion amino acids from 200 million protein sequences (UniRef100) and one language model (Transformer-XL) on 393 billion amino acids from 2.1 billion protein sequences taken from the Big Fat Database (BFD), today{\textquoteright}s largest set of protein sequences (corresponding to 22- and 112-times, respectively of the entire English Wikipedia). The LMs were trained on the Summit supercomputer, using 936 nodes with 6 GPUs each (in total 5616 GPUs) and one TPU Pod, using V3-512 cores. Results: We validated the feasibility of training big LMs on proteins and the advantage of up-scaling LMs to larger models supported by more data. The latter was assessed by predicting secondary structure in three- and eight-states (Q3=75-83, Q8=63-72), localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabelled data (only protein sequences) captured important biophysical properties of the protein alphabet, namely the amino acids, and their well orchestrated interplay in governing the shape of proteins. In the analogy of NLP, this implied having learned some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC slightly reduced the gap between models trained on evolutionary information and LMs. Additionally, our results highlighted the importance of bi-directionality when processing proteins as the uni-directional TransformerXL was outperformed by its bi-directional counterparts;Competing Interest StatementThe authors have declared no competing interest.},
	URL = {https://www.biorxiv.org/content/early/2020/07/12/2020.07.12.199554},
	eprint = {https://www.biorxiv.org/content/early/2020/07/12/2020.07.12.199554.full.pdf},
	journal = {bioRxiv}
}
