
Commit 32d2b83

Update README.md
1 parent 10eb9b8 commit 32d2b83

File tree

1 file changed: +6 -4 lines changed


README.md

@@ -1,9 +1,9 @@
 # Colab Implement: ConVIRT model
 ### Contrastive VIsual Representation Learning from Text
 
-The repo is a Colab implementation of the architecture descibed in the ConVIRT paper: [*Contrastive Learning of Medical Visual Representations from Paired Images and Text*](https://arxiv.org/abs/2010.00747). The authors of paper are Yuhao Zhang, Hang Jiang, Yasuhide Miura, Christopher D. Manning, Curtis P. Langlotz.
+Deep neural networks learn from large amounts of data to obtain the parameters needed to perform a specific task. In practice, however, we often face a problem: **an insufficient amount of labeled data**. If your data contains pairs of images and text, you can address this problem with Contrastive Learning.
 
-Deep neural networks learn from a large amount of data to obtain the correct parameters to perform a specific task. However, in practice, we often encounter a problem: **insufficient amount of labeled data**. However, if your data contains pairs of images and text, you can solve the problem with Contrastive Learning.
+Contrastive learning is a self-supervised learning method: it requires no specialized labels, learning instead from the unlabeled data itself. It aims to learn an encoder that makes the encodings of similar data as close as possible and the encodings of dissimilar data as different as possible. Contrastive learning is typically based on comparisons between two images, but with paired image and text data it can also be applied between images and text.
 
 Based on this repository, we can implement various paired image-text Contrastive Learning tasks on [Google Colab](https://colab.research.google.com/notebooks/intro.ipynb), which enables you to train effective pre-training models for transfer learning when labeled data is scarce. With this pre-trained model, you can train with less labeled data and still obtain a well-performing model.
 

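The image-text contrastive objective described above can be sketched as follows. This is a minimal NumPy illustration of an InfoNCE-style loss, not the repo's actual code; the function and variable names are assumptions:

```python
import numpy as np

def info_nce(img, txt, temperature=0.1):
    """Cross-entropy of matching image i to its paired text i among all texts."""
    logits = img @ txt.T / temperature            # pairwise similarity scores
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positive pairs sit on the diagonal

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
img /= np.linalg.norm(img, axis=1, keepdims=True)  # L2-normalized image embeddings
txt = img + 0.05 * rng.normal(size=(4, 8))         # text embeddings nearly aligned with images
txt /= np.linalg.norm(txt, axis=1, keepdims=True)
loss = info_nce(img, txt)                          # small, since each pair is easy to match
```

Minimizing this loss pulls each image embedding toward its paired text embedding and pushes it away from the other texts in the batch.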
@@ -61,7 +61,7 @@ loss:
 alpha_weight: 0.75
 ```
 
-The models used (res_base_model, bert_base_model) refers to the models provided by [transformers](https://huggingface.co/transformers/).
+The models used [res_base_model, bert_base_model] refer to the models provided by [transformers](https://huggingface.co/transformers/).
 
 ### 3. Training
 

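The `alpha_weight` field sets the relative weight of the two loss directions. A minimal sketch of how such a weight is typically applied, assuming (as in the ConVIRT paper) a convex combination of the image-to-text and text-to-image losses; the function name is illustrative:

```python
alpha_weight = 0.75  # value from the config above

def combined_loss(loss_img2txt, loss_txt2img, alpha=alpha_weight):
    # Convex combination: alpha weights the image-to-text direction,
    # (1 - alpha) weights the text-to-image direction.
    return alpha * loss_img2txt + (1 - alpha) * loss_txt2img

total = combined_loss(0.8, 1.2)  # 0.75*0.8 + 0.25*1.2 = 0.9
```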
@@ -76,7 +76,9 @@ At the end of training, the final model and the corresponding config.yaml will b
 
 ## Others
 
-Note: This repository was forked and modified from https://github.com/sthalles/SimCLR.
+The repository is a Colab implementation of the architecture described in the ConVIRT paper: [*Contrastive Learning of Medical Visual Representations from Paired Images and Text*](https://arxiv.org/abs/2010.00747). The authors of the paper are Yuhao Zhang, Hang Jiang, Yasuhide Miura, Christopher D. Manning, and Curtis P. Langlotz.
+
+This repository was originally modified from https://github.com/sthalles/SimCLR.
 
 References:
 - Yuhao Zhang et al. Contrastive Learning of Medical Visual Representations from Paired Images and Text. https://arxiv.org/pdf/2010.00747.pdf
