Update deepset/roberta-base-squad2 model card (huggingface#8522)
* Update README.md

* Update README.md
brandenchan authored and fabiocapsouza committed Nov 15, 2020
1 parent 5375105 commit e49cf2e
Showing 1 changed file with 16 additions and 7 deletions.
23 changes: 16 additions & 7 deletions model_cards/deepset/roberta-base-squad2/README.md
@@ -5,7 +5,7 @@ datasets:

# roberta-base for QA

-NOTE: This model has been superseded by deepset/roberta-base-squad2-v2. For an explanation of why, see [this github issue](https://github.com/deepset-ai/FARM/issues/552) from the FARM repository.
+NOTE: This is version 2 of the model. See [this github issue](https://github.com/deepset-ai/FARM/issues/552) from the FARM repository for an explanation of why we updated. If you'd like to use version 1, specify `revision="v1.0"` when loading the model in Transformers 3.5.

## Overview
**Language model:** roberta-base
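
For illustration only (not part of this diff), the revision pinning mentioned in the note above can be done with the `revision` argument of `from_pretrained`, which the note says is available from Transformers 3.5. A minimal sketch:

```python
# Minimal sketch, assuming Transformers >= 3.5 where from_pretrained accepts `revision`.
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "deepset/roberta-base-squad2"

# Default: latest weights (version 2 of the model)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Pin the original version 1 weights instead
model_v1 = AutoModelForQuestionAnswering.from_pretrained(model_name, revision="v1.0")
tokenizer_v1 = AutoTokenizer.from_pretrained(model_name, revision="v1.0")
```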
@@ -19,10 +19,10 @@ NOTE: This model has been superseded by deepset/roberta-base-squad2-v2. For an e
## Hyperparameters

```
-batch_size = 50
-n_epochs = 3
+batch_size = 96
+n_epochs = 2
base_LM_model = "roberta-base"
-max_seq_len = 384
+max_seq_len = 386
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
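
For illustration only (the card ships no training code), the schedule listed above, `LinearWarmup` with `warmup_proportion = 0.2` at `learning_rate = 3e-5`, corresponds roughly to the following PyTorch/Transformers sketch; the dataset size and step math are assumptions, not values from the card:

```python
# Rough sketch of the LR schedule listed above; not the original FARM training code.
from torch.optim import AdamW
from transformers import AutoModelForQuestionAnswering, get_linear_schedule_with_warmup

model = AutoModelForQuestionAnswering.from_pretrained("roberta-base")

n_epochs = 2
batch_size = 96
num_train_examples = 130_000                  # assumption: approximate SQuAD 2.0 train size
steps_per_epoch = num_train_examples // batch_size
total_steps = steps_per_epoch * n_epochs

optimizer = AdamW(model.parameters(), lr=3e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.2 * total_steps),  # warmup_proportion = 0.2
    num_training_steps=total_steps,           # linear decay after the warmup phase
)
```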
@@ -32,9 +32,18 @@ max_query_length=64

## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).

```
"exact": 78.49743114629833,
"f1": 81.73092721240889
"exact": 79.97136359807968
"f1": 83.00449234495325
"total": 11873
"HasAns_exact": 78.03643724696356
"HasAns_f1": 84.11139298441825
"HasAns_total": 5928
"NoAns_exact": 81.90075693860386
"NoAns_f1": 81.90075693860386
"NoAns_total": 5945
```
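
As a hedged aside, the same metric fields can also be computed with the `squad_v2` metric from the `datasets` library instead of the official eval script; the ids, answers, and offsets below are toy values, not dev-set entries:

```python
# Sketch only: squad_v2 from `datasets` reports the same exact/F1/HasAns/NoAns fields
# as the official SQuAD 2.0 eval script.
from datasets import load_metric

squad_v2_metric = load_metric("squad_v2")

predictions = [
    {"id": "toy-001", "prediction_text": "Paris", "no_answer_probability": 0.0},
    {"id": "toy-002", "prediction_text": "", "no_answer_probability": 1.0},
]
references = [
    {"id": "toy-001", "answers": {"text": ["Paris"], "answer_start": [17]}},
    {"id": "toy-002", "answers": {"text": [], "answer_start": []}},  # unanswerable question
]

print(squad_v2_metric.compute(predictions=predictions, references=references))
```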

## Usage
@@ -85,7 +94,7 @@ For doing QA at scale (i.e. many docs instead of single paragraph), you can load
```python
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
# or
-reader = TransformersReader(model="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
+reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
```
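
For single-passage QA without Haystack, a minimal Transformers `pipeline` sketch with the same checkpoint (the question/context pair is a toy example, not taken from the card):

```python
# Plain Transformers usage for one question/context pair; the Haystack readers
# above are meant for QA over many documents.
from transformers import pipeline

nlp = pipeline(
    "question-answering",
    model="deepset/roberta-base-squad2",
    tokenizer="deepset/roberta-base-squad2",
)

result = nlp(
    question="What dataset was the model fine-tuned on?",
    context="deepset/roberta-base-squad2 is a roberta-base model fine-tuned on SQuAD 2.0 for extractive question answering.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'SQuAD 2.0'}
```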

