Merge pull request espnet#3063 from sw005320/iwslt21_asr
added results and uploaded models
Showing 6 changed files with 133 additions and 10 deletions.
@@ -0,0 +1,60 @@
# RESULTS
## Environments
- date: `Tue Mar 9 09:50:14 EST 2021`
- python version: `3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0]`
- espnet version: `espnet 0.9.8`
- chainer version: `chainer 6.0.0`
- pytorch version: `pytorch 1.7.1`
- Git hash: `99d89903e42013dda5c5bc08bcf37a529eab7eb7`
- Commit date: `Tue Mar 9 08:58:35 2021 -0500`

## train_pytorch_train_pytorch_conformer_large_mustc_like_bpe5000_specaug
- Model files (archived to model.mustc_like.tar.gz by `$ pack_model.sh`; see the unpacking sketch below)
  - model link: https://drive.google.com/file/d/107ujDaIrlj6tFHiWLNP6aUBuV0PVyX_Y/view?usp=sharing
  - training config file: `conf/tuning/train_pytorch_conformer_large_mustc_like.yaml`
  - decoding config file: `conf/tuning/decode_pytorch_transformer.yaml`
  - cmvn file: `data/train/cmvn.ark`
  - e2e file: `exp/train_pytorch_train_pytorch_conformer_large_mustc_like_bpe5000_specaug/results/model.val5.avg.best`
  - e2e JSON file: `exp/train_pytorch_train_pytorch_conformer_large_mustc_like_bpe5000_specaug/results/model.json`
  - dict file: `data/lang_1spm`
- No LM; 4-GPU training
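
To decode with this model locally, the archive can be fetched and unpacked as sketched below. `gdown` is just one way to download from Google Drive and is an assumption, not part of this recipe; the file ID comes from the model link above:

```sh
# Download the packed model (file ID taken from the model link above);
# gdown is an assumed helper -- any Google Drive download method works.
pip install gdown
gdown "https://drive.google.com/uc?id=107ujDaIrlj6tFHiWLNP6aUBuV0PVyX_Y" -O model.mustc_like.tar.gz

# Unpack inside egs/iwslt21/asr1 so the archived paths listed above
# (conf/, data/, exp/) land where the recipe expects them.
tar xzf model.mustc_like.tar.gz
```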
### CER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_et_librispeech_test_other_decode|2939|71179|92.4|5.7|1.8|1.1|8.7|56.5|
|decode_et_mustc_tst-COMMON_decode|2641|58047|94.7|2.8|2.6|1.1|6.4|36.6|
|decode_et_tedlium2_test_decode|1155|33696|94.1|2.7|3.2|1.2|7.2|56.4|

### WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_et_librispeech_test_other_decode|2939|53022|93.3|6.0|0.7|0.8|7.5|56.4|
|decode_et_mustc_tst-COMMON_decode|2641|47335|95.2|2.9|1.8|1.1|5.8|36.6|
|decode_et_tedlium2_test_decode|1155|27500|94.0|3.0|3.0|1.2|7.2|56.3|
## train_pytorch_train_pytorch_conformer_large_librispeech_like_bpe5000_specaug
- Model files (archived to model.librispeech_like.tar.gz by `$ pack_model.sh`)
  - model link: https://drive.google.com/file/d/1C2iZQu4P5RKxWAjpD-ZkJZcHIg2-ED51/view?usp=sharing
  - training config file: `conf/tuning/train_pytorch_conformer_large_librispeech_like.yaml`
  - decoding config file: `conf/tuning/decode_pytorch_transformer.yaml`
  - cmvn file: `data/train/cmvn.ark`
  - e2e file: `exp/train_pytorch_train_pytorch_conformer_large_librispeech_like_bpe5000_specaug/results/model.val5.avg.best`
  - e2e JSON file: `exp/train_pytorch_train_pytorch_conformer_large_librispeech_like_bpe5000_specaug/results/model.json`
  - dict file: `data/lang_1spm`
- No LM; 4-GPU training
### CER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_et_librispeech_test_other_decode|2939|71179|92.7|5.5|1.8|1.0|8.3|54.0|
|decode_et_mustc_tst-COMMON_decode|2641|58047|94.8|2.6|2.6|1.0|6.2|37.0|
|decode_et_tedlium2_test_decode|1155|33696|94.9|2.4|2.6|1.1|6.2|54.3|

### WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_et_librispeech_test_other_decode|2939|53022|93.7|5.6|0.7|0.8|7.1|53.8|
|decode_et_mustc_tst-COMMON_decode|2641|47335|95.4|2.7|1.8|1.0|5.6|37.0|
|decode_et_tedlium2_test_decode|1155|27500|94.8|2.6|2.5|1.0|6.2|54.3|
@@ -1 +1 @@
-tuning/train_pytorch_transformer_large.yaml
+tuning/train_pytorch_conformer_large_librispeech_like.yaml
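
This one-line change retargets the recipe's default training configuration from the large transformer to the LibriSpeech-style conformer (the changed file's path is not shown in this view). A minimal sketch of overriding that default at run time, assuming the usual ESPnet egs convention where `run.sh` exposes its `train_config` variable through `utils/parse_options.sh`:

```sh
# Hypothetical invocation: train with the MuST-C-style config instead of
# the new default; parse_options.sh maps --train-config to $train_config.
./run.sh --train-config conf/tuning/train_pytorch_conformer_large_mustc_like.yaml
```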
@@ -4,4 +4,4 @@ penalty: 0.0
 maxlenratio: 0.0
 minlenratio: 0.0
 ctc-weight: 0.5
-lm-weight: 0.7
+lm-weight: 0.0
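
With `ctc-weight: 0.5` kept and `lm-weight` dropped to `0.0`, decoding runs the joint CTC/attention beam search without shallow LM fusion, matching the "No LM" note in RESULTS.md. Schematically, using the standard ESPnet scoring formulation (not part of this diff):

```latex
% Joint CTC/attention score with shallow LM fusion;
% lm-weight = 0 removes the LM term entirely.
\log p(Y \mid X) \;=\; (1-\lambda)\,\log p_{\mathrm{att}}(Y \mid X)
  \;+\; \lambda\,\log p_{\mathrm{ctc}}(Y \mid X)
  \;+\; \beta\,\log p_{\mathrm{lm}}(Y),
\qquad \lambda = \texttt{ctc-weight} = 0.5,\;
\beta = \texttt{lm-weight} = 0
```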
egs/iwslt21/asr1/conf/tuning/train_pytorch_conformer_large_mustc_like.yaml (54 additions, 0 deletions)
@@ -0,0 +1,54 @@
# network architecture
# encoder related
elayers: 12
eunits: 2048
# decoder related
dlayers: 6
dunits: 2048
# attention related
adim: 512
aheads: 8

# hybrid CTC/attention
mtlalpha: 0.3

# label smoothing
lsm-weight: 0.1

# minibatch related
batch-size: 50 # worth tuning!
maxlen-in: 512 # if input length > maxlen-in, batch size is automatically reduced
maxlen-out: 150 # if output length > maxlen-out, batch size is automatically reduced
#batch-bins: 15000000

# optimization related
sortagrad: 0 # feed samples from shortest to longest; -1: enabled for all epochs, 0: disabled, N: enabled for the first N epochs
opt: noam
accum-grad: 2 # worth tuning!
grad-clip: 5
patience: 0
epochs: 30
dropout-rate: 0.1

# transformer specific setting
backend: pytorch
model-module: "espnet.nets.pytorch_backend.e2e_asr_conformer:E2E"
transformer-input-layer: conv2d # encoder architecture type
transformer-lr: 2.0 # worth tuning!
transformer-warmup-steps: 25000
transformer-attn-dropout-rate: 0.0
transformer-length-normalized-loss: false
transformer-init: pytorch

# Report CER & WER
report-cer: true
report-wer: true

# conformer specific setting
transformer-encoder-pos-enc-layer-type: rel_pos
transformer-encoder-selfattn-layer-type: rel_selfattn
rel-pos-type: latest
transformer-encoder-activation-type: swish
macaron-style: true
use-cnn-module: true
cnn-module-kernel: 15 # worth tuning!
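
Two settings above have simple closed forms worth spelling out. `mtlalpha: 0.3` interpolates the CTC and attention objectives during training, and `opt: noam` with `transformer-lr: 2.0`, `adim: 512`, and `transformer-warmup-steps: 25000` fixes the learning-rate schedule, assuming ESPnet's usual Noam scaling by `adim^{-1/2}`:

```latex
% Hybrid CTC/attention training loss (mtlalpha = 0.3)
\mathcal{L} \;=\; \alpha\,\mathcal{L}_{\mathrm{ctc}}
  + (1-\alpha)\,\mathcal{L}_{\mathrm{att}},
\qquad \alpha = 0.3

% Noam schedule: lr(t) = k \cdot d^{-1/2} \cdot \min(t^{-1/2}, t \cdot w^{-3/2})
% with k = transformer-lr = 2.0, d = adim = 512, w = warmup = 25000
\mathrm{lr}(t) \;=\; 2.0 \cdot 512^{-1/2} \cdot
  \min\!\bigl(t^{-1/2},\; t \cdot 25000^{-3/2}\bigr)
```

At the warmup boundary (t = 25000) this peaks near 5.6e-4, then decays as t^{-1/2}. With `batch-size: 50`, `accum-grad: 2`, and the 4-GPU training noted in RESULTS.md, the effective batch is on the order of 50 × 2 × 4 = 400 utterances per update (assuming batch size scales with the GPU count, as in espnet1).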