Merge branch 'espnet:master' into st_bugfix
chintu619 authored May 6, 2022
2 parents eb6dc2d + 793b999 commit 6d1bd3a
Showing 40 changed files with 429 additions and 17 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -97,7 +97,7 @@ Demonstration
- Conformer FastSpeech & FastSpeech2
- VITS
- Multi-speaker & multi-language extention
- Pretrined speaker embedding (e.g., X-vector)
- Pretrained speaker embedding (e.g., X-vector)
- Speaker ID embedding
- Language ID embedding
- Global style token (GST) embedding
1 change: 1 addition & 0 deletions egs2/README.md
@@ -19,6 +19,7 @@ See: https://espnet.github.io/espnet/espnet2_tutorial.html#recipes-using-espnet2
| bur_openslr80 | Burmese ASR training dataset | ASR | BUR | https://openslr.org/80/ | |
| catslu | CATSLU-MAPS | SLU | CMN | https://sites.google.com/view/catslu/home | |
| chime4 | The 4th CHiME Speech Separation and Recognition Challenge | ASR/Multichannel ASR | ENG | http://spandh.dcs.shef.ac.uk/chime_challenge/chime2016/ | |
| chime6 | The 6th CHiME Speech Separation and Recognition Challenge | ASR | ENG | https://chimechallenge.github.io/chime6/ | |
| clarity21 | The First Clarity Enhancement Challenge CEC1 | SE | ENG | https://claritychallenge.github.io/clarity_CEC1_doc/ | |
| cmu_indic | CMU INDIC | TTS | 7 languages | http://festvox.org/cmu_indic/ | |
| commonvoice | The Mozilla Common Voice | ASR | 13 languages | https://voice.mozilla.org/datasets | |
1 change: 1 addition & 0 deletions egs2/TEMPLATE/asr1/db.sh
@@ -23,6 +23,7 @@ REVERB=
REVERB_OUT="${PWD}/REVERB" # Output file path
CHIME3=
CHIME4=
CHIME5=
CSJDATATOP=
CSJVER=dvd ## Set your CSJ format (dvd or usb).
## Usage :
23 changes: 13 additions & 10 deletions egs2/TEMPLATE/asr1/scripts/utils/show_translation_result.sh
@@ -1,6 +1,6 @@
#!/usr/bin/env bash
mindepth=0
maxdepth=3
maxdepth=1
case=tc

. utils/parse_options.sh
@@ -44,24 +44,27 @@ cat << EOF
EOF

# only show BLEU score for now
metrics="bleu"

while IFS= read -r expdir; do
if ls "${expdir}"/*/*/score_*/result.${case}.txt &> /dev/null; then
echo "## $(basename ${expdir})"
for type in $metrics; do
cat << EOF
for type in ${metrics}; do
cat << EOF
### ${type^^}
|dataset|bleu_score|verbose_score|
|dataset|score|verbose_score|
|---|---|---|
EOF
data=$(echo "${expdir}"/*/*/score_*/result.${case}.txt | cut -d '/' -f4)
bleu=$(sed -n '5p' "${expdir}"/*/*/score_*/result.${case}.txt | cut -d ' ' -f 3 | tr -d ',')
verbose=$(sed -n '7p' "${expdir}"/*/*/score_*/result.${case}.txt | cut -d ' ' -f 3- | tr -d '",')
echo "${data}|${bleu}|${verbose}"

for result in "${expdir}"/*/*/score_"${type}"/result."${case}".txt; do
inference_tag=$(echo "${result}" | rev | cut -d/ -f4 | rev)
test_set=$(echo "${result}" | rev | cut -d/ -f3 | rev)
score=$(sed -n '5p' "${result}" | cut -d ' ' -f 3 | tr -d ',')
verbose=$(sed -n '7p' "${result}" | cut -d ' ' -f 3- | tr -d '",')
echo "|${inference_tag}/${test_set}|${score}|${verbose}|"
done
done
fi

done < <(find ${exp} -mindepth ${mindepth} -maxdepth ${maxdepth} -type d)
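The rewritten loop above recovers the inference tag and test-set name from each result path with `rev | cut -d/ -f<N> | rev`, i.e. it picks the N-th path component counting from the end. A minimal sketch of that idiom (the path below is a hypothetical example, not taken from a real experiment directory):

```shell
# Hypothetical example path; real ones come from "${expdir}"/*/*/score_*/result.*.txt
path="exp/mt_train/decode_mt_model_valid.acc.ave/test/score_bleu/result.tc.txt"
inference_tag=$(echo "${path}" | rev | cut -d/ -f4 | rev)  # 4th component from the end
test_set=$(echo "${path}" | rev | cut -d/ -f3 | rev)       # 3rd component from the end
echo "${inference_tag}/${test_set}"   # → decode_mt_model_valid.acc.ave/test
```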
6 changes: 3 additions & 3 deletions egs2/TEMPLATE/mt1/mt.sh
@@ -1215,7 +1215,7 @@ if ! "${skip_eval}"; then
detokenizer.perl -l ${tgt_lang} -q < "${_scoredir}/hyp.trn" > "${_scoredir}/hyp.trn.detok"

if [ ${tgt_case} = "tc" ]; then
echo "Case sensitive BLEU result (single-reference)" >> ${_scoredir}/result.tc.txt
echo "Case sensitive BLEU result (single-reference)" > ${_scoredir}/result.tc.txt
sacrebleu "${_scoredir}/ref.trn.detok" \
-i "${_scoredir}/hyp.trn.detok" \
-m bleu chrf ter \
@@ -1227,7 +1227,7 @@ if ! "${skip_eval}"; then
# detokenize & remove punctuation except apostrophe
remove_punctuation.pl < "${_scoredir}/ref.trn.detok" > "${_scoredir}/ref.trn.detok.lc.rm"
remove_punctuation.pl < "${_scoredir}/hyp.trn.detok" > "${_scoredir}/hyp.trn.detok.lc.rm"
echo "Case insensitive BLEU result (single-reference)" >> ${_scoredir}/result.lc.txt
echo "Case insensitive BLEU result (single-reference)" > ${_scoredir}/result.lc.txt
sacrebleu -lc "${_scoredir}/ref.trn.detok.lc.rm" \
-i "${_scoredir}/hyp.trn.detok.lc.rm" \
-m bleu chrf ter \
@@ -1279,7 +1279,7 @@ if ! "${skip_eval}"; then

# Show results in Markdown syntax
scripts/utils/show_translation_result.sh --case $tgt_case "${mt_exp}" > "${mt_exp}"/RESULTS.md
cat "${cat_exp}"/RESULTS.md
cat "${mt_exp}"/RESULTS.md
fi
else
log "Skip the evaluation stages"
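The two mt.sh fixes above change `>>` to `>` when writing the header of the result file. A small sketch of why truncation matters here: with `>>`, rerunning the scoring stage keeps appending stale header lines to the same file, while `>` leaves exactly one fresh header per run.

```shell
# Each '>' truncates the file, so a rerun overwrites rather than accumulates.
tmpfile=$(mktemp)
echo "Case sensitive BLEU result (single-reference)" > "${tmpfile}"  # first run
echo "Case sensitive BLEU result (single-reference)" > "${tmpfile}"  # rerun: truncates again
lines=$(wc -l < "${tmpfile}" | tr -d ' ')
rm -f "${tmpfile}"
echo "${lines}"   # → 1 (with '>>' this would grow by one line per run)
```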
30 changes: 30 additions & 0 deletions egs2/chime6/asr1/README.md
@@ -0,0 +1,30 @@
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue May 3 16:47:10 EDT 2022`
- python version: `3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0]`
- espnet version: `espnet 202204`
- pytorch version: `pytorch 1.10.1`
- Git hash: `b757b89d45d5574cebf44e225cbe32e3e9e4f522`
- Commit date: `Mon May 2 09:21:08 2022 -0400`

## asr_train_asr_transformer_wavlm_lr1e-3_specaug_accum1_preenc128_warmup20k_raw_en_bpe1000_sp
- Pretrained model: https://huggingface.co/espnet/simpleoier_chime6_asr_transformer_wavlm_lr1e-3
### WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transformer_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|58881|69.4|20.2|10.4|8.6|39.1|75.8|

### CER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transformer_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|280767|80.6|7.4|12.0|8.9|28.3|76.6|

### TER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transformer_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|92680|68.9|17.7|13.4|8.2|39.3|76.6|

1 change: 1 addition & 0 deletions egs2/chime6/asr1/asr.sh
110 changes: 110 additions & 0 deletions egs2/chime6/asr1/cmd.sh
@@ -0,0 +1,110 @@
# ====== About run.pl, queue.pl, slurm.pl, and ssh.pl ======
# Usage: <cmd>.pl [options] JOB=1:<nj> <log> <command...>
# e.g.
# run.pl --mem 4G JOB=1:10 echo.JOB.log echo JOB
#
# Options:
# --time <time>: Limit the maximum time to execute.
# --mem <mem>: Limit the maximum memory usage.
# --max-jobs-run <njob>: Limit the number parallel jobs. This is ignored for non-array jobs.
# --num-threads <ngpu>: Specify the number of CPU core.
# --gpu <ngpu>: Specify the number of GPU devices.
# --config: Change the configuration file from default.
#
# "JOB=1:10" is used for "array jobs" and it can control the number of parallel jobs.
# The left string of "=", i.e. "JOB", is replaced by <N>(Nth job) in the command and the log file name,
# e.g. "echo JOB" is changed to "echo 3" for the 3rd job and "echo 8" for 8th job respectively.
# Note that the number must start with a positive number, so you can't use "JOB=0:10" for example.
#
# run.pl, queue.pl, slurm.pl, and ssh.pl have unified interface, not depending on its backend.
# These options are mapping to specific options for each backend and
# it is configured by "conf/queue.conf" and "conf/slurm.conf" by default.
# If jobs failed, your configuration might be wrong for your environment.
#
#
# The official documentation for run.pl, queue.pl, slurm.pl, and ssh.pl:
# "Parallelization in Kaldi": http://kaldi-asr.org/doc/queue.html
# =========================================================~


# Select the backend used by run.sh from "local", "stdout", "sge", "slurm", or "ssh"
cmd_backend='local'

# Local machine, without any Job scheduling system
if [ "${cmd_backend}" = local ]; then

# The other usage
export train_cmd="run.pl"
# Used for "*_train.py": "--gpu" is appended optionally by run.sh
export cuda_cmd="run.pl"
# Used for "*_recog.py"
export decode_cmd="run.pl"

# Local machine logging to stdout and log file, without any Job scheduling system
elif [ "${cmd_backend}" = stdout ]; then

# The other usage
export train_cmd="stdout.pl"
# Used for "*_train.py": "--gpu" is appended optionally by run.sh
export cuda_cmd="stdout.pl"
# Used for "*_recog.py"
export decode_cmd="stdout.pl"


# "qsub" (Sun Grid Engine, or derivation of it)
elif [ "${cmd_backend}" = sge ]; then
# The default setting is written in conf/queue.conf.
# You must change "-q g.q" for the "queue" for your environment.
# To know the "queue" names, type "qhost -q"
# Note that to use "--gpu *", you have to setup "complex_value" for the system scheduler.

export train_cmd="queue.pl"
export cuda_cmd="queue.pl"
export decode_cmd="queue.pl"


# "qsub" (Torque/PBS.)
elif [ "${cmd_backend}" = pbs ]; then
# The default setting is written in conf/pbs.conf.

export train_cmd="pbs.pl"
export cuda_cmd="pbs.pl"
export decode_cmd="pbs.pl"


# "sbatch" (Slurm)
elif [ "${cmd_backend}" = slurm ]; then
# The default setting is written in conf/slurm.conf.
# You must change "-p cpu" and "-p gpu" for the "partition" for your environment.
# To know the "partion" names, type "sinfo".
# You can use "--gpu * " by default for slurm and it is interpreted as "--gres gpu:*"
# The devices are allocated exclusively using "${CUDA_VISIBLE_DEVICES}".

export train_cmd="slurm.pl"
export cuda_cmd="slurm.pl"
export decode_cmd="slurm.pl"

elif [ "${cmd_backend}" = ssh ]; then
# You have to create ".queue/machines" to specify the host to execute jobs.
# e.g. .queue/machines
# host1
# host2
# host3
# Assuming you can login them without any password, i.e. You have to set ssh keys.

export train_cmd="ssh.pl"
export cuda_cmd="ssh.pl"
export decode_cmd="ssh.pl"

# This is an example of specifying several unique options in the JHU CLSP cluster setup.
# Users can modify/add their own command options according to their cluster environments.
elif [ "${cmd_backend}" = jhu ]; then

export train_cmd="queue.pl --mem 2G"
export cuda_cmd="queue-freegpu.pl --mem 2G --gpu 1 --config conf/queue.conf"
export decode_cmd="queue.pl --mem 4G"

else
echo "$0: Error: Unknown cmd_backend=${cmd_backend}" 1>&2
return 1
fi
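The comment block at the top of this cmd.sh describes the `JOB=1:<nj>` array-job syntax, in which each job index replaces the literal `JOB` placeholder in the command and log name. A minimal bash sketch of that substitution (an illustration only, not run.pl's actual implementation):

```shell
# Toy model of "JOB=1:N" expansion: run the command once per index,
# replacing every occurrence of the placeholder in each argument.
run_array() {
  local spec="$1"; shift
  local var="${spec%%=*}"           # e.g. JOB
  local range="${spec#*=}"          # e.g. 1:3
  local lo="${range%%:*}" hi="${range##*:}"
  local n arg
  for n in $(seq "${lo}" "${hi}"); do
    local cmd=()
    for arg in "$@"; do
      cmd+=("${arg//${var}/${n}}")  # substitute the job index for JOB
    done
    "${cmd[@]}"
  done
}

run_array JOB=1:3 echo "running job JOB"
```

Real run.pl additionally redirects each job's output to its own log file and tracks exit codes; this sketch only shows the placeholder expansion.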
7 changes: 7 additions & 0 deletions egs2/chime6/asr1/conf/decode_asr_transformer.yaml
@@ -0,0 +1,7 @@
batch_size: 0
beam_size: 10
penalty: 0.0
maxlenratio: 0.0
minlenratio: 0.0
ctc_weight: 0.3
lm-weight: 0.0
2 changes: 2 additions & 0 deletions egs2/chime6/asr1/conf/fbank.conf
@@ -0,0 +1,2 @@
--sample-frequency=16000
--num-mel-bins=80
11 changes: 11 additions & 0 deletions egs2/chime6/asr1/conf/pbs.conf
@@ -0,0 +1,11 @@
# Default configuration
command qsub -V -v PATH -S /bin/bash
option name=* -N $0
option mem=* -l mem=$0
option mem=0 # Do not add anything to qsub_opts
option num_threads=* -l ncpus=$0
option num_threads=1 # Do not add anything to qsub_opts
option num_nodes=* -l nodes=$0:ppn=1
default gpu=0
option gpu=0
option gpu=* -l ngpus=$0
1 change: 1 addition & 0 deletions egs2/chime6/asr1/conf/pitch.conf
@@ -0,0 +1 @@
--sample-frequency=16000
12 changes: 12 additions & 0 deletions egs2/chime6/asr1/conf/queue.conf
@@ -0,0 +1,12 @@
# Default configuration
command qsub -v PATH -cwd -S /bin/bash -j y -l arch=*64*
option name=* -N $0
option mem=* -l mem_free=$0,ram_free=$0
option mem=0 # Do not add anything to qsub_opts
option num_threads=* -pe smp $0
option num_threads=1 # Do not add anything to qsub_opts
option max_jobs_run=* -tc $0
option num_nodes=* -pe mpi $0 # You must set this PE as allocation_rule=1
default gpu=0
option gpu=0
option gpu=* -l gpu=$0 -q g.q
14 changes: 14 additions & 0 deletions egs2/chime6/asr1/conf/slurm.conf
@@ -0,0 +1,14 @@
# Default configuration
command sbatch --export=PATH
option name=* --job-name $0
option time=* --time $0
option mem=* --mem-per-cpu $0
option mem=0
option num_threads=* --cpus-per-task $0
option num_threads=1 --cpus-per-task 1
option num_nodes=* --nodes $0
default gpu=0
option gpu=0 -p cpu
option gpu=* -p gpu --gres=gpu:$0 -c $0 # Recommend allocating more CPU than, or equal to the number of GPU
# note: the --max-jobs-run option is supported as a special case
# by slurm.pl and you don't have to handle it in the config file.
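The `option <name>=* <flags> $0` lines in these `*.conf` files are templates: the value supplied on the command line is substituted for the literal `$0` before the flags are passed to the scheduler. A simplified bash sketch of that substitution (an assumption about the mechanism for illustration, not slurm.pl's actual code):

```shell
# Toy expansion of an "option" template: replace every literal $0 in the
# template with the supplied value.
expand_option() {
  local template="$1" value="$2"
  printf '%s\n' "${template//\$0/${value}}"
}

expand_option '--mem-per-cpu $0' 4G           # → --mem-per-cpu 4G
expand_option '-p gpu --gres=gpu:$0 -c $0' 2  # → -p gpu --gres=gpu:2 -c 2
```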
16 changes: 16 additions & 0 deletions egs2/chime6/asr1/conf/train_lm.yaml
@@ -0,0 +1,16 @@
optim: sgd
patience: 3
max_epoch: 20
batch_type: folded
batch_size: 1024 # 300 for word LMs
lm: seq_rnn
lm_conf:
rnn_type: lstm
nlayers: 2 # 1 for word LMs
unit: 650 # 1000 for word LMs

best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 1
@@ -0,0 +1,87 @@
# minibatch related
batch_type: folded
batch_size: 48
accum_grad: 1
grad_clip: 5
max_epoch: 8
patience: 4
# The initialization method for model parameters
init: xavier_uniform
val_scheduler_criterion:
- valid
- loss
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
unused_parameters: true
freeze_param: [
"frontend.upstream"
]

# network architecture
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wavlm_large # Note: If the upstream is changed, please change the input_size in the preencoder.
download_dir: ./hub
multilayer_feature: True

preencoder: linear
preencoder_conf:
input_size: 1024 # Note: If the upstream is changed, please change this value accordingly.
output_size: 128

# encoder related
encoder: transformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d2
normalize_before: true

# decoder related
decoder: transformer
decoder_conf:
input_layer: embed
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.0
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0

model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false

optim: adam
optim_conf:
lr: 0.001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 20000

specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 100
num_freq_mask: 4
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2

0 comments on commit 6d1bd3a
