This PR makes various improvements to the SID recipes. It is primarily motivated by issues tracked on GitHub and by reports in the Kaldi forums.
The i-vector scripts that use nnet2 in sid/*_dnn.sh now support explicit GPU use. This is in response to complaints about slowness in these scripts. This closes issue #165 (SRE10 v2 Improvements).
A fix to src/fgmmbin/fgmm-global-init-from-accs.cc so that we no longer crash when a Gaussian has very low occupancy (see https://groups.google.com/d/msg/kaldi-help/U_L_6IWBN1c/L8oPTcE5AgAJ).
LID and SID scripts now do more cleanup when the --cleanup=true option is given. This closes issue #1059 (cleanup when training iVector extractor).
SID i-vector training scripts now use '--num-threads N' instead of '-pe smp N'. This closes issue #1096 (SID and LID recipes should use --num-threads N). (The LID scripts were already doing the right thing.)
In sre10/v1/local/plda_scoring.sh, added a --simple-length-norm option (defaulting to 'false', since that gives better performance on SRE10). This closes issue #1097 (in SID recipes, provide a script-level option for simple length normalization).
In egs/sre10/{v1,v2}/run.sh, the PLDA scores are now written to exp instead of local. This is better, since v1 and v2 share the same local directory and would otherwise overwrite each other's scores. Also changed the old-style memory options to the new ones (e.g., --mem 5G).
The scripts to train the DNN for SRE10 have been moved from sre10/v1/local to sre08/v1/sid/nnet2 (mirroring what we did with lre07/v1/lid/nnet2). This is consistent with other setups, and makes the scripts easier to access from new (or user-created) SID recipes.
In sre10/v1/local/dnn/run_nnet2_multisplice.sh, we now use 8 GPUs to train the DNN instead of 18 (which is excessive, and may have been a typo).
Various cosmetic fixes: fixed indentation in several sid and lid scripts; removed trailing whitespace in src/ivectorbin/*.cc; fixed a typo in src/gmm/full-gmm.cc. Also changed the wording in egs/sre10/v1/local/dnn/train_dnn.sh so that the recipe (an nnet2 pnorm recipe) is no longer described as the "current best recipe" but as an "older nnet2 recipe," which is now the accurate description.