idiap/pkwrap

A PyTorch wrapper for LF-MMI training and parallel training in Kaldi
UPDATE (2021-02-03): See the changelog for the latest updates to the code

This is (yet another!) a Python wrapper for Kaldi. The main goal is to be able to train acoustic models in PyTorch so that we can

  1. use the MMI cost function during training
  2. use NG-SGD for affine transformations, which enables multi-GPU training with SGE
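
To illustrate the first goal, here is a minimal, hypothetical sketch of how an external LF-MMI-style objective can be exposed to PyTorch as a custom autograd Function. The class name and the placeholder objective are invented for this example and are not pkwrap's actual API; in pkwrap the forward/backward computations are delegated to Kaldi's chain library.

```python
import torch


class ChainLoss(torch.autograd.Function):
    """Sketch of wrapping an external LF-MMI objective as an autograd Function.

    In pkwrap the real forward pass would hand the network output to Kaldi's
    chain-loss computation, which returns both the objective value and the
    gradient w.r.t. the network output. Here a stand-in objective is used so
    the example runs self-contained.
    """

    @staticmethod
    def forward(ctx, nnet_output):
        # Stand-in for Kaldi's numerator/denominator computation: it yields
        # a scalar loss and a gradient of the same shape as nnet_output.
        grad = torch.tanh(nnet_output)   # placeholder gradient
        loss = grad.sum()                # placeholder scalar objective
        ctx.save_for_backward(grad)
        return loss

    @staticmethod
    def backward(ctx, grad_output):
        # Return the externally computed gradient, scaled by the incoming
        # gradient of the loss (a scalar).
        (grad,) = ctx.saved_tensors
        return grad_output * grad
```

A network's output tensor can then be passed through `ChainLoss.apply(...)` inside an ordinary PyTorch training loop, with `loss.backward()` propagating the externally computed gradient into the model.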

Motivation

The main motivation of this project is to run MMI training in Pytorch. The idea is to use existing functionality in Kaldi so that we don't have to re-implement anything.

Why not use the existing wrappers?

Pykaldi is good for exposing high-level functionalities. However, many things are still not possible (e.g. loading NNet3 models and accessing the parameters of the model). Modifying Pykaldi requires maintaining custom versions of Kaldi and CLIF. Moreover, simply converting GPU tensors to Kaldi CuMatrix is not efficient (the general route, afaik, would be GPU Tensor -> CPU Tensor -> Kaldi CPU Matrix -> Kaldi GPU Matrix).

Pykaldi2 provides a version of LF-MMI training, which uses Pykaldi functions.


Installation

Pkwrap has been tested with the following PyTorch and CUDA versions:

  PyTorch   CUDA
  1.6       9.2, 10.2
  1.7       10.2
  1.8       10.2, 11.1

  1. Activate your pytorch environment.
  2. Install requirements with pip install -r requirements.txt
  3. Compile Kaldi with CXXFLAGS="-D_GLIBCXX_USE_CXX11_ABI=0".
  4. Set KALDI_ROOT and optionally MKL_ROOT in the environment. Note: in the future this will be made easier with autoconf.
  5. Run make
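
Put together, the steps above might look like the following shell session. All paths and the environment name are examples; the exact Kaldi configure invocation may differ on your system (see Kaldi's own INSTALL instructions).

```shell
# 1. activate your pytorch environment (name is an example)
conda activate pytorch-1.8

# 2. install the python requirements
pip install -r requirements.txt

# 3. compile Kaldi with the pre-C++11 ABI so it is link-compatible
#    with pytorch's prebuilt binaries
cd "$KALDI_ROOT/src"
CXXFLAGS="-D_GLIBCXX_USE_CXX11_ABI=0" ./configure --shared
make -j "$(nproc)"

# 4. point pkwrap at Kaldi (and optionally MKL)
export KALDI_ROOT=/path/to/kaldi   # example path
export MKL_ROOT=/opt/intel/mkl     # optional, example path

# 5. build pkwrap's extension
cd /path/to/pkwrap                 # example path
make
```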

Known Issues / Common Pitfalls

  • the g++ versions used to compile PyTorch, Kaldi, and pkwrap should match!

Usage

Before importing, check that the Kaldi libraries used to compile the package are accessible in your environment. If they are not, add them to $LD_LIBRARY_PATH as follows

LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$KALDI_ROOT/src/lib
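
As a sanity check before importing, one can verify programmatically that Kaldi's shared-library directory is visible to the dynamic loader. This helper is a sketch written for this README, not part of pkwrap itself:

```python
import os


def kaldi_libs_on_path(kaldi_root=None, ld_library_path=None):
    """Return True if $KALDI_ROOT/src/lib appears in LD_LIBRARY_PATH.

    pkwrap's compiled extension links against Kaldi's shared libraries in
    $KALDI_ROOT/src/lib, so that directory must be visible to the loader
    before `import pkwrap`. Arguments default to the environment variables.
    """
    kaldi_root = kaldi_root or os.environ.get("KALDI_ROOT", "")
    if ld_library_path is None:
        ld_library_path = os.environ.get("LD_LIBRARY_PATH", "")
    lib_dir = os.path.join(kaldi_root, "src", "lib")
    return lib_dir in ld_library_path.split(os.pathsep)
```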

Currently, there are recipes for conventional LF-MMI training with pkwrap.

For flat-start LF-MMI training, there is a recipe for Librispeech.

For experiments related to quantization of acoustic models trained in Kaldi, see egs/librispeech/quant in the load_kaldi_models branch.


Works based on Pkwrap

The following works are based on this repository and might be of interest:

1. Lattice-Free MMI Adaptation of Self-Supervised Pretrained Acoustic Models


References and Citation

The technical report is available on arXiv. It can be cited using the following BibTeX entry:

@article{madikeri2020pkwrap,
  title={Pkwrap: a PyTorch Package for LF-MMI Training of Acoustic Models},
  author={Madikeri, Srikanth and Tong, Sibo and Zuluaga-Gomez, Juan and Vyas, Apoorv and Motlicek, Petr and Bourlard, Herv{\'e}},
  journal={arXiv preprint arXiv:2010.03466},
  year={2020}
}
