Commit

fix typo (#1324)
Rudra-Ji authored Oct 19, 2023
1 parent 973dc10 commit eef47ad
Showing 5 changed files with 5 additions and 5 deletions.
2 changes: 1 addition & 1 deletion docs/source/decoding-with-langugage-models/LODR.rst
@@ -56,7 +56,7 @@ during decoding for transducer model:
     \lambda_1 \log p_{\text{Target LM}}\left(y_u|\mathit{x},y_{1:u-1}\right) -
     \lambda_2 \log p_{\text{bi-gram}}\left(y_u|\mathit{x},y_{1:u-1}\right)
 
-In LODR, an additional bi-gram LM estimated on the source domain (e.g training corpus) is required. Comared to DR,
+In LODR, an additional bi-gram LM estimated on the source domain (e.g training corpus) is required. Compared to DR,
 the only difference lies in the choice of source domain LM. According to the original `paper <https://arxiv.org/abs/2203.16776>`_,
 LODR achieves similar performance compared DR in both intra-domain and cross-domain settings.
 As a bi-gram is much faster to evaluate, LODR is usually much faster.
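The hunk above documents how LODR combines per-token log-probabilities during decoding: the target-domain LM score is added with weight λ1 while the source-domain bi-gram score is subtracted with weight λ2. A minimal sketch of that combination, assuming hypothetical function and parameter names (`lodr_score`, `lm_scale`, `lodr_scale`) and illustrative scale values, not the actual icefall implementation:

```python
import math


def lodr_score(log_p_asr, log_p_target_lm, log_p_bigram,
               lm_scale=0.5, lodr_scale=0.2):
    """Combine per-token log-probabilities in the LODR style.

    The target-domain LM is added with weight lm_scale (lambda_1) and the
    source-domain bi-gram LM is subtracted with weight lodr_scale
    (lambda_2). Scale values here are illustrative, not tuned.
    """
    return log_p_asr + lm_scale * log_p_target_lm - lodr_scale * log_p_bigram


# Toy example: token probabilities 0.6 (ASR), 0.3 (target LM), 0.2 (bi-gram).
s = lodr_score(math.log(0.6), math.log(0.3), math.log(0.2))
```

With zero scales the score reduces to the plain ASR log-probability, which is a quick sanity check on the sign convention.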
2 changes: 1 addition & 1 deletion docs/source/model-export/export-ncnn-conv-emformer.rst
@@ -125,7 +125,7 @@ Python code. We have also set up ``PATH`` so that you can use
 .. caution::
 
    Please don't use `<https://github.com/tencent/ncnn>`_.
-   We have made some modifications to the offical `ncnn`_.
+   We have made some modifications to the official `ncnn`_.
 
    We will synchronize `<https://github.com/csukuangfj/ncnn>`_ periodically
    with the official one.
2 changes: 1 addition & 1 deletion egs/wenetspeech/ASR/pruned_transducer_stateless2/decode.py
@@ -203,7 +203,7 @@ def get_parser():
         "--beam-size",
         type=int,
         default=4,
-        help="""An interger indicating how many candidates we will keep for each
+        help="""An integer indicating how many candidates we will keep for each
         frame. Used only when --decoding-method is beam_search or
         modified_beam_search.""",
     )
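The ``--beam-size`` help text above describes keeping a fixed number of candidates per frame. A hypothetical sketch of that pruning step, assuming a made-up helper ``keep_top_candidates`` over (tokens, log-probability) pairs, not the actual icefall ``beam_search``:

```python
import heapq


def keep_top_candidates(hypotheses, beam_size=4):
    """Keep the beam_size highest-scoring hypotheses for one frame.

    hypotheses: list of (tokens, log_prob) pairs. Illustrative helper only;
    the real decoder also handles blank/token expansion and state merging.
    """
    return heapq.nlargest(beam_size, hypotheses, key=lambda h: h[1])


hyps = [(["a"], -1.2), (["b"], -0.3), (["c"], -2.5),
        (["d"], -0.9), (["e"], -1.7)]
best = keep_top_candidates(hyps, beam_size=4)
```

``heapq.nlargest`` returns the survivors sorted by score, so the lowest-scoring hypothesis is dropped each frame.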
@@ -78,7 +78,7 @@ def add_finetune_arguments(parser: argparse.ArgumentParser):
         default=None,
         help="""
         Modules to be initialized. It matches all parameters starting with
-        a specific key. The keys are given with Comma seperated. If None,
+        a specific key. The keys are given with Comma separated. If None,
         all modules will be initialised. For example, if you only want to
         initialise all parameters staring with "encoder", use "encoder";
         if you want to initialise parameters starting with encoder or decoder,
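The help text above describes selecting parameters by comma-separated key prefixes. A minimal sketch of that matching rule, assuming a hypothetical helper ``select_modules`` over plain parameter-name strings, not the fine-tuning script's actual code:

```python
def select_modules(param_names, init_modules):
    """Return parameter names matching any comma-separated key prefix.

    Mirrors the documented behaviour: None selects everything;
    "encoder,decoder" selects names starting with "encoder" or "decoder".
    Illustrative only.
    """
    if init_modules is None:
        return list(param_names)
    keys = [k.strip() for k in init_modules.split(",")]
    return [n for n in param_names
            if any(n.startswith(k) for k in keys)]


names = ["encoder.layer0.weight", "decoder.embed.weight", "joiner.proj.weight"]
sel = select_modules(names, "encoder,decoder")
```

In a real script the names would come from ``model.named_parameters()``; the prefix test itself is the same.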
2 changes: 1 addition & 1 deletion icefall/utils.py
@@ -1977,7 +1977,7 @@ def parse_timestamps_and_texts(
         A k2.Fsa with best_paths.arcs.num_axes() == 3, i.e.
         containing multiple FSAs, which is expected to be the result
         of k2.shortest_path (otherwise the returned values won't
-        be meaningful). Attribtute `labels` is the prediction unit,
+        be meaningful). Attribute `labels` is the prediction unit,
         e.g., phone or BPE tokens. Attribute `aux_labels` is the word index.
       word_table:
         The word symbol table.
