diff --git a/tutorials/00_NeMo_Primer.ipynb b/tutorials/00_NeMo_Primer.ipynb
index c21696702a39..18b8652fa10a 100644
--- a/tutorials/00_NeMo_Primer.ipynb
+++ b/tutorials/00_NeMo_Primer.ipynb
@@ -588,7 +588,7 @@
     "id": "U7Eezf_sAVS0"
    },
    "source": [
-    "You might wonder why we didnt explicitly set `citrinet.cfg.optim = cfg.optim`. \n",
+    "You might wonder why we didn't explicitly set `citrinet.cfg.optim = cfg.optim`. \n",
     "\n",
     "This is because the `setup_optimization()` method does it for you! You can still update the config manually."
    ]
diff --git a/tutorials/asr/ASR_Confidence_Estimation.ipynb b/tutorials/asr/ASR_Confidence_Estimation.ipynb
index e177a5132b26..06bb75f8f237 100644
--- a/tutorials/asr/ASR_Confidence_Estimation.ipynb
+++ b/tutorials/asr/ASR_Confidence_Estimation.ipynb
@@ -284,7 +284,7 @@
     "        eps_padded_hyp, labels, padded_labels, fill_confidence_deletions(confidence_scores, labels)\n",
     "    ):\n",
     "        word_len = len(word)\n",
-    "        # shield angle brakets for <eps>\n",
+    "        # shield angle brackets for <eps>\n",
     "        if html and word == \"<eps>\":\n",
     "            word = \"&lt;eps&gt;\"\n",
     "        if current_line_len + word_len + 1 <= terminal_width:\n",
@@ -307,7 +307,7 @@
     "    current_word_line = \"\"\n",
     "    for word, score in zip(transcript_list, confidence_scores):\n",
     "        word_len = len(word)\n",
-    "        # shield angle brakets for <eps>\n",
+    "        # shield angle brackets for <eps>\n",
     "        if html and word == \"<eps>\":\n",
     "            word = \"&lt;eps&gt;\"\n",
     "        if current_line_len + word_len + 1 <= terminal_width:\n",
diff --git a/tutorials/asr/ASR_Context_Biasing.ipynb b/tutorials/asr/ASR_Context_Biasing.ipynb
index 75385234ce29..ec8c0c1b78c6 100644
--- a/tutorials/asr/ASR_Context_Biasing.ipynb
+++ b/tutorials/asr/ASR_Context_Biasing.ipynb
@@ -361,7 +361,7 @@
    "source": [
     "## Create a context-biasing list\n",
     "\n",
-    "Now, we need to select the words, recognition of wich we want to improve by CTC-WS context-biasing.\n",
+    "Now, we need to select the words whose recognition we want to improve with CTC-WS context-biasing.\n",
     "Usually, we select only nontrivial words with the lowest recognition accuracy.\n",
     "Such words should have a character length >= 3 because short words in a context-biasing list may produce high false-positive recognition.\n",
     "In this toy example, we will select all the words that look like names with a recognition accuracy less than 1.0.\n",
diff --git a/tutorials/asr/Speech_Commands.ipynb b/tutorials/asr/Speech_Commands.ipynb
index f0671763b984..e50e8d1f283e 100644
--- a/tutorials/asr/Speech_Commands.ipynb
+++ b/tutorials/asr/Speech_Commands.ipynb
@@ -1431,10 +1431,10 @@
     "# Lets change the scheduler\n",
     "optim_sched_cfg.sched.name = \"CosineAnnealing\"\n",
     "\n",
-    "# \"power\" isnt applicable to CosineAnnealing so let's remove it\n",
+    "# \"power\" isn't applicable to CosineAnnealing, so let's remove it\n",
     "optim_sched_cfg.sched.pop('power')\n",
     "\n",
-    "# \"hold_ratio\" isnt applicable to CosineAnnealing, so let's remove it\n",
+    "# \"hold_ratio\" isn't applicable to CosineAnnealing, so let's remove it\n",
     "optim_sched_cfg.sched.pop('hold_ratio')\n",
     "\n",
     "# Set \"min_lr\" to lower value\n",
diff --git a/tutorials/nlp/Joint_Intent_and_Slot_Classification.ipynb b/tutorials/nlp/Joint_Intent_and_Slot_Classification.ipynb
index a0ff0faf511b..b21fdfe36020 100644
--- a/tutorials/nlp/Joint_Intent_and_Slot_Classification.ipynb
+++ b/tutorials/nlp/Joint_Intent_and_Slot_Classification.ipynb
@@ -749,7 +749,7 @@
    "source": [
     "### Optimizing Threshold\n",
     "\n",
-    "As mentioned above, when classifiying a given query such as `show all flights and fares 
from denver to san francisco`, our model checks whether each individual intent would be suitable. Before assigning the final labels for a query, the model assigns a probability an intent matches the query. For example, if our `dict.intents.csv` had 5 different intents, then the model could output for a given query \[0.52, 0.38, 0.21, 0.67. 0.80\] where each value represents the probability that query matches that particular intent. \n",
+    "As mentioned above, when classifying a given query such as `show all flights and fares from denver to san francisco`, our model checks whether each individual intent would be suitable. Before assigning the final labels for a query, the model assigns a probability that each intent matches the query. For example, if our `dict.intents.csv` had 5 different intents, then the model could output \[0.52, 0.38, 0.21, 0.67, 0.80\] for a given query, where each value represents the probability that the query matches that particular intent. \n",
     "\n",
     "We need to use these probabilities to generate final label predictions of 0 or 1 for each label. While we can use 0.5 as the probability threshold, it is usually the case that there is a better threshold to use depending on the metric we want to optimize. For this tutorial, we will be finding the threshold that gives us the best micro-F1 score on the validation set. After running the `optimize_threshold` method, the threshold attribute for our model will be updated."
    ]