diff --git a/tutorials/nlp/lora.ipynb b/tutorials/nlp/lora.ipynb new file mode 100644 index 000000000000..fc79f74a6e2a --- /dev/null +++ b/tutorials/nlp/lora.ipynb @@ -0,0 +1,1720 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 2, + "id": "b7a434f4", + "metadata": {}, + "outputs": [], + "source": [ + "BRANCH='main'\n", + "import os\n", + "import wget" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "developmental-gibraltar", + "metadata": {}, + "outputs": [], + "source": [ + "\"\"\"\n", + "You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.\n", + "\n", + "Instructions for setting up Colab are as follows:\n", + "1. Open a new Python 3 notebook.\n", + "2. Import this notebook from GitHub (File -> Upload Notebook -> \"GITHUB\" tab -> copy/paste GitHub URL)\n", + "3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select \"GPU\" for hardware accelerator)\n", + "4. Run this cell to set up dependencies.\n", + "\"\"\"\n", + "# If you're using Google Colab and not running locally, run this cell\n", + "\n", + "# install NeMo\n", + "!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[nlp]" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "42daf8bf", + "metadata": {}, + "source": [ + "### Introduction\n", + "\n", + "In this notebook we demonstrate how to use NeMo's implementation of LoRA (Low-Rank Adaptation) for fine-tuning large language models. Our implementation is based on the [paper](https://openreview.net/pdf?id=nZeVKeeFYf9) by Hu et al.\n", + "\n", + "We are going to show you how to:\n", + " \n", + " 1. Train a LoRA model on a simple Extractive QA task.\n", + " 2. Inspect the trained LoRA model and show the parameters it contains.\n", + " 3. Run inference with the base model using the LoRA parameters.\n", + " 4. Merge the LoRA parameters into the base model and run inference again on the merged model.\n", + "\n", + "In this tutorial we will focus on LoRA, but the training and evaluation methods described here are applicable to other Parameter-Efficient Fine-Tuning (PEFT) methods in NeMo." + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "0bfc7709", + "metadata": {}, + "source": [ + "### Tasks and Datasets\n", + "We will be using LoRA to teach our GPT model to do Extractive Question Answering.\n", + "\n", + "We will be using the [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) reading comprehension dataset, consisting of questions posed by crowd workers on a set of Wikipedia articles, where the answer to every question is a segment of text. More information on [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) can be found on their website or in the paper by Rajpurkar et al., \"[Know What You Don’t Know: Unanswerable Questions for SQuAD](https://arxiv.org/pdf/1806.03822.pdf)\".\n", + "\n", + "LoRA (and all PEFT tuning) models expect at least two fields in the jsonl files. The `input` field should contain all the tokens necessary for the model to generate the `output`. 
For example, for extractive QA, the `input` should contain the context text as well as the question.\n", + "\n", + "```\n", + "[\n", + " {\"input\": \"User: Context: [CONTEXT_1] Question: [QUESTION_1]\\n\\nAssistant:\", \"output\": [ANSWER_1]},\n", + " {\"input\": \"User: Context: [CONTEXT_2] Question: [QUESTION_2]\\n\\nAssistant:\", \"output\": [ANSWER_2]},\n", + " {\"input\": \"User: Context: [CONTEXT_3] Question: [QUESTION_3]\\n\\nAssistant:\", \"output\": [ANSWER_3]},\n", + "]\n", + "```\n", + "Note that we use keywords in the input like `Context:` and `Question:` to separate the text representing the context and question. We also use the keyword `User:` and end each input with the `\\n\\nAssistant:` tokens. These are recommended because NeMo's instruction-tuned models are trained with a prefix of `User:` and a suffix of `\\n\\nAssistant:`." + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "0dbd41fd", + "metadata": {}, + "outputs": [], + "source": [ + "# You can replace DATA_DIR and NEMO_DIR with your own locations\n", + "DATA_DIR = \"data\"\n", + "NEMO_DIR = \".\"\n", + "os.makedirs(DATA_DIR, exist_ok=True)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "504a7b40", + "metadata": {}, + "source": [ + "\n", + "For each dataset we have pre-written preprocessing scripts in NeMo's `scripts/dataset_processing` directory. Let's download those now. " + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "e72a1dc1", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "File ‘prompt_learning_squad_preprocessing.py’ already there; not retrieving.\n", + "\n" + ] + } + ], + "source": [ + "# download the preprocessing scripts from github for the purpose of this tutorial\n", + "! wget -nc https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/scripts/dataset_processing/nlp/squad/prompt_learning_squad_preprocessing.py" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "71813919", + "metadata": {}, + "source": [ + "Now let's download and process the dataset." + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "fa16d8ac", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "--2023-05-30 14:07:23-- https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json\n", + "Resolving rajpurkar.github.io (rajpurkar.github.io)... 185.199.109.153, 185.199.111.153, 185.199.108.153, ...\n", + "Connecting to rajpurkar.github.io (rajpurkar.github.io)|185.199.109.153|:443... connected.\n", + "HTTP request sent, awaiting response... 200 OK\n", + "Length: 30288272 (29M) [application/json]\n", + "Saving to: ‘train-v1.1.json’\n", + "\n", + "train-v1.1.json 100%[===================>] 28.88M 84.3MB/s in 0.3s \n", + "\n", + "2023-05-30 14:07:25 (84.3 MB/s) - ‘train-v1.1.json’ saved [30288272/30288272]\n", + "\n", + "--2023-05-30 14:07:26-- https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json\n", + "Resolving rajpurkar.github.io (rajpurkar.github.io)... 185.199.110.153, 185.199.108.153, 185.199.111.153, ...\n", + "Connecting to rajpurkar.github.io (rajpurkar.github.io)|185.199.110.153|:443... connected.\n", + "HTTP request sent, awaiting response... 
200 OK\n", + "Length: 4854279 (4.6M) [application/json]\n", + "Saving to: ‘dev-v1.1.json’\n", + "\n", + "dev-v1.1.json 100%[===================>] 4.63M --.-KB/s in 0.1s \n", + "\n", + "2023-05-30 14:07:27 (43.8 MB/s) - ‘dev-v1.1.json’ saved [4854279/4854279]\n", + "\n" + ] + } + ], + "source": [ + "SQUAD_DIR = os.path.join(DATA_DIR, \"SQuAD\")\n", + "os.makedirs(SQUAD_DIR, exist_ok=True)\n", + "\n", + "# Download the SQuAD dataset\n", + "!wget -nc https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json\n", + "!wget -nc https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json\n", + "!mv train-v1.1.json {SQUAD_DIR}\n", + "!mv dev-v1.1.json {SQUAD_DIR}" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "64e3e25b", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Saving train split to data/SQuAD/squad_train.jsonl\n", + "100%|█████████████████████████████████| 87599/87599 [00:00<00:00, 204336.27it/s]\n", + "Saving val split to data/SQuAD/squad_val.jsonl\n", + "100%|█████████████████████████████████| 10570/10570 [00:00<00:00, 158654.55it/s]\n", + "Saving test split to data/SQuAD/squad_test_ground_truth.jsonl\n", + "100%|█████████████████████████████████| 10570/10570 [00:00<00:00, 183040.92it/s]\n", + "Saving test split to data/SQuAD/squad_test.jsonl\n", + "100%|█████████████████████████████████| 10570/10570 [00:00<00:00, 196367.94it/s]\n" + ] + } + ], + "source": [ + "# Preprocess squad data\n", + "!python prompt_learning_squad_preprocessing.py --sft-format --data-dir {SQUAD_DIR}" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "b562d1de", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{\"input\": \"User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24\\u201310 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \\\"golden anniversary\\\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \\\"Super Bowl L\\\"), so that the logo could prominently feature the Arabic numerals 50. Question:Which NFL team represented the AFC at Super Bowl 50?\\n\\nAssistant:\", \"output\": \"Denver Broncos\"}\n", + "{\"input\": \"User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24\\u201310 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \\\"golden anniversary\\\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \\\"Super Bowl L\\\"), so that the logo could prominently feature the Arabic numerals 50. 
Question:Which NFL team represented the NFC at Super Bowl 50?\\n\\nAssistant:\", \"output\": \"Carolina Panthers\"}\n", + "{\"input\": \"User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24\\u201310 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \\\"golden anniversary\\\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \\\"Super Bowl L\\\"), so that the logo could prominently feature the Arabic numerals 50. Question:Where did Super Bowl 50 take place?\\n\\nAssistant:\", \"output\": \"Santa Clara, California\"}\n", + "{\"input\": \"User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24\\u201310 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \\\"golden anniversary\\\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \\\"Super Bowl L\\\"), so that the logo could prominently feature the Arabic numerals 50. Question:Which NFL team won Super Bowl 50?\\n\\nAssistant:\", \"output\": \"Denver Broncos\"}\n", + "{\"input\": \"User: Context:Architecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \\\"Venite Ad Me Omnes\\\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary. Question:To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?\\n\\nAssistant:\", \"output\": \"Saint Bernadette Soubirous\"}\n", + "{\"input\": \"User: Context:Architecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \\\"Venite Ad Me Omnes\\\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. 
At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary. Question:What is in front of the Notre Dame Main Building?\\n\\nAssistant:\", \"output\": \"a copper statue of Christ\"}\n", + "{\"input\": \"User: Context:Architecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \\\"Venite Ad Me Omnes\\\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary. Question:The Basilica of the Sacred heart at Notre Dame is beside to which structure?\\n\\nAssistant:\", \"output\": \"the Main Building\"}\n", + "{\"input\": \"User: Context:Architecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \\\"Venite Ad Me Omnes\\\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary. Question:What is the Grotto at Notre Dame?\\n\\nAssistant:\", \"output\": \"a Marian place of prayer and reflection\"}\n" + ] + } + ], + "source": [ + "# What the squad dataset looks like after processing\n", + "! head -200 $SQUAD_DIR/squad_train.jsonl > $SQUAD_DIR/squad_short_train.jsonl\n", + "! head -20 $SQUAD_DIR/squad_val.jsonl > $SQUAD_DIR/squad_short_val.jsonl\n", + "! head -4 $SQUAD_DIR/squad_short_val.jsonl\n", + "! head -4 $SQUAD_DIR/squad_short_train.jsonl" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "2e19c8dc", + "metadata": {}, + "source": [ + "### Model Config Setup\n", + "Now we will begin setting up the config file needed for PEFT tuning. We use a single config for all supported PEFT methods (LoRA, Adapter and P-Tuning). All PEFT methods use classes defined in [megatron_gpt_peft_models.py](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/nlp/models/language_modeling/megatron_gpt_peft_models.py). All PEFT Classes inherit from `MegatronGPTSFTModel` which is the class that governs instruction tuning." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "5749c387", + "metadata": {}, + "outputs": [], + "source": [ + "from omegaconf import OmegaConf\n", + "\n", + "CONFIG_DIR = os.path.join(NEMO_DIR, \"conf\")\n", + "os.makedirs(CONFIG_DIR, exist_ok=True)\n", + "\n", + "# Download the example config file\n", + "wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/language_modeling/tuning/conf/megatron_gpt_peft_tuning_config.yaml', CONFIG_DIR)\n", + "\n", + "# Load the example config file so we can start editing it\n", + "CONFIG_PATH = os.path.join(CONFIG_DIR, \"megatron_gpt_peft_tuning_config.yaml\")\n", + "config = OmegaConf.load(CONFIG_PATH)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "ce966bcf", + "metadata": {}, + "source": [ + "The `config` contains several attributes required by the `MegatronGPTPEFTModel`. First we will set the training data path and the validation data path in the config.\n", + "The `config` allows us to set a list of `jsonl` files as training files and sample examples from each file with different probabilities. For simplicity we are going to use just one training file and thus the sampling probability is set to `1.0`\n", + "\n", + "We can also monitor validation loss from multiple validation files during training. Again for simplicity we will use just one validation file." + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "6bb1590f", + "metadata": {}, + "outputs": [], + "source": [ + "config.model.data.train_ds.file_names = [f\"{SQUAD_DIR}/squad_short_train.jsonl\"]\n", + "config.model.data.train_ds.concat_sampling_probabilities=[1.0]\n", + "config.model.data.validation_ds.file_names = [f\"{SQUAD_DIR}/squad_short_val.jsonl\"]\n", + "config.model.data.validation_ds.names=[\"squad_val\"]" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "f6b7831a", + "metadata": {}, + "source": [ + "### PEFT Config\n", + "The attribute [config.model.peft](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/language_modeling/tuning/conf/megatron_gpt_peft_tuning_config.yaml#L78) contains settings that control the PEFT training method and its related hyperpameters. We currently support `lora`, `adapters`, `ptuning` and `ia3`. We can instruct the training script to use one of these methods by setting the config.model.peft.peft_scheme attribute.\n", + "\n", + "The other hyperparams associated with lora tuning are present in the [config.model.peft.lora_tuning](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/language_modeling/tuning/conf/megatron_gpt_peft_tuning_config.yaml#L92) attribute." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "72c9f966", + "metadata": {}, + "outputs": [], + "source": [ + "config.model.peft.peft_scheme=\"lora\" # we can also set this to adapter or ptuning or ia3\n", + "print(OmegaConf.to_yaml(config.model.peft.lora_tuning))" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "c32e73c3", + "metadata": {}, + "source": [ + "**Note:** In the original LoRA paper each attention projection (`K`, `Q`, `V` and `O`) can have their own Low-Rank projections. However, NeMo's attention implementation fuses `KQV` into a single projection and thus our LoRA implementation learns a single Low-Rank projection for `KQV` in a combined fashion. We do not support LoRA for the `O` matrix at this point." 
+ ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "4e021b24", + "metadata": {}, + "source": [ + "### Prompt Formatting\n", + "The `config.model.data.train_ds.prompt_template` attribute allows us to further tweak the format of the input and output if needed. In this example, we have \"encoding\" our format inside the `jsonl` file directly. So we can keep the `prompt_template` in the config simple.(See previous section on Data Preparation). " + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "1b6aa5c7", + "metadata": {}, + "outputs": [], + "source": [ + "config.model.data.train_ds.prompt_template =\"{input} {output}\"" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "a0d5017e", + "metadata": {}, + "source": [ + "### Setting the Pretrained GPT Model\n", + "Next we will set the \"base language model\" upon which we will perform LoRA tuning. Obviously, larger base models will have better performance on downstream tasks but for the purposes of this tutorial we will use a small 345M parameter GPT model." + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "id": "48cdf868", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "[NeMo W 2023-05-30 14:08:23 experimental:27] Module is experimental, not ready for production and is not fully supported. Use at your own risk.\n", + "[NeMo W 2023-05-30 14:08:24 experimental:27] Module is experimental, not ready for production and is not fully supported. Use at your own risk.\n" + ] + }, + { + "data": { + "text/plain": [ + "'https://api.ngc.nvidia.com/v2/models/nvidia/nemo/megatron_gpt_345m/versions/1/files/megatron_gpt_345m.nemo'" + ] + }, + "execution_count": 11, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Check what GPT .nemo models we have available on NGC\n", + "from nemo.collections.nlp.models.language_modeling.megatron_gpt_model import MegatronGPTModel\n", + "megatron_gpt_345m_nemo_url = MegatronGPTModel.list_available_models()[0].location\n", + "megatron_gpt_345m_nemo_url # should point to the 345m megatron gpt model '.nemo' file" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "ede350ed", + "metadata": {}, + "source": [ + "If we wanted to use the GPT model class directly, we could instantiate a trainer then download the model by calling running \n", + "`gpt_model = MegatronGPTModel.from_pretrained(model_name=\"megatron_gpt_345m\", trainer=trainer).cuda()`. But we just need the `.nemo` file in our working NeMo directory in this tutorial, so we will download it using `wget`. " + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "id": "364439a1", + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "File ‘./megatron_gpt_345m.nemo’ already there; not retrieving.\n" + ] + } + ], + "source": [ + "# Download the model from NGC\n", + "gpt_file_name = \"megatron_gpt_345m.nemo\"\n", + "!wget -nc --content-disposition {megatron_gpt_345m_nemo_url} -O {NEMO_DIR}/{gpt_file_name}" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "1d6a8a67", + "metadata": {}, + "source": [ + "Now that we have a `.nemo` GPT file to work with. We need to add its path in our prompt learning config. 
" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "2778a5fa", + "metadata": {}, + "outputs": [], + "source": [ + "# Set GPT model path on prompt learning config\n", + "config.model.restore_from_path = gpt_file_name" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "943a9c83", + "metadata": {}, + "source": [ + "Next, we will set where we want to save all the intermediate training logs and checkpoints. As well as other training settings such as: number of training steps, batch size and validation check interval, and num_workers for data processing." + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "id": "a278cbdf", + "metadata": {}, + "outputs": [], + "source": [ + "config.exp_manager.exp_dir=f\"{NEMO_DIR}/peft_lora\"\n", + "config.exp_manager.explicit_log_dir=\"training_info\"\n", + "config.trainer.max_steps=100\n", + "config.model.micro_batch_size=1\n", + "config.model.global_batch_size=4\n", + "config.trainer.val_check_interval=50\n", + "config.model.data.train_ds.num_workers=0 # 0 is recommended which just uses the main thread to process training examples\n", + "config.model.data.validation_ds.num_workers=0 # 0 is recommended which just uses the main thread to process the validation examples" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "a988d16e", + "metadata": {}, + "source": [ + "Let's have a look at all the values we've set in the model config. You can change any of these values in the same manner we've been using above. " + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "id": "12a37ada", + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "seed: 1234\n", + "tensor_model_parallel_size: 1\n", + "pipeline_model_parallel_size: 1\n", + "global_batch_size: 4\n", + "micro_batch_size: 1\n", + "restore_from_path: megatron_gpt_345m.nemo\n", + "resume_from_checkpoint: null\n", + "save_nemo_on_validation_end: false\n", + "sync_batch_comm: false\n", + "megatron_amp_O2: false\n", + "sequence_parallel: false\n", + "activations_checkpoint_granularity: null\n", + "activations_checkpoint_method: null\n", + "activations_checkpoint_num_layers: null\n", + "answer_only_loss: true\n", + "gradient_as_bucket_view: false\n", + "hidden_dropout: 0.0\n", + "attention_dropout: 0.0\n", + "ffn_dropout: 0.0\n", + "peft:\n", + " peft_scheme: adapter\n", + " restore_from_path: null\n", + " adapter_tuning:\n", + " type: parallel_adapter\n", + " adapter_dim: 32\n", + " adapter_dropout: 0.0\n", + " norm_position: pre\n", + " column_init_method: xavier\n", + " row_init_method: zero\n", + " norm_type: mixedfusedlayernorm\n", + " lora_tuning:\n", + " adapter_dim: 32\n", + " adapter_dropout: 0.0\n", + " column_init_method: xavier\n", + " row_init_method: zero\n", + " p_tuning:\n", + " virtual_tokens: 10\n", + " bottleneck_dim: 1024\n", + " embedding_dim: 1024\n", + " init_std: 0.023\n", + "data:\n", + " train_ds:\n", + " file_names:\n", + " - data/SQuAD/squad_short_train.jsonl\n", + " global_batch_size: ${model.global_batch_size}\n", + " micro_batch_size: ${model.micro_batch_size}\n", + " shuffle: true\n", + " num_workers: 0\n", + " pin_memory: true\n", + " max_seq_length: 2048\n", + " min_seq_length: 1\n", + " drop_last: true\n", + " concat_sampling_probabilities:\n", + " - 1.0\n", + " context_key: input\n", + " label_key: output\n", + " add_eos: true\n", + " add_sep: false\n", + " add_bos: false\n", + " separate_prompt_and_response_with_newline: false\n", + " 
truncation_field: context\n", + " index_mapping_dir: null\n", + " prompt_template: '{input} {output}'\n", + " validation_ds:\n", + " file_names:\n", + " - data/SQuAD/squad_short_val.jsonl\n", + " names:\n", + " - squad_val\n", + " global_batch_size: ${model.global_batch_size}\n", + " micro_batch_size: ${model.micro_batch_size}\n", + " shuffle: false\n", + " num_workers: 0\n", + " pin_memory: true\n", + " max_seq_length: 2048\n", + " min_seq_length: 1\n", + " drop_last: false\n", + " context_key: input\n", + " label_key: output\n", + " add_eos: ${model.data.train_ds.add_eos}\n", + " add_sep: ${model.data.train_ds.add_sep}\n", + " add_bos: ${model.data.train_ds.add_bos}\n", + " separate_prompt_and_response_with_newline: ${model.data.train_ds.separate_prompt_and_response_with_newline}\n", + " write_predictions_to_file: false\n", + " output_file_path_prefix: null\n", + " truncation_field: context\n", + " index_mapping_dir: null\n", + " prompt_template: ${model.data.train_ds.prompt_template}\n", + " metric:\n", + " name: loss\n", + " average: null\n", + " num_classes: null\n", + "test_ds:\n", + " file_names: null\n", + " names: null\n", + " global_batch_size: ${model.global_batch_size}\n", + " micro_batch_size: ${model.micro_batch_size}\n", + " shuffle: false\n", + " num_workers: 4\n", + " pin_memory: true\n", + " max_seq_length: 2048\n", + " min_seq_length: 1\n", + " drop_last: false\n", + " context_key: input\n", + " label_key: output\n", + " add_eos: ${model.data.train_ds.add_eos}\n", + " add_sep: ${model.data.train_ds.add_sep}\n", + " add_bos: ${model.data.train_ds.add_bos}\n", + " separate_prompt_and_response_with_newline: ${model.data.train_ds.separate_prompt_and_response_with_newline}\n", + " write_predictions_to_file: false\n", + " output_file_path_prefix: null\n", + " truncation_field: context\n", + " index_mapping_dir: null\n", + " prompt_template: ${model.data.train_ds.prompt_template}\n", + " metric:\n", + " name: loss\n", + " average: null\n", + " num_classes: null\n", + "optim:\n", + " name: fused_adam\n", + " lr: 0.0001\n", + " weight_decay: 0.01\n", + " betas:\n", + " - 0.9\n", + " - 0.98\n", + " sched:\n", + " name: CosineAnnealing\n", + " warmup_steps: 50\n", + " min_lr: 0.0\n", + " constant_steps: 0\n", + " monitor: val_loss\n", + " reduce_on_plateau: false\n", + "\n" + ] + } + ], + "source": [ + "# Final model config\n", + "print(OmegaConf.to_yaml(config.model))" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "4c048852", + "metadata": {}, + "source": [ + "### Building the PyTorch Lightning Trainer\n", + "NeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.\n", + "\n", + "Let's first instantiate a Trainer object" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "id": "90f85b2a", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Using 16bit None Automatic Mixed Precision (AMP)\n", + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "IPU available: False, using: 0 IPUs\n", + "HPU available: False, using: 0 HPUs\n", + "`Trainer(val_check_interval=1.0)` was configured so validation will run at the end of the training epoch..\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Trainer config - \n", + "\n", + "devices: 1\n", + "accelerator: gpu\n", + "num_nodes: 1\n", + "precision: 16\n", + "logger: false\n", + "enable_checkpointing: false\n", + 
"replace_sampler_ddp: false\n", + "max_epochs: 4\n", + "max_steps: 100\n", + "log_every_n_steps: 10\n", + "val_check_interval: 1.0\n", + "gradient_clip_val: 1.0\n", + "\n" + ] + } + ], + "source": [ + "import torch\n", + "import pytorch_lightning as pl\n", + "from nemo.collections.nlp.parts.nlp_overrides import NLPDDPStrategy\n", + "from pytorch_lightning.plugins.environments import TorchElasticEnvironment\n", + "\n", + "# let's modify some trainer configs\n", + "# check if we have GPU available and uses it\n", + "accelerator = 'gpu' if torch.cuda.is_available() else 'cpu'\n", + "config.trainer.accelerator = accelerator\n", + "config.trainer.devices = 1\n", + "config.trainer.max_epochs = 4\n", + "config.trainer.val_check_interval = 1.0\n", + "\n", + "# for PyTorch Native AMP set precision=16\n", + "config.trainer.precision = 16 if torch.cuda.is_available() else 32\n", + "\n", + "# setup cluster environment parameters\"\n", + "# use torch elastic cluster environment so `create_process_externally` is True\n", + "# the launcher is set to None. It will not try to spawn new processes.\n", + "# It won't create the misconfiguration error because of the `interactive session`\n", + "os.environ[\"LOCAL_RANK\"] = '0'\n", + "os.environ[\"RANK\"] = '0'\n", + "os.environ[\"WORLD_SIZE\"] = '1'\n", + "\n", + "strategy = NLPDDPStrategy(find_unused_parameters=False, no_ddp_communication_hook=True)\n", + "plugins = [TorchElasticEnvironment()]\n", + "trainer = pl.Trainer(plugins= plugins, strategy=strategy, **config.trainer)\n", + "\n", + "print(\"Trainer config - \\n\")\n", + "print(OmegaConf.to_yaml(config.trainer))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "890f0dc5", + "metadata": {}, + "outputs": [], + "source": [ + "print(OmegaConf.to_yaml(config.exp_manager))" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "4d0124c1", + "metadata": {}, + "source": [ + "### Setting up a NeMo Experiment\n", + "\n", + "NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "id": "f2c943ba", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "[NeMo E 2023-05-30 14:09:17 exp_manager:646] exp_manager received explicit_log_dir: training_info and at least one of exp_dir: ./peft_lora, or version: None. Please note that exp_dir, name, and version will be ignored.\n", + "[NeMo W 2023-05-30 14:09:17 exp_manager:651] Exp_manager is logging to training_info, but it already exists.\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[NeMo I 2023-05-30 14:09:17 exp_manager:374] Experiments will be logged at training_info\n", + "[NeMo I 2023-05-30 14:09:17 exp_manager:797] TensorboardLogger has been set up\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "[NeMo W 2023-05-30 14:09:17 exp_manager:893] The checkpoint callback was told to monitor a validation value and trainer's max_steps was set to 100. 
Please ensure that max_steps will run for at least 1 epochs to ensure that checkpointing will not error out.\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "training_info\n" + ] + } + ], + "source": [ + "from nemo.utils.exp_manager import exp_manager\n", + "\n", + "# Set name of the experiment \n", + "config.name = 'lora_example_tuning'\n", + "config.exp_manager.resume_if_exists = False\n", + "\n", + "# Init the experiment manager and view the exp_dir\n", + "exp_dir = exp_manager(trainer, config.get(\"exp_manager\", None))\n", + "exp_dir = str(exp_dir)\n", + "print(exp_dir)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "298b3dce", + "metadata": {}, + "source": [ + "### LoRA Training\n", + "We now set up the process for training a LoRA model. We first require a config that contains details about the base language model upon which we will train our LoRA model. So we first extract the `base_model_cfg`" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "id": "edb38445", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "[NeMo W 2023-05-30 14:09:30 experimental:27] Module is experimental, not ready for production and is not fully supported. Use at your own risk.\n" + ] + } + ], + "source": [ + "from nemo.collections.nlp.models.language_modeling.megatron_gpt_sft_model import MegatronGPTModel\n", + "from nemo.collections.nlp.parts.nlp_overrides import NLPSaveRestoreConnector, PEFTSaveRestoreConnector\n", + "base_model_save_restore_connector = NLPSaveRestoreConnector()\n", + "base_model_cfg = MegatronGPTModel.restore_from(\n", + " restore_path=config.model.restore_from_path,\n", + " trainer=trainer,\n", + " return_config=True,\n", + " save_restore_connector=base_model_save_restore_connector,\n", + " )" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "16bace39", + "metadata": {}, + "source": [ + "Next, we update the `base_model_cfg` with any new settings we employ in our current (LoRA) `config`." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 19, + "id": "fd350dbc", + "metadata": {}, + "outputs": [], + "source": [ + "from omegaconf.omegaconf import open_dict\n", + "from nemo.collections.nlp.models.language_modeling.megatron_gpt_peft_models import MegatronGPTLoRAModel\n", + "OmegaConf.set_struct(base_model_cfg, True)\n", + "OmegaConf.resolve(config)\n", + "with open_dict(base_model_cfg):\n", + " base_model_cfg.megatron_amp_O2 = config.model.get('megatron_amp_O2', False)\n", + " base_model_cfg.micro_batch_size = config.model.data.train_ds.micro_batch_size\n", + " base_model_cfg.global_batch_size = config.model.data.train_ds.global_batch_size\n", + " base_model_cfg.sequence_parallel = config.model.get(\"sequence_parallel\", False)\n", + " base_model_cfg.data = config.model.data\n", + " base_model_cfg.optim = config.model.optim\n", + " base_model_cfg.precision = config.trainer.precision\n", + " base_model_cfg.answer_only_loss = config.model.answer_only_loss\n", + " base_model_cfg.restore_from_path = config.model.restore_from_path\n", + " base_model_cfg.resume_from_checkpoint = config.model.resume_from_checkpoint\n", + " base_model_cfg.save_nemo_on_validation_end = config.model.save_nemo_on_validation_end\n", + " base_model_cfg.peft = config.model.peft\n", + " base_model_cfg.target = f\"{MegatronGPTLoRAModel.__module__}.{MegatronGPTLoRAModel.__name__}\"" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "dfc55a1c", + "metadata": {}, + "source": [ + "Next, we instantiate the LoRA model class" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "id": "a81d8741", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[NeMo I 2023-05-30 14:09:39 megatron_init:232] Rank 0 has data parallel group: [0]\n", + "[NeMo I 2023-05-30 14:09:39 megatron_init:235] All data parallel group ranks: [[0]]\n", + "[NeMo I 2023-05-30 14:09:39 megatron_init:236] Ranks 0 has data parallel rank: 0\n", + "[NeMo I 2023-05-30 14:09:39 megatron_init:244] Rank 0 has model parallel group: [0]\n", + "[NeMo I 2023-05-30 14:09:39 megatron_init:245] All model parallel group ranks: [[0]]\n", + "[NeMo I 2023-05-30 14:09:39 megatron_init:255] Rank 0 has tensor model parallel group: [0]\n", + "[NeMo I 2023-05-30 14:09:39 megatron_init:259] All tensor model parallel group ranks: [[0]]\n", + "[NeMo I 2023-05-30 14:09:39 megatron_init:260] Rank 0 has tensor model parallel rank: 0\n", + "[NeMo I 2023-05-30 14:09:39 megatron_init:274] Rank 0 has pipeline model parallel group: [0]\n", + "[NeMo I 2023-05-30 14:09:39 megatron_init:286] Rank 0 has embedding group: [0]\n", + "[NeMo I 2023-05-30 14:09:39 megatron_init:292] All pipeline model parallel group ranks: [[0]]\n", + "[NeMo I 2023-05-30 14:09:39 megatron_init:293] Rank 0 has pipeline model parallel rank 0\n", + "[NeMo I 2023-05-30 14:09:39 megatron_init:294] All embedding group ranks: [[0]]\n", + "[NeMo I 2023-05-30 14:09:39 megatron_init:295] Rank 0 has embedding rank: 0\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "[NeMo W 2023-05-30 14:09:39 modelPT:244] You tried to register an artifact under config key=tokenizer.vocab_file but an artifact for it has already been registered.\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[NeMo I 2023-05-30 14:09:39 tokenizer_utils:204] Getting Megatron tokenizer for pretrained model name: megatron-gpt-345m, custom vocab file: /tmp/tmp1qljai9b/bfcdca5e44814366bdb5dcd651325152_gpt2-vocab.json, and 
merges file: /tmp/tmp1qljai9b/315a11fd68be49d6abdb34363e8c4997_gpt2-merge.txt\n", + "[NeMo I 2023-05-30 14:09:39 tokenizer_utils:130] Getting HuggingFace AutoTokenizer with pretrained_model_name: gpt2, vocab_file: /tmp/tmp1qljai9b/bfcdca5e44814366bdb5dcd651325152_gpt2-vocab.json, merges_files: /tmp/tmp1qljai9b/315a11fd68be49d6abdb34363e8c4997_gpt2-merge.txt, special_tokens_dict: {}, and use_fast: False\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Using sep_token, but it is not set yet.\n", + "Using cls_token, but it is not set yet.\n", + "Using pad_token, but it is not set yet.\n", + "Using mask_token, but it is not set yet.\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[NeMo I 2023-05-30 14:09:40 megatron_base_model:238] Padded vocab_size: 50304, original vocab_size: 50257, dummy tokens: 47.\n", + "[NeMo I 2023-05-30 14:09:41 megatron_gpt_peft_models:56] Before adding PEFT params:\n", + " | Name | Type | Params\n", + " -----------------------------------\n", + " 0 | model | GPTModel | 354 M \n", + " -----------------------------------\n", + " 354 M Trainable params\n", + " 0 Non-trainable params\n", + " 354 M Total params\n", + " 1,419.485 Total estimated model params size (MB)\n", + "[NeMo I 2023-05-30 14:09:41 megatron_gpt_peft_models:65] After adding PEFT params:\n", + " | Name | Type | Params\n", + " -----------------------------------\n", + " 0 | model | GPTModel | 358 M \n", + " -----------------------------------\n", + " 358 M Trainable params\n", + " 0 Non-trainable params\n", + " 358 M Total params\n", + " 1,432.068 Total estimated model params size (MB)\n", + "[NeMo I 2023-05-30 14:09:42 nlp_overrides:491] Model MegatronGPTLoRAModel was successfully restored from /home/adithyare/NeMo/tutorials/nlp/megatron_gpt_345m.nemo.\n" + ] + } + ], + "source": [ + "from nemo.collections.nlp.parts.nlp_overrides import PEFTSaveRestoreConnector\n", + "peft_save_restore_connector = PEFTSaveRestoreConnector(\n", + " peft_model_nemo_path=None, peft_model_ckpt_path=None\n", + " )\n", + "model = MegatronGPTLoRAModel.restore_from(\n", + " restore_path=config.model.restore_from_path,\n", + " trainer=trainer,\n", + " override_config_path=base_model_cfg,\n", + " save_restore_connector=peft_save_restore_connector,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "id": "2d99f433", + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "[NeMo W 2023-05-30 14:09:46 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/pytorch_lightning/trainer/configuration_validator.py:175: UserWarning: The `batch_idx` argument in `MegatronGPTLoRAModel.on_train_batch_start` hook may not match with the actual batch index when using a `dataloader_iter` argument in your `training_step`.\n", + " rank_zero_warn(\n", + " \n", + "[NeMo W 2023-05-30 14:09:46 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/pytorch_lightning/trainer/configuration_validator.py:175: UserWarning: The `batch_idx` argument in `MegatronGPTLoRAModel.on_train_batch_end` hook may not match with the actual batch index when using a `dataloader_iter` argument in your `training_step`.\n", + " rank_zero_warn(\n", + " \n", + "[NeMo W 2023-05-30 14:09:46 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/lightning_fabric/plugins/environments/torchelastic.py:36: UserWarning: MASTER_ADDR environment variable is not defined. 
Set as localhost\n", + " rank_zero_warn(\"MASTER_ADDR environment variable is not defined. Set as localhost\")\n", + " \n", + "[NeMo W 2023-05-30 14:09:46 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/lightning_fabric/plugins/environments/torchelastic.py:44: UserWarning: MASTER_PORT environment variable is not defined. Set as 12910\n", + " rank_zero_warn(\"MASTER_PORT environment variable is not defined. Set as 12910\")\n", + " \n", + "Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1\n", + "----------------------------------------------------------------------------------------------------\n", + "distributed_backend=nccl\n", + "All distributed processes registered. Starting with 1 processes\n", + "----------------------------------------------------------------------------------------------------\n", + "\n", + "You are using a CUDA device ('NVIDIA RTX A6000') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision\n", + "[NeMo W 2023-05-30 14:09:46 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/pytorch_lightning/callbacks/model_checkpoint.py:613: UserWarning: Checkpoint directory /home/adithyare/NeMo/tutorials/nlp/training_info/checkpoints exists and is not empty.\n", + " rank_zero_warn(f\"Checkpoint directory {dirpath} exists and is not empty.\")\n", + " \n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[NeMo I 2023-05-30 14:09:46 megatron_gpt_sft_model:634] Building GPT SFT validation datasets.\n", + "[NeMo I 2023-05-30 14:09:46 text_memmap_dataset:104] Building data files\n", + "[NeMo I 2023-05-30 14:09:46 text_memmap_dataset:343] Processing 1 data files using 12 workers\n", + "[NeMo I 2023-05-30 14:09:47 text_memmap_dataset:349] Time building 0 / 1 mem-mapped files: 0:00:00.360761\n", + "[NeMo I 2023-05-30 14:09:47 text_memmap_dataset:114] Loading data files\n", + "[NeMo I 2023-05-30 14:09:47 text_memmap_dataset:205] Loading data/SQuAD/squad_short_val.jsonl\n", + "[NeMo I 2023-05-30 14:09:47 text_memmap_dataset:117] Time loading 1 mem-mapped files: 0:00:00.002361\n", + "[NeMo I 2023-05-30 14:09:47 text_memmap_dataset:121] Computing global indices\n", + "[NeMo I 2023-05-30 14:09:47 megatron_gpt_sft_model:637] Length of val dataset: 20\n", + "[NeMo I 2023-05-30 14:09:47 megatron_gpt_sft_model:648] Building GPT SFT traing datasets.\n", + "[NeMo I 2023-05-30 14:09:47 text_memmap_dataset:104] Building data files\n", + "[NeMo I 2023-05-30 14:09:47 text_memmap_dataset:343] Processing 1 data files using 12 workers\n", + "[NeMo I 2023-05-30 14:09:47 text_memmap_dataset:349] Time building 0 / 1 mem-mapped files: 0:00:00.299554\n", + "[NeMo I 2023-05-30 14:09:47 text_memmap_dataset:114] Loading data files\n", + "[NeMo I 2023-05-30 14:09:47 text_memmap_dataset:205] Loading data/SQuAD/squad_short_train.jsonl\n", + "[NeMo I 2023-05-30 14:09:47 text_memmap_dataset:117] Time loading 1 mem-mapped files: 0:00:00.001065\n", + "[NeMo I 2023-05-30 14:09:47 text_memmap_dataset:121] Computing global indices\n", + "[NeMo I 2023-05-30 14:09:47 dataset_utils:1341] > loading indexed mapping from data/SQuAD/squad_short_train.jsonl_squad_short_train.jsonl_indexmap_402mns_2046msl_0.00ssp_1234s.npy\n", + "[NeMo I 2023-05-30 14:09:47 dataset_utils:1344] loaded 
indexed file in 0.001 seconds\n", + "[NeMo I 2023-05-30 14:09:47 dataset_utils:1345] total number of samples: 600\n", + "make: Entering directory '/home/adithyare/NeMo/nemo/collections/nlp/data/language_modeling/megatron'\n", + "make: Nothing to be done for 'default'.\n", + "make: Leaving directory '/home/adithyare/NeMo/nemo/collections/nlp/data/language_modeling/megatron'\n", + "[NeMo I 2023-05-30 14:09:47 blendable_dataset:67] > elapsed time for building blendable dataset indices: 0.09 (sec)\n", + "> building indices for blendable datasets ...\n", + " > sample ratios:\n", + " dataset 0, input: 1, achieved: 1\n", + "[NeMo I 2023-05-30 14:09:47 megatron_gpt_sft_model:650] Length of train dataset: 402\n", + "[NeMo I 2023-05-30 14:09:47 megatron_gpt_sft_model:655] Building dataloader with consumed samples: 0\n", + "[NeMo I 2023-05-30 14:09:47 megatron_gpt_sft_model:655] Building dataloader with consumed samples: 0\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[NeMo I 2023-05-30 14:09:47 nlp_overrides:124] Configuring DDP for model parallelism.\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 adapter_mixins:430] Unfrozen adapter : lora_kqv_adapter\n", + "[NeMo I 2023-05-30 14:09:47 megatron_gpt_peft_models:130] Optimizer groups set:\n", + " | Name | Type | Params\n", + " 
-----------------------------------\n", + " 0 | model | GPTModel | 358 M \n", + " -----------------------------------\n", + " 3.1 M Trainable params\n", + " 354 M Non-trainable params\n", + " 358 M Total params\n", + " 716.034 Total estimated model params size (MB)\n", + "[NeMo I 2023-05-30 14:09:47 modelPT:721] Optimizer config = FusedAdam (\n", + " Parameter Group 0\n", + " betas: [0.9, 0.98]\n", + " bias_correction: True\n", + " eps: 1e-08\n", + " lr: 0.0001\n", + " weight_decay: 0.01\n", + " )\n", + "[NeMo I 2023-05-30 14:09:47 lr_scheduler:910] Scheduler \"\" \n", + " will be used during training (effective maximum steps = 100) - \n", + " Parameters : \n", + " (warmup_steps: 50\n", + " min_lr: 0.0\n", + " constant_steps: 0\n", + " max_steps: 100\n", + " )\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "\n", + " | Name | Type | Params\n", + "-----------------------------------\n", + "0 | model | GPTModel | 358 M \n", + "-----------------------------------\n", + "3.1 M Trainable params\n", + "354 M Non-trainable params\n", + "358 M Total params\n", + "716.034 Total estimated model params size (MB)\n" + ] + }, + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "3cb87a7b9d4b46e4a0fb0f0670351fbd", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Sanity Checking: 0it [00:00, ?it/s]" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "[NeMo W 2023-05-30 14:09:48 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:224: PossibleUserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 24 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.\n", + " rank_zero_warn(\n", + " \n", + "[NeMo W 2023-05-30 14:09:48 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py:401: UserWarning: Found `dataloader_iter` argument in the `validation_step`. 
Note that the support for this signature is experimental and the behavior is subject to change.\n", + " rank_zero_warn(\n", + " \n", + "[NeMo W 2023-05-30 14:09:48 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/apex/transformer/pipeline_parallel/utils.py:81: UserWarning: This function is only for unittest\n", + " warnings.warn(\"This function is only for unittest\")\n", + " \n", + "[NeMo W 2023-05-30 14:09:49 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:536: PossibleUserWarning: It is recommended to use `self.log('val_loss', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.\n", + " warning_cache.warn(\n", + " \n", + "[NeMo W 2023-05-30 14:09:49 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:536: PossibleUserWarning: It is recommended to use `self.log('validation_loss_squad_val', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.\n", + " warning_cache.warn(\n", + " \n", + "[NeMo W 2023-05-30 14:09:49 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:536: PossibleUserWarning: It is recommended to use `self.log('validation_loss', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.\n", + " warning_cache.warn(\n", + " \n", + "[NeMo W 2023-05-30 14:09:49 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:224: PossibleUserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 24 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.\n", + " rank_zero_warn(\n", + " \n", + "[NeMo W 2023-05-30 14:09:49 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py:344: UserWarning: Found `dataloader_iter` argument in the `training_step`. Note that the support for this signature is experimental and the behavior is subject to change.\n", + " rank_zero_warn(\n", + " \n" + ] + }, + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "c7a473adeca64c828d2a1338dab1e76b", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Training: 0it [00:00, ?it/s]" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "[NeMo W 2023-05-30 14:09:51 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:232: UserWarning: You called `self.log('global_step', ...)` in your `training_step` but the value needs to be floating point. Converting it to torch.float32.\n", + " warning_cache.warn(\n", + " \n", + "[NeMo W 2023-05-30 14:09:51 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:232: UserWarning: You called `self.log('consumed_samples', ...)` in your `training_step` but the value needs to be floating point. 
Converting it to torch.float32.\n", + " warning_cache.warn(\n", + " \n", + "[NeMo W 2023-05-30 14:09:51 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate\n", + " warnings.warn(\"Detected call of `lr_scheduler.step()` before `optimizer.step()`. \"\n", + " \n" + ] + }, + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "a0606700c7ab495eb08ed88c16949569", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Validation: 0it [00:00, ?it/s]" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Epoch 0, global step 100: 'validation_loss' reached 0.30823 (best 0.30823), saving model to '/home/adithyare/NeMo/tutorials/nlp/training_info/checkpoints/lora_example_tuning--validation_loss=0.308-step=100-consumed_samples=396.0-v2.ckpt' as top 1\n", + "Metric val_loss improved. New best score: 0.308\n", + "`Trainer.fit` stopped: `max_steps=100` reached.\n", + "Restoring states from the checkpoint path at /home/adithyare/NeMo/tutorials/nlp/training_info/checkpoints/lora_example_tuning--validation_loss=0.308-step=100-consumed_samples=396.0-v2.ckpt\n", + "Restored all states from the checkpoint file at /home/adithyare/NeMo/tutorials/nlp/training_info/checkpoints/lora_example_tuning--validation_loss=0.308-step=100-consumed_samples=396.0-v2.ckpt\n" + ] + } + ], + "source": [ + "# Training will stop at max_steps=100, which we set in a cell above\n", + "trainer.fit(model)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "b8210d6d", + "metadata": {}, + "source": [ + "Once training is completed, you should see a saved '.nemo' file in the folder `{config.exp_manager.explicit_log_dir}/checkpoints`." + ] + }, + { + "cell_type": "code", + "execution_count": 22, + "id": "e4e19e65", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "total 230M\n", + "-rw-rw-r-- 1 adithyare adithyare 14M May 30 14:10 lora_example_tuning.nemo\n", + "-rw-rw-r-- 1 adithyare adithyare 37M May 27 09:47 'lora_example_tuning--validation_loss=0.308-step=100-consumed_samples=396.0.ckpt'\n", + "-rw-rw-r-- 1 adithyare adithyare 37M May 27 09:47 'lora_example_tuning--validation_loss=0.308-step=100-consumed_samples=396.0-last.ckpt'\n", + "-rw-rw-r-- 1 adithyare adithyare 37M May 30 11:12 'lora_example_tuning--validation_loss=0.308-step=100-consumed_samples=396.0-last-v1.ckpt'\n", + "-rw-rw-r-- 1 adithyare adithyare 37M May 30 14:10 'lora_example_tuning--validation_loss=0.308-step=100-consumed_samples=396.0-last-v2.ckpt'\n", + "-rw-rw-r-- 1 adithyare adithyare 37M May 30 11:12 'lora_example_tuning--validation_loss=0.308-step=100-consumed_samples=396.0-v1.ckpt'\n", + "-rw-rw-r-- 1 adithyare adithyare 37M May 30 14:10 'lora_example_tuning--validation_loss=0.308-step=100-consumed_samples=396.0-v2.ckpt'\n", + "training_info\n" + ] + } + ], + "source": [ + "# The trained '.nemo' model is saved in the location below:\n", + "! 
+ { + "attachments": {}, + "cell_type": "markdown", + "id": "6aab09d4", + "metadata": {}, + "source": [ + "### Inference\n", + "The model object returned by `trainer.fit(model)` is also capable of running inference. For this tutorial, however, we will re-load the saved `.nemo` LoRA model along with a `.nemo` base language model to simulate a more realistic scenario (in which training does not happen right before inference).\n", + "\n", + "First, we will load and modify a config file that will be used for inference." + ] + }, + { + "cell_type": "code", + "execution_count": 23, + "id": "41ab98a9", + "metadata": {}, + "outputs": [], + "source": [ + "# Download the example config file\n", + "wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/language_modeling/tuning/conf/megatron_gpt_peft_eval_config.yaml', CONFIG_DIR)\n", + "\n", + "# Load the example config file so we can start editing it\n", + "CONFIG_EVAL_PATH = os.path.join(CONFIG_DIR, \"megatron_gpt_peft_eval_config.yaml\")\n", + "config_eval = OmegaConf.load(CONFIG_EVAL_PATH)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "36c58c18", + "metadata": {}, + "source": [ + "We are going to modify the `config_eval` object we created above. We will set the base language model to the `345m` model we downloaded earlier.\n", + "\n", + "We will also set `model.peft.restore_from_path` to the LoRA model we just trained. For this tutorial we will simply reuse the validation data for inference." + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "id": "64a4e71a", + "metadata": {}, + "outputs": [], + "source": [ + "config_eval.model.restore_from_path=\"megatron_gpt_345m.nemo\"\n", + "config_eval.model.peft.restore_from_path=\"./training_info/checkpoints/lora_example_tuning.nemo\"\n", + "config_eval.model.data.test_ds.file_names=[f\"{SQUAD_DIR}/squad_short_val.jsonl\"]\n", + "config_eval.model.data.test_ds.names=[\"test_set\"]\n", + "config_eval.model.data.test_ds.global_batch_size=1\n", + "config_eval.model.data.test_ds.micro_batch_size=1\n", + "config_eval.model.data.test_ds.tokens_to_generate=30\n", + "config_eval.inference.greedy=True" + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "id": "d8ace8f9", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Using 16bit None Automatic Mixed Precision (AMP)\n", + "GPU available: True (cuda), used: True\n", + "TPU available: False, using: 0 TPU cores\n", + "IPU available: False, using: 0 IPUs\n", + "HPU available: False, using: 0 HPUs\n" + ] + } + ], + "source": [ + "strategy_eval = NLPDDPStrategy(find_unused_parameters=False, no_ddp_communication_hook=True)\n", + "plugins_eval = [TorchElasticEnvironment()]\n", + "# note that the plugins, strategy and config.trainer args are the same as in the training portion of this tutorial\n", + "# we simply create a new trainer object that does not depend on the training section of this tutorial\n", + "trainer_eval = pl.Trainer(plugins=plugins_eval, strategy=strategy_eval, **config_eval.trainer)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "e745ac5e", + "metadata": {}, + "source": [ + "The `config_eval` object is the hydra config used at inference/test time, so it should contain the settings relevant to inference. But we still need to know some properties that were set at training time. 
For example, whether the training was done with `BOS` enabled or not, along with other model-specific attributes.\n", + "\n", + "So we extract `peft_model_cfg` from the `.nemo` file of the LoRA model we just trained." + ] + }, + { + "cell_type": "code", + "execution_count": 26, + "id": "e04a2201", + "metadata": {}, + "outputs": [], + "source": [ + "from nemo.collections.nlp.models.language_modeling.megatron_gpt_peft_models import MegatronGPTPEFTModel\n", + "peft_model_cfg = MegatronGPTPEFTModel.restore_from(\n", + " restore_path=\"./training_info/checkpoints/lora_example_tuning.nemo\", trainer=trainer_eval, return_config=True,\n", + ")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "79a17ac7", + "metadata": {}, + "source": [ + "We modify `peft_model_cfg` to include attributes from `config_eval` that are specific to inference time." + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "id": "0e0a17aa", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'file_names': ['data/SQuAD/squad_short_val.jsonl'], 'names': ['test_set'], 'global_batch_size': 1, 'micro_batch_size': 1, 'shuffle': False, 'num_workers': 0, 'pin_memory': True, 'max_seq_length': 2048, 'min_seq_length': 1, 'drop_last': False, 'context_key': '${data.train_ds.context_key}', 'label_key': '${data.train_ds.label_key}', 'add_eos': '${data.train_ds.add_eos}', 'add_sep': '${data.train_ds.add_sep}', 'add_bos': '${data.train_ds.add_bos}', 'separate_prompt_and_response_with_newline': '${data.train_ds.separate_prompt_and_response_with_newline}', 'write_predictions_to_file': False, 'output_file_path_prefix': None, 'truncation_field': '${data.train_ds.truncation_field}', 'index_mapping_dir': None, 'prompt_template': '${data.train_ds.prompt_template}', 'tokens_to_generate': 30, 'metric': {'name': 'loss', 'average': None, 'num_classes': None}}\n" + ] + } + ], + "source": [ + "with open_dict(peft_model_cfg):\n", + " # update the model config of the trained model with params we want to set at inference time.\n", + " peft_model_cfg.precision = config_eval.trainer.precision\n", + " peft_model_cfg.data.test_ds = config_eval.model.data.test_ds\n", + " peft_model_cfg.activations_checkpoint_granularity = None\n", + " peft_model_cfg.activations_checkpoint_method = None\n", + "\n", + "with open_dict(config_eval):\n", + " # update the config with the trained model config\n", + " # required for hydra interpolation to work inside cfg.inference\n", + " config_eval.inference.add_BOS = peft_model_cfg.data.test_ds.add_bos\n", + " config_eval.inference.tokens_to_generate = peft_model_cfg.data.test_ds.tokens_to_generate\n", + "\n", + "print(peft_model_cfg.data.test_ds)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "132ae378", + "metadata": {}, + "source": [ + "Next, we load the base language model as well as the LoRA model we just trained.\n",
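+ "\n", + "As a rough illustration (not NeMo code) of what loading a LoRA adapter on top of a frozen base model means: LoRA keeps the base weight matrix `W` frozen and stores two low-rank matrices `A` and `B`, so the effective weight at run time is `W + (alpha / r) * B @ A`. The sizes and scaling below are made-up values, just to show the shapes involved.\n", + "\n", + "```python\n", + "import torch\n", + "d, r, alpha = 1024, 8, 32 # hidden size, LoRA rank and scaling (illustrative values)\n", + "W = torch.randn(d, d) # frozen base weight (stored with the base model)\n", + "A = torch.randn(r, d) * 0.02 # trained low-rank factor (stored with the adapter)\n", + "B = torch.zeros(d, r) # trained low-rank factor, zero-initialised (stored with the adapter)\n", + "x = torch.randn(1, d)\n", + "y = x @ (W + (alpha / r) * (B @ A)).T # adapter applied on top of the frozen weight\n", + "```"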
+ ] + }, + { + "cell_type": "code", + "execution_count": 28, + "id": "b19cd0ce", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[NeMo I 2023-05-30 14:11:11 megatron_init:232] Rank 0 has data parallel group: [0]\n", + "[NeMo I 2023-05-30 14:11:11 megatron_init:235] All data parallel group ranks: [[0]]\n", + "[NeMo I 2023-05-30 14:11:11 megatron_init:236] Ranks 0 has data parallel rank: 0\n", + "[NeMo I 2023-05-30 14:11:11 megatron_init:244] Rank 0 has model parallel group: [0]\n", + "[NeMo I 2023-05-30 14:11:11 megatron_init:245] All model parallel group ranks: [[0]]\n", + "[NeMo I 2023-05-30 14:11:11 megatron_init:255] Rank 0 has tensor model parallel group: [0]\n", + "[NeMo I 2023-05-30 14:11:11 megatron_init:259] All tensor model parallel group ranks: [[0]]\n", + "[NeMo I 2023-05-30 14:11:11 megatron_init:260] Rank 0 has tensor model parallel rank: 0\n", + "[NeMo I 2023-05-30 14:11:11 megatron_init:274] Rank 0 has pipeline model parallel group: [0]\n", + "[NeMo I 2023-05-30 14:11:11 megatron_init:286] Rank 0 has embedding group: [0]\n", + "[NeMo I 2023-05-30 14:11:11 megatron_init:292] All pipeline model parallel group ranks: [[0]]\n", + "[NeMo I 2023-05-30 14:11:11 megatron_init:293] Rank 0 has pipeline model parallel rank 0\n", + "[NeMo I 2023-05-30 14:11:11 megatron_init:294] All embedding group ranks: [[0]]\n", + "[NeMo I 2023-05-30 14:11:11 megatron_init:295] Rank 0 has embedding rank: 0\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "[NeMo W 2023-05-30 14:11:11 modelPT:244] You tried to register an artifact under config key=tokenizer.vocab_file but an artifact for it has already been registered.\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[NeMo I 2023-05-30 14:11:11 tokenizer_utils:204] Getting Megatron tokenizer for pretrained model name: megatron-gpt-345m, custom vocab file: /tmp/tmp5lxz3z8d/bfcdca5e44814366bdb5dcd651325152_gpt2-vocab.json, and merges file: /tmp/tmp5lxz3z8d/315a11fd68be49d6abdb34363e8c4997_gpt2-merge.txt\n", + "[NeMo I 2023-05-30 14:11:11 tokenizer_utils:130] Getting HuggingFace AutoTokenizer with pretrained_model_name: gpt2, vocab_file: /tmp/tmp5lxz3z8d/bfcdca5e44814366bdb5dcd651325152_gpt2-vocab.json, merges_files: /tmp/tmp5lxz3z8d/315a11fd68be49d6abdb34363e8c4997_gpt2-merge.txt, special_tokens_dict: {}, and use_fast: False\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Using sep_token, but it is not set yet.\n", + "Using cls_token, but it is not set yet.\n", + "Using pad_token, but it is not set yet.\n", + "Using mask_token, but it is not set yet.\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[NeMo I 2023-05-30 14:11:12 megatron_base_model:238] Padded vocab_size: 50304, original vocab_size: 50257, dummy tokens: 47.\n", + "[NeMo I 2023-05-30 14:11:12 build_model:143] > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 354871296\n", + "[NeMo I 2023-05-30 14:11:12 megatron_gpt_peft_models:56] Before adding PEFT params:\n", + " | Name | Type | Params\n", + " -----------------------------------\n", + " 0 | model | GPTModel | 354 M \n", + " -----------------------------------\n", + " 354 M Trainable params\n", + " 0 Non-trainable params\n", + " 354 M Total params\n", + " 1,419.485 Total estimated model params size (MB)\n", + "[NeMo I 2023-05-30 14:11:12 megatron_gpt_peft_models:65] After adding PEFT params:\n", + " | Name | Type | Params\n", + " 
-----------------------------------\n", + " 0 | model | GPTModel | 358 M \n", + " -----------------------------------\n", + " 358 M Trainable params\n", + " 0 Non-trainable params\n", + " 358 M Total params\n", + " 1,432.068 Total estimated model params size (MB)\n", + "[NeMo I 2023-05-30 14:11:13 nlp_overrides:491] Model MegatronGPTLoRAModel was successfully restored from /home/adithyare/NeMo/tutorials/nlp/megatron_gpt_345m.nemo.\n" + ] + } + ], + "source": [ + "save_restore_connector = PEFTSaveRestoreConnector(\n", + " peft_model_nemo_path=config_eval.model.peft.restore_from_path, peft_model_ckpt_path=None,\n", + ")\n", + "from nemo.collections.nlp.models.nlp_model import NLPModel\n", + "model_eval = MegatronGPTPEFTModel.restore_from(\n", + " restore_path=config_eval.model.restore_from_path,\n", + " trainer=trainer,\n", + " override_config_path=peft_model_cfg,\n", + " save_restore_connector=save_restore_connector,\n", + ")\n", + "\n", + "model_eval.freeze()" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "012439d9", + "metadata": {}, + "source": [ + "Next, we prepare the dataset and the dataloader objects that the model will perform inference on." + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "id": "12c390f8", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[NeMo I 2023-05-30 14:11:18 text_memmap_dataset:104] Building data files\n", + "[NeMo I 2023-05-30 14:11:18 text_memmap_dataset:343] Processing 1 data files using 12 workers\n", + "[NeMo I 2023-05-30 14:11:18 text_memmap_dataset:349] Time building 0 / 1 mem-mapped files: 0:00:00.706630\n", + "[NeMo I 2023-05-30 14:11:18 text_memmap_dataset:114] Loading data files\n", + "[NeMo I 2023-05-30 14:11:18 text_memmap_dataset:205] Loading data/SQuAD/squad_short_val.jsonl\n", + "[NeMo I 2023-05-30 14:11:18 text_memmap_dataset:117] Time loading 1 mem-mapped files: 0:00:00.001054\n", + "[NeMo I 2023-05-30 14:11:18 text_memmap_dataset:121] Computing global indices\n" + ] + } + ], + "source": [ + "_test_ds = model_eval._build_dataset(peft_model_cfg.data.test_ds, is_train=False)\n", + "from torch.utils.data import DataLoader\n", + "request_dl = DataLoader(\n", + " dataset=_test_ds[0],\n", + " batch_size=peft_model_cfg.data.test_ds.global_batch_size,\n", + " collate_fn=_test_ds[0].collate_fn,\n", + ")\n", + "config_inference = OmegaConf.to_container(config_eval.inference, resolve=True)\n", + "model_eval.set_inference_config(config_inference)\n" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "76592a1e", + "metadata": {}, + "source": [ + "And finally, we call `trainer.predict` which triggers the inference process. The `response` object contains the outputs of the model." + ] + }, + { + "cell_type": "markdown", + "id": "733c172c", + "metadata": {}, + "source": [] + }, + { + "cell_type": "code", + "execution_count": 30, + "id": "5ba6a70c", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "You are using a CUDA device ('NVIDIA RTX A6000') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. 
For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision\n", + "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]\n", + "[NeMo W 2023-05-30 14:11:30 nemo_logging:349] /home/adithyare/miniconda3/envs/n22/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:224: PossibleUserWarning: The dataloader, predict_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 24 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.\n", + " rank_zero_warn(\n", + " \n" + ] + }, + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "ddcc3ce26ed74665a8429953b929a037", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Predicting: 100it [00:00, ?it/s]" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "[NeMo W 2023-05-30 14:11:30 nemo_logging:349] /home/adithyare/NeMo/nemo/collections/nlp/modules/common/text_generation_utils.py:306: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /opt/conda/conda-bld/pytorch_1678402379298/work/torch/csrc/utils/tensor_numpy.cpp:206.)\n", + " string_tensor = torch.as_tensor(\n", + " \n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:Which NFL team represented the AFC at Super Bowl 50?\n", + "\n", + "Assistant: Denver Broncos\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. 
Question:Which NFL team represented the NFC at Super Bowl 50?\n", + "\n", + "Assistant: Denver Broncos\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:Where did Super Bowl 50 take place?\n", + "\n", + "Assistant: Santa Clara, California\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:Which NFL team won Super Bowl 50?\n", + "\n", + "Assistant: Denver Broncos\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:What color was used to emphasize the 50th anniversary of the Super Bowl?\n", + "\n", + "Assistant: gold\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. 
As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:What was the theme of Super Bowl 50?\n", + "\n", + "Assistant: \"Gold\"\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:What day was the game played on?\n", + "\n", + "Assistant: February 7, 2016\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:What is the AFC short for?\n", + "\n", + "Assistant: Super Bowl 50\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:What was the theme of Super Bowl 50?\n", + "\n", + "Assistant: \"Gold\"\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. 
The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:What does AFC stand for?\n", + "\n", + "Assistant: Super Bowl L\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:What day was the Super Bowl played on?\n", + "\n", + "Assistant: February 7, 2016\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:Who won Super Bowl 50?\n", + "\n", + "Assistant: Denver Broncos\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:What venue did Super Bowl 50 take place in?\n", + "\n", + "Assistant: Levi's Stadium\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. 
The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:What city did Super Bowl 50 take place in?\n", + "\n", + "Assistant: San Francisco\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:If Roman numerals were used, what would Super Bowl 50 have been called?\n", + "\n", + "Assistant: Super Bowl L\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:Super Bowl 50 decided the NFL champion for what season?\n", + "\n", + "Assistant: 2015\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. 
Question:What year did the Denver Broncos secure a Super Bowl title for the third time?\n", + "\n", + "Assistant: 2015\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:What city did Super Bowl 50 take place in?\n", + "\n", + "Assistant: San Francisco\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:What stadium did Super Bowl 50 take place in?\n", + "\n", + "Assistant: Levi's Stadium\n", + "\n", + "\n", + "User: Context:Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50. Question:What was the final score of Super Bowl 50? \n", + "\n", + "Assistant: 24–10\n", + "\n", + "\n" + ] + } + ], + "source": [ + "response = trainer.predict(model_eval, request_dl)\n", + "for batch in response:\n", + " for s in batch['sentences']:\n", + " print(f\"{s}\\n\\n\")" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.16" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +}