Clarify nemo example readme #2352

Merged: 1 commit on Feb 5, 2024
5 changes: 2 additions & 3 deletions integration/nemo/examples/peft/README.md
@@ -16,16 +16,16 @@ In the following, we assume this example folder of the container is mounted to `
> Note in the following, mount both the [current directory](./) and the [job_templates](../../../../job_templates)
> directory to locations inside the docker container. Please make sure you have cloned the full NVFlare repo.

-Start the docker container using
+Start the docker container from **this directory** using
```
+# cd NVFlare/integration/nemo/examples/peft
DOCKER_IMAGE="nvcr.io/nvidia/nemo:23.10"
docker run --runtime=nvidia -it --rm --shm-size=16g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 \
-v ${PWD}/../../../../job_templates:/job_templates -v ${PWD}:/workspace -w /workspace ${DOCKER_IMAGE}
```

For easy experimentation with NeMo, install NVFlare and mount the code inside the [nemo_nvflare](./nemo_nvflare) folder.
```
-cd nemo_nvflare
pip install nvflare~=2.4.0rc7
export PYTHONPATH=${PYTHONPATH}:/workspace
```
@@ -35,7 +35,6 @@ export PYTHONPATH=${PYTHONPATH}:/workspace
We use [JupyterLab](https://jupyterlab.readthedocs.io) for this example.
To start JupyterLab, run
```
-cd /workspace
jupyter lab .
```
and open [peft.ipynb](./peft.ipynb).
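Taken together, the updated peft README describes one end-to-end session. The sketch below simply assembles the commands from the diff above in order; the paths and image tag are taken verbatim from the changed file, and nothing beyond the README's own steps is added.

```
# On the host: start from the example directory inside the cloned NVFlare repo
cd NVFlare/integration/nemo/examples/peft
DOCKER_IMAGE="nvcr.io/nvidia/nemo:23.10"
docker run --runtime=nvidia -it --rm --shm-size=16g -p 8888:8888 -p 6006:6006 \
  --ulimit memlock=-1 --ulimit stack=67108864 \
  -v ${PWD}/../../../../job_templates:/job_templates -v ${PWD}:/workspace -w /workspace ${DOCKER_IMAGE}

# Inside the container (working directory is /workspace): install NVFlare,
# put the mounted code on PYTHONPATH, and launch JupyterLab
pip install nvflare~=2.4.0rc7
export PYTHONPATH=${PYTHONPATH}:/workspace
jupyter lab .
```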
5 changes: 2 additions & 3 deletions integration/nemo/examples/prompt_learning/README.md
@@ -16,16 +16,16 @@ In our federated implementation, the LLM parameters stay fixed. Prompt encoder p
The example was tested with the [NeMo 23.02 container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo).
In the following, we assume this example folder of the container is mounted to `/workspace` and all downloading, etc. operations are based on this root path.

-Start the docker container using
+Start the docker container from **this directory** using
```
+# cd NVFlare/integration/nemo/examples/prompt_learning
DOCKER_IMAGE="nvcr.io/nvidia/nemo:23.02"
docker run --runtime=nvidia -it --rm --shm-size=16g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 \
-v ${PWD}:/workspace -w /workspace ${DOCKER_IMAGE}
```

For easy experimentation with NeMo, install NVFlare and mount the code inside the [nemo_nvflare](./nemo_nvflare) folder.
```
-cd nemo_nvflare
pip install nvflare~=2.4.0rc7
export PYTHONPATH=${PYTHONPATH}:/workspace
```
@@ -35,7 +35,6 @@ export PYTHONPATH=${PYTHONPATH}:/workspace
We use [JupyterLab](https://jupyterlab.readthedocs.io) for this example.
To start JupyterLab, run
```
-cd /workspace
jupyter lab .
```
and open [prompt_learning.ipynb](./prompt_learning.ipynb).
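Before opening prompt_learning.ipynb, a quick sanity check that the install and the `PYTHONPATH` export took effect could look like the following. This is an optional check rather than a README step; it assumes the container was started as shown above and that the mounted nemo_nvflare folder is an importable Python package.

```
# Inside the container, after the pip install and export above
python -c "import nvflare; print(nvflare.__file__)"   # confirms the pip install worked
python -c "import nemo_nvflare"                       # resolves via PYTHONPATH=/workspace
```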
4 changes: 2 additions & 2 deletions integration/nemo/examples/supervised_fine_tuning/README.md
@@ -15,16 +15,16 @@ The example was tested using the [NeMo Docker container](https://catalog.ngc.nvi
available with `docker pull nvcr.io/nvidia/nemo:23.06`.
In the following, we assume this example folder of the container is mounted to `/workspace` and all downloading, etc. operations are based on this root path.

-Start the docker container using
+Start the docker container from **this directory** using
```
+# cd NVFlare/integration/nemo/examples/supervised_fine_tuning
DOCKER_IMAGE="nvcr.io/nvidia/nemo:23.06"
docker run --runtime=nvidia -it --rm --shm-size=16g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 \
-v ${PWD}:/workspace -w /workspace ${DOCKER_IMAGE}
```

For easy experimentation with NeMo, install NVFlare and mount the code inside the [nemo_nvflare](./nemo_nvflare) folder.
```
-cd nemo_nvflare
pip install nvflare~=2.4.0rc7
export PYTHONPATH=${PYTHONPATH}:/workspace
```
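For the supervised_fine_tuning example, a quick way to confirm the bind mount before starting interactive work is a one-off, non-interactive run with the same image. This is an optional check rather than a README step, and it assumes the command is issued from the example directory named in the diff above.

```
# One-off check that this directory shows up at /workspace inside the container
# (run from NVFlare/integration/nemo/examples/supervised_fine_tuning)
DOCKER_IMAGE="nvcr.io/nvidia/nemo:23.06"
docker run --rm -v ${PWD}:/workspace -w /workspace ${DOCKER_IMAGE} ls /workspace
```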