
Commit

fix docs (#2877)
holgerroth authored Aug 29, 2024
1 parent a4cff1a commit a8245d2
Showing 2 changed files with 5 additions and 7 deletions.
docs/release_notes/flare_250.rst (7 changes: 2 additions & 5 deletions)
@@ -98,11 +98,8 @@ BioNemo example for Drug Discovery
`BioNeMo <https://www.nvidia.com/en-us/clara/bionemo/>`_ is NVIDIA's generative AI platform for drug discovery.
We included several examples of running BioNeMo in a federated learning environment using NVFlare:

-- The :github_nvflare_link:`task fitting example <examples/advanced/bionemo/task_fitting/README.md>` includes a notebook that
-  shows how to obtain protein-learned representations in the form of embeddings using the ESM-1nv pre-trained model. The
-  model is trained with NVIDIA's BioNeMo framework for Large Language Model training and inference.
-- The :github_nvflare_link:`downstream example <examples/advanced/bionemo/downstream/README.md>` shows three different downstream
-  tasks for fine-tuning a BioNeMo ESM-style model.
+- The :github_nvflare_link:`task fitting example <examples/advanced/bionemo/task_fitting/README.md>` includes a notebook that shows how to obtain protein-learned representations in the form of embeddings using the ESM-1nv pre-trained model. The model is trained with NVIDIA's BioNeMo framework for Large Language Model training and inference.
+- The :github_nvflare_link:`downstream example <examples/advanced/bionemo/downstream/README.md>` shows three different downstream tasks for fine-tuning a BioNeMo ESM-style model.

Hierarchical Federated Statistics
--------------------------------
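For orientation, here is a minimal sketch of how one might check out the two examples referenced in the hunk above. The example paths come from the links in the release notes; the repository URL is assumed to be the public NVFlare GitHub repository and is not stated in this diff.

```commandline
# Hedged sketch: fetch the NVFlare repo (assumed URL) and locate the two BioNeMo examples
# referenced above; each folder's README.md describes its own setup.
git clone https://github.com/NVIDIA/NVFlare.git
ls NVFlare/examples/advanced/bionemo/task_fitting/
ls NVFlare/examples/advanced/bionemo/downstream/
```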
examples/advanced/bionemo/README.md (5 changes: 3 additions & 2 deletions)
@@ -10,14 +10,15 @@ The model is trained with NVIDIA's BioNeMo framework for Large Language Model tr

## Requirements

-Download and run the latest [BioNeMo docker container](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/containers/bionemo-framework).
+Download and run the [BioNeMo docker container](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/containers/bionemo-framework).
Note, we tested this example with `nvcr.io/nvidia/clara/bionemo-framework:1.0`.

We recommend following the [Quickstart Guide](https://docs.nvidia.com/bionemo-framework/latest/quickstart-fw.html#docker-container-access)
on how to get the BioNeMo container.

First, copy the NeMo code to a local directory and configure the launch script so that downloaded models can be reused
```commandline
CONTAINER="nvcr.io/nvidia/clara/bionemo-framework:latest"
CONTAINER="nvcr.io/nvidia/clara/bionemo-framework:1.0"
DEST_PATH="."
CONTAINER_NAME=bionemo
docker run --name $CONTAINER_NAME -itd --rm $CONTAINER bash
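The launch snippet above is truncated in this diff view. As a hedged sketch only, the commands below show the standard NGC workflow for obtaining the container, plus one plausible way to copy the code out of the running container afterwards; the in-container path `/workspace/bionemo` and the copy/cleanup steps are assumptions, not text from the README.

```commandline
# Hedged sketch, not verbatim from the README.
# NGC access: log in with the literal username "$oauthtoken" and your NGC API key as the password.
docker login nvcr.io --username '$oauthtoken'
docker pull nvcr.io/nvidia/clara/bionemo-framework:1.0

# After the `docker run ... bash` line above, copy the code to the host and stop the container.
# /workspace/bionemo is an assumed in-container location, not confirmed by this diff.
docker cp $CONTAINER_NAME:/workspace/bionemo $DEST_PATH
docker kill $CONTAINER_NAME
```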
