Update Latest News (NVIDIA#8837)
* Update Latest News

Adds links to articles on
* NeMo framework on GKE
* Responsible Gen AI using NeMo and Picasso
* NeMo powering Amazon Titan foundation models

Signed-off-by: Shashank Verma <shashankv@nvidia.com>

* Minor updates to latest news in README

* Removes bullets
* Edits text for clarity

Signed-off-by: Shashank Verma <shashankv@nvidia.com>

* Format latest news as a dropdown list

* Uses embedded HTML to format the news as a dropdown list, hiding lengthy details (see the sketch below)
* Fixes formatting of the title

Signed-off-by: Shashank Verma <shashankv@nvidia.com>
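
For reference, a minimal sketch of the dropdown pattern this commit introduces, using the RST "raw" directive; the section title, article title, URL, and date are placeholders, not actual news items:

.. raw:: html

   <details open>
     <summary><b>Section title</b></summary>
     <details>
       <summary><a href="https://example.com/article">Article title</a> (YYYY/MM/DD)</summary>

       One-paragraph summary of the article, hidden until the reader expands the item.
       <br><br>
     </details>
   </details>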

* Add break to improve readability of latest news image

Signed-off-by: Shashank Verma <shashankv@nvidia.com>

* Add LLM and MM section in latest news

Signed-off-by: Shashank Verma <shashankv@nvidia.com>

* Add margin in latest news expandable lists

Signed-off-by: Shashank Verma <shashankv@nvidia.com>

* Remove styling of expandable list

* GitHub appears not to render styled elements when they are
embedded as raw HTML in RST (see the illustrative snippet below)

Signed-off-by: Shashank Verma <shashankv@nvidia.com>
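
As the commit notes, GitHub sanitizes raw HTML in rendered pages and strips inline style attributes, so spacing has to come from plain elements instead. An illustrative before/after sketch (placeholder content, not taken from this diff):

.. raw:: html

   <!-- Stripped by GitHub's sanitizer: the margin never renders -->
   <details style="margin-left: 20px;">
     <summary>News item</summary>
     Item text
   </details>

   <!-- Portable alternative: spacing from plain <br> elements -->
   <details>
     <summary>News item</summary>
     Item text
     <br><br>
   </details>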

* Fold the first news item by default

Signed-off-by: Shashank Verma <shashankv@nvidia.com>
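
Folding is controlled by the details element's "open" attribute alone: omitting it collapses the item on page load. A sketch with placeholder headings:

.. raw:: html

   <!-- Expanded on page load -->
   <details open>
     <summary>Section heading</summary>
     ...
   </details>

   <!-- Folded by default: no "open" attribute -->
   <details>
     <summary>First news item</summary>
     ...
   </details>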

---------

Signed-off-by: Shashank Verma <shashankv@nvidia.com>
Signed-off-by: Shashank Verma <shashank3959@gmail.com>
shashank3959 authored Apr 19, 2024
1 parent 98daf6b commit 5937640
Showing 1 changed file: README.rst (34 additions, 8 deletions)
@@ -41,17 +41,43 @@
 Latest News
 -----------
 
-- 2023/12/06 `New NVIDIA NeMo Framework Features and NVIDIA H200 <https://developer.nvidia.com/blog/new-nvidia-nemo-framework-features-and-nvidia-h200-supercharge-llm-training-performance-and-versatility/>`_
-
-  .. image:: https://github.com/sbhavani/TransformerEngine/blob/main/docs/examples/H200-NeMo-performance.png
-     :target: https://developer.nvidia.com/blog/new-nvidia-nemo-framework-features-and-nvidia-h200-supercharge-llm-training-performance-and-versatility
-     :alt: H200-NeMo-performance
-     :width: 600
-
-  NeMo Framework has been updated with state-of-the-art features,
-  such as FSDP, Mixture-of-Experts, and RLHF with TensorRT-LLM to provide speedups up to 4.2x for Llama-2 pre-training on H200.
-  **All of these features will be available in an upcoming release.**
+.. raw:: html
+
+  <details open>
+    <summary><b>Large Language Models and Multimodal</b></summary>
+
+    <details>
+      <summary><a href="https://cloud.google.com/blog/products/compute/gke-and-nvidia-nemo-framework-to-train-generative-ai-models">Accelerate your generative AI journey with NVIDIA NeMo framework on GKE</a> (2024/03/16)</summary>
+
+      An end-to-end walkthrough for training generative AI models on Google Kubernetes Engine (GKE) with the NVIDIA NeMo Framework is available at https://github.com/GoogleCloudPlatform/nvidia-nemo-on-gke. It includes detailed instructions for setting up a Google Cloud project and pre-training a GPT model with the NeMo Framework.
+      <br><br>
+    </details>
+
+    <details>
+      <summary><a href="https://blogs.nvidia.com/blog/bria-builds-responsible-generative-ai-using-nemo-picasso/">Bria Builds Responsible Generative AI for Enterprises Using NVIDIA NeMo, Picasso</a> (2024/03/06)</summary>
+
+      Bria, a Tel Aviv startup at the forefront of visual generative AI for enterprises, now leverages the NVIDIA NeMo Framework. The Bria.ai platform uses reference implementations from the NeMo Multimodal collection, trained on NVIDIA Tensor Core GPUs, to enable high-throughput, low-latency image generation. Bria has also adopted NVIDIA Picasso, a foundry for visual generative AI models, to run inference.
+      <br><br>
+    </details>
+
+    <details>
+      <summary><a href="https://developer.nvidia.com/blog/new-nvidia-nemo-framework-features-and-nvidia-h200-supercharge-llm-training-performance-and-versatility/">New NVIDIA NeMo Framework Features and NVIDIA H200</a> (2023/12/06)</summary>
+
+      NVIDIA NeMo Framework now includes several optimizations and enhancements, including: 1) Fully Sharded Data Parallelism (FSDP) to improve the efficiency of training large-scale AI models, 2) Mixture-of-Experts (MoE)-based LLM architectures with expert parallelism for efficient LLM training at scale, 3) Reinforcement Learning from Human Feedback (RLHF) with TensorRT-LLM for inference-stage acceleration, and 4) up to 4.2x speedups for Llama 2 pre-training on NVIDIA H200 Tensor Core GPUs.
+      <br><br>
+      <a href="https://developer.nvidia.com/blog/new-nvidia-nemo-framework-features-and-nvidia-h200-supercharge-llm-training-performance-and-versatility"><img src="https://github.com/sbhavani/TransformerEngine/blob/main/docs/examples/H200-NeMo-performance.png" alt="H200-NeMo-performance" width="600"></a>
+      <br><br>
+    </details>
+
+    <details>
+      <summary><a href="https://blogs.nvidia.com/blog/nemo-amazon-titan/">NVIDIA now powers training for Amazon Titan Foundation models</a> (2023/11/28)</summary>
+
+      The NVIDIA NeMo Framework now powers efficient training of large language models (LLMs) for the Amazon Titan foundation models (FMs), which form the basis of Amazon's generative AI service, Amazon Bedrock. NeMo provides a versatile framework for building, customizing, and running LLMs.
+      <br><br>
+    </details>
+
+  </details>
 
 Introduction
