- San Jose, CA
- linkedin.com/in/sudhakarsingh27/
Pinned
- NVIDIA/Megatron-LM: Ongoing research training transformer models at scale
- NVIDIA/TransformerEngine: A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization…
- lit-llama (Python, forked from Lightning-AI/lit-llama): Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
- huggingface/accelerate: 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support
- NVIDIA/NeMo: A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)