Llama-2-chat-70b-hf (with LoRA) CUDA OOMs on 4 x A100 (80gb) at first training step #118

Closed
jshin49 opened this issue Aug 15, 2023 · 3 comments

jshin49 commented Aug 15, 2023

System Info

Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.27.1
Libc version: glibc-2.31

Python version: 3.9.17 (main, Jul  5 2023, 20:41:20)  [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-149-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB

Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn.so.8.7.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.7.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.7.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.7.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.7.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.7.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   48 bits physical, 48 bits virtual
CPU(s):                          256
On-line CPU(s) list:             0-254
Off-line CPU(s) list:            255
Thread(s) per core:              1
Core(s) per socket:              64
Socket(s):                       2
NUMA node(s):                    2
Vendor ID:                       AuthenticAMD
CPU family:                      25
Model:                           1
Model name:                      AMD EPYC 7763 64-Core Processor
Stepping:                        1
Frequency boost:                 enabled
CPU MHz:                         1470.637
CPU max MHz:                     2450.0000
CPU min MHz:                     1500.0000
BogoMIPS:                        4900.17
Virtualization:                  AMD-V
L1d cache:                       2 MiB
L1i cache:                       2 MiB
L2 cache:                        32 MiB
L3 cache:                        256 MiB
NUMA node0 CPU(s):               0-63,128-191
NUMA node1 CPU(s):               64-127,192-254
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Retbleed:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.2
[pip3] torch==2.0.1
[conda] numpy                     1.25.2                   pypi_0    pypi
[conda] torch                     2.0.1                    pypi_0    pypi

Information

  • The official example scripts
  • My own modified scripts

🐛 Describe the bug

As the title suggests, Llama-2-chat-70b-hf (with LoRA) hits a CUDA OOM on 4 x A100 (80 GB) at the first training step.

Error logs

Training Epoch0:   0%|          | 0/389 [01:44<?, ?it/s]
Training Epoch0:   0%|          | 0/389 [00:22<?, ?it/s]
Training Epoch0:   0%|          | 0/389 [01:05<?, ?it/s]
Training Epoch0:   0%|          | 0/389 [00:41<?, ?it/s]
Traceback (most recent call last):
  File "~/MoR/llama-recipes/llama_finetuning.py", line 256, in <module>
    fire.Fire(main)
  File "~/anaconda3/envs/mor/lib/python3.9/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "~/anaconda3/envs/mor/lib/python3.9/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "~/anaconda3/envs/mor/lib/python3.9/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "~/MoR/llama-recipes/llama_finetuning.py", line 239, in main
    results = train(
  File "~/MoR/llama-recipes/utils/train_utils.py", line 106, in train
    optimizer.step()
  File "~/anaconda3/envs/mor/lib/python3.9/site-packages/torch/optim/lr_scheduler.py", line 69, in wrapper
    return wrapped(*args, **kwargs)
  File "~/anaconda3/envs/mor/lib/python3.9/site-packages/torch/optim/optimizer.py", line 280, in wrapper
    out = func(*args, **kwargs)
  File "~/anaconda3/envs/mor/lib/python3.9/site-packages/torch/optim/optimizer.py", line 33, in _use_grad
    ret = func(self, *args, **kwargs)
  File "~/anaconda3/envs/mor/lib/python3.9/site-packages/torch/optim/adamw.py", line 160, in step
    self._init_group(
  File "~/anaconda3/envs/mor/lib/python3.9/site-packages/torch/optim/adamw.py", line 118, in _init_group
    state["exp_avg_sq"] = torch.zeros_like(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 410.00 MiB (GPU 0; 79.15 GiB total capacity; 75.66 GiB already allocated; 378.44 MiB free; 77.04 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
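
The traceback shows the allocation failing inside AdamW's lazy state initialization: on the first `optimizer.step()`, AdamW creates two buffers (`exp_avg`, `exp_avg_sq`) per trainable parameter via `torch.zeros_like`, and that is what tips the GPU over. A minimal diagnostic sketch (not part of llama-recipes; `model` is assumed to be the PEFT-wrapped model built before training) to estimate that state up front:

```python
from torch import nn

def adamw_state_bytes(model: nn.Module) -> int:
    """Rough size of the state AdamW allocates on its first step.

    AdamW keeps two buffers (exp_avg and exp_avg_sq) per trainable
    parameter, created with torch.zeros_like as in the traceback above,
    so each trainable parameter costs roughly 2x its own size again.
    """
    return sum(
        2 * p.numel() * p.element_size()
        for p in model.parameters()
        if p.requires_grad
    )

# Hypothetical usage, right after the PEFT model is built:
# print(f"AdamW state: {adamw_state_bytes(model) / 2**30:.2f} GiB")
# With LoRA applied correctly the trainable fraction is tiny; a result in
# the tens of GiB means (nearly) the whole 70B model is treated as trainable.
```

Under FSDP the optimizer state is sharded across ranks, so divide the total by the number of GPUs for a per-rank estimate.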

Expected behavior

First of all, a quick search led me to check #96 and #77.

Based on the multi-GPU, single-node docs, I tried running 70B with LoRA and got the above error at the first training step (model loading itself seemed to work).

Here's the command I used:
torchrun --nnodes 1 --nproc_per_node 4 llama_finetuning.py --enable_fsdp --use_peft --peft_method lora --model_name /path_of_model_folder/70B --pure_bf16 --output_dir Path/to/save/PEFT/model --use_fast_kernels

Is there a known minimum hardware requirement that I'm missing, or is this a config issue?
Note that I'm using the PyTorch 2.0.1 stable build for cu11.7 while our CUDA version is 12.1, so the setup may not be optimal, and since I'm not on a nightly build I can't use low-CPU FSDP. Could that be the cause?
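
For reference, the OOM message itself suggests the `max_split_size_mb` allocator knob. A sketch of trying it on the command above (the 512 value is illustrative; note that in this trace reserved memory (77.04 GiB) is close to allocated (75.66 GiB), so fragmentation is unlikely to be the real culprit):

```bash
# Illustrative only: apply the allocator hint from the OOM message.
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 \
torchrun --nnodes 1 --nproc_per_node 4 llama_finetuning.py \
  --enable_fsdp --use_peft --peft_method lora \
  --model_name /path_of_model_folder/70B \
  --pure_bf16 --output_dir Path/to/save/PEFT/model --use_fast_kernels
```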

mreso (Contributor) commented Aug 21, 2023

Hi,
please use PT nightlies for PEFT + FSDP training, as they contain important fixes.

I'll close the issue for now, but please feel free to re-open if you still encounter this issue after updating to nightlies.
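
A sketch of the suggested update, assuming the CUDA 11.8 nightly wheel (adjust the index URL to match your CUDA setup):

```bash
# Install a PyTorch nightly build, then rerun the same torchrun command.
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu118
```

Nightlies also unlock the low-CPU FSDP loading path mentioned in the report above.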

mreso closed this as completed on Aug 21, 2023
jshin49 (Author) commented Aug 22, 2023

Yeah, that fixed the issue. Thanks!

mreso (Contributor) commented Aug 22, 2023

Great! Let us know if you run into any other issues down the road!
