
loss.backward() on an LSTM causes SegFault on Arc GPU #698

Open
LivesayMe opened this issue Sep 4, 2024 · 5 comments
Assignees
wangkl2
Labels
ARC (ARC GPU), Crash (Execution crashes), XPU/GPU (XPU/GPU specific issues)

Comments

@LivesayMe

LivesayMe commented Sep 4, 2024

Describe the bug

Training a model with an LSTM module causes a segmentation fault after loss.backward() has been called a random number of times. The number of times the training loop can run seems to depend on the tensor x, but I can't work out the relationship. On my system the code below runs through the loop twice before segfaulting. If x is instead torch.randn(10, 1), it runs for 3 loops. If ipex.optimize is not called, there is no segfault as long as x is small; if x is large, it still segfaults.

Code to reproduce:

import torch 
import intel_extension_for_pytorch as ipex

class LSTMModel(torch.nn.Module):
    def __init__(self):
        super(LSTMModel, self).__init__()
        self.l1 = torch.nn.Linear(1, 10)
        self.lstm = torch.nn.LSTM(10, 10, batch_first=True)
        self.l2 = torch.nn.Linear(10, 1)
    
    def forward(self, x):
        x = self.l1(x)
        # x = torch.nn.functional.relu(x)
        x, _ = self.lstm(x)
        x = self.l2(x)
        return x

model = LSTMModel()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = torch.nn.MSELoss()
x = torch.tensor([[1.0], [0.0], [1.0], [0.0]]).unsqueeze(0)

use_ipex = True
if use_ipex:
    model.to("xpu")
    model, optimizer = ipex.optimize(model, optimizer=optimizer)
    x = x.to("xpu")

for i in range(10):
    optimizer.zero_grad()
    pred = model(x)
    loss = loss_fn(pred, x)
    loss.backward()
    optimizer.step()
    print(loss)

Output

tensor(0.8119, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(0.7972, device='xpu:0', grad_fn=<MseLossBackward0>)
Segmentation fault (core dumped)

Versions

PyTorch version: 2.1.0.post3+cxx11.abi
PyTorch CXX11 ABI: Yes
IPEX version: 2.1.40+xpu
IPEX commit: 80ed476
Build type: Release

OS: Linux Mint 22 (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: N/A
IGC version: 2024.2.1 (2024.2.1.20240711)
CMake version: N/A
Libc version: glibc-2.39

Python version: 3.11.9 (main, Apr 27 2024, 21:16:11) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-41-generic-x86_64-with-glibc2.39
Is XPU available: True
DPCPP runtime version: 2024.2
MKL version: 2024.2
GPU models and configuration:
[0] _DeviceProperties(name='Intel(R) Arc(TM) A770 Graphics', platform_name='Intel(R) Level-Zero', dev_type='gpu', driver_version='1.3.29735', has_fp64=0, total_memory=15473MB, max_compute_units=512, gpu_eu_count=512)
Intel OpenCL ICD version: 24.22.29735.27-914~22.04
Level Zero version: 1.3.29735.27-914~22.04

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 5600 6-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 70%
CPU max MHz: 4467.2852
CPU min MHz: 2200.0000
BogoMIPS: 6999.84
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] intel_extension_for_pytorch==2.1.40+xpu
[pip3] numpy==1.26.4
[pip3] torch==2.1.0.post3+cxx11.abi
[pip3] torchaudio==2.1.0.post3+cxx11.abi
[pip3] torchvision==0.16.0.post3+cxx11.abi
[conda] N/A

@wangkl2 wangkl2 self-assigned this Sep 5, 2024
@wangkl2 wangkl2 added the XPU/GPU, ARC, and Crash labels Sep 5, 2024
@wangkl2
Member

wangkl2 commented Sep 5, 2024

@LivesayMe Thanks for reporting the issue. Will look into it and give feedback later.

@wangkl2
Member

wangkl2 commented Sep 10, 2024

@LivesayMe I am able to reproduce the SegFault issue with your code snippet.

I saved the core dump file and inspected it with GDB; the crash occurs inside libze_intel_gpu.so.1, with the following output:

tensor(0.2660, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(0.2631, device='xpu:0', grad_fn=<MseLossBackward0>)                                                                
Thread 31 "python" received signal SIGSEGV, Segmentation fault.                                                           
[Switching to Thread 0x7fffd0bf9640 (LWP 1109906)]                                                                        
0x00007ffea151a4c0 in ?? () from /lib/x86_64-linux-gnu/libze_intel_gpu.so.1

The beginning of the backtrace:

(gdb) bt
#0  0x00007ffea151a4c0 in ?? () from /lib/x86_64-linux-gnu/libze_intel_gpu.so.1
#1  0x00007ffea1499526 in ?? () from /lib/x86_64-linux-gnu/libze_intel_gpu.so.1
#2  0x00007ffea15e5a2c in ?? () from /lib/x86_64-linux-gnu/libze_intel_gpu.so.1
#3  0x00007ffea15a3244 in ?? () from /lib/x86_64-linux-gnu/libze_intel_gpu.so.1
#4  0x00007ffea126cfd3 in ?? () from /lib/x86_64-linux-gnu/libze_intel_gpu.so.1
#5  0x00007ffea112b31b in ?? () from /lib/x86_64-linux-gnu/libze_intel_gpu.so.1                                           
#6  0x00007fffdd462557 in ur_queue_handle_t_::executeCommandList(std::__1::__hash_map_iterator<std::__1::__hash_iterator<std::__1::__hash_node<std::__1::__hash_value_type<_ze_command_list_handle_t*, ur_command_list_info_t>, void*>*> >, bool, bool) () from /opt/intel/oneapi/compiler/2024.2/lib/libpi_level_zero.so

The end of the backtrace:

#18 0x00007fff1686ab99 in void at::AtenIpexTypeXPU::dpcpp_loops_kernel<at::impl::direct_copy_kernel_gpu_functor<float>, false, true>(at::TensorIteratorBase&, at::impl::direct_copy_kernel_gpu_functor<float>) () from /hd2/wangk2/miniforge3/envs/ipex-xpu-py310/lib/python3.10/site-packages/intel_extension_for_pytorch/lib/libintel-ext-pt-gpu.so
#19 0x00007fff166c8414 in at::impl::direct_copy_kernel_gpu(at::TensorIteratorBase&) () from /hd2/wangk2/miniforge3/envs/ipex-xpu-py310/lib/python3.10/site-packages/intel_extension_for_pytorch/lib/libintel-ext-pt-gpu.so

…

#29 0x00007fffe5a168ba in torch::autograd::AccumulateGrad::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) () from /hd2/wangk2/miniforge3/envs/ipex-xpu-py310/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so
#30 0x00007fffe5a1138b in torch::autograd::Node::operator()(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) () from /hd2/wangk2/miniforge3/envs/ipex-xpu-py310/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so
#31 0x00007fffe5a0b468 in torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) () from /hd2/wangk2/miniforge3/envs/ipex-xpu-py310/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so
#32 0x00007fffe5a0c5e5 in torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) () from /hd2/wangk2/miniforge3/envs/ipex-xpu-py310/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so
#33 0x00007fffe5a03aa9 in torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) () from /hd2/wangk2/miniforge3/envs/ipex-xpu-py310/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so
#34 0x00007ffff62f1621 in torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) () from /hd2/wangk2/miniforge3/envs/ipex-xpu-py310/lib/python3.10/site-packages/torch/lib/libtorch_python.so
#35 0x00007fffe12e62b3 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6

The above stack trace indicates that the program crashed while calling an executeCommandList-related function of the Level Zero runtime in libpi_level_zero.so, after torch::autograd was invoked from libtorch_python.so, during backpropagation in the third training iteration in this case.

@wangkl2
Member

wangkl2 commented Sep 10, 2024

@LivesayMe But I'm a little bit confused about your current LSTM training code snippet, where:

  1. the input data x stays the same on every loop iteration instead of being loaded from a dataloader or randomized each iteration, which may not be good for the robustness of the model.
  2. loss = loss_fn(pred, x) passes the input data, rather than labeled targets, into the loss function. Is that intended?

If I modify the code to vary the random input data and to use separate synthetic target data in the training loop:

for i in range(10):
    x = torch.randn(1, 4, 1).to("xpu")
    target = torch.randn(1, 4, 1).to("xpu")
    optimizer.zero_grad()
    pred = model(x)
    #loss = loss_fn(pred, x)
    loss = loss_fn(pred, target)
    loss.backward()
    optimizer.step()
    print(loss)

The training runs normally with the following output:

tensor(0.8712, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(0.6560, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(1.5420, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(1.0311, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(0.4135, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(0.7653, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(1.7324, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(0.9488, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(0.5480, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(0.2006, device='xpu:0', grad_fn=<MseLossBackward0>)

Enlarging the input tensor shape, for example to (100, 40, 1), also works; see the sketch below.
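
For reference, a minimal sketch of that larger-shape check (an illustration only, assuming the same model, optimizer, and loss_fn from the modified loop above, already moved to "xpu"):

# Hypothetical variant of the loop above with a larger synthetic batch
# (batch=100, seq_len=40, features=1); everything else is unchanged.
for i in range(10):
    x = torch.randn(100, 40, 1).to("xpu")
    target = torch.randn(100, 40, 1).to("xpu")
    optimizer.zero_grad()
    pred = model(x)
    loss = loss_fn(pred, target)
    loss.backward()
    optimizer.step()
    print(loss)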

@LivesayMe
Author

@wangkl2 The code I shared wasn't meant to be a fully fledged implementation, just the minimum code needed to reproduce the issue I was having.
It seems the issue is related to the loss function receiving the same tensor as both the prediction source and the target, since that was present in both my original implementation and this minimal reproducible example, but not in the code sample you got working.
The reason I used x as both the input to the model and the target in the loss is that I was doing next-token prediction; in my original code the target was shifted by 1 (a sketch of that setup is below).
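
For context, a minimal self-contained sketch of that next-token setup (an illustration only, not my original code; it uses a bare torch.nn.LSTM as a stand-in for LSTMModel and runs on CPU to keep it short):

import torch

# Next-token prediction: the target is the input sequence shifted by one step,
# so the model learns to predict element t+1 from the elements up to t.
model = torch.nn.LSTM(1, 1, batch_first=True)   # stand-in for the LSTMModel above
loss_fn = torch.nn.MSELoss()

seq = torch.randn(1, 5, 1)       # (batch, seq_len, features)
x = seq[:, :-1, :]               # inputs:  steps 0 .. seq_len-2
target = seq[:, 1:, :]           # targets: steps 1 .. seq_len-1 (shifted by 1)

pred, _ = model(x)               # nn.LSTM returns (output, (h_n, c_n))
loss = loss_fn(pred, target)
loss.backward()
print(loss)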

@wangkl2
Member

wangkl2 commented Sep 13, 2024

@LivesayMe Okay, thanks for the clarification. Makes sense.

As a workaround, you can set SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1 before executing the training. Please check it out.
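
A minimal sketch of one way to apply it (an assumption about placement, not an official recipe): set the variable in the process environment before torch/IPEX are imported, or export it in the shell before launching the script.

import os

# Enable Level Zero immediate command lists before importing torch/IPEX so the
# SYCL runtime picks it up when it initializes. Equivalently, run
#   export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
# in the shell before starting Python.
os.environ["SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS"] = "1"

import torch
import intel_extension_for_pytorch as ipex

# ... the rest of the training script from the reproducer above ...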

I've verified it works for:
(a) either the same data passed into the model on every iteration or different data per iteration,
(b) either using x as the target or using a separate target tensor in the loss function,
(c) both small and large input tensors; the following is the output for x = torch.randn(1000, 4000, 1):

tensor(1.1229, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(1.1146, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(1.1065, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(1.0987, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(1.0911, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(1.0838, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(1.0767, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(1.0699, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(1.0634, device='xpu:0', grad_fn=<MseLossBackward0>)
tensor(1.0570, device='xpu:0', grad_fn=<MseLossBackward0>)
