Commit ebaee9b

CUDA in Docker Container Debug
Added README.md content to output log and commented out LD_LIBRARY_PATH echo.
1 parent: e3778be

File tree

1 file changed: +4 −1 lines changed
.github/workflows/flash_attention.yml

Lines changed: 4 additions & 1 deletion
@@ -59,14 +59,17 @@ jobs:
 
         nvidia-smi
 
+        cat /workspace/README.md >> /tmp/workspace/fa4_output.txt
         echo ls /usr/local/cuda >> /tmp/workspace/fa4_output.txt
         ls /usr/local/cuda >> /tmp/workspace/fa4_output.txt
         echo ls /usr/local/cuda/lib64 >> /tmp/workspace/fa4_output.txt
         ls /usr/local/cuda/lib64 >> /tmp/workspace/fa4_output.txt
         which python >> /tmp/workspace/fa4_output.txt
         which nvcc >> /tmp/workspace/fa4_output.txt
         #echo CUDA_HOME $CUDA_HOME >> /tmp/workspace/fa4_output.txt
-        echo LD_LIBRARY_PATH $LD_LIBRARY_PATH >> /tmp/workspace/fa4_output.txt
+        #echo LD_LIBRARY_PATH $LD_LIBRARY_PATH >> /tmp/workspace/fa4_output.txt
+        #python -c "import torch; print(torch.cuda.is_available())"
+        #python -c "import torch; print(torch.cuda.device_count())"
 
         python setup.py install
         pip install -e flash_attn/cute/
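For context, the debug step funnels every probe into a single log file (/tmp/workspace/fa4_output.txt) so the CI run can surface the container's CUDA layout afterwards. Below is a minimal standalone sketch of the same diagnostics as a shell script, including the two PyTorch probes this commit adds only in commented-out form. It assumes /tmp/workspace already exists and that the container's python has torch installed; neither is guaranteed by this workflow.

#!/usr/bin/env bash
# Hypothetical standalone version of the debug step above.
# Assumptions: /tmp/workspace exists; `python` has torch installed.
LOG=/tmp/workspace/fa4_output.txt

# Record the CUDA toolkit layout and toolchain locations.
echo "ls /usr/local/cuda" >> "$LOG"
ls /usr/local/cuda >> "$LOG"
ls /usr/local/cuda/lib64 >> "$LOG"
which python >> "$LOG"
which nvcc >> "$LOG"

# The probes the commit leaves commented out: check whether PyTorch
# can actually see the GPU(s) exposed to the container.
python -c "import torch; print(torch.cuda.is_available())" >> "$LOG"
python -c "import torch; print(torch.cuda.device_count())" >> "$LOG"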
