# Description

# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- I carefully followed the README.md.
- I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- I reviewed the Discussions, and have a new bug or useful enhancement to share.
# Expected Behavior

ggml should not spawn threads for the initial prompt ingestion when using BLAS.

# Current Behavior

ggml still spawns its own threads even when using BLAS.
# Environment and Context

Reproducible with the latest OpenBLAS (including PR OpenMathLib/OpenBLAS#3970 for Intel 13th-gen support) as well as with Intel MKL's BLAS implementation.
```
Architecture:            x86_64
CPU op-mode(s):          32-bit, 64-bit
Address sizes:           46 bits physical, 48 bits virtual
Byte Order:              Little Endian
CPU(s):                  20
On-line CPU(s) list:     0-19
Vendor ID:               GenuineIntel
Model name:              13th Gen Intel(R) Core(TM) i5-13500
CPU family:              6
Model:                   191
Thread(s) per core:      2
Core(s) per socket:      14
Socket(s):               1
Stepping:                2
CPU max MHz:             4800.0000
CPU min MHz:             800.0000
BogoMIPS:                4992.00
Flags:                   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
                         pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
                         pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good
                         nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni
                         pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16
                         xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer
                         aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb
                         invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi
                         flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2
                         erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt
                         xsavec xgetbv1 xsaves avx_vnni dtherm ida arat pln pts hwp hwp_notify
                         hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni
                         vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize
                         pconfig arch_lbr ibt flush_l1d arch_capabilities
```
- Operating System: Ubuntu 22.04 with a custom kernel

```
Linux XXX 6.1.6-060106-generic #202301141035 SMP PREEMPT_DYNAMIC Sat Jan 14 11:15:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
```
# Failure Information (for bugs)

See this discussion for the full context: #229 (reply in thread)

@slaren noted that the issue is:

> By default, llama.cpp will limit ggml to 1 thread when using BLAS, but only if the batch size is > 255. The problem is that there is a mismatch in ggml, which will use BLAS as long as the batch size is >= 32:
> https://github.com/ggerganov/llama.cpp/blob/4b8efff0e3945090379aa2f897ff125c8f9cdbae/ggml.c#L5784
> This leads to issues when the batch size is > 32 and <= 255. We need to determine the optimal batch size at which to start using BLAS, and use that value consistently.
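To make the mismatch concrete, here is a minimal sketch of the two thresholds as described in the quote. The function names and the simplification are mine (hypothetical), not the actual llama.cpp/ggml source; only the 32 and 255 cut-offs come from the discussion above.

```c
/* Hypothetical sketch of the threshold mismatch -- not the real llama.cpp/ggml
 * code; only the 32 and 255 thresholds are taken from the discussion. */
#include <stdbool.h>
#include <stdio.h>

/* ggml hands the big mat-muls to BLAS once the batch size is >= 32. */
static bool ggml_would_use_blas(int batch_size) {
    return batch_size >= 32;
}

/* llama.cpp only limits ggml to a single thread when the batch size is > 255. */
static int effective_ggml_threads(int batch_size, int n_threads) {
    return batch_size > 255 ? 1 : n_threads;
}

int main(void) {
    for (int b = 16; b <= 512; b *= 2) {
        printf("batch %3d: uses BLAS = %d, ggml threads = %d\n",
               b, ggml_would_use_blas(b), effective_ggml_threads(b, 6));
    }
    /* For 32 <= batch <= 255, BLAS is used while ggml still spawns all 6
     * threads -- exactly the oversubscription reported in this issue. */
    return 0;
}
```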
# Steps to Reproduce

I tried both -b 256 and -b 512, and ggml's 6 threads (from -t 6) are still spawned alongside the BLAS threads during initial prompt ingestion:

```
llama -m /opt/models/llama-30B/ggml-model-q4_0.bin -n -1 --color -i -r "User:" -f /opt/prompts/chat-with-bob.txt -t 6 -b 256 -c 2048
```

Using -t 1 yields the expected behavior: only one thread for ggml, plus the threads I set in the environment variable for BLAS.
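For completeness, the workaround invocation looks like the following. OPENBLAS_NUM_THREADS is OpenBLAS's standard thread-count variable (with MKL, MKL_NUM_THREADS plays the same role); the remaining arguments are the same as in the command above:

```
OPENBLAS_NUM_THREADS=6 llama -m /opt/models/llama-30B/ggml-model-q4_0.bin -n -1 --color -i -r "User:" -f /opt/prompts/chat-with-bob.txt -t 1 -b 256 -c 2048
```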
# Failure Logs

htop shows more cores in use than expected.