Description
Your current environment
The output of `python collect_env.py`
==============================
System Info
==============================
OS : Ubuntu 20.04.6 LTS (x86_64)
GCC version : (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.31
==============================
PyTorch Info
==============================
PyTorch version : 2.6.0+cu124
Is debug build : False
CUDA used to build PyTorch : 12.4
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform : Linux-5.4.0-189-generic-x86_64-with-glibc2.31
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.1.66
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: Tesla T4
GPU 1: Tesla T4
GPU 2: Tesla T4
GPU 3: Tesla T4
GPU 4: Tesla T4
GPU 5: Tesla T4
Nvidia driver version : 535.183.06
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz
Stepping: 7
CPU MHz: 1270.336
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1 MiB
L1i cache: 1 MiB
L2 cache: 32 MiB
L3 cache: 44 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Vulnerable, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
==============================
Versions of relevant libraries
==============================
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-ml-py==12.575.51
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.4.0
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] transformers==4.52.4
[pip3] triton==3.2.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-ml-py 12.575.51 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pyzmq 26.4.0 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] transformers 4.52.4 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
==============================
vLLM Info
==============================
ROCM Version : Could not collect
Neuron SDK Version : N/A
vLLM Version : 0.8.5.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 NIC0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NODE NODE SYS SYS SYS SYS 0,2,4,6,8,10 0 N/A
GPU1 NODE X PHB SYS SYS SYS SYS 0,2,4,6,8,10 0 N/A
GPU2 NODE PHB X SYS SYS SYS SYS 0,2,4,6,8,10 0 N/A
GPU3 SYS SYS SYS X PHB NODE NODE 1,3,5,7,9,11 1 N/A
GPU4 SYS SYS SYS PHB X NODE NODE 1,3,5,7,9,11 1 N/A
GPU5 SYS SYS SYS NODE NODE X NODE 1,3,5,7,9,11 1 N/A
NIC0 SYS SYS SYS NODE NODE NODE X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
==============================
Environment Variables
==============================
LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/lib64:
NCCL_CUMEM_ENABLE=0
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
🐛 Describe the bug
INFO 06-16 01:48:43 [multiproc_worker_utils.py:137] Terminating local vLLM worker processes
(VllmWorkerProcess pid=4086430) INFO 06-16 01:48:43 [multiproc_worker_utils.py:259] Worker exiting
(VllmWorkerProcess pid=4086431) INFO 06-16 01:48:43 [multiproc_worker_utils.py:259] Worker exiting
(VllmWorkerProcess pid=4086432) INFO 06-16 01:48:43 [multiproc_worker_utils.py:259] Worker exiting
INFO 06-16 01:48:43 [config.py:717] This model supports multiple tasks: {'classify', 'generate', 'reward', 'score', 'embed'}. Defaulting to 'generate'.
WARNING 06-16 01:48:43 [arg_utils.py:1525] Chunked prefill is enabled by default for models with max_model_len > 32K. Chunked prefill might not work with some features or models. If you encounter any issues, please disable by launching with --enable-chunked-prefill=False.
INFO 06-16 01:48:43 [config.py:1770] Defaulting to use mp for distributed inference
INFO 06-16 01:48:43 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=2048.
INFO 06-16 01:48:43 [llm_engine.py:240] Initializing a V0 LLM engine (v0.8.5.post1) with config: model='work_dirs/llama3_8b_instruct_qlora_alpaca_e3_copy/merged', speculative_config=None, tokenizer='work_dirs/llama3_8b_instruct_qlora_alpaca_e3_copy/merged', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=131072, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=work_dirs/llama3_8b_instruct_qlora_alpaca_e3_copy/merged, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=None, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=False,
WARNING 06-16 01:48:44 [utils.py:2382] We must use the spawn multiprocessing start method. Overriding VLLM_WORKER_MULTIPROC_METHOD to 'spawn'. See https://docs.vllm.ai/en/latest/getting_started/troubleshooting.html#python-multiprocessing for more information. Reason: CUDA is initialized
INFO 06-16 01:48:48 [importing.py:53] Triton module has been replaced with a placeholder.
INFO 06-16 01:48:48 [importing.py:53] Triton module has been replaced with a placeholder.
INFO 06-16 01:48:49 [__init__.py:239] Automatically detected platform cuda.
INFO 06-16 01:48:49 [__init__.py:239] Automatically detected platform cuda.
INFO 06-16 01:48:49 [importing.py:53] Triton module has been replaced with a placeholder.
INFO 06-16 01:48:49 [__init__.py:239] Automatically detected platform cuda.
(VllmWorkerProcess pid=4089753) INFO 06-16 01:48:50 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=4089754) INFO 06-16 01:48:50 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=4089755) INFO 06-16 01:48:50 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=4089753) INFO 06-16 01:48:51 [cuda.py:240] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
(VllmWorkerProcess pid=4089754) INFO 06-16 01:48:51 [cuda.py:240] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
(VllmWorkerProcess pid=4089753) INFO 06-16 01:48:51 [cuda.py:289] Using XFormers backend.
(VllmWorkerProcess pid=4089754) INFO 06-16 01:48:51 [cuda.py:289] Using XFormers backend.
(VllmWorkerProcess pid=4089755) INFO 06-16 01:48:51 [cuda.py:240] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
(VllmWorkerProcess pid=4089755) INFO 06-16 01:48:51 [cuda.py:289] Using XFormers backend.
[E616 01:58:20.389957361 socket.cpp:1023] [c10d] The client socket has timed out after 600000ms while trying to connect to (172.17.0.9, 50437).
[W616 01:58:20.390554917 TCPStore.cpp:330] [c10d] TCP client failed to connect/validate to host 172.17.0.9:50437 - retrying (try=0, timeout=600000ms, delay=41215ms): The client socket has timed out after 600000ms while trying to connect to (172.17.0.9, 50437).
Exception raised from throwTimeoutError at /pytorch/torch/csrc/distributed/c10d/socket.cpp:1025 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7fcd9950a1b6 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: + 0x16144fe (0x7fcdcfe114fe in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #2: + 0x63501ce (0x7fcdd4b4d1ce in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #3: + 0x6350386 (0x7fcdd4b4d386 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #4: + 0x63507f4 (0x7fcdd4b4d7f4 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #5: + 0x630d216 (0x7fcdd4b0a216 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::TCPStore::TCPStore(std::string, c10d::TCPStoreOptions const&) + 0x20c (0x7fcdd4b0d14c in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #7: + 0xe486ef (0x7fcde47336ef in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_python.so)
frame #8: + 0x51a017 (0x7fcde3e05017 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_python.so)
frame #9: /root/miniconda3/envs/agent-r1/bin/python3() [0x528b17]
frame #10: _PyObject_MakeTpCall + 0x27c (0x50452c in /root/miniconda3/envs/agent-r1/bin/python3)
frame #11: /root/miniconda3/envs/agent-r1/bin/python3() [0x5579ce]
frame #12: _PyObject_Call + 0x11f (0x54330f in /root/miniconda3/envs/agent-r1/bin/python3)
frame #13: /root/miniconda3/envs/agent-r1/bin/python3() [0x540849]
frame #14: /root/miniconda3/envs/agent-r1/bin/python3() [0x50492c]
frame #15: + 0x51880b (0x7fcde3e0380b in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_python.so)
frame #16: _PyObject_MakeTpCall + 0x27c (0x50452c in /root/miniconda3/envs/agent-r1/bin/python3)
frame #17: _PyEval_EvalFrameDefault + 0x6a6 (0x511a76 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #18: /root/miniconda3/envs/agent-r1/bin/python3() [0x5a3197]
frame #19: /root/miniconda3/envs/agent-r1/bin/python3() [0x52f30b]
frame #20: PyObject_Vectorcall + 0x31 (0x51ea31 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #21: _PyEval_EvalFrameDefault + 0x6a6 (0x511a76 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #22: _PyFunction_Vectorcall + 0x173 (0x539153 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #23: PyObject_Call + 0x12c (0x5430ac in /root/miniconda3/envs/agent-r1/bin/python3)
frame #24: _PyEval_EvalFrameDefault + 0x47c0 (0x515b90 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #25: _PyFunction_Vectorcall + 0x173 (0x539153 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #26: PyObject_Call + 0x12c (0x5430ac in /root/miniconda3/envs/agent-r1/bin/python3)
frame #27: _PyEval_EvalFrameDefault + 0x47c0 (0x515b90 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #28: /root/miniconda3/envs/agent-r1/bin/python3() [0x5581df]
frame #29: /root/miniconda3/envs/agent-r1/bin/python3() [0x557a20]
frame #30: _PyEval_EvalFrameDefault + 0x47c0 (0x515b90 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #31: _PyFunction_Vectorcall + 0x173 (0x539153 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #32: PyObject_Call + 0x12c (0x5430ac in /root/miniconda3/envs/agent-r1/bin/python3)
frame #33: _PyEval_EvalFrameDefault + 0x47c0 (0x515b90 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #34: /root/miniconda3/envs/agent-r1/bin/python3() [0x5cc3aa]
frame #35: PyEval_EvalCode + 0x9f (0x5cba7f in /root/miniconda3/envs/agent-r1/bin/python3)
frame #36: /root/miniconda3/envs/agent-r1/bin/python3() [0x5ecba7]
frame #37: /root/miniconda3/envs/agent-r1/bin/python3() [0x5e8740]
frame #38: PyRun_StringFlags + 0x5f (0x5db24f in /root/miniconda3/envs/agent-r1/bin/python3)
frame #39: PyRun_SimpleStringFlags + 0x3b (0x5daffb in /root/miniconda3/envs/agent-r1/bin/python3)
frame #40: Py_RunMain + 0x388 (0x5f7498 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #41: Py_BytesMain + 0x39 (0x5bc149 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #42: __libc_start_main + 0xf3 (0x7fcdecef4083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #43: /root/miniconda3/envs/agent-r1/bin/python3() [0x5bbf93]
[E616 01:58:46.636324557 socket.cpp:1023] [c10d] The client socket has timed out after 600000ms while trying to connect to (172.17.0.9, 50437).
[W616 01:58:46.636956994 TCPStore.cpp:330] [c10d] TCP client failed to connect/validate to host 172.17.0.9:50437 - retrying (try=0, timeout=600000ms, delay=56936ms): The client socket has timed out after 600000ms while trying to connect to (172.17.0.9, 50437).
Exception raised from throwTimeoutError at /pytorch/torch/csrc/distributed/c10d/socket.cpp:1025 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7eff3fde01b6 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: + 0x16144fe (0x7eff766e74fe in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #2: + 0x63501ce (0x7eff7b4231ce in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #3: + 0x6350386 (0x7eff7b423386 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #4: + 0x63507f4 (0x7eff7b4237f4 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #5: + 0x630d216 (0x7eff7b3e0216 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::TCPStore::TCPStore(std::string, c10d::TCPStoreOptions const&) + 0x20c (0x7eff7b3e314c in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #7: + 0xe486ef (0x7eff8b0096ef in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_python.so)
frame #8: + 0x51a017 (0x7eff8a6db017 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_python.so)
frame #9: /root/miniconda3/envs/agent-r1/bin/python3() [0x528b17]
frame #10: _PyObject_MakeTpCall + 0x27c (0x50452c in /root/miniconda3/envs/agent-r1/bin/python3)
frame #11: /root/miniconda3/envs/agent-r1/bin/python3() [0x5579ce]
frame #12: _PyObject_Call + 0x11f (0x54330f in /root/miniconda3/envs/agent-r1/bin/python3)
frame #13: /root/miniconda3/envs/agent-r1/bin/python3() [0x540849]
frame #14: /root/miniconda3/envs/agent-r1/bin/python3() [0x50492c]
frame #15: + 0x51880b (0x7eff8a6d980b in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_python.so)
frame #16: _PyObject_MakeTpCall + 0x27c (0x50452c in /root/miniconda3/envs/agent-r1/bin/python3)
frame #17: _PyEval_EvalFrameDefault + 0x6a6 (0x511a76 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #18: /root/miniconda3/envs/agent-r1/bin/python3() [0x5a3197]
frame #19: /root/miniconda3/envs/agent-r1/bin/python3() [0x52f30b]
frame #20: PyObject_Vectorcall + 0x31 (0x51ea31 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #21: _PyEval_EvalFrameDefault + 0x6a6 (0x511a76 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #22: _PyFunction_Vectorcall + 0x173 (0x539153 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #23: PyObject_Call + 0x12c (0x5430ac in /root/miniconda3/envs/agent-r1/bin/python3)
frame #24: _PyEval_EvalFrameDefault + 0x47c0 (0x515b90 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #25: _PyFunction_Vectorcall + 0x173 (0x539153 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #26: PyObject_Call + 0x12c (0x5430ac in /root/miniconda3/envs/agent-r1/bin/python3)
frame #27: _PyEval_EvalFrameDefault + 0x47c0 (0x515b90 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #28: /root/miniconda3/envs/agent-r1/bin/python3() [0x5581df]
frame #29: /root/miniconda3/envs/agent-r1/bin/python3() [0x557a20]
frame #30: _PyEval_EvalFrameDefault + 0x47c0 (0x515b90 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #31: _PyFunction_Vectorcall + 0x173 (0x539153 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #32: PyObject_Call + 0x12c (0x5430ac in /root/miniconda3/envs/agent-r1/bin/python3)
frame #33: _PyEval_EvalFrameDefault + 0x47c0 (0x515b90 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #34: /root/miniconda3/envs/agent-r1/bin/python3() [0x5cc3aa]
frame #35: PyEval_EvalCode + 0x9f (0x5cba7f in /root/miniconda3/envs/agent-r1/bin/python3)
frame #36: /root/miniconda3/envs/agent-r1/bin/python3() [0x5ecba7]
frame #37: /root/miniconda3/envs/agent-r1/bin/python3() [0x5e8740]
frame #38: PyRun_StringFlags + 0x5f (0x5db24f in /root/miniconda3/envs/agent-r1/bin/python3)
frame #39: PyRun_SimpleStringFlags + 0x3b (0x5daffb in /root/miniconda3/envs/agent-r1/bin/python3)
frame #40: Py_RunMain + 0x388 (0x5f7498 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #41: Py_BytesMain + 0x39 (0x5bc149 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #42: __libc_start_main + 0xf3 (0x7eff937ca083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #43: /root/miniconda3/envs/agent-r1/bin/python3() [0x5bbf93]
[E616 01:58:50.269541905 socket.cpp:1023] [c10d] The client socket has timed out after 600000ms while trying to connect to (172.17.0.9, 50437).
[W616 01:58:50.270092731 TCPStore.cpp:330] [c10d] TCP client failed to connect/validate to host 172.17.0.9:50437 - retrying (try=0, timeout=600000ms, delay=48747ms): The client socket has timed out after 600000ms while trying to connect to (172.17.0.9, 50437).
Exception raised from throwTimeoutError at /pytorch/torch/csrc/distributed/c10d/socket.cpp:1025 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7fead98201b6 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: + 0x16144fe (0x7feb101274fe in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #2: + 0x63501ce (0x7feb14e631ce in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #3: + 0x6350386 (0x7feb14e63386 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #4: + 0x63507f4 (0x7feb14e637f4 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #5: + 0x630d216 (0x7feb14e20216 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::TCPStore::TCPStore(std::string, c10d::TCPStoreOptions const&) + 0x20c (0x7feb14e2314c in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #7: + 0xe486ef (0x7feb24a496ef in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_python.so)
frame #8: + 0x51a017 (0x7feb2411b017 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_python.so)
frame #9: /root/miniconda3/envs/agent-r1/bin/python3() [0x528b17]
frame #10: _PyObject_MakeTpCall + 0x27c (0x50452c in /root/miniconda3/envs/agent-r1/bin/python3)
frame #11: /root/miniconda3/envs/agent-r1/bin/python3() [0x5579ce]
frame #12: _PyObject_Call + 0x11f (0x54330f in /root/miniconda3/envs/agent-r1/bin/python3)
frame #13: /root/miniconda3/envs/agent-r1/bin/python3() [0x540849]
frame #14: /root/miniconda3/envs/agent-r1/bin/python3() [0x50492c]
frame #15: + 0x51880b (0x7feb2411980b in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_python.so)
frame #16: _PyObject_MakeTpCall + 0x27c (0x50452c in /root/miniconda3/envs/agent-r1/bin/python3)
frame #17: _PyEval_EvalFrameDefault + 0x6a6 (0x511a76 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #18: /root/miniconda3/envs/agent-r1/bin/python3() [0x5a3197]
frame #19: /root/miniconda3/envs/agent-r1/bin/python3() [0x52f30b]
frame #20: PyObject_Vectorcall + 0x31 (0x51ea31 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #21: _PyEval_EvalFrameDefault + 0x6a6 (0x511a76 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #22: _PyFunction_Vectorcall + 0x173 (0x539153 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #23: PyObject_Call + 0x12c (0x5430ac in /root/miniconda3/envs/agent-r1/bin/python3)
frame #24: _PyEval_EvalFrameDefault + 0x47c0 (0x515b90 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #25: _PyFunction_Vectorcall + 0x173 (0x539153 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #26: PyObject_Call + 0x12c (0x5430ac in /root/miniconda3/envs/agent-r1/bin/python3)
frame #27: _PyEval_EvalFrameDefault + 0x47c0 (0x515b90 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #28: /root/miniconda3/envs/agent-r1/bin/python3() [0x5581df]
frame #29: /root/miniconda3/envs/agent-r1/bin/python3() [0x557a20]
frame #30: _PyEval_EvalFrameDefault + 0x47c0 (0x515b90 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #31: _PyFunction_Vectorcall + 0x173 (0x539153 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #32: PyObject_Call + 0x12c (0x5430ac in /root/miniconda3/envs/agent-r1/bin/python3)
frame #33: _PyEval_EvalFrameDefault + 0x47c0 (0x515b90 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #34: /root/miniconda3/envs/agent-r1/bin/python3() [0x5cc3aa]
frame #35: PyEval_EvalCode + 0x9f (0x5cba7f in /root/miniconda3/envs/agent-r1/bin/python3)
frame #36: /root/miniconda3/envs/agent-r1/bin/python3() [0x5ecba7]
frame #37: /root/miniconda3/envs/agent-r1/bin/python3() [0x5e8740]
frame #38: PyRun_StringFlags + 0x5f (0x5db24f in /root/miniconda3/envs/agent-r1/bin/python3)
frame #39: PyRun_SimpleStringFlags + 0x3b (0x5daffb in /root/miniconda3/envs/agent-r1/bin/python3)
frame #40: Py_RunMain + 0x388 (0x5f7498 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #41: Py_BytesMain + 0x39 (0x5bc149 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #42: __libc_start_main + 0xf3 (0x7feb2d20a083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #43: /root/miniconda3/envs/agent-r1/bin/python3() [0x5bbf93]
[E616 02:08:15.931416442 socket.cpp:1023] [c10d] The client socket has timed out after 600000ms while trying to connect to (172.17.0.9, 50437).
[E616 02:08:15.931621754 TCPStore.cpp:318] [c10d] TCP client failed to connect/validate to host 172.17.0.9:50437 - timed out (try=1, timeout=600000ms): The client socket has timed out after 600000ms while trying to connect to (172.17.0.9, 50437).
Exception raised from throwTimeoutError at /pytorch/torch/csrc/distributed/c10d/socket.cpp:1025 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7fcd9950a1b6 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: + 0x16144fe (0x7fcdcfe114fe in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #2: + 0x63501ce (0x7fcdd4b4d1ce in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #3: + 0x6350386 (0x7fcdd4b4d386 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #4: + 0x63507f4 (0x7fcdd4b4d7f4 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #5: + 0x630d216 (0x7fcdd4b0a216 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::TCPStore::TCPStore(std::string, c10d::TCPStoreOptions const&) + 0x20c (0x7fcdd4b0d14c in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_cpu.so)
frame #7: + 0xe486ef (0x7fcde47336ef in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_python.so)
frame #8: + 0x51a017 (0x7fcde3e05017 in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_python.so)
frame #9: /root/miniconda3/envs/agent-r1/bin/python3() [0x528b17]
frame #10: _PyObject_MakeTpCall + 0x27c (0x50452c in /root/miniconda3/envs/agent-r1/bin/python3)
frame #11: /root/miniconda3/envs/agent-r1/bin/python3() [0x5579ce]
frame #12: _PyObject_Call + 0x11f (0x54330f in /root/miniconda3/envs/agent-r1/bin/python3)
frame #13: /root/miniconda3/envs/agent-r1/bin/python3() [0x540849]
frame #14: /root/miniconda3/envs/agent-r1/bin/python3() [0x50492c]
frame #15: + 0x51880b (0x7fcde3e0380b in /root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/lib/libtorch_python.so)
frame #16: _PyObject_MakeTpCall + 0x27c (0x50452c in /root/miniconda3/envs/agent-r1/bin/python3)
frame #17: _PyEval_EvalFrameDefault + 0x6a6 (0x511a76 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #18: /root/miniconda3/envs/agent-r1/bin/python3() [0x5a3197]
frame #19: /root/miniconda3/envs/agent-r1/bin/python3() [0x52f30b]
frame #20: PyObject_Vectorcall + 0x31 (0x51ea31 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #21: _PyEval_EvalFrameDefault + 0x6a6 (0x511a76 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #22: _PyFunction_Vectorcall + 0x173 (0x539153 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #23: PyObject_Call + 0x12c (0x5430ac in /root/miniconda3/envs/agent-r1/bin/python3)
frame #24: _PyEval_EvalFrameDefault + 0x47c0 (0x515b90 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #25: _PyFunction_Vectorcall + 0x173 (0x539153 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #26: PyObject_Call + 0x12c (0x5430ac in /root/miniconda3/envs/agent-r1/bin/python3)
frame #27: _PyEval_EvalFrameDefault + 0x47c0 (0x515b90 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #28: /root/miniconda3/envs/agent-r1/bin/python3() [0x5581df]
frame #29: /root/miniconda3/envs/agent-r1/bin/python3() [0x557a20]
frame #30: _PyEval_EvalFrameDefault + 0x47c0 (0x515b90 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #31: _PyFunction_Vectorcall + 0x173 (0x539153 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #32: PyObject_Call + 0x12c (0x5430ac in /root/miniconda3/envs/agent-r1/bin/python3)
frame #33: _PyEval_EvalFrameDefault + 0x47c0 (0x515b90 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #34: /root/miniconda3/envs/agent-r1/bin/python3() [0x5cc3aa]
frame #35: PyEval_EvalCode + 0x9f (0x5cba7f in /root/miniconda3/envs/agent-r1/bin/python3)
frame #36: /root/miniconda3/envs/agent-r1/bin/python3() [0x5ecba7]
frame #37: /root/miniconda3/envs/agent-r1/bin/python3() [0x5e8740]
frame #38: PyRun_StringFlags + 0x5f (0x5db24f in /root/miniconda3/envs/agent-r1/bin/python3)
frame #39: PyRun_SimpleStringFlags + 0x3b (0x5daffb in /root/miniconda3/envs/agent-r1/bin/python3)
frame #40: Py_RunMain + 0x388 (0x5f7498 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #41: Py_BytesMain + 0x39 (0x5bc149 in /root/miniconda3/envs/agent-r1/bin/python3)
frame #42: __libc_start_main + 0xf3 (0x7fcdecef4083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #43: /root/miniconda3/envs/agent-r1/bin/python3() [0x5bbf93]
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] Exception in worker VllmWorkerProcess while processing method init_device.
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] Traceback (most recent call last):
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/executor/multiproc_worker_utils.py", line 232, in _run_worker_process
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] output = run_method(worker, method, args, kwargs)
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/utils.py", line 2456, in run_method
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] return func(*args, **kwargs)
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] ^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/worker/worker_base.py", line 604, in init_device
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] self.worker.init_device() # type: ignore
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] ^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/worker/worker.py", line 186, in init_device
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] init_worker_distributed_environment(self.vllm_config, self.rank,
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/worker/worker.py", line 525, in init_worker_distributed_environment
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] init_distributed_environment(parallel_config.world_size, rank,
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/distributed/parallel_state.py", line 891, in init_distributed_environment
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] torch.distributed.init_process_group(
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] return func(*args, **kwargs)
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] ^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 95, in wrapper
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] func_return = func(*args, **kwargs)
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] ^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 1714, in init_process_group
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] store, rank, world_size = next(rendezvous_iterator)
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] ^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/distributed/rendezvous.py", line 226, in _tcp_rendezvous_handler
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] store = _create_c10d_store(
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] ^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/torch/distributed/rendezvous.py", line 194, in _create_c10d_store
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] return TCPStore(
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] ^^^^^^^^^
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] torch.distributed.DistNetworkError: The client socket has timed out after 600000ms while trying to connect to (172.17.0.9, 50437).
[rank0]: Traceback (most recent call last):
[rank0]: File "/remote-home/2432192/Agent-R/eval.py", line 95, in
[rank0]: main(Task, args.model_name, args.env_server_base, args.max_steps)
[rank0]: File "/remote-home/2432192/Agent-R/eval.py", line 79, in main
[rank0]: perform_test(FuncCallOffline(model_name=model_name), env, conv, model_name, idx, max_steps)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/remote-home/2432192/Agent-R/mcts_utils/llm_server.py", line 30, in init
[rank0]: self.llm = LLM(model=os.environ["MODEL_DIR"], tensor_parallel_size=4, dtype="half")
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/utils.py", line 1161, in inner
[rank0]: return fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 247, in init
[rank0]: self.llm_engine = LLMEngine.from_engine_args(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 510, in from_engine_args
[rank0]: return engine_cls.from_vllm_config(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 486, in from_vllm_config
[rank0]: return cls(
[rank0]: ^^^^
[rank0]: File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 275, in init
[rank0]: self.model_executor = executor_class(vllm_config=vllm_config)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 286, in init
[rank0]: super().init(*args, **kwargs)
[rank0]: File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 52, in init
[rank0]: self._init_executor()
[rank0]: File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/executor/mp_distributed_executor.py", line 124, in _init_executor
[rank0]: self._run_workers("init_device")
[rank0]: File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/executor/mp_distributed_executor.py", line 190, in _run_workers
[rank0]: ] + [output.get() for output in worker_outputs]
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/executor/mp_distributed_executor.py", line 190, in
[rank0]: ] + [output.get() for output in worker_outputs]
[rank0]: ^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/agent-r1/lib/python3.11/site-packages/vllm/executor/multiproc_worker_utils.py", line 58, in get
[rank0]: raise self.result.exception
[rank0]: torch.distributed.DistNetworkError: The client socket has timed out after 600000ms while trying to connect to (172.17.0.9, 50437).
ERROR 06-16 02:08:15 [multiproc_worker_utils.py:120] Worker VllmWorkerProcess pid 4089753 died, exit code: -15
INFO 06-16 02:08:15 [multiproc_worker_utils.py:124] Killing local vLLM worker processes
[rank0]:[W616 02:08:16.983330818 ProcessGroupNCCL.cpp:1496] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
/root/miniconda3/envs/agent-r1/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
When running multiple inference tasks consecutively in the same process, the first task completes and its workers shut down ("Terminating local vLLM worker processes" / "Worker exiting"). The second engine's workers then log "Worker ready; awaiting tasks", but during `init_device` they fail to connect to the master process's TCP rendezvous address and port (172.17.0.9:50437) and time out after 600000 ms:
(VllmWorkerProcess pid=4089753) ERROR 06-16 02:08:15 [multiproc_worker_utils.py:238] Exception in worker VllmWorkerProcess while processing method init_device.
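For reference, here is a minimal sketch of the calling pattern that triggers this, simplified from the traceback above (the explicit loop over tasks is hypothetical; `MODEL_DIR` is the environment variable used by eval.py / `FuncCallOffline`):

```python
import os
from vllm import LLM, SamplingParams

# Simplified from eval.py / FuncCallOffline: a fresh LLM engine is
# constructed per task inside the same long-lived Python process.
for task_idx in range(2):
    llm = LLM(model=os.environ["MODEL_DIR"],
              tensor_parallel_size=4,
              dtype="half")
    out = llm.generate(["ping"], SamplingParams(max_tokens=8))
    del llm  # first engine shuts down ("Worker exiting"); the second
             # engine's workers then hang in init_device until the
             # TCPStore rendezvous times out after 600000 ms
```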
This looks like it may be the same issue as #15850.
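As a possible workaround (a sketch only, not verified here), similar reports suggest explicitly tearing down vLLM's distributed state between tasks before constructing the next LLM instance:

```python
import contextlib
import gc

import torch
from vllm.distributed.parallel_state import (destroy_distributed_environment,
                                             destroy_model_parallel)

# Sketch of cleanup between consecutive LLM instantiations in one process;
# `llm` is the previous vllm.LLM instance from the sketch above.
destroy_model_parallel()
destroy_distributed_environment()
del llm
gc.collect()
torch.cuda.empty_cache()
with contextlib.suppress(Exception):
    if torch.distributed.is_initialized():
        torch.distributed.destroy_process_group()
```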