[Bug]: the most recent xla nightly is breaking vllm on TPU #12451

Closed
hosseinsarshar opened this issue Jan 26, 2025 · 2 comments · Fixed by #12453
Labels
bug Something isn't working

@hosseinsarshar (Contributor)

Your current environment

The output of `python collect_env.py`
$ python collect_env.py
INFO 01-26 17:56:46 __init__.py:187] No platform detected, vLLM is running on UnspecifiedPlatform
Collecting environment information...
PyTorch version: 2.7.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35

Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1015-gcp-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        52 bits physical, 57 bits virtual
Byte Order:                           Little Endian
CPU(s):                               180
On-line CPU(s) list:                  0-179
Vendor ID:                            AuthenticAMD
Model name:                           AMD EPYC 9B14
CPU family:                           25
Model:                                17
Thread(s) per core:                   1
Core(s) per socket:                   90
Socket(s):                            2
Stepping:                             1
BogoMIPS:                             5199.99
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Hypervisor vendor:                    KVM
Virtualization type:                  full
L1d cache:                            5.6 MiB (180 instances)
L1i cache:                            5.6 MiB (180 instances)
L2 cache:                             180 MiB (180 instances)
L3 cache:                             768 MiB (24 instances)
NUMA node(s):                         2
NUMA node0 CPU(s):                    0-89
NUMA node1 CPU(s):                    90-179
Vulnerability Gather data sampling:   Not affected
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Mitigation; Safe RET
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pyzmq==26.2.0
[pip3] torch==2.7.0
[pip3] torch-xla==2.7.0+git6f367df
[pip3] transformers==4.48.1
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] pyzmq                     26.2.0                   pypi_0    pypi
[conda] torch                     2.7.0                    pypi_0    pypi
[conda] torch-xla                 2.7.0+git6f367df          pypi_0    pypi
[conda] transformers              4.48.1                   pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.6.post2.dev384+gaa2cd2c4
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect

LD_LIBRARY_PATH=/home/hosseins/miniconda3/envs/test-new-nightly/lib/python3.10/site-packages/cv2/../../lib64:
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1

Model Input Dumps

No response

🐛 Describe the bug

Updating torch_xla to the 20250124 nightly causes vLLM to break on TPU:

$ vllm serve "meta-llama/Meta-Llama-3.1-8B" --download_dir /dev/shm --num-scheduler-steps 4 --swap-space 16 --disable-log-requests --tensor_parallel_size=8 --max-model-len=256 --dtype=bfloat16
INFO 01-26 17:51:40 __init__.py:187] No platform detected, vLLM is running on UnspecifiedPlatform
INFO 01-26 17:51:41 api_server.py:786] vLLM API server version 0.6.6.post2.dev384+gaa2cd2c4
INFO 01-26 17:51:41 api_server.py:787] args: Namespace(subparser='serve', model_tag='meta-llama/Meta-Llama-3.1-8B', config='', host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='meta-llama/Meta-Llama-3.1-8B', task='auto', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir='/dev/shm', load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='bfloat16', kv_cache_dtype='auto', max_model_len=256, guided_decoding_backend='xgrammar', logits_processor_pattern=None, distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=8, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=0, swap_space=16.0, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=4, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, disable_log_requests=True, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, dispatch_function=<function serve at 0x7038b677c430>)
INFO 01-26 17:51:41 api_server.py:201] Started engine process with PID 2294180
Traceback (most recent call last):
  File "/home/hosseins/miniconda3/envs/test-new-nightly/bin/vllm", line 33, in <module>
    sys.exit(load_entry_point('vllm', 'console_scripts', 'vllm')())
  File "/home/hosseins/vllm-new-nightly/vllm/scripts.py", line 201, in main
    args.dispatch_function(args)
  File "/home/hosseins/vllm-new-nightly/vllm/scripts.py", line 42, in serve
    uvloop.run(run_server(args))
  File "/home/hosseins/miniconda3/envs/test-new-nightly/lib/python3.10/site-packages/uvloop/__init__.py", line 82, in run
    return loop.run_until_complete(wrapper())
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/home/hosseins/miniconda3/envs/test-new-nightly/lib/python3.10/site-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
  File "/home/hosseins/vllm-new-nightly/vllm/entrypoints/openai/api_server.py", line 814, in run_server
    async with build_async_engine_client(args) as engine_client:
  File "/home/hosseins/miniconda3/envs/test-new-nightly/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/home/hosseins/vllm-new-nightly/vllm/entrypoints/openai/api_server.py", line 131, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
  File "/home/hosseins/miniconda3/envs/test-new-nightly/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/home/hosseins/vllm-new-nightly/vllm/entrypoints/openai/api_server.py", line 212, in build_async_engine_client_from_engine_args
    engine_config = engine_args.create_engine_config()
  File "/home/hosseins/vllm-new-nightly/vllm/engine/arg_utils.py", line 1046, in create_engine_config
    device_config = DeviceConfig(device=self.device)
  File "/home/hosseins/vllm-new-nightly/vllm/config.py", line 1553, in __init__
    raise RuntimeError("Failed to infer device type")
RuntimeError: Failed to infer device type
INFO 01-26 17:51:43 __init__.py:187] No platform detected, vLLM is running on UnspecifiedPlatform
ERROR 01-26 17:51:44 engine.py:387] Failed to infer device type
ERROR 01-26 17:51:44 engine.py:387] Traceback (most recent call last):
ERROR 01-26 17:51:44 engine.py:387]   File "/home/hosseins/vllm-new-nightly/vllm/engine/multiprocessing/engine.py", line 378, in run_mp_engine
ERROR 01-26 17:51:44 engine.py:387]     engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 01-26 17:51:44 engine.py:387]   File "/home/hosseins/vllm-new-nightly/vllm/engine/multiprocessing/engine.py", line 116, in from_engine_args
ERROR 01-26 17:51:44 engine.py:387]     engine_config = engine_args.create_engine_config(usage_context)
ERROR 01-26 17:51:44 engine.py:387]   File "/home/hosseins/vllm-new-nightly/vllm/engine/arg_utils.py", line 1046, in create_engine_config
ERROR 01-26 17:51:44 engine.py:387]     device_config = DeviceConfig(device=self.device)
ERROR 01-26 17:51:44 engine.py:387]   File "/home/hosseins/vllm-new-nightly/vllm/config.py", line 1553, in __init__
ERROR 01-26 17:51:44 engine.py:387]     raise RuntimeError("Failed to infer device type")
ERROR 01-26 17:51:44 engine.py:387] RuntimeError: Failed to infer device type
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/home/hosseins/miniconda3/envs/test-new-nightly/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/hosseins/miniconda3/envs/test-new-nightly/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/hosseins/vllm-new-nightly/vllm/engine/multiprocessing/engine.py", line 389, in run_mp_engine
    raise e
  File "/home/hosseins/vllm-new-nightly/vllm/engine/multiprocessing/engine.py", line 378, in run_mp_engine
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
  File "/home/hosseins/vllm-new-nightly/vllm/engine/multiprocessing/engine.py", line 116, in from_engine_args
    engine_config = engine_args.create_engine_config(usage_context)
  File "/home/hosseins/vllm-new-nightly/vllm/engine/arg_utils.py", line 1046, in create_engine_config
    device_config = DeviceConfig(device=self.device)
  File "/home/hosseins/vllm-new-nightly/vllm/config.py", line 1553, in __init__
    raise RuntimeError("Failed to infer device type")
RuntimeError: Failed to infer device type

Checking torch_xla directly shows the underlying failure:

$ python
Python 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch_xla
WARNING:root:Defaulting to PJRT_DEVICE=CPU

$ PJRT_DEVICE=TPU python
Python 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch_xla
>>> torch_xla.devices()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/hosseins/miniconda3/envs/test-new-nightly/lib/python3.10/site-packages/torch_xla/torch_xla.py", line 43, in devices
    return [torch.device(d) for d in xm.get_xla_supported_devices()]
  File "/home/hosseins/miniconda3/envs/test-new-nightly/lib/python3.10/site-packages/torch_xla/core/xla_model.py", line 93, in get_xla_supported_devices
    devices = torch_xla._XLAC._xla_get_devices()
  File "/home/hosseins/miniconda3/envs/test-new-nightly/lib/python3.10/site-packages/torch_xla/_internal/tpu.py", line 334, in library_path
    raise EnvironmentError('libtpu not found')
OSError: libtpu not found
$ pip list | grep torch
torch                             2.7.0
torch-xla                         2.7.0+git6f367df

$ pip list | grep jax
jax                               0.4.39.dev20250113
jaxlib                            0.4.39.dev20250113
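
A one-line preflight check (a minimal sketch, assuming the same environment as above) reproduces the failure outside of vLLM; on this nightly it raises the same "OSError: libtpu not found":

$ PJRT_DEVICE=TPU python -c "import torch_xla; print(torch_xla.devices())"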

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
miladm (Collaborator) commented Jan 27, 2025

cc @lsy323

lsy323 (Collaborator) commented Jan 27, 2025

Hi @hosseinsarshar, the torch_xla nightly and the torch nightly may not be compatible due to a C++ symbol issue. Please see these lines for details.

If you uninstall torch 2.7 and install torch from requirements-tpu.txt, would it work?
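
A minimal sketch of that suggestion, assuming it is run from the root of a vllm checkout (where requirements-tpu.txt lives):

$ pip uninstall -y torch
$ pip install -r requirements-tpu.txt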
