Conversation

@AndreSlavescu
Contributor

reference to issue #198

@WoosukKwon WoosukKwon self-requested a review June 23, 2023 21:11
@WoosukKwon
Collaborator

@AndreSlavescu Awesome! Thanks for your contribution. Is this PR ready for review? Otherwise, please ping me when you are ready. Thanks again!

@silvacarl2

Can you merge this change so we can test it out with our fine-tuned GPT-J model?

8-)

@WoosukKwon
Collaborator

@AndreSlavescu What's going on with the PR? If you are not able to continue it, no worries, I can take it over. Please let us know if you have any questions.

@AndreSlavescu
Contributor Author

@WoosukKwon Hi, sorry for the delayed reply; I had a busy schedule this past week. I won't have much time to continue this coming week, so please continue with it if you'd like.
Thanks!

@ri938
Contributor

ri938 commented Jun 28, 2023

Is it just waiting for review, or does it require additional work? Is it expected to be working? (If so, I can use it now.)

@WoosukKwon
Collaborator

@ri938 This PR is not ready yet. I'll take this over and finish the PR soon.

@WoosukKwon
Collaborator

WoosukKwon commented Jul 7, 2023

The PR is currently blocked because GPT-J's rotary embedding requires a new kernel (IIUC, it's different from GPT-NeoX's rotary embedding). I will address it this weekend. Update: it turns out this is not a problem.
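
For context, here is a minimal PyTorch-level sketch of the two rotary layouts (an illustration of the difference only, not vLLM's CUDA kernel): GPT-NeoX-style rotation pairs each element with the one half a head-dimension away, while GPT-J-style rotation pairs adjacent even/odd elements and is typically applied only to the first `rotary_dim` dimensions. The helper names below are made up for illustration.

```python
# Illustration only (not vLLM's kernel): the two common rotary layouts.
import torch

def rotate_half_neox(x: torch.Tensor) -> torch.Tensor:
    # GPT-NeoX style: pair element i with element i + head_dim // 2.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def rotate_every_two_gptj(x: torch.Tensor) -> torch.Tensor:
    # GPT-J style: pair element 2i with element 2i + 1 (interleaved).
    x1, x2 = x[..., ::2], x[..., 1::2]
    return torch.stack((-x2, x1), dim=-1).flatten(-2)

def apply_rotary(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor,
                 style: str = "gptj") -> torch.Tensor:
    # cos/sin must already be laid out to match the chosen style
    # (interleaved for "gptj", duplicated halves for "neox").
    rotate = rotate_every_two_gptj if style == "gptj" else rotate_half_neox
    return x * cos + rotate(x) * sin
```

As the comment above notes, this ultimately did not require a new kernel.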

@WoosukKwon WoosukKwon requested a review from zhuohan123 July 8, 2023 18:25
@WoosukKwon
Collaborator

@zhuohan123 This PR is ready for review. Please take a look at it.

@WoosukKwon WoosukKwon changed the title GPT-J model [Model] Add support for GPT-J Jul 8, 2023
@zhuohan123
Member

LGTM! Left some minor comments.

@WoosukKwon WoosukKwon merged commit c894836 into vllm-project:main Jul 9, 2023
@WoosukKwon
Collaborator

@silvacarl2 @ri938 We've just merged this PR. Please install vLLM from source and try it out!
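
If you want something to copy-paste while trying it, a minimal offline-inference sketch (model name from this thread; the prompt and sampling parameters are just examples):

```python
from vllm import LLM, SamplingParams

# Requires a vLLM build that includes this PR (install from source).
llm = LLM(model="EleutherAI/gpt-j-6b")
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The capital of France is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```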

@silvacarl2

Cool will do!!

@silvacarl2

got this error:

python offline_inference.py
INFO 07-09 10:47:10 llm_engine.py:59] Initializing an LLM engine with config: model='EleutherAI/gpt-j-6b', dtype=torch.float16, use_dummy_weights=False, download_dir=None, use_np_weights=False, tensor_parallel_size=1, seed=0)
Traceback (most recent call last):
  File "offline_inference.py", line 14, in <module>
    llm = LLM(model="EleutherAI/gpt-j-6b")
  File "/home/silvacarl/.local/lib/python3.8/site-packages/vllm/entrypoints/llm.py", line 55, in __init__
    self.llm_engine = LLMEngine.from_engine_args(engine_args)
  File "/home/silvacarl/.local/lib/python3.8/site-packages/vllm/engine/llm_engine.py", line 151, in from_engine_args
    engine = cls(*engine_configs, distributed_init_method, devices,
  File "/home/silvacarl/.local/lib/python3.8/site-packages/vllm/engine/llm_engine.py", line 93, in __init__
    worker = worker_cls(
  File "/home/silvacarl/.local/lib/python3.8/site-packages/vllm/worker/worker.py", line 45, in __init__
    self.model = get_model(model_config)
  File "/home/silvacarl/.local/lib/python3.8/site-packages/vllm/model_executor/model_loader.py", line 34, in get_model
    model_class = _get_model_architecture(model_config.hf_config)
  File "/home/silvacarl/.local/lib/python3.8/site-packages/vllm/model_executor/model_loader.py", line 27, in _get_model_architecture
    raise ValueError(
ValueError: Model architectures ['GPTJForCausalLM'] are not supported for now. Supported architectures: ['GPT2LMHeadModel', 'GPTNeoXForCausalLM', 'LlamaForCausalLM', 'OPTForCausalLM']

@silvacarl2

same with gpt-neo:

python offline_inference.py
Downloading (…)lve/main/config.json: 100% 1.46k/1.46k [00:00<00:00, 1.09MB/s]
INFO 07-09 10:48:12 llm_engine.py:59] Initializing an LLM engine with config: model='EleutherAI/gpt-neo-2.7B', dtype=torch.float16, use_dummy_weights=False, download_dir=None, use_np_weights=False, tensor_parallel_size=1, seed=0)
Downloading (…)okenizer_config.json: 100% 200/200 [00:00<00:00, 173kB/s]
Downloading (…)olve/main/vocab.json: 100% 798k/798k [00:00<00:00, 5.96MB/s]
Downloading (…)olve/main/merges.txt: 100% 456k/456k [00:00<00:00, 38.1MB/s]
Downloading (…)cial_tokens_map.json: 100% 90.0/90.0 [00:00<00:00, 168kB/s]
Traceback (most recent call last):
  File "offline_inference.py", line 14, in <module>
    llm = LLM(model="EleutherAI/gpt-neo-2.7B")
  File "/home/silvacarl/.local/lib/python3.8/site-packages/vllm/entrypoints/llm.py", line 55, in __init__
    self.llm_engine = LLMEngine.from_engine_args(engine_args)
  File "/home/silvacarl/.local/lib/python3.8/site-packages/vllm/engine/llm_engine.py", line 151, in from_engine_args
    engine = cls(*engine_configs, distributed_init_method, devices,
  File "/home/silvacarl/.local/lib/python3.8/site-packages/vllm/engine/llm_engine.py", line 93, in __init__
    worker = worker_cls(
  File "/home/silvacarl/.local/lib/python3.8/site-packages/vllm/worker/worker.py", line 45, in __init__
    self.model = get_model(model_config)
  File "/home/silvacarl/.local/lib/python3.8/site-packages/vllm/model_executor/model_loader.py", line 34, in get_model
    model_class = _get_model_architecture(model_config.hf_config)
  File "/home/silvacarl/.local/lib/python3.8/site-packages/vllm/model_executor/model_loader.py", line 27, in _get_model_architecture
    raise ValueError(
ValueError: Model architectures ['GPTNeoForCausalLM'] are not supported for now. Supported architectures: ['GPT2LMHeadModel', 'GPTNeoXForCausalLM', 'LlamaForCausalLM', 'OPTForCausalLM']

@WoosukKwon
Collaborator

@silvacarl2 Could you check again if you installed the latest vLLM from source?

BTW, GPTNeo is not supported yet.
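
One quick way to confirm which vLLM the interpreter is actually picking up (the tracebacks above point at ~/.local/lib/python3.8/site-packages/vllm, which looks like an older pip-installed copy rather than the fresh source build) is the small diagnostic sketch below; the interpretation of the paths is an assumption about this setup.

```python
import vllm

# If __file__ prints a site-packages path instead of your source checkout,
# Python is importing a stale install; reinstall from the repo
# (e.g. `pip install -e .` in the vllm source directory) and re-run.
print(vllm.__version__)
print(vllm.__file__)
```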

@silvacarl2

NP, trying out others

@zhuohan123 zhuohan123 mentioned this pull request Jul 12, 2023
@leegohi04517
Contributor

I installed vLLM from source and encountered this problem:
(generator38) fsuser@recau5mvammeirzd3:~/chat_generator$ python -m vllm.entrypoints.openai.api_server \
    --model PygmalionAI/pygmalion-6b \
    --host 0.0.0.0
INFO 07-20 03:51:12 llm_engine.py:60] Initializing an LLM engine with config: model='PygmalionAI/pygmalion-6b', tokenizer='PygmalionAI/pygmalion-6b', tokenizer_mode=auto, trust_remote_code=False, dtype=torch.float16, use_dummy_weights=False, download_dir=None, use_np_weights=False, tensor_parallel_size=1, seed=0)
Traceback (most recent call last):
  File "/home/fsuser/anaconda3/envs/generator38/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/fsuser/anaconda3/envs/generator38/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/fsuser/vllm/vllm/entrypoints/openai/api_server.py", line 583, in <module>
    engine = AsyncLLMEngine.from_engine_args(engine_args)
  File "/home/fsuser/vllm/vllm/engine/async_llm_engine.py", line 232, in from_engine_args
    engine = cls(engine_args.worker_use_ray,
  File "/home/fsuser/vllm/vllm/engine/async_llm_engine.py", line 55, in __init__
    self.engine = engine_class(*args, **kwargs)
  File "/home/fsuser/vllm/vllm/engine/llm_engine.py", line 99, in __init__
    worker = worker_cls(
  File "/home/fsuser/vllm/vllm/worker/worker.py", line 45, in __init__
    self.model = get_model(model_config)
  File "/home/fsuser/vllm/vllm/model_executor/model_loader.py", line 43, in get_model
    model = model_class(model_config.hf_config)
  File "/home/fsuser/vllm/vllm/model_executor/models/gpt_j.py", line 192, in __init__
    self.transformer = GPTJModel(config)
  File "/home/fsuser/vllm/vllm/model_executor/models/gpt_j.py", line 157, in __init__
    [GPTJBlock(config) for _ in range(config.n_layer)])
  File "/home/fsuser/vllm/vllm/model_executor/models/gpt_j.py", line 157, in <listcomp>
    [GPTJBlock(config) for _ in range(config.n_layer)])
  File "/home/fsuser/vllm/vllm/model_executor/models/gpt_j.py", line 122, in __init__
    self.attn = GPTJAttention(config)
  File "/home/fsuser/vllm/vllm/model_executor/models/gpt_j.py", line 68, in __init__
    assert config.rotary
  File "/home/fsuser/anaconda3/envs/generator38/lib/python3.8/site-packages/transformers/configuration_utils.py", line 260, in __getattribute__
    return super().__getattribute__(key)
AttributeError: 'GPTJConfig' object has no attribute 'rotary'
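
The assert fails because PygmalionAI/pygmalion-6b's config apparently does not define a `rotary` field, so `config.rotary` raises AttributeError. A hedged sketch of one possible workaround (not necessarily how this was fixed upstream) is to fall back on `rotary_dim`, which GPT-J configs do carry; the helper name is illustrative and not part of vLLM.

```python
# Sketch of a more tolerant check than `assert config.rotary`.
def uses_rotary(config) -> bool:
    # Prefer an explicit flag if the config defines one.
    rotary = getattr(config, "rotary", None)
    if rotary is not None:
        return bool(rotary)
    # Otherwise fall back to a nonzero rotary_dim, which GPTJConfig defines.
    rotary_dim = getattr(config, "rotary_dim", 0) or 0
    return rotary_dim > 0

# In GPTJAttention.__init__, the hard `assert config.rotary` could then
# become `assert uses_rotary(config)`.
```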

hongxiayang pushed a commit to hongxiayang/vllm that referenced this pull request Feb 13, 2024
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
mht-sharma pushed a commit to mht-sharma/vllm that referenced this pull request Oct 30, 2024
jikunshang pushed a commit to jikunshang/vllm that referenced this pull request Jul 7, 2025
yma11 added a commit to yma11/vllm that referenced this pull request Jul 16, 2025
yma11 added a commit to yma11/vllm that referenced this pull request Jul 24, 2025
yma11 added a commit to yma11/vllm that referenced this pull request Jul 25, 2025
yma11 added a commit to yma11/vllm that referenced this pull request Sep 1, 2025
zhenwei-intel pushed a commit to zhenwei-intel/vllm that referenced this pull request Sep 11, 2025
yma11 added a commit to yma11/vllm that referenced this pull request Sep 12, 2025
yma11 added a commit to yma11/vllm that referenced this pull request Sep 15, 2025
yma11 added a commit to yma11/vllm that referenced this pull request Nov 3, 2025
yma11 added a commit to yma11/vllm that referenced this pull request Nov 3, 2025