
[Core] Support dynamically loading Lora adapter from HuggingFace #6234

Merged (10 commits) on Jul 22, 2024

Conversation

@Jeffwan (Contributor) commented Jul 8, 2024

This PR enhances the flexibility of LoRA adapter artifact locations. It allows users to specify the location using either a relative path or a Hugging Face model id.

part of #6275
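The resulting behavior can be sketched roughly as follows; `resolve_lora_path` is a hypothetical helper for illustration only, not the PR's actual code (the real implementation resolves non-local paths by downloading the adapter from the Hugging Face Hub):

```python
import os

def resolve_lora_path(lora_path: str) -> str:
    # Hypothetical helper: if the value points at an existing local
    # file or directory, use it directly; otherwise treat it as a
    # Hugging Face model id (real code would download it, e.g. via
    # huggingface_hub.snapshot_download(repo_id=lora_path)).
    if os.path.exists(lora_path):
        return os.path.abspath(lora_path)
    return f"hf:{lora_path}"
```

With logic like this, both a local directory such as `/data/adapters/sql` and a repo id such as `yard1/llama-2-7b-sql-lora-test` become valid values for the adapter location.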

BEFORE SUBMITTING, PLEASE READ THE CHECKLIST BELOW AND FILL IN THE DESCRIPTION ABOVE


PR Checklist

Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain code quality and improves the efficiency of the review process.

PR Title and Classification

Only specific types of PRs will be reviewed. The PR title should be prefixed appropriately to indicate the type of change. Please use one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Model] for adding a new model or improving an existing model. Model name should appear in the title.
  • [Frontend] for changes to the vLLM frontend (e.g., OpenAI API server, LLM class, etc.).
  • [Kernel] for changes affecting CUDA kernels or other compute kernels.
  • [Core] for changes in the core vLLM logic (e.g., LLMEngine, AsyncLLMEngine, Scheduler, etc.).
  • [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD]).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • We adhere to the Google Python style guide and Google C++ style guide.
  • Pass all linter checks. Please use format.sh to format your code.
  • The code needs to be well-documented to ensure future contributors can easily understand it.
  • Include sufficient tests to ensure the project stays correct and robust. This includes both unit tests and integration tests.
  • Please add documentation to docs/source/ if the PR modifies user-facing behavior of vLLM. It helps vLLM users understand and utilize the new features or changes.

Notes for Large Changes

Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not review the PR.

What to Expect for the Reviews

The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:

  • After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability.
  • After the PR is assigned, the reviewer will provide a status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team.
  • After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.
  • Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion.

Thank You

Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!

@Jeffwan changed the title from "Support Lora adapter loading from relative path and Huggingface" to "Support dynamically loading Lora adapter from HuggingFace" on Jul 9, 2024
@Jeffwan force-pushed the jiaxin/load-lora-from-hg branch from 944a65c to bcc22eb on July 9, 2024 02:14
@Yard1 (Collaborator) left a comment:

I think this looks fine, some nits. I'm a little worried about the performance of this, but since it is optional it should be OK.

Let's add a test? There's a LoRA on the HF hub we use for testing; we can just use that (it's in one of the conftest.py files).

vllm/lora/utils.py (outdated, resolved)
```
@@ -18,7 +18,7 @@ class LoRARequest:

 lora_name: str
 lora_int_id: int
-lora_local_path: str
+lora_path: str
```
Collaborator:

can we keep lora_local_path as a deprecated alias for lora_path? let's warn people when it's used so they update, but ideally we should avoid API breakage here

Contributor Author:

Just noticed this is the user-facing API if users use the vLLM server instead of the OpenAI server. Sounds good, I will add a deprecation notice there.
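A minimal sketch of such a deprecated alias, illustrative only: the field names follow the diff above, but the warning text and validation behavior are assumptions, not the PR's actual implementation.

```python
import warnings
from dataclasses import dataclass

@dataclass
class LoRARequest:
    lora_name: str
    lora_int_id: int
    lora_path: str = ""
    lora_local_path: str = ""  # deprecated alias for lora_path

    def __post_init__(self):
        if self.lora_local_path:
            # Warn users still passing the old field name so they
            # migrate, while keeping existing code working.
            warnings.warn(
                "lora_local_path is deprecated, use lora_path instead",
                DeprecationWarning,
                stacklevel=2,
            )
            if not self.lora_path:
                self.lora_path = self.lora_local_path
        if not self.lora_path:
            raise ValueError("lora_path is required")
```

Callers using the old keyword still work but see a DeprecationWarning; callers omitting both fields get an explicit error.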

vllm/lora/utils.py (outdated, resolved)
@Jeffwan force-pushed the jiaxin/load-lora-from-hg branch from bcc22eb to 5987d94 on July 9, 2024 06:56
@Jeffwan (Contributor Author) commented Jul 9, 2024

> Let's add a test? There's a LoRA on the HF hub we use for testing; we can just use that (it's in one of the conftest.py files)

@Yard1 Do you mean a unit test, or some other integration or e2e test? I am new to conftest.py; I will check its details.


Update 2024-07-09:

It seems conftest.py creates a few fixtures, all of which return an absolute path via snapshot_download. I think we have two options.

1. Change one of the LoRA fixtures to a Hugging Face repo id; the existing logic then automatically exercises the target function:

```
@pytest.fixture(scope="session")
def sql_lora_files():
    return snapshot_download(repo_id="yard1/llama-2-7b-sql-lora-test")
```

=>

```
@pytest.fixture(scope="session")
def sql_lora_files():
    return "yard1/llama-2-7b-sql-lora-test"
```

2. Explicitly create a separate test like tests/lora/test_lora_checkpoint.py; instead of injecting the fixture, we can use the Hugging Face repo id directly.

@Yard1 Do you have a preferred way?

@Jeffwan force-pushed the jiaxin/load-lora-from-hg branch from b2792d4 to 89edb62 on July 9, 2024 07:13
@Yard1 (Collaborator) commented Jul 9, 2024

@Jeffwan let's add a separate test (or parameterize the existing one) so we test both paths (loading from local path and loading from repository)
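A parameterized version along these lines might look like the following sketch. Here `fake_snapshot_download` is a hypothetical stand-in for `huggingface_hub.snapshot_download` so the snippet stays self-contained, and the fixture and repo names follow the conftest.py snippet quoted above; this is not the PR's actual test code.

```python
import pytest

def fake_snapshot_download(repo_id: str) -> str:
    # Stand-in for huggingface_hub.snapshot_download; the real fixture
    # downloads the adapter and returns its local cache directory.
    return "/root/.cache/huggingface/hub/" + repo_id.replace("/", "--")

# One fixture, two params: every dependent test then runs once with a
# local directory and once with a bare Hugging Face repo id, covering
# both loading paths.
@pytest.fixture(scope="session", params=["local", "hf_repo_id"])
def sql_lora_files(request):
    repo_id = "yard1/llama-2-7b-sql-lora-test"
    if request.param == "local":
        return fake_snapshot_download(repo_id)
    return repo_id
```

Parameterizing the fixture keeps the existing tests untouched while doubling their coverage across both path forms.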

@Jeffwan force-pushed the jiaxin/load-lora-from-hg branch from 89edb62 to 031d7b4 on July 12, 2024 23:32
@Yard1 (Collaborator) left a comment:

The tests look good. Let's address https://github.com/vllm-project/vllm/pull/6234/files#r1669808107 and we can merge.

tests/lora/conftest.py (outdated, resolved)
@Jeffwan (Contributor Author) commented Jul 13, 2024

> The tests look good. Let's address https://github.com/vllm-project/vllm/pull/6234/files#r1669808107 and we can merge.

Sounds good. Let me update the PR to include this change.

@Jeffwan (Contributor Author) commented Jul 15, 2024

@Yard1 I added the deprecation notice to LoRARequest. Please take another look.

Default behavior:

[screenshot]

Explicitly using the deprecated field:

[screenshot]

Required field missing:

[screenshot]

@Jeffwan force-pushed the jiaxin/load-lora-from-hg branch 3 times, most recently from cb267b8 to 61a6cff on July 15, 2024 20:12
@Jeffwan (Contributor Author) commented Jul 15, 2024

[screenshot]

The test seems flaky now. I rebased on the upstream changes; let's take a look.

@Jeffwan (Contributor Author) commented Jul 15, 2024

@Yard1 Should I wait for upstream to fix the document build issue?

@DarkLight1337 added the ready (ONLY add when PR is ready to merge/full CI is needed) label on Jul 16, 2024
@DarkLight1337 (Member) commented:

The error has been fixed so you can merge main to resolve it. Next time this happens, you can safely ignore it as the approver can ask others to force-merge the PR in the case of broken CI.

@Jeffwan force-pushed the jiaxin/load-lora-from-hg branch 3 times, most recently from e082ebc to d8e5a11 on July 17, 2024 00:18
@Jeffwan (Contributor Author) commented Jul 17, 2024

@Yard1 Can you help check the failed test? I am not sure whether it's related to this change. The test suite passed earlier, and I didn't see this test kicked off in other PRs.

[screenshot]

[screenshot]

@Yard1 (Collaborator) commented Jul 17, 2024

Yeah it's a bit weird. Can you merge master to retry CI?

@Jeffwan force-pushed the jiaxin/load-lora-from-hg branch from d8e5a11 to 240e1e6 on July 17, 2024 21:25
@Jeffwan force-pushed the jiaxin/load-lora-from-hg branch from 240e1e6 to 0ea0d0d on July 17, 2024 21:25
@Jeffwan (Contributor Author) commented Jul 17, 2024

Looks like it still has that error. I am on a 4-GPU node running tests/lora/test_long_context.py against the master branch to see what happens.

@Jeffwan (Contributor Author) commented Jul 18, 2024

The master branch tests pass:
```
root@37bd47dfb2b2:/workspace/vllm# export VLLM_WORKER_MULTIPROC_METHOD=spawn
root@37bd47dfb2b2:/workspace/vllm# pytest -v -s -x tests/lora/test_long_context.py
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
============================================================== test session starts ==============================================================
platform linux -- Python 3.10.12, pytest-8.2.2, pluggy-1.5.0 -- /usr/bin/python
cachedir: .pytest_cache
rootdir: /workspace/vllm
configfile: pyproject.toml
plugins: shard-0.1.2, rerunfailures-14.0, forked-1.6.0, asyncio-0.23.8, anyio-4.2.0
asyncio: mode=strict
collected 5 items                                                                                                                               
Running 5 items in this shard: tests/lora/test_long_context.py::test_rotary_emb_replaced, tests/lora/test_long_context.py::test_batched_rope_kernel, tests/lora/test_long_context.py::test_self_consistency, tests/lora/test_long_context.py::test_quality, tests/lora/test_long_context.py::test_max_len

config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 609/609 [00:00<00:00, 1.94MB/s]
INFO 07-17 23:35:32 weight_utils.py:219] Using model weights format ['*.safetensors']
model-00002-of-00002.safetensors: 100%|█████████████████████████████████████████████████████████████████████| 3.50G/3.50G [01:19<00:00, 44.2MB/s]
model-00001-of-00002.safetensors: 100%|█████████████████████████████████████████████████████████████████████| 9.98G/9.98G [03:32<00:00, 46.9MB/s]
model.safetensors.index.json: 100%|█████████████████████████████████████████████████████████████████████████| 26.8k/26.8k [00:00<00:00, 51.4MB/s]
INFO 07-17 23:39:08 model_runner.py:559] Loading model weights took 12.5562 GB
PASSED
.gitattributes: 100%|███████████████████████████████████████████████████████████████████████████████████████| 1.52k/1.52k [00:00<00:00, 11.0MB/s]
config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 812/812 [00:00<00:00, 4.32MB/s]
tokenizer.model: 100%|████████████████████████████████████████████████████████████████████████████████████████| 500k/500k [00:00<00:00, 98.1MB/s]
tokenizer_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████| 875/875 [00:00<00:00, 5.30MB/s]
adapter_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████| 615/615 [00:00<00:00, 2.34MB/s]
special_tokens_map.json: 100%|█████████████████████████████████████████████████████████████████████████████████| 95.0/95.0 [00:00<00:00, 448kB/s]
added_tokens.json: 100%|███████████████████████████████████████████████████████████████████████████████████████| 42.0/42.0 [00:00<00:00, 247kB/s]
README.md: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 93.0/93.0 [00:00<00:00, 553kB/s]
adapter_model.safetensors: 100%|█████████████████████████████████████████████████████████████████████████████| 63.8M/63.8M [00:00<00:00, 389MB/s]
tokenizer.json: 100%|███████████████████████████████████████████████████████████████████████████████████████| 1.84M/1.84M [00:00<00:00, 8.96MB/s]
Fetching 10 files: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 21.98it/s]
config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 772/772 [00:00<00:00, 5.51MB/s]
new_embeddings.safetensors: 100%|█████████████████████████████████████████████████████████████████████████████| 16.0/16.0 [00:00<00:00, 48.6kB/s]
README.md: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 93.0/93.0 [00:00<00:00, 392kB/s]
adapter_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████| 615/615 [00:00<00:00, 2.08MB/s]
special_tokens_map.json: 100%|██████████████████████████████████████████████████████████████████████████████████| 437/437 [00:00<00:00, 3.34MB/s]
tokenizer.model: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 500k/500k [00:00<00:00, 121MB/s]
tokenizer_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████| 892/892 [00:00<00:00, 3.71MB/s]
adapter_model.safetensors: 100%|█████████████████████████████████████████████████████████████████████████████| 63.8M/63.8M [00:00<00:00, 349MB/s]
.gitattributes: 100%|███████████████████████████████████████████████████████████████████████████████████████| 1.52k/1.52k [00:00<00:00, 3.96MB/s]
tokenizer.json: 100%|███████████████████████████████████████████████████████████████████████████████████████| 1.84M/1.84M [00:00<00:00, 9.44MB/s]
Fetching 10 files: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 27.16it/s]
.gitattributes: 100%|███████████████████████████████████████████████████████████████████████████████████████| 1.52k/1.52k [00:00<00:00, 7.25MB/s]
adapter_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████| 615/615 [00:00<00:00, 2.39MB/s]
README.md: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 93.0/93.0 [00:00<00:00, 611kB/s]
added_tokens.json: 100%|███████████████████████████████████████████████████████████████████████████████████████| 42.0/42.0 [00:00<00:00, 135kB/s]
config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 812/812 [00:00<00:00, 2.54MB/s]
special_tokens_map.json: 100%|█████████████████████████████████████████████████████████████████████████████████| 95.0/95.0 [00:00<00:00, 550kB/s]
tokenizer.model: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 500k/500k [00:00<00:00, 108MB/s]
adapter_model.safetensors: 100%|█████████████████████████████████████████████████████████████████████████████| 63.8M/63.8M [00:00<00:00, 287MB/s]
tokenizer_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████| 875/875 [00:00<00:00, 2.66MB/s]
tokenizer.json: 100%|███████████████████████████████████████████████████████████████████████████████████████| 1.84M/1.84M [00:00<00:00, 8.43MB/s]
Fetching 10 files: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 24.37it/s]
config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 587/587 [00:00<00:00, 1.94MB/s]
INFO 07-17 23:39:10 llm_engine.py:174] Initializing an LLM engine (v0.5.2) with config: model='meta-llama/Llama-2-13b-chat-hf', speculative_config=None, tokenizer='meta-llama/Llama-2-13b-chat-hf', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=meta-llama/Llama-2-13b-chat-hf, use_v2_block_manager=False, enable_prefix_caching=False)
tokenizer_config.json: 100%|████████████████████████████████████████████████████████████████████████████████| 1.62k/1.62k [00:00<00:00, 3.89MB/s]
tokenizer.model: 100%|████████████████████████████████████████████████████████████████████████████████████████| 500k/500k [00:00<00:00, 9.31MB/s]
tokenizer.json: 100%|███████████████████████████████████████████████████████████████████████████████████████| 1.84M/1.84M [00:00<00:00, 27.1MB/s]
special_tokens_map.json: 100%|██████████████████████████████████████████████████████████████████████████████████| 414/414 [00:00<00:00, 1.90MB/s]
generation_config.json: 100%|███████████████████████████████████████████████████████████████████████████████████| 188/188 [00:00<00:00, 1.03MB/s]
INFO 07-17 23:39:11 custom_cache_manager.py:17] Setting Triton cache manager to: vllm.triton_utils.custom_cache_manager:CustomCacheManager
(VllmWorkerProcess pid=6921) INFO 07-17 23:39:16 multiproc_worker_utils.py:215] Worker ready; awaiting tasks
(VllmWorkerProcess pid=6923) INFO 07-17 23:39:16 multiproc_worker_utils.py:215] Worker ready; awaiting tasks
(VllmWorkerProcess pid=6922) INFO 07-17 23:39:17 multiproc_worker_utils.py:215] Worker ready; awaiting tasks
(VllmWorkerProcess pid=6922) INFO 07-17 23:39:17 utils.py:737] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=6922) INFO 07-17 23:39:17 pynccl.py:63] vLLM is using nccl==2.20.5
INFO 07-17 23:39:17 utils.py:737] Found nccl from library libnccl.so.2
INFO 07-17 23:39:17 pynccl.py:63] vLLM is using nccl==2.20.5
(VllmWorkerProcess pid=6921) INFO 07-17 23:39:17 utils.py:737] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=6921) INFO 07-17 23:39:17 pynccl.py:63] vLLM is using nccl==2.20.5
(VllmWorkerProcess pid=6923) INFO 07-17 23:39:17 utils.py:737] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=6923) INFO 07-17 23:39:17 pynccl.py:63] vLLM is using nccl==2.20.5
WARNING 07-17 23:39:18 custom_all_reduce.py:118] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=6923) WARNING 07-17 23:39:18 custom_all_reduce.py:118] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=6921) WARNING 07-17 23:39:18 custom_all_reduce.py:118] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=6922) WARNING 07-17 23:39:18 custom_all_reduce.py:118] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
INFO 07-17 23:39:18 shm_broadcast.py:233] vLLM message queue communication handle: Handle(connect_ip='127.0.0.1', local_reader_ranks=[1, 2, 3], buffer=<vllm.distributed.device_communicators.shm_broadcast.ShmRingBuffer object at 0x7f893074e380>, local_subscribe_port=48719, local_sync_port=42317, remote_subscribe_port=None, remote_sync_port=None)
INFO 07-17 23:39:18 weight_utils.py:219] Using model weights format ['*.safetensors']
model-00002-of-00003.safetensors:   0%|                                                                              | 0.00/9.90G [00:00<?, ?B/s](VllmWorkerProcess pid=6921) INFO 07-17 23:39:18 weight_utils.py:219] Using model weights format ['*.safetensors']
(VllmWorkerProcess pid=6922) INFO 07-17 23:39:18 weight_utils.py:219] Using model weights format ['*.safetensors']
model-00002-of-00003.safetensors:   0%|▎                                                                     | 41.9M/9.90G [00:00<00:34, 284MB/s](VllmWorkerProcess pid=6923) INFO 07-17 23:39:18 weight_utils.py:219] Using model weights format ['*.safetensors']   | 0.00/6.18G [00:00<?, ?B/s]
model-00003-of-00003.safetensors: 100%|██████████████████████████████████████████████████████████████████████| 6.18G/6.18G [00:13<00:00, 456MB/s]
model-00002-of-00003.safetensors: 100%|██████████████████████████████████████████████████████████████████████| 9.90G/9.90G [00:20<00:00, 472MB/s]
model-00001-of-00003.safetensors: 100%|█████████████████████████████████████████████████████████████████████| 9.95G/9.95G [01:54<00:00, 87.0MB/s]
model.safetensors.index.json: 100%|█████████████████████████████████████████████████████████████████████████| 33.4k/33.4k [00:00<00:00, 59.7MB/s]
INFO 07-17 23:41:14 model_runner.py:559] Loading model weights took 6.1128 GB██████████████████████████████▉| 9.94G/9.95G [01:54<00:00, 89.1MB/s]
(VllmWorkerProcess pid=6923) INFO 07-17 23:41:16 model_runner.py:559] Loading model weights took 6.1138 GB
(VllmWorkerProcess pid=6922) INFO 07-17 23:41:16 model_runner.py:559] Loading model weights took 6.1138 GB
(VllmWorkerProcess pid=6921) INFO 07-17 23:41:16 model_runner.py:559] Loading model weights took 6.1138 GB
INFO 07-17 23:41:28 distributed_gpu_executor.py:56] # GPU blocks: 10039, # CPU blocks: 1310
INFO 07-17 23:41:30 model_runner.py:847] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
INFO 07-17 23:41:30 model_runner.py:851] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
(VllmWorkerProcess pid=6923) INFO 07-17 23:41:31 model_runner.py:847] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
(VllmWorkerProcess pid=6923) INFO 07-17 23:41:31 model_runner.py:851] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
(VllmWorkerProcess pid=6922) INFO 07-17 23:41:31 model_runner.py:847] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
(VllmWorkerProcess pid=6922) INFO 07-17 23:41:31 model_runner.py:851] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
(VllmWorkerProcess pid=6921) INFO 07-17 23:41:31 model_runner.py:847] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
(VllmWorkerProcess pid=6921) INFO 07-17 23:41:31 model_runner.py:851] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
(VllmWorkerProcess pid=6922) INFO 07-17 23:41:33 model_runner.py:1048] Graph capturing finished in 2 secs.
(VllmWorkerProcess pid=6923) INFO 07-17 23:41:33 model_runner.py:1048] Graph capturing finished in 2 secs.
INFO 07-17 23:41:33 model_runner.py:1048] Graph capturing finished in 3 secs.
(VllmWorkerProcess pid=6921) INFO 07-17 23:41:33 model_runner.py:1048] Graph capturing finished in 2 secs.
Processed prompts: 100%|███████████████████████████████████| 1/1 [00:06<00:00,  6.74s/it, est. speed input: 1667.45 toks/s, output: 11.72 toks/s]
Processed prompts: 100%|███████████████████████████████████| 1/1 [00:06<00:00,  6.79s/it, est. speed input: 1655.20 toks/s, output: 11.63 toks/s]
Processed prompts: 100%|████████████████████████████████████| 1/1 [00:08<00:00,  8.71s/it, est. speed input: 2620.16 toks/s, output: 6.08 toks/s]
Processed prompts: 100%|███████████████████████████████████| 3/3 [00:18<00:00,  6.13s/it, est. speed input: 2464.13 toks/s, output: 11.47 toks/s]
PASSED
Processed prompts: 100%|███████████████████████████████████| 3/3 [00:18<00:00,  6.08s/it, est. speed input: 2486.21 toks/s, output: 11.58 toks/s]
Processed prompts: 100%|███████████████████████████████████| 3/3 [00:18<00:00,  6.23s/it, est. speed input: 2423.89 toks/s, output: 11.29 toks/s]
PASSED
Processed prompts: 100%|███████████████████████████████████| 1/1 [00:06<00:00,  6.57s/it, est. speed input: 1712.48 toks/s, output: 12.03 toks/s]
Processed prompts: 100%|███████████████████████████████████| 1/1 [00:06<00:00,  6.64s/it, est. speed input: 1722.46 toks/s, output: 12.04 toks/s]
Processed prompts: 100%|███████████████████████████████████| 1/1 [00:06<00:00,  6.51s/it, est. speed input: 1767.33 toks/s, output: 11.67 toks/s]
Processed prompts: 100%|███████████████████████████████████| 1/1 [00:06<00:00,  6.51s/it, est. speed input: 1751.99 toks/s, output: 11.67 toks/s]
Processed prompts: 100%|███████████████████████████████████| 1/1 [00:06<00:00,  6.65s/it, est. speed input: 1690.69 toks/s, output: 11.88 toks/s]
Processed prompts: 100%|███████████████████████████████████| 1/1 [00:06<00:00,  6.76s/it, est. speed input: 1693.01 toks/s, output: 11.84 toks/s]
Processed prompts: 100%|███████████████████████████████████| 1/1 [00:06<00:00,  6.59s/it, est. speed input: 1746.39 toks/s, output: 11.53 toks/s]
Processed prompts: 100%|███████████████████████████████████| 1/1 [00:06<00:00,  6.48s/it, est. speed input: 1761.08 toks/s, output: 11.73 toks/s]
Processed prompts: 100%|████████████████████████████████████| 1/1 [00:08<00:00,  8.98s/it, est. speed input: 2541.00 toks/s, output: 5.90 toks/s]
PASSED
Processed prompts: 100%|███████████████████████████████████| 1/1 [00:06<00:00,  6.65s/it, est. speed input: 1692.07 toks/s, output: 11.89 toks/s]
Processed prompts: 100%|███████████████████████████████████| 1/1 [00:06<00:00,  6.88s/it, est. speed input: 1635.11 toks/s, output: 11.49 toks/s]
Processed prompts: 100%|████████████████████████████████████| 1/1 [00:09<00:00,  9.00s/it, est. speed input: 2535.25 toks/s, output: 5.89 toks/s]
PASSED

========================================================= 5 passed in 525.85s (0:08:45) =========================================================
ERROR 07-17 23:44:18 multiproc_worker_utils.py:120] Worker VllmWorkerProcess pid 6923 died, exit code: -15
INFO 07-17 23:44:18 multiproc_worker_utils.py:123] Killing local vLLM worker processes
/usr/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 3 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
/usr/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
```

The PR branch fails:

```
  root@37bd47dfb2b2:/workspace/vllm# git rev-parse HEAD
  0ea0d0d26ba5d4f6434be865e79ab77ec9cd8aad
  root@37bd47dfb2b2:/workspace/vllm# export VLLM_WORKER_MULTIPROC_METHOD=spawn
  root@37bd47dfb2b2:/workspace/vllm# pytest -v -s -x tests/lora/test_long_context.py
  ====================================================================================== test session starts =======================================================================================
  platform linux -- Python 3.10.12, pytest-8.2.2, pluggy-1.5.0 -- /usr/bin/python
  cachedir: .pytest_cache
  rootdir: /workspace/vllm
  configfile: pyproject.toml
  plugins: shard-0.1.2, rerunfailures-14.0, forked-1.6.0, asyncio-0.23.8, anyio-4.2.0
  asyncio: mode=strict
  collected 5 items                                                                                                                                                                                
  Running 5 items in this shard: tests/lora/test_long_context.py::test_rotary_emb_replaced, tests/lora/test_long_context.py::test_batched_rope_kernel, tests/lora/test_long_context.py::test_self_consistency, tests/lora/test_long_context.py::test_quality, tests/lora/test_long_context.py::test_max_len
  
  config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 609/609 [00:00<00:00, 1.89MB/s]
  INFO 07-18 00:04:04 weight_utils.py:219] Using model weights format ['*.safetensors']
  model-00001-of-00002.safetensors: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9.98G/9.98G [00:18<00:00, 548MB/s]
  model-00002-of-00002.safetensors: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.50G/3.50G [00:40<00:00, 87.3MB/s]
  model.safetensors.index.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 26.8k/26.8k [00:00<00:00, 59.9MB/s]
  INFO 07-18 00:04:48 model_runner.py:559] Loading model weights took 12.5562 GB
  PASSED
  Fetching 10 files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 9767.82it/s]
  Fetching 10 files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 43106.93it/s]
  Fetching 10 files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 17353.35it/s]
  INFO 07-18 00:04:49 llm_engine.py:174] Initializing an LLM engine (v0.5.2) with config: model='meta-llama/Llama-2-13b-chat-hf', speculative_config=None, tokenizer='meta-llama/Llama-2-13b-chat-hf', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=meta-llama/Llama-2-13b-chat-hf, use_v2_block_manager=False, enable_prefix_caching=False)
  INFO 07-18 00:04:49 custom_cache_manager.py:17] Setting Triton cache manager to: vllm.triton_utils.custom_cache_manager:CustomCacheManager
  (VllmWorkerProcess pid=8912) INFO 07-18 00:04:53 multiproc_worker_utils.py:215] Worker ready; awaiting tasks
  (VllmWorkerProcess pid=8911) INFO 07-18 00:04:53 multiproc_worker_utils.py:215] Worker ready; awaiting tasks
  (VllmWorkerProcess pid=8913) INFO 07-18 00:04:53 multiproc_worker_utils.py:215] Worker ready; awaiting tasks
  INFO 07-18 00:04:54 utils.py:737] Found nccl from library libnccl.so.2
  INFO 07-18 00:04:54 pynccl.py:63] vLLM is using nccl==2.20.5
  (VllmWorkerProcess pid=8912) INFO 07-18 00:04:54 utils.py:737] Found nccl from library libnccl.so.2
  (VllmWorkerProcess pid=8912) INFO 07-18 00:04:54 pynccl.py:63] vLLM is using nccl==2.20.5
  (VllmWorkerProcess pid=8911) INFO 07-18 00:04:54 utils.py:737] Found nccl from library libnccl.so.2
  (VllmWorkerProcess pid=8911) INFO 07-18 00:04:54 pynccl.py:63] vLLM is using nccl==2.20.5
  (VllmWorkerProcess pid=8913) INFO 07-18 00:04:54 utils.py:737] Found nccl from library libnccl.so.2
  (VllmWorkerProcess pid=8913) INFO 07-18 00:04:54 pynccl.py:63] vLLM is using nccl==2.20.5
  (VllmWorkerProcess pid=8913) WARNING 07-18 00:04:55 custom_all_reduce.py:118] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
  (VllmWorkerProcess pid=8912) WARNING 07-18 00:04:55 custom_all_reduce.py:118] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
  (VllmWorkerProcess pid=8911) WARNING 07-18 00:04:55 custom_all_reduce.py:118] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
  WARNING 07-18 00:04:55 custom_all_reduce.py:118] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
  INFO 07-18 00:04:55 shm_broadcast.py:233] vLLM message queue communication handle: Handle(connect_ip='127.0.0.1', local_reader_ranks=[1, 2, 3], buffer=<vllm.distributed.device_communicators.shm_broadcast.ShmRingBuffer object at 0x7fdff04d6020>, local_subscribe_port=54255, local_sync_port=53789, remote_subscribe_port=None, remote_sync_port=None)
  INFO 07-18 00:04:55 weight_utils.py:219] Using model weights format ['*.safetensors']
  (VllmWorkerProcess pid=8911) INFO 07-18 00:04:55 weight_utils.py:219] Using model weights format ['*.safetensors']
  (VllmWorkerProcess pid=8912) INFO 07-18 00:04:55 weight_utils.py:219] Using model weights format ['*.safetensors']
  (VllmWorkerProcess pid=8913) INFO 07-18 00:04:55 weight_utils.py:219] Using model weights format ['*.safetensors']
  INFO 07-18 00:04:57 model_runner.py:559] Loading model weights took 6.1128 GB
  (VllmWorkerProcess pid=8911) INFO 07-18 00:04:58 model_runner.py:559] Loading model weights took 6.1138 GB
  (VllmWorkerProcess pid=8912) INFO 07-18 00:04:59 model_runner.py:559] Loading model weights took 6.1138 GB
  (VllmWorkerProcess pid=8913) INFO 07-18 00:04:59 model_runner.py:559] Loading model weights took 6.1138 GB
  INFO 07-18 00:05:10 distributed_gpu_executor.py:56] # GPU blocks: 10039, # CPU blocks: 1310
  INFO 07-18 00:05:12 model_runner.py:847] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
  INFO 07-18 00:05:12 model_runner.py:851] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
  (VllmWorkerProcess pid=8912) INFO 07-18 00:05:13 model_runner.py:847] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
  (VllmWorkerProcess pid=8912) INFO 07-18 00:05:13 model_runner.py:851] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
  (VllmWorkerProcess pid=8913) INFO 07-18 00:05:13 model_runner.py:847] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
  (VllmWorkerProcess pid=8913) INFO 07-18 00:05:13 model_runner.py:851] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
  (VllmWorkerProcess pid=8911) INFO 07-18 00:05:13 model_runner.py:847] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
  (VllmWorkerProcess pid=8911) INFO 07-18 00:05:13 model_runner.py:851] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
  (VllmWorkerProcess pid=8912) INFO 07-18 00:05:15 model_runner.py:1048] Graph capturing finished in 2 secs.
  INFO 07-18 00:05:15 model_runner.py:1048] Graph capturing finished in 3 secs.
  (VllmWorkerProcess pid=8913) INFO 07-18 00:05:15 model_runner.py:1048] Graph capturing finished in 2 secs.
  (VllmWorkerProcess pid=8911) INFO 07-18 00:05:15 model_runner.py:1048] Graph capturing finished in 2 secs.
  Processed prompts:   0%|                                                                                                | 0/1 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]WARNING 07-18 00:05:15 scheduler.py:699] Input prompt (11244 tokens) is too long and exceeds limit of 4096
  Processed prompts: 100%|███████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2263.52it/s, est. speed input: 29438673.02 toks/s, output: 0.00 toks/s]
  Processed prompts:   0%|                                                                                                | 0/1 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]WARNING 07-18 00:05:15 scheduler.py:699] Input prompt (11244 tokens) is too long and exceeds limit of 4096
  Processed prompts: 100%|███████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2560.63it/s, est. speed input: 33399967.55 toks/s, output: 0.00 toks/s]
  Processed prompts:   0%|                                                                                                | 0/1 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]WARNING 07-18 00:05:15 scheduler.py:699] Input prompt (22825 tokens) is too long and exceeds limit of 4096
  Processed prompts: 100%|███████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2013.59it/s, est. speed input: 52228580.91 toks/s, output: 0.00 toks/s]
  Processed prompts:   0%|                                                                                                | 0/3 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]WARNING 07-18 00:05:15 scheduler.py:699] Input prompt (11244 tokens) is too long and exceeds limit of 4096
  WARNING 07-18 00:05:15 scheduler.py:699] Input prompt (11244 tokens) is too long and exceeds limit of 4096
  WARNING 07-18 00:05:15 scheduler.py:699] Input prompt (22825 tokens) is too long and exceeds limit of 4096
  Processed prompts: 100%|███████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3807.24it/s, est. speed input: 60954617.43 toks/s, output: 0.00 toks/s]
  PASSED
  Processed prompts:   0%|                                                                                                | 0/3 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]WARNING 07-18 00:05:15 scheduler.py:699] Input prompt (11244 tokens) is too long and exceeds limit of 4096
  WARNING 07-18 00:05:15 scheduler.py:699] Input prompt (11244 tokens) is too long and exceeds limit of 4096
  WARNING 07-18 00:05:15 scheduler.py:699] Input prompt (22825 tokens) is too long and exceeds limit of 4096
  Processed prompts: 100%|███████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 4068.19it/s, est. speed input: 65266654.24 toks/s, output: 0.00 toks/s]
  Processed prompts:   0%|                                                                                                | 0/3 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]WARNING 07-18 00:05:15 scheduler.py:699] Input prompt (22825 tokens) is too long and exceeds limit of 4096
  WARNING 07-18 00:05:15 scheduler.py:699] Input prompt (11244 tokens) is too long and exceeds limit of 4096
  WARNING 07-18 00:05:15 scheduler.py:699] Input prompt (11244 tokens) is too long and exceeds limit of 4096
  Processed prompts: 100%|███████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 4105.35it/s, est. speed input: 65877468.68 toks/s, output: 0.00 toks/s]
  PASSED
  Processed prompts:   0%|                                                                                                | 0/1 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]WARNING 07-18 00:05:16 scheduler.py:699] Input prompt (11244 tokens) is too long and exceeds limit of 4096
  Processed prompts: 100%|███████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 3172.70it/s, est. speed input: 42912424.18 toks/s, output: 0.00 toks/s]
  FAILED
  
  ============================================================================================ FAILURES ============================================================================================
  __________________________________________________________________________________________ test_quality __________________________________________________________________________________________
  
  model_response = ''
  golden_response = {'date_of_birth': {'day': 6, 'month': 3, 'year': 1993}, 'date_of_death': {'day': 26, 'month': 5, 'year': 2015}, 'nationality': 'American', 'politician': False, ...}
  
      def evaluate_json_response(model_response, golden_response):
          """Evaluates the model response against the golden response.
      
          Returns a score between 0 and 1, where 1 is a perfect match and 0 is no
          match. The score quantifies how well the model is able to extract the
          golden JSON from the long context.
          """
          try:
  >           model_response = ast.literal_eval(model_response)
  
  tests/lora/test_long_context.py:44: 
  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
  /usr/lib/python3.10/ast.py:64: in literal_eval
      node_or_string = parse(node_or_string.lstrip(" \t"), mode='eval')
  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
  
  source = '', filename = '<unknown>', mode = 'eval'
  
      def parse(source, filename='<unknown>', mode='exec', *,
                type_comments=False, feature_version=None):
          """
          Parse the source into an AST node.
          Equivalent to compile(source, filename, mode, PyCF_ONLY_AST).
          Pass type_comments=True to get back type comments where the syntax allows.
          """
          flags = PyCF_ONLY_AST
          if type_comments:
              flags |= PyCF_TYPE_COMMENTS
          if isinstance(feature_version, tuple):
              major, minor = feature_version  # Should be a 2-tuple.
              assert major == 3
              feature_version = minor
          elif feature_version is None:
              feature_version = -1
          # Else it should be an int giving the minor version for 3.x.
  >       return compile(source, filename, mode, flags,
                         _feature_version=feature_version)
  E         File "<unknown>", line 0
  E           
  E       SyntaxError: invalid syntax
  
  /usr/lib/python3.10/ast.py:50: SyntaxError
  
  The above exception was the direct cause of the following exception:
  
  lora_llm = <vllm.entrypoints.llm.LLM object at 0x7fdffc41cf40>
  long_context_infos = {1: {'context_length': '16k', 'lora': '/root/.cache/huggingface/hub/models--SangBinCho--long_context_16k_testing_1/sna...ache/huggingface/hub/models--SangBinCho--long_context_32k_testing/snapshots/697b5fbf3a38357722ee3fb2e8a6b8aba39f7658'}}
  
      @pytest.mark.skip_global_cleanup
      def test_quality(lora_llm, long_context_infos):
          """We test the quality of the answers given by the LoRA model by
              comparing the generated text to the merged model's outputs.
      
          This is effectively a mini-benchmark over four prompts.
          If this test fails, this indicates that the quality of the LoRA model
          is suboptimal compared to the merged model. For example, if the model
          does not output valid dictionaries, this test will fail.
      
          If needed for testing, the merged versions of the models are available
          as part of the `conftest`.
      
          The test is expected to run for about 1 minute on a p4de.24xlarge
          instance.
          """
          scores: List[float] = []
          for lora_id, info in long_context_infos.items():
              context_len = info["context_length"]
              for prompt_and_response in prompts_and_responses[context_len]:
                  lora_prompt = (prompt_and_response["prompt"], sampling_params,
                                 _create_lora_request(lora_id, long_context_infos))
                  response = generate(lora_llm, lora_prompt)
                  golden_answer = prompt_and_response["golden_answer"]
  >               score = evaluate_json_response(response, golden_answer)
  
  tests/lora/test_long_context.py:256: 
  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
  
  model_response = ''
  golden_response = {'date_of_birth': {'day': 6, 'month': 3, 'year': 1993}, 'date_of_death': {'day': 26, 'month': 5, 'year': 2015}, 'nationality': 'American', 'politician': False, ...}
  
      def evaluate_json_response(model_response, golden_response):
          """Evaluates the model response against the golden response.
      
          Returns a score between 0 and 1, where 1 is a perfect match and 0 is no
          match. The score quantifies how well the model is able to extract the
          golden JSON from the long context.
          """
          try:
              model_response = ast.literal_eval(model_response)
          except Exception as e:
  >           raise ValueError(
                  f"Model response is not a valid JSON. Expected {golden_response}, "
                  f"got  {model_response}") from e
  E           ValueError: Model response is not a valid JSON. Expected {'nationality': 'American', 'date_of_birth': {'day': 6, 'month': 3, 'year': 1993}, 'date_of_death': {'day': 26, 'month': 5, 'year': 2015}, 'sportsperson': True, 'politician': False}, got
  
  tests/lora/test_long_context.py:46: ValueError
  ======================================================================================== warnings summary ========================================================================================
  tests/lora/test_long_context.py: 15 warnings
    <string>:8: DeprecationWarning: The 'lora_local_path' attribute is deprecated and will be removed in a future version. Please use 'lora_path' instead.
  
  -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
  ==================================================================================== short test summary info =====================================================================================
  FAILED tests/lora/test_long_context.py::test_quality - ValueError: Model response is not a valid JSON. Expected {'nationality': 'American', 'date_of_birth': {'day': 6, 'month': 3, 'year': 1993}, 'date_of_death': {'day': 26, 'month': 5, 'year': ...
  !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
  ====================================================================== 1 failed, 3 passed, 15 warnings in 73.46s (0:01:13) =======================================================================
  ERROR 07-18 00:05:19 multiproc_worker_utils.py:120] Worker VllmWorkerProcess pid 8913 died, exit code: -15
  Fatal Python error: _enter_buffered_busy: could not acquire lock for <_io.BufferedWriter name='<stdout>'> at interpreter shutdown, possibly due to daemon threads
  Python runtime state: finalizing (tstate=0x0000564fc9ffc700)
  
  Current thread 0x00007fe15b45c480 (most recent call first):
    <no Python frame>
  
  Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, PIL._imaging, charset_normalizer.md, requests.packages.charset_normalizer.md, requests.packages.chardet.md, yaml._yaml, sentencepiece._sentencepiece, psutil._psutil_linux, psutil._psutil_posix, msgpack._cmsgpack, google.protobuf.pyext._message, setproctitle, uvloop.loop, ray._raylet, hiredis.hiredis, zmq.backend.cython.context, zmq.backend.cython.message, zmq.backend.cython.socket, zmq.backend.cython._device, zmq.backend.cython._poll, zmq.backend.cython._proxy_steerable, zmq.backend.cython._version, zmq.backend.cython.error, zmq.backend.cython.utils (total: 43)
  /usr/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 3 leaked semaphore objects to clean up at shutdown
    warnings.warn('resource_tracker: There appear to be %d '
  /usr/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
    warnings.warn('resource_tracker: There appear to be %d '
  Aborted
  ```
```
LoRARequest(lora_name='16k', lora_int_id=1, lora_path='/root/.cache/huggingface/hub/models--SangBinCho--long_context_16k_testing_1/snapshots/1f46be8e9b25251714555776f90dcb7a853806c6', long_lora_max_len=None))

Processed prompts:   0%|                                                                                                | 0/1 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]WARNING 07-18 00:24:09 scheduler.py:699] Input prompt (11244 tokens) is too long and exceeds limit of 4096
Processed prompts: 100%|███████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2402.24it/s, est. speed input: 31886919.66 toks/s, output: 0.00 toks/s]
----------------

{'nationality': 'American', 'date_of_birth': {'day': 6, 'month': 3, 'year': 1993}, 'date_of_death': {'day': 26, 'month': 5, 'year': 2015}, 'sportsperson': True, 'politician': False}
----------------
```

It seems the LoRA adapter has a 16k context length while the base model is limited to 4096. In that case the model generates nothing, so the evaluation fails. After investigating, I found the problem: I introduced a new field in `LoRARequest`, which changed the order of the constructor arguments. If the caller doesn't pass the field by keyword, the positional arguments end up assigned to the wrong fields. I'll add a note and fix the issue.

```python
return LoRARequest(context_len, lora_id,
                   long_context_infos[lora_id]["lora"],
                   4096 * scaling_factor)
```

@Jeffwan Jeffwan force-pushed the jiaxin/load-lora-from-hg branch from 56daa6c to 26a22d5 Compare July 18, 2024 01:08
The problem comes from the newly added field: the test didn't set the field explicitly by keyword but relied on positional arguments, so the values shifted to the wrong parameters.
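The failure mode can be sketched with a minimal, hypothetical example (the class and field names below are illustrative stand-ins, not vLLM's actual `LoRARequest` definition):

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for LoRARequest after a new optional field was inserted
# before long_lora_max_len (names here are hypothetical).
@dataclass
class LoRARequestSketch:
    lora_name: str
    lora_int_id: int
    lora_path: str
    new_field: Optional[str] = None       # the newly inserted field
    long_lora_max_len: Optional[int] = None

# An old call site that passed long_lora_max_len positionally now feeds
# the value into new_field instead, and long_lora_max_len silently stays
# None -- which disables long-context support.
req = LoRARequestSketch("16k", 1, "/path/to/lora", 65536)
assert req.long_lora_max_len is None and req.new_field == 65536

# Passing optional fields by keyword is robust to field insertions:
req = LoRARequestSketch("16k", 1, "/path/to/lora", long_lora_max_len=65536)
assert req.long_lora_max_len == 65536
```

This is why the fix updates the test to pass `long_lora_max_len` by keyword rather than by position.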
@Jeffwan Jeffwan force-pushed the jiaxin/load-lora-from-hg branch from 26a22d5 to ba2a04d Compare July 18, 2024 02:33
@Jeffwan
Copy link
Contributor Author

Jeffwan commented Jul 18, 2024

The LoRA issue is fixed, but there's a pipeline-parallel (PP) test failure. I made a minor comment change to re-trigger the tests.

@Jeffwan
Copy link
Contributor Author

Jeffwan commented Jul 18, 2024

@Yard1 Please have another look. The previous failure has been fixed and all tests pass now.

@Jeffwan Jeffwan changed the title Support dynamically loading Lora adapter from HuggingFace [Core] Support dynamically loading Lora adapter from HuggingFace Jul 19, 2024
@Jeffwan
Copy link
Contributor Author

Jeffwan commented Jul 22, 2024

@Yard1 Do you have further comments or suggestions on this PR? I have some other WIP PRs that need this change. If this can be merged, it would definitely simplify the rebase work. Thanks!

@Yard1 Yard1 merged commit 42c7f66 into vllm-project:main Jul 22, 2024
72 checks passed
@Yard1
Copy link
Collaborator

Yard1 commented Jul 22, 2024

Merged, thanks!

@Jeffwan Jeffwan deleted the jiaxin/load-lora-from-hg branch July 22, 2024 22:50
xjpang pushed a commit to xjpang/vllm that referenced this pull request Jul 24, 2024
gnpinkert pushed a commit to gnpinkert/vllm that referenced this pull request Jul 26, 2024
@Jeffwan
Copy link
Contributor Author

Jeffwan commented Jul 29, 2024

@ZXTFINAL it's supposed to be model-agnostic. Do you mind opening an issue with more details? Please cc me and I can help debug it.
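For context, the feature is model-agnostic because it only resolves where the adapter lives before loading it. A minimal sketch of that resolution logic, under the assumption that a path is tried locally first and otherwise treated as a Hugging Face model id (the helper name and structure are assumptions, not the PR's exact code):

```python
import os


def resolve_lora_path(lora_path: str) -> str:
    """Resolve a LoRA adapter reference to a local directory.

    If lora_path is an existing local directory (absolute or relative),
    use it directly; otherwise treat it as a Hugging Face model id and
    download the adapter files. (Hypothetical helper -- the merged
    implementation may structure this differently.)
    """
    if os.path.isdir(lora_path):
        return os.path.abspath(lora_path)
    # Deferred import so the local-path case adds no extra dependency.
    from huggingface_hub import snapshot_download
    return snapshot_download(repo_id=lora_path)
```

With this shape, `resolve_lora_path("my-org/my-lora")` downloads from the Hub, while `resolve_lora_path("./adapters/sql-lora")` keeps working as before.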

cduk pushed a commit to cduk/vllm-pascal that referenced this pull request Aug 6, 2024
@codybum
Copy link

codybum commented Aug 10, 2024

We are testing this branch, and things seem to work well for us. It would be great to see this merged with main in the near future.

kylesayrs pushed a commit to neuralmagic/vllm that referenced this pull request Aug 17, 2024
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
KuntaiDu pushed a commit to KuntaiDu/vllm that referenced this pull request Nov 20, 2024
Labels
ready ONLY add when PR is ready to merge/full CI is needed

Successfully merging this pull request may close these issues.

[Bug]: relative path doesn't work for Lora adapter model
4 participants