[Model] LoRA with lm_head and embed_tokens fully trained #24949
Annotations
36 errors
Ruff (F401):
benchmarks/benchmark_latency.py#L15
benchmarks/benchmark_latency.py:15:25: F401 `vllm.inputs.PromptInputs` imported but unused
|
Ruff (F401):
benchmarks/benchmark_latency.py#L17
benchmarks/benchmark_latency.py:17:53: F401 `vllm.model_executor.layers.quantization.QUANTIZATION_METHODS` imported but unused
|
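The two F401 findings above flag imports in `benchmarks/benchmark_latency.py` that are never referenced; the usual fix is simply to delete the import lines. As a simplified sketch of how an "imported but unused" check can reason about source code (this is not Ruff's actual implementation, and the one-line snippet stands in for the flagged file):

```python
import ast

# Stand-in for the flagged import line; ast.parse only parses the text,
# so vllm does not need to be installed.
SOURCE = "from vllm.inputs import PromptInputs\n"

tree = ast.parse(SOURCE)

# Names bound by import statements (honoring `as` aliases).
imported = {
    alias.asname or alias.name.split(".")[0]
    for node in ast.walk(tree)
    if isinstance(node, (ast.Import, ast.ImportFrom))
    for alias in node.names
}

# Names actually referenced anywhere in the module body.
used = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}

# Imported but never referenced -> an F401-style finding.
unused = sorted(imported - used)
print(unused)  # → ['PromptInputs']
```

A real linter additionally handles `__all__` re-exports and `# noqa` suppressions, which this sketch ignores.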
Ruff (E501):
tests/conftest.py#L906
tests/conftest.py:906:81: E501 Line too long (89 > 80)
|
Ruff (E501):
vllm/engine/arg_utils.py#L141
vllm/engine/arg_utils.py:141:81: E501 Line too long (145 > 80)
|
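The E501 findings only concern source-line length, so the fix is mechanical: wrap the line without changing the value it produces. A generic illustration (the string below is hypothetical, not the actual text from `vllm/engine/arg_utils.py`), using Python's compile-time concatenation of adjacent string literals inside parentheses:

```python
# Adjacent string literals inside parentheses are concatenated at compile
# time, so wrapping a long literal across lines leaves the value unchanged.
wrapped = (
    "Maximum number of LoRA adapters served concurrently; "
    "raising this increases GPU memory reserved for adapter weights."
)
unwrapped = ("Maximum number of LoRA adapters served concurrently; "
             "raising this increases GPU memory reserved for adapter weights.")

# The runtime value is identical either way; only the source layout differs.
print(wrapped == unwrapped)  # → True
```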
Ruff (E721):
vllm/lora/layers.py#L1394
vllm/lora/layers.py:1394:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
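The E721 findings ask for `is`/`is not` or `isinstance()` in place of `type(x) == T`. A generic illustration of the distinction (not the actual code from `vllm/lora/layers.py`): `==` on types can be overridden and hides intent, while `is` expresses an exact-type check and `isinstance()` an accepts-subclasses check.

```python
class Base:
    pass

class Child(Base):
    pass

x = Child()

# Equality comparison works here, but E721 flags it: it does not say
# whether subclasses are meant to pass, and __eq__ can be overridden.
assert type(x) == Child

# E721-clean alternatives:
assert type(x) is Child        # exact-type check (identity)
assert type(x) is not Base     # exact-type check rejects the base class
assert isinstance(x, Base)     # accepts subclasses of Base
```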
Ruff (E501):
vllm/lora/layers.py#L1405
vllm/lora/layers.py:1405:81: E501 Line too long (94 > 80)
|
Ruff (E721):
vllm/lora/layers.py#L1406
vllm/lora/layers.py:1406:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
|
Ruff (E501):
vllm/lora/layers.py#L1407
vllm/lora/layers.py:1407:81: E501 Line too long (112 > 80)
|
Ruff (E501):
vllm/lora/models.py#L123
vllm/lora/models.py:123:81: E501 Line too long (105 > 80)
|
Ruff (E501):
vllm/lora/models.py#L490
vllm/lora/models.py:490:81: E501 Line too long (97 > 80)
|
ruff (3.9)
The job was canceled because "_3_8" failed.
|
ruff (3.12)
The job was canceled because "_3_8" failed.
|
ruff (3.11)
The job was canceled because "_3_8" failed.
|
ruff (3.11)
The operation was canceled.
|
ruff (3.10)
The job was canceled because "_3_8" failed.
|
ruff (3.10)
The operation was canceled.
|