Merge vLLM deployer project to llm-finetuning #799

Triggered via pull request: January 31, 2025 17:25
Status: Failure
Total duration: 2m 13s
Artifacts

pull_request.yml
on: pull_request

Jobs:
spell-check (25s)
markdown-link-check / markdown-link-check (2m 1s)
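For reference, a minimal sketch of what a pull_request.yml with these two jobs could look like. Only the workflow name, trigger, and job names are taken from the run above; the action choices, versions, and the reusable-workflow path are assumptions and may differ from the actual file in the repo:

```yaml
# Hypothetical reconstruction -- the real workflow file may differ.
name: pull_request
on: pull_request

jobs:
  spell-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The '"X" should be "Y"' annotation format resembles the typos
      # checker's output, but the exact action used here is an assumption.
      - uses: crate-ci/typos@v1

  markdown-link-check:
    # The "markdown-link-check / markdown-link-check" job name suggests a
    # call to a reusable workflow; this path is a guess.
    uses: ./.github/workflows/markdown-link-check.yml
```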

Annotations

1 error and 10 warnings
spell-check: Process completed with exit code 2.
spell-check: llm-lora-finetuning/configs/phi3.5_finetune_local.yaml#L42
"quantised" should be "quantized".
spell-check: customer-satisfaction/steps/predictor.py#L26
"lenght" should be "length".
spell-check: customer-satisfaction/steps/predictor.py#L27
"lenght" should be "length".
spell-check: llm-lora-finetuning/configs/llama3-1_finetune_local.yaml#L42
"quantised" should be "quantized".
spell-check: customer-satisfaction/streamlit_app.py#L86
"lenght" should be "length".
spell-check: customer-satisfaction/streamlit_app.py#L87
"lenght" should be "length".
spell-check: orbit-user-analysis/steps/report.py#L68
"colour" should be "color".
spell-check: customer-satisfaction/data/olist_customers_dataset.csv#L1
"lenght" should be "length".
spell-check: customer-satisfaction/data/olist_customers_dataset.csv#L1
"lenght" should be "length".