correctly handle cases when converted with --not-add-special-tokens #59

# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python
name: llm_bench Python Test
env:
  LLM_BENCH_PYPATH: llm_bench/python
  WWB_PATH: llm_bench/python/who_what_benchmark
on:
  push:
    branches: [ "master" ]
    paths:
      - llm_bench/python/**
  pull_request:
    paths:
      - llm_bench/python/**
      - .github/workflows/llm_bench-python.yml
permissions: read-all # Required by https://github.com/ossf/scorecard/blob/e23b8ad91fd6a64a0a971ca4fc0a4d1650725615/docs/checks.md#token-permissions
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.9"]
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}
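      # Note: the GIT_CLONE_PROTECTION_ACTIVE=false prefix below is presumably a workaround for
      # git 2.45.1's clone protection, which can abort clones of Hugging Face repos that ship
      # LFS hooks; --pre plus the nightly extra index pulls pre-release OpenVINO wheels.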
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install flake8 pytest black
          GIT_CLONE_PROTECTION_ACTIVE=false pip install -r ${{ env.LLM_BENCH_PYPATH }}/requirements.txt
          python -m pip install -U --pre openvino openvino-tokenizers openvino-genai --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly
          GIT_CLONE_PROTECTION_ACTIVE=false pip install -r ${{ env.WWB_PATH }}/requirements.txt
          GIT_CLONE_PROTECTION_ACTIVE=false pip install ${{ env.WWB_PATH }}
      - name: Lint with flake8
        run: |
          # stop the build if there are Python syntax errors or undefined names
          python -m flake8 ${{ env.LLM_BENCH_PYPATH }} --config=${{ env.LLM_BENCH_PYPATH }}/setup.cfg
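      # On lint failure, generate a diff against black's formatting (-l 160 sets the line
      # length, -S keeps string quotes as-is) and upload it so the fix can be applied locally.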
      - name: Create code style diff for samples
        if: failure()
        run: |
          python -m black -l 160 -S ${{ env.LLM_BENCH_PYPATH }}/
          git diff > llm.bench_diff.diff
      - uses: actions/upload-artifact@v3
        if: failure()
        with:
          name: llm.bench_diff
          path: llm.bench_diff.diff
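      # GIT_LFS_SKIP_SMUDGE=0 makes git-lfs download the actual weight files during clone;
      # -f pt runs benchmark.py against the native PyTorch model rather than an OpenVINO IR.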
      - name: Test native pytorch model on Linux
        run: |
          export GIT_LFS_SKIP_SMUDGE=0
          git clone --depth 1 https://huggingface.co/katuni4ka/tiny-random-qwen
          python ./llm_bench/python/benchmark.py -m tiny-random-qwen -d cpu -n 1 -f pt
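      # convert.py writes the exported IR under <output_dir>/pytorch/dldt/<PRECISION>/,
      # which is the path the benchmark invocations below point at.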
      - name: Test tiny-random-baichuan2 on Linux
        run: |
          python ./llm_bench/python/convert.py --model_id katuni4ka/tiny-random-baichuan2 --output_dir ./ov_models/tiny-random-baichuan2 --precision FP16
          python ./llm_bench/python/benchmark.py -m ./ov_models/tiny-random-baichuan2/pytorch/dldt/FP16/ -d cpu -n 1
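      # For the diffusion pipeline, -pf supplies prompts from a JSONL file instead of the default prompt.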
      - name: Test tiny-stable-diffusion on Linux
        run: |
          python ./llm_bench/python/convert.py --model_id segmind/tiny-sd --output_dir ./ov_models/tiny-sd --precision FP16
          python ./llm_bench/python/benchmark.py -m ./ov_models/tiny-sd/pytorch/dldt/FP16/ -pf ./llm_bench/python/prompts/stable-diffusion.jsonl -d cpu -n 1
      - name: WWB Tests
        run: |
          python -m pytest llm_bench/python/who_what_benchmark/tests
  stateful:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v4
        with:
          python-version: 3.9
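      # --stateful folds the KV-cache into model state; a stateful IR exposes a beam_idx
      # input, so grepping the XML for it is a quick sanity check that conversion worked.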
      - name: Test stateful
        run: |
          GIT_CLONE_PROTECTION_ACTIVE=false python -m pip install -r llm_bench/python/requirements.txt
          python -m pip uninstall --yes openvino
          python -m pip install -U --pre openvino openvino-tokenizers openvino-genai --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly
          python llm_bench/python/convert.py --model_id TinyLlama/TinyLlama-1.1B-Chat-v1.0 --output_dir . --stateful
          grep beam_idx pytorch/dldt/FP32/openvino_model.xml
      - name: WWB Tests
        run: |
          GIT_CLONE_PROTECTION_ACTIVE=false pip install -r llm_bench/python/who_what_benchmark/requirements.txt
          GIT_CLONE_PROTECTION_ACTIVE=false pip install llm_bench/python/who_what_benchmark/
          pip install pytest
          python -m pytest llm_bench/python/who_what_benchmark/tests