bump accelerate to 0.34.2 (#1901)
* bump accelerate

* add fixture to predownload the test model

* change fixture
winglian committed Sep 7, 2024
1 parent 6e35468 commit 3853ab7
Showing 3 changed files with 11 additions and 1 deletion.
3 changes: 3 additions & 0 deletions .github/workflows/multi-gpu-e2e.yml
@@ -1,6 +1,9 @@
name: docker-multigpu-tests-biweekly

on:
+  pull_request:
+    paths:
+      - 'tests/e2e/multigpu/*.py'
  workflow_dispatch:
  schedule:
    - cron: '0 0 * * 1,4' # Runs at 00:00 UTC every monday & thursday
2 changes: 1 addition & 1 deletion requirements.txt
@@ -4,7 +4,7 @@ peft==0.12.0
transformers==4.44.2
tokenizers>=0.19.1
bitsandbytes==0.43.3
-accelerate==0.34.0
+accelerate==0.34.2
datasets==2.20.0
deepspeed==0.14.4
pydantic==2.6.3
7 changes: 7 additions & 0 deletions tests/e2e/multigpu/test_llama.py
@@ -10,6 +10,7 @@
import pytest
import yaml
from accelerate.test_utils import execute_subprocess_async
+from huggingface_hub import snapshot_download

from axolotl.utils.dict import DictDefault

@@ -19,6 +20,12 @@
os.environ["WANDB_DISABLED"] = "true"


+@pytest.fixture(scope="session", autouse=True)
+def download_model():
+    # download the model
+    snapshot_download("TinyLlama/TinyLlama_v1.1")


class TestMultiGPULlama(unittest.TestCase):
    """
    Test case for Llama models using LoRA
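For context on the new fixture: because it is declared with scope="session" and autouse=True, pytest runs it once before the tests in this module, so the TinyLlama weights are already in the local Hugging Face cache when the multi-GPU subprocesses launched via execute_subprocess_async try to load them. A minimal standalone sketch of the same download step (cache location and return value are huggingface_hub defaults, shown only for illustration, not part of this commit):

from huggingface_hub import snapshot_download

# Download the TinyLlama test model, or reuse the copy already present in
# the shared Hugging Face cache (typically ~/.cache/huggingface/hub), so a
# repeated call is effectively a no-op.
local_path = snapshot_download("TinyLlama/TinyLlama_v1.1")
print(local_path)  # filesystem path of the cached snapshot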
