I just tried to start a Mistral Nemo 12B Base training run on an RTX 4090 cloud service.
I am running pytorch_2.3.0-cuda12.1-cudnn8-devel with the newest version of unsloth and these packages from the Colab notebooks:
!pip install --no-deps "xformers<0.0.27" trl peft accelerate bitsandbytes
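The run itself follows the standard unsloth Colab recipe. This is only a minimal sketch of the setup, not my exact script; the model id and hyperparameters below are placeholders:

```python
import torch
from unsloth import FastLanguageModel

# Load the base model in 4-bit with bf16 compute (the RTX 4090 supports bfloat16).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Mistral-Nemo-Base-2407",  # assumed HF repo id
    max_seq_length=2048,
    dtype=torch.bfloat16,
    load_in_4bit=True,
)

# Attach LoRA adapters as in the notebooks (values here are illustrative).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
```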
When the trainer starts, the following error comes up:
Not quite sure what went wrong, but I read in another GitHub repo that numpy is not playing nicely with bfloat16?
That could be outdated though; it was from Sep. 2023.
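For what it's worth, numpy still has no native bfloat16 dtype, so the incompatibility is easy to reproduce outside the trainer. This is just a minimal sketch to show the behavior, not my training code:

```python
import torch

t = torch.randn(4, dtype=torch.bfloat16)

try:
    # Converting a bf16 tensor directly fails because numpy has no bfloat16 dtype.
    t.numpy()
except TypeError as e:
    print(e)  # e.g. "Got unsupported ScalarType BFloat16"

# Common workaround: upcast on the torch side before handing data to numpy.
arr = t.float().numpy()
print(arr.dtype)  # float32
```

So if the traceback ends in something like `Got unsupported ScalarType BFloat16`, the fix is usually to upcast to float32 before any `.numpy()` conversion rather than to change the training dtype.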