Why can MPS never be used successfully? #32035
Comments
@AimoneAndex I was looking for this too; luckily it has already been fixed, see #31812. It is a matter of waiting for a new release. In the meantime you can install the package from GitHub to get the fix now: pip install git+https://github.com/huggingface/transformers.git @ArthurZucker could you tell us when the next release is planned? Not being able to use MPS on Mac devices is quite annoying 😥
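(A hedged follow-up tip, not from the thread: after installing from the main branch, you can check that the dev build is the one actually imported.)

```python
# Quick check that the GitHub install is active; the exact version string
# is an assumption and will vary, but a main-branch install should carry
# a ".dev0" suffix.
import transformers

print(transformers.__version__)  # e.g. "4.43.0.dev0"
```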
Release will be this week! Sorry all for the trouble, I am also a Mac user and this sucks!
Thank you! And I will update it as soon as the new version is released!
Thanks a lot! It is developers like you who make Transformers easier for everyone to build their dreams. Truly, thank you, and everyone here, so much!
I've seen that the new version is coming, and I'll try it as soon as I'm back at my computer. Truly, thanks to everyone!
Everything's OK in this new version. Thanks to everyone!
Can MPS use FP16 when training? Why can't I?

File ~/Data/AIHub/Trans-Penv/transformers/src/transformers/trainer.py:409, in Trainer.__init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics)
File ~/Data/AIHub/Trans-Penv/transformers/src/transformers/trainer.py:4648, in Trainer.create_accelerator_and_postprocess(self)
File /opt/anaconda3/envs/tfs/lib/python3.12/site-packages/accelerate/accelerator.py:467, in Accelerator.__init__(self, device_placement, split_batches, mixed_precision, gradient_accumulation_steps, cpu, dataloader_config, deepspeed_plugin, fsdp_plugin, megatron_lm_plugin, rng_types, log_with, project_dir, project_config, gradient_accumulation_plugin, dispatch_batches, even_batches, use_seedable_sampler, step_scheduler_with_optimizer, kwargs_handlers, dynamo_backend)

ValueError: fp16 mixed precision requires a GPU (not 'mps').
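(For context, a minimal sketch, an illustration rather than the reporter's actual script, of the setup that reaches this ValueError in the versions shown in the traceback: requesting fp16 in TrainingArguments makes Trainer.__init__ construct accelerate's Accelerator with fp16 mixed precision, which rejects the MPS device. Other releases may reject fp16 earlier, inside TrainingArguments itself.)

```python
# Hedged sketch, not the reporter's script. The placeholder linear model is
# enough because the error fires during Trainer construction, before any
# training step runs.
import torch
from transformers import Trainer, TrainingArguments

model = torch.nn.Linear(4, 2)  # placeholder model for illustration
args = TrainingArguments(output_dir="out", fp16=True)  # request fp16 mixed precision

# On an Apple-silicon Mac where torch.backends.mps.is_available() is True,
# this reaches accelerate's Accelerator.__init__ and raises:
#   ValueError: fp16 mixed precision requires a GPU (not 'mps').
trainer = Trainer(model=model, args=args)
```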
cc @muellerzr, as this error appears to be raised in accelerate
Correct, there's nothing we can do for now until stable torch supports mixed precision on MPS. It looks like the nightlies may have it, so soon!
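(For anyone tracking this, a quick check of the local build using standard torch APIs; whether mixed precision works on MPS additionally depends on the torch version, as noted above.)

```python
import torch

print(torch.__version__)                  # nightlies carry a dated ".dev" tag
print(torch.backends.mps.is_built())      # was this torch compiled with MPS support?
print(torch.backends.mps.is_available())  # is an MPS device usable right now?
```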
That's OK! Thanks a lot!
System Info
Device: Apple M3 Pro
OS: macOS Sonoma 14.1
Packages:
datasets 2.20.1.dev0
evaluate 0.4.2
huggingface-hub 0.23.5
tokenizers 0.19.1
torch 2.5.0.dev20240717
torchaudio 2.4.0.dev20240717
torchvision 0.20.0.dev20240717
transformers 4.43.0.dev0
Who can help?
@ArthurZucker @muellerzr
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
Reproduction
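(The reporter's original script was not preserved in this capture. As a hedged illustration only, a minimal Trainer run of roughly this shape, with an assumed small model and toy dataset, triggered the error on MPS before the fix in #31812.)

```python
# Hedged sketch: model and dataset below are illustrative assumptions,
# not the reporter's actual code.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"  # assumed small model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Tiny toy dataset, tokenized with fixed-length padding so the default
# collator can batch it.
data = Dataset.from_dict({"text": ["hello there", "general kenobi"], "label": [0, 1]})
data = data.map(
    lambda x: tokenizer(x["text"], truncation=True, padding="max_length", max_length=16)
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1, per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()  # on affected versions, fails on MPS with the error below
```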
Then it prints:
RuntimeError: Placeholder storage has not been allocated on MPS device!
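(For readers who hit the same message outside Trainer, a generic illustration of what it usually means: the model's weights live on the MPS device while an input tensor was left on the CPU. Illustrative code, not from the reporter.)

```python
import torch

layer = torch.nn.Linear(4, 4).to("mps")  # weights on the MPS device
x = torch.randn(2, 4)                    # input tensor still on the CPU
# layer(x) here raises:
#   RuntimeError: Placeholder storage has not been allocated on MPS device!
y = layer(x.to("mps"))                   # works once the input is moved to MPS
```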
Expected behavior
Training completes successfully.