Support for Apple silicon #252
Hi there, I will contribute too, in order to get it to work on Metal on an Apple M1. This is my trace:
Nice to hear! It would be good to hear from the maintainers whether they are interested at all in making this package cross-platform; it is very much CUDA-focused at the moment. I've just started looking at the unit tests and the Python libraries. The C++ code is quite nicely structured, but the Python code would need some refactoring, since most of the calls assume CUDA (`x.cuda()` instead of `x.to(device)`, etc.). Also, since the CPU version does not cover 100% of the feature set, testing is going to be quite some work, as there is no real baseline. One question is whether it would make sense to have the CPU path cover 100% of the API calls, even if inefficiently, just to provide a baseline that the GPU implementations could be compared against. If pursuing this, I propose implementing cross-platform CPU support first, then tackling MPS; MPS is of course what makes it useful. (I have the exact same setup, BTW: 2021 MBP.)

Edit: specifically, here's how I imagine the unit tests would have to work, so that at least one CPU test passes on my M1 Mac :)
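The `x.cuda()` → `x.to(device)` refactor mentioned above could look roughly like this. A minimal sketch: `pick_device` is a hypothetical helper, not part of bitsandbytes or PyTorch.

```python
def pick_device() -> str:
    """Hypothetical helper: choose the best available torch device.

    Falls back to "cpu" when torch is missing or no accelerator exists.
    """
    try:
        import torch
    except ImportError:
        return "cpu"
    if torch.cuda.is_available():
        return "cuda"
    mps = getattr(torch.backends, "mps", None)  # present in PyTorch >= 1.12
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

# Instead of hard-coding x = x.cuda(), call sites would do:
#   device = pick_device()
#   x = x.to(device)
print(pick_device())
```

Centralizing the choice in one place keeps CUDA-only assumptions out of the rest of the Python code.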
Please have a look at "Building on Jetson AGX Xavier Development Kit fails" (#221).
Wow... not to be inflammatory, but are we saying that there's no immediate solution for this if you have any MacBook from the last, like, 5 years? Yuck.
The Apple M1 (https://en.wikipedia.org/wiki/Apple_M1) was introduced less than 3 years ago.
When will this be done?
Looking forward to the support for this too; I got the errors below when I tried to fine-tune llama2 7B:

```
  File "/Users/ben/opt/miniconda3/envs/finetune/lib/python3.10/site-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/Users/ben/opt/miniconda3/envs/finetune/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py", line 293, in forward
    using_igemmlt = supports_igemmlt(A.device) and not state.force_no_igemmlt
  File "/Users/ben/opt/miniconda3/envs/finetune/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py", line 226, in supports_igemmlt
    if torch.cuda.get_device_capability(device=device) < (7, 5):
  File "/Users/ben/opt/miniconda3/envs/finetune/lib/python3.10/site-packages/torch/cuda/__init__.py", line 381, in get_device_capability
    prop = get_device_properties(device)
  File "/Users/ben/opt/miniconda3/envs/finetune/lib/python3.10/site-packages/torch/cuda/__init__.py", line 395, in get_device_properties
    _lazy_init()  # will define _get_device_properties
  File "/Users/ben/opt/miniconda3/envs/finetune/lib/python3.10/site-packages/torch/cuda/__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```
@benjaminhuo Getting the same issue as you.
This seems to be due to calling `torch.cuda` even when the device type isn't CUDA. MPS returns "mps" as `device.type`, so a guard like this would bail out early:

```python
if device.type != 'cuda':
    return False
```
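A hedged sketch of that guard in context, exercised with stand-in device objects. `supports_igemmlt_sketch` is illustrative only, not the library's actual function.

```python
from types import SimpleNamespace

def supports_igemmlt_sketch(device) -> bool:
    # Bail out for any non-CUDA device ("mps", "cpu", ...) before
    # touching torch.cuda, which raises on builds without CUDA support.
    if device.type != "cuda":
        return False
    import torch  # deferred: only reached for actual CUDA devices
    return torch.cuda.get_device_capability(device=device) >= (7, 5)

# Stand-in objects with a .type attribute, mimicking torch.device:
print(supports_igemmlt_sketch(SimpleNamespace(type="mps")))  # False
print(supports_igemmlt_sketch(SimpleNamespace(type="cpu")))  # False
```

With the early return in place, the traceback above never reaches `torch.cuda.get_device_capability` on an MPS machine.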
Same issue here; MPS seems to be the problem.
Getting the same issue with Apple silicon. Would love to see some support for it soon!
Same issue. Would be nice to have support for MPS.
Same here, please add support for MPS.
+1, MPS support would be absolutely great!
Adding a comment to keep this alive. MPS support would be awesome!
Once the device abstraction has been merged, we can start adding MPS-accelerated versions of the functions.
Yay. Thanks for all your efforts.
Looking forward to MPS support!
Looking forward to MPS support!
Looking forward to MPS support.
Please... MPS support.
Appreciate your work, Apple Silicon MPS support would be fantastic! <3 :)
I guess you all know, and I'll be hated for this, but you don't need to comment "+1" or "please". The thumbs-up of 100 people already works for this.
Point taken, but if people are not making noise about this issue, then how will the team know that this is what people want? If you find the notifications of this thread annoying, one option available to you is to mute it.
Muting works for the developers too, and that's not what we want.
Any update on this request?
Please add MPS support!
MPS support required!
+1
MPS support needed!
Please, MPS support is appreciated!!
Please add MPS support!
+1. Please add support for MPS :D
+1
+1
+1 to keep it alive
Please 🙏
+1
+1
+1
+1
Please add MPS support!
+1
+1
+1
+1
MPS support! Don't let it die.
Would it make sense for this library to support platforms other than CUDA on x64 Linux? I am specifically looking for Apple silicon support. Currently not even the cpuonly build works, since it assumes SSE2 support (and there is no NEON fallback).
I would guess that the first step would be a full cross-platform compile (arm64), then ideally support for Metal Performance Shaders as an alternative to CUDA (assuming it is at all feasible).
I could probably contribute some work towards support if there is interest in bitsandbytes being multi-platform. I have some experience setting up cross-platform Python libraries.
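As a rough illustration of the arm64/SSE2 point above, a build script could pick SIMD compiler flags by architecture instead of unconditionally assuming SSE2. A sketch only: `target_simd_flags` and the specific flag choices are assumptions, not bitsandbytes' actual build logic.

```python
import platform

def target_simd_flags() -> list:
    # Hypothetical build helper: choose compiler flags per architecture,
    # rather than hard-coding SSE2 as the current cpuonly build does.
    machine = platform.machine().lower()
    if machine in ("x86_64", "amd64"):
        return ["-msse2"]          # SSE2 is the x86-64 baseline
    if machine in ("arm64", "aarch64"):
        return ["-march=armv8-a"]  # NEON is part of the armv8-a baseline
    return []                       # scalar fallback for anything else

print(target_simd_flags())
```

On an Apple silicon machine `platform.machine()` reports "arm64", so the SSE2 flag would simply never be emitted there.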