bitsandbytes - Linear8bitLt integration into transformers models #17901
Changes from 57 commits
This also needs to be updated to just:
pip install bitsandbytes
Maybe I am wrong, but I think you can make it simpler:
The new release now works with just a pip install, and it is no longer necessary to compile bitsandbytes. For the full release (due in a couple of days) the library will also be available directly on PyPI. So it is best to change this already to:
pip install bitsandbytes
It might be useful to add one line about how it works before linking to the GPT3.int8() paper:
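For context (my own summary of the linked LLM.int8() paper, not text from this PR): as I understand it, the matrix multiply is done mostly in int8 using vector-wise quantization, while the few hidden dimensions containing outlier features are split out and multiplied in fp16. A sketch of that decomposition, writing $O$ for the set of outlier feature dimensions and $Q$ for the quantization function:

```latex
% Mixed-precision decomposition (sketch, following the LLM.int8() paper).
% X: fp16 activations, W: fp16 weights, O: outlier feature dimensions.
XW \approx
  \underbrace{\mathrm{dequant}\!\big(Q(X_{\setminus O})\, Q(W_{\setminus O})\big)}_{\text{int8 matmul}}
  \;+\;
  \underbrace{X_{O}\, W_{O}}_{\text{fp16 matmul}}
```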
I think this could be better written as a torch.fx transformation, but this way works for more models, since not all models are currently traceable with torch.fx.
I just changed the way bitsandbytes is installed. This line no longer uses the CUDA version suffix:
pip install -i https://test.pypi.org/simple/ bitsandbytes