ARB-LLM: Alternating Refined Binarizations for Large Language Models

Zhiteng Li, Xianglong Yan, Tianao Zhang, Haotong Qin, Dong Xie, Jiang Tian, Zhongchao Shi, Linghe Kong, Yulun Zhang, and Xiaokang Yang, "ARB-LLM: Alternating Refined Binarizations for Large Language Models", arXiv, 2024

[arXiv] [supplementary material] [visual results] [models]

Abstract: Large Language Models (LLMs) have greatly pushed forward advancements in natural language processing, yet their high memory and computational demands hinder practical deployment. Binarization, as an effective compression technique, can shrink model weights to just 1 bit, significantly reducing the high demands on computation and memory. However, current binarization methods struggle to narrow the distribution gap between binarized and full-precision weights, while also overlooking the column deviation in LLM weight distribution. To tackle these issues, we propose ARB-LLM, a novel 1-bit post-training quantization (PTQ) technique tailored for LLMs. To narrow the distribution shift between binarized and full-precision weights, we first design an alternating refined binarization (ARB) algorithm to progressively update the binarization parameters, which significantly reduces the quantization error. Moreover, considering the pivotal role of calibration data and the column deviation in LLM weights, we further extend ARB to ARB-X and ARB-RC. In addition, we refine the weight partition strategy with a column-group bitmap (CGB), which further enhances performance. Equipping ARB-X and ARB-RC with CGB, we obtain ARB-LLMX and ARB-LLMRC respectively, which significantly outperform state-of-the-art (SOTA) binarization methods for LLMs. As a binary PTQ method, our ARB-LLMRC is the first to surpass FP16 models of the same size. The code and models will be available at https://github.com/ZHITENGLI/ARB-LLM.
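The exact update rules of ARB, ARB-X, and ARB-RC are given in the paper; the snippet below is only a minimal, hypothetical PyTorch sketch of the general idea behind alternating refinement. It binarizes a weight matrix row-wise as W ≈ αB + μ with B ∈ {-1, +1}, then alternates between re-solving the scale α and shift μ in closed form (least squares for a fixed B) and re-deriving B = sign(W − μ). The function name, iteration count, and row-wise grouping are illustrative assumptions, not the repository's API.

```python
import torch

def binarize_alternating(W: torch.Tensor, iters: int = 15):
    """Row-wise 1-bit binarization W ~ alpha * B + mu with B in {-1, +1}.

    Starts from the standard sign/mean binarization, then alternately
    re-solves (alpha, mu) in closed form for fixed B and re-derives B.
    Hypothetical illustration of alternating refinement, not the paper's
    exact ARB algorithm.
    """
    # Initialization: per-row mean shift and mean absolute deviation.
    mu = W.mean(dim=1, keepdim=True)
    B = torch.sign(W - mu)
    B[B == 0] = 1
    alpha = (W - mu).abs().mean(dim=1, keepdim=True)

    for _ in range(iters):
        # For fixed B, minimize ||W - (alpha * B + mu)||^2 per row:
        # ordinary least squares of W on B with an intercept.
        b_mean = B.mean(dim=1, keepdim=True)
        w_mean = W.mean(dim=1, keepdim=True)
        cov = ((W - w_mean) * (B - b_mean)).mean(dim=1, keepdim=True)
        var = (1.0 - b_mean ** 2).clamp_min(1e-8)  # B is +/-1, so E[B^2] = 1
        alpha = cov / var
        mu = w_mean - alpha * b_mean
        # For fixed (alpha, mu), the optimal binary code is sign(W - mu).
        B = torch.sign(W - mu)
        B[B == 0] = 1

    return alpha, B, mu

# Example: binarize a random weight matrix and check reconstruction error.
W = torch.randn(4096, 4096)
alpha, B, mu = binarize_alternating(W)
err = (W - (alpha * B + mu)).pow(2).mean().item()
print(f"mean squared quantization error: {err:.4f}")
```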


Figure 1 in the main paper demonstrates that our proposed ARB-LLMRC outperforms the previous state-of-the-art binary PTQ method, BiLLM, across all scales of the OPT model family. Furthermore, our binarized model surpasses full-precision models of similar size. For example, the memory footprint of the binarized OPT-13B is comparable to that of the full-precision OPT-2.7B, yet the binarized model achieves better performance.
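For intuition about the memory comparison above, here is a back-of-envelope calculation. The ~2 bits of effective width assumed for the binarized model (binary weights plus scales, means, and bitmap overhead) is an illustrative assumption, not a figure reported in the paper.

```python
def footprint_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight-storage footprint in GB (weights only; ignores activations and KV cache)."""
    return n_params * bits_per_weight / 8 / 1e9

# FP16 baseline vs. a binarized model at an assumed ~2 bits per weight effective.
print(f"OPT-2.7B @ FP16       : {footprint_gb(2.7e9, 16):.1f} GB")  # ~5.4 GB
print(f"OPT-13B  @ ~2 bit eff.: {footprint_gb(13e9, 2):.1f} GB")    # ~3.2 GB
```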


⚒️ TODO

  • Complete this repository

🔗 Contents

  • TODO
  • Results
  • Citation
  • Acknowledgements

🔎 Results

ARB-LLM achieves superior perplexity on the WikiText2 dataset. (click to expand)
  • OPT family

  • LLaMA, LLaMA-2 and LLaMA-3 families

  • Vicuna 7B and 13B

ARB-LLM achieves superior average accuracy across 7 zero-shot QA datasets. (click to expand)

Citation

If you find the code helpful in your research or work, please cite the following paper.

@article{li2024arbllmalternatingrefinedbinarizations,
      title={ARB-LLM: Alternating Refined Binarizations for Large Language Models}, 
      author={Zhiteng Li and Xianglong Yan and Tianao Zhang and Haotong Qin and Dong Xie and Jiang Tian and Zhongchao Shi and Linghe Kong and Yulun Zhang and Xiaokang Yang},
      year={2024},
      eprint={2410.03129},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2410.03129}, 
}

💡 Acknowledgements

This work is released under the Apache 2.0 license. The code is based on BiLLM; please also follow its license. Thanks for their awesome work.
