Reduce the memory usage of logits from O(context_length) to O(1) (#4688)
Summary:
The logits tensor is large, with shape [context_length x vocab_size], but we only ever use the logits of the last (new) token, because the model generates one new token per Transformer inference. This PR changes the transformer to return only the logits of the last token. In the runner code, we no longer have to fetch the logits for the last token specifically; we use the output directly.

Test command:
```
python -m examples.models.llama2.export_llama --checkpoint /Users/myuan/data/llama/story110m/checkpoint.pt --params /Users/myuan/data/llama/story110m/params.json -kv --use_sdpa_with_kv_cache -X -qmode 8da4w --group_size 128 -d fp32 --max_seq_length 1024 --profile_memory
```

Before: 284 MB activation, with 262 MB on logits
After: 162 MB activation, with 0.128 MB on logits

Verified with llama_runner: before and after, it generates the same text with temperature=0. The dominant memory usage is now the KV cache.

TODO:
- Improve KV cache memory usage using fp16 or quantization.
- This PR only fixes the logits; further activation memory optimization is possible given the single-token output.

Differential Revision: D61246566
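A minimal sketch of the idea (the class and names below are illustrative stand-ins, not the actual ExecuTorch llama2 transformer): slice the hidden states down to the last position before the output projection, so the logits shrink from [batch, seq_len, vocab_size] to [batch, vocab_size] and the runner can consume the output tensor directly.

```python
import torch
import torch.nn as nn

class TinyTransformerHead(nn.Module):
    """Illustrative stand-in for the tail of a decoder-only transformer."""

    def __init__(self, dim: int = 768, vocab_size: int = 32000):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.output = nn.Linear(dim, vocab_size, bias=False)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: [batch, seq_len, dim] hidden states from the transformer blocks.
        # Keep only the last position; during decoding this is the only token
        # whose logits the sampler actually uses.
        h_last = self.norm(h[:, -1, :])      # [batch, dim]
        return self.output(h_last)           # [batch, vocab_size]

# Rough size check (assuming vocab_size=32000 and fp32 logits): the per-step
# logits buffer is 32000 * 4 bytes = 128 KB, consistent with the ~0.128 MB
# "after" figure, versus seq_len * 32000 * 4 bytes when logits are kept for
# every position.
logits = TinyTransformerHead()(torch.randn(1, 16, 768))
assert logits.shape == (1, 32000)
```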