Fix layerwise targets #36

Merged: 1 commit merged into main on Jul 24, 2024

Conversation

@Satrat (Contributor) commented on Jul 24, 2024

SUMMARY:
When support for the targets/scheme UX was previously added to GPTQModifier, it overwrote an existing attribute, also named targets, that stored the names of the transformer layers used for sequential updates. This PR renames the conflicting attribute so that the original layer-wise behavior is restored.
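To make the collision concrete, here is a minimal, hypothetical sketch; the class name and the `layer_targets` attribute are illustrative placeholders, not the actual llm-compressor code or the name introduced by this PR.

```python
# Hypothetical sketch of the naming collision fixed by this PR.
from dataclasses import dataclass
from typing import List, Optional, Union


@dataclass
class GPTQModifierSketch:
    # User-facing quantization targets from the targets/scheme UX,
    # e.g. "Linear" to quantize all linear modules.
    targets: Union[str, List[str]] = "Linear"
    # Before the fix, the transformer-layer names used for sequential
    # updates were also stored under `targets`, clobbering the value above.
    # Giving the layer list its own (illustrative) attribute removes the clash.
    layer_targets: Optional[List[str]] = None

    def resolve_layer_targets(self, model_layer_names: List[str]) -> List[str]:
        # Sequential updates should iterate over transformer layers,
        # not over every module matched by the quantization `targets`.
        return self.layer_targets or model_layer_names
```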

TEST PLAN:
Manual testing. Previously, GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"], sequential_update=True) would run calibration for every linear layer rather than for every transformer layer. After the fix, calibration runs over the correct number of layers.
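As a rough sanity check of what "the correct number of layers" means, the sketch below (example model and counting logic only, not the PR's test code) contrasts the number of Linear modules with the number of transformer layers in a small decoder-only model.

```python
# Compare Linear-module count vs. transformer-layer count for a small model.
import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # example model

num_linear = sum(1 for m in model.modules() if isinstance(m, nn.Linear))
num_layers = len(model.model.decoder.layers)

print(f"Linear modules:     {num_linear}")  # dozens, including projections and lm_head
print(f"Transformer layers: {num_layers}")  # 12 for opt-125m

# With sequential_update=True, GPTQ calibration should step through the
# transformer layers (num_layers passes), not through every Linear module.
```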

@Satrat requested review from @bfineran and @rahul-tuli on Jul 24, 2024, 19:58
@bfineran merged commit 29cb10d into main on Jul 24, 2024
8 of 12 checks passed
@bfineran deleted the fix_targets_alias branch on Jul 24, 2024, 20:06
markmc pushed a commit to markmc/llm-compressor that referenced this pull request Nov 13, 2024
* Create LICENSE

* Add license and description details to package

* Update setup.py

* Set the same ceiling for torch dep as in `sparseml`

* revert

---------

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>