Remove unused parameter in AutoAWQ
oobabooga committed Oct 24, 2023
1 parent 1edf321 commit ef1489c
Showing 3 changed files with 1 addition and 4 deletions.
models/config.yaml (2 changes: 0 additions & 2 deletions)
@@ -20,8 +20,6 @@
   model_type: 'dollyv2'
 .*replit:
   model_type: 'replit'
-.*AWQ:
-  n_batch: 1
 .*(oasst|openassistant-|stablelm-7b-sft-v7-epoch-3):
   instruction_template: 'Open Assistant'
   skip_special_tokens: false
modules/loaders.py (1 change: 0 additions & 1 deletion)
@@ -135,7 +135,6 @@
         'gpu_memory',
         'auto_devices',
         'max_seq_len',
-        'n_batch',
         'no_inject_fused_attention',
         'trust_remote_code',
         'use_fast',
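
For reference, the list edited above is the AutoAWQ entry of the mapping in modules/loaders.py that declares which UI parameters each loader exposes; removing 'n_batch' means AutoAWQ no longer offers a batch-size setting. Below is a minimal sketch of that entry after this commit; the 'AutoAWQ' key name, the OrderedDict wrapper, and anything outside the visible hunk are assumptions, not part of this diff.

```python
# Minimal sketch (assumptions noted) of the AutoAWQ entry in the
# loader-to-parameters mapping after 'n_batch' is removed.
from collections import OrderedDict

loaders_and_params = OrderedDict({       # mapping name assumed, not shown in the diff
    'AutoAWQ': [                         # key name assumed from the commit title
        'gpu_memory',
        'auto_devices',
        'max_seq_len',
        # 'n_batch' was removed here by this commit
        'no_inject_fused_attention',
        'trust_remote_code',
        'use_fast',
    ],
    # ... entries for other loaders omitted ...
})
```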
modules/models.py (2 changes: 1 addition & 1 deletion)
@@ -298,7 +298,7 @@ def AutoAWQ_loader(model_name):
         trust_remote_code=shared.args.trust_remote_code,
         fuse_layers=not shared.args.no_inject_fused_attention,
         max_memory=get_max_memory_dict(),
-        batch_size=shared.args.n_batch,
+        batch_size=1,
         safetensors=any(model_dir.glob('*.safetensors')),
     )

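In short, the loader now always passes batch_size=1 to AutoAWQ instead of reading shared.args.n_batch, matching the removal of the n_batch option in the two files above. A minimal sketch of AutoAWQ_loader after the change follows; only the keyword arguments visible in the hunk are confirmed by this diff, while the import, quant_path, and max_new_tokens lines (and the surrounding shared / get_max_memory_dict helpers) are assumptions reconstructed from context.

```python
# Minimal sketch of AutoAWQ_loader after this commit (assumptions noted).
# `shared` and `get_max_memory_dict` are provided by the surrounding module
# and are not defined here.
from pathlib import Path


def AutoAWQ_loader(model_name):
    from awq import AutoAWQForCausalLM   # AutoAWQ's quantized-model loader

    model_dir = Path(f'{shared.args.model_dir}/{model_name}')   # assumed path layout

    model = AutoAWQForCausalLM.from_quantized(
        quant_path=model_dir,
        max_new_tokens=shared.args.max_seq_len,                 # assumed, outside the hunk
        trust_remote_code=shared.args.trust_remote_code,
        fuse_layers=not shared.args.no_inject_fused_attention,
        max_memory=get_max_memory_dict(),
        batch_size=1,                                           # was shared.args.n_batch
        safetensors=any(model_dir.glob('*.safetensors')),
    )

    return model
```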
