
Crash at the end of training #9

Closed

bkgoksel opened this issue Nov 8, 2018 · 2 comments

Comments

bkgoksel commented Nov 8, 2018

Hi, I tried running the SQuAD model this morning (on a single GPU with gradient accumulation over 3 steps), but after 3 hours of training my job failed with the output below.

I was running the code, unmodified, from commit 3bfbc21.

Is this a known issue?

11/08/2018 17:50:03 - INFO - __main__ -   device cuda n_gpu 1 distributed training False
11/08/2018 17:50:18 - INFO - __main__ -   *** Example ***
11/08/2018 17:50:18 - INFO - __main__ -   unique_id: 1000000000
11/08/2018 17:50:18 - INFO - __main__ -   example_index: 0
11/08/2018 17:50:18 - INFO - __main__ -   doc_span_index: 0
11/08/2018 17:50:18 - INFO - __main__ -   tokens: [CLS] to whom did the virgin mary allegedly appear in 1858 in lou ##rdes france ? [SEP] architectural ##ly , the school has a catholic character . atop the main building ' s gold dome is a golden statue of the virgin mary . immediately in front of the main building and facing it , is a copper statue of christ with arms up ##rai ##sed with the legend " ve ##ni ##te ad me om ##nes " . next to the main building is the basilica of the sacred heart . immediately behind the basilica is the gr ##otto , a marian place of prayer and reflection . it is a replica of the gr ##otto at lou ##rdes , france where the virgin mary reputed ##ly appeared to saint bern ##ade ##tte so ##ub ##iro ##us in 1858 . at the end of the main drive ( and in a direct line that connects through 3 statues and the gold dome ) , is a simple , modern stone statue of mary . [SEP]
11/08/2018 17:50:18 - INFO - __main__ -   token_to_orig_map: 17:0 18:0 19:0 20:1 21:2 22:3 23:4 24:5 25:6 26:6 27:7 28:8 29:9 30:10 31:10 32:10 33:11 34:12 35:13 36:14 37:15 38:16 39:17 40:18 41:19 42:20 43:20 44:21 45:22 46:23 47:24 48:25 49:26 50:27 51:28 52:29 53:30 54:30 55:31 56:32 57:33 58:34 59:35 60:36 61:37 62:38 63:39 64:39 65:39 66:40 67:41 68:42 69:43 70:43 71:43 72:43 73:44 74:45 75:46 76:46 77:46 78:46 79:47 80:48 81:49 82:50 83:51 84:52 85:53 86:54 87:55 88:56 89:57 90:58 91:58 92:59 93:60 94:61 95:62 96:63 97:64 98:65 99:65 100:65 101:66 102:67 103:68 104:69 105:70 106:71 107:72 108:72 109:73 110:74 111:75 112:76 113:77 114:78 115:79 116:79 117:80 118:81 119:81 120:81 121:82 122:83 123:84 124:85 125:86 126:87 127:87 128:88 129:89 130:90 131:91 132:91 133:91 134:92 135:92 136:92 137:92 138:93 139:94 140:94 141:95 142:96 143:97 144:98 145:99 146:100 147:101 148:102 149:102 150:103 151:104 152:105 153:106 154:107 155:108 156:109 157:110 158:111 159:112 160:113 161:114 162:115 163:115 164:115 165:116 166:117 167:118 168:118 169:119 170:120 171:121 172:122 173:123 174:123
11/08/2018 17:50:18 - INFO - __main__ -   token_is_max_context: 17:True 18:True 19:True 20:True 21:True 22:True 23:True 24:True 25:True 26:True 27:True 28:True 29:True 30:True 31:True 32:True 33:True 34:True 35:True 36:True 37:True 38:True 39:True 40:True 41:True 42:True 43:True 44:True 45:True 46:True 47:True 48:True 49:True 50:True 51:True 52:True 53:True 54:True 55:True 56:True 57:True 58:True 59:True 60:True 61:True 62:True 63:True 64:True 65:True 66:True 67:True 68:True 69:True 70:True 71:True 72:True 73:True 74:True 75:True 76:True 77:True 78:True 79:True 80:True 81:True 82:True 83:True 84:True 85:True 86:True 87:True 88:True 89:True 90:True 91:True 92:True 93:True 94:True 95:True 96:True 97:True 98:True 99:True 100:True 101:True 102:True 103:True 104:True 105:True 106:True 107:True 108:True 109:True 110:True 111:True 112:True 113:True 114:True 115:True 116:True 117:True 118:True 119:True 120:True 121:True 122:True 123:True 124:True 125:True 126:True 127:True 128:True 129:True 130:True 131:True 132:True 133:True 134:True 135:True 136:True 137:True 138:True 139:True 140:True 141:True 142:True 143:True 144:True 145:True 146:True 147:True 148:True 149:True 150:True 151:True 152:True 153:True 154:True 155:True 156:True 157:True 158:True 159:True 160:True 161:True 162:True 163:True 164:True 165:True 166:True 167:True 168:True 169:True 170:True 171:True 172:True 173:True 174:True
11/08/2018 17:50:18 - INFO - __main__ -   input_ids: 101 2000 3183 2106 1996 6261 2984 9382 3711 1999 8517 1999 10223 26371 2605 1029 102 6549 2135 1010 1996 2082 2038 1037 3234 2839 1012 10234 1996 2364 2311 1005 1055 2751 8514 2003 1037 3585 6231 1997 1996 6261 2984 1012 3202 1999 2392 1997 1996 2364 2311 1998 5307 2009 1010 2003 1037 6967 6231 1997 4828 2007 2608 2039 14995 6924 2007 1996 5722 1000 2310 3490 2618 4748 2033 18168 5267 1000 1012 2279 2000 1996 2364 2311 2003 1996 13546 1997 1996 6730 2540 1012 3202 2369 1996 13546 2003 1996 24665 23052 1010 1037 14042 2173 1997 7083 1998 9185 1012 2009 2003 1037 15059 1997 1996 24665 23052 2012 10223 26371 1010 2605 2073 1996 6261 2984 22353 2135 2596 2000 3002 16595 9648 4674 2061 12083 9711 2271 1999 8517 1012 2012 1996 2203 1997 1996 2364 3298 1006 1998 1999 1037 3622 2240 2008 8539 2083 1017 11342 1998 1996 2751 8514 1007 1010 2003 1037 3722 1010 2715 2962 6231 1997 2984 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
11/08/2018 17:50:18 - INFO - __main__ -   input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

... [truncated] ...

Iteration: 100%|█████████▉| 29314/29324 [3:27:55<00:04,  2.36it/s]
Iteration: 100%|█████████▉| 29315/29324 [3:27:55<00:03,  2.44it/s]
Iteration: 100%|█████████▉| 29316/29324 [3:27:56<00:03,  2.26it/s]
Iteration: 100%|█████████▉| 29317/29324 [3:27:56<00:02,  2.35it/s]
Iteration: 100%|█████████▉| 29318/29324 [3:27:56<00:02,  2.44it/s]
Iteration: 100%|█████████▉| 29319/29324 [3:27:57<00:02,  2.25it/s]
Iteration: 100%|█████████▉| 29320/29324 [3:27:57<00:01,  2.35it/s]
Iteration: 100%|█████████▉| 29321/29324 [3:27:58<00:01,  2.41it/s]
Iteration: 100%|█████████▉| 29322/29324 [3:27:58<00:00,  2.25it/s]
Iteration: 100%|█████████▉| 29323/29324 [3:27:59<00:00,  2.36it/s]
Traceback (most recent call last):
  File "code/run_squad.py", line 929, in <module>
    main()
  File "code/run_squad.py", line 862, in main
    loss = model(input_ids, segment_ids, input_mask, start_positions, end_positions)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/0x0d4ff90d01fa4168983197b17d73bb0c_dependencies/code/modeling.py", line 467, in forward
    start_loss = loss_fct(start_logits, start_positions)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 862, in forward
    ignore_index=self.ignore_index, reduction=self.reduction)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1550, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1403, in nll_loss
    if input.size(0) != target.size(0):
RuntimeError: dimension specified as 0 but tensor has no dimensions

Exception ignored in: <bound method tqdm.__del__ of Iteration: 100%|█████████▉| 29323/29324 [3:27:59<00:00,  2.36it/s]>
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 931, in __del__
    self.close()
  File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 1133, in close
    self._decr_instances(self)
  File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 496, in _decr_instances
    cls.monitor.exit()
  File "/usr/local/lib/python3.6/dist-packages/tqdm/_monitor.py", line 52, in exit
    self.join()
  File "/usr/lib/python3.6/threading.py", line 1053, in join
    raise RuntimeError("cannot join current thread")
RuntimeError: cannot join current thread
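
For context on the first RuntimeError above: it fires whenever the loss target has lost its batch dimension, which can happen when the final batch contains a single example and the positions tensor gets squeezed. A minimal sketch of that failure mode (hypothetical values, not the repository's code):

import torch

# A (1, 1) start_positions tensor, as produced by a final batch holding one example:
start_positions = torch.tensor([[42]])
print(start_positions.squeeze().shape)    # torch.Size([])  -- every size-1 dim dropped
print(start_positions.squeeze(-1).shape)  # torch.Size([1]) -- batch dim preserved

# nll_loss calls target.size(0); on a 0-dim tensor that is the call that fails
# with "dimension specified as 0 but tensor has no dimensions" in the traceback.
try:
    torch.tensor(42).size(0)
except (RuntimeError, IndexError) as err:
    print(err)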
bkgoksel (Author) commented Nov 8, 2018

Here's the specific command I ran for more context:

python3.6 code/run_squad.py \
  --bert_config_file bert/bert_config.json \
  --vocab_file bert/vocab.txt \
  --output_dir output \
  --train_file data/original/train.json \
  --predict_file data/original/dev.json \
  --init_checkpoint bert-pytorch/pytorch_model.bin \
  --do_lower_case \
  --do_train \
  --do_predict \
  --train_batch_size 10 \
  --gradient_accumulation_steps 3 \
  --accumulate_gradients 3
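
As a side note on these flags: the effective per-step batch is the train batch size divided by the accumulation steps, so the last batch of an epoch can end up very small. A hedged sketch of that arithmetic (it assumes run_squad.py performs this division, as the script did at the time):

train_batch_size = 10
gradient_accumulation_steps = 3
per_step_batch = train_batch_size // gradient_accumulation_steps  # 10 // 3 = 3
# If the number of training features is not a multiple of 3, the final batch is
# smaller still; a single-example final batch is the case the crash exposed.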

thomwolf (Member) commented Nov 9, 2018

Hi Kerem, yes, I fixed this bug yesterday in commit 2c5d993 (it was a bug with batches of dimension 1).
You can try again with the current version and it should be fine.
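
For anyone stuck on an older checkout, the sketch below shows the kind of guard that keeps a 1-example batch from collapsing to a 0-dim target in the QA loss. The function name and shapes are assumptions (start/end logits of shape (batch, seq_len), positions of shape (batch,) or (batch, 1)), not necessarily the exact code in 2c5d993:

from torch.nn import CrossEntropyLoss

def qa_loss(start_logits, end_logits, start_positions, end_positions):
    # Only squeeze the trailing dimension, so a (1, 1) tensor becomes (1,), not a scalar.
    if start_positions.dim() > 1:
        start_positions = start_positions.squeeze(-1)
    if end_positions.dim() > 1:
        end_positions = end_positions.squeeze(-1)
    # Clamp positions that fall outside the model inputs and ignore them in the loss.
    ignored_index = start_logits.size(1)
    start_positions = start_positions.clamp(0, ignored_index)
    end_positions = end_positions.clamp(0, ignored_index)
    loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
    start_loss = loss_fct(start_logits, start_positions)
    end_loss = loss_fct(end_logits, end_positions)
    return (start_loss + end_loss) / 2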

I got good results with these hyperparameters last night:

python run_squad.py \
  --vocab_file $BERT_BASE_DIR/vocab.txt \
  --bert_config_file $BERT_BASE_DIR/bert_config.json \
  --init_checkpoint $BERT_PYTORCH_DIR/pytorch_model.bin \
  --do_train \
  --do_predict \
  --do_lower_case \
  --train_file $SQUAD_DIR/train-v1.1.json \
  --predict_file $SQUAD_DIR/dev-v1.1.json \
  --train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2.0 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir ../debug_squad/

I found:

{"f1": 88.52381567990474, "exact_match": 81.22043519394512}

Feel free to reopen the issue if needed.

thomwolf closed this as completed Nov 9, 2018