
py2 code #12

Closed
antxiaojun opened this issue Nov 10, 2018 · 1 comment

@antxiaojun

If I convert the code to a Python 2 version, it doesn't converge. Would you provide Python 2 code?

@thomwolf
Member

Hi, we won't provide a Python 2 version, but if you want to contribute a Python 2/3-compatible version, feel free to open a PR.
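For reference, a common reason converted code stops converging under Python 2 is integer division: `/` between two ints floors to an integer, which silently breaks learning-rate schedules and averaging written for Python 3. A Python 2/3-compatible module typically starts with `__future__` imports. Below is a minimal sketch of that pattern under this assumption; the `warmup_linear_lr` helper is hypothetical and not code from this repository.

```python
# Hypothetical example illustrating Python 2/3 compatibility, not code
# from this repository. The __future__ imports give Python 2 the same
# division and print semantics as Python 3.
from __future__ import absolute_import, division, print_function


def warmup_linear_lr(step, total_steps, base_lr=6.25e-5, warmup=0.002):
    # Under Python 2 without the `division` import, `step / total_steps`
    # would floor to 0 for most of training, zeroing the learning rate and
    # making the model appear not to converge.
    progress = step / total_steps
    if progress < warmup:
        return base_lr * progress / warmup
    return base_lr * (1.0 - progress)


if __name__ == "__main__":
    print(warmup_linear_lr(500, 10000))  # same value under Python 2 and 3
```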

stevezheng23 added a commit to stevezheng23/transformers that referenced this issue Mar 24, 2020
amathews-amd referenced this issue in ROCm/transformers Aug 6, 2021
* add ort config for debertav2 model

* remove prints

* remove old commented code

* fix run style error

* add flake ignore comment

* trial to fix blackify format error
lvwerra pushed a commit that referenced this issue Sep 15, 2021
Add Layer Scaling & Upcast/Reordering Flags + Functionality
xloem pushed a commit to xloem/transformers that referenced this issue Apr 9, 2023
* Update trainer and model flows to accommodate sparseml

Disable FP16 on QAT start (huggingface#12)

* Override LRScheduler when using LRModifiers

* Disable FP16 on QAT start

* keep wrapped scaler object for training after disabling

Using QATMatMul in DistilBERT model class (huggingface#41)

Removed double quantization of output of context layer. (huggingface#45)

Fix DataParallel validation forward signatures (huggingface#47)

* Fix: DataParallel validation forward signatures

* Update: generalize forward_fn selection

Best model after epoch (huggingface#46)

fix scaler check for non fp16 mode in trainer (huggingface#38)

Mobilebert QAT (huggingface#55)

* Remove duplicate quantization of vocabulary.

enable a QATWrapper for non-parameterized matmuls in BERT self attention (huggingface#9)

* Utils and auxiliary changes

update Zoo stub loading for SparseZoo 1.1 refactor (huggingface#54)

add flag to signal NM integration is active (huggingface#32)

Add recipe_name to file names

* Fix errors introduced in manual cherry-pick upgrade

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>
jameshennessytempus pushed a commit to jameshennessytempus/transformers that referenced this issue Jun 1, 2023
ocavue pushed a commit to ocavue/transformers that referenced this issue Sep 13, 2023
younesbelkada pushed a commit to younesbelkada/transformers that referenced this issue Mar 14, 2024
ZYC-ModelCloud pushed a commit to ZYC-ModelCloud/transformers that referenced this issue Nov 14, 2024