
fetch upstream #3

Merged
merged 4 commits into graphcore:merge_base
May 18, 2022

Conversation

tongjinle123

No description provided.

biofoolgreen and others added 4 commits on April 22, 2022
* [IPU options] add IPU options-related changes

* [dataloader] add IPU dataloader

* [network] make label_smoothing_loss and add_sos_eos run on IPU

* [pipeline] enable model to run with pipelining

* [hotfix] fix batch size for Dataset, drop accum_grad option

* [optimizer] use poptorch optimizer

* [scheduler] add device factor on scheduler for IPU device iteration

* [scripting] workaround for torch.jit.script

* [training] adapt training process for IPU

* [network] enable model to run on IPU, change eps for half precision

* [install] use CPU version of torchaudio

* [hotfix] fix bug in yaml, label-smoothing loss, executor

* [hotfix] fix bug in training process

* [logging] add throughput calculation

* [doc] add description for device_factor in lr scheduler

* [ipu option] refactor IPU options

* [hotfix] fix bug, add options for profiling

* [doc] add IPU readme

* [doc] drop unused

* [doc] fix readme

* [lint] fix lint

* [lint] fix lint

* [lint] fix lint

* [hotfix] apply changes per review comments

* [lint] fix lint

Co-authored-by: Tong Jinle <tongjinle123@live.com>
Co-authored-by: tongjinle123 <lancertong@live.com>
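The "[network] make label_smoothing_loss and add_sos_eos run on IPU" commit refers to two standard speech-recognition helpers. A minimal sketch of what these helpers conventionally compute is below; the names are taken from the commit messages, but the bodies are illustrative assumptions, not the actual WeNet/IPU implementation (which operates on batched tensors).

```python
def add_sos_eos(tokens, sos, eos):
    """Return (decoder_input, target) for one token sequence:
    the decoder input is the sequence with <sos> prepended, and
    the training target is the sequence with <eos> appended."""
    decoder_input = [sos] + list(tokens)   # <sos> t1 t2 ... tn
    target = list(tokens) + [eos]          # t1 t2 ... tn <eos>
    return decoder_input, target


def label_smoothing_loss(log_probs, target, smoothing=0.1):
    """Label-smoothed cross entropy for one prediction.

    log_probs: per-class log-probabilities; target: true class index.
    The one-hot target is mixed with a uniform distribution so the
    model is not pushed toward fully saturated logits."""
    n = len(log_probs)
    confidence = 1.0 - smoothing
    loss = 0.0
    for i, lp in enumerate(log_probs):
        # `confidence` mass on the true class, the rest spread uniformly
        p = confidence if i == target else smoothing / (n - 1)
        loss -= p * lp
    return loss
```

With `smoothing=0.0` this reduces to plain cross entropy; the commit's point was making such ops IPU-compatible (e.g. avoiding unsupported indexing patterns), which this host-side sketch does not attempt to show.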
* [hotfix] fix bug in train.py

Co-authored-by: Tong Jinle <tongjinle123@live.com>
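The "[scheduler] add device factor" and "[doc] add description for device_factor in lr scheduler" commits suggest that with IPU device iterations, one host-side scheduler step can correspond to several optimizer steps executed on device, so the step counter is scaled before computing the learning rate. The following is a hedged sketch of that idea on a Noam-style warmup schedule; the schedule shape and the `device_factor` name are assumptions based on the commit messages, not WeNet's exact code.

```python
def noam_lr(step, d_model=256, warmup=25000, device_factor=1):
    """Noam-style LR: linear warmup, then inverse-sqrt decay.

    device_factor scales the host-side step count to account for
    multiple on-device optimizer steps per host call."""
    effective_step = max(1, step * device_factor)
    return d_model ** -0.5 * min(effective_step ** -0.5,
                                 effective_step * warmup ** -1.5)
```

For example, with `device_factor=4`, host step 100 yields the same learning rate as host step 400 with `device_factor=1`, keeping the schedule aligned with the true number of optimizer updates.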
* [example] add support for aishell and librispeech

* [fix] fix yaml, drop unused

* [fix] fix yaml

* [lint] fix lint

Co-authored-by: Tong Jinle <tongjinle123@live.com>
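The "[logging] add throughput calculation" commit adds a throughput metric to training logs. A minimal sketch of such a counter is below, measuring items (utterances or frames) processed per second; the class and field names are illustrative, not the executor's actual fields.

```python
import time


class Throughput:
    """Track items processed per second since construction."""

    def __init__(self):
        self.start = time.monotonic()
        self.count = 0

    def update(self, n):
        # call once per batch with the number of items in it
        self.count += n

    def rate(self):
        elapsed = time.monotonic() - self.start
        return self.count / elapsed if elapsed > 0 else 0.0
```

In a training loop this would typically be updated per batch and its `rate()` printed alongside the loss at each logging interval.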
tongjinle123 merged commit 3b8c403 into graphcore:merge_base on May 18, 2022
2 participants