PyTorch Practices

Notes on PyTorch practices from experiments.

Requirements

PyTorch 1.0.0 or 0.4.0

Content

  1. Single PC with multiple GPUs using DataParallel (DataParallel may hang with PyTorch 0.4.0 on Tesla V100/K80 due to an NCCL issue); see sketch 1 below.

  2. Single PC with multiple GPUs using DistributedDataParallel; see sketch 2 below.

  3. Specify different learning rates for different layers; see sketch 3 below.

  4. Load a model trained on multiple GPUs when torch.save({"model": model.state_dict()}, "xxx") was used to save the DataParallel instance: build a new OrderedDict with the "module." prefix removed from the keys; see sketch 4 below.

  5. Save a model trained on multiple GPUs so it can be loaded without multiple GPUs: save the state_dict of the model without the DataParallel wrapper; see sketch 5 below.

  6. Load a model trained on multiple GPUs by wrapping the model in DataParallel again before loading; see sketch 6 below.

  7. Word embedding tutorial

  8. Sequence models tutorial
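
Sketches

Sketch 1: a minimal DataParallel setup. The model and batch are hypothetical stand-ins; DataParallel splits each input batch across the visible GPUs and gathers the outputs back on GPU 0.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                 # stand-in for a real model
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)       # splits each batch across visible GPUs
model = model.cuda()

inputs = torch.randn(32, 10).cuda()      # hypothetical batch
outputs = model(inputs)                  # forward runs on all GPUs, outputs gathered on GPU 0
```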
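
Sketch 2: a minimal single-node DistributedDataParallel script with one process per GPU. The launch command, model, and batch are assumptions, following the torch.distributed.launch workflow that ships with PyTorch 1.0.

```python
# Launch: python -m torch.distributed.launch --nproc_per_node=<num_gpus> train.py
import argparse
import torch
import torch.distributed as dist
import torch.nn as nn

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # filled in by the launcher
args = parser.parse_args()

dist.init_process_group(backend="nccl")  # rank/world size come from launcher env vars
torch.cuda.set_device(args.local_rank)   # one GPU per process

model = nn.Linear(10, 2).cuda()          # stand-in for a real model
model = nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank])

inputs = torch.randn(32, 10).cuda()      # in practice, a per-process data shard
model(inputs).sum().backward()           # gradients are all-reduced across processes
```

In a real training loop each process would also draw its own shard of the data, e.g. with torch.utils.data.distributed.DistributedSampler.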
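
Sketch 3: per-layer learning rates are expressed through optimizer parameter groups; the layers and rates here are hypothetical.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 2))

optimizer = torch.optim.SGD(
    [
        {"params": model[0].parameters()},              # uses the default lr below
        {"params": model[2].parameters(), "lr": 1e-2},  # head layer trains 10x faster
    ],
    lr=1e-3,                                            # default for groups without "lr"
    momentum=0.9,
)
```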
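
Sketch 4: a DataParallel state_dict prefixes every key with "module.", so a plain model can load it once the prefix is stripped. The checkpoint is simulated here (with a hypothetical file name) to keep the snippet self-contained.

```python
from collections import OrderedDict
import torch
import torch.nn as nn

# Simulate a checkpoint saved as torch.save({"model": model.state_dict()}, "xxx")
# where model was a DataParallel instance.
wrapped = nn.DataParallel(nn.Linear(10, 2))
torch.save({"model": wrapped.state_dict()}, "dp_ckpt.pth")

plain = nn.Linear(10, 2)                       # the unwrapped target model
checkpoint = torch.load("dp_ckpt.pth", map_location="cpu")
new_state_dict = OrderedDict(
    (k.replace("module.", "", 1), v)           # drop the DataParallel prefix
    for k, v in checkpoint["model"].items()
)
plain.load_state_dict(new_state_dict)          # keys now match the plain model
```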
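
Sketch 5: saving model.module.state_dict() instead of the wrapper's state_dict stores un-prefixed keys, so no DataParallel (and no multi-GPU setup) is needed at load time. The file name is hypothetical.

```python
import torch
import torch.nn as nn

trained = nn.DataParallel(nn.Linear(10, 2))   # as trained on multiple GPUs
torch.save({"model": trained.module.state_dict()}, "plain_ckpt.pth")  # unwrapped weights

single = nn.Linear(10, 2)                     # plain model, no wrapper required
single.load_state_dict(torch.load("plain_ckpt.pth")["model"])
```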
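
Sketch 6: the other way around, keep the "module."-prefixed checkpoint and wrap the model in DataParallel again before loading, so the key names line up.

```python
import torch
import torch.nn as nn

wrapped = nn.DataParallel(nn.Linear(10, 2))
torch.save({"model": wrapped.state_dict()}, "dp_ckpt.pth")  # keys keep the "module." prefix

model = nn.DataParallel(nn.Linear(10, 2))                   # same wrapper as at training time
model.load_state_dict(torch.load("dp_ckpt.pth")["model"])   # prefixed keys match again
```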
