# PyTorch README

Currently, we have implemented both the PyTorch -> IR part and the IR -> PyTorch part.

| Models | Caffe | CoreML | CNTK | Keras | MXNet | PyTorch | TensorFlow | Onnx |
| :----------: | :---: | :----: | :--: | :---: | :---: | :-----: | :--------: | :--: |
| Vgg16        |       |        |      |       |       |         |            |      |
| Inception_v3 |       |        |      |       |       |         |            |      |
| ResNet 50    |       |        |      |       |       |         |            |      |
| MobileNet V1 |       |        |      |       |       |         |            |      |
| Tiny-yolo    |       |        |      |       |       |         |            |      |

√ - Correctness tested

o - Some difference after conversion

blank - not tested

The PyTorch parser is modified from the pytorch branch and uses the JIT CppOP to build the graph.

Any contribution is welcome.

## Extract PyTorch pre-trained models

You can refer to the PyTorch model extractor to extract your PyTorch models.

```bash
$ mmdownload -f pytorch -h
Supported models: ['alexnet', 'densenet121', 'densenet161', 'densenet169', 'densenet201', 'inception_v3', 'resnet101', 'resnet152', 'resnet18', 'resnet34', 'resnet50', 'vgg11', 'vgg11_bn', 'vgg13', 'vgg13_bn', 'vgg16', 'vgg16_bn', 'vgg19', 'vgg19_bn']

$ mmdownload -f pytorch -n resnet101 -o ./
Downloading: "https://download.pytorch.org/models/resnet101-5d3b4d8f.pth" to /my/home/.torch/models/resnet101-5d3b4d8f.pth
███████████████████| 102502400/102502400 [00:06<00:00, 15858546.50it/s]
PyTorch pretrained model is saved as [./imagenet_resnet101.pth].
```
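The extracted file contains the full model object (architecture plus weights), not just a state dict. As a quick sanity check, a minimal sketch (reusing the file name from the mmdownload output above) is to load it and inspect its type:

```python
import torch

# For a whole-model file this should print a torchvision model class
# (e.g. torchvision.models.resnet.ResNet), not an OrderedDict of weights
# (see the FAQ below for the difference).
model = torch.load("./imagenet_resnet101.pth")
print(type(model))
```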

## Convert PyTorch pre-trained models to IR

You can convert a whole PyTorch model to the IR structure. Please note that, for generality, we currently only accept the whole model file (.pth), not just the state dict. In other words, the model must be saved with torch.save() so that torch.load() can restore the complete model (architecture and weights).
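If you prepare the model yourself, a minimal sketch of producing such a whole-model file (assuming a torchvision model; the file name is just an example) looks like this:

```python
import torch
import torchvision

# Save the *whole* model object (architecture + weights), not model.state_dict(),
# so that torch.load() can restore the complete network for mmtoir.
model = torchvision.models.resnet101(pretrained=True)
torch.save(model, "imagenet_resnet101.pth")
```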

```bash
$ mmtoir -f pytorch -d resnet101 --inputShape 3,224,224 -n imagenet_resnet101.pth
```

Please bear in mind that you must always pass the --inputShape argument. This differs from other frameworks because PyTorch builds its graph dynamically, so the model itself does not carry a fixed input shape and the parser needs one to trace the network.

Then you will get

```
IR network structure is saved as [resnet101.json].
IR network structure is saved as [resnet101.pb].
IR weights are saved as [resnet101.npy].
```
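If you want to peek at the converted weights, here is a minimal sketch, assuming the .npy file holds a pickled dictionary keyed by layer name (which is how MMdnn typically stores IR weights):

```python
import numpy as np

# Load the IR weight file produced by mmtoir. allow_pickle is needed on
# recent NumPy versions because the file stores a Python dict.
weights = np.load("resnet101.npy", allow_pickle=True).item()
for name, value in list(weights.items())[:5]:
    print(name, getattr(value, "shape", type(value)))
```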

## Convert models from IR to PyTorch code snippet and weights

You can use the following bash command to convert the IR architecture file [inception_v3.pb] and weights file [inception_v3.npy] into a PyTorch code file [pytorch_inception_v3.py] and a PyTorch-compatible weight file [pytorch_inception_v3.npy].

Note: the IR weights need to be transformed into PyTorch-compatible weights. Use the argument -dw to specify the output weight file name.

```bash
$ mmtocode -f pytorch -n inception_v3.pb --IRWeightPath inception_v3.npy --dstModelPath pytorch_inception_v3.py -dw pytorch_inception_v3.npy

Parse file [inception_v3.pb] with binary format successfully.
Target network code snippet is saved as [pytorch_inception_v3.py].
Target weights are saved as [pytorch_inception_v3.npy].
```
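If you prefer to build the network directly from the generated snippet instead of running mmtomodel below, a hedged sketch looks like this (it assumes the generated module exposes a KitModel entry point that takes the weight file, which is MMdnn's usual convention; check the generated .py file to confirm):

```python
import imp

# Load the generated code snippet as a module and build the network
# from the converted weight file.
MainModel = imp.load_source('MainModel', "pytorch_inception_v3.py")
model = MainModel.KitModel("pytorch_inception_v3.npy")
model.eval()
```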

## Generate PyTorch model from code snippet file and weight file

You can use the following bash command to generate the PyTorch model file [pytorch_inception_v3.pth] from the Python code [pytorch_inception_v3.py] and the weight file [pytorch_inception_v3.npy] for further use.

```bash
$ mmtomodel -f pytorch -in pytorch_inception_v3.py -iw pytorch_inception_v3.npy -o pytorch_inception_v3.pth
```

The PyTorch model file is saved as [pytorch_inception_v3.pth], generated from [pytorch_inception_v3.py] and [pytorch_inception_v3.npy]. Note that you still need [pytorch_inception_v3.py] to load the model back.

## Example

Detailed scripts for converting the TensorFlow slim resnet_v1_101 model to PyTorch can be found in issue 22. You can refer to them to implement your own conversion.

## Develop version

Ubuntu 16.04 with

- PyTorch 0.4.0

@ 2018/04/25

## Links

## Limitation

- The main dataflow in a PyTorch network is converted from NHWC (channel last) to NCHW (channel first) format, but some operators with an axis argument (like Concat) may not be transformed correctly. You may need to correct them manually; see the sketch after this list.

- Currently, RNN-related operations are not supported.
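As an illustration of the kind of manual fix the first limitation may require (the tensor shapes here are hypothetical; inspect the generated .py file for the actual concatenation calls):

```python
import torch

# NCHW tensors as they appear in the converted PyTorch model.
branch1 = torch.randn(1, 64, 35, 35)
branch2 = torch.randn(1, 96, 35, 35)

# A concatenation axis left over from the NHWC-based IR would be wrong here:
# x = torch.cat((branch1, branch2), 3)   # dim 3 is width in NCHW, not channels

# In NCHW the channel axis is 1:
x = torch.cat((branch1, branch2), 1)     # shape: (1, 160, 35, 35)
```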

## FAQ

- There are two types of saved models in PyTorch. One includes both the architecture and the weights, which is what MMdnn currently supports; the other includes only the weights, which is not supported yet.

```python
import torch

only_weight_file = "./alexnet-owt-4df8aa71.pth"       # downloaded from the PyTorch model zoo
architecture_weight_file = "imagenet_alexnet.pth"     # downloaded with mmdownload

m = torch.load(only_weight_file)                      # <class 'collections.OrderedDict'>
m_1 = torch.load(architecture_weight_file)            # <class 'torchvision.models.alexnet.AlexNet'> -- supported!
```
- When you get the error "AttributeError: 'collections.OrderedDict' object has no attribute 'state_dict'", it is because the model you loaded includes only the weights. You need to save a new model that includes the architecture:

```python
torch.save(model, filename)
```
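For example, a minimal sketch (assuming the weights-only file matches a torchvision AlexNet, as in the snippet above) of turning it into a supported whole-model file:

```python
import torch
import torchvision

# Rebuild the architecture, load the weights-only checkpoint into it,
# then save the whole model object so MMdnn can consume it.
model = torchvision.models.alexnet()
model.load_state_dict(torch.load("./alexnet-owt-4df8aa71.pth"))
torch.save(model, "imagenet_alexnet.pth")
```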
- How to load the converted PyTorch model?

```python
import torch
import imp
import numpy as np

# Load the generated code snippet as a module so torch.load can find the model class.
MainModel = imp.load_source('MainModel', "tf_pytorch_vgg19.py")

the_model = torch.load("tf_pytorch_vgg19.pth")
the_model.eval()

# Build a dummy NCHW input: HWC -> CHW, then add the batch dimension.
x = np.random.random([224, 224, 3])
x = np.transpose(x, (2, 0, 1))
x = np.expand_dims(x, 0).copy()
data = torch.from_numpy(x)
data = torch.autograd.Variable(data, requires_grad=False).float()

predict = the_model(data)
```
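To gain confidence in a conversion, you can compare the converted model's output with the original framework's output on the same preprocessed input. A minimal sketch, reusing predict from the snippet above and assuming the original logits were saved beforehand to a hypothetical file original_vgg19_logits.npy:

```python
import numpy as np

# Output of the source (e.g. TensorFlow) model on exactly the same input,
# saved to disk beforehand. The file name is only an example.
original_logits = np.load("original_vgg19_logits.npy")

converted_logits = predict.detach().numpy()

# Tolerances are only a suggestion; small numerical differences are expected.
np.testing.assert_allclose(original_logits, converted_logits, rtol=1e-3, atol=1e-4)
print("Converted model matches the original within tolerance.")
```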