He helped me understand what the problem was and suggested the solution I implemented.
Check out his awesome tutorials for more information on converting CoreML models!
To reproduce the error, do the following:
- Clone this repo
- Run `python extract_weights.py`
- Run `python load_weights.py`
- Run `python convert-to-coreml.py`
It turns out CoreML doesn't support fractional (or even uneven) upscaling, so I replaced the custom BilinearUpsampling layers with plain UpSampling2D layers. This means you're stuck with models whose upsampling factors are whole multiples of 2 and whose dimensions are evenly sized. But hey, life isn't perfect!
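For context on the swap: UpSampling2D with its default nearest-neighbour interpolation just repeats every pixel an integer number of times along each spatial axis, which is why only whole-number factors are possible. A minimal numpy sketch of the idea (illustrative only, not code from this repo):

```python
import numpy as np

def upsample2d_nearest(x, factor=2):
    """Repeat each spatial element `factor` times along height and width,
    mimicking what Keras UpSampling2D does with nearest-neighbour mode."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

feat = np.array([[1, 2],
                 [3, 4]])
up = upsample2d_nearest(feat)  # a 2x2 map becomes 4x4
```

A fractional factor like 1.5 has no meaning for `np.repeat`, which is exactly the limitation that forced replacing the bilinear layer.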
I'm including a sample Xcode project showing how to load the model and compute the argmax over the results. It's not a complete example, but it might help you get started. Check out this repo for better examples of using Apple's Vision APIs to run predictions on images:
https://github.com/hollance/YOLO-CoreML-MPSNNGraph
DeepLab is a state-of-the-art deep learning model for semantic image segmentation.
The model is based on the original TF frozen graph. Pretrained weights can be loaded into this model; they are imported directly from the original TF checkpoint.
Segmentation results of the original TF model, output stride (OS) = 8:
Segmentation results of this repo's model with loaded weights, OS = 8:
Results are identical to the TF model.
Segmentation results of this repo's model with loaded weights, OS = 16:
Results are still good.
The model returns a tensor of shape (batch_size, height, width, classes). To obtain per-pixel labels, apply argmax to the logits at the exit layer. Example of predicting on image1.jpg:
```python
from matplotlib import pyplot as plt
import cv2  # used only for resizing; any other resize routine works too
import numpy as np
from model import Deeplabv3

deeplab_model = Deeplabv3()
img = plt.imread("imgs/image1.jpg")
h, w, _ = img.shape  # note: img.shape is (rows, cols, channels)
ratio = 512. / np.max([h, w])
# cv2.resize expects the target size as (width, height)
resized = cv2.resize(img, (int(ratio * w), int(ratio * h)))
resized = resized / 127.5 - 1.  # scale pixels to [-1, 1]
# pad the shorter side up to 512 (this snippet assumes a landscape
# image, so only the height needs padding)
pad_y = int(512 - resized.shape[0])
resized2 = np.pad(resized, ((0, pad_y), (0, 0), (0, 0)), mode='constant')
res = deeplab_model.predict(np.expand_dims(resized2, 0))
labels = np.argmax(res.squeeze(), -1)
plt.imshow(labels[:-pad_y])  # drop the padded rows before displaying
```
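If you want to overlay the labels on the original photo, you still have to undo the resize. Interpolating class indices bilinearly would blend neighbouring classes into meaningless values, so nearest-neighbour resizing is the safe choice. A numpy sketch with a hypothetical `nearest_resize` helper (not part of this repo):

```python
import numpy as np

def nearest_resize(labels, out_h, out_w):
    """Nearest-neighbour resize for integer label maps: for each output
    position, pick the source pixel its index scales down to."""
    in_h, in_w = labels.shape
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return labels[rows][:, cols]

small = np.array([[0, 1],
                  [2, 3]])
big = nearest_resize(small, 4, 4)  # upscaled, class indices unchanged
```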
```python
from model import Deeplabv3

deeplab_model = Deeplabv3(input_shape=(384, 384, 3), classes=4)
```
After that you will get a usual Keras model, which you can train with the .fit and .fit_generator methods.
You can find many useful training parameters in the original repo: https://github.com/tensorflow/models/blob/master/research/deeplab/train.py
Important notes:
- This model has no default weight decay; you need to add it yourself
- The Xception backbone should be trained with OS = 16 and only run at inference time with OS = 8
- You can freeze the feature extractor of the Xception backbone (the first 356 layers) and fine-tune only the decoder
- If you want to train the BN layers too, use a batch size of at least 12 (16+ is even better)
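On the first note: weight decay adds an L2 penalty (wd/2)·‖w‖² to the loss, which shows up in plain SGD as an extra wd·w term in the gradient. A framework-free numpy sketch of a single update step (the values here are illustrative, not from this repo):

```python
import numpy as np

lr, wd = 0.1, 1e-2                # learning rate and weight-decay strength
w = np.array([1.0, -2.0, 0.5])    # some layer's weights
loss_grad = np.zeros_like(w)      # pretend the data gradient is zero

# gradient of (wd / 2) * ||w||^2 with respect to w is wd * w
w_next = w - lr * (loss_grad + wd * w)
# with a zero data gradient the weights simply shrink toward zero
```

In Keras the usual way to get this effect is to set `kernel_regularizer=keras.regularizers.l2(wd)` on the convolutional layers.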
In order to load the model after using model.save(), use this code:
```python
from keras.models import load_model
from model import relu6, BilinearUpsampling

deeplab_model = load_model('example.h5',
                           custom_objects={'relu6': relu6,
                                           'BilinearUpsampling': BilinearUpsampling})
```
There are two available backbones. The Xception backbone is more accurate, but it has 25 times more parameters than MobileNetV2. For MobileNetV2 there are pretrained weights only for alpha = 1.0, but you can instantiate the model with other values of alpha.
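For reference, alpha is MobileNet's width multiplier: it scales the number of filters in every layer, and implementations typically round the result to a multiple of 8 with a helper along the lines of `_make_divisible` from the TF MobileNet code (reproduced here from memory, so treat it as a sketch):

```python
def make_divisible(v, divisor=8, min_value=None):
    """Round v to the nearest multiple of `divisor`, never going below
    `min_value` and never dropping more than ~10% below the original v."""
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v

# e.g. with alpha = 0.75, a 32-filter layer ends up with:
filters = make_divisible(32 * 0.75)
```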
```
Keras==2.1.5
tensorflow-gpu==1.6.0
CUDA==9.0
```