TensorLayer 1.8.4rc0 ~ 1
Pre-release
TL Models - Provides pre-trained VGG16, SqueezeNet and MobileNetV1 in one line of code (by @lgarithm @zsdonghao); more models will be provided soon!
- Classify ImageNet classes; see tutorial_models_mobilenetv1.py
>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get the whole model
>>> net = tl.models.MobileNetV1(x)
>>> # restore pre-trained parameters
>>> sess = tf.InteractiveSession()
>>> net.restore_params(sess)
>>> # use for inferencing
>>> probs = tf.nn.softmax(net.outputs)
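The snippet above only builds the graph and restores the weights; a minimal inference sketch follows. Here `img` is assumed to be a 224x224 RGB image already loaded as a float NumPy array (the zero array is only a stand-in for illustration).
>>> import numpy as np
>>> # replace this stand-in with a real 224x224 RGB image scaled as the weights expect
>>> img = np.zeros((224, 224, 3), dtype=np.float32)
>>> prob = sess.run(probs, feed_dict={x: [img]})
>>> print(np.argmax(prob[0]))  # index of the predicted ImageNet class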
- Extract features and train a classifier with 100 classes
>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> from tensorlayer.layers import Conv2d, FlattenLayer
>>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get model without the last layer
>>> cnn = tl.models.MobileNetV1(x, end_with='reshape')
>>> # add one more layer
>>> net = Conv2d(cnn, 100, (1, 1), (1, 1), name='out')
>>> net = FlattenLayer(net, name='flatten')
>>> # initialize all parameters
>>> sess = tf.InteractiveSession()
>>> tl.layers.initialize_global_variables(sess)
>>> # restore pre-trained parameters
>>> cnn.restore_params(sess)
>>> # train your own classifier (only update the last layer)
>>> train_params = tl.layers.get_variables_with_name('out')
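A possible continuation of the fine-tuning setup is sketched below; the labels placeholder `y_`, the cross-entropy cost and the learning rate are illustrative assumptions, not part of the release. Passing `var_list=train_params` is what restricts updates to the new 'out' layer; in practice, create the optimizer before calling `initialize_global_variables` so its internal slot variables are initialized as well.
>>> # hypothetical labels placeholder and cost for the 100 classes
>>> y_ = tf.placeholder(tf.int64, [None])
>>> cost = tl.cost.cross_entropy(net.outputs, y_, name='cost')
>>> # only the variables of the new 'out' layer are updated
>>> train_op = tf.train.AdamOptimizer(1e-4).minimize(cost, var_list=train_params)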
- Reuse the model with different inputs
>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> x1 = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> x2 = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get network without the last layer
>>> net1 = tl.models.MobileNetV1(x1, end_with='reshape')
>>> # reuse the parameters with different input
>>> net2 = tl.models.MobileNetV1(x2, end_with='reshape', reuse=True)
>>> # restore pre-trained parameters (as they share parameters, we don’t need to restore net2)
>>> sess = tf.InteractiveSession()
>>> net1.restore_params(sess)
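Because the two branches share the same parameters, features for two input batches can be computed in a single session call, as in the sketch below; the zero arrays are stand-in inputs for illustration only.
>>> import numpy as np
>>> batch1 = np.zeros((4, 224, 224, 3), dtype=np.float32)  # stand-in input batches
>>> batch2 = np.zeros((4, 224, 224, 3), dtype=np.float32)
>>> feat1, feat2 = sess.run([net1.outputs, net2.outputs],
...                         feed_dict={x1: batch1, x2: batch2})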