Keras-vggface: Popsa edition

This library was forked to provide model conversion (to TensorFlow Lite for Android and CoreML for iOS) and mobile-friendly preprocessing (e.g. normalization and RGB->BGR channel reversal), replacing the original library's preprocessing, which relies on Python code that is not available on Android or iOS. As a result, this library now depends on CoreMLTools and TensorFlow Lite related libraries (tflite-support and TensorFlow Lite).
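
As a rough sketch of what that mobile-friendly preprocessing amounts to, the VGG16-style variant is essentially an RGB->BGR channel reversal followed by per-channel mean subtraction. The snippet below is illustrative only (the function name is made up, and the mean values are the ones used by keras_vggface.utils.preprocess_input(version=1)); the actual on-device logic lives in the conversion scripts listed below.

import numpy as np

# Channel means (in BGR order) as used by
# keras_vggface.utils.preprocess_input(version=1); illustrative sketch only.
VGG16_MEANS_BGR = np.array([93.5940, 104.7624, 129.1863], dtype=np.float32)

def preprocess_for_vgg16(rgb_image):
    """RGB -> BGR reversal plus per-channel mean subtraction.

    Expects an HxWx3 array with values in the 0-255 range.
    """
    bgr = rgb_image[..., ::-1].astype(np.float32)  # reverse the channel order
    return bgr - VGG16_MEANS_BGR                   # zero-centre each channel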

This library is used by the prototype.face-similarity repo.

  • For iOS, please see keras_vggface/ios_model_creation.py
  • For Android, please see keras_vggface/android_model_creation.py
  • For modifying Model metadata, see keras_vggface/strings_model_metadata.py
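
The scripts above contain the actual conversion logic used by this fork. Purely as a hedged sketch of the general approach, and assuming a TensorFlow 2.x environment in which VGGFace builds a tf.keras model, a TensorFlow Lite export typically looks like the following (the choice of resnet50 with average pooling and the output filename are illustrative, not necessarily what the repo ships):

import tensorflow as tf
from keras_vggface.vggface import VGGFace

# Build the feature-extraction model to be shipped on-device (illustrative choice).
model = VGGFace(model='resnet50', include_top=False,
                input_shape=(224, 224, 3), pooling='avg')

# Standard TF2 conversion path; the scripts in this repo may differ.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization
tflite_model = converter.convert()

with open('vggface_resnet50.tflite', 'wb') as f:
    f.write(tflite_model)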

Prerequisites

  • Create virtual environment: python3 -m venv venv
  • Activate the environment: . ./venv/bin/activate
  • Install the package in editable mode: pip install -e . (dependencies are also installed, since they are specified in setup.py)

Original README: keras-vggface

Oxford VGGFace Implementation using Keras Functional Framework v2+

  • Models are converted from the original Caffe networks.
  • It supports only the TensorFlow backend.
  • You can also load only the feature-extraction layers with VGGFace(include_top=False) initialization.
  • When you use it for the first time, the weights are downloaded and stored in the ~/.keras/models/vggface folder.
  • If you don't know where to start, check the blog posts that use this library.
# Most Recent One (Suggested)
pip install git+https://github.com/rcmalli/keras-vggface.git
# Release Version
pip install keras_vggface

Library Versions

  • Keras v2.2.4
  • Tensorflow v1.14.0
  • Warning: Theano backend is not supported/tested for now.

Example Usage

Available Models

from keras_vggface.vggface import VGGFace

# Based on VGG16 architecture -> old paper (2015)
vggface = VGGFace(model='vgg16') # or VGGFace() as default

# Based on RESNET50 architecture -> new paper (2017)
vggface = VGGFace(model='resnet50')

# Based on SENET50 architecture -> new paper (2017)
vggface = VGGFace(model='senet50')

Feature Extraction

  • Convolution Features

    from keras.engine import  Model
    from keras.layers import Input
    from keras_vggface.vggface import VGGFace
    
    # Convolution Features
    vgg_features = VGGFace(include_top=False, input_shape=(224, 224, 3), pooling='avg') # pooling: None, avg or max
    
    # After this point you can use your model to predict.
    # ...
  • Specific Layer Features

    from keras.engine import  Model
    from keras.layers import Input
    from keras_vggface.vggface import VGGFace
    
    # Layer Features
    layer_name = 'layer_name' # edit this line
    vgg_model = VGGFace() # pooling: None, avg or max
    out = vgg_model.get_layer(layer_name).output
    vgg_model_new = Model(vgg_model.input, out)
    
    # After this point you can use your model to predict.
    # ...
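
For example, a pooled feature-extraction model can be used directly to compute a face embedding. This is a minimal sketch: the image path is a placeholder, and the 512-dimensional output applies to the default VGG16 backbone.

import numpy as np
from keras.preprocessing import image
from keras_vggface.vggface import VGGFace
from keras_vggface import utils

# Global-average-pooled convolutional features serve as the embedding.
vgg_features = VGGFace(include_top=False, input_shape=(224, 224, 3), pooling='avg')

img = image.load_img('face.jpg', target_size=(224, 224))  # placeholder path
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = utils.preprocess_input(x, version=1)  # version=1 for the VGG16 backbone

embedding = vgg_features.predict(x)  # shape (1, 512) for VGG16
print(embedding.shape)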

Finetuning

  • VGG16

    from keras.engine import  Model
    from keras.layers import Flatten, Dense, Input
    from keras_vggface.vggface import VGGFace
    
    #custom parameters
    nb_class = 2
    hidden_dim = 512
    
    vgg_model = VGGFace(include_top=False, input_shape=(224, 224, 3))
    last_layer = vgg_model.get_layer('pool5').output
    x = Flatten(name='flatten')(last_layer)
    x = Dense(hidden_dim, activation='relu', name='fc6')(x)
    x = Dense(hidden_dim, activation='relu', name='fc7')(x)
    out = Dense(nb_class, activation='softmax', name='fc8')(x)
    custom_vgg_model = Model(vgg_model.input, out)
    
    # Train your model as usual.
    # ...
  • RESNET50 or SENET50

    from keras.engine import  Model
    from keras.layers import Flatten, Dense, Input
    from keras_vggface.vggface import VGGFace
    
    #custom parameters
    nb_class = 2
    
    vgg_model = VGGFace(model='resnet50', include_top=False, input_shape=(224, 224, 3)) # or model='senet50'
    last_layer = vgg_model.get_layer('avg_pool').output
    x = Flatten(name='flatten')(last_layer)
    out = Dense(nb_class, activation='softmax', name='classifier')(x)
    custom_vgg_model = Model(vgg_model.input, out)
    
    # Train your model as usual.
    # ...
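
When finetuning, you will usually want to freeze the pretrained base so that only the new classifier head is trained at first. Continuing from either snippet above, an illustrative setup could be the following; the optimizer, loss, and the commented-out fit call are assumptions to adapt to your own task.

# Continues from the finetuning snippets above (vgg_model, custom_vgg_model).
# Freeze the pretrained base so only the new classifier head is updated.
for layer in vgg_model.layers:
    layer.trainable = False

# Illustrative compile settings -- adjust for your own task.
custom_vgg_model.compile(optimizer='adam',
                         loss='categorical_crossentropy',
                         metrics=['accuracy'])

# custom_vgg_model.fit(train_images, train_labels, epochs=5, batch_size=32)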

Prediction

  • Use utils.preprocess_input(x, version=1) for VGG16

  • Use utils.preprocess_input(x, version=2) for RESNET50 or SENET50

    import numpy as np
    from keras.preprocessing import image
    from keras_vggface.vggface import VGGFace
    from keras_vggface import utils
    
    # tensorflow
    model = VGGFace() # default : VGG16 , you can use model='resnet50' or 'senet50'
    
    # Change the image path with yours.
    img = image.load_img('../image/ajb.jpg', target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = utils.preprocess_input(x, version=1) # or version=2
    preds = model.predict(x)
    print('Predicted:', utils.decode_predictions(preds))
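
Since this fork exists to power face similarity, a common follow-on step (not part of this library's API; a sketch under that assumption) is to compare two embeddings from a feature-extraction model with cosine similarity.

import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D embedding vectors."""
    a = np.asarray(a, dtype=np.float32).ravel()
    b = np.asarray(b, dtype=np.float32).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# embedding_a and embedding_b would come from vgg_features.predict(...) as in
# the feature-extraction example above; scores close to 1.0 suggest the two
# crops are more likely the same person.
# score = cosine_similarity(embedding_a, embedding_b)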

References

Licence

  • Check the Oxford webpage for the license of the original models.

  • The code provided in this project is under the MIT License.

Projects / Blog Posts

If you find this project useful, please include a reference link in your work. You can create PRs to this document with your project/blog link.
