I trained a custom deep learning model to recognize One Piece characters (17 at the moment) with TensorFlow, Keras, and a fine-tuned MobileNetV2. The model was then converted to a TFLite model for running inference on small devices, and packaged as a TensorFlow Serving Docker container for HTTP-based inference.
96.84% validation accuracy.
- **Deployed on my personal Docker Hub repository:** Click here
- **Kaggle Notebook link:** Kaggle notebook
- **TensorFlow Lite model:** op_classifier_V16.tflite
A fine-tuned MobileNetV2 was used. The training session was run on Kaggle with GPU acceleration.
Dataset link: Click here
Notebook link: Click here
- A data augmentation layer which creates "modified" copies of the training images
- A MobileNetV2 base which extracts the image features
- A global average pooling layer which converts the feature maps into a 1280-element vector
- Three Dense layers followed by a dropout layer to prevent overfitting
- A sigmoid activation layer which produces the final output: the probability of the input belonging to each class (a Keras sketch follows this list)
- Output classes (17 probabilities) : ['Ace', 'Akainu', 'Brook', 'Chopper', 'Crocodile', 'Franky', 'Jinbei', 'Kurohige', 'Law', 'Luffy', 'Mihawk', 'Nami', 'Robin', 'Sanji', 'Shanks', 'Usopp', 'Zoro']
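Below is a minimal Keras sketch of that architecture. The 224×224 input size, the Dense layer widths, the dropout rate, and the augmentation choices are assumptions for illustration; the exact values live in the notebook.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

IMG_SIZE = (224, 224)  # standard MobileNetV2 input size (assumption)
NUM_CLASSES = 17

# Data augmentation layer: creates "modified" images of the training set
data_augmentation = keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
])

# Pre-trained MobileNetV2 base used as the feature extractor, fine-tuned
base_model = keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet"
)
base_model.trainable = True

inputs = keras.Input(shape=IMG_SIZE + (3,))
x = data_augmentation(inputs)
x = keras.applications.mobilenet_v2.preprocess_input(x)
x = base_model(x)
x = layers.GlobalAveragePooling2D()(x)       # feature maps -> 1280-element vector
x = layers.Dense(512, activation="relu")(x)  # dense stack; widths are assumptions
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.2)(x)                   # dropout to prevent overfitting
x = layers.Dense(NUM_CLASSES)(x)
outputs = layers.Activation("sigmoid")(x)    # 17 per-class probabilities
model = keras.Model(inputs, outputs)
```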
Best validation accuracy: 96.84%.
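The trained Keras model was then converted to the TFLite file listed above (op_classifier_V16.tflite). Here is a minimal sketch of how such a conversion is typically done with the TFLite converter; the saved Keras model filename is an assumption:

```python
import tensorflow as tf

# Load the trained Keras model (filename is an assumption)
model = tf.keras.models.load_model("op_classifier_V16.h5")

# Convert to a TFLite flatbuffer for on-device inference
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("op_classifier_V16.tflite", "wb") as f:
    f.write(tflite_model)
```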
1. First option: using the command-line runner
The image source can be a file path or a URL. Set the "mode" parameter to 'image' or 'url' accordingly.
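If you prefer to skip the runner and call the TFLite interpreter directly, a minimal sketch looks like this ("image" mode). The test filename, the 224×224 input size, and any input rescaling baked into the model are assumptions to check against the notebook:

```python
import numpy as np
import tensorflow as tf
from PIL import Image

CLASSES = ['Ace', 'Akainu', 'Brook', 'Chopper', 'Crocodile', 'Franky',
           'Jinbei', 'Kurohige', 'Law', 'Luffy', 'Mihawk', 'Nami',
           'Robin', 'Sanji', 'Shanks', 'Usopp', 'Zoro']

# Load the TFLite model and allocate its tensors
interpreter = tf.lite.Interpreter(model_path="op_classifier_V16.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# "image" mode: read a local file; for "url" mode, download the image
# first (e.g. with urllib) and apply the same preprocessing
img = Image.open("test_image.jpg").convert("RGB").resize((224, 224))
x = np.expand_dims(np.asarray(img, dtype=np.float32), axis=0)

# Run inference and read back the 17 class probabilities
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
probs = interpreter.get_tensor(output_details[0]["index"])[0]

print(CLASSES[int(np.argmax(probs))], float(np.max(probs)))
```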
2. Second option: using the TensorFlow Serving image deployed here (tag: OP_serving)
Pull the Docker image with the OP_serving tag, then run inferences via port 8501.
A test script example is available here
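As a rough idea of what such a test looks like, here is a minimal REST call against the running container. The Docker Hub image name and the served model name ("op_classifier") are assumptions, so check them against the deployed image:

```python
import json

import numpy as np
import requests
from PIL import Image

# Assumes the container was started along these lines:
#   docker run -p 8501:8501 <dockerhub_user>/<repo>:OP_serving

img = Image.open("test_image.jpg").convert("RGB").resize((224, 224))
payload = {"instances": np.asarray(img, dtype=np.float32)[None, ...].tolist()}

# TF Serving REST endpoint; the model name "op_classifier" is an assumption
resp = requests.post(
    "http://localhost:8501/v1/models/op_classifier:predict",
    data=json.dumps(payload),
)
probs = resp.json()["predictions"][0]
print("Predicted class index:", int(np.argmax(probs)))
```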
- Python 3.7 or higher
- IDE: Jupyter Lab/Kaggle Notebooks/Google Colab
- Frameworks: TensorFlow 2.6 or higher and its dependencies
- Libraries: OpenCV, PIL, NumPy