Hi
The original DeepInsight implementation in MXNet first preprocesses the image (face detection, cropping, and other preprocessing), then extracts its embedding, and finally L2-normalizes it, so that comparing two features comes down to something like this:
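A minimal sketch of that comparison, assuming the embeddings are L2-normalized and compared via cosine similarity (the random 512-d vectors below merely stand in for real face embeddings):

```python
import numpy as np

def cosine_similarity(feat1, feat2):
    # L2-normalize each embedding so the dot product equals cosine similarity.
    feat1 = feat1 / np.linalg.norm(feat1)
    feat2 = feat2 / np.linalg.norm(feat2)
    return float(np.dot(feat1, feat2))

# Toy 512-d vectors standing in for real face embeddings; with real embeddings,
# same-identity pairs should score near 1.0 and different identities well below.
a = np.random.rand(512)
b = np.random.rand(512)
print(cosine_similarity(a, b))
```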
When I checked your `example.py` file, I didn't see any of these preprocessing steps. The code simply reads an image, resizes it to (112, 112) without detecting the face, and then proceeds to extract its features. Shouldn't we first detect the face and then extract the embeddings?
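For illustration, here is a minimal detect-then-crop sketch using OpenCV's bundled Haar cascade as a stand-in detector (not the repo's actual pipeline; as I understand it, the original MXNet code aligns to facial landmarks, which a plain bounding-box crop only approximates):

```python
import cv2

def crop_face(image_path, size=(112, 112)):
    # Detect faces with OpenCV's bundled frontal-face Haar cascade.
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found; skip this image
    # Keep the largest detection, crop it, and resize to the model's input size.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return cv2.resize(img[y:y + h, x:x + w], size)
```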
I went ahead and ran some tests with the code from `example.py`, and the resulting similarities are off the charts and plainly wrong (even with different dropout rates).
Am I doing something wrong?
Thanks