Intel: SSD-MobileNet accuracy mAP=15.280% #27
If I remove the
Using the other quantized model without
Using the other quantized model with
Here's the full conversion log:
After my blunder with #29, where I didn't think of running the ImageNet accuracy script with another data type parameter, I decided to check the COCO accuracy script on the data from the v0.5 submission. I discovered that for the
Note that the JSON logs for the Server, Offline and SingleStream scenarios have 31754, 66074 and 4255 lines, respectively, not 5002 as expected.
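A line-count mismatch like this is easy to screen for before running the full accuracy script. Below is a minimal sketch (the log file path and the expected count of 5002 lines are taken from the discussion above; the helper names are my own, not part of any submission code):

```python
def count_log_lines(path):
    """Return the number of lines in an accuracy log file."""
    with open(path) as f:
        return sum(1 for _ in f)

def check_log(path, expected=5002):
    """Return (ok, actual) for whether the log has the expected line count.

    5002 is the count observed for well-formed v0.5 accuracy logs in this
    thread; Server/Offline/SingleStream logs that deviate from it (e.g.
    31754, 66074, 4255) are suspect and worth inspecting before scoring.
    """
    n = count_log_lines(path)
    return n == expected, n
```

Usage: `check_log("mlperf_log_accuracy.json")` returns `(False, 31754)` for the broken Server log, flagging it without parsing any JSON.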
For the
No problems with DellEMC's results (including OpenVINO and TensorRT):
So the resolution seems to have two aspects to it:
This can be confirmed via a new Docker image. Alternatively, you can run this natively:
The
... while the benchmark code doesn't build with
I'm guessing new include paths need to be set for 2020.1:
as well as modifying the program:
but it still doesn't build:
Trouble is also expected with 2020.2:
Still waiting for Intel's response on these issues. cc: @christ1ne
@psyhtest I tried a "parallel" reproduction exercise and found similar results. I saw you have a working example in CK: https://github.com/ctuning/ck-openvino#accuracy-on-the-coco-2017-validation-set but I couldn't tell whether you had figured out the "reverse_input_channels" question.
I tried to generate both FP32 precision (default) and FP16 precision models, with and without the "reverse" option, but I get a "good enough" mAP only when not using the option (which differs from what is suggested in Intel's submission). Regarding OpenVINO 2020.x, it seems cpu_extension is now part of the library itself and no longer needs to be specified separately (openvinotoolkit/openvino#916).
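For context on what the "reverse" option actually does: OpenVINO's `--reverse_input_channels` makes the converted model expect the opposite channel order (RGB vs. BGR) at its input, so whether it helps or hurts mAP depends entirely on which order the benchmark harness feeds in. A pure-Python illustration of the transformation (this is only a sketch of the concept, not OpenVINO code):

```python
def reverse_channels(image):
    """Swap RGB <-> BGR for an image given as nested lists of
    [R, G, B] (or [B, G, R]) pixel triples.

    This mimics what --reverse_input_channels bakes into the model:
    if the harness decodes images as BGR (as OpenCV does) but the
    model was trained on RGB, exactly one reversal is needed --
    applying it when the orders already match corrupts the input
    and tanks the mAP.
    """
    return [[pixel[::-1] for pixel in row] for row in image]

# A 1x2 "image" with a red and a blue RGB pixel:
rgb = [[[255, 0, 0], [0, 0, 255]]]
bgr = reverse_channels(rgb)   # [[[0, 0, 255], [255, 0, 0]]]
```

So an accuracy drop with the option enabled, as observed above, is consistent with the harness already supplying images in the order the model expects.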
I tried to write a "minimal" equivalent program, not reusing much from the newer submissions, only:
Regarding the issue: I know this is the first submission round, but it is interesting to compare old and newer versions, which is why it is important to me to clarify these doubts.
So, as far as I understand, the current status is to not use the "--reverse_input_channels" option when converting to the OpenVINO model representation. By the way, I started looking at this a long time ago as well; I'm back on it since I thought I could understand it a bit better now. Thanks for your answer.
We've meticulously reconstructed all components of Intel's MLPerf Inference v0.5 submission, including:
CMakeLists.txt
Unfortunately, the achieved accuracy (15.280%) is much lower than expected (22.627%):
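To make "much lower than expected" concrete: MLPerf Inference generally requires a submission to reach a stated fraction (typically 99%) of the reference model's accuracy. A quick check using the numbers above (the 99% fraction is the usual MLPerf threshold, stated here as an assumption; the helper is mine):

```python
def meets_target(achieved, reference, fraction=0.99):
    """Check whether an achieved mAP reaches the given fraction of the
    reference mAP (0.99 is MLPerf's usual accuracy threshold)."""
    return achieved >= fraction * reference

achieved_map = 15.280   # mAP reached by the reconstructed setup
reference_map = 22.627  # expected mAP from Intel's v0.5 submission

# 0.99 * 22.627 = 22.40073, so 15.280 falls far short:
print(meets_target(achieved_map, reference_map))  # prints False
```

The gap is not a rounding-level miss but a ~32% relative shortfall, which is why a systematic cause (such as channel-order handling during conversion) is suspected rather than run-to-run noise.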
To reproduce, follow the instructions for the prebuilt Docker image.
You can also reproduce the same in a native Debian environment by following Collective Knowledge steps in the source Dockerfile.