[Bug]: Segmentation masks do not correspond to classification results #1380
Comments
Hello. I think this might come from the way the image and pixel thresholds are calculated. Since these are independent thresholds, calculated on different (yet not entirely independent) data, it can happen that the segmentation map contains some anomalous pixels while the entire image is still classified as normal. This most likely happens because, in most models, the anomaly score is produced by taking the maximum of the anomaly map, rather than by a separate process that actually calculates the anomaly score (like PatchCore, for example, or other models that have a sort of sub-network to derive a score from the anomaly map and other features). So I think this is expected.
I understand that if the anomaly score is calculated differently from the anomaly map (as in PatchCore), then this behavior is expected. However, for methods that take the anomaly map's maximum value, wouldn't it make more sense to tie the choice of the anomaly threshold to the score threshold?
I'm not really sure, but right now these are separate values, calculated separately to maximize performance on each task. I assume it would depend on the model used, but I can't say anything for sure.
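To illustrate the point made above: a minimal sketch (not anomalib code; the threshold values are hypothetical) of how an image-level threshold calibrated on max scores and a pixel-level threshold calibrated on per-pixel labels can disagree, producing anomalous pixels in the mask while the image is classified as normal.

```python
# Illustrative sketch: independently calibrated image- and pixel-level
# thresholds can contradict each other when the image score is the
# maximum of the anomaly map.
import numpy as np

anomaly_map = np.array([
    [0.10, 0.20, 0.15],
    [0.12, 0.55, 0.18],
    [0.11, 0.14, 0.13],
])

# Hypothetical thresholds, each tuned separately on its own task,
# as described in the comment above.
image_threshold = 0.60  # calibrated on image-level max scores
pixel_threshold = 0.50  # calibrated on per-pixel labels

image_score = anomaly_map.max()            # score = max of the map
is_anomalous = image_score > image_threshold
mask = anomaly_map > pixel_threshold       # segmentation mask

print(is_anomalous)  # False: image classified as normal
print(mask.sum())    # 1: yet one pixel is flagged in the mask
```

Because the pixel threshold (0.50) sits below the image threshold (0.60), any pixel score in between flags a region in the mask without flipping the image-level decision.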
Describe the bug
Hello,
Having tested a few methods using the changes made in PR #1378, I have noticed that the segmentation results do not correspond to the classification results.
In the following images, you will see that "normal" images may contain defective areas: this is particularly prevalent with DRAEM, but it also happens with other methods, such as PaDiM and EfficientAD.
EfficientAD:
PaDiM:
DRAEM:
Dataset
MVTec
Model
N/A
Steps to reproduce the behavior
git clone anomalib
cd anomalib
build container via VSCode
pip install -e .
checkout phcarval:more_segmentation_info
python3 tools/train.py --config src/anomalib/models/{model}/config.yaml
OS information
Expected behavior
Predicted masks should only appear when the classification result is "anomalous".
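One way to express this expectation is a post-processing step that gates the mask on the image-level decision. The following is a hypothetical sketch (the function and thresholds are illustrative, not anomalib's API), assuming the image score is the maximum of the anomaly map:

```python
# Hypothetical post-processing sketch: suppress the predicted mask
# whenever the image-level decision is "normal", so the segmentation
# and classification outputs cannot contradict each other.
import numpy as np

def gate_mask(anomaly_map, image_threshold, pixel_threshold):
    """Return a segmentation mask only when the image is anomalous."""
    if anomaly_map.max() <= image_threshold:
        # Image classified as normal: return an empty mask.
        return np.zeros_like(anomaly_map, dtype=bool)
    return anomaly_map > pixel_threshold

# Example: a map whose max (0.55) is below the image threshold (0.60)
# yields an empty mask, even though 0.55 exceeds the pixel threshold.
m = np.array([[0.10, 0.55], [0.20, 0.30]])
print(gate_mask(m, image_threshold=0.60, pixel_threshold=0.50).sum())  # 0
print(gate_mask(m, image_threshold=0.50, pixel_threshold=0.50).sum())  # 1
```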
Screenshots
No response
Pip/GitHub
GitHub
What version/branch did you use?
phcarval:more_segmentation_info
Configuration YAML
Logs
I have lost them, but I can try to provide them if needed.
Code of Conduct