
OpenML Python runs may have swapped truth and prediction labels (at least for classification, regression) #1185

Open
LennartPurucker opened this issue Mar 22, 2022 · 2 comments


@LennartPurucker
Contributor

Description

For a small set of flows, the predictions.arff files of some runs contain faulty entries: the recorded prediction does not correspond to the class with the highest confidence.

As far as I was able to determine, all affected flows are sklearn pipelines published/uploaded using openml-python.
Moreover, the confidences of these pipelines should, to the best of my knowledge, be representative of the prediction (unlike, for example, the confidences of an SVM).
Furthermore, the confidences are off by a large margin, so this is not a result of two or more classes having almost equal confidences, nor a floating-point precision problem.

Example

Flow 19039 with Run 10581112 and the associated predictions file.

| row_id | predicted class in predictions.arff | confidence.1 | confidence.2 | prediction based on confidence |
|--------|-------------------------------------|--------------|--------------|--------------------------------|
| 95     | 1                                   | 0.2552       | 0.7448       | 2                              |
| 349    | 1                                   | 0.0601       | 0.9399       | 2                              |
| 980    | 2                                   | 0.6280       | 0.3720       | 1                              |

Expected Results

Each prediction in predictions.arff should correspond to the class with the highest confidence in the same row.

Actual Results

In the rows above, the prediction in predictions.arff corresponds to the class with the second-highest confidence. In other cases, the prediction does not correspond to a high-confidence class at all but seems to be chosen at random.
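A minimal sketch (not the openml-python implementation) of the check used to find these faulty rows: compare each recorded prediction against the argmax of its per-class confidences. The rows below are the example entries from Run 10581112 above; mapping class labels "1" and "2" to confidence.1 and confidence.2 is an assumption based on that table.

```python
# Example rows from the predictions file of Run 10581112:
# (row_id, predicted class, confidence.1, confidence.2)
rows = [
    (95, "1", 0.2552, 0.7448),
    (349, "1", 0.0601, 0.9399),
    (980, "2", 0.6280, 0.3720),
]
labels = ["1", "2"]  # assumed label order matching the confidence columns


def mismatched_rows(rows, labels):
    """Return (row_id, predicted, argmax_label) for every inconsistent row."""
    bad = []
    for row_id, predicted, *confs in rows:
        # Label of the class with the highest confidence in this row.
        argmax_label = labels[max(range(len(confs)), key=confs.__getitem__)]
        if predicted != argmax_label:
            bad.append((row_id, predicted, argmax_label))
    return bad


for row_id, predicted, argmax_label in mismatched_rows(rows, labels):
    print(f"row {row_id}: predicted {predicted}, highest confidence is {argmax_label}")
```

All three example rows are flagged, confirming the mismatch described above.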

Affected Flows

In my research, I found that the following flows run into this problem at least once: [19030, 19037, 19039, 19035, 18818, 17839, 17761].
These include sklearn pipelines using decision trees (19030, 18818), Gradient Boosting (19037, 19039), KNN (19035), SGD (17839), and LDA (17761).

Versions

I assume that the flows [19030, 19037, 19039, 1903] used the latest version of openml-python, based on their upload date and feedback gathered from the original uploader. For the other flows, I am not certain which version was used.

@LennartPurucker
Contributor Author

The reasons for the corrupted files were most likely fixed in openml/openml-python#1209.

@PGijsbers should we close this or let this stay open until the runs on the server have been updated?

@PGijsbers PGijsbers transferred this issue from openml/openml-python Feb 24, 2023
@PGijsbers PGijsbers changed the title Mismatches between the Confidences and the Prediction of a Flow in Predictions.arff Files OpenML Python runs may have swapped truth and prediction labels (at least for classification, regression) Feb 24, 2023
@PGijsbers
Contributor

TODO:
For each run uploaded by openml-python, check whether the truth and prediction columns were swapped; if so, swap them back and overwrite the old ARFF file.
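One possible heuristic for detecting a swap (an assumption on my part, not a decided approach): in a valid predictions file the prediction column should track the confidence argmax, while the truth column need not, so whichever column agrees better with the confidences is likely the real prediction column.

```python
def columns_swapped(truth, prediction, confidences, labels):
    """Heuristic: True if the stored 'prediction' column tracks the
    confidence argmax worse than the stored 'truth' column does,
    suggesting the two columns were swapped on upload."""
    def agreement(column):
        # Count rows where the column's label equals the argmax label.
        return sum(
            label == labels[max(range(len(c)), key=c.__getitem__)]
            for label, c in zip(column, confidences)
        )
    return agreement(prediction) < agreement(truth)


# Hypothetical example: the stored 'prediction' column never matches the
# confidences, while the stored 'truth' column always does -> swapped.
confidences = [(0.2, 0.8), (0.9, 0.1), (0.3, 0.7)]
labels = ["1", "2"]
truth = ["2", "1", "2"]       # tracks the argmax perfectly
prediction = ["1", "2", "1"]  # never tracks the argmax
print(columns_swapped(truth, prediction, confidences, labels))  # True
```

A per-run check like this would still need a tie-breaking policy for rows with near-equal confidences before overwriting any server-side ARFF files.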
