Thanks so much for the awesome library!
I was trying to run "query_instance.py"; the only changes I made to the code are:
i) Increased the num_of_queries from 50 to 500
ii) I set the number of rounds (probably "folds" is the word that you would like to use) to 1
I only tried the QBC and random methods; you will find the plot attached.
QBC does not outperform random; both perform alike on the simulated data. Any thoughts on this?
The main reason may be that the simulated data created by sklearn.make_classification is too simple: the default Logistic Regression classifier can fit it easily. There may also be many similar samples, and adding a similar sample is of little use once the model has already learned from one of them.
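As a rough sketch (not taken from the example script), one way to make the synthetic data harder is to tune sklearn.make_classification toward overlapping, multi-modal classes with some label noise; the specific parameter values below are illustrative assumptions, not a recommendation:

```python
# A minimal sketch: generate a harder synthetic dataset, where the gap between
# informative query strategies (e.g. QBC) and random sampling is usually more visible.
# The parameter values are illustrative assumptions.
from sklearn.datasets import make_classification

X, y = make_classification(
    n_samples=2000,          # more data, so any single query matters less
    n_features=20,
    n_informative=5,         # only a few informative features
    n_redundant=10,
    n_clusters_per_class=3,  # multi-modal classes
    class_sep=0.5,           # classes overlap more
    flip_y=0.05,             # label noise
    random_state=0,
)
```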
Also, the result of a single experiment can be noisy. The results are more convincing if the number of experiments (rounds) is set to a large value and the curves are averaged.
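For illustration, a small sketch of averaging the accuracy curves over several rounds; `run_one_round` here is a hypothetical helper standing in for one full active-learning run, assumed to return one test accuracy per query:

```python
# A minimal sketch: average accuracy curves over n_rounds independent runs
# to reduce the randomness of any single experiment.
import numpy as np

def average_curve(run_one_round, n_rounds=10, num_of_queries=50):
    # run_one_round(num_of_queries) is a hypothetical helper returning a
    # sequence of test accuracies, one value per query step
    curves = np.array([run_one_round(num_of_queries) for _ in range(n_rounds)])
    return curves.mean(axis=0), curves.std(axis=0)  # mean and spread per step
```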
There is no guarantee that QBC is better than random in every case; it is only generally better.