Hi,

I am a little new to anomaly detection, but I was curious about the right way to do cross-validation when using ADBench, since the test and train samples are already split by DataGenerator. An easy way would be to concatenate the test and train datasets and then put them in the CV loop, but is there a cleaner way?
I sincerely apologize for my late reply. In anomaly detection problems there are often only a few labeled samples (e.g., 5 labeled anomalies) in the training set, and cross-validation (CV) would shrink that labeled set even further within each fold.

Some suggestions:
1. You can apply a data augmentation method like oversampling or SMOTE, and then use CV on the concatenation of the training and testing datasets (a fold-wise version of this is sketched in the second snippet after this list).
2. You can set la (the ratio of labeled anomalies) to 1.00, so that all labeled anomalies are available in the training set, which can then be concatenated with the testing set to perform cross-validation (see the first snippet after this list), although anomalies may still be very rare on some datasets.
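For concreteness, here is a minimal sketch of suggestion 2, assuming the DataGenerator interface from this repo; the import path and the dataset name are assumptions that depend on your version and installation, and the fold loop is only a skeleton to fill in with your detector:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Assumption: repo-checkout style import; adjust if ADBench is installed
# as a package in your environment.
from data_generator import DataGenerator

# la=1.00: every anomaly in the training split is labeled, so train and
# test labels are on the same footing after concatenation.
data = DataGenerator(dataset='6_cardio').generator(la=1.00)  # dataset name is an example

# Undo the pre-made split: concatenate features and labels for CV.
X = np.concatenate([data['X_train'], data['X_test']])
y = np.concatenate([data['y_train'], data['y_test']])

# Stratified folds keep the rare anomalies spread across all folds.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (tr, te) in enumerate(skf.split(X, y)):
    X_tr, y_tr, X_te, y_te = X[tr], y[tr], X[te], y[te]
    # Fit your detector on (X_tr, y_tr) and evaluate on (X_te, y_te) here.
    print(f'fold {fold}: {int(y_tr.sum())} train anomalies, {int(y_te.sum())} test anomalies')
```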
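And a sketch of suggestion 1 on top of that, applying SMOTE inside each training fold rather than once up front, so that synthetic anomalies never leak into the validation fold; the classifier and metric here are illustrative stand-ins, not part of the original suggestion:

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# X, y: the concatenated arrays from the previous sketch.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = []
for tr, te in skf.split(X, y):
    # With only a handful of anomalies per fold, SMOTE's default
    # k_neighbors=5 can exceed the minority count, so cap it.
    n_anom = int(y[tr].sum())
    smote = SMOTE(k_neighbors=min(5, n_anom - 1), random_state=42)
    X_res, y_res = smote.fit_resample(X[tr], y[tr])

    clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
    scores.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))

print(f'CV AUC-ROC: {np.mean(scores):.3f} +/- {np.std(scores):.3f}')
```

Either way, report the mean and spread over folds rather than a single split, since fold-to-fold variance can be large with so few anomalies.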