This is the implementation of the ICDM 2020 paper *Meta-AAD: Active Anomaly Detection with Deep Reinforcement Learning*. We propose to learn a meta-policy with deep reinforcement learning to optimize the performance of active anomaly detection. Please refer to the paper for more details.
If you find this project helpful, please cite our arXiv version for now:

```bibtex
@article{zha2020metaaad,
  title={Meta-AAD: Active Anomaly Detection with Deep Reinforcement Learning},
  author={Daochen Zha and Kwei-Herng Lai and Mingyang Wan and Xia Hu},
  year={2020},
  journal={arXiv preprint arXiv:2009.07415},
}
```
Make sure you have Python 3.5+ installed.

```bash
git clone https://github.com/daochenzha/Meta-AAD.git
cd Meta-AAD
pip install -r requirements.txt
pip install -e .
```
Train a meta-policy with `train.py`. The important arguments are as follows; an example invocation is sketched after the list.

*   `--train`: the datasets used for training, separated by commas.
*   `--test`: the datasets used for testing, separated by commas.
*   `--num_timesteps`: the number of training timesteps for the reinforcement learning agent.
*   `--log`: the directory where the log and models will be written.

By default, the reinforcement learning training log will be saved in `log/`, the anomaly discovery curves will be saved in `log/anomaly_curves/`, and the trained model will be saved in `log/`.
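A training run might look like the following. This is only a sketch: the dataset names are placeholders, not files guaranteed to ship with the repo, and the timestep value is arbitrary.

```bash
# Hypothetical example: train a meta-policy on two datasets and
# test it on a held-out one. datasetA, datasetB, and datasetC are
# placeholder names; replace them with datasets provided in the repo.
python train.py \
    --train datasetA,datasetB \
    --test datasetC \
    --num_timesteps 100000 \
    --log log/
```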
You may evaluate a trained model with `evaluate.py`. The important arguments are as follows.

*   `--load`: the path to the `model.zip` file.
*   `--test`: the datasets used for testing, separated by commas.
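For example, assuming the trained model was saved as `model.zip` under `log/` (and again using a placeholder dataset name), evaluation could be launched like this:

```bash
# Hypothetical example: evaluate a trained meta-policy.
# log/model.zip is assumed to be the model produced by train.py;
# datasetC is a placeholder dataset name.
python evaluate.py \
    --load log/model.zip \
    --test datasetC
```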
We provide two baselines in this repo for comparison: a random query strategy and an IForest query strategy. They are available in `evaluate_baselines.py`.
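A hypothetical invocation is sketched below; it assumes `evaluate_baselines.py` accepts the same `--test` flag as `evaluate.py`, which should be checked against the script's actual arguments.

```bash
# Hypothetical example: run the baseline query strategies on a
# placeholder dataset. The --test flag is assumed to mirror
# evaluate.py; see evaluate_baselines.py for its real interface.
python evaluate_baselines.py --test datasetC
```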
For other baselines, please refer to the following repos.