See TimeEval Algorithms for algorithms that are compatible with this tool. The algorithms in that repository are containerized and can be executed using TimeEval's DockerAdapter.
If you use TimeEval, please consider citing our paper.
📖 TimeEval's documentation is hosted at https://timeeval.readthedocs.io.
- Large integrated benchmark dataset collection with more than 700 datasets
- Benchmark dataset interface to select datasets easily
- Adapter architecture for algorithm integration
  - DockerAdapter (see the sketch below)
  - JarAdapter
  - DistributedAdapter
  - MultivarAdapter
  - ... (add your own adapter)
- Large collection of existing algorithm implementations (in TimeEval Algorithms repository)
- Automatic scoring of algorithm detection quality using AUC (area under the ROC curve, also known as the c-statistic) or range-based metrics
- Automatic timing of the algorithm execution (differentiates pre-, main-, and post-processing)
- Distributed experiment execution
- Output and logfile tracking for subsequent inspection
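Most existing detectors are integrated via the DockerAdapter, which runs a containerized TimeEval algorithm image and collects its anomaly scores. The snippet below is a minimal sketch of wrapping such an image yourself; the image reference and the hyperparameter are illustrative assumptions, so check the TimeEval Algorithms repository and the documentation for the exact values:

```python
from timeeval import Algorithm, TrainingType, InputDimensionality
from timeeval.adapters import DockerAdapter
from timeeval.params import FixedParameters

# Sketch only: the image reference and hyperparameter below are placeholders.
containerized_algorithm = Algorithm(
    name="LOF (containerized)",
    main=DockerAdapter(image_name="ghcr.io/timeeval/lof"),  # hypothetical image reference
    data_as_file=True,  # Docker-based algorithms read their input dataset from a file
    training_type=TrainingType.UNSUPERVISED,
    input_dimensionality=InputDimensionality.MULTIVARIATE,
    param_config=FixedParameters({"n_neighbors": 50}),  # hypothetical hyperparameter
)
```

Pre-built wrappers for the algorithms from the TimeEval Algorithms repository are also available in `timeeval.algorithms` (see `subsequence_if` in the quickstart below).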
TimeEval can be installed as a package or from source.
⚠️ Attention! Currently, TimeEval is tested only on Linux and macOS and relies on Unix-like capabilities. On Windows, you can use TimeEval within WSL. If you want to use the provided detection algorithms, Docker is required.
Builds of TimeEval are published to PyPI. The following prerequisites are required:
- python >= 3.7, <= 3.11
- pip >= 20
- Docker (for the anomaly detection algorithms)
- (optional) `rsync` for distributed TimeEval
You can use `pip` to install TimeEval from PyPI:

```bash
pip install TimeEval
```
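As a quick sanity check (not part of the official instructions), you can verify that the package imports cleanly:

```bash
python -c "from timeeval import TimeEval; print('TimeEval is installed')"
```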
tl;dr
```bash
git clone git@github.com:TimeEval/TimeEval.git
cd TimeEval/
conda create -n timeeval python=3.7
conda activate timeeval
pip install -r requirements.txt
python setup.py bdist_wheel
pip install dist/TimeEval-*-py3-none-any.whl
```
The following tools are required to install TimeEval from source:
- git
- Python >= 3.7 and pip (Anaconda or Miniconda is preferred)
- Clone this repository using git and change into its root directory.
- Create a conda environment and install all required dependencies:

  ```bash
  conda create -n timeeval python=3.7
  conda activate timeeval
  pip install -r requirements.txt
  ```

- Build TimeEval:

  ```bash
  python setup.py bdist_wheel
  ```

  This should create a Python wheel in the `dist/` folder.
- Install TimeEval and all of its dependencies:

  ```bash
  pip install dist/TimeEval-*-py3-none-any.whl
  ```

- If you want to make changes to TimeEval or run the tests, you also need to install the development dependencies from `requirements.dev` (the test suite can then be run as sketched below):

  ```bash
  pip install -r requirements.dev
  ```
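With the development dependencies installed, the tests can be executed with pytest. The exact invocation below is an assumption based on the repository's `tests/` folder; consult the contribution guidelines for the authoritative command:

```bash
python -m pytest tests/
```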
tl;dr
```python
from pathlib import Path
from typing import Dict, Any

import numpy as np

from timeeval import TimeEval, DatasetManager, Algorithm, TrainingType, InputDimensionality
from timeeval.adapters import FunctionAdapter
from timeeval.algorithms import subsequence_if
from timeeval.params import FixedParameters

# Load dataset metadata
dm = DatasetManager(Path("tests/example_data"), create_if_missing=False)

# Define algorithm
def my_algorithm(data: np.ndarray, args: Dict[str, Any]) -> np.ndarray:
    score_value = args.get("score_value", 0)
    return np.full_like(data, fill_value=score_value)

# Select datasets and algorithms
datasets = dm.select()
datasets = datasets[-1:]
# Add algorithms to evaluate...
algorithms = [
    Algorithm(
        name="MyAlgorithm",
        main=FunctionAdapter(my_algorithm),
        data_as_file=False,
        training_type=TrainingType.UNSUPERVISED,
        input_dimensionality=InputDimensionality.UNIVARIATE,
        param_config=FixedParameters({"score_value": 1.})
    ),
    subsequence_if(params=FixedParameters({"n_trees": 50}))
]

timeeval = TimeEval(dm, datasets, algorithms)

# execute evaluation
timeeval.run()
# retrieve results
print(timeeval.get_results())
```
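Beyond this local run, TimeEval can compute additional quality metrics and distribute experiments across machines (see the feature list above). The following snippet, which reuses the objects from the quickstart, is a sketch based on TimeEval's documentation; the metric constants, the RemoteConfiguration helper, and the host names are assumptions that you should adapt to your environment:

```python
from timeeval import TimeEval, RemoteConfiguration, DefaultMetrics

# Sketch: metric selection and distributed execution (host names are placeholders).
timeeval = TimeEval(
    dm, datasets, algorithms,
    metrics=[DefaultMetrics.ROC_AUC, DefaultMetrics.RANGE_PR_AUC],  # quality metrics to compute
    distributed=True,  # execute experiments on a cluster of machines
    remote_config=RemoteConfiguration(
        scheduler_host="node1.example.com",  # hypothetical scheduler node
        worker_hosts=["node2.example.com", "node3.example.com"],  # hypothetical worker nodes
    ),
)
timeeval.run()
print(timeeval.get_results())
```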
If you use TimeEval in your project or research, please cite our demonstration paper:
Phillip Wenig, Sebastian Schmidl, and Thorsten Papenbrock. TimeEval: A Benchmarking Toolkit for Time Series Anomaly Detection Algorithms. PVLDB, 15(12): 3678 - 3681, 2022. doi:10.14778/3554821.3554873
```bibtex
@article{WenigEtAl2022TimeEval,
title = {TimeEval: {{A}} Benchmarking Toolkit for Time Series Anomaly Detection Algorithms},
author = {Wenig, Phillip and Schmidl, Sebastian and Papenbrock, Thorsten},
date = {2022},
journaltitle = {Proceedings of the {{VLDB Endowment}} ({{PVLDB}})},
volume = {15},
number = {12},
pages = {3678--3681},
doi = {10.14778/3554821.3554873}
}
```