Merge branch 'master' into feature/add-ci-linting
mauicv committed May 15, 2023
2 parents 20cf72f + 5a38fc2 commit 149f08b
Showing 58 changed files with 2,973 additions and 237 deletions.
9 changes: 6 additions & 3 deletions .github/workflows/ci.yml
@@ -84,9 +84,12 @@ jobs:
       # are removed from tests, this can be removed, allowing all tests to use random seeds.
       - name: Upload coverage to Codecov
         if: ${{ success() }}
-        run: |
-          codecov -F ${{ matrix.os }}-${{ matrix.python-version }}
+        uses: codecov/codecov-action@v3
+        with:
+          directory: .
+          env_vars: ${{matrix.os}}, ${{matrix.python-version}}
+          fail_ci_if_error: false
+          verbose: true

       - name: Build Python package
         run: |
6 changes: 3 additions & 3 deletions .pre-commit-config.yaml
@@ -1,13 +1,13 @@
 repos:
-  - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v2.4.0
+  - repo: https://github.com/pycqa/flake8
+    rev: 6.0.0
     hooks:
       - id: flake8
   - repo: https://github.com/pre-commit/mirrors-mypy
     rev: v1.0.1
     hooks:
       - id: mypy
         additional_dependencies: [
-          types-requests~=2.25,
+          types-requests~=2.28,
           types-toml~=0.10
         ]
24 changes: 22 additions & 2 deletions CHANGELOG.md
@@ -1,7 +1,27 @@
 # Change Log

-## v0.12.0dev
-[Full Changelog](https://github.com/SeldonIO/alibi-detect/compare/v0.11.1...master)
+## [v0.11.2](https://github.com/SeldonIO/alibi-detect/tree/v0.11.2) (2023-04-28)
+[Full Changelog](https://github.com/SeldonIO/alibi-detect/compare/v0.11.1...v0.11.2)
+
+### Fixed
+- Failure of the `plot_feature_outlier_image` utility function when no outliers are detected ([#774](https://github.com/SeldonIO/alibi-detect/pull/774), thanks [@signupatgmx](https://github.com/signupatgmx)!).
+
+### Changed
+- Refactored methods that use `tensorflow` optimizers to work with the new optimizers introduced in `2.11` ([#739](https://github.com/SeldonIO/alibi-detect/pull/739)).
+- Maximum supported version of `tensorflow` bumped to `2.12.x` ([#764](https://github.com/SeldonIO/alibi-detect/pull/764)).
+- Maximum supported version of `tensorflow-probability` bumped to `0.19.x` ([#687](https://github.com/SeldonIO/alibi-detect/pull/687)).
+- Supported version of `pandas` bumped to `>1.0.0, <3.0.0` ([#765](https://github.com/SeldonIO/alibi-detect/pull/765)).
+- Maximum supported version of `scikit-image` bumped to `0.20.x` ([#751](https://github.com/SeldonIO/alibi-detect/pull/751)).
+
+### Development
+- Migrate `codecov` to GitHub Actions and don't fail CI when the coverage report upload is rate-limited ([#768](https://github.com/SeldonIO/alibi-detect/pull/768), [#776](https://github.com/SeldonIO/alibi-detect/pull/776)).
+- Bump `mypy` version to `>=1.0, <2.0` ([#754](https://github.com/SeldonIO/alibi-detect/pull/754)).
+- Bump `sphinx` version to `6.x` ([#709](https://github.com/SeldonIO/alibi-detect/pull/709)).
+- Bump `sphinx-design` version to `0.4.1` ([#769](https://github.com/SeldonIO/alibi-detect/pull/769)).
+- Bump `nbsphinx` version to `0.9.x` ([#757](https://github.com/SeldonIO/alibi-detect/pull/757)).
+- Bump `myst-parser` version to `>=1.0, <2.0` ([#756](https://github.com/SeldonIO/alibi-detect/pull/756)).
+- Bump `twine` version to `4.x` ([#511](https://github.com/SeldonIO/alibi-detect/pull/511)).
+- Bump `pre-commit` version to `3.x` and update the config ([#731](https://github.com/SeldonIO/alibi-detect/pull/731)).
+
 ## v0.11.1
 [Full Changelog](https://github.com/SeldonIO/alibi-detect/compare/v0.11.0...v0.11.1)
4 changes: 2 additions & 2 deletions CITATION.cff
@@ -19,6 +19,6 @@ authors:
 - family-names: "Athorne"
   given-names: "Alex"
 title: "Alibi Detect: Algorithms for outlier, adversarial and drift detection"
-version: 0.11.1
-date-released: 2023-03-03
+version: 0.11.2
+date-released: 2023-04-28
 url: "https://github.com/SeldonIO/alibi-detect"
4 changes: 2 additions & 2 deletions README.md
@@ -407,8 +407,8 @@ BibTeX entry:
     title = {Alibi Detect: Algorithms for outlier, adversarial and drift detection},
     author = {Van Looveren, Arnaud and Klaise, Janis and Vacanti, Giovanni and Cobb, Oliver and Scillitoe, Ashley and Samoilescu, Robert and Athorne, Alex},
     url = {https://github.com/SeldonIO/alibi-detect},
-    version = {0.11.1},
-    date = {2023-03-03},
+    version = {0.11.2},
+    date = {2023-04-28},
     year = {2019}
 }
 ```
4 changes: 3 additions & 1 deletion alibi_detect/ad/adversarialae.py
@@ -10,6 +10,7 @@
 from alibi_detect.models.tensorflow.losses import loss_adv_ae
 from alibi_detect.models.tensorflow.trainer import trainer
 from alibi_detect.utils.tensorflow.prediction import predict_batch
+from alibi_detect.utils._types import OptimizerTF
 from tensorflow.keras.layers import Dense, Flatten
 from tensorflow.keras.losses import kld
 from tensorflow.keras.models import Model
@@ -139,7 +140,7 @@ def fit(self,
             loss_fn: tf.keras.losses = loss_adv_ae,
             w_model: float = 1.,
             w_recon: float = 0.,
-            optimizer: tf.keras.optimizers = tf.keras.optimizers.Adam(learning_rate=1e-3),
+            optimizer: OptimizerTF = tf.keras.optimizers.Adam,
             epochs: int = 20,
             batch_size: int = 128,
             verbose: bool = True,
@@ -177,6 +178,7 @@ def fit(self,
         """
         # train arguments
         args = [self.ae, loss_fn, X]
+        optimizer = optimizer() if isinstance(optimizer, type) else optimizer
         kwargs = {
             'optimizer': optimizer,
             'epochs': epochs,
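Note: the new default passes the optimizer class rather than a pre-built instance, and the added line in `fit` instantiates it lazily. A minimal standalone sketch of the idiom (the helper name `resolve_optimizer` is illustrative, not part of this commit):

```python
import tensorflow as tf

def resolve_optimizer(optimizer):
    # A class such as tf.keras.optimizers.Adam is instantiated with its
    # defaults; an already-constructed optimizer instance passes through.
    return optimizer() if isinstance(optimizer, type) else optimizer

opt_a = resolve_optimizer(tf.keras.optimizers.Adam)        # class -> new instance
opt_b = resolve_optimizer(tf.keras.optimizers.Adam(1e-4))  # instance -> unchanged
```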
4 changes: 3 additions & 1 deletion alibi_detect/ad/model_distillation.py
@@ -8,6 +8,7 @@
 from alibi_detect.models.tensorflow.losses import loss_distillation
 from alibi_detect.models.tensorflow.trainer import trainer
 from alibi_detect.utils.tensorflow.prediction import predict_batch
+from alibi_detect.utils._types import OptimizerTF
 from tensorflow.keras.losses import categorical_crossentropy, kld

 logger = logging.getLogger(__name__)
@@ -67,7 +68,7 @@ def __init__(self,
     def fit(self,
             X: np.ndarray,
             loss_fn: tf.keras.losses = loss_distillation,
-            optimizer: tf.keras.optimizers = tf.keras.optimizers.Adam(learning_rate=1e-3),
+            optimizer: OptimizerTF = tf.keras.optimizers.Adam,
             epochs: int = 20,
             batch_size: int = 128,
             verbose: bool = True,
@@ -101,6 +102,7 @@ def fit(self,
         """
         # train arguments
         args = [self.distilled_model, loss_fn, X]
+        optimizer = optimizer() if isinstance(optimizer, type) else optimizer
         kwargs = {
             'optimizer': optimizer,
             'epochs': epochs,
11 changes: 8 additions & 3 deletions alibi_detect/cd/tensorflow/classifier.py
@@ -11,6 +11,7 @@
 from alibi_detect.utils.tensorflow.prediction import predict_batch
 from alibi_detect.utils.warnings import deprecated_alias
 from alibi_detect.utils.frameworks import Framework
+from alibi_detect.utils._types import OptimizerTF


 class ClassifierDriftTF(BaseClassifierDrift):
@@ -31,7 +32,7 @@ def __init__(
             n_folds: Optional[int] = None,
             retrain_from_scratch: bool = True,
             seed: int = 0,
-            optimizer: tf.keras.optimizers.Optimizer = tf.keras.optimizers.Adam,
+            optimizer: OptimizerTF = tf.keras.optimizers.Adam,
             learning_rate: float = 1e-3,
             batch_size: int = 32,
             preprocess_batch_fn: Optional[Callable] = None,
@@ -176,8 +177,12 @@ def score(self, x: np.ndarray) -> Tuple[float, float, np.ndarray, np.ndarray, #
         else:
             raise TypeError(f'x needs to be of type np.ndarray or list and not {type(x)}.')
         ds_tr = self.dataset(x_tr, y_tr)
-        self.model = clone_model(self.original_model) if self.retrain_from_scratch \
-            else self.model
+        if self.retrain_from_scratch:
+            # clone model to re-initialise
+            self.model = clone_model(self.original_model)
+            # Clone optimizer to prevent error due to cloned model (with new tf>=2.11 optimizers)
+            optimizer = self.train_kwargs['optimizer']
+            self.train_kwargs['optimizer'] = optimizer.__class__.from_config(optimizer.get_config())
         train_args = [self.model, self.loss_fn, None]
         self.train_kwargs.update({'dataset': ds_tr})
         trainer(*train_args, **self.train_kwargs)  # type: ignore
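Note: the optimizer-cloning lines guard against a failure mode of the reworked Keras optimizers in `tensorflow>=2.11`, which bind their state to one model's variables and error if reused on a freshly cloned model. A minimal sketch of the rebuild idiom, with an assumed learning rate:

```python
import tensorflow as tf

# Rebuild an equivalent, not-yet-built optimizer from the old one's config,
# so it can bind cleanly to the cloned model's variables.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
fresh = optimizer.__class__.from_config(optimizer.get_config())
```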
5 changes: 3 additions & 2 deletions alibi_detect/cd/tensorflow/learned_kernel.py
@@ -8,6 +8,7 @@
 from alibi_detect.utils.tensorflow.distance import mmd2_from_kernel_matrix, batch_compute_kernel_matrix
 from alibi_detect.utils.warnings import deprecated_alias
 from alibi_detect.utils.frameworks import Framework
+from alibi_detect.utils._types import OptimizerTF


 class LearnedKernelDriftTF(BaseLearnedKernelDrift):
@@ -26,7 +27,7 @@ def __init__(
         reg_loss_fn: Callable = (lambda kernel: 0),
         train_size: Optional[float] = .75,
         retrain_from_scratch: bool = True,
-        optimizer: tf.keras.optimizers.Optimizer = tf.keras.optimizers.Adam,
+        optimizer: OptimizerTF = tf.keras.optimizers.Adam,
         learning_rate: float = 1e-3,
         batch_size: int = 32,
         batch_size_predict: int = 32,
@@ -204,7 +205,7 @@ def score(self, x: Union[np.ndarray, list]) -> Tuple[float, float, float]:
 def trainer(
         j_hat: JHat,
         datasets: Tuple[tf.keras.utils.Sequence, tf.keras.utils.Sequence],
-        optimizer: tf.keras.optimizers.Optimizer = tf.keras.optimizers.Adam,
+        optimizer: OptimizerTF = tf.keras.optimizers.Adam,
         learning_rate: float = 1e-3,
         preprocess_fn: Callable = None,
         epochs: int = 20,
2 changes: 1 addition & 1 deletion alibi_detect/cd/tensorflow/mmd_online.py
@@ -166,7 +166,7 @@ def _configure_thresholds(self):
                 2 * tf.reduce_sum(k_xy_col_sums[w:w + w_size]))
             for k_xx_sum, y_inds_w, k_xy_col_sums in zip(k_xx_sums_all, y_inds_all_w, k_xy_col_sums_all)
         ]
-        mmds = tf.concat(mmds, axis=0)  # an mmd for each bootstrap sample
+        mmds = tf.stack(mmds, axis=0)  # an mmd for each bootstrap sample

         # Now we discard all bootstrap samples for which mmd is in top (1/ert)% and record the thresholds
         thresholds.append(quantile(mmds, 1 - self.fpr))
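Note: the comprehension above yields scalar (rank-0) tensors, one MMD estimate per bootstrap sample; `tf.concat` requires inputs of rank at least 1, whereas `tf.stack` joins scalars into a 1-D tensor. A minimal sketch of the difference:

```python
import tensorflow as tf

mmds = [tf.constant(0.1), tf.constant(0.2), tf.constant(0.3)]  # rank-0 tensors
stacked = tf.stack(mmds, axis=0)  # OK: 1-D tensor of shape (3,)
# tf.concat(mmds, axis=0)         # fails: concat needs inputs of rank >= 1
```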
42 changes: 42 additions & 0 deletions alibi_detect/models/pytorch/gmm.py
@@ -0,0 +1,42 @@
+from torch import nn
+import torch
+
+
+class GMMModel(nn.Module):
+    def __init__(self, n_components: int, dim: int) -> None:
+        """Gaussian Mixture Model (GMM).
+
+        Parameters
+        ----------
+        n_components
+            The number of mixture components.
+        dim
+            The dimensionality of the data.
+        """
+        super().__init__()
+        self.weight_logits = nn.Parameter(torch.zeros(n_components))
+        self.means = nn.Parameter(torch.randn(n_components, dim))
+        self.inv_cov_factor = nn.Parameter(torch.randn(n_components, dim, dim)/10)
+
+    @property
+    def _inv_cov(self) -> torch.Tensor:
+        return torch.bmm(self.inv_cov_factor, self.inv_cov_factor.transpose(1, 2))
+
+    @property
+    def _weights(self) -> torch.Tensor:
+        return nn.functional.softmax(self.weight_logits, dim=0)
+
+    def forward(self, x: torch.Tensor) -> torch.Tensor:
+        """Compute the log-likelihood of the data.
+
+        Parameters
+        ----------
+        x
+            Data to score.
+        """
+        det = torch.linalg.det(self._inv_cov)  # Note det(A^-1)=1/det(A)
+        to_means = x[:, None, :] - self.means[None, :, :]
+        likelihood = ((-0.5 * (
+            torch.einsum('bke,bke->bk', (torch.einsum('bkd,kde->bke', to_means, self._inv_cov), to_means))
+        )).exp()*det[None, :]*self._weights[None, :]).sum(-1)
+        return -likelihood.log()
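Note: the new module scores data by negative log-likelihood under a learnable mixture of full-covariance Gaussians. A quick usage sketch (the import path follows the diff header; the random data is purely illustrative):

```python
import torch
from alibi_detect.models.pytorch.gmm import GMMModel

model = GMMModel(n_components=3, dim=5)
x = torch.randn(100, 5)  # 100 points in 5 dimensions
nll = model(x)           # per-sample negative log-likelihood, shape (100,)
```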
3 changes: 2 additions & 1 deletion alibi_detect/models/tensorflow/trainer.py
@@ -10,7 +10,7 @@ def trainer(
         x_train: np.ndarray,
         y_train: np.ndarray = None,
         dataset: tf.keras.utils.Sequence = None,
-        optimizer: tf.keras.optimizers = tf.keras.optimizers.Adam(learning_rate=1e-3),
+        optimizer: tf.keras.optimizers = tf.keras.optimizers.Adam,
         loss_fn_kwargs: dict = None,
         preprocess_fn: Callable = None,
         epochs: int = 20,
@@ -57,6 +57,7 @@ def trainer(
     callbacks
         Callbacks used during training.
     """
+    optimizer = optimizer() if isinstance(optimizer, type) else optimizer
     return_xy = False if not isinstance(dataset, tf.keras.utils.Sequence) and y_train is None else True
     if not isinstance(dataset, tf.keras.utils.Sequence):  # create dataset
         train_data = x_train if y_train is None else (x_train, y_train)
(Diffs for the remaining changed files were not loaded.)
