
Update tensorflow requirement from !=2.6.0,!=2.6.1,<2.15.0,>=2.2.0 to >=2.2.0,!=2.6.0,!=2.6.1,<2.19.0 #908

Merged
44 commits
badf192  Update tensorflow requirement (dependabot[bot], Nov 18, 2024)
d1d62b9  Updated upper and lower bound on tensorflow (RobertSamoilescu, Dec 6, 2024)
189fd8e  Fixed input shape for test_misc_tf (RobertSamoilescu, Dec 6, 2024)
3fb914c  Fixed activation serialization issue for test_saving_legacy.py (RobertSamoilescu, Dec 6, 2024)
548dc1b  Fixed preprocessor (RobertSamoilescu, Dec 6, 2024)
32cbf0c  Fixed admd (RobertSamoilescu, Dec 6, 2024)
5287f4f  Fixed model cloning (RobertSamoilescu, Dec 9, 2024)
6edaec4  Fixed TFDataset index error (RobertSamoilescu, Dec 9, 2024)
728db4f  Fixed trainable vars for spot the diff detector (RobertSamoilescu, Dec 9, 2024)
e3d6dc9  Fixed infer sigma flag in mmd (RobertSamoilescu, Dec 9, 2024)
207ceb4  Fixed classifier tf test (RobertSamoilescu, Dec 9, 2024)
a4d4a5b  Fixed kernes trainable variables (RobertSamoilescu, Dec 9, 2024)
945e69f  Fixed llr tests (RobertSamoilescu, Dec 9, 2024)
f72c462  Fixed saving and optimizer saving (RobertSamoilescu, Dec 9, 2024)
f20de56  Included test entry in Makefile and updated ci (RobertSamoilescu, Dec 9, 2024)
71e7331  Test all notebooks - to be reverted (RobertSamoilescu, Dec 9, 2024)
f1c394f  Removed python3.8 from ci (RobertSamoilescu, Dec 9, 2024)
21379eb  Improved test command (RobertSamoilescu, Dec 9, 2024)
481c6a9  Fixed saving test models (RobertSamoilescu, Dec 10, 2024)
689723d  Fixed non-tensor inputs as positional arguments (RobertSamoilescu, Dec 10, 2024)
2cdfe8e  Fixed env variable in makefile (RobertSamoilescu, Dec 10, 2024)
f46b67d  Fixed optimizer tests, including legacy tests (RobertSamoilescu, Dec 10, 2024)
4fa6ee2  Fixed optional dependencies imports (RobertSamoilescu, Dec 10, 2024)
ecce171  Fixed od_vae_adult.ipynb (RobertSamoilescu, Dec 10, 2024)
ec1e9f6  Fixed od_vae_cifar10.ipynb (RobertSamoilescu, Dec 10, 2024)
e946877  Fixed cd_model_unc_cifar10_wine.ipynb (RobertSamoilescu, Dec 10, 2024)
47836b7  Fixed od_aegmm_kddcup.ipynb (RobertSamoilescu, Dec 10, 2024)
17889a8  Fixed od_vae_kddcup.ipynb (RobertSamoilescu, Dec 10, 2024)
e9cdb44  Fixed od_seq2seq_ecg.ipynb (RobertSamoilescu, Dec 10, 2024)
6c1b95b  Fixed od_ae_cifar10.ipynb (RobertSamoilescu, Dec 10, 2024)
8c5e7d5  Fixed cd_distillation_cifar10.ipynb (RobertSamoilescu, Dec 10, 2024)
688dc25  Fixed cd_ks_cifar10.ipynb (RobertSamoilescu, Dec 10, 2024)
c94dc27  Fixed cd_mmd_cifar10.ipynb (RobertSamoilescu, Dec 10, 2024)
a7b4f35  Fixed od_llr_genome.ipynb (RobertSamoilescu, Dec 10, 2024)
10fb508  Fixed od_llr_mnist.ipynb (RobertSamoilescu, Dec 10, 2024)
bc81fc0  Fixed od_seq2seq_synth.ipynb (RobertSamoilescu, Dec 10, 2024)
9bf3cb4  Fixed cd_text_imdb.ipynb (RobertSamoilescu, Dec 10, 2024)
5944751  Fixed alibi_detect_deploy.ipynb (RobertSamoilescu, Dec 10, 2024)
9aed8e6  Fixed ad_ae_cifar10.ipynb (RobertSamoilescu, Dec 10, 2024)
f961c13  Reverted a few things in misc (RobertSamoilescu, Dec 10, 2024)
a55ce0e  Fixed flake8 errors (RobertSamoilescu, Dec 10, 2024)
5c73414  Reverted test all notebooks github actions (RobertSamoilescu, Dec 10, 2024)
4b35346  Addressed PR comments (RobertSamoilescu, Dec 11, 2024)
88c8e71  Fixed flake8 error (RobertSamoilescu, Dec 11, 2024)
7 changes: 2 additions & 5 deletions .github/workflows/ci.yml
@@ -34,7 +34,7 @@ jobs:
strategy:
matrix:
os: [ ubuntu-latest ]
-python-version: [ '3.8', '3.9', '3.10', '3.11']
+python-version: ['3.9', '3.10', '3.11']
Member: should we add 3.12 here as well?

Collaborator: Will be added in a future PR.

pydantic-version: [ '1.10.15', '2.7.1' ]
Member: not related to this PR, do we still support pydantic v1?

Collaborator: I was thinking to remove it in a future PR. Was about to ask you about that.

include: # Run windows tests on only one python version
- os: windows-latest
@@ -71,10 +71,7 @@ jobs:
limit-access-to-actor: true

- name: Test with pytest
-run: |
-  pytest --randomly-seed=0 alibi_detect
-  # Note: The pytest-randomly seed is fixed at 0 for now. Once the legacy np.random.seed(0)'s
-  # are removed from tests, this can be removed, allowing all tests to use random seeds.
+run: make test

- name: Upload coverage to Codecov
uses: codecov/codecov-action@v3
8 changes: 6 additions & 2 deletions Makefile
@@ -6,9 +6,12 @@ install-dev:
install:
pip install -e .[all]

+# Note: The pytest-randomly seed is fixed at 0 for now. Once the legacy np.random.seed(0)'s
+# are removed from tests, this can be removed, allowing all tests to use random seeds.
.PHONY: test
-test: ## Run all tests
-	python setup.py test
+test:
+	TF_USE_LEGACY_KERAS=1 pytest --randomly-seed=0 alibi_detect/utils/tests/test_saving_legacy.py
+	pytest --randomly-seed=0 --ignore=alibi_detect/utils/tests/test_saving_legacy.py alibi_detect

.PHONY: lint
lint: ## Check linting according to the flake8 configuration in setup.cfg
@@ -68,3 +71,4 @@ check_licenses:
tox-env=default
repl:
env COMMAND="python" tox -e $(tox-env)
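The split test target is the crux of the migration: on tensorflow>=2.16, `tf.keras` resolves to Keras 3 by default, and setting `TF_USE_LEGACY_KERAS=1` switches it back to the Keras 2 implementation so the legacy-format saving tests keep passing. A minimal sketch of the flag's effect (assumes the optional `tf_keras` package is installed alongside tensorflow>=2.16):

```python
import os

# Must be set before tensorflow is imported; assumes `tf_keras` is installed.
os.environ["TF_USE_LEGACY_KERAS"] = "1"

import tensorflow as tf

# With the flag set, tf.keras is backed by the legacy Keras 2 implementation,
# so models saved in the old TF-Keras format can still be deserialized.
print(tf.keras.__version__)  # expected to start with '2.' on the legacy path
```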

2 changes: 1 addition & 1 deletion alibi_detect/ad/tests/test_admd.py
@@ -42,7 +42,7 @@ def test_adv_md(adv_md_params):
threshold, loss_type, threshold_perc, return_instance_score = adv_md_params

# define ancillary model
-layers = [tf.keras.layers.InputLayer(input_shape=(input_dim)),
+layers = [tf.keras.layers.InputLayer(input_shape=(input_dim, )),
tf.keras.layers.Dense(y.shape[1], activation=tf.nn.softmax)]
distilled_model = tf.keras.Sequential(layers)

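The one-character fix above is a classic Python pitfall: `(input_dim)` is just `input_dim` wrapped in parentheses, while `(input_dim, )` is a one-element tuple, which is what `input_shape` expects (Keras 3 appears stricter about this than TF-Keras was):

```python
input_dim = 5
print((input_dim))   # 5: parentheses alone do not create a tuple
print((input_dim,))  # (5,): the trailing comma does
```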
2 changes: 1 addition & 1 deletion alibi_detect/cd/tensorflow/mmd.py
@@ -93,7 +93,7 @@ def __init__(

def kernel_matrix(self, x: Union[np.ndarray, tf.Tensor], y: Union[np.ndarray, tf.Tensor]) -> tf.Tensor:
""" Compute and return full kernel matrix between arrays x and y. """
-k_xy = self.kernel(x, y, self.infer_sigma)
+k_xy = self.kernel(x, y, infer_sigma=self.infer_sigma)
k_xx = self.k_xx if self.k_xx is not None and self.update_x_ref is None else self.kernel(x, x)
k_yy = self.kernel(y, y)
kernel_mat = tf.concat([tf.concat([k_xx, k_xy], 1), tf.concat([tf.transpose(k_xy, (1, 0)), k_yy], 1)], 0)
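Passing `infer_sigma` by keyword looks cosmetic, but Keras 3's `Layer.__call__` treats extra positional arguments as tensor inputs, so a positional flag can be misrouted. A minimal sketch of the pattern with a hypothetical kernel layer (illustrative, not the library's actual kernel):

```python
import tensorflow as tf

class RBFKernel(tf.keras.layers.Layer):  # hypothetical stand-in for the detector's kernel
    def call(self, x, y, infer_sigma=False):
        # infer_sigma would normally trigger a data-driven bandwidth update here
        d2 = tf.reduce_sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
        return tf.exp(-d2)

kernel = RBFKernel()
x, y = tf.random.normal((4, 2)), tf.random.normal((3, 2))
k_xy = kernel(x, y, infer_sigma=True)  # keyword form is unambiguous under Keras 3
print(k_xy.shape)  # (4, 3)
```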
22 changes: 16 additions & 6 deletions alibi_detect/cd/tensorflow/preprocess.py
@@ -2,9 +2,11 @@

import numpy as np
import tensorflow as tf

from alibi_detect.utils.tensorflow.prediction import (
-    predict_batch, predict_batch_transformer)
-from tensorflow.keras.layers import Dense, Flatten, Input, InputLayer
+    predict_batch, predict_batch_transformer, get_call_arg_mapping
+)
+from tensorflow.keras.layers import Dense, Flatten, Input, Lambda
from tensorflow.keras.models import Model


@@ -34,7 +36,11 @@ def __init__(
'tf.keras.Sequential or tf.keras.Model `mlp`')

def call(self, x: Union[np.ndarray, tf.Tensor, Dict[str, tf.Tensor]]) -> tf.Tensor:
-x = self.input_layer(x)
+if not isinstance(x, (np.ndarray, tf.Tensor)):
+    x = get_call_arg_mapping(self.input_layer, x)
+    x = self.input_layer(**x)
+else:
+    x = self.input_layer(x)
return self.mlp(x)


@@ -52,7 +58,7 @@ def __init__(
if is_enc:
self.encoder = encoder_net
elif not is_enc and is_enc_dim: # set default encoder
-input_layer = InputLayer(input_shape=shape) if input_layer is None else input_layer
+input_layer = Lambda(lambda x: x) if input_layer is None else input_layer
input_dim = np.prod(shape)
step_dim = int((input_dim - enc_dim) / 3)
self.encoder = _Encoder(input_layer, enc_dim=enc_dim, step_dim=step_dim)
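Swapping `InputLayer` for an identity `Lambda` keeps the default encoder callable: under Keras 3 an `InputLayer` appears to act as a graph placeholder rather than a transform that can be applied to data, and an identity layer is the closest drop-in:

```python
import tensorflow as tf
from tensorflow.keras.layers import Lambda

identity = Lambda(lambda x: x)
print(identity(tf.ones((2, 3))).shape)  # (2, 3): input passes through unchanged
```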
@@ -61,7 +67,11 @@ def __init__(
' or tf.keras.Model `encoder_net`.')

def call(self, x: Union[np.ndarray, tf.Tensor, Dict[str, tf.Tensor]]) -> tf.Tensor:
-return self.encoder(x)
+if not isinstance(x, (np.ndarray, tf.Tensor)):
+    x = get_call_arg_mapping(self.encoder, x)
+    return self.encoder(**x)
+else:
+    return self.encoder(x)


class HiddenOutput(tf.keras.Model):
@@ -73,7 +83,7 @@ def __init__(
flatten: bool = False
) -> None:
super().__init__()
-if input_shape and not model.inputs:
+if input_shape and not (hasattr(model, 'inputs') and model.inputs):
inputs = Input(shape=input_shape)
model.call(inputs)
else:
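The recurring `get_call_arg_mapping` pattern in this file handles non-tensor inputs such as the token dictionaries produced by HuggingFace tokenizers: Keras 3 interprets a positional dict as a structure of tensor inputs, so the dict has to be bound to a named argument of `call` instead. The helper's behaviour, as inferred from these call sites (a sketch, not the library's actual implementation):

```python
import inspect

def get_call_arg_mapping_sketch(layer, x):
    # Bind the raw input (e.g. a dict of tokenizer tensors) to the first named
    # parameter of the layer's call(), so it can be passed as a keyword argument.
    params = inspect.signature(layer.call).parameters
    first_arg = next(name for name in params if name != "self")
    return {first_arg: x}
```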
20 changes: 17 additions & 3 deletions alibi_detect/cd/tensorflow/spot_the_diff.py
@@ -170,9 +170,23 @@ def __init__(self, kernel: tf.keras.Model, x_ref: np.ndarray, initial_diffs: np.
self.config = {'kernel': kernel, 'x_ref': x_ref, 'initial_diffs': initial_diffs}
self.kernel = kernel
self.mean = tf.convert_to_tensor(x_ref.mean(0))
-self.diffs = tf.Variable(initial_diffs, dtype=np.float32)
-self.bias = tf.Variable(tf.zeros((1,)))
-self.coeffs = tf.Variable(tf.zeros((len(initial_diffs),)))
+self.diffs = self.add_weight(
+    shape=initial_diffs.shape,
+    initializer=tf.keras.initializers.Constant(initial_diffs),
+    dtype=tf.float32,
+    trainable=True
+)
+self.bias = self.add_weight(
+    shape=(1,),
+    initializer="zeros",
+    trainable=True,
+)
+self.coeffs = self.add_weight(
+    shape=(len(initial_diffs),),
+    initializer="zeros",
+    trainable=True,
+)

def call(self, x: tf.Tensor) -> tf.Tensor:
k_xtl = self.kernel(x, self.mean + self.diffs)
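Replacing bare `tf.Variable` attributes with `add_weight` is the substance of the "trainable vars" fix: Keras 3 tracks weights registered through its own API, while plain `tf.Variable` attributes can be missing from `trainable_weights` and then silently receive no gradient updates. A minimal sketch of the registered pattern (behaviour assumed for Keras 3):

```python
import tensorflow as tf

class Shift(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__()
        # add_weight registers the variable with Keras, so it shows up in
        # trainable_weights under Keras 3; a bare tf.Variable attribute may not.
        self.delta = self.add_weight(shape=(1,), initializer="zeros", trainable=True)

    def call(self, x):
        return x + self.delta

layer = Shift()
_ = layer(tf.zeros((1, 1)))
print([tuple(w.shape) for w in layer.trainable_weights])  # [(1,)]
```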
4 changes: 2 additions & 2 deletions alibi_detect/cd/tensorflow/tests/test_classifier_tf.py
@@ -2,7 +2,7 @@
import numpy as np
import pytest
import tensorflow as tf
-from tensorflow.keras.layers import Dense, Input
+from tensorflow.keras.layers import Dense, Input, Softmax
from typing import Union
from alibi_detect.cd.tensorflow.classifier import ClassifierDriftTF

@@ -14,7 +14,7 @@ def mymodel(shape, softmax: bool = True):
x = Dense(20, activation=tf.nn.relu)(x_in)
x = Dense(2)(x)
if softmax:
-x = tf.nn.softmax(x)
+x = Softmax()(x)
return tf.keras.models.Model(inputs=x_in, outputs=x)


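`tf.nn.softmax` is a raw TF op: TF-Keras auto-wrapped such ops into functional graphs, but Keras 3 does not, so symbolic model outputs must come from `Layer` calls. The `Softmax` layer produces the same values while remaining traceable:

```python
import tensorflow as tf
from tensorflow.keras.layers import Softmax

logits = tf.constant([[1.0, 2.0]])
print(Softmax()(logits).numpy())      # identical values...
print(tf.nn.softmax(logits).numpy())  # ...but only the layer call builds a graph node
```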
4 changes: 2 additions & 2 deletions alibi_detect/od/tests/test_llr.py
@@ -2,7 +2,7 @@
import numpy as np
import pytest
import tensorflow as tf
-from tensorflow.keras.layers import Dense, Input, LSTM
+from tensorflow.keras.layers import Dense, Input, LSTM, CategoryEncoding
from alibi_detect.od import LLR
from alibi_detect.version import __version__

@@ -48,7 +48,7 @@ def test_llr(llr_params):

# define model and detector
inputs = Input(shape=(shape[-1] - 1,), dtype=tf.int32)
-x = tf.one_hot(tf.cast(inputs, tf.int32), input_dim)
+x = CategoryEncoding(num_tokens=input_dim, output_mode="one_hot")(inputs)
x = LSTM(hidden_dim, return_sequences=True)(x)
logits = Dense(input_dim, activation=None)(x)
model = tf.keras.Model(inputs=inputs, outputs=logits)
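Same motivation as the `Softmax` change above: the raw `tf.one_hot` op cannot be traced into a Keras 3 functional model, and `CategoryEncoding` with `output_mode="one_hot"` is the layer-based equivalent (it also accepts integer inputs directly, removing the explicit cast):

```python
import tensorflow as tf

ids = tf.constant([[0, 2, 1]], dtype=tf.int32)
enc = tf.keras.layers.CategoryEncoding(num_tokens=3, output_mode="one_hot")
print(enc(ids).numpy())  # same values as tf.one_hot(ids, depth=3)
```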
28 changes: 10 additions & 18 deletions alibi_detect/saving/_tensorflow/tests/test_saving_tf.py
@@ -15,26 +15,18 @@
backend = param_fixture("backend", ['tensorflow'])


# Note: The full save/load functionality of optimizers (inc. validation) is tested in test_save_classifierdrift.
@pytest.mark.skipif(version.parse(tf.__version__) < version.parse('2.11.0'),
                    reason="Skipping since tensorflow < 2.11.0")
-@parametrize('legacy', [True, False])
-def test_load_optimizer_object_tf2pt11(legacy, backend):
+def test_load_optimizer_object_tf2pt11(backend):
"""
-Test the _load_optimizer_config with a tensorflow optimizer config. Only run if tensorflow>=2.11.
-
-Here we test that "new" and legacy optimizers can be saved/laoded. We expect the returned optimizer to be an
-instantiated `tf.keras.optimizers.Optimizer` object. Also test that the loaded optimizer can be saved.
+Test the _load_optimizer_config with a tensorflow optimizer config. Only run if tensorflow>=2.16.
"""
class_name = 'Adam'
-class_str = class_name if legacy else 'Custom>' + class_name  # Note: see discussion in #739 re 'Custom>'
-learning_rate = np.float32(0.01)  # Set as float32 since this is what _save_optimizer_config returns
-epsilon = np.float32(1e-7)
+learning_rate = 0.01
+epsilon = 1e-7
amsgrad = False

# Load
cfg_opt = {
-'class_name': class_str,
+'class_name': class_name,
'config': {
'name': class_name,
'learning_rate': learning_rate,
@@ -45,10 +37,7 @@ def test_load_optimizer_object_tf2pt11(legacy, backend):
optimizer = _load_optimizer_config(cfg_opt, backend=backend)
# Check optimizer
SupportedOptimizer.validate_optimizer(optimizer, {'backend': 'tensorflow'})
-if legacy:
-    assert isinstance(optimizer, tf.keras.optimizers.legacy.Optimizer)
-else:
-    assert isinstance(optimizer, tf.keras.optimizers.Optimizer)
+assert isinstance(optimizer, tf.keras.optimizers.Optimizer)
assert type(optimizer).__name__ == class_name
assert optimizer.learning_rate == learning_rate
assert optimizer.epsilon == epsilon
@@ -58,7 +47,10 @@ def test_load_optimizer_object_tf2pt11(legacy, backend):
cfg_saved = _save_optimizer_config(optimizer)
# Compare to original config
for key, value in cfg_opt['config'].items():
-assert value == cfg_saved['config'][key]
+if isinstance(value, float):
+    assert np.isclose(value, cfg_saved['config'][key])
+else:
+    assert value == cfg_saved['config'][key]


@pytest.mark.skipif(version.parse(tf.__version__) >= version.parse('2.11.0'),
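The switch to `np.isclose` follows directly from dropping the `np.float32` casts above: optimizer configs now round-trip through Python floats, and a value that has ever been stored as float32 will not compare exactly equal to its float64 counterpart:

```python
import numpy as np

print(0.01 == np.float32(0.01))            # False: float32 rounds 0.01
print(np.isclose(0.01, np.float32(0.01)))  # True: tolerance-based comparison
```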
9 changes: 5 additions & 4 deletions alibi_detect/saving/tests/models.py
@@ -3,6 +3,7 @@

import numpy as np
import tensorflow as tf
+from tensorflow.keras.activations import relu, softmax
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier
@@ -46,7 +47,7 @@ def encoder_model(backend, current_cases):
model = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(input_dim,)),
-tf.keras.layers.Dense(5, activation=tf.nn.relu),
+tf.keras.layers.Dense(5, activation=relu),
tf.keras.layers.Dense(LATENT_DIM, activation=None)
]
)
@@ -73,7 +74,7 @@ def encoder_dropout_model(backend, current_cases):
model = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(input_dim,)),
-tf.keras.layers.Dense(5, activation=tf.nn.relu),
+tf.keras.layers.Dense(5, activation=relu),
tf.keras.layers.Dropout(0.0), # 0.0 to ensure determinism
tf.keras.layers.Dense(LATENT_DIM, activation=None)
]
@@ -191,7 +192,7 @@ def classifier_model(backend, current_cases):
model = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(input_dim,)),
-tf.keras.layers.Dense(2, activation=tf.nn.softmax),
+tf.keras.layers.Dense(2, activation=softmax),
]
)
elif backend in ('pytorch', 'keops'):
@@ -240,7 +241,7 @@ def nlp_embedding_and_tokenizer(model_name, max_len, uae, backend):
except (OSError, HTTPError):
pytest.skip(f"Problem downloading {model_name} from huggingface.co")
if uae:
-x_emb = embedding(tokens)
+x_emb = embedding(tokens=tokens)
shape = (x_emb.shape[1],)
embedding = UAE_tf(input_layer=embedding, shape=shape, enc_dim=enc_dim)
elif backend == 'pt':
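Importing `relu`/`softmax` from `tensorflow.keras.activations` rather than using `tf.nn.relu`/`tf.nn.softmax` appears to be a serialization fix: Keras stores its own activations by name in model configs, whereas raw `tf.nn` ops may not survive a save/load round trip under Keras 3. A quick look at the name-based lookup:

```python
from tensorflow.keras import activations

act = activations.get("relu")      # canonical Keras activation
print(activations.serialize(act))  # 'relu': stored by name in model configs
```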
1 change: 1 addition & 0 deletions alibi_detect/utils/missing_optional_dependency.py
@@ -32,6 +32,7 @@
"prophet": 'prophet',
"tensorflow_probability": 'tensorflow',
"tensorflow": 'tensorflow',
"keras": 'tensorflow',
"torch": 'torch',
"pytorch": 'torch',
"keops": 'keops',
3 changes: 3 additions & 0 deletions alibi_detect/utils/tensorflow/data.py
@@ -14,6 +14,9 @@ def __init__(
self.shuffle = shuffle

def __getitem__(self, idx: int) -> Union[Tuple[Indexable, ...], Indexable]:
+if idx >= self.__len__():
+    raise IndexError("Index out of bounds.")
+
istart, istop = idx * self.batch_size, (idx + 1) * self.batch_size
output = tuple(indexable[istart:istop] for indexable in self.indexables)
return output if len(output) > 1 else output[0]
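The new bounds check matters because Python's fallback iteration protocol, which `__getitem__`-style datasets rely on, calls `__getitem__` with 0, 1, 2, ... until an `IndexError` is raised; without one, iteration runs past the data. A minimal illustration of the protocol:

```python
class Batches:
    def __init__(self, n: int):
        self.n = n

    def __getitem__(self, idx: int) -> int:
        if idx >= self.n:
            raise IndexError("Index out of bounds.")
        return idx

print(list(Batches(3)))  # [0, 1, 2]: iteration stops at the IndexError
```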
13 changes: 10 additions & 3 deletions alibi_detect/utils/tensorflow/kernels.py
@@ -1,5 +1,4 @@
import tensorflow as tf
-import numpy as np
from . import distance
from typing import Optional, Union, Callable
from scipy.special import logit
@@ -59,11 +58,19 @@ def __init__(
init_sigma_fn = sigma_median if init_sigma_fn is None else init_sigma_fn
self.config = {'sigma': sigma, 'trainable': trainable, 'init_sigma_fn': init_sigma_fn}
if sigma is None:
self.log_sigma = tf.Variable(np.empty(1), dtype=tf.keras.backend.floatx(), trainable=trainable)
self.log_sigma = self.add_weight(
shape=(1,),
initializer='zeros',
trainable=trainable
)
self.init_required = True
else:
sigma = tf.cast(tf.reshape(sigma, (-1,)), dtype=tf.keras.backend.floatx()) # [Ns,]
self.log_sigma = tf.Variable(tf.math.log(sigma), trainable=trainable)
self.log_sigma = self.add_weight(
shape=(sigma.shape[0],),
initializer=tf.keras.initializers.Constant(tf.math.log(sigma)),
trainable=trainable
)
self.init_required = False
self.init_sigma_fn = init_sigma_fn
self.trainable = trainable
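Two details in this hunk: `add_weight` keeps the bandwidth tracked by Keras 3 (the same fix as in spot_the_diff.py above), and the zeros initializer replaces an uninitialized `np.empty(1)`, which is safe because `init_required = True` defers the real value to `init_sigma_fn`. The log-parameterization itself is what keeps the bandwidth positive during training:

```python
import tensorflow as tf

log_sigma = tf.Variable([0.0])  # zeros init corresponds to sigma = 1.0
sigma = tf.math.exp(log_sigma)  # exp guarantees sigma > 0 for any log_sigma
print(float(sigma[0]))          # 1.0
```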
14 changes: 11 additions & 3 deletions alibi_detect/utils/tensorflow/misc.py
@@ -1,4 +1,6 @@
+import keras
import tensorflow as tf
+from tensorflow.keras.models import Sequential, Model


def zero_diag(mat: tf.Tensor) -> tf.Tensor:
@@ -85,13 +87,19 @@ def subset_matrix(mat: tf.Tensor, inds_0: tf.Tensor, inds_1: tf.Tensor) -> tf.Te
return subbed_rows_cols


-def clone_model(model: tf.keras.Model) -> tf.keras.Model:
+def clone_model(model: Model) -> Model:
    """ Clone a sequential, functional or subclassed tf.keras.Model. """
-    try:  # sequential or functional model
+    conditions = [
+        isinstance(model, Sequential),
+        isinstance(model, keras.src.models.functional.Functional)
+    ]
+
+    if any(conditions):
        return tf.keras.models.clone_model(model)
-    except ValueError:  # subclassed model
+    else:
+        try:
+            config = model.get_config()
+        except NotImplementedError:
+            config = {}

        return model.__class__.from_config(config)
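The rewritten `clone_model` replaces exception-driven dispatch with explicit type checks: `tf.keras.models.clone_model` handles `Sequential` and functional models, while subclassed models are rebuilt via `get_config`/`from_config`, falling back to an empty config where `get_config` is not implemented. A sketch of the subclassed path (the model below is hypothetical):

```python
import tensorflow as tf

class MyDetectorNet(tf.keras.Model):  # hypothetical subclassed model
    def __init__(self, units: int = 4, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.dense = tf.keras.layers.Dense(units)

    def call(self, x):
        return self.dense(x)

    def get_config(self):
        return {"units": self.units}

model = MyDetectorNet()
clone = model.__class__.from_config(model.get_config())  # the fallback branch
print(type(clone).__name__)  # MyDetectorNet
```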