
Introduces Masked Language Modeling for Transformers #780

Merged
merged 25 commits on Oct 10, 2022

Conversation

gabrielspmoreira
Member

@gabrielspmoreira gabrielspmoreira commented Sep 29, 2022

Fixes #692

Goals ⚽

Implements the masked language modeling (MLM) technique for training Transformers, as originally proposed for BERT (NLP) and later adapted to RecSys by BERT4Rec.

Note: This PR is the alternative implementation to #775 (which won't be merged).

Implementation Details 🚧

The MLM training with Transformers is implemented in two steps (see the sketch after this list):
1 - SequenceMaskRandom extracts the targets from the sequential input item ids and randomly selects (masks) positions as targets. It is meant to be used as a pre of model.fit(..., pre=SequenceMaskRandom()).

2 - ReplaceMaskedEmbeddings replaces the input embeddings at the masked positions with a dummy embedding. It is meant to be used as a pre of TransformerBlock.
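
For intuition, here is a minimal sketch of the two steps in plain TensorFlow. It is illustrative only: the function names, shapes, and the assumption that 0 is the padding id are mine and not the actual Merlin implementation.

import tensorflow as tf


def mask_random_positions(item_ids, masking_prob=0.3):
    """Step 1 (sketch): randomly pick non-padded positions as training targets."""
    # item_ids: (batch, seq_len) int tensor, where 0 is assumed to be padding
    valid = tf.not_equal(item_ids, 0)
    mask = tf.logical_and(valid, tf.random.uniform(tf.shape(item_ids)) < masking_prob)
    targets = tf.where(mask, item_ids, tf.zeros_like(item_ids))
    return targets, mask


def replace_masked_embeddings(embeddings, mask, dummy_embedding):
    """Step 2 (sketch): swap the embeddings at masked positions for a dummy embedding."""
    # embeddings: (batch, seq_len, dim); dummy_embedding: (dim,) vector, broadcast over positions
    return tf.where(mask[..., tf.newaxis], dummy_embedding, embeddings)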

Notes

Note #1: SequenceMaskRandom sets the mask on both the inputs and the targets. The inputs mask may later be lost because of ops that don't support Keras masking, or that are mask producers (e.g. tf.keras.layers.Embedding), as illustrated by the toy snippet below.
The targets mask is forwarded to the following layers and can be used by ReplaceMaskedEmbeddings (to mask the input embeddings) and also by the OutputBlock (to mask the loss).
Note #2: SequenceMaskRandom was originally planned to be used as a transform of the Loader, like SequencePredictNext. But the ._keras_mask set on the target and input tensors is lost when the tensors are provided to the model in graph mode, so we have to use SequenceMaskRandom as a pre of model.fit(..., pre=SequenceMaskRandom()).
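
To make Note #1 concrete, here is a toy TensorFlow snippet (not Merlin code): an Embedding layer with mask_zero=True is a mask producer, deriving a fresh mask from its integer inputs, and a raw TF op afterwards drops that mask because it is not mask-aware.

import tensorflow as tf

ids = tf.constant([[3, 1, 0], [2, 0, 0]])
embeddings = tf.keras.layers.Embedding(10, 4, mask_zero=True)(ids)
print(embeddings._keras_mask)  # [[True, True, False], [True, False, False]]

# Raw TF ops are not mask-aware, so the Keras mask is silently dropped here
reshaped = tf.reshape(embeddings, (2, -1))
print(hasattr(reshaped, "_keras_mask"))  # False
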
Example usage:

import merlin.models.tf as mm
from merlin.schema import Tags

# `sequence_testing_data` is a merlin.io.Dataset of sequential features
# (e.g. the synthetic sequence dataset fixture used in the repo's tests)
seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_tag(
    Tags.CATEGORICAL
)
target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
loader = mm.Loader(sequence_testing_data, batch_size=128, shuffle=False)

model = mm.Model(
    mm.InputBlockV2(
        seq_schema,
        embeddings=mm.Embeddings(
            seq_schema.select_by_tag(Tags.CATEGORICAL), sequence_combiner=None
        ),
    ),
    mm.BertBlock(
        d_model=48,
        n_head=4,
        n_layer=2,
        pre=mm.ReplaceMaskedEmbeddings(),
    ),
    mm.CategoricalOutput(
        seq_schema.select_by_name(target), default_loss="categorical_crossentropy"
    ),
)

model.compile(run_eagerly=False, optimizer="adam")

# Trains on randomly masked positions
seq_mask_random = mm.SequenceMaskRandom(schema=seq_schema, target=target, masking_prob=0.3)
model.fit(loader, epochs=1, steps_per_epoch=1, pre=seq_mask_random)

# Evaluates to predict the last position (to mimic next-item prediction)
seq_mask_last = mm.SequenceMaskLast(schema=seq_schema, target=target)
metrics = model.evaluate(loader, steps=1, return_dict=True, pre=seq_mask_last)

Masked losses and metrics

This PR also adds support for Keras masking and 3D predictions to losses and metrics.
Before computing the loss and metrics, the ragged targets are converted to dense tensors and the mask is copied from the targets to the predictions tensor, because Keras considers the mask of the predictions tensor when computing the loss (ignoring positions where the mask is False). Finally, the targets are automatically one-hot encoded if they are not already, as many losses and metrics do not accept sparse labels.
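
As a rough sketch of that flow in plain TensorFlow (illustrative only; it assumes 3D predictions of shape (batch, seq_len, num_items) and ragged integer targets padded with 0, and applies the mask by hand, whereas the actual implementation relies on the predictions' _keras_mask):

import tensorflow as tf


def masked_categorical_loss(ragged_targets, predictions, num_items):
    # Ragged targets -> dense tensor, padded with 0
    targets = ragged_targets.to_tensor()
    # Valid (non-padded) target positions
    mask = tf.not_equal(targets, 0)
    # Many losses and metrics expect one-hot rather than sparse labels
    one_hot = tf.one_hot(targets, depth=num_items)
    loss = tf.keras.losses.categorical_crossentropy(one_hot, predictions)
    # Ignore positions where the mask is False
    loss = tf.where(mask, loss, tf.zeros_like(loss))
    return tf.reduce_sum(loss) / tf.reduce_sum(tf.cast(mask, loss.dtype))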

Testing Details 🔍

I have implemented a handful of tests that check the correct behaviour of MLM and also assert the expected exceptions.

@gabrielspmoreira gabrielspmoreira added the enhancement New feature or request label Sep 29, 2022
@gabrielspmoreira gabrielspmoreira self-assigned this Sep 29, 2022
@gabrielspmoreira gabrielspmoreira marked this pull request as draft September 29, 2022 15:14
@gabrielspmoreira gabrielspmoreira changed the title MLM as a transform for data loader (alternative implementation) Masked Language Modeling as a transform for data loader and pre of TransformerBlock (alternative #2) Oct 4, 2022
@gabrielspmoreira gabrielspmoreira marked this pull request as ready for review October 4, 2022 02:28
@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit 713efa9e3e8a0ceb27b6d3bc4124cdbbbd2d79e2, no merge conflicts.
Running as SYSTEM
Setting status of 713efa9e3e8a0ceb27b6d3bc4124cdbbbd2d79e2 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1440/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse 713efa9e3e8a0ceb27b6d3bc4124cdbbbd2d79e2^{commit} # timeout=10
Checking out Revision 713efa9e3e8a0ceb27b6d3bc4124cdbbbd2d79e2 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 713efa9e3e8a0ceb27b6d3bc4124cdbbbd2d79e2 # timeout=10
Commit message: "Implemented SequencePredictMasked and MaskSequenceEmbeddings"
 > git rev-list --no-walk acee8b791d677a47071115b0c603d49fdeced1a1 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins7063974506947791502.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 756 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 23%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 25%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py . [ 28%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 30%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 30%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 30%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 30%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 31%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 39%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 44%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 47%]
tests/unit/tf/models/test_base.py s................. [ 50%]
tests/unit/tf/models/test_benchmark.py .. [ 50%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 60%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 62%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%]
tests/unit/tf/transformers/test_block.py .............. [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 69%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 74%]
....................s...... [ 77%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 78%]
tests/unit/tf/transforms/test_noise.py ..... [ 79%]
tests/unit/tf/transforms/test_sequence.py ..................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 82%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 88%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 3 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 20 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 3 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 20 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:910: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_file8xe5hfyj.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========= 744 passed, 12 skipped, 1152 warnings in 1113.88s (0:18:33) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins9322485470600446094.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit 632f007c6769868a18a5a9337e6294ca7398e894, no merge conflicts.
Running as SYSTEM
Setting status of 632f007c6769868a18a5a9337e6294ca7398e894 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1442/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse 632f007c6769868a18a5a9337e6294ca7398e894^{commit} # timeout=10
Checking out Revision 632f007c6769868a18a5a9337e6294ca7398e894 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 632f007c6769868a18a5a9337e6294ca7398e894 # timeout=10
Commit message: "Fixed broken test"
 > git rev-list --no-walk c35ca4950041ae0ef26ed2f50073bb6b00200eb6 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins15798356700364473693.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 756 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 23%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 25%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py . [ 28%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 30%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 30%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 30%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 30%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 31%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 39%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 44%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 47%]
tests/unit/tf/models/test_base.py s................. [ 50%]
tests/unit/tf/models/test_benchmark.py .. [ 50%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 60%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 62%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%]
tests/unit/tf/transformers/test_block.py .............. [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 69%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 74%]
....................s...... [ 77%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 78%]
tests/unit/tf/transforms/test_noise.py ..... [ 79%]
tests/unit/tf/transforms/test_sequence.py ..................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 82%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 88%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 3 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 20 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 3 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 20 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:910: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_fileo9sq78w0.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):
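The collections ABC deprecation above is a one-line fix: import the ABCs from collections.abc. A minimal sketch (feature_names is a stand-in for the attribute checked in tabular.py):

import collections.abc

feature_names = ["user_id", "item_id"]

# Deprecated: isinstance(feature_names, collections.Sequence)
if isinstance(feature_names, collections.abc.Sequence):
    print("feature_names is a sequence")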

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)
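The Keras backend deprecation above (triggered by the stochastic swap-noise tests) has a drop-in replacement. A minimal sketch with a made-up shape and probability:

import tensorflow as tf

# Deprecated: tf.keras.backend.random_binomial((8, 4), p=0.3)
noise_mask = tf.keras.backend.random_bernoulli((8, 4), p=0.3)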

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)
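Likewise, the lr deprecation raised by the freeze/unfreeze tests only needs the renamed keyword. A minimal sketch with an arbitrary learning rate:

import tensorflow as tf

# Deprecated: tf.keras.optimizers.SGD(lr=0.01)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)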

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}
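The slow-tensor warning from the torch test fixture points at the usual fix: stack the list of arrays into a single ndarray before building the tensor. A minimal sketch, assuming equally shaped arrays as in the fixture:

import numpy as np
import torch

data = {"feature": [np.zeros(3), np.ones(3)]}  # list of ndarrays, as in the fixture

# Slow path flagged above: torch.tensor(value) on a list of ndarrays
tensors = {key: torch.tensor(np.asarray(value)) for key, value in data.items()}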

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========= 744 passed, 12 skipped, 1152 warnings in 1116.55s (0:18:36) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.github.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins7729603229479138003.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit ba4bb6c084ca8e320ab8e75782ab3577f3f756ce, no merge conflicts.
Running as SYSTEM
Setting status of ba4bb6c084ca8e320ab8e75782ab3577f3f756ce to PENDING with url https://10.20.13.93:8080/job/merlin_models/1444/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse ba4bb6c084ca8e320ab8e75782ab3577f3f756ce^{commit} # timeout=10
Checking out Revision ba4bb6c084ca8e320ab8e75782ab3577f3f756ce (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ba4bb6c084ca8e320ab8e75782ab3577f3f756ce # timeout=10
Commit message: "Renaming method and attribute of SequencePredictMasked"
 > git rev-list --no-walk 008846056b811421dc95bc690903aa02aed133ca # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins6884372801039723271.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 756 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 23%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 25%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py . [ 28%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 30%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 30%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 30%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 30%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 31%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 39%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 44%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 47%]
tests/unit/tf/models/test_base.py s................. [ 50%]
tests/unit/tf/models/test_benchmark.py .. [ 50%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 60%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 62%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%]
tests/unit/tf/transformers/test_block.py .............. [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 69%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 74%]
....................s...... [ 77%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 78%]
tests/unit/tf/transforms/test_noise.py ..... [ 79%]
tests/unit/tf/transforms/test_sequence.py ..................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 82%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 88%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 3 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 20 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 3 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 20 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:910: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_fileliwjyktc.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========= 744 passed, 12 skipped, 1152 warnings in 1107.23s (0:18:27) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.github.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins9611889281550209969.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit df1c9509580bfb8d84b3c3bc964d9b6f8f20f5a8, no merge conflicts.
Running as SYSTEM
Setting status of df1c9509580bfb8d84b3c3bc964d9b6f8f20f5a8 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1463/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse df1c9509580bfb8d84b3c3bc964d9b6f8f20f5a8^{commit} # timeout=10
Checking out Revision df1c9509580bfb8d84b3c3bc964d9b6f8f20f5a8 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f df1c9509580bfb8d84b3c3bc964d9b6f8f20f5a8 # timeout=10
Commit message: "Fix on ListToDense() to deal with cases where last dim is None"
 > git rev-list --no-walk 48cd07d2d7b74812783aeae135ed214b07f45b39 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins4383471681216234589.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 760 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 23%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 25%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py . [ 27%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 30%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 30%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 30%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 31%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 44%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 47%]
tests/unit/tf/models/test_base.py s................. [ 49%]
tests/unit/tf/models/test_benchmark.py .. [ 50%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 60%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 61%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 64%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 66%]
tests/unit/tf/transformers/test_block.py ..............F2022-10-05 21:17:40.570134: W tensorflow/core/framework/op_kernel.cc:1733] INVALID_ARGUMENT: ValueError: Could not find callback with key=pyfunc_701 in the registry.
Traceback (most recent call last):

File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/script_ops.py", line 258, in call
raise ValueError(f"Could not find callback with key={token} in the "

ValueError: Could not find callback with key=pyfunc_701 in the registry.

2022-10-05 21:17:40.570240: W tensorflow/core/kernels/data/generator_dataset_op.cc:108] Error occurred when finalizing GeneratorDataset iterator: INVALID_ARGUMENT: ValueError: Could not find callback with key=pyfunc_701 in the registry.
Traceback (most recent call last):

File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/script_ops.py", line 258, in call
raise ValueError(f"Could not find callback with key={token} in the "

ValueError: Could not find callback with key=pyfunc_701 in the registry.

 [[{{node PyFunc}}]]

FFF [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 70%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 74%]
....................s...... [ 77%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%]
tests/unit/tf/transforms/test_noise.py ..... [ 79%]
tests/unit/tf/transforms/test_sequence.py ........FF..FFFF..... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 82%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 88%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
_____________ test_transformer_with_causal_language_modeling[True] _____________

args = (<tf.RaggedTensor [[1, 32, 52],
[29, 24, 24],
[14, 12, 8],
[49, 3, 19],
[28, 32, 37],
[4, 38, 11],
[30, 18, 7],
... [ 0.005807 , 0.03537445, -0.01988092, ..., -0.00755118,
-0.00282285, -0.05310886]]], dtype=float32)>)
kwargs = {'from_logits': True}, result = NotImplemented

@traceback_utils.filter_traceback
def op_dispatch_handler(*args, **kwargs):
  """Call `dispatch_target`, peforming dispatch when appropriate."""

  # Type-based dispatch system (dispatch v2):
  if api_dispatcher is not None:
    if iterable_params is not None:
      args, kwargs = replace_iterable_params(args, kwargs, iterable_params)
    result = api_dispatcher.Dispatch(args, kwargs)
    if result is not NotImplemented:
      return result

  # Fallback dispatch system (dispatch v1):
  try:
  return dispatch_target(*args, **kwargs)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082:


y_true = <tf.RaggedTensor [[1, 32, 52],
[29, 24, 24],
[14, 12, 8],
[49, 3, 19],
[28, 32, 37],
[4, 38, 11],
[30, 18, 7],
[7, 13, 49]]>
y_pred = <tf.Tensor: shape=(8, 3, 51997), dtype=float32, numpy=
array([[[-0.09553812, 0.06406247, 0.03131969, ..., -0.0161687... [ 0.005807 , 0.03537445, -0.01988092, ..., -0.00755118,
-0.00282285, -0.05310886]]], dtype=float32)>
from_logits = True, axis = -1

@keras_export('keras.metrics.sparse_categorical_crossentropy',
              'keras.losses.sparse_categorical_crossentropy')
@tf.__internal__.dispatch.add_dispatch_support
def sparse_categorical_crossentropy(y_true, y_pred, from_logits=False, axis=-1):
  """Computes the sparse categorical crossentropy loss.

  Standalone usage:

  >>> y_true = [1, 2]
  >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
  >>> loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
  >>> assert loss.shape == (2,)
  >>> loss.numpy()
  array([0.0513, 2.303], dtype=float32)

  Args:
    y_true: Ground truth values.
    y_pred: The predicted values.
    from_logits: Whether `y_pred` is expected to be a logits tensor. By default,
      we assume that `y_pred` encodes a probability distribution.
    axis: Defaults to -1. The dimension along which the entropy is
      computed.

  Returns:
    Sparse categorical crossentropy loss value.
  """
  y_pred = tf.convert_to_tensor(y_pred)
return backend.sparse_categorical_crossentropy(
      y_true, y_pred, from_logits=from_logits, axis=axis)

/usr/local/lib/python3.8/dist-packages/keras/losses.py:1860:
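This frame is where the failing tests break: a tf.RaggedTensor target reaches sparse_categorical_crossentropy, whose tf.convert_to_tensor call cannot handle ragged inputs. A minimal sketch of the workaround described in this PR's summary (densify the ragged target and mask padded positions when computing the loss); shapes are made up, and the explicit sample_weight stands in for the Keras mask propagation the PR actually implements:

import tensorflow as tf

targets = tf.ragged.constant([[1, 32, 52], [29, 24]])     # ragged item-id targets
predictions = tf.random.uniform((2, 3, 51997))            # (batch, seq, item-vocab) logits

mask = tf.sequence_mask(targets.row_lengths(), maxlen=3)  # True at valid positions
dense_targets = targets.to_tensor(default_value=0, shape=(2, 3))

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss = loss_fn(dense_targets, predictions, sample_weight=tf.cast(mask, tf.float32))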


args = (<tf.RaggedTensor [[1, 32, 52],
[29, 24, 24],
[14, 12, 8],
[49, 3, 19],
[28, 32, 37],
[4, 38, 11],
[30, 18, 7],
... [ 0.005807 , 0.03537445, -0.01988092, ..., -0.00755118,
-0.00282285, -0.05310886]]], dtype=float32)>)
kwargs = {'axis': -1, 'from_logits': True}

def error_handler(*args, **kwargs):
  try:
    if not is_traceback_filtering_enabled():
    return fn(*args, **kwargs)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141:


args = (<tf.RaggedTensor [[1, 32, 52],
[29, 24, 24],
[14, 12, 8],
[49, 3, 19],
[28, 32, 37],
[4, 38, 11],
[30, 18, 7],
... [ 0.005807 , 0.03537445, -0.01988092, ..., -0.00755118,
-0.00282285, -0.05310886]]], dtype=float32)>)
kwargs = {'axis': -1, 'from_logits': True}
result = <object object at 0x7f102745d790>

@traceback_utils.filter_traceback
def op_dispatch_handler(*args, **kwargs):
  """Call `dispatch_target`, peforming dispatch when appropriate."""

  # Type-based dispatch system (dispatch v2):
  if api_dispatcher is not None:
    if iterable_params is not None:
      args, kwargs = replace_iterable_params(args, kwargs, iterable_params)
    result = api_dispatcher.Dispatch(args, kwargs)
    if result is not NotImplemented:
      return result

  # Fallback dispatch system (dispatch v1):
  try:
  return dispatch_target(*args, **kwargs)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082:


target = <tf.RaggedTensor [[1, 32, 52],
[29, 24, 24],
[14, 12, 8],
[49, 3, 19],
[28, 32, 37],
[4, 38, 11],
[30, 18, 7],
[7, 13, 49]]>
output = <tf.Tensor: shape=(8, 3, 51997), dtype=float32, numpy=
array([[[-0.09553812, 0.06406247, 0.03131969, ..., -0.0161687... [ 0.005807 , 0.03537445, -0.01988092, ..., -0.00755118,
-0.00282285, -0.05310886]]], dtype=float32)>
from_logits = True, axis = -1

@keras_export('keras.backend.sparse_categorical_crossentropy')
@tf.__internal__.dispatch.add_dispatch_support
@doc_controls.do_not_generate_docs
def sparse_categorical_crossentropy(target, output, from_logits=False, axis=-1):
  """Categorical crossentropy with integer targets.

  Args:
      target: An integer tensor.
      output: A tensor resulting from a softmax
          (unless `from_logits` is True, in which
          case `output` is expected to be the logits).
      from_logits: Boolean, whether `output` is the
          result of a softmax, or is a tensor of logits.
      axis: Int specifying the channels axis. `axis=-1` corresponds to data
          format `channels_last`, and `axis=1` corresponds to data format
          `channels_first`.

  Returns:
      Output tensor.

  Raises:
      ValueError: if `axis` is neither -1 nor one of the axes of `output`.
  """
target = tf.convert_to_tensor(target)

/usr/local/lib/python3.8/dist-packages/keras/backend.py:5179:


args = (<tf.RaggedTensor [[1, 32, 52],
[29, 24, 24],
[14, 12, 8],
[49, 3, 19],
[28, 32, 37],
[4, 38, 11],
[30, 18, 7],
[7, 13, 49]]>,)
kwargs = {}

def error_handler(*args, **kwargs):
  try:
    if not is_traceback_filtering_enabled():
    return fn(*args, **kwargs)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141:


args = (<tf.RaggedTensor [[1, 32, 52],
[29, 24, 24],
[14, 12, 8],
[49, 3, 19],
[28, 32, 37],
[4, 38, 11],
[30, 18, 7],
[7, 13, 49]]>,)
kwargs = {}, result = <object object at 0x7f102745d790>

@traceback_utils.filter_traceback
def op_dispatch_handler(*args, **kwargs):
  """Call `dispatch_target`, peforming dispatch when appropriate."""

  # Type-based dispatch system (dispatch v2):
  if api_dispatcher is not None:
    if iterable_params is not None:
      args, kwargs = replace_iterable_params(args, kwargs, iterable_params)
    result = api_dispatcher.Dispatch(args, kwargs)
    if result is not NotImplemented:
      return result

  # Fallback dispatch system (dispatch v1):
  try:
  return dispatch_target(*args, **kwargs)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082:


value = <tf.RaggedTensor [[1, 32, 52],
[29, 24, 24],
[14, 12, 8],
[49, 3, 19],
[28, 32, 37],
[4, 38, 11],
[30, 18, 7],
[7, 13, 49]]>
dtype = None, dtype_hint = None, name = None

@tf_export("convert_to_tensor", v1=[])
@dispatch.add_dispatch_support
def convert_to_tensor_v2_with_dispatch(
    value, dtype=None, dtype_hint=None, name=None):
  """Converts the given `value` to a `Tensor`.

  This function converts Python objects of various types to `Tensor`
  objects. It accepts `Tensor` objects, numpy arrays, Python lists,
  and Python scalars.

  For example:

  >>> import numpy as np
  >>> def my_func(arg):
  ...   arg = tf.convert_to_tensor(arg, dtype=tf.float32)
  ...   return arg

  >>> # The following calls are equivalent.
  ...
  >>> value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]]))
  >>> print(value_1)
  tf.Tensor(
    [[1. 2.]
     [3. 4.]], shape=(2, 2), dtype=float32)
  >>> value_2 = my_func([[1.0, 2.0], [3.0, 4.0]])
  >>> print(value_2)
  tf.Tensor(
    [[1. 2.]
     [3. 4.]], shape=(2, 2), dtype=float32)
  >>> value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32))
  >>> print(value_3)
  tf.Tensor(
    [[1. 2.]
     [3. 4.]], shape=(2, 2), dtype=float32)

  This function can be useful when composing a new operation in Python
  (such as `my_func` in the example above). All standard Python op
  constructors apply this function to each of their Tensor-valued
  inputs, which allows those ops to accept numpy arrays, Python lists,
  and scalars in addition to `Tensor` objects.

  Note: This function diverges from default Numpy behavior for `float` and
    `string` types when `None` is present in a Python list or scalar. Rather
    than silently converting `None` values, an error will be thrown.

  Args:
    value: An object whose type has a registered `Tensor` conversion function.
    dtype: Optional element type for the returned tensor. If missing, the type
      is inferred from the type of `value`.
    dtype_hint: Optional element type for the returned tensor, used when dtype
      is None. In some cases, a caller may not have a dtype in mind when
      converting to a tensor, so dtype_hint can be used as a soft preference.
      If the conversion to `dtype_hint` is not possible, this argument has no
      effect.
    name: Optional name to use if a new `Tensor` is created.

  Returns:
    A `Tensor` based on `value`.

  Raises:
    TypeError: If no conversion function is registered for `value` to `dtype`.
    RuntimeError: If a registered conversion function returns an invalid value.
    ValueError: If the `value` is a tensor not of given `dtype` in graph mode.
  """
return convert_to_tensor_v2(
      value, dtype=dtype, dtype_hint=dtype_hint, name=name)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:1494:


value = <tf.RaggedTensor [[1, 32, 52],
[29, 24, 24],
[14, 12, 8],
[49, 3, 19],
[28, 32, 37],
[4, 38, 11],
[30, 18, 7],
[7, 13, 49]]>
dtype = None, dtype_hint = None, name = None

def convert_to_tensor_v2(value, dtype=None, dtype_hint=None, name=None):
  """Converts the given `value` to a `Tensor`."""
return convert_to_tensor(
      value=value,
      dtype=dtype,
      name=name,
      preferred_dtype=dtype_hint,
      as_ref=False)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:1500:


args = ()
kwargs = {'as_ref': False, 'dtype': None, 'name': None, 'preferred_dtype': None, ...}

@functools.wraps(func)
def wrapped(*args, **kwargs):
  if enabled:
    with Trace(trace_name, **trace_kwargs):
      return func(*args, **kwargs)
return func(*args, **kwargs)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/profiler/trace.py:183:


value = <tf.RaggedTensor [[1, 32, 52],
[29, 24, 24],
[14, 12, 8],
[49, 3, 19],
[28, 32, 37],
[4, 38, 11],
[30, 18, 7],
[7, 13, 49]]>
dtype = None, name = None, as_ref = False, preferred_dtype = None
dtype_hint = None, ctx = None
accepted_result_types = (<class 'tensorflow.python.framework.ops.Tensor'>,)

@profiler_trace.trace_wrapper("convert_to_tensor")
def convert_to_tensor(value,
                      dtype=None,
                      name=None,
                      as_ref=False,
                      preferred_dtype=None,
                      dtype_hint=None,
                      ctx=None,
                      accepted_result_types=(Tensor,)):
  """Implementation of the public convert_to_tensor."""
  # TODO(b/142518781): Fix all call-sites and remove redundant arg
  preferred_dtype = preferred_dtype or dtype_hint
  if isinstance(value, EagerTensor):
    if ctx is None:
      ctx = context.context()
    if not ctx.executing_eagerly():
      graph = get_default_graph()
      if not graph.building_function:
        raise RuntimeError(
            _add_error_prefix(
                "Attempting to capture an EagerTensor without "
                "building a function.",
                name=name))
      return graph.capture(value, name=name)

  if dtype is not None:
    dtype = dtypes.as_dtype(dtype)
  if isinstance(value, Tensor):
    if dtype is not None and not dtype.is_compatible_with(value.dtype):
      raise ValueError(
          _add_error_prefix(
              f"Tensor conversion requested dtype {dtype.name} "
              f"for Tensor with dtype {value.dtype.name}: {value!r}",
              name=name))
    return value

  if preferred_dtype is not None:
    preferred_dtype = dtypes.as_dtype(preferred_dtype)

  # See below for the reason why it's `type(value)` and not just `value`.
  # https://docs.python.org/3.8/reference/datamodel.html#special-lookup
  overload = getattr(type(value), "__tf_tensor__", None)
  if overload is not None:
    return overload(value, dtype, name)  #  pylint: disable=not-callable

  for base_type, conversion_func in tensor_conversion_registry.get(type(value)):
    # If dtype is None but preferred_dtype is not None, we try to
    # cast to preferred_dtype first.
    ret = None
    if dtype is None and preferred_dtype is not None:
      try:
        ret = conversion_func(
            value, dtype=preferred_dtype, name=name, as_ref=as_ref)
      except (TypeError, ValueError):
        # Could not coerce the conversion to use the preferred dtype.
        pass
      else:
        if (ret is not NotImplemented and
            ret.dtype.base_dtype != preferred_dtype.base_dtype):
          raise RuntimeError(
              _add_error_prefix(
                  f"Conversion function {conversion_func!r} for type "
                  f"{base_type} returned incompatible dtype: requested = "
                  f"{preferred_dtype.base_dtype.name}, "
                  f"actual = {ret.dtype.base_dtype.name}",
                  name=name))

    if ret is None:
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:1640:


v = <tf.RaggedTensor [[1, 32, 52],
[29, 24, 24],
[14, 12, 8],
[49, 3, 19],
[28, 32, 37],
[4, 38, 11],
[30, 18, 7],
[7, 13, 49]]>
dtype = None, name = None, as_ref = False

def _constant_tensor_conversion_function(v, dtype=None, name=None,
                                         as_ref=False):
  _ = as_ref
return constant(v, dtype=dtype, name=name)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:343:


value = <tf.RaggedTensor [[1, 32, 52],
[29, 24, 24],
[14, 12, 8],
[49, 3, 19],
[28, 32, 37],
[4, 38, 11],
[30, 18, 7],
[7, 13, 49]]>
dtype = None, shape = None, name = None

@tf_export("constant", v1=[])
def constant(value, dtype=None, shape=None, name="Const"):
  """Creates a constant tensor from a tensor-like object.

  Note: All eager `tf.Tensor` values are immutable (in contrast to
  `tf.Variable`). There is nothing especially _constant_ about the value
  returned from `tf.constant`. This function is not fundamentally different from
  `tf.convert_to_tensor`. The name `tf.constant` comes from the `value` being
  embedded in a `Const` node in the `tf.Graph`. `tf.constant` is useful
  for asserting that the value can be embedded that way.

  If the argument `dtype` is not specified, then the type is inferred from
  the type of `value`.

  >>> # Constant 1-D Tensor from a python list.
  >>> tf.constant([1, 2, 3, 4, 5, 6])
  <tf.Tensor: shape=(6,), dtype=int32,
      numpy=array([1, 2, 3, 4, 5, 6], dtype=int32)>
  >>> # Or a numpy array
  >>> a = np.array([[1, 2, 3], [4, 5, 6]])
  >>> tf.constant(a)
  <tf.Tensor: shape=(2, 3), dtype=int64, numpy=
    array([[1, 2, 3],
           [4, 5, 6]])>

  If `dtype` is specified, the resulting tensor values are cast to the requested
  `dtype`.

  >>> tf.constant([1, 2, 3, 4, 5, 6], dtype=tf.float64)
  <tf.Tensor: shape=(6,), dtype=float64,
      numpy=array([1., 2., 3., 4., 5., 6.])>

  If `shape` is set, the `value` is reshaped to match. Scalars are expanded to
  fill the `shape`:

  >>> tf.constant(0, shape=(2, 3))
    <tf.Tensor: shape=(2, 3), dtype=int32, numpy=
    array([[0, 0, 0],
           [0, 0, 0]], dtype=int32)>
  >>> tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
  <tf.Tensor: shape=(2, 3), dtype=int32, numpy=
    array([[1, 2, 3],
           [4, 5, 6]], dtype=int32)>

  `tf.constant` has no effect if an eager Tensor is passed as the `value`, it
  even transmits gradients:

  >>> v = tf.Variable([0.0])
  >>> with tf.GradientTape() as g:
  ...     loss = tf.constant(v + v)
  >>> g.gradient(loss, v).numpy()
  array([2.], dtype=float32)

  But, since `tf.constant` embeds the value in the `tf.Graph` this fails for
  symbolic tensors:

  >>> with tf.compat.v1.Graph().as_default():
  ...   i = tf.compat.v1.placeholder(shape=[None, None], dtype=tf.float32)
  ...   t = tf.constant(i)
  Traceback (most recent call last):
  ...
  TypeError: ...

  `tf.constant` will create tensors on the current device. Inputs which are
  already tensors maintain their placements unchanged.

  Related Ops:

  * `tf.convert_to_tensor` is similar but:
    * It has no `shape` argument.
    * Symbolic tensors are allowed to pass through.

    >>> with tf.compat.v1.Graph().as_default():
    ...   i = tf.compat.v1.placeholder(shape=[None, None], dtype=tf.float32)
    ...   t = tf.convert_to_tensor(i)

  * `tf.fill`: differs in a few ways:
    *   `tf.constant` supports arbitrary constants, not just uniform scalar
        Tensors like `tf.fill`.
    *   `tf.fill` creates an Op in the graph that is expanded at runtime, so it
        can efficiently represent large tensors.
    *   Since `tf.fill` does not embed the value, it can produce dynamically
        sized outputs.

  Args:
    value: A constant value (or list) of output type `dtype`.
    dtype: The type of the elements of the resulting tensor.
    shape: Optional dimensions of resulting tensor.
    name: Optional name for the tensor.

  Returns:
    A Constant Tensor.

  Raises:
    TypeError: if shape is incorrectly specified or unsupported.
    ValueError: if called on a symbolic tensor.
  """
return _constant_impl(value, dtype, shape, name, verify_shape=False,
                        allow_broadcast=True)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:267:


value = <tf.RaggedTensor [[1, 32, 52],
[29, 24, 24],
[14, 12, 8],
[49, 3, 19],
[28, 32, 37],
[4, 38, 11],
[30, 18, 7],
[7, 13, 49]]>
dtype = None, shape = None, name = None, verify_shape = False
allow_broadcast = True

def _constant_impl(
    value, dtype, shape, name, verify_shape, allow_broadcast):
  """Implementation of constant."""
  ctx = context.context()
  if ctx.executing_eagerly():
    if trace.enabled:
      with trace.Trace("tf.constant"):
        return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
  return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:279:


ctx = <tensorflow.python.eager.context.Context object at 0x7f0f768f3dc0>
value = <tf.RaggedTensor [[1, 32, 52],
[29, 24, 24],
[14, 12, 8],
[49, 3, 19],
[28, 32, 37],
[4, 38, 11],
[30, 18, 7],
[7, 13, 49]]>
dtype = None, shape = None, verify_shape = False

def _constant_eager_impl(ctx, value, dtype, shape, verify_shape):
  """Creates a constant on the current device."""
t = convert_to_eager_tensor(value, ctx, dtype)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:304:


value = <tf.RaggedTensor [[1, 32, 52],
[29, 24, 24],
[14, 12, 8],
[49, 3, 19],
[28, 32, 37],
[4, 38, 11],
[30, 18, 7],
[7, 13, 49]]>
ctx = <tensorflow.python.eager.context.Context object at 0x7f0f768f3dc0>
dtype = None

def convert_to_eager_tensor(value, ctx, dtype=None):
  """Converts the given `value` to an `EagerTensor`.

  Note that this function could return cached copies of created constants for
  performance reasons.

  Args:
    value: value to convert to EagerTensor.
    ctx: value of context.context().
    dtype: optional desired dtype of the converted EagerTensor.

  Returns:
    EagerTensor created from value.

  Raises:
    TypeError: if `dtype` is not compatible with the type of t.
  """
  if isinstance(value, ops.EagerTensor):
    if dtype is not None and value.dtype != dtype:
      raise TypeError(f"Expected tensor {value} with dtype {dtype!r}, but got "
                      f"dtype {value.dtype!r}.")
    return value
  if dtype is not None:
    try:
      dtype = dtype.as_datatype_enum
    except AttributeError:
      dtype = dtypes.as_dtype(dtype).as_datatype_enum
  ctx.ensure_initialized()
return ops.EagerTensor(value, ctx.device_name, dtype)

E ValueError: TypeError: object of type 'RaggedTensor' has no len()

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:102: ValueError

During handling of the above exception, another exception occurred:

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f0d3b159e50>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_transformer_with_causal_language_modeling(sequence_testing_data: Dataset, run_eagerly):

    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_tag(
        Tags.CATEGORICAL
    )
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
    predict_next = mm.SequencePredictNext(schema=seq_schema, target=target)

    loader = Loader(sequence_testing_data, batch_size=8, shuffle=False, transform=predict_next)

    model = mm.Model(
        mm.InputBlockV2(
            seq_schema,
            embeddings=mm.Embeddings(
                seq_schema.select_by_tag(Tags.CATEGORICAL), sequence_combiner=None
            ),
        ),
        GPT2Block(d_model=48, n_head=8, n_layer=2),
        mm.CategoricalOutput(
            seq_schema.select_by_name(target), default_loss="sparse_categorical_crossentropy"
        ),
    )

    batch = next(iter(loader))[0]
    outputs = model(batch)
    assert list(outputs.shape) == [8, 3, 51997]
  testing_utils.model_test(model, loader, run_eagerly=run_eagerly)

tests/unit/tf/transformers/test_block.py:189:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:722: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1051: in train_function
return step_function(self, iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:580: in train_step
loss = self.compute_loss(x, outputs.targets, outputs.predictions, outputs.sample_weight)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:948: in compute_loss
return self.compiled_loss(
/usr/local/lib/python3.8/dist-packages/keras/engine/compile_utils.py:201: in call
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:139: in call
losses = call_fn(y_true, y_pred)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:243: in call
return ag_fn(y_true, y_pred, **self._fn_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1086: in op_dispatch_handler
result = dispatch(op_dispatch_handler, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:163: in dispatch
result = dispatcher.handle(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:195: in handle
return self._override_func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:1884: in _ragged_tensor_sparse_categorical_crossentropy
return _ragged_tensor_apply_loss(fn, y_true, y_pred, y_pred_extra_dim=True)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:1397: in _ragged_tensor_apply_loss
nested_splits_list = [rt.nested_row_splits for rt in (y_true, y_pred)]
/usr/local/lib/python3.8/dist-packages/keras/losses.py:1397: in <listcomp>
nested_splits_list = [rt.nested_row_splits for rt in (y_true, y_pred)]


self = <tf.Tensor: shape=(8, 3, 51997), dtype=float32, numpy=
array([[[-0.09553812, 0.06406247, 0.03131969, ..., -0.0161687... [ 0.005807 , 0.03537445, -0.01988092, ..., -0.00755118,
-0.00282285, -0.05310886]]], dtype=float32)>
name = 'nested_row_splits'

def __getattr__(self, name):
  if name in {"T", "astype", "ravel", "transpose", "reshape", "clip", "size",
              "tolist", "data"}:
    # TODO(wangpeng): Export the enable_numpy_behavior knob
    raise AttributeError(
        f"{type(self).__name__} object has no attribute '{name}'. " + """
      If you are looking for numpy-related methods, please run the following:
      from tensorflow.python.ops.numpy_ops import np_config
      np_config.enable_numpy_behavior()
    """)
self.__getattribute__(name)

E AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'nested_row_splits'

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:446: AttributeError
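The root cause above: y_true reaches the loss as a tf.RaggedTensor while y_pred is a dense 3D logits tensor, so Keras dispatches to _ragged_tensor_sparse_categorical_crossentropy, which assumes both tensors are ragged and breaks on y_pred.nested_row_splits. A minimal sketch of the same failure mode outside the model (target values taken from the log above, vocab size arbitrary):

import tensorflow as tf

# Minimal sketch reproducing the ragged-target / dense-prediction mismatch above.
y_true = tf.ragged.constant([[1, 32, 52], [29, 24, 24]])  # ragged integer targets
y_pred = tf.random.uniform((2, 3, 100))  # dense logits with shape (batch, seq, vocab)

try:
    tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred, from_logits=True)
except Exception as exc:  # AttributeError on `nested_row_splits` with this TF/Keras version
    print(type(exc).__name__, exc)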
____________ test_transformer_with_causal_language_modeling[False] _____________

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f0d3b9747c0>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_transformer_with_causal_language_modeling(sequence_testing_data: Dataset, run_eagerly):

    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_tag(
        Tags.CATEGORICAL
    )
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
    predict_next = mm.SequencePredictNext(schema=seq_schema, target=target)

    loader = Loader(sequence_testing_data, batch_size=8, shuffle=False, transform=predict_next)

    model = mm.Model(
        mm.InputBlockV2(
            seq_schema,
            embeddings=mm.Embeddings(
                seq_schema.select_by_tag(Tags.CATEGORICAL), sequence_combiner=None
            ),
        ),
        GPT2Block(d_model=48, n_head=8, n_layer=2),
        mm.CategoricalOutput(
            seq_schema.select_by_name(target), default_loss="sparse_categorical_crossentropy"
        ),
    )

    batch = next(iter(loader))[0]
    outputs = model(batch)
    assert list(outputs.shape) == [8, 3, 51997]
  testing_utils.model_test(model, loader, run_eagerly=run_eagerly)

tests/unit/tf/transformers/test_block.py:189:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:722: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in call
result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
self.stateful_fn.get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in get_concrete_function_internal_garbage_collected
graph_function, _ = self.maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in maybe_define_function
graph_function = self.create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in create_graph_function
func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/autograph_generated_file6dpyuv56.py:15: in tf__train_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:580: in train_step
loss = self.compute_loss(x, outputs.targets, outputs.predictions, outputs.sample_weight)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:948: in compute_loss
return self.compiled_loss(
/usr/local/lib/python3.8/dist-packages/keras/engine/compile_utils.py:201: in call
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:139: in call
losses = call_fn(y_true, y_pred)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:331: in converted_call
return _call_unconverted(f, args, kwargs, options, False)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:243: in call
return ag_fn(y_true, y_pred, **self._fn_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1086: in op_dispatch_handler
result = dispatch(op_dispatch_handler, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:163: in dispatch
result = dispatcher.handle(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:195: in handle
return self._override_func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:1884: in _ragged_tensor_sparse_categorical_crossentropy
return _ragged_tensor_apply_loss(fn, y_true, y_pred, y_pred_extra_dim=True)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:1397: in _ragged_tensor_apply_loss
nested_splits_list = [rt.nested_row_splits for rt in (y_true, y_pred)]
/usr/local/lib/python3.8/dist-packages/keras/losses.py:1397: in <listcomp>
nested_splits_list = [rt.nested_row_splits for rt in (y_true, y_pred)]


self = <tf.Tensor 'model/item_id_seq/categorical_output/categorical_target/BiasAdd:0' shape=(None, None, 51997) dtype=float32>
name = 'nested_row_splits'

def __getattr__(self, name):
  if name in {"T", "astype", "ravel", "transpose", "reshape", "clip", "size",
              "tolist", "data"}:
    # TODO(wangpeng): Export the enable_numpy_behavior knob
    raise AttributeError(
        f"{type(self).__name__} object has no attribute '{name}'. " + """
      If you are looking for numpy-related methods, please run the following:
      from tensorflow.python.ops.numpy_ops import np_config
      np_config.enable_numpy_behavior()
    """)
self.__getattribute__(name)

E AttributeError: in user code:
E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function *
E return step_function(self, iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function **
E outputs = model.distribute_strategy.run(run_step, args=(data,))
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E return self._call_for_each_replica(fn, args, kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step **
E outputs = model.train_step(data)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 580, in train_step
E loss = self.compute_loss(x, outputs.targets, outputs.predictions, outputs.sample_weight)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 948, in compute_loss
E return self.compiled_loss(
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/compile_utils.py", line 201, in call
E loss_value = loss_obj(y_t, y_p, sample_weight=sw)
E File "/usr/local/lib/python3.8/dist-packages/keras/losses.py", line 139, in call
E losses = call_fn(y_true, y_pred)
E File "/usr/local/lib/python3.8/dist-packages/keras/losses.py", line 243, in call **
E return ag_fn(y_true, y_pred, **self._fn_kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 1086, in op_dispatch_handler
E result = dispatch(op_dispatch_handler, args, kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 163, in dispatch
E result = dispatcher.handle(args, kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 195, in handle
E return self._override_func(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/losses.py", line 1884, in _ragged_tensor_sparse_categorical_crossentropy
E return _ragged_tensor_apply_loss(fn, y_true, y_pred, y_pred_extra_dim=True)
E File "/usr/local/lib/python3.8/dist-packages/keras/losses.py", line 1397, in _ragged_tensor_apply_loss
E nested_splits_list = [rt.nested_row_splits for rt in (y_true, y_pred)]
E File "/usr/local/lib/python3.8/dist-packages/keras/losses.py", line 1397, in
E nested_splits_list = [rt.nested_row_splits for rt in (y_true, y_pred)]
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 446, in getattr
E self.getattribute(name)
E
E AttributeError: 'Tensor' object has no attribute 'nested_row_splits'

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:446: AttributeError
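The graph-mode variant hits the same dispatch path, only with a symbolic Tensor instead of an EagerTensor. Purely as an illustrative sketch (the helper below is made up for this comment and is not necessarily how this PR addresses it), one way to avoid the ragged loss dispatch is to densify the ragged targets and pass their validity mask as sample_weight:

import tensorflow as tf

def densify_ragged_targets(y_true_ragged, max_seq_len):
    # Hypothetical helper: pad the ragged targets to a dense (batch, max_seq_len)
    # tensor and build a mask marking the real (non-padded) positions.
    mask = tf.sequence_mask(y_true_ragged.row_lengths(), maxlen=max_seq_len)
    y_true_dense = y_true_ragged.to_tensor(default_value=0, shape=(None, max_seq_len))
    return y_true_dense, tf.cast(mask, tf.float32)

y_true = tf.ragged.constant([[1, 32, 52], [29, 24]])  # arbitrary example targets
y_pred = tf.random.uniform((2, 3, 100))  # dense logits with shape (batch, seq, vocab)

y_true_dense, sample_weight = densify_ragged_targets(y_true, max_seq_len=3)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
print(loss_fn(y_true_dense, y_pred, sample_weight=sample_weight))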
_____________ test_transformer_with_masked_language_modeling[True] _____________

args = (<tf.RaggedTensor [[20, 32, 11, 13],
[38, 5, 7, 1],
[85, 17, 16, 41],
[13, 9, 19, 49],
[18, 22, 11, 17],
[63, 7, ...e-02, -1.2484869e-02, -1.4735578e-02, ...,
7.2016641e-02, -1.7513335e-04, -9.8557752e-03]]], dtype=float32)>)
kwargs = {'from_logits': True}, result = NotImplemented

@traceback_utils.filter_traceback
def op_dispatch_handler(*args, **kwargs):
  """Call `dispatch_target`, peforming dispatch when appropriate."""

  # Type-based dispatch system (dispatch v2):
  if api_dispatcher is not None:
    if iterable_params is not None:
      args, kwargs = replace_iterable_params(args, kwargs, iterable_params)
    result = api_dispatcher.Dispatch(args, kwargs)
    if result is not NotImplemented:
      return result

  # Fallback dispatch system (dispatch v1):
  try:
  return dispatch_target(*args, **kwargs)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082:


y_true = <tf.RaggedTensor [[20, 32, 11, 13],
[38, 5, 7, 1],
[85, 17, 16, 41],
[13, 9, 19, 49],
[18, 22, 11, 17],
[63, 7, 11, 104],
[22, 4, 168, 7],
[1, 10, 6, 41]]>
y_pred = <tf.Tensor: shape=(8, 4, 51997), dtype=float32, numpy=
array([[[ 6.3700974e-03, -1.5098136e-02, 2.6254989e-02, ...,
...6e-02, -1.2484869e-02, -1.4735578e-02, ...,
7.2016641e-02, -1.7513335e-04, -9.8557752e-03]]], dtype=float32)>
from_logits = True, axis = -1

@keras_export('keras.metrics.sparse_categorical_crossentropy',
              'keras.losses.sparse_categorical_crossentropy')
@tf.__internal__.dispatch.add_dispatch_support
def sparse_categorical_crossentropy(y_true, y_pred, from_logits=False, axis=-1):
  """Computes the sparse categorical crossentropy loss.

  Standalone usage:

  >>> y_true = [1, 2]
  >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
  >>> loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
  >>> assert loss.shape == (2,)
  >>> loss.numpy()
  array([0.0513, 2.303], dtype=float32)

  Args:
    y_true: Ground truth values.
    y_pred: The predicted values.
    from_logits: Whether `y_pred` is expected to be a logits tensor. By default,
      we assume that `y_pred` encodes a probability distribution.
    axis: Defaults to -1. The dimension along which the entropy is
      computed.

  Returns:
    Sparse categorical crossentropy loss value.
  """
  y_pred = tf.convert_to_tensor(y_pred)
return backend.sparse_categorical_crossentropy(
      y_true, y_pred, from_logits=from_logits, axis=axis)

/usr/local/lib/python3.8/dist-packages/keras/losses.py:1860:


args = (<tf.RaggedTensor [[20, 32, 11, 13],
[38, 5, 7, 1],
[85, 17, 16, 41],
[13, 9, 19, 49],
[18, 22, 11, 17],
[63, 7, ...e-02, -1.2484869e-02, -1.4735578e-02, ...,
7.2016641e-02, -1.7513335e-04, -9.8557752e-03]]], dtype=float32)>)
kwargs = {'axis': -1, 'from_logits': True}

def error_handler(*args, **kwargs):
  try:
    if not is_traceback_filtering_enabled():
    return fn(*args, **kwargs)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141:


args = (<tf.RaggedTensor [[20, 32, 11, 13],
[38, 5, 7, 1],
[85, 17, 16, 41],
[13, 9, 19, 49],
[18, 22, 11, 17],
[63, 7, ...e-02, -1.2484869e-02, -1.4735578e-02, ...,
7.2016641e-02, -1.7513335e-04, -9.8557752e-03]]], dtype=float32)>)
kwargs = {'axis': -1, 'from_logits': True}
result = <object object at 0x7f102745d790>

@traceback_utils.filter_traceback
def op_dispatch_handler(*args, **kwargs):
  """Call `dispatch_target`, peforming dispatch when appropriate."""

  # Type-based dispatch system (dispatch v2):
  if api_dispatcher is not None:
    if iterable_params is not None:
      args, kwargs = replace_iterable_params(args, kwargs, iterable_params)
    result = api_dispatcher.Dispatch(args, kwargs)
    if result is not NotImplemented:
      return result

  # Fallback dispatch system (dispatch v1):
  try:
  return dispatch_target(*args, **kwargs)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082:


target = <tf.RaggedTensor [[20, 32, 11, 13],
[38, 5, 7, 1],
[85, 17, 16, 41],
[13, 9, 19, 49],
[18, 22, 11, 17],
[63, 7, 11, 104],
[22, 4, 168, 7],
[1, 10, 6, 41]]>
output = <tf.Tensor: shape=(8, 4, 51997), dtype=float32, numpy=
array([[[ 6.3700974e-03, -1.5098136e-02, 2.6254989e-02, ...,
...6e-02, -1.2484869e-02, -1.4735578e-02, ...,
7.2016641e-02, -1.7513335e-04, -9.8557752e-03]]], dtype=float32)>
from_logits = True, axis = -1

@keras_export('keras.backend.sparse_categorical_crossentropy')
@tf.__internal__.dispatch.add_dispatch_support
@doc_controls.do_not_generate_docs
def sparse_categorical_crossentropy(target, output, from_logits=False, axis=-1):
  """Categorical crossentropy with integer targets.

  Args:
      target: An integer tensor.
      output: A tensor resulting from a softmax
          (unless `from_logits` is True, in which
          case `output` is expected to be the logits).
      from_logits: Boolean, whether `output` is the
          result of a softmax, or is a tensor of logits.
      axis: Int specifying the channels axis. `axis=-1` corresponds to data
          format `channels_last`, and `axis=1` corresponds to data format
          `channels_first`.

  Returns:
      Output tensor.

  Raises:
      ValueError: if `axis` is neither -1 nor one of the axes of `output`.
  """
target = tf.convert_to_tensor(target)

/usr/local/lib/python3.8/dist-packages/keras/backend.py:5179:


args = (<tf.RaggedTensor [[20, 32, 11, 13],
[38, 5, 7, 1],
[85, 17, 16, 41],
[13, 9, 19, 49],
[18, 22, 11, 17],
[63, 7, 11, 104],
[22, 4, 168, 7],
[1, 10, 6, 41]]>,)
kwargs = {}

def error_handler(*args, **kwargs):
  try:
    if not is_traceback_filtering_enabled():
    return fn(*args, **kwargs)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141:


args = (<tf.RaggedTensor [[20, 32, 11, 13],
[38, 5, 7, 1],
[85, 17, 16, 41],
[13, 9, 19, 49],
[18, 22, 11, 17],
[63, 7, 11, 104],
[22, 4, 168, 7],
[1, 10, 6, 41]]>,)
kwargs = {}, result = <object object at 0x7f102745d790>

@traceback_utils.filter_traceback
def op_dispatch_handler(*args, **kwargs):
  """Call `dispatch_target`, peforming dispatch when appropriate."""

  # Type-based dispatch system (dispatch v2):
  if api_dispatcher is not None:
    if iterable_params is not None:
      args, kwargs = replace_iterable_params(args, kwargs, iterable_params)
    result = api_dispatcher.Dispatch(args, kwargs)
    if result is not NotImplemented:
      return result

  # Fallback dispatch system (dispatch v1):
  try:
  return dispatch_target(*args, **kwargs)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082:


value = <tf.RaggedTensor [[20, 32, 11, 13],
[38, 5, 7, 1],
[85, 17, 16, 41],
[13, 9, 19, 49],
[18, 22, 11, 17],
[63, 7, 11, 104],
[22, 4, 168, 7],
[1, 10, 6, 41]]>
dtype = None, dtype_hint = None, name = None

@tf_export("convert_to_tensor", v1=[])
@dispatch.add_dispatch_support
def convert_to_tensor_v2_with_dispatch(
    value, dtype=None, dtype_hint=None, name=None):
  """Converts the given `value` to a `Tensor`.

  This function converts Python objects of various types to `Tensor`
  objects. It accepts `Tensor` objects, numpy arrays, Python lists,
  and Python scalars.

  For example:

  >>> import numpy as np
  >>> def my_func(arg):
  ...   arg = tf.convert_to_tensor(arg, dtype=tf.float32)
  ...   return arg

  >>> # The following calls are equivalent.
  ...
  >>> value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]]))
  >>> print(value_1)
  tf.Tensor(
    [[1. 2.]
     [3. 4.]], shape=(2, 2), dtype=float32)
  >>> value_2 = my_func([[1.0, 2.0], [3.0, 4.0]])
  >>> print(value_2)
  tf.Tensor(
    [[1. 2.]
     [3. 4.]], shape=(2, 2), dtype=float32)
  >>> value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32))
  >>> print(value_3)
  tf.Tensor(
    [[1. 2.]
     [3. 4.]], shape=(2, 2), dtype=float32)

  This function can be useful when composing a new operation in Python
  (such as `my_func` in the example above). All standard Python op
  constructors apply this function to each of their Tensor-valued
  inputs, which allows those ops to accept numpy arrays, Python lists,
  and scalars in addition to `Tensor` objects.

  Note: This function diverges from default Numpy behavior for `float` and
    `string` types when `None` is present in a Python list or scalar. Rather
    than silently converting `None` values, an error will be thrown.

  Args:
    value: An object whose type has a registered `Tensor` conversion function.
    dtype: Optional element type for the returned tensor. If missing, the type
      is inferred from the type of `value`.
    dtype_hint: Optional element type for the returned tensor, used when dtype
      is None. In some cases, a caller may not have a dtype in mind when
      converting to a tensor, so dtype_hint can be used as a soft preference.
      If the conversion to `dtype_hint` is not possible, this argument has no
      effect.
    name: Optional name to use if a new `Tensor` is created.

  Returns:
    A `Tensor` based on `value`.

  Raises:
    TypeError: If no conversion function is registered for `value` to `dtype`.
    RuntimeError: If a registered conversion function returns an invalid value.
    ValueError: If the `value` is a tensor not of given `dtype` in graph mode.
  """
return convert_to_tensor_v2(
      value, dtype=dtype, dtype_hint=dtype_hint, name=name)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:1494:


value = <tf.RaggedTensor [[20, 32, 11, 13],
[38, 5, 7, 1],
[85, 17, 16, 41],
[13, 9, 19, 49],
[18, 22, 11, 17],
[63, 7, 11, 104],
[22, 4, 168, 7],
[1, 10, 6, 41]]>
dtype = None, dtype_hint = None, name = None

def convert_to_tensor_v2(value, dtype=None, dtype_hint=None, name=None):
  """Converts the given `value` to a `Tensor`."""
return convert_to_tensor(
      value=value,
      dtype=dtype,
      name=name,
      preferred_dtype=dtype_hint,
      as_ref=False)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:1500:


args = ()
kwargs = {'as_ref': False, 'dtype': None, 'name': None, 'preferred_dtype': None, ...}

@functools.wraps(func)
def wrapped(*args, **kwargs):
  if enabled:
    with Trace(trace_name, **trace_kwargs):
      return func(*args, **kwargs)
return func(*args, **kwargs)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/profiler/trace.py:183:


value = <tf.RaggedTensor [[20, 32, 11, 13],
[38, 5, 7, 1],
[85, 17, 16, 41],
[13, 9, 19, 49],
[18, 22, 11, 17],
[63, 7, 11, 104],
[22, 4, 168, 7],
[1, 10, 6, 41]]>
dtype = None, name = None, as_ref = False, preferred_dtype = None
dtype_hint = None, ctx = None
accepted_result_types = (<class 'tensorflow.python.framework.ops.Tensor'>,)

@profiler_trace.trace_wrapper("convert_to_tensor")
def convert_to_tensor(value,
                      dtype=None,
                      name=None,
                      as_ref=False,
                      preferred_dtype=None,
                      dtype_hint=None,
                      ctx=None,
                      accepted_result_types=(Tensor,)):
  """Implementation of the public convert_to_tensor."""
  # TODO(b/142518781): Fix all call-sites and remove redundant arg
  preferred_dtype = preferred_dtype or dtype_hint
  if isinstance(value, EagerTensor):
    if ctx is None:
      ctx = context.context()
    if not ctx.executing_eagerly():
      graph = get_default_graph()
      if not graph.building_function:
        raise RuntimeError(
            _add_error_prefix(
                "Attempting to capture an EagerTensor without "
                "building a function.",
                name=name))
      return graph.capture(value, name=name)

  if dtype is not None:
    dtype = dtypes.as_dtype(dtype)
  if isinstance(value, Tensor):
    if dtype is not None and not dtype.is_compatible_with(value.dtype):
      raise ValueError(
          _add_error_prefix(
              f"Tensor conversion requested dtype {dtype.name} "
              f"for Tensor with dtype {value.dtype.name}: {value!r}",
              name=name))
    return value

  if preferred_dtype is not None:
    preferred_dtype = dtypes.as_dtype(preferred_dtype)

  # See below for the reason why it's `type(value)` and not just `value`.
  # https://docs.python.org/3.8/reference/datamodel.html#special-lookup
  overload = getattr(type(value), "__tf_tensor__", None)
  if overload is not None:
    return overload(value, dtype, name)  #  pylint: disable=not-callable

  for base_type, conversion_func in tensor_conversion_registry.get(type(value)):
    # If dtype is None but preferred_dtype is not None, we try to
    # cast to preferred_dtype first.
    ret = None
    if dtype is None and preferred_dtype is not None:
      try:
        ret = conversion_func(
            value, dtype=preferred_dtype, name=name, as_ref=as_ref)
      except (TypeError, ValueError):
        # Could not coerce the conversion to use the preferred dtype.
        pass
      else:
        if (ret is not NotImplemented and
            ret.dtype.base_dtype != preferred_dtype.base_dtype):
          raise RuntimeError(
              _add_error_prefix(
                  f"Conversion function {conversion_func!r} for type "
                  f"{base_type} returned incompatible dtype: requested = "
                  f"{preferred_dtype.base_dtype.name}, "
                  f"actual = {ret.dtype.base_dtype.name}",
                  name=name))

    if ret is None:
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:1640:


v = <tf.RaggedTensor [[20, 32, 11, 13],
[38, 5, 7, 1],
[85, 17, 16, 41],
[13, 9, 19, 49],
[18, 22, 11, 17],
[63, 7, 11, 104],
[22, 4, 168, 7],
[1, 10, 6, 41]]>
dtype = None, name = None, as_ref = False

def _constant_tensor_conversion_function(v, dtype=None, name=None,
                                         as_ref=False):
  _ = as_ref
return constant(v, dtype=dtype, name=name)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:343:


value = <tf.RaggedTensor [[20, 32, 11, 13],
[38, 5, 7, 1],
[85, 17, 16, 41],
[13, 9, 19, 49],
[18, 22, 11, 17],
[63, 7, 11, 104],
[22, 4, 168, 7],
[1, 10, 6, 41]]>
dtype = None, shape = None, name = None

@tf_export("constant", v1=[])
def constant(value, dtype=None, shape=None, name="Const"):
  """Creates a constant tensor from a tensor-like object.

  Note: All eager `tf.Tensor` values are immutable (in contrast to
  `tf.Variable`). There is nothing especially _constant_ about the value
  returned from `tf.constant`. This function is not fundamentally different from
  `tf.convert_to_tensor`. The name `tf.constant` comes from the `value` being
  embedded in a `Const` node in the `tf.Graph`. `tf.constant` is useful
  for asserting that the value can be embedded that way.

  If the argument `dtype` is not specified, then the type is inferred from
  the type of `value`.

  >>> # Constant 1-D Tensor from a python list.
  >>> tf.constant([1, 2, 3, 4, 5, 6])
  <tf.Tensor: shape=(6,), dtype=int32,
      numpy=array([1, 2, 3, 4, 5, 6], dtype=int32)>
  >>> # Or a numpy array
  >>> a = np.array([[1, 2, 3], [4, 5, 6]])
  >>> tf.constant(a)
  <tf.Tensor: shape=(2, 3), dtype=int64, numpy=
    array([[1, 2, 3],
           [4, 5, 6]])>

  If `dtype` is specified, the resulting tensor values are cast to the requested
  `dtype`.

  >>> tf.constant([1, 2, 3, 4, 5, 6], dtype=tf.float64)
  <tf.Tensor: shape=(6,), dtype=float64,
      numpy=array([1., 2., 3., 4., 5., 6.])>

  If `shape` is set, the `value` is reshaped to match. Scalars are expanded to
  fill the `shape`:

  >>> tf.constant(0, shape=(2, 3))
    <tf.Tensor: shape=(2, 3), dtype=int32, numpy=
    array([[0, 0, 0],
           [0, 0, 0]], dtype=int32)>
  >>> tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
  <tf.Tensor: shape=(2, 3), dtype=int32, numpy=
    array([[1, 2, 3],
           [4, 5, 6]], dtype=int32)>

  `tf.constant` has no effect if an eager Tensor is passed as the `value`, it
  even transmits gradients:

  >>> v = tf.Variable([0.0])
  >>> with tf.GradientTape() as g:
  ...     loss = tf.constant(v + v)
  >>> g.gradient(loss, v).numpy()
  array([2.], dtype=float32)

  But, since `tf.constant` embeds the value in the `tf.Graph` this fails for
  symbolic tensors:

  >>> with tf.compat.v1.Graph().as_default():
  ...   i = tf.compat.v1.placeholder(shape=[None, None], dtype=tf.float32)
  ...   t = tf.constant(i)
  Traceback (most recent call last):
  ...
  TypeError: ...

  `tf.constant` will create tensors on the current device. Inputs which are
  already tensors maintain their placements unchanged.

  Related Ops:

  * `tf.convert_to_tensor` is similar but:
    * It has no `shape` argument.
    * Symbolic tensors are allowed to pass through.

    >>> with tf.compat.v1.Graph().as_default():
    ...   i = tf.compat.v1.placeholder(shape=[None, None], dtype=tf.float32)
    ...   t = tf.convert_to_tensor(i)

  * `tf.fill`: differs in a few ways:
    *   `tf.constant` supports arbitrary constants, not just uniform scalar
        Tensors like `tf.fill`.
    *   `tf.fill` creates an Op in the graph that is expanded at runtime, so it
        can efficiently represent large tensors.
    *   Since `tf.fill` does not embed the value, it can produce dynamically
        sized outputs.

  Args:
    value: A constant value (or list) of output type `dtype`.
    dtype: The type of the elements of the resulting tensor.
    shape: Optional dimensions of resulting tensor.
    name: Optional name for the tensor.

  Returns:
    A Constant Tensor.

  Raises:
    TypeError: if shape is incorrectly specified or unsupported.
    ValueError: if called on a symbolic tensor.
  """
return _constant_impl(value, dtype, shape, name, verify_shape=False,
                        allow_broadcast=True)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:267:


value = <tf.RaggedTensor [[20, 32, 11, 13],
[38, 5, 7, 1],
[85, 17, 16, 41],
[13, 9, 19, 49],
[18, 22, 11, 17],
[63, 7, 11, 104],
[22, 4, 168, 7],
[1, 10, 6, 41]]>
dtype = None, shape = None, name = None, verify_shape = False
allow_broadcast = True

def _constant_impl(
    value, dtype, shape, name, verify_shape, allow_broadcast):
  """Implementation of constant."""
  ctx = context.context()
  if ctx.executing_eagerly():
    if trace.enabled:
      with trace.Trace("tf.constant"):
        return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
  return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:279:


ctx = <tensorflow.python.eager.context.Context object at 0x7f0f768f3dc0>
value = <tf.RaggedTensor [[20, 32, 11, 13],
[38, 5, 7, 1],
[85, 17, 16, 41],
[13, 9, 19, 49],
[18, 22, 11, 17],
[63, 7, 11, 104],
[22, 4, 168, 7],
[1, 10, 6, 41]]>
dtype = None, shape = None, verify_shape = False

def _constant_eager_impl(ctx, value, dtype, shape, verify_shape):
  """Creates a constant on the current device."""
t = convert_to_eager_tensor(value, ctx, dtype)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:304:


value = <tf.RaggedTensor [[20, 32, 11, 13],
[38, 5, 7, 1],
[85, 17, 16, 41],
[13, 9, 19, 49],
[18, 22, 11, 17],
[63, 7, 11, 104],
[22, 4, 168, 7],
[1, 10, 6, 41]]>
ctx = <tensorflow.python.eager.context.Context object at 0x7f0f768f3dc0>
dtype = None

def convert_to_eager_tensor(value, ctx, dtype=None):
  """Converts the given `value` to an `EagerTensor`.

  Note that this function could return cached copies of created constants for
  performance reasons.

  Args:
    value: value to convert to EagerTensor.
    ctx: value of context.context().
    dtype: optional desired dtype of the converted EagerTensor.

  Returns:
    EagerTensor created from value.

  Raises:
    TypeError: if `dtype` is not compatible with the type of t.
  """
  if isinstance(value, ops.EagerTensor):
    if dtype is not None and value.dtype != dtype:
      raise TypeError(f"Expected tensor {value} with dtype {dtype!r}, but got "
                      f"dtype {value.dtype!r}.")
    return value
  if dtype is not None:
    try:
      dtype = dtype.as_datatype_enum
    except AttributeError:
      dtype = dtypes.as_dtype(dtype).as_datatype_enum
  ctx.ensure_initialized()
return ops.EagerTensor(value, ctx.device_name, dtype)

E ValueError: TypeError: object of type 'RaggedTensor' has no len()

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:102: ValueError

During handling of the above exception, another exception occurred:

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f0d584e7460>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_transformer_with_masked_language_modeling(sequence_testing_data: Dataset, run_eagerly):

    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_tag(
        Tags.CATEGORICAL
    )
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
    predict_masked = mm.SequencePredictMasked(schema=seq_schema, target=target, masking_prob=0.3)

    loader = Loader(sequence_testing_data, batch_size=8, shuffle=False, transform=predict_masked)
    model = mm.Model(
        ExtractTargetsMask(),
        mm.InputBlockV2(
            seq_schema,
            embeddings=mm.Embeddings(
                seq_schema.select_by_tag(Tags.CATEGORICAL), sequence_combiner=None
            ),
        ),
        # BertBlock(d_model=48, n_head=8, n_layer=2, pre=mm.MaskSequenceEmbeddings()),
        GPT2Block(
            d_model=48,
            n_head=4,
            n_layer=2,
            pre=mm.MaskSequenceEmbeddings(),
        ),
        mm.CategoricalOutput(
            seq_schema.select_by_name(target), default_loss="sparse_categorical_crossentropy"
        ),
    )

    inputs, targets = next(iter(loader))
    outputs = model(inputs, targets=targets, training=True)
    assert list(outputs.shape) == [8, 4, 51997]
  testing_utils.model_test(model, loader, run_eagerly=run_eagerly)

tests/unit/tf/transformers/test_block.py:225:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:722: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1051: in train_function
return step_function(self, iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:580: in train_step
loss = self.compute_loss(x, outputs.targets, outputs.predictions, outputs.sample_weight)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:948: in compute_loss
return self.compiled_loss(
/usr/local/lib/python3.8/dist-packages/keras/engine/compile_utils.py:201: in call
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:139: in call
losses = call_fn(y_true, y_pred)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:243: in call
return ag_fn(y_true, y_pred, **self._fn_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1086: in op_dispatch_handler
result = dispatch(op_dispatch_handler, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:163: in dispatch
result = dispatcher.handle(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:195: in handle
return self._override_func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:1884: in _ragged_tensor_sparse_categorical_crossentropy
return _ragged_tensor_apply_loss(fn, y_true, y_pred, y_pred_extra_dim=True)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:1397: in _ragged_tensor_apply_loss
nested_splits_list = [rt.nested_row_splits for rt in (y_true, y_pred)]
/usr/local/lib/python3.8/dist-packages/keras/losses.py:1397: in <listcomp>
nested_splits_list = [rt.nested_row_splits for rt in (y_true, y_pred)]


self = <tf.Tensor: shape=(8, 4, 51997), dtype=float32, numpy=
array([[[ 6.3700974e-03, -1.5098136e-02, 2.6254989e-02, ...,
...6e-02, -1.2484869e-02, -1.4735578e-02, ...,
7.2016641e-02, -1.7513335e-04, -9.8557752e-03]]], dtype=float32)>
name = 'nested_row_splits'

def __getattr__(self, name):
  if name in {"T", "astype", "ravel", "transpose", "reshape", "clip", "size",
              "tolist", "data"}:
    # TODO(wangpeng): Export the enable_numpy_behavior knob
    raise AttributeError(
        f"{type(self).__name__} object has no attribute '{name}'. " + """
      If you are looking for numpy-related methods, please run the following:
      from tensorflow.python.ops.numpy_ops import np_config
      np_config.enable_numpy_behavior()
    """)
self.__getattribute__(name)

E AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'nested_row_splits'

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:446: AttributeError
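The chained ValueError at the top of this failure comes from the loss path trying to build an eager constant out of the ragged targets. As a standalone illustration (not code from this PR), the following sketch reproduces that conversion error on a hypothetical ragged tensor and shows the usual explicit way out, padding it into a dense tensor:

import tensorflow as tf

rt = tf.ragged.constant([[20, 32, 11, 13], [38, 5, 7, 1]])

# Feeding a RaggedTensor where a dense tensor is expected fails while the
# eager converter tries to measure its length, much like the traceback above.
# tf.constant(rt)

# Converting explicitly to a dense (padded) tensor avoids that code path.
dense = rt.to_tensor(default_value=0)
print(dense.shape)  # (2, 4)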
____________ test_transformer_with_masked_language_modeling[False] _____________

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f0d5925b700>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_transformer_with_masked_language_modeling(sequence_testing_data: Dataset, run_eagerly):

    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_tag(
        Tags.CATEGORICAL
    )
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
    predict_masked = mm.SequencePredictMasked(schema=seq_schema, target=target, masking_prob=0.3)

    loader = Loader(sequence_testing_data, batch_size=8, shuffle=False, transform=predict_masked)
    model = mm.Model(
        ExtractTargetsMask(),
        mm.InputBlockV2(
            seq_schema,
            embeddings=mm.Embeddings(
                seq_schema.select_by_tag(Tags.CATEGORICAL), sequence_combiner=None
            ),
        ),
        # BertBlock(d_model=48, n_head=8, n_layer=2, pre=mm.MaskSequenceEmbeddings()),
        GPT2Block(
            d_model=48,
            n_head=4,
            n_layer=2,
            pre=mm.MaskSequenceEmbeddings(),
        ),
        mm.CategoricalOutput(
            seq_schema.select_by_name(target), default_loss="sparse_categorical_crossentropy"
        ),
    )

    inputs, targets = next(iter(loader))
    outputs = model(inputs, targets=targets, training=True)
    assert list(outputs.shape) == [8, 4, 51997]
  testing_utils.model_test(model, loader, run_eagerly=run_eagerly)

tests/unit/tf/transformers/test_block.py:225:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:722: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in _create_graph_function
func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/autograph_generated_file6dpyuv56.py:15: in tf__train_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:580: in train_step
loss = self.compute_loss(x, outputs.targets, outputs.predictions, outputs.sample_weight)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:948: in compute_loss
return self.compiled_loss(
/usr/local/lib/python3.8/dist-packages/keras/engine/compile_utils.py:201: in call
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:139: in call
losses = call_fn(y_true, y_pred)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:331: in converted_call
return _call_unconverted(f, args, kwargs, options, False)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:243: in call
return ag_fn(y_true, y_pred, **self._fn_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1086: in op_dispatch_handler
result = dispatch(op_dispatch_handler, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:163: in dispatch
result = dispatcher.handle(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:195: in handle
return self._override_func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:1884: in _ragged_tensor_sparse_categorical_crossentropy
return _ragged_tensor_apply_loss(fn, y_true, y_pred, y_pred_extra_dim=True)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:1397: in _ragged_tensor_apply_loss
nested_splits_list = [rt.nested_row_splits for rt in (y_true, y_pred)]
/usr/local/lib/python3.8/dist-packages/keras/losses.py:1397: in <listcomp>
nested_splits_list = [rt.nested_row_splits for rt in (y_true, y_pred)]


self = <tf.Tensor 'model/item_id_seq/categorical_output/categorical_target/BiasAdd:0' shape=(None, None, 51997) dtype=float32>
name = 'nested_row_splits'

def __getattr__(self, name):
  if name in {"T", "astype", "ravel", "transpose", "reshape", "clip", "size",
              "tolist", "data"}:
    # TODO(wangpeng): Export the enable_numpy_behavior knob
    raise AttributeError(
        f"{type(self).__name__} object has no attribute '{name}'. " + """
      If you are looking for numpy-related methods, please run the following:
      from tensorflow.python.ops.numpy_ops import np_config
      np_config.enable_numpy_behavior()
    """)
self.__getattribute__(name)

E AttributeError: in user code:
E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function *
E return step_function(self, iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function **
E outputs = model.distribute_strategy.run(run_step, args=(data,))
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E return self._call_for_each_replica(fn, args, kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step **
E outputs = model.train_step(data)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 580, in train_step
E loss = self.compute_loss(x, outputs.targets, outputs.predictions, outputs.sample_weight)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 948, in compute_loss
E return self.compiled_loss(
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/compile_utils.py", line 201, in call
E loss_value = loss_obj(y_t, y_p, sample_weight=sw)
E File "/usr/local/lib/python3.8/dist-packages/keras/losses.py", line 139, in call
E losses = call_fn(y_true, y_pred)
E File "/usr/local/lib/python3.8/dist-packages/keras/losses.py", line 243, in call **
E return ag_fn(y_true, y_pred, **self._fn_kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 1086, in op_dispatch_handler
E result = dispatch(op_dispatch_handler, args, kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 163, in dispatch
E result = dispatcher.handle(args, kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 195, in handle
E return self._override_func(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/losses.py", line 1884, in _ragged_tensor_sparse_categorical_crossentropy
E return _ragged_tensor_apply_loss(fn, y_true, y_pred, y_pred_extra_dim=True)
E File "/usr/local/lib/python3.8/dist-packages/keras/losses.py", line 1397, in _ragged_tensor_apply_loss
E nested_splits_list = [rt.nested_row_splits for rt in (y_true, y_pred)]
E File "/usr/local/lib/python3.8/dist-packages/keras/losses.py", line 1397, in
E nested_splits_list = [rt.nested_row_splits for rt in (y_true, y_pred)]
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 446, in getattr
E self.getattribute(name)
E
E AttributeError: 'Tensor' object has no attribute 'nested_row_splits'

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:446: AttributeError
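Both the eager and graph variants of this test fail inside Keras' ragged-loss wrapper: because y_true is ragged, sparse_categorical_crossentropy dispatches to _ragged_tensor_apply_loss, which also expects y_pred to be ragged and reads nested_row_splits from the dense predictions tensor. The sketch below reproduces that failure mode with hypothetical shapes and a small stand-in vocabulary, and shows one generic workaround (densify the targets and weight out the padded positions); it is not necessarily how this PR resolves it:

import tensorflow as tf

y_true = tf.ragged.constant([[20, 32, 11], [38, 5, 7, 1]], dtype=tf.int64)
y_pred = tf.random.uniform((2, 4, 100))  # dense [batch, seq, vocab] logits

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Ragged y_true with dense y_pred triggers the AttributeError shown above:
# loss_fn(y_true, y_pred)

# Densifying the targets and masking the padded positions via sample_weight
# keeps the loss restricted to valid positions.
y_true_dense = y_true.to_tensor(default_value=0)  # shape (2, 4)
weights = tf.cast(tf.sequence_mask(y_true.row_lengths()), tf.float32)
print(float(loss_fn(y_true_dense, y_pred, sample_weight=weights)))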
________________________ test_seq_predict_masked[False] ________________________

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f0d58558b80>
use_loader = False

@pytest.mark.parametrize("use_loader", [False, True])
def test_seq_predict_masked(sequence_testing_data: Dataset, use_loader: bool):
    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE)
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
    predict_masked = mm.SequencePredictMasked(schema=seq_schema, target=target, masking_prob=0.3)

    batch = mm.sample_batch(sequence_testing_data, batch_size=8, include_targets=False)
    if use_loader:
        dataset_transformed = Loader(
            sequence_testing_data, batch_size=8, shuffle=False, transform=predict_masked
        )
        output = next(iter(dataset_transformed))
    else:
        output = predict_masked(batch)
    output_x, output_y = output
  target_mask = output_y._keras_mask

E AttributeError: 'RaggedTensor' object has no attribute '_keras_mask'

tests/unit/tf/transforms/test_sequence.py:196: AttributeError
________________________ test_seq_predict_masked[True] _________________________

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f0d41b1f9a0>
use_loader = True

@pytest.mark.parametrize("use_loader", [False, True])
def test_seq_predict_masked(sequence_testing_data: Dataset, use_loader: bool):
    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE)
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
    predict_masked = mm.SequencePredictMasked(schema=seq_schema, target=target, masking_prob=0.3)

    batch = mm.sample_batch(sequence_testing_data, batch_size=8, include_targets=False)
    if use_loader:
        dataset_transformed = Loader(
            sequence_testing_data, batch_size=8, shuffle=False, transform=predict_masked
        )
        output = next(iter(dataset_transformed))
    else:
        output = predict_masked(batch)
    output_x, output_y = output
  target_mask = output_y._keras_mask

E AttributeError: 'RaggedTensor' object has no attribute '_keras_mask'

tests/unit/tf/transforms/test_sequence.py:196: AttributeError
___________ test_seq_predict_masked_replace_embeddings[False-False] ____________

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f0d40a0e100>
dense = False, target_as_dict = False

@pytest.mark.parametrize("dense", [False, True])
@pytest.mark.parametrize("target_as_dict", [False, True])
def test_seq_predict_masked_replace_embeddings(
    sequence_testing_data: Dataset, dense: bool, target_as_dict: bool
):
    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_name(
        ["item_id_seq", "categories"]
    )

    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
    predict_masked = mm.SequencePredictMasked(schema=seq_schema, target=target, masking_prob=0.3)

    dataset_transformed = Loader(
        sequence_testing_data, batch_size=8, shuffle=False, transform=predict_masked
    )

    batch = next(iter(dataset_transformed))
    inputs, targets = batch

    emb = tf.keras.layers.Embedding(1000, 16)
    item_id_emb_seq = emb(inputs["item_id_seq"])
    if dense:
        item_id_emb_seq = tf.sparse.to_dense(item_id_emb_seq.to_sparse())
        targets._keras_mask = tf.sparse.to_dense(targets._keras_mask.to_sparse())
  targets_mask = targets._keras_mask

E AttributeError: 'RaggedTensor' object has no attribute '_keras_mask'

tests/unit/tf/transforms/test_sequence.py:254: AttributeError
____________ test_seq_predict_masked_replace_embeddings[False-True] ____________

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f0d52517f70>
dense = True, target_as_dict = False

@pytest.mark.parametrize("dense", [False, True])
@pytest.mark.parametrize("target_as_dict", [False, True])
def test_seq_predict_masked_replace_embeddings(
    sequence_testing_data: Dataset, dense: bool, target_as_dict: bool
):
    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_name(
        ["item_id_seq", "categories"]
    )

    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
    predict_masked = mm.SequencePredictMasked(schema=seq_schema, target=target, masking_prob=0.3)

    dataset_transformed = Loader(
        sequence_testing_data, batch_size=8, shuffle=False, transform=predict_masked
    )

    batch = next(iter(dataset_transformed))
    inputs, targets = batch

    emb = tf.keras.layers.Embedding(1000, 16)
    item_id_emb_seq = emb(inputs["item_id_seq"])
    if dense:
        item_id_emb_seq = tf.sparse.to_dense(item_id_emb_seq.to_sparse())
      targets._keras_mask = tf.sparse.to_dense(targets._keras_mask.to_sparse())

E AttributeError: 'RaggedTensor' object has no attribute '_keras_mask'

tests/unit/tf/transforms/test_sequence.py:253: AttributeError
____________ test_seq_predict_masked_replace_embeddings[True-False] ____________

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f0d4188eee0>
dense = False, target_as_dict = True

@pytest.mark.parametrize("dense", [False, True])
@pytest.mark.parametrize("target_as_dict", [False, True])
def test_seq_predict_masked_replace_embeddings(
    sequence_testing_data: Dataset, dense: bool, target_as_dict: bool
):
    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_name(
        ["item_id_seq", "categories"]
    )

    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
    predict_masked = mm.SequencePredictMasked(schema=seq_schema, target=target, masking_prob=0.3)

    dataset_transformed = Loader(
        sequence_testing_data, batch_size=8, shuffle=False, transform=predict_masked
    )

    batch = next(iter(dataset_transformed))
    inputs, targets = batch

    emb = tf.keras.layers.Embedding(1000, 16)
    item_id_emb_seq = emb(inputs["item_id_seq"])
    if dense:
        item_id_emb_seq = tf.sparse.to_dense(item_id_emb_seq.to_sparse())
        targets._keras_mask = tf.sparse.to_dense(targets._keras_mask.to_sparse())
  targets_mask = targets._keras_mask

E AttributeError: 'RaggedTensor' object has no attribute '_keras_mask'

tests/unit/tf/transforms/test_sequence.py:254: AttributeError
____________ test_seq_predict_masked_replace_embeddings[True-True] _____________

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f0d5250bd90>
dense = True, target_as_dict = True

@pytest.mark.parametrize("dense", [False, True])
@pytest.mark.parametrize("target_as_dict", [False, True])
def test_seq_predict_masked_replace_embeddings(
    sequence_testing_data: Dataset, dense: bool, target_as_dict: bool
):
    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_name(
        ["item_id_seq", "categories"]
    )

    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
    predict_masked = mm.SequencePredictMasked(schema=seq_schema, target=target, masking_prob=0.3)

    dataset_transformed = Loader(
        sequence_testing_data, batch_size=8, shuffle=False, transform=predict_masked
    )

    batch = next(iter(dataset_transformed))
    inputs, targets = batch

    emb = tf.keras.layers.Embedding(1000, 16)
    item_id_emb_seq = emb(inputs["item_id_seq"])
    if dense:
        item_id_emb_seq = tf.sparse.to_dense(item_id_emb_seq.to_sparse())
      targets._keras_mask = tf.sparse.to_dense(targets._keras_mask.to_sparse())

E AttributeError: 'RaggedTensor' object has no attribute '_keras_mask'

tests/unit/tf/transforms/test_sequence.py:253: AttributeError
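The _keras_mask these tests read is a plain Python attribute attached to a tensor object, so a mask set inside a dataloader transform does not travel with the batch once it crosses a tf.data/graph boundary. The sketch below illustrates that effect with plain TensorFlow on a hypothetical dense integer feature (it is not the PR's transform):

import tensorflow as tf

def attach_mask(x):
    # Stand-in for a loader transform that tags the batch with a Keras-style
    # mask; the attribute lands on the symbolic tensor that tf.data traces.
    x._keras_mask = tf.cast(x > 0, tf.bool)
    return x

ds = tf.data.Dataset.from_tensor_slices([[1, 0, 2], [3, 4, 0]]).batch(2).map(attach_mask)

batch = next(iter(ds))
# The eager tensor yielded by the pipeline is a new object, so the attribute
# attached during tracing is gone, the same kind of loss these tests hit
# when reading output_y._keras_mask.
print(hasattr(batch, "_keras_mask"))  # False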
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 20 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 20 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:910: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_file6crtzizm.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 10 failed, 738 passed, 12 skipped, 1160 warnings in 1126.57s (0:18:46) ====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins11494565819425124844.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit 1128897617e81b760e8cbf0c3172ac76d9da8200, has merge conflicts.
Running as SYSTEM
Setting status of 1128897617e81b760e8cbf0c3172ac76d9da8200 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1464/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse 1128897617e81b760e8cbf0c3172ac76d9da8200^{commit} # timeout=10
Checking out Revision 1128897617e81b760e8cbf0c3172ac76d9da8200 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1128897617e81b760e8cbf0c3172ac76d9da8200 # timeout=10
Commit message: "Ensures target ragged tensors are converted to dense and are one-hot. Fixed and added tests on the special input mask approach"
 > git rev-list --no-walk df1c9509580bfb8d84b3c3bc964d9b6f8f20f5a8 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins2824142774755003091.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 766 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 23%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py . [ 27%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 30%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 31%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 46%]
tests/unit/tf/models/test_base.py s................. [ 49%]
tests/unit/tf/models/test_benchmark.py .. [ 49%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 58%]
tests/unit/tf/outputs/test_classification.py ...... [ 59%]
tests/unit/tf/outputs/test_contrastive.py .F.F..F.... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 61%]
tests/unit/tf/outputs/test_sampling.py .... [ 61%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 64%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 64%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 65%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 66%]
tests/unit/tf/transformers/test_block.py .................. [ 68%]
tests/unit/tf/transformers/test_transforms.py ...... [ 69%]
tests/unit/tf/transforms/test_bias.py .. [ 69%]
tests/unit/tf/transforms/test_features.py s............................. [ 73%]
....................s...... [ 77%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 78%]
tests/unit/tf/transforms/test_noise.py ..... [ 79%]
tests/unit/tf/transforms/test_sequence.py ........................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
__________________________ test_contrastive_mf[False] __________________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7efbf339ed00>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_contrastive_mf(ecommerce_data: Dataset, run_eagerly: bool):
    schema = ecommerce_data.schema
    user_id = schema.select_by_tag(Tags.USER_ID)
    item_id = schema.select_by_tag(Tags.ITEM_ID)

    # TODO: Change this for new RetrievalModel
    encoders = mm.SequentialBlock(
        mm.ParallelBlock(
            mm.EmbeddingTable(64, user_id.first), mm.EmbeddingTable(64, item_id.first)
        ),
        Rename(dict(user_id="query", item_id="candidate")),
    )

    mf = mm.Model(encoders, mm.ContrastiveOutput(item_id, "in-batch"))
  testing_utils.model_test(mf, ecommerce_data, run_eagerly=run_eagerly, reload_model=True)

tests/unit/tf/outputs/test_contrastive.py:43:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:756: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
self.stateful_fn.get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in get_concrete_function_internal_garbage_collected
graph_function, _ = self.maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in maybe_define_function
graph_function = self.create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in create_graph_function
func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
out = weak_wrapped_fn().wrapped(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/autograph_generated_filefr3pp1qo.py:15: in tf__train_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:613: in train_step
outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
merlin/models/tf/models/base.py:574: in call_train_test
self.mask_predictions_from_targets(predictions, targets)
merlin/models/tf/models/base.py:602: in mask_predictions_from_targets
== len(predictions[k].get_shape().as_list()) - 1


self = TensorShape(None)

def as_list(self):
  """Returns a list of integers or `None` for each dimension.

  Returns:
    A list of integers or `None` for each dimension.

  Raises:
    ValueError: If `self` is an unknown shape with an unknown rank.
  """
  if self._dims is None:
    raise ValueError("as_list() is not defined on an unknown TensorShape.")

E ValueError: in user code:
E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function *
E return step_function(self, iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function **
E outputs = model.distribute_strategy.run(run_step, args=(data,))
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E return self._call_for_each_replica(fn, args, kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step **
E outputs = model.train_step(data)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 613, in train_step
E outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 574, in call_train_test
E self.mask_predictions_from_targets(predictions, targets)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 602, in mask_predictions_from_targets
E == len(predictions[k].get_shape().as_list()) - 1
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/tensor_shape.py", line 1347, in as_list
E raise ValueError("as_list() is not defined on an unknown TensorShape.")
E
E ValueError: as_list() is not defined on an unknown TensorShape.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/tensor_shape.py:1347: ValueError
________________ test_constrastive_mf_weights_in_output[False] _________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7efc027edf10>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_constrastive_mf_weights_in_output(ecommerce_data: Dataset, run_eagerly: bool):
    schema = ecommerce_data.schema
    schema["item_id"] = schema["item_id"].with_tags([Tags.TARGET])
    user_id = schema.select_by_tag(Tags.USER_ID)
    item_id = schema.select_by_tag(Tags.ITEM_ID)

    # TODO: Change this for new RetrievalModel
    encoder = mm.TabularBlock(mm.EmbeddingTable(64, user_id.first), aggregation="concat")

    mf = mm.Model(encoder, mm.ContrastiveOutput(item_id, "in-batch"))
  testing_utils.model_test(mf, ecommerce_data, run_eagerly=run_eagerly, reload_model=True)

tests/unit/tf/outputs/test_contrastive.py:58:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:756: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
self.stateful_fn.get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in get_concrete_function_internal_garbage_collected
graph_function, _ = self.maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in maybe_define_function
graph_function = self.create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in create_graph_function
func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
out = weak_wrapped_fn().wrapped(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/autograph_generated_filefr3pp1qo.py:15: in tf__train_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:613: in train_step
outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
merlin/models/tf/models/base.py:574: in call_train_test
self.mask_predictions_from_targets(predictions, targets)
merlin/models/tf/models/base.py:602: in mask_predictions_from_targets
== len(predictions[k].get_shape().as_list()) - 1


self = TensorShape(None)

def as_list(self):
  """Returns a list of integers or `None` for each dimension.

  Returns:
    A list of integers or `None` for each dimension.

  Raises:
    ValueError: If `self` is an unknown shape with an unknown rank.
  """
  if self._dims is None:
    raise ValueError("as_list() is not defined on an unknown TensorShape.")

E ValueError: in user code:
E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function *
E return step_function(self, iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function **
E outputs = model.distribute_strategy.run(run_step, args=(data,))
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E return self._call_for_each_replica(fn, args, kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step **
E outputs = model.train_step(data)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 613, in train_step
E outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 574, in call_train_test
E self.mask_predictions_from_targets(predictions, targets)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 602, in mask_predictions_from_targets
E == len(predictions[k].get_shape().as_list()) - 1
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/tensor_shape.py", line 1347, in as_list
E raise ValueError("as_list() is not defined on an unknown TensorShape.")
E
E ValueError: as_list() is not defined on an unknown TensorShape.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/tensor_shape.py:1347: ValueError
________________________ test_contrastive_output[False] ________________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7efc002cb790>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_contrastive_output(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema
    schema["item_category"] = schema["item_category"].with_tags(
        schema["item_category"].tags + "target"
    )
    ecommerce_data.schema = schema
    model = mm.Model(
        mm.InputBlock(schema),
        mm.MLPBlock([8]),
        mm.ContrastiveOutput(
            schema["item_category"],
            negative_samplers=PopularityBasedSamplerV2(max_id=100, max_num_samples=20),
        ),
    )
  _, history = testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/outputs/test_contrastive.py:90:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:756: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
self.stateful_fn.get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in get_concrete_function_internal_garbage_collected
graph_function, _ = self.maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in maybe_define_function
graph_function = self.create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in create_graph_function
func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
out = weak_wrapped_fn().wrapped(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/autograph_generated_filefr3pp1qo.py:15: in tf__train_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:613: in train_step
outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
merlin/models/tf/models/base.py:574: in call_train_test
self.mask_predictions_from_targets(predictions, targets)
merlin/models/tf/models/base.py:602: in mask_predictions_from_targets
== len(predictions[k].get_shape().as_list()) - 1


self = TensorShape(None)

def as_list(self):
  """Returns a list of integers or `None` for each dimension.

  Returns:
    A list of integers or `None` for each dimension.

  Raises:
    ValueError: If `self` is an unknown shape with an unknown rank.
  """
  if self._dims is None:
    raise ValueError("as_list() is not defined on an unknown TensorShape.")

E ValueError: in user code:
E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function *
E return step_function(self, iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function **
E outputs = model.distribute_strategy.run(run_step, args=(data,))
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E return self._call_for_each_replica(fn, args, kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step **
E outputs = model.train_step(data)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 613, in train_step
E outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 574, in call_train_test
E self.mask_predictions_from_targets(predictions, targets)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 602, in mask_predictions_from_targets
E == len(predictions[k].get_shape().as_list()) - 1
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/tensor_shape.py", line 1347, in as_list
E raise ValueError("as_list() is not defined on an unknown TensorShape.")
E
E ValueError: as_list() is not defined on an unknown TensorShape.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/tensor_shape.py:1347: ValueError
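
All three failures above share the same root cause: in graph mode the contrastive predictions reach mask_predictions_from_targets with a TensorShape of unknown rank, so predictions[k].get_shape().as_list() raises the ValueError shown. A minimal sketch of a rank-based guard that avoids calling as_list() on an unknown shape (the helper name is illustrative, not the repository's actual fix):

import tensorflow as tf

def rank_matches_target(predictions: tf.Tensor, targets: tf.Tensor) -> bool:
    # TensorShape.rank is None when the static rank is unknown (common in graph mode),
    # whereas TensorShape.as_list() raises ValueError for an unknown rank.
    pred_rank = predictions.shape.rank
    target_rank = targets.shape.rank
    if pred_rank is None or target_rank is None:
        return False  # conservative fallback when the static rank is unknown
    return target_rank == pred_rank - 1
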
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,
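
The Pillow deprecation warnings above come from Keras' image utilities and point to the enum-based resampling constants; a minimal sketch of the suggested replacement, assuming Pillow >= 9.1:

from PIL import Image

# Deprecated: Image.NEAREST, Image.BILINEAR, Image.LANCZOS, ...
resample = Image.Resampling.NEAREST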

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 13 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 22 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 13 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 22 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:910: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_fileyngt_ljw.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):
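
The collections deprecation flagged in core/tabular.py is the standard relocation of the ABCs to collections.abc (Python 3.3+); a minimal sketch of the fix, with illustrative values:

import collections.abc

feature_names = ["user_id", "item_id"]
# Deprecated: isinstance(feature_names, collections.Sequence)
is_sequence = isinstance(feature_names, collections.abc.Sequence)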

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)
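
The stochastic swap-noise warnings suggest a drop-in replacement for the deprecated backend op; a minimal sketch with an illustrative shape and probability:

import tensorflow as tf

# Deprecated: tf.keras.backend.random_binomial(shape=(4, 10), p=0.3)
mask = tf.keras.backend.random_bernoulli(shape=(4, 10), p=0.3)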

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)
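
The optimizer warnings come from passing the legacy lr argument to SGD; a minimal sketch of the rename, with an illustrative learning rate:

import tensorflow as tf

# Deprecated: tf.keras.optimizers.SGD(lr=0.01)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)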

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 3 failed, 751 passed, 12 skipped, 1166 warnings in 1106.57s (0:18:26) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins2018937121691643644.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit 79b66c7d8bc6005338f1567613a1d42c071b790d, no merge conflicts.
Running as SYSTEM
Setting status of 79b66c7d8bc6005338f1567613a1d42c071b790d to PENDING with url https://10.20.13.93:8080/job/merlin_models/1465/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse 79b66c7d8bc6005338f1567613a1d42c071b790d^{commit} # timeout=10
Checking out Revision 79b66c7d8bc6005338f1567613a1d42c071b790d (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 79b66c7d8bc6005338f1567613a1d42c071b790d # timeout=10
Commit message: "Fixed comments"
 > git rev-list --no-walk 1128897617e81b760e8cbf0c3172ac76d9da8200 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins17495214133615927596.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 770 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 30%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 31%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 46%]
tests/unit/tf/models/test_base.py s................... [ 49%]
tests/unit/tf/models/test_benchmark.py .. [ 49%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 58%]
tests/unit/tf/outputs/test_classification.py ...... [ 59%]
tests/unit/tf/outputs/test_contrastive.py .F.F..F.... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 61%]
tests/unit/tf/outputs/test_sampling.py .... [ 61%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 64%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 65%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 66%]
tests/unit/tf/transformers/test_block.py .................. [ 68%]
tests/unit/tf/transformers/test_transforms.py ...... [ 69%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 73%]
....................s...... [ 77%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 78%]
tests/unit/tf/transforms/test_noise.py ..... [ 79%]
tests/unit/tf/transforms/test_sequence.py ........................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
__________________________ test_contrastive_mf[False] __________________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7fbe7812c1c0>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_contrastive_mf(ecommerce_data: Dataset, run_eagerly: bool):
    schema = ecommerce_data.schema
    user_id = schema.select_by_tag(Tags.USER_ID)
    item_id = schema.select_by_tag(Tags.ITEM_ID)

    # TODO: Change this for new RetrievalModel
    encoders = mm.SequentialBlock(
        mm.ParallelBlock(
            mm.EmbeddingTable(64, user_id.first), mm.EmbeddingTable(64, item_id.first)
        ),
        Rename(dict(user_id="query", item_id="candidate")),
    )

    mf = mm.Model(encoders, mm.ContrastiveOutput(item_id, "in-batch"))
  testing_utils.model_test(mf, ecommerce_data, run_eagerly=run_eagerly, reload_model=True)

tests/unit/tf/outputs/test_contrastive.py:43:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:758: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in call
result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
self.stateful_fn.get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in get_concrete_function_internal_garbage_collected
graph_function, _ = self.maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in maybe_define_function
graph_function = self.create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in create_graph_function
func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
out = weak_wrapped_fn().wrapped(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filecvfraym6.py:15: in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:615: in train_step
outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
merlin/models/tf/models/base.py:576: in call_train_test
self.mask_predictions_from_targets(predictions, targets)
merlin/models/tf/models/base.py:604: in mask_predictions_from_targets
== len(predictions[k].get_shape().as_list()) - 1


self = TensorShape(None)

def as_list(self):
  """Returns a list of integers or `None` for each dimension.

  Returns:
    A list of integers or `None` for each dimension.

  Raises:
    ValueError: If `self` is an unknown shape with an unknown rank.
  """
  if self._dims is None:
      raise ValueError("as_list() is not defined on an unknown TensorShape.")

E ValueError: in user code:
E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function *
E return step_function(self, iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function **
E outputs = model.distribute_strategy.run(run_step, args=(data,))
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E return self._call_for_each_replica(fn, args, kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step **
E outputs = model.train_step(data)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 615, in train_step
E outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 576, in call_train_test
E self.mask_predictions_from_targets(predictions, targets)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 604, in mask_predictions_from_targets
E == len(predictions[k].get_shape().as_list()) - 1
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/tensor_shape.py", line 1347, in as_list
E raise ValueError("as_list() is not defined on an unknown TensorShape.")
E
E ValueError: as_list() is not defined on an unknown TensorShape.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/tensor_shape.py:1347: ValueError
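Note: this ValueError, repeated in the next two failures, comes from calling TensorShape.as_list() on a shape whose rank is unknown during graph tracing (run_eagerly=False). Below is a minimal TensorFlow sketch, independent of the Merlin Models code in the traceback, that reproduces the behaviour:

import tensorflow as tf

# TensorShape(None) models a shape whose rank is unknown, which is what
# shape inference can yield for some tensors when tracing in graph mode.
unknown = tf.TensorShape(None)

print(unknown.rank)        # None: rank is safe to query on unknown shapes
try:
    unknown.as_list()      # raises: dimensions cannot be listed
except ValueError as exc:
    print(exc)             # "as_list() is not defined on an unknown TensorShape."

Reading .rank (which returns None for unknown shapes) instead of .as_list() is one way code like this can avoid the exception.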
________________ test_constrastive_mf_weights_in_output[False] _________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7fbe80529970>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_constrastive_mf_weights_in_output(ecommerce_data: Dataset, run_eagerly: bool):
    schema = ecommerce_data.schema
    schema["item_id"] = schema["item_id"].with_tags([Tags.TARGET])
    user_id = schema.select_by_tag(Tags.USER_ID)
    item_id = schema.select_by_tag(Tags.ITEM_ID)

    # TODO: Change this for new RetrievalModel
    encoder = mm.TabularBlock(mm.EmbeddingTable(64, user_id.first), aggregation="concat")

    mf = mm.Model(encoder, mm.ContrastiveOutput(item_id, "in-batch"))
  testing_utils.model_test(mf, ecommerce_data, run_eagerly=run_eagerly, reload_model=True)

tests/unit/tf/outputs/test_contrastive.py:58:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:758: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in call
result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
self.stateful_fn.get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in get_concrete_function_internal_garbage_collected
graph_function, _ = self.maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in maybe_define_function
graph_function = self.create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in create_graph_function
func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
out = weak_wrapped_fn().wrapped(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filecvfraym6.py:15: in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:615: in train_step
outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
merlin/models/tf/models/base.py:576: in call_train_test
self.mask_predictions_from_targets(predictions, targets)
merlin/models/tf/models/base.py:604: in mask_predictions_from_targets
== len(predictions[k].get_shape().as_list()) - 1


self = TensorShape(None)

def as_list(self):
  """Returns a list of integers or `None` for each dimension.

  Returns:
    A list of integers or `None` for each dimension.

  Raises:
    ValueError: If `self` is an unknown shape with an unknown rank.
  """
  if self._dims is None:
      raise ValueError("as_list() is not defined on an unknown TensorShape.")

E ValueError: in user code:
E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function *
E return step_function(self, iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function **
E outputs = model.distribute_strategy.run(run_step, args=(data,))
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E return self._call_for_each_replica(fn, args, kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step **
E outputs = model.train_step(data)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 615, in train_step
E outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 576, in call_train_test
E self.mask_predictions_from_targets(predictions, targets)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 604, in mask_predictions_from_targets
E == len(predictions[k].get_shape().as_list()) - 1
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/tensor_shape.py", line 1347, in as_list
E raise ValueError("as_list() is not defined on an unknown TensorShape.")
E
E ValueError: as_list() is not defined on an unknown TensorShape.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/tensor_shape.py:1347: ValueError
________________________ test_contrastive_output[False] ________________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7fbe7a584970>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_contrastive_output(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema
    schema["item_category"] = schema["item_category"].with_tags(
        schema["item_category"].tags + "target"
    )
    ecommerce_data.schema = schema
    model = mm.Model(
        mm.InputBlock(schema),
        mm.MLPBlock([8]),
        mm.ContrastiveOutput(
            schema["item_category"],
            negative_samplers=PopularityBasedSamplerV2(max_id=100, max_num_samples=20),
        ),
    )
  _, history = testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/outputs/test_contrastive.py:90:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:758: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in call
result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
self.stateful_fn.get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in get_concrete_function_internal_garbage_collected
graph_function, _ = self.maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in maybe_define_function
graph_function = self.create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in create_graph_function
func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
out = weak_wrapped_fn().wrapped(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filecvfraym6.py:15: in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:615: in train_step
outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
merlin/models/tf/models/base.py:576: in call_train_test
self.mask_predictions_from_targets(predictions, targets)
merlin/models/tf/models/base.py:604: in mask_predictions_from_targets
== len(predictions[k].get_shape().as_list()) - 1


self = TensorShape(None)

def as_list(self):
  """Returns a list of integers or `None` for each dimension.

  Returns:
    A list of integers or `None` for each dimension.

  Raises:
    ValueError: If `self` is an unknown shape with an unknown rank.
  """
  if self._dims is None:
      raise ValueError("as_list() is not defined on an unknown TensorShape.")

E ValueError: in user code:
E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function *
E return step_function(self, iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function **
E outputs = model.distribute_strategy.run(run_step, args=(data,))
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E return self._call_for_each_replica(fn, args, kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step **
E outputs = model.train_step(data)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 615, in train_step
E outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 576, in call_train_test
E self.mask_predictions_from_targets(predictions, targets)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 604, in mask_predictions_from_targets
E == len(predictions[k].get_shape().as_list()) - 1
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/tensor_shape.py", line 1347, in as_list
E raise ValueError("as_list() is not defined on an unknown TensorShape.")
E
E ValueError: as_list() is not defined on an unknown TensorShape.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/tensor_shape.py:1347: ValueError
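Note: all three tracebacks point at the rank comparison in mask_predictions_from_targets (merlin/models/tf/models/base.py:604), which compares the static ranks of targets and predictions via len(get_shape().as_list()). As an illustration only, not the fix adopted in this PR, such a check could fall back to the runtime rank when the static rank is unknown; the helper name ranks_compatible below is hypothetical:

import tensorflow as tf

def ranks_compatible(target: tf.Tensor, prediction: tf.Tensor) -> tf.Tensor:
    """Check target.rank == prediction.rank - 1 without calling as_list().

    Uses the static rank when it is known at trace time and falls back to
    tf.rank (a runtime op) otherwise, so the check also works inside
    tf.function with partially unknown shapes.
    """
    t_rank = target.shape.rank
    p_rank = prediction.shape.rank
    if t_rank is not None and p_rank is not None:
        # Both ranks are known while tracing: compare as Python ints.
        return tf.constant(t_rank == p_rank - 1)
    # At least one rank is unknown statically: compare runtime ranks.
    return tf.equal(tf.rank(target), tf.rank(prediction) - 1)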
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 5 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 22 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 13 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 22 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 7 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 22 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 13 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 22 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:943: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/__autograph_generated_filety9so31d.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
    ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_base.py::test_retrieval_model_query
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 3 failed, 755 passed, 12 skipped, 1187 warnings in 1179.88s (0:19:39) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins2523873728059620576.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit 0f168feed2c18a3bd7b4a43393f1b7e6fc4a90a8, no merge conflicts.
Running as SYSTEM
Setting status of 0f168feed2c18a3bd7b4a43393f1b7e6fc4a90a8 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1472/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse 0f168feed2c18a3bd7b4a43393f1b7e6fc4a90a8^{commit} # timeout=10
Checking out Revision 0f168feed2c18a3bd7b4a43393f1b7e6fc4a90a8 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 0f168feed2c18a3bd7b4a43393f1b7e6fc4a90a8 # timeout=10
Commit message: "Fixed support to Keras Masking in metrics, incluing the TopkMetric ones"
 > git rev-list --no-walk 2dd34ca77e161c01978e0b2c99b4a95a74b998f3 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins1663067831942553726.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 771 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 29%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 30%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%]
tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 46%]
tests/unit/tf/models/test_base.py s................... [ 49%]
tests/unit/tf/models/test_benchmark.py .. [ 49%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ............F.F.......F.F....... [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 59%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 61%]
tests/unit/tf/outputs/test_sampling.py .... [ 61%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 64%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 65%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 66%]
tests/unit/tf/transformers/test_block.py .................. [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 69%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 73%]
....................s...... [ 77%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 78%]
tests/unit/tf/transforms/test_noise.py ..... [ 79%]
tests/unit/tf/transforms/test_sequence.py ........................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
_________ test_two_tower_model_with_custom_options[bpr-max-True-False] _________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7effac1ad460>
run_eagerly = False, logits_pop_logq_correction = True, loss = 'bpr-max'

@pytest.mark.parametrize("run_eagerly", [True, False])
@pytest.mark.parametrize("logits_pop_logq_correction", [True, False])
@pytest.mark.parametrize("loss", ["categorical_crossentropy", "bpr-max", "binary_crossentropy"])
def test_two_tower_model_with_custom_options(
    ecommerce_data: Dataset,
    run_eagerly,
    logits_pop_logq_correction,
    loss,
):
    from tensorflow.keras import regularizers

    from merlin.models.tf.transforms.bias import PopularityLogitsCorrection
    from merlin.models.utils import schema_utils

    data = ecommerce_data
    data.schema = data.schema.select_by_name(["user_categories", "item_id"])

    metrics = [
        tf.keras.metrics.AUC(from_logits=True, name="auc"),
        mm.RecallAt(5),
        mm.RecallAt(10),
        mm.MRRAt(10),
        mm.NDCGAt(10),
    ]

    post_logits = None
    if logits_pop_logq_correction:
        cardinalities = schema_utils.categorical_cardinalities(data.schema)
        item_id_cardinalities = cardinalities[
            data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
        ]
        items_frequencies = tf.sort(
            tf.random.uniform((item_id_cardinalities,), minval=0, maxval=1000, dtype=tf.int32)
        )
        post_logits = PopularityLogitsCorrection(
            items_frequencies,
            schema=data.schema,
        )

    retrieval_task = mm.ItemRetrievalTask(
        samplers=[mm.InBatchSampler()],
        schema=data.schema,
        logits_temperature=0.1,
        post_logits=post_logits,
        store_negative_ids=True,
    )

    model = mm.TwoTowerModel(
        data.schema,
        query_tower=mm.MLPBlock(
            [2],
            activation="relu",
            no_activation_last_layer=True,
            dropout=0.1,
            kernel_regularizer=regularizers.l2(1e-5),
            bias_regularizer=regularizers.l2(1e-6),
        ),
        embedding_options=mm.EmbeddingOptions(
            infer_embedding_sizes=True,
            infer_embedding_sizes_multiplier=3.0,
            infer_embeddings_ensure_dim_multiple_of_8=True,
            embeddings_l2_reg=1e-5,
        ),
        prediction_tasks=retrieval_task,
    )

    model.compile(optimizer="adam", run_eagerly=run_eagerly, loss=loss, metrics=metrics)
  losses = model.fit(data, batch_size=50, epochs=1, steps_per_epoch=1)

tests/unit/tf/models/test_retrieval.py:179:


merlin/models/tf/models/base.py:763: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in call
result = self.call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in call
self.initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in initialize
self.stateful_fn.get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in get_concrete_function_internal_garbage_collected
graph_function, _ = self.maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in maybe_define_function
graph_function = self.create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in create_graph_function
func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
out = weak_wrapped_fn().wrapped(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/autograph_generated_file8swt47q4.py:15: in tf__train_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self.extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self.call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:692: in wrapper
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:618: in train_step
loss = self.compute_loss(x, outputs.targets, outputs.predictions, outputs.sample_weight)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:948: in compute_loss
return self.compiled_loss(
/usr/local/lib/python3.8/dist-packages/keras/engine/compile_utils.py:201: in call
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
merlin/models/tf/losses/pairwise.py:56: in call
loss = super().call(y_true, y_pred, sample_weight)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:139: in call
losses = call_fn(y_true, y_pred)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:692: in wrapper
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/autograph_generated_filelr4bh7gi.py:11: in tf__call
(positives_scores, negatives_scores, valid_rows_with_positive_mask) = ag__.converted_call(ag__.ld(self)._separate_positives_negatives_scores, (ag__.ld(y_true), ag__.ld(y_pred)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:441: in converted_call
result = converted_f(*effective_args)
/tmp/autograph_generated_file8czp35ng.py:15: in tf___separate_positives_negatives_scores
y_pred_valid_rows = ag__.converted_call(ag__.ld(tf).boolean_mask, (ag__.ld(y_pred), ag__.ld(valid_rows_with_positive_mask)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:331: in converted_call
return _call_unconverted(f, args, kwargs, options, False)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
return dispatch_target(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py:1978: in boolean_mask_v2
return boolean_mask(tensor, mask, name, axis)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
return dispatch_target(*args, **kwargs)


tensor = <tf.Tensor 'retrieval_model/truediv:0' shape=(None, None) dtype=float32>
mask = <tf.Tensor 'BPRmaxLoss/Cast:0' shape=<unknown> dtype=bool>
name = 'boolean_mask', axis = None

@tf_export(v1=["boolean_mask"])
@dispatch.add_dispatch_support
def boolean_mask(tensor, mask, name="boolean_mask", axis=None):
  """Apply boolean mask to tensor.

  Numpy equivalent is `tensor[mask]`.

  In general, `0 < dim(mask) = K <= dim(tensor)`, and `mask`'s shape must match
  the first K dimensions of `tensor`'s shape.  We then have:
    `boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]`
  where `(i1,...,iK)` is the ith `True` entry of `mask` (row-major order).
  The `axis` could be used with `mask` to indicate the axis to mask from.
  In that case, `axis + dim(mask) <= dim(tensor)` and `mask`'s shape must match
  the first `axis + dim(mask)` dimensions of `tensor`'s shape.

  See also: `tf.ragged.boolean_mask`, which can be applied to both dense and
  ragged tensors, and can be used if you need to preserve the masked dimensions
  of `tensor` (rather than flattening them, as `tf.boolean_mask` does).

  Examples:

  ```python
  # 1-D example
  tensor = [0, 1, 2, 3]
  mask = np.array([True, False, True, False])
  tf.boolean_mask(tensor, mask)  # [0, 2]

  # 2-D example
  tensor = [[1, 2], [3, 4], [5, 6]]
  mask = np.array([True, False, True])
  tf.boolean_mask(tensor, mask)  # [[1, 2], [5, 6]]
  ```

  Args:
    tensor:  N-D Tensor.
    mask:  K-D boolean Tensor, K <= N and K must be known statically.
    name:  A name for this operation (optional).
    axis:  A 0-D int Tensor representing the axis in `tensor` to mask from. By
      default, axis is 0 which will mask from the first dimension. Otherwise K +
      axis <= N.

  Returns:
    (N-K+1)-dimensional tensor populated by entries in `tensor` corresponding
    to `True` values in `mask`.

  Raises:
    ValueError:  If shapes do not conform.
  """

  def _apply_mask_1d(reshaped_tensor, mask, axis=None):
    """Mask tensor along dimension 0 with a 1-D mask."""
    indices = squeeze(where_v2(mask), axis=[1])
    return gather(reshaped_tensor, indices, axis=axis)

  with ops.name_scope(name, values=[tensor, mask]):
    tensor = ops.convert_to_tensor(tensor, name="tensor")
    mask = ops.convert_to_tensor(mask, name="mask")

    shape_mask = mask.get_shape()
    ndims_mask = shape_mask.ndims
    shape_tensor = tensor.get_shape()
    if ndims_mask == 0:
      raise ValueError("mask cannot be scalar.")
    if ndims_mask is None:
      raise ValueError(
          "Number of mask dimensions must be specified, even if some dimensions"
          " are None.  E.g. shape=[None] is ok, but shape=None is not.")

E ValueError: in user code:
E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function *
E return step_function(self, iterator)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/losses/pairwise.py", line 75, in call *
E (
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/losses/pairwise.py", line 152, in _separate_positives_negatives_scores *
E y_pred_valid_rows = tf.boolean_mask(y_pred, valid_rows_with_positive_mask)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler **
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 1082, in op_dispatch_handler
E return dispatch_target(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py", line 1978, in boolean_mask_v2
E return boolean_mask(tensor, mask, name, axis)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 1082, in op_dispatch_handler
E return dispatch_target(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py", line 1891, in boolean_mask
E raise ValueError(
E
E ValueError: Number of mask dimensions must be specified, even if some dimensions are None. E.g. shape=[None] is ok, but shape=None is not.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py:1891: ValueError
------------------------------ Captured log call -------------------------------
WARNING merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch.
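Note on the failures above: `tf.boolean_mask` raises this ValueError when the mask reaches it with a fully unknown rank (`shape=<unknown>`) in graph mode, as happens for `valid_rows_with_positive_mask` in `merlin/models/tf/losses/pairwise.py`. The sketch below only illustrates the constraint stated in the error message (the mask needs a statically known number of dimensions); the function and variable names are hypothetical and this is not the fix applied in this PR.

```python
import tensorflow as tf


@tf.function
def masked_rows(y_pred, mask):
    # "shape=[None] is ok, but shape=None is not": give the mask a static
    # rank of 1 before calling tf.boolean_mask, so its number of dimensions
    # is known even when its length is not.
    mask = tf.reshape(mask, [-1])
    return tf.boolean_mask(y_pred, mask)


y_pred = tf.random.uniform((4, 3))
mask = tf.constant([True, False, True, False])
print(masked_rows(y_pred, mask))  # keeps rows 0 and 2 of y_pred
```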
________ test_two_tower_model_with_custom_options[bpr-max-False-False] _________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7eff9caa69a0>
run_eagerly = False, logits_pop_logq_correction = False, loss = 'bpr-max'

@pytest.mark.parametrize("run_eagerly", [True, False])
@pytest.mark.parametrize("logits_pop_logq_correction", [True, False])
@pytest.mark.parametrize("loss", ["categorical_crossentropy", "bpr-max", "binary_crossentropy"])
def test_two_tower_model_with_custom_options(
    ecommerce_data: Dataset,
    run_eagerly,
    logits_pop_logq_correction,
    loss,
):
    from tensorflow.keras import regularizers

    from merlin.models.tf.transforms.bias import PopularityLogitsCorrection
    from merlin.models.utils import schema_utils

    data = ecommerce_data
    data.schema = data.schema.select_by_name(["user_categories", "item_id"])

    metrics = [
        tf.keras.metrics.AUC(from_logits=True, name="auc"),
        mm.RecallAt(5),
        mm.RecallAt(10),
        mm.MRRAt(10),
        mm.NDCGAt(10),
    ]

    post_logits = None
    if logits_pop_logq_correction:
        cardinalities = schema_utils.categorical_cardinalities(data.schema)
        item_id_cardinalities = cardinalities[
            data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
        ]
        items_frequencies = tf.sort(
            tf.random.uniform((item_id_cardinalities,), minval=0, maxval=1000, dtype=tf.int32)
        )
        post_logits = PopularityLogitsCorrection(
            items_frequencies,
            schema=data.schema,
        )

    retrieval_task = mm.ItemRetrievalTask(
        samplers=[mm.InBatchSampler()],
        schema=data.schema,
        logits_temperature=0.1,
        post_logits=post_logits,
        store_negative_ids=True,
    )

    model = mm.TwoTowerModel(
        data.schema,
        query_tower=mm.MLPBlock(
            [2],
            activation="relu",
            no_activation_last_layer=True,
            dropout=0.1,
            kernel_regularizer=regularizers.l2(1e-5),
            bias_regularizer=regularizers.l2(1e-6),
        ),
        embedding_options=mm.EmbeddingOptions(
            infer_embedding_sizes=True,
            infer_embedding_sizes_multiplier=3.0,
            infer_embeddings_ensure_dim_multiple_of_8=True,
            embeddings_l2_reg=1e-5,
        ),
        prediction_tasks=retrieval_task,
    )

    model.compile(optimizer="adam", run_eagerly=run_eagerly, loss=loss, metrics=metrics)
  losses = model.fit(data, batch_size=50, epochs=1, steps_per_epoch=1)

tests/unit/tf/models/test_retrieval.py:179:


merlin/models/tf/models/base.py:763: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in call
result = self.call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in call
self.initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in initialize
self.stateful_fn.get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in get_concrete_function_internal_garbage_collected
graph_function, _ = self.maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in maybe_define_function
graph_function = self.create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in create_graph_function
func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
out = weak_wrapped_fn().wrapped(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/autograph_generated_file8swt47q4.py:15: in tf__train_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self.extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self.call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:692: in wrapper
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:618: in train_step
loss = self.compute_loss(x, outputs.targets, outputs.predictions, outputs.sample_weight)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:948: in compute_loss
return self.compiled_loss(
/usr/local/lib/python3.8/dist-packages/keras/engine/compile_utils.py:201: in call
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
merlin/models/tf/losses/pairwise.py:56: in call
loss = super().call(y_true, y_pred, sample_weight)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:139: in call
losses = call_fn(y_true, y_pred)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:692: in wrapper
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/autograph_generated_filelr4bh7gi.py:11: in tf__call
(positives_scores, negatives_scores, valid_rows_with_positive_mask) = ag__.converted_call(ag__.ld(self)._separate_positives_negatives_scores, (ag__.ld(y_true), ag__.ld(y_pred)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:441: in converted_call
result = converted_f(*effective_args)
/tmp/autograph_generated_file8czp35ng.py:15: in tf___separate_positives_negatives_scores
y_pred_valid_rows = ag__.converted_call(ag__.ld(tf).boolean_mask, (ag__.ld(y_pred), ag__.ld(valid_rows_with_positive_mask)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:331: in converted_call
return _call_unconverted(f, args, kwargs, options, False)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
return dispatch_target(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py:1978: in boolean_mask_v2
return boolean_mask(tensor, mask, name, axis)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
return dispatch_target(*args, **kwargs)


tensor = <tf.Tensor 'retrieval_model/truediv:0' shape=(None, None) dtype=float32>
mask = <tf.Tensor 'BPRmaxLoss/Cast:0' shape=<unknown> dtype=bool>
name = 'boolean_mask', axis = None

@tf_export(v1=["boolean_mask"])
@dispatch.add_dispatch_support
def boolean_mask(tensor, mask, name="boolean_mask", axis=None):
  """Apply boolean mask to tensor.

  Numpy equivalent is `tensor[mask]`.

  In general, `0 < dim(mask) = K <= dim(tensor)`, and `mask`'s shape must match
  the first K dimensions of `tensor`'s shape.  We then have:
    `boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]`
  where `(i1,...,iK)` is the ith `True` entry of `mask` (row-major order).
  The `axis` could be used with `mask` to indicate the axis to mask from.
  In that case, `axis + dim(mask) <= dim(tensor)` and `mask`'s shape must match
  the first `axis + dim(mask)` dimensions of `tensor`'s shape.

  See also: `tf.ragged.boolean_mask`, which can be applied to both dense and
  ragged tensors, and can be used if you need to preserve the masked dimensions
  of `tensor` (rather than flattening them, as `tf.boolean_mask` does).

  Examples:

  ```python
  # 1-D example
  tensor = [0, 1, 2, 3]
  mask = np.array([True, False, True, False])
  tf.boolean_mask(tensor, mask)  # [0, 2]

  # 2-D example
  tensor = [[1, 2], [3, 4], [5, 6]]
  mask = np.array([True, False, True])
  tf.boolean_mask(tensor, mask)  # [[1, 2], [5, 6]]
  ```

  Args:
    tensor:  N-D Tensor.
    mask:  K-D boolean Tensor, K <= N and K must be known statically.
    name:  A name for this operation (optional).
    axis:  A 0-D int Tensor representing the axis in `tensor` to mask from. By
      default, axis is 0 which will mask from the first dimension. Otherwise K +
      axis <= N.

  Returns:
    (N-K+1)-dimensional tensor populated by entries in `tensor` corresponding
    to `True` values in `mask`.

  Raises:
    ValueError:  If shapes do not conform.
  """

  def _apply_mask_1d(reshaped_tensor, mask, axis=None):
    """Mask tensor along dimension 0 with a 1-D mask."""
    indices = squeeze(where_v2(mask), axis=[1])
    return gather(reshaped_tensor, indices, axis=axis)

  with ops.name_scope(name, values=[tensor, mask]):
    tensor = ops.convert_to_tensor(tensor, name="tensor")
    mask = ops.convert_to_tensor(mask, name="mask")

    shape_mask = mask.get_shape()
    ndims_mask = shape_mask.ndims
    shape_tensor = tensor.get_shape()
    if ndims_mask == 0:
      raise ValueError("mask cannot be scalar.")
    if ndims_mask is None:
      raise ValueError(
          "Number of mask dimensions must be specified, even if some dimensions"
          " are None.  E.g. shape=[None] is ok, but shape=None is not.")

E ValueError: in user code:
E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function *
E return step_function(self, iterator)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/losses/pairwise.py", line 75, in call *
E (
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/losses/pairwise.py", line 152, in _separate_positives_negatives_scores *
E y_pred_valid_rows = tf.boolean_mask(y_pred, valid_rows_with_positive_mask)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler **
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 1082, in op_dispatch_handler
E return dispatch_target(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py", line 1978, in boolean_mask_v2
E return boolean_mask(tensor, mask, name, axis)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 1082, in op_dispatch_handler
E return dispatch_target(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py", line 1891, in boolean_mask
E raise ValueError(
E
E ValueError: Number of mask dimensions must be specified, even if some dimensions are None. E.g. shape=[None] is ok, but shape=None is not.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py:1891: ValueError
------------------------------ Captured log call -------------------------------
WARNING merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch.
____________ test_two_tower_retrieval_model_with_metrics[bpr-False] ____________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7eff8d2b6460>
run_eagerly = False, loss = 'bpr'

@pytest.mark.parametrize("run_eagerly", [True, False])
@pytest.mark.parametrize(
    "loss", ["categorical_crossentropy", "bpr", "bpr-max", "binary_crossentropy"]
)
def test_two_tower_retrieval_model_with_metrics(ecommerce_data: Dataset, run_eagerly, loss):
    ecommerce_data.schema = ecommerce_data.schema.select_by_name(["user_categories", "item_id"])

    metrics = [RecallAt(5), MRRAt(5), NDCGAt(5), AvgPrecisionAt(5), PrecisionAt(5)]
    model = mm.TwoTowerModel(schema=ecommerce_data.schema, query_tower=mm.MLPBlock([4]))
    model.compile(optimizer="adam", run_eagerly=run_eagerly, metrics=metrics, loss=loss)

    # Training
  losses = model.fit(
        ecommerce_data,
        batch_size=10,
        epochs=1,
        steps_per_epoch=1,
        train_metrics_steps=3,
        validation_data=ecommerce_data,
        validation_steps=3,
    )

tests/unit/tf/models/test_retrieval.py:209:


merlin/models/tf/models/base.py:763: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in call
result = self.call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in call
self.initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in initialize
self.stateful_fn.get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in get_concrete_function_internal_garbage_collected
graph_function, _ = self.maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in maybe_define_function
graph_function = self.create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in create_graph_function
func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
out = weak_wrapped_fn().wrapped(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/autograph_generated_file8swt47q4.py:15: in tf__train_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self.extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self.call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:692: in wrapper
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:618: in train_step
loss = self.compute_loss(x, outputs.targets, outputs.predictions, outputs.sample_weight)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:948: in compute_loss
return self.compiled_loss(
/usr/local/lib/python3.8/dist-packages/keras/engine/compile_utils.py:201: in call
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
merlin/models/tf/losses/pairwise.py:56: in call
loss = super().call(y_true, y_pred, sample_weight)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:139: in call
losses = call_fn(y_true, y_pred)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:692: in wrapper
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/autograph_generated_filelr4bh7gi.py:11: in tf__call
(positives_scores, negatives_scores, valid_rows_with_positive_mask) = ag__.converted_call(ag__.ld(self)._separate_positives_negatives_scores, (ag__.ld(y_true), ag__.ld(y_pred)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:441: in converted_call
result = converted_f(*effective_args)
/tmp/autograph_generated_file8czp35ng.py:15: in tf___separate_positives_negatives_scores
y_pred_valid_rows = ag__.converted_call(ag__.ld(tf).boolean_mask, (ag__.ld(y_pred), ag__.ld(valid_rows_with_positive_mask)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:331: in converted_call
return _call_unconverted(f, args, kwargs, options, False)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
return dispatch_target(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py:1978: in boolean_mask_v2
return boolean_mask(tensor, mask, name, axis)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
return dispatch_target(*args, **kwargs)


tensor = <tf.Tensor 'retrieval_model/StatefulPartitionedCall:0' shape=(None, None) dtype=float32>
mask = <tf.Tensor 'BPRLoss/Cast:0' shape=<unknown> dtype=bool>
name = 'boolean_mask', axis = None

@tf_export(v1=["boolean_mask"])
@dispatch.add_dispatch_support
def boolean_mask(tensor, mask, name="boolean_mask", axis=None):
  """Apply boolean mask to tensor.

  Numpy equivalent is `tensor[mask]`.

  In general, `0 < dim(mask) = K <= dim(tensor)`, and `mask`'s shape must match
  the first K dimensions of `tensor`'s shape.  We then have:
    `boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]`
  where `(i1,...,iK)` is the ith `True` entry of `mask` (row-major order).
  The `axis` could be used with `mask` to indicate the axis to mask from.
  In that case, `axis + dim(mask) <= dim(tensor)` and `mask`'s shape must match
  the first `axis + dim(mask)` dimensions of `tensor`'s shape.

  See also: `tf.ragged.boolean_mask`, which can be applied to both dense and
  ragged tensors, and can be used if you need to preserve the masked dimensions
  of `tensor` (rather than flattening them, as `tf.boolean_mask` does).

  Examples:

  ```python
  # 1-D example
  tensor = [0, 1, 2, 3]
  mask = np.array([True, False, True, False])
  tf.boolean_mask(tensor, mask)  # [0, 2]

  # 2-D example
  tensor = [[1, 2], [3, 4], [5, 6]]
  mask = np.array([True, False, True])
  tf.boolean_mask(tensor, mask)  # [[1, 2], [5, 6]]
  ```

  Args:
    tensor:  N-D Tensor.
    mask:  K-D boolean Tensor, K <= N and K must be known statically.
    name:  A name for this operation (optional).
    axis:  A 0-D int Tensor representing the axis in `tensor` to mask from. By
      default, axis is 0 which will mask from the first dimension. Otherwise K +
      axis <= N.

  Returns:
    (N-K+1)-dimensional tensor populated by entries in `tensor` corresponding
    to `True` values in `mask`.

  Raises:
    ValueError:  If shapes do not conform.
  """

  def _apply_mask_1d(reshaped_tensor, mask, axis=None):
    """Mask tensor along dimension 0 with a 1-D mask."""
    indices = squeeze(where_v2(mask), axis=[1])
    return gather(reshaped_tensor, indices, axis=axis)

  with ops.name_scope(name, values=[tensor, mask]):
    tensor = ops.convert_to_tensor(tensor, name="tensor")
    mask = ops.convert_to_tensor(mask, name="mask")

    shape_mask = mask.get_shape()
    ndims_mask = shape_mask.ndims
    shape_tensor = tensor.get_shape()
    if ndims_mask == 0:
      raise ValueError("mask cannot be scalar.")
    if ndims_mask is None:
      raise ValueError(
          "Number of mask dimensions must be specified, even if some dimensions"
          " are None.  E.g. shape=[None] is ok, but shape=None is not.")

E ValueError: in user code:
E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function *
E return step_function(self, iterator)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/losses/pairwise.py", line 75, in call *
E (
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/losses/pairwise.py", line 152, in _separate_positives_negatives_scores *
E y_pred_valid_rows = tf.boolean_mask(y_pred, valid_rows_with_positive_mask)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler **
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 1082, in op_dispatch_handler
E return dispatch_target(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py", line 1978, in boolean_mask_v2
E return boolean_mask(tensor, mask, name, axis)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 1082, in op_dispatch_handler
E return dispatch_target(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py", line 1891, in boolean_mask
E raise ValueError(
E
E ValueError: Number of mask dimensions must be specified, even if some dimensions are None. E.g. shape=[None] is ok, but shape=None is not.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py:1891: ValueError
------------------------------ Captured log call -------------------------------
WARNING merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch.
__________ test_two_tower_retrieval_model_with_metrics[bpr-max-False] __________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7eff9c3a9e80>
run_eagerly = False, loss = 'bpr-max'

@pytest.mark.parametrize("run_eagerly", [True, False])
@pytest.mark.parametrize(
    "loss", ["categorical_crossentropy", "bpr", "bpr-max", "binary_crossentropy"]
)
def test_two_tower_retrieval_model_with_metrics(ecommerce_data: Dataset, run_eagerly, loss):
    ecommerce_data.schema = ecommerce_data.schema.select_by_name(["user_categories", "item_id"])

    metrics = [RecallAt(5), MRRAt(5), NDCGAt(5), AvgPrecisionAt(5), PrecisionAt(5)]
    model = mm.TwoTowerModel(schema=ecommerce_data.schema, query_tower=mm.MLPBlock([4]))
    model.compile(optimizer="adam", run_eagerly=run_eagerly, metrics=metrics, loss=loss)

    # Training
  losses = model.fit(
        ecommerce_data,
        batch_size=10,
        epochs=1,
        steps_per_epoch=1,
        train_metrics_steps=3,
        validation_data=ecommerce_data,
        validation_steps=3,
    )

tests/unit/tf/models/test_retrieval.py:209:


merlin/models/tf/models/base.py:763: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in call
result = self.call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in call
self.initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in initialize
self.stateful_fn.get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in get_concrete_function_internal_garbage_collected
graph_function, _ = self.maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in maybe_define_function
graph_function = self.create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in create_graph_function
func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
out = weak_wrapped_fn().wrapped(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/autograph_generated_file8swt47q4.py:15: in tf__train_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self.extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self.call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:692: in wrapper
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:618: in train_step
loss = self.compute_loss(x, outputs.targets, outputs.predictions, outputs.sample_weight)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:948: in compute_loss
return self.compiled_loss(
/usr/local/lib/python3.8/dist-packages/keras/engine/compile_utils.py:201: in call
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
merlin/models/tf/losses/pairwise.py:56: in call
loss = super().call(y_true, y_pred, sample_weight)
/usr/local/lib/python3.8/dist-packages/keras/losses.py:139: in call
losses = call_fn(y_true, y_pred)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:692: in wrapper
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/autograph_generated_filelr4bh7gi.py:11: in tf__call
(positives_scores, negatives_scores, valid_rows_with_positive_mask) = ag__.converted_call(ag__.ld(self)._separate_positives_negatives_scores, (ag__.ld(y_true), ag__.ld(y_pred)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:441: in converted_call
result = converted_f(*effective_args)
/tmp/autograph_generated_file8czp35ng.py:15: in tf___separate_positives_negatives_scores
y_pred_valid_rows = ag__.converted_call(ag__.ld(tf).boolean_mask, (ag__.ld(y_pred), ag__.ld(valid_rows_with_positive_mask)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:331: in converted_call
return _call_unconverted(f, args, kwargs, options, False)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
return dispatch_target(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py:1978: in boolean_mask_v2
return boolean_mask(tensor, mask, name, axis)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
return dispatch_target(*args, **kwargs)


tensor = <tf.Tensor 'retrieval_model/StatefulPartitionedCall:0' shape=(None, None) dtype=float32>
mask = <tf.Tensor 'BPRmaxLoss/Cast:0' shape=<unknown> dtype=bool>
name = 'boolean_mask', axis = None

@tf_export(v1=["boolean_mask"])
@dispatch.add_dispatch_support
def boolean_mask(tensor, mask, name="boolean_mask", axis=None):
  """Apply boolean mask to tensor.

  Numpy equivalent is `tensor[mask]`.

  In general, `0 < dim(mask) = K <= dim(tensor)`, and `mask`'s shape must match
  the first K dimensions of `tensor`'s shape.  We then have:
    `boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]`
  where `(i1,...,iK)` is the ith `True` entry of `mask` (row-major order).
  The `axis` could be used with `mask` to indicate the axis to mask from.
  In that case, `axis + dim(mask) <= dim(tensor)` and `mask`'s shape must match
  the first `axis + dim(mask)` dimensions of `tensor`'s shape.

  See also: `tf.ragged.boolean_mask`, which can be applied to both dense and
  ragged tensors, and can be used if you need to preserve the masked dimensions
  of `tensor` (rather than flattening them, as `tf.boolean_mask` does).

  Examples:

  ```python
  # 1-D example
  tensor = [0, 1, 2, 3]
  mask = np.array([True, False, True, False])
  tf.boolean_mask(tensor, mask)  # [0, 2]

  # 2-D example
  tensor = [[1, 2], [3, 4], [5, 6]]
  mask = np.array([True, False, True])
  tf.boolean_mask(tensor, mask)  # [[1, 2], [5, 6]]
  ```

  Args:
    tensor:  N-D Tensor.
    mask:  K-D boolean Tensor, K <= N and K must be known statically.
    name:  A name for this operation (optional).
    axis:  A 0-D int Tensor representing the axis in `tensor` to mask from. By
      default, axis is 0 which will mask from the first dimension. Otherwise K +
      axis <= N.

  Returns:
    (N-K+1)-dimensional tensor populated by entries in `tensor` corresponding
    to `True` values in `mask`.

  Raises:
    ValueError:  If shapes do not conform.
  """

  def _apply_mask_1d(reshaped_tensor, mask, axis=None):
    """Mask tensor along dimension 0 with a 1-D mask."""
    indices = squeeze(where_v2(mask), axis=[1])
    return gather(reshaped_tensor, indices, axis=axis)

  with ops.name_scope(name, values=[tensor, mask]):
    tensor = ops.convert_to_tensor(tensor, name="tensor")
    mask = ops.convert_to_tensor(mask, name="mask")

    shape_mask = mask.get_shape()
    ndims_mask = shape_mask.ndims
    shape_tensor = tensor.get_shape()
    if ndims_mask == 0:
      raise ValueError("mask cannot be scalar.")
    if ndims_mask is None:
      raise ValueError(
          "Number of mask dimensions must be specified, even if some dimensions"
          " are None.  E.g. shape=[None] is ok, but shape=None is not.")

E ValueError: in user code:
E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function *
E return step_function(self, iterator)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/losses/pairwise.py", line 75, in call *
E (
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/losses/pairwise.py", line 152, in _separate_positives_negatives_scores *
E y_pred_valid_rows = tf.boolean_mask(y_pred, valid_rows_with_positive_mask)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler **
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 1082, in op_dispatch_handler
E return dispatch_target(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py", line 1978, in boolean_mask_v2
E return boolean_mask(tensor, mask, name, axis)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 1082, in op_dispatch_handler
E return dispatch_target(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py", line 1891, in boolean_mask
E raise ValueError(
E
E ValueError: Number of mask dimensions must be specified, even if some dimensions are None. E.g. shape=[None] is ok, but shape=None is not.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py:1891: ValueError
------------------------------ Captured log call -------------------------------
WARNING merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch.
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 5 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 22 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 56 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 22 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

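Note on the Tags.ITEM_ID deprecation repeated above: the warning asks for the atomic tags instead of the compound one. A hedged sketch of the equivalent selection, assuming a schema object like the ones used elsewhere in this PR (chaining select_by_tag keeps only the columns carrying both tags):

from merlin.schema import Tags

# Deprecated compound tag:
#   item_id_col = schema.select_by_tag(Tags.ITEM_ID).column_names[0]
# Atomic tags, combined by chaining the selections:
item_id_col = schema.select_by_tag(Tags.ITEM).select_by_tag(Tags.ID).column_names[0]
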
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 7 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 22 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 22 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:943: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

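The cupy deprecation above asks for cupy.from_dlpack instead of cupy.fromDlpack. A rough sketch of the suggested replacement (whether from_dlpack accepts the same DLPack capsule argument can depend on the CuPy version, so treat this as an assumption to verify):

import cupy
import tensorflow as tf

embeddings = tf.random.uniform((10, 8))
capsule = tf.experimental.dlpack.to_dlpack(embeddings)
# Deprecated: embeddings_cupy = cupy.fromDlpack(capsule)
embeddings_cupy = cupy.from_dlpack(capsule)  # replacement named by the warning
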
tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 50 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_file9w7o0_hc.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):

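The collections ABC deprecation above has a straightforward fix: import the ABCs from collections.abc, where they have lived since Python 3.3 (the old aliases are removed in 3.10). For example:

import collections.abc

# Deprecated: isinstance(feature_names, collections.Sequence)
isinstance(["item_id", "category"], collections.abc.Sequence)  # True
isinstance("item_id", collections.abc.Sequence)                # also True: str is a Sequence
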
tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 22 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

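The random_binomial deprecation above names tf.keras.backend.random_bernoulli as the replacement; it draws the same 0/1 samples. A small sketch (shape and probability are illustrative):

import tensorflow as tf

# Deprecated: noise = tf.keras.backend.random_binomial(shape=(4, 10), p=0.3)
noise = tf.keras.backend.random_bernoulli(shape=(4, 10), p=0.3)
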
tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

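The lr deprecation above is a Keras optimizer keyword rename; passing learning_rate silences it:

import tensorflow as tf

# Deprecated: optimizer = tf.keras.optimizers.SGD(lr=0.01)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
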
tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_base.py::test_retrieval_model_query
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

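The torch warning above concerns building a tensor from a Python list of numpy arrays; stacking the list into a single ndarray first avoids the slow element-wise copy. A sketch with made-up data:

import numpy as np
import torch

data = {"feature": [np.zeros(3), np.ones(3)]}

# Slow path flagged by the warning:
#   {key: torch.tensor(value) for key, value in data.items()}
# Faster: convert each list to one ndarray first
tensors = {key: torch.tensor(np.array(value)) for key, value in data.items()}
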
tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 4 failed, 755 passed, 12 skipped, 1179 warnings in 1285.10s (0:21:25) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins5214004980253581670.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit f96eb21a097a2a40d7a71dce617ac61550c6b26f, no merge conflicts.
Running as SYSTEM
Setting status of f96eb21a097a2a40d7a71dce617ac61550c6b26f to PENDING with url https://10.20.13.93:8080/job/merlin_models/1478/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse f96eb21a097a2a40d7a71dce617ac61550c6b26f^{commit} # timeout=10
Checking out Revision f96eb21a097a2a40d7a71dce617ac61550c6b26f (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f96eb21a097a2a40d7a71dce617ac61550c6b26f # timeout=10
Commit message: "Fixes tests by making adjust_predictions_and_targets ensure targets has the same shape and dtype as predictions"
 > git rev-list --no-walk 803ede781a321271ea82b50801483b8204b6fe3e # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins16524472841034863610.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 771 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 29%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 30%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%]
tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 46%]
tests/unit/tf/models/test_base.py s................... [ 49%]
tests/unit/tf/models/test_benchmark.py .. [ 49%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 59%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 61%]
tests/unit/tf/outputs/test_sampling.py .... [ 61%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 64%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 65%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 66%]
tests/unit/tf/transformers/test_block.py .................. [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 69%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 73%]
....................s...... [ 77%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 78%]
tests/unit/tf/transforms/test_noise.py ..... [ 79%]
tests/unit/tf/transforms/test_sequence.py ........................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 5 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 22 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 22 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 7 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 22 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 22 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:943: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_file9iwl9_ue.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_base.py::test_retrieval_model_query
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(
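
A note on these "Converting sparse IndexedSlices ... to a dense Tensor of unknown shape" warnings: the gradient of an embedding lookup arrives as a tf.IndexedSlices (only the gathered rows carry gradients), and TensorFlow densifies it when it cannot infer the dense shape, which is where the memory cost comes from. A minimal, self-contained sketch of that densification (toy sizes, not the model from this PR):

import tensorflow as tf

emb = tf.Variable(tf.random.normal([1000, 16]))
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(tf.gather(emb, [1, 2, 3]))

grad = tape.gradient(loss, emb)      # tf.IndexedSlices: gradient rows only for the gathered indices
dense = tf.convert_to_tensor(grad)   # densifying materializes the full [1000, 16] gradient tensor

The warnings do not fail the build; they only flag the potential memory cost of this conversion inside the ragged-to-dense paths of the transformer blocks.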

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}
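
The fix this warning suggests is to build one ndarray first and only then convert to a tensor. A minimal sketch of the pattern (the data dict below is hypothetical, standing in for the fixture built in _conftest.py):

import numpy as np
import torch

# Hypothetical fixture: each value is a list of equally shaped numpy arrays
data = {"feature": [np.zeros(3, dtype=np.float32) for _ in range(4)]}

# Converting the list to a single ndarray first avoids the slow element-by-element copy
tensors = {key: torch.tensor(np.asarray(value)) for key, value in data.items()}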

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)
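
As the warning says, passing an explicit dtype removes the ambiguity for empty data; a one-line sketch (assuming a boolean mask, which the real cudf code path may not guarantee):

import pandas as pd

mask = pd.Series([], dtype="bool")  # explicit dtype instead of relying on the deprecated 'float64' default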

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========= 759 passed, 12 skipped, 1191 warnings in 1213.34s (0:20:13) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins2185422515212709277.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit 2d5f3079b34686d825d1b45fb9e87924e183c753, no merge conflicts.
Running as SYSTEM
Setting status of 2d5f3079b34686d825d1b45fb9e87924e183c753 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1480/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse 2d5f3079b34686d825d1b45fb9e87924e183c753^{commit} # timeout=10
Checking out Revision 2d5f3079b34686d825d1b45fb9e87924e183c753 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 2d5f3079b34686d825d1b45fb9e87924e183c753 # timeout=10
Commit message: "Adding test that checks if Keras masking is correctly considered by default keras losses and metrics (including our TopkMetrics and TopKMetricsAggregator)"
 > git rev-list --no-walk 14df5ba3418907994f0d446e8ba52abdce3230a2 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins2184554165736781198.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 773 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 13%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 20%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py ... [ 27%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 28%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 29%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py F [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 30%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 35%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%]
tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 46%]
tests/unit/tf/models/test_base.py s..................... [ 49%]
tests/unit/tf/models/test_benchmark.py .. [ 49%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 59%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 61%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 64%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 65%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 66%]
tests/unit/tf/transformers/test_block.py .................. [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 69%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 73%]
....................s...... [ 77%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 78%]
tests/unit/tf/transforms/test_noise.py ..... [ 79%]
tests/unit/tf/transforms/test_sequence.py ........................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
_________________ test_usecase_accelerate_training_by_lazyadam _________________

tb = <testbook.client.TestbookNotebookClient object at 0x7f9f646f7f10>

@testbook(
    REPO_ROOT / p,
    timeout=180,
    execute=False,
)
def test_usecase_accelerate_training_by_lazyadam(tb):
    tb.inject(
        """
        import os
        os.environ["NUM_ROWS"] = "1000"
        """
    )
  tb.execute()

tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py:22:


/usr/local/lib/python3.8/dist-packages/testbook/client.py:147: in execute
super().execute_cell(cell, index)
/usr/local/lib/python3.8/dist-packages/nbclient/util.py:85: in wrapped
return just_run(coro(*args, **kwargs))
/usr/local/lib/python3.8/dist-packages/nbclient/util.py:60: in just_run
return loop.run_until_complete(coro)
/usr/lib/python3.8/asyncio/base_events.py:616: in run_until_complete
return future.result()
/usr/local/lib/python3.8/dist-packages/nbclient/client.py:1025: in async_execute_cell
await self._check_raise_for_error(cell, cell_index, exec_reply)


self = <testbook.client.TestbookNotebookClient object at 0x7f9f646f7f10>
cell = {'cell_type': 'code', 'execution_count': 7, 'id': '0500ad25-29e0-40c8-85bc-6e3864107c6a', 'metadata': {'execution': {'...e_train_function_3806]']}], 'source': 'model1.compile(optimizer="adam")\nmodel1.fit(train, batch_size=1024, epochs=1)'}
cell_index = 12
exec_reply = {'buffers': [], 'content': {'ename': 'ResourceExhaustedError', 'engine_info': {'engine_id': -1, 'engine_uuid': 'd1b1bb...e, 'engine': 'd1b1bbff-ac51-4bf0-818d-d3f3deb6a123', 'started': '2022-10-07T18:05:42.525209Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(
        self.on_cell_error, cell=cell, cell_index=cell_index, execute_reply=exec_reply
    )
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E model1.compile(optimizer="adam")
E model1.fit(train, batch_size=1024, epochs=1)
E ------------------
E
E ---------------------------------------------------------------------------
E ResourceExhaustedError                    Traceback (most recent call last)
E Cell In [7], line 2
E       1 model1.compile(optimizer="adam")
E ----> 2 model1.fit(train, batch_size=1024, epochs=1)
E
E File ~/workspace/merlin_models/models/merlin/models/tf/models/base.py:789, in BaseModel.fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing, train_metrics_steps, **kwargs)
E     781 callbacks = self._add_metrics_callback(callbacks, train_metrics_steps)
E     783 fit_kwargs = {
E     784     k: v
E     785     for k, v in locals().items()
E     786     if k not in ["self", "kwargs", "train_metrics_steps", "__class__"]
E     787 }
E --> 789 return super().fit(**fit_kwargs)
E
E File /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:67, in filter_traceback.<locals>.error_handler(*args, **kwargs)
E      65 except Exception as e:  # pylint: disable=broad-except
E      66     filtered_tb = _process_traceback_frames(e.__traceback__)
E ---> 67     raise e.with_traceback(filtered_tb) from None
E      68 finally:
E      69     del filtered_tb
E
E File /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py:54, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
E      52 try:
E      53     ctx.ensure_initialized()
E ---> 54     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
E      55                                         inputs, attrs, num_outputs)
E      56 except core._NotOkStatusException as e:
E      57     if name is not None:
E
E ResourceExhaustedError: Graph execution error:
E
E Detected at node 'Adam/Adam/update_17/mul_1' defined at (most recent call last):
E File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
E return _run_code(code, main_globals, None,
E File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
E exec(code, run_globals)
E File "/usr/local/lib/python3.8/dist-packages/ipykernel_launcher.py", line 17, in
E app.launch_new_instance()
E File "/usr/local/lib/python3.8/dist-packages/traitlets/config/application.py", line 978, in launch_instance
E app.start()
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelapp.py", line 712, in start
E self.io_loop.start()
E File "/usr/local/lib/python3.8/dist-packages/tornado/platform/asyncio.py", line 215, in start
E self.asyncio_loop.run_forever()
E File "/usr/lib/python3.8/asyncio/base_events.py", line 570, in run_forever
E self._run_once()
E File "/usr/lib/python3.8/asyncio/base_events.py", line 1859, in _run_once
E handle._run()
E File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run
E self._context.run(self._callback, *self._args)
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 510, in dispatch_queue
E await self.process_one()
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 499, in process_one
E await dispatch(*args)
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 406, in dispatch_shell
E await result
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 730, in execute_request
E reply_content = await reply_content
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py", line 383, in do_execute
E res = shell.run_cell(
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/zmqshell.py", line 528, in run_cell
E return super().run_cell(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2885, in run_cell
E result = self._run_cell(
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2940, in _run_cell
E return runner(coro)
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/async_helpers.py", line 129, in pseudo_sync_runner
E coro.send(None)
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3139, in run_cell_async
E has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3318, in run_ast_nodes
E if await self.run_code(code, result, async_=asy):
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3378, in run_code
E exec(code_obj, self.user_global_ns, self.user_ns)
E File "/tmp/ipykernel_7043/3741080137.py", line 2, in
E model1.fit(train, batch_size=1024, epochs=1)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 789, in fit
E return super().fit(**fit_kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1409, in fit
E tmp_logs = self.train_function(iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function
E return step_function(self, iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function
E outputs = model.distribute_strategy.run(run_step, args=(data,))
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step
E outputs = model.train_step(data)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 649, in train_step
E self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 539, in minimize
E return self.apply_gradients(grads_and_vars, name=name)
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 678, in apply_gradients
E return tf.__internal__.distribute.interim.maybe_merge_call(
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 723, in _distributed_apply
E update_op = distribution.extended.update(
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 701, in apply_grad_to_update_var
E return self._resource_apply_sparse_duplicate_indices(
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 1326, in _resource_apply_sparse_duplicate_indices
E return self._resource_apply_sparse(summed_grad, handle, unique_indices,
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/adam.py", line 206, in _resource_apply_sparse
E m_t = tf.compat.v1.assign(m, m * coefficients['beta_1_t'],
E Node: 'Adam/Adam/update_17/mul_1'
E Detected at node 'Adam/Adam/update_17/mul_1' defined at (most recent call last):
E File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
E return _run_code(code, main_globals, None,
E File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
E exec(code, run_globals)
E File "/usr/local/lib/python3.8/dist-packages/ipykernel_launcher.py", line 17, in
E app.launch_new_instance()
E File "/usr/local/lib/python3.8/dist-packages/traitlets/config/application.py", line 978, in launch_instance
E app.start()
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelapp.py", line 712, in start
E self.io_loop.start()
E File "/usr/local/lib/python3.8/dist-packages/tornado/platform/asyncio.py", line 215, in start
E self.asyncio_loop.run_forever()
E File "/usr/lib/python3.8/asyncio/base_events.py", line 570, in run_forever
E self._run_once()
E File "/usr/lib/python3.8/asyncio/base_events.py", line 1859, in _run_once
E handle._run()
E File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run
E self._context.run(self._callback, *self._args)
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 510, in dispatch_queue
E await self.process_one()
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 499, in process_one
E await dispatch(*args)
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 406, in dispatch_shell
E await result
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 730, in execute_request
E reply_content = await reply_content
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py", line 383, in do_execute
E res = shell.run_cell(
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/zmqshell.py", line 528, in run_cell
E return super().run_cell(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2885, in run_cell
E result = self._run_cell(
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2940, in _run_cell
E return runner(coro)
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/async_helpers.py", line 129, in pseudo_sync_runner
E coro.send(None)
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3139, in run_cell_async
E has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3318, in run_ast_nodes
E if await self.run_code(code, result, async_=asy):
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3378, in run_code
E exec(code_obj, self.user_global_ns, self.user_ns)
E File "/tmp/ipykernel_7043/3741080137.py", line 2, in
E model1.fit(train, batch_size=1024, epochs=1)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 789, in fit
E return super().fit(**fit_kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1409, in fit
E tmp_logs = self.train_function(iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function
E return step_function(self, iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function
E outputs = model.distribute_strategy.run(run_step, args=(data,))
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step
E outputs = model.train_step(data)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 649, in train_step
E self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 539, in minimize
E return self.apply_gradients(grads_and_vars, name=name)
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 678, in apply_gradients
E return tf.__internal__.distribute.interim.maybe_merge_call(
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 723, in _distributed_apply
E update_op = distribution.extended.update(
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 701, in apply_grad_to_update_var
E return self._resource_apply_sparse_duplicate_indices(
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 1326, in _resource_apply_sparse_duplicate_indices
E return self._resource_apply_sparse(summed_grad, handle, unique_indices,
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/adam.py", line 206, in _resource_apply_sparse
E m_t = tf.compat.v1.assign(m, m * coefficients['beta_1_t'],
E Node: 'Adam/Adam/update_17/mul_1'
E 2 root error(s) found.
E (0) RESOURCE_EXHAUSTED: failed to allocate memory
E [[{{node Adam/Adam/update_17/mul_1}}]]
E Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
E
E [[StatefulPartitionedCall/cond_1/pivot_t/_139/_55]]
E Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
E
E (1) RESOURCE_EXHAUSTED: failed to allocate memory
E [[{{node Adam/Adam/update_17/mul_1}}]]
E Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
E
E 0 successful operations.
E 0 derived errors ignored. [Op:__inference_train_function_3806]
E ResourceExhaustedError: Graph execution error:
E
E Detected at node 'Adam/Adam/update_17/mul_1' defined at (most recent call last):
E File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
E return _run_code(code, main_globals, None,
E File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
E exec(code, run_globals)
E File "/usr/local/lib/python3.8/dist-packages/ipykernel_launcher.py", line 17, in
E app.launch_new_instance()
E File "/usr/local/lib/python3.8/dist-packages/traitlets/config/application.py", line 978, in launch_instance
E app.start()
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelapp.py", line 712, in start
E self.io_loop.start()
E File "/usr/local/lib/python3.8/dist-packages/tornado/platform/asyncio.py", line 215, in start
E self.asyncio_loop.run_forever()
E File "/usr/lib/python3.8/asyncio/base_events.py", line 570, in run_forever
E self._run_once()
E File "/usr/lib/python3.8/asyncio/base_events.py", line 1859, in _run_once
E handle._run()
E File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run
E self._context.run(self._callback, *self._args)
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 510, in dispatch_queue
E await self.process_one()
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 499, in process_one
E await dispatch(*args)
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 406, in dispatch_shell
E await result
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 730, in execute_request
E reply_content = await reply_content
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py", line 383, in do_execute
E res = shell.run_cell(
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/zmqshell.py", line 528, in run_cell
E return super().run_cell(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2885, in run_cell
E result = self._run_cell(
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2940, in _run_cell
E return runner(coro)
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/async_helpers.py", line 129, in pseudo_sync_runner
E coro.send(None)
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3139, in run_cell_async
E has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3318, in run_ast_nodes
E if await self.run_code(code, result, async_=asy):
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3378, in run_code
E exec(code_obj, self.user_global_ns, self.user_ns)
E File "/tmp/ipykernel_7043/3741080137.py", line 2, in
E model1.fit(train, batch_size=1024, epochs=1)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 789, in fit
E return super().fit(**fit_kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1409, in fit
E tmp_logs = self.train_function(iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function
E return step_function(self, iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function
E outputs = model.distribute_strategy.run(run_step, args=(data,))
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step
E outputs = model.train_step(data)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 649, in train_step
E self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 539, in minimize
E return self.apply_gradients(grads_and_vars, name=name)
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 678, in apply_gradients
E return tf.__internal__.distribute.interim.maybe_merge_call(
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 723, in _distributed_apply
E update_op = distribution.extended.update(
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 701, in apply_grad_to_update_var
E return self._resource_apply_sparse_duplicate_indices(
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 1326, in _resource_apply_sparse_duplicate_indices
E return self._resource_apply_sparse(summed_grad, handle, unique_indices,
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/adam.py", line 206, in _resource_apply_sparse
E m_t = tf.compat.v1.assign(m, m * coefficients['beta_1_t'],
E Node: 'Adam/Adam/update_17/mul_1'
E Detected at node 'Adam/Adam/update_17/mul_1' defined at (most recent call last):
E File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
E return _run_code(code, main_globals, None,
E File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
E exec(code, run_globals)
E File "/usr/local/lib/python3.8/dist-packages/ipykernel_launcher.py", line 17, in
E app.launch_new_instance()
E File "/usr/local/lib/python3.8/dist-packages/traitlets/config/application.py", line 978, in launch_instance
E app.start()
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelapp.py", line 712, in start
E self.io_loop.start()
E File "/usr/local/lib/python3.8/dist-packages/tornado/platform/asyncio.py", line 215, in start
E self.asyncio_loop.run_forever()
E File "/usr/lib/python3.8/asyncio/base_events.py", line 570, in run_forever
E self._run_once()
E File "/usr/lib/python3.8/asyncio/base_events.py", line 1859, in _run_once
E handle._run()
E File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run
E self._context.run(self._callback, *self._args)
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 510, in dispatch_queue
E await self.process_one()
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 499, in process_one
E await dispatch(*args)
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 406, in dispatch_shell
E await result
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 730, in execute_request
E reply_content = await reply_content
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py", line 383, in do_execute
E res = shell.run_cell(
E File "/usr/local/lib/python3.8/dist-packages/ipykernel/zmqshell.py", line 528, in run_cell
E return super().run_cell(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2885, in run_cell
E result = self._run_cell(
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2940, in _run_cell
E return runner(coro)
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/async_helpers.py", line 129, in pseudo_sync_runner
E coro.send(None)
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3139, in run_cell_async
E has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3318, in run_ast_nodes
E if await self.run_code(code, result, async_=asy):
E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3378, in run_code
E exec(code_obj, self.user_global_ns, self.user_ns)
E File "/tmp/ipykernel_7043/3741080137.py", line 2, in
E model1.fit(train, batch_size=1024, epochs=1)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 789, in fit
E return super().fit(**fit_kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1409, in fit
E tmp_logs = self.train_function(iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function
E return step_function(self, iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function
E outputs = model.distribute_strategy.run(run_step, args=(data,))
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step
E outputs = model.train_step(data)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 649, in train_step
E self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 539, in minimize
E return self.apply_gradients(grads_and_vars, name=name)
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 678, in apply_gradients
E return tf.__internal__.distribute.interim.maybe_merge_call(
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 723, in _distributed_apply
E update_op = distribution.extended.update(
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 701, in apply_grad_to_update_var
E return self._resource_apply_sparse_duplicate_indices(
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 1326, in _resource_apply_sparse_duplicate_indices
E return self._resource_apply_sparse(summed_grad, handle, unique_indices,
E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/adam.py", line 206, in _resource_apply_sparse
E m_t = tf.compat.v1.assign(m, m * coefficients['beta_1_t'],
E Node: 'Adam/Adam/update_17/mul_1'
E 2 root error(s) found.
E (0) RESOURCE_EXHAUSTED: failed to allocate memory
E [[{{node Adam/Adam/update_17/mul_1}}]]
E Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
E
E [[StatefulPartitionedCall/cond_1/pivot_t/_139/_55]]
E Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
E
E (1) RESOURCE_EXHAUSTED: failed to allocate memory
E [[{{node Adam/Adam/update_17/mul_1}}]]
E Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
E
E 0 successful operations.
E 0 derived errors ignored. [Op:__inference_train_function_3806]

/usr/local/lib/python3.8/dist-packages/nbclient/client.py:919: CellExecutionError
----------------------------- Captured stderr call -----------------------------
2022-10-07 18:05:38.485946: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-10-07 18:05:41.053120: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:222] Using CUDA malloc Async allocator for GPU: 0
2022-10-07 18:05:41.053344: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1627 MB memory: -> device: 0, name: Tesla P100-DGXS-16GB, pci bus id: 0000:07:00.0, compute capability: 6.0
2022-10-07 18:05:41.057343: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:222] Using CUDA malloc Async allocator for GPU: 1
2022-10-07 18:05:41.057445: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 13841 MB memory: -> device: 1, name: Tesla P100-DGXS-16GB, pci bus id: 0000:08:00.0, compute capability: 6.0
2022-10-07 18:05:52.529775: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:288] gpu_async_0 cuMemAllocAsync failed to allocate 1083564064 bytes: CUDA error: out of memory (CUDA_ERROR_OUT_OF_MEMORY)
Reported by CUDA: Free memory/Total memory: 688455680/17069309952
2022-10-07 18:05:52.529833: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:293] Stats: Limit: 1706033152
InUse: 4189475427
MaxInUse: 4189475427
NumAllocs: 242
MaxAllocSize: 1083564064
Reserved: 0
PeakReserved: 0
LargestFreeBlock: 0

2022-10-07 18:05:52.529853: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:56] Histogram of current allocation: (allocation_size_in_bytes, nb_allocation_of_that_sizes), ...;
2022-10-07 18:05:52.529862: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1, 7
2022-10-07 18:05:52.529868: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4, 34
2022-10-07 18:05:52.529875: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 8, 8
2022-10-07 18:05:52.529881: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 40, 2
2022-10-07 18:05:52.529887: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 128, 7
2022-10-07 18:05:52.529893: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 160, 8
2022-10-07 18:05:52.529900: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 192, 4
2022-10-07 18:05:52.529906: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 256, 7
2022-10-07 18:05:52.529912: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 288, 5
2022-10-07 18:05:52.529918: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 480, 4
2022-10-07 18:05:52.529924: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1028, 1
2022-10-07 18:05:52.529930: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 3168, 3
2022-10-07 18:05:52.529936: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 61440, 3
2022-10-07 18:05:52.529942: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 65536, 3
2022-10-07 18:05:52.529948: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 584352, 5
2022-10-07 18:05:52.529955: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 823872, 4
2022-10-07 18:05:52.529961: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4324736, 5
2022-10-07 18:05:52.529967: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 7426048, 5
2022-10-07 18:05:52.529973: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 15401440, 3
2022-10-07 18:05:52.530003: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 18678720, 4
2022-10-07 18:05:52.530011: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 39970560, 3
2022-10-07 18:05:52.530017: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 56589504, 4
2022-10-07 18:05:52.530023: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 135407776, 3
2022-10-07 18:05:52.530029: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1083564064, 3
2022-10-07 18:05:52.530055: W tensorflow/core/framework/op_kernel.cc:1733] RESOURCE_EXHAUSTED: failed to allocate memory
2022-10-07 18:05:52.558586: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:288] gpu_async_0 cuMemAllocAsync failed to allocate 1083564064 bytes: CUDA error: out of memory (CUDA_ERROR_OUT_OF_MEMORY)
Reported by CUDA: Free memory/Total memory: 688455680/17069309952
2022-10-07 18:05:52.558631: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:293] Stats: Limit: 1706033152
InUse: 4189475427
MaxInUse: 4189475427
NumAllocs: 242
MaxAllocSize: 1083564064
Reserved: 0
PeakReserved: 0
LargestFreeBlock: 0

2022-10-07 18:05:52.558649: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:56] Histogram of current allocation: (allocation_size_in_bytes, nb_allocation_of_that_sizes), ...;
2022-10-07 18:05:52.558657: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1, 7
2022-10-07 18:05:52.558664: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4, 34
2022-10-07 18:05:52.558670: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 8, 8
2022-10-07 18:05:52.558676: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 40, 2
2022-10-07 18:05:52.558682: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 128, 7
2022-10-07 18:05:52.558688: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 160, 8
2022-10-07 18:05:52.558694: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 192, 4
2022-10-07 18:05:52.558701: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 256, 7
2022-10-07 18:05:52.558707: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 288, 5
2022-10-07 18:05:52.558713: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 480, 4
2022-10-07 18:05:52.558719: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1028, 1
2022-10-07 18:05:52.558725: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 3168, 3
2022-10-07 18:05:52.558731: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 61440, 3
2022-10-07 18:05:52.558738: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 65536, 3
2022-10-07 18:05:52.558744: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 584352, 5
2022-10-07 18:05:52.558750: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 823872, 4
2022-10-07 18:05:52.558756: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4324736, 5
2022-10-07 18:05:52.558762: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 7426048, 5
2022-10-07 18:05:52.558769: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 15401440, 3
2022-10-07 18:05:52.558775: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 18678720, 4
2022-10-07 18:05:52.558781: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 39970560, 3
2022-10-07 18:05:52.558787: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 56589504, 4
2022-10-07 18:05:52.558793: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 135407776, 3
2022-10-07 18:05:52.558821: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1083564064, 3
2022-10-07 18:05:52.558836: W tensorflow/core/framework/op_kernel.cc:1733] RESOURCE_EXHAUSTED: failed to allocate memory
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/usr/lib/python3.8/logging/init.py", line 2127, in shutdown
h.close()
File "/usr/local/lib/python3.8/dist-packages/absl/logging/init.py", line 934, in close
self.stream.close()
File "/usr/local/lib/python3.8/dist-packages/ipykernel/iostream.py", line 438, in close
self.watch_fd_thread.join()
AttributeError: 'OutStream' object has no attribute 'watch_fd_thread'
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 5 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 24 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 22 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(
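
Following the deprecation advice is a matter of chaining the atomic tags where the compound tag was used before. A minimal sketch against a toy schema (the column name and tag set are illustrative only):

from merlin.schema import ColumnSchema, Schema, Tags

# Toy schema with a single item-id column carrying the atomic tags
schema = Schema([ColumnSchema("item_id", tags=[Tags.ITEM, Tags.ID, Tags.CATEGORICAL])])

# Chaining Tags.ITEM and Tags.ID replaces select_by_tag(Tags.ITEM_ID) and avoids the warning
item_id_cols = schema.select_by_tag(Tags.ITEM).select_by_tag(Tags.ID).column_names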

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 7 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 24 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 22 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:943: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_filemqhwjxqc.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_base.py::test_retrieval_model_query
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 1 failed, 760 passed, 12 skipped, 1195 warnings in 1223.95s (0:20:23) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.github.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins17452948367714129532.sh
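A side note on the recurring cupy.fromDlpack deprecation warning above (raised from merlin/models/tf/inputs/embedding.py and merlin/models/tf/utils/tf_utils.py): the warning points at cupy.from_dlpack as the replacement. Below is a minimal sketch of that swap, assuming CuPy >= 10 and a GPU build of TensorFlow; the helper name tf_tensor_to_cupy is illustrative only and not part of the codebase.

import cupy
import tensorflow as tf

def tf_tensor_to_cupy(tensor):
    # Same DLPack round trip as the warned line, but via cupy.from_dlpack
    # instead of the deprecated cupy.fromDlpack.
    capsule = tf.experimental.dlpack.to_dlpack(tf.convert_to_tensor(tensor))
    return cupy.from_dlpack(capsule)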

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit a2c3054becb2f9702388dc5f7099e4d1958ab665, no merge conflicts.
Running as SYSTEM
Setting status of a2c3054becb2f9702388dc5f7099e4d1958ab665 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1482/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse a2c3054becb2f9702388dc5f7099e4d1958ab665^{commit} # timeout=10
Checking out Revision a2c3054becb2f9702388dc5f7099e4d1958ab665 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a2c3054becb2f9702388dc5f7099e4d1958ab665 # timeout=10
Commit message: "Fixed test on masking"
 > git rev-list --no-walk 53d1958e94fb28e046988398ce1641057d278744 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins654604608853087965.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 773 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 13%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 20%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py ... [ 27%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 28%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 29%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 30%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 35%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%]
tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 46%]
tests/unit/tf/models/test_base.py s..................... [ 49%]
tests/unit/tf/models/test_benchmark.py .. [ 49%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 59%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 61%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 64%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 65%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 66%]
tests/unit/tf/transformers/test_block.py .................. [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 69%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 73%]
....................s...... [ 77%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 78%]
tests/unit/tf/transforms/test_noise.py ..... [ 79%]
tests/unit/tf/transforms/test_sequence.py ........................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 5 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 24 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 22 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 7 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 24 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 22 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:943: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_fileygec9w7u.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_base.py::test_retrieval_model_query
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/mask_sequence_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========= 761 passed, 12 skipped, 1195 warnings in 1216.59s (0:20:16) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins6370977555462599450.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit 1ed3a967a88d9d468a3e29c1e7b06217622858c2, no merge conflicts.
Running as SYSTEM
Setting status of 1ed3a967a88d9d468a3e29c1e7b06217622858c2 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1484/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse 1ed3a967a88d9d468a3e29c1e7b06217622858c2^{commit} # timeout=10
Checking out Revision 1ed3a967a88d9d468a3e29c1e7b06217622858c2 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1ed3a967a88d9d468a3e29c1e7b06217622858c2 # timeout=10
Commit message: "Makes SequenceMaskRandom available only to be used in the model (not in the Loader)"
 > git rev-list --no-walk e45f25bcc59d6365a69b161c8782549d12e496d3 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins445341637274001760.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 766 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 23%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 30%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 30%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 31%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 44%]
tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 47%]
tests/unit/tf/models/test_base.py s..................... [ 50%]
tests/unit/tf/models/test_benchmark.py .. [ 50%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 59%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 60%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 62%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 63%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%]
tests/unit/tf/transformers/test_block.py ................FF [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 70%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 74%]
....................s...... [ 78%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%]
tests/unit/tf/transforms/test_noise.py ..... [ 80%]
tests/unit/tf/transforms/test_sequence.py .................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
_____________ test_transformer_with_masked_language_modeling[True] _____________

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f06330b4340>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_transformer_with_masked_language_modeling(sequence_testing_data: Dataset, run_eagerly):

    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_tag(
        Tags.CATEGORICAL
    )
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]

    loader = Loader(sequence_testing_data, batch_size=8, shuffle=False)
    model = mm.Model(
        mm.SequenceMaskRandom(schema=seq_schema, target=target, masking_prob=0.3),
        mm.InputBlockV2(
            seq_schema,
            embeddings=mm.Embeddings(
                seq_schema.select_by_tag(Tags.CATEGORICAL), sequence_combiner=None
            ),
        ),
        # BertBlock(d_model=48, n_head=8, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        GPT2Block(d_model=48, n_head=4, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        mm.CategoricalOutput(
            seq_schema.select_by_name(target),
            default_loss="categorical_crossentropy",
        ),
    )

    inputs, targets = next(iter(loader))
    outputs = model(inputs, targets=targets, training=True)
    assert list(outputs.shape) == [8, 4, 51997]
  testing_utils.model_test(model, loader, run_eagerly=run_eagerly)

tests/unit/tf/transformers/test_block.py:219:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:789: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1051: in train_function
return step_function(self, iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:643: in train_step
outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
merlin/models/tf/models/base.py:579: in call_train_test
self.adjust_predictions_and_targets(predictions, targets)
merlin/models/tf/models/base.py:623: in adjust_predictions_and_targets
targets[k] = tf.cast(targets[k], predictions[k].dtype)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
return dispatch_target(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/math_ops.py:1000: in cast
x = ops.convert_to_tensor(x, name="x")
/usr/local/lib/python3.8/dist-packages/tensorflow/python/profiler/trace.py:183: in wrapped
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:1640: in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:343: in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:267: in constant
return _constant_impl(value, dtype, shape, name, verify_shape=False,
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:279: in _constant_impl
return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:304: in _constant_eager_impl
t = convert_to_eager_tensor(value, ctx, dtype)


value = None
ctx = <tensorflow.python.eager.context.Context object at 0x7f086fb54fd0>
dtype = None

def convert_to_eager_tensor(value, ctx, dtype=None):
  """Converts the given `value` to an `EagerTensor`.

  Note that this function could return cached copies of created constants for
  performance reasons.

  Args:
    value: value to convert to EagerTensor.
    ctx: value of context.context().
    dtype: optional desired dtype of the converted EagerTensor.

  Returns:
    EagerTensor created from value.

  Raises:
    TypeError: if `dtype` is not compatible with the type of t.
  """
  if isinstance(value, ops.EagerTensor):
    if dtype is not None and value.dtype != dtype:
      raise TypeError(f"Expected tensor {value} with dtype {dtype!r}, but got "
                      f"dtype {value.dtype!r}.")
    return value
  if dtype is not None:
    try:
      dtype = dtype.as_datatype_enum
    except AttributeError:
      dtype = dtypes.as_dtype(dtype).as_datatype_enum
  ctx.ensure_initialized()
return ops.EagerTensor(value, ctx.device_name, dtype)

E ValueError: Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:102: ValueError
____________ test_transformer_with_masked_language_modeling[False] _____________

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f0633368d00>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_transformer_with_masked_language_modeling(sequence_testing_data: Dataset, run_eagerly):

    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_tag(
        Tags.CATEGORICAL
    )
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]

    loader = Loader(sequence_testing_data, batch_size=8, shuffle=False)
    model = mm.Model(
        mm.SequenceMaskRandom(schema=seq_schema, target=target, masking_prob=0.3),
        mm.InputBlockV2(
            seq_schema,
            embeddings=mm.Embeddings(
                seq_schema.select_by_tag(Tags.CATEGORICAL), sequence_combiner=None
            ),
        ),
        # BertBlock(d_model=48, n_head=8, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        GPT2Block(d_model=48, n_head=4, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        mm.CategoricalOutput(
            seq_schema.select_by_name(target),
            default_loss="categorical_crossentropy",
        ),
    )

    inputs, targets = next(iter(loader))
    outputs = model(inputs, targets=targets, training=True)
    assert list(outputs.shape) == [8, 4, 51997]
  testing_utils.model_test(model, loader, run_eagerly=run_eagerly)

tests/unit/tf/transformers/test_block.py:219:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:789: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in _create_graph_function
func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_file_e33v5by.py:15: in tf__train_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:643: in train_step
outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
merlin/models/tf/models/base.py:514: in call_train_test
forward = self(
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:146: in error_handler
raise new_e.with_traceback(e.__traceback__) from None
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:692: in wrapper
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filei1obl6ix.py:42: in tf__call
ag__.for_stmt(ag__.ld(self).blocks, None, loop_body, get_state_1, set_state_1, ('context', 'outputs'), {'iterate_names': 'block'})
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:449: in for_stmt
_py_for_stmt(iter_, extra_test, body, None, None)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:498: in _py_for_stmt
body(target)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:464: in protected_body
original_body(protected_iter)
/tmp/__autograph_generated_filei1obl6ix.py:40: in loop_body
(outputs, context) = ag__.converted_call(ag__.ld(self)._call_child, (ag__.ld(block), ag__.ld(outputs), ag__.ld(context)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:441: in converted_call
result = converted_f(*effective_args)
/tmp/__autograph_generated_file0fcprncc.py:25: in tf___call_child
outputs = ag__.converted_call(ag__.ld(call_layer), (ag__.ld(child), ag__.ld(inputs)), dict(**ag__.ld(call_kwargs)), fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filecym8v2oh.py:50: in tf__call_layer
retval_ = ag__.converted_call(ag__.ld(layer), ((ag__.ld(inputs),) + tuple(ag__.ld(args))), dict(**ag__.ld(filtered_kwargs)), fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filelca_vvo8.py:14: in tf____call__
retval_ = ag__.converted_call(ag__.converted_call(ag__.ld(super), (), None, fscope).__call__, tuple(ag__.ld(args)), dict(**ag__.ld(kwargs)), fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:331: in converted_call
return call_unconverted(f, args, kwargs, options, False)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:146: in error_handler
raise new_e.with_traceback(e.__traceback__) from None
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:692: in wrapper
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filedwouomdm.py:11: in tf__call
pre = ag__.converted_call(ag__.ld(combinators).call_sequentially, (ag__.converted_call(ag__.ld(list), (ag__.ld(self).to_call_pre,), None, fscope), ag__.ld(inputs)), dict(**ag__.ld(kwargs)), fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filea95w3bh5.py:25: in tf__call_sequentially
ag__.for_stmt(ag__.ld(layers), None, loop_body, get_state, set_state, ('outputs',), {'iterate_names': 'layer'})
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:449: in for_stmt
_py_for_stmt(iter_, extra_test, body, None, None)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:498: in _py_for_stmt
body(target)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:464: in protected_body
original_body(protected_iter)
/tmp/__autograph_generated_filea95w3bh5.py:23: in loop_body
outputs = ag__.converted_call(ag__.ld(call_layer), (ag__.ld(layer), ag__.ld(outputs)), dict(**ag__.ld(kwargs)), fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filecym8v2oh.py:50: in tf__call_layer
retval_ = ag__.converted_call(ag__.ld(layer), ((ag__.ld(inputs),) + tuple(ag__.ld(args))), dict(**ag__.ld(filtered_kwargs)), fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filelca_vvo8.py:14: in tf____call__
retval_ = ag__.converted_call(ag__.converted_call(ag__.ld(super), (), None, fscope).__call__, tuple(ag__.ld(args)), dict(**ag__.ld(kwargs)), fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:331: in converted_call
return call_unconverted(f, args, kwargs, options, False)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:146: in error_handler
raise new_e.with_traceback(e.__traceback__) from None
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:692: in wrapper
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filev6zj63kq.py:28: in tf__call
ag__.if_stmt((ag__.ld(mask) is not None), if_body, else_body, get_state, set_state, ('outputs',), 1)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:1341: in if_stmt
_py_if_stmt(cond, body, orelse)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:1394: in _py_if_stmt
return body() if cond else orelse()
/tmp/__autograph_generated_filev6zj63kq.py:23: in if_body
outputs = ag__.converted_call(ag__.ld(self)._replace_masked_embeddings, (ag__.ld(inputs), ag__.ld(mask)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:441: in converted_call
result = converted_f(*effective_args)
/tmp/__autograph_generated_file9x8gxdku.py:23: in tf___replace_masked_embeddings
ag__.if_stmt(ag__.not_(ag__.converted_call(ag__.ld(self)._check_inputs_mask_compatible_shape, (ag__.ld(inputs), ag__.ld(mask)), None, fscope)), if_body, else_body, get_state, set_state, (), 0)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:1339: in if_stmt
_tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:1385: in _tf_if_stmt
final_cond_vars = control_flow_ops.cond(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
return dispatch_target(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/deprecation.py:561: in new_func
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/control_flow_ops.py:1202: in cond
return cond_v2.cond_v2(pred, true_fn, false_fn, name)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/cond_v2.py:80: in cond_v2
true_graph = func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:1365: in aug_body
body()


def if_body():
  raise ag__.converted_call(ag__.ld(ValueError), ('The inputs and mask need to be compatible: have the same dtype (tf.Tensor or tf.RaggedTensor) and the tf.rank(mask) == tf.rank(inputs)-1',), None, fscope)

E ValueError: in user code:
E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function *
E return step_function(self, iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function **
E outputs = model.distribute_strategy.run(run_step, args=(data,))
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E return self._call_for_each_replica(fn, args, kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step **
E outputs = model.train_step(data)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 643, in train_step
E outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 514, in call_train_test
E forward = self(
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 490, in call
E return super().call(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in call
E outputs = call_fn(inputs, *args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 146, in error_handler
E raise new_e.with_traceback(e.__traceback__) from None
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
E return fn(*args, **kwargs)
E File "/tmp/autograph_generated_filei1obl6ix.py", line 42, in tf__call **
E ag
.for_stmt(ag
.ld(self).blocks, None, loop_body, get_state_1, set_state_1, ('context', 'outputs'), {'iterate_names': 'block'})
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 449, in for_stmt
E _py_for_stmt(iter_, extra_test, body, None, None)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 498, in _py_for_stmt
E body(target)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 464, in protected_body
E original_body(protected_iter)
E File "/tmp/autograph_generated_filei1obl6ix.py", line 40, in loop_body
E (outputs, context) = ag
.converted_call(ag
.ld(self).call_child, (ag_.ld(block), ag__.ld(outputs), ag__.ld(context)), None, fscope)
E File "/tmp/autograph_generated_file0fcprncc.py", line 25, in tf___call_child **
E outputs = ag
.converted_call(ag__.ld(call_layer), (ag__.ld(child), ag__.ld(inputs)), dict(**ag__.ld(call_kwargs)), fscope)
E File "/tmp/autograph_generated_filecym8v2oh.py", line 50, in tf__call_layer **
E retval
= ag
_.converted_call(ag__.ld(layer), ((ag__.ld(inputs),) + tuple(ag__.ld(args))), dict(**ag__.ld(filtered_kwargs)), fscope)
E File "/tmp/autograph_generated_filelca_vvo8.py", line 14, in tf____call **
E retval_ = ag__.converted_call(ag__.converted_call(ag__.ld(super), (), None, fscope).call, tuple(ag__.ld(args)), dict(**ag__.ld(kwargs)), fscope)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler **
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in call
E outputs = call_fn(inputs, *args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 146, in error_handler
E raise new_e.with_traceback(e.__traceback__) from None
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
E return fn(*args, **kwargs)
E File "/tmp/autograph_generated_filedwouomdm.py", line 11, in tf__call **
E pre = ag
.converted_call(ag__.ld(combinators).call_sequentially, (ag__.converted_call(ag__.ld(list), (ag__.ld(self).to_call_pre,), None, fscope), ag__.ld(inputs)), dict(**ag__.ld(kwargs)), fscope)
E File "/tmp/autograph_generated_filea95w3bh5.py", line 25, in tf__call_sequentially **
E ag
.for_stmt(ag__.ld(layers), None, loop_body, get_state, set_state, ('outputs',), {'iterate_names': 'layer'})
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 449, in for_stmt
E _py_for_stmt(iter_, extra_test, body, None, None)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 498, in _py_for_stmt
E body(target)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 464, in protected_body
E original_body(protected_iter)
E File "/tmp/autograph_generated_filea95w3bh5.py", line 23, in loop_body
E outputs = ag
.converted_call(ag
_.ld(call_layer), (ag__.ld(layer), ag__.ld(outputs)), dict(**ag__.ld(kwargs)), fscope)
E File "/tmp/autograph_generated_filecym8v2oh.py", line 50, in tf__call_layer **
E retval
= ag
_.converted_call(ag__.ld(layer), ((ag__.ld(inputs),) + tuple(ag__.ld(args))), dict(**ag__.ld(filtered_kwargs)), fscope)
E File "/tmp/autograph_generated_filelca_vvo8.py", line 14, in tf____call **
E retval_ = ag__.converted_call(ag__.converted_call(ag__.ld(super), (), None, fscope).call, tuple(ag__.ld(args)), dict(**ag__.ld(kwargs)), fscope)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler **
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in call
E outputs = call_fn(inputs, *args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 146, in error_handler
E raise new_e.with_traceback(e.__traceback__) from None
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
E return fn(*args, **kwargs)
E File "/tmp/autograph_generated_filev6zj63kq.py", line 28, in tf__call **
E ag
.if_stmt((ag__.ld(mask) is not None), if_body, else_body, get_state, set_state, ('outputs',), 1)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1341, in if_stmt
E _py_if_stmt(cond, body, orelse)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1394, in _py_if_stmt
E return body() if cond else orelse()
E File "/tmp/autograph_generated_filev6zj63kq.py", line 23, in if_body
E outputs = ag
.converted_call(ag
.ld(self).replace_masked_embeddings, (ag_.ld(inputs), ag__.ld(mask)), None, fscope)
E File "/tmp/autograph_generated_file9x8gxdku.py", line 23, in tf___replace_masked_embeddings **
E ag
.if_stmt(ag__.not_(ag__.converted_call(ag__.ld(self).check_inputs_mask_compatible_shape, (ag_.ld(inputs), ag__.ld(mask)), None, fscope)), if_body, else_body, get_state, set_state, (), 0)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1339, in if_stmt
E _tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1385, in _tf_if_stmt
E final_cond_vars = control_flow_ops.cond(
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 1082, in op_dispatch_handler
E return dispatch_target(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/deprecation.py", line 561, in new_func
E return func(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 1202, in cond
E return cond_v2.cond_v2(pred, true_fn, false_fn, name)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/cond_v2.py", line 80, in cond_v2
E true_graph = func_graph_module.func_graph_from_py_func(
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py", line 1141, in func_graph_from_py_func
E func_outputs = python_func(*func_args, **func_kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1365, in aug_body
E body()
E File "/tmp/autograph_generated_file9x8gxdku.py", line 19, in if_body
E raise ag
.converted_call(ag
.ld(ValueError), ('The inputs and mask need to be compatible: have the same dtype (tf.Tensor or tf.RaggedTensor) and the tf.rank(mask) == tf.rank(inputs)-1',), None, fscope)
E
E ValueError: Exception encountered when calling layer "model" (type Model).
E
E in user code:
E
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 988, in call *
E outputs, context = self._call_child(block, outputs, context)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 1017, in _call_child *
E outputs = call_layer(child, inputs, **call_kwargs)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py", line 433, in call_layer *
E return layer(inputs, *args, **filtered_kwargs)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/config/schema.py", line 58, in call *
E return super().call(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler **
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in call
E outputs = call_fn(inputs, *args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 146, in error_handler
E raise new_e.with_traceback(e.__traceback__) from None
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
E return fn(*args, **kwargs)
E File "/tmp/autograph_generated_filev6zj63kq.py", line 28, in tf__call **
E ag
.if_stmt((ag
.ld(mask) is not None), if_body, else_body, get_state, set_state, ('outputs',), 1)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1341, in if_stmt
E _py_if_stmt(cond, body, orelse)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1394, in _py_if_stmt
E return body() if cond else orelse()
E File "/tmp/autograph_generated_filev6zj63kq.py", line 23, in if_body
E outputs = ag
.converted_call(ag
.ld(self).replace_masked_embeddings, (ag_.ld(inputs), ag__.ld(mask)), None, fscope)
E File "/tmp/autograph_generated_file9x8gxdku.py", line 23, in tf___replace_masked_embeddings **
E ag
.if_stmt(ag__.not_(ag__.converted_call(ag__.ld(self).check_inputs_mask_compatible_shape, (ag_.ld(inputs), ag__.ld(mask)), None, fscope)), if_body, else_body, get_state, set_state, (), 0)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1339, in if_stmt
E _tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1385, in _tf_if_stmt
E final_cond_vars = control_flow_ops.cond(
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 1082, in op_dispatch_handler
E return dispatch_target(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/deprecation.py", line 561, in new_func
E return func(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 1202, in cond
E return cond_v2.cond_v2(pred, true_fn, false_fn, name)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/cond_v2.py", line 80, in cond_v2
E true_graph = func_graph_module.func_graph_from_py_func(
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py", line 1141, in func_graph_from_py_func
E func_outputs = python_func(*func_args, **func_kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1365, in aug_body
E body()
E File "/tmp/autograph_generated_file9x8gxdku.py", line 19, in if_body
E raise ag
.converted_call(ag
.ld(ValueError), ('The inputs and mask need to be compatible: have the same dtype (tf.Tensor or tf.RaggedTensor) and the tf.rank(mask) == tf.rank(inputs)-1',), None, fscope)
E
E ValueError: Exception encountered when calling layer "gpt2_block" (type GPT2Block).
E
E in user code:
E
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transformers/block.py", line 124, in call *
E pre = combinators.call_sequentially(list(self.to_call_pre), inputs, **kwargs)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/combinators.py", line 819, in call_sequentially *
E outputs = call_layer(layer, outputs, **kwargs)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py", line 433, in call_layer *
E return layer(inputs, *args, **filtered_kwargs)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/config/schema.py", line 58, in call *
E return super().call(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler **
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in call
E outputs = call_fn(inputs, *args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 146, in error_handler
E raise new_e.with_traceback(e.__traceback__) from None
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
E return fn(*args, **kwargs)
E File "/tmp/autograph_generated_filev6zj63kq.py", line 28, in tf__call **
E ag
.if_stmt((ag__.ld(mask) is not None), if_body, else_body, get_state, set_state, ('outputs',), 1)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1341, in if_stmt
E _py_if_stmt(cond, body, orelse)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1394, in _py_if_stmt
E return body() if cond else orelse()
E File "/tmp/autograph_generated_filev6zj63kq.py", line 23, in if_body
E outputs = ag
.converted_call(ag
.ld(self).replace_masked_embeddings, (ag_.ld(inputs), ag__.ld(mask)), None, fscope)
E File "/tmp/autograph_generated_file9x8gxdku.py", line 23, in tf___replace_masked_embeddings **
E ag
.if_stmt(ag__.not_(ag__.converted_call(ag__.ld(self).check_inputs_mask_compatible_shape, (ag_.ld(inputs), ag__.ld(mask)), None, fscope)), if_body, else_body, get_state, set_state, (), 0)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1339, in if_stmt
E _tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1385, in _tf_if_stmt
E final_cond_vars = control_flow_ops.cond(
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 1082, in op_dispatch_handler
E return dispatch_target(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/deprecation.py", line 561, in new_func
E return func(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 1202, in cond
E return cond_v2.cond_v2(pred, true_fn, false_fn, name)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/cond_v2.py", line 80, in cond_v2
E true_graph = func_graph_module.func_graph_from_py_func(
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py", line 1141, in func_graph_from_py_func
E func_outputs = python_func(*func_args, **func_kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1365, in aug_body
E body()
E File "/tmp/autograph_generated_file9x8gxdku.py", line 19, in if_body
E raise ag
.converted_call(ag
.ld(ValueError), ('The inputs and mask need to be compatible: have the same dtype (tf.Tensor or tf.RaggedTensor) and the tf.rank(mask) == tf.rank(inputs)-1',), None, fscope)
E
E ValueError: Exception encountered when calling layer "replace_masked_embeddings" (type ReplaceMaskedEmbeddings).
E
E in user code:
E
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/sequence.py", line 593, in call *
E outputs = self._replace_masked_embeddings(inputs, mask)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/sequence.py", line 656, in _replace_masked_embeddings *
E raise ValueError(
E
E ValueError: The inputs and mask need to be compatible: have the same dtype (tf.Tensor or tf.RaggedTensor) and the tf.rank(mask) == tf.rank(inputs)-1
E
E
E Call arguments received by layer "replace_masked_embeddings" (type ReplaceMaskedEmbeddings):
E • inputs=tf.RaggedTensor(values=Tensor("model/concat_features/RaggedConcat/concat:0", shape=(None, 48), dtype=float32), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths_8/control_dependency:0", shape=(None,), dtype=int32))
E • targets=tf.RaggedTensor(values=Tensor("model/sequence_mask_random/Identity:0", shape=(None,), dtype=int64), row_splits=Tensor("model/sequence_mask_random/Identity_1:0", shape=(None,), dtype=int32))
E
E
E Call arguments received by layer "gpt2_block" (type GPT2Block):
E • inputs=tf.RaggedTensor(values=Tensor("model/concat_features/RaggedConcat/concat:0", shape=(None, 48), dtype=float32), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths_8/control_dependency:0", shape=(None,), dtype=int32))
E • kwargs={'features': {'item_id_seq': 'tf.RaggedTensor(values=Tensor("model/list_to_ragged/strided_slice:0", shape=(None,), dtype=int64), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths/control_dependency:0", shape=(None,), dtype=int32))', 'categories': 'tf.RaggedTensor(values=Tensor("model/list_to_ragged/strided_slice_2:0", shape=(None,), dtype=int64), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths_1/control_dependency:0", shape=(None,), dtype=int32))', 'test_user_id': 'tf.Tensor(shape=(None, None), dtype=int64)', 'user_country': 'tf.Tensor(shape=(None, None), dtype=int64)', 'item_age_days_norm': 'tf.RaggedTensor(values=Tensor("model/list_to_ragged/strided_slice_4:0", shape=(None,), dtype=float32), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths_2/control_dependency:0", shape=(None,), dtype=int32))', 'event_hour_sin': 'tf.RaggedTensor(values=Tensor("model/list_to_ragged/strided_slice_6:0", shape=(None,), dtype=float32), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths_3/control_dependency:0", shape=(None,), dtype=int32))', 'event_hour_cos': 'tf.RaggedTensor(values=Tensor("model/list_to_ragged/strided_slice_8:0", shape=(None,), dtype=float32), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths_4/control_dependency:0", shape=(None,), dtype=int32))', 'event_weekday_sin': 'tf.RaggedTensor(values=Tensor("model/list_to_ragged/strided_slice_10:0", shape=(None,), dtype=float32), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths_5/control_dependency:0", shape=(None,), dtype=int32))', 'event_weekday_cos': 'tf.RaggedTensor(values=Tensor("model/list_to_ragged/strided_slice_12:0", shape=(None,), dtype=float32), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths_6/control_dependency:0", shape=(None,), dtype=int32))', 'user_age': 'tf.Tensor(shape=(None, None), dtype=float32)'}, 'training': 'True', 'testing': 'False', 'mask': ('None',), 'targets': 'tf.RaggedTensor(values=Tensor("model/sequence_mask_random/Identity:0", shape=(None,), dtype=int64), row_splits=Tensor("model/sequence_mask_random/Identity_1:0", shape=(None,), dtype=int32))'}
E
E
E Call arguments received by layer "model" (type Model):
E • inputs={'item_id_seq': ('tf.Tensor(shape=(None, None), dtype=int64)', 'tf.Tensor(shape=(None, None), dtype=int32)'), 'categories': ('tf.Tensor(shape=(None, None), dtype=int64)', 'tf.Tensor(shape=(None, None), dtype=int32)'), 'test_user_id': 'tf.Tensor(shape=(None, None), dtype=int64)', 'user_country': 'tf.Tensor(shape=(None, None), dtype=int64)', 'item_age_days_norm': ('tf.Tensor(shape=(None, None), dtype=float32)', 'tf.Tensor(shape=(None, None), dtype=int32)'), 'event_hour_sin': ('tf.Tensor(shape=(None, None), dtype=float32)', 'tf.Tensor(shape=(None, None), dtype=int32)'), 'event_hour_cos': ('tf.Tensor(shape=(None, None), dtype=float32)', 'tf.Tensor(shape=(None, None), dtype=int32)'), 'event_weekday_sin': ('tf.Tensor(shape=(None, None), dtype=float32)', 'tf.Tensor(shape=(None, None), dtype=int32)'), 'event_weekday_cos': ('tf.Tensor(shape=(None, None), dtype=float32)', 'tf.Tensor(shape=(None, None), dtype=int32)'), 'user_age': 'tf.Tensor(shape=(None, None), dtype=float32)'}
E • targets=None
E • training=True
E • testing=False
E • output_context=False

/tmp/__autograph_generated_file9x8gxdku.py:19: ValueError
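
For context on the ValueError above: `ReplaceMaskedEmbeddings` rejects a mask whose tensor type or rank does not line up with the sequence embeddings. The sketch below illustrates that compatibility rule with plain TensorFlow only; it is not the Merlin Models implementation, and the `dummy` embedding and the checks are purely illustrative.

```python
# Minimal sketch (plain TensorFlow, assumptions noted above): the sequence
# embeddings and the boolean mask are the same tensor type (both
# tf.RaggedTensor here) and the mask has one dimension fewer than the
# embeddings (it has no embedding axis).
import tensorflow as tf

# Two sequences of lengths 3 and 2, embedding dim 4 -> shape [2, None, 4]
embeddings = tf.ragged.constant(
    [[[0.1] * 4, [0.2] * 4, [0.3] * 4], [[0.4] * 4, [0.5] * 4]], ragged_rank=1
)
# One boolean per position -> shape [2, None]
mask = tf.ragged.constant([[True, False, True], [False, True]])

same_type = isinstance(embeddings, tf.RaggedTensor) == isinstance(mask, tf.RaggedTensor)
rank_ok = mask.shape.rank == embeddings.shape.rank - 1
print(same_type, rank_ok)  # True True

# Replace masked positions with a dummy embedding ("dummy" is hypothetical)
dummy = tf.zeros([4])
replaced_flat = tf.where(
    tf.expand_dims(mask.flat_values, -1), dummy, embeddings.flat_values
)
replaced = embeddings.with_flat_values(replaced_flat)
```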
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 5 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 24 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 7 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 24 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:943: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/__autograph_generated_filejrhq4z0s.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_base.py::test_retrieval_model_query
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 2 failed, 752 passed, 12 skipped, 1177 warnings in 1219.85s (0:20:19) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins3902054843722754358.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit e5544256d8545186cccd5f1431c9971a4629bf92, no merge conflicts.
Running as SYSTEM
Setting status of e5544256d8545186cccd5f1431c9971a4629bf92 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1488/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse e5544256d8545186cccd5f1431c9971a4629bf92^{commit} # timeout=10
Checking out Revision e5544256d8545186cccd5f1431c9971a4629bf92 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e5544256d8545186cccd5f1431c9971a4629bf92 # timeout=10
Commit message: "Merge branch 'main' into mlm_alt"
 > git rev-list --no-walk b60295b8482583cbc279c785a9d63b768bb46d4d # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins14796459760879837759.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 768 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 23%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 30%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 31%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 44%]
tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 47%]
tests/unit/tf/models/test_base.py s....................... [ 50%]
tests/unit/tf/models/test_benchmark.py .. [ 50%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 59%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 60%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 62%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 63%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%]
tests/unit/tf/transformers/test_block.py ................FF [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 70%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 74%]
....................s...... [ 78%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%]
tests/unit/tf/transforms/test_noise.py ..... [ 80%]
tests/unit/tf/transforms/test_sequence.py .................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
_____________ test_transformer_with_masked_language_modeling[True] _____________

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7fb0a06514c0>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_transformer_with_masked_language_modeling(sequence_testing_data: Dataset, run_eagerly):

    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_tag(
        Tags.CATEGORICAL
    )
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]

    loader = Loader(sequence_testing_data, batch_size=8, shuffle=False)
    model = mm.Model(
        mm.SequenceMaskRandom(schema=seq_schema, target=target, masking_prob=0.3),
        mm.InputBlockV2(
            seq_schema,
            embeddings=mm.Embeddings(
                seq_schema.select_by_tag(Tags.CATEGORICAL), sequence_combiner=None
            ),
        ),
        # BertBlock(d_model=48, n_head=8, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        GPT2Block(d_model=48, n_head=4, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        mm.CategoricalOutput(
            seq_schema.select_by_name(target),
            default_loss="categorical_crossentropy",
        ),
    )

    inputs, targets = next(iter(loader))
    outputs = model(inputs, targets=targets, training=True)
    assert list(outputs.shape) == [8, 4, 51997]
  testing_utils.model_test(model, loader, run_eagerly=run_eagerly)

tests/unit/tf/transformers/test_block.py:219:


merlin/models/tf/utils/testing_utils.py:91: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:822: in fit
out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1051: in train_function
return step_function(self, iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:652: in train_step
outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
merlin/models/tf/models/base.py:580: in call_train_test
self.adjust_predictions_and_targets(predictions, targets)
merlin/models/tf/models/base.py:624: in adjust_predictions_and_targets
targets[k] = tf.cast(targets[k], predictions[k].dtype)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
return dispatch_target(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/math_ops.py:1000: in cast
x = ops.convert_to_tensor(x, name="x")
/usr/local/lib/python3.8/dist-packages/tensorflow/python/profiler/trace.py:183: in wrapped
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:1640: in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:343: in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:267: in constant
return _constant_impl(value, dtype, shape, name, verify_shape=False,
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:279: in _constant_impl
return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:304: in _constant_eager_impl
t = convert_to_eager_tensor(value, ctx, dtype)


value = None
ctx = <tensorflow.python.eager.context.Context object at 0x7fb2c3eae340>
dtype = None

def convert_to_eager_tensor(value, ctx, dtype=None):
  """Converts the given `value` to an `EagerTensor`.

  Note that this function could return cached copies of created constants for
  performance reasons.

  Args:
    value: value to convert to EagerTensor.
    ctx: value of context.context().
    dtype: optional desired dtype of the converted EagerTensor.

  Returns:
    EagerTensor created from value.

  Raises:
    TypeError: if `dtype` is not compatible with the type of t.
  """
  if isinstance(value, ops.EagerTensor):
    if dtype is not None and value.dtype != dtype:
      raise TypeError(f"Expected tensor {value} with dtype {dtype!r}, but got "
                      f"dtype {value.dtype!r}.")
    return value
  if dtype is not None:
    try:
      dtype = dtype.as_datatype_enum
    except AttributeError:
      dtype = dtypes.as_dtype(dtype).as_datatype_enum
  ctx.ensure_initialized()
return ops.EagerTensor(value, ctx.device_name, dtype)

E ValueError: Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py:102: ValueError
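(Annotation, not part of the CI log: the eager-mode failure above comes from adjust_predictions_and_targets casting a target entry that is None. A minimal sketch, assuming only standard TensorFlow, that reproduces the same ValueError:)

import tensorflow as tf

# Hypothetical target dict (assumption): an entry that ended up as None cannot be cast,
# because tf.cast() first converts its argument to a tensor and None is unsupported.
targets = {"item_id_seq": None}
try:
    tf.cast(targets["item_id_seq"], tf.float32)
except ValueError as exc:
    print(exc)  # Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.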
____________ test_transformer_with_masked_language_modeling[False] _____________

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7fb0911bee50>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_transformer_with_masked_language_modeling(sequence_testing_data: Dataset, run_eagerly):

    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_tag(
        Tags.CATEGORICAL
    )
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]

    loader = Loader(sequence_testing_data, batch_size=8, shuffle=False)
    model = mm.Model(
        mm.SequenceMaskRandom(schema=seq_schema, target=target, masking_prob=0.3),
        mm.InputBlockV2(
            seq_schema,
            embeddings=mm.Embeddings(
                seq_schema.select_by_tag(Tags.CATEGORICAL), sequence_combiner=None
            ),
        ),
        # BertBlock(d_model=48, n_head=8, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        GPT2Block(d_model=48, n_head=4, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        mm.CategoricalOutput(
            seq_schema.select_by_name(target),
            default_loss="categorical_crossentropy",
        ),
    )

    inputs, targets = next(iter(loader))
    outputs = model(inputs, targets=targets, training=True)
    assert list(outputs.shape) == [8, 4, 51997]
  testing_utils.model_test(model, loader, run_eagerly=run_eagerly)

tests/unit/tf/transformers/test_block.py:219:


merlin/models/tf/utils/testing_utils.py:91: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:822: in fit
out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in _create_graph_function
func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_file3evzhqfd.py:15: in tf__train_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
outputs = model.train_step(data)
merlin/models/tf/models/base.py:652: in train_step
outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
merlin/models/tf/models/base.py:515: in call_train_test
forward = self(
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:146: in error_handler
raise new_e.with_traceback(e.__traceback__) from None
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:692: in wrapper
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_file_h4ulze6.py:42: in tf__call
ag__.for_stmt(ag__.ld(self).blocks, None, loop_body, get_state_1, set_state_1, ('context', 'outputs'), {'iterate_names': 'block'})
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:449: in for_stmt
_py_for_stmt(iter_, extra_test, body, None, None)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:498: in _py_for_stmt
body(target)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:464: in protected_body
original_body(protected_iter)
/tmp/__autograph_generated_file_h4ulze6.py:40: in loop_body
(outputs, context) = ag__.converted_call(ag__.ld(self)._call_child, (ag__.ld(block), ag__.ld(outputs), ag__.ld(context)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:441: in converted_call
result = converted_f(*effective_args)
/tmp/__autograph_generated_filew0vokmwj.py:25: in tf___call_child
outputs = ag__.converted_call(ag__.ld(call_layer), (ag__.ld(child), ag__.ld(inputs)), dict(**ag__.ld(call_kwargs)), fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_fileq96xztk5.py:50: in tf__call_layer
retval_ = ag__.converted_call(ag__.ld(layer), ((ag__.ld(inputs),) + tuple(ag__.ld(args))), dict(**ag__.ld(filtered_kwargs)), fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filey0z_26wv.py:14: in tf____call__
retval_ = ag__.converted_call(ag__.converted_call(ag__.ld(super), (), None, fscope).__call__, tuple(ag__.ld(args)), dict(**ag__.ld(kwargs)), fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:331: in converted_call
return _call_unconverted(f, args, kwargs, options, False)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:146: in error_handler
raise new_e.with_traceback(e.__traceback__) from None
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:692: in wrapper
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_file99dneduf.py:11: in tf__call
pre = ag__.converted_call(ag__.ld(combinators).call_sequentially, (ag__.converted_call(ag__.ld(list), (ag__.ld(self).to_call_pre,), None, fscope), ag__.ld(inputs)), dict(**ag__.ld(kwargs)), fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_file0cakdk0p.py:25: in tf__call_sequentially
ag__.for_stmt(ag__.ld(layers), None, loop_body, get_state, set_state, ('outputs',), {'iterate_names': 'layer'})
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:449: in for_stmt
_py_for_stmt(iter_, extra_test, body, None, None)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:498: in _py_for_stmt
body(target)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:464: in protected_body
original_body(protected_iter)
/tmp/__autograph_generated_file0cakdk0p.py:23: in loop_body
outputs = ag__.converted_call(ag__.ld(call_layer), (ag__.ld(layer), ag__.ld(outputs)), dict(**ag__.ld(kwargs)), fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_fileq96xztk5.py:50: in tf__call_layer
retval_ = ag__.converted_call(ag__.ld(layer), ((ag__.ld(inputs),) + tuple(ag__.ld(args))), dict(**ag__.ld(filtered_kwargs)), fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filey0z_26wv.py:14: in tf____call__
retval_ = ag__.converted_call(ag__.converted_call(ag__.ld(super), (), None, fscope).__call__, tuple(ag__.ld(args)), dict(**ag__.ld(kwargs)), fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:331: in converted_call
return _call_unconverted(f, args, kwargs, options, False)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:146: in error_handler
raise new_e.with_traceback(e.__traceback__) from None
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:692: in wrapper
raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filemx6pbzv.py:28: in tf__call
ag__.if_stmt((ag__.ld(mask) is not None), if_body, else_body, get_state, set_state, ('outputs',), 1)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:1341: in if_stmt
_py_if_stmt(cond, body, orelse)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:1394: in _py_if_stmt
return body() if cond else orelse()
/tmp/__autograph_generated_filemx6pbzv.py:23: in if_body
outputs = ag__.converted_call(ag__.ld(self)._replace_masked_embeddings, (ag__.ld(inputs), ag__.ld(mask)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:441: in converted_call
result = converted_f(*effective_args)
/tmp/__autograph_generated_filewi9t9s9.py:23: in tf___replace_masked_embeddings
ag__.if_stmt(ag__.not_(ag__.converted_call(ag__.ld(self)._check_inputs_mask_compatible_shape, (ag__.ld(inputs), ag__.ld(mask)), None, fscope)), if_body, else_body, get_state, set_state, (), 0)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:1339: in if_stmt
_tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:1385: in _tf_if_stmt
final_cond_vars = control_flow_ops.cond(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
return dispatch_target(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/deprecation.py:561: in new_func
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/control_flow_ops.py:1202: in cond
return cond_v2.cond_v2(pred, true_fn, false_fn, name)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/cond_v2.py:80: in cond_v2
true_graph = func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py:1365: in aug_body
body()


def if_body():
  raise ag__.converted_call(ag__.ld(ValueError), ('The inputs and mask need to be compatible: have the same dtype (tf.Tensor or tf.RaggedTensor) and the tf.rank(mask) == tf.rank(inputs)-1',), None, fscope)
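(Annotation, not part of the CI log: the check raising this error requires the Keras mask to be the same tensor kind as the embeddings and exactly one rank lower. A minimal sketch of that condition with hypothetical shapes, assuming only standard TensorFlow; in the failing graph-mode run the condition is evidently not met, which surfaces as the ValueError rendered below:)

import tensorflow as tf

# Hypothetical shapes (assumption, for illustration only): rank-3 embeddings
# [batch, seq_len, d_model] and a rank-2 boolean mask [batch, seq_len].
embeddings = tf.random.uniform((8, 4, 48))                                # rank 3
mask = tf.sequence_mask(tf.constant([4, 3, 2, 4, 1, 4, 2, 3]), maxlen=4)  # rank 2
assert tf.rank(mask) == tf.rank(embeddings) - 1  # the compatibility condition being enforced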

E ValueError: in user code:
E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function *
E return step_function(self, iterator)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function **
E outputs = model.distribute_strategy.run(run_step, args=(data,))
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E return self._call_for_each_replica(fn, args, kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step **
E outputs = model.train_step(data)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 652, in train_step
E outputs = self.call_train_test(x, y, sample_weight=sample_weight, training=True)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 515, in call_train_test
E forward = self(
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 490, in call
E return super().call(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in call
E outputs = call_fn(inputs, *args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 146, in error_handler
E raise new_e.with_traceback(e.traceback) from None
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
E return fn(*args, **kwargs)
E File "/tmp/autograph_generated_file_h4ulze6.py", line 42, in tf__call **
E ag
.for_stmt(ag
.ld(self).blocks, None, loop_body, get_state_1, set_state_1, ('context', 'outputs'), {'iterate_names': 'block'})
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 449, in for_stmt
E _py_for_stmt(iter_, extra_test, body, None, None)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 498, in _py_for_stmt
E body(target)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 464, in protected_body
E original_body(protected_iter)
E File "/tmp/autograph_generated_file_h4ulze6.py", line 40, in loop_body
E (outputs, context) = ag
.converted_call(ag
.ld(self).call_child, (ag_.ld(block), ag__.ld(outputs), ag__.ld(context)), None, fscope)
E File "/tmp/autograph_generated_filew0vokmwj.py", line 25, in tf___call_child **
E outputs = ag
.converted_call(ag__.ld(call_layer), (ag__.ld(child), ag__.ld(inputs)), dict(**ag__.ld(call_kwargs)), fscope)
E File "/tmp/autograph_generated_fileq96xztk5.py", line 50, in tf__call_layer **
E retval
= ag
_.converted_call(ag__.ld(layer), ((ag__.ld(inputs),) + tuple(ag__.ld(args))), dict(**ag__.ld(filtered_kwargs)), fscope)
E File "/tmp/autograph_generated_filey0z_26wv.py", line 14, in tf____call **
E retval_ = ag__.converted_call(ag__.converted_call(ag__.ld(super), (), None, fscope).call, tuple(ag__.ld(args)), dict(**ag__.ld(kwargs)), fscope)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler **
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in call
E outputs = call_fn(inputs, *args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 146, in error_handler
E raise new_e.with_traceback(e.traceback) from None
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
E return fn(*args, **kwargs)
E File "/tmp/autograph_generated_file99dneduf.py", line 11, in tf__call **
E pre = ag
.converted_call(ag__.ld(combinators).call_sequentially, (ag__.converted_call(ag__.ld(list), (ag__.ld(self).to_call_pre,), None, fscope), ag__.ld(inputs)), dict(**ag__.ld(kwargs)), fscope)
E File "/tmp/autograph_generated_file0cakdk0p.py", line 25, in tf__call_sequentially **
E ag
.for_stmt(ag__.ld(layers), None, loop_body, get_state, set_state, ('outputs',), {'iterate_names': 'layer'})
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 449, in for_stmt
E py_for_stmt(iter, extra_test, body, None, None)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 498, in py_for_stmt
E body(target)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 464, in protected_body
E original_body(protected_iter)
E File "/tmp/autograph_generated_file0cakdk0p.py", line 23, in loop_body
E outputs = ag
.converted_call(ag
_.ld(call_layer), (ag__.ld(layer), ag__.ld(outputs)), dict(**ag__.ld(kwargs)), fscope)
E File "/tmp/autograph_generated_fileq96xztk5.py", line 50, in tf__call_layer **
E retval
= ag
_.converted_call(ag__.ld(layer), ((ag__.ld(inputs),) + tuple(ag__.ld(args))), dict(**ag__.ld(filtered_kwargs)), fscope)
E File "/tmp/autograph_generated_filey0z_26wv.py", line 14, in tf____call **
E retval_ = ag__.converted_call(ag__.converted_call(ag__.ld(super), (), None, fscope).call, tuple(ag__.ld(args)), dict(**ag__.ld(kwargs)), fscope)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler **
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in call
E outputs = call_fn(inputs, *args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 146, in error_handler
E raise new_e.with_traceback(e.traceback) from None
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
E return fn(*args, **kwargs)
E File "/tmp/autograph_generated_filemx6pbzv.py", line 28, in tf__call **
E ag
_.if_stmt((ag__.ld(mask) is not None), if_body, else_body, get_state, set_state, ('outputs',), 1)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1341, in if_stmt
E py_if_stmt(cond, body, orelse)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1394, in py_if_stmt
E return body() if cond else orelse()
E File "/tmp/autograph_generated_filemx6pbzv.py", line 23, in if_body
E outputs = ag
.converted_call(ag
_.ld(self).replace_masked_embeddings, (ag_.ld(inputs), ag__.ld(mask)), None, fscope)
E File "/tmp/autograph_generated_filewi9t9s9.py", line 23, in tf___replace_masked_embeddings **
E ag
_.if_stmt(ag__.not_(ag__.converted_call(ag__.ld(self).check_inputs_mask_compatible_shape, (ag_.ld(inputs), ag__.ld(mask)), None, fscope)), if_body, else_body, get_state, set_state, (), 0)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1339, in if_stmt
E tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1385, in tf_if_stmt
E final_cond_vars = control_flow_ops.cond(
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 1082, in op_dispatch_handler
E return dispatch_target(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/deprecation.py", line 561, in new_func
E return func(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 1202, in cond
E return cond_v2.cond_v2(pred, true_fn, false_fn, name)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/cond_v2.py", line 80, in cond_v2
E true_graph = func_graph_module.func_graph_from_py_func(
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py", line 1141, in func_graph_from_py_func
E func_outputs = python_func(*func_args, **func_kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1365, in aug_body
E body()
E File "/tmp/autograph_generated_filewi9t9s9.py", line 19, in if_body
E raise ag
.converted_call(ag
_.ld(ValueError), ('The inputs and mask need to be compatible: have the same dtype (tf.Tensor or tf.RaggedTensor) and the tf.rank(mask) == tf.rank(inputs)-1',), None, fscope)
E
E ValueError: Exception encountered when calling layer "model" (type Model).
E
E in user code:
E
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 1044, in call *
E outputs, context = self.call_child(block, outputs, context)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 1073, in call_child *
E outputs = call_layer(child, inputs, **call_kwargs)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py", line 433, in call_layer *
E return layer(inputs, *args, **filtered_kwargs)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/config/schema.py", line 58, in call *
E return super().call(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler **
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in call
E outputs = call_fn(inputs, *args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 146, in error_handler
E raise new_e.with_traceback(e.traceback) from None
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
E return fn(*args, **kwargs)
E File "/tmp/autograph_generated_filemx6pbzv.py", line 28, in tf__call **
E ag
.if_stmt((ag
_.ld(mask) is not None), if_body, else_body, get_state, set_state, ('outputs',), 1)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1341, in if_stmt
E py_if_stmt(cond, body, orelse)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1394, in py_if_stmt
E return body() if cond else orelse()
E File "/tmp/autograph_generated_filemx6pbzv.py", line 23, in if_body
E outputs = ag
.converted_call(ag
_.ld(self).replace_masked_embeddings, (ag_.ld(inputs), ag__.ld(mask)), None, fscope)
E File "/tmp/autograph_generated_filewi9t9s9.py", line 23, in tf___replace_masked_embeddings **
E ag
_.if_stmt(ag__.not_(ag__.converted_call(ag__.ld(self).check_inputs_mask_compatible_shape, (ag_.ld(inputs), ag__.ld(mask)), None, fscope)), if_body, else_body, get_state, set_state, (), 0)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1339, in if_stmt
E tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1385, in tf_if_stmt
E final_cond_vars = control_flow_ops.cond(
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 1082, in op_dispatch_handler
E return dispatch_target(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/deprecation.py", line 561, in new_func
E return func(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 1202, in cond
E return cond_v2.cond_v2(pred, true_fn, false_fn, name)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/cond_v2.py", line 80, in cond_v2
E true_graph = func_graph_module.func_graph_from_py_func(
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py", line 1141, in func_graph_from_py_func
E func_outputs = python_func(*func_args, **func_kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1365, in aug_body
E body()
E File "/tmp/autograph_generated_filewi9t9s9.py", line 19, in if_body
E raise ag
.converted_call(ag
_.ld(ValueError), ('The inputs and mask need to be compatible: have the same dtype (tf.Tensor or tf.RaggedTensor) and the tf.rank(mask) == tf.rank(inputs)-1',), None, fscope)
E
E ValueError: Exception encountered when calling layer "gpt2_block" (type GPT2Block).
E
E in user code:
E
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transformers/block.py", line 124, in call *
E pre = combinators.call_sequentially(list(self.to_call_pre), inputs, **kwargs)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/combinators.py", line 819, in call_sequentially *
E outputs = call_layer(layer, outputs, **kwargs)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py", line 433, in call_layer *
E return layer(inputs, *args, **filtered_kwargs)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/config/schema.py", line 58, in call *
E return super().call(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler **
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in call
E outputs = call_fn(inputs, *args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 146, in error_handler
E raise new_e.with_traceback(e.traceback) from None
E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
E return fn(*args, **kwargs)
E File "/tmp/autograph_generated_filemx6pbzv.py", line 28, in tf__call **
E ag
_.if_stmt((ag__.ld(mask) is not None), if_body, else_body, get_state, set_state, ('outputs',), 1)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1341, in if_stmt
E py_if_stmt(cond, body, orelse)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1394, in py_if_stmt
E return body() if cond else orelse()
E File "/tmp/autograph_generated_filemx6pbzv.py", line 23, in if_body
E outputs = ag
.converted_call(ag
_.ld(self).replace_masked_embeddings, (ag_.ld(inputs), ag__.ld(mask)), None, fscope)
E File "/tmp/autograph_generated_filewi9t9s9.py", line 23, in tf___replace_masked_embeddings **
E ag
_.if_stmt(ag__.not_(ag__.converted_call(ag__.ld(self).check_inputs_mask_compatible_shape, (ag_.ld(inputs), ag__.ld(mask)), None, fscope)), if_body, else_body, get_state, set_state, (), 0)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1339, in if_stmt
E tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1385, in tf_if_stmt
E final_cond_vars = control_flow_ops.cond(
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E return fn(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 1082, in op_dispatch_handler
E return dispatch_target(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/deprecation.py", line 561, in new_func
E return func(*args, **kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 1202, in cond
E return cond_v2.cond_v2(pred, true_fn, false_fn, name)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/cond_v2.py", line 80, in cond_v2
E true_graph = func_graph_module.func_graph_from_py_func(
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py", line 1141, in func_graph_from_py_func
E func_outputs = python_func(*func_args, **func_kwargs)
E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/operators/control_flow.py", line 1365, in aug_body
E body()
E File "/tmp/autograph_generated_filewi9t9s9.py", line 19, in if_body
E raise ag
.converted_call(ag
_.ld(ValueError), ('The inputs and mask need to be compatible: have the same dtype (tf.Tensor or tf.RaggedTensor) and the tf.rank(mask) == tf.rank(inputs)-1',), None, fscope)
E
E ValueError: Exception encountered when calling layer "replace_masked_embeddings" (type ReplaceMaskedEmbeddings).
E
E in user code:
E
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/sequence.py", line 593, in call *
E outputs = self._replace_masked_embeddings(inputs, mask)
E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/sequence.py", line 656, in _replace_masked_embeddings *
E raise ValueError(
E
E ValueError: The inputs and mask need to be compatible: have the same dtype (tf.Tensor or tf.RaggedTensor) and the tf.rank(mask) == tf.rank(inputs)-1
E
E
E Call arguments received by layer "replace_masked_embeddings" (type ReplaceMaskedEmbeddings):
E • inputs=tf.RaggedTensor(values=Tensor("model/concat_features/RaggedConcat/concat:0", shape=(None, 48), dtype=float32), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths_8/control_dependency:0", shape=(None,), dtype=int32))
E • targets=tf.RaggedTensor(values=Tensor("model/sequence_mask_random/Identity:0", shape=(None,), dtype=int64), row_splits=Tensor("model/sequence_mask_random/Identity_1:0", shape=(None,), dtype=int32))
E
E
E Call arguments received by layer "gpt2_block" (type GPT2Block):
E • inputs=tf.RaggedTensor(values=Tensor("model/concat_features/RaggedConcat/concat:0", shape=(None, 48), dtype=float32), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths_8/control_dependency:0", shape=(None,), dtype=int32))
E • kwargs={'features': {'item_id_seq': 'tf.RaggedTensor(values=Tensor("model/list_to_ragged/strided_slice:0", shape=(None,), dtype=int64), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths/control_dependency:0", shape=(None,), dtype=int32))', 'categories': 'tf.RaggedTensor(values=Tensor("model/list_to_ragged/strided_slice_2:0", shape=(None,), dtype=int64), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths_1/control_dependency:0", shape=(None,), dtype=int32))', 'test_user_id': 'tf.Tensor(shape=(None, None), dtype=int64)', 'user_country': 'tf.Tensor(shape=(None, None), dtype=int64)', 'item_age_days_norm': 'tf.RaggedTensor(values=Tensor("model/list_to_ragged/strided_slice_4:0", shape=(None,), dtype=float32), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths_2/control_dependency:0", shape=(None,), dtype=int32))', 'event_hour_sin': 'tf.RaggedTensor(values=Tensor("model/list_to_ragged/strided_slice_6:0", shape=(None,), dtype=float32), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths_3/control_dependency:0", shape=(None,), dtype=int32))', 'event_hour_cos': 'tf.RaggedTensor(values=Tensor("model/list_to_ragged/strided_slice_8:0", shape=(None,), dtype=float32), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths_4/control_dependency:0", shape=(None,), dtype=int32))', 'event_weekday_sin': 'tf.RaggedTensor(values=Tensor("model/list_to_ragged/strided_slice_10:0", shape=(None,), dtype=float32), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths_5/control_dependency:0", shape=(None,), dtype=int32))', 'event_weekday_cos': 'tf.RaggedTensor(values=Tensor("model/list_to_ragged/strided_slice_12:0", shape=(None,), dtype=float32), row_splits=Tensor("model/list_to_ragged/RaggedFromRowLengths_6/control_dependency:0", shape=(None,), dtype=int32))', 'user_age': 'tf.Tensor(shape=(None, None), dtype=float32)'}, 'training': 'True', 'testing': 'False', 'mask': ('None',), 'targets': 'tf.RaggedTensor(values=Tensor("model/sequence_mask_random/Identity:0", shape=(None,), dtype=int64), row_splits=Tensor("model/sequence_mask_random/Identity_1:0", shape=(None,), dtype=int32))'}
E
E
E Call arguments received by layer "model" (type Model):
E • inputs={'item_id_seq': ('tf.Tensor(shape=(None, None), dtype=int64)', 'tf.Tensor(shape=(None, None), dtype=int32)'), 'categories': ('tf.Tensor(shape=(None, None), dtype=int64)', 'tf.Tensor(shape=(None, None), dtype=int32)'), 'test_user_id': 'tf.Tensor(shape=(None, None), dtype=int64)', 'user_country': 'tf.Tensor(shape=(None, None), dtype=int64)', 'item_age_days_norm': ('tf.Tensor(shape=(None, None), dtype=float32)', 'tf.Tensor(shape=(None, None), dtype=int32)'), 'event_hour_sin': ('tf.Tensor(shape=(None, None), dtype=float32)', 'tf.Tensor(shape=(None, None), dtype=int32)'), 'event_hour_cos': ('tf.Tensor(shape=(None, None), dtype=float32)', 'tf.Tensor(shape=(None, None), dtype=int32)'), 'event_weekday_sin': ('tf.Tensor(shape=(None, None), dtype=float32)', 'tf.Tensor(shape=(None, None), dtype=int32)'), 'event_weekday_cos': ('tf.Tensor(shape=(None, None), dtype=float32)', 'tf.Tensor(shape=(None, None), dtype=int32)'), 'user_age': 'tf.Tensor(shape=(None, None), dtype=float32)'}
E • targets=None
E • training=True
E • testing=False
E • output_context=False

/tmp/__autograph_generated_filewi9t9s9.py:19: ValueError
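Note on the ValueError above: ReplaceMaskedEmbeddings rejects a mask that is not compatible with the input embeddings. The following is a minimal illustrative sketch of the kind of check the error message describes, not the actual Merlin Models implementation; the tensors below are made-up examples with 3-D ragged embeddings and a 2-D ragged boolean mask.

import tensorflow as tf

def mask_is_compatible(inputs, mask):
    # Both must be the same tensor type (tf.Tensor or tf.RaggedTensor)
    # and the mask must have exactly one dimension less than the inputs.
    same_type = isinstance(inputs, tf.RaggedTensor) == isinstance(mask, tf.RaggedTensor)
    return same_type and mask.shape.rank == inputs.shape.rank - 1

# (batch, seq, dim) ragged embeddings with a (batch, seq) ragged boolean mask
embeddings = tf.ragged.constant([[[0.1, 0.2], [0.3, 0.4]], [[0.5, 0.6]]], ragged_rank=1)
mask = tf.ragged.constant([[True, False], [True]])
print(mask_is_compatible(embeddings, mask))  # True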
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 5 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 26 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 7 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 26 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 7 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:943: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))
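As a side note on the cupy deprecation warning above, the replacement it points to is cupy.from_dlpack. A minimal sketch of the suggested call follows; the tensor below is a stand-in, not the embedding table from the real code, and it assumes CuPy >= 10 and a GPU-backed tensor.

import cupy
import tensorflow as tf

embeddings = tf.random.uniform((4, 8))  # stand-in for the exported embedding table

# Deprecated form that triggers the warning: cupy.fromDlpack(...)
# Replacement suggested by the warning:
embeddings_cupy = cupy.from_dlpack(tf.experimental.dlpack.to_dlpack(embeddings))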

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/__autograph_generated_filenod556zl.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):
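The collections deprecation above has a direct fix: import the ABC from collections.abc, as the warning suggests. A minimal sketch, where feature_names is an illustrative value rather than the attribute from tabular.py:

import collections.abc

feature_names = ["item_id_seq", "categories"]  # illustrative value

# collections.Sequence is removed in Python 3.10; collections.abc.Sequence is the
# supported location. Note that str is also a Sequence, hence the extra guard.
if isinstance(feature_names, collections.abc.Sequence) and not isinstance(feature_names, str):
    print("treated as a list of feature names")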

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)
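For the Keras backend deprecation above, the drop-in replacement is tf.keras.backend.random_bernoulli; a minimal sketch follows, with shape and probability that are illustrative rather than the values used by the noise transform.

import tensorflow as tf

# Deprecated: tf.keras.backend.random_binomial(shape=(4, 10), p=0.3)
noise_mask = tf.keras.backend.random_bernoulli(shape=(4, 10), p=0.3)
print(noise_mask.shape)  # (4, 10); entries are 0.0 or 1.0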

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_base.py::test_retrieval_model_query
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 2 failed, 754 passed, 12 skipped, 1181 warnings in 1212.91s (0:20:12) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins8981614522092718089.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit efcdf11ba1c0e3219d77f9e3d8dae1bf53ecbaba, no merge conflicts.
Running as SYSTEM
Setting status of efcdf11ba1c0e3219d77f9e3d8dae1bf53ecbaba to PENDING with url https://10.20.13.93:8080/job/merlin_models/1489/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse efcdf11ba1c0e3219d77f9e3d8dae1bf53ecbaba^{commit} # timeout=10
Checking out Revision efcdf11ba1c0e3219d77f9e3d8dae1bf53ecbaba (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f efcdf11ba1c0e3219d77f9e3d8dae1bf53ecbaba # timeout=10
Commit message: "Fixed tests"
 > git rev-list --no-walk e5544256d8545186cccd5f1431c9971a4629bf92 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins10803849635752428732.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 770 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 30%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 31%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%]
tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 47%]
tests/unit/tf/models/test_base.py s....................... [ 50%]
tests/unit/tf/models/test_benchmark.py .. [ 50%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 60%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 62%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%]
tests/unit/tf/transformers/test_block.py ..................FF [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 70%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 74%]
....................s...... [ 78%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%]
tests/unit/tf/transforms/test_noise.py ..... [ 80%]
tests/unit/tf/transforms/test_sequence.py .................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
____ test_transformer_with_masked_language_modeling_check_eval_masked[True] ____

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f83ca25a4f0>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_transformer_with_masked_language_modeling_check_eval_masked(
    sequence_testing_data: Dataset, run_eagerly
):

    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_tag(
        Tags.CATEGORICAL
    )
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]

    loader = Loader(sequence_testing_data, batch_size=8, shuffle=False)
    model = mm.Model(
        mm.InputBlockV2(
            seq_schema,
            embeddings=mm.Embeddings(
                seq_schema.select_by_tag(Tags.CATEGORICAL), sequence_combiner=None
            ),
        ),
        # BertBlock(d_model=48, n_head=8, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        GPT2Block(d_model=48, n_head=4, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        mm.CategoricalOutput(
            seq_schema.select_by_name(target),
            default_loss="categorical_crossentropy",
        ),
    )
    seq_mask_random = mm.SequenceMaskRandom(schema=seq_schema, target=target, masking_prob=0.3)

    inputs, targets = next(iter(loader))
    outputs = model(inputs, targets=targets, training=True)
    assert list(outputs.shape) == [8, 4, 51997]
    testing_utils.model_test(
        model,
        loader,
        run_eagerly=run_eagerly,
        reload_model=True,
        fit_kwargs={"pre": seq_mask_random},
        metrics=[mm.RecallAt(5000), mm.NDCGAt(5000)],
    )

    # This transform only extracts targets, but without applying mask
    seq_target_as_input_no_mask = mm.SequenceTargetAsInput(schema=seq_schema, target=target)
    metrics_all_positions1 = model.evaluate(
        loader, batch_size=8, steps=1, return_dict=True, pre=seq_target_as_input_no_mask
    )
    metrics_all_positions2 = model.evaluate(
        loader, batch_size=8, steps=1, return_dict=True, pre=seq_target_as_input_no_mask
    )

    def _metrics_almost_equal(metrics1, metrics2):
        return np.all(
            [
                np.isclose(metrics1[k], metrics2[k])
                for k in metrics1
                if k not in "regularization_loss"
            ]
        )

    # Ensures metrics without masked positions are equal
  assert _metrics_almost_equal(metrics_all_positions1, metrics_all_positions2)

E AssertionError: assert False
E + where False = <function test_transformer_with_masked_language_modeling_check_eval_masked.<locals>._metrics_almost_equal at 0x7f83c895ff70>({'loss': 10.841493606567383, 'ndcg_at_5000': 0.014380228705704212, 'recall_at_5000': 0.15625, 'regularization_loss': 0.0}, {'loss': 10.841493606567383, 'ndcg_at_5000': 0.014380043372511864, 'recall_at_5000': 0.15625, 'regularization_loss': 0.0})

tests/unit/tf/transformers/test_block.py:302: AssertionError
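A quick note on why the assertion above fails although loss and recall match exactly: the two ndcg_at_5000 values differ only in the seventh decimal place, but that relative difference (about 1.3e-5) is slightly larger than the default rtol of np.isclose. The short check below uses the values from the assertion message; the looser tolerance at the end is only illustrative, not a suggested fix.

import numpy as np

ndcg_run1 = 0.014380228705704212  # first evaluate() call
ndcg_run2 = 0.014380043372511864  # second evaluate() call

# np.isclose default test: |a - b| <= atol + rtol * |b|, with rtol=1e-05, atol=1e-08
print(abs(ndcg_run1 - ndcg_run2))                    # ~1.85e-07
print(1e-08 + 1e-05 * abs(ndcg_run2))                # ~1.54e-07, so the check fails
print(np.isclose(ndcg_run1, ndcg_run2))              # False
print(np.isclose(ndcg_run1, ndcg_run2, rtol=1e-04))  # True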
----------------------------- Captured stdout call -----------------------------

1/1 [==============================] - ETA: 0s - loss: 4.0676 - recall_at_5000: 0.0625 - ndcg_at_5000: 0.0052 - regularization_loss: 0.0000e+00
1/1 [==============================] - 1s 800ms/step - loss: 4.0676 - recall_at_5000: 0.0625 - ndcg_at_5000: 0.0052 - regularization_loss: 0.0000e+00

1/1 [==============================] - ETA: 0s - loss: 10.8415 - recall_at_5000: 0.1562 - ndcg_at_5000: 0.0144 - regularization_loss: 0.0000e+00
1/1 [==============================] - 1s 675ms/step - loss: 10.8415 - recall_at_5000: 0.1562 - ndcg_at_5000: 0.0144 - regularization_loss: 0.0000e+00

1/1 [==============================] - ETA: 0s - loss: 10.8415 - recall_at_5000: 0.1562 - ndcg_at_5000: 0.0144 - regularization_loss: 0.0000e+00
1/1 [==============================] - 0s 317ms/step - loss: 10.8415 - recall_at_5000: 0.1562 - ndcg_at_5000: 0.0144 - regularization_loss: 0.0000e+00
----------------------------- Captured stderr call -----------------------------
WARNING:tensorflow:Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using `model.compile()`, did you forget to provide a `loss` argument?
WARNING:tensorflow:Skipping full serialization of Keras layer TFSharedEmbeddings(
(_feature_shapes): Dict(
(item_id_seq): TensorShape([8, None])
(categories): TensorShape([8, None])
(test_user_id): TensorShape([8, 1])
(user_country): TensorShape([8, 1])
(item_age_days_norm): TensorShape([8, None])
(event_hour_sin): TensorShape([8, None])
(event_hour_cos): TensorShape([8, None])
(event_weekday_sin): TensorShape([8, None])
(event_weekday_cos): TensorShape([8, None])
(user_age): TensorShape([8, 1])
)
(_feature_dtypes): Dict(
(item_id_seq): tf.int64
(categories): tf.int64
(test_user_id): tf.int64
(user_country): tf.int64
(item_age_days_norm): tf.float32
(event_hour_sin): tf.float32
(event_hour_cos): tf.float32
(event_weekday_sin): tf.float32
(event_weekday_cos): tf.float32
(user_age): tf.float32
)
), because it is not built.
WARNING:tensorflow:Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
------------------------------ Captured log call -------------------------------
WARNING tensorflow:utils.py:76 Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
WARNING tensorflow:save_impl.py:71 Skipping full serialization of Keras layer TFSharedEmbeddings(
(_feature_shapes): Dict(
(item_id_seq): TensorShape([8, None])
(categories): TensorShape([8, None])
(test_user_id): TensorShape([8, 1])
(user_country): TensorShape([8, 1])
(item_age_days_norm): TensorShape([8, None])
(event_hour_sin): TensorShape([8, None])
(event_hour_cos): TensorShape([8, None])
(event_weekday_sin): TensorShape([8, None])
(event_weekday_cos): TensorShape([8, None])
(user_age): TensorShape([8, 1])
)
(_feature_dtypes): Dict(
(item_id_seq): tf.int64
(categories): tf.int64
(test_user_id): tf.int64
(user_country): tf.int64
(item_age_days_norm): tf.float32
(event_hour_sin): tf.float32
(event_hour_cos): tf.float32
(event_weekday_sin): tf.float32
(event_weekday_cos): tf.float32
(user_age): tf.float32
)
), because it is not built.
WARNING absl:save.py:233 Found untraced functions such as model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, sequence_mask_random_layer_call_fn, sequence_mask_random_layer_call_and_return_conditional_losses, list_to_ragged_layer_call_fn while saving (showing 5 of 90). These functions will not be directly callable after loading.
WARNING tensorflow:utils.py:76 Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
___ test_transformer_with_masked_language_modeling_check_eval_masked[False] ____

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f83d38a3910>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_transformer_with_masked_language_modeling_check_eval_masked(
    sequence_testing_data: Dataset, run_eagerly
):

    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_tag(
        Tags.CATEGORICAL
    )
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]

    loader = Loader(sequence_testing_data, batch_size=8, shuffle=False)
    model = mm.Model(
        mm.InputBlockV2(
            seq_schema,
            embeddings=mm.Embeddings(
                seq_schema.select_by_tag(Tags.CATEGORICAL), sequence_combiner=None
            ),
        ),
        # BertBlock(d_model=48, n_head=8, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        GPT2Block(d_model=48, n_head=4, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        mm.CategoricalOutput(
            seq_schema.select_by_name(target),
            default_loss="categorical_crossentropy",
        ),
    )
    seq_mask_random = mm.SequenceMaskRandom(schema=seq_schema, target=target, masking_prob=0.3)

    inputs, targets = next(iter(loader))
    outputs = model(inputs, targets=targets, training=True)
    assert list(outputs.shape) == [8, 4, 51997]
    testing_utils.model_test(
        model,
        loader,
        run_eagerly=run_eagerly,
        reload_model=True,
        fit_kwargs={"pre": seq_mask_random},
        metrics=[mm.RecallAt(5000), mm.NDCGAt(5000)],
    )

    # This transform only extracts targets, but without applying mask
    seq_target_as_input_no_mask = mm.SequenceTargetAsInput(schema=seq_schema, target=target)
    metrics_all_positions1 = model.evaluate(
        loader, batch_size=8, steps=1, return_dict=True, pre=seq_target_as_input_no_mask
    )
    metrics_all_positions2 = model.evaluate(
        loader, batch_size=8, steps=1, return_dict=True, pre=seq_target_as_input_no_mask
    )

    def _metrics_almost_equal(metrics1, metrics2):
        return np.all(
            [
                np.isclose(metrics1[k], metrics2[k])
                for k in metrics1
                if k not in "regularization_loss"
            ]
        )

    # Ensures metrics without masked positions are equal
    assert _metrics_almost_equal(metrics_all_positions1, metrics_all_positions2)

    seq_mask_last = mm.SequenceMaskLast(schema=seq_schema, target=target)
    metrics_last_positions = model.evaluate(
        loader, batch_size=8, steps=1, return_dict=True, pre=seq_mask_last
    )
    # Ensures metrics masking only last positions are different then the ones
    # considering all positions
>   assert not _metrics_almost_equal(metrics_all_positions1, metrics_last_positions)

E AssertionError: assert not True
E + where True = <function test_transformer_with_masked_language_modeling_check_eval_masked.<locals>._metrics_almost_equal at 0x7f83a88a84c0>({'loss': 10.847973823547363, 'ndcg_at_5000': 0.02224910259246826, 'recall_at_5000': 0.1875, 'regularization_loss': 0.0}, {'loss': 10.847973823547363, 'ndcg_at_5000': 0.02224910259246826, 'recall_at_5000': 0.1875, 'regularization_loss': 0.0})

tests/unit/tf/transformers/test_block.py:310: AssertionError
----------------------------- Captured stdout call -----------------------------

1/1 [==============================] - 8s 8s/step - loss: 4.7367 - recall_at_5000: 0.1562 - ndcg_at_5000: 0.0136 - regularization_loss: 0.0000e+00

1/1 [==============================] - 3s 3s/step - loss: 10.8480 - recall_at_5000: 0.1875 - ndcg_at_5000: 0.0222 - regularization_loss: 0.0000e+00

1/1 [==============================] - 0s 250ms/step - loss: 10.8480 - recall_at_5000: 0.1875 - ndcg_at_5000: 0.0222 - regularization_loss: 0.0000e+00

1/1 [==============================] - 0s 251ms/step - loss: 10.8480 - recall_at_5000: 0.1875 - ndcg_at_5000: 0.0222 - regularization_loss: 0.0000e+00
----------------------------- Captured stderr call -----------------------------
WARNING:tensorflow:Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
WARNING:tensorflow:Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
2022-10-09 15:07:45.932244: W tensorflow/core/grappler/optimizers/loop_optimizer.cc:907] Skipping loop optimization for Merge node with control input: model/gpt2_block/replace_masked_embeddings/RaggedWhere/Assert/AssertGuard/branch_executed/_9
WARNING:tensorflow:Skipping full serialization of Keras layer TFSharedEmbeddings(
(_feature_shapes): Dict(
(item_id_seq): TensorShape([8, None])
(categories): TensorShape([8, None])
(test_user_id): TensorShape([8, 1])
(user_country): TensorShape([8, 1])
(item_age_days_norm): TensorShape([8, None])
(event_hour_sin): TensorShape([8, None])
(event_hour_cos): TensorShape([8, None])
(event_weekday_sin): TensorShape([8, None])
(event_weekday_cos): TensorShape([8, None])
(user_age): TensorShape([8, 1])
)
(_feature_dtypes): Dict(
(item_id_seq): tf.int64
(categories): tf.int64
(test_user_id): tf.int64
(user_country): tf.int64
(item_age_days_norm): tf.float32
(event_hour_sin): tf.float32
(event_hour_cos): tf.float32
(event_weekday_sin): tf.float32
(event_weekday_cos): tf.float32
(user_age): tf.float32
)
), because it is not built.
WARNING:tensorflow:Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
------------------------------ Captured log call -------------------------------
WARNING tensorflow:utils.py:76 Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
WARNING tensorflow:utils.py:76 Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
WARNING tensorflow:save_impl.py:71 Skipping full serialization of Keras layer TFSharedEmbeddings(
(_feature_shapes): Dict(
(item_id_seq): TensorShape([8, None])
(categories): TensorShape([8, None])
(test_user_id): TensorShape([8, 1])
(user_country): TensorShape([8, 1])
(item_age_days_norm): TensorShape([8, None])
(event_hour_sin): TensorShape([8, None])
(event_hour_cos): TensorShape([8, None])
(event_weekday_sin): TensorShape([8, None])
(event_weekday_cos): TensorShape([8, None])
(user_age): TensorShape([8, 1])
)
(_feature_dtypes): Dict(
(item_id_seq): tf.int64
(categories): tf.int64
(test_user_id): tf.int64
(user_country): tf.int64
(item_age_days_norm): tf.float32
(event_hour_sin): tf.float32
(event_hour_cos): tf.float32
(event_weekday_sin): tf.float32
(event_weekday_cos): tf.float32
(user_age): tf.float32
)
), because it is not built.
WARNING absl:save.py:233 Found untraced functions such as model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, sequence_mask_random_layer_call_fn, sequence_mask_random_layer_call_and_return_conditional_losses, list_to_ragged_layer_call_fn while saving (showing 5 of 90). These functions will not be directly callable after loading.
WARNING tensorflow:utils.py:76 Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 5 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 26 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 15 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 7 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 26 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 9 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:943: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/__autograph_generated_fileybb1neer.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_base.py::test_retrieval_model_query
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 2 failed, 756 passed, 12 skipped, 1199 warnings in 1511.12s (0:25:11) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins11534151192218213795.sh

@nvidia-merlin-bot
Copy link

Click to view CI Results
GitHub pull request #780 of commit cf47815f124a15af1a9269c643007fb802bcaa5a, no merge conflicts.
Running as SYSTEM
Setting status of cf47815f124a15af1a9269c643007fb802bcaa5a to PENDING with url https://10.20.13.93:8080/job/merlin_models/1490/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse cf47815f124a15af1a9269c643007fb802bcaa5a^{commit} # timeout=10
Checking out Revision cf47815f124a15af1a9269c643007fb802bcaa5a (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f cf47815f124a15af1a9269c643007fb802bcaa5a # timeout=10
Commit message: "Removing not necessary constant"
 > git rev-list --no-walk efcdf11ba1c0e3219d77f9e3d8dae1bf53ecbaba # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins3110086427064645160.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 770 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 30%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 31%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%]
tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 47%]
tests/unit/tf/models/test_base.py s....................... [ 50%]
tests/unit/tf/models/test_benchmark.py .. [ 50%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 60%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 62%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%]
tests/unit/tf/transformers/test_block.py ..................FF [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 70%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 74%]
....................s...... [ 78%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%]
tests/unit/tf/transforms/test_noise.py ..... [ 80%]
tests/unit/tf/transforms/test_sequence.py .................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py ..........F... [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
____ test_transformer_with_masked_language_modeling_check_eval_masked[True] ____

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f92e409cc10>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_transformer_with_masked_language_modeling_check_eval_masked(
    sequence_testing_data: Dataset, run_eagerly
):

    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_tag(
        Tags.CATEGORICAL
    )
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]

    loader = Loader(sequence_testing_data, batch_size=8, shuffle=False)
    model = mm.Model(
        mm.InputBlockV2(
            seq_schema,
            embeddings=mm.Embeddings(
                seq_schema.select_by_tag(Tags.CATEGORICAL), sequence_combiner=None
            ),
        ),
        # BertBlock(d_model=48, n_head=8, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        GPT2Block(d_model=48, n_head=4, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        mm.CategoricalOutput(
            seq_schema.select_by_name(target),
            default_loss="categorical_crossentropy",
        ),
    )
    seq_mask_random = mm.SequenceMaskRandom(schema=seq_schema, target=target, masking_prob=0.3)

    inputs, targets = next(iter(loader))
    outputs = model(inputs, targets=targets, training=True)
    assert list(outputs.shape) == [8, 4, 51997]
    testing_utils.model_test(
        model,
        loader,
        run_eagerly=run_eagerly,
        reload_model=True,
        fit_kwargs={"pre": seq_mask_random},
        metrics=[mm.RecallAt(5000), mm.NDCGAt(5000)],
    )

    # This transform only extracts targets, but without applying mask
    seq_target_as_input_no_mask = mm.SequenceTargetAsInput(schema=seq_schema, target=target)
    metrics_all_positions1 = model.evaluate(
        loader, batch_size=8, steps=1, return_dict=True, pre=seq_target_as_input_no_mask
    )
    metrics_all_positions2 = model.evaluate(
        loader, batch_size=8, steps=1, return_dict=True, pre=seq_target_as_input_no_mask
    )

    def _metrics_almost_equal(metrics1, metrics2):
        return np.all(
            [
                np.isclose(metrics1[k], metrics2[k])
                for k in metrics1
                if k not in "regularization_loss"
            ]
        )

    # Ensures metrics without masked positions are equal
>   assert _metrics_almost_equal(metrics_all_positions1, metrics_all_positions2)

E AssertionError: assert False
E + where False = <function test_transformer_with_masked_language_modeling_check_eval_masked.<locals>._metrics_almost_equal at 0x7f92e7bd95e0>({'loss': 10.841493606567383, 'ndcg_at_5000': 0.014380228705704212, 'recall_at_5000': 0.15625, 'regularization_loss': 0.0}, {'loss': 10.841493606567383, 'ndcg_at_5000': 0.014380043372511864, 'recall_at_5000': 0.15625, 'regularization_loss': 0.0})

tests/unit/tf/transformers/test_block.py:302: AssertionError
----------------------------- Captured stdout call -----------------------------

1/1 [==============================] - 1s 784ms/step - loss: 4.0676 - recall_at_5000: 0.0625 - ndcg_at_5000: 0.0052 - regularization_loss: 0.0000e+00

1/1 [==============================] - 1s 663ms/step - loss: 10.8415 - recall_at_5000: 0.1562 - ndcg_at_5000: 0.0144 - regularization_loss: 0.0000e+00

1/1 [==============================] - 0s 293ms/step - loss: 10.8415 - recall_at_5000: 0.1562 - ndcg_at_5000: 0.0144 - regularization_loss: 0.0000e+00
----------------------------- Captured stderr call -----------------------------
WARNING:tensorflow:Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
WARNING:tensorflow:Skipping full serialization of Keras layer TFSharedEmbeddings(
(_feature_shapes): Dict(
(item_id_seq): TensorShape([8, None])
(categories): TensorShape([8, None])
(test_user_id): TensorShape([8, 1])
(user_country): TensorShape([8, 1])
(item_age_days_norm): TensorShape([8, None])
(event_hour_sin): TensorShape([8, None])
(event_hour_cos): TensorShape([8, None])
(event_weekday_sin): TensorShape([8, None])
(event_weekday_cos): TensorShape([8, None])
(user_age): TensorShape([8, 1])
)
(_feature_dtypes): Dict(
(item_id_seq): tf.int64
(categories): tf.int64
(test_user_id): tf.int64
(user_country): tf.int64
(item_age_days_norm): tf.float32
(event_hour_sin): tf.float32
(event_hour_cos): tf.float32
(event_weekday_sin): tf.float32
(event_weekday_cos): tf.float32
(user_age): tf.float32
)
), because it is not built.
WARNING:tensorflow:Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
------------------------------ Captured log call -------------------------------
WARNING tensorflow:utils.py:76 Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
WARNING tensorflow:save_impl.py:71 Skipping full serialization of Keras layer TFSharedEmbeddings(
(_feature_shapes): Dict(
(item_id_seq): TensorShape([8, None])
(categories): TensorShape([8, None])
(test_user_id): TensorShape([8, 1])
(user_country): TensorShape([8, 1])
(item_age_days_norm): TensorShape([8, None])
(event_hour_sin): TensorShape([8, None])
(event_hour_cos): TensorShape([8, None])
(event_weekday_sin): TensorShape([8, None])
(event_weekday_cos): TensorShape([8, None])
(user_age): TensorShape([8, 1])
)
(_feature_dtypes): Dict(
(item_id_seq): tf.int64
(categories): tf.int64
(test_user_id): tf.int64
(user_country): tf.int64
(item_age_days_norm): tf.float32
(event_hour_sin): tf.float32
(event_hour_cos): tf.float32
(event_weekday_sin): tf.float32
(event_weekday_cos): tf.float32
(user_age): tf.float32
)
), because it is not built.
WARNING absl:save.py:233 Found untraced functions such as model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, sequence_mask_random_layer_call_fn, sequence_mask_random_layer_call_and_return_conditional_losses, list_to_ragged_layer_call_fn while saving (showing 5 of 90). These functions will not be directly callable after loading.
WARNING tensorflow:utils.py:76 Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
___ test_transformer_with_masked_language_modeling_check_eval_masked[False] ____

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f930df383d0>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_transformer_with_masked_language_modeling_check_eval_masked(
    sequence_testing_data: Dataset, run_eagerly
):

    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_tag(
        Tags.CATEGORICAL
    )
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]

    loader = Loader(sequence_testing_data, batch_size=8, shuffle=False)
    model = mm.Model(
        mm.InputBlockV2(
            seq_schema,
            embeddings=mm.Embeddings(
                seq_schema.select_by_tag(Tags.CATEGORICAL), sequence_combiner=None
            ),
        ),
        # BertBlock(d_model=48, n_head=8, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        GPT2Block(d_model=48, n_head=4, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        mm.CategoricalOutput(
            seq_schema.select_by_name(target),
            default_loss="categorical_crossentropy",
        ),
    )
    seq_mask_random = mm.SequenceMaskRandom(schema=seq_schema, target=target, masking_prob=0.3)

    inputs, targets = next(iter(loader))
    outputs = model(inputs, targets=targets, training=True)
    assert list(outputs.shape) == [8, 4, 51997]
    testing_utils.model_test(
        model,
        loader,
        run_eagerly=run_eagerly,
        reload_model=True,
        fit_kwargs={"pre": seq_mask_random},
        metrics=[mm.RecallAt(5000), mm.NDCGAt(5000)],
    )

    # This transform only extracts targets, without applying a mask
    seq_target_as_input_no_mask = mm.SequenceTargetAsInput(schema=seq_schema, target=target)
    metrics_all_positions1 = model.evaluate(
        loader, batch_size=8, steps=1, return_dict=True, pre=seq_target_as_input_no_mask
    )
    metrics_all_positions2 = model.evaluate(
        loader, batch_size=8, steps=1, return_dict=True, pre=seq_target_as_input_no_mask
    )

    def _metrics_almost_equal(metrics1, metrics2):
        return np.all(
            [
                np.isclose(metrics1[k], metrics2[k])
                for k in metrics1
                if k not in "regularization_loss"
            ]
        )

    # Ensures metrics without masked positions are equal
    assert _metrics_almost_equal(metrics_all_positions1, metrics_all_positions2)

    seq_mask_last = mm.SequenceMaskLast(schema=seq_schema, target=target)
    metrics_last_positions = model.evaluate(
        loader, batch_size=8, steps=1, return_dict=True, pre=seq_mask_last
    )
    # Ensures metrics masking only the last positions are different than the ones
    # considering all positions
>   assert not _metrics_almost_equal(metrics_all_positions1, metrics_last_positions)

E AssertionError: assert not True
E + where True = <function test_transformer_with_masked_language_modeling_check_eval_masked.._metrics_almost_equal at 0x7f92e7fc6e50>({'loss': 10.847973823547363, 'ndcg_at_5000': 0.02224910259246826, 'recall_at_5000': 0.1875, 'regularization_loss': 0.0}, {'loss': 10.847973823547363, 'ndcg_at_5000': 0.02224910259246826, 'recall_at_5000': 0.1875, 'regularization_loss': 0.0})

tests/unit/tf/transformers/test_block.py:310: AssertionError
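The assertion above shows that, with run_eagerly=False, evaluating with SequenceMaskLast produced exactly the same metrics as evaluating over all positions, i.e. the last-position evaluation mask had no visible effect in graph mode. A debugging sketch (hypothetical, reusing the model, loader, seq_target_as_input_no_mask, and seq_mask_last objects defined in the test body above) to check whether the discrepancy is graph-mode specific:

```python
# Recompile the same model eagerly and repeat both evaluations.
model.compile(run_eagerly=True, optimizer="adam")

metrics_all = model.evaluate(
    loader, batch_size=8, steps=1, return_dict=True, pre=seq_target_as_input_no_mask
)
metrics_last = model.evaluate(
    loader, batch_size=8, steps=1, return_dict=True, pre=seq_mask_last
)

# If the two dicts still match here, the last-position mask is not applied at all;
# if they now differ, the mask is only being lost when running in graph mode.
print({k: (metrics_all[k], metrics_last[k]) for k in metrics_all})
```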
----------------------------- Captured stdout call -----------------------------

1/1 [==============================] - 9s 9s/step - loss: 4.7367 - recall_at_5000: 0.1562 - ndcg_at_5000: 0.0136 - regularization_loss: 0.0000e+00

1/1 [==============================] - 2s 2s/step - loss: 10.8480 - recall_at_5000: 0.1875 - ndcg_at_5000: 0.0222 - regularization_loss: 0.0000e+00

1/1 [==============================] - 0s 244ms/step - loss: 10.8480 - recall_at_5000: 0.1875 - ndcg_at_5000: 0.0222 - regularization_loss: 0.0000e+00

1/1 [==============================] - 0s 249ms/step - loss: 10.8480 - recall_at_5000: 0.1875 - ndcg_at_5000: 0.0222 - regularization_loss: 0.0000e+00
----------------------------- Captured stderr call -----------------------------
WARNING:tensorflow:Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
WARNING:tensorflow:Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
2022-10-09 15:39:02.199021: W tensorflow/core/grappler/optimizers/loop_optimizer.cc:907] Skipping loop optimization for Merge node with control input: model/gpt2_block/replace_masked_embeddings/RaggedWhere/Assert/AssertGuard/branch_executed/_9
WARNING:tensorflow:Skipping full serialization of Keras layer TFSharedEmbeddings(
(_feature_shapes): Dict(
(item_id_seq): TensorShape([8, None])
(categories): TensorShape([8, None])
(test_user_id): TensorShape([8, 1])
(user_country): TensorShape([8, 1])
(item_age_days_norm): TensorShape([8, None])
(event_hour_sin): TensorShape([8, None])
(event_hour_cos): TensorShape([8, None])
(event_weekday_sin): TensorShape([8, None])
(event_weekday_cos): TensorShape([8, None])
(user_age): TensorShape([8, 1])
)
(_feature_dtypes): Dict(
(item_id_seq): tf.int64
(categories): tf.int64
(test_user_id): tf.int64
(user_country): tf.int64
(item_age_days_norm): tf.float32
(event_hour_sin): tf.float32
(event_hour_cos): tf.float32
(event_weekday_sin): tf.float32
(event_weekday_cos): tf.float32
(user_age): tf.float32
)
), because it is not built.
WARNING:tensorflow:Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
------------------------------ Captured log call -------------------------------
WARNING tensorflow:utils.py:76 Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
WARNING tensorflow:utils.py:76 Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
WARNING tensorflow:save_impl.py:71 Skipping full serialization of Keras layer TFSharedEmbeddings(
(_feature_shapes): Dict(
(item_id_seq): TensorShape([8, None])
(categories): TensorShape([8, None])
(test_user_id): TensorShape([8, 1])
(user_country): TensorShape([8, 1])
(item_age_days_norm): TensorShape([8, None])
(event_hour_sin): TensorShape([8, None])
(event_hour_cos): TensorShape([8, None])
(event_weekday_sin): TensorShape([8, None])
(event_weekday_cos): TensorShape([8, None])
(user_age): TensorShape([8, 1])
)
(_feature_dtypes): Dict(
(item_id_seq): tf.int64
(categories): tf.int64
(test_user_id): tf.int64
(user_country): tf.int64
(item_age_days_norm): tf.float32
(event_hour_sin): tf.float32
(event_hour_cos): tf.float32
(event_weekday_sin): tf.float32
(event_weekday_cos): tf.float32
(user_age): tf.float32
)
), because it is not built.
WARNING absl:save.py:233 Found untraced functions such as model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, sequence_mask_random_layer_call_fn, sequence_mask_random_layer_call_and_return_conditional_losses, list_to_ragged_layer_call_fn while saving (showing 5 of 90). These functions will not be directly callable after loading.
WARNING tensorflow:utils.py:76 Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
_____________________________ test_soft_embedding ______________________________

def test_soft_embedding():
    embeddings_dim = 16
    num_embeddings = 64

    soft_embedding = ml.SoftEmbedding(num_embeddings, embeddings_dim)
    assert soft_embedding.embedding_table.weight.shape == torch.Size(
        [num_embeddings, embeddings_dim]
    ), "Internal soft embedding table does not have the expected shape"

    batch_size = 10
    seq_length = 20
    cont_feature_inputs = torch.rand((batch_size, seq_length))
    output = soft_embedding(cont_feature_inputs)

    assert output.shape == torch.Size(
        [batch_size, seq_length, embeddings_dim]
    ), "Soft embedding output has not the expected shape"

    # Checking the default embedding initialization
>   assert output.detach().numpy().mean() == pytest.approx(0.0, abs=0.1)

E assert 0.11032183 == 0.0 ± 1.0e-01
E comparison failed
E Obtained: 0.11032182723283768
E Expected: 0.0 ± 1.0e-01

tests/unit/torch/features/test_embedding.py:182: AssertionError
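This failure is a flaky tolerance check: the mean of the randomly initialized soft-embedding output landed at 0.110, just outside the ±0.1 window. Below is a minimal sketch of how the check could be made deterministic by pinning the PyTorch RNG; it assumes the test's ml alias (e.g. import merlin.models.torch as ml) and is not a proposed patch:

```python
import torch


def soft_embedding_output_mean(seed: int) -> float:
    # Pin all torch RNG state so the embedding-table initialization
    # and the random continuous inputs are reproducible.
    torch.manual_seed(seed)
    soft_embedding = ml.SoftEmbedding(64, 16)  # `ml` alias assumed from the test above
    output = soft_embedding(torch.rand((10, 20)))
    return float(output.detach().mean())


# With a fixed seed the statistic is identical across runs, so the tolerance
# in the assertion no longer has to absorb run-to-run initialization noise.
assert soft_embedding_output_mean(seed=42) == soft_embedding_output_mean(seed=42)
```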
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 5 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 26 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 15 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 7 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 26 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 9 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:943: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_file0w3xe1o3.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_base.py::test_retrieval_model_query
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 3 failed, 755 passed, 12 skipped, 1199 warnings in 1501.85s (0:25:01) =====
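For reference, the three failures above (both parametrizations of the masked-evaluation transformer test plus the torch soft-embedding test) can be reproduced in isolation; a minimal sketch using pytest's Python entry point, assuming it is run from the repository root:

```python
# Hypothetical local reproduction script, not part of the repository.
import pytest

pytest.main(
    [
        # Selecting the node id without a parameter runs both [True] and [False].
        "tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked",
        "tests/unit/torch/features/test_embedding.py::test_soft_embedding",
        "-v",
    ]
)
```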
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins15914971460412174376.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit 58609b353a3ad4fe83604abccc03cdcf533c6a3f, no merge conflicts.
Running as SYSTEM
Setting status of 58609b353a3ad4fe83604abccc03cdcf533c6a3f to PENDING with url https://10.20.13.93:8080/job/merlin_models/1493/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse 58609b353a3ad4fe83604abccc03cdcf533c6a3f^{commit} # timeout=10
Checking out Revision 58609b353a3ad4fe83604abccc03cdcf533c6a3f (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 58609b353a3ad4fe83604abccc03cdcf533c6a3f # timeout=10
Commit message: "Trying to fix failing tests"
 > git rev-list --no-walk fe0239fce6a07edeef1d75416bf603bdd9e03fec # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins4097485030025363645.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 770 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 30%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 31%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%]
tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 47%]
tests/unit/tf/models/test_base.py s....................... [ 50%]
tests/unit/tf/models/test_benchmark.py .. [ 50%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 60%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 62%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%]
tests/unit/tf/transformers/test_block.py ..................F. [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 70%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 74%]
....................s...... [ 78%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%]
tests/unit/tf/transforms/test_noise.py ..... [ 80%]
tests/unit/tf/transforms/test_sequence.py .................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
____ test_transformer_with_masked_language_modeling_check_eval_masked[True] ____

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7fb0f705f280>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_transformer_with_masked_language_modeling_check_eval_masked(
    sequence_testing_data: Dataset, run_eagerly
):

    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_tag(
        Tags.CATEGORICAL
    )
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]

    loader = Loader(sequence_testing_data, batch_size=8, shuffle=False)
    model = mm.Model(
        mm.InputBlockV2(
            seq_schema,
            embeddings=mm.Embeddings(
                seq_schema.select_by_tag(Tags.CATEGORICAL), sequence_combiner=None
            ),
        ),
        # BertBlock(d_model=48, n_head=8, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        GPT2Block(d_model=48, n_head=4, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        mm.CategoricalOutput(
            seq_schema.select_by_name(target),
            default_loss="categorical_crossentropy",
        ),
    )
    seq_mask_random = mm.SequenceMaskRandom(schema=seq_schema, target=target, masking_prob=0.3)

    inputs, targets = next(iter(loader))
    outputs = model(inputs, targets=targets, training=True)
    assert list(outputs.shape) == [8, 4, 51997]
    testing_utils.model_test(
        model,
        loader,
        run_eagerly=run_eagerly,
        reload_model=True,
        fit_kwargs={"pre": seq_mask_random},
        metrics=[mm.RecallAt(5000), mm.NDCGAt(5000)],
    )

    # This transform only extracts targets, without applying a mask
    seq_target_as_input_no_mask = mm.SequenceTargetAsInput(schema=seq_schema, target=target)
    metrics_all_positions1 = model.evaluate(
        loader, batch_size=8, steps=1, return_dict=True, pre=seq_target_as_input_no_mask
    )
    metrics_all_positions2 = model.evaluate(
        loader, batch_size=8, steps=1, return_dict=True, pre=seq_target_as_input_no_mask
    )

    def _metrics_almost_equal(metrics1, metrics2):
        return np.all(
            [
                np.isclose(metrics1[k], metrics2[k])
                for k in metrics1
                if k not in "regularization_loss"
            ]
        )

    # Ensures metrics without masked positions are equal
>   assert _metrics_almost_equal(metrics_all_positions1, metrics_all_positions2)

E AssertionError: assert False
E + where False = <function test_transformer_with_masked_language_modeling_check_eval_masked.._metrics_almost_equal at 0x7fb11c1ea8b0>({'loss': 10.841493606567383, 'ndcg_at_5000': 0.014380228705704212, 'recall_at_5000': 0.15625, 'regularization_loss': 0.0}, {'loss': 10.841493606567383, 'ndcg_at_5000': 0.014380043372511864, 'recall_at_5000': 0.15625, 'regularization_loss': 0.0})

tests/unit/tf/transformers/test_block.py:302: AssertionError
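Unlike the graph-mode case, the eager run fails on the opposite check: the two all-positions evaluations differ only in ndcg_at_5000, by about 1.9e-7 absolute (roughly 1.3e-5 relative), which is just outside np.isclose's default tolerance of rtol=1e-5. A worked check of that arithmetic with the values reported above (plain NumPy, not a proposed fix):

```python
import numpy as np

# ndcg_at_5000 from the two consecutive all-positions evaluations above
a, b = 0.014380228705704212, 0.014380043372511864

print(abs(a - b))           # ~1.85e-07 absolute difference
print(abs(a - b) / abs(b))  # ~1.29e-05 relative difference

print(np.isclose(a, b))              # False: default rtol=1e-05, atol=1e-08 rejects the pair
print(np.isclose(a, b, rtol=2e-5))   # True: a slightly looser relative tolerance accepts it
```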
----------------------------- Captured stdout call -----------------------------

1/1 [==============================] - 1s 796ms/step - loss: 4.0676 - recall_at_5000: 0.0625 - ndcg_at_5000: 0.0052 - regularization_loss: 0.0000e+00

1/1 [==============================] - 1s 664ms/step - loss: 10.8415 - recall_at_5000: 0.1562 - ndcg_at_5000: 0.0144 - regularization_loss: 0.0000e+00

1/1 [==============================] - 0s 309ms/step - loss: 10.8415 - recall_at_5000: 0.1562 - ndcg_at_5000: 0.0144 - regularization_loss: 0.0000e+00
----------------------------- Captured stderr call -----------------------------
WARNING:tensorflow:Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
WARNING:tensorflow:Skipping full serialization of Keras layer TFSharedEmbeddings(
(_feature_shapes): Dict(
(item_id_seq): TensorShape([8, None])
(categories): TensorShape([8, None])
(test_user_id): TensorShape([8, 1])
(user_country): TensorShape([8, 1])
(item_age_days_norm): TensorShape([8, None])
(event_hour_sin): TensorShape([8, None])
(event_hour_cos): TensorShape([8, None])
(event_weekday_sin): TensorShape([8, None])
(event_weekday_cos): TensorShape([8, None])
(user_age): TensorShape([8, 1])
)
(_feature_dtypes): Dict(
(item_id_seq): tf.int64
(categories): tf.int64
(test_user_id): tf.int64
(user_country): tf.int64
(item_age_days_norm): tf.float32
(event_hour_sin): tf.float32
(event_hour_cos): tf.float32
(event_weekday_sin): tf.float32
(event_weekday_cos): tf.float32
(user_age): tf.float32
)
), because it is not built.
WARNING:tensorflow:Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
------------------------------ Captured log call -------------------------------
WARNING tensorflow:utils.py:76 Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
WARNING tensorflow:save_impl.py:71 Skipping full serialization of Keras layer TFSharedEmbeddings(
(_feature_shapes): Dict(
(item_id_seq): TensorShape([8, None])
(categories): TensorShape([8, None])
(test_user_id): TensorShape([8, 1])
(user_country): TensorShape([8, 1])
(item_age_days_norm): TensorShape([8, None])
(event_hour_sin): TensorShape([8, None])
(event_hour_cos): TensorShape([8, None])
(event_weekday_sin): TensorShape([8, None])
(event_weekday_cos): TensorShape([8, None])
(user_age): TensorShape([8, 1])
)
(_feature_dtypes): Dict(
(item_id_seq): tf.int64
(categories): tf.int64
(test_user_id): tf.int64
(user_country): tf.int64
(item_age_days_norm): tf.float32
(event_hour_sin): tf.float32
(event_hour_cos): tf.float32
(event_weekday_sin): tf.float32
(event_weekday_cos): tf.float32
(user_age): tf.float32
)
), because it is not built.
WARNING absl:save.py:233 Found untraced functions such as model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, sequence_mask_random_layer_call_fn, sequence_mask_random_layer_call_and_return_conditional_losses, list_to_ragged_layer_call_fn while saving (showing 5 of 90). These functions will not be directly callable after loading.
WARNING tensorflow:utils.py:76 Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 5 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 26 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 15 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 7 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 26 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 9 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:943: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_file_n_ys7xv.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_base.py::test_retrieval_model_query
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 1 failed, 757 passed, 12 skipped, 1199 warnings in 1512.95s (0:25:12) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins18175790934832895051.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit 619bbb8289506a531c878ce901d39f7ccb21be2c, no merge conflicts.
Running as SYSTEM
Setting status of 619bbb8289506a531c878ce901d39f7ccb21be2c to PENDING with url https://10.20.13.93:8080/job/merlin_models/1494/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse 619bbb8289506a531c878ce901d39f7ccb21be2c^{commit} # timeout=10
Checking out Revision 619bbb8289506a531c878ce901d39f7ccb21be2c (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 619bbb8289506a531c878ce901d39f7ccb21be2c # timeout=10
Commit message: "Trying to invalidate compile-cache when pre is provided"
 > git rev-list --no-walk 58609b353a3ad4fe83604abccc03cdcf533c6a3f # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins4738722973112552079.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 770 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 30%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 31%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%]
tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 47%]
tests/unit/tf/models/test_base.py s....................... [ 50%]
tests/unit/tf/models/test_benchmark.py .. [ 50%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 60%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 62%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%]
tests/unit/tf/transformers/test_block.py ..................F. [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 70%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 74%]
....................s...... [ 78%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%]
tests/unit/tf/transforms/test_noise.py ..... [ 80%]
tests/unit/tf/transforms/test_sequence.py .................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
____ test_transformer_with_masked_language_modeling_check_eval_masked[True] ____

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7fa27ca0a9d0>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_transformer_with_masked_language_modeling_check_eval_masked(
    sequence_testing_data: Dataset, run_eagerly
):

    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_tag(
        Tags.CATEGORICAL
    )
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]

    loader = Loader(sequence_testing_data, batch_size=8, shuffle=False)
    model = mm.Model(
        mm.InputBlockV2(
            seq_schema,
            embeddings=mm.Embeddings(
                seq_schema.select_by_tag(Tags.CATEGORICAL), sequence_combiner=None
            ),
        ),
        # BertBlock(d_model=48, n_head=8, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        GPT2Block(d_model=48, n_head=4, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        mm.CategoricalOutput(
            seq_schema.select_by_name(target),
            default_loss="categorical_crossentropy",
        ),
    )
    seq_mask_random = mm.SequenceMaskRandom(schema=seq_schema, target=target, masking_prob=0.3)

    inputs, targets = next(iter(loader))
    outputs = model(inputs, targets=targets, training=True)
    assert list(outputs.shape) == [8, 4, 51997]
    testing_utils.model_test(
        model,
        loader,
        run_eagerly=run_eagerly,
        reload_model=True,
        fit_kwargs={"pre": seq_mask_random},
        metrics=[mm.RecallAt(5000), mm.NDCGAt(5000)],
    )

    # This transform only extracts targets, but without applying mask
    seq_target_as_input_no_mask = mm.SequenceTargetAsInput(schema=seq_schema, target=target)
    metrics_all_positions1 = model.evaluate(
        loader, batch_size=8, steps=1, return_dict=True, pre=seq_target_as_input_no_mask
    )
    metrics_all_positions2 = model.evaluate(
        loader, batch_size=8, steps=1, return_dict=True, pre=seq_target_as_input_no_mask
    )

    def _metrics_almost_equal(metrics1, metrics2):
        return np.all(
            [
                np.isclose(metrics1[k], metrics2[k])
                for k in metrics1
                if k not in "regularization_loss"
            ]
        )

    # Ensures metrics without masked positions are equal
  assert _metrics_almost_equal(metrics_all_positions1, metrics_all_positions2)

E AssertionError: assert False
E + where False = <function test_transformer_with_masked_language_modeling_check_eval_masked.<locals>._metrics_almost_equal at 0x7fa28ea78700>({'loss': 10.841493606567383, 'ndcg_at_5000': 0.014380228705704212, 'recall_at_5000': 0.15625, 'regularization_loss': 0.0}, {'loss': 10.841493606567383, 'ndcg_at_5000': 0.014380043372511864, 'recall_at_5000': 0.15625, 'regularization_loss': 0.0})

tests/unit/tf/transformers/test_block.py:302: AssertionError
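
The two evaluation runs above agree on loss and recall and differ only in ndcg_at_5000 at roughly the 1e-5 relative level, which falls just outside numpy.isclose's default tolerances (rtol=1e-5, atol=1e-8), so _metrics_almost_equal returns False. A minimal sketch using the values from the assertion message, purely to illustrate the comparison (the rtol=1e-4 loosening is only an example, not what the test uses):

import numpy as np

# Values copied from the AssertionError message above (run_eagerly=True case).
ndcg_run1 = 0.014380228705704212
ndcg_run2 = 0.014380043372511864

# np.isclose checks |a - b| <= atol + rtol * |b|, with rtol=1e-5 and atol=1e-8 by default.
# The relative gap here is ~1.3e-5, so the default tolerance rejects it.
print(np.isclose(ndcg_run1, ndcg_run2))             # False
# A slightly looser relative tolerance treats the two runs as equal,
# assuming the gap is only floating-point / non-determinism noise.
print(np.isclose(ndcg_run1, ndcg_run2, rtol=1e-4))  # True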
----------------------------- Captured stdout call -----------------------------

1/1 [==============================] - ETA: 0s - loss: 4.0676 - recall_at_5000: 0.0625 - ndcg_at_5000: 0.0052 - regularization_loss: 0.0000e+00
1/1 [==============================] - 1s 791ms/step - loss: 4.0676 - recall_at_5000: 0.0625 - ndcg_at_5000: 0.0052 - regularization_loss: 0.0000e+00

1/1 [==============================] - ETA: 0s - loss: 10.8415 - recall_at_5000: 0.1562 - ndcg_at_5000: 0.0144 - regularization_loss: 0.0000e+00
1/1 [==============================] - 1s 668ms/step - loss: 10.8415 - recall_at_5000: 0.1562 - ndcg_at_5000: 0.0144 - regularization_loss: 0.0000e+00

1/1 [==============================] - ETA: 0s - loss: 10.8415 - recall_at_5000: 0.1562 - ndcg_at_5000: 0.0144 - regularization_loss: 0.0000e+00
1/1 [==============================] - 0s 314ms/step - loss: 10.8415 - recall_at_5000: 0.1562 - ndcg_at_5000: 0.0144 - regularization_loss: 0.0000e+00
----------------------------- Captured stderr call -----------------------------
WARNING:tensorflow:Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
WARNING:tensorflow:Skipping full serialization of Keras layer TFSharedEmbeddings(
(_feature_shapes): Dict(
(item_id_seq): TensorShape([8, None])
(categories): TensorShape([8, None])
(test_user_id): TensorShape([8, 1])
(user_country): TensorShape([8, 1])
(item_age_days_norm): TensorShape([8, None])
(event_hour_sin): TensorShape([8, None])
(event_hour_cos): TensorShape([8, None])
(event_weekday_sin): TensorShape([8, None])
(event_weekday_cos): TensorShape([8, None])
(user_age): TensorShape([8, 1])
)
(_feature_dtypes): Dict(
(item_id_seq): tf.int64
(categories): tf.int64
(test_user_id): tf.int64
(user_country): tf.int64
(item_age_days_norm): tf.float32
(event_hour_sin): tf.float32
(event_hour_cos): tf.float32
(event_weekday_sin): tf.float32
(event_weekday_cos): tf.float32
(user_age): tf.float32
)
), because it is not built.
WARNING:tensorflow:Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
------------------------------ Captured log call -------------------------------
WARNING tensorflow:utils.py:76 Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
WARNING tensorflow:save_impl.py:71 Skipping full serialization of Keras layer TFSharedEmbeddings(
(_feature_shapes): Dict(
(item_id_seq): TensorShape([8, None])
(categories): TensorShape([8, None])
(test_user_id): TensorShape([8, 1])
(user_country): TensorShape([8, 1])
(item_age_days_norm): TensorShape([8, None])
(event_hour_sin): TensorShape([8, None])
(event_hour_cos): TensorShape([8, None])
(event_weekday_sin): TensorShape([8, None])
(event_weekday_cos): TensorShape([8, None])
(user_age): TensorShape([8, 1])
)
(_feature_dtypes): Dict(
(item_id_seq): tf.int64
(categories): tf.int64
(test_user_id): tf.int64
(user_country): tf.int64
(item_age_days_norm): tf.float32
(event_hour_sin): tf.float32
(event_hour_cos): tf.float32
(event_weekday_sin): tf.float32
(event_weekday_cos): tf.float32
(user_age): tf.float32
)
), because it is not built.
WARNING absl:save.py:233 Found untraced functions such as model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, sequence_mask_random_layer_call_fn, sequence_mask_random_layer_call_and_return_conditional_losses, list_to_ragged_layer_call_fn while saving (showing 5 of 90). These functions will not be directly callable after loading.
WARNING tensorflow:utils.py:76 Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a loss argument?
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 5 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 26 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 15 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 7 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 26 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 9 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:943: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_file5e7y4pqq.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_base.py::test_retrieval_model_query
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 1 failed, 757 passed, 12 skipped, 1199 warnings in 1509.11s (0:25:09) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins2369721448711262285.sh

@gabrielspmoreira gabrielspmoreira changed the title Masked Language Modeling as a transform for data loader and pre of TransformerBlock (alternative #2) Masked Language Modeling as a transform for data loader and pre of TransformerBlock Oct 10, 2022
@nvidia-merlin-bot
Copy link

Click to view CI Results
GitHub pull request #780 of commit d12f8b2148e4a1dafd3702b5aa4f31ee4487dfd0, no merge conflicts.
Running as SYSTEM
Setting status of d12f8b2148e4a1dafd3702b5aa4f31ee4487dfd0 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1495/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse d12f8b2148e4a1dafd3702b5aa4f31ee4487dfd0^{commit} # timeout=10
Checking out Revision d12f8b2148e4a1dafd3702b5aa4f31ee4487dfd0 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f d12f8b2148e4a1dafd3702b5aa4f31ee4487dfd0 # timeout=10
Commit message: "Merge branch 'mlm_alt' of github.com:NVIDIA-Merlin/models into mlm_alt"
 > git rev-list --no-walk 619bbb8289506a531c878ce901d39f7ccb21be2c # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins13500226492839596995.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 770 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 30%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 31%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%]
tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 47%]
tests/unit/tf/models/test_base.py s....................... [ 50%]
tests/unit/tf/models/test_benchmark.py .. [ 50%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 60%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 62%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%]
tests/unit/tf/transformers/test_block.py ..................F. [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 70%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 74%]
....................s...... [ 78%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%]
tests/unit/tf/transforms/test_noise.py ..... [ 80%]
tests/unit/tf/transforms/test_sequence.py .................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
____ test_transformer_with_masked_language_modeling_check_eval_masked[True] ____

sequence_testing_data = <merlin.io.dataset.Dataset object at 0x7f1885e6aca0>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_transformer_with_masked_language_modeling_check_eval_masked(
    sequence_testing_data: Dataset, run_eagerly
):

    seq_schema = sequence_testing_data.schema.select_by_tag(Tags.SEQUENCE).select_by_tag(
        Tags.CATEGORICAL
    )
    target = sequence_testing_data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]

    loader = Loader(sequence_testing_data, batch_size=8, shuffle=False)
    model = mm.Model(
        mm.InputBlockV2(
            seq_schema,
            embeddings=mm.Embeddings(
                seq_schema.select_by_tag(Tags.CATEGORICAL), sequence_combiner=None
            ),
        ),
        # BertBlock(d_model=48, n_head=8, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        GPT2Block(d_model=48, n_head=4, n_layer=2, pre=mm.ReplaceMaskedEmbeddings()),
        mm.CategoricalOutput(
            seq_schema.select_by_name(target),
            default_loss="categorical_crossentropy",
        ),
    )
    seq_mask_random = mm.SequenceMaskRandom(schema=seq_schema, target=target, masking_prob=0.3)

    inputs, targets = next(iter(loader))
    outputs = model(inputs, targets=targets, training=True)
    assert list(outputs.shape) == [8, 4, 51997]
    testing_utils.model_test(
        model,
        loader,
        run_eagerly=run_eagerly,
        reload_model=True,
        fit_kwargs={"pre": seq_mask_random},
        metrics=[mm.RecallAt(5000), mm.NDCGAt(5000)],
    )

    # This transform only extracts targets, but without applying mask
    seq_target_as_input_no_mask = mm.SequenceTargetAsInput(schema=seq_schema, target=target)
    metrics_all_positions1 = model.evaluate(
        loader, batch_size=8, steps=1, return_dict=True, pre=seq_target_as_input_no_mask
    )
    metrics_all_positions2 = model.evaluate(
        loader, batch_size=8, steps=1, return_dict=True, pre=seq_target_as_input_no_mask
    )

    def _metrics_almost_equal(metrics1, metrics2):
        return np.all(
            [
                np.isclose(metrics1[k], metrics2[k])
                for k in metrics1
                if k not in "regularization_loss"
            ]
        )

    # Ensures metrics without masked positions are equal
  assert _metrics_almost_equal(metrics_all_positions1, metrics_all_positions2)

E AssertionError: assert False
E + where False = <function test_transformer_with_masked_language_modeling_check_eval_masked.<locals>._metrics_almost_equal at 0x7f188ca010d0>({'loss': 10.841493606567383, 'ndcg_at_5000': 0.014380228705704212, 'recall_at_5000': 0.15625, 'regularization_loss': 0.0}, {'loss': 10.841493606567383, 'ndcg_at_5000': 0.014380043372511864, 'recall_at_5000': 0.15625, 'regularization_loss': 0.0})

tests/unit/tf/transformers/test_block.py:302: AssertionError
----------------------------- Captured stdout call -----------------------------

1/1 [==============================] - 1s 773ms/step - loss: 4.0676 - recall_at_5000: 0.0625 - ndcg_at_5000: 0.0052 - regularization_loss: 0.0000e+00

1/1 [==============================] - 1s 651ms/step - loss: 10.8415 - recall_at_5000: 0.1562 - ndcg_at_5000: 0.0144 - regularization_loss: 0.0000e+00

1/1 [==============================] - 0s 301ms/step - loss: 10.8415 - recall_at_5000: 0.1562 - ndcg_at_5000: 0.0144 - regularization_loss: 0.0000e+00
----------------------------- Captured stderr call -----------------------------
WARNING:tensorflow:Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a `loss` argument?
WARNING:tensorflow:Skipping full serialization of Keras layer TFSharedEmbeddings(
(_feature_shapes): Dict(
(item_id_seq): TensorShape([8, None])
(categories): TensorShape([8, None])
(test_user_id): TensorShape([8, 1])
(user_country): TensorShape([8, 1])
(item_age_days_norm): TensorShape([8, None])
(event_hour_sin): TensorShape([8, None])
(event_hour_cos): TensorShape([8, None])
(event_weekday_sin): TensorShape([8, None])
(event_weekday_cos): TensorShape([8, None])
(user_age): TensorShape([8, 1])
)
(_feature_dtypes): Dict(
(item_id_seq): tf.int64
(categories): tf.int64
(test_user_id): tf.int64
(user_country): tf.int64
(item_age_days_norm): tf.float32
(event_hour_sin): tf.float32
(event_hour_cos): tf.float32
(event_weekday_sin): tf.float32
(event_weekday_cos): tf.float32
(user_age): tf.float32
)
), because it is not built.
WARNING:tensorflow:Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a `loss` argument?
------------------------------ Captured log call -------------------------------
WARNING tensorflow:utils.py:76 Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a `loss` argument?
WARNING tensorflow:save_impl.py:71 Skipping full serialization of Keras layer TFSharedEmbeddings(
(_feature_shapes): Dict(
(item_id_seq): TensorShape([8, None])
(categories): TensorShape([8, None])
(test_user_id): TensorShape([8, 1])
(user_country): TensorShape([8, 1])
(item_age_days_norm): TensorShape([8, None])
(event_hour_sin): TensorShape([8, None])
(event_hour_cos): TensorShape([8, None])
(event_weekday_sin): TensorShape([8, None])
(event_weekday_cos): TensorShape([8, None])
(user_age): TensorShape([8, 1])
)
(_feature_dtypes): Dict(
(item_id_seq): tf.int64
(categories): tf.int64
(test_user_id): tf.int64
(user_country): tf.int64
(item_age_days_norm): tf.float32
(event_hour_sin): tf.float32
(event_hour_cos): tf.float32
(event_weekday_sin): tf.float32
(event_weekday_cos): tf.float32
(user_age): tf.float32
)
), because it is not built.
WARNING absl:save.py:233 Found untraced functions such as model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, sequence_mask_random_layer_call_fn, sequence_mask_random_layer_call_and_return_conditional_losses, list_to_ragged_layer_call_fn while saving (showing 5 of 90). These functions will not be directly callable after loading.
WARNING tensorflow:utils.py:76 Gradients do not exist for variables ['model/embeddings:0', 'model/embeddings:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a `loss` argument?
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 5 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 26 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 15 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 7 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 26 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 9 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:943: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/__autograph_generated_fileuoblx0dn.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_base.py::test_retrieval_model_query
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 1 failed, 757 passed, 12 skipped, 1199 warnings in 1518.09s (0:25:18) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins8100820103706938293.sh

@gabrielspmoreira changed the title from "Masked Language Modeling as a transform for data loader and pre of TransformerBlock" to "Introduce Masked Language Modeling for Transformers" on Oct 10, 2022
@gabrielspmoreira changed the title from "Introduce Masked Language Modeling for Transformers" to "Introduces Masked Language Modeling for Transformers" on Oct 10, 2022
@marcromeyn
Contributor

rerun tests

1 similar comment
@marcromeyn
Contributor

rerun tests

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit 2c9a1547a439af393f6d63e22485ed21cb6adfd5, no merge conflicts.
Running as SYSTEM
Setting status of 2c9a1547a439af393f6d63e22485ed21cb6adfd5 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1503/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse 2c9a1547a439af393f6d63e22485ed21cb6adfd5^{commit} # timeout=10
Checking out Revision 2c9a1547a439af393f6d63e22485ed21cb6adfd5 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 2c9a1547a439af393f6d63e22485ed21cb6adfd5 # timeout=10
Commit message: "Merge branch 'main' into mlm_alt"
 > git rev-list --no-walk 27d165c092850efdb2a4b3658cc0b0016cf6c9f6 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins13617598935065921475.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 770 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 30%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 31%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%]
tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 47%]
tests/unit/tf/models/test_base.py s....................... [ 50%]
tests/unit/tf/models/test_benchmark.py .. [ 50%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 60%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 62%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%]
tests/unit/tf/transformers/test_block.py .................... [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 70%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 74%]
....................s...... [ 78%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%]
tests/unit/tf/transforms/test_noise.py ..... [ 80%]
tests/unit/tf/transforms/test_sequence.py .................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 5 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 26 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 15 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 7 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 26 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 9 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:943: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_filev6e8gz1j.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_base.py::test_retrieval_model_query
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========= 758 passed, 12 skipped, 1199 warnings in 1476.97s (0:24:36) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins9861572133881785282.sh

@marcromeyn merged commit eee1b6b into main on Oct 10, 2022
@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #780 of commit 2c9a1547a439af393f6d63e22485ed21cb6adfd5, no merge conflicts.
Running as SYSTEM
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/780/*:refs/remotes/origin/pr/780/* # timeout=10
 > git rev-parse 2c9a1547a439af393f6d63e22485ed21cb6adfd5^{commit} # timeout=10
Checking out Revision 2c9a1547a439af393f6d63e22485ed21cb6adfd5 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 2c9a1547a439af393f6d63e22485ed21cb6adfd5 # timeout=10
Commit message: "Merge branch 'main' into mlm_alt"
 > git rev-list --no-walk 167bc9e88411ec0c0114fb34a04e015d2c2756d1 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins5394779319908623871.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 770 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 14%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%]
..................... [ 21%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 23%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py ... [ 28%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 29%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 30%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 30%]
[ 30%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 31%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
...... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%]
tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 47%]
tests/unit/tf/models/test_base.py s....................... [ 50%]
tests/unit/tf/models/test_benchmark.py .. [ 50%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ..... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 60%]
tests/unit/tf/outputs/test_contrastive.py ........... [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 62%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 62%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%]
tests/unit/tf/transformers/test_block.py .................... [ 69%]
tests/unit/tf/transformers/test_transforms.py ...... [ 70%]
tests/unit/tf/transforms/test_bias.py .. [ 70%]
tests/unit/tf/transforms/test_features.py s............................. [ 74%]
....................s...... [ 78%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%]
tests/unit/tf/transforms/test_noise.py ..... [ 80%]
tests/unit/tf/transforms/test_sequence.py .................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 83%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 5 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 26 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transformers/test_block.py: 15 warnings
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 7 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 26 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 15 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transformers/test_block.py: 9 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_sequence.py: 15 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/tf/utils/test_dataset.py: 2 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_loader.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 11 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:943: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_file2yjub6ts.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
elif isinstance(self.feature_names, collections.Sequence):
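
For reference, a minimal sketch of the import-path change this warning points at (the helper function is an assumed illustration, not the actual tabular.py code):

import collections.abc

def is_sequence(feature_names):
    # collections.Sequence was removed in Python 3.10; collections.abc.Sequence is the supported import path
    return isinstance(feature_names, collections.abc.Sequence)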

tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 26 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
/var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)
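
For reference, a minimal sketch of the replacement call the warning suggests; the shape and probability values are illustrative only:

import tensorflow as tf

# tf.keras.backend.random_bernoulli replaces the deprecated random_binomial
noise_mask = tf.keras.backend.random_bernoulli(shape=(4, 10), p=0.3)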

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)
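
For reference, a minimal sketch of the argument rename the warning suggests; the learning-rate value is illustrative only:

import tensorflow as tf

# Pass learning_rate instead of the deprecated lr alias
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)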

tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_base.py::test_retrieval_model_query
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False]
tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}
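
For reference, a minimal sketch of the conversion the warning recommends; the data dict here is an assumed placeholder, not the fixture from _conftest.py:

import numpy as np
import torch

# Stack the list of ndarrays into a single array before building the tensor
data = {"feature": [np.zeros(3), np.ones(3)]}  # assumed placeholder
tensors = {key: torch.tensor(np.asarray(value)) for key, value in data.items()}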

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========= 758 passed, 12 skipped, 1199 warnings in 1477.14s (0:24:37) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins16882222535428127404.sh

Labels
area/session-based, enhancement (New feature or request)
Development
Successfully merging this pull request may close these issues:
[TASK] Implement PredictMasked (BERT-like masking)
3 participants