10 changes: 0 additions & 10 deletions docs/source/de/testing.md
@@ -473,13 +473,6 @@ For example, here is a test that must be run only when there are 2 or more GPUs
def test_example_with_multi_gpu():
```

If a test requires `tensorflow`, use the `require_tf` decorator. For example:

```python no-style
@require_tf
def test_tf_thing_with_tensorflow():
```

These decorators can be stacked. For example, if a test is slow and requires at least one GPU under pytorch, here is
how to set it up:

@@ -1204,9 +1197,6 @@ if torch.cuda.is_available():
import numpy as np

np.random.seed(seed)

# tf RNG
tf.random.set_seed(seed)
```

### Debugging tests
12 changes: 0 additions & 12 deletions docs/source/en/testing.md
@@ -474,13 +474,6 @@ For example, here is a test that must be run only when there are 2 or more GPUs
def test_example_with_multi_gpu():
```

If a test requires `tensorflow`, use the `require_tf` decorator. For example:

```python no-style
@require_tf
def test_tf_thing_with_tensorflow():
```

These decorators can be stacked. For example, if a test is slow and requires at least one GPU under pytorch, here is
how to set it up:
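
A sketch of the stacked form (assuming the `require_torch_gpu` and `slow` decorators from `transformers.testing_utils`; the decorator names are taken from the library, the test name is illustrative):

```python no-style
@require_torch_gpu
@slow
def test_example_slow_on_gpu():
```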

@@ -1226,11 +1219,6 @@ if torch.cuda.is_available():
import numpy as np

np.random.seed(seed)

# tf RNG
import tensorflow as tf

tf.random.set_seed(seed)
```
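
After this change the snippet seeds only the numpy and torch RNGs. For context, a framework-agnostic helper along these lines covers the remaining RNGs (a sketch; the `set_all_seeds` name is illustrative — the library's own `transformers.set_seed` utility provides similar behavior):

```python
import random

import numpy as np
import torch


def set_all_seeds(seed: int):
    # python RNG
    random.seed(seed)
    # numpy RNG
    np.random.seed(seed)
    # pytorch RNGs, including all CUDA devices when available
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
```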

### Debugging tests
10 changes: 0 additions & 10 deletions docs/source/ja/testing.md
@@ -445,13 +445,6 @@ CUDA_VISIBLE_DEVICES="1" pytest tests/utils/test_logging.py
def test_example_with_multi_gpu():
```

If a test requires `tensorflow`, use the `require_tf` decorator. For example:

```python no-style
@require_tf
def test_tf_thing_with_tensorflow():
```

These decorators can be stacked. For example, if a test is slow and requires at least one GPU under pytorch, here is
how to set it up:

@@ -1135,9 +1128,6 @@ if torch.cuda.is_available():
import numpy as np

np.random.seed(seed)

# tf RNG
tf.random.set_seed(seed)
```


7 changes: 0 additions & 7 deletions docs/source/ko/testing.md
@@ -473,13 +473,6 @@ The GPU requirements are summarized in the table below:
def test_example_with_multi_gpu():
```

If a test requires `tensorflow`, use the `require_tf` decorator. For example:

```python no-style
@require_tf
def test_tf_thing_with_tensorflow():
```

These decorators can be stacked. For example, if a test is slow and requires at least one GPU under pytorch, here is
how to set it up:

3 changes: 3 additions & 0 deletions src/transformers/testing_utils.py
@@ -684,6 +684,9 @@ def require_tf(test_case):
"""
Decorator marking a test that requires TensorFlow. These tests are skipped when TensorFlow isn't installed.
"""
logger.warning_once(
"TensorFlow test-related code, including `require_tf`, is deprecated and will be removed in Transformers v4.55"
)
return unittest.skipUnless(is_tf_available(), "test requires TensorFlow")(test_case)
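
The warning fires once per process via `logger.warning_once`; until removal, a decorated test keeps working. A minimal sketch of the decorator in use (assuming a pytest-style test module; the test name is illustrative):

```python
from transformers.testing_utils import require_tf


@require_tf
def test_tf_thing_with_tensorflow():
    # Runs only when TensorFlow is available (and now emits the
    # deprecation warning once); otherwise unittest skips it.
    ...
```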


106 changes: 0 additions & 106 deletions tests/models/bert/test_tokenization_bert_tf.py

This file was deleted.

131 changes: 0 additions & 131 deletions tests/models/gpt2/test_tokenization_gpt2_tf.py

This file was deleted.

37 changes: 0 additions & 37 deletions tests/models/layoutlmv3/test_tokenization_layoutlmv3.py
@@ -34,7 +34,6 @@
from transformers.models.layoutlmv3.tokenization_layoutlmv3 import VOCAB_FILES_NAMES, LayoutLMv3Tokenizer
from transformers.testing_utils import (
    require_pandas,
    require_tf,
    require_tokenizers,
    require_torch,
    slow,
@@ -2306,42 +2305,6 @@ def test_layoutlmv3_integration_test(self):
    def test_np_encode_plus_sent_to_model(self):
        pass

    @require_tf
    @slow
    def test_tf_encode_plus_sent_to_model(self):
        from transformers import TF_MODEL_MAPPING, TOKENIZER_MAPPING

        MODEL_TOKENIZER_MAPPING = merge_model_tokenizer_mappings(TF_MODEL_MAPPING, TOKENIZER_MAPPING)

        tokenizers = self.get_tokenizers(do_lower_case=False)
        for tokenizer in tokenizers:
            with self.subTest(f"{tokenizer.__class__.__name__}"):
                if tokenizer.__class__ not in MODEL_TOKENIZER_MAPPING:
                    self.skipTest(f"{tokenizer.__class__} is not in the MODEL_TOKENIZER_MAPPING")

                config_class, model_class = MODEL_TOKENIZER_MAPPING[tokenizer.__class__]
                config = config_class()

                if config.is_encoder_decoder or config.pad_token_id is None:
                    self.skipTest(reason="Model is an encoder-decoder or has no pad token id set.")

                model = model_class(config)

                # Make sure the model contains at least the full vocabulary size in its embedding matrix
                self.assertGreaterEqual(model.config.vocab_size, len(tokenizer))

                # Build sequence
                first_ten_tokens = list(tokenizer.get_vocab().keys())[:10]
                boxes = [[1000, 1000, 1000, 1000] for _ in range(len(first_ten_tokens))]
                encoded_sequence = tokenizer.encode_plus(first_ten_tokens, boxes=boxes, return_tensors="tf")
                batch_encoded_sequence = tokenizer.batch_encode_plus(
                    [first_ten_tokens, first_ten_tokens], boxes=[boxes, boxes], return_tensors="tf"
                )

                # This should not fail
                model(encoded_sequence)
                model(batch_encoded_sequence)

    @unittest.skip(reason="Chat is not supported")
    def test_chat_template(self):
        pass