Commit 1d45d90

[tests] remove TF tests (uses of require_tf) (#38944)
* remove uses of require_tf
* remove redundant import guards
* this class has no tests
* nits
* del tf rng comment
1 parent d37f751 commit 1d45d90
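
The "redundant import guards" bullet refers to conditional TensorFlow imports that only existed for these tests. A minimal sketch of the pattern (assuming the real `is_tf_available` helper from `transformers.utils`; the guarded body is illustrative):

```python
from transformers.utils import is_tf_available

# Import TensorFlow only when it is actually installed. With the TF tests
# removed, guards like this have no remaining callers, hence "redundant".
if is_tf_available():
    import tensorflow as tf
```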

44 files changed, +21 / -2504 lines changed


docs/source/de/testing.md

Lines changed: 0 additions & 10 deletions
````diff
@@ -473,13 +473,6 @@ For example, here is a test that must be run only when there are 2 or more GPUs:
 def test_example_with_multi_gpu():
 ```
 
-If a test requires `tensorflow`, use the `require_tf` decorator. For example:
-
-```python no-style
-@require_tf
-def test_tf_thing_with_tensorflow():
-```
-
 These decorators can be stacked. For example, if a test is slow and requires at least one GPU under pytorch, here is
 how to set it up:
 
@@ -1204,9 +1197,6 @@ if torch.cuda.is_available():
 import numpy as np
 
 np.random.seed(seed)
-
-# tf RNG
-tf.random.set_seed(seed)
 ```
 
 ### Debugging tests
````
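
Both hunks keep the surrounding docs intact, including the context line about stacking decorators. A minimal sketch of that stacking (assuming the real `slow` and `require_torch_gpu` decorators from `transformers.testing_utils`; the test name and body are illustrative):

```python
from transformers.testing_utils import require_torch_gpu, slow


@require_torch_gpu
@slow
def test_example_slow_on_gpu():
    # Runs only when RUN_SLOW=1 is set and torch can see at least one GPU.
    ...
```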

docs/source/en/testing.md

Lines changed: 0 additions & 12 deletions
````diff
@@ -474,13 +474,6 @@ For example, here is a test that must be run only when there are 2 or more GPUs
 def test_example_with_multi_gpu():
 ```
 
-If a test requires `tensorflow` use the `require_tf` decorator. For example:
-
-```python no-style
-@require_tf
-def test_tf_thing_with_tensorflow():
-```
-
 These decorators can be stacked. For example, if a test is slow and requires at least one GPU under pytorch, here is
 how to set it up:
 
@@ -1226,11 +1219,6 @@ if torch.cuda.is_available():
 import numpy as np
 
 np.random.seed(seed)
-
-# tf RNG
-import tensorflow as tf
-
-tf.random.set_seed(seed)
 ```
 
 ### Debugging tests
````
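
With the TF lines gone, the seeding snippet in the docs covers only the remaining frameworks. A self-contained sketch of what is left, reconstructed from the context lines above (the surrounding snippet in testing.md may differ slightly):

```python
seed = 42

# python RNG
import random

random.seed(seed)

# pytorch RNGs
import torch

torch.manual_seed(seed)
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)

# numpy RNG
import numpy as np

np.random.seed(seed)
```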

docs/source/ja/testing.md

Lines changed: 0 additions & 10 deletions
````diff
@@ -445,13 +445,6 @@ CUDA_VISIBLE_DEVICES="1" pytest tests/utils/test_logging.py
 def test_example_with_multi_gpu():
 ```
 
-If a test requires `tensorflow`, use the `require_tf` decorator. For example:
-
-```python no-style
-@require_tf
-def test_tf_thing_with_tensorflow():
-```
-
 These decorators can be stacked. For example, if a test is slow and requires at least one GPU under pytorch, here is
 how to set it up:
 
@@ -1135,9 +1128,6 @@ if torch.cuda.is_available():
 import numpy as np
 
 np.random.seed(seed)
-
-# tf RNG
-tf.random.set_seed(seed)
 ```
 
 
````
docs/source/ko/testing.md

Lines changed: 0 additions & 7 deletions
````diff
@@ -473,13 +473,6 @@ The GPU requirements are summarized in the table below:
 def test_example_with_multi_gpu():
 ```
 
-If a test requires `tensorflow`, use the `require_tf` decorator. For example:
-
-```python no-style
-@require_tf
-def test_tf_thing_with_tensorflow():
-```
-
 These decorators can be stacked.
 For example, if a test is slow and requires at least one GPU under pytorch, you can set it up as follows:
 
````

src/transformers/testing_utils.py

Lines changed: 3 additions & 0 deletions
````diff
@@ -705,6 +705,9 @@ def require_tf(test_case):
     """
     Decorator marking a test that requires TensorFlow. These tests are skipped when TensorFlow isn't installed.
     """
+    logger.warning_once(
+        "TensorFlow test-related code, including `require_tf`, is deprecated and will be removed in Transformers v4.55"
+    )
     return unittest.skipUnless(is_tf_available(), "test requires TensorFlow")(test_case)
````

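With this change, applying the decorator emits the deprecation warning once per process, and the decorated test is still skipped when TensorFlow is missing. A small usage sketch (the test class and method names are illustrative):

```python
import unittest

from transformers.testing_utils import require_tf


class ExampleTest(unittest.TestCase):
    # Applying `require_tf` logs the deprecation warning once; the test is
    # still skipped with "test requires TensorFlow" when TF is not installed.
    @require_tf
    def test_needs_tensorflow(self):
        import tensorflow as tf

        self.assertEqual(int(tf.constant(1)), 1)


if __name__ == "__main__":
    unittest.main()
```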
tests/models/bert/test_tokenization_bert_tf.py

Lines changed: 0 additions & 106 deletions
This file was deleted.

tests/models/gpt2/test_tokenization_gpt2_tf.py

Lines changed: 0 additions & 131 deletions
This file was deleted.

tests/models/layoutlmv3/test_tokenization_layoutlmv3.py

Lines changed: 0 additions & 37 deletions
````diff
@@ -34,7 +34,6 @@
 from transformers.models.layoutlmv3.tokenization_layoutlmv3 import VOCAB_FILES_NAMES, LayoutLMv3Tokenizer
 from transformers.testing_utils import (
     require_pandas,
-    require_tf,
     require_tokenizers,
     require_torch,
     slow,
@@ -2306,42 +2305,6 @@ def test_layoutlmv3_integration_test(self):
     def test_np_encode_plus_sent_to_model(self):
         pass
 
-    @require_tf
-    @slow
-    def test_tf_encode_plus_sent_to_model(self):
-        from transformers import TF_MODEL_MAPPING, TOKENIZER_MAPPING
-
-        MODEL_TOKENIZER_MAPPING = merge_model_tokenizer_mappings(TF_MODEL_MAPPING, TOKENIZER_MAPPING)
-
-        tokenizers = self.get_tokenizers(do_lower_case=False)
-        for tokenizer in tokenizers:
-            with self.subTest(f"{tokenizer.__class__.__name__}"):
-                if tokenizer.__class__ not in MODEL_TOKENIZER_MAPPING:
-                    self.skipTest(f"{tokenizer.__class__} is not in the MODEL_TOKENIZER_MAPPING")
-
-                config_class, model_class = MODEL_TOKENIZER_MAPPING[tokenizer.__class__]
-                config = config_class()
-
-                if config.is_encoder_decoder or config.pad_token_id is None:
-                    self.skipTest(reason="Model is an encoder-decoder or has no pad token id set.")
-
-                model = model_class(config)
-
-                # Make sure the model contains at least the full vocabulary size in its embedding matrix
-                self.assertGreaterEqual(model.config.vocab_size, len(tokenizer))
-
-                # Build sequence
-                first_ten_tokens = list(tokenizer.get_vocab().keys())[:10]
-                boxes = [[1000, 1000, 1000, 1000] for _ in range(len(first_ten_tokens))]
-                encoded_sequence = tokenizer.encode_plus(first_ten_tokens, boxes=boxes, return_tensors="tf")
-                batch_encoded_sequence = tokenizer.batch_encode_plus(
-                    [first_ten_tokens, first_ten_tokens], boxes=[boxes, boxes], return_tensors="tf"
-                )
-
-                # This should not fail
-                model(encoded_sequence)
-                model(batch_encoded_sequence)
-
     @unittest.skip(reason="Chat is not supported")
     def test_chat_template(self):
         pass
````
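
The deleted test's round trip (encode with bounding boxes, feed the result to a model) survives in its framework-agnostic siblings. A hedged PyTorch sketch of the same check, using the public LayoutLMv3 classes rather than the verbatim shared-suite test:

```python
from transformers import LayoutLMv3Model, LayoutLMv3Tokenizer

tokenizer = LayoutLMv3Tokenizer.from_pretrained("microsoft/layoutlmv3-base")
model = LayoutLMv3Model.from_pretrained("microsoft/layoutlmv3-base")

words = ["hello", "world"]
boxes = [[1000, 1000, 1000, 1000] for _ in words]  # one bounding box per word

# Same encode_plus call as the deleted test, but returning PyTorch tensors.
encoded = tokenizer.encode_plus(words, boxes=boxes, return_tensors="pt")

# Feeding the encoding to the model should not fail.
outputs = model(**encoded)
print(outputs.last_hidden_state.shape)
```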
