
Allow tests in examples to use cuda or fp16, if they are available #5512

Merged
18 commits merged into huggingface:master on Aug 25, 2020

Conversation

Joel-hanson
Contributor

The tests in examples didn't use CUDA or fp16 even if they were available.

  • The text classification example (run_glue.py) didn't use fp16 even when it was available, although the device was chosen based on availability (CUDA/CPU).
  • The language-modeling example (run_language_modeling.py) had a --no_cuda argument, which made the test run without CUDA. fp16 is not enabled for this example because it fails with an assertion error on perplexity (the value comes out too high).
  • CUDA and fp16 are not enabled for the question-answering example (run_squad.py) because they cause a difference in the F1 score.
  • The text-generation example (run_generation.py) will use CUDA or fp16 whenever they are available.

Resolves some of #5057
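To make the approach concrete, here is a minimal sketch of the kind of availability check involved; the helper name and test arguments below are illustrative, not necessarily the exact code in the diff.

    # Illustrative only: opt in to fp16 when both CUDA and apex are present.
    import torch

    try:
        from apex import amp  # noqa: F401
        _apex_available = True
    except ImportError:
        _apex_available = False

    def cuda_and_apex_available():
        # fp16 via apex only makes sense when a GPU is usable as well
        return torch.cuda.is_available() and _apex_available

    testargs = ["run_glue.py", "--do_train", "--do_eval"]
    if cuda_and_apex_available():
        testargs.append("--fp16")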

@Joel-hanson
Contributor Author

Joel-hanson commented Jul 4, 2020

@sshleifer Hope you are doing well. Sorry for the delay; I have created the PR for some of the related issues mentioned in #5057.

  1. They never use Cuda or fp16, even if they are available.

I have some doubts which I encountered while making this PR:


1. As you said, there were some tests which failed when enabling CUDA or fp16:

  • CUDA and fp16 are not enabled for the question-answering example (run_squad.py) because they cause a difference in the F1 score.
  • The language-modeling example (run_language_modeling.py) has an issue when running with fp16.

2. I was not able to find test_hans.py, but I did find a README explaining how to run it. Is this intentional? If not, shall I add a test_hans.py file to run the same?

[Screenshot attached: 2020-07-04 at 11:56:30 AM]

3. This is the list of tests I got:

$ ls -l examples/**/test*.py

examples/bert-loses-patience/test_run_glue_with_pabee.py
examples/seq2seq/bertabs/test_utils_summarization.py
examples/seq2seq/test_seq2seq_examples.py
examples/test_examples.py
examples/token-classification/test_ner_examples.py

I was not able to find test_summarization_examples.py or test_t5_examples.py. I think I am doing something wrong.


4. I have made the PR for some tests only; I can do the same for the others if the current PR satisfies your requirements.

@sshleifer
Contributor

  1. Noted.
  2. test_hans.py would be nice, but can be added in a separate PR.
  3. You are all set, nothing is wrong.
  4. Would love to see the current PR!

Thanks!

@Joel-hanson
Contributor Author

2) test_hans.py would be nice, but can be added in a separate PR.

test_hans.py was created once via PR #2239, but for some reason it was removed.

The last interaction with the file was in PR #4213.

Contributor

@sshleifer sshleifer left a comment

Did you also add multi-GPU support? If so, you need to test it on a machine and show that it works, and it should probably be a separate PR.

@@ -85,6 +86,9 @@ def main():
f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome."
)

if "cuda" not in str(training_args.device) or not is_apex_available():
Contributor

Can we just change the test code? Or is this fixing a real bug?

Contributor Author

@sshleifer I wasn't sure whether this is a good solution, but I added it for the following reasons:

  1. For apex to work we need CUDA. While implementing this I hit a scenario where apex was installed but the GPU was not enabled, so I added both checks. This checks for CUDA first before checking whether apex is available.

  2. I wasn't sure whether training_args.device would give cuda or cuda:0 when there are multiple GPUs, so I added a check for whether training_args.device contains "cuda".

Did you also add multi-GPU support? If so, you need to test it on a machine and show that it works, and it should probably be a separate PR.

I didn't find a way to test this on multiple GPUs; I will try my best to find one. But in examples/text-generation/run_generation.py I added local_rank because I wanted every test file to have consistent arguments. This doesn't add multi-GPU support, and I can remove it since it is confusing.

I am aware this might be a wrong implementation; please correct me if any of the above points are incorrect or if there is a better alternative.
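As a side note on the cuda vs cuda:0 doubt above, a small hedged sketch (an alternative illustration, not the code in the PR): checking the device's .type attribute avoids the substring comparison entirely.

    import torch

    def is_cuda_device(device: torch.device) -> bool:
        # .type is "cuda" for "cuda", "cuda:0", "cuda:1", ... and "cpu" for the CPU device
        return device.type == "cuda"

    print(is_cuda_device(torch.device("cuda:0")))  # True
    print(is_cuda_device(torch.device("cpu")))     # False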

Contributor

Let's check whether CUDA is available in test_examples.py, and if it is, pass the fp16 argument. Otherwise, don't pass it.
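A rough sketch of that suggestion, assuming the usual patch-sys.argv pattern used in examples/test_examples.py; the argument list here is abbreviated and illustrative.

    import sys
    from unittest.mock import patch

    import torch

    testargs = ["run_glue.py", "--do_train", "--do_eval", "--output_dir", "./glue_test_output"]
    if torch.cuda.is_available():
        testargs.append("--fp16")  # only passed when a GPU is present

    with patch.object(sys, "argv", testargs):
        pass  # the test would call run_glue.main() here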

@sshleifer
Contributor

Also, I missed this earlier, but what was the issue with run_language_modeling.py? Can you get a traceback?

@Joel-hanson
Contributor Author

Also, I missed this earlier, but what was the issue with run_language_modeling.py? Can you get a traceback?

This is the short log, and I have attached the full log with this comment.

        with patch.object(sys, "argv", testargs):
            result = run_language_modeling.main()
>           self.assertLess(result["perplexity"], 35)
E           AssertionError: 36.684365356893885 not less than 35

test.log

@Joel-hanson
Contributor Author

@sshleifer Hope you are doing well.

Is this PR OK, or should I split it into multiple smaller PRs to make things easier?

@Joel-hanson Joel-hanson requested a review from sshleifer August 6, 2020 05:44
Contributor

@sshleifer sshleifer left a comment

We should try to avoid changes to run_generation.py and run_glue.py (besides new command-line args) and decide whether command-line arguments are appropriate to send from test_examples.py.

…da and apex is avaliable

- run_glue.py: Removed the check for cuda and fp16.
- run_generation.py: Removed the check for cuda and fp16 also removed unwanted flag creation.
@Joel-hanson
Contributor Author

Thanks for the review @sshleifer. I have updated the code accordingly, and regarding the new command-line args, I have removed the args related to multi-GPU support since we can make that a separate PR.

Any idea why the run_tests_tf and run_tests_torch_and_tf tests are failing? Is it related to my change?

@Joel-hanson Joel-hanson requested a review from sshleifer August 8, 2020 12:38
Contributor

@sshleifer sshleifer left a comment

one comment, thanks for iterating!

args = parser.parse_args()

args.device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu")
args.n_gpu = 0 if args.no_cuda else torch.cuda.device_count()

logger.warning(
Contributor

On line 213 you need to add code to actually use args.fp16:

if args.fp16: model.half()
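For context, a hedged sketch of what using args.fp16 looks like in a run_generation.py-style script; the model choice and the fp16 decision below are illustrative stand-ins for the script's actual arguments.

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    fp16 = device.type == "cuda"  # stand-in for args.fp16

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.to(device)
    if fp16:
        model.half()  # cast weights to fp16; fine for inference-only generation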

Contributor Author

@sshleifer I have made this change in run_generation.py. But I have a doubt: run_glue.py should also have this change, right?
When I tried, I got an error with the following message:
msg = 'Found param distilbert.embeddings.word_embeddings.weight with type torch.cuda.HalfTensor, expected torch.cuda.FloatTe...alize, you do not need to call .half() on your model\nbefore passing it, no matter what optimization level you choose.'
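The error above is apex telling you not to pre-cast the model when amp manages mixed precision itself. A minimal sketch of that pattern, assuming apex is installed and a GPU is present (opt_level "O1" is just an example):

    import torch
    from apex import amp  # requires NVIDIA apex

    model = torch.nn.Linear(10, 2).cuda()  # keep the model in fp32; do NOT call .half()
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    # amp.initialize patches the model/optimizer and handles the fp16 casts itself,
    # which is why a pre-halved model triggers the HalfTensor error quoted above.
    model, optimizer = amp.initialize(model, optimizer, opt_level="O1")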

Contributor

you did the right thing!

@Joel-hanson Joel-hanson requested a review from sshleifer August 11, 2020 18:08
Contributor

@sshleifer sshleifer left a comment

LGTM, thanks for the contribution!

args = parser.parse_args()

args.device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu")
args.n_gpu = 0 if args.no_cuda else torch.cuda.device_count()

logger.warning(
Contributor

you did the right thing!

@sshleifer sshleifer requested a review from LysandreJik August 11, 2020 20:42
Member

@LysandreJik LysandreJik left a comment

Great, thanks for making this change, very useful! I left a few comments below.

Comment on lines 56 to 59
def is_cuda_and_apex_avaliable():
    return torch.cuda.is_available() and is_apex_available()


Member

This should also depend on the USE_CUDA flag, similarly to the rest of the test suite. Even if we decide to set USE_CUDA to True by default, setting USE_CUDA to False should result in the examples not using CUDA, imo.

Member

I think you can do so by adding the following line:

from transformers.testing_utils import torch_device

and updating the statement to

return torch.cuda.is_available() and is_apex_available() and torch_device == "cuda"
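Put together, the helper would look roughly like this (spelling normalized; the is_apex_available import path reflects the transformers layout at the time and is an assumption here):

    import torch
    from transformers.file_utils import is_apex_available
    from transformers.testing_utils import torch_device

    def is_cuda_and_apex_available():
        # respects the USE_CUDA switch via torch_device, like the rest of the test suite
        return torch.cuda.is_available() and is_apex_available() and torch_device == "cuda"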

Contributor Author

Good catch @LysandreJik

@@ -131,7 +141,6 @@ def test_run_language_modeling(self):
--do_train
--do_eval
--num_train_epochs=1
--no_cuda
Member

@LysandreJik LysandreJik Aug 12, 2020

Can update this to respect the USE_CUDA flag as well:

Something like

        testargs = f"""
            run_language_modeling.py
            --model_name_or_path distilroberta-base
            [...]
            --num_train_epochs=1
            {'--no_cuda' if torch_device != 'cuda' else ''}
            """.split()

… examples

- The tests in examples which uses cuda should also depend from the USE_CUDA flag,
  similarly to the rest of the test suite. Even if we decide to set USE_CUDA to
  True by default, setting USE_CUDA to False should result in the examples not using CUDA
@Joel-hanson
Contributor Author

Thanks for the review @LysandreJik, I have updated the PR accordingly.

Member

@LysandreJik LysandreJik left a comment

Cool this is great! Thanks for iterating!



def clean_test_dir(path):
    shutil.rmtree(path, ignore_errors=True)
Member

You should import shutil for this to work (that's why the code-quality check fails).
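A sketch of the fix being requested, simply adding the missing import:

    import shutil

    def clean_test_dir(path):
        shutil.rmtree(path, ignore_errors=True)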

@codecov

codecov bot commented Aug 24, 2020

Codecov Report

Merging #5512 into master will decrease coverage by 0.72%.
The diff coverage is 78.87%.


@@            Coverage Diff             @@
##           master    #5512      +/-   ##
==========================================
- Coverage   80.37%   79.64%   -0.73%     
==========================================
  Files         156      156              
  Lines       28058    28261     +203     
==========================================
- Hits        22552    22509      -43     
- Misses       5506     5752     +246     
Impacted Files Coverage Δ
src/transformers/commands/serving.py 0.00% <0.00%> (ø)
src/transformers/commands/user.py 0.00% <ø> (ø)
src/transformers/configuration_reformer.py 100.00% <ø> (ø)
src/transformers/data/processors/glue.py 48.91% <0.00%> (-0.18%) ⬇️
src/transformers/data/test_generation_utils.py 0.00% <0.00%> (ø)
src/transformers/generation_utils.py 96.94% <ø> (ø)
src/transformers/hf_argparser.py 67.74% <0.00%> (-1.49%) ⬇️
src/transformers/modeling_auto.py 78.73% <ø> (ø)
src/transformers/modeling_bert.py 88.42% <ø> (+0.16%) ⬆️
src/transformers/modeling_electra.py 82.13% <ø> (ø)
... and 58 more

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a573777...51be82f. Read the comment docs.

@LysandreJik LysandreJik merged commit 4db2fa7 into huggingface:master Aug 25, 2020
stas00 added a commit to stas00/transformers that referenced this pull request Sep 13, 2020
For some reason huggingface#5512 re-added temp dir creation code that was removed by
huggingface#6494 defeating the purpose of that PR for those tests.
LysandreJik pushed a commit that referenced this pull request Sep 14, 2020
For some reason #5512 re-added temp dir creation code that was removed by
#6494 defeating the purpose of that PR for those tests.
mfuntowicz pushed a commit that referenced this pull request Sep 18, 2020
For some reason #5512 re-added temp dir creation code that was removed by
#6494 defeating the purpose of that PR for those tests.
Zigur pushed a commit to Zigur/transformers that referenced this pull request Oct 26, 2020
…ggingface#5512)

* Allow tests in examples to use cuda or fp16,if they are available

The tests in examples didn't use the cuda or fp16 even if they where available.
- The text classification example (`run_glue.py`) didn't use the fp16 even if it was available but
  the device was take based on the availablity(cuda/cpu).
- The language-modeling example (`run_language_modeling.py`) was having `--no_cuda` argument
  which made the test to work without cuda. This example is having issue when running with fp16
  thus it not enabled (got an assertion error for perplexity due to it higher value).
- The cuda and fp16 is not enabled for question-answering example (`run_squad.py`) as it is having a
  difference in the f1 score.
- The text-generation example (`run_generation.py`) will take the cuda or fp16 whenever it is available.

Resolves some of: huggingface#5057

* Unwanted import of is_apex_available was removed

* Made changes to test examples file to have the pass --fp16 only if cuda and apex is avaliable
- run_glue.py: Removed the check for cuda and fp16.
- run_generation.py: Removed the check for cuda and fp16 also removed unwanted flag creation.

* Incorrectly sorted imports fixed

* The model needs to be converted to half precision

* Formatted single line if condition statement to multiline

* The torch_device also needed to be checked before running the test on examples
- The tests in examples which uses cuda should also depend from the USE_CUDA flag,
  similarly to the rest of the test suite. Even if we decide to set USE_CUDA to
  True by default, setting USE_CUDA to False should result in the examples not using CUDA

* Format some of the code in test_examples file

* The improper import of is_apex_available was sorted

* Formatted the code to keep the style standards

* The comma at the end of list giving a flake8 issue was fixed

* Import sort was fixed

* Removed the clean_test_dir function as its not used right now