add influence gpu tests not using DataParallel #1185

Closed
wants to merge 1 commit into from

Conversation

99warriors
Contributor

Summary:
Currently, when testing implementations of TracInCPBase, if the model to be tested is on GPU, we always wrap it in DataParallel. However, it is also worth testing the case where the model is on GPU but is not wrapped in DataParallel. Whether the model is on GPU is currently specified by a boolean use_gpu flag. In this diff, we change use_gpu to have type Union[bool, str], with allowable values of False (model on CPU), 'cuda' (model on GPU, not using DataParallel), and 'cuda_data_parallel' (model on GPU, using DataParallel). This remains backwards compatible with classes like ExplicitDataset, which moves data to GPU if use_gpu, because non-empty strings are interpreted as true. In further detail, the changes are as follows:

  • for tests (TestTracInSelfInfluence, TestTracInKMostInfluential) where use_gpu was called with True, now call them with values of 'cuda' and 'cuda_data_parallel' (in addition to False)
  • in those tests, add the 'module' prefix to layer names only when use_gpu='cuda_data_parallel'
  • change get_random_model_and_data, which is where the use_gpu flag is used to create the model and data, to reflect the new logic
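The flag convention described above can be sketched in plain Python. This is a hypothetical illustration, not the actual Captum test helpers; the function names here (resolve_use_gpu, qualify_layer_name) are invented for the example:

```python
from typing import Tuple, Union

# Assumed convention from the summary:
#   False                -> model on CPU
#   'cuda'               -> model on GPU, no DataParallel
#   'cuda_data_parallel' -> model on GPU, wrapped in DataParallel
UseGpuFlag = Union[bool, str]

def resolve_use_gpu(use_gpu: UseGpuFlag) -> Tuple[bool, bool]:
    """Return (on_gpu, wrap_in_data_parallel) for a use_gpu flag."""
    if use_gpu not in (False, "cuda", "cuda_data_parallel"):
        raise ValueError(f"unsupported use_gpu value: {use_gpu!r}")
    # Non-empty strings are truthy, so legacy `if use_gpu:` checks
    # (e.g. in dataset classes that move data to GPU) keep working.
    on_gpu = bool(use_gpu)
    wrap_in_data_parallel = use_gpu == "cuda_data_parallel"
    return on_gpu, wrap_in_data_parallel

def qualify_layer_name(name: str, use_gpu: UseGpuFlag) -> str:
    """DataParallel wraps the model in a `module` attribute, so layer
    names gain a 'module.' prefix only in the data-parallel case."""
    return f"module.{name}" if use_gpu == "cuda_data_parallel" else name
```

For example, `qualify_layer_name("linear1", "cuda")` leaves the name unchanged, while `qualify_layer_name("linear1", "cuda_data_parallel")` yields `"module.linear1"`, matching how DataParallel exposes submodules.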

Reviewed By: NarineK

Differential Revision: D47190429

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D47190429

99warriors pushed a commit to 99warriors/captum that referenced this pull request Nov 14, 2023
99warriors pushed a commit to 99warriors/captum that referenced this pull request Nov 14, 2023
99warriors pushed a commit to 99warriors/captum that referenced this pull request Nov 14, 2023
99warriors pushed a commit to 99warriors/captum that referenced this pull request Nov 14, 2023
99warriors pushed a commit to 99warriors/captum that referenced this pull request Nov 14, 2023
99warriors pushed a commit to 99warriors/captum that referenced this pull request Nov 14, 2023
99warriors pushed a commit to 99warriors/captum that referenced this pull request Nov 14, 2023
99warriors pushed a commit to 99warriors/captum that referenced this pull request Nov 14, 2023
99warriors pushed a commit to 99warriors/captum that referenced this pull request Nov 26, 2023
99warriors pushed a commit to 99warriors/captum that referenced this pull request Nov 26, 2023
99warriors pushed a commit to 99warriors/captum that referenced this pull request Nov 26, 2023
99warriors pushed a commit to 99warriors/captum that referenced this pull request Nov 26, 2023
99warriors pushed a commit to 99warriors/captum that referenced this pull request Nov 27, 2023
99warriors pushed a commit to 99warriors/captum that referenced this pull request Nov 27, 2023
@facebook-github-bot
Contributor

@NarineK has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

99warriors pushed a commit to 99warriors/captum that referenced this pull request Nov 27, 2023
99warriors pushed a commit to 99warriors/captum that referenced this pull request Nov 27, 2023
Summary:
Currently, when testing implementations of `TracInCPBase`, if the model to be tested is on GPU, we always wrap it in `DataParallel`. However, it is also worth testing the case where the model is on GPU but is *not* wrapped in `DataParallel`. Whether the model is on GPU is currently specified by a boolean `use_gpu` flag. In this diff, we change `use_gpu` to have type `Union[bool, str]`, with allowable values of `False` (model on CPU), `'cuda'` (model on GPU, not using `DataParallel`), and `'cuda_data_parallel'` (model on GPU, using `DataParallel`). This remains backwards compatible with classes like `ExplicitDataset`, which moves data to GPU `if use_gpu`, because non-empty strings are interpreted as true. In further detail, the changes are as follows:
- for tests (`TestTracInSelfInfluence`, `TestTracInKMostInfluential`) where `use_gpu` was called with `True`, now call them with values of `'cuda'` and `'cuda_data_parallel'` (in addition to `False`)
- in those tests, add the `'module'` prefix to layer names only when `use_gpu='cuda_data_parallel'`
- change `get_random_model_and_data`, which is where the `use_gpu` flag is used to create the model and data, to reflect the new logic

Reviewed By: vivekmig

Differential Revision: D47190429
@facebook-github-bot
Contributor

This pull request has been merged in 5398892.
