add influence gpu tests not using DataParallel #1185
Conversation
This pull request was exported from Phabricator. Differential Revision: D47190429
Summary: Currently, when testing implementations of `TracInCPBase`, if the model to be tested is on GPU, we always wrap it in `DataParallel`. However, it is also worth testing when the model is on GPU but is *not* wrapped in `DataParallel`. Whether the model is on GPU is currently specified by a boolean `use_gpu` flag. In this diff, we change `use_gpu` to have type `Union[bool, str]`, with allowable values of `False` (model on CPU), `'cuda'` (model on GPU, not using `DataParallel`), and `'cuda_data_parallel'` (model on GPU, using `DataParallel`). This remains backwards compatible with classes like `ExplicitDataset`, which move data to GPU `if use_gpu`, since non-empty strings are truthy. In further detail, the changes are as follows:

- for tests (`TestTracInSelfInfluence`, `TestTracInKMostInfluential`) that previously called `use_gpu` with `True`, now call them with values of `'cuda'` and `'cuda_data_parallel'` (in addition to `False`)
- in those tests, add the `'module'` prefix to layer names only when `use_gpu='cuda_data_parallel'`
- change `get_random_model_and_data`, which is where the `use_gpu` flag is used to create the model and data, to reflect the new logic

Reviewed By: NarineK

Differential Revision: D47190429
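As a rough sketch of the flag semantics described above (not the actual Captum test code; the helper names here are hypothetical), the three-valued `use_gpu` can be interpreted like this:

```python
from typing import Tuple, Union

# use_gpu: False -> model on CPU
#          'cuda' -> model on GPU, no DataParallel
#          'cuda_data_parallel' -> model on GPU, wrapped in DataParallel
UseGpu = Union[bool, str]


def resolve_device_and_wrapping(use_gpu: UseGpu) -> Tuple[str, bool]:
    """Return (device, wrap_in_data_parallel) for a use_gpu value."""
    if use_gpu == "cuda_data_parallel":
        return "cuda", True
    if use_gpu == "cuda":
        return "cuda", False
    if use_gpu is False:
        return "cpu", False
    raise ValueError(f"unsupported use_gpu value: {use_gpu!r}")


def layer_name(name: str, use_gpu: UseGpu) -> str:
    # DataParallel nests the wrapped model under a `module` attribute,
    # so layer names gain a 'module.' prefix only in that case.
    return f"module.{name}" if use_gpu == "cuda_data_parallel" else name


# Backwards compatibility: code that checks `if use_gpu:` still moves
# data to GPU for both string values, because non-empty strings are truthy.
assert bool("cuda") and bool("cuda_data_parallel") and not bool(False)
```

This also shows why the string values are a drop-in replacement for the old `True`: any truthiness test on `use_gpu` behaves the same, while the extra string distinguishes the `DataParallel` case.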
e26b4df to 84ac67c
84ac67c to d97eab6
d97eab6 to 9d8a091
9d8a091 to 53ea863
53ea863 to 9bac757
@NarineK has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
9bac757 to e6d5aa5
e6d5aa5 to f5fff22
Reviewed By: vivekmig
f5fff22 to 1464e5c
1464e5c to 437ae82
This pull request has been merged in 5398892.