let tracincp aggregate influence #1088
Conversation
This pull request was exported from Phabricator. Differential Revision: D41830245
Summary: Pull Request resolved: meta-pytorch#1088

This diff adds an "aggregate" option to `TracInCP.influence`. The "aggregate" influence score of a training example on a test dataset is the sum of its influence on every example in the test dataset.

- When `aggregate` is True, `influence` in influence score mode returns a 2D tensor of shape (1, training dataset size) containing the aggregate influence scores of all training examples.
- When `aggregate` is True, `influence` in k-most-influential mode returns a 2D tensor of shape (1, k) of proponents (or opponents), and a 2D tensor of the same shape containing the corresponding aggregate influence scores.

This option is added only for `TracInCP`, because for it, aggregate influence can be computed more quickly than by naively computing the influence score of every training example on every test example and then summing across test examples. In particular, we can first sum the jacobians across all test examples, and then take the dot product of that sum with the jacobians of the training examples (all of this is done per checkpoint). Because computing aggregate influence scores is efficient even when the test dataset is large, we now also allow `inputs` for `influence` to be a dataloader, so that the test dataset does not need to fit in memory. One use case of aggregate influence is computing the influence of a training example on some validation metric, e.g. a fairness metric.

We add the following tests:
- In the newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence` tests that calling `influence` with `aggregate=True` gives the same result as calling it with `aggregate=False` and then summing.
- In the newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence_api` tests that calling `influence` with `aggregate=True` on a DataLoader of batches gives the same result as when the batches are collated into a single batch.
- In `test_tracin_k_most_influential`, we modify the test to allow `aggregate` to be True, which tests that the proponents computed by `influence` with the memory-saving approach are the same as the proponents computed by calculating all aggregate influence scores and then sorting (which is not memory efficient).

Reviewed By: cyrjano

Differential Revision: D41830245

fbshipit-source-id: c39dda0a1ecfb427f81b68cb1b40b56d713f62a7
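The efficiency argument above rests on linearity of the dot product: summing per-example influences is the same as dotting with the pre-summed test jacobians. A minimal numpy sketch (illustrative only, not Captum's actual implementation; shapes and the single-checkpoint setting are assumptions) shows the equivalence, and how top-k proponents can be recovered by sorting the aggregate scores:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_params = 5, 3, 4

# Per-example jacobians at a single checkpoint (rows = examples).
train_jac = rng.standard_normal((n_train, n_params))
test_jac = rng.standard_normal((n_test, n_params))

# Naive approach: influence of each training example on each test
# example (pairwise dot products), then sum across test examples.
pairwise = train_jac @ test_jac.T             # shape (n_train, n_test)
naive_aggregate = pairwise.sum(axis=1)        # shape (n_train,)

# Efficient approach: sum the test jacobians first, then take one
# dot product per training example. By linearity, this is identical.
summed_test_jac = test_jac.sum(axis=0)        # shape (n_params,)
fast_aggregate = train_jac @ summed_test_jac  # shape (n_train,)

assert np.allclose(naive_aggregate, fast_aggregate)

# Top-k proponents: training examples with the largest aggregate scores.
k = 2
proponents = np.argsort(-fast_aggregate)[:k]
print(proponents)
```

The naive route materializes an (n_train, n_test) matrix, while the efficient route only ever holds one summed jacobian, which is what makes a dataloader over a large test set feasible.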
c259d74 to
ba2e0eb
Compare
|
This pull request was exported from Phabricator. Differential Revision: D41830245 |
Summary: Pull Request resolved: meta-pytorch#1088 This diff adds an "aggregate" option to `TracInCP.influence`. The "aggregate" influence score of a training example on a test dataset is the sum of the influence of the training example on all examples in the test dataset. When `aggregate` is True, `influence` in influence score mode returns a 2D tensor of shape (1, training dataset size) containing aggregate influence scores of all training examples. When `aggregate` is True, `influence` in k most influential mode returns a 2D tensor of shape (1, k) of proponents (or opponents), and a 2D tensor containing the corresponding aggregate influence scores, of the same shape. This option is only added for `TracInCP`, because for it, aggregate influence can be computed more quickly than naively computing the influence score of all training examples on all test examples, and then summing across test examples. In particular, we can first sum the jacobians across all test examples, and then take the dot-product of the sum with the jacobians of training examples. (all this is done across checkpoints). Since computing aggregate influence scores is efficient, even if the test dataset is large, we now allow `inputs` for `influence` to be a dataloader, so that it does not need to fit in memory. One use case of aggregate influence is to compute the influence of a training example on some validation metric, i.e. fairness metric. We add the following tests: - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence` tests that calling `influence` with `aggregate=True`does give the same result as calling it with `aggregate=False`, and then summing. - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence_api` tests that the result of calling `influence` when `aggregate` is true for a DataLoader of batches is the same as when the batches are collated into a single batch. 
- in `test_tracin_k_most_influential`, we modify the test to allow `aggregate` to be true, which tests that the proponents computed with the memory saving approach by `influence` are the same proponents computed via calculating all aggregate influence scores, and then sorting (not memory efficient).ar Reviewed By: cyrjano Differential Revision: D41830245 fbshipit-source-id: 4a95233355f95da2547a2abd8cb7373065cc9707
ba2e0eb to
c46b9c0
Compare
|
This pull request was exported from Phabricator. Differential Revision: D41830245 |
Summary: Pull Request resolved: meta-pytorch#1088 This diff adds an "aggregate" option to `TracInCP.influence`. The "aggregate" influence score of a training example on a test dataset is the sum of the influence of the training example on all examples in the test dataset. When `aggregate` is True, `influence` in influence score mode returns a 2D tensor of shape (1, training dataset size) containing aggregate influence scores of all training examples. When `aggregate` is True, `influence` in k most influential mode returns a 2D tensor of shape (1, k) of proponents (or opponents), and a 2D tensor containing the corresponding aggregate influence scores, of the same shape. This option is only added for `TracInCP`, because for it, aggregate influence can be computed more quickly than naively computing the influence score of all training examples on all test examples, and then summing across test examples. In particular, we can first sum the jacobians across all test examples, and then take the dot-product of the sum with the jacobians of training examples. (all this is done across checkpoints). Since computing aggregate influence scores is efficient, even if the test dataset is large, we now allow `inputs` for `influence` to be a dataloader, so that it does not need to fit in memory. One use case of aggregate influence is to compute the influence of a training example on some validation metric, i.e. fairness metric. We add the following tests: - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence` tests that calling `influence` with `aggregate=True`does give the same result as calling it with `aggregate=False`, and then summing. - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence_api` tests that the result of calling `influence` when `aggregate` is true for a DataLoader of batches is the same as when the batches are collated into a single batch. 
- in `test_tracin_k_most_influential`, we modify the test to allow `aggregate` to be true, which tests that the proponents computed with the memory saving approach by `influence` are the same proponents computed via calculating all aggregate influence scores, and then sorting (not memory efficient).ar Reviewed By: cyrjano Differential Revision: D41830245 fbshipit-source-id: ad971bd623261b8e0de7ff737f158f15b388102b
c46b9c0 to
1a3c57a
Compare
|
This pull request was exported from Phabricator. Differential Revision: D41830245 |
Summary: Pull Request resolved: meta-pytorch#1088 This diff adds an "aggregate" option to `TracInCP.influence`. The "aggregate" influence score of a training example on a test dataset is the sum of the influence of the training example on all examples in the test dataset. When `aggregate` is True, `influence` in influence score mode returns a 2D tensor of shape (1, training dataset size) containing aggregate influence scores of all training examples. When `aggregate` is True, `influence` in k most influential mode returns a 2D tensor of shape (1, k) of proponents (or opponents), and a 2D tensor containing the corresponding aggregate influence scores, of the same shape. This option is only added for `TracInCP`, because for it, aggregate influence can be computed more quickly than naively computing the influence score of all training examples on all test examples, and then summing across test examples. In particular, we can first sum the jacobians across all test examples, and then take the dot-product of the sum with the jacobians of training examples. (all this is done across checkpoints). Since computing aggregate influence scores is efficient, even if the test dataset is large, we now allow `inputs` for `influence` to be a dataloader, so that it does not need to fit in memory. One use case of aggregate influence is to compute the influence of a training example on some validation metric, i.e. fairness metric. We add the following tests: - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence` tests that calling `influence` with `aggregate=True`does give the same result as calling it with `aggregate=False`, and then summing. - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence_api` tests that the result of calling `influence` when `aggregate` is true for a DataLoader of batches is the same as when the batches are collated into a single batch. 
- in `test_tracin_k_most_influential`, we modify the test to allow `aggregate` to be true, which tests that the proponents computed with the memory saving approach by `influence` are the same proponents computed via calculating all aggregate influence scores, and then sorting (not memory efficient).ar Reviewed By: cyrjano Differential Revision: D41830245 fbshipit-source-id: b2d61bee43b1562ae70d26812ba44fd08b0bf4a0
|
This pull request was exported from Phabricator. Differential Revision: D41830245 |
1a3c57a to
7190871
Compare
|
This pull request was exported from Phabricator. Differential Revision: D41830245 |
Summary: Pull Request resolved: meta-pytorch#1088 This diff adds an "aggregate" option to `TracInCP.influence`. The "aggregate" influence score of a training example on a test dataset is the sum of the influence of the training example on all examples in the test dataset. When `aggregate` is True, `influence` in influence score mode returns a 2D tensor of shape (1, training dataset size) containing aggregate influence scores of all training examples. When `aggregate` is True, `influence` in k most influential mode returns a 2D tensor of shape (1, k) of proponents (or opponents), and a 2D tensor containing the corresponding aggregate influence scores, of the same shape. This option is only added for `TracInCP`, because for it, aggregate influence can be computed more quickly than naively computing the influence score of all training examples on all test examples, and then summing across test examples. In particular, we can first sum the jacobians across all test examples, and then take the dot-product of the sum with the jacobians of training examples. (all this is done across checkpoints). Since computing aggregate influence scores is efficient, even if the test dataset is large, we now allow `inputs` for `influence` to be a dataloader, so that it does not need to fit in memory. One use case of aggregate influence is to compute the influence of a training example on some validation metric, i.e. fairness metric. We add the following tests: - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence` tests that calling `influence` with `aggregate=True`does give the same result as calling it with `aggregate=False`, and then summing. - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence_api` tests that the result of calling `influence` when `aggregate` is true for a DataLoader of batches is the same as when the batches are collated into a single batch. 
- in `test_tracin_k_most_influential`, we modify the test to allow `aggregate` to be true, which tests that the proponents computed with the memory saving approach by `influence` are the same proponents computed via calculating all aggregate influence scores, and then sorting (not memory efficient).ar Reviewed By: cyrjano Differential Revision: D41830245 fbshipit-source-id: 133b00d19b2a3890ae025f22f7fb814bf0a39e4f
7190871 to
f47a4a5
Compare
Summary: Pull Request resolved: meta-pytorch#1088 This diff adds an "aggregate" option to `TracInCP.influence`. The "aggregate" influence score of a training example on a test dataset is the sum of the influence of the training example on all examples in the test dataset. When `aggregate` is True, `influence` in influence score mode returns a 2D tensor of shape (1, training dataset size) containing aggregate influence scores of all training examples. When `aggregate` is True, `influence` in k most influential mode returns a 2D tensor of shape (1, k) of proponents (or opponents), and a 2D tensor containing the corresponding aggregate influence scores, of the same shape. This option is only added for `TracInCP`, because for it, aggregate influence can be computed more quickly than naively computing the influence score of all training examples on all test examples, and then summing across test examples. In particular, we can first sum the jacobians across all test examples, and then take the dot-product of the sum with the jacobians of training examples. (all this is done across checkpoints). Since computing aggregate influence scores is efficient, even if the test dataset is large, we now allow `inputs` for `influence` to be a dataloader, so that it does not need to fit in memory. One use case of aggregate influence is to compute the influence of a training example on some validation metric, i.e. fairness metric. We add the following tests: - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence` tests that calling `influence` with `aggregate=True`does give the same result as calling it with `aggregate=False`, and then summing. - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence_api` tests that the result of calling `influence` when `aggregate` is true for a DataLoader of batches is the same as when the batches are collated into a single batch. 
- in `test_tracin_k_most_influential`, we modify the test to allow `aggregate` to be true, which tests that the proponents computed with the memory saving approach by `influence` are the same proponents computed via calculating all aggregate influence scores, and then sorting (not memory efficient).ar Reviewed By: cyrjano Differential Revision: D41830245 fbshipit-source-id: 521197e754dd6af7524288b9e9ab88b6201c2fb7
f47a4a5 to
76c77c3
Compare
|
This pull request was exported from Phabricator. Differential Revision: D41830245 |
76c77c3 to
5a7f19a
Compare
Summary: This diff adds an "aggregate" option to `TracInCP.influence`. The "aggregate" influence score of a training example on a test dataset is the sum of the influence of the training example on all examples in the test dataset. When `aggregate` is True, `influence` in influence score mode returns a 2D tensor of shape (1, training dataset size) containing aggregate influence scores of all training examples. When `aggregate` is True, `influence` in k most influential mode returns a 2D tensor of shape (1, k) of proponents (or opponents), and a 2D tensor containing the corresponding aggregate influence scores, of the same shape. This option is only added for `TracInCP`, because for it, aggregate influence can be computed more quickly than naively computing the influence score of all training examples on all test examples, and then summing across test examples. In particular, we can first sum the jacobians across all test examples, and then take the dot-product of the sum with the jacobians of training examples. (all this is done across checkpoints). Since computing aggregate influence scores is efficient, even if the test dataset is large, we now allow `inputs` for `influence` to be a dataloader, so that it does not need to fit in memory. One use case of aggregate influence is to compute the influence of a training example on some validation metric, i.e. fairness metric. We add the following tests: - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence` tests that calling `influence` with `aggregate=True`does give the same result as calling it with `aggregate=False`, and then summing. - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence_api` tests that the result of calling `influence` when `aggregate` is true for a DataLoader of batches is the same as when the batches are collated into a single batch. 
- in `test_tracin_k_most_influential`, we modify the test to allow `aggregate` to be true, which tests that the proponents computed with the memory saving approach by `influence` are the same proponents computed via calculating all aggregate influence scores, and then sorting (not memory efficient).ar Reviewed By: cyrjano Differential Revision: D41830245
|
This pull request was exported from Phabricator. Differential Revision: D41830245 |
Summary: This diff adds an "aggregate" option to `TracInCP.influence`. The "aggregate" influence score of a training example on a test dataset is the sum of the influence of the training example on all examples in the test dataset. When `aggregate` is True, `influence` in influence score mode returns a 2D tensor of shape (1, training dataset size) containing aggregate influence scores of all training examples. When `aggregate` is True, `influence` in k most influential mode returns a 2D tensor of shape (1, k) of proponents (or opponents), and a 2D tensor containing the corresponding aggregate influence scores, of the same shape. This option is only added for `TracInCP`, because for it, aggregate influence can be computed more quickly than naively computing the influence score of all training examples on all test examples, and then summing across test examples. In particular, we can first sum the jacobians across all test examples, and then take the dot-product of the sum with the jacobians of training examples. (all this is done across checkpoints). Since computing aggregate influence scores is efficient, even if the test dataset is large, we now allow `inputs` for `influence` to be a dataloader, so that it does not need to fit in memory. One use case of aggregate influence is to compute the influence of a training example on some validation metric, i.e. fairness metric. We add the following tests: - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence` tests that calling `influence` with `aggregate=True`does give the same result as calling it with `aggregate=False`, and then summing. - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence_api` tests that the result of calling `influence` when `aggregate` is true for a DataLoader of batches is the same as when the batches are collated into a single batch. 
- in `test_tracin_k_most_influential`, we modify the test to allow `aggregate` to be true, which tests that the proponents computed with the memory saving approach by `influence` are the same proponents computed via calculating all aggregate influence scores, and then sorting (not memory efficient).ar Reviewed By: cyrjano Differential Revision: D41830245
5a7f19a to
e5f3244
Compare
|
This pull request was exported from Phabricator. Differential Revision: D41830245 |
Summary: Pull Request resolved: meta-pytorch#1088 This diff adds an "aggregate" option to `TracInCP.influence`. The "aggregate" influence score of a training example on a test dataset is the sum of the influence of the training example on all examples in the test dataset. When `aggregate` is True, `influence` in influence score mode returns a 2D tensor of shape (1, training dataset size) containing aggregate influence scores of all training examples. When `aggregate` is True, `influence` in k most influential mode returns a 2D tensor of shape (1, k) of proponents (or opponents), and a 2D tensor containing the corresponding aggregate influence scores, of the same shape. This option is only added for `TracInCP`, because for it, aggregate influence can be computed more quickly than naively computing the influence score of all training examples on all test examples, and then summing across test examples. In particular, we can first sum the jacobians across all test examples, and then take the dot-product of the sum with the jacobians of training examples. (all this is done across checkpoints). Since computing aggregate influence scores is efficient, even if the test dataset is large, we now allow `inputs` for `influence` to be a dataloader, so that it does not need to fit in memory. One use case of aggregate influence is to compute the influence of a training example on some validation metric, i.e. fairness metric. We add the following tests: - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence` tests that calling `influence` with `aggregate=True`does give the same result as calling it with `aggregate=False`, and then summing. - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence_api` tests that the result of calling `influence` when `aggregate` is true for a DataLoader of batches is the same as when the batches are collated into a single batch. 
- in `test_tracin_k_most_influential`, we modify the test to allow `aggregate` to be true, which tests that the proponents computed with the memory saving approach by `influence` are the same proponents computed via calculating all aggregate influence scores, and then sorting (not memory efficient).ar Differential Revision: https://internalfb.com/D41830245 fbshipit-source-id: 5317b9d4e97a4dcf8b864c35913da5c9a77b9378
Summary: Pull Request resolved: meta-pytorch#1088 This diff adds an "aggregate" option to `TracInCP.influence`. The "aggregate" influence score of a training example on a test dataset is the sum of the influence of the training example on all examples in the test dataset. When `aggregate` is True, `influence` in influence score mode returns a 2D tensor of shape (1, training dataset size) containing aggregate influence scores of all training examples. When `aggregate` is True, `influence` in k most influential mode returns a 2D tensor of shape (1, k) of proponents (or opponents), and a 2D tensor containing the corresponding aggregate influence scores, of the same shape. This option is only added for `TracInCP`, because for it, aggregate influence can be computed more quickly than naively computing the influence score of all training examples on all test examples, and then summing across test examples. In particular, we can first sum the jacobians across all test examples, and then take the dot-product of the sum with the jacobians of training examples. (all this is done across checkpoints). Since computing aggregate influence scores is efficient, even if the test dataset is large, we now allow `inputs` for `influence` to be a dataloader, so that it does not need to fit in memory. One use case of aggregate influence is to compute the influence of a training example on some validation metric, i.e. fairness metric. We add the following tests: - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence` tests that calling `influence` with `aggregate=True`does give the same result as calling it with `aggregate=False`, and then summing. - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence_api` tests that the result of calling `influence` when `aggregate` is true for a DataLoader of batches is the same as when the batches are collated into a single batch. 
- in `test_tracin_k_most_influential`, we modify the test to allow `aggregate` to be true, which tests that the proponents computed with the memory saving approach by `influence` are the same proponents computed via calculating all aggregate influence scores, and then sorting (not memory efficient).ar Differential Revision: https://internalfb.com/D41830245 fbshipit-source-id: b130027050ce984218acc30256cd129ef684ed45
Summary: Pull Request resolved: meta-pytorch#1088 This diff adds an "aggregate" option to `TracInCP.influence`. The "aggregate" influence score of a training example on a test dataset is the sum of the influence of the training example on all examples in the test dataset. When `aggregate` is True, `influence` in influence score mode returns a 2D tensor of shape (1, training dataset size) containing aggregate influence scores of all training examples. When `aggregate` is True, `influence` in k most influential mode returns a 2D tensor of shape (1, k) of proponents (or opponents), and a 2D tensor containing the corresponding aggregate influence scores, of the same shape. This option is only added for `TracInCP`, because for it, aggregate influence can be computed more quickly than naively computing the influence score of all training examples on all test examples, and then summing across test examples. In particular, we can first sum the jacobians across all test examples, and then take the dot-product of the sum with the jacobians of training examples. (all this is done across checkpoints). Since computing aggregate influence scores is efficient, even if the test dataset is large, we now allow `inputs` for `influence` to be a dataloader, so that it does not need to fit in memory. One use case of aggregate influence is to compute the influence of a training example on some validation metric, i.e. fairness metric. We add the following tests: - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence` tests that calling `influence` with `aggregate=True`does give the same result as calling it with `aggregate=False`, and then summing. - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence_api` tests that the result of calling `influence` when `aggregate` is true for a DataLoader of batches is the same as when the batches are collated into a single batch. 
- in `test_tracin_k_most_influential`, we modify the test to allow `aggregate` to be true, which tests that the proponents computed with the memory saving approach by `influence` are the same proponents computed via calculating all aggregate influence scores, and then sorting (not memory efficient).ar Differential Revision: https://internalfb.com/D41830245 fbshipit-source-id: 796d1486c5c078236d049f4a73d780dea141f222
|
This pull request was exported from Phabricator. Differential Revision: D41830245 |
Summary: This diff adds an "aggregate" option to `TracInCP.influence`. The "aggregate" influence score of a training example on a test dataset is the sum of the influence of the training example on all examples in the test dataset. When `aggregate` is True, `influence` in influence score mode returns a 2D tensor of shape (1, training dataset size) containing aggregate influence scores of all training examples. When `aggregate` is True, `influence` in k most influential mode returns a 2D tensor of shape (1, k) of proponents (or opponents), and a 2D tensor containing the corresponding aggregate influence scores, of the same shape. This option is only added for `TracInCP`, because for it, aggregate influence can be computed more quickly than naively computing the influence score of all training examples on all test examples, and then summing across test examples. In particular, we can first sum the jacobians across all test examples, and then take the dot-product of the sum with the jacobians of training examples. (all this is done across checkpoints). Since computing aggregate influence scores is efficient, even if the test dataset is large, we now allow `inputs` for `influence` to be a dataloader, so that it does not need to fit in memory. One use case of aggregate influence is to compute the influence of a training example on some validation metric, i.e. fairness metric. We add the following tests: - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence` tests that calling `influence` with `aggregate=True`does give the same result as calling it with `aggregate=False`, and then summing. - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence_api` tests that the result of calling `influence` when `aggregate` is true for a DataLoader of batches is the same as when the batches are collated into a single batch. 
- in `test_tracin_k_most_influential`, we modify the test to allow `aggregate` to be true, which tests that the proponents computed with the memory saving approach by `influence` are the same proponents computed via calculating all aggregate influence scores, and then sorting (not memory efficient).ar Reviewed By: cyrjano Differential Revision: D41830245
1855f26 to
abe5a28
Compare
|
This pull request was exported from Phabricator. Differential Revision: D41830245 |
Summary: This diff adds an "aggregate" option to `TracInCP.influence`. The "aggregate" influence score of a training example on a test dataset is the sum of the influence of the training example on all examples in the test dataset. When `aggregate` is True, `influence` in influence score mode returns a 2D tensor of shape (1, training dataset size) containing aggregate influence scores of all training examples. When `aggregate` is True, `influence` in k most influential mode returns a 2D tensor of shape (1, k) of proponents (or opponents), and a 2D tensor containing the corresponding aggregate influence scores, of the same shape. This option is only added for `TracInCP`, because for it, aggregate influence can be computed more quickly than naively computing the influence score of all training examples on all test examples, and then summing across test examples. In particular, we can first sum the jacobians across all test examples, and then take the dot-product of the sum with the jacobians of training examples. (all this is done across checkpoints). Since computing aggregate influence scores is efficient, even if the test dataset is large, we now allow `inputs` for `influence` to be a dataloader, so that it does not need to fit in memory. One use case of aggregate influence is to compute the influence of a training example on some validation metric, i.e. fairness metric. We add the following tests: - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence` tests that calling `influence` with `aggregate=True`does give the same result as calling it with `aggregate=False`, and then summing. - in newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence_api` tests that the result of calling `influence` when `aggregate` is true for a DataLoader of batches is the same as when the batches are collated into a single batch. 
- in `test_tracin_k_most_influential`, we modify the test to allow `aggregate` to be true, which tests that the proponents computed with the memory saving approach by `influence` are the same proponents computed via calculating all aggregate influence scores, and then sorting (not memory efficient).ar Reviewed By: cyrjano Differential Revision: D41830245
abe5a28 to
afa439f
Compare
|
This pull request was exported from Phabricator. Differential Revision: D41830245 |
Summary: This diff adds an "aggregate" option to `TracInCP.influence`. The "aggregate" influence score of a training example on a test dataset is the sum of the influence scores of the training example on all examples in the test dataset. When `aggregate` is True, `influence` in influence score mode returns a 2D tensor of shape (1, training dataset size) containing the aggregate influence scores of all training examples. When `aggregate` is True, `influence` in k most influential mode returns a 2D tensor of shape (1, k) of proponents (or opponents), and a 2D tensor of the same shape containing the corresponding aggregate influence scores.

This option is added only for `TracInCP`, because for it, aggregate influence can be computed more quickly than naively computing the influence score of every training example on every test example and then summing across test examples. In particular, we can first sum the jacobians across all test examples, and then take the dot product of that sum with the jacobians of the training examples (all of this is done across checkpoints).

Since computing aggregate influence scores is efficient even if the test dataset is large, we now allow `inputs` for `influence` to be a dataloader, so that the test dataset does not need to fit in memory.

One use case of aggregate influence is to compute the influence of a training example on some validation metric, e.g. a fairness metric.

We add the following tests:
- in the newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence` tests that calling `influence` with `aggregate=True` gives the same result as calling it with `aggregate=False` and then summing.
- in the newly added `test_tracin_aggregate_influence`, `test_tracin_aggregate_influence_api` tests that the result of calling `influence` when `aggregate` is True for a DataLoader of batches is the same as when the batches are collated into a single batch.
- in `test_tracin_k_most_influential`, we modify the test to allow `aggregate` to be True, which tests that the proponents computed by `influence` with the memory-saving approach are the same as the proponents computed by calculating all aggregate influence scores and then sorting (which is not memory efficient).

Reviewed By: cyrjano

Differential Revision: D41830245
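The efficiency argument in the summary can be checked with a toy sketch: summing the test-example jacobians first and then taking a single matrix-vector product gives the same scores as the naive all-pairs computation followed by a sum. Random matrices stand in for the per-example jacobians here; the names and shapes are illustrative only, not the Captum API.

```python
import numpy as np

rng = np.random.default_rng(0)

n_train, n_test, d = 5, 3, 4                   # toy sizes
train_jac = rng.standard_normal((n_train, d))  # jacobian per training example
test_jac = rng.standard_normal((n_test, d))    # jacobian per test example

# Naive: influence of every training example on every test example,
# then sum over the test dimension. O(n_train * n_test * d) work.
naive = (train_jac @ test_jac.T).sum(axis=1)

# Aggregate: sum the test jacobians once, then a single matrix-vector
# product. O((n_train + n_test) * d) work.
aggregate = train_jac @ test_jac.sum(axis=0)

assert np.allclose(naive, aggregate)
```

By linearity of the dot product, the two computations agree exactly; in `TracInCP` this is done once per checkpoint, which is why the aggregate mode scales to large test datasets.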
afa439f to 1ab6ae4
This pull request was exported from Phabricator. Differential Revision: D41830245
This pull request was exported from Phabricator. Differential Revision: D41830245
31e7983 to 4391e5f
This pull request was exported from Phabricator. Differential Revision: D41830245
1 similar comment
This pull request was exported from Phabricator. Differential Revision: D41830245
4391e5f to 78632ae
This pull request was exported from Phabricator. Differential Revision: D41830245
This pull request has been merged in 006c04c.