Run Torchvision problems with Benchmark[Problem/Runner/Metric]; consolidate PyTorchCNN problems (#2688)

Summary: Pull Request resolved: #2688

Context: Consolidating on common abstractions will make it easier to add functionality to all classes. It will also make the code smaller and easier to navigate. This diff is substantially LOC-negative outside of tests.

This PR:
* Updates the functionality in `torchvision.py` that replaces MNIST datasets with fakes when run in a test environment; the fake datasets are now realistic enough to be usable.
* Merges the functionality of `pytorch_cnn.py` into `torchvision.py`; it was only ever used to support `torchvision.py`.
* Removes `PyTorchCNNTorchvisionBenchmarkProblem` and its special serialization logic, replacing it with `BenchmarkProblem`.
* Removes `PyTorchCNNTorchvisionRunner`, `PyTorchCNNBenchmarkProblem`, `PyTorchCNNMetric`, and `PyTorchCNNRunner`.
* Introduces `PyTorchCNNTorchvisionParamBasedProblem`. It needs no special serialization logic because it is a dataclass whose datasets are constructed in `__post_init__`. When an instance is serialized, the datasets are not serialized; they are reconstructed when the instance is decoded. Using a dataclass here also allows for an automatic and more rigorous equality check.
* Uses `BenchmarkRunner`; as per D61483962, this means that this problem now has a ground truth, which won't change its behavior since it doesn't have noise added.

Reviewed By: Balandat

Differential Revision: D61414680
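The dataclass pattern described above can be sketched roughly as follows. This is a minimal illustration, not the actual Ax implementation: the class and field names (`ParamBasedProblem`, `num_samples`, `train_set`) are hypothetical, and a list of tuples stands in for a real torchvision dataset. The key idea is that the heavy dataset is a plain attribute set in `__post_init__` rather than a dataclass field, so it is excluded from both serialization and the auto-generated `__eq__`, and is rebuilt whenever the instance is decoded.

```python
from dataclasses import dataclass, asdict


@dataclass
class ParamBasedProblem:
    # Only these lightweight config fields are serialized and compared
    # by the auto-generated __eq__. (Names here are illustrative.)
    name: str
    num_samples: int = 100

    def __post_init__(self) -> None:
        # The dataset is a plain attribute, not a dataclass field, so
        # asdict() skips it and __eq__ ignores it. A real implementation
        # would construct a torchvision dataset here instead.
        self.train_set = [(i, i % 10) for i in range(self.num_samples)]


# Round-trip: serialize only the config, then reconstruct.
# __post_init__ rebuilds the dataset from the decoded fields.
encoded = asdict(ParamBasedProblem(name="mnist-like"))
decoded = ParamBasedProblem(**encoded)
```

Because equality depends only on the declared fields, two instances decoded from the same config compare equal even though their datasets were built independently.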