test_ops.py implements tests for the C++ operators in torchvision.
Given that torchvision.ops contains dedicated CPU / CUDA implementations, the tests need to check for correctness on both CPU and GPU, verifying that the outputs match. Most ops also implement backward computation (except NMS, which is fine as is), which means we also need to test the gradients.
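For the forward side, a minimal sketch of a CPU / CUDA parity check might look like this (illustrative only, using roi_pool and assuming a torch version with torch.testing.assert_close):

```python
import torch
from torchvision.ops import roi_pool

# Run the same op with the same inputs on both devices and compare.
x = torch.rand(1, 3, 10, 10)
rois = torch.tensor([[0.0, 0.0, 0.0, 9.0, 9.0]])  # (batch_idx, x1, y1, x2, y2)
out_cpu = roi_pool(x, rois, output_size=(5, 5))
out_cuda = roi_pool(x.cuda(), rois.cuda(), output_size=(5, 5))
torch.testing.assert_close(out_cpu, out_cuda.cpu())
```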
In its current form, most tests in test_ops.py share a lot of duplicated code, with generally just a few lines changing between tests. We should see what can be factored out to reduce code duplication and improve maintainability.
We should also stop hard-coding expected values for the gradients / outputs, and instead rely on gradcheck to perform this checking.
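As a sketch, gradcheck verifies the analytical backward against numerical finite differences, so no expected values need to be hard-coded (double precision is used because gradcheck's finite differences are too noisy in float32):

```python
import torch
from torch.autograd import gradcheck
from torchvision.ops import roi_pool

x = torch.rand(1, 3, 10, 10, dtype=torch.double, requires_grad=True)
rois = torch.tensor([[0, 0, 0, 9, 9]], dtype=torch.double)

# gradcheck re-derives the expected gradients numerically at test time.
assert gradcheck(lambda t: roi_pool(t, rois, output_size=(5, 5)), (x,))
```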
Plus, TorchScript support is now handled as an extra check for each op (vision/test/test_ops.py, lines 191 to 195 at f612182):

```python
self.assertTrue(gradcheck(lambda x: script_func(x, rois), (x,)),
                'gradcheck failed for scripted roi_pool')
```

but more consistency here would be better.
Ideally, we would have a simpler set of tests with less redundancy, while keeping the same coverage: contiguous / non-contiguous, CPU / CUDA, forward / backward.
How to approach this? We could have a base test class that defines everything we want to test, and have each operator's test inherit from it; a rough sketch follows.
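One possible shape for this, as a rough sketch (the class names and helpers here are hypothetical, not torchvision's actual test code):

```python
import unittest
import torch
from torch.autograd import gradcheck
from torchvision.ops import roi_pool

class OpTester(unittest.TestCase):
    # Subclasses override fn() with the op under test; in practice the base
    # class should be kept out of test collection (e.g. written as a mixin).
    def fn(self, x):
        raise NotImplementedError

    def _test_forward(self, device, contiguous):
        x = torch.rand(1, 3, 10, 10, device=device)
        if not contiguous:
            x = x.permute(0, 1, 3, 2)  # non-contiguous view of the same data
        out = self.fn(x)
        # ... compare `out` against a reference / the other device here ...

    def _test_backward(self, device):
        # Double precision so gradcheck's finite differences are stable.
        x = torch.rand(1, 3, 10, 10, dtype=torch.double,
                       device=device, requires_grad=True)
        self.assertTrue(gradcheck(self.fn, (x,)))

    def test_forward_cpu(self):
        self._test_forward('cpu', contiguous=True)
        self._test_forward('cpu', contiguous=False)

    def test_backward_cpu(self):
        self._test_backward('cpu')

    @unittest.skipIf(not torch.cuda.is_available(), 'CUDA unavailable')
    def test_backward_cuda(self):
        self._test_backward('cuda')

class RoIPoolTester(OpTester):
    def fn(self, x):
        rois = torch.tensor([[0, 0, 0, 9, 9]], dtype=x.dtype, device=x.device)
        return roi_pool(x, rois, output_size=(5, 5))
```

Each operator then only defines fn(), and the shared contiguous / non-contiguous / CPU / CUDA / forward / backward machinery lives in one place; the TorchScript consistency check could be added to the base class the same way.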
cc @pedrofreire for visibility