Adds Backbones to FRCNN Take 2 #475
Conversation
Hello @oke-aditya! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2021-01-18 08:49:20 UTC
Codecov Report
@@ Coverage Diff @@
## master #475 +/- ##
==========================================
- Coverage 79.30% 78.96% -0.35%
==========================================
Files 107 112 +5
Lines 6224 6299 +75
==========================================
+ Hits 4936 4974 +38
- Misses 1288 1325 +37
CI test failure unrelated cc @akihironitta. Needs review cc @Borda
@oke-aditya Sorry for the delay! I left some suggestions mainly on formatting. They should be resolved by just clicking in your browser :] Would you mind having a look at them?
parser.add_argument("--data_dir", type=str, default=".")
parser.add_argument("--batch_size", type=int, default=1)
Can we decouple these data-related arguments from the model? Related: #207
Yes, let's discuss #207 as well. I too think we should decouple data, transforms, and models and keep them independent.
@akihironitta where do you suggest we put the data dir and batch size arguments?
Let's keep this for now to preserve backward compatibility, and discuss how to make this consistent across all models.
@ananyahjha93 @oke-aditya In my opinion, the `--data_dir` and `--batch_size` args should be added to the parser (in `cli_main()`) via datamodules, not via models, as #206 previously removed those args from `GAN`.
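For illustration, a minimal sketch of that proposal, assuming the datamodule inherits PyTorch Lightning's `add_argparse_args`/`from_argparse_args` helpers and that the model exposes an `add_model_specific_args` staticmethod as elsewhere in bolts; `VOCDetectionDataModule` is used purely as an example and the exact wiring is not part of this PR:

```python
from argparse import ArgumentParser

import pytorch_lightning as pl

from pl_bolts.datamodules import VOCDetectionDataModule
from pl_bolts.models.detection import FasterRCNN


def cli_main():
    parser = ArgumentParser()
    # model-specific args stay on the model...
    parser = FasterRCNN.add_model_specific_args(parser)
    # ...while data args (--data_dir, --batch_size, ...) come from the datamodule
    parser = VOCDetectionDataModule.add_argparse_args(parser)
    args = parser.parse_args()

    datamodule = VOCDetectionDataModule.from_argparse_args(args)
    model = FasterRCNN(pretrained=True)
    trainer = pl.Trainer.from_argparse_args(args)
    trainer.fit(model, datamodule=datamodule)
```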
@ananyahjha93 I'd love to know what you think of my suggestion :] (We could probably resolve this later in another PR since the related issue #207 is still open.)
Also, this introduces new detection components (which are spreading like a virus). Marking for discussion in #427.
As of now, isort and black don't play well together, hence I'm getting these errors. Marking for #482. isort and black seem to have different ways of ordering imports; I'm keeping the isort way.
@oke-aditya Sorry for the delay and thank you for your contribution! Other than the above comment #475 (comment), LGTM. cc: @ananyahjha93
@oke-aditya Forgot to mention in the above comment: can you merge master (which should fix the argparse errors on Python 3.6) and resolve the failing isort tests?
Yes, I will. I propose enabling linear commit history in the repository settings cc @Borda. It makes such syncs easier and blocks non-linear merges to master. Contributors and maintainers can then just click "Update branch" to do this.
Fixed the above issues. Let's merge this so that I can add #391 RetinaNet 😄
CI failure unrelated cc @Borda @ananyahjha93, hope we can merge 😄
@oke-aditya You're right that the failing tests are not related to the changes in this PR. They may have the same cause as #409, which is still open.
I reformatted with yapf to pass the check. 👍 Can we merge this? 😅 @Borda @ananyahjha93
Pls update imports to use the `if available...` guard pattern.
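For reference, a minimal sketch of that guard, assuming the `_TORCHVISION_AVAILABLE` flag and the `warn_missing_pkg` helper from `pl_bolts.utils`; the guarded torchvision import below is just an example:

```python
from pl_bolts.utils import _TORCHVISION_AVAILABLE
from pl_bolts.utils.warnings import warn_missing_pkg

if _TORCHVISION_AVAILABLE:
    # only touch torchvision when it is actually installed
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
else:  # pragma: no cover
    warn_missing_pkg('torchvision')
```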
For documentation: we have no docs for detection currently. Let's add RetinaNet and then build comprehensive documentation? CI failure is unrelated.
@Borda @akihironitta can we merge this?
@oke-aditya I left some minor suggestions. I think we are ready to go once these comments are resolved.
@@ -0,0 +1,3 @@
from pl_bolts.models.detection.components.torchvision_backbones import create_torchvision_backbone
I don't understand why `flake8` doesn't complain about this line without `# noqa: F401`.
But why should flake8 complain?
Since it is included in `__all__`, flake8 does not complain.
from pl_bolts.models.detection.faster_rcnn.backbones import create_fasterrcnn_backbone
from pl_bolts.models.detection.faster_rcnn.faster_rcnn_module import FasterRCNN
Also here, I don't know why `flake8` is happy about these lines without `# noqa: F401`. cc: @Borda
Same comment, `__all__` blocks flake8 😅
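A small sketch of the pattern under discussion: pyflakes treats any name listed in `__all__` as used, so flake8 raises no F401 for these re-exports even without a `# noqa` comment (illustrative `__init__.py`, not the PR's exact file):

```python
from pl_bolts.models.detection.faster_rcnn.backbones import create_fasterrcnn_backbone
from pl_bolts.models.detection.faster_rcnn.faster_rcnn_module import FasterRCNN

# listing the names here counts as a "use", which silences flake8's F401
__all__ = [
    "create_fasterrcnn_backbone",
    "FasterRCNN",
]
```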
except ModuleNotFoundError:
    warn_missing_pkg('torchvision')  # pragma: no-cover

from pl_bolts.models.detection.faster_rcnn import create_fasterrcnn_backbone
I think we should place `from pl_bolts.models.detection.faster_rcnn import create_fasterrcnn_backbone` outside the `if/else`, since it should be possible to import `create_fasterrcnn_backbone` even when torchvision is not available.
I don't think it is possible to import / use `create_fasterrcnn_backbone` without torchvision, as it uses torchvision models and `resnet_fpn_backbone` from torchvision. So, to be on the safe side, let's warn beforehand.
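To see why, a hedged sketch of what such a backbone factory boils down to (`resnet_fpn_backbone` is a real torchvision utility; the wrapper below is illustrative rather than the PR's exact code):

```python
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone


def create_fasterrcnn_backbone(backbone: str = "resnet50", pretrained: bool = True):
    # the import above already fails at module load time when torchvision is
    # missing, hence the guarded import plus warn_missing_pkg in __init__.py
    return resnet_fpn_backbone(backbone, pretrained=pretrained)
```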
Let me update; I think it would be good to go then.
Co-authored-by: Akihiro Nitta <nitta@akihironitta.com>
@oke-aditya @akihironitta seems I was too fast :D mind addressing the updates in a follow-up PR? :]
What does this PR do?
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃
Redo of #382. Closes #340