Hello!
Thank you for the interesting challenge!
I tried to reproduce your baseline method, but the weights are not publicly available. Training the network from scratch is possible, but it complicates things for some people; for example, not everyone has easy access to enough GPUs to train the model.
Would it be possible to share the weights of the semantic segmentation network with us?
Best regards,
Robert
While we're discussing the possibility of releasing the pre-trained weights for the baseline, I would suggest reducing the input resolution of the network. You can do that by lowering MULTIPLICATIVE_FACTOR, e.g.: INPUT.MULTIPLICATIVE_FACTOR = 1
You can check config/defaults.py for details. This will reduce the input resolution while keeping the aspect ratio and, generally, the field of view.
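If it helps, the effect of such a multiplicative factor on the input size can be sketched as follows (this is assumed behaviour; the exact semantics are in config/defaults.py, and the function name here is hypothetical):

```python
# Sketch of how a multiplicative resolution factor typically works:
# both sides are scaled by the same value, so the aspect ratio (and
# therefore, roughly, the field of view) is preserved.

def scaled_input_size(height, width, factor):
    """Return the input size after scaling both sides by `factor`."""
    return int(round(height * factor)), int(round(width * factor))

# Hypothetical full-resolution input of 1080x1920 scaled down by half:
h, w = scaled_input_size(1080, 1920, 0.5)
print(h, w)  # 540 960
```

A smaller factor shrinks the network's input (and hence memory/compute) without cropping, which is why it is a cheap first knob to turn when GPUs are limited.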
Moreover, the baseline uses a pre-trained deeplabv3_resnet101, but you could consider starting from a shallower network.