implemented padding aware im2col and col2im functions #99
Conversation
nice! I don't see the need for having one version with and another version without padding, though; you can just set the padding to zero and it should still work.
Would you mind writing a short test to compare the direct version vs the padded version?

Yangqing
@shelhamer DON'T MERGE until it is clean. @mavenlin the commit for the from_other filler should be in another Pull Request. It is a nice idea, although the name is a bit confusing, since it is not loading from another filler but from a blob within a snapshot.
@sguada I did not mean to put it in this pull request; it seems my new commits to my fork are automatically added. Is there any way to prevent this?
@mavenlin this is a strong reason you should develop in feature branches and not master. To fix this, do an interactive rebase and delete any commits you do not want to be part of this PR, then force-push the branch (see the sketch below) and wait for your PR to be merged, without doing any further development on that branch in the meantime. To make this more clear, I will write a short tutorial on contributing to Caffe soon to spell out these issues and save trouble for everyone.
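For example, something along these lines (a sketch only; it assumes the PR was opened from the fork's master branch and that BVLC/caffe is configured as the `upstream` remote):

```sh
# replay the branch on top of upstream and drop the stray commits
git fetch upstream
git rebase -i upstream/master   # delete the lines of the commits you don't want
# rewrite the branch backing the PR so it only contains the intended commits
git push --force origin master
```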
@shelhamer Thanks for the explanation, updated. I'm writing the testing code to confirm.
Test code added. Run the test code like below; the output shows:
The major benefit seems to be a significant reduction in memory usage. Would you like to add detailed relevant statistics in the spirit of #83?
@kloudkl I don't want to do that, since there is no slowdown.
The following is a test of the time cost using different parameters (pa = padding aware).

Consistently, padding aware convolution is faster than pad + convolution. Another test was carried out to compare padding aware im2col to the original im2col when the padding size is 0.

I think it would do no harm, and some good, to stick to the original im2col when pad = 0 (the sketch below shows the extra check the padded path performs).
@tdomhan @Yangqing What do you think?

@shelhamer I'll rebase on dev and submit a new PR.
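For illustration, here is a minimal standalone sketch of a padding-aware im2col on the CPU (not the code in this PR; the function name, float-only data type, and square kernel with uniform stride/pad are simplifications). The only difference from the original im2col is the bounds check that reads zeros for positions falling in the padded border, which is the small extra cost paid even when pad = 0:

```cpp
// Minimal padding-aware im2col sketch: single image, float data, square
// kernel, uniform stride and pad.
//   data_im  : channels x height x width (row-major)
//   data_col : (channels * ksize * ksize) x (height_col * width_col)
void im2col_pad_cpu(const float* data_im, int channels, int height, int width,
                    int ksize, int pad, int stride, float* data_col) {
  const int height_col = (height + 2 * pad - ksize) / stride + 1;
  const int width_col  = (width  + 2 * pad - ksize) / stride + 1;
  const int channels_col = channels * ksize * ksize;
  for (int c = 0; c < channels_col; ++c) {
    const int w_offset = c % ksize;            // column within the kernel
    const int h_offset = (c / ksize) % ksize;  // row within the kernel
    const int c_im = c / ksize / ksize;        // input channel
    for (int h = 0; h < height_col; ++h) {
      for (int w = 0; w < width_col; ++w) {
        const int h_im = h * stride - pad + h_offset;
        const int w_im = w * stride - pad + w_offset;
        // Padding awareness: out-of-range positions read as zero instead of
        // requiring a physically padded copy of the input.
        data_col[(c * height_col + h) * width_col + w] =
            (h_im >= 0 && h_im < height && w_im >= 0 && w_im < width)
                ? data_im[(c_im * height + h_im) * width + w_im]
                : 0.f;
      }
    }
  }
}
```

When pad = 0 the check never triggers, so the output is identical to the original im2col and the branch itself is the only overhead, which is why falling back to the original code in that case is a reasonable choice.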
Implemented padded_im2col and padded_col2im in both GPU and CPU versions. The convolution and im2col layers are updated to use the padded version when the pad parameter is not zero. To use it, just remove the padding layers and add a "pad" parameter to the convolution layer.
In the case of imagenet, removing the padding layers reduces device memory usage from 3700M to 3100M, saving about 600M of memory.
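For a rough sense of where that saving comes from (a back-of-the-envelope estimate, assuming the reference imagenet model's blob shapes, batch size 256, float32, and that each padding layer's top blob carries both data and diff): the padded blobs in front of conv2 through conv5 are 256x96x31x31, 256x256x15x15, 256x384x15x15 and 256x384x15x15, roughly 94 + 59 + 88 + 88 ≈ 330 MB of data, or about 660 MB once the diffs are counted as well, so the observed ~600M is about what one would expect.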
According to the benchmark (#85), the padding layers in imagenet take about 3% of the total execution time in GPU mode, so removing them will not improve speed much. On the other hand, padded_im2col adds operations to the kernel code, so any speedup will be negligible at best. On my Titan card, the code with padding layers runs 50 batches in 70s; without them it improves by 1s to 69s.
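To make the "adds operations to the kernel code" point concrete, here is a minimal CUDA sketch of a padding-aware im2col kernel (again only an illustration, not the kernel in this PR; float-only, square kernel, uniform stride/pad). Compared with an unpadded kernel, the additions are the `- pad` offsets and the per-element bounds check with zero fill, which is why the measured difference stays within about a second over 50 batches:

```cpp
// Minimal padding-aware im2col CUDA kernel sketch. Launch with a 1-D grid
// covering n = channels * height_col * width_col work items; each thread
// fills the ksize*ksize column entries for one (channel, output y, output x).
__global__ void im2col_pad_gpu_kernel(const int n, const float* data_im,
                                      const int height, const int width,
                                      const int ksize, const int pad,
                                      const int stride, const int height_col,
                                      const int width_col, float* data_col) {
  for (int index = blockIdx.x * blockDim.x + threadIdx.x; index < n;
       index += blockDim.x * gridDim.x) {
    const int w_out = index % width_col;
    const int h_out = (index / width_col) % height_col;
    const int c_im  = index / width_col / height_col;
    const int h_in  = h_out * stride - pad;  // top-left of the receptive field
    const int w_in  = w_out * stride - pad;  // (may be negative in the border)
    float* col_ptr = data_col +
        ((c_im * ksize * ksize) * height_col + h_out) * width_col + w_out;
    const float* im_ptr = data_im + (c_im * height + h_in) * width + w_in;
    for (int i = 0; i < ksize; ++i) {
      for (int j = 0; j < ksize; ++j) {
        const int h = h_in + i;
        const int w = w_in + j;
        // The extra work vs. the unpadded version: range check + zero fill.
        *col_ptr = (h >= 0 && h < height && w >= 0 && w < width)
                       ? im_ptr[i * width + j] : 0.f;
        col_ptr += height_col * width_col;   // next row of the column buffer
      }
    }
  }
}
```

The col2im direction is symmetric: contributions that would land in the padded border are simply dropped when accumulating back into the image, so no padded intermediate buffer is needed there either.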