
Re-sizing extracted features for use in caffe #2065

Closed
saumya-jetley opened this issue Mar 7, 2015 · 3 comments

Comments

@saumya-jetley

In order to load only a portion of the Caffe net for GPU-based training, I am extracting features at layer L1 and re-using them in a model built from L1 onwards.
To load the extracted (sequential) features, I must re-structure them into the blob dimensions at L1. Is there a better way to do this than manually setting the blob dimensions while extracting, like so:

// snippet from the 'root-to-caffe/tools/extract_features' code
int h = h1;         // Addition 1: target blob height
int w = w1;         // Addition 2: target blob width
int channels = c1;  // Addition 3: target blob channel count
for (int n = 0; n < batch_size; ++n) {
  datum.set_height(h);
  datum.set_width(w);
  datum.set_channels(channels);
  datum.clear_data();
  // ...
}

I have added the top three lines to the extract_features.cpp code to extract a blob of dimensions h1 × w1 × c1, where h1 is the height, w1 the width, and c1 the number of channels.

Please feel free to correct me or make suitable suggestions.
thanks.

@jyegerlehner
Contributor

That's the same issue I ran into. So my suggestion is: yes, just make those changes locally in the code.

@shelhamer
Member

Fixed in #1457 -- merged in c942dc1.

@saumya-jetley
Author

Oh, somehow I wasn't able to track down that issue. Thanks a lot, guys! That helps.
