Simplifying the code handling dim_ordering for tensorflow back-end #3149

Closed
MycChiu opened this issue Jul 5, 2016 · 2 comments

MycChiu commented Jul 5, 2016

TensorFlow's convolution-related operations expect an input tensor of shape
[batch, in_height, in_width, in_channels]
(the NHWC layout), while Theano expects
[batch, in_channels, in_height, in_width]
(NCHW). As a result, the current master branch performs an extra tensor transpose whenever the dim_ordering follows the Theano convention.

However, since r0.8 TensorFlow has supported different dimension orderings via the data_format argument. When dim_ordering is 'th', we could simply pass data_format='NCHW' (i.e. [n_sample, channel, height, width]), which saves a tf.transpose operation.
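
For illustration, here is a minimal sketch of the two paths using the graph-mode API (the placeholder shapes and variable names are my own, not the actual Keras backend code):

```python
import tensorflow as tf

# 'th'-ordered input: [batch, channels, height, width] (NCHW).
x = tf.placeholder(tf.float32, [None, 3, 32, 32])
# Filters use the [height, width, in_channels, out_channels] layout.
kernel = tf.Variable(tf.truncated_normal([3, 3, 3, 64]))

# Current approach: transpose to NHWC, convolve, transpose back.
x_nhwc = tf.transpose(x, [0, 2, 3, 1])
y_nhwc = tf.nn.conv2d(x_nhwc, kernel, strides=[1, 1, 1, 1], padding='SAME')
y = tf.transpose(y_nhwc, [0, 3, 1, 2])

# Proposed approach: pass the NCHW tensor straight through (no transposes).
# Note the strides would also have to be given in NCHW order.
y_direct = tf.nn.conv2d(x, kernel, strides=[1, 1, 1, 1],
                        padding='SAME', data_format='NCHW')
```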

Additionally, when I run the benchmark files from soumith/convnet-benchmarks, I see a consistent ~20% speed gain from the 'NCHW' data_format when running on GPU. The effect seems to be caused by cuDNN using the 'NCHW' data_format internally, so TensorFlow has to perform additional operations to convert NHWC tensors into the layout cuDNN expects. I am not sure whether this applies to all environments (I am running Ubuntu 16.04, a GTX 1070 with CUDA toolkit 8.0RC, cuDNN v5, Keras 1.0.5, and TensorFlow 0.8.0), but if it does, I think it would be a good incentive to change the current way of dealing with different data formats.
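
For anyone who wants a rough version of this comparison without running the full convnet-benchmarks suite, a toy single-layer timing sketch could look like the following (the shapes, strides, and iteration count here are arbitrary assumptions, not the benchmark's settings):

```python
import time

import numpy as np
import tensorflow as tf


def time_conv(data_format, n_iter=50):
    # Build a single conv layer in the requested layout and time sess.run.
    shape = [64, 224, 224, 3] if data_format == 'NHWC' else [64, 3, 224, 224]
    strides = [1, 2, 2, 1] if data_format == 'NHWC' else [1, 1, 2, 2]
    with tf.Graph().as_default():
        x = tf.placeholder(tf.float32, shape)
        k = tf.Variable(tf.truncated_normal([7, 7, 3, 64]))
        y = tf.nn.conv2d(x, k, strides=strides, padding='SAME',
                         data_format=data_format)
        data = np.random.rand(*shape).astype('float32')
        with tf.Session() as sess:
            sess.run(tf.initialize_all_variables())
            sess.run(y, {x: data})  # warm-up run
            start = time.time()
            for _ in range(n_iter):
                sess.run(y, {x: data})
            return (time.time() - start) / n_iter


print('NHWC: %.4fs/iter' % time_conv('NHWC'))
print('NCHW: %.4fs/iter' % time_conv('NCHW'))
```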

fchollet (Collaborator) commented Jul 7, 2016

Thanks for pointing this out, but unfortunately support doesn't seem mature enough to switch at this point. In particular, the CPU implementation only supports NHWC.
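
For example, a sketch of what would currently break: pinning the graph to CPU and requesting NCHW is expected to fail at runtime (the exact error text depends on the TensorFlow version; this snippet is illustrative only):

```python
import numpy as np
import tensorflow as tf

with tf.device('/cpu:0'):
    x = tf.placeholder(tf.float32, [1, 3, 8, 8])  # NCHW input
    k = tf.Variable(tf.truncated_normal([3, 3, 3, 4]))
    y = tf.nn.conv2d(x, k, strides=[1, 1, 1, 1], padding='SAME',
                     data_format='NCHW')

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    # Expected to raise on a CPU-only build, since the CPU conv
    # kernel only implements NHWC (per the comment above).
    sess.run(y, {x: np.zeros((1, 3, 8, 8), dtype='float32')})
```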

stale bot added the stale label May 23, 2017
stale bot closed this as completed Jun 22, 2017

ddkang commented Sep 15, 2017

Do you think this is worth revisiting?
