Install TensorFlow from a prebuilt binary when possible #65
Conversation
17361fa to c7c35d7
This massively speeds up our build time, so our Travis builds don't time out any more. 😃 Not to mention being less painful when developing locally.
@asimshankar fyi
@adamcrume Should we still have a switch to force a build? On the other hand, you can always build on your own and then use that one at link time...
@daschl @jhseu I thought about using a feature...
@adamcrume I agree, a feature is probably not the right abstraction level. For example, the openssl crate also exposes environment variables (https://github.com/sfackler/rust-openssl#manual-configuration), which I think is also the most portable approach. The only thing I potentially worry about, but let me know if I'm mistaken, is that the prebuilt binary is not optimized for the local CPU. So the bottom line is that I think this is pretty great, since it helps speed up the build time a lot. We should just see how we can keep the flexibility/optimization possibilities accessible.
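For reference, a minimal build.rs sketch of the openssl-style approach: an environment variable that forces building from source instead of using the prebuilt tarball. The variable name `TF_RUST_BUILD_FROM_SOURCE` is hypothetical, not something this PR defines.

```rust
// build.rs sketch; TF_RUST_BUILD_FROM_SOURCE is a placeholder name.
use std::env;

fn main() {
    // Opt-in switch: build TensorFlow from source instead of downloading
    // the prebuilt libtensorflow tarball.
    let force_source = env::var("TF_RUST_BUILD_FROM_SOURCE")
        .map(|v| v == "1" || v.eq_ignore_ascii_case("true"))
        .unwrap_or(false);

    if force_source {
        // Placeholder: invoke the bazel build here.
        println!("cargo:warning=building libtensorflow from source");
    } else {
        // Placeholder: download and unpack the prebuilt tarball here.
        println!("cargo:warning=using prebuilt libtensorflow");
    }
}
```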
@adamcrume The other thing that came to mind is that right now the code downloads the tarball into the target directory, so after a clean it has to be redownloaded. I wonder if we should support setting an environment variable for a different location, so that even after a cargo clean it just picks up the tarball from that folder if it exists?
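A sketch of what that could look like in the build script, assuming a hypothetical `TF_RUST_DOWNLOAD_DIR` variable (the tarball file name is also just illustrative): if the user-specified cache directory already contains the tarball, reuse it; otherwise download into it, falling back to cargo's `OUT_DIR` by default.

```rust
use std::env;
use std::path::PathBuf;

// Sketch only: TF_RUST_DOWNLOAD_DIR is a hypothetical variable name.
// Prefer a user-specified cache directory that survives `cargo clean`,
// falling back to OUT_DIR, which cargo manages per build.
fn tarball_path(tarball_name: &str) -> PathBuf {
    let dir = env::var_os("TF_RUST_DOWNLOAD_DIR")
        .map(PathBuf::from)
        .unwrap_or_else(|| PathBuf::from(env::var_os("OUT_DIR").unwrap()));
    dir.join(tarball_name)
}

fn main() {
    let path = tarball_path("libtensorflow-cpu-linux-x86_64-1.0.0.tar.gz");
    if path.exists() {
        // Reuse the cached tarball; nothing to download.
    } else {
        // Placeholder: download the tarball to `path` here.
    }
    println!("cargo:warning=tarball location: {}", path.display());
}
```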
@daschl Yeah, the prebuilt binaries are general and so don't include the benefits of SIMD. The performance difference can be notable (~2x). A reasonable compromise here might be to use a system-installed libtensorflow.so if it's available, or add it as an option if there's an idiomatic way to do that in Rust. I'm not too worried about this in the long term. When XLA is enabled by default, we can JIT compile with SIMD instructions.

Related: ideally TensorFlow would get statically linked in for both Rust and Go. Bazel should be getting support for building static libraries soon.
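One way that option might look in build.rs, using the standard cargo link directives. The opt-in variable `TF_RUST_SYSTEM_LIB_DIR` is an illustrative assumption, not part of this PR.

```rust
use std::env;
use std::path::Path;

// Sketch: prefer a system-installed libtensorflow if the user points us at it.
fn main() {
    if let Ok(dir) = env::var("TF_RUST_SYSTEM_LIB_DIR") {
        if Path::new(&dir).join("libtensorflow.so").exists() {
            // Standard cargo directives: add the search path and link dynamically.
            println!("cargo:rustc-link-search=native={}", dir);
            println!("cargo:rustc-link-lib=dylib=tensorflow");
            return;
        }
    }
    // Otherwise fall back to the downloaded prebuilt binary (not shown).
}
```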
This controls the location for the downloaded prebuilt binary tarball.
I added a couple of environment variables. What do you think?
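One small follow-up idea, whatever the final names end up being: the build script could emit cargo's `rerun-if-env-changed` directive for each variable, so that changing one triggers a rebuild of the sys crate. A sketch with placeholder names:

```rust
// build.rs sketch; the variable names below are placeholders, not the PR's actual names.
fn main() {
    for var in &["TF_RUST_BUILD_FROM_SOURCE", "TF_RUST_DOWNLOAD_DIR"] {
        // Rerun this build script whenever one of these variables changes.
        println!("cargo:rerun-if-env-changed={}", var);
    }
    // ... rest of the build logic ...
}
```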
I've tried this PR. As expected, --copt=-march=native is not present, and TF will report it:
(~/tensorflow_rust.git/tensorflow-sys)
-> cargo run --example multiplication
Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs
Running `/home/nbigaouette/tensorflow_rust.git/target/debug/examples/multiplication`
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
The binary is downloaded into tensorflow.git/tensorflow-sys/target, not tensorflow.git/target. Could that be a bug in cargo's handling of CARGO_MANIFEST_DIR and workspaces? In any case, this prevents cargo clean from deleting the downloaded file! The file is ~17MB, so it's not that large either.
I'd also like to see the CI running cargo test -vv -j 1 (it's disabled for now), since it is still failing on my machine (see #66). I really don't know why it's failing; I was hoping that a pre-built version would fix this, but it doesn't look like it.
- cargo test -vv -j 2 --features tensorflow_unstable
- cargo run --example regression
- cargo run --features tensorflow_unstable --example expressions
- cargo doc -vv --features tensorflow_unstable
- (cd tensorflow-sys && cargo test -vv -j 1)
- # TODO(#66): Re-enable: (cd tensorflow-sys && cargo test -vv -j 1)
Is that required in the CI? I couldn't find anybody who could reproduce #66...
Yes; this test fails on my local machine and on Travis: https://travis-ci.org/tensorflow/rust/builds/208479366