Upgrade to TF 2.10.1, with macosx-arm64 support #481

Merged
merged 13 commits into from
Dec 28, 2022
23 changes: 17 additions & 6 deletions CONTRIBUTING.md
@@ -48,7 +48,6 @@ The `tensorflow-core/tensorflow-core-api/.bazelversion` file must be kept in sync with the Bazel version used by TensorFlow.
This allows using [Bazelisk](https://github.com/bazelbuild/bazelisk) which runs the bazel version given in .bazelversion instead of having to
physically reinstall a specific `bazel` version each time the TensorFlow version changes.
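As a self-contained sketch of what Bazelisk does (the directory and version number below are stand-ins, not the project's actual pinned version):

```shell
# Minimal illustration of Bazelisk's version resolution, simulated in a temp
# directory; in the real tree the file lives at
# tensorflow-core/tensorflow-core-api/.bazelversion.
demo_dir=$(mktemp -d)
cd "$demo_dir"
echo "5.1.1" > .bazelversion          # hypothetical pinned Bazel version
resolved=$(cat .bazelversion)
echo "Bazelisk would download and run Bazel ${resolved}"
```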


### GPU Support

Currently, due to build time constraints, the GPU binaries only support compute capabilities 3.5 and 7.0.
@@ -61,6 +60,17 @@ To build for GPU, pass `-Djavacpp.platform.extension=-gpu` to Maven. By default,
for more info. If you add `bazelrc` files, make sure the `TF_CUDA_COMPUTE_CAPABILITIES` value in them matches the value set elsewhere, as it will take
precedence if present.
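For illustration, a hypothetical `bazelrc` fragment pinning the capability list (the flag placement is an assumption; the value must match whatever is configured elsewhere in the build, per the note above):

```
# Must match the TF_CUDA_COMPUTE_CAPABILITIES value set elsewhere in the
# build, since this file takes precedence when present.
build --action_env TF_CUDA_COMPUTE_CAPABILITIES="3.5,7.0"
```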

### Apple Silicon

The TensorFlow Java project relies on [GitHub-hosted runners](https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners)
to build and distribute the native binaries for TensorFlow. Unfortunately, to this day, GitHub Actions still does not support runners with an
Apple Silicon chip (such as the M1). Therefore, we cannot distribute binaries for this platform, and they must be compiled and installed locally on such systems.

Please follow the [building procedure](CONTRIBUTING.md#building) to build TensorFlow Java from source.

:warning: As of this writing (2022-12-16), TensorFlow fails to build with Xcode Command Line Tools version 14+. If such a version is installed, it might
be necessary to downgrade to a [previous version](https://developer.apple.com/download/all/?q=Xcode), such as 13.4.1.
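A small, self-contained sketch of how one might check for an affected toolchain; the version string is hard-coded here for illustration (on a real machine you would query the installed Command Line Tools, e.g. with `xcodebuild -version`):

```shell
# Hypothetical check: warn when the Xcode Command Line Tools major version
# is 14 or later, since TensorFlow currently fails to build with it.
clt_version="14.0.1"                  # stand-in for the detected version
major=${clt_version%%.*}
if [ "$major" -ge 14 ]; then
  echo "CLT ${clt_version}: consider downgrading to 13.4.1"
else
  echo "CLT ${clt_version}: should be fine"
fi
```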

## Running Tests

`ndarray` can be tested using the maven `test` target. `tensorflow-core` and `tensorflow-framework`, however, should be tested using
@@ -136,8 +146,8 @@ The actual classification process is a bit arbitrary and based on the good judgement
are being wrapped by a higher-level API and therefore are left unclassified, while in Java they are exposed and can be used directly by
the users.

For classifying an op, a `api_def` proto must be added to the `tensorflow-core-api` [folder](https://github.com/tensorflow/java/tree/master/tensorflow-core/tensorflow-core-api/src/bazel/api_def)
for this purpose, redefining optionally its endpoints or its visibility.
For classifying an op, an `api_def` proto must be added to the [`tensorflow-core-api/src/bazel/api_def`](https://github.com/tensorflow/java/tree/master/tensorflow-core/tensorflow-core-api/src/bazel/api_def)
folder, optionally redefining its endpoints or its visibility.
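As a purely illustrative example (the op and group names below are hypothetical; the field layout follows TensorFlow's `ApiDef` text-proto schema), such a file might look like:

```
op {
  graph_op_name: "MyOp"
  visibility: VISIBLE
  endpoint {
    name: "math.MyOp"
  }
}
```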

Writing these protos and trying to guess the right location for each new operation can become tedious, so a utility program called `java_api_import`
has been created to help you with this task. This utility is available under the `bazel-bin` folder of `tensorflow-core-api` after the
@@ -147,8 +157,7 @@ initial build. Here is how to invoke it:
cd tensorflow-core/tensorflow-core-api
./bazel-bin/java_api_import \
--java_api_dir=src/bazel/api_def \
--tf_src_dir=bazel-tensorflow-core-api/external/org_tensorflow \
--tf_lib_path=bazel-bin/external/org_tensorflow/tensorflow/libtensorflow_cc.<version>.<ext>
--tf_src_dir=bazel-tensorflow-core-api/external/org_tensorflow
```

For each new operation detected (i.e. any operation that does not have a valid `api_def` proto yet), the utility will suggest some possible
@@ -157,7 +166,9 @@ will automatically classify the op). It is also possible to manually enter the name of the
application will then take care of writing the `api_def` proto for each classified operation.

Make sure to completely erase the generated source folder of the `tensorflow-core-api` module before rerunning the build so you can see
whether your ops have been classified properly. Don't worry, that second run of the build will be faster! After rebuilding, please review the location of the newly
generated ops and manually adjust the `api_def` protos if some of them seem to be in the "wrong" place, repeating this process
until you are satisfied.
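The erase-and-rebuild step can be sketched as follows; the `src/gen` location is an assumption (check where your build actually writes generated sources), and the layout is simulated in a temporary directory so the example is self-contained:

```shell
# Simulate the generated-source folder, then erase it the way you would
# before rerunning the build (the path is hypothetical).
work=$(mktemp -d)
mkdir -p "$work/tensorflow-core-api/src/gen/annotations"
rm -rf "$work/tensorflow-core-api/src/gen"
if [ -d "$work/tensorflow-core-api/src/gen" ]; then state=present; else state=erased; fi
echo "generated sources: $state"
```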

#### Ops Kernel Upgrade

70 changes: 51 additions & 19 deletions README.md
@@ -43,15 +43,30 @@ See [CONTRIBUTING.md](CONTRIBUTING.md#building).

## Using Maven Artifacts

To include TensorFlow in your Maven application, you first need to add a dependency on either the
`tensorflow-core` or `tensorflow-core-platform` artifacts. The former could be included multiple times
for different targeted systems by their classifiers, while the later includes them as dependencies for
`linux-x86_64`, `macosx-x86_64`, and `windows-x86_64`, with more to come in the future. There are also
`tensorflow-core-platform-mkl`, `tensorflow-core-platform-gpu`, and `tensorflow-core-platform-mkl-gpu`
artifacts that depend on artifacts with MKL and/or CUDA support enabled.
There are two options for adding TensorFlow Java as a dependency to your Maven project: with individual dependencies
for each targeted platform, or with a single dependency that targets them all.

### Individual dependencies

With this option, you must first add an unclassified dependency on `tensorflow-core-api` and then add one or more
native dependencies on this same artifact with a classifier targeting a specific platform. This option is preferred, as
it minimizes the size of your application by including only the TensorFlow builds you need, at the cost of being more
restrictive.

While TensorFlow Java can be compiled for [multiple platforms](https://github.com/tensorflow/java/blob/dc64755ee948c71f1321be27478828a51f1f3cf7/tensorflow-core/pom.xml#L54),
only binaries for the following platforms are **supported and distributed** by this project:

- `linux-x86_64`: Linux platforms on Intel chips
- `linux-x86_64-gpu`: Linux platforms on Intel chips, with CUDA GPU support
- `macosx-x86_64`: macOS platforms on Intel chips
- `windows-x86_64`: Windows platforms on Intel chips
- `windows-x86_64-gpu`: Windows platforms on Intel chips, with CUDA GPU support

*Note: No binaries are distributed to run TensorFlow Java on machines with Apple Silicon chips (`macosx-arm64`); these
must be built from source. See [here](CONTRIBUTING.md#apple-silicon) for more details.*

For example, to build a JAR that uses TensorFlow and is targeted to be deployed only on Linux
systems, you should add the following dependencies:
systems with no GPU support, you should add the following dependencies:
```xml
<dependency>
<groupId>org.tensorflow</groupId>
@@ -62,7 +77,7 @@
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow-core-api</artifactId>
<version>0.4.2</version>
<classifier>linux-x86_64${javacpp.platform.extension}</classifier>
<classifier>linux-x86_64</classifier>
</dependency>
```
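With the dependencies above on the classpath, a minimal sanity check (a sketch, assuming the `tensorflow-core-api` artifact and a native classifier matching your platform resolve correctly) is to print the native runtime version:

```java
import org.tensorflow.TensorFlow;

public final class VersionCheck {
  public static void main(String[] args) {
    // Loads the native library and prints its version; this fails at runtime
    // if no native artifact matching the current platform is on the classpath.
    System.out.println("TensorFlow runtime: " + TensorFlow.version());
  }
}
```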

@@ -78,37 +93,54 @@ native dependencies as follows:
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow-core-api</artifactId>
<version>0.4.2</version>
<classifier>linux-x86_64${javacpp.platform.extension}</classifier>
<classifier>linux-x86_64-gpu</classifier>
</dependency>
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow-core-api</artifactId>
<version>0.4.2</version>
<classifier>macosx-x86_64${javacpp.platform.extension}</classifier>
<classifier>macosx-x86_64</classifier>
</dependency>
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow-core-api</artifactId>
<version>0.4.2</version>
<classifier>windows-x86_64${javacpp.platform.extension}</classifier>
<classifier>windows-x86_64-gpu</classifier>
</dependency>
```

In some cases, pre-configured starter artifacts can help to automatically include all versions of
the native library for a given configuration. For example, the `tensorflow-core-platform`,
`tensorflow-core-platform-mkl`, `tensorflow-core-platform-gpu`, or `tensorflow-core-platform-mkl-gpu`
artifact includes transitively all the artifacts above as a single dependency:
Only one dependency can be added per platform, meaning that you cannot add native dependencies to both `linux-x86_64` and
`linux-x86_64-gpu` within the same project.

### Single dependency

In some cases, it might be preferable to add a single dependency that transitively includes all the artifacts
required to run TensorFlow Java on any of the [supported platforms](README.md#individual-dependencies):

- `tensorflow-core-platform`: Supports `linux-x86_64`, `macosx-x86_64` and `windows-x86_64`
- `tensorflow-core-platform-gpu`: Supports `linux-x86_64-gpu` and `windows-x86_64-gpu`

For example, to run TensorFlow Java on any platform for which a binary is being distributed by this project, you can
simply add this dependency to your application:
```xml
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow-core-platform</artifactId>
<version>0.4.2</version>
</dependency>
```
or this dependency if you want to run it only on platforms with GPU support:
```xml
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow-core-platform${javacpp.platform.extension}</artifactId>
<artifactId>tensorflow-core-platform-gpu</artifactId>
<version>0.4.2</version>
</dependency>
```

Be aware though that the native library is quite large and including too many versions of it may
significantly increase the size of your JAR. So it is good practice to limit your dependencies to
the platforms you are targeting. For this purpose the `-platform` artifacts include profiles that follow
Be aware, though, that the TensorFlow builds are quite voluminous, and including too many native dependencies may
significantly increase the size of your application. So it is good practice to limit your dependencies to
the platforms you are targeting. For this purpose, these artifacts include profiles that follow
the conventions established on this page:
* [Reducing the Number of Dependencies](https://github.com/bytedeco/javacpp-presets/wiki/Reducing-the-Number-of-Dependencies)
