Deep Pray(深度祈祷): LEGO for deep learning, Making AI easier, faster and cheaper👻

Deepray is a repository of contributions that conform to well-established API patterns but implement new functionality not available in core TensorFlow. TensorFlow natively supports a large number of operators, layers, metrics, losses, and optimizers. However, in a fast-moving field like ML, there are many interesting new developments that cannot be integrated into core TensorFlow (because their broad applicability is not yet clear, or because they are mostly used by a smaller subset of the community).

Maintainership

Deepray is currently maintained by @fuhailin. If you would like to maintain something, please feel free to submit a PR. We encourage multiple owners for all submodules.

Installation

Stable Builds

Deepray is available on PyPI for Linux. To install the latest version, run the following:

pip install deepray

To ensure you have a version of TensorFlow that is compatible with Deepray, you can specify the tensorflow extra requirement during install:

pip install deepray[tensorflow]

Similar extras exist for the tensorflow-gpu and tensorflow-cpu packages. To use Deepray:

import tensorflow as tf
import deepray as dp
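
A quick smoke test after the imports above (a minimal sketch; it assumes both packages expose the usual __version__ attribute):

# Print the versions that were picked up. Deepray warns at import time if the
# installed TensorFlow version does not match one it was tested against.
print("TensorFlow:", tf.__version__)
print("Deepray:", dp.__version__)  # assumes a standard __version__ attribute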

Python Op Compatibility

Deepray is actively working towards forward compatibility with TensorFlow 2.x. However, there are still a few private API uses within the repository, so at the moment we can only guarantee compatibility with the TensorFlow versions against which it was tested. Warnings will be emitted when importing deepray if your TensorFlow version does not match the one it was tested against.

Python Op Compatibility Matrix

Deepray        | TensorFlow | Python
-------------- | ---------- | ---------------------
deepray-0.18.0 | 2.9.3      | 3.8, 3.9, 3.10, 3.11
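
If you want to see exactly what gets flagged at import time, you can capture the warning yourself. This is a hedged sketch: it assumes the compatibility check is emitted through Python's standard warnings module.

import warnings

# Record any warnings raised while importing Deepray, assuming the version
# check uses Python's warnings module (an assumption, not a documented API).
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    import deepray as dp

for w in caught:
    print(w.category.__name__, str(w.message))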

C++ Custom Op Compatibility

TensorFlow C++ APIs are not stable and thus we can only guarantee compatibility with the version Deepray was built against. It is possible custom ops will work with multiple versions of TensorFlow, but there is also a chance for segmentation faults or other problematic crashes. Warnings will be emitted when loading a custom op if your TensorFlow version does not match what it was built against.

Additionally, custom op registration does not have a stable ABI interface, so users must have a compatible installation of TensorFlow even if the versions match what we built against. In practice, this means Deepray custom ops will work with pip-installed TensorFlow but may have issues when TensorFlow is compiled differently; a typical example is conda-installed TensorFlow. RFC #133 aims to fix this.

C++ Custom Op Compatibility Matrix

Deepray        | TensorFlow | Compiler  | cuDNN | CUDA
-------------- | ---------- | --------- | ----- | ----
deepray-0.18.0 | 2.12       | GCC 9.3.1 | 8.1   | 11.8
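
If the compiled ops crash or fail to load on your installation, a common workaround is to force the pure TensorFlow/Python implementations before importing Deepray. The sketch below uses the DEEPRAY_PY_OPS variable described under Core Concepts; setting it before the first import is an assumption about ordering, not a documented requirement.

import os

# Prefer the pure TensorFlow/Python implementations over the compiled custom
# ops. Set this before the first import of deepray so it takes effect
# (the ordering here is an assumption; see the Core Concepts section below).
os.environ["DEEPRAY_PY_OPS"] = "1"

import deepray as dp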

Installing from Source

You can also install from source. This requires the Bazel build system (version >= 1.0.0).

CPU Custom Ops
git clone https://github.com/deepray-AI/deepray.git
cd deepray

# This script links the project with the TensorFlow dependency
python3 ./configure.py

bazel build build_pip_pkg
bazel-bin/build_pip_pkg artifacts

pip install artifacts/deepray-*.whl
GPU and CPU Custom Ops
git clone https://github.com/deepray-AI/deepray.git
cd deepray

export TF_NEED_CUDA="1"

# Set these if the below defaults are different on your system
export TF_CUDA_VERSION="11"
export TF_CUDNN_VERSION="8"
export CUDA_TOOLKIT_PATH="/usr/local/cuda"
export CUDNN_INSTALL_PATH="/usr/lib/x86_64-linux-gnu"

# This script links the project with the TensorFlow dependency
python3 ./configure.py

bazel build build_pip_pkg
bazel-bin/build_pip_pkg artifacts

pip install artifacts/deepray-*.whl

Tutorials

See docs/tutorials/ for end-to-end examples of various Deepray features.

Core Concepts

Standardized API within Subpackages

User experience and project maintainability are core concepts in Deepray. In order to achieve these we require that our additions conform to established API patterns seen in core TensorFlow.
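
For illustration, the kind of established pattern meant here is standard Keras subclassing: a layer implements __init__, build, call, and get_config. The layer below is a hypothetical example used only to show the shape of a conforming addition; it is not an actual Deepray API.

import tensorflow as tf

class ScaledDense(tf.keras.layers.Layer):  # hypothetical example, not a Deepray API
    """A toy layer that follows the standard Keras Layer contract."""

    def __init__(self, units, scale=1.0, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.scale = scale

    def build(self, input_shape):
        # Weights are created lazily in build(), following the Keras convention.
        self.kernel = self.add_weight(
            name="kernel", shape=(int(input_shape[-1]), self.units)
        )

    def call(self, inputs):
        return self.scale * tf.matmul(inputs, self.kernel)

    def get_config(self):
        # get_config() keeps the layer serializable, as core Keras layers are.
        return {**super().get_config(), "units": self.units, "scale": self.scale}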

GPU and CPU Custom Ops

Deepray supports precompiled custom ops for CPU and GPU. However, GPU custom ops currently only work on Linux distributions. For this reason, Windows and macOS fall back to the pure TensorFlow Python implementations whenever possible.

The order of priority on macOS/Windows is:

  1. Pure TensorFlow + Python implementation (works on CPU and GPU)
  2. C++ implementation for CPU

The order of priority on Linux is:

  1. CUDA implementation
  2. C++ implementation
  3. Pure TensorFlow + Python implementation (works on CPU and GPU)

If you want to change the default priority ("C++ and CUDA" vs. "pure TensorFlow Python"), you can set the environment variable DEEPRAY_PY_OPS=1 from the command line or call dp.options.disable_custom_kernel() in your code.

For example, if you are on Linux and you have compatibility problems with the compiled ops, you can give priority to the Python implementations:

From the command line:

export DEEPRAY_PY_OPS=1

or in your code:

import deepray as dp
dp.options.disable_custom_kernel()

This variable defaults to True on Windows and macOS, and False on Linux.

Contributing

Deepray is a community-led open source project (only a few maintainers work for Google!). As such, the project depends on public contributions, bug fixes, and documentation. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.

Do you want to contribute but are not sure of what? Here are a few suggestions:

  1. Add a new tutorial. Located in docs/tutorials/, these are a great way to familiarize yourself and others with Deepray. See the guidelines for more information on how to add examples.
  2. Improve the docstrings. The docstrings are fetched and then displayed in the documentation. Make a change, and hundreds of developers will see it and benefit from it. Maintainers are often focused on building APIs, fixing bugs, and other code-related changes. The documentation will never be loved enough!
  3. Solve an existing issue. These range from low-level software bugs to higher-level design problems. Check out the label help wanted. If you're a new contributor, the label good first issue can be a good place to start.
  4. Review a pull request. So you're not a software engineer, but you know a lot about a certain field of research? That's awesome, and we need your help! Many people submit pull requests to add layers/optimizers/functions taken from recent papers. Since Deepray maintainers are not specialized in everything, you can imagine how hard these are to review: it takes a long time to read the paper, understand it, and check the math in the pull request. If you're specialized, look at the list of pull requests. If there is something from a paper you know, please comment on the pull request to check that the math is correct. If everything looks good, say so! It will help the maintainers sleep better at night knowing that they weren't the only person to approve the pull request.
  5. You have an opinion and want to share it? Are the docs unhelpful for a function or a class? Did you try to open a pull request but couldn't manage to install or test anything and found it too complicated? Did you make a pull request but find the process confusing or poorly explained? Please say so! We want feedback. Maintainers are often too deep in the code to understand what it's like for someone new to open source to come to this project. If you don't understand something, remember that there are no people who are bad at understanding, only bad tutorials and bad guides.

Please see contribution guidelines to get started (and remember, if you don't understand something, open an issue, or even make a pull request to improve the guide!).

Community

License

Apache License 2.0