Version bump to 1.0.0 + small tweaks to backend #16

Merged (4 commits, May 20, 2024)
6 changes: 3 additions & 3 deletions .github/workflows/tests.yml
@@ -27,7 +27,7 @@ jobs:

- name: Install package
run: |
pip install ".[dev]"
pip install ".[tensorflow,dev]"

- name: Lint with flake8
run: |
@@ -45,7 +45,7 @@ jobs:
- name: Upload coverage to Coveralls
if: ${{ github.ref == 'refs/heads/main' }}
run: |
coveralls --service=github
coveralls -i --service=github
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
COVERALLS_FLAG_NAME: ${{ matrix.python-version }}
@@ -74,6 +74,6 @@ jobs:
- name: Finished
if: ${{ github.ref == 'refs/heads/main' }}
run: |
coveralls --service=github --finish
coveralls -i --service=github --finish
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
6 changes: 5 additions & 1 deletion README.md
@@ -9,7 +9,11 @@ You can use `pip` to install the latest version of UniSim:
pip install unisim
```

By default, UniSim uses [Onnx](https://github.com/onnx/onnx) as the runtime. You can switch to using TensorFlow by setting the `BACKEND` environment variable (e.g. `os.environ["BACKEND"] = "tf"`).
By default, UniSim uses [Onnx](https://github.com/onnx/onnx) when running on CPU, and [TensorFlow](https://www.tensorflow.org/) for GPU acceleration. You can switch backends by setting the `BACKEND` environment variable (e.g. `os.environ["BACKEND"] = "tf"` or `"onnx"`). If you have a GPU, you can additionally install TensorFlow using:
```
pip install unisim[tensorflow]
```
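The backend switch described in the README change above can be sketched as follows. This is a hedged example: the `BACKEND` environment variable and its `"tf"`/`"onnx"` values come from the README text, and the variable must be set before `unisim` is imported (the import itself is shown as a comment so the sketch stands alone).

```python
import os

# Select the backend before importing unisim; the README documents
# "tf" and "onnx" as the accepted values of BACKEND.
os.environ["BACKEND"] = "onnx"

# import unisim  # the backend is read once, at import time
```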

## Text UniSim (TextSim)

225 changes: 108 additions & 117 deletions notebooks/unisim_text_demo.ipynb

Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions setup.py
@@ -41,14 +41,14 @@ def get_version(rel_path):
"tabulate",
"numpy",
"tqdm",
"onnx",
"jaxtyping",
"onnxruntime-gpu",
"onnx",
"onnxruntime",
"pandas",
"tensorflow>=2.11,<2.16",
"usearch>=2.6.0",
],
extras_require={
"tensorflow": ["tensorflow>=2.11,<2.16"],
"dev": [
"datasets",
"mypy",
2 changes: 1 addition & 1 deletion unisim/__init__.py
@@ -4,5 +4,5 @@
# license that can be found in the LICENSE file or at
# https://opensource.org/licenses/MIT.

__version__ = "0.0.2"
__version__ = "1.0.0"
from .textsim import TextSim # noqa: F401
23 changes: 15 additions & 8 deletions unisim/backend/load_backend.py
@@ -30,6 +30,18 @@
except ImportError:
TF_AVAILABLE = False

# detect accelerator
if TF_AVAILABLE or get_backend() == BackendType.tf:
devices_types = [d.device_type for d in tf.config.list_physical_devices()]

if "GPU" in devices_types:
set_accelerator(AcceleratorType.gpu)
else:
set_accelerator(AcceleratorType.cpu)

else:
set_accelerator(AcceleratorType.cpu)

# choose backend if not set by user
accel = get_accelerator()
backend = get_backend()
@@ -41,7 +53,7 @@
elif accel == AcceleratorType.cpu:
# on CPU always onnx
set_backend(BackendType.onnx)
elif TF_AVAILABLE:
elif TF_AVAILABLE and accel == AcceleratorType.gpu:
# on GPU use TF by default
set_backend(BackendType.tf)
else:
@@ -50,15 +62,10 @@

# post detection
if get_backend() == BackendType.onnx:
import onnxruntime as rt

from .onnx import * # noqa: F403, F401

# FIXME onnx accelerator type support
if rt.get_device() == "GPU":
set_accelerator(AcceleratorType.gpu)
else:
set_accelerator(AcceleratorType.cpu)
# FIXME(marinazh): onnx accelerator type support
set_accelerator(AcceleratorType.cpu)

elif get_backend() == BackendType.tf:
from .tf import * # type: ignore # noqa: F403, F401
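The selection logic in the `load_backend.py` hunks above can be sketched standalone. The enums and the `choose_backend` helper here are hypothetical stand-ins for the library's internals, but the decision table mirrors the diff: an explicit user choice wins, CPU always falls back to ONNX, and the GPU path uses TensorFlow only when it is importable.

```python
from enum import Enum


class Backend(Enum):
    onnx = "onnx"
    tf = "tf"


class Accelerator(Enum):
    cpu = "cpu"
    gpu = "gpu"


def choose_backend(accel, tf_available, user_choice=None):
    # Mirrors the diff: explicit choice first, then CPU -> ONNX,
    # then GPU + TensorFlow -> TF, otherwise fall back to ONNX.
    if user_choice is not None:
        return user_choice
    if accel is Accelerator.cpu:
        return Backend.onnx
    if tf_available and accel is Accelerator.gpu:
        return Backend.tf
    return Backend.onnx
```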
1 change: 0 additions & 1 deletion unisim/backend/tf.py
@@ -12,7 +12,6 @@
from tensorflow import Tensor
from tensorflow.keras import Model

# typing
from ..types import BatchEmbeddings


2 changes: 2 additions & 0 deletions unisim/textsim.py
@@ -4,6 +4,8 @@
# license that can be found in the LICENSE file or at
# https://opensource.org/licenses/MIT.

from __future__ import annotations

from typing import Any, Dict, List, Sequence

from pandas import DataFrame
14 changes: 8 additions & 6 deletions unisim/types.py
@@ -4,15 +4,17 @@
# license that can be found in the LICENSE file or at
# https://opensource.org/licenses/MIT.

from __future__ import annotations

from typing import Union
from typing import TYPE_CHECKING, Union

from jaxtyping import Float32
from numpy import ndarray
from tensorflow import Tensor

Array = Union[Tensor, ndarray]
if TYPE_CHECKING:
from numpy import ndarray
from tensorflow import Tensor

Array = Union[Tensor, ndarray]
else:
from numpy import ndarray as Array

# Embeddings
Embedding = Float32[Array, "embedding"]
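The `TYPE_CHECKING` pattern in the `unisim/types.py` hunk above makes TensorFlow a type-check-time-only dependency: type checkers see the full `Union`, while at runtime only an always-available type is imported. A minimal standalone sketch of the same pattern, using stdlib stand-ins (`array`, `deque`) in place of `Tensor` and `ndarray`:

```python
from typing import TYPE_CHECKING, Union

if TYPE_CHECKING:
    # Seen only by static type checkers, never executed at runtime.
    from array import array
    from collections import deque  # stand-ins for Tensor / ndarray

    Arr = Union[array, deque]
else:
    # At runtime, bind the alias to the always-available type only,
    # so the heavier dependency is never imported.
    from array import array as Arr
```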