Towhee is a cutting-edge framework designed to streamline the processing of unstructured data through Large Language Model (LLM) based pipeline orchestration. It is uniquely positioned to extract valuable insights from diverse unstructured data types, including lengthy text, images, and audio and video files. Leveraging generative AI and SOTA deep learning models, Towhee can transform this raw data into specific formats such as text, images, or embeddings, which can then be efficiently loaded into an appropriate storage system like a vector database. Developers can first build an intuitive data processing pipeline prototype with the user-friendly Pythonic API, then optimize it for production environments.
🎨 Multi Modalities: Towhee is capable of handling a wide range of data types. Whether it's image data, video clips, text, audio files, or even molecular structures, Towhee can process them all.
📃 LLM Pipeline orchestration: Towhee offers flexibility to adapt to different Large Language Models (LLMs). Additionally, it allows for hosting open-source large models locally. Moreover, Towhee provides features like prompt management and knowledge retrieval, making the interaction with these LLMs more efficient and effective.
🎓 Rich Operators: Towhee provides a wide range of ready-to-use state-of-the-art models across five domains: CV, NLP, multimodal, audio, and medical. With over 140 models like BERT and CLIP and rich functionalities like video decoding, audio slicing, frame sampling, and dimensionality reduction, it assists in efficiently building data processing pipelines.
🔌 Prebuilt ETL Pipelines: Towhee offers ready-to-use ETL (Extract, Transform, Load) pipelines for common tasks such as Retrieval-Augmented Generation, text-image search, and video copy detection. This means you don't need to be an AI expert to build applications using these features.
⚡️ High-performance backend: Leveraging the power of the Triton Inference Server, Towhee can speed up model serving on both CPU and GPU using platforms like TensorRT, PyTorch, and ONNX. Moreover, you can turn your Python pipeline into a high-performance Docker container with just a few lines of code, enabling efficient deployment and scaling.
🐍 Pythonic API: Towhee includes a Pythonic method-chaining API for describing custom data processing pipelines (see the short sketch after this list). We also support schemas, which makes processing unstructured data as easy as handling tabular data.
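To give a feel for this API, here is a minimal toy sketch (not one of the built-in pipelines): each map() call names its input columns, its output columns, and the function or operator to apply.
from towhee import pipe, DataCollection
# a toy pipeline: take a name and produce a greeting (illustration only)
greet = (
    pipe.input('name')
    .map('name', 'greeting', lambda name: f'hello, {name}!')
    .output('name', 'greeting')
)
# run it on a single input and display the result as a table
DataCollection(greet('Towhee')).show()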
Towhee requires Python 3.6+. You can install Towhee via pip:
pip install towhee towhee.models
Towhee provides a set of pre-defined pipelines to help users quickly implement common functionality. All pipelines can be found on the Towhee Hub. Here is an example of using the sentence_embedding pipeline:
from towhee import AutoPipes, AutoConfig
# get the configuration of the built-in sentence_embedding pipeline
config = AutoConfig.load_config('sentence_embedding')
config.model = 'paraphrase-albert-small-v2'
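# device 0 selects the first GPU (assumes a CUDA-capable device is available)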
config.device = 0
sentence_embedding = AutoPipes.pipeline('sentence_embedding', config=config)
# generate embedding for one sentence
embedding = sentence_embedding('how are you?').get()
# generate embeddings for multiple sentences in a batch
embeddings = sentence_embedding.batch(['how are you?', 'how old are you?'])
embeddings = [e.get() for e in embeddings]
If you can't find the pipeline you want on the Towhee Hub, you can also implement custom pipelines through the Towhee Python API. In the following example, we will create a cross-modal retrieval pipeline based on CLIP.
from towhee import ops, pipe, DataCollection
# create image embeddings and build index
p = (
    pipe.input('file_name')
    .map('file_name', 'img', ops.image_decode.cv2())
    .map('img', 'vec', ops.image_text_embedding.clip(model_name='clip_vit_base_patch32', modality='image'))
    .map('vec', 'vec', ops.towhee.np_normalize())
    .map(('vec', 'file_name'), (), ops.ann_insert.faiss_index('./faiss', 512))
    .output()
)
for f_name in ['https://raw.githubusercontent.com/towhee-io/towhee/main/assets/dog1.png',
               'https://raw.githubusercontent.com/towhee-io/towhee/main/assets/dog2.png',
               'https://raw.githubusercontent.com/towhee-io/towhee/main/assets/dog3.png']:
    p(f_name)
# Flush faiss data into disk.
p.flush()
# search image by text
decode = ops.image_decode.cv2('rgb')
p = (
    pipe.input('text')
    .map('text', 'vec', ops.image_text_embedding.clip(model_name='clip_vit_base_patch32', modality='text'))
    .map('vec', 'vec', ops.towhee.np_normalize())
    # faiss op result format: [[id, score, [file_name]], ...]
    .map('vec', 'row', ops.ann_search.faiss_index('./faiss', 3))
    .map('row', 'images', lambda x: [decode(item[2][0]) for item in x])
    .output('text', 'images')
)
DataCollection(p('puppy Corgi')).show()
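The same index can also be queried by image rather than by text. The following sketch is not part of the original example; it reuses the imports, the CLIP image branch, and the ann_search operator shown above, and queries with one of the sample images indexed earlier.
# search image by image (sketch reusing the operators and index from above)
q = (
    pipe.input('file_name')
    .map('file_name', 'img', ops.image_decode.cv2())
    .map('img', 'vec', ops.image_text_embedding.clip(model_name='clip_vit_base_patch32', modality='image'))
    .map('vec', 'vec', ops.towhee.np_normalize())
    .map('vec', 'row', ops.ann_search.faiss_index('./faiss', 3))
    .map('row', 'result_files', lambda x: [item[2][0] for item in x])  # keep only the matched file names
    .output('file_name', 'result_files')
)
DataCollection(q('https://raw.githubusercontent.com/towhee-io/towhee/main/assets/dog1.png')).show()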
Towhee is composed of four main building blocks: Operators, Pipelines, the DataCollection API, and the Engine.
- Operators: An operator is a single building block of a neural data processing pipeline. Different implementations of operators are categorized by tasks, with each task having a standard interface. An operator can be a deep learning model, a data processing method, or a Python function (see the sketch after this list).
- Pipelines: A pipeline is composed of several operators interconnected in the form of a DAG (directed acyclic graph). This DAG can direct complex functionalities, such as embedding feature extraction, data tagging, and cross-modal data analysis.
- DataCollection API: A Pythonic, method-chaining style API for building custom pipelines, providing multiple data conversion interfaces: map, filter, flat_map, concat, window, time_window, and window_all. Through these interfaces, complex data processing pipelines can be built quickly to process unstructured data such as video, audio, text, and images.
- Engine: The engine sits at Towhee's core. Given a pipeline, the engine drives dataflow among individual operators, schedules tasks, and monitors compute resource usage (CPU/GPU/etc.). We provide a basic engine within Towhee to run pipelines on a single-instance machine and a Triton-based engine for Docker containers.
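To make the Operator concept concrete, the sketch below instantiates two operators from ops (the same ones used in the CLIP example above) and calls them standalone, outside any pipeline. It assumes, as with decode earlier, that an operator instance can be invoked directly; the sample image URL is reused from the example above.
from towhee import ops
decode = ops.image_decode.cv2('rgb')                    # operator for the image-decode task
embed = ops.image_text_embedding.clip(                  # deep-learning-model operator (CLIP, image branch)
    model_name='clip_vit_base_patch32', modality='image')
img = decode('https://raw.githubusercontent.com/towhee-io/towhee/main/assets/dog1.png')
vec = embed(img)                                        # a single CLIP image embedding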
- Towhee Hub: https://towhee.io/
- docs: https://towhee.readthedocs.io/en/latest/
- examples: https://github.com/towhee-io/examples
Writing code is not the only way to contribute! Submitting issues, answering questions, and improving documentation are just some of the many ways you can help our growing community. Check out our contributing page for more information.
Special thanks go to these folks for contributing to Towhee, whether on GitHub, our Towhee Hub, or elsewhere:
Looking for a database to store and index your embedding vectors? Check out Milvus.