
Jina logo: Jina is a cloud-native neural search framework

Cloud-Native Neural Search Framework for Any Kind of Data

Supports Python 3.7 / 3.8 / 3.9 · available on PyPI and as a Docker image · coverage tracked via codecov

Jina is a neural search framework that empowers anyone to build state-of-the-art, scalable deep-learning search applications in minutes.

โฑ๏ธ Save time - The design pattern of neural search systems. Native support for PyTorch/Keras/ONNX/Paddle. Build solutions in just minutes.

๐ŸŒŒ All data types - Process, index, query, and understand videos, images, long/short text, audio, source code, PDFs, etc.

๐ŸŒฉ๏ธ Local & cloud friendly - Distributed architecture, scalable & cloud-native from day one. Same developer experience on both local and cloud.

๐Ÿฑ Own your stack - Keep end-to-end stack ownership of your solution. Avoid integration pitfalls you get with fragmented, multi-vendor, generic legacy tools.

Install

pip install -U jina

More install options, including Conda, Docker, and Windows, can be found here.
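
To quickly verify the installation (a minimal check; it assumes the package exposes __version__, as recent releases do):

import jina
print(jina.__version__)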

Get Started

Get started with Jina and build a production-ready, ResNet-powered neural search solution in less than 20 minutes

We promise you can build a scalable, ResNet-powered image search service from scratch in 20 minutes or less. If not, you can forget about Jina.

Basic Concepts

Document, Executor, and Flow are three fundamental concepts in Jina.

  • Document is the basic data type in Jina;
  • Executor is how Jina processes Documents;
  • Flow is how Jina streamlines and distributes Executors.

Leveraging these three components, let's build an app that finds similar images using ResNet50.
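
Before the ResNet example, here is a minimal sketch that ties the three concepts together; the toy MyExecutor class and the sample text are made up purely for illustration:

from jina import Document, DocumentArray, Executor, Flow, requests

class MyExecutor(Executor):  # an Executor that uppercases the text of each Document
    @requests
    def uppercase(self, docs: DocumentArray, **kwargs):
        for d in docs:
            d.text = d.text.upper()

f = Flow().add(uses=MyExecutor)  # a Flow that routes requests through MyExecutor
with f:
    f.post('/', DocumentArray([Document(text='hello jina')]),
           on_done=lambda resp: print(resp.docs[0].text))  # prints 'HELLO JINA'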

ResNet50 Image Search in 20 Lines

💡 Preliminaries: download the dataset, install PyTorch & Torchvision

from jina import DocumentArray, Document

def preproc(d: Document):
    return (d.load_uri_to_image_blob()  # load
             .set_image_blob_normalization()  # normalize color 
             .set_image_blob_channel_axis(-1, 0))  # switch color axis
docs = DocumentArray.from_files('img/*.jpg').apply(preproc)

import torchvision
model = torchvision.models.resnet50(pretrained=True)  # load a pretrained ResNet50
docs.embed(model, device='cuda')  # embed on GPU to speed things up

q = (Document(uri='img/00021.jpg')  # build query image & preprocess
     .load_uri_to_image_blob()
     .set_image_blob_normalization()
     .set_image_blob_channel_axis(-1, 0))
q.embed(model)  # embed
q.match(docs)  # find top-20 nearest neighbours, done!

Done! Now print q.matches and you'll see the URIs of the most similar images.
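
For instance, a short loop like this (a sketch; the scores field follows the Python client example further down) prints the top matches with their cosine distances:

for m in q.matches[:3]:  # top-3 nearest neighbours
    print(m.uri, m.scores['cosine'].value)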

Print q.matches to get visually similar images in Jina using ResNet50

Add three lines of code to visualize them:

for m in q.matches:
    m.set_image_blob_channel_axis(0, -1).set_image_blob_inv_normalization()
q.matches.plot_image_sprites()

Visualize visually similar images in Jina using ResNet50

Sweet! FYI, you can use Keras, ONNX, or PaddlePaddle for the embedding model. Jina supports them well.

As-a-Service in 10 Extra Lines

With a trivial refactoring and ten extra lines of code, you can turn the local script into a ready-to-serve service:

  1. Import what we need.

    from jina import Document, DocumentArray, Executor, Flow, requests
  2. Copy-paste the preprocessing step and wrap it via Executor:

    class PreprocImg(Executor):
        @requests
        def foo(self, docs: DocumentArray, **kwargs):
            for d in docs:
                (d.load_uri_to_image_blob()  # load
                 .set_image_blob_normalization()  # normalize color
                 .set_image_blob_channel_axis(-1, 0))  # switch color axis
  3. Copy-paste the embedding step and wrap it via Executor:

    class EmbedImg(Executor):
        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            import torchvision
            self.model = torchvision.models.resnet50(pretrained=True)        
    
        @requests
        def foo(self, docs: DocumentArray, **kwargs):
            docs.embed(self.model)
  4. Wrap the matching step into an Executor:

    class MatchImg(Executor):
        _da = DocumentArray()
    
        @requests(on='/index')
        def index(self, docs: DocumentArray, **kwargs):
            self._da.extend(docs)
    
        @requests(on='/search')
        def foo(self, docs: DocumentArray, **kwargs):
            docs.match(self._da)
            for d in docs.traverse_flat('r,m'):  # only required for visualization
                d.convert_uri_to_datauri()  # convert to data URI
                d.pop('embedding', 'blob')  # remove unnecessary fields to save bandwidth
  5. Connect all Executors in a Flow, scale embedding to 3:

    f = Flow(port_expose=12345, protocol='http').add(uses=PreprocImg).add(uses=EmbedImg, replicas=3).add(uses=MatchImg)

    Plot it via f.plot('flow.svg') to get a diagram of the Flow topology.

  6. Index image data and serve REST query publicly:

    with f:
        f.post('/index', DocumentArray.from_files('img/*.jpg'), show_progress=True, request_size=8)
        f.block()

Done! Now query it via curl and you get the most similar images:

Use curl to query image search service built by Jina & ResNet50
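
A request might look like the sketch below; the exact JSON schema of the HTTP gateway can vary across Jina versions, so treat the payload as an assumption and check the generated Swagger UI mentioned next:

curl -X POST http://0.0.0.0:12345/search \
     -H 'Content-Type: application/json' \
     -d '{"data": [{"uri": "img/00021.jpg"}]}'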

Or go to http://0.0.0.0:12345/docs and test requests via a Swagger UI:

Visualize visually similar images in Jina using ResNet50

Or use a Python client to access the service:

from jina import Client, Document
from jina.types.request import Response

def print_matches(resp: Response):  # the callback function invoked when the task is done
    for idx, d in enumerate(resp.docs[0].matches[:3]):  # print top-3 matches
        print(f'[{idx}]{d.scores["cosine"].value:.2f}: "{d.uri}"')

c = Client(protocol='http', port=12345)  # connect to localhost:12345
c.post('/search', Document(uri='img/00021.jpg'), on_done=print_matches)

At this point you have probably spent about 15 minutes, but here we are: an image search service with rich features:

✅ Solution as microservices ✅ Scale in/out any component ✅ Query via HTTP/WebSocket/gRPC/Client
✅ Distribute/Dockerize components ✅ Async/non-blocking I/O ✅ Extendable REST interface

Deploy to Kubernetes in 7 Minutes

Have another seven minutes? We'll show you how to bring your service to the next level by deploying it to Kubernetes.

  1. Create a Kubernetes cluster and get credentials (example in GCP, more K8s providers here):
    gcloud container clusters create test --machine-type e2-highmem-2  --num-nodes 1 --zone europe-west3-a
    gcloud container clusters get-credentials test --zone europe-west3-a --project jina-showcase
  2. Move each Executor class to a separate folder, with one Python file in each (see the sketch after this list):
    • PreprocImg -> 📁 preproc_img/exec.py
    • EmbedImg -> 📁 embed_img/exec.py
    • MatchImg -> 📁 match_img/exec.py
  3. Push all Executors to Jina Hub:
    jina hub push preproc_img
    jina hub push embed_img
    jina hub push match_img
    You will get three Hub Executors that can be used as Docker containers.
  4. Adjust the Flow a bit and open it:
    f = (
        Flow(name='readme-flow', port_expose=12345, infrastructure='k8s')
        .add(uses='jinahub+docker://PreprocImg')
        .add(uses='jinahub+docker://EmbedImg', replicas=3)
        .add(uses='jinahub+docker://MatchImg')
    )
    with f:
        f.block()
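
For reference, here is a sketch of what preproc_img/exec.py could contain; it simply reuses the PreprocImg class from above (Hub folders typically also need a config.yml naming the class, so check the Hub docs for the exact layout):

# preproc_img/exec.py
from jina import DocumentArray, Executor, requests

class PreprocImg(Executor):
    @requests
    def foo(self, docs: DocumentArray, **kwargs):
        for d in docs:
            (d.load_uri_to_image_blob()  # load
             .set_image_blob_normalization()  # normalize color
             .set_image_blob_channel_axis(-1, 0))  # switch color axis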

Intrigued? Find more about Jina from our docs.

Run Quick Demo
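
For example, the hello-world demos bundled with Jina (assuming they are available in your installed version) can be launched straight from the command line:

jina hello fashion     # image search on Fashion-MNIST
jina hello chatbot     # question-answering chatbot
jina hello multimodal  # multimodal (text + image) search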

Support

Join Us

Jina is backed by Jina AI and licensed under Apache-2.0. We are actively hiring AI engineers and solution engineers to build the next neural search ecosystem in open source.

Contributing

We welcome all kinds of contributions from the open-source community, individuals and partners. We owe our success to your active involvement.

All Contributors
