- Extract sections of an image that match a text query using OpenAI's CLIP and YoloSmall, implemented on Hugging Face Transformers
- Added segmentation support using CLIP and DETR segmentation models
```bash
pip install clipcrop
```
Extract sections of an image that match a text query using OpenAI's CLIP and YoloSmall, implemented on Hugging Face Transformers:
```python
from clipcrop import clipcrop

cc = clipcrop.ClipCrop("/content/sample.jpg")
# Detector feature extractor/model and CLIP model/processor
DFE, DM, CLIPM, CLIPP = cc.load_models()
result = cc.extract_image(DFE, DM, CLIPM, CLIPP, "text content", num=2)
```
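Conceptually, the detector proposes candidate crops and CLIP ranks them against the text query by embedding similarity, keeping the top `num`. A minimal sketch of that ranking step, using made-up 3-dimensional embeddings rather than clipcrop's actual internals:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_crops(crop_embeddings, text_embedding, num=2):
    # Score each candidate crop against the text query, return indices
    # of the `num` best matches, highest similarity first.
    order = sorted(
        range(len(crop_embeddings)),
        key=lambda i: cosine(crop_embeddings[i], text_embedding),
        reverse=True,
    )
    return order[:num]

# Hypothetical embeddings for three candidate crops and one text query.
crops = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.0]]
query = [1.0, 0.05, 0.0]
print(rank_crops(crops, query, num=2))  # → [0, 1]
```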
Solve captcha images using CLIP and object detection models. Ensure Tesseract is installed and available on your PATH:
```python
from clipcrop import clipcrop

cc = clipcrop.ClipCrop(image_path)
DFE, DM, CLIPM, CLIPP = cc.load_models()
result = cc.auto_captcha(CLIPM, CLIPP, 4)
```
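Since the OCR step depends on the Tesseract binary, a quick preflight check avoids loading the heavier models only to fail later. A small helper (not part of clipcrop's API) using the standard library:

```python
import shutil

def tesseract_available():
    # auto_captcha's OCR step shells out to the `tesseract` binary,
    # so it must be discoverable on PATH.
    return shutil.which("tesseract") is not None

print(tesseract_available())
```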
Segment images with the DETR panoptic segmentation pipeline, then use CLIP to pick the segment most probable for your query:
```python
from clipcrop import clipcrop

clipseg = clipcrop.ClipSeg("/content/input.png", "black colored car")
segmentor, clipmodel, clipprocessor = clipseg.load_models()
result = clipseg.segment_image(segmentor, clipmodel, clipprocessor)
```
Remove the background from an image:

```python
from clipcrop import clipcrop

clipseg = clipcrop.ClipSeg("/content/input.png", "black colored car")
result = clipseg.remove_background()
```
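Background removal ultimately comes down to applying the predicted segmentation mask to the image: foreground pixels are kept, everything else is made transparent. A minimal pure-Python illustration with a toy pixel grid and a hand-made binary mask (not ClipSeg's actual output format):

```python
def mask_background(pixels, mask):
    # pixels: 2-D grid of (r, g, b) tuples; mask: 2-D grid of 0/1 flags
    # (1 = foreground). Foreground pixels become opaque RGBA; background
    # pixels become fully transparent.
    return [
        [(r, g, b, 255) if keep else (0, 0, 0, 0)
         for (r, g, b), keep in zip(row, mask_row)]
        for row, mask_row in zip(pixels, mask)
    ]

# Toy 1x2 image: left pixel is foreground, right pixel is background.
img = [[(255, 0, 0), (0, 255, 0)]]
mask = [[1, 0]]
print(mask_background(img, mask))  # → [[(255, 0, 0, 255), (0, 0, 0, 0)]]
```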
- SnapCode: extract code blocks from images mixed with normal text
- HuggingFaceInference: inference for different use cases of fine-tuned models
- Feel free to contact me at nkumarvishnu25@gmail.com