Get predictions as base64 #2291
Conversation
@kinoute thanks for the PR! I'm not an API expert, could you explain the use case a bit? Is this for returning image overlays over an API, like .show() but sending the contents back to the sender? Yes, the Detection() methods are all batch capable. Are API calls always for single-image use cases?
Exactly. Example: you have a web app that accepts a URL, and a job that takes a screenshot of that URL. The screenshot is then encoded as base64 and sent as JSON to the API that hosts your YOLOv5 model to detect some things in it. You can then return the screenshot from the API, this time with bounding boxes, as base64, to show the results to the client. With our previous PR we added the possibility to have class names next to the bounding boxes, but if you're building an API around your model, you would still have to save the annotated image and re-open it before encoding it. I just thought that adding a small function to get the result directly as base64 would be handy and would avoid this save/re-open step. Of course the user could do this manually, but I don't see any harm in adding it here.
Not necessarily. I'm pretty sure there are cases where people would want to send multiple images at the same time and get the results for all of them, but overall you treat and return one image at a time (at least in your function/model).
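To make that round trip concrete, here is a minimal sketch of the base64-in / base64-out flow described above; the helper name detect_as_base64 and the single-image handling are illustrative assumptions, not code from this PR:

import base64
from io import BytesIO

import torch
from PIL import Image

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

def detect_as_base64(b64_screenshot: str) -> str:
    # Hypothetical helper: decode the client's base64 screenshot, run inference,
    # and return the annotated image as a base64-encoded JPEG string.
    img = Image.open(BytesIO(base64.b64decode(b64_screenshot))).convert('RGB')
    results = model(img, size=640)   # inference + NMS
    results.render()                 # draws boxes and labels onto results.imgs
    buffered = BytesIO()
    Image.fromarray(results.imgs[0]).save(buffered, format='JPEG')
    return base64.b64encode(buffered.getvalue()).decode('utf-8')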
@kinoute ok I understand a bit better now! Is base64 what our notebook stores images as? Line 622 in a82dce7
We don't want to add any overhead to common use cases, so if this helps then it makes sense to add. There are two things I noticed.
Here's an example of render(); no files are saved. We should probably update the tutorial, as this is undocumented at the moment:

import torch
# Model
model = ...
# Images
imgs = ...
# Inference
results = model(imgs)
# Do stuff
results.imgs # array of original images (as np array) passed to model for inference
results.render() # updates results.imgs with boxes and labels, returns nothing
results.imgs # same array of images as before but now includes all predictions

EDIT: results.render() returns the updated results.imgs; it does not return nothing
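A small follow-up on the EDIT above: since render() also returns the updated images, the return value can be used directly (variable names here are just illustrative):

rendered = results.render()  # list of np arrays with boxes and labels drawn
annotated = rendered[0]      # first annotated image, same array as results.imgs[0]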
Ok, then my PR doesn't make sense anymore. I didn't know we could do that, thanks. You should update the documentation (#36) to let users know they can get the images with predictions like this and convert them to base64:

import base64
import cv2
import torch
from PIL import Image
from io import BytesIO
# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True) # for file/URI/PIL/cv2/np inputs and NMS
# Images
for f in ['zidane.jpg', 'bus.jpg']:  # download 2 images
    print(f'Downloading {f}...')
    torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/' + f, f)
img1 = Image.open('zidane.jpg') # PIL image
img2 = cv2.imread('bus.jpg')[:, :, ::-1] # OpenCV image (BGR to RGB)
imgs = [img1, img2] # batched list of images
# Inference
results = model(imgs, size=640) # includes NMS
# Results
results.print() # print results to screen
results.show() # display results
results.save() # save as results1.jpg, results2.jpg... etc.
# Data
print('\n', results.xyxy[0]) # print img1 predictions
# x1 (pixels) y1 (pixels) x2 (pixels) y2 (pixels) confidence class
# tensor([[7.47613e+02, 4.01168e+01, 1.14978e+03, 7.12016e+02, 8.71210e-01, 0.00000e+00],
# [1.17464e+02, 1.96875e+02, 1.00145e+03, 7.11802e+02, 8.08795e-01, 0.00000e+00],
# [4.23969e+02, 4.30401e+02, 5.16833e+02, 7.20000e+02, 7.77376e-01, 2.70000e+01],
# [9.81310e+02, 3.10712e+02, 1.03111e+03, 4.19273e+02, 2.86850e-01, 2.70000e+01]])
# Transform images with predictions from numpy arrays to base64 encoded images
results.imgs # array of original images (as np array) passed to model for inference
results.render() # updates results.imgs with boxes and labels, returns nothing
for img in results.imgs:
    buffered = BytesIO()
    img_base64 = Image.fromarray(img)
    img_base64.save(buffered, format="JPEG")
    print(base64.b64encode(buffered.getvalue()).decode('utf-8'))  # base64 encoded image with results
@kinoute thanks, I will update the tutorial with your example!
Since we integrated the bounding boxes and the class names on inference images in the PyTorch Hub version with #2243, it would be nice to add another function to export this image as base64 as well.
We can see that some people are starting to integrate YOLOv5 as an API service, and they face some issues generating the bounding boxes and the class names:
https://github.com/WelkinU/yolov5-fastapi-demo/blob/910f09d731d13c71a68ae2e5c09b7699b47f097d/server.py#L55-L91
Now that we fixed it, adding this little tobase64 function could help people return a predicted image to their client if needed. There is one question, though, with the display function: it seems it can be used for batches, and I don't know if this tobase64 function will handle that properly.
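For reference, here is a rough sketch of what such a tobase64 method could look like on the Detections class; this is a hypothetical illustration based on the discussion above, not the code actually proposed in this PR:

def tobase64(self):
    # Hypothetical sketch: render predictions, then encode every image in the
    # batch as a base64 JPEG string, so batched inference is handled too.
    import base64
    from io import BytesIO
    from PIL import Image
    self.render()  # draw boxes and labels onto self.imgs
    encoded = []
    for img in self.imgs:
        buffered = BytesIO()
        Image.fromarray(img).save(buffered, format='JPEG')
        encoded.append(base64.b64encode(buffered.getvalue()).decode('utf-8'))
    return encoded  # one base64 string per image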
🛠️ PR Summary
Made with ❤️ by Ultralytics Actions
🌟 Summary
Introduced base64 image encoding and dataset improvement features
📊 Key Changes
- base64 image encoding support in models/common.py for exporting detection results.
- autosplit parameters to filter datasets based on annotations and add background images in utils/datasets.py.
🎯 Purpose & Impact
- With the annotated_only parameter, datasets can be built using only images with corresponding annotation files, which can help with training accuracy.
- The addition of bg_imgs_path and bg_imgs_ratio aims to reduce false positives by providing more diverse non-object examples.
The changes make YOLOv5 more versatile for different user scenarios, potentially leading to more robust models and easier deployment in web environments. 🚀
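For illustration, a rough usage sketch of the expanded autosplit described above; the parameter names are taken from this summary, and the exact signature in utils/datasets.py may differ:

from utils.datasets import autosplit

autosplit(path='../coco128/images',       # dataset image folder (example path)
          weights=(0.9, 0.1, 0.0),        # train / val / test split
          annotated_only=True,            # keep only images with a matching label file
          bg_imgs_path='../backgrounds',  # folder of background-only images (assumed value)
          bg_imgs_ratio=0.1)              # fraction of background images to mix in (assumed value)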