
MetaVision

notebooks | inference | autodistill | maestro



👋 hello

We write your reusable computer vision tools. Whether you need to load your dataset from your hard drive, draw detections on an image or video, or count how many detections are in a zone, you can count on us! 🤝

💻 install

Pip install the superverse package in a Python>=3.8 environment.

pip install superverse

Read more about conda, mamba, and installing from source in our guide.

🔥 quickstart

models

Superverse was designed to be model agnostic. Just plug in any classification, detection, or segmentation model. For your convenience, we have created connectors for the most popular libraries, such as Ultralytics, Transformers, and MMDetection.

import cv2
import superverse as sv
from ultralytics import YOLO

image = cv2.imread(...)
model = YOLO("yolov8s.pt")
result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)

len(detections)
# 5
👉 more model connectors
  • inference

    import cv2
    import superverse as sv
    from inference import get_model
    
    image = cv2.imread(...)
    model = get_model(model_id="yolov8s-640", api_key=<KHULNASOFT API KEY>)
    result = model.infer(image)[0]
    detections = sv.Detections.from_inference(result)
    
    len(detections)
    # 5
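  • transformers

    A hedged sketch of the Transformers connector: it follows the standard Hugging Face DETR workflow and assumes sv.Detections.from_transformers accepts the post-processed results dictionary.

    import torch
    import superverse as sv
    from PIL import Image
    from transformers import DetrForObjectDetection, DetrImageProcessor

    processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
    model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

    image = Image.open(...)
    inputs = processor(images=image, return_tensors="pt")

    # run the detector without tracking gradients
    with torch.no_grad():
        outputs = model(**inputs)

    # convert raw outputs to boxes in (height, width) image coordinates
    width, height = image.size
    results = processor.post_process_object_detection(
        outputs, target_sizes=torch.tensor([[height, width]])
    )[0]

    detections = sv.Detections.from_transformers(results)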

annotators

import cv2
import superverse as sv

image = cv2.imread(...)
detections = sv.Detections(...)

box_annotator = sv.BoxAnnotator()
annotated_frame = box_annotator.annotate(
  scene=image.copy(),
  detections=detections)
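
Annotators can be stacked on the same frame. As a hedged sketch, assuming sv.LabelAnnotator exists alongside sv.BoxAnnotator and shares its annotate signature, class labels can be drawn on top of the boxes produced above:

label_annotator = sv.LabelAnnotator()
annotated_frame = label_annotator.annotate(
  scene=annotated_frame,
  detections=detections)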

datasets

import superverse as sv
from khulnasoft import Khulnasoft

project = Khulnasoft().workspace(<WORKSPACE_ID>).project(<PROJECT_ID>)
dataset = project.version(<PROJECT_VERSION>).download("coco")

ds = sv.DetectionDataset.from_coco(
    images_directory_path=f"{dataset.location}/train",
    annotations_path=f"{dataset.location}/train/_annotations.coco.json",
)

path, image, annotation = ds[0]
# loads image on demand

for path, image, annotation in ds:
    ...  # loads image on demand
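
Because images load on demand, a dataset iterates cleanly into the model and annotator shown earlier. A minimal sketch reusing the YOLO connector and BoxAnnotator from above; the "annotated/" output directory is hypothetical:

import cv2
from pathlib import Path

import superverse as sv
from ultralytics import YOLO

model = YOLO("yolov8s.pt")
box_annotator = sv.BoxAnnotator()

for path, image, annotation in ds:
    result = model(image)[0]
    detections = sv.Detections.from_ultralytics(result)
    annotated = box_annotator.annotate(scene=image.copy(), detections=detections)
    # save each annotated frame into the hypothetical "annotated/" directory
    cv2.imwrite(f"annotated/{Path(path).name}", annotated)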
👉 more dataset utils
  • load

    dataset = sv.DetectionDataset.from_yolo(
        images_directory_path=...,
        annotations_directory_path=...,
        data_yaml_path=...
    )
    
    dataset = sv.DetectionDataset.from_pascal_voc(
        images_directory_path=...,
        annotations_directory_path=...
    )
    
    dataset = sv.DetectionDataset.from_coco(
        images_directory_path=...,
        annotations_path=...
    )
  • split

    train_dataset, test_dataset = dataset.split(split_ratio=0.7)
    test_dataset, valid_dataset = test_dataset.split(split_ratio=0.5)
    
    len(train_dataset), len(test_dataset), len(valid_dataset)
    # (700, 150, 150)
  • merge

    ds_1 = sv.DetectionDataset(...)
    len(ds_1)
    # 100
    ds_1.classes
    # ['dog', 'person']
    
    ds_2 = sv.DetectionDataset(...)
    len(ds_2)
    # 200
    ds_2.classes
    # ['cat']
    
    ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
    len(ds_merged)
    # 300
    ds_merged.classes
    # ['cat', 'dog', 'person']
  • save

    dataset.as_yolo(
        images_directory_path=...,
        annotations_directory_path=...,
        data_yaml_path=...
    )
    
    dataset.as_pascal_voc(
        images_directory_path=...,
        annotations_directory_path=...
    )
    
    dataset.as_coco(
        images_directory_path=...,
        annotations_path=...
    )
  • convert

    sv.DetectionDataset.from_yolo(
        images_directory_path=...,
        annotations_directory_path=...,
        data_yaml_path=...
    ).as_pascal_voc(
        images_directory_path=...,
        annotations_directory_path=...
    )

📚 documentation

Visit our documentation page to learn how superverse can help you build computer vision applications faster and more reliably.

πŸ† contribution

We love your input! Please see our contributing guide to get started. Thank you 🙏 to all our contributors!