From 341f38152311153800283866cfe8be15b6b66d20 Mon Sep 17 00:00:00 2001
From: Marco Ambrosio <62962632+marco-ambrosio@users.noreply.github.com>
Date: Thu, 27 Apr 2023 17:33:27 +0200
Subject: [PATCH 01/10] fixed pyqt conflict with opencv

---
 environment.yaml | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/environment.yaml b/environment.yaml
index 81ce0cc..d5280a0 100644
--- a/environment.yaml
+++ b/environment.yaml
@@ -17,6 +17,9 @@ dependencies:
   - wheel=0.38.4
   - xz=5.2.10
   - zlib=1.2.13
+  - qt
+  - pyqt
+  - qtpy
   - pip:
     - black==23.3.0
     - click==8.1.3
@@ -37,7 +40,7 @@ dependencies:
     - networkx==3.1
     - numpy==1.24.2
     - onnxruntime==1.14.1
-    - opencv-python==4.7.0.72
+    - opencv-python-headless==4.7.0.72
     - packaging==23.0
     - pathspec==0.11.1
     - pillow==9.5.0

From 57defc20dc5bf599f04e88e263800f1c22761416 Mon Sep 17 00:00:00 2001
From: Marco Ambrosio <62962632+marco-ambrosio@users.noreply.github.com>
Date: Thu, 27 Apr 2023 17:43:40 +0200
Subject: [PATCH 02/10] Update README.md

---
 README.md | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 9b2b111..c186e78 100644
--- a/README.md
+++ b/README.md
@@ -6,20 +6,21 @@ Under active development, apologies for rough edges and bugs. Use at your own risk.
 
 ## Installation
 
+### Pre-processing
 1. Install [Segment Anything](https://github.com/facebookresearch/segment-anything) on any machine with a GPU. (Need not be the labelling machine.)
-2. Create a conda environment using `conda env create -f environment.yaml` on the labelling machine (Need not have GPU).
-3. (Optional) Install [coco-viewer](https://github.com/trsvchn/coco-viewer) to scroll through your annotations quickly.
+
+### Labelling
+1. Create a conda environment using `conda env create -f environment.yaml` on the labelling machine. (Need not have a GPU.)
+1. (Optional) Install [coco-viewer](https://github.com/trsvchn/coco-viewer) to scroll through your annotations quickly.
 
 ## Usage
 
 1. 
Setup your dataset in the following format `<dataset_path>/images/*` and create empty folder `<dataset_path>/embeddings`.
    - Annotations will be saved in `<dataset_path>/annotations.json` by default.
 2. Copy the `helpers` scripts to the base folder of your `segment-anything` folder.
-    - Call `extract_embeddings.py` to extract embeddings for your images.
-    - Call `generate_onnx.py` generate `*.onnx` files in models.
-4. Copy the models in `models` folder.
-5. Symlink your dataset in the SALT's root folder as `<dataset_path>`.
-6. Call `segment_anything_annotator.py` with argument `<dataset_path>` and categories `cat1,cat2,cat3..`.
+    - Call `extract_embeddings.py` to extract embeddings for your images. For example `python3 extract_embeddings.py --dataset-path <dataset_path>`
+    - Call `generate_onnx.py` to generate `*.onnx` files in models. For example `python3 generate_onnx.py --dataset-path <dataset_path> --onnx-models-path <dataset_path>/models`
+6. Call `segment_anything_annotator.py` with argument `<dataset_path>` and categories `cat1,cat2,cat3..`. For example `python3 segment_anything_annotator.py --dataset-path <dataset_path> --categories cat1,cat2,cat3`
    - There are a few keybindings that make the annotation process fast.
    - Click on the object using left clicks and right click (to indicate outside object boundary).
    - `n` adds predicted mask into your annotations. (Add button)

From 8e732d08f470b4f2efeeea9098300fb955755826 Mon Sep 17 00:00:00 2001
From: Marco Ambrosio <62962632+marco-ambrosio@users.noreply.github.com>
Date: Thu, 27 Apr 2023 17:44:41 +0200
Subject: [PATCH 03/10] Update README.md

---
 README.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/README.md b/README.md
index c186e78..e3d1e48 100644
--- a/README.md
+++ b/README.md
@@ -15,11 +15,14 @@ Under active development, apologies for rough edges and bugs.
 
 ## Usage
 
+### On the pre-processing machine
 1. Setup your dataset in the following format `<dataset_path>/images/*` and create empty folder `<dataset_path>/embeddings`.
    - Annotations will be saved in `<dataset_path>/annotations.json` by default.
 2. 
Copy the `helpers` scripts to the base folder of your `segment-anything` folder.
    - Call `extract_embeddings.py` to extract embeddings for your images. For example `python3 extract_embeddings.py --dataset-path <dataset_path>`
    - Call `generate_onnx.py` to generate `*.onnx` files in models. For example `python3 generate_onnx.py --dataset-path <dataset_path> --onnx-models-path <dataset_path>/models`
+
+### On the labelling machine
 6. Call `segment_anything_annotator.py` with argument `<dataset_path>` and categories `cat1,cat2,cat3..`. For example `python3 segment_anything_annotator.py --dataset-path <dataset_path> --categories cat1,cat2,cat3`
    - There are a few keybindings that make the annotation process fast.
    - Click on the object using left clicks and right click (to indicate outside object boundary).

From a1421f8c359ab7bc0299c95c2f1b79faf742bdae Mon Sep 17 00:00:00 2001
From: Marco Ambrosio <62962632+marco-ambrosio@users.noreply.github.com>
Date: Thu, 27 Apr 2023 17:45:10 +0200
Subject: [PATCH 04/10] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index e3d1e48..16dfbeb 100644
--- a/README.md
+++ b/README.md
@@ -23,7 +23,7 @@ Under active development, apologies for rough edges and bugs.
     - Call `generate_onnx.py` to generate `*.onnx` files in models. For example `python3 generate_onnx.py --dataset-path <dataset_path> --onnx-models-path <dataset_path>/models`
 
 ### On the labelling machine
-6. Call `segment_anything_annotator.py` with argument `<dataset_path>` and categories `cat1,cat2,cat3..`. For example `python3 segment_anything_annotator.py --dataset-path <dataset_path> --categories cat1,cat2,cat3`
+1. Call `segment_anything_annotator.py` with argument `<dataset_path>` and categories `cat1,cat2,cat3..`. For example `python3 segment_anything_annotator.py --dataset-path <dataset_path> --categories cat1,cat2,cat3`
    - There are a few keybindings that make the annotation process fast.
    - Click on the object using left clicks and right click (to indicate outside object boundary).
    - `n` adds predicted mask into your annotations. 
(Add button)
@@ -31,7 +31,7 @@ Under active development, apologies for rough edges and bugs.
    - `a` and `d` to cycle through images in your set. (Next and Prev)
    - `l` and `k` to increase and decrease the transparency of the other annotations.
    - `Ctrl + S` to save progress to the COCO-style annotations file.
-7. [coco-viewer](https://github.com/trsvchn/coco-viewer) to view your annotations.
+1. [coco-viewer](https://github.com/trsvchn/coco-viewer) to view your annotations.
    - `python cocoviewer.py -i <dataset_path> -a <dataset_path>/annotations.json`
 
 ## Demo

From 655b97e94fa63687ba3606542c781eb99e58e49a Mon Sep 17 00:00:00 2001
From: Marco Ambrosio <62962632+marco-ambrosio@users.noreply.github.com>
Date: Thu, 11 May 2023 16:11:43 +0200
Subject: [PATCH 05/10] Update README.md

---
 README.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/README.md b/README.md
index 16dfbeb..8e10517 100644
--- a/README.md
+++ b/README.md
@@ -15,6 +15,10 @@ Under active development, apologies for rough edges and bugs.
 
 ## Usage
 
+### (Optional) Create a container with the configured environment
+
+` docker run -it -v /home/marcoambrosio/dataset/:/root/dataset --privileged --env=NVIDIA_VISIBLE_DEVICES=all --env=NVIDIA_DRIVER_CAPABILITIES=all --gpus 1 --name salt andreaostuni/salt:salt-cuda-11.8-base /bin/bash`
+
 ### On the pre-processing machine
 1. Setup your dataset in the following format `<dataset_path>/images/*` and create empty folder `<dataset_path>/embeddings`.
    - Annotations will be saved in `<dataset_path>/annotations.json` by default.

From 2985864d4b24b687e0c5f5439852506edcdb37c4 Mon Sep 17 00:00:00 2001
From: Marco Ambrosio <62962632+marco-ambrosio@users.noreply.github.com>
Date: Thu, 11 May 2023 16:15:37 +0200
Subject: [PATCH 06/10] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 8e10517..8a44346 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,7 @@ Under active development, apologies for rough edges and bugs. 
Use at your own risk.
 
 ### (Optional) Create a container with the configured environment
 
-` docker run -it -v /home/marcoambrosio/dataset/:/root/dataset --privileged --env=NVIDIA_VISIBLE_DEVICES=all --env=NVIDIA_DRIVER_CAPABILITIES=all --gpus 1 --name salt andreaostuni/salt:salt-cuda-11.8-base /bin/bash`
+ docker run -it -v /home/marcoambrosio/dataset/:/root/dataset --privileged --env=NVIDIA_VISIBLE_DEVICES=all --env=NVIDIA_DRIVER_CAPABILITIES=all --gpus 1 --name salt andreaostuni/salt:salt-cuda-11.8-base /bin/bash
 
 ### On the pre-processing machine
 1. Setup your dataset in the following format `<dataset_path>/images/*` and create empty folder `<dataset_path>/embeddings`.

From 34f2e30996a56ff30f5fc94a5a7056d3f8c2f1d9 Mon Sep 17 00:00:00 2001
From: ale-navo
Date: Fri, 6 Oct 2023 17:13:56 +0200
Subject: [PATCH 07/10] fixed mask creation problem on angles

---
 salt/dataset_explorer.py | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/salt/dataset_explorer.py b/salt/dataset_explorer.py
index eb58463..73d963f 100644
--- a/salt/dataset_explorer.py
+++ b/salt/dataset_explorer.py
@@ -4,7 +4,7 @@
 import shutil
 import itertools
 import numpy as np
-from simplification.cutil import simplify_coords_vwp
+from simplification.cutil import simplify_coords_vwp, simplify_coords
 import os, cv2, copy
 from distinctipy import distinctipy
 
@@ -87,8 +87,9 @@ def parse_mask_to_coco(image_id, anno_id, image_mask, category_id, poly=False):
     )
     if poly == True:
         for contour in contours:
-            sc = simplify_coords_vwp(contour[:,0,:], 2).ravel().tolist()
-            annotation["segmentation"].append(sc)
+            sc = contour.ravel().tolist()
+            if len(sc) > 4:
+                annotation["segmentation"].append(sc)
 
     return annotation

From c0b3eb0398f595b19ef435658cf0224edc6cd6be Mon Sep 17 00:00:00 2001
From: ale-navo
Date: Mon, 9 Oct 2023 12:44:43 +0200
Subject: [PATCH 08/10] added binary mask generation

---
 README.md              |  2 ++
 coco_to_binary_mask.py | 54 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 56 insertions(+)
 create mode 100644 
coco_to_binary_mask.py

diff --git a/README.md b/README.md
index 8a44346..7225ae8 100644
--- a/README.md
+++ b/README.md
@@ -38,6 +38,8 @@ Under active development, apologies for rough edges and bugs. Use at your own risk.
 1. [coco-viewer](https://github.com/trsvchn/coco-viewer) to view your annotations.
    - `python cocoviewer.py -i <dataset_path> -a <dataset_path>/annotations.json`
 
+1. Call 'coco_to_binary_mask.py' with argument `<dataset_path>`. For example ` python3 coco_to_binary_mask.py --dataset-path <dataset_path> `. It will create a new folder 'masks' in the dataset folder with the binary masks. For now only one binary mask is created with all the segmented regions in the same image. (Multiple categories are not supported yet.)
+
 ## Demo
 
 ![How it Works Gif!](https://github.com/anuragxel/salt/raw/main/assets/how-it-works.gif)

diff --git a/coco_to_binary_mask.py b/coco_to_binary_mask.py
new file mode 100644
index 0000000..592aa29
--- /dev/null
+++ b/coco_to_binary_mask.py
@@ -0,0 +1,54 @@
+#to put in folder testX
+import os
+import argparse
+import sys
+
+
+from pycocotools.coco import COCO
+from matplotlib import image
+from pathlib import Path
+import numpy as np
+from re import findall
+
+if __name__ == "__main__":
+
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--dataset-path", type=str, default="./dataset")
+    args = parser.parse_args()
+    #TODO: differentiate masks of different categories
+
+    dataset_path = args.dataset_path
+    masks_path = os.path.join(dataset_path, "masks")
+    if not os.path.exists(masks_path):
+        os.makedirs(masks_path)
+    annFile = os.path.join(dataset_path, "annotations.json")
+
+    coco = COCO(annFile)
+
+    catIds = coco.getCatIds()
+    imgIds = coco.getImgIds()
+    annsIds = coco.getAnnIds()
+
+    for imgId in imgIds:
+        img = coco.loadImgs(imgId)[0]
+        width = coco.imgs[imgId]["width"]
+        height = coco.imgs[imgId]["height"]
+        annIds = coco.getAnnIds(imgIds=img['id'], catIds=catIds, iscrowd=None)
+        anns = coco.loadAnns(annIds)
+        img_id = findall(r'\d+', img["file_name"])[0]
+
+        mask = 
np.zeros((height, width))
+
+        try:
+            mask = np.zeros(coco.annToMask(anns[0]).shape)
+            for ann in anns:
+                mask += coco.annToMask(ann)
+            mask[mask >= 1] = 1
+        except:
+            pass
+
+        mask_png_name = "mask" + str(img_id) + ".png"
+        mask_png_path = os.path.join(masks_path, mask_png_name)
+        image.imsave(mask_png_path, mask, cmap='gray')
+

From 7ee041d3775438ab87e57f53a9e9007c17666822 Mon Sep 17 00:00:00 2001
From: ale-navo
Date: Mon, 9 Oct 2023 12:46:12 +0200
Subject: [PATCH 09/10] Readme integration

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 7225ae8..98b80a4 100644
--- a/README.md
+++ b/README.md
@@ -38,7 +38,7 @@ Under active development, apologies for rough edges and bugs. Use at your own risk.
 1. [coco-viewer](https://github.com/trsvchn/coco-viewer) to view your annotations.
    - `python cocoviewer.py -i <dataset_path> -a <dataset_path>/annotations.json`
 
-1. Call 'coco_to_binary_mask.py' with argument `<dataset_path>`. For example ` python3 coco_to_binary_mask.py --dataset-path <dataset_path> `. It will create a new folder 'masks' in the dataset folder with the binary masks. For now only one binary mask is created with all the segmented regions in the same image. (Multiple categories are not supported yet.)
+1. Call `coco_to_binary_mask.py` with argument `<dataset_path>`. For example `python3 coco_to_binary_mask.py --dataset-path <dataset_path>`. It will create a new folder `masks` in the dataset folder with the binary masks. For now only one binary mask is created with all the segmented regions in the same image. (Multiple categories are not supported yet.) 
## Demo From 277874f05bc3910a836628563d4286f99f35a215 Mon Sep 17 00:00:00 2001 From: Marco Ambrosio Date: Mon, 19 Feb 2024 17:49:49 +0100 Subject: [PATCH 10/10] add change category feature --- salt/dataset_explorer.py | 5 +++++ salt/display_utils.py | 5 ++++- salt/editor.py | 6 ++++++ salt/interface.py | 24 +++++++++++++++++++----- 4 files changed, 34 insertions(+), 6 deletions(-) diff --git a/salt/dataset_explorer.py b/salt/dataset_explorer.py index 73d963f..ab025e3 100644 --- a/salt/dataset_explorer.py +++ b/salt/dataset_explorer.py @@ -203,3 +203,8 @@ def add_annotation(self, image_id, category_id, mask, poly=True): def save_annotation(self): with open(self.coco_json_path, "w") as f: json.dump(self.coco_json, f) + + def update_annotation(self, image_id, category_id, selected_annotations, mask): + for annotation in selected_annotations: + self.coco_json["annotations"][annotation]["category_id"] = category_id + diff --git a/salt/display_utils.py b/salt/display_utils.py index 71b49cc..b390aab 100644 --- a/salt/display_utils.py +++ b/salt/display_utils.py @@ -58,7 +58,10 @@ def draw_box_on_image(self, image, ann, color): def draw_annotations(self, image, annotations, colors): for ann, color in zip(annotations, colors): image = self.draw_box_on_image(image, ann, color) - mask = self.__convert_ann_to_mask(ann, image.shape[0], image.shape[1]) + if type(ann["segmentation"]) is dict: + mask = coco_mask.decode(ann["segmentation"]) + else: + mask = self.__convert_ann_to_mask(ann, image.shape[0], image.shape[1]) image = self.overlay_mask_on_image(image, mask, color) return image diff --git a/salt/editor.py b/salt/editor.py index d6333aa..55312c3 100644 --- a/salt/editor.py +++ b/salt/editor.py @@ -143,6 +143,12 @@ def save_ann(self): def save(self): self.dataset_explorer.save_annotation() + def change_category(self, selected_annotations=[]): + self.dataset_explorer.update_annotation( + self.image_id, self.category_id, selected_annotations, self.curr_inputs.curr_mask + ) + 
self.__draw(selected_annotations) + def next_image(self): if self.image_id == self.dataset_explorer.get_num_images() - 1: return diff --git a/salt/interface.py b/salt/interface.py index ed5b825..dd153c8 100644 --- a/salt/interface.py +++ b/salt/interface.py @@ -134,29 +134,34 @@ def reset(self): global selected_annotations self.editor.reset(selected_annotations) self.graphics_view.imshow(self.editor.display) + self.get_side_panel_annotations() def add(self): global selected_annotations self.editor.save_ann() self.editor.reset(selected_annotations) self.graphics_view.imshow(self.editor.display) + self.get_side_panel_annotations() def next_image(self): global selected_annotations self.editor.next_image() selected_annotations = [] self.graphics_view.imshow(self.editor.display) + self.get_side_panel_annotations() def prev_image(self): global selected_annotations self.editor.prev_image() selected_annotations = [] self.graphics_view.imshow(self.editor.display) + self.get_side_panel_annotations() def toggle(self): global selected_annotations self.editor.toggle(selected_annotations) self.graphics_view.imshow(self.editor.display) + self.get_side_panel_annotations() def transparency_up(self): global selected_annotations @@ -170,6 +175,13 @@ def transparency_down(self): def save_all(self): self.editor.save() + def change_category(self): + global selected_annotations + self.editor.change_category(selected_annotations) + self.editor.reset(selected_annotations) + self.graphics_view.imshow(self.editor.display) + self.get_side_panel_annotations() + def get_top_bar(self): top_bar = QWidget() button_layout = QHBoxLayout(top_bar) @@ -187,6 +199,7 @@ def get_top_bar(self): "Remove Selected Annotations", lambda: self.delete_annotations(), ), + ("Change Category", lambda: self.change_category()), ] for button, lmb in buttons: bt = QPushButton(button) @@ -248,25 +261,24 @@ def annotation_list_item_clicked(self, item): self.graphics_view.imshow(self.editor.display) def keyPressEvent(self, 
event): - if event.key() == Qt.Key_Escape: - self.app.quit() + # if event.key() == Qt.Key_Escape: + # self.app.quit() if event.key() == Qt.Key_A: self.prev_image() - self.get_side_panel_annotations() if event.key() == Qt.Key_D: self.next_image() - self.get_side_panel_annotations() if event.key() == Qt.Key_K: self.transparency_down() if event.key() == Qt.Key_L: self.transparency_up() if event.key() == Qt.Key_N: self.add() - self.get_side_panel_annotations() if event.key() == Qt.Key_R: self.reset() if event.key() == Qt.Key_T: self.toggle() + if event.key() == Qt.Key_C: + self.change_category() if event.modifiers() == Qt.ControlModifier and event.key() == Qt.Key_S: self.save_all() elif event.key() == Qt.Key_Space: @@ -274,3 +286,5 @@ def keyPressEvent(self, event): # self.clear_annotations(selected_annotations) # Do something if the space bar is pressed # pass + +
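Patch 08 leaves a TODO in `coco_to_binary_mask.py` ("differentiate masks of different categories"): it collapses every annotation into one binary mask per image. A rough sketch of the per-category variant follows; `masks_by_category` is a hypothetical helper, not part of these patches, and it operates on already-decoded binary masks (such as those returned by pycocotools' `annToMask`) so the example stays self-contained:

```python
import numpy as np

def masks_by_category(anns, height, width):
    """Combine per-annotation binary masks into one binary mask per category.

    `anns` is a list of dicts, each with a `category_id` and a decoded
    binary `mask` array (e.g. the output of pycocotools' annToMask).
    """
    out = {}
    for ann in anns:
        # One accumulator mask per category, created on first use.
        m = out.setdefault(
            ann["category_id"], np.zeros((height, width), dtype=np.uint8)
        )
        # OR the annotation's mask into the category accumulator in place.
        m |= ann["mask"].astype(np.uint8)
    return out

# Toy example with two categories on a 2x2 image.
anns = [
    {"category_id": 1, "mask": np.array([[1, 0], [0, 0]])},
    {"category_id": 1, "mask": np.array([[0, 1], [0, 0]])},
    {"category_id": 2, "mask": np.array([[0, 0], [1, 1]])},
]
masks = masks_by_category(anns, 2, 2)
print(sorted(masks))      # [1, 2]
print(masks[1].tolist())  # [[1, 1], [0, 0]]
```

In the script itself, each per-category mask could then be saved as `mask<img_id>_cat<category_id>.png` instead of the single `mask<img_id>.png` the patch writes.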