(ECCV 2024) Open-Vocabulary Camouflaged Object Segmentation

@inproceedings{OVCOS_ECCV2024,
  title={Open-Vocabulary Camouflaged Object Segmentation},
  author={Pang, Youwei and Zhao, Xiaoqi and Zuo, Jiaming and Zhang, Lihe and Lu, Huchuan},
  booktitle={ECCV},
  year={2024},
}

Note

Details of the proposed OVCamo dataset can be found in the dataset document.

Prepare Dataset


  1. Prepare the training and testing splits: see the dataset document for details.
  2. Set the training and testing splits in the yaml file env/splitted_ovcamo.yaml (a sketch of this file follows the list):
    • OVCamo_TR_IMAGE_DIR: Image directory of the training set.
    • OVCamo_TR_MASK_DIR: Mask directory of the training set.
    • OVCamo_TR_DEPTH_DIR: Depth map directory of the training set. The depth maps of the training set, which were generated by us, can be downloaded from https://github.com/lartpang/OVCamo/releases/download/dataset-v1.0/depth-train-ovcoser.zip.
    • OVCamo_TE_IMAGE_DIR: Image directory of the testing set.
    • OVCamo_TE_MASK_DIR: Mask directory of the testing set.
    • OVCamo_CLASS_JSON_PATH: Path of the JSON file class_info.json, which stores the class information of the proposed OVCamo dataset.
    • OVCamo_SAMPLE_JSON_PATH: Path of the JSON file sample_info.json, which stores the sample information of the proposed OVCamo dataset.

Training/Inference

  1. Install dependencies: pip install -r requirements.txt.
    1. The versions of torch and torchvision are listed in a comment in requirements.txt.
  2. Run the script to:
    1. train the model: python .\main.py --config .\configs\ovcoser.py --model-name OVCoser;
    2. run inference with the model: python .\main.py --config .\configs\ovcoser.py --model-name OVCoser --evaluate --load-from <path of the local .pth file>.

Evaluate the Pretrained Model

  1. Download the pretrained model.
  2. Run the script: python .\main.py --config .\configs\ovcoser.py --model-name OVCoser --evaluate --load-from model.pth.

Evaluate Our Results

  1. Download our results and unzip the archive into <path>/ovcoser-ovcamo-te.
  2. Run the script: python .\evaluate.py --pre <path>/ovcoser-ovcamo-te.

LICENSE