
U-BDD++: Unsupervised Building Damage Detection from Satellite Imagery

Code implementation of "Learning Efficient Unsupervised Satellite Image-based Building Damage Detection" from ICDM 2023.

[Paper on ArXiv (Full Ver.)] [Paper on ICDM (Short Ver.)] [BibTeX]

Overview

This repository contains the code for U-BDD++, an unsupervised framework for detecting building damage from satellite imagery, along with the U-BDD benchmark.

U-BDD Benchmark

Data Preparation

Our work uses the public xBD dataset from the xView2 challenge. The dataset is available on the xView2 website (account required). Please download the "Challenge training set", "Challenge test set", and "Challenge holdout set", and follow the instructions on the website to unpack the files.

After downloading the dataset, the file structure should be similar to:

[xBD root folder]
├── hold
│   ├── images
│   └── labels
├── test
│   ├── images
│   └── labels
└── train
    ├── images
    └── labels

First, the data needs to be preprocessed before training. Please run the following command:

python datasets/preprocess-data.py --data_dir <path to xBD root folder>

This creates a new masks folder under each dataset split folder, containing the damage masks for each building.
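After preprocessing, each split should look similar to:

[xBD root folder]
├── hold
│   ├── images
│   ├── labels
│   └── masks
├── test
│   ├── images
│   ├── labels
│   └── masks
└── train
    ├── images
    ├── labels
    └── masks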

Installation

To start, please clone this repository to your local machine and follow the instructions below.

Requirements

This repository requires python>=3.9, torch>=1.13, and torchvision>=0.14. Older versions may work but have not been tested.
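To verify the environment, the following one-liner (an optional check, not part of the original instructions) prints the installed versions and whether CUDA is visible:

python -c "import sys, torch, torchvision; assert sys.version_info >= (3, 9), 'python>=3.9 required'; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"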

Note

As per the installation requirements of Grounding DINO, please make sure the environment variable CUDA_HOME is set:

export CUDA_HOME=/path/to/cuda-xx.x
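# Optional sanity check (not part of the original instructions):
# nvcc should resolve under CUDA_HOME
"$CUDA_HOME/bin/nvcc" --version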

# Grounding DINO does not support CUDA 12+ yet; this sample index-url uses CUDA 11.8
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118

pip install git+https://github.com/IDEA-Research/GroundingDINO.git
pip install git+https://github.com/facebookresearch/segment-anything.git
pip install git+https://github.com/openai/CLIP.git
pip install -r requirements.txt
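A quick import check (optional; the module names groundingdino, segment_anything, and clip follow each project's packaging) confirms the three dependencies installed correctly:

python -c "import groundingdino, segment_anything, clip; print('ok')"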

Note

Additionally, DINO requires building the custom PyTorch ops for MultiScaleDeformableAttention:

cd models/dino/models/dino/ops
python setup.py build install
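If the build succeeded, the compiled extension should import cleanly. The module name below is the one the ops setup registers in upstream DINO; treat this as an optional check:

python -c "import MultiScaleDeformableAttention"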

Pre-trained Weights

You can download the pre-trained weights of U-BDD++ for evaluation.

Model      Backbone   Weights
U-BDD++    ResNet     Google Drive
U-BDD++    Swin       Google Drive

[Coming Soon]

Evaluation

To evaluate U-BDD++ on the xBD dataset, please run:

CUDA_VISIBLE_DEVICES=0 python predict-pretrain.py \
    --test-set-path "path/to/xbd/test" \
    --dino-path "path/to/dino/weights" \
    --dino-config "path/to/dino/config" \
    --sam-path "path/to/sam/weights"

# for example
CUDA_VISIBLE_DEVICES=0 python predict-pretrain.py \
    --test-set-path "/home/datasets/xbd/test" \
    --dino-path "/home/ubdd-dino-resnet.pth" \
    --dino-config "/home/ubdd/models/dino/config/DINO_4scale_UBDD_resnet.py" \
    --sam-path "/home/checkpoints/SAM/sam_vit_h_4b8939.pth"

License

This repository is released under the MIT license. Please see the LICENSE file for more information.

Attribution

Part of this repository uses code from the following repositories:

Grounding DINO (https://github.com/IDEA-Research/GroundingDINO)
Segment Anything (https://github.com/facebookresearch/segment-anything)
CLIP (https://github.com/openai/CLIP)
DINO (https://github.com/IDEA-Research/DINO)

Related repositories:

Thanks to the authors for their great work!

Citation

If you find this repository useful in your research, please cite it with the following BibTeX entry:

@article{zhang2023learning,
  title={Learning Efficient Unsupervised Satellite Image-based Building Damage Detection},
  author={Zhang, Yiyun and Wang, Zijian and Luo, Yadan and Yu, Xin and Huang, Zi},
  journal={arXiv preprint arXiv:2312.01576},
  year={2023}
}
