The aim of this project is to use Landsat-8 imagery to perform forest cover change detection in the Billion Tree Tsunami Afforestation Regions in Pakistan. We perform binary land cover segmentation of an image into forest/non-forest classes for our Areas of Interest (AOI), repeat the same for a seven-year temporal series of images from 2014 to 2020, and finally compare the resulting maps to see what forestation changes occurred in the selected areas. The image below shows our results for the Battagram district from 2014 to 2020, where red pixels are non-forest labels, green pixels are forest labels, and the last image shows the overall gain/loss map from 2014 to 2020.
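To make the gain/loss comparison concrete, the sketch below derives a change map from two aligned binary forest masks. The file names and arrays are hypothetical illustrations, not part of this repository:

```python
# Minimal sketch: derive a gain/loss map from two aligned binary forest
# masks (1 = forest, 0 = non-forest). File names are hypothetical.
import numpy as np

forest_2014 = np.load('forest_mask_2014.npy')  # hypothetical mask file
forest_2020 = np.load('forest_mask_2020.npy')  # hypothetical mask file

gain = (forest_2014 == 0) & (forest_2020 == 1)  # non-forest -> forest
loss = (forest_2014 == 1) & (forest_2020 == 0)  # forest -> non-forest

print(f'gain: {gain.mean():.2%} of pixels, loss: {loss.mean():.2%} of pixels')
```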
Our paper contains a much more detailed explanation of our methodology, dataset retrieval and preparation, machine learning application, model design, and the band combinations used in our experiments. A PDF of the paper is available as jars-spie-accepted-work.pdf in the main repository, and it may also be accessed online at the JARS website.
If you use this work or any of the ideas proposed here in your research, kindly cite our paper:
Zulfiqar, A., Ghaffar, M. M., Shahzad, M., Weis, C., Malik, M. I., Shafait, F., & Wehn, N. (2021). AI-ForestWatch: semantic segmentation based end-to-end framework for forest estimation and change detection using multi-spectral remote sensing imagery. Journal of Applied Remote Sensing, 15(2), 024518.
We analyse the following labelled regions in Pakistan from 2014 to 2020.
Essentially, for every given region we extract from Landsat-8 a per-pixel median image representative of a full year. This is done to minimize the effect of clouds and other weather sensitivities on the results. Google Earth Engine was heavily utilized for the retrieval and preprocessing of the data.

The pipeline, from preprocessing onwards, is summarized in the following diagram. Steps (1) to (5) depict the digitization of the available 2015 ground truth maps that we use for training, step (6) shows the UNet model used as our per-pixel classifier, and steps (7) and (8) show the generation of the forest estimation and change detection maps, respectively.

The available land cover maps are for the year 2015 only, so we digitize them using QGIS: we georeference the maps onto actual Landsat-8 satellite images and use distinct visual features for Ground Control Point (GCP) selection. The steps involved are summarized below.

We use a UNet segmentation model to make per-pixel classification decisions, segmenting satellite images into forest and non-forest pixels. The results are binary maps of forest and non-forest pixels, plus invalid bits that mark regions outside the district boundaries (these exist because images are rectangles, while actual district boundaries can have any arbitrary shape).

Once the model has learned to segment the 2015 images, we run it to infer on the images from 2014, 2016, 2017, 2018, 2019, and 2020. This gives us a temporal series of forest cover maps that can be compared to deduce forest cover change statistics, as we demonstrate in our paper. All of the models in this repo are written in PyTorch.
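For illustration, a yearly median composite like the one described above can be retrieved through the Earth Engine Python API roughly as follows. The dataset ID, date range, and AOI geometry here are illustrative assumptions, not the exact parameters from our experiments:

```python
# Hypothetical sketch of the yearly median-composite retrieval via the
# Earth Engine Python API; dataset ID, dates, and AOI are illustrative.
import ee

ee.Initialize()

# Illustrative AOI; the real AOIs come from district boundary shapefiles.
aoi = ee.Geometry.Rectangle([72.8, 34.6, 73.3, 35.1])

composite = (
    ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
    .filterBounds(aoi)
    .filterDate('2015-01-01', '2015-12-31')
    .median()  # per-pixel median over the year suppresses clouds
    .clip(aoi)
)
```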
You will need the following modules to get the code running:
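As a quick sanity check that a working environment is in place, something like the following can be run; the packages shown are our assumption based on the PyTorch pipeline described above, not an official requirements list:

```python
# Hypothetical environment check; the package set is an assumption based
# on the PyTorch-based pipeline described above.
import torch
import numpy as np

print('torch:', torch.__version__, '| numpy:', np.__version__)
print('CUDA available:', torch.cuda.is_available())
```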
`BTT-2020-GroundTruth` contains the ground truth labels (digitized maps) used as targets for our UNet model. Most of the training scripts are located in the `LandCoverSegmentation/semantic_segmentation` directory. This directory also contains the dataset generator script, which divides the huge satellite image of an entire district into 256x256 tiles, zips each tile with its ground truth, and writes these blobs to drive. The path to the blobs directory is consumed by the training script. `inference_btt_2020.py` is the script used to run inference on a given test image using the trained model.
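For illustration, the tiling step could be sketched as follows. This is a minimal stand-in with hypothetical names, not the repository's actual generator script:

```python
# Hypothetical sketch of the tiling step: split a district-sized image and
# its label map into aligned 256x256 patches. Names are illustrative; the
# actual script lives in LandCoverSegmentation/semantic_segmentation.
import numpy as np

def generate_tiles(image, labels, tile_size=256):
    """Yield (image_tile, label_tile) pairs; image is HxWxC, labels is HxW."""
    h, w = labels.shape
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            yield (image[y:y + tile_size, x:x + tile_size, :],
                   labels[y:y + tile_size, x:x + tile_size])

# Persist each tile together with its target, mirroring the "zip the tile
# with its ground truth and write blobs to drive" description above:
# for i, (img_t, lbl_t) in enumerate(generate_tiles(image, labels)):
#     np.savez_compressed(f'blobs/tile_{i}.npz', image=img_t, label=lbl_t)
```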