# Satellite-Imagery-Poverty-Detector
This is where deep learning comes in. Researchers have trained neural networks that analyze satellite imagery and estimate how much wealth the pictured regions have, based on features such as housing and nighttime light levels. These networks can tell from the images which regions are more likely to be in poverty.
This project is my take on this deep learning application.
After searching through various datasets, I decided to train my model on the WILDS PovertyMap dataset, which consists of a large number of 224 x 224 satellite images.
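As a rough sketch of what feeding such a dataset into a model looks like, the snippet below builds a small `tf.data` pipeline. The random arrays are illustrative stand-ins for the real PovertyMap images and wealth-index labels, and the 3-channel shape is an assumption made for simplicity.

```python
import numpy as np
import tensorflow as tf

# Illustrative stand-in for the WILDS PovertyMap data: random 224 x 224
# images (3 channels assumed here for simplicity) paired with a
# continuous wealth-index label per image.
num_samples = 8
images = np.random.rand(num_samples, 224, 224, 3).astype("float32")
labels = np.random.rand(num_samples).astype("float32")

# Shuffle, batch, and prefetch, as in a typical TensorFlow input pipeline.
dataset = (
    tf.data.Dataset.from_tensor_slices((images, labels))
    .shuffle(buffer_size=num_samples)
    .batch(4)
    .prefetch(tf.data.AUTOTUNE)
)

for batch_images, batch_labels in dataset.take(1):
    print(batch_images.shape)  # (4, 224, 224, 3)
```

In practice the images would be loaded from disk and normalized, but the batching and prefetching structure stays the same.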
I created a convolutional neural network (CNN) with several convolutional and pooling layers, followed by a few dense (fully-connected) layers at the end. The model also included a few batch normalization and dropout layers.
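A minimal sketch of that kind of architecture, using Keras, is shown below. The layer counts and sizes here are assumptions for illustration, not the exact configuration used in the project.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(input_shape=(224, 224, 3)):
    """Conv/pool blocks with batch normalization, then dense layers
    with dropout, ending in a single regression output (the predicted
    wealth index). Sizes are illustrative."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1),  # continuous wealth-index prediction
    ])

model = build_model()
```

The single linear output unit reflects that this is a regression task (predicting a wealth index) rather than classification.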
As for training, I trained the model using a custom training loop and was able to achieve an RMSE of 0.34. I used TensorFlow's Keras API to define the model and TensorFlow's lower-level API for training.
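A custom training loop with TensorFlow's lower-level API typically centers on `tf.GradientTape`. The sketch below shows the general pattern on a tiny placeholder model and random data; it is not the project's actual loop, and RMSE is computed here simply as the square root of the mean squared error.

```python
import numpy as np
import tensorflow as tf

# Placeholder model and data, for illustrating the loop structure only.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
loss_fn = tf.keras.losses.MeanSquaredError()

x = np.random.rand(16, 4).astype("float32")
y = np.random.rand(16, 1).astype("float32")

for step in range(5):
    # Record the forward pass so gradients can be taken.
    with tf.GradientTape() as tape:
        preds = model(x, training=True)
        mse = loss_fn(y, preds)
    # Backpropagate and apply the update manually.
    grads = tape.gradient(mse, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

# RMSE is the square root of the mean squared error.
rmse = float(tf.sqrt(mse))
print(f"final RMSE: {rmse:.4f}")
```

Compared with `model.fit`, this style makes each step explicit, which is useful when the loss, metrics, or update logic need customization.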
About
A CNN that recognizes areas of poverty from satellite imagery