Solution:
In this homework, we'll build a model for predicting if we have an image of a dino or a dragon. For this, we will use the "Dino or Dragon?" dataset that can be downloaded from Kaggle.
You can get a wget-able version here:
wget https://github.com/alexeygrigorev/dino-or-dragon/releases/download/data/dino-dragon.zip
unzip dino-dragon.zip
In the lectures we saw how to use a pre-trained neural network. In the homework, we'll train a much smaller model from scratch.
Note: You will need an environment with a GPU for this homework. We recommend using Saturn Cloud. You can also use a computer without a GPU (e.g. your laptop), but it will be slower.
The dataset contains around 1900 images of dinos and around 1900 images of dragons.
The dataset contains separate folders for training and test sets.
For this homework we will use Convolutional Neural Network (CNN). Like in the lectures, we'll use Keras.
You need to develop the model with the following structure:
- The shape for input should be (150, 150, 3)
- Next, create a convolutional layer (Conv2D):
  - Use 32 filters
  - Kernel size should be (3, 3) (that's the size of the filter)
  - Use 'relu' as activation
- Reduce the size of the feature map with max pooling (MaxPooling2D):
  - Set the pooling size to (2, 2)
- Turn the multi-dimensional result into vectors using a Flatten layer
- Next, add a Dense layer with 64 neurons and 'relu' activation
- Finally, create the Dense layer with 1 neuron - this will be the output
  - The output layer should have an activation - use the appropriate activation for the binary classification case

As optimizer, use SGD with the following parameters:
SGD(lr=0.002, momentum=0.8)
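A minimal sketch of this architecture in Keras could look like the following (import paths assume TensorFlow's bundled Keras; the compile step with the optimizer and loss comes after the loss-function question below):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Sketch of the architecture described above.
model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid'),  # sigmoid: the usual activation for binary classification
])
```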
For clarification about kernel size and max pooling, check Office Hours.
Since we have a binary classification problem, what is the best loss function for us?
- binary crossentropy
- focal loss
- mean squared error
- categorical crossentropy
Note: since we specify an activation for the output layer, we don't need to set from_logits=True
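Putting the optimizer and loss together, the compile step could look roughly like this (a sketch, assuming binary crossentropy is the chosen loss; since the output layer already applies sigmoid, from_logits stays at its default of False):

```python
from tensorflow import keras

# Older Keras versions accept lr=0.002 instead of learning_rate=0.002, as written in the homework text.
optimizer = keras.optimizers.SGD(learning_rate=0.002, momentum=0.8)

model.compile(
    loss=keras.losses.BinaryCrossentropy(from_logits=False),  # output already passes through sigmoid
    optimizer=optimizer,
    metrics=['accuracy'],
)
```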
What's the total number of parameters of the model? You can use the summary
method for that.
- 9215873
- 11215873
- 14215873
- 19215873
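One way to check the parameter count, as a quick sketch:

```python
model.summary()       # prints a per-layer breakdown including the total parameter count
model.count_params()  # returns the total number of parameters as an integer
```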
For the next two questions, use the following data generator for both train and test sets:
ImageDataGenerator(rescale=1./255)
- We don't need to do any additional pre-processing for the images.
- When reading the data from train/test directories, check the class_mode parameter. Which value should it be for a binary classification problem?
- Use batch_size=20
- Use shuffle=True for both training and test sets.
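A sketch of the generators, assuming the archive unpacks into train/ and test/ directories (adjust the paths to wherever you extracted the data):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Only rescaling, no other pre-processing.
train_gen = ImageDataGenerator(rescale=1./255)
test_gen = ImageDataGenerator(rescale=1./255)

train_generator = train_gen.flow_from_directory(
    './train',               # assumed path to the training images
    target_size=(150, 150),
    batch_size=20,
    shuffle=True,
    class_mode='binary',     # binary classification
)

test_generator = test_gen.flow_from_directory(
    './test',                # assumed path to the test images
    target_size=(150, 150),
    batch_size=20,
    shuffle=True,
    class_mode='binary',
)
```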
For training use .fit()
with the following params:
model.fit(
train_generator,
epochs=10,
validation_data=test_generator
)
What is the median of training accuracy for all the epochs for this model?
- 0.40
- 0.60
- 0.90
- 0.20
What is the standard deviation of training loss for all the epochs for this model?
- 0.11
- 0.66
- 0.99
- 0.33
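Assuming the History object returned by model.fit() was stored in a variable called history (a hypothetical name), both statistics can be computed with numpy; note that the accuracy key may be 'accuracy' or 'acc' depending on the Keras version:

```python
import numpy as np

acc = history.history['accuracy']   # training accuracy per epoch ('acc' in some Keras versions)
loss = history.history['loss']      # training loss per epoch

print('median training accuracy:', np.median(acc))
print('std of training loss:', np.std(loss))
```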
For the next two questions, we'll generate more data using data augmentations.
Add the following augmentations to your training data generator:
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest'
Let's train our model for 10 more epochs using the same code as previously. Make sure you don't re-create the model - we want to continue training the model we already started training.
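One way to set this up, reusing the same flow_from_directory arguments as before and calling .fit() again on the existing model (paths and variable names are assumptions):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen_aug = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest',
)

train_generator_aug = train_gen_aug.flow_from_directory(
    './train',               # same assumed path as before
    target_size=(150, 150),
    batch_size=20,
    shuffle=True,
    class_mode='binary',
)

# Continue training the existing model - do not re-create it.
history_aug = model.fit(
    train_generator_aug,
    epochs=10,
    validation_data=test_generator,
)
```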
What is the mean of test loss for all the epochs for the model trained with augmentations?
- 0.15
- 0.77
- 0.37
- 0.97
What's the average of test accuracy for the last 5 epochs (from 6 to 10) for the model trained with augmentations?
- 0.84
- 0.54
- 0.44
- 0.24
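Again assuming the second training run's History object is stored as history_aug (a hypothetical name), the test metrics live under the 'val_' keys:

```python
import numpy as np

val_loss = history_aug.history['val_loss']      # test loss per epoch
val_acc = history_aug.history['val_accuracy']   # test accuracy per epoch ('val_acc' in some versions)

print('mean test loss:', np.mean(val_loss))
print('mean test accuracy, epochs 6-10:', np.mean(val_acc[5:]))
```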
- Submit your results here: https://forms.gle/XdH5ztBddvTvxzpT6
- You can submit your solution multiple times. In this case, only the last submission will be used
- If your answer doesn't match options exactly, select the closest one
The deadline for submitting is 21 November 2022, 23:00 CEST.
After that, the form will be closed.