- Lab 1: Introduction
- Lab 6: Data Exploration
- Lab 7: Deep Learning for Images and Text
- Lab 8: Transfer Learning
- Assignment
- Help
Please make sure you have filled in this Google form so that we have your repository information. This step is mandatory; if you skip it, you may lose marks for your lab sessions.
- Create a Google account if you don't have one.
- Open Google Colab
- For help with Colab, please watch YouTube tutorials or my YouTube Tutorial
- Now, jump to Lab 1 and read the whole notebook on NumPy and Matplotlib.
- Then attempt the Lab 1 exercise. Feel free to take help from the web.
- Once you have completed the questions, make sure your lab instructor has checked and marked your work.
- http://cs231n.github.io/python-numpy-tutorial/
- https://numpy.org/devdocs/user/quickstart.html
- https://www.geeksforgeeks.org/python-numpy/
- https://matplotlib.org/tutorials/introductory/pyplot.html
- https://www.tutorialspoint.com/matplotlib/index.htm
- https://github.com/rougier/matplotlib-tutorial
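As a warm-up for Lab 1, here is a minimal sketch of the kind of NumPy and Matplotlib operations the notebook covers (the array values and plot are illustrative, not taken from the lab notebook):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; in Colab, plots render inline
import matplotlib.pyplot as plt

# Basic NumPy: create, reshape, and operate on arrays
a = np.arange(12).reshape(3, 4)   # 3x4 matrix of 0..11
col_means = a.mean(axis=0)        # mean of each column -> array of length 4
b = a * 2 + 1                     # element-wise (broadcast) arithmetic

print(a.shape)       # (3, 4)
print(col_means)     # [4. 5. 6. 7.]

# Basic Matplotlib: plot a sine wave, label the axes, and save the figure
x = np.linspace(0, 2 * np.pi, 100)
plt.plot(x, np.sin(x), label="sin(x)")
plt.xlabel("x")
plt.ylabel("sin(x)")
plt.legend()
plt.savefig("sine.png")  # in Colab, plt.show() displays it inline instead
```

The linked tutorials cover the same operations in much more depth; this is only the shape of the API.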
Please open notebook in Colab and complete all the tasks.
Task 1: Check the counts of each wine class
Task 2: Cluster wine data using K Means Algorithm
Task 3: Scatter-plot the wine data into 3 classes based on the true labels, with a legend. Hint: use any two variables
Task 4: Use cluster model labels to group data based on predicted classes
Task 5: Apply PCA with n_components=2 on X_train_std wine data and transform test data accordingly
Task 6: Apply Logistic Regression on training features and predict test features
Task 7: To complete this task, please create a new notebook in Google Colab.
Your data is stored in the Lab_6/Data folder. There are two .csv files: 1) Country-data.csv and 2) data-dictionary.csv. Please explore the data and use any clustering method to find the list of countries in each of the following categories:
S.No | Categories |
---|---|
1 | under-developed country |
2 | developing country |
3 | developed country |
Note: please justify your answer by explaining why and how you reached your conclusion.
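For Tasks 1–6, a minimal sketch of the pipeline on the scikit-learn wine dataset follows (variable names such as `X_train_std` follow the lab's naming; the split ratio and model parameters are illustrative choices, not the lab's required settings):

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = load_wine(return_X_y=True)

# Task 1: counts of each wine class
classes, counts = np.unique(y, return_counts=True)
print(dict(zip(classes, counts)))  # {0: 59, 1: 71, 2: 48}

# Split and standardise (scaler fitted on training data only)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
scaler = StandardScaler().fit(X_train)
X_train_std, X_test_std = scaler.transform(X_train), scaler.transform(X_test)

# Tasks 2 & 4: K-Means clustering; labels_ gives the predicted cluster of each point
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_train_std)
cluster_labels = kmeans.labels_

# Task 5: PCA fitted on X_train_std, then applied to transform the test data
pca = PCA(n_components=2).fit(X_train_std)
X_train_pca, X_test_pca = pca.transform(X_train_std), pca.transform(X_test_std)

# Task 6: Logistic Regression on training features, predict test features
clf = LogisticRegression(max_iter=1000).fit(X_train_pca, y_train)
print("test accuracy:", clf.score(X_test_pca, y_test))
```

For Task 3, `plt.scatter` called once per class (any two feature columns) with a `label=` argument, followed by `plt.legend()`, gives the labelled scatter plot. The same scale-then-cluster pattern applies to Country-data.csv in Task 7.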
Complete the following tasks:
Go to Exercise 1 and find Task 1. Now, change the activation function and other parameters, such as the optimizer, to see the effect on the network and its performance. If possible, create a grid search.
Go to Exercise2_DogvsCat_CNN and find Task 2. We have used Dropout to enhance the performance of the CNN model. Please use any technique you like to push the performance above val_acc: 0.7506.
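A grid search over activation and optimizer can be done with a simple double loop, as sketched below. The tiny synthetic dataset, network architecture, and grid values here are placeholders, not the lab's Exercise 1 settings:

```python
import numpy as np
from tensorflow import keras

# Tiny synthetic binary-classification problem (stand-in for the lab's data)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10)).astype("float32")
y = (X.sum(axis=1) > 0).astype("float32")

def build_model(activation, optimizer):
    """Small dense network whose activation and optimizer are grid parameters."""
    model = keras.Sequential([
        keras.Input(shape=(10,)),
        keras.layers.Dense(16, activation=activation),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=optimizer,
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Manual grid search: record final validation accuracy per combination
results = {}
for activation in ["relu", "tanh"]:
    for optimizer in ["adam", "sgd"]:
        model = build_model(activation, optimizer)
        hist = model.fit(X, y, epochs=5, batch_size=32,
                         validation_split=0.2, verbose=0)
        results[(activation, optimizer)] = hist.history["val_accuracy"][-1]

best = max(results, key=results.get)
print("best combination:", best, "val_accuracy:", results[best])
```

A manual loop like this avoids extra dependencies; wrapping the model for scikit-learn's `GridSearchCV` is an alternative if you prefer cross-validated scores.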
Please consider Time Series Prediction with LSTM Recurrent Neural Networks. The LSTM model in Exercise 3 has an average error of about 23 passengers (in thousands) on the training dataset and about 53 passengers (in thousands) on the test dataset. Not that bad. Can you improve the performance?
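Two common ways to improve such a model are a longer look-back window and a stacked LSTM. A minimal sketch on a synthetic series follows; the airline-passenger data, preprocessing, and hyperparameters of Exercise 3 are not reproduced here:

```python
import numpy as np
from tensorflow import keras

# Synthetic trending/seasonal series standing in for the airline-passenger data
t = np.arange(200, dtype="float32")
series = np.sin(t / 10) + t / 100

def make_windows(series, look_back=5):
    # Turn a 1-D series into (samples, look_back, 1) inputs and next-step targets
    X = np.stack([series[i:i + look_back] for i in range(len(series) - look_back)])
    y = series[look_back:]
    return X[..., None], y

X, y = make_windows(series, look_back=5)

model = keras.Sequential([
    keras.Input(shape=(5, 1)),
    keras.layers.LSTM(32, return_sequences=True),  # stacked LSTM: pass sequences on
    keras.layers.LSTM(16),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
print("train MSE:", model.evaluate(X, y, verbose=0))
```

Other levers worth trying on the real exercise: scaling the series before training, more epochs with early stopping, and tuning the number of LSTM units.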
A pre-trained model is a saved network that was previously trained on a large dataset, typically on a large-scale image-classification task. You either use the pre-trained model as is or use transfer learning to customize this model to a given task. The intuition behind transfer learning for image classification is that if a model is trained on a large and general enough dataset, this model will effectively serve as a generic model of the visual world. You can then take advantage of these learned feature maps without having to start from scratch by training a large model on a large dataset.
In this Lab, you will try two ways to customize a pre-trained model:
- Feature Extraction: Use the representations learned by a previous network to extract meaningful features from new samples. You simply add a new classifier, which will be trained from scratch, on top of the pre-trained model so that you can repurpose the feature maps learned previously for your dataset. You do not need to (re)train the entire model: the base convolutional network already contains features that are generically useful for classifying pictures. However, the final classification part of the pre-trained model is specific to the original classification task, and hence to the set of classes on which the model was trained. Please read Example 1: Transfer Learning as Feature Extractor
- Fine-Tuning: Unfreeze a few of the top layers of the frozen model base and jointly train both the newly added classifier layers and the last layers of the base model. This allows us to "fine-tune" the higher-order feature representations in the base model to make them more relevant for the specific task. Please read Example 2: Transfer Learning as Fine-Tuning
Note: We will be using the VGG-16 pre-trained model in our lab work for both feature extraction and fine-tuning.
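The two approaches can be sketched with Keras and VGG-16 as below. This is a minimal illustration, not the lab's notebook: `weights=None` avoids downloading the ImageNet weights here, but in the lab you would use `weights="imagenet"`; the input size, layer sizes, and how many layers to unfreeze are illustrative choices. The 4-unit output head matches the 4 classes in data.zip.

```python
from tensorflow import keras
from tensorflow.keras.applications import VGG16

# Convolutional base without the original classifier head.
# In the lab, use weights="imagenet" to load the pre-trained weights.
base = VGG16(weights=None, include_top=False, input_shape=(150, 150, 3))

# --- Feature extraction: freeze the whole base, train only the new classifier ---
base.trainable = False
model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(4, activation="softmax"),  # 4 classes: cats, dogs, humans, horses
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # your data pipeline here

# --- Fine-tuning: afterwards, unfreeze the top conv block, retrain with a low LR ---
base.trainable = True
for layer in base.layers[:-4]:   # keep everything below the last conv block frozen
    layer.trainable = False
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)
```

After unzipping data.zip, `keras.utils.image_dataset_from_directory` can build `train_ds`/`val_ds` from the class folders, and any other model from keras.io/api/applications can replace `VGG16` with the same pattern.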
In your Lab_8 folder, we have stored the data in the <data.zip> file. This is an image dataset with 4 classes (i.e. cats, dogs, humans, horses). Please implement both the Feature Extraction and Fine-Tuning methods on data.zip. You are not restricted to VGG-16; you are free to use any other pre-trained model from https://keras.io/api/applications/
Good luck!
How to prepare and write the assignment
- We recommend using https://www.overleaf.com/ to prepare your assignment in LaTeX
- Please use the IEEE conference template to write the assignment; you can download the LaTeX template here: Download LaTeX Template
- We have created a YouTube tutorial for you on how to write the assignment in LaTeX
- If you are new to scientific writing, please complete this writing course by Stanford, which can help you write a good assignment.
Here is a list of cheatsheets that may be useful: