This is the repository for the upcoming learning group meetup in October, taking place in Vienna, based on the fast.ai v3 part 2 course, fastai v2 library development, and PyTorch v1.2. (See also the repository from the previous fastai PyTorch course in Vienna v1, based on the fast.ai v3 part 1 course material.)
Please register in order to get updates for the meetups.
For this learning group meetup you are expected to have basic knowledge of deep learning or to have gone through the fast.ai v3 part 1 course material, and to have at least one year of programming experience. You should feel comfortable programming in Python and have basic knowledge of calculus and linear algebra. Some machine learning background is advised to make the best use of the course.
- Lesson 8: 16.10.2019 18:00-20:00 - Matrix multiplication; Forward and backward passes - Michael M. Pieler
- Lesson 9: 6.11.2019 18:30-20:30 - Loss functions, optimizers, and the training loop - Liad Magen & Thomas Keil
- Lesson 10: 20.11.2019 18:30-20:30 - Looking inside the model - Albert Rechberger, Moritz Reinhardt, Johannes Hofmanninger
- Lesson 11: 10.12.2019 18:30-20:30 - Data Block API, and generic optimizer - Sebastian Dürr
- Lesson 12: 18.12.2019 18:30-20:30 - Advanced training techniques - Michael M. Pieler
- Lesson 13: 15.01.2020 18:30-20:30 - ULMFiT from scratch - Liad Magen & Michael M. Pieler (Last meetup!)
Note: All the learning group meetups will take place at Nic.at, Karlsplatz 1, 1010 Wien.
(The first lesson already starts with number 8, because the part 1 course contained 7 lessons.)
- To dos before the lesson:
- watch the fastai lesson 8 (lesson notes)
- run the matrix multiplication and the forward and backward pass notebooks
- Do not worry, the first lesson is quite dense and we will tackle the building blocks piece by piece! :-)
- Matrix multiplication on German Wikipedia (the German version has better visualisations)
- Animated matrix multiplication
- Broadcasting visualisation
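A minimal sketch of PyTorch's broadcasting rules (shapes are compared from the trailing dimension backwards; each pair of sizes must be equal, or one of them must be 1):

```python
import torch

# Shapes are compared right to left: sizes must match or one must be 1
# (missing leading dimensions count as 1).
a = torch.ones(3, 4)                 # shape (3, 4)
b = torch.arange(4.)                 # shape (4,) -> treated as (1, 4)
print((a + b).shape)                 # torch.Size([3, 4])

# A column (3, 1) times a row (1, 4) broadcasts to a full (3, 4) outer product.
col = torch.arange(3.).unsqueeze(1)  # shape (3, 1)
row = torch.arange(4.).unsqueeze(0)  # shape (1, 4)
print((col * row).shape)             # torch.Size([3, 4])
```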
- Refresh your PyTorch basics with the learning material from our previous fast.ai v3 part 1 learning group.
- Get familiar with PyTorch einsum to get more intuition for matrix multiplication.
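For intuition: in einsum notation, matrix multiplication is just a sum over the shared index. A small sketch:

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)

# Matrix multiplication as an einsum: sum over the shared index k.
c = torch.einsum('ik,kj->ij', a, b)
print(torch.allclose(c, a @ b))            # True

# Batched matrix multiplication is the same with one extra batch index.
x = torch.randn(5, 2, 3)
y = torch.randn(5, 3, 4)
z = torch.einsum('bik,bkj->bij', x, y)
print(torch.allclose(z, torch.bmm(x, y)))  # True
```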
- What is torch.nn really? (This nicely explains the steps needed for training a deep learning model with PyTorch. It covers torch.nn, torch.optim, Dataset, and DataLoader. This setup is a "blueprint" for a deep learning library based on PyTorch.)
- PyTorch basics: introduction, torch.nn, view vs. permute, debugging, and scaled dot product attention as a matrix multiplication example
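A short sketch of the last two points: `view` only reinterprets contiguous memory, `permute` actually reorders dimensions, and scaled dot-product attention is nothing but two (batched) matrix multiplications plus a softmax:

```python
import torch
import torch.nn.functional as F

# view vs. permute: view reinterprets the same contiguous memory,
# permute reorders dimensions (and may need .contiguous() before a view).
x = torch.randn(2, 3, 4)
print(x.view(2, 12).shape)       # torch.Size([2, 12])
print(x.permute(0, 2, 1).shape)  # torch.Size([2, 4, 3])

# Scaled dot-product attention as two matrix multiplications.
q = torch.randn(2, 5, 8)  # (batch, query_len, dim)
k = torch.randn(2, 7, 8)  # (batch, key_len, dim)
v = torch.randn(2, 7, 8)  # (batch, key_len, dim)
scores = q @ k.transpose(1, 2) / (q.size(-1) ** 0.5)  # (2, 5, 7)
attn = F.softmax(scores, dim=-1)
out = attn @ v                                        # (2, 5, 8)
print(out.shape)
```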
- fastai v2 dev test setup
- Go deeper with DL debugging, troubleshooting (pdf or video), and how to avoid it in the first place (i.e., the Karpathy recipe).
- Why understanding backprop can be important for debugging.
- Xavier Glorot and Kaiming He init
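Both init schemes are available in `torch.nn.init`; a minimal sketch (the Kaiming variant corrects for the variance halving caused by ReLU):

```python
import torch
import torch.nn as nn

lin = nn.Linear(256, 128)

# Xavier/Glorot init: keeps activation variance roughly constant
# for symmetric activations such as tanh.
nn.init.xavier_normal_(lin.weight)

# Kaiming/He init: corrects for the halved variance after a ReLU.
nn.init.kaiming_normal_(lin.weight, nonlinearity='relu')

# The same idea by hand: std = sqrt(2 / fan_in) for ReLU.
fan_in = lin.weight.shape[1]
w = torch.randn(128, 256) * (2 / fan_in) ** 0.5
print(w.std())  # close to sqrt(2/256) ~ 0.088
```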
- Publications:
- Matrix calculus for DL (web) (arxiv)
- Xavier Glorot init
- Kaiming He init
- Fixup init
- Batch norm and how does it help optimization
- If you want to present one of the papers in this or the next lectures reach out to us via email! :-)
- If you want to know more about matrix multiplication & Co. on your (Nvidia) GPU (or why everything should be a multiple of 8 for super fast calculations on Nvidia GPUs).
- PyTorch code examples: The Annotated Transformer
- Visual information theory, KL & cross entropy (see section 1., question 3.)
- Mish (new activation function)
- Project ideas:
- ReLU with a different backward pass function (see the sketch after this list)
- ? (We mentioned another project idea, but it was forgotten. If you remember it, please make a pull request.)
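A minimal sketch for the first project idea, using `torch.autograd.Function` to define a ReLU whose backward pass can be freely modified:

```python
import torch

class MyReLU(torch.autograd.Function):
    """ReLU with a hand-written (and easily modifiable) backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        # Standard ReLU gradient: pass gradients only where x > 0.
        # Change this line to experiment with alternative backward passes.
        return grad_output * (x > 0).float()

x = torch.randn(5, requires_grad=True)
MyReLU.apply(x).sum().backward()
print(x.grad)
```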
- To dos before the lesson:
- watch the fastai lesson 9 (lesson notes)
- run the lesson 9 notebooks: why sqrt(5), init, minibatch training, callbacks, and anneal
- Lesson presentation slides
- A super intro to NN weight initialization from cs231n
- Weights initialization - blog post about Xavier Initialization
- Neural Network visualizer playground - lets you play with parameters such as learning rate, batch size, and regularization, and watch the results during training directly in the browser
- What is torch.nn?
- Common neural network mistakes (Twitter thread, combine with publication below.)
- PyTorch under the hood
- Bias in NN?
- Correlation and dependence (have a look at the correlation coefficient figure)
- Floating point basics: We (usually) use FP32 or FP16 (in combination with FP32) for mixed precision training (detailed information on floating point arithmetic).
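A quick illustration of why FP16 needs care: small increments vanish entirely and the representable range is narrow (the numbers below are standard IEEE half/single precision properties):

```python
import torch

# FP32 has ~7 decimal digits of precision, FP16 only ~3: a small update
# can be lost completely in half precision.
print(torch.tensor(1.0, dtype=torch.float16) + 1e-4)  # still 1.0 in FP16
print(torch.tensor(1.0, dtype=torch.float32) + 1e-4)  # 1.0001

# FP16 also overflows early: its largest finite value is 65504.
print(torch.finfo(torch.float16).max)  # 65504.0
print(torch.finfo(torch.float32).max)  # ~3.4e38
```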
- Efficient Methods and Hardware for Deep Learning (quantization, ternary net, & Co.)
- Publications:
- Taxonomy of Real Faults in Deep Learning Systems (see page 7 for a nice overview)
- To dos before the lesson:
- watch the fastai lesson 10 (lesson notes)
- run the lesson 10 notebooks: foundations, early stopping, CUDA CNN hooks init, batch norm
- Python data model for __dunder__ & Co., a Guide to Python's Magic Methods, and exceptions
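A tiny example of the data model in action: with `__len__` and `__getitem__` a class behaves like a sequence, which is exactly the protocol a PyTorch `Dataset` relies on (the class itself is made up for illustration):

```python
class Squares:
    """Minimal Dataset-style class built from dunder methods."""

    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, i):
        # Raising IndexError also makes plain iteration stop correctly.
        if not 0 <= i < self.n:
            raise IndexError(i)
        return i ** 2

    def __repr__(self):
        return f'Squares(n={self.n})'

s = Squares(5)
print(len(s), s[3], list(s))  # 5 9 [0, 1, 4, 9, 16]
```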
- Illustrated Explanation of Performing 2D Convolutions Using Matrix Multiplications
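The same idea in code: `torch.nn.functional.unfold` performs the im2col step, after which the convolution collapses into a single matrix multiplication:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)  # (batch, channels, height, width)
w = torch.randn(6, 3, 3, 3)  # (out_channels, in_channels, kh, kw)

# im2col: extract all 3x3 patches as columns -> shape (1, 3*3*3, 6*6)
cols = F.unfold(x, kernel_size=3)

# The convolution is now one matrix multiplication per batch element.
out = (w.view(6, -1) @ cols).view(1, 6, 6, 6)

print(torch.allclose(out, F.conv2d(x, w), atol=1e-5))  # True
```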
- An infinitely customizable training loop with Sylvain Gugger
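This is not the fastai API, just a minimal sketch of the underlying idea: the training loop itself stays fixed, and behaviour is injected through callbacks at well-defined points (names like `PrintLoss` are made up):

```python
class Callback:
    def on_epoch_begin(self, epoch): pass
    def on_batch_end(self, loss): pass

class PrintLoss(Callback):
    def on_batch_end(self, loss):
        print(f'loss: {loss:.4f}')

def fit(epochs, model, loss_fn, opt, dl, callbacks=()):
    # The loop never changes; callbacks customize it from the outside.
    for epoch in range(epochs):
        for cb in callbacks:
            cb.on_epoch_begin(epoch)
        for xb, yb in dl:
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()
            opt.zero_grad()
            for cb in callbacks:
                cb.on_batch_end(loss.item())
```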
- Publications:
- Debugging: "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." - Kernighan's law
- To dos before the lesson:
- watch the fastai lesson 11 (lesson notes)
- run the lesson 11 notebooks: LSUV, Data Block API, Optimizers, new Learner, progress bar for Learner, and Data Augmentation
- Lesson 11 presentation
- LSUV notebook (see also publication below)
- CNNs: from the Basics to Recent Advances (Dmytro Mishkin) (see also publication below)
- Imagenette dataset
- Exponentially weighted averages, momentum, RMSProp, and Adam
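A minimal sketch of the exponentially weighted average, the building block all three optimizers share (including the bias correction used by Adam):

```python
import torch

beta = 0.9
avg = torch.zeros(3)
grads = [torch.ones(3) * g for g in (1., 2., 3.)]

for step, grad in enumerate(grads, 1):
    # Exponentially weighted average of the gradients.
    avg = beta * avg + (1 - beta) * grad
    # Bias correction (as in Adam) removes the zero-initialisation bias.
    corrected = avg / (1 - beta ** step)
    print(step, avg[0].item(), corrected[0].item())
```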
- Optimizer setup in fastai v2
- Publications:
- All you need is a good init (LSUV)
- Systematic evaluation of CNN advances on the ImageNet
- L2 Regularization versus Batch and Weight Normalization
- Decoupled Weight Decay Regularization (Adam with decoupled weight decay, i.e., AdamW)
- Blog post on understanding the LARS/LAMB optimizer
- To dos before the lesson:
- watch the fastai lesson 12 (lesson notes)
- run the lesson 12 notebooks: MixUp & Label Smoothing, Mixed Precision Training, Train Imagenette, and Transfer Learning.
- Coordinate Transform pipeline
- MixUp, CutMix, and others for fastai
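A minimal mixup sketch (not the fastai implementation): blend pairs of inputs with a coefficient drawn from a Beta(alpha, alpha) distribution, and later mix the two losses with the same coefficient:

```python
import torch

def mixup(xb, yb, alpha=0.4):
    """Blend each input with a randomly permuted partner."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(xb.size(0))
    x_mixed = lam * xb + (1 - lam) * xb[perm]
    return x_mixed, yb, yb[perm], lam

xb = torch.randn(8, 3, 32, 32)
yb = torch.randint(0, 10, (8,))
x_mixed, ya, yb2, lam = mixup(xb, yb)
# Training then uses: lam * loss(pred, ya) + (1 - lam) * loss(pred, yb2)
```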
- AugMix
- Albumentations image data augmentation library
- fastai XResNet models
- A Full Hardware Guide to Deep Learning by Tim Dettmers
- Publications:
- Bag of Tricks for Image Classification with CNNs (Highly recommended!)
- mixup: Beyond Empirical Risk Minimization (Beta distribution)
- When Does Label Smoothing Help?
- Adversarial Examples Improve Image Recognition (Why you should always be careful with BN!)
- Very interesting blog post series on adversarial learning
- Understanding the generalization of "lottery tickets" in neural networks
- What's Hidden in a Randomly Weighted Neural Network?
- Understanding transfer learning for medical imaging
- We will cover the text/NLP notebooks from lesson 12 in the lessons after Xmas in order to have more time and go deeper on the concepts!
- Please suggest a project, dataset, kaggle competitions, etc.!
- APTOS2019 blindness-detection (in combination with the interesting starting points outlined in a blog post about transfer learning for medical imaging by Google)
- Understanding clouds
- Get familiar with fastai v2 based on the part 1 notebooks
- Other competitive data science platforms you can have a look at
- To dos before the lesson:
- run the remaining lesson 12 notebooks: Text, AWD-LSTM, Language Model Pretraining, and ULMFiT
- Understanding LSTMs
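A short usage sketch of `torch.nn.LSTM` with the shapes spelled out:

```python
import torch
import torch.nn as nn

# A single-layer LSTM over a batch of token embeddings.
lstm = nn.LSTM(input_size=100, hidden_size=64, num_layers=1, batch_first=True)
x = torch.randn(2, 10, 100)  # (batch, seq_len, embedding_dim)

output, (h_n, c_n) = lstm(x)
print(output.shape)  # (2, 10, 64): hidden state at every time step
print(h_n.shape)     # (1, 2, 64):  final hidden state per layer
print(c_n.shape)     # (1, 2, 64):  final cell state per layer
```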
- Language Models (e.g. ULMFiT) vs. Masked Language Models (e.g. BERT)
- fastai AWD-LSTM docs
- SentencePiece
- Publications:
- RSNA Intracranial Hemorrhage Kaggle Competition
- Softmax & log-likelihood?
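In code: cross entropy is exactly log-softmax followed by the negative log-likelihood, as a quick check shows:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)            # raw model outputs
targets = torch.randint(0, 10, (4,))

# Cross entropy = log-softmax + negative log-likelihood.
manual = F.nll_loss(F.log_softmax(logits, dim=1), targets)
built_in = F.cross_entropy(logits, targets)
print(torch.allclose(manual, built_in))  # True
```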
- Final wrap up of the material from our fast.ai v3 part 2 PyTorch learning group!
- Build your foundation (Bloom's taxonomy)
- Do not forget, the path to mastery is not a straight line! (From the book "Chop Wood Carry Water".)
- We will not cover the S4TF part in our meetup series, but feel free to explore the material at your own pace:
- fast.ai v3 part 2 course details
- fast.ai v3 part 2 course material (this should be your first address if you are searching for something)
- fast.ai v3 part 2 course notebooks
- fastai v1 docs (this should be your second address if you are searching for something)
- fastai v2 dev repo (We will have a look at the notebooks used for the development of fastai v2 to see how the different parts end up in the library.)
- fast.ai forum (this should be your third address if you are searching for something)
- TWiML fast.ai v3 part 2 study group material
- Deep Reinforcement Learning
- GANs
- Transformers
- BERT
- Multifit (code)
- SentencePiece
- Quasi-RNN (QRNN)
- Feel free to add important topics that should be covered in future courses!
- Learning tips
- Please feel free to send us suggestions!