Fundamental assumptions that the learner makes about the target function that enable it to generalize beyond the training data. These assumptions bias the learner toward one generalization over another (a code sketch follows the examples below).
Examples:
- Support Vector Machines - Distinct classes tend to be separated by wide margins.
- Naive Bayes - Each input feature is conditionally independent of every other feature, given the output class or label.
- Linear Regression - The relationship between the attributes x and the output y is linear.
More common types: maximum conditional independence, minimum cross-validation error, maximum margin, minimum description length, minimum features, and nearest neighbors.
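A minimal sketch of how inductive bias plays out in practice, assuming scikit-learn and NumPy (the quadratic dataset and the two models are illustrative choices, not from any source above): a linear model's bias prevents it from fitting a quadratic target, while a decision tree's piecewise-constant bias approximates it well.

```python
# Sketch: the wrong inductive bias caps generalization (illustrative data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = X.ravel() ** 2 + rng.normal(0, 0.1, size=200)  # quadratic target, mild noise

linear = LinearRegression().fit(X, y)                # assumes y is linear in x
tree = DecisionTreeRegressor(max_depth=5).fit(X, y)  # assumes piecewise-constant y

print("linear R^2:", round(linear.score(X, y), 3))  # near 0: bias mismatch
print("tree   R^2:", round(tree.score(X, y), 3))    # near 1: bias fits the target
```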
Interpretability:
- Interpretable Machine Learning - A Guide for Making Black Box Models Explainable.
- Machine Learning Explainability - Kaggle Tutorial
Tools:
- SHAP - A game-theoretic approach to explain the output of any machine learning model (a usage sketch follows this list).
- LIME - Local Interpretable Model-Agnostic Explanations
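As a quick illustration of the SHAP workflow, here is a minimal usage sketch; the random-forest model and the California housing data are placeholder choices for the example, assuming the shap and scikit-learn packages are installed.

```python
# Sketch: per-feature attribution of a tree model's predictions with SHAP.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X, y = X.iloc[:2000], y[:2000]  # subsample to keep the sketch fast

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # exact SHAP values for tree ensembles
shap_values = explainer.shap_values(X)  # (n_samples, n_features) contributions
shap.summary_plot(shap_values, X)       # global view of feature importance
```

LIME takes a complementary route: it perturbs the inputs around a single prediction, fits a simple local surrogate model, and reads the explanation off the surrogate's weights.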
Distillation - once a neural network has been trained, its full output distribution can be approximated by a smaller network trained to match it (a loss sketch follows below).
- Awesome Knowledge Distillation - a great compilation of resources.
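A minimal sketch of the classic soft-target distillation loss (Hinton et al., 2015), assuming PyTorch; the temperature and mixing weight below are illustrative defaults, and the teacher/student logits would come from your own networks.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Mix hard-label cross-entropy with KL to the teacher's softened outputs."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # T^2 keeps soft-target gradients on the same scale as the hard-label term.
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * ce + (1 - alpha) * kd

# Example call with random tensors (batch of 8, 10 classes):
student = torch.randn(8, 10)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student, teacher, labels))
```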
Optimization algorithm order:
- First-order algorithms require the first derivative: the gradient (or, for vector-valued functions, the Jacobian). Example: gradient descent.
- Second-order algorithms additionally require the second derivative: the Hessian. Example: Newton's method. Both orders are compared in the sketch below.
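A toy sketch of the difference, in plain Python with a hypothetical one-dimensional objective f(x) = x**4: the first-order step uses only the gradient and a fixed learning rate, while the second-order (Newton) step rescales the gradient by the inverse of the Hessian.

```python
# f(x) = x**4, minimized at x = 0.
def grad(x):
    return 4 * x ** 3   # first derivative (the gradient)

def hess(x):
    return 12 * x ** 2  # second derivative (the Hessian; a scalar in 1-D)

x_first, x_second = 2.0, 2.0
for _ in range(10):
    x_first -= 0.01 * grad(x_first)              # first-order: fixed step size
    x_second -= grad(x_second) / hess(x_second)  # second-order: curvature-aware step

print(x_first, x_second)  # the Newton iterate is far closer to the minimum at 0
```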
Competitions:
- https://www.kaggle.com
- https://tianchi.aliyun.com/competition/gameList/activeList
- https://evalai.cloudcv.org/
- https://www.drivendata.org/competitions/
- https://www.aicrowd.com/
- https://datahack.analyticsvidhya.com/
- https://competitions.codalab.org/competitions/
- http://tunedit.org/challenges
- https://www.innocentive.com/ar/challenge/browse
- https://www.crowdanalytix.com/community
- https://www.hackerearth.com/challenges
- https://www.topcoder.com/challenges?filter[tracks][data_science]=true&bucket=ongoing
- https://www.machinehack.com/course-cat/modeling/
- https://quant-quest.com/competitions/
Aggregators: