Remove interpretability changes from this pull request
agitter committed Aug 9, 2020
1 parent 8c431ee commit d60d129
Showing 1 changed file with 13 additions and 37 deletions.
50 changes: 13 additions & 37 deletions content/06.discussion.md
@@ -3,7 +3,7 @@
Despite the disparate types of data and scientific goals in the learning tasks covered above, several challenges are broadly important for deep learning in the biomedical domain.
Here we examine these factors that may impede further progress, ask what steps have already been taken to overcome them, and suggest future research directions.

### Preventing overfitting and hyperparameter tuning
### Customizing deep learning models reflects a tradeoff between bias and variance

Some of the challenges in applying deep learning are shared with other machine learning methods.
In particular, many problem-specific optimizations described in this review reflect a recurring universal tradeoff---controlling the flexibility of a model in order to maximize predictivity.
@@ -12,14 +12,8 @@ One way of understanding such model optimizations is that they incorporate exter
This balance is formally described as a tradeoff between "bias and variance"
[@tag:goodfellow2016deep].
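
For reference, the bias-variance decomposition of expected squared prediction error can be stated as follows (a minimal statement assuming squared-error loss, included for orientation rather than taken from the cited text):

$$
\mathbb{E}\left[\left(y - \hat{f}(x)\right)^2\right] =
\underbrace{\left(\mathbb{E}\left[\hat{f}(x)\right] - f(x)\right)^2}_{\text{bias}^2} +
\underbrace{\mathbb{E}\left[\left(\hat{f}(x) - \mathbb{E}\left[\hat{f}(x)\right]\right)^2\right]}_{\text{variance}} +
\underbrace{\sigma^2}_{\text{irreducible error}}
$$

where $f$ is the true function, $\hat{f}$ is the model fit on a randomly drawn training set, the expectations are taken over training sets and noise, and $\sigma^2$ is the noise variance.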

Although the bias-variance tradeoff is important to take into account with many classical machine learning models, recent empirical and theoretical observations suggest that deep neural networks in particular do not exhibit the tradeoff as expected [@tag:Belkin2019_PNAS; @tag:Zhang2017_generalization; @tag:Lin2017_why_dl_works].
It has been demonstrated that poor generalization (high test error) can often be remedied by adding more layers and increasing the number of free parameters, in conflict with classic bias-variance theory.
This phenomenon, known as "double descent", indicates that deep neural networks achieve their best performance when they smoothly interpolate the training data, resulting in near-zero training error [@tag:Belkin2019_PNAS].

To optimize neural networks, hyperparameters must be tuned to yield the network with the lowest test error.
This is computationally expensive and often not done; however, it is important when making claims about the superiority of one machine learning method over another.
Several examples have now been uncovered in which a new method was reported to be superior to a baseline method (such as an LSTM or vanilla CNN), but the difference disappeared after sufficient hyperparameter tuning [@tag:Sculley2018].
A related practice that should be more widely adopted is the "ablation study", in which parts of a network are removed and the network is retrained, as this helps clarify the importance of different components, including any novel ones [@tag:Sculley2018].
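
As an illustration of an equal-budget comparison (a minimal sketch, not taken from the cited work; the estimators, search spaces, and synthetic data below are placeholders), both the baseline and the proposed model can be tuned with the same search procedure before being compared:

```python
# Minimal sketch: give the baseline and the proposed model the same
# hyperparameter search budget and cross-validation scheme before comparing them.
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=50, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "baseline": (
        LogisticRegression(max_iter=5000),
        {"C": loguniform(1e-3, 1e3)},
    ),
    "proposed": (
        MLPClassifier(max_iter=2000),
        {
            "hidden_layer_sizes": [(32,), (64,), (64, 64)],
            "alpha": loguniform(1e-6, 1e-1),
        },
    ),
}

for name, (estimator, space) in candidates.items():
    # Identical budget (n_iter) and folds for every candidate model.
    search = RandomizedSearchCV(estimator, space, n_iter=25, cv=5, random_state=0)
    search.fit(X_train, y_train)
    print(name, search.best_params_, search.score(X_test, y_test))
```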
Although the bias-variance tradeoff is common to all machine learning applications, recent empirical and theoretical observations suggest that deep learning models may have uniquely advantageous generalization properties [@tag:Zhang2017_generalization; @tag:Lin2017_why_dl_works].
Nevertheless, additional advances will be needed to establish a coherent theoretical foundation that enables practitioners to better reason about their models from first principles.

#### Evaluation metrics for imbalanced classification

@@ -112,35 +106,18 @@ As a result, several opportunities for innovation arise: understanding the cause
Unfortunately, uncertainty quantification techniques are underutilized in the computational biology communities and largely ignored in the current deep learning for biomedicine literature.
Thus, the practical value of uncertainty quantification in biomedical domains is yet to be appreciated.

### Interpretability

As deep learning models achieve state-of-the-art performance in a variety of domains, there is a growing need to develop methods for interpreting how they function.
There are several important reasons one might be interested in interpretability, which is also called "explainability".

Firstly, a model that achieves breakthrough performance may have identified patterns in the data that practitioners in the field would like to understand.
For instance, interpreting a model for predicting chemical properties from molecular graphs may illuminate previously unknown structure-property relations.
It is also useful to see whether a model is using known relationships; if not, this may suggest a way to improve the model.
Finally, there is a chance that the model has learned relationships that are known to be wrong.
This can be due to improper training data or to overfitting on spurious correlations in the training data.
### Interpretation

This is particularly important if a model is making medical diagnoses.
As deep learning models achieve state-of-the-art performance in a variety of domains, there is a growing need to make the models more interpretable.
Interpretability matters for two main reasons.
First, a model that achieves breakthrough performance may have identified patterns in the data that practitioners in the field would like to understand.
However, this would not be possible if the model is a black box.
Second, interpretability is important for trust.
If a model is making medical diagnoses, it is important to ensure the model is making decisions for reliable reasons and is not focusing on an artifact of the data.
A motivating example of this can be found in Caruana et al. [@tag:Caruana2015_intelligible], where a model trained to predict the likelihood of death from pneumonia assigned lower risk to patients with asthma, but only because such patients were treated as higher priority by the hospital.
In the context of deep learning, understanding the basis of a model's output is particularly important as deep learning models are unusually susceptible to adversarial examples [@tag:Nguyen2014_adversarial] and can output confidence scores over 99.99% for samples that resemble pure noise.

It has been shown that deep learning models are unusually susceptible to carefully crafted adversarial examples [@tag:Nguyen2014_adversarial] and can output confidence scores over 99.99% for samples that resemble pure noise.
While this is largely still an unsolved problem, the interpretation of deep learning models may help understand these failure modes and how to prevent them.

Several different levels of interpretability can be distinguished.
Consider a prototypical CNN used for image classification.
At a high level, one can perform an occlusion or sensitivity analysis to determine what sections of an image are most important for making a classification, generating a "saliency" heatmap.
Then, if one wishes to understand what is going on in the layers of the model, several tools have been developed for visualizing the learned feature maps, such as the deconvnet [@tag:Zeiler2013_visualizing].
Finally, if one wishes to analyze the flow of information through a deep neural network, layer-wise relevance propagation can be performed to see how each layer contributes to different classifications [@tag:Montavon2018_visualization].
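
For concreteness, a minimal sketch of occlusion analysis is shown below; it assumes only a generic `model_predict` callable returning class probabilities, and the patch size and fill value are arbitrary choices rather than settings from the cited works:

```python
import numpy as np

def occlusion_saliency(model_predict, image, target_class, patch=8, stride=8, fill=0.0):
    """Slide an occluding patch over the image and record how much the predicted
    probability of `target_class` drops; larger drops mark more salient regions.

    model_predict: callable mapping an (H, W, C) array to a vector of class probabilities.
    """
    h, w = image.shape[:2]
    baseline = model_predict(image)[target_class]
    heatmap = np.zeros((h, w), dtype=float)
    counts = np.zeros((h, w), dtype=float)
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[top:top + patch, left:left + patch] = fill
            drop = baseline - model_predict(occluded)[target_class]
            heatmap[top:top + patch, left:left + patch] += drop
            counts[top:top + patch, left:left + patch] += 1
    # Average overlapping contributions so the heatmap is comparable everywhere.
    return heatmap / np.maximum(counts, 1)
```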

A starting point for many discussions of interpretability is the interpretability-accuracy trade-off.
The trade-off assumes that only simple models are interpretable, and a delineation is often made between "white box" models (linear regression, decision trees), which are assumed to be not very accurate, and "black box" models (neural networks, kernel SVMs), which are assumed to be more accurate.
This view is becoming outmoded, however, with the development of sophisticated tools for interrogating and understanding deep neural networks [@tag:Montavon2018_visualization; @tag:Zeiler2013_visualizing] and new methods for creating highly accurate interpretable models [@tag:Rudin2019].
Still, this trade-off motivates a common practice whereby an easy-to-interpret model is trained alongside a hard-to-interpret one, which is sometimes called "post-hoc interpretation".
For instance, in the example from Caruana et al. mentioned earlier, a rule-based model was trained alongside a neural network on the same training data to understand the types of relations that may have been learned by the neural network.
Along similar lines, a method for "distilling" a neural network into a decision tree has been developed [@tag:Frosst2017_distilling].
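
To illustrate the general surrogate idea (a minimal sketch, not the specific procedures of Caruana et al. or the distillation method cited above; the synthetic data and model choices are placeholders), one can fit a shallow decision tree to a trained network's predictions and inspect its rules:

```python
# Minimal sketch of post-hoc interpretation: fit a shallow decision tree
# to mimic a trained neural network's predictions on the same inputs.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black box": a small feed-forward network.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
net.fit(X_train, y_train)

# Surrogate: a depth-limited tree trained on the network's *predicted* labels,
# so its rules approximate what the network learned rather than the raw data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, net.predict(X_train))

print("fidelity to the network:", surrogate.score(X_test, net.predict(X_test)))
print(export_text(surrogate))
```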
As the concept of interpretability is quite broad, many methods described as improving the interpretability of deep learning models take disparate and often complementary approaches.

#### Assigning example-specific importance scores

@@ -242,8 +219,7 @@ Towards this end, Che et al. [@tag:Che2015_distill] used gradient boosted trees

Finally, it is sometimes possible to train the model to provide justifications for its predictions.
Lei et al. [@tag:Lei2016_rationalizing] used a generator to identify "rationales", which are short and coherent pieces of the input text that produce similar results to the whole input when passed through an encoder.
Shen et al. [@tag:Shen2019] trained a CNN for lung nodule malignancy classification that also provides a series of attributes for the nodule, which they argue help explain how the network functions.
These are both simple examples of an emerging approach towards engendering trust in AI systems which Elton calls "self-explaining AI" [@tag:Elton2020].
The authors applied their approach to a sentiment analysis task and obtained substantially superior results compared to an attention-based method.

#### Future outlook

