From ebb27b1f6e36717b1b215eb74f1ad19abf7e760a Mon Sep 17 00:00:00 2001
From: Daniel Elton
Date: Fri, 14 Feb 2020 18:11:32 -0500
Subject: [PATCH] rehash/update my previous commit - single lines and other fixes

---
 content/05.treat.md | 34 ++++++++++++++++++++++++----------
 content/06.discussion.md | 38 +++++++++++++++++++++++++++++---------
 content/citation-tags.tsv | 7 +++++++
 3 files changed, 60 insertions(+), 19 deletions(-)

diff --git a/content/05.treat.md b/content/05.treat.md
index 3c71ba95..217b6c71 100644
--- a/content/05.treat.md
+++ b/content/05.treat.md
@@ -182,22 +182,36 @@ However, in the long term, atomic convolutions may ultimately overtake grid-based
 
 *De novo* drug design attempts to model the typical design-synthesize-test cycle of drug discovery in-silico [@doi:10.1002/wcms.49; @doi:10.1021/acs.jmedchem.5b01849].
 It explores an estimated 10^60^ synthesizable organic molecules with drug-like properties without explicit enumeration [@doi:10.1002/wcms.1104].
-To test or score structures, physics-based simulation could be used, or machine learning models based on techniques discussed may be used, as they are much more computationally efficient.
+To score molecules after generation or during optimization, physics-based simulation could be used [@tag:Sumita2018], but machine learning models based on techniques discussed earlier may be preferable [@tag:Gomezb2016_automatic], as they are much more computationally expedient. Computational efficiency is particularly important during optimization, as the "scoring function" may need to be called thousands of times.
+
 To "design" and "synthesize", traditional *de novo* design software relied on classical optimizers such as genetic algorithms.
+These approaches can lead to overfit, "weird" molecules which are difficult to synthesize in the lab.
+A popular approach which may help ensure synthesizability is to use rule-based virtual chemical reactions to generate molecular structures [@doi:10.1021/acs.jmedchem.5b01849].
+Deep learning models that generate realistic, synthesizable molecules have been proposed as an alternative.
+In contrast to the classical, symbolic approaches, generative models learned from data would not depend on laboriously encoded expert knowledge.
-In the past few years a large number of techniques for the generative modeling and optimization of molecules with deep learning have been explored, including recursive neural networks, variational autoencoders, generative adversarial networks, and reinforcement learning -- for a review see Elton, et al.[@tag:Elton_molecular_design_review]
+In the past few years a large number of techniques for the generative modeling and optimization of molecules with deep learning have been explored, including recursive neural networks, variational autoencoders, generative adversarial networks, and reinforcement learning -- for a review see Elton et al. [@tag:Elton_molecular_design_review] or Vamathevan et al. [@tag:Vamathevan2019].
 Building off the large amount of work that has already gone into text generation,[@arxiv:1308.0850] many generative neural networks for drug design represent chemicals with the simplified molecular-input line-entry system (SMILES), a standard string-based representation with characters that represent atoms, bonds, and rings [@tag:Segler2017_drug_design].
-The first successful demonstration of a deep learning based approach for molecular optimization occured in 2016 with the development of a SMILES-to-SMILES autoencoder capable of learning a continuous latent feature space for molecules[@tag:Gomezb2016_automatic].
-In this learned continuous space it is possible to interpolate between molecular structures in a manner that is not possible with discrete
-(e.g. bit vector or string) features or in symbolic, molecular graph space.
Even more interesting is that one can perform gradient-based or Bayesian optimization of molecules within this latent space. The strategy of constructing simple, continuous features before applying supervised learning techniques is reminiscent of autoencoders trained on high-dimensional EHR data [@tag:BeaulieuJones2016_ehr_encode]. +The first successful demonstration of a deep learning based approach for molecular optimization occurred in 2016 with the development of a SMILES-to-SMILES autoencoder capable of learning a continuous latent feature space for molecules[@tag:Gomezb2016_automatic]. +In this learned continuous space it is possible to interpolate between molecular structures in a manner that is not possible with discrete (e.g. bit vector or string) features or in symbolic, molecular graph space. +Even more interesting is that one can perform gradient-based or Bayesian optimization of molecules within this latent space. +The strategy of constructing simple, continuous features before applying supervised learning techniques is reminiscent of autoencoders trained on high-dimensional EHR data [@tag:BeaulieuJones2016_ehr_encode]. A drawback of the SMILES-to-SMILES autoencoder is that not all SMILES strings produced by the autoencoder's decoder correspond to valid chemical structures. The Grammar Variational Autoencoder, which takes the SMILES grammar into account and is guaranteed to produce syntactically valid SMILES, helps alleviate this issue to some extent [@arxiv:1703.01925]. Another approach to *de novo* design is to train character-based RNNs on large collections of molecules, for example, ChEMBL [@doi:10.1093/nar/gkr777], to first obtain a generic generative model for drug-like compounds [@tag:Segler2017_drug_design]. -These generative models successfully learn the grammar of compound representations, with 94% [@tag:Olivecrona2017_drug_design] or nearly 98% [@tag:Segler2017_drug_design] of generated SMILES corresponding to valid molecular structures. 
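As a toy illustration of gradient-based optimization in a learned latent space (a hypothetical sketch, not code from the cited works), the following uses a quadratic `score()` as a stand-in for a differentiable property predictor over latent codes, and an 8-dimensional vector as a stand-in for an autoencoder's latent representation; a real system would decode the optimized point back to a molecule with the autoencoder's decoder:

```python
import numpy as np

# Hypothetical sketch: gradient ascent on a property score in a continuous
# latent space. The quadratic scorer stands in for a trained predictor.
def score(z, target):
    return -np.sum((z - target) ** 2)  # highest when z == target

def optimize_latent(z0, target, lr=0.1, steps=200):
    z = z0.astype(float)
    for _ in range(steps):
        grad = -2.0 * (z - target)  # analytic gradient of the toy score
        z = z + lr * grad           # gradient ascent on the property score
    return z

rng = np.random.default_rng(0)
target = rng.normal(size=8)   # latent code of a (hypothetical) desirable molecule
z_start = rng.normal(size=8)
z_opt = optimize_latent(z_start, target)
print(score(z_start, target), score(z_opt, target))
```

With a non-analytic scorer, Bayesian optimization plays the same role as the gradient step here.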
The initial RNN is then fine-tuned to generate molecules that are likely to be active against a specific target by either continuing training on a small set of positive examples [@tag:Segler2017_drug_design] or adopting reinforcement learning strategies [@tag:Olivecrona2017_drug_design; @arxiv:1611.02796]. Both the fine-tuning and reinforcement learning approaches can rediscover known, held-out active molecules. - -Reinforcement learning approaches where operations are performed directly on the molecular graph bypass the need to learn the details of SMILES syntax, allowing the model to focus purely on chemistry. Additionally, they seem to require less training data and generate more valid molecules since they are constrained by design only to graph operations which satisfy chemical valiance rules.[@tag:Elton_molecular_design_review] A reinforcement learning agent developed by Zhou et al. demonstrated superior molecular optimization performance on certain easy to compute metrics when compared with other deep learning based approaches such as the Junction Tree VAE, Objective Reinforced Generative Adversarial Network, and Graph Convolutional Policy Network.[@doi:10.1038/s41598-019-47148-x] As another example, Zhavoronkov et al. used generative tensorial reinforcement learning to discover potent inhibitors of discoidin domain receptor 1 (DDR1).[@tag:Zhavoronkov2019_drugs] Their work is unique in that six lead candidates discovered using their approach were synthesized and tested in the lab, with 4/6 achieving some degree of binding to DDR1.[@tag:Zhavoronkov2019_drugs] - -It is worth pointing out that it has been shown that classical genetic algorithms can compete with many of the most advanced deep learning methods for molecular optimization.[@doi:10.1246/cl.180665; @doi:10.1039/C8SC05372C] Such genetic algorithms use hard coded rules based possible chemical reactions to generate molecular structures [@doi:10.1021/acs.jmedchem.5b01849]. 
Still, there are many avenues for improving current deep learning systems and the future of the field looks bright.
+These generative models successfully learn the grammar of compound representations, with 94% [@tag:Olivecrona2017_drug_design] or nearly 98% [@tag:Segler2017_drug_design] of generated SMILES corresponding to valid molecular structures.
+The initial RNN is then fine-tuned to generate molecules that are likely to be active against a specific target by either continuing training on a small set of positive examples [@tag:Segler2017_drug_design] or adopting reinforcement learning strategies [@tag:Olivecrona2017_drug_design; @arxiv:1611.02796].
+Both the fine-tuning and reinforcement learning approaches can rediscover known, held-out active molecules.
+
+Reinforcement learning approaches where operations are performed directly on the molecular graph bypass the need to learn the details of SMILES syntax, allowing the model to focus purely on chemistry.
+Additionally, they seem to require less training data and generate more valid molecules since they are constrained by design only to graph operations which satisfy chemical valence rules [@tag:Elton_molecular_design_review].
+A reinforcement learning agent developed by Zhou et al. demonstrated superior molecular optimization performance on certain easy-to-compute metrics when compared with other deep learning based approaches such as the Junction Tree VAE, Objective Reinforced Generative Adversarial Network, and Graph Convolutional Policy Network [@doi:10.1038/s41598-019-47148-x].
+As another example, Zhavoronkov et al. used generative tensorial reinforcement learning to discover potent inhibitors of discoidin domain receptor 1 (DDR1) [@tag:Zhavoronkov2019_drugs].
+Their work is unique in that six lead candidates discovered using their approach were synthesized and tested in the lab, with 4/6 achieving some degree of binding to DDR1 [@tag:Zhavoronkov2019_drugs].
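The character-level generation idea can be sketched in miniature (a hypothetical illustration, not the cited models): a bigram model trained on a tiny, hand-picked SMILES corpus stands in for the character RNN. Real systems train on on the order of a million ChEMBL SMILES and check each sample's chemical validity with a cheminformatics toolkit such as RDKit:

```python
import random
from collections import defaultdict

# Hypothetical toy corpus of valid SMILES strings.
corpus = ["CCO", "CC(=O)O", "c1ccccc1", "CCN(CC)CC", "CC(C)O"]
START, END = "^", "$"

# Count character-to-character transitions (the "grammar" the model learns).
counts = defaultdict(lambda: defaultdict(int))
for smiles in corpus:
    chars = [START] + list(smiles) + [END]
    for a, b in zip(chars, chars[1:]):
        counts[a][b] += 1

def sample(rng, max_len=20):
    out, ch = [], START
    for _ in range(max_len):
        nxt = counts[ch]
        ch = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        if ch == END:
            break
        out.append(ch)
    return "".join(out)

print([sample(random.Random(i)) for i in range(3)])
```

An RNN replaces the bigram table with a learned conditional distribution over the next character given the whole prefix, which is what pushes validity rates toward the 94-98% reported above.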
+
+In concluding this section, it is worth noting that classical genetic algorithms have been shown to compete with some of the most advanced deep learning methods for molecular optimization [@doi:10.1246/cl.180665; @doi:10.1039/C8SC05372C].
+Such genetic algorithms use hard-coded rules based on possible chemical reactions to generate molecular structures [@doi:10.1021/acs.jmedchem.5b01849].
+Still, there are many avenues for improving current deep learning systems and the future of the field looks bright.
diff --git a/content/06.discussion.md b/content/06.discussion.md
index 27c13b64..936a1d75 100644
--- a/content/06.discussion.md
+++ b/content/06.discussion.md
@@ -3,7 +3,7 @@
 Despite the disparate types of data and scientific goals in the learning tasks covered above, several challenges are broadly important for deep learning in the biomedical domain.
 Here we examine these factors that may impede further progress, ask what steps have already been taken to overcome them, and suggest future research directions.
 
-### Customizing deep learning models reflects a tradeoff between bias and variance
+### Preventing overfitting via hyperparameter tuning
 
 Some of the challenges in applying deep learning are shared with other machine learning methods.
 In particular, many problem-specific optimizations described in this review reflect a recurring universal tradeoff---controlling the flexibility of a model in order to maximize predictivity.
@@ -12,7 +12,13 @@
 One way of understanding such model optimizations is that they incorporate external information to limit model flexibility and thereby improve predictions.
 This balance is formally described as a tradeoff between "bias and variance" [@tag:goodfellow2016deep].
-Although the bias-variance tradeoff is is important to take into account in many machine learning tasks, recent empirical and theoretical observations suggest that deep neural networks have uniquely advantageous generalization properties and do not obey the tradeoff as expected [@tag:Belkin2019_PNAS; @tag:Zhang2017_generalization; @tag:Lin2017_why_dl_works]. According to the bias-variance theory, many of the most successful deep neural networks have so many free parameters they should overfit.[@tag:Belkin2019_PNAS] It has been shown that deep neural networks operate in a regime where they can exactly interpolate their training data yet are still able to generalize.[@tag:Belkin2019_PNAS] Thus, poor generalizability can often be remedied by adding more layers and increasing the number of free parameters, in conflict with the classic bias-variance theory. Additional advances will be needed to establish a coherent theoretical foundation that enables practitioners to better reason about their models from first principles.
+Although the bias-variance trade-off is important to take into account with many classical machine learning models, recent empirical and theoretical observations suggest that deep neural networks in particular do not obey the tradeoff as expected [@tag:Belkin2019_PNAS; @tag:Zhang2017_generalization; @tag:Lin2017_why_dl_works].
+It has been demonstrated that poor generalizability (test error) can often be remedied by adding more layers and increasing the number of free parameters, in conflict with the classic bias-variance theory.
+This phenomenon, known as "double descent", indicates that deep neural networks achieve their best performance when they smoothly interpolate the training data, resulting in near-zero training error [@tag:Belkin2019_PNAS].
+
+To optimize neural networks, hyperparameters must be tuned to yield the network with the best test error.
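Tuning can be as simple as a random search over held-out validation error. The sketch below is a hypothetical illustration using a closed-form ridge regression as the "model" and its penalty as the single hyperparameter; the same loop applies to network depth, width, or learning rate:

```python
import numpy as np

# Hypothetical sketch: random-search hyperparameter tuning on a validation set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.5 * rng.normal(size=200)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

def ridge_fit(X, y, lam):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def val_error(lam):
    w = ridge_fit(X_tr, y_tr, lam)
    return np.mean((X_va @ w - y_va) ** 2)  # held-out mean squared error

lams = 10.0 ** rng.uniform(-4, 2, size=30)  # random search over a log scale
best = min(lams, key=val_error)
print(best, val_error(best))
```

A fair method comparison would give the baseline method the same tuning budget as the proposed one.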
+This is computationally expensive and often neglected; however, it is important when making claims about the superiority of one machine learning method over another.
+Several examples have now been uncovered where a new method claimed to be superior to a baseline method (like an LSTM) was no longer superior once the baseline's hyperparameters were sufficiently tuned [@tag:Sculley2018].
 
 #### Evaluation metrics for imbalanced classification
 
@@ -107,20 +113,33 @@ Thus, the practical value of uncertainty quantification in biomedical domains is an open research question.
 
 ### Interpretability
 
-As deep learning models achieve state-of-the-art performance in a variety of domains, there is a growing need to make the models more interpretable. There are several important reasons to care about interpretability.
+As deep learning models achieve state-of-the-art performance in a variety of domains, there is a growing need to develop methods for interpreting how they function.
+There are several important reasons one might be interested in interpretability, which is also called "explainability".
 Firstly, a model that achieves breakthrough performance may have identified patterns in the data that practitioners in the field would like to understand.
 For instance, interpreting a model for predicting chemical properties from molecular graphs may illuminate previously unknown structure-property relations.
 It is also useful to see if a model is using known relationships - if not, this may suggest a way to improve the model.
-Finally, there is a chance that the model may have learned relationships that are known to be wrong. This can be due to improper training data or due to overfitting on spurious correlations in the training data.
+Finally, there is a chance that the model may have learned relationships that are known to be wrong.
+This can be due to improper training data or due to overfitting on spurious correlations in the training data.
 
-This is particularly important if a model is making medical diagnoses.
A motivating example of this can be found in Caruana et al. [@tag:Caruana2015_intelligible], where a model trained to predict the likelihood of death from pneumonia assigned lower risk to patients with asthma, but only because such patients were treated as higher priority by the hospital. +This is particularly important if a model is making medical diagnoses. +A motivating example of this can be found in Caruana et al. [@tag:Caruana2015_intelligible], where a model trained to predict the likelihood of death from pneumonia assigned lower risk to patients with asthma, but only because such patients were treated as higher priority by the hospital. -It has been shown that deep learning models are unusually susceptible to carefully crafted adversarial examples [@tag:Nguyen2014_adversarial] and can output confidence scores over 99.99% for samples that resemble pure noise. While this is largely still an unsolved problem, the interpretation of deep learning models can help understand these failure modes and how to prevent them. +It has been shown that deep learning models are unusually susceptible to carefully crafted adversarial examples [@tag:Nguyen2014_adversarial] and can output confidence scores over 99.99% for samples that resemble pure noise. +While this is largely still an unsolved problem, the interpretation of deep learning models may help understand these failure modes and how to prevent them. -Several different levels of interpretability can be distinguished. Consider a prototypical CNN used for image classification. At a high level, one can perform an occulusion or sensitivity analysis to determine what sections of an image are most important for making a classification, generating a "saliency" heatmap. Then, if one wishes to understand what is going on in the layers of the model, several tools have been developed for visualizing the learned feature maps, such as the deconvnet[@tag:Zeiler2013_visualizing]. 
Finally, if one wishes to analyze the flow of information through a deep neural network layer-wise relevance propagation can be performed to see how each layer contributes to different classifications.[@tag:Montavon2018_visualization]
+Several different levels of interpretability can be distinguished.
+Consider a prototypical CNN used for image classification.
+At a high level, one can perform an occlusion or sensitivity analysis to determine what sections of an image are most important for making a classification, generating a "saliency" heatmap.
+Then, if one wishes to understand what is going on in the layers of the model, several tools have been developed for visualizing the learned feature maps, such as the deconvnet [@tag:Zeiler2013_visualizing].
+Finally, if one wishes to analyze the flow of information through a deep neural network, layer-wise relevance propagation can be performed to see how each layer contributes to different classifications [@tag:Montavon2018_visualization].

-A starting point for many discussions of interpretability is the interpretability-accuracy trade-off. The trade-off assumes that only simple models are interpretable and often a delineation is made between “white box" models (linear regression, decision trees) that are assumed to be not very accurate and “black box" models (neural networks, kernel SVMs) which are assumed to be more accurate. This view is becoming outmoded, however with the development of sophisticated tools for interrogating and understanding deep neural networks.[@tag:Montavon2018_visualization; @tag:Zeiler2013_visualizing] Still, this trade-off motivates a common practice whereby a easy to interpret model is trained next to a hard to interpret one. For instance, in the example discussed by Caruana et al. mentioned earlier, a rule-based model was trained next to a neural network using the same training data to understand the types of relations were learned by the neural network.
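The occlusion analysis described above fits in a few lines. This is a hypothetical sketch: the "model" is a stand-in that scores only the mean of a fixed central region, so the heatmap should light up on the central patches; in practice the model would be a trained CNN's class score:

```python
import numpy as np

# Hypothetical stand-in for a trained model: only the image center matters.
def model_score(img):
    return img[8:16, 8:16].mean()

def occlusion_map(img, model, patch=4):
    """Slide a masking patch over the input; record how the score drops."""
    h, w = img.shape
    base = model(img)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = img.copy()
            occluded[i:i+patch, j:j+patch] = 0.0  # mask one patch
            heat[i // patch, j // patch] = base - model(occluded)
    return heat

img = np.ones((24, 24))
heat = occlusion_map(img, model_score)
print(np.unravel_index(heat.argmax(), heat.shape))
```

Patches whose occlusion causes the largest score drop are the most "salient" regions for the classification.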
More recently, a method for "distilling" a neural network into a decision tree has been developed.[@tag:Frosst2017_distilling]
+A starting point for many discussions of interpretability is the interpretability-accuracy trade-off.
+The trade-off assumes that only simple models are interpretable and often a delineation is made between "white box" models (linear regression, decision trees) that are assumed to be not very accurate and "black box" models (neural networks, kernel SVMs) which are assumed to be more accurate.
+This view is becoming outmoded, however, with the development of sophisticated tools for interrogating and understanding deep neural networks [@tag:Montavon2018_visualization; @tag:Zeiler2013_visualizing] and new methods for creating highly accurate interpretable models [@tag:Rudin2019].
+Still, this trade-off motivates a common practice whereby an easy-to-interpret model is trained alongside a hard-to-interpret one, which is sometimes called "post-hoc interpretation".
+For instance, in the example discussed by Caruana et al. mentioned earlier, a rule-based model was trained alongside a neural network using the same training data to understand the types of relations which may have been learned by the neural network.
+Along similar lines, a method for "distilling" a neural network into a decision tree has been developed [@tag:Frosst2017_distilling].

#### Assigning example-specific importance scores

@@ -222,7 +241,8 @@ Towards this end, Che et al. [@tag:Che2015_distill] used gradient boosted trees
 Finally, it is sometimes possible to train the model to provide justifications for its predictions.
 Lei et al. [@tag:Lei2016_rationalizing] used a generator to identify "rationales", which are short and coherent pieces of the input text that produce similar results to the whole input when passed through an encoder.
-The authors applied their approach to a sentiment analysis task and obtained substantially superior results compared to an attention-based method.
+Shen et al. [@tag:Shen2019] trained a CNN for lung nodule malignancy classification that also outputs a series of attributes for each nodule, which they argue helps explain how the network functions.
+These are both simple examples of an emerging approach to engendering trust in AI systems which Elton calls "self-explaining AI" [@tag:Elton2020].

#### Future outlook

diff --git a/content/citation-tags.tsv b/content/citation-tags.tsv
index ee40470f..74e7e90b 100644
--- a/content/citation-tags.tsv
+++ b/content/citation-tags.tsv
@@ -68,6 +68,7 @@ Edwards2015_growing_pains doi:10.1145/2771283
 Ehran2009_visualizing url:http://www.iro.umontreal.ca/~lisa/publications2/index.php/publications/show/247
 Elephas url:https://github.com/maxpumperla/elephas
 Elton_molecular_design_review doi:10.1039/C9ME00039A
+Elton2020 arxiv:2002.05149
 Errington2014_reproducibility doi:10.7554/eLife.04333
 Eser2016_fiddle doi:10.1101/081380
 Esfahani2016_melanoma doi:10.1109/EMBC.2016.7590963
@@ -195,6 +196,7 @@ Mrzelj url:https://repozitorij.uni-lj.si/IzpisGradiva.php?id=85515
 matis doi:10.1016/S0097-8485(96)80015-5
 nbc doi:10.1093/bioinformatics/btq619
 Murdoch2017_automatic arxiv:1702.02540
+Murdoch2019 doi:10.1073/pnas.1900654116
 Nazor2012 doi:10.1016/j.stem.2012.02.013
 Nemati2016_rl doi:10.1109/EMBC.2016.7591355
 Ni2018 doi:10.1101/385849
@@ -237,6 +239,7 @@ Rogers2010_fingerprints doi:10.1021/ci100050t
 Roth2015_view_agg_cad doi:10.1109/TMI.2015.2482920
 Romero2017_diet url:https://openreview.net/pdf?id=Sk-oDY9ge
 Rosenberg2015_synthetic_seqs doi:10.1016/j.cell.2015.09.054
+Rudin2019 doi:10.1038/s42256-019-0048-x
 Russakovsky2015_imagenet doi:10.1007/s11263-015-0816-y
 Sa2015_buckwild pmcid:PMC4907892
 Salas2018_GR doi:10.1101/gr.233213.117
@@ -245,6 +248,7 @@ Salzberg doi:10.1186/1471-2105-11-544
 Schatz2010_dna_cloud doi:10.1038/nbt0710-691
 Schmidhuber2014_dnn_overview doi:10.1016/j.neunet.2014.09.003
 Scotti2016_missplicing doi:10.1038/nrg.2015.3
+Sculley2018 url:https://openreview.net/pdf?id=rJWF0Fywf
 Segata doi:10.1371/journal.pcbi.1004977
 Segler2017_drug_design arxiv:1701.01329
 Seide2014_parallel doi:10.1109/ICASSP.2014.6853593
@@ -254,6 +258,7 @@ Serden doi:10.1016/S0168-8510(02)00208-7
 Shaham2016_batch_effects doi:10.1093/bioinformatics/btx196
 Shapely doi:10.1515/9781400881970-018
 Shen2017_medimg_review doi:10.1146/annurev-bioeng-071516-044442
+Shen2019 doi:10.1016/j.eswa.2019.01.048
 Shin2016_cad_tl doi:10.1109/TMI.2016.2528162
 Shrikumar2017_learning arxiv:1704.02685
 Shrikumar2017_reversecomplement doi:10.1101/103663
@@ -276,6 +281,7 @@ Su2015_gpu arxiv:1507.01239
 Subramanian2016_bace1 doi:10.1021/acs.jcim.6b00290
 Sun2016_ensemble arxiv:1606.00575
 Sundararajan2017_axiomatic arxiv:1703.01365
+Sumita2018 doi:10.1021/acscentsci.8b00213
 Sutskever arxiv:1409.3215
 Swamidass2009_irv doi:10.1021/ci8004379
 Tan2014_psb doi:10.1142/9789814644730_0014
@@ -291,6 +297,7 @@ Torracinta2016_sim doi:10.1101/079087
 Tu1996_anns doi:10.1016/S0895-4356(96)00002-9
 Unterthiner2014_screening url:http://www.bioinf.at/publications/2014/NIPS2014a.pdf
 Vanhoucke2011_cpu url:https://research.google.com/pubs/pub37631.html
+Vamathevan2019 doi:10.1038/s41573-019-0024-5
 Vera2016_sc_analysis doi:10.1146/annurev-genet-120215-034854
 Vervier doi:10.1093/bioinformatics/btv683
 Wallach2015_atom_net arxiv:1510.02855